AI News, The Atlantic

MIT News - Artificial intelligence

In a tent filled with electronic screens, students and postdocs took turns explaining how they had created something new by combining computing with topics they felt passionate about, including predicting panic selling on Wall Street, analyzing the filler ingredients in common drugs, and developing more energy-efficient software and hardware. The poster session featured undergraduates, graduate students, and postdocs from each of MIT’s five schools. Eight projects are highlighted here.

In collaboration with Boston Children’s Hospital and Harvard Medical School, MIT researchers are using AI to explore autism’s hidden origins. Working with his advisors, Bonnie Berger and Po-Ru Loh, professors of math and medicine at MIT and Harvard respectively, graduate student Maxwell Sherman has helped develop an algorithm to detect previously unidentified mutations that cause some cells in people with autism to carry too much or too little DNA. The team has found that up to 1 percent of people with autism carry the mutations, and that inexpensive consumer genetic tests can detect them with a mere saliva sample.
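
The detection method itself is not spelled out in the article; as a rough, hypothetical sketch of the underlying idea, the snippet below flags a genomic region whose heterozygous sites show a consistently skewed allele balance, one signal that some cells carry extra or missing DNA there. The function names, thresholds, and example data are illustrative assumptions, not the team’s published algorithm.

```python
# Illustrative sketch only: flag a possible mosaic gain or loss of DNA by
# checking whether heterozygous sites drift away from the expected 50/50
# allele balance. Thresholds and structure are assumptions for clarity,
# not the algorithm described above.
from statistics import mean

def allele_balance(ref_reads, alt_reads):
    """Fraction of reads supporting the alternate allele at one site."""
    total = ref_reads + alt_reads
    return alt_reads / total if total else 0.5

def flag_possible_mosaic(sites, skew_threshold=0.08, min_sites=50):
    """sites: list of (ref_reads, alt_reads) at heterozygous positions in one
    region. In a fully diploid region the balance should average ~0.5; a
    consistent shift suggests some cells carry too much or too little DNA."""
    if len(sites) < min_sites:
        return False  # too little data to call anything
    balances = [allele_balance(r, a) for r, a in sites]
    return abs(mean(balances) - 0.5) > skew_threshold

# Example: a region where the alternate allele is consistently over-represented.
demo_sites = [(40, 60)] * 80
print(flag_possible_mosaic(demo_sites))  # True
```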

Hundreds of U.S. children who carry the mutations and are at risk for autism could be identified this way each year, researchers say. “Early detection of autism gives kids earlier access to supportive services,” says Sherman, “and that can have lasting benefits.”

Can deep learning models be trusted?

In a project led by Boris Katz, a researcher at the Computer Science and Artificial Intelligence Laboratory, and Nicholas Roy, a professor in MIT’s Department of Aeronautics and Astronautics, graduate student Yen-Ling Kuo has designed a set of experiments to understand how humans and robots can cooperate and what robots must learn to follow commands. Ideally, that means moving to a world in which we talk to robots instead of programming them.

To identify at-risk mothers sooner, researchers at MIT, Harvard Medical School, Brigham and Women’s Hospital, and Partners in Health in Rwanda are developing a computational tool to predict whether a mother’s post-surgical wound is likely to become infected. The researchers gathered C-section wound photos from 527 women, with health workers capturing the pictures on their smartphones 10 to 12 days after surgery.

Working with his advisor, Richard Fletcher, a researcher in MIT’s D-Lab, graduate student Subby Olubeko helped train a pair of models to pick out the wounds that developed into infections.  When they tested the logistic regression model on the full dataset, it gave almost perfect predictions.  The color of the wound’s drainage, and how bright the wound appears at its center, are two of the features the model picks up on, says Olubeko.
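
As a rough sketch of how a classifier like that can be set up, the example below fits a logistic regression to hand-picked image features such as drainage color and brightness at the wound’s center. The feature names and values are synthetic placeholders assumed for illustration, not the team’s dataset or pipeline.

```python
# Hedged sketch: logistic regression over simple color/brightness features,
# in the spirit of the wound-infection model described above. The feature
# values here are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 527  # number of photos, as reported in the article

# Assumed features: [drainage_redness, center_brightness, wound_area]
X = rng.uniform(0.0, 1.0, size=(n, 3))
# Synthetic labels loosely tied to the first two features, for illustration only
y = ((0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, n)) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
# Inspect which of the synthetic features the fitted model weights most heavily
print("feature weights:", model.coef_)
```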

“Native ads were supposed to help the news industry cope with the financial crisis, but what if they’re reinforcing the public’s mistrust of the media and driving readers away from quality news?” says graduate student Manon Revel. Claims of fake news dominated the 2016 U.S. presidential election, but politicized native ads were also common.

Curious to measure their reach, Revel joined a project led by Adam Berinsky, a professor in MIT’s Department of Political Science; Munther Dahleh, a professor of electrical engineering and computer science and director of the Institute for Data, Systems, and Society (IDSS); Dean Eckles, a professor at MIT’s Sloan School of Management; and Ali Jadbabaie, a professor of civil and environmental engineering and associate director of IDSS. Analyzing a sample of native ads that popped up on readers’ screens before the election, they found that 25 percent could be considered highly political and that 75 percent fit the description of clickbait.
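
The study’s actual coding procedure is not described here; purely as a hypothetical illustration of that kind of analysis, the snippet below labels a few invented ad headlines with crude keyword rules and reports the political and clickbait shares. The keyword lists, headlines, and resulting percentages are made up for the example.

```python
# Hypothetical illustration of classifying native-ad headlines and reporting
# the share that looks political or clickbait. The keyword rules and sample
# headlines are invented; they are not the study's coding scheme.
POLITICAL_TERMS = {"election", "candidate", "senate", "president", "vote"}
CLICKBAIT_CUES = {"you won't believe", "shocking", "this one trick", "what happened next"}

def is_political(headline: str) -> bool:
    text = headline.lower()
    return any(term in text for term in POLITICAL_TERMS)

def is_clickbait(headline: str) -> bool:
    text = headline.lower()
    return any(cue in text for cue in CLICKBAIT_CUES)

sample_ads = [
    "You won't believe what this candidate said next",
    "Shocking photos from the campaign trail",
    "10 gadgets under $20",
    "This one trick will cut your power bill",
]

political_share = sum(map(is_political, sample_ads)) / len(sample_ads)
clickbait_share = sum(map(is_clickbait, sample_ads)) / len(sample_ads)
print(f"political: {political_share:.0%}, clickbait: {clickbait_share:.0%}")
```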

AI ML MarketPlace

Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go.

Two new books take a similar approach. Possible Minds, edited by John Brockman and published last week by Penguin Press, asks 25 important thinkers — including Max Tegmark, Jaan Tallinn, Steven Pinker, and Stuart Russell — to each contribute a short essay on “ways of looking” at AI. The other, Martin Ford’s Architects of Intelligence, collects interviews with leading AI researchers.

(McKinsey Global Institute director James Manyika, in Architects of Intelligence, compares it to electricity in its transformative potential.) It is easy for the people involved to see that there’s something enormous here, but surprisingly difficult for them to anticipate which of its potential promises will bear fruit, or when, or whether that will be for the good.

Almost everyone agrees that certain questions — when general AI (that is, AI that has human-level problem-solving abilities) will happen, how it’ll be built, whether it’s dangerous, how our lives will change — are questions of critical importance, but they disagree on almost everything else, even basic definitions. Surveys show different experts estimating that we’ll arrive at general AI any time from 20 years to two centuries from now.

In the introduction to Possible Minds, Brockman writes of the AI pioneers, “over the decades I rode with them on waves of enthusiasm, and into valleys of disappointment.” The specter of those past “AI winters” — periods when advances in AI research stalled — haunts most of the essayists, whether or not they think we’re headed for another one.

“If the founders of the field were able to see what we tout as great advances today, they would be very disappointed because it appears we have not made much progress.” Even among those who are more optimistic about AI, there’s fear that expectations are rising too high, and that there might be backlash — less funding, an exodus of researchers and interest — if they’re not met.

I think the rise of deep learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.” Alan Turing and John von Neumann were some of the first to anticipate the potential of AI.

Google’s Ray Kurzweil, famous for his Singularitarian optimism, insists in his segment that that day will come in 2029 — and, he tells Ford, “there’s a growing group of people who think I’m too conservative.” The experts in both books have extraordinarily varied visions of AI and what it means.

The first disagreement is over when AGI will happen, with some experts confident that it’s distant, some confident that it’s terrifyingly close, and many unwilling to be nailed down on the topic — perhaps waiting to see what challenges come into focus when we crest the next hill in AI progress.

He quotes a recent survey as finding “AI systems will probably (over 50 percent) reach overall human ability by 2040-50, and very likely (with 90 percent probability) by 2075.” The second disagreement is over whether there’s a serious danger that AI will wipe out humanity — a concern that has become increasingly pronounced in light of recent AI advances.

Norbert Wiener’s 1950 book The Human Use of Human Beings, the text that inspired Possible Minds, is among the earliest texts to grapple with the argument at the core of AI safety worries: namely, that the fact that an advanced AI will “understand what we really meant” will not cause it to reliably adopt approaches that humans approve of.

“A recent survey of AI researchers who published at the two major international AI conferences in 2015 found that 40 percent now think that risks from highly advanced AI are either ‘an important problem’ or ‘among the most important problems in the field,’” Tallinn writes in his essay in Possible Minds.

“At that point, the course of action is already clear, and sitting there waiting for the remaining 60 percent to come around isn’t part of it.” It is in puzzling through this disagreement that I found myself most frustrated with the format of both books, which seem to open window after window into the minds of researchers and scientists, only to leave it to the reader to sketch floor plans and notice how the views through all of these windows don’t line up.

Artificial Lawyer: Richard Tromans - AI Strategy & Implementation

Stephen Turner, Lawyers of Tomorrow. SUMMARY: Richard Tromans, Artificial Lawyer, how is ...

Tech News #5 MIT Cheetah 3 Robot Machine Learning AI Pixelplayer Jio fiber Seagate Barracuda SSDs

July 2018 technology news: 1. MIT AI & machine learning robot Cheetah 3. The robot ...

Flying car? Car but no driver? The 360c from Volvo, and how Amazon Go works

In this video: 1. Uber flying cars/taxis (UberArial). 2. Amazon Go cashier-less stores using AI and ...

ASUS ZENFONE 5Z REVIEW phone with AI enhancements!

ASUS ZenFone 5Z honest feedback. The ZenFone 5Z is designed to impress, using the finest materials. Its spectacular edge-to-edge 2.5D-curved screen ...

ENSIT | INNOROBOTS 2016 | HANNIBAL TV | MAGON TEAM | cameleon robot

Video Spinn - The Easiest Way To Automate Video Creation
