AI News, Deep Learning Summit, Boston 2015

Deep Learning Summit, Boston 2015

After the sell-out success of the San Francisco edition, the Deep Learning Summit is coming to Boston.

Discover advances in deep learning and smart artificial intelligence from the world's leading innovators.

The Deep Learning Summit in Boston looks to showcase similarly distinguished experts leading the smart artificial intelligence revolution. Key themes to be discussed at the summit include deep learning algorithms and neural networks.

Early Bird tickets expire 10 April.

Experts Predict When Artificial Intelligence Will Exceed Human Performance

Artificial intelligence is changing the world and doing it at breakneck speed.

Katja Grace and her colleagues surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will better humans in a wide range of tasks.

Grace and co asked them all—1,634 of them—to fill in a survey about when artificial intelligence would be better and cheaper than humans at a variety of tasks.

Grace and co then calculated the median responses. The experts predict that AI will outperform humans within the next 10 years at tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027).

But AI won’t be better than humans at working in retail until 2031, won’t be able to write a bestselling book until 2049, and won’t be capable of working as a surgeon until 2053.

One such prediction concerned the game of Go. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best human players.

So any predicted change that is further away than that will happen beyond the working lifetime of everyone who is working today.

To find out if different groups made different predictions, Grace and co looked at how the predictions changed with the age of the researchers, the number of their citations (i.e., their expertise), and their region of origin.
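The aggregation described above is simple to sketch: each respondent gives a predicted year per task, the headline figure is the median across respondents, and subgroup comparisons repeat the same calculation per group. Here is a minimal illustration in Python, using invented toy numbers (the regions and years are not from the actual survey):

```python
from statistics import median

# Hypothetical toy data: each respondent's region and their predicted year
# for one task. Names and numbers are made up for illustration only.
respondents = [
    {"region": "North America", "year": 2040},
    {"region": "North America", "year": 2055},
    {"region": "Asia", "year": 2030},
    {"region": "Asia", "year": 2034},
    {"region": "Asia", "year": 2045},
]

# Group the predicted years by region.
by_region = {}
for r in respondents:
    by_region.setdefault(r["region"], []).append(r["year"])

# The median per group is robust to a few extreme forecasts,
# which is why it is the natural summary for this kind of survey.
medians = {region: median(years) for region, years in by_region.items()}
print(medians)
```

The same grouping key could be swapped for age bracket or citation count to reproduce the other comparisons the researchers made.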

Policy experts trade ideas for intelligent ways to regulate artificial intelligence

“This is more complex, because the technology itself is very fast, changing all the time, and is complex as well,” said Kay Firth-Butterfield. She and other policy experts weighed in on the challenges of regulating AI, and dropped some hints about the road ahead, today at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI in Pittsburgh.

Lorrie Faith Cranor, a CMU professor who spent a stint as the Federal Trade Commission’s chief technologist, said the most likely scenario for a serious push toward AI regulation would be “that somebody dies — and I think we’re already starting to see that with self-driving cars.” Concerns about a potential AI apocalypse are also a driver.

This week, diplomats from more than 120 nations are meeting in Geneva to discuss the potential risks posed by lethal autonomous weapon systems, known colloquially as “killer robots.” The meeting’s chairman, Indian ambassador Amandeep Gill, said heading off the use of such unconventional weapons will require unconventional methods.

“So we are not even talking about that,” Gill said. Instead, he’s trying to “create a safe space” for a wide variety of stakeholders, including industry executives as well as government officials, to work on maximizing transparency and trust.

How worried should we be about artificial intelligence? I asked 17 experts.

Imagine that, in 20 or 30 years, a company creates the first artificially intelligent humanoid robot.

She was developed in secret, for obvious reasons, and now she’s managed to escape, leaving behind — or potentially destroying — the handful of people who knew of her existence.

I reached out to 17 thought leaders — AI experts, computer engineers, roboticists, physicists, and social scientists — with a single question: “How worried should we be about artificial intelligence?” There was no consensus.

[For an in-depth explanation of the three forms of AI and which is worth worrying about, read my explainer here.]

The transition to machine superintelligence is a very grave matter, and we should take seriously the possibility that things could go radically wrong.

— Nick Bostrom, director of the Future of Humanity Institute, Oxford University

If [AI] contributed either to the capacities of Russians hacking or the campaigns for Brexit or the US presidential elections, or to campaigns being able to manipulate voters into not bothering to vote based on their social media profiles, or if it's part of the socio-technological forces that have led to increases of wealth inequality and political polarization like the ones in the late 19th and early 20th centuries that brought us two world wars and a great depression, then we should be very afraid.

— affiliate at Princeton’s Center for Information Technology Policy

One obvious risk is that we fail to specify objectives correctly, resulting in behavior that is undesirable and has irreversible impact on a global scale.

I think we will probably figure out decent solutions for this 'accidental value misalignment' problem, although it may require some rigid enforcement.

My current guesses for the most likely failure modes are twofold: The gradual enfeeblement of human society as more knowledge and know-how resides in and is transmitted through machines and fewer humans are motivated to learn the hard stuff in the absence of real need.

— Sebastian Thrun, computer science professor, Stanford University

We should worry a lot about climate change, nuclear weapons, antibiotic-resistant pathogens, and reactionary and neo-fascist political movements.

AI is already helping us address issues like climate change by collecting and analyzing data from wireless networks that monitor the oceans and greenhouse gases.

— Bryan Caplan, economics professor, George Mason University

I'm somewhat concerned about what I think of as 'intermediate stages,' in which, say, self-driving cars share the road with human drivers.

In other words, I'm concerned about the growing pains associated with technological progress, but such is the nature of being human, exploring, and advancing the state of the art.

Nevertheless, fortune favors the prepared mind, so it is important to explore all the possibilities, both good and bad, now, to help us be better prepared for a future that will arrive whether we like it or not.

— Lawrence Krauss, director, Origins Project and Foundations professor, Arizona State University

AI has the special property that it's easy to imagine scary science fiction scenarios in which artificial minds grab control of all the machines on Earth, and enslave its pitiful human population.

It is absolutely right to think very carefully and thoroughly about what those consequences might be, and how we might guard against them, without preventing real progress on improved artificial intelligence.

(I don't see AI as fundamentally different from so many other technologies — the borders are arbitrary.)

Will we be able to adapt by inventing new jobs, particularly in the service sector and in the human face of bureaucracy?

One key issue is how to prepare for significantly reduced employment due to future AI technology being able to handle much of routine work.

Live Event Series: Intel's approach to Artificial Intelligence

What is Intel's approach to Artificial Intelligence? Watch my interview with Intel's Lisa Spelman in my latest video. #IntelPartner. My name is Ronald Van...

Talent Connect Live: Day 2

The Talent Connect Livestream is your front row seat to a three-day gathering of the world's top leaders, innovators and influencers in the talent space. Join the ...

IBM's Watson Supercomputer Destroys Humans in Jeopardy | Engadget

IBM's Watson supercomputer destroys all humans in Jeopardy.

Customer Keynote: Google Cloud Customer Innovation Series - Tuesday (Cloud Next '18)

The Google Cloud Customer Innovation Series is a new addition to Next '18. These sessions will feature a group of global business and technology leaders ...

Fuzzy and Techie: A False Divide?

“Techie” students who pursue STEM subjects are commonly seen as greater drivers of innovation than “fuzzy” students who pursue the humanities and social sciences.

2018 Building a Bridge to Credit Visibility Symposium —

The transcript can be downloaded here: ...

MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars

This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars, taught in Winter 2017. Course website: ... Lecture 1 slides: ...

Azure AI: Making AI real for your business - GS009

In this session, learn about the innovations powering Azure AI that you can use today to drive real impact. Deliver immersive experiences by building AI-powered ...

Uli Chettipally: "Punish the Machine! Spare the Doctor and Save the Patient" | Talks at Google

Uli K. Chettipally, MD, MPH, is a pioneer at the intersection of artificial intelligence and healthcare. As the co-founder and CTO of Kaiser Permanente's CREST ...