AI News
What’s next for AI – Conversation with context
In the past few years, advances in artificial intelligence have captured the public imagination and led to widespread acceptance of AI-infused assistants.
HCI shifting to conversation, but users expect more

While the move toward conversation might seem like a natural progression of HCI, AI thought leaders point out that talking to machines actually represents a tectonic shift in computing.
“They’re seeing some early successes in a few narrow applications like customer support and smart appliances, but people are getting frustrated because they have overly high expectations.”

The source of such high expectations?
Significant advances in machine learning have allowed conversational systems to better recognize speech and transform text into speech—two key elements in natural language processing (NLP).
As a result, conversational agents can respond with human-like quickness via voice and text, leading users to wrongly assume these agents are also capable of unbound, back-and-forth exchanges.
“Even when people know that they are having a conversation with a computer, it’s surprising to see that they not only appreciate that the computer has empathy—they expect it.”

These limitations exist because computers have not yet made the great strides in natural language understanding and dialogue that they’ve achieved in NLP.
Context involves many interconnected layers of accumulated knowledge that humans acquire and apply in conversation with little effort, but that computers cannot yet amass: the purpose of a conversation; where the person you’re speaking with has likely just been, or where they are now; applicable learnings from previous interactions with people who had the same purpose; general information about the world as it relates to this purpose; what has been said previously in the course of current and past conversations with this person; and much more.
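The layers of context listed above can be made concrete as a data structure. A minimal sketch follows; the class and field names are purely illustrative and do not come from any real assistant's API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConversationContext:
    """Illustrative container for the layers of conversational context."""
    purpose: str                                  # why the conversation is happening
    location: Optional[str] = None                # where the speaker likely is or just was
    prior_interactions: list = field(default_factory=list)  # learnings from similar users
    world_knowledge: dict = field(default_factory=dict)     # general facts tied to the purpose
    transcript: list = field(default_factory=list)          # what has been said so far

# A system would accumulate these layers as the dialogue unfolds.
ctx = ConversationContext(purpose="book a flight")
ctx.transcript.append("user: I need to fly to Boston on Friday")
```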
Researchers are looking particularly closely at machine learning algorithms and, more specifically, at training them using two techniques—supervised learning and reinforcement learning—that have been leveraged to teach AI systems to perform many other tasks.
“In a supervised learning approach, I could look at, ‘When the dialog was in a certain context, what did the call center agent say?’ The system could then learn to imitate that call center agent.”

Yoshua Bengio, deep learning pioneer and University of Montreal professor, anticipates that within the next five years we’ll see systems that can understand natural language as well as do a good job of generating it.
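The supervised imitation idea in the quote can be sketched in a few lines: memorize logged (context, agent reply) pairs and answer a new query by imitating the agent whose logged context is most similar. The logs below are invented for illustration, and real systems use trained models rather than word overlap:

```python
from collections import Counter

def bag_of_words(text):
    # Lowercase the text and count word occurrences.
    return Counter(text.lower().split())

def overlap(a, b):
    # Number of shared word occurrences between two bags of words.
    return sum((a & b).values())

def train(dialogs):
    # "Training" here is simply memorizing (context, reply) pairs,
    # the simplest possible form of supervised imitation.
    return [(bag_of_words(ctx), reply) for ctx, reply in dialogs]

def respond(model, context):
    # Imitate the human agent whose logged context best matches the query.
    query = bag_of_words(context)
    best = max(model, key=lambda pair: overlap(pair[0], query))
    return best[1]

logs = [
    ("my package has not arrived", "Let me check the shipping status for you."),
    ("i want to cancel my subscription", "I can cancel that; may I ask why you're leaving?"),
    ("the app crashes on startup", "Could you tell me your app version and device?"),
]
model = train(logs)
print(respond(model, "where is my package, it never arrived"))
# prints "Let me check the shipping status for you."
```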
Future of powerful AI assistants

In the short term, conversation grounded in broader context will give rise to personal assistants with more robust utility—both in our work and personal lives.
The Dark Secret at the Heart of AI
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.
Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
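The end-to-end idea described above—sensor readings in, driving commands out, with no hand-written rules in between—can be sketched as a tiny network. Everything here is a toy: the shapes, the random weights, and the single steering output are stand-ins chosen for illustration, not Nvidia's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a flattened 8x8 "camera" frame in, one steering value out.
W1 = rng.standard_normal((64, 16)) * 0.1   # input pixels -> hidden features
W2 = rng.standard_normal((16, 1)) * 0.1    # hidden features -> steering

def steering_command(frame):
    # Forward pass of an end-to-end network: pixels in, steering out.
    h = np.tanh(frame.reshape(-1) @ W1)    # learned intermediate representation
    return float(np.tanh(h @ W2))          # command in [-1, 1] (full left / full right)

frame = rng.random((8, 8))                 # a fake sensor frame
angle = steering_command(frame)
```

In a trained system the weights would be fitted to logged human driving, but nothing in the pipeline corresponds to an inspectable rule—which is exactly the opacity the article describes.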
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.
There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.
But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.
“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.
The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.
Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.
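Deep Patient learned its representations without expert labels. A common way to do that is a denoising autoencoder: corrupt each record, then train a network to reconstruct the original, forcing it to discover structure in the data. The sketch below uses entirely synthetic "records" and a tiny tied-weight autoencoder; it illustrates the training signal, not the actual Deep Patient model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for patient records: rows are patients, columns are
# binary indicators (diagnosis codes, lab flags). Entirely made up.
records = (rng.random((200, 12)) < 0.3).astype(float)

# Tied-weight autoencoder: compress 12 indicators into 4 latent features.
W = rng.standard_normal((12, 4)) * 0.1

def reconstruct(x, W):
    h = np.tanh(x @ W)        # encode
    return h @ W.T, h         # decode with tied weights

def mse(x, W):
    recon, _ = reconstruct(x, W)
    return float(np.mean((recon - x) ** 2))

loss_before = mse(records, W)
for _ in range(500):
    noisy = records * (rng.random(records.shape) > 0.1)  # denoising corruption
    recon, h = reconstruct(noisy, W)
    err = recon - records
    dh = (err @ W) * (1 - h ** 2)       # backprop through the tanh encoder
    grad = noisy.T @ dh + err.T @ h     # encoder + decoder contributions
    W -= 0.01 * grad / len(records)     # gradient descent step
loss_after = mse(records, W)
# Reconstruction error falls as W picks up structure in the records;
# the learned features, like Deep Patient's, come with no explanation attached.
```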
If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.
Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.
It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.
The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.
In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
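The mechanics of such fooling images can be shown on a deliberately simple model. Below, a linear scorer stands in for a perception network (the real attacks target deep nets, but the principle is the same): a small, uniform nudge to every pixel, aimed along the model's weights, flips the decision while barely changing the image. All names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a perception network: a linear scorer over 64 "pixels".
# score > 0 means the model reports seeing, say, a school bus.
w = rng.standard_normal(64)

def sees_bus(image):
    return bool(image @ w > 0)

image = rng.standard_normal(64)
score = image @ w

# Fast-gradient-style perturbation: move every pixel a small, equal amount
# in the direction that pushes the score across the decision boundary.
eps = (abs(score) + 1.0) / np.abs(w).sum()   # just enough to flip the score
adversarial = image - eps * np.sign(score) * np.sign(w)

assert sees_bus(adversarial) != sees_bus(image)           # the label flips
assert np.max(np.abs(adversarial - image)) <= eps + 1e-12 # per-pixel change is tiny
```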
The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.
“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.
The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.
She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
After finishing cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.
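The shape of that mining task can be shown with a trivial keyword filter. The reports below are invented, and a real system would use a trained NLP model rather than string matching; this only illustrates what "identify patients with specific clinical characteristics" means operationally:

```python
# Hypothetical pathology-report miner. Reports are (patient_id, report_text)
# pairs; the text is synthetic and for illustration only.
reports = [
    ("patient_001", "invasive ductal carcinoma, margins negative"),
    ("patient_002", "benign fibroadenoma, no atypia"),
    ("patient_003", "ductal carcinoma in situ, grade 2"),
]

def find_patients(reports, characteristic):
    # Return the IDs of patients whose report mentions the characteristic.
    return [pid for pid, text in reports if characteristic in text]

print(find_patients(reports, "carcinoma"))
# prints ['patient_001', 'patient_003']
```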
Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.
A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.
But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.
A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.
But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
- On 28 February 2021
The incredible inventions of intuitive AI | Maurice Conti
What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come ...
5G and The AI Control Grid
Max Igan - Surviving the Matrix - Episode 302 - American Voice Radio, August 4th, 2017 - Support The Crowhouse: ..
The Intelligence Revolution: Coupling AI and the Human Brain | Ed Boyden
Edward Boyden is a Hertz Foundation Fellow and recipient of the prestigious Hertz Foundation Grant for graduate study in the applications of the physical, ...
#219: McKinsey & Company (McKinsey Global Institute) on Artificial Intelligence and Machine Learning
Data and automation have the power to transform business and society. The impact of data on our lives will be profound as industry and the government use ...
What Is Neuromorphic Computing (How AI Will Think)
This video is the eleventh in a multi-part series discussing computing. In this video, we'll be discussing what cognitive computing is and the impact it will have on ...
Artificial intelligence, video games and the mysteries of the mind | Raia Hadsell | TEDxExeterSalon
Artificial intelligence could be the powerful tool we need to solve some of the biggest problems facing our world, argues Raia Hadsell. In this talk, she offers an ...
Ned Block: "Why AI Approaches to Cognition Won't Work for Consciousness" | Talks at Google
Professor Block's research is at the center of the vibrant academic debate about the true nature of consciousness. His work often straddles the boundary of ...
The Birth of Artificial Intelligence
Cognitive Computing and Revolution: Data Analytics to Artificial Intelligence - Valentina Salapura
This video was recorded at IntelliSys 2017 - Valentina is with the IBM Research in the Services Innovation Lab where she is ..
Using Machine Learning and Data Science to Solve Real Business Problems (DataEDGE 2018)
Sourav Dey, Managing Director of Machine Learning, Manifold — AI and machine learning have the power to transform entire industries. Companies in ...