AI News

Artificial intelligence system uses transparent, human-like reasoning to solve problems

A child is presented with a picture of various shapes and is asked to find the big red circle. To come to the answer, the child goes through a few steps of reasoning: first find all the big things, then the big things that are red, and finally the big red thing that is a circle.

It is important to know, for example, what exactly a neural network used in self-driving cars thinks the difference is between a pedestrian and a stop sign, and at what point along its chain of reasoning it sees that difference.

Like workers down an assembly line, each module builds off what the module before it has figured out to eventually produce the final, correct answer. As a whole, TbD-net utilizes one AI technique that interprets human language questions and breaks those sentences into subtasks, followed by multiple computer vision AI techniques that interpret the imagery.

These visualizations let the human analyst see how a module is interpreting the image. Take, for example, the following question posed to TbD-net: “In this image, what color is the large metal cube?”
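As a purely hypothetical illustration of that decomposition, the sketch below breaks the cube question into an ordered list of subtasks; the module names and program format are assumptions for exposition, not TbD-net's literal output.

```python
# Hypothetical decomposition of a question into module subtasks.
# The module names and program format are illustrative assumptions only.
question = "In this image, what color is the large metal cube?"

program = [
    "attend[large]",   # highlight the large objects
    "attend[metal]",   # keep only the metal ones
    "attend[cube]",    # keep only the cubes
    "query[color]",    # read off the color of what is still attended
]

print(question)
for step in program:
    print("  ->", step)
```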

The researchers evaluated the model using a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions.

To build trust in these systems, users will need the ability to inspect the reasoning process so that they can understand why and how a model could make wrong predictions. Paul Metzger, leader of the Intelligence and Decision Technologies Group, says the research “is part of Lincoln Laboratory’s work toward becoming a world leader in applied machine learning research and artificial intelligence that fosters human-machine collaboration.” The details of this work are described in the paper “Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning,” which was presented at the Conference on Computer Vision and Pattern Recognition (CVPR) this summer.

Transparent Reasoning: How MIT Builds Neural Networks that can Explain Themselves

If we plot some of the best-known deep learning models on a chart that correlates accuracy with interpretability, we see the familiar trade-off between the two. To validate some of their ideas for improving interpretability, the MIT researchers focused on a visual question answering (VQA) scenario, in which a model must be capable of performing complex spatial reasoning over an image.

When confronted with a question such as “What color is the cube to the right of the large metal sphere?”, a model must identify which sphere is the large metal one, understand what it means for an object to be to the right of another, and apply this concept spatially to the attended sphere.

Attention: The Attention module takes as input image features and a previous attention to refine (or an all-ones tensor if it is the first Attention in the network) and outputs a heatmap of dimension 1 x H x W corresponding to the objects of interest.
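As a concrete but non-authoritative reference point, here is a minimal PyTorch-style sketch of such a module; the channel count, layer sizes, and sigmoid output are illustrative assumptions rather than the exact TbD-net architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Sketch of an attention-style module: refine a previous 1 x H x W
    attention over C x H x W image features into a new 1 x H x W heatmap.
    Channel count, layer sizes, and the sigmoid output are illustrative
    assumptions, not the exact TbD-net architecture."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.project = nn.Conv2d(channels, 1, kernel_size=1)  # collapse to one heatmap channel

    def forward(self, features, prev_attention):
        # Weight the image features by the attention handed in from upstream
        # (an all-ones tensor if this is the first Attention module in the chain).
        attended = features * prev_attention
        x = F.relu(self.conv1(attended))
        x = F.relu(self.conv2(x))
        return torch.sigmoid(self.project(x))  # 1 x H x W map over objects of interest

# Example: the first module in a chain starts from an all-ones attention.
feats = torch.randn(1, 128, 28, 28)           # batch of one feature map
mask = Attention()(feats, torch.ones(1, 1, 28, 28))
print(mask.shape)                             # torch.Size([1, 1, 28, 28])
```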

For example, in the question “What color is the cube to the right of the small sphere?”, the network should determine the position of the small sphere using a series of Attention modules, then use a Relate module to attend to the region that is spatially to the right.
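The Relate module itself is learned, but its target behaviour is easy to pin down. The hand-coded stand-in below (not TbD-net's module) shows what a Relate-to-the-right step has to compute on a toy attention grid.

```python
import torch

# Illustration only: given a 1 x H x W attention over the small sphere, shift
# attention to everything strictly to its right. TbD-net learns this behaviour
# with convolutions over image features; the cumulative-sum trick below just
# makes the target behaviour concrete on a toy grid.
def relate_right(attention, threshold=0.5):
    mask = (attention > threshold).float()               # 1 x H x W, binarized
    col_hit = mask.amax(dim=-2, keepdim=True)            # columns containing the object
    right_of = torch.cumsum(col_hit, dim=-1) - col_hit   # columns strictly to the right
    return (right_of > 0).float().expand_as(mask)        # broadcast down every row

att = torch.zeros(1, 8, 8)
att[0, 3, 2] = 1.0                     # pretend the small sphere sits at column 2
print(relate_right(att)[0, 0])         # columns 3..7 are now attended in every row
```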

As an example, when answering the question “Is anything the same color as the small cube?”, the network should localize the small cube via Attention modules, then use a Same module to determine its color and output an attention mask localizing all other objects sharing that color.
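Again for illustration only, the symbolic sketch below spells out what a Same-by-color step must compute; TbD-net operates on image features and attention masks, not on the toy attribute records used here.

```python
# Illustration only: a symbolic stand-in for a Same-by-color step. The toy
# scene of attribute records below is a made-up example, not real model I/O.
scene = [
    {"id": 0, "shape": "cube",     "size": "small", "color": "red"},
    {"id": 1, "shape": "sphere",   "size": "large", "color": "red"},
    {"id": 2, "shape": "cylinder", "size": "small", "color": "blue"},
]

def same_color(scene, attended_id):
    """Return every other object sharing the attended object's color."""
    color = next(obj["color"] for obj in scene if obj["id"] == attended_id)
    return [obj for obj in scene if obj["color"] == color and obj["id"] != attended_id]

# "Is anything the same color as the small cube?" -> attend to object 0, then:
print(same_color(scene, attended_id=0))   # the red sphere (id 1) comes back
```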

Each module’s output is depicted visually in what the group calls an “attention mask.” The attention mask shows heat-map blobs over objects in the image that the module is identifying as its answer.
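A rough sketch of how such a heat-map overlay can be produced is shown below; the image, the mask values, and the 28-to-224 upsampling factor are placeholder assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of inspecting an attention mask: upsample the low-resolution heatmap
# to image size and draw it over the image with some transparency. The image,
# the mask values, and the 28 -> 224 upsampling factor are placeholders.
image = np.random.rand(224, 224, 3)     # stand-in for the input image
mask = np.random.rand(28, 28)           # stand-in for a module's attention map

upsampled = np.kron(mask, np.ones((8, 8)))    # nearest-neighbour 28x28 -> 224x224

plt.imshow(image)
plt.imshow(upsampled, cmap="hot", alpha=0.5)  # heat-map blobs over the image
plt.axis("off")
plt.savefig("attention_overlay.png")
```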

The MIT team tested TbD-net using the CLEVR dataset generator to produce a dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions.

AI Neural Network Can Perform Human-Like Reasoning

Scientists have taken the mask off a new neural network to better understand how it makes its decisions.

The model, dubbed the Transparency by Design network (TbD-net), visually renders its thought process as it solves problems, enabling human analysts to interpret its decision-making, and it ultimately outperforms today’s best visual-reasoning neural networks.

Most neural networks keep their inner workings hidden, however; by making them transparent in the new network, the researchers hope to be able to teach it to correct any incorrect assumptions.

As a whole, the network uses one AI technique to interpret human language questions and break the sentences into subtasks, followed by multiple computer vision AI techniques that interpret the imagery.

To answer a question such as “What color is the large metal cube in this image?”, the first module isolates the large objects in the image to produce an attention mask.
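To make that step-by-step narrowing concrete, the toy NumPy sketch below multiplies hand-made 4 x 4 attention masks for “large,” “metal,” and “cube”; the masks and grid size are invented for illustration and are not TbD-net outputs.

```python
import numpy as np

# Toy illustration of the step-by-step narrowing for "What color is the large
# metal cube?" on hand-made 4 x 4 attention masks. The masks and grid size are
# invented for exposition; they are not TbD-net outputs.
large = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 1]], dtype=float)   # heat over the two large objects
metal = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0]], dtype=float)   # heat over the metal objects
cube  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=float)   # heat over the cubes

attention = np.ones_like(large)        # start by attending everywhere
for step in (large, metal, cube):      # the "assembly line" of modules
    attention = attention * step       # each step narrows the previous result

print(attention)   # only the large metal cube's region stays hot
```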

TbD-net achieved 98.7 percent accuracy when evaluated on a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, with test and validation sets of 15,000 images and 150,000 questions.

A new artificial intelligence system solves problems through human-like reasoning


The model performs better than today's best visual reasoning neural networks.  Understanding how a neural network comes to its decisions has been a long-standing challenge for artificial intelligence (AI) researchers.

We'd want to know, for example, what exactly a neural network used in self-driving cars thinks the difference is between a pedestrian and a stop sign, and where along its chain of reasoning it sees the difference.




Evaluated on the same visual question-answering dataset, the initial model achieved 98.7 percent test accuracy, which, according to the researchers, 'far outperforms other neural module network–based approaches.'

To build trust in these systems, users will need the ability to inspect the reasoning process so that they can understand why and how a model could make wrong predictions.  'This research is part of Lincoln Laboratory's work toward becoming a world leader in applied machine learning research and artificial intelligence that fosters human-machine collaboration,' Paul Metzger, leader of the Intelligence and Decision Technologies Group, said.

A.I. | Transparent neural network boasts human-like reasoning

MIT researchers claim to have created an AI model that sets a new standard for understanding how a neural network makes decisions.

As a result, it is able to answer complex spatial reasoning questions such as, “What colour is the cube to the right of the large metal sphere?” The model breaks this question down into its component concepts, identifying which sphere is the large metal one, understanding what it means for an object to be to the right of another one, and then finding the cube and interpreting its colour.

A heat-map is layered over objects in the image to show researchers how the module is interpreting it, allowing them to understand the neural network’s decision-making process at each step.

Past efforts to overcome the problem of black box AI models, such as Cornell University’s use of transparent model distillation, have gone some way to tackling these issues, but TbD-net’s overt rendering of its reasoning process takes neural network transparency to a new level – without sacrificing the accuracy of the model.

The system is capable of performing complex reasoning tasks in an explicitly interpretable manner, closing the performance gap between interpretable models and state-of-the-art visual reasoning methods.

With computer vision and visual reasoning systems set to play a huge part in autonomous vehicles, satellite imagery, surveillance, smart city monitoring, and many other applications, this represents a major breakthrough in creating highly accurate, transparent-by-design neural networks.
