Smarter AIs could help us understand how our brains interpret the world

PHILADELPHIA, PENNSYLVANIA—While artificial intelligence (AI) has been busy trouncing humans at Go and spawning eerily personable Alexas, some neuroscientists have harbored a different hope: that the types of algorithms driving those technologies can also yield some insight into the squishy, wet computers in our skulls.

At the Conference on Cognitive Computational Neuroscience here this month, researchers presented new tools for comparing data from living brains with readouts from computational models known as deep neural networks.

“You can still look at different parts of a network—say, different layers—and ask what kinds of information can be read out,” says cognitive neuroscientist Elissa Aminoff of Fordham University in New York City. The answers might give scientists clues about how the brain breaks apart and processes the world around it.
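The idea of “reading out” different layers can be made concrete with a linear probe: record the activations at each layer and fit a simple linear readout from them to some feature of interest. The network, weights, and decoded feature below are all toy assumptions for illustration, not anything from the research described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with random weights (a stand-in for a trained model).
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=(32, 16))

def forward(x):
    """Return the activations of every layer, not just the final output."""
    h1 = np.tanh(x @ W1)   # layer 1 activations
    h2 = np.tanh(h1 @ W2)  # layer 2 activations
    return {"layer1": h1, "layer2": h2}

# 200 random inputs and a hypothetical feature to decode (here: the input mean).
X = rng.normal(size=(200, 10))
y = X.mean(axis=1)

for name, h in forward(X).items():
    # Linear readout: least-squares fit from layer activations to the feature.
    w, *_ = np.linalg.lstsq(h, y, rcond=None)
    r = np.corrcoef(h @ w, y)[0, 1]
    print(f"{name}: readout correlation r = {r:.2f}")
```

Comparing the readout quality across layers hints at where in the processing hierarchy a given kind of information becomes (or stops being) linearly accessible, which is the spirit of the layer-by-layer question Aminoff describes.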

If a neural network identifies a forest by picking up on those same features, monitoring its activity might help neuroscientists determine which brain regions rely on which kinds of information.

The data set contains functional magnetic resonance imaging (fMRI) scans of brain activity from four people observing about 5000 images of natural scenes, such as a dog.

The scenes come from image collections that computer vision researchers commonly use to train and test deep neural networks, which should make it easier to compare how computer models and brains represent the images.
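One common way to compare how models and brains represent the same images is representational similarity analysis: build a representational dissimilarity matrix (RDM) over the stimuli for each system, then correlate the two matrices. The sketch below uses simulated "brain" and "model" responses, since the actual data sets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns (rows) evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def compare(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (diagonal excluded)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Hypothetical data: 50 stimuli measured as 100 voxels vs. 64 model units,
# both driven by a shared 20-dimensional stimulus signal plus noise.
stim = rng.normal(size=(50, 20))
brain = stim @ rng.normal(size=(20, 100)) + 0.5 * rng.normal(size=(50, 100))
model = stim @ rng.normal(size=(20, 64)) + 0.5 * rng.normal(size=(50, 64))

print(f"brain-model RDM correlation: {compare(rdm(brain), rdm(model)):.2f}")
```

Because the RDM abstracts away from raw units (voxels vs. artificial neurons), it lets researchers ask whether two very different systems treat the same pairs of images as similar or different.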

The team hopes neuroscientists will submit new brain data that challenge the best models’ performance, revealing ways that they could become more like the brain.

These relatively simple networks “are more penetrable and much easier to work with” than most neural networks, Kubilius says, and they have a brainlike feature that many models lack: They retain information in memory and feed it back from later layers to earlier ones.
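The feedback property Kubilius describes, in which later layers send activity back to earlier ones, can be sketched as a small recurrent network unrolled over a few timesteps. The sizes, weights, and step count below are illustrative assumptions, not details of the models presented at the conference.

```python
import numpy as np

rng = np.random.default_rng(2)

# Feedforward weights (input->layer1, layer1->layer2) and a feedback
# connection from layer 2 back to layer 1.
W_in = rng.normal(size=(8, 16)) * 0.1
W_up = rng.normal(size=(16, 16)) * 0.1
W_fb = rng.normal(size=(16, 16)) * 0.1

def run(x, steps=4):
    """Unroll the network in time; layer 1 sees both the input and the
    feedback from layer 2's activity on the previous step."""
    h1 = np.zeros(16)
    h2 = np.zeros(16)
    trace = []
    for _ in range(steps):
        h1 = np.tanh(x @ W_in + h2 @ W_fb)  # feedback enters here
        h2 = np.tanh(h1 @ W_up)
        trace.append(h2.copy())
    return trace

trace = run(rng.normal(size=8))
# Unlike a single feedforward pass, the representation evolves over time.
print([float(np.linalg.norm(t)) for t in trace])
```

In a purely feedforward model the response to a fixed input is computed once and never changes; here the same input produces a trajectory of states, which is closer to how activity in visual cortex unfolds over time.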
