Rice, Baylor team sets new mark for 'deep learning'

Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new 'deep learning' algorithm. In tests, the group's 'deep rendering mixture model' largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students.

In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself.

'In deep-learning parlance, our system uses a method known as semisupervised learning,' said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice.

Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn't require much 'hand-holding' in the form of training examples.

For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before being tested on the 10,000 handwritten digits in the Modified National Institute of Standards and Technology (MNIST) database.
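The few-labels setup described in the article can be sketched in a few lines. This is an illustrative split (not the authors' code), with a NumPy array standing in for MNIST's digit labels:

```python
import numpy as np

def split_semisupervised(labels, per_class=10, seed=0):
    """Pick `per_class` labeled examples per digit; treat the rest as unlabeled."""
    rng = np.random.default_rng(seed)
    labeled_idx = []
    for digit in np.unique(labels):
        candidates = np.flatnonzero(labels == digit)
        labeled_idx.extend(rng.choice(candidates, size=per_class, replace=False))
    labeled_idx = np.array(sorted(labeled_idx))
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx

# Fake labels standing in for MNIST: 100 examples of each digit 0-9
labels = np.repeat(np.arange(10), 100)
labeled, unlabeled = split_semisupervised(labels)
```

The semisupervised learner would then see the 100 labeled examples plus the remaining examples without their labels.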

The semisupervised Rice-Baylor algorithm is a 'convolutional neural network,' a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons.

These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes.
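That edge-searching behavior of a first layer can be illustrated with a plain 2-D convolution. The 3x3 vertical-edge filter below is a standard textbook choice, not one taken from the study:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge filter: responds where brightness changes left-to-right
edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

# Toy image: dark left half, bright right half, giving one vertical edge
image = np.zeros((8, 8))
image[:, 4:] = 1.0
feature_map = conv2d(image, edge_filter)
```

The feature map is large only where the filter straddles the dark-to-bright boundary, which is exactly the "searching for edges" behavior described above.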

Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he'd just wrapped up a four-year postdoctoral stint in the lab of Rice's Richard Baraniuk, another co-author on the new study.

In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.


“You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image.”

These artificial neurons are looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.


Introduction to Deep Learning: What Are Convolutional Neural Networks?

A convolutional neural network, or CNN, is a network architecture for deep learning.

You can train a CNN to do image analysis tasks including scene classification, object detection and segmentation, and image processing.

A CNN is built on three key concepts: 1) local receptive fields, 2) shared weights and biases, and 3) activation and pooling. Finally, we’ll briefly discuss the three ways to train CNNs for image analysis.

The local receptive field is translated across an image to create a feature map from the input layer to the hidden layer neurons.

Pooling reduces the dimensionality of the feature map by condensing the output of small regions of neurons into a single output.

Just like in a typical neural network, the final layer connects every neuron from the last hidden layer to the output neurons.
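The building blocks just described (local receptive fields realized by one shared convolution kernel, pooling, and a fully connected output layer) can be sketched as a single forward pass in NumPy. All weights here are random stand-ins, so this is an illustration of the data flow, not a trained network:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def convolve(image, kernel):
    """Slide one shared 3x3 kernel over the image (local receptive fields)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Condense each size x size region of the feature map into one output."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((10, 10))          # toy grayscale input
kernel = rng.standard_normal((3, 3))  # one shared set of weights
bias = 0.1                            # one shared bias

fmap = relu(convolve(image, kernel) + bias)   # 8x8 feature map after activation
pooled = max_pool(fmap)                       # 4x4 after 2x2 max pooling

# Fully connected output layer: every pooled neuron feeds each of 10 outputs
w_out = rng.standard_normal((10, pooled.size))
scores = w_out @ pooled.ravel()               # one score per class
```

Because the same kernel and bias are reused at every image position, the layer has far fewer parameters than a fully connected layer of the same size, which is the point of shared weights.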

The first of these methods is to train the CNN from scratch. This is highly accurate, although it is also the most challenging, as you might need hundreds of thousands of labeled images and significant computational resources.

The second method relies on transfer learning, which is based on the idea that you can use knowledge of one type of problem to solve a similar problem.

For example, you could use a CNN model that has been trained to recognize animals to initialize and train a new model that differentiates between cars and trucks.
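A minimal sketch of that idea: keep a "pretrained" feature extractor frozen and train only a fresh two-class output layer. Since no real pretrained network is loaded here, the frozen weights are random stand-ins and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained feature extractor: a fixed (frozen) projection
W_frozen = rng.standard_normal((64, 256))     # maps 256-dim input -> 64 features

def features(x):
    return np.maximum(0.0, W_frozen @ x)      # frozen layers are never updated

# New task (e.g. cars vs trucks): replace the head with a fresh 2-class layer
W_head = np.zeros((2, 64))
lr = 0.1

def train_step(x, label):
    """One softmax-regression step on the new head only."""
    global W_head
    f = features(x)
    logits = W_head @ f
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = np.outer(p - np.eye(2)[label], f)  # cross-entropy gradient
    W_head -= lr * grad

# Toy data: two classes separated by a shift in input space
for _ in range(200):
    label = int(rng.integers(0, 2))
    x = rng.standard_normal(256) + (3.0 if label == 1 else -3.0)
    train_step(x, label)
```

Only `W_head` is updated, so training is fast and needs little data, which is the practical appeal of transfer learning.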

A hidden layer that has learned how to detect edges in an image, for instance, is broadly relevant to images from many different domains.

Research paper: Computer vision-based limestone rock-type classification using probabilistic neural network

Proper quality planning of limestone raw material is essential for maintaining the desired feed quality in a cement plant.

Color-image histogram features, including the weighted mean, skewness, and kurtosis, are extracted for each of the three color channels: red, green, and blue.
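Those per-channel histogram statistics can be computed roughly as follows. This is an illustrative implementation; the paper's exact binning and weighting scheme may differ:

```python
import numpy as np

def channel_features(channel, bins=256):
    """Weighted mean, skewness, and kurtosis of one channel's histogram."""
    hist, edges = np.histogram(channel, bins=bins, range=(0, 256))
    levels = (edges[:-1] + edges[1:]) / 2.0
    p = hist / hist.sum()                      # normalized histogram weights
    mean = np.sum(p * levels)
    std = np.sqrt(np.sum(p * (levels - mean) ** 2))
    skew = np.sum(p * ((levels - mean) / std) ** 3)
    kurt = np.sum(p * ((levels - mean) / std) ** 4)
    return mean, skew, kurt

def image_features(rgb_image):
    """Nine features: (mean, skewness, kurtosis) for the R, G, and B channels."""
    feats = []
    for c in range(3):
        feats.extend(channel_features(rgb_image[:, :, c]))
    return np.array(feats)

# Toy RGB image standing in for a limestone sample photo
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))
feats = image_features(img)
```

The resulting nine-dimensional feature vector is the kind of compact input a probabilistic neural network classifier can be trained on.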

The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily in classifying limestone rock types.

Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis

One study compared children given musical lessons with children given painting lessons, and found that after musical (but not painting) training, children showed enhanced reading abilities and improved auditory discrimination in speech, with the latter shown by both behavioral and neural measures (scalp-recorded cortical EEG).

The second line of research has demonstrated that a substantial portion of children with reading problems have auditory processing deficits, leading researchers to wonder whether musical training might be helpful for such children (e.g., Overy, 2003).

In terms of connections to reading, Goswami has suggested that problems in envelope perception during language development could result in less robust phonological representations at the syllable-level, which would then undermine the ability to consciously segment syllables into individual speech sounds (phonemes).

Phonological deficits are a core feature of dyslexia (if the dyslexia is not primarily due to visual problems), likely because reading requires the ability to segment words into individual speech sounds in order to map the sounds onto visual symbols.

Given that envelope is an important acoustic feature in both speech and music, the OPERA hypothesis leads to the prediction that musical training which relies on high-precision envelope processing could benefit the neural processing of speech envelopes, via mechanisms of adaptive neural plasticity, if the five conditions of OPERA are met.

(The children heard hundreds of repetitions of the sentence in one ear, while watching a movie and hearing the soundtrack quietly in the other ear.) The EEG response to the sentence was averaged across trials and low-pass filtered at 40 Hz to focus on cortical responses and how they might reflect the amplitude envelope of the sentence.
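The averaging and 40 Hz low-pass step can be sketched in NumPy. The windowed-sinc FIR filter and the 500 Hz sampling rate below are generic stand-ins, not details taken from the study:

```python
import numpy as np

FS = 500.0   # assumed EEG sampling rate in Hz (illustrative)

def lowpass_sinc(signal, cutoff_hz, fs, numtaps=101):
    """FIR low-pass filter via a Hamming-windowed sinc kernel."""
    t = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = np.sinc(2 * cutoff_hz / fs * t) * np.hamming(numtaps)
    h /= h.sum()                                # unity gain at DC
    return np.convolve(signal, h, mode="same")

# Toy data: 100 "trials" of a slow 5 Hz cortical response buried in
# 80 Hz activity whose phase varies from trial to trial
n = int(2 * FS)                                 # 2 s of samples
time = np.arange(n) / FS
rng = np.random.default_rng(0)
trials = (np.sin(2 * np.pi * 5 * time)
          + np.sin(2 * np.pi * 80 * time + rng.random((100, 1)) * 2 * np.pi))

erp = trials.mean(axis=0)                       # average across trials
erp_filtered = lowpass_sinc(erp, cutoff_hz=40.0, fs=FS)
```

Averaging cancels activity that is not phase-locked to the stimulus, and the 40 Hz low-pass then isolates the slow cortical response that can follow a speech envelope.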

(The authors argue that the neural EEG signals they studied are likely to arise from activity in secondary auditory cortex.) Notably, the quality of tracking, as measured by cross-correlating the EEG waveform with the speech envelope, was far superior in right-hemisphere electrodes (approx.
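The tracking measure itself (cross-correlating the EEG waveform with the speech envelope) is straightforward to sketch. The rectify-and-smooth envelope and all signals below are illustrative stand-ins, not the authors' pipeline:

```python
import numpy as np

def envelope(signal, smooth=50):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    kernel = np.ones(smooth) / smooth
    return np.convolve(np.abs(signal), kernel, mode="same")

def max_crosscorr(a, b, max_lag=100):
    """Peak normalized cross-correlation of a and b over +/- max_lag samples."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        best = max(best, np.mean(x * y))
    return best

# Toy example: an "EEG" trace that tracks a speech envelope with a small delay
rng = np.random.default_rng(0)
speech_env = envelope(rng.standard_normal(2000))
eeg = np.roll(speech_env, 20) + 0.05 * rng.standard_normal(2000)

tracking = max_crosscorr(eeg, speech_env)
```

Searching over lags matters because the cortical response follows the stimulus with a delay; the peak of the cross-correlation is the "quality of tracking" the authors compared across hemispheres.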

In music, one set of abilities that should depend on envelope processing is rhythmic ability, because such abilities depend on sensitivity to the timing of musical notes, and envelope is an important cue for the perceptual onset and duration of events in music (Gordon, 1987).

Furthermore, normal 8-year-old children show positive correlations between performance on rhythm discrimination tasks (but not pitch discrimination tasks) and reading tasks, even after factoring out effects of age, parental education, and the number of hours children spend reading per week (Corrigall and Trainor, 2010).
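Factoring out covariates like age in that way is typically done with partial correlation, which can be computed by correlating regression residuals. This generic sketch is not the study's analysis code, and the data are synthetic:

```python
import numpy as np

def residualize(y, covariates):
    """Residuals of y after least-squares regression on the covariates."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(x, y, covariates):
    """Correlation of x and y after removing the covariates' influence."""
    return np.corrcoef(residualize(x, covariates),
                       residualize(y, covariates))[0, 1]

# Toy data: rhythm and reading scores share a latent skill, and an age
# covariate also drives both scores
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(7.5, 8.5, n)
skill = rng.standard_normal(n)
rhythm = skill + 2.0 * age + 0.5 * rng.standard_normal(n)
reading = skill + 2.0 * age + 0.5 * rng.standard_normal(n)

r = partial_corr(rhythm, reading, covariates=age)
```

The residual correlation reflects only the shared variance that the covariates cannot explain, which is what "even after factoring out effects of age" means statistically.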

In ordinary musical circumstances, the amplitude envelope of sounds is an acoustic feature relevant to timbre, and while timbre is an important attribute of musical sound, it (unlike pitch) is rarely a primary structural parameter for musical sequences, at least in Western melodic music (Patel, 2008).

For example, using modern digital technology in which keyboards can produce any type of synthesized sounds, one could create synthesized sounds with similar spectral content but slightly different amplitude envelope patterns, and create musical activities (e.g., composing, playing, listening) which rely on the ability to distinguish such sounds.
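That sound-design idea can be sketched directly: synthesize two tones with identical harmonic content but contrasting amplitude envelopes. The frequencies, sample rate, and envelope shapes below are illustrative choices, not specifications from the text:

```python
import numpy as np

FS = 22050                      # assumed sample rate in Hz
DUR = 0.5                       # tone duration in seconds
t = np.linspace(0.0, DUR, int(FS * DUR), endpoint=False)

def harmonic_tone(f0, amps):
    """Sum of harmonics of f0 with the given relative amplitudes."""
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(amps))

carrier = harmonic_tone(220.0, amps=[1.0, 0.5, 0.25])   # same spectrum for both

# Two contrasting envelopes: sharp attack with decay vs slow swell
fast_attack = np.minimum(t / 0.01, 1.0) * np.exp(-3.0 * t)
slow_swell = (t / DUR) ** 2

tone_a = carrier * fast_attack
tone_b = carrier * slow_swell
```

The two tones have nearly identical long-term spectra but very different temporal shapes, so discriminating them forces the listener to rely on envelope cues.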

Note that such sounds need not be musically unnatural: acoustic research on orchestral wind-instrument tones has shown, for example, that the spectrum of the flute, bassoon, trombone, and French horn are similar, and that listeners rely on envelope cues in distinguishing between these instruments (Strong and Clark, 1967).

Fortunately, music processing is known to have a strong relationship to the brain's emotion systems (Koelsch, 2010) and it seems plausible that the musical tasks could be made pleasurable (e.g., via the use of attractive musical sounds and melodies, and stimulating rewards for good performance).

To summarize, the OPERA hypothesis predicts that musical training which requires high-precision amplitude envelope processing will benefit the neural encoding of amplitude envelopes in speech, via mechanisms of neural plasticity, if the five conditions of OPERA are met.

How to Predict Stock Prices Easily - Intro to Deep Learning #7

Only a few days left to sign up for my Decentralized Applications course! We're going to predict the closing price of the S&P 500 using a ...

Autonomic Nervous System: Crash Course A&P #13

Hank takes you on a tour of your two-part autonomic nervous system. This episode explains how your sympathetic nervous system and parasympathetic nervous ...

Intro - The Math of Intelligence

Only a few days left to sign up for my Decentralized Applications course! Welcome to The Math of Intelligence! In this 3-month course, ...

Electrical Activity In Brain -1


Stimulating the Brain to Unlock Learning and Recovery in Mental Illness: Dr. Vikaas Sohal

IMHRO Assistant Professor Vikaas Sohal describes his lab's exciting work: by stimulating specific neurons in the brains of mice in specific patterns, they have ...

Your First ML App - Machine Learning for Hackers #1

Only a few days left to sign up for my Decentralized Applications course! This video will get you up and running with your first ML app in ...

Module 3 Lecture 1 Neural Control A review

Lectures by Prof. Laxmidhar Behera, Department of Electrical Engineering, Indian Institute of Technology, Kanpur. For more details on NPTEL visit ...

Deep Learning For Realtime Malware Detection - Domenic Puzio and Kate Highnam

Domain generation algorithm (DGA) malware makes callouts to unique web addresses to avoid detection by static rules engines. To counter this type of malware ...

Algorithmic Music Generation with Recurrent Neural Networks

This is the result of a project I worked on for CS224D with Aran Nayebi. The idea is to design a neural network that can generate music using your music library ...

Anthony Wagner presents Detecting Memories Using Brain Imaging. Stanford, March 2013

Dr. Anthony Wagner presents Detecting Memories Using Brain Imaging: Implications for Law and Neuroscience at the Colloquium on Law, Neuroscience, and ...