AI News, "Big Data and Artificial Intelligence: Intelligence Matters" entries

"Big Data and Artificial Intelligence: Intelligence Matters" entries

Like many data scientists, I’m excited about advances in large-scale machine learning, particularly recent success stories in computer vision and speech recognition.

While it can sometimes be useful to mimic nature, in the case of the brain machine learning researchers recognize that understanding and identifying the essential neural processes is far more important than copying its machinery.

A related example cited by machine learning researchers is flight: wing flapping and feathers aren’t critical, but an understanding of physics and aerodynamics is essential.

The goal in technology shouldn’t be to build algorithms that mimic neural function. She points out that a more meaningful goal is to “extract and integrate relevant neural processing strategies when applicable, but also identify where there may be opportunities to be more efficient.”

Deep Learning

Kurzweil told Larry Page, who had read an early draft of his book, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.

Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats.

In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.

Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form.

These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
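
To make that concrete, here is a minimal sketch in Python of one such simulated neuron; the feature values, weights, and bias are made up for illustration, not taken from any real system.

```python
import math

def neuron_output(features, weights, bias):
    """One simulated neuron: weighted sum of inputs squashed to a value between 0 and 1."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic ("sigmoid") squashing

# Hypothetical features extracted from an image patch, each scaled to [0, 1]:
# e.g. "strength of a vertical edge" and "amount of blue".
features = [0.9, 0.1]
weights = [2.5, -1.0]   # the learned weights decide which features matter, and how
bias = -0.5

print(neuron_output(features, weights, bias))  # roughly 0.84: the unit responds strongly
```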

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.

The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.

This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

Once the first layer accurately recognizes those features, its outputs are fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.

Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms.
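
Stacking such units gives the layered structure described here: each layer’s outputs become the next layer’s inputs. The sketch below, again in plain Python with arbitrary layer sizes and random untrained weights, only illustrates how the layers compose; real systems learn the weights from data.

```python
import math
import random

def layer(inputs, weight_matrix, biases):
    """One layer: every unit computes a squashed weighted sum of all the inputs."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(weights, inputs)) + b)))
        for weights, b in zip(weight_matrix, biases)
    ]

def random_layer(n_in, n_out):
    """Untrained layer: random weights, zero biases (illustration only)."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Arbitrary stack: 8 raw inputs -> 5 "simple feature" units -> 3 "complex feature" units -> 1 output.
sizes = [8, 5, 3, 1]
layers = [random_layer(n_in, n_out) for n_in, n_out in zip(sizes, sizes[1:])]

x = [random.random() for _ in range(sizes[0])]   # stand-in for pixel or phoneme features
for weights, biases in layers:
    x = layer(x, weights, biases)                # each layer's output feeds the next layer

print(x)  # training would adjust the weights so this final activation matches the label
```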

Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time.

Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment.

In high school, Kurzweil wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.

Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.

This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.

“My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says.

IBM’s Watson, the computer that famously won at Jeopardy!, could handle queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”)

Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works.

“That’s not a project I think I’ll ever finish.” Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term.

Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance.

New Theory Cracks Open the Black Box of Deep Neural Networks

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well.

During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data—the pixels of a photo of a dog, for instance—up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can.

The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization—and to what extent brains apprehend reality in the same way.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.” Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk.

“I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet.

“For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.” Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract—as 1s and 0s with purely mathematical meaning.

Using information theory, he realized, “you can define ‘relevant’ in a precise sense.” Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y.
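
For readers who want the formula, the information bottleneck literature states this trade-off as an objective over a compressed representation T of X; the notation below is a summary of that published work, not a quotation from this article.

```latex
% Information bottleneck: choose a (stochastic) mapping from X to a
% representation T that is maximally compressed, i.e. small I(X;T),
% while remaining predictive of Y, i.e. large I(T;Y).
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
% I(\cdot\,;\cdot) is mutual information; the multiplier \beta sets how much
% predictive power one is willing to trade away for extra compression.
```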

“My only luck was that deep neural networks became so important.” Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors.

The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called “stochastic gradient descent”: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons.

When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image—1 or 0, “dog” or “no dog.” Any differences between this firing pattern and the correct pattern are “back-propagated” down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network better at producing the correct output signal.
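
As a heavily simplified illustration of stochastic gradient descent, the Python sketch below trains a single logistic unit on made-up “dog / no dog” feature vectors. A real deep network back-propagates the error through many layers, but each connection is nudged by the same kind of update.

```python
import math
import random

def predict(weights, bias, x):
    """Forward pass: squashed weighted sum, a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(weights, x)) + bias)))

# Made-up training data: 3-number feature vectors labeled 1 ("dog") or 0 ("no dog").
data = [([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1),
        ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0)]

weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5

for epoch in range(200):
    random.shuffle(data)              # "stochastic": examples are seen in random order
    for x, label in data:
        out = predict(weights, bias, x)
        error = out - label           # difference between firing pattern and correct label
        # Back-propagated correction: strengthen or weaken each connection a little.
        weights = [w - lr * error * out * (1 - out) * xi for w, xi in zip(weights, x)]
        bias -= lr * error * out * (1 - out)

print(predict(weights, bias, [0.85, 0.9, 0.15]))  # close to 1: "dog"
print(predict(weights, bias, [0.15, 0.1, 0.9]))   # close to 0: "no dog"
```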

As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it.

Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv’s most recent experiments, not included in their preliminary paper, in which they train much larger deep neural networks, with 330,000 connections, to recognize handwritten digits in the 60,000-image Modified National Institute of Standards and Technology (MNIST) database, a well-known benchmark for gauging the performance of deep-learning algorithms.

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box.

Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

The Dark Secret at the Heart of AI

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.

The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.

Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.

The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.

There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.

The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.

Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.

If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.

Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.

It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.

The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.

In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
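
The article doesn’t describe how such fooling images are made. One well-known gradient-based recipe is the fast gradient sign method, sketched below against a toy stand-in model; this is not necessarily the technique Clune’s group used, and the “classifier” here is a single made-up logistic unit rather than a real deep network.

```python
import math
import random

# Toy stand-in for a trained image classifier: a single logistic unit over "pixels".
random.seed(0)
n_pixels = 64
weights = [random.uniform(-1, 1) for _ in range(n_pixels)]

def confidence(pixels):
    """Model's confidence that the image contains the target object (0 to 1)."""
    return 1.0 / (1.0 + math.exp(-sum(w * p for w, p in zip(weights, pixels))))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

image = [random.uniform(0, 1) for _ in range(n_pixels)]   # a "natural" image
print("before:", confidence(image))

# Fast-gradient-sign step: for this toy model the gradient of the score with
# respect to each pixel has the sign of its weight, so push every pixel
# slightly in that direction (clipped to the valid pixel range).
epsilon = 0.15                                            # small per-pixel change
adversarial = [min(1.0, max(0.0, p + epsilon * sign(w))) for p, w in zip(image, weights)]
print("after: ", confidence(adversarial))                 # confidence rises sharply although the image changed little
```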

The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.

The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.

She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.

Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.

A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.

But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.

A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.

But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Mapping the Brain to Build Better Machines

Take a three-year-old to the zoo, and she intuitively knows that the long-necked creature nibbling leaves is the same thing as the giraffe in her picture book.

“We are still more flexible in thinking and can anticipate, imagine and create future events.” An ambitious new program, funded by the federal government’s intelligence arm, aims to bring artificial intelligence more in line with our own mental powers.

Koch and his colleagues are now creating a complete wiring diagram of a small cube of brain — a million cubic microns, totaling one five-hundredth the volume of a poppy seed.

That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.

In a paper published in the journal Nature in March, Wei-Chung Allen Lee — a neuroscientist at Harvard University who is working with Koch’s team — and his collaborators mapped out a wiring diagram of 50 neurons and more than 1,000 of their partners.

By pairing this map with information about each neuron’s job in the brain — some respond to a visual input of vertical bars, for example — they derived a simple rule for how neurons in this part of the cortex are anatomically connected.

While the implicit goal of the Microns project is technological — IARPA funds research that could eventually lead to data-analysis tools for the intelligence community, among other things — new and profound insights into the brain will have to come first.

Without knowing all the component parts, he said, “maybe we’re missing the beauty of the structure.” The convoluted folds covering the brain’s surface form the cerebral cortex, a pizza-sized sheet of tissue that’s scrunched to fit into our skulls.

New technologies designed to trace the shape, activity and connectivity of thousands of neurons are finally allowing researchers to analyze how cells within a module interact with each other.

“Different teams have different guesses for what’s inside.” The researchers will focus on a part of the cortex that processes vision, a sensory system that neuroscientists have explored intensively and that computer scientists have long striven to emulate.

Tai Sing Lee’s team, co-led by George Church, theorizes that the brain has built a library of parts — bits and pieces of objects and people — and learns rules for how to put those parts together.

The team will then computationally stitch together each cross section to create a densely packed three-dimensional map that charts millions of neural wires on their intricate path through the cortex.

“That’s what [Tolias] has started to do.” Among these thousands of neuronal connections, Tolias’s team uncovered three general rules that govern how the cells are connected; some, for example, talk mainly to neurons of their own kind.

(Tolias’s team defined their cells based on neural anatomy rather than function, which is how Wei-Chung Allen Lee’s team defined theirs.) Using just these three wiring rules, the researchers could simulate the circuit fairly accurately.

And although neural networks have enjoyed a major renaissance — the voice- and face-recognition programs that have rapidly become part of our daily lives are based on neural network algorithms, as is AlphaGo, the computer that recently defeated the world’s top Go player — the rules that artificial neural networks use to alter their connections are almost certainly different than the ones employed by the brain.

Contemporary neural networks “are based on what we knew about the brain in the 1960s,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute in San Diego who developed early neural network algorithms with Geoffrey Hinton, a computer scientist at the University of Toronto.

“Our knowledge of how the brain is organized is exploding.” For example, today’s neural networks use a feed-forward architecture, where information flows from input to output through a series of layers.

Microns researchers aim to decipher the rules governing feedback loops — such as which cells these loops connect, what triggers their activity, and how that activity affects the circuit’s output — and then translate those rules into an algorithm.

“If you could implement [feedback circuitry] in a deep network, you could go from a network that has kind of a knee-jerk reaction — give input and get output — to one that’s more reflective, that can start thinking about inputs and testing hypotheses,” said Sejnowski, who serves as an advisor to President Obama’s $100 million BRAIN Initiative, of which the Microns project is a part.
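
As a purely illustrative contrast (a sketch of my own, not the Microns teams’ algorithm), the snippet below shows a knee-jerk feed-forward pass next to a version with one feedback connection, which lets later activity modulate the earlier layer over several time steps before the response settles.

```python
import math

def unit(inputs, weights):
    """Single unit: squashed weighted sum of its inputs."""
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

# Feed-forward: input -> hidden -> output, computed once ("give input, get output").
def feed_forward(x, w_hidden, w_out):
    hidden = unit(x, w_hidden)
    return unit([hidden], w_out)

# With feedback: the output is fed back into the hidden unit and the pair is
# iterated for a few time steps, so the response can settle rather than react once.
def with_feedback(x, w_hidden, w_out, w_feedback, steps=5):
    hidden, out = 0.0, 0.0
    for _ in range(steps):
        hidden = unit(x + [out], w_hidden + [w_feedback])  # earlier layer sees later activity
        out = unit([hidden], w_out)
    return out

x = [0.2, 0.9]                       # made-up input features
print(feed_forward(x, [1.0, -0.5], [2.0]))
print(with_feedback(x, [1.0, -0.5], [2.0], w_feedback=-1.5))
```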

Neuroscientists reveal how the brain can enhance connections

When the brain forms memories or learns a new task, it encodes the new information by tuning connections between neurons.

“This mechanism that we’ve uncovered on the presynaptic side adds to a toolkit that we have for understanding how synapses can change,” says Troy Littleton, a professor in the departments of Biology and Brain and Cognitive Sciences at MIT, a member of MIT’s Picower Institute for Learning and Memory, and the senior author of the study, which appears in the Nov. 18 issue of Neuron.

Learning more about how synapses change their connections could help scientists better understand neurodevelopmental disorders such as autism, since many of the genetic alterations linked to autism are found in genes that code for synaptic proteins.

Over the past 30 years, scientists have found that strong input to a postsynaptic cell causes it to traffic more receptors for neurotransmitters to its surface, amplifying the signal it receives from the presynaptic cell.

When the presynaptic neuron registers an influx of calcium ions, carrying the electrical surge of the action potential, vesicles that store neurotransmitters fuse to the cell’s membrane and spill their contents outside the cell, where they bind to receptors on the postsynaptic neuron.

“When we gave a strong activity pulse to these neurons, these mini events, which are normally very low-frequency, suddenly ramped up and they stayed elevated for several minutes before going down.”

The enhancement of minis appears to provoke the postsynaptic neuron to release a signaling factor, still unidentified, that goes back to the presynaptic cell and activates an enzyme called PKA.

“Machinery in the presynaptic terminal can be modified in a very acute manner to drive certain forms of plasticity, which could be really important not only in development, but also in more mature states where synaptic changes can occur during behavioral processes like learning and memory,” Cho says.

How to Rewire Your Brain to Create a New Reality | The Science of Law of Attraction

Science explains how to rewire your brain to create a new reality, and why this is possible. The law of attraction is rooted in the principles of quantum physics.

How Can the Brain Efficiently Build an Understanding of the Natural World

CBMM Special Seminar: Ann M. Hermundstad, PhD, Janelia Research Campus. Abstract: The brain exploits the statistical regularities of the natural world. In the ...

Dr. Joe Dispenza UNLOCK the FULL Potential of Your MIND! The Law Of Attraction & Quantum Physics

Can people overcome habits, illness and disease through the power of thought? Dr. Joe Dispenza has clear evidence that you can ..

We Create Our Reality

Frederick Travis, PhD, director of the Center for Brain, Consciousness and Cognition, explains that the concept "We create our reality" is more than a ...

Electrical experiments with plants that count and communicate | Greg Gage

Neuroscientist Greg Gage takes sophisticated equipment used to study the brain out of graduate-level labs and brings it to middle- and high-school ...

Break the Addiction to Negative Thoughts & Emotions to Create What You Want - Dr. Joe Dispenza

The most important lesson from 83,000 brain scans | Daniel Amen | TEDxOrangeCoast

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events ..

My Neurons, My Self

With ever more refined techniques for measuring complex brain activity, scientists are challenging the understanding of thought, memory and emotion–what we ...

Ray Kurzweil: "How to Create a Mind" | Talks at Google

About the book: In How to Create a Mind: The Secret of Human Thought Revealed, the bold ...

Biblical Series I: Introduction to the Idea of God

Lecture I in my Psychological Significance of the Biblical Stories series from May 16th at Isabel Bader Theatre in Toronto. In this lecture, I describe what I ...