AI News, BOOK REVIEW: The Terminator Is Not Coming. The Future Will Thank Us.

The Terminator Is Not Coming. The Future Will Thank Us.

The 21st century is a watershed time in human evolution.

We have made excellent progress on the science, and see a clear path to creating intelligent machines, including ones that are faster and more capable than humans.

Intelligent machines will radically transform our world in the 21st century, similar to how computers transformed our world in the 20th century.

Our tendency is to imagine that a new technology will be applied to problems and tasks we are familiar with, but inevitably new and unexpected applications surface that no one envisioned.

Similarly, today it is tempting to imagine that intelligent machines will look like humans, perform human-like tasks, converse with humans, and have human-like desires and emotions.

Self-replicating viruses and bacteria have probably killed more humans than anything else in history, and continue to hold the potential to wipe out all of humanity.

Although a computer virus could result in some terrible consequences, it is hard to imagine one extinguishing humankind in the way that biological viruses could.

Many doomsday scenarios related to machine intelligence have at their core the idea that intelligent machines could reproduce on their own, outpacing our ability to control them.

It would be relatively easy for a human to give intelligent machines that exist purely as software the ability to self-replicate in the same way as a computer virus.

This could be dangerous, but, as with today’s computer viruses, the maximum possible damage caused by an intelligent machine replicating in a computer network would be limited.

A few people have suggested the possibility of self-replicating nano-machines, a scenario sometimes called “gray goo.” As these nano-machines multiplied exponentially, they would quickly destroy the habitats of all other life.

But intelligent machines would not have the ability to self-replicate in nature unless we go to extreme lengths to give them this capability, and currently we don’t know how to do that.

Adding intelligence to an already self-replicating entity could make a bad situation worse, but intelligence itself doesn’t lead to self-replication, unless perhaps you believe in misconception #2.

The second misconception is that intelligent machines will have human-like desires, and would therefore devise ways to self-replicate, or free themselves from their human creators, or simply act based on their own desires and not care about us.

Some people are concerned about an “intelligence explosion” or a “singularity” where machines that are smarter than humans create machines that are smarter still which create even smarter machines, and so on.

However, for an intelligent machine to discover new truths, to learn new skills, to extend knowledge beyond what has been achieved before, it will need to go through the same difficult and slow discovery process humans do.

Intelligence isn’t something that can be increased by turning a knob or adding more “intelligence bits.” In addition to a big brain, it requires iterative manipulation and measurement of physical things.

We might create intelligent machines that directly sense and think about proteins, or tirelessly explore the human genome to discover the foundations of disease.

I don’t know of anyone who is terribly concerned about this problem, because it is so far in the future; by contrast, we definitely should care about whether human activity will make the Earth uninhabitable in the next 100 years.

Similarly, machine intelligence poses an evolving series of potential dangers, some in the near future, but some so far in the future we can’t even imagine them today.

The big question we have to answer is whether we are doing something today that cannot be undone, something that sets in motion a series of events that will inevitably lead to the extinction or enslavement of all humanity.

The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions.

But the risks and bad outcomes arising from machine intelligence are not substantively different from the ones we have faced in the past; there is nothing here as terrifying as an unstoppable virus, self-replicating gray goo, or a spiteful god.

Indeed, instead of shortening our time on this planet, machine intelligence will help us extend it by generating vast new knowledge and understanding, and by creating amazing new tools to improve the human condition.

An engineer, serial entrepreneur, scientist, inventor, and author, Jeff Hawkins was a founder of two mobile computing companies, Palm and Handspring, and was the architect of many computing products, including the PalmPilot and the Treo smartphone.

What Intelligent Machines Need to Learn From the Neocortex

Computers have transformed work and play, transportation and medicine, entertainment and sports.

Although machine-learning techniques such as deep neural networks have recently made impressive gains, they are still a world away from being intelligent, from being able to understand and act in the world the way that we do.

In recent years, we have made significant strides in our work, and we have identified several features of biological intelligence that we believe will need to be incorporated into future thinking machines.

The neocortex is a deeply folded sheet some 2 millimeters thick that, if laid out flat, would be about as big as a large dinner napkin.

Your experience of the world around you—recognizing a friend’s face, enjoying a piece of music, holding a bar of soap in your hand—is the result of input from your eyes, ears, and other sensory organs traveling to your neocortex and causing groups of neurons to fire.

Of the 30 billion neurons in the neocortex, only 1 or 2 percent are firing at any given instant, which still means hundreds of millions of neurons are active at any point in time.

For example, when you think of your friend’s face, a pattern of neural firing occurs in the neocortex that is similar to the one that occurs when you are actually seeing your friend’s face.

The neocortex looks remarkably uniform from one region to another, which suggests that all of its regions run a common algorithm. The existence of such a universal algorithm is exciting because if we can figure out what that algorithm is, we can get at the heart of what it means to be intelligent, and incorporate that knowledge into future machines.

While it is true that today’s AI techniques reference neuroscience, they use an overly simplified neuron model, one that omits essential features of real neurons, and they are connected in ways that do not reflect the reality of our brain’s complex architecture.

These simplifications are why AI today may be good at labeling images or recognizing spoken words but is not able to reason, plan, and act in creative ways.

It turns out that just 15 to 20 active synapses on a branch are sufficient to recognize a pattern of activity in a large population of neurons.

Some of these recognized patterns cause the neuron to become active, but others change the internal state of the cell and act as a prediction of future activity.

Neuroscientists used to believe that learning occurred solely by modifying the effectiveness of existing synapses so that when an input arrived at a synapse it would either be more likely or less likely to make the cell fire.

Because the branches of a dendrite are mostly independent, when a neuron learns to recognize a new pattern on one of its dendrites, it doesn’t interfere with what the neuron has already learned on other dendrites.

Intelligent machines don’t have to model all the complexity of biological neurons, but the capabilities enabled by dendrites and learning by rewiring are essential.
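The dendrite mechanism described above maps naturally onto a few lines of code. The sketch below is a simplified illustration of the idea, not Numenta’s actual implementation; the class names, the 15-synapse threshold, and the synapse sample size are assumptions chosen to mirror the numbers in the text.

```python
import random

# Illustrative threshold: the text says 15 to 20 active synapses on a
# branch suffice to recognize a pattern in a large neuron population.
RECOGNITION_THRESHOLD = 15

class DendriticSegment:
    """One dendritic branch: a small set of synapses onto other cells."""
    def __init__(self, presynaptic_cells):
        self.synapses = set(presynaptic_cells)

    def recognizes(self, active_cells):
        # A pattern is recognized when enough synapses see active input.
        return len(self.synapses & active_cells) >= RECOGNITION_THRESHOLD

class Neuron:
    def __init__(self):
        self.proximal = []  # recognized patterns here make the cell fire
        self.distal = []    # recognized patterns here act as predictions

    def step(self, active_cells):
        fires = any(seg.recognizes(active_cells) for seg in self.proximal)
        predicts = any(seg.recognizes(active_cells) for seg in self.distal)
        return fires, predicts

    def learn(self, active_cells, synapses_per_segment=20):
        # "Learning by rewiring": grow a new segment wired to a subset of
        # the currently active cells. Because dendritic branches are mostly
        # independent, this leaves patterns learned on other branches intact.
        sample = random.sample(sorted(active_cells),
                               min(synapses_per_segment, len(active_cells)))
        self.distal.append(DendriticSegment(sample))
```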

In a computer’s memory, all combinations of 1s and 0s are potentially valid, so if you change one bit it will typically result in an entirely different meaning, in much the same way that changing the letter i to a in the word fire results in an unrelated word, fare.

If we think of each neuron as a bit, then to represent a piece of information the brain uses thousands of bits (many more than the 8 to 64 used in computers), but only a small percentage of the bits are 1 at any time; such a pattern of mostly inactive bits is called a sparse distributed representation, or SDR.

Each of the active neurons represents some aspect of a cat, such as “pet,” or “furry,” or “clawed.” If a few neurons die, or a few extra neurons become active, the new SDR will still be a good representation of “cat” because most of the active neurons are still the same.

Imagine you have one SDR representing “cat” and another representing “bird.” Both the “cat” and “bird” SDR would have the same active neurons representing “pet” and “clawed,” but they wouldn’t share the neuron for “furry.” This example is simplified, but the overlap property is important because it makes it immediately clear to the brain how the two objects are similar or different.
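To make the contrast with conventional computer memory concrete, here is a toy sketch of SDRs as sets of active neuron indices. The population size, the roughly 2 percent sparsity, and the overlap measure are illustrative assumptions, not Numenta’s implementation.

```python
import random

N = 2048        # neurons in the population (bits)
ACTIVE = 40     # roughly 2 percent of the bits are 1 at any time

def random_sdr(shared=frozenset()):
    """Build an SDR containing the given shared bits plus random filler."""
    pool = [i for i in range(N) if i not in shared]
    return frozenset(shared) | frozenset(random.sample(pool, ACTIVE - len(shared)))

def overlap(a, b):
    """Shared active bits: the brain's cue that two things are similar."""
    return len(a & b)

# "cat" and "bird" share bits for attributes such as "pet" and "clawed".
shared = frozenset(random.sample(range(N), 10))
cat, bird = random_sdr(shared), random_sdr(shared)
print(overlap(cat, bird) >= 10)   # True: related concepts overlap

# Robustness: silence a few active neurons and add a few stray ones.
damaged = frozenset(list(cat)[3:]) | frozenset(random.sample(range(N), 3))
print(overlap(cat, damaged))      # still close to 40, so still "cat"
```

Unlike flipping one bit in a computer word, which produces an unrelated meaning, perturbing a few bits of an SDR barely changes what it represents.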

Deep-learning networks also use hierarchies, but they often require 100 levels of processing to recognize an image, whereas the neocortex achieves the same result with just four levels.

Every time your body moves, the neocortex takes the current motor command, converts it into a location in the object’s reference frame, and then combines the location with the sensory input to learn 3D models of the world.
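The sensorimotor loop in this paragraph can be sketched as pairing each sensory feature with a location in the object’s own reference frame, so that an object’s model becomes a set of (location, feature) observations. The class name, the coordinate scheme, and the cup example below are hypothetical illustrations of the idea, not Numenta’s code.

```python
class ObjectModel:
    """An object learned as a set of (location, feature) pairs."""
    def __init__(self, name):
        self.name = name
        self.pairs = set()

    def learn(self, location, feature):
        # Store what was sensed at this spot in the object's reference frame.
        self.pairs.add((location, feature))

    def evidence(self, location, feature):
        # Does this observation fit the learned model of the object?
        return (location, feature) in self.pairs

# Learning a coffee cup by sensing it at different locations:
cup = ObjectModel("cup")
cup.learn((0, 0, 0), "curved_surface")
cup.learn((0, 5, 0), "rim_edge")
cup.learn((3, 2, 0), "handle")

# During recognition, each movement updates the sensed location, and the
# new (location, feature) pair supports or rules out candidate objects.
print(cup.evidence((3, 2, 0), "handle"))  # True
```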

These three fundamental attributes of the neocortex—learning by rewiring, sparse distributed representations, and sensorimotor integration—will be cornerstones of machine intelligence.

From the earliest days of AI, critics dismissed the idea of trying to emulate human brains, often with the refrain that “airplanes don’t flap their wings.” In reality, Wilbur and Orville Wright studied birds in detail.

In short, the Wright brothers studied birds and then chose which elements of bird flight were essential for human flight and which could be ignored.

While it is exciting for today’s computers to classify images and recognize spoken queries, we are not close to building truly intelligent machines.

For example, if we are ever to inhabit other planets, we will need machines to act on our behalf, travel through space, build structures, mine resources, and independently solve complex problems in environments where humans cannot survive.

In the 1940s, the pioneers of the computing age sensed that computing was going to be big and beneficial, and that it would likely transform human society.

In 20 years, we will look back and see this as the time when advances in brain theory and machine learning started the era of true machine intelligence.

Deep Learning

Ray Kurzweil told Google’s Larry Page, who had read an early draft of his book How to Create a Mind, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.

Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats.

In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.

Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form.

The strengths, or weights, of the connections between these simulated neurons determine how each one responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.

The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.

This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

The network’s lowest layer learns to recognize primitive features, like an edge in an image or the smallest unit of speech sound. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.

Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms.
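As a concrete picture of the layered networks described above, here is a minimal NumPy sketch in which each simulated neuron sums weighted inputs and squashes the result to a value between 0 and 1 with a sigmoid. The layer sizes, the random weights, and the labels in the comments are arbitrary assumptions for illustration; a real system would train the weights on many labeled examples, typically by backpropagation.

```python
import numpy as np

def sigmoid(x):
    # Each simulated neuron outputs a value between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Three layers of weights: raw input -> simple features ->
# more complex features -> category scores (e.g. "dog" vs. others).
weights = [rng.normal(size=(64, 32)),
           rng.normal(size=(32, 16)),
           rng.normal(size=(16, 4))]

def forward(x):
    for w in weights:
        x = sigmoid(x @ w)   # each layer feeds the next, as described above
    return x

x = rng.random(64)           # a digitized input: image patch or audio frame
print(forward(x))            # four scores between 0 and 1
```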

Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time.

Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment.
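Hawkins’s point about time can be illustrated with a bare-bones sequence memory that learns transitions between successive patterns in a stream and uses the current pattern to predict the next. This first-order toy is my own sketch of the concept, far simpler than the sequence memory in Hawkins’s actual work.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Learn which pattern tends to follow which in a temporal stream."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, pattern):
        if self.prev is not None:
            self.transitions[self.prev][pattern] += 1
        self.prev = pattern

    def predict(self, pattern):
        nxt = self.transitions.get(pattern)
        return nxt.most_common(1)[0][0] if nxt else None

mem = SequenceMemory()
for frame in ["crouch", "leap", "land", "crouch", "leap", "land"]:
    mem.observe(frame)
print(mem.predict("leap"))  # "land": the motion matters, not one still image
```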

In high school, Kurzweil wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.

Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.

Building a truly intelligent machine isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.

“My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says.

Kurzweil was impressed that IBM’s Watson could answer queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”) He isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works.

“That’s not a project I think I’ll ever finish,” he says. Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term.

Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance.

Related Videos

Intelligence and Machines: Creating Intelligent Machines by Modeling the Brain (Jeff Hawkins)

Jeff Hawkins: Advances in Modeling Neocortex and its Impact on Machine Intelligence (Smith Group Lecture, Beckman Institute for Advanced Science and Technology, University of Illinois)

Jeff Hawkins on the Neocortex

Does the Neocortex Use Grid Cell-Like Mechanisms to Learn the Structure of Objects? (Jeff Hawkins, Numenta, Computational Theories of the Brain)

Jeff Hawkins: Lessons From The Neocortex For AI (Numenta)

Intelligence and Machines: Creating Intelligent Machines by Modeling the Brain (Jeff Hawkins, Numenta, Symposium on Visions of the Theory of Computing, May 30, 2013, hosted by the Simons Institute for the Theory of Computing)

Intelligence and Learning in Brains and Machines (Heller Lectures Series in Computational Neuroscience, The Hebrew University of Jerusalem)

How Your Brain Is Getting Hacked: Facebook, Tinder, Slot Machines (Tristan Harris)

On Intelligence with Jeff Hawkins (Conversations with History, host Harry Kreisler)

Extended Intelligence: Smart Machines and AI (Miles Everson, PwC)