AI News, Understanding Artificial General Intelligence — An Interview With Hiroshi Yamakawa
- On Monday, November 20, 2017
Artificial general intelligence (AGI) is something of a holy grail for many artificial intelligence researchers.
Today’s narrow AI systems are only capable of specific tasks — such as internet searches, driving a car, or playing a video game — but no single system can do all of these tasks.
Rather than merely solving a fixed set of problems from experience, AGI, we believe, will be closer to human intelligence: able to solve problems that were never anticipated in the design phase.
HY: The whole brain architecture is an engineering-based research approach “to create a human-like artificial general intelligence (AGI) by learning from the architecture of the entire brain.” Basically, this approach to building AGI is the integration of artificial neural networks and machine-learning modules while using the brain’s hard wiring as a reference.
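To make that idea concrete, here is a minimal sketch in Python of what "integrating machine-learning modules while using the brain's wiring as a reference" can look like structurally. The module names and the connection map below are illustrative assumptions, not the actual whole-brain-architecture design or a real connectome.

```python
# Independent modules, each loosely standing in for a brain region,
# wired together by a connectivity map inspired by brain anatomy.
class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, signal):
        self.inbox.append(signal)

    def step(self):
        # A real module would run a learned model; here we just tag
        # each signal with the module's name and pass it along.
        return [f"{self.name}({s})" for s in self.inbox]

# Brain-inspired wiring (invented for the example): visual cortex
# feeds temporal cortex, which feeds prefrontal cortex, and so on.
connections = {
    "visual_cortex": ["temporal_cortex"],
    "temporal_cortex": ["prefrontal_cortex", "hippocampus"],
    "hippocampus": ["prefrontal_cortex"],
    "prefrontal_cortex": ["motor_cortex"],
    "motor_cortex": [],
}

modules = {name: Module(name) for name in connections}
modules["visual_cortex"].receive("image")

# Propagate signals one sweep along the anatomical wiring.
for name, module in modules.items():
    for output in module.step():
        for target in connections[name]:
            modules[target].receive(output)

# Shows the signal's path through the wiring, ending at motor output.
print(modules["motor_cortex"].inbox)
```

The point of the sketch is the key property named in the interview: each part is a replaceable, separately usable component, while the overall wiring stays brain-like.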
Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other.
Because of this difficulty, one meaningful characteristic of whole brain architecture is that, although it is based on the brain's architecture, it is designed as a functional assembly of parts that can still be separated out and used individually.
It is now said that convolutional neural networks essentially outperform the brain's pathway from the visual cortex to the temporal lobe on image recognition tasks.
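For readers who have not seen one, here is a minimal convolutional network sketch in PyTorch (our choice of framework; the interview names none). The layer sizes are arbitrary assumptions for small 28x28 grayscale images; the point is only the convolve-pool-classify pattern behind modern image recognition.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level, edge-like features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # scores for 10 object classes
)

image = torch.randn(1, 1, 28, 28)  # a dummy one-image batch
logits = cnn(image)                # one score per class
print(logits.shape)                # torch.Size([1, 10])
```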
From this point on, we need to come closer to simulating the functions of the brain's other structures, and, even before the whole brain architecture is complete, we need to be able to assemble several structures together to reproduce some behavior-level functions.
Then, I believe, we'll have a path to expand that development process to cover the rest of the brain's functions, and finally to integrate them as a whole brain.
In order to make human-friendly artificial general intelligence a public good for all of mankind, we seek to continually expand open, collaborative efforts to develop AI based on an architecture modeled after the brain.
Of course, technological development advances not only offensive power but also defensive power; however, it is not easy for defensive power to keep pace with and contain offensive power.
If scientific and technological development is accelerated by artificial intelligence technology, for example, many countries could easily come to hold intercontinental ballistic missiles, and artificial intelligence combined with nanotechnology could become extremely dangerous to living organisms.
In that future, society will be an ecosystem formed by augmented human beings and various public AIs, in what I dub ‘an ecosystem of shared intelligent agents’ (EcSIA).
In implementing such control, the grace and wealth that EcSIA affords needs to be properly distributed to everyone.
Assuming no global catastrophe halts progress, what are the odds of human-level AGI in the next 10 years?
In my current role as the editorial chairman for the Japanese Society of Artificial Intelligence (JSAI) journal, I’m promoting a plan to have a series of discussions starting in the July edition on the theme of “Singularity and AI,” in which we’ll have AI specialists discuss the singularity from a technical viewpoint.
Additionally, one special characteristic of these guidelines is that the ninth principle listed, a call for ethical compliance of AI itself, states that AI in the future should also abide by the same ethical principles as AI researchers.
HY: If we look at things from the standpoint of a moral society, we are all human, and even without taking the viewpoint of any one country, we should start from the mentality that we have more characteristics in common than differences.
As a very personal view, I think that "surviving intelligence" is something that should be preserved into the future, because I feel it is very fortunate that we have managed to establish an intelligent society, having come through the stormy sea of evolution.
The Far Future—Coming Soon
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay.
It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything.
The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception.
If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.
If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.
19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.11 This works on smaller scales too.
The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.
Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century.
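The arithmetic behind that restatement is simple enough to check directly:

```python
# Back-of-envelope check of the claim as stated above: if the whole
# 20th century's progress could be repeated in 20 years at the
# year-2000 rate, that rate is 100/20 = 5x the century's average.
century_progress_years = 100   # progress achieved over 1900-2000
years_at_2000_rate = 20        # time to repeat it at the 2000 rate
print(century_progress_years / years_at_2000_rate)  # 5.0
```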
We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive.
Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth.
And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.
In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply.
Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world.
Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.
Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think: Where We Are Currently—A World Running on ANI Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.
Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences.
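As a toy illustration of how such a filter can start off "loaded with intelligence" and then learn from you, here is a bare-bones naive Bayes classifier, one classical approach; real spam filters are far more elaborate, and this is not how any particular product works. The training messages are made up.

```python
import math
from collections import Counter

spam = ["win cash now", "cheap pills now"]
ham = ["lunch tomorrow?", "project update attached"]

spam_words = Counter(w for msg in spam for w in msg.split())
ham_words = Counter(w for msg in ham for w in msg.split())

def spam_score(message):
    # Log-probability ratio with add-one smoothing; a positive score
    # means the message looks more like the spam examples seen so far.
    score = 0.0
    for w in message.split():
        p_spam = (spam_words[w] + 1) / (sum(spam_words.values()) + 2)
        p_ham = (ham_words[w] + 1) / (sum(ham_words.values()) + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("cheap cash now"))  # positive: looks spammy
print(spam_score("lunch update"))    # negative: looks legitimate

# "Tailoring its intelligence to you" is just updating the counts as
# you mark messages:
spam_words.update("free crypto airdrop".split())
```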
Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, which contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.
At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.'”7 What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution.
When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions.
Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.
On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us.
But if you pick up the black and reveal the whole image… …you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably.
Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock. And everything we just mentioned is still only taking in stagnant information and processing it.
One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
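The estimation method is just a sum, which a few lines make explicit. The per-structure numbers below are placeholders invented for the example, not measured values; only the method of adding per-structure maxima comes from the text.

```python
# Assign each brain structure an assumed maximum cps and sum them.
structure_cps = {
    "cerebellum": 4e15,        # placeholder figure
    "cerebral_cortex": 5e15,   # placeholder figure
    "everything_else": 1e15,   # placeholder figure
}
total_cps = sum(structure_cps.values())
print(f"{total_cps:.0e}")  # 1e+16, the order of magnitude often quoted
```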
Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially.
Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with the predicted trajectory.9 So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level.
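Taking the paragraph's own figures at face value, a rough projection follows directly: about ten doublings separate 10 trillion cps/$1,000 from the human-brain estimate, or roughly 20 years at a Moore's-law pace. A quick sketch, assuming those figures:

```python
import math

current_cps = 1e13   # "10 trillion cps/$1,000" today
human_cps = 1e16     # Kurzweil's human-brain estimate
doubling_years = 2   # Moore's-law-style doubling period

doublings = math.log2(human_cps / current_cps)  # ~10 doublings
print(round(doublings * doubling_years))         # ~20 years
```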
This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.
More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer.
The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.
All of This Could Happen Soon
Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:
1) Exponential growth is intense, and what seems like a snail’s pace of advancement can quickly race upwards.
2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier).
Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.
Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species.
The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10 AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level.
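A toy model makes the "no reason to stop at our level" point vivid: if each self-improvement cycle multiplies capability by even a modest factor, human level is crossed in one ordinary step rather than treated as a ceiling. The starting point and growth factor below are arbitrary assumptions.

```python
capability = 0.5   # start below human level (human = 1.0)
factor = 1.5       # multiplier per self-improvement cycle (arbitrary)

for cycle in range(1, 9):
    previous = capability
    capability *= factor
    note = "  <- human level crossed" if previous < 1.0 <= capability else ""
    print(f"cycle {cycle}: {capability:5.2f}{note}")
```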
The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans.
Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us: And what happens…after that?
The median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040.12 That’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us.
If I Only Had a Brain: How AI ‘Thinks’
AI can learn too, thanks to the development of artificial neural networks (ANNs), a type of machine-learning algorithm in which nodes simulate neurons that compute and distribute information.
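Stripped to its core, such a network is only a few lines: nodes compute weighted sums of their inputs, apply a nonlinearity, and pass the results forward. A minimal untrained sketch, with arbitrary layer sizes and random weights (learning would adjust those weights from data):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input layer (3 features) -> 4 hidden neurons
W2 = rng.normal(size=(2, 4))  # hidden layer -> 2 output neurons

def forward(x):
    hidden = np.tanh(W1 @ x)  # each hidden node: weighted sum + nonlinearity
    return np.tanh(W2 @ hidden)

print(forward(np.array([0.2, -0.5, 0.9])))  # two output activations
```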
AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute statistical probabilities and outcomes of various moves, but to adjust strategy based on what the other player does.
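As a loose illustration (not DeepMind's code) of combining move probabilities with an assessment of outcomes, here is a toy move chooser that blends a policy network's prior over moves with a value estimate of the positions they lead to; every number is invented for the example.

```python
import numpy as np

moves = ["A", "B", "C"]
policy_logits = np.array([2.0, 0.5, 1.0])       # "how promising does this move look?"
value_after_move = np.array([0.30, 0.90, 0.40])  # estimated win chance afterwards

policy = np.exp(policy_logits) / np.exp(policy_logits).sum()  # softmax prior
combined = 0.5 * policy + 0.5 * value_after_move               # simple blend
print(moves[int(np.argmax(combined))])  # "B": the prior likes A, the value prefers B
```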
For example, when we watch a football game on television, we take in the basic information about what’s happening in a given moment, but we also take in a lot more: who’s on the field (and who’s not), what plays are being run and why, individual match-ups, and how the game fits into existing data or history (does one team frequently beat the other?).
These algorithms can also recognize objects in context—such as a program that could identify the alphabet blocks on the living room floor, as well as the pile of kids’ books and the bouncy seat.
Iterations of the Turing Test, such as the Loebner Prize, still exist, though it’s become clear that just because a program can communicate like a human (complete with typos, an abundance of exclamation points, swear words, and slang) doesn’t mean it’s actually thinking.
A 1960s Rogerian computer-therapist program called ELIZA duped participants into believing they were chatting with an actual therapist, perhaps because it asked questions and, unlike some human conversation partners, appeared to be listening.
ELIZA harvests key words from a user’s response and turns them into questions, or simply says, “tell me more.” While some argue that ELIZA passed the Turing Test, it’s evident from talking with ELIZA and similar chatbots that language processing and thinking are two entirely different abilities.
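The trick is simple enough to sketch in a few lines. This is a bare-bones imitation of the behavior described above, with a tiny invented keyword list; the real ELIZA used a much larger script of patterns.

```python
KEYWORDS = ["mother", "father", "work", "dream"]

def eliza_reply(user_input):
    # Harvest a keyword and reflect it back as a question,
    # falling back to "Tell me more."
    words = user_input.lower().strip(".!?").split()
    for keyword in KEYWORDS:
        if keyword in words:
            return f"Why do you mention your {keyword}?"
    return "Tell me more."

print(eliza_reply("I argued with my mother today."))  # Why do you mention your mother?
print(eliza_reply("The weather is nice."))            # Tell me more.
```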
In the game, Watson received this clue: “Maurice LaMarche found his inner Orson Welles to voice this rodent whose simple goal was to take over the world.” Watson’s possible answers and probabilities were as follows: Pinky and the Brain (63 percent), Ed Wood (10 percent), and capybara (10 percent).
By uploading case histories, diagnostic information, treatment protocols, and other data, Watson can work alongside human doctors to help identify cancer and determine personalized treatment plans.
What Watson lacks here parallels the human skills of critical thinking and synthesis: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps.
Artificial Intelligence and Its Implications for Future Suffering
Contents
- Introduction
- Is 'the singularity' crazy?
- A case for epistemic modesty on AI timelines
- Intelligent robots in your backyard
- Is automation 'for free'?
- Caring about the AI's goals
- Rogue AI would not share our values
- Would a human-inspired AI or rogue AI cause more suffering?
- Another hypothetical AI takeoff scenario
- AI: More like the economy than like robots?
- Importance of whole-brain emulation
- Why work against brain-emulation risks appeals to suffering reducers
- Would emulation work accelerate neuromorphic AI?
- Attitudes toward AGI control
- Charities working on this issue
- Is MIRI's work too theoretical?
- On Monday, June 24, 2019
Can we build AI without losing control over it? | Sam Harris
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris,...
Vicarious' Scott Phoenix on AI & race to unlock human brain to create AGI, our last invention
True artificial intelligence is arguably the ultimate technology. To create machines that not only learn,...
Ethics of AI @ NYU: Artificial Intelligence & Human Values
Day 2 Session 1: Artificial Intelligence & Human Values
0:00 - David Chalmers, Opening Remarks
3:30 - Stuart Russell, "Provably Beneficial AI"
37:00 - Eliezer Yudkowsky, "Difficulties of AGI Alignment..."
Joscha Bach - Strong AI: Why we should be concerned
Title: Strong AI: Why we should be concerned about something nobody knows how to build. Synopsis: At the moment, nobody fully knows how to create an intelligent system that rivals or exceeds...
Ted Goertzel - Minimizing Risks in Developing Artificial General Intelligence
Winter Intelligence Oxford - FHI - Abstract: This paper posits that elements of human-level or superhuman artificial general intelligence will emerge gradually...
Will AI make us immortal? Or will it wipe us out? Elon Musk, Ray Kurzweil and Nick Bostrom.
Will AI bring immortality or extinction? Narrated by Stephen Fry, exploring predictions by Elon Musk, Stephen Hawking, Ray Kurzweil and Nick Bostrom. Learn more about artificial intelligence...
Towards Artificial General Intelligence | Oriol Vinyals | TEDxImperialCollege
What is artificial general intelligence and what are researchers doing to achieve this goal today? Oriol, from Google DeepMind, walks you through the answers to these questions with some interesting...
Artificial Intelligence and the Human Brain
Demis Hassabis: Towards General Artificial Intelligence
Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world's leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in their largest ever European...
Artificial Superintelligence - How close are we?
Superintelligence is getting closer each year, and recently there has been much speculation about how ASI will affect humanity. What is ASI and how far are we from creating one?