AI News

The Top Myths About Advanced AI

For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century.

Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions.

For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956.

Some skeptics go further and claim that superhuman AI is physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

Proponents of AI safety research argue that as long as we’re not 100% sure that superhuman AI won’t arrive this century, it’s smart to start safety research now to prepare for the eventuality.

In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.

MYTHS ABOUT THE RISKS OF SUPERHUMAN AI

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And as many have lost count of how many similar articles they’ve seen.

Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil.

Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target.

If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose.

It is easy to sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes.

To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.

Blog posts and talks
AI control
AI Impacts
No time like the present for AI safety work
AI Risk and Opportunity: A Strategic Analysis
Where We’re At – Progress of AI and Related Technologies: An introduction to the progress of research institutions developing new AI technologies.

AI safety
Wait But Why on Artificial Intelligence
Response to Wait But Why by Luke Muehlhauser
Slate Star Codex on why AI-risk research is not that controversial
Less Wrong: A toy model of the AI control problem

Books
Superintelligence: Paths, Dangers, Strategies
Our Final Invention: Artificial Intelligence and the End of the Human Era
Facing the Intelligence Explosion
E-book about the AI risk (including a “Terminator” scenario that’s more plausible than the movie version)

Organizations
Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.

Reducing Long-Term Catastrophic Risks from Artificial Intelligence

I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind.[3] Good called this process an “intelligence explosion,” while later authors have used the terms “technological singularity” or simply “the Singularity”.[10][21] The Machine Intelligence Research Institute aims to reduce the risk of a catastrophe, should such an event eventually occur.

The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.[13][14] Superintelligent AIs with real-world traction, such as access to pervasive data networks and autonomous robotics, could radically alter their environment, e.g., by harnessing all available solar, chemical, and nuclear energy.

Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal.[1][13] For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.[1] Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals.

An Intelligence Explosion May Be Sudden

The pace of an intelligence explosion depends on two conflicting pressures: each improvement in AI technology increases the ability of AIs to research more improvements, while the depletion of low-hanging fruit makes subsequent improvements more difficult.
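To make the two pressures concrete, here is a minimal numerical sketch (my illustration, not a model from the text): capability feeds back into research speed, while each successive improvement costs more effort. The parameter names (feedback, difficulty_growth) are hypothetical placeholders, and the qualitative outcome depends entirely on the assumed values, which is precisely why the pace of an explosion is hard to predict.

```python
# Toy sketch of the two conflicting pressures on an intelligence explosion.
def simulate(steps=50, capability=1.0, feedback=1.0, difficulty_growth=1.15):
    cost = 1.0                 # effort required for the next improvement
    history = [capability]
    for _ in range(steps):
        progress = feedback * capability / cost  # research output this period
        capability += progress                   # improvements raise capability...
        cost *= difficulty_growth                # ...but the next one is harder
        history.append(capability)
    return history

# Mild vs. steep depletion of low-hanging fruit give very different trajectories.
print(simulate()[-1], simulate(difficulty_growth=1.5)[-1])
```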

The predominant view in the AI field is that the bottleneck for powerful AI is software rather than hardware, and continued rapid hardware progress is expected in coming decades.[4] If and when the software is developed, there may thus be a glut of hardware available to run many copies of AIs, and to run them at high speeds, amplifying the effects of AI improvements.[8] We have little reason to expect that human minds are ideally optimized for intelligence, as opposed to simply being the first intelligences sophisticated enough to produce technological civilization. There is therefore likely to be further low-hanging fruit for an AI to pluck: after all, the AI would have been successfully created by a slower and smaller human research community.

Advances in AI and machine learning algorithms,[17] increasing R&D expenditures by the technology industry, hardware advances that make computation-hungry algorithms feasible,[4] enormous datasets,[5] and insights from neuroscience give today’s researchers advantages that past researchers lacked.

Human ingenuity is currently a bottleneck to progress on many key challenges affecting our collective welfare: eradicating diseases, averting long-term nuclear risks, and living richer, more meaningful lives.

SIAI’s (now MIRI’s) primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations that are reliably stable under self-improvement, what we call “Friendly AI”.[22] To very quickly summarize some of the key ideas in Friendly AI: we can’t make guarantees about the final outcome of an agent’s interaction with the environment, but we may be able to make guarantees about what the agent is trying to do, given its knowledge. We can’t determine that Deep Blue will win against Kasparov just by inspecting Deep Blue, but an inspection might reveal that Deep Blue searches the game tree for winning positions rather than losing ones.
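To make the Deep Blue analogy concrete, here is a minimal game-tree search sketch (an illustration of the general idea, not Deep Blue’s actual code; the evaluate, legal_moves, and apply_move callables and the toy game are hypothetical placeholders). Inspecting this code reveals what the agent is trying to do, namely steer toward positions it scores as winning, even though inspection alone cannot tell us whether it will win any particular game.

```python
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return the best achievable evaluation from `state` within `depth` plies."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation of the position
    if maximizing:
        # The agent steers toward the highest-valued ("winning") positions it can find.
        return max(minimax(apply_move(state, m), depth - 1, False,
                           evaluate, legal_moves, apply_move) for m in moves)
    # It assumes the opponent steers toward the positions it values least.
    return min(minimax(apply_move(state, m), depth - 1, True,
                       evaluate, legal_moves, apply_move) for m in moves)

def choose_move(state, depth, evaluate, legal_moves, apply_move):
    # Inspection reveals the objective: pick the move with the best minimax value.
    return max(legal_moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     evaluate, legal_moves, apply_move))

# Hypothetical toy usage: players alternately add 1 or 2 to a running total,
# and the maximizer simply prefers higher totals.
if __name__ == "__main__":
    best = choose_move(
        0, 4,
        evaluate=lambda s: s,
        legal_moves=lambda s: [1, 2] if s < 6 else [],
        apply_move=lambda s, m: s + m,
    )
    print("chosen move:", best)
```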

Since code executes on the almost perfectly deterministic environment of a computer chip, we may be able to make very strong guarantees about an agent’s motivations (including how that agent rewrites itself), even though we can’t logically prove the outcomes of environmental strategies.

Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.
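As a toy illustration of “learning values by asking questions” (my example, not MIRI’s proposal), the sketch below fits a simple Bradley–Terry utility model to pairwise preference answers a human gives, instead of hard-coding a fixed list of imperatives. The option names and answer data are hypothetical.

```python
# Fit per-option utilities from pairwise human preferences (Bradley–Terry model).
import math

def fit_utilities(options, answers, lr=0.1, epochs=200):
    """answers: list of (preferred, rejected) pairs elicited from a human."""
    u = {o: 0.0 for o in options}  # learned utility per option
    for _ in range(epochs):
        for better, worse in answers:
            # Probability the current model assigns to the observed preference.
            p = 1.0 / (1.0 + math.exp(-(u[better] - u[worse])))
            # Gradient ascent on the log-likelihood of the human's answers.
            u[better] += lr * (1.0 - p)
            u[worse]  -= lr * (1.0 - p)
    return u

# Hypothetical usage: the "questions" are pairwise comparisons.
options = ["cure_disease", "build_paperclips", "preserve_art"]
answers = [("cure_disease", "build_paperclips"), ("preserve_art", "build_paperclips")]
print(fit_utilities(options, answers))
```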

Possible bootstrapping algorithms include “do what we would have told you to do if we knew everything you knew,” “do what we would’ve told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” and “do what we would tell you to do if we had your ability to reflect on and modify ourselves.” In moral philosophy, this notion of moral progress is known as reflective equilibrium.[15]

Neuroscientists can investigate the possibility of emulating the brains of individual humans with known motivations, evolutionary theorists can investigate methods to prevent dangerous evolutionary dynamics, and social scientists can investigate social or legal frameworks to channel the impact of emulations in positive directions.[18]

Models of AI risks: Researchers can build models of AI risks and of AI growth trajectories, using tools from game theory, evolutionary analysis, computer security, or economics.[1][6][8][14][22] If such analysis is done rigorously, it can help to channel the efforts of scientists, graduate students, and funding agencies to the areas with the greatest potential benefits.

Knowledge of the biases that distort human thinking around catastrophic risks,[23] improved methods for probabilistic forecasting[16] and risk analysis,[11] and methods for identifying and aggregating expert opinions[7] can all improve our collective odds.
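As one small example of aggregating expert opinions (my illustration, not a method cited in the text), the sketch below pools independent probability estimates in log-odds space, which weights confident experts more heavily than a plain average would. The expert estimates shown are hypothetical.

```python
# Pool expert probability estimates for the same event in log-odds space.
import math

def pool_log_odds(probabilities, weights=None):
    if weights is None:
        weights = [1.0 / len(probabilities)] * len(probabilities)
    log_odds = sum(w * math.log(p / (1.0 - p))
                   for p, w in zip(probabilities, weights))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three hypothetical experts estimate the probability of some risk event.
print(pool_log_odds([0.10, 0.25, 0.05]))  # pooled estimate
```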

Going forward, we plan to continue our recent growth by scaling up our visiting fellows program, extending the Singularity Summits and similar academic networking, and writing further papers to seed the above research programs, in-house or with the best outside talent we can find.

Research into risks from artificial intelligence

Potential for huge impact, but also a large chance of your individual contribution making little or no difference.

Ratings
Career capital:
Direct impact:
Earnings:
Advocacy potential:
Ease of competition:
Job satisfaction:

Our reasoning for these ratings is explained below.

Key facts on fit
Strong technical abilities (at the level of a top 20 CS or math PhD program); strong interest in the topic is highly important; must be self-directed, as the field has little structure.

Machine Intelligence Research Institute — General Support (2017)

Our decision to renew and increase MIRI’s funding sooner than expected was largely the result of the following: We received a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of our close advisors, and (iii) is generally regarded as outstanding by the ML community.

While we would not generally offer a comparable grant to any lab on the basis of this consideration alone, we consider this a significant update in the context of the original case for the grant (especially MIRI’s thoughtfulness on this set of issues, value alignment with us, distinctive perspectives, and history of work in this area).

While the balance of our technical advisors’ opinions and arguments still leaves us skeptical of the value of MIRI’s research, the case for the statement “MIRI’s research has a nontrivial chance of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)” appears much more robust than it did before we received this review.

What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it...

What's worrying Elon Musk? Existential Risk and Artificial Intelligence

A talk from the meetup of the Generalist Engineer group on Existential Risk and Artificial Intelligence. Dr. Joshua Fox served as a Research Associate with the Machine Intelligence Research...

Jessica Taylor – Using Machine Learning to Address AI Risk – EAG 2016

Slides are available at: MIRI's full "Alignment for Advanced Machine Learning Systems" technical agenda is available at

The jobs we'll lose to machines -- and the ones we won't | Anthony Goldbloom

Machine learning isn't just for simple tasks like assessing credit risk and sorting mail anymore -- today, it's capable of far more complex applications, like grading essays and diagnosing...

Can we build AI without losing control over it? | Sam Harris

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris,...

Why AI will probably kill us all.

When you look into it, Artificial Intelligence is absolutely terrifying. Really hope we don't die.

AI BUILDING AI MANKIND LOSING MORE CONTROL OVER ARTIFICIAL INTELLIGENCE

The Event Is Coming Soon: AI building AI is the next phase humanity appears to be going through in its technological...

Bart Selman, "The Future of AI: Benefits and Risks"

The development of Artificial Intelligence (AI) technology is accelerating. A range of transformative innovations now appear likely within the next decade, including self-driving cars, real-time...

The DANGERS of Artificial Intelligence!

What are the DANGERS of Artificial Intelligence?! AI is still bad at learning but could it end humanity sometime in the future? Find out all about the dangers of robots running our lives in...

Why You Shouldn’t Fear Artificial Intelligence

Stephen Hawking and Elon Musk have warned us of the dangers of Artificial Intelligence, but is AI really going to be the downfall of humanity? Read More: The Code We Can't Control