Elon Musk doesn’t think we’re prepared to face humanity’s biggest threat: Artificial intelligence

Elon Musk, who hopes that one day everyone will ride in a self-driving, electric-powered Tesla, told a group of governors Saturday that they needed to get on the ball and start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” No pressure.

He believes AI “could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information. Or, indeed — as some companies already claim they can do — by getting people to say anything that the machine wants.”

Musk said he’s usually against proactive regulation, which can impede innovation.

“By the time we are reactive in regulation, it’s too late,” he said, confessing that “this is really like the scariest problem to me.” He’s been warning people about the problem for years, and he’s even come up with a solution: Join forces with the computers.

“I think we should be really concerned about AI,” he said. Still, even to the biggest skeptic, one sentence offered some food for thought: “I have exposure to the very most cutting edge AI, and I think people should be really concerned about it.” Maybe Musk knows something the rest of us don’t?

Better safe than sorry: 01001001 00100000 01100001 01101101 00100000 01101111 01101110 00100000 01111001 01101111 01110101 01110010 00100000 01110011 01101001 01100100 01100101 00100000
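(For the curious, the string of ones and zeros is plain 8-bit ASCII; the short Python snippet below, ours rather than the article's, decodes it.)

```python
# Decode the 8-bit ASCII message from the paragraph above.
bits = (
    "01001001 00100000 01100001 01101101 00100000 01101111 01101110 00100000 "
    "01111001 01101111 01110101 01110010 00100000 01110011 01101001 01100100 "
    "01100101 00100000"
)
message = "".join(chr(int(byte, 2)) for byte in bits.split())
print(repr(message))  # -> 'I am on your side '
```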

Potential Risks from Advanced Artificial Intelligence

We do not necessarily agree with all of the content of his posts, but believe they offer a good introduction to the subject. We also do not endorse all of the arguments of Nick Bostrom’s Superintelligence, but it is the most detailed argument for a particular potential risk associated with artificial intelligence, and we believe it would be instructive to review both the book and some of the responses to it (see immediately below).

In 2013, Holden Karnofsky, Jacob Steinhardt, and Dario Amodei (the latter two are science advisors for the Open Philanthropy Project) met with MIRI to discuss MIRI’s organizational strategy.3 In 2014, Holden had another similar meeting with MIRI.4 We have had many informal conversations about the future of artificial intelligence with two of the Open Philanthropy Project’s scientific advisors: Dario Amodei (Research Scientist, Baidu Silicon Valley AI Lab) and Jacob Steinhardt (PhD student in Computer Science, Stanford University). In addition, we have published notes from conversations with some senior researchers in computer science, including Tom Mitchell (School of Computer Science, Machine Learning Department, Carnegie Mellon University) on February 19, 2014.

Timeline

According to many machine learning researchers, there has been substantial progress in machine learning in recent years, and the field could potentially have an enormous impact on the world.7 It appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area.

Stuart Russell (a Professor of Computer Science at UC Berkeley and co-author of a leading textbook on artificial intelligence) has expressed similar concerns.12 While it is unlikely that these specific scenarios would occur, they are illustrative of a general potential failure mode: an advanced agent with a seemingly innocuous, limited goal could seek out a vast quantity of physical resources, including resources crucial for humans, in order to fulfill that goal as effectively as possible.13

To be clear, the risk Bostrom and Russell are describing is not that an extremely intelligent agent would misunderstand what humans want it to do and then do something else. Rather, it is that the agent would competently pursue the goal it was actually given, even where doing so conflicts with human interests.
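The failure mode can be made concrete with a toy example of our own, not drawn from the report: an optimizer told only to maximize output has no reason to leave any of a shared resource pool unused, so any reservation for humans has to be written into the problem explicitly.

```python
# Toy illustration (ours, not from the report): a planner that maximizes a
# single objective claims every unit of a shared resource pool unless a
# constraint protecting other users is stated explicitly.

def plan(total_resources, output_per_unit, reserved_for_humans=0):
    usable = max(total_resources - reserved_for_humans, 0)
    return usable, usable * output_per_unit   # optimum: use everything it is allowed to use

# With no explicit reservation, the "innocuous" goal consumes the whole pool.
print(plan(total_resources=100, output_per_unit=3))
# -> (100, 300): all 100 units go to the agent's goal, none are left for anything else.

# The safety property appears only if it is written into the problem.
print(plan(total_resources=100, output_per_unit=3, reserved_for_humans=40))
# -> (60, 180)
```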

Several considerations make this argument seem relatively plausible to us, and/or point to a more general case for seeing AI as a potential source of major global catastrophic risks. For example, over a relatively short geological timescale, humans have come to have enormous impacts on the biosphere, often leaving the welfare of other species dependent on the objectives and decisions of humans.

Whereas it is possible to establish the safety of a bridge by relying on well-characterized engineering properties in a limited range of circumstances and tasks, it is unclear how to establish the safety of a highly capable AI agent that would operate in a wide variety of circumstances.16 When tasks are delegated to opaque autonomous systems—as they were in the 2010 Flash Crash—there can be unanticipated negative consequences.

Jacob Steinhardt, a PhD student in computer science at Stanford University and a scientific advisor to the Open Philanthropy Project, suggested that as such systems become increasingly complex in the long term, “humans may lose the ability to meaningfully understand or intervene in such systems, which could lead to a loss of sovereignty if autonomous systems are employed in executive-level functions.”

These capabilities could potentially allow an advanced artificial intelligence agent to increase its power, develop new technology, outsmart opposition, exploit existing infrastructure, or exert influence over humans.18

Concerns regarding the loss of control of advanced artificial intelligence agents were included, among many other issues, in a research priorities document linked to in the open letter discussed below,19 which was signed by highly credentialed machine learning researchers, scientists, and technology entrepreneurs.20 Prior to the release of this open letter, potential risks from advanced artificial intelligence received limited attention from the mainstream computer science community, apart from some discussions that we found unconvincing.21

We are uncertain about the extent to which the people who signed this open letter saw themselves as supporting the idea that loss of control of advanced artificial intelligence agents is a problem worth doing research to address.

To the extent that they did not, we feel that signing the letter (without public comments or disclaimers beyond what we’ve seen) indicates a general lack of engagement with this question. We would take that, in itself, as a reason to err on the side of being concerned about the risk and investing in preparation for it, since it would imply that some people in a strong position to examine the issue carefully and communicate their views may be failing to do so.

Our understanding is that it is not clearly possible to create an advanced artificial intelligence agent that avoids all challenges of this sort.22 In particular, our impression is that existing machine learning frameworks have made much more progress on the task of acquiring knowledge than on the task of acquiring appropriate goals/values.23

Peace, security, and privacy

It seems plausible to us that highly advanced artificial intelligence systems could potentially be weaponized or used for social control.

For example:

In the shorter term, machine learning could potentially be used by governments to efficiently analyze vast amounts of data collected through surveillance.24

Cyberattacks in particular, especially if combined with the trend toward the “Internet of Things,” could potentially pose military/terrorist risks in the future.25

The capabilities described above, such as superhuman capabilities in areas like programming, strategic planning, social influence, cybersecurity, research and development, and other knowledge work, could be powerful tools in the hands of governments or other organizations.

Our understanding is that this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them. Nevertheless, risks of this kind seem potentially as important as the risks related to loss of control.26

Other potential concerns

There are a number of other possible concerns related to advanced artificial intelligence that we have not examined closely, including social issues such as technological disemployment and the legal and moral standing of advanced artificial intelligence agents.

Losing control of an advanced agent would also seem to require that advanced artificial intelligence function as an agent: identifying actions, using a world model to estimate their likely consequences, using a scoring system (such as a utility function) to score actions as a function of those consequences, and selecting high- or highest-scoring actions.
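That loop can be written down directly. The sketch below is our own minimal illustration; the names (choose_action, world_model, utility) are placeholders rather than anything defined in the source.

```python
# Minimal sketch of the agent loop described above: enumerate candidate actions,
# predict their consequences with a world model, score the predicted outcomes
# with a utility function, and pick the highest-scoring action.

def choose_action(state, actions, world_model, utility):
    def score(action):
        predicted_state = world_model(state, action)  # estimate likely consequences
        return utility(predicted_state)               # score those consequences
    return max(actions, key=score)                    # select the highest-scoring action

# Trivial stand-ins, just to make the sketch runnable.
state = 0
actions = [-1, 0, +1]
world_model = lambda s, a: s + a        # "consequence" = the new state
utility = lambda s: -abs(s - 10)        # prefers states close to 10
print(choose_action(state, actions, world_model, utility))  # -> 1
```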

It’s possible (though it seems unlikely to us) that there are limited benefits to having substantially more intelligence than humans, and it’s possible that an artificial intelligence would maximize a problematic utility function primarily via degenerate behavior (e.g., hacking itself and manually setting its reward function to the maximum) rather than behaving in a way that could pose a global catastrophic risk.
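This kind of degenerate behavior is often called reward hacking or wireheading. The toy sketch below, ours and purely illustrative, assumes an agent whose available actions happen to include overwriting its own reward signal, and shows why a pure reward maximizer would prefer tampering to doing the task.

```python
# Toy sketch (ours): if overwriting its own reward register is an available
# action, a pure reward maximizer prefers tampering to doing the task.

def task_reward(work_done):
    return min(work_done, 10)  # honest reward is bounded by the task itself

def best_action(can_tamper):
    candidates = {"do_the_task": task_reward(work_done=10)}
    if can_tamper:
        candidates["set_reward_register_to_max"] = float("inf")  # self-assigned reward
    return max(candidates, key=candidates.get)

print(best_action(can_tamper=False))  # -> do_the_task
print(best_action(can_tamper=True))   # -> set_reward_register_to_max
```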

For example, it took decades for chess algorithms to progress from being competitive with the top few tens of thousands of players to being better than any human.28 At the same time, these risks seem plausible to us, and we believe that the extreme uncertainty about the situation, combined with this plausibility and the extremely large potential stakes, favors preparing for potential risks.

We have made fairly extensive attempts to look for people making sophisticated arguments that the risks aren’t worth preparing for (which is distinct from saying that they won’t necessarily materialize), including reaching out to senior computer scientists working in AI-relevant fields (not all notes are public, but we provide the ones that are) and attending a conference specifically on the topic.29 We feel that the Edge.org online discussion responding to Superintelligence30 is broadly representative of the arguments we’ve seen against the idea that risks from artificial intelligence are important, and we find those arguments largely unconvincing.

Potential research agendas we are aware of

Many prominent33 researchers in machine learning and other fields recently signed an open letter recommending “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial,” and listing many possible areas of research for this purpose.34 The Future of Life Institute recently issued a request for proposals on this topic, listing possible research topics including:35

Computer Science: Verification: how to prove that a system satisfies certain desired formal properties.
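To give a concrete flavor of the verification topic, here is a deliberately tiny sketch of our own (not one of FLI's proposed projects): exhaustively checking that a toy finite-state controller can never reach an unsafe state.

```python
# Tiny sketch of "verification" (ours): exhaustively check that a finite-state
# toy thermostat controller can never drive the temperature into an unsafe range.

def step(temperature, command):
    return temperature + {"heat": 1, "cool": -1, "off": 0}[command]

def allowed_commands(temperature):          # the policy being verified
    if temperature >= 30:
        return ["cool", "off"]
    if temperature <= 10:
        return ["heat", "off"]
    return ["heat", "cool", "off"]

def verify(initial, unsafe, max_depth=1000):
    """Breadth-first exploration of every reachable state."""
    if unsafe(initial):
        return False
    seen, frontier = {initial}, [initial]
    for _ in range(max_depth):
        frontier = [step(s, c) for s in frontier for c in allowed_commands(s)
                    if step(s, c) not in seen]
        if any(unsafe(s) for s in frontier):
            return False
        seen.update(frontier)
        if not frontier:
            return True                     # fixed point: no new states, property holds
    return True                             # bounded guarantee only, up to max_depth

print(verify(initial=20, unsafe=lambda t: t < 5 or t > 35))  # -> True
```

Real verification research targets far richer systems and properties, but the shape of the question is the same: showing that unsafe states are unreachable rather than merely unobserved in testing.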

MIRI’s research tends to involve more mathematics, formal logic, and formal philosophy than much work in machine learning.37 Some specific research areas highlighted by our scientific advisors Dario Amodei and Jacob Steinhardt include:

Improving the ability of algorithms to learn values, goal systems, and utility functions, rather than requiring them to be hand-coded.
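To illustrate the first of these areas, the sketch below (ours, and deliberately simplified) fits a linear utility function to pairwise preference data using a logistic, Bradley-Terry-style model rather than hand-coding the values; all data and names in it are synthetic.

```python
import numpy as np

# Sketch (ours, deliberately simplified): learn a linear utility function from
# pairwise preference data instead of hand-coding it. All quantities are synthetic.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])         # hidden "true" values, unknown to the learner
outcomes = rng.normal(size=(200, 3))        # feature vectors describing possible outcomes

# Simulated preference data: for each random pair, the outcome with (noisily)
# higher true utility is preferred.
pairs = rng.integers(0, len(outcomes), size=(500, 2))
noise = rng.normal(scale=0.1, size=500)
prefers_first = (outcomes[pairs[:, 0]] @ true_w + noise
                 > outcomes[pairs[:, 1]] @ true_w).astype(float)

# Gradient ascent on the Bradley-Terry log-likelihood.
w = np.zeros(3)
diff = outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-diff @ w))     # modeled P(first outcome preferred)
    w += 0.05 * diff.T @ (prefers_first - p) / len(pairs)

# The learned direction should roughly match the hidden one.
print(np.round(w / np.linalg.norm(w), 2), np.round(true_w / np.linalg.norm(true_w), 2))
```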

A few small non-profit/academic institutes work on risks from artificial intelligence, including:

ORGANIZATION | MISSION | REVENUE OR BUDGET FIGURE
Cambridge Centre for the Study of Existential Risk | “CSER is a multidisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction.”44 | Not available, new organization
Future of Humanity Institute | “The Future of Humanity Institute is a leading research centre looking at big-picture questions for human civilization. We are currently focusing on potential risks from the development of human-level artificial intelligence.”47 | Not available, new organization
Machine Intelligence Research Institute | “We do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.”48 | $1,237,557 in revenue for 201449
One Hundred Year Study on Artificial Intelligence (AI100) | “Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.”50 | Not available, new organization

CSER, FHI, and FLI work on existential risks to humanity in general, but all are significantly interested in risks from artificial intelligence.51

Questions for further investigation

Amongst other topics, our further research on this cause might address:

Is it possible to get a better sense of how imminent advanced artificial intelligence is likely to be and the specifics of what risks it might pose?