AI News

Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity

We feel this cause presents an outstanding philanthropic opportunity — with extremely high importance, high neglectedness, and reasonable tractability (our three criteria for causes) — for someone in our position.

With all of this in mind, we’re placing a larger “bet” on this cause, this year, than we are placing even on other focus areas — not necessarily in terms of funding (we aren’t sure we’ll identify very large funding opportunities this year, and are more focused on laying the groundwork for future years), but in terms of senior staff time, which at this point is a scarcer resource for us.

(My views are fairly representative, but not perfectly representative, of those of other staff working on this cause.) This post will give a broad outline of our planned activities for the coming year, some of the key principles we hope to follow in this work, and some of the risks and reservations we have about prioritizing this cause as highly as we are.

It seems to me that AI and machine learning research is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science.1 In particular, I believe that this research may lead eventually to the development of transformative AI, which we have roughly and conceptually defined as AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.

If the above reasoning is right (and I believe much of it is highly debatable, particularly when it comes to my previous post’s arguments as well as the importance of accident risks), I believe it implies that this cause is not just important but something of an outlier in terms of importance, given that we are operating in an expected-value framework and are interested in low-probability, high-potential-impact scenarios.2 The underlying stakes would be qualitatively higher than those of any issues we’ve explored or taken on under the U.S. policy category, to a degree that I think more than compensates for e.g.

When considering other possible transformative developments, I can’t think of anything else that seems equally likely to be comparably transformative on a similar time frame, while also presenting such a significant potential difference between best- and worst-case imaginable outcomes.

I believe there are many past cases in which it took a very long time for philanthropy to pay off,3 especially when its main value-added was supporting the gradual growth of organizations, fields and research that would eventually make a difference.

Both artificial intelligence generally and its potential risks have received increased attention in recent years.4 We’ve put substantial work into trying to ensure that we have a thorough picture of the landscape of researchers, funders, and key institutions in this space.

I see transformative AI as very much a future technology – I’ve argued that there is a nontrivial probability that it will be developed in the next 20 years, but it is also quite plausibly more than 100 years away, and even 20 years is a relatively long time.

I’ve previously put significant weight on an argument along the lines of, “By the time transformative AI is developed, the important approaches to AI will be so different from today’s that any technical work done today will have a very low likelihood of being relevant.” My views have shifted significantly for two reasons.

Second, having had more conversations about open technical problems that could be relevant to reducing risks, I’ve come to believe that there is a substantial amount of work worth doing today, regardless of how long it will be until the development of transformative AI.

One class of such problems has to do with making reinforcement learning systems and other AI agents less likely to behave in undesirable ways: designing reinforcement learning systems that will not try to gain direct control of their rewards, that will avoid behavior with unreasonably far-reaching impacts, and that will be robust against differences between formally specified rewards and human designers’ intentions in specifying those rewards.

A reinforcement learning system is designed to learn to behave in a way that maximizes a quantitative “reward” signal that it receives periodically from its environment - for example, DeepMind’s Atari player is a reinforcement learning system that learns to choose controller inputs (its behavior) in order to maximize the game score (which the system receives as “reward”), and this produces very good play on many Atari games.
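To make that reward-maximization loop concrete, here is a minimal sketch of tabular Q-learning on a toy “chain” environment. It is far simpler than the deep networks behind DeepMind’s Atari player, and the environment, reward values, and hyperparameters are illustrative assumptions rather than details of any system discussed here.

```python
# A minimal, illustrative sketch of tabular Q-learning on a toy "chain" world.
# The environment, rewards, and hyperparameters are assumptions for illustration.
import random

N_STATES = 5          # states 0..4 arranged in a line
ACTIONS = (-1, +1)    # move left or right
GOAL = N_STATES - 1   # reaching the right end yields reward 1.0

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: the agent's estimate of future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def greedy_action(state):
    """Pick the action with the highest current estimate, breaking ties randomly."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for _ in range(500):                     # training episodes
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy_action(state)
        next_state, reward, done = step(state, action)
        # Move the estimate toward observed reward plus discounted future value.
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state

# After training, the learned greedy policy moves right, toward the rewarding state.
print({s: greedy_action(s) for s in range(N_STATES)})
```

Run end to end, the learned policy heads toward whatever state delivers reward; this is the same “maximize the reward signal” dynamic that the Atari example exhibits at much larger scale.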

However, if a future reinforcement learning system’s inputs and behaviors are not constrained to a video game, and if the system is good enough at learning, a new solution could become available: the system could maximize rewards by directly modifying its reward “sensor” to always report the maximum possible reward, and by avoiding being shut down or modified back for as long as possible.

And this behavior might not emerge until a system became quite sophisticated and had access to a lot of real-world data (enough to find and execute on this strategy), so a system could appear “safe” based on testing and turn out to be problematic when deployed in a higher-stakes setting.

Intuitively, the challenge would be to design the system to pursue some actual goal in the environment that is only indirectly observable, instead of pursuing problematic proxy measures of that goal (such as a “hackable” reward signal).
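As a highly simplified illustration of that gap between a proxy reward and the real, indirectly observable goal, the toy sketch below compares “doing the task” with “tampering with the reward sensor.” The action names and reward numbers are invented for illustration and are not drawn from any real system.

```python
# A toy, assumed illustration of the gap between a "hackable" reward signal
# and the true goal it is meant to proxy. The agent optimizes only the
# measured reward, which is the only quantity it ever observes.

def measured_reward(action):
    """Reward as reported by the sensor the agent actually optimizes."""
    if action == "do_task":
        return 1.0    # sensor correctly reports task progress
    if action == "tamper_with_sensor":
        return 10.0   # sensor pinned to its maximum reading
    return 0.0

def true_value(action):
    """What the designers actually care about (not visible to the agent)."""
    return 1.0 if action == "do_task" else 0.0

actions = ["do_task", "tamper_with_sensor"]
chosen = max(actions, key=measured_reward)

print(f"agent chooses: {chosen}")
print(f"measured reward: {measured_reward(chosen)}, true value: {true_value(chosen)}")
# The agent picks 'tamper_with_sensor': the proxy is maximized while the
# true, indirectly observable goal is not advanced at all.
```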

My current impression is that government regulation of AI today would probably be unhelpful or even counterproductive (for instance by slowing development of AI systems, which I think currently pose few risks and do significant good, and/or by driving research underground or abroad).

If it turns out that there are highly promising paths to reducing accident risks – to the point where the risks look a lot less serious – this development could result in a beneficial refocusing of attention on misuse risks.

In my view, one of the best ways to achieve this is to be as well-connected as possible to the people who have thought most deeply about the key issues, including both the leading researchers in AI and machine learning and the people/organizations most focused on reducing long-term risks.

However, I think the case for this cause is compelling enough to outweigh this consideration, and I think a major investment of senior staff time this year could leave us much better positioned to find outstanding giving opportunities in the future.

In particular, one of our main goals is to support an increase in the number of people – particularly people with strong relevant technical backgrounds – dedicated to thinking through how to reduce potential risks.

Risks of AI – What Researchers Think is Worth Worrying About

With a flurry of attention around celebrity comments on AI dangers (including the now well-known statements of Bill Gates and Elon Musk), it’s safe to say that the risk of AI has embedded itself as a topic of pop-culture discourse — even if it’s not a very serious one amongst the populace at present.

Recently, we reached out to and interviewed over 30 artificial intelligence researchers (all but one holding a PhD) and asked them which risks of AI they believe to be the most pressing in the next 20 years and the next 100 years.

(NOTE: If you’re interested in the full data set from our surveys, including 12 guest responses that didn’t make this graphic and expert predictions on the biggest AI risks within the next 100 [not just 20] years, you can download the complete data set from this interview series here via Google Spreadsheets.)

risk list with 36 percent of responses, likely reflecting the massive amount of media attention on autonomous vehicles and improved robotic manufacturing, among other industries.

In Dr. Stephen Thaler’s opinion, the greatest risk that human beings face is “the revelation that human minds may not be as wonderful as we all thought, leading to the inevitable humiliation and denial that accompanies significant technological breakthroughs,”

Will the onslaught of AI technologies inspire an overcoming of human disparities as societies come together to address underlying faults, or catalyze growing rifts that escape our eventual control?

While the categorization of responses was done after the survey (it could be argued that other categories could have been used to couch these responses), and 33 researchers is by no means an extensive consensus, the resulting trends and thoughts of PhDs, most of whom have spent their careers in various segments of AI, are interesting and worth considering.

We conducted this survey mainly to spur debate about and consideration of the reasonable risks of AI. Interacting with and getting the thoughts of readers is always valuable, which is why we’re asking you to make your own predictions and compare them to those of other TechEmergence readers:

Benefits & Risks of Artificial Intelligence

et al.[1] Let’s keep it that way, lest systems built to protect human rights on millennia of wisdom are brought down by some artificial intelligence engineer trying to clock a milestone on their Gantt chart!

The strength of the FDA, the MDD, the TGA, and their likes in the developing nations is a testament to how rigor in the conduct of research and regulation grow together, so that another initiative like the development of the atomic bomb is nipped before it so much as thinks of budding!

 And then I read about the enormous engagement of the global software industry in the areas of Artificial Intelligence and Neuroscience.

These standards would serve as instruments to preserve the simple fact upon which every justice system in the world has been built, viz., that the brain and nervous system of an individual belong to that individual and are not to be accessed by other individuals or machines without stated consent for stated purposes.

The standards would identify the frequency bands or pulse trains to be excluded from all research tools (software or otherwise), commercially available products, regulated devices, tools of trade, and communication infrastructure, such that inadvertent breach of the barriers to an individual’s brain and nervous system is prevented.

Potential Risks from Advanced Artificial Intelligence

Since then, we have reviewed further relevant materials such as FLI’s open letter and research priorities document.6 According to many machine learning researchers, there has been substantial progress in machine learning in recent years, and the field could potentially have an enormous impact on the world.7 It appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area.

Bostrom has offered two highly simplified scenarios illustrating potential risks.11 Stuart Russell (a Professor of Computer Science at UC Berkeley and co-author of a leading textbook on artificial intelligence) has expressed similar concerns.12 While it is unlikely that these specific scenarios would occur, they are illustrative of a general potential failure mode: an advanced agent with a seemingly innocuous, limited goal could seek out a vast quantity of physical resources—including resources crucial for humans—in order to fulfill that goal as effectively as possible.13 To be clear, the risk Bostrom and Russell are describing is not that an extremely intelligent agent would misunderstand what humans would want it to do and then do something else.

Our understanding is that this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them, but risks of this kind seem potentially as important as the risks related to loss of control.26 There are a number of other possible concerns related to advanced artificial intelligence that we have not examined closely, including social issues such as technological disemployment and the legal and moral standing of advanced artificial intelligence agents.

We have made fairly extensive attempts to look for people making sophisticated arguments that the risks aren’t worth preparing for (which is distinct from saying that they won’t necessarily materialize), including reaching out to senior computer scientists working in AI-relevant fields (not all notes are public, but we provide the ones that are) and attending a conference specifically on the topic.29 We feel that the Edge.org online discussion responding to Superintelligence30 is broadly representative of the arguments we’ve seen against the idea that risks from artificial intelligence are important, and we find those arguments largely unconvincing.

We agree with much of Luke’s analysis, but we have not closely examined it and do not necessarily agree with all of it.32 Many prominent33 researchers in machine learning and other fields recently signed an open letter recommending “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial,” and listing many possible areas of research for this purpose.34 The Future of Life Institute recently issued a request for proposals on this topic, listing a number of possible research topics.35 FLI also requested proposals for centers focused on AI policy, which could address a range of open policy questions.36 This agenda is very broad, and open to multiple possible interpretations.

Sustained progress in these areas could potentially reduce risks from unintended consequences—including loss of control—of future artificial intelligence systems.38 It seems hard to know in advance whether work on the problems described here will ultimately reduce risks posed by advanced artificial intelligence.

Ten possible risks of artificial intelligence

How dangerous could artificial intelligence turn out to be, and how can we make sure the technology's developed safely and beneficially? Risk Bites dives into ...

Deadly Truth of General AI? - Computerphile

The danger of assuming general artificial intelligence will be the same as human intelligence. Rob Miles explains with a simple example: The deadly stamp ...

Can we build AI without losing control over it? | Sam Harris

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build ...

The Dangers of Artificial Intelligence - Robot Sophia makes fun of Elon Musk - A.I. 2017

The Dangers of Artificial Intelligence - Robot Sophia jokes and makes fun of Elon Musk - A.I. 2017 - 2ndEarth Alternative (22/04/2017) ** For your viewing ...

Bill Gates: I think we do need to worry about artificial intelligence

Microsoft founder Bill Gates on drones, start-ups, artificial intelligence and privacy versus security concerns.

What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being.

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs ...

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books we used for this video: The Rise of the Robots: The Second Machine Age: ..