AI News: Existential Risk and Artificial Intelligence

The way insurance could help mitigate AI risks

There is a growing consensus that artificial intelligence (AI) will fundamentally transform our economy and society.1 A wide range of commercial applications is already in use across many industries.

Among these are anomaly detection (e.g., for fraud mitigation), image recognition (e.g., for public safety), speech recognition and natural language generation (e.g., for virtual assistants), recommendation engines (e.g., for robo-advice), and automated decision-making systems (e.g., for workflow applications).
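To make the first of these concrete, here is a minimal sketch of the sort of statistical anomaly detector that underlies many fraud-mitigation systems; the transaction amounts and the flagging threshold are invented for illustration, not taken from any real system.

```python
# Minimal statistical anomaly detector for fraud mitigation; the transaction
# amounts and the z-score threshold are invented for illustration.
import statistics

amounts = [12.5, 9.9, 11.2, 10.4, 980.0, 10.8, 9.5]  # hypothetical transactions

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Flag transactions more than two standard deviations from the mean.
for amount in amounts:
    z = (amount - mean) / stdev
    if abs(z) > 2:
        print(f"flag for review: {amount} (z = {z:.2f})")
```

Production systems use far richer features and models, but the pattern is the same: score each event against a learned notion of "normal" and route outliers to review.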

In contrast to these narrow applications, general AI would be applicable to broader problem areas, with the capacity to assess its surroundings and to respond to situations, including emotionally, in the way humans do.

Super AI systems, which would have the potential to outperform humans across a wide range of disciplines, have yet to be developed and are very likely still decades away.

The insurance industry plays an important role in modern economies and societies, especially when it comes to the detection and evaluation of risks.3 Insurance companies put a price tag on risks and help protect people from possible harms.

As has been the case with other emerging problems such as data breaches and cyber-intrusions, insurance helps people and businesses cope with harmful developments and protects them from the associated financial costs.

We are still at the very beginning of understanding what the potential AI risks are, so there are neither empirical data nor theoretical models that estimate the potential loss frequency and magnitude of AI risks.
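What such a model would have to produce is illustrated by the standard actuarial frequency-severity calculation sketched below; every figure in it is an invented placeholder, since, as just noted, no empirical estimates of AI loss frequency or magnitude exist.

```python
# Frequency-severity sketch of how an insurer prices a risk once loss data
# exist; every figure here is an invented placeholder, not an estimate.
expected_claims_per_year = 0.02     # assumed loss frequency per policy
expected_loss_per_claim = 250_000   # assumed loss magnitude (severity), in dollars
expense_and_risk_loading = 1.4      # assumed multiplier for expenses and risk margin

pure_premium = expected_claims_per_year * expected_loss_per_claim
gross_premium = pure_premium * expense_and_risk_loading
print(f"pure premium: ${pure_premium:,.0f}; gross premium: ${gross_premium:,.0f}")
```

Without credible values for the first two inputs, no defensible premium can be set, which is precisely the insurability problem the article describes.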

AI risks might spread across the globe within a short period of time, so the potential to diversify risks geographically (which is fundamental for the insurance industry) is in doubt.

AI knows no geographical boundaries, and the systemic impact of particular losses on the global economy must be much better understood to improve the insurability of the AI risks noted above.

For example, with respect to cyber-risks, the insurance industry is actively working on developing standardized terminologies and policies for data breaches, hacking, and identity theft.

Despite the many policies now on offer, the cyber-insurance market is extremely small, with low insured sums and narrow coverage restrictions that do not really address AI risks.

The industry has a strong interest in the early detection of potential risks arising from new technologies, and, in the case of AI, the potential risks are extremely diverse.

21 Recent Publications on Existential Risk (September 2019 update)

Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk.
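TERRA's model itself is not described here, but the general technique can be sketched, assuming a simple TF-IDF plus logistic-regression relevance classifier with invented training snippets; the real system is presumably trained on many expert-labeled abstracts.

```python
# Generic sketch of a publication-relevance classifier of the kind TERRA
# could use; the actual TERRA model is not described here, and the training
# snippets below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "upper bound on the background rate of human extinction",     # relevant
    "global catastrophic risk from engineered pandemics",         # relevant
    "a new dataset of annotated recipes for cooking assistants",  # not relevant
    "improving battery life in consumer smartphones",             # not relevant
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

# Rank unseen publications by predicted relevance to existential risk.
new = ["cascading failures in interdependent global infrastructure networks"]
print(model.predict_proba(new)[:, 1])  # estimated probability of relevance
```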

An upper bound for the background rate of human extinction Using only the information that Homo sapiens has existed for at least 200,000 years, this paper derives upper bounds on the background rate of human extinction from natural causes. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions.
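A minimal sketch of the survival-track-record reasoning behind such bounds, assuming a constant annual extinction probability and a roughly 200,000-year track record (the paper's full method may differ):

```python
# Sketch: if the annual extinction probability is mu, surviving T years has
# likelihood (1 - mu)**T, so rates that make this likelihood implausibly
# small can be ruled out. T and the cutoffs below are assumptions.
T = 200_000  # approximate years Homo sapiens has existed

for alpha in (0.1, 1e-6):  # cutoffs for "implausibly small" survival likelihood
    mu_max = 1 - alpha ** (1 / T)  # largest rate with survival likelihood >= alpha
    print(f"cutoff {alpha:g}: annual extinction rate < {mu_max:.2e} "
          f"(about 1 in {1 / mu_max:,.0f})")
```

Under these assumptions, the annual rate must be below roughly one in 87,000 at the looser cutoff and one in 14,000 at the stricter one; the lesson is that a long survival record alone sharply constrains the natural background rate.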

Existential risks: a philosophical analysis This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use.

More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential risk studies as a legitimate field of scientific and philosophical inquiry.

The world destruction argument The most common argument against negative utilitarianism is the world destruction argument, according to which negative utilitarianism implies that if someone could kill everyone or destroy the world, it would be her duty to do so.

The world destruction argument is not a reason to reject negative utilitarianism in favour of other forms of consequentialism, such as traditional utilitarianism, because there are similar arguments against such theories that are at least as persuasive as the world destruction argument is against negative utilitarianism.

We possess technologies that have been inducing changes in the climate of our planet in ways that threaten to, at the very least, displace large portions of the human race, as well as weapons capable of eliminating millions and rendering large swaths of the Earth uninhabitable.

Ethical Challenges in Human Space Missions: A Space Refuge, Scientific Value, and Human Gene Editing for Space This article examines some selected ethical issues in human space missions including human missions to Mars, particularly the idea of a space refuge, the scientific value of space exploration, and the possibility of human gene editing for deep-space travel.

We conclude that while these issues are complex and context-dependent, there appear to be no overwhelming obstacles (such as cost-effectiveness, threats to human life, or the protection of pristine space objects) to sending humans to space and to colonizing it.

AI: A Key Enabler of Sustainable Development Goals, Part 1 [Industry Activities] We are witnessing a paradigm shift regarding how people purchase, access, consume, and utilize products and services as well as how companies operate, grow, and deal with challenges in a world that is continuously changing.

Fine-tuning might just be an illusion, a result of irreducible chance, or nonexistent because nature could not have been otherwise (which might be shown within a fundamental theory if some constants or laws could be reduced to boundary conditions, or boundary conditions to laws). Alternatively, it might be a product of selection: observational selection (the weak anthropic principle) within a vast multiverse of many different realizations of physical parameters; a kind of cosmological natural selection making the measured parameter values quite likely within a multiverse of many different values; or even a teleological or intentional selection, or a coevolutionary development, depending on a more or less goal-directed participatory contribution of life and intelligence.

In contrast to observational selection, which is not predictive, an observer-independent selection mechanism must generate unequal reproduction rates of universes, a peaked probability distribution, or another kind of differential frequency, resulting in a stronger explanatory power.

The possibility of AGI developing a motivation for self-preservation could lead it to conceal its true capabilities until it has developed robust protection from human intervention, such as redundancy, direct defensive measures, or active preemptive measures.

In arguing that communicators do not yet fully understand why an intersectional approach is necessary to avoid climate disaster, I review the literature focusing on one basis of marginalization, gender, to illustrate how inequality is a root cause of global environmental damage.

I then examine the Green New Deal as an example of an intersectional climate change policy that looks beyond scientific, technical and political solutions to the inextricable link between crises of climate change, poverty, extreme inequality, and racial and economic injustice.

The benefits of these technologies include significantly improved feasibility of near-term restoration of preindustrial atmospheric CO2 levels and ocean pH, environmental remediation, significant and rapid reduction in global poverty, and widespread improvements in manufacturing, energy, medicine, agriculture, materials, communications and information technology, construction, infrastructure, transportation, aerospace, standard of living, and longevity. To ensure that these benefits are not eclipsed by either public fears of nebulous catastrophe or actual consequential accidents, we propose safe design, operation, and use paradigms.

We discuss design of control and operational management paradigms that preclude uncontrolled replication, with emphasis on the comprehensibility of these safety measures in order to facilitate both clear analyzability and public acceptance of these technologies.

Finite state machines are chosen for control of self-replicating systems because they are susceptible to comprehensive analysis (exhaustive enumeration of states and transition vectors, as well as analysis with established logic synthesis tools) with predictability more practical than with more complex Turing-complete control systems (cf.
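To illustrate what exhaustive enumeration buys here, consider a toy finite-state controller; the states, inputs, and safety property below are hypothetical, not taken from the cited design.

```python
# Toy finite-state controller for a self-replicating system; states, inputs,
# and rules are hypothetical. The point: with finitely many (state, input)
# pairs, the complete behavior can be enumerated and checked offline.
states = ["IDLE", "CHECK_LICENSE", "REPLICATE", "HALT"]
inputs = ["start", "license_ok", "license_bad", "done"]
transitions = {
    ("IDLE", "start"): "CHECK_LICENSE",
    ("CHECK_LICENSE", "license_ok"): "REPLICATE",
    ("CHECK_LICENSE", "license_bad"): "HALT",
    ("REPLICATE", "done"): "IDLE",
}

# Exhaustive enumeration: every (state, input) pair has a known outcome.
for s in states:
    for i in inputs:
        print(f"{s:14} --{i:11}--> {transitions.get((s, i), 'no transition')}")

# Example safety check over the full table: nothing leads out of HALT,
# so a bad license permanently stops replication.
assert not any(src == "HALT" for (src, _inp) in transitions), "HALT must be absorbing"
```

A Turing-complete controller admits no such finite table, which is why the paper favors finite state machines for analyzability.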

The corporate capture of sustainable development and its transformation into a 'good Anthropocene' historical bloc Inspired by Antonio Gramsci's analysis of bourgeois hegemony and his theoretical formulation of historical blocs, this paper attempts to explain how the concept and practice of sustainable development were captured by corporate interests in the last few decades of the twentieth century and how they were transformed into what we can name a 'good Anthropocene' historical bloc at the beginning of the twenty-first century.

The critical Anthropocene narrative, thus, stands in radical opposition to the 'good Anthropocene' narrative which I argue was invented as a strategy to defend the socio-economic status quo by the proponents of sustainable development and their successors in the Anthropocene era, despite the good intentions of many environmentalists working in corporations, governments, NGOs, and international organizations.

However, crucial communication systems, infrastructure networks, and other critical systems are usually coupled together and can be modeled as interdependent networks; hence, since 2010, the focus has shifted to the study of the more general and realistic case of coupled networks, called Networks of Networks (NON).
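The cascade mechanism that makes such coupling dangerous can be sketched in a few lines; the graphs and the one-to-one dependency below are invented for illustration (using networkx) and do not reproduce any specific study's result.

```python
# Toy cascade in interdependent networks ("Networks of Networks"); the
# topology and the one-to-one dependency are invented for illustration.
import networkx as nx

def giant_component(G, alive):
    """Nodes of G, restricted to `alive`, that lie in the largest connected component."""
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(H), key=len))

A = nx.erdos_renyi_graph(50, 0.08, seed=1)  # e.g., a power grid
B = nx.erdos_renyi_graph(50, 0.08, seed=2)  # e.g., a communication network
# Node i in A functions only if node i in B functions, and vice versa.

alive_A = set(A.nodes) - {0, 1, 2, 3, 4}    # initial attack on five nodes of A
alive_B = set(B.nodes)

while True:
    # A node keeps functioning only inside its network's giant component
    # and only while its counterpart in the other network functions.
    next_A = giant_component(A, alive_A) & alive_B
    next_B = giant_component(B, alive_B) & next_A
    if next_A == alive_A and next_B == alive_B:
        break
    alive_A, alive_B = next_A, next_B

print(f"functional after cascade: {len(alive_A)} nodes in A, {len(alive_B)} in B")
```

A small initial attack on one network can thus propagate back and forth through the dependency links, taking down far more nodes than the same attack would in an isolated network.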

Integrated emergency management and risks for mass casualty emergencies Today we observe the intense growth of various global, wide-scale threats to civilization, such as natural and man-made catastrophes, ecological imbalance, global climate change, hazardous pollution of large territories, and directed terrorist attacks, resulting in huge damage and mass casualty emergencies.

It first emphasizes the very need for peace and reconciliation through three examples of national reconciliation, both internal and external: reconciliation between France and Germany after the recurring wars since 1813, reconciliation between Germany and Poland after World War II, and reconciliation between East and West Germany in the very recent peace movement.

Prospects for the use of new technologies to combat multidrug-resistant bacteria The increasing use of antibiotics is being driven by factors such as the aging of the population, increased occurrence of infections, and greater prevalence of chronic diseases that require antimicrobial treatment.

The excessive and unnecessary use of antibiotics in humans has led to the emergence of bacteria resistant to the antibiotics currently available, as well as to the selective development of other microorganisms, hence contributing to the widespread dissemination of resistance genes at the environmental level.

In Study 1 (US sample, n = 183, mean age 38.2, 50.81% female), we studied the general public’s judgments of the badness of human extinction.

M = 5.69, SD = 1.86), and that funding work to reduce the risk of human extinction is more important than funding other areas of government, such as education, health care and social security (1 = much less important to fund work to reduce the risk of human extinction, 4 = midpoint, 7 = much more important;

Participants (n = 1,251, mean age 36.6, 35.33% female) were randomly divided into a control condition and four experimental conditions: “the animals condition”, “the sterilization condition”, “the salience condition” and “the utopia condition” (see below for explanations of the manipulations).

However, this was just a preliminary question: as per the discussion above, what we were primarily interested in was which difference participants who gave the expected ranking found greater: the first difference (meaning that extinction is not uniquely bad) or the second difference (meaning that extinction is uniquely bad).

(Recall that the first difference was the difference between no catastrophe and a catastrophe killing 80%, and the second difference the difference between a catastrophe killing 80% and a catastrophe killing 100%.) We therefore asked participants who gave the expected ranking (but not the other participants) which difference they judged to be greater.

We found, first, that a large majority ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the animals condition (89.84%, 221/246 participants) and the sterilization condition (82.54%, 208/252 participants).

The proportion of the participants who gave the expected ranking that found extinction uniquely bad was significantly larger (χ2(1) = 8.82, P = 0.003) in the animals condition (44.34%, 98/221 participants) than in the control condition (23.47%, 50/213 participants).

Similarly, the proportion of the participants who gave the expected ranking that found extinction uniquely bad was significantly larger (χ2(1) = 23.83, P < 0.001) in the sterilization condition (46.63%, 97/208 participants) than in the control condition (23.47%, 50/213 participants).
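These test statistics can be checked from the reported counts; the sketch below, assuming a standard Pearson chi-squared test with Yates' continuity correction (scipy's default for 2x2 tables; the authors' exact settings are not stated here), reproduces the reported 23.83 for the sterilization comparison.

```python
# Chi-squared test of independence on the reported 2x2 counts
# (sterilization vs. control; "extinction uniquely bad" yes/no).
from scipy.stats import chi2_contingency

table = [
    [97, 208 - 97],   # sterilization condition: uniquely bad vs. not
    [50, 213 - 50],   # control condition
]
chi2, p, dof, _expected = chi2_contingency(table)  # Yates correction by default for 2x2
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}")    # chi2(1) = 23.83, p < 0.001
```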

We therefore included a maximally positive scenario, the “utopia condition” (248 participants), where it was said that provided that humanity does not go extinct, it “goes on to live for a very long time in a future which is better than today in every conceivable way”.

It was also said that “there are no longer any wars, any crimes, or any people experiencing depression or sadness” and that “human suffering is massively reduced, and people are much happier than they are today” (in the scenario where 80% die in a catastrophe, it was said that this occurred after a recovery period;

Conversely, participants were told that if 100% are killed, then “no humans will ever live anymore, and all of human knowledge and culture will be lost forever.” We hypothesized that both of these manipulations would make more participants judge extinction to be uniquely bad compared with the control condition.

We found again that a large majority ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) both in the salience condition (77.82%, 193/248 participants) and the utopia condition (86.69%, 215/248 participants).

We found again that large majorities ranked no catastrophe as the best outcome and 100% dying as the worst outcome (the expected ranking) in the control condition (87.80%, 144/164 participants), the animals condition (92.44%, 159/172 participants), the sterilization condition (91.62%, 153/167 participants), the salience condition (83.05%, 147/177 participants) and the utopia condition (89.71%, 157/175 participants).

We then found again that a minority of the participants who chose the expected ranking found extinction to be uniquely bad in the control condition (36.92%, 24/65 participants), though this minority was slightly larger than in the two samples of the general public (cf.

In Study 3 (N = 71, mean age 30.52, 14.00% female) we aimed to test whether people devoted to preventing human extinction (existential risk mitigators) judge human extinction to be uniquely bad already when asked without further prompts.

(Existential risks also include risks that threaten to drastically curtail humanity’s potential12,13,14,15, without causing it to go extinct, but we focus on risks of human extinction.) This would support the validity of our task by demonstrating a link between participants’ responses and behavior in the real world.

But unlike the samples in Studies 2a to 2c, and in line with our hypotheses, substantial majorities of the participants who chose the expected ranking found extinction uniquely bad both in the control condition (85.71%, 24/28 participants) and the utopia condition (94.59%, 35/37).

AI ethics and AI risk - Ten challenges

How dangerous could artificial intelligence turn out to be, and how do we develop ethical AI? Risk Bites dives into AI risk and AI ethics, with ten potential risks of ...

The Existential Risk of Artificial Intelligence

David Goldberg (Co-founder & CEO of Founders Pledge) hosts Jaan Tallinn (Founding Engineer of Skype & Kazaa) on the existential risk of artificial intelligence ...

What's worrying Elon Musk? Existential Risk and Artificial Intelligence

A talk from the meetup of the Generalist Engineer group on Existential Risk and Artificial Intelligence. Dr. Joshua Fox served as a Research Associate with the ...

Nick Bostrom on Artificial Intelligence and Existential Risks

Prof. Nick Bostrom - Artificial Intelligence Will be The Greatest Revolution in History

Prof. Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics ...

Existential Risk: Managing Extreme Technological Risk

Of the 45 million centuries of the Earth's history, this one is very special. It is the first century in which one species, ours, holds the future of the planet in its hands.

Roman Yampolsky: Artificial intelligence as an existential risk to humanity

Roman Yampolsky speaks on artificial intelligence as an existential risk to humanity at the GoCAS program on existential risk to humanity, Gothenburg, 7 September 2017.

AI is an existential risk | Elon Musk