AI News: The Partnership on AI (artificial intelligence)

Ethics Society team

Article 36 is a non-profit organisation working to prevent harm caused by certain weapons.

Article 36 is also part of the steering group of the International Campaign to Abolish Nuclear Weapons (ICAN), which was awarded the 2017 Nobel Peace Prize, and has led efforts to establish the impact of explosive weapons in populated areas as an international humanitarian priority.

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is commonly divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

The term 'robot ethics' (sometimes 'roboethics') refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1]

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines. Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, which makes them even more difficult to spot and fight against.[15]
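
As a minimal sketch of that 'curve-fitting' point (the invented data, feature names, and use of scikit-learn below are illustrative assumptions, not anything drawn from Kaplan and Haenlein), a model fitted to historically biased decisions simply reproduces the bias it was shown:

```python
# Hypothetical sketch: a "fancy curve-fitting machine" trained on biased
# historical rulings reproduces that bias. All data below is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [prior_offenses, group]; "group" stands in for a protected attribute.
X = [
    [0, 0], [1, 0], [2, 0], [3, 0],   # group 0
    [0, 1], [1, 1], [2, 1], [3, 1],   # group 1
]
# Hypothetical historical outcomes (1 = harsh ruling) that penalise group 1
# even when prior offenses are identical.
y = [0, 0, 0, 1,
     0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Two defendants with the same record, differing only in group membership:
for group in (0, 1):
    p = model.predict_proba([[1, group]])[0][1]
    print(f"group={group}, prior_offenses=1 -> P(harsh ruling) = {p:.2f}")
# The learned probabilities differ by group: the historical bias has been
# formalised into the model's parameters.
```

Nothing in the fitting procedure distinguishes a legitimate predictor from an encoded prejudice; the model only reproduces whatever correlations its training data contain.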

The new recommendations focus on four main areas: humans and society at large, the private sector, the public sector, and research and academia.

In a highly influential branch of AI known as 'natural language processing,' problems can arise from the 'text corpus'—the source material the algorithm uses to learn about the relationships between different words.[33]
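
As a toy illustration of that mechanism (the corpus and the simple co-occurrence counting below are invented for this sketch; real systems use far larger corpora and learned embeddings), word associations, including skewed ones, are absorbed directly from whatever text the system is given:

```python
# Toy sketch: the word relationships an NLP system learns are statistics of
# its text corpus, so skewed source text yields skewed associations.
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor finished his shift",
    "the doctor reviewed his notes",
    "the nurse finished her shift",
    "the nurse checked her patient",
]

# Count how often word pairs appear in the same sentence.
cooc = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1

for pair in [("doctor", "his"), ("doctor", "her"), ("her", "nurse"), ("his", "nurse")]:
    print(pair, cooc[tuple(sorted(pair))])
# "doctor" co-occurs only with "his" and "nurse" only with "her" in this corpus,
# so any model trained on it will absorb that gendered association.
```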

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.

'If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, whose signatories against AI weaponry include Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky.[55]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[56]

To account for the nature of these agents, it has been suggested that certain philosophical ideas be considered, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[61]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[65]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[73]

Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability, while Chris Santos-Lang has argued in the opposite direction, on the grounds that the norms of any age must be allowed to change and that the natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[66]
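
As a rough sketch of the transparency side of this argument (the dataset, the scikit-learn models, and their parameters are my own illustrative choices, not anything proposed by Bostrom, Yudkowsky, or Santos-Lang), a decision tree's learned procedure can be printed and audited as explicit rules, while a neural network's behaviour is spread across weight matrices that offer no comparable reading:

```python
# Illustrative comparison, assuming scikit-learn and its bundled iris dataset:
# a shallow decision tree yields human-readable rules, an MLP yields only
# weight matrices.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
# Every prediction can be traced to an explicit chain of threshold tests.
print(export_text(tree, feature_names=list(iris.feature_names)))

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(iris.data, iris.target)
# The closest equivalent "explanation" for the network is just arrays of numbers.
print([w.shape for w in net.coefs_])
```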

Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don’t require a human controller.[76]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[77]

However, Bostrom has also asserted that, instead of overwhelming the human race and leading to our destruction, a superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to 'enhance' ourselves.[79]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[81]

The same idea can be found in the Emergency Medical Hologram aboard the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft

Amazon, DeepMind of Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public ...

AI and Ethics Symposium: Partnership on AI - Opportunities, Challenges and Social Responsibility

Interview with Alice Xiang, Research Scientist at Partnership on AI

This interview took place at the Deep Learning Summit in Boston on 23 & 24 May. Interviewer: Gracelyn Shi, The Knowledge Society. Alice Xiang is a ...

Apple Joins Artificial Intelligence Partnership

On Friday, Apple announced that it has formally joined The Partnership on AI to Benefit People and Society. Other members of the organization include: Amazon ...

Harry Shum discusses Partnership on AI

Microsoft Executive Vice President for AI + Research, Harry Shum, speaks at an event in London focused on Microsoft's continued investment in Artificial ...

Exploring the Human + Artificial Intelligence Partnership | Ivan Portilla | TEDxOnBoard

IBM Watson is a cognitive computing system. This introductory-level talk will explore what cognitive computing means, where cognitive computing came from, ...

Allison Duettmann: Thinking About "The Future" - Partnership on AI Workshop @Google

A talk for the Positive Futures from AI Workshop, organized by Partnership on AI, May 15-16, San Francisco. Foresight.org bit.ly/foresightupdates.

HER - Falling in Love with an Artificial Intelligence

Can an AI technology be our romantic ...

Dangerous Artificial Intelligence A.I. Partnership Revealed

In this video we quickly break down the dangerous artificial intelligence ('A.I.') partnership recently announced between five major corporations.

The Role of Artificial Intelligence in Society | Terah Lyons | TEDxBeaconStreet

Terah Lyons discusses how Artificial Intelligence is currently handled by our government, and how this may change over time. Terah Lyons is a Policy Advisor to ...