AI News, Study finds workers would rather be replaced by a robot than another person

In the modern era, robots have been taking the place of human workers because they are seen as a cheaper and more reliable source of labor.

In the first study, the researchers asked 300 people if they would rather see a colleague replaced by a human or a robot—62 percent of respondents chose the human.

In a second study, the researchers asked 251 people to rate how much negativity they felt about losing a job to a robot versus to another person.

If a worker is replaced by another human, it casts doubt on their ability to do a job—if they are replaced by a robot, though, it is just a sign of technology taking over.

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

The term 'robot ethics' (sometimes 'roboethics') refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1]

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines: Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against.[15]
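The 'fancy curve-fitting' point can be made concrete with a toy sketch (our illustration with invented data, not code from Kaplan and Haenlein): a model that simply fits the frequencies in biased historical rulings reproduces exactly that bias at prediction time.

```python
# Toy illustration of bias being "formalized" by training: a minimal
# frequency-based model fitted to hypothetical, skewed historical
# rulings. The group names and counts are invented for the example.
from collections import Counter

# Hypothetical past rulings: (defendant_group, ruling)
history = (
    [("group_a", "harsh")] * 80 + [("group_a", "lenient")] * 20
    + [("group_b", "harsh")] * 30 + [("group_b", "lenient")] * 70
)

def train(records):
    """Learn P(harsh | group) by counting -- the simplest 'curve fit'."""
    counts = {}
    for group, ruling in records:
        counts.setdefault(group, Counter())[ruling] += 1
    return {g: c["harsh"] / sum(c.values()) for g, c in counts.items()}

model = train(history)
print(model)  # {'group_a': 0.8, 'group_b': 0.3}
```

The fitted model predicts harsher outcomes for one group purely because the training data did, which is the sense in which past bias gets engrained rather than corrected.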

The new recommendations focus on four main areas: humans and society at large, the private sector, the public sector, and research and academia.

In a highly influential branch of AI known as 'natural language processing,' problems can arise from the 'text corpus'—the source material the algorithm uses to learn about the relationships between different words.[33]
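How a text corpus transmits its associations can be sketched in a few lines (a toy example of our own, not from the cited work): raw word co-occurrence counts, the basic signal many NLP models learn from, inherit whatever pairings the corpus happens to contain.

```python
# Toy sketch: co-occurrence counts over an invented four-sentence
# corpus. The skewed pronoun associations below are built into the
# data on purpose, to show that the counts simply mirror the corpus.
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor examined his patient",
    "the doctor reviewed his chart",
    "the nurse checked her patient",
    "the nurse adjusted her schedule",
]

cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    # Count every ordered-as-written pair within a sentence.
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1

print(cooc[("doctor", "his")], cooc[("nurse", "her")])  # 2 2
print(cooc[("doctor", "her")], cooc[("nurse", "his")])  # 0 0
```

Any model trained on these counts would associate 'doctor' with 'his' and 'nurse' with 'her', not because of anything about the words, but because of the source material.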

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.

'If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[55]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[56]

To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[61]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[65]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[73]

Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[66]
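The transparency argument can be illustrated with a minimal sketch (our own, not code from Bostrom or Yudkowsky; the tree and its rule names are invented): a decision tree can be dumped as human-readable if-then rules, something a trained neural network's weight matrix does not offer.

```python
# A tiny hand-built decision tree: internal nodes are
# (test, yes_branch, no_branch) tuples, leaves are decision strings.
tree = ("income > 50k",
        ("prior_defaults == 0", "approve", "review"),
        "deny")

def rules(node, path=()):
    """Walk the tree and yield every root-to-leaf decision path."""
    if isinstance(node, str):  # leaf: a final decision
        yield " AND ".join(path) + f" -> {node}"
    else:
        test, yes, no = node
        yield from rules(yes, path + (test,))
        yield from rules(no, path + (f"NOT {test}",))

for rule in rules(tree):
    print(rule)
# income > 50k AND prior_defaults == 0 -> approve
# income > 50k AND NOT prior_defaults == 0 -> review
# NOT income > 50k -> deny
```

Every decision the model can make is inspectable in advance, which is the predictability property the argument appeals to.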

Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don’t require a human controller.[76]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[77]

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, a super-intelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[79]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[81]

The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to give medical assistance in emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

Employees less upset at being replaced by robots than by other people

Over the coming decades, millions of jobs will be threatened by robotics and artificial intelligence.

Human replacements pose greater threat to feeling of self-worth

The study shows: In principle, most people view it more favorably when workers are replaced by other people than by robots or intelligent software.

The researchers were able to identify the causes behind these seemingly paradoxical results, too: People tend to compare themselves less with machines than with other people.

This reduced self-threat could even be observed when participants assumed that they were being replaced by other employees who relied on technological abilities such as artificial intelligence in their work.

The study could also serve as a starting point for further research on other economic topics, says Fuchs: 'It is conceivable that employee representatives' responses to job losses attributed to automation will tend to be weaker than when other causes are involved, for example outsourcing.'

People don’t want to see workers replaced by a robot—themselves excepted

There has been extensive public discussion of how automation may fundamentally change the job market, eliminating so many positions that some sort of universal income will be required.

While the universal income discussion may be new, anxieties about machinery and automation replacing human work go back over a century, with worries growing with the advent of robotics and machine-learning algorithms.

To find out whether job losses due to automation produce a distinct set of worries, researchers did an extensive study of how people respond to job loss.

The researchers involved in the new work—Armin Granulo, Christoph Fuchs, and Stefano Puntoni—cite an extensive survey of European residents, which showed that they tend to view robots as displacing human employment.

One interpretation is that this is based on personal worries that they may end up in a job that's vulnerable to automation; the other is that it could be a more general pro-social view, driven by concern for other people losing their jobs.

Another study swapped out robots for software and produced similar results, leading to the rather unusual statement that 'Participants displayed a strong and significant preference for being replaced by software.'

The same pattern held true when the researchers zeroed in on factory workers who had indicated that they were concerned that their jobs would be replaced by automation, as well as people who had recently lost their jobs, suggesting it has real-world relevance.

It's just that, when confronted with immediate replacement of any sort, the threat to self-image from a human replacement turned out to be stronger; the researchers estimated its effect to be four times as strong.

In addition to offering a somewhat unusual window into the human psyche, the authors note that the finding has implications for everything from automation and employment policy discussions to how retraining should be structured for people who have lost their jobs.

Most workers would prefer to be replaced by a robot rather than by another human

Wednesday, 7 August 2019

An article published in Nature Human Behaviour on 5 August, co-authored by Armin Granulo and former RSM faculty member Christoph Fuchs, finds that people experience more negative feelings when they are replaced by another person than when they are replaced by a robot.

However, while comparing one’s abilities to a robot may be less of a concern to people’s self-worth in the short run, robotic replacement is perceived as more threatening to people’s economic situation in the long run.

“We hope that, particularly in times when policymakers are discussing strategies intended to support workers who have been displaced by technology, our work encourages more research on the psychological consequences of technological unemployment before technological progress disrupts specific jobs and occupations.”

As technological progress is expected to affect millions of workers in a wide variety of occupations in the coming decades, it is important for the stability of society that we understand potential threats to the psychological wellbeing of affected workers, and how this transition will affect their long-term economic prospects.
