Artificial intelligence researchers boycott South Korean university amid fears it is developing killer robots
- On Monday, July 30, 2018
Leading artificial intelligence researchers have boycotted South Korea’s top university after it teamed up with a defence company to develop “killer robots” for military use.
An open letter sent to the Korea Advanced Institute of Science and Technology (KAIST) stated that the 57 signatories from nearly 30 different countries would no longer visit or collaborate with the university until autonomous weapons were no longer developed at the institute.
“If we combine powerful burgeoning AI technology with insecure robots, the Skynet scenario of the famous Terminator films all of a sudden seems not nearly as far-fetched as it once did,” Lucas Apa, a senior security consultant from the cybersecurity firm IOActive, told The Independent.
“If robot ecosystems continue to be vulnerable to hacking, robots could soon end up hurting instead of helping us.” KAIST president Sung-Chul Shin responded to the open letter, claiming that the university had 'no intention' of developing lethal autonomous weapons.
Autonomous Weapons: an Open Letter from AI Robotics Researchers
They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.
Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
Leading AI researchers vow to not develop autonomous weapons
In a letter published online, 2,400 researchers in 36 countries joined 160 organizations in calling for a global ban on lethal autonomous weapons.
'We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody,' said Anthony Aguirre, who teaches physics at the University of California-Santa Cruz and signed the letter.
Flying killer robots and weapons that think for themselves remain largely the stuff of science fiction, but advances in computer vision, image processing, and machine learning make them all but inevitable.
The Pentagon recently released a national defense strategy calling for greater investment in artificial intelligence, which the Defense Department and think tanks like the Center for a New American Security consider the future of warfare.
'Emerging technologies such as AI offer the potential to improve our ability to deter war and enhance the protection of civilians in the form of fewer civilian casualties and less collateral damage to civilian infrastructure,' Pentagon spokesperson Michelle Baldanza said in a statement to CNNMoney.
Their refusal to 'participate in [or] support the development, manufacture, trade, or use' of autonomous killing machines amplifies similar calls by others, but may be largely symbolic.
Machines that think and act on their own raise all sorts of chilling scenarios, especially when combined with facial recognition, surveillance, and vast databases of personal information.
Elon Musk leads 116 experts calling for outright ban of killer robots
Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots.
While AI can be used to make the battlefield a safer place for military personnel, experts fear that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.
The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action, after the UN was forced to delay a meeting that was due to start Monday to review the issue.
The founders call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s convention on certain conventional weapons (CCW) brought into force in 1983, which includes chemical and intentionally blinding laser weapons.
“We need to make decisions today choosing which of these futures we want.” Musk, one of the signatories of the open letter, has repeatedly warned of the need for proactive regulation of AI, calling it humanity’s biggest existential threat, but while AI’s destructive potential is considered by some to be vast, it is also thought to be distant.
Ryan Gariepy, the founder of Clearpath Robotics said: “Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.” This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems.
South Korean university boycotted over 'killer robots'
Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: 'I reaffirm once again that Kaist will not conduct any research activities counter to human dignity, including autonomous weapons lacking meaningful human control.'
He went on to explain that the university's project was centred on developing algorithms for 'efficient logistical systems, unmanned navigation and aviation training systems'.
Next week in Geneva, 123 member nations of the UN will discuss the challenges posed by lethal autonomous weapons, or killer robots, with 22 of these nations calling for an outright ban on such weapons.
'At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like Kaist looks to accelerate the arms race to develop such weapons,' read the letter sent to Kaist, announcing the boycott.
Elon Musk calls for ban on killer robots before ‘weapons of terror’ are unleashed
The robot was implemented to reduce the strain on the thousands of human guards who man the heavily fortified, 160-mile border. While it does not operate autonomously yet, it does have the capability to, according to Izumi Nakamitsu, the UN's High Representative for Disarmament Affairs. “The system can be installed not only on national borders, but also in critical locations, such as airports, power plants, oil storage bases and military bases,” says a description in a video released by Samsung, which makes the SGR-A1.
“Without wanting to sound alarmist, there is a very real danger that without prompt action, technological innovation will outpace civilian oversight in this space.” According to Human Rights Watch, autonomous weapons systems are being developed in many of the nations represented in the letter — “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” The concern, the organization says, is that people will become less involved in the process of selecting and firing on targets as machines lacking human judgment begin to play a critical role in warfare. Autonomous weapons “cross a moral threshold,” HRW says.
In recent years, Musk’s warnings about the risks posed by AI have grown increasingly strident — drawing pushback in July from Facebook chief executive Mark Zuckerberg, who called Musk’s dark predictions “pretty irresponsible.” Responding to Zuckerberg, Musk said his fellow billionaire’s understanding of the threat posed by artificial intelligence “is limited.” Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.
- On Saturday, September 21, 2019
The Threat of AI Weapons
Will artificial intelligence weapons cause World War III? This animated clip is from my friends at ..
KILLER ROBOTS ARE COMING: Google & Tesla Beg Awareness
Elon Musk and Mustafa Suleyman have written an open letter urging the UN to block the use of lethal autonomous weapons ..
10 Scariest Advancements in A.I.
Any rational person knows that it's only a matter of time before robots put humanity under the metal thumb of oppression ..
Tech Visionaries Warn Us About Killer Robots
Elon Musk and Stephen Hawking are among the scientists who urged researchers to exercise caution with artificial intelligence. They argue that artificial intelligence and ...
Autonomous weapons and international law
The introduction of artificial intelligence and robotics to future scenarios of warfare is posing new challenges to national and international codes of law, ethics, ...
We Talked To Sophia — The AI Robot That Once Said It Would 'Destroy Humans'
This AI robot once said it wanted to destroy humans. Senior correspondent Steve Kovach interviews Sophia, the world's first robot citizen. While the robot can ...
Elon Musk Wants Worldwide Ban on Lethal Autonomous Weapons
Tesla and SpaceX CEO Elon Musk is reportedly seeking a global ban on lethal autonomous weapons. In a recent letter to the UN, Musk and 116 AI and robotics ...
Scientists want artificially intelligent robot soldiers banned
A global ban on developing artificially intelligent robot soldiers is needed to protect the future of mankind, a letter from hundreds of leading scientists including ...
AI leaders Musk, Tegmark, and DeepMind call for autonomous weapons systems ban
Prominent artificial intelligence thought leaders, including SpaceX and ...