Killer Robots in the US Military: Ethics as an Afterthought

The US military is not discounting the future development of killer robots, or lethal autonomous weapon systems (LAWS), as agents in the US war machine.

Whilst the DoD has established Directive 3000.09, putting in place a practical framework for developing autonomous weapon systems (AWS) and their lethal counterpart, LAWS, its development of a comprehensive ethical framework that responds fully to national and international concerns remains a mere afterthought.

The appeal of such systems is clear: fewer boots on the ground mean fewer military deaths, and more precise and efficient missions mean fewer civilian casualties.

And whilst accepting the inevitability of war may be a depressing reminder of humanity’s primal instincts, war machines around the world are well-oiled, well-funded, and developing with the times.

Whilst DARPA works on some of the longer-term projects, the Joint Artificial Intelligence Center (JAIC) was established in 2018 to keep up with, and deploy, tech at the speed it is being developed and used in the private sector and academia.

The JAIC oversees the DoD’s adoption of AI across its departments and is developing partnerships in academia and the private sector to gain access to rapidly developing and deployable AI tech.

Without the constraints of a well-considered ethical framework, the US may develop and deploy autonomous weapons in a manner that brings more shame than the 1945 atomic bomb attacks on Hiroshima and Nagasaki.

There is a risk that in a bid to keep our hands clean in the dirty business of war, the bloodshed will become invisible from the sanitised war rooms where human officers oversee the lethal machines.

The DoD does not rule out the possibility that fully autonomous weapon systems will be able to select targets and deploy lethal force without direct human involvement.

One international poll, which gathered opinions across 26 countries, found that 66% of respondents who opposed the use of LAWS felt that such weapons cross a moral line.

Yet, it wasn’t until September 2019 that the DoD revealed its plan to hire an ethicist, indicating that developing a robust ethical framework is not an urgent priority.

We are decades away from developing the kind of technology that would allow machines to perform the range of complex cognitive tasks that humans are capable of, let alone combining that with robots that can move with human-like dexterity.

With all the AI hype, it’s tempting to think that killer robots are just around the corner, when in fact that kind of complex tech remains a distant prospect.

And somewhere quietly in the background, the DoD is taking a leisurely approach to securing an ethicist and forming transparent ethical guidelines that would hold it accountable in its race to the AI finish line (if there is one).

It may not be possible to fully replicate humanity’s consciousness within a machine, in which case the argument against killer robots may become stronger.
