Can We Trust Robots?
In popular culture, robots tend to be either faultlessly loyal Victorian butlers or duplicitous psychopathological killers.
In recent years, dozens of tech and science luminaries have shared their apprehension of AI run amok—of superintelligent robots establishing a new world in which humans are at best irrelevant and at worst extinct.
In the not-so-distant future, we will begin entrusting highly or completely autonomous robotic systems with vital tasks such as driving a car, performing surgery, and choosing when to apply lethal force in a war zone.
For the first time, machines programmed, but not directly controlled, by us will be making life-or-death decisions in complicated, fluid, and unstructured environments.
Elon Musk leads 116 experts calling for outright ban of killer robots
Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots.
While AI can be used to make the battlefield a safer place for military personnel, experts fear that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.
The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action, after the UN was forced to delay a meeting that was due to start Monday to review the issue.
The founders call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s convention on certain conventional weapons (CCW) brought into force in 1983, which includes chemical and intentionally blinding laser weapons.
“We need to make decisions today choosing which of these futures we want.” Musk, one of the signatories of the open letter, has repeatedly warned of the need for proactive regulation of AI, calling it humanity’s biggest existential threat; but while AI’s destructive potential is considered by some to be vast, it is also thought to be distant.
Ryan Gariepy, the founder of Clearpath Robotics, said: “Unlike other potential manifestations of AI, which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people, along with global instability.” This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems.
The Rise Of The Robots: What The Future Holds For The World’s Armies
Supervised robots with humans on the loop can select their own nonhuman targets–like incoming missiles or, presumably, other robots–to defend humans “for local defense to intercept attempted time-critical or saturation attacks.”
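The “human on the loop” arrangement described above can be sketched as a simple decision gate. Everything here, the function name, the inputs, and the return strings, is invented for illustration; it is not any real system’s interface.

```python
def on_the_loop_engage(target_is_nonhuman: bool, human_vetoed: bool) -> str:
    """Supervised autonomy sketch: the system may intercept nonhuman
    threats (incoming missiles, other robots) on its own, but a watching
    human can veto, and any human target is escalated to a person."""
    if not target_is_nonhuman:
        return "escalate to human operator"
    if human_vetoed:
        return "hold fire"
    return "intercept"
```

The key property of the sketch is the ordering of the checks: the machine never reaches “intercept” for a human target, and a standing veto always overrides the default.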
At least 19 countries and international organizations, including Human Rights Watch, have called for an international ban on autonomous, lethal robots, potentially similar to existing restrictions on undetectable mines and blinding laser weapons.
Russian President Vladimir Putin recently expressed interest in advancing the country’s own military autonomous tech and Russia has been a somewhat hesitant participant in international discussions on limiting robots, though talks are set to continue this year.
The U.S. Defense Department’s current five-year policy on autonomous weapons is also scheduled for re-evaluation in 2017, making it quite possible that Presidents Trump and Putin will effectively decide just how autonomous systems will move from the lab to the battlefield.
“A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals.”
They might also struggle with other factors required under international law, the group warns: deciding whether the military benefits of an attack outweigh the potential risks to civilians, and even whether it is necessary to apply force at all, as when a human target appears to be wounded.
But some experts say autonomous robots, rigorously developed and properly tested, could even help keep civilians safe in high-risk situations, avoiding mistakes a human might make that could endanger unarmed bystanders.
And their developers should steer away from programming techniques, like some forms of machine learning, that can lead to decision-making processes humans can’t easily peer into, he says.
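The point about avoiding opaque decision-making can be illustrated with a deliberately transparent rule list: every decision comes back with the rule that produced it, so a human can audit the reasoning afterward. The rules, observation fields, and action labels below are invented for the sketch, not drawn from any real system.

```python
# Each rule is (condition, action); the first matching rule wins, and the
# returned value records which rule fired so the decision can be audited.
RULES = [
    (lambda obs: obs.get("armed") and obs.get("advancing"), "alert operator"),
    (lambda obs: obs.get("armed"), "track"),
    (lambda obs: True, "ignore"),
]

def decide(obs):
    """Return (action, audit trail) for an observation dictionary."""
    for index, (condition, action) in enumerate(RULES):
        if condition(obs):
            return action, f"rule {index} matched"
```

Unlike a learned black-box model, this decision process can be inspected line by line, which is the property the experts quoted above are asking developers to preserve.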
But in December, Steven Groves–now chief of staff to U.S. Ambassador to the United Nations Nikki Haley, then a Trump transition advisor–told Politico the U.S. would be unlikely to support a ban on the weapons, which he deemed necessary to maintain military superiority, though he said at the time he did not officially speak for the administration.
“The capabilities of [lethal autonomous weapons systems] to increase U.S. national security have yet to be fully explored, and a preemptive ban or moratorium on such research is against U.S. interests.”
New generation of drones set to revolutionize warfare
One of the biggest revolutions over the past 15 years of war has been the rise of the drones -- remotely piloted vehicles that do everything from conduct air strikes to dismantle roadside bombs.
Some autonomous machines are run by artificial intelligence, which allows them to learn and get better each time.
It’s early in the revolution and no one knows exactly where it is headed, but the potential exists for all missions considered too dangerous or complex for humans to be turned over to autonomous machines that can make decisions faster and go in harm’s way without any fear.
Humans on the ground have given them a mission to patrol a three-square mile area, but the drones are figuring out for themselves how to do it.
Roper, head of a once-secret Pentagon organization called the Strategic Capabilities Office, remembers the first time he saw Perdix, which is named after a bird found in Greek mythology.
Perdix flies too fast and too high to follow, so 60 Minutes brought specialized high-speed cameras to the China Lake Weapons Station in California to capture it in flight.
Developed by 20- and 30-somethings from MIT’s Lincoln Labs, Perdix is designed to operate as a team, which you can see when you follow this group of eight on a computer screen.
Will Roper: We’ve given them a mission at this point, and that mission is as a team go fly down the road and so they allocate that amongst all the individual Perdix.
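Roper’s description (one mission handed to the team, which the drones divide among themselves) can be sketched as a toy allocation problem. The greedy nearest-drone rule below is an invented stand-in for Perdix’s actual, unpublished coordination logic.

```python
def allocate(drone_positions, waypoints):
    """Greedy team allocation sketch: each patrol waypoint is claimed by
    the drone currently closest to it (squared Euclidean distance)."""
    assignments = {i: [] for i in range(len(drone_positions))}
    for wx, wy in waypoints:
        nearest = min(
            range(len(drone_positions)),
            key=lambda i: (drone_positions[i][0] - wx) ** 2
                        + (drone_positions[i][1] - wy) ** 2,
        )
        assignments[nearest].append((wx, wy))
    return assignments
```

The mission itself is the only input a human supplies; which drone covers which waypoint falls out of the allocation rule, mirroring the “give the team a mission and let them divide it” behavior described above.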
Cheap and expendable, Perdix tries to make a soft landing but it’s no great loss if it crashes into the ground.
Perdix can be used as decoys to confuse enemy air defenses or equipped with electronic transmitters to jam their radars.
As a swarm of miniature spy planes fitted with cellphone cameras they could hunt down fleeing terrorists.
The robots are slow and cumbersome but they’re just test beds for cutting edge computer software which could power more agile machines -- ones that could act as advance scouts for a foot patrol.
Jim Pineiro: I would want to use a system like this to move maybe in front of me, or in advance of me, to give me early warning of enemy in the area.
The computer already knows what I look like, so now we’ll see if it can match what’s stored in its memory with the real thing as I move around this make-believe village.
Rollie Wicks: What I was doing was, I was turning over control of the weapon system to the autonomous systems that you’ve seen on the floor today.
Had Wicks given permission to shoot, the missile would have struck my location using a set of coordinates given to it by the robots.
“Systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
What that means, says General Paul Selva, vice chairman of the Joint Chiefs of Staff, and the military’s man in charge of autonomy, is that life or death decisions will be made only by humans -- even though machines can do it faster and, in some cases, better.
Paul Selva: All the research I’ve seen says about five years ago machines actually got better at image recognition than humans.
Paul Selva: This goes to the ethics of the question of whether or not you allow a machine to take a human life without the intervention of a human.
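The requirement Selva describes, machines recommending but humans deciding, can be sketched as a software gate: the autonomous targeting code can only log a recommendation, and weapon release requires an explicit call that only a human operator makes. The class and method names are assumptions for illustration.

```python
class EngagementGate:
    """Sketch of a human-judgment gate: every fire request passes through
    here, and only a human operator's explicit call can open it."""

    def __init__(self):
        self._authorized_by = None

    def human_authorize(self, operator_name):
        # Invoked only from the human operator's console, never from
        # the autonomous targeting pipeline.
        self._authorized_by = operator_name

    def request_fire(self, coordinates):
        if self._authorized_by is None:
            return f"recommendation logged for {coordinates}; awaiting human decision"
        return f"release authorized by {self._authorized_by} at {coordinates}"
```

This mirrors the scene with Wicks above: the robots can supply the coordinates, but nothing fires until a person has turned the key.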
When testing is done, this pilot house will come off and the crew will be standing on the pier waving goodbye.
From then on this will be a ghost ship commanded by 36 computers running 50 million lines of software code.
It has a top speed of 26 knots and a tight turning radius which should enable it to use its sonar to track diesel-powered submarines for weeks at a time.
Will Roper and his team of desert rats are about to attempt to fly the largest autonomous swarm ever: 100 Perdix drones.
I mean, if what we mean by the biggest thing is something that’s going to change everything, I think autonomy is going to change everything.
Tech Visionaries Warn Us About Killer Robots
Elon Musk and Stephen Hawking are among the scientists who have urged researchers to slow down on artificial intelligence. They argue that artificial intelligence and specifically autonomous weapons...
The Future of Autonomous Mini-drone Technology - AI tech
It seems that any great new technology at some point gets weaponized. I understand there are bad people but is taking a life the only conclusion people can come up with? Why not employ technology...
DoD Announces Successful Perdix Micro-Drone Demonstration
In one of the most significant tests of autonomous systems under development by the Department of Defense, the Strategic Capabilities Office, partnering with Naval Air Systems Command, successfully...
Amir Husain: "The Sentient Machine: The Coming Age of Artificial Intelligence" | Talks at Google
The Sentient Machine addresses broad existential questions surrounding the coming of AI: Why are we valuable? What can we create in this world? How are we intelligent? What constitutes progress...
A Swarm of Nano Quadrotors
Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed..
Artificial Intelligence Documentary
Artificial Intelligence (AI) is the effort to make computers that think like humans or are as intelligent as humans. Thus, the ultimate goal of research on this topic is to develop a machine that...