AI News: Why This Hitchhiking Robot Might Not Be Cute Enough to Make It Across Canada


We'd better hope there will never be a time when robots can do absolutely everything without any help from humans, because that's when our entire species is likely to become redundant.

Until that time comes, exploiting humans is a valuable skill for robots to learn, because it lets them complete objectives with a minimum of hardware or software.

The tricky part is that hitchBOT isn't offering to help out with gas or anything like that: it's going to be relying entirely on people feeling sorry for it and wanting to help out.

The physical form looks like somebody cobbled together odds and ends to make the robot: pool noodles, a bucket, a cake saver, garden gloves, and Wellies.

That's certainly a non-threatening look, but it's quite different from the appearance of other robots that we've seen successfully manipulate humans through the sheer power of their appearance.

And finally, the way that humans interact with such robots is simple and straightforward: Blabdroid has two buttons and asks simple questions in a childlike voice, while Tweenbot is even simpler, with a written sign asking people to help it get somewhere.

Parents of the Decapitated HitchBOT Say He Will Live On

The three-foot-tall hitchhiking robot, a high-tech Flat Stanley that depended on the kindness of strangers to tour this great planet, passed away due to decapitation and dismemberment by an unknown assailant.

“If we had needed to repair it at any point, we'd send out one of HitchBOT's family members to go out there and fix it.” HitchBOT successfully traveled more than 6,000 miles across Canada and took rides across Germany and the Netherlands.

Sadly, the robot didn’t even make it close to its intended destination of San Francisco, but it was able to cross a few fun things off its pre-trip bucket list: a visit to Times Square and a baseball game—at ridiculously pricey Fenway Park, no less.

Although it couldn’t move around by itself (the “robot” was literally just a bucket festooned with pool noodles for arms and legs, a cake saver for a head, and garden gloves and Wellies for hands and feet), there were a few sensors embedded within HitchBOT.

“It is able to take photos and record video, and could post these to social media as long as they did not violate any privacy rights.” Volunteers in Philadelphia have offered to get HitchBOT back on its welly-rocking feet.

Robots Are Smart—But Can They Understand Us?

In real life, though, machines still struggle mightily with human language. Sure, Siri can answer questions if it recognizes enough of the words in a given query. But asking a robot to do something that it hasn’t been programmed, step-by-step, to do?

The appropriate response would be “What?”—unless it had learned how to process the long string of questions related to that seemingly simple act.

For instance, it knows that a cup is something you can use to hold water, for drinking, or as a way to pour water into something else; a stove is something that can heat things, but also something upon which you can place things.
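The object knowledge described above can be pictured as a simple affordance table. This is only an illustrative sketch, not the actual representation used by the system in the article; the object names and affordance labels here are invented for the example.

```python
# A minimal sketch of object-affordance knowledge: each object maps to
# the set of things it can be used for. All names are illustrative.
AFFORDANCES = {
    "cup":       {"contain", "pour-from", "drink-from"},
    "stove":     {"heat", "place-on"},
    "microwave": {"heat", "contain"},
    "pot":       {"contain", "pour-from", "place-on"},
}

def objects_with(affordance):
    """Return every known object that supports the given affordance."""
    return sorted(obj for obj, caps in AFFORDANCES.items()
                  if affordance in caps)
```

With a table like this, a command such as "heat the water" can be grounded by asking which objects afford heating, e.g. `objects_with("heat")` returns `["microwave", "stove"]`.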

“All robots, such as those in industrial manufacturing, self-driving cars, or assistive robots, need to interact with humans and interpret their imprecise language.”

Because most people tend to give different commands as they lead the robot through the process, the team has been able to collect a large vocabulary related to the same step in the process.

Sometimes, however, the robot is still clueless: When it was told to wait until ice cream became soft, “it couldn’t figure out what to do,” Saxena says.

The robot realized that it first needed to carry the pot over to the tap and fill it with water. It also knows that when instructed to heat something, it can use either the stove or the microwave, depending on which is available.
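The precondition reasoning described here can be sketched in a few lines: before heating, the pot must be filled, and either appliance that is actually present may be chosen. This is a hypothetical toy planner, not the Cornell system's real implementation; the step strings and function name are invented for illustration.

```python
# Toy sketch of precondition reasoning for "heat a pot of water":
# filling the pot is a prerequisite, and any available heat source works.
def plan_heat_water(available_appliances):
    steps = ["carry pot to tap", "fill pot with water"]  # precondition first
    for appliance in ("stove", "microwave"):  # either one can heat
        if appliance in available_appliances:
            steps.append(f"heat pot on {appliance}")
            return steps
    raise RuntimeError("no appliance available for heating")
```

For example, `plan_heat_water({"microwave"})` ends with `"heat pot on microwave"`, while a kitchen with both appliances defaults to the stove.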

Being able to follow directions 64 percent of the time isn’t good enough, he says, particularly since humans understand what they’re told 90 percent of the time.


In July 2016, Dallas police chief David Brown decided to end a violent standoff with Micah Johnson, who had fatally shot five officers and wounded several more, in an unusual way. As a makeshift solution, the police attached a pound of the plastic explosive C4 to a Remotec F-5, a robot designed for bomb disposal. The intentional detonation of the explosive killed Johnson, and was the first deliberate use by American police of a robot armed with deadly force. Keep in mind that this improvised solution was a remotely controlled robot. The robot was not designed to harm people, and it lacked any ability to make independent decisions. Nevertheless, the use of the robot in Dallas comes at a time when many people are predicting that sophisticated police robots will one day become “useful, cheap, and ubiquitous.” Hundreds of robots—most of them made for bomb disposal—are already in the possession of local police departments. Many such robots will soon employ artificial intelligence and will be expected to operate with a degree of independence. The near certain use of these robots by the police raises questions about what sorts of limits and regulations should be imposed on their use.

Consider a future in which robots could supplement or replace some basic police functions. An autonomous police vehicle patrols a neighborhood and briefly detains a person deemed suspicious so that an officer miles away can subject him to questioning. During the detention, the vehicle dispatches a micro drone to obtain a DNA identification sample. Or consider the possibility of thousands of autonomous police drones the size of insects flying through a city without detection to conduct surveillance and carrying nano-chemicals to disable dangerous suspects. Or imagine social patrol robots that dispense advice to the lost, record surveillance data, and call in other robots to assist in unexpectedly hostile situations.

The squad car and the two-way radio provided the police with a geographic range and communication ability far superior to traditional foot patrol. Robots represent the next leap. Robot staff have already attended to guests at the Henn-na Hotel in Japan. Hotel robots deliver towels and coffee to guests, while other robot bartenders serve drinks and still others deliver pizza. Robot journalists create online content for Thomson Reuters and Yahoo. A novel co-written with an artificial intelligence (AI) program advanced to the first stage of a Japanese literary contest. Semiautonomous Reaper unmanned drones carry Hellfire missiles. In the near future, robots will probably serve as our delivery drivers and our garbage collectors. Robots like the Japanese Robear will probably provide eldercare to seniors. Pepper the robot will be an anthropomorphic companion that will provide us with emotional care.

As for policing, Dubai plans to introduce patrol robots with artificial intelligence to its streets by 2020. The Chinese AnBot can independently patrol and, upon a remote operator’s command, use its mechanical arm to grab people as well as deploy its “electrically charged riot control tool.” Equipped with infrared cameras, microphones, and license plate readers, the American Knightscope security robot can patrol independently, for $7.00 an hour. Machines endowed with artificial intelligence and the capacity for independent action will have profound impacts on policing. To be sure, advances in technology have always given police new powers.

Robots, however, may be different in kind. Like the internet, robots raise new issues and challenges to the regulation of policing.

Police robots raise special questions because of the powers we entrust to the police. If the development of military robots provides any guidance, then we can expect some police robots to be artificially intelligent machines capable of using legitimate coercive force against human beings. Police robots will possess these powers just as military robots do, with an important difference: we will not expect police robots to exercise deadly force against a hostile enemy. More importantly, constitutional law and democratic norms constrain the police. How we design, regulate, or even prohibit some uses of police robots requires a regulatory agenda now to address foreseeable problems of the future.

Sophisticated and inexpensive robotics will be attractive to the police just as they have been to the military. The federal government is already spending significant amounts of money and attention on robotics research. Those robotic applications will make their way to policing, and federal monies for robotics will become available to local law enforcement agencies just as they have in the case of recent technologies like body cameras, biometrics, and big data analysis. What is more, police departments will likely argue that they must be prepared for a robotics arms race against criminals and terrorists who could 3D print an army of weaponized micro-drones.

This Article considers the law and policy implications of a future where police robots are sophisticated, cheap, and widespread. In particular, I focus on questions raised by the use of robots able to use coercive force, as opposed to robots with surveillance or other support functions. Drawing upon the rapidly developing body of robotics law scholarship, as well as upon technological advances in military robotics—from which policing will surely borrow—we can anticipate the kinds of regulatory challenges we will face with the future of police robots.

They can provide information, fire upon an enemy, or engage in financial trades. Indeed, there is no single definition of a “robot.”

An emerging consensus has suggested, however, that a robot be defined as any machine that can collect information, process it, and use it to act upon the world. These qualities vary widely from one robot to another. This sense-think-act model might describe anything from a “bot” that makes independent online purchases to an eldercare robot that provides emotional comfort or assists with lifting objects. In appearance, robots could take any form. Some military robots, for instance, may assume the shape of four-legged centaurs to enhance stability. Thus, while any robot processes information it senses and acts upon it, a police robot does so in order to perform a task traditionally assumed by human police officers.

Microsoft quickly disabled its social chatbot Tay after it incorporated online responses and began spouting racist speech and calling for genocide online. Since artificial intelligence would drive the behavior of robots, robots may behave in ways that we cannot predict, or even explain afterwards.

Artificial intelligence by itself is not unique to robotics. We can already feel the impact of big data—applying complex computer algorithms to massive sets of digitized data—in fields like finance, healthcare, and even policing. A number of police departments already use artificial intelligence in software that tries to identify future geographic locations where crime will occur, to predict which individuals may be at highest risk for violent crime commission or victimization, and to identify which among the billions of daily Internet posts amount to suspicious behavior. The police employ these big data tools, however, as guidance for human decisionmaking. Robots with artificial intelligence are distinct because they would be able to translate their analysis of data into physical action.

The ambivalence we feel toward robots might also counsel new legal characterizations particular to them. We may think that a person smashing his lawn mower merely has an ill temper, but that a person abusing a social robot is cruel. If robots are designed to serve as pets, caregivers, or friends, could robots be the victims of criminal assault, abuse, or even rape and murder? In this way, the law may need to extend some legal protections to robots, for some of the same reasons we criminalize animal cruelty. We prohibit the infliction of needless violence against some animals because such behavior reflects something depraved about the perpetrator. Though we may not mistake robots for humans yet, we may soon reach a point where machines endowed with artificial intelligence may need protection from human abuse.

What develops first in the military often finds its way to domestic policing. There has long been a close relationship between both the culture and institutions of the military and law enforcement. The bureaucratic hierarchy in policing—adopting titles like sergeant, lieutenant, and captain—reflects the military’s influence. Not only are many rank-and-file officers former members of the military, many police departments actively recruit from their ranks as well. We even use war metaphors to describe our domestic policing strategies.

This military influence extends to specific tactics and technologies used by the police. While the federal Posse Comitatus law forbids the use of the military for civilian policing, military equipment and training have trickled down to police departments through other means. For instance, the so-called “1033 Program,” part of the National Defense Authorization Security Act of 1997, is the federal initiative that has transferred surplus military equipment such as MRAPs (Mine-Resistant, Ambush-Protected vehicles), grenade launchers, and amphibious tanks to local police departments. While the public may have been shocked at images of police officers wearing combat fatigues and carrying M16s during protests in Ferguson, Missouri in 2014, these officers were little different from those in the hundreds of other police departments that had received military equipment transfers under the 1033 Program. Similarly, police SWAT teams, now common in police departments around the country, were created as specialized paramilitary groups.
Former LAPD chief Daryl Gates, credited with establishing the first SWAT teams, brought in ex-Marines to help train these small groups of officers to act and dress like soldiers in volatile situations.

Imagine police robots that could surround a suspicious person or even halt a speeding car. This might take the form of a swarm of small robots, each less than a pound, designed to incapacitate a person by surrounding him and by using nonlethal force. Consider further that such a swarm would be capable of using some form of coercive force to prevent an unwillingly detained person from flight. A “Multi-Robot Pursuit System” that guides packs of robots to “search for and detect a non-cooperative human”—part of a Pentagon request for contractors—would surely be useful to the police.

Even if this use of robots is still just a concept, we can anticipate the kinds of legal and policy challenges that might arise. First, how much should humans remain “in the loop”—maintain some degree of involvement, in other words—in the use of robot police? Second, how much coercive force should we permit police robots to exercise?

How much should police delegate decisions about force and coercion to their own robots? Take a look at the robotics currently on the market—the idea that we might lose control over them seems almost laughable. No consumer today fears their housekeeping Roomba, and even the most advanced private security robot available now could be disabled by a swift kick. But technology changes fast. The Pentagon’s Autonomous Research Pilot Initiative funds research for algorithms that will “increase a system’s level of autonomy.” Artificial intelligence experts hint that we might see humanlike artificial intelligence within a few decades, not a century. Within our lifetime, robots might not only seem “human” in their basic intelligence, but emotional, perhaps even “superhuman.”

Not every robot will display such capabilities. Today, some machines the military deems “robots”—like the widely used Talon—are controlled completely by remote human operators. Other robots use artificial intelligence to operate independently for limited tasks;

Current military research already supports the development of robots with greater degrees of autonomy. One research goal of the Pentagon is to establish linked autonomous systems so that robots can communicate with one another in a rapidly changing environment. In the military, autonomous drones could scan a combat area and communicate with ground robots to find suspicious places or people.

The possibility that some robots capable of hurting or killing people will be capable of complex, independent action raises concerns, however. In the near future, robots could make decisions in ways that we cannot easily control or understand. The question of human involvement is itself complicated, because artificial intelligence is growing more complex. Assume we require that a human must assess a robot’s determination to use coercive force. Deciding whether a machine with artificial intelligence has made a good decision may not be easy, since the algorithm’s processes may not be totally intelligible to a human operator. Even if we have access to an algorithm’s source code, we still might not know how or why it reached its decision. Engineers at Google, for instance, recently conceded that they do not fully understand how Google’s successful RankBrain AI works, only that it works well. Requiring a human “in the loop” may mean little if how the robot came to its conclusion remains opaque to the human being in charge.

A second reason to be skeptical about any prohibition on the regular use of lethally armed police robots is the future role of robots more generally. In the future, we will be surrounded by robots of all kinds at work (as coworkers), at home (as caregivers), and in leisure (as social or sexual companions). That world will also include robots involved in crime. Just as robots in the military reduce the need for soldiers to put themselves at risk, robots can provide the same safety and anonymity for someone interested in committing crime. An armed robot drug dealer could act as an autonomous vending machine able to defend itself against attack and destroy evidence in the event of discovery.

Once the first crimes are committed by robots armed with lethal force, police in the United States will almost certainly balk at any prohibitions on lethally armed police robots. Such prohibitions may find police support in countries like New Zealand and Britain, where most police are unarmed, as are most civilians. In the United States, however, lethally armed robots may become just another point in the development of weapons that the police will want to use.

The kinds of weapons police robots might adopt are matters of technology and policy, but the circumstances in which robots could use force against human suspects are legal ones. Imagine that a suspect temporarily detained by a police robot decides to start shooting at the robot. If the robot shoots back—and injures or kills the suspect—would that be legally defensible?

The answer will depend in part on how we classify robots under the law. Human police may legally resort to force, even deadly force, in the appropriate circumstances. Claims of excessive force against the police, whether in the context of an arrest, stop, or other detention, are judged by a standard of “objective reasonableness” under the Fourth Amendment. Deadly force may be used in situations where a suspect “poses a threat of serious physical harm, either to the officer or to others.”

Distinguishing between legally permissible and impermissible uses of force by the police is not always easy. The U.S. Supreme Court has avoided requiring any exclusive list of factors in assessing reasonableness.
Rather, the Court has emphasized that the use-of-force analysis requires courts to “slosh” through the “factbound morass of ‘reasonableness.’” Moreover, considerable deference is accorded to the police, as the “calculus of reasonableness must embody allowance for the fact that police officers are often forced to make split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving.” That reasonableness “must be judged from the perspective of a reasonable officer on the scene, rather than with the 20/20 vision of hindsight.” Finally, that assessment asks “whether the officers’ actions are ‘objectively reasonable’ in light of the facts and circumstances confronting them, without regard to their underlying intent or motivation.” The result is a “notoriously opaque and fact-dependent” doctrine that has become difficult for courts to articulate and police to incorporate into their training.

Even if the Fourth Amendment’s use-of-force doctrine were clearer, it still would not translate easily to the world of robotics. First, the high degree of deference given to police in the use-of-force context takes into account the fallible nature of human judgment in volatile situations with high degrees of stress and emotion. As a result, police decisions to use force, even deadly force, do not have to be correct, only objectively reasonable. Artificially intelligent machines capable of coercive force do not feel fear, disgust, or anger, and do not take offense. In this respect, robots might be more reliable than human beings in making split-second decisions about whether to draw a weapon or use a stun gun. Does that mean we should expect a narrower set of circumstances for robotic reasonableness than we do for humans?

We can see these questions of technology and policing already being put to the test with predictive policing software. Defined broadly, predictive policing applies artificial intelligence to data in order to detect patterns of crime. Using the vast amount of digitized information available today, predictive policing programs try to predict criminal risks in individuals or in geographic locations. In the case of locational prediction, predictive policing programs—using historical crime data and other factors—identify geographic locations where crime is more likely to occur in the future. Police departments can use that information to redistribute patrol resources. Cities including Los Angeles, New York, and Seattle have purchased predictive software programs of this type. In the future, predictive policing programs may further guide the allocation of police resources and hiring.
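The locational prediction described above can be reduced, in its crudest form, to binning historical incidents into grid cells and ranking the cells. This is a deliberately simplified sketch; commercial predictive-policing products use far richer models, and the function name, grid scheme, and data here are invented for illustration.

```python
# Crude sketch of locational crime prediction: count historical
# incidents per grid cell, then rank cells by count. Illustrative only.
from collections import Counter

def hot_spots(incidents, cell_size=0.01, top_n=3):
    """incidents: iterable of (lat, lon) pairs.
    Returns the top_n grid cells with the most historical incidents."""
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return [cell for cell, _ in cells.most_common(top_n)]
```

A department could then concentrate patrols on the highest-ranked cells, which also illustrates the feedback-loop worry: more patrols in a cell generate more recorded incidents there.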

In its discussion of new police technologies, the President’s Task Force on 21st Century Policing acknowledged in its 2015 report that body cameras could “build community trust and legitimacy” with appropriate regulations. That same year, the Department of Justice made more than $20 million available to local police departments to purchase body cameras.

Eager to present them as tools of accountability, police departments around the country have embraced the adoption of body cameras. Yet many police departments have adopted body cameras without detailed policies on their use, public access, and data storage. Those body camera policies that do exist can vary considerably: Seattle posted all of its pilot project body camera footage on YouTube, while other departments have declined to release footage unless required by court order.

The story of police body camera adoption thus far has been: use first, regulate later. Without planning, the use of police robots will develop in the same way, with more serious consequences. Should the arming of police robots, for example, be left to local departments?

Instead, uniform national policies should dictate the regulation of robotic policing. Even if robots of the sort described here have yet to arrive, we can anticipate the sorts of questions that robotics will bring to policing. The degree of human involvement in robotic decisionmaking, whether and how to arm police robots, and how to evaluate the legal responsibility of a police robot: these are all normative judgments about law and policy. In the absence of uniform policies, we are likely to address these questions in a piecemeal fashion: a mix of unenforceable internal policies, hesitant state legislatures, possibly conflicting federal agency decisions, and court cases in which judges cannot agree as to the appropriate characterization of robots.

We could begin with a national body to develop robotics expertise that could advise federal, state, and local lawmakers. A “federal robotics commission,” for instance, could identify important legal and policy questions raised by robotics in a variety of areas—including policing—and make specific substantive recommendations.

More concretely, the federal government can wield its considerable resources to influence how local police departments use robots. While the federal government cannot force state and local police to adopt particular policies, the Department of Justice can and has influenced the adoption of new strategies and technologies through the use of federal funding. For example, the widespread interest in and adoption of body-worn cameras by local police departments in 2015 was prompted in part by the availability of federal funding for body camera purchases.
Likewise, the Department of Justice offers funding to local police departments to purchase predictive policing systems. The federal government could condition the receipt of federal funds upon the adoption of regulations by grantees. Police departments could receive funding for robots so long as they, for instance, did not enable the robots to use deadly force without specific guidelines already in place. No police department would be forced to accept a robot under these conditions, but every department that sought federal funding would be obliged to follow them. A top-down form of strong encouragement by the federal government could be effective in setting uniform policies for police robots.

Robots - Full Episode: 1420

The futurists were right. What humans have done for generations, robots are taking over. Robotics has dominated industry for years, from mining to manufacturing, but now robots are moving in...