AI News: Robots Could Act as Ethical Mediators Between Patients and Caregivers

Robots Could Act as Ethical Mediators Between Patients and Caregivers

Most of the discussion around robots and ethics lately has been about whether autonomous cars will decide to run over the nearest kitten or a slightly farther away basket full of puppies.

Whether or not robots can make ethical decisions when presented with novel situations is something that lots and lots of people are still working on, but it’s much easier for robots to be ethical in situations where the rules are a little bit clearer, and also when there is very little chance of running over cute animals.

At ICRA last month, researchers at Georgia Tech presented a paper on “an intervening ethical governor for a robot mediator in patient-caregiver relationship.” The idea is that robots will become part of our daily lives, and they are much, much better than humans at paying close and careful attention to things, without getting distracted or bored, forever.

It was a small study (nine participants, with an average age of 71), but at this stage the intervening ethical governor (IEG) is still a proof of concept, so the researchers were mostly interested in qualitative feedback. Based on the responses from the study participants, the researchers were able to highlight some important takeaways:

Safety is most important. “I think anything to protect the patient is a good thing.” “That’s a high value.

That’s appropriate there, because it gives real information, not just commanding.”

The robot should not command or judge. “I think that [commanding] puts the robot in the spot of being in a judgment … I think it should be more asking such as ‘how can I help you?’ … But the robot was judging the patient.”

Ethics of healthcare robotics: Towards responsible research and innovation

Informed by their experience with “embedded” ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies “internal” and “external” forms of dialogical research and innovation, reflects on the possibilities and limitations of these forms of ethical–technological innovation, and explores a number of ways in which they can be supported by policy at national and supranational levels.

The Encyclopedia of Human-Computer Interaction, 2nd Ed.

This chapter introduces and critically reflects upon some key challenges and open issues in Human-Robot Interaction (HRI) research.

Furthermore, I will argue that due to the artificiality of robots, we need to be careful in making assumptions about the 'naturalness' of HRI and question the widespread assumption that humanoid robots should be the ultimate goal in designing successful HRI.

In addition to building robots for the purpose of providing services for and on-behalf of people, a different direction in HRI is introduced, namely to use robots as social mediators between people.

Human-Robot Interaction (HRI) is a relatively young discipline that has attracted a lot of attention over the past few years due to the increasing availability of complex robots and people's exposure to such robots in their daily lives.

Also, robots are increasingly being developed for real-world application areas, such as rehabilitation, eldercare, robot-assisted therapy, and other assistive or educational applications.

The chapter will not delve into technical details but focus on interdisciplinary aspects of this research domain in order to inspire innovative new research that goes beyond traditional boundaries of established disciplines.

Roboticists and engineers, for example, build service robots that should assist people in their homes or at work, and they may join this field in order to find out how to handle situations when these robots need to interact with people, in order to increase the robots' efficiency.

Artificial Intelligence and Cognitive Science researchers may join this field with the motivation to understand and develop complex intelligent systems, using robots as embodied instantiations and testbeds of such systems.

Last but not least, a number of people are interested in studying the interaction of people and robots, how people perceive different types and behaviours of robots, how they perceive social cues or different robot embodiments, etc.

HRI research may use commercially available and already fully programmed robots, or research prototypes showing few behaviours or being controlled remotely (via the Wizard-of-Oz approach, whereby a human operator, unknown to the participants, controls the robot), in order to create very constrained and controlled experimental conditions.

Unfortunately this area of 'user studies', which is methodologically heavily influenced by experimental psychology and human-computer interaction (HCI) research, is often narrowly equated with the field of 'HRI'.

The characterization of the fundamental HRI problem given above focuses on the issues of understanding what happens between robots and people, and how these interactions can be shaped.

Many technical definitions are available concerning a robot's motor, sensory and cognitive functionalities, but little is specified about the robot's appearance, behaviour and interaction with people.

Behaviours and appearances of robots have dramatically changed since the early 1990s, and they continue to change — new robots appearing on the market, other robots becoming obsolete.

The design range of robot appearances is huge, ranging from mechanoid (mechanical-looking) to zoomorphic (animal-looking robots) to humanoid (human-like) machines as well as android robots at the extreme end of human-likeness.

Only a deep investigation of both aspects will eventually illuminate the elusive 'I', the interaction that emerges when we put people and interactive robots in a shared context.

In my perspective, the key challenge and characterization of HRI can be phrased as follows: 'HRI is the science of studying people's behaviour and attitudes towards robots in relationship to the physical, technological and interactive features of the robots, with the goal to develop robots that facilitate the emergence of human-robot interactions that are at the same time efficient (according to the original requirements of their envisaged area of use), but are also acceptable to people, and meet the social and emotional needs of their individual users as well as respecting human values'.

Let us consider a thought experiment and assume our research question is to investigate how a cylindrically shaped mobile robot should approach a seated person and how the robot's behaviour and appearance influences people's reactions.

The robot will be programmed to carry a bottle of water, approach the person from a certain distance, stop at a certain distance in the vicinity of the person, orient its front (or head) towards the person and say 'Would you like a drink?'.

Despite these gross simplifications, we will end up with 3^7 = 2187 combinations (seven robot features, each with three possible values), and thus 2187 possible experimental conditions to which we may expose participants.

Each session, if kept to a very minimal scenario, will take at least 15 minutes, plus another 15 minutes for the introduction, debriefing, questionnaires/interviews, as well as signing of consent forms etc.

Since people's opinions of and behaviours towards robots are likely to change in long-term interactions, each person should be exposed to the same condition 5 times, which gives 10935 different sessions.

Also, the participants need to be chosen carefully; ideally one would also consider possible age and gender differences, as well as personality characteristics and other individual differences — which means repeating the experiment with different groups of participants.

Regardless of whether we expose one participant to all conditions, or we choose different participants for each condition, getting sufficient data for meaningful statistical analysis will clearly be impractical.

We end up with about 328050 * X minutes required for the experiment (10935 sessions of 30 minutes each, for X participants per condition), not considering situations where the experiment has to be interrupted due to a system failure, rescheduling of appointments for participants, etc.
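To make this arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python reproducing the numbers above (the conversion to researcher-years at the end is an added illustrative assumption, not a figure from the chapter):

    # Cost of the thought experiment described above.
    values_per_feature = 3
    features = 7
    conditions = values_per_feature ** features     # 3^7 = 2187
    repetitions = 5                                 # repeated exposure per condition
    sessions = conditions * repetitions             # 10935
    minutes_per_session = 15 + 15                   # scenario + briefing/debriefing
    total_minutes = sessions * minutes_per_session  # 328050 per participant group
    print(conditions, sessions, total_minutes)      # 2187 10935 328050
    # Roughly how much researcher time is that, at 8h/day, 250 days/year?
    print(round(total_minutes / 60 / 8 / 250, 1), "researcher-years")  # 2.7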

Clearly, running such an experiment is impractical, and not desirable: given that only minimal conditions are being addressed, results from this experiment would necessarily be very limited, and the effort would certainly not be worthwhile.

The results would indicate how robot height influences people's preferred approach distances, but only in a very limited sense, since all other features would have to be held constant.

Indeed, quantitative methods used in these domains often provide valuable guidelines and sets of established research methods that are used to design and evaluate HRI experiments, typically focusing on quantitative, statistical methods requiring large-scale experiments.

Such qualitative methods may include in-depth, long-term case studies where individual participants are exposed to robots over an extensive period of time.

The purpose of such studies is more focused on the actual meaning of the interaction, the experience of the participants, any behavioural changes that may occur and changes in participants' attitudes towards the robots or the interaction.

Studies such as Rossano et al. (2013) analyse in depth the detailed nature of the interactions and how interaction partners respond and attend to each other and coordinate their actions.

In the field of assistive technology and rehabilitation robotics, where researchers develop robotic systems for specific user groups, control conditions with different user groups are usually not required: if one develops systems to assist or rehabilitate people with motor impairments after a stroke, designs aids to help visually impaired people, or develops robotic technology to help children with autism learn about social behaviour and communication, contrasting their use of a robotic system with how healthy/neurotypical people may use the same system does not make much sense.

Thus, in this domain, control groups only make sense if those systems are meant to be used for different target user groups, and so comparative studies can highlight how each of them would use and could benefit (or not) from such a system.

However, most assistive, rehabilitative systems are especially designed for people with special needs, in which case control conditions with different user groups are not necessarily useful.

Note, an important part of control conditions in assistive technology is to test different systems or different versions of the same system in different experimental conditions.

Such comparisons are important since they a) yield data to further improve the system, and b) can highlight the added value of an assistive system compared to other conventional systems or approaches.

Such artifacts would be tools in the research on the nature of the disorder or disability, rather than an assistive tool built to assist the patients — which means it would also have to take into consideration the patient's individual differences, likes and dislikes and preferences in the context of using the tool.

Developing complex robots for human-robot interaction requires a substantial amount of resources in terms of researchers, equipment, know-how and funding, and it is not uncommon that the development of such a robot takes years until it is fully functioning.

The iCub was developed as a research platform for developmental and cognitive robotics by a large consortium including several European partners developing the hardware and software of the robot.

Results of the IROMEC project do not only include the robotic platform, but also a framework for developing scenarios for robot-assisted play (Robins et al., 2010), and a set of 12 detailed play scenarios that the Robot-Assisted Therapy (RAT) community can use according to specific developmental and educational objectives for each child (Robins et al., 2012).

However, time ran out at the end of the project to do a second design cycle in order to modify the platform based on trials with the targeted end-users.

In the case of the iCub the robot was developed initially as a new cognitive systems research robotics platform, so no concrete end users were envisaged.

Figure 38.1 A-B: The Care-O-bot® 3 robot in the UH Robot House, investigating robot assistance for elderly users as part of the ACCOMPANY project (2011, ongoing).

Involving users in the design, and ensuring that the robot to be developed fulfills its targeted roles and functions and provides a positive user experience, remains a difficult task (Marti and Bannon, 2009).

Figure 38.4: Modified from Dautenhahn (2007b), sketching a typical development time line of HRI robots and showing different experimental paradigms.

Note, there are typically several iterations in the development process (not shown in the diagram), since systems may be improved after feedback from user studies with the complete prototype.

Resource efficiency means that experiments need to yield relevant results quickly and cheaply (in terms of effort, equipment required, person months etc.).

Outcome-relative fidelity means that outcomes of the study must be sufficiently trustworthy and accurate to support potentially costly design decisions taken based on the results (Derbinsky et al., 2013).

Even before a robot prototype exists, in order to support the initial phase of planning and specification of the system, mock-up models might be used.

Once a system's main hardware and basic control software has been developed, and safety standards are met, first interaction studies with participants may begin.

However, having one or two researchers remotely controlling the robot's movements and/or speech can be cognitively demanding and impractical in situations where the goal is that the robot eventually should operate autonomously.

For example, in a care, therapy or educational context, remotely controlling a robot requires another researcher and/or care staff member to be available (cf. Figure 38.6).

Figure 38.6: a) Two researchers controlling movement and speech of a robot used in a (simulated) home companion environment (b).

28 subjects interacted with the robot in physical assistance tasks (c), and they also had to negotiate space with the robot (d), e) layout of experimental area for WoZ study.

Once WoZ experiments are technically feasible, video-based methods can be applied whereby typically groups of participants are shown videos of the robots interacting with people and their environments.

Previous studies compared live HRI and video-based HRI and found comparable results in a setting where a robot approached a person (Woods et al., 2006a,b).

Another prototyping method that has provided promising results is the Theatrical Robot (TR) method, which can be used in instances where a robot is not yet available, but where live human-robot interaction studies are desirable.

The Theatrical Robot describes a person (a professional such as an actor, or mime artist) dressed up as a robot and behaving according to a specific and pre-scripted robotic behaviour repertoire.

This method has been used successfully in studies (2004) which tried to find out how children with autism react to life-sized robots, and how this reaction depends on whether the robot looks like a person or looks like a robot.

The small group of four children studied showed strong initial preferences for the Theatrical Robot in its robotic appearance, compared to the Theatrical Robot showing the same (robotic) behaviour repertoire but dressed as a human being; see example results in Figure 38.8.

However, the TR can also be used as a valuable method on its own, in terms of investigating how people react to other people depending on their appearance, or how people would react to a robot that looks and behaves very human-like.

Building robots that truly look and behave like human beings is still a future goal; although android robots can simulate appearance, they lack human-like movements, behaviour and cognition (MacDorman and Ishiguro, 2006).

Figure 38.8: Using the Theatrical Robot paradigm in a study that investigated children with autism's responses to a human-sized robot either dressed as a robot (plain appearance, a) or as a human person (human appearance), showing identical behaviour in both conditions.

For example, in the area of developing home companion robots, researchers study the use of robots for different types of assistance: physical tasks (e.g. fetch-and-carry), cognitive tasks such as reminding users of appointments, events, or the need to take medicine (the robot as a cognitive prosthetic), and social tasks (encouraging people to socialize).

Implementing such scenarios presents again a huge developmental effort, in particular when the robot's behaviour should be autonomous, and not fully scripted, but adapt to users' individual preferences and their daily life schedule.

Figure 38.9 A-B-C: a) SISHRI methodological approach (Derbinsky et al., 2013) — situated, real-time HRI with a simulated robot to prototype scenarios, b) example of simulation of interaction shown on the tablet used by the participant.

GoTo is used to simulate the time that the robot will take to travel from one position to another (picture on the right); ToDo was introduced to expand the functionality of this prototype: its activities relate to the user, rather than the robot, and can be logged in the system.

In this example, the user can send the robot from the kitchen (current robot position) to any other place the user selects from the list (kitchen, couch, desk, drawer).
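As a rough illustration of this kind of prototyping, a simulated GoTo command can be sketched in a few lines of Python; the locations and travel times below are assumptions for the example, not values from the SISHRI system:

    import time

    # Simulated travel times between named locations (illustrative values).
    TRAVEL_SECONDS = {
        ("kitchen", "couch"): 8,
        ("kitchen", "desk"): 12,
        ("kitchen", "drawer"): 5,
    }

    def go_to(current, target):
        # Simulate the robot travelling between two named locations.
        seconds = TRAVEL_SECONDS.get((current, target), 10)  # default estimate
        print(f"Robot moving from {current} to {target} (~{seconds}s)...")
        time.sleep(seconds)  # stands in for real travel time
        print(f"Robot arrived at {target}.")
        return target

    position = "kitchen"
    position = go_to(position, "couch")

Simulating the travel time, rather than skipping it, matters here: it preserves the real-time feel of the interaction that the participant experiences on the tablet.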

a) KASPAR, the minimally expressive robot developed at the University of Hertfordshire, used for HRI studies including robot-assisted therapy for children with autism, b) Roomba (iRobot), a vacuum-cleaning robot shown operating in the University of Hertfordshire Robot House, c) Autom, the weight loss coach.

Credit: Intuitive Automata, d) Pleo robot (Ugobe), designed as a 'care-receiving' robot encouraging people to develop a relationship with it, e) Robosapien toy robot (WowWee), f) Design space — niche space — resources; see main text for explanation.

The present reality of robotics research is that robots are far from showing any truly human-like abilities in terms of physical activities, cognition or social abilities (in terms of flexibility, 'scaling up' of abilities, 'common sense', graceful degradation of competencies etc.).

In contrast to the companion paradigm, where the robot's key function is to take care of the human's needs, in the caretaker paradigm it is the person's duty to take care of the 'immature' robot.

We may enjoy this interaction, in the way we enjoy role play or immersing ourselves in imaginary worlds, but one needs to be clear about the inherently mechanical nature of the interaction.

As Sherry Turkle has pointed out, robots as 'relational artifacts' that are designed to encourage people to develop a relationship with them, can lead to misunderstandings concerning the authenticity of the interaction (Turkle, 2007).

If children grow up with a robot companion as their main friend, whom they interact with for several hours each day, they will learn that they can just switch it off or lock it into a cupboard whenever it is annoying or challenging them.

Similar issues are discussed in terms of children's possible addiction to computer games and game characters and to what extent these may have a negative impact on their social and moral development.

At present such questions cannot be answered, they will require long-term studies into how people interact with robots, over years or decades — and such results are difficult to obtain and may be ethically undesirable.

The answers are unlikely to be 'black and white' — similar to the question of whether computer games are beneficial for children's cognitive, academic and social development, where answers are inconclusive (Griffiths, 2002).

A widespread assumption within the field of HRI is that 'good' interaction with a robot must reflect natural (human-human) interaction and communication as closely as possible in order to ease people's need to interpret the robot's behaviour.

Indeed, people's face-to-face interactions are highly dynamic and multi-modal — involving a variety of gestures, language (content as well as prosody are important), body posture, facial expressions, eye gaze, in some contexts tactile interactions, etc.

This has led to intensive research into how robots can produce and understand gestures, how they can understand when being spoken to and respond correspondingly, and how robots can use body posture, eye gaze and other cues to regulate the interaction; cognitive architectures are being developed to provide robots with natural social behaviour and communicative skills.

While we discuss below in more detail that the goal of human-like robots needs to be reflected upon critically, the fundamental assumption of the existence of 'natural' human behaviour is also problematic.

Is a person behaving naturally in his own home, when playing with his children, talking to his parents, going to a job interview, meeting colleagues, giving a presentation at a conference?

If 'natural' is meant to be 'biologically realistic' then the argument makes sense — a 'natural gesture' would then be a gesture using a biological motion profile and an arm that is faithfully modeling human arm morphology.

Thus, for humans, behaving 'naturally' is more than having a given or learnt behaviour repertoire and making rational decisions in any one situation on how to behave.

We are 'creating' these behaviours, reconstructing them, taking into consideration the particular context, interaction histories, etc., we are creating behaviour consistent with our 'narrative self'.

Robots do not have a genuine evolutionary history, their bodies and their behaviour (including gestures etc.) have not evolved over many years as an adaptive response to challenges in the environment.

For example, the shape of our human arms and hands has very good 'reasons': it goes back to the design of the forelimbs of our vertebrate ancestors, used first for swimming, then for walking and climbing as tetrapods; later, bipedal postures freed the hands to grasp and manipulate objects, to use tools, or to communicate via gestures.

Previously, I proposed different roles of robots in human society (Dautenhahn, 2003). Dautenhahn et al. (2005) investigated people's opinions on viewing robots as friends, assistants or butlers.

Goodrich and Schultz (2007) have proposed roles for a robot as a mentor for humans or information consumer whereby a human uses information provided by a robot.

For humans and some other biological species, social learning is a powerful tool for learning about the world and each other and for teaching and developing culture, and it remains a very interesting challenge for future generations of robots learning in human-inhabited environments (Nehaniv & Dautenhahn, 2007).

Ultimately, robots that can learn flexible, efficient, and socially appropriate behaviours, ones that enhance their own skills and performance while remaining acceptable to the humans interacting with them, will have to develop suitable levels of social intelligence (Dautenhahn, 1994, 1995, 2007a).

Recently, a number of projects worldwide have investigated the use of robots in elder-care in order to allow users to live independently in their homes for as long as possible.

Success in this research domain will depend on acceptability, not only by the primary users of such systems (elderly people) but also by other users (family, friends, neighbours) including formal and informal carers.

Figure 38.11: Population projections for the 27 EU Member States, showing an increase of people aged 65 and above from 17.57% to 29.54%, and a decrease of people aged between 15-64 from 67.01% to 57.42%.

So care providers may show a great interest in using robots for social company, and elderly people might welcome such robots as a means to combat their loneliness.

Where care tasks are non-social, e.g. cleaning, feeding, or washing elderly people, robots may be designed to fulfill those tasks, potentially freeing up care staff to provide social contact with genuine, meaningful interactions.

Robots that can perform such physical care tasks are still under development (cf. the RI-MAN robot and Yamazaki et al., 2012), while it is well within our reach to build robots that provide some basic version of company and social interaction, 'relational artifacts' in Turkle's sense.

If one day robots are able to provide both social and non-social aspects of care, will human care staff become obsolete due to the need of cutting costs in elder-care?

Or will robots be used to do the routine work and the time of human carers will be freed to engage with elderly residents in meaningful and emotionally satisfying ways?

The latter option would not only be more successful in providing efficient and at the same time humane care, it would also acknowledge our biological roots, emotional needs, and evolutionary history—as a species, our social skills are the one domain where we typically possess our greatest expertise, while our 'technical/mechanical' expertise can be replaced more easily by machines.

Based on a Pioneer mobile platform (left), a socially interactive and expressive robot was developed for the study of assistance scenarios for a robot companion in a home context.

It consists of a mobile base, a touch-screen user interface and diffuse LED display panels to provide expressive multi-coloured light signals to the user.

The non-verbal expressive behaviours have been inspired by expressive behaviour that dogs display in human-dog interaction in similar scenarios as those used in the Robot House, in collaboration with ELTE in Hungary.

a) early Sunflower prototype, b, c) Sunflower, d) HRI home assistance scenarios with an early Sunflower prototype in comparison to dog-owner interaction in a comparable scenario (Syrdal et al., 2010).

Autism is a lifelong developmental disorder characterized by impairments in communication, social interaction and imagination and fantasy (often referred to as the triad of impairments).

While in 1979 Weir and Emanuel had encouraging results with one child with autism using a button box to control a LOGO Turtle from a distance, the use of interactive, social robots as therapeutic tools was first introduced by the present author (Dautenhahn, 1999) as part of the Aurora project (1998, ongoing).

The use of robots for therapeutic or diagnostic applications has rapidly grown over the past few years; see recent review articles which show the breadth of this research field and the number of active research groups (Diehl et al., 2012; Scassellati et al.).

Earlier work (2001) and the present author (Dautenhahn, 2003) gave examples of trials with pairs of children who started interacting with each other in a scenario where they had to share an autonomous, mobile robot that they could play with.

Similarly, recent work with the minimally expressive humanoid robot KASPAR discusses the robot's role as a salient object that mediates and encourages interaction between the children and co-present adults (Robins et al., 2009).

A proof-of-concept study (2009) showed how an AIBO robot can adapt to different interaction styles of children with autism playing with it; see also a recent article by Bekele et al. (2013).

Social robots are usually equipped with tactile sensors, in order to encourage play and allow the robot to respond to human touch.

Using tactile HRI to support human-human communication over distance illustrates the role a robot could play in order to mediate human contact (Mueller et al., 2005).

To illustrate this research direction, Fotios Papadopoulos has investigated how autonomous AIBO robots (Sony) could mediate distant communication between two people engaging in online game activities and interaction scenarios.

Here, the long-term goal is to develop robots as social mediators that can assist human-human communication in remote interaction scenarios, in order to support, for example, friends and family members who are temporarily or long term prevented from face-to-face interaction.

One study compared how people communicate with each other through a communication system named AiBone involving video communication and interaction with and through an AIBO robot with a setting not involving any robots and using standard computer interfaces instead (Papadopoulos et al., 2012).

These results show a careful balance and trade-off between efficiency of interaction and communication modes, and their social relevance in terms of mediating human-human contact and supporting relationships.

Using the remote interactive story-telling system participants could collaboratively create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two remotely located people.

Results suggest user preferences towards the robot mode, thus supporting the notion that physical robots in the role of social mediators, affording touch-based human-robot interaction and embedded in a remote human-human communication scenario, may improve communication and interaction between people (Papadopoulos, 2012b).

The use of robots as social mediators is different from the approach of considering robots as 'permanent' tools or companions — a mediator is no longer needed once mediation has been successful.

Previously, I described this tendency as the 'life-like agent hypothesis' (Dautenhahn, 1999): 'Artificial social agents (robotic or software) which are supposed to interact with humans are most successfully designed by imitating life.'

This comprises both 'shallow' approaches focusing on the presentation and believability of the agents, as well as 'deep' architectures which attempt to faithfully model animal cognition and intelligence.

Such life-like agents are desirable since Argument (3) presented above easily translates to robotic agents and companions: these may be used to study human and animal behaviour, cognition and development (MacDorman and Ishiguro, 2006).

Clearly, the humanoid robot is an exciting area of research, not only for those researchers interested in the technological aspects but also, importantly, for those interested in developing robots with human-like cognition.

When trying to achieve human-like cognition, it is best to choose a humanoid platform, due to the constraints and interdependencies of animal minds and bodies (Pfeifer, 2007).

However, arguments (1) and (2) are problematic, for the following reasons. Firstly, humans have a natural tendency to anthropomorphize the world and to engage even with non-animate objects (such as robots) in a social manner.

Human-like hands and fingers suggest that the robot is able to manipulate objects in the same way humans can; a head with eyes suggests that the robot has advanced sensory abilities.

More generally, a human-like form and human-like behaviour is associated with human-level intelligence and general knowledge, as well as human-like social, communicative and empathic understanding.

Due to limitations both in robotics technology and in our understanding of how to create human-like levels of intelligence and cognition, in interaction with a robot people quickly realize the robot's limitations, which can cause frustration and disappointment.

Some users may attribute personality to a vacuuming robot such as the Roomba, but its functional shape clearly signifies its robotic nature, and indeed some owners have been shown to treat the robot as a social being (Sung et al., 2007, 2008).

Thus, rather than trying to use a humanoid robot operating a vacuum cleaner in a human-like manner (which is very hard to implement), an alternative efficient and acceptable solution has been found.

Building humanoids which operate and behave in a human-like manner is technologically highly challenging and costly in terms of time and effort required, and it is unclear when such human-likeness may be achieved (if ever) in the future.

The current tendency to focus on humanoid robots in HRI and robotics may be driven by scientific curiosity, but it is advisable to consider the whole design space of robots, and how the robot's design may be very suitable for particular tasks or application areas.

For tasks that do involve a significant amount of human-robot interaction, some humanoid characteristics may add to the robot's acceptance and success as an interactive machine, and may thus be justified better.

Compare a review by De Santis et al. (2008) that identifies different approaches to human-robot safety, ranging from design, sensors, software, planning and biomimetics to control solutions.

These include the analysis and design of safety aspects, the design of safety for robots via the development of specific mechanical and actuator systems or by exploiting new materials, the design of low- and medium-level controllers for safe compliance via direct force compliance, and the development of high-level cognition, control and decision-making aspects (Herrmann and Melhuish, 2010).

The latter is likely to change in long-term interactions when a user gets used to interactions with the robot and understands better its functionalities and limitations, which allows the user to make better predictions about the robot's behaviour.

There are two main aspects to the use of social cues for enhancing safety with robots: 1) The robot can express social cues and show behaviour which intuitively informs the user that a potentially hazardous action by the robot is imminent or under way.

2) Alternatively, the robot can actively monitor the user's activities (and/or use information from its interaction history with the user to make predictions about the user's behaviour and activities), and modify its own actions accordingly to avoid unsafe interactions.

Point 2) above is significantly more technically demanding of robot control and sensor systems, but both approaches have the potential to facilitate safe working of a robot in a human-oriented environment.

Human and robot both being 'safety-aware', and collaboratively trying to avoid unsafe situations by mutually being attentive to and adapting to each other's current or predicted actions, would be the more 'natural' solution, similar to how people coordinate their actions.
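As a rough sketch of approach 2) above, the robot's speed can be capped as a function of its predicted separation from the user; the thresholds and the one-step prediction rule below are illustrative assumptions, not a published controller:

    from dataclasses import dataclass

    @dataclass
    class State:
        robot_speed: float        # commanded speed, m/s
        distance_to_user: float   # current separation, m
        user_approaching: bool    # e.g. estimated from recent position history

    def safe_speed(state):
        # Reduce speed as the (predicted) separation from the user shrinks.
        predicted = state.distance_to_user - (0.5 if state.user_approaching else 0.0)
        if predicted < 0.5:   # too close: stop entirely
            return 0.0
        if predicted < 1.5:   # near: creep slowly
            return min(state.robot_speed, 0.1)
        return state.robot_speed  # clear: keep commanded speed

    print(safe_speed(State(robot_speed=0.8, distance_to_user=1.2, user_approaching=True)))  # 0.1

Monitoring the user's movement, rather than only the current distance, is what makes the behaviour predictive instead of purely reactive.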

Note, humanoid robots are not necessarily safer than other robots, as is sometimes implied. While that assumption may appear intuitive to non-roboticists, a human-like shape does not necessarily help in predicting the behaviour of a robot.

In many cases a non-human like machine, which people have little prior expectations of, will make people act in an instinctively cautious manner around the machine, similar to the caution people apply when encountering unknown and potentially dangerous situations.

Non-humanoid robots decrease the expectations in terms of the skills people attribute to them, and they may elicit cautious behaviour in people who will carefully assess the robot's abilities and how one can safely interact with it, rather than assuming that it 'naturally' has human-like abilities and is safe to interact with.

Elder-care robots that are currently under investigation will probably only be mass deployed when today's young people have reached retirement age—a generation used to electronic devices, the internet and World-Wide-Web, gadgets and social networking on an unprecedented scale.

But even today's participants in HRI studies are not 'naive' in a strict sense—they come with particular attitudes towards technology in general and often robots in particular, even when they have never encountered one face-to-face.

They may respond to robots with some biological reactions typically shown towards humans, but this reaction may be influenced by top-down mechanisms of their beliefs about the system (Shen et al.).

HRI is a moving target, and so, as HRI researchers, we need to keep moving, too—being flexible and open-minded about the very foundations of our domain and the nature of robots, and being open-minded towards creative solutions to robot design and methodological challenges.

I would also like to thank the excellent research team in the Adaptive Systems research group at the University of Hertfordshire who created some of the research work cited in this article which has greatly shaped and changed my ideas on social robots and human-robot interaction over the past 13 years.

Dautenhahn, Kerstin and Robins, Ben (2006): The role of the experimenter in HRI research - a case study evaluation of children with autism interacting with a robotic toy.

… and Zhao, Yong (2011): Internet use, videogame playing and cell phone use as predictors of children's body mass index (BMI), body weight, academic performance, and social and overall self-esteem.

Robins, Ben, Ferrari, Ester, Dautenhahn, Kerstin, Kronreif, Gernot, Prazak-Aram, Barbara, Gelderblom, Gert-Jan, Bernd, Tanja, Caprino, Francesca, Laudanna, Elena and Marti, Patrizia (2010): Human-centred design methods: Developing scenarios for robot assisted play informed by user panels and field trials.

How Humans Respond to Robots: Building Public Policy through Good Design

Historically, robotics in industry meant automation, a field that asks how machines perform more effectively than humans.

One current example of this is the ongoing campaign by Human Rights Watch for an international treaty to ban military robots with autonomous lethal firing power—to ensure that a human being remain “in the loop” in any lethal decision.

From driverless cars to semi-autonomous medical devices to things we have not even imagined yet, good decisions guiding the development of human-robotic partnerships can help avoid unnecessary policy friction over promising new technologies and help maximize human benefit.

But after 12 years in robotics, with researchers celebrating when we manage to get robots to enact the simplest of humanlike behaviors, it has become clear to me how complex human actions are, and how impressive human capabilities are, from our eyesight to our emotive communication.

Their ability to search large amounts of data within those constraints, their design potential for unique sensing or physical capabilities–like taking a photograph or lifting a heavy object–and their ability to loop us into remote information and communications, are all examples of things we could not do without help.

Neuroscientists have discovered that one key to our attribution of agency is goal-directed motion.1 Heider and Simmel tested this theory with animations of simple shapes, and subjects easily attributed character-hood and thought to moving triangles.2 To help understand what distinguishes object behavior from agent behavior, imagine a falling leaf that weaves back and forth in the air following the laws of physics.

If a butterfly appears in the scene, however, and the leaf suddenly moves in close proximity to the butterfly, maintaining that proximity even as the butterfly continues to move, we would immediately say the leaf had “seen” the butterfly, and that the leaf was “following” it.
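The distinction can be made concrete with a toy sketch: if "following" is operationalized as maintaining proximity to a moving target over time, agent-like motion is easy to separate from physics-like drift. The threshold and trajectories below are illustrative assumptions:

    # Motion that maintains proximity to a moving target reads as "following".
    def is_following(follower, target, threshold=1.0):
        # True if the follower stays within `threshold` of the target over time.
        return all(abs(f - t) <= threshold for f, t in zip(follower, target))

    butterfly  = [0.0, 1.0, 2.5, 4.0, 5.0]    # target positions over time (1-D)
    leaf_drift = [0.0, -1.5, 0.5, -2.0, 1.0]  # physics-like weaving: not following
    leaf_chase = [0.2, 0.8, 2.9, 3.6, 5.3]    # proximity maintained: "agent-like"

    print(is_following(leaf_drift, butterfly))  # False
    print(is_following(leaf_chase, butterfly))  # True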

In 2008, Cory Kidd completed a study with a robot intended to aid in fitness and weight loss goals, by providing a social presence with which study participants tracked their routines.4 The robot made eye-contact (its only moving parts), vocalized its greetings and instructions, and had a touch-screen interface for data entry.

While all participants in the first group (pen and paper) gave up before the six weeks were over, and only a few in the second (touch screen only) chose to extend the experiment to eight weeks when offered (though they had all completed the experiment), almost all those in the last group (robot with touch screen) completed the experiment and chose to extend the extra two weeks.

Sharing traumatic experiences may also encourage bonding, as we have seen in soldiers who work with bomb-disposal robots.5 In the field, these robots work with their human partners, putting themselves in harm’s way to keep their partners from being in danger.

It turns out that iRobot, the manufacturers of the Packbot bomb-disposal robots, have actually received boxes of shrapnel consisting of the robots’ remains after an explosion with a note saying, “Can you fix it?” Upon offering to send a new robot to the unit, the soldiers say, “No, we want that one.” That specific robot was the one they had shared experiences with, bonded with, and the one they did not want to “die.” Of course, people do not always bond with machines.

One handy rubric referenced by robot designers for the latter is the Uncanny Valley.6 The concept is that making machines more humanlike is good up to a point, after which they become discomforting (creepy), until full human likeness is achieved, which is the best design of all.

The theoretical graph of the Uncanny Valley includes two lines, one curve for agents that are immobile (for example, a photograph of a dead person would be in the valley), and another curve with higher peaks and valleys for those that are moving (for example, a zombie is the moving version of that).

My PhD advisor, Reid Simmons, jokes that roboticists should address such human fears by asking ourselves, “How prominent does the big red button need to be on the robots we sell?” Although there are also notable examples to the contrary (Wall-E, Johnny-5, C3P0), it is true that Hollywood likes to dramatize scary robots, at least some of the time (Skynet, HAL, Daleks, Cylons).

In Shinto animism, objects, animals, and people all share common “spirits,” which naturally want to be in harmony.7 Thus, there is no hierarchy of the species, and left to chance, the expectation is that the outcome of new technologies will complement human society.

As one journalist writes, “Given that Japanese culture predisposes its members to look at robots as helpmates and equals imbued with something akin to the Western conception of a soul, while Americans view robots as dangerous and willful constructs who will eventually bring about the death of their makers, it should hardly surprise us that one nation favors their use in war while the other imagines them as benevolent companions suitable for assisting a rapidly aging and increasingly dependent population.”8 Our cultural underpinnings influence the media representations of robotics, and may influence the applications we target for development, but it does not mean there is any inevitability for robots to be good, bad or otherwise.

Cellphone providers might share our location data with other companies, the government might be able to read our emails, but in a world where social presence insists on constant Twitter updates and Instagram photos, people are constantly encouraged to be sharing this data with the world anyway.

The flipside of considering human bonding with machines is that robotic designers, and ultimately policymakers, may need to protect users from the possibility that social robots replace or supplant healthy human contact or, more darkly, retard normal development.

As some autism researchers are investigating, social robots might help socially impaired people relate to others,9 practicing empathetic behaviors as a stepping stone to normal human contact.

In an experiment where robots were theoretically to perform a variety of assistive tasks, participants were asked to select their preference for a robot-like, mixed human-robot, or human-like face.11 In the context of personal grooming, such as bathing, most participants strongly preferred a system that acted as equipment only.

On the other hand, when selecting a face to help the subject with an important informational task, such as deciding where to invest the subject’s money, the robotic face was chosen least, i.e., users preferred the presence of humanlike characteristics.

In the first of three broad categories of human-robot partnership, people operate robots remotely, as in telepresence systems. In the second, we introduce robots into shared human environments, such as our hospitals, workplaces, theme parks or homes (for example, a hospital delivery robot that assists the nurses in bringing fresh linens and transporting clinical samples).

In the third, people travel within machines with the ability to provide higher-level commands like destination or to share control—for example, landing a plane after a flight that has taken place on auto-pilot.

For all of these categories, effective robots must have intuitive interfaces for human participation, clear communication of state, the ability to sense and interpret the behaviors of their human partners, and, of course, be capable of upholding their role in the shared task.

The hope is to enable designs in which the split of human and machine capabilities empowers users and has positive human impact, resulting in behaviors, performance and societal impact that go beyond what either partner could do alone.

When I was working at NASA’s Jet Propulsion Laboratory, the folks there liked to think of their spacecraft and rovers as “extensions of the human senses.” Other telepresence applications and developments include a news agency’s getting a better view of a political event, a businessperson’s skipping a long flight by attending a conference via robot, a scientist’s gathering data, or a search-and-rescue team’s leading survivors to safety.

Demanding higher local autonomy by the robot would be important when it needs rapid response and control, say, to maintain altitude, has unique knowledge of its local environment so it can, for example, orient to a sound, or is out of communications range.

Similarly, in situations where control loops would not occur fast enough, communication goes down, or where we would want the pilot to oversee other robots, we can use shared autonomy to make use of the operator’s knowledge of saliency and objective without weighing him or her down with the minutiae of reliable machine tasks, such as executing a known flight path.
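The allocation described here can be summarized as a small decision rule; the function below is an illustrative sketch of that trade-off, not a scheme from the article:

    # Choose a control mode given the constraints discussed above.
    def control_mode(needs_rapid_response, in_comms_range, operator_busy):
        # Higher local autonomy when fast reaction is needed or comms are down.
        if needs_rapid_response or not in_comms_range:
            return "local autonomy"   # e.g. maintain altitude, orient to a sound
        # Shared autonomy: robot handles reliable minutiae (known flight path),
        # operator contributes knowledge of saliency and objectives.
        if operator_busy:
            return "shared autonomy"
        return "teleoperation"        # operator directs the details

    print(control_mode(False, True, True))   # shared autonomy
    print(control_mode(True, True, False))   # local autonomy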

To provide an example of an application with high robot autonomy, imagine a geologist who wants to use half a dozen flying robots to remap a fault line after earthquakes to better predict future damage.

Using a team of robots, this scientist can safely explore a large swath of terrain, while the robots benefit from the scientist’s knowledge of salience: what are the areas to map, say, or the particular land features posing danger of future collapse?

(There are also various social and policy issues in selecting autonomy designs in this space, such as the ordinance in Colorado seeking to make shooting drones out of the sky legal.12) In search-and-rescue tasks, a much lower level of robot autonomy might be desirable because of the potential complexity and danger.

That means they could face danger themselves, so in certain environments, research teams have found success with three-person teams.13 While the main pilot watches the point of view (video feed) of the robot and controls its motion, a second person watches the robot in the sky and tells the pilot about upcoming obstacles or goals that are out of robot view.

In order for a telepresence system to maintain its social function of empowering the sick employee, the company and colleagues must know they are safe from misuse of the office robot’s data, and firm policies and protections should be in place.

Researchers at Georgia Tech have begun to investigate the potential for use of telepresence robots by the elderly.14 Whether by physical handicap or loss of perception, such as sight, losing one’s driver’s license can cause a traumatic loss of independence as one ages.

During an exploratory study, researchers learned that while the elderly they surveyed had a very negative reaction to the idea that their children might visit them by robotic means, rather than coming in person, the idea of having a robot in their children’s home that they could log into at will was very appealing.

Some study participants expressed the desire to simply “take a walk outside,” or attend an outdoor concert “in person.” The trick here is to use such technologies in ways that protect the populations whom the technology is meant to support, or encourage the consumer use cases likely to have positive social impact.

They may also aid our elderly, assist workers, or provide a liaison between two people – like, for example, a robotic teddy bear designed to help bridge the gap between nurse and child in an intimidating hospital environment.

With the right privacy protections for the data collected about the audience, the stage can provide a constrained environment for a robot to explore small variations iteratively from one performance to the next, using the audience members as data points for machine learning.15 As I began to explore with my robot comedian, it may also be easier for robots to interpret audience versus individual-person social behaviors because of the aggregate statistics (average movement, volume of laughter, and the like) and known social conventions (applause at the end of a scene).16

In my experience with robot comedy, people love hearing a robot share a machine perspective: talking about its perception systems, the limitations of its processor speed, its battery life, and its overheating motors brings a sense of reality to an interaction.

The common value created may or may not invest the human partners in the robot, but it can definitely equip interaction partners with a better understanding of actual robot capabilities, limitations and current state— particularly if the information is delivered in a charismatic manner.

In a study comparing hospital delivery robots operating on a surgery ward versus a maternity ward, the social context of its deployment changed the way people evaluated the robot’s job performance.17 While the surgeons and other workers in the former became frustrated with the robots getting even slightly in the way in the higher stress environment, the same robots in the maternity ward were rated to be very effective and likeable.

What will become increasingly tricky, however, is the idea of changing the distribution of decision making, such that the vehicle is not just in charge of working mechanics (extracting energy from its fuel and transferring the steering wheel motion to wheel angle), but also for driving (deciding when to accelerate, or who goes first at an intersection).

In extending this design potential to socially intelligent, embodied machines, good design and public policy can support this symbiotic partnership by specifically valuing human capability, maintaining consideration of societal goals and positive human impact.

As social robotics researchers increase their understanding of the human cultural response to robots, we help reveal cultural red lines that designers in the first instance, and policymakers down the road, will need to take into account.

Regulations encouraging and safeguarding good design will influence whether users will want to keep humans at the helm, as in telepresence systems, join forces in shared collaborative environments, or hand over the decision making while riding in an autonomous vehicle.

Do Robots Need a Code of Ethics?

“I did a study at the Media Lab where we found that people who have low empathic concern for others will treat a robot differently than people who have high empathic concern, so we can use a robot to measure human empathy,” says Darling.

“Not only can we measure or observe people’s behavior with robots, but could we use robots therapeutically to help people manage their behavior?” Two of the areas where robots are used successfully in human interaction are healthcare and education.

We don’t know the answer, but it might actually turn us into crueler humans if we get used to certain behaviors with these lifelike robots.” Darling’s work focuses on researching and influencing technology design and policy direction.

Especially in the healthcare and education realm, Darling points out, many robots are being developed for vulnerable populations, which brings concerns for misuse of those consumers as well as the question of when humans can and cannot be replaced.

Kate Darling - Ethical issues in human-robot interaction

“Robot ethics is about humans.” Kate Darling is a Research Specialist at the MIT Media Lab and a Fellow at the Harvard Berkman Center and the Yale Information Society Project. Robots...

What is robot ethics? | Thomas Arnold

Thomas Arnold, Research Associate, Human-Robot Interaction Laboratory Tufts University. “I think life is about exploring, learning from one another, struggling, and continuously asking that...

Waking Up With Sam Harris #66 - Living with Robots (with Kate Darling)

In this Episode of the Waking Up podcast, Sam Harris speaks with Kate Darling about the ethical concerns surrounding our increasing use of robots and other autonomous systems. Kate Darling...

Robot Tries to Escape from Children's Attack

This video is part of “Escaping from Children's Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics...

Robot Ethical Questions

Somebody asked a fantastic question in response to my video on the uncanny valley – fundamentally they were robot ethics questions and I wanted to give it the response it deserved. These...

How Avant-Garde Robots will Help us Survive this Century | David McGoran | TEDxBristol

"What if the borders between humanity, technology, and nature are only in our mind?” In this colorful and fast-moving TEDx talk David takes us on a journey through his evolution as an activist,...

AI Being TAUGHT to Disobey Humans

The dangers of artificial general intelligence operating on a distributed global network have been delineated by people like Stephen Hawking, Elon Musk,...

Why AI will probably kill us all.

When you look into it, Artificial Intelligence is absolutely terrifying. Really hope we don't die.

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk...

AI and Ethics

Kay Firth-Butterfield (moderator), Margaret Boden, Francesca Rossi, Wendell Wallach, and Huw Price discuss how we might be able to ensure that AI is designed and used according to high ethical...