An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond
In this guest post, Norri Kageki interviews Masahiro Mori, who, as a professor of engineering at Tokyo Institute of Technology in the 1970s, proposed the now-famous concept of the uncanny valley.
[Read the first authorized translation of his seminal article here.] Mori's insight was that people would react with revulsion to humanlike robots whose appearance resembled, but did not quite replicate, that of a real human.
He called this phenomenon bukimi no tani (the term 'uncanny valley' first appeared in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt).
The uncanny valley has become more relevant in recent years, as robots that actually look and move like humans are starting to become a reality.
In fact, researchers are currently debating whether they should try to overcome the uncanny valley or simply design robots that are more mechanical in appearance.
Norri Kageki: Your essay titled 'The Uncanny Valley' first appeared in a 1970 issue of Energy (pictured below), a magazine published by Esso, a Japanese subsidiary of Standard Oil Co.
He asked me to be a part of a round table on the subject along with the science fiction writer Sakyo Komatsu and Prof.
At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation.
These experiences made me start thinking about robots in general, which led me to write that essay.
I do appreciate the fact that research is being conducted in this area, but from my point of view, I think that the brain waves act that way because we feel eerie.
The uncanny valley relates to various disciplines, including philosophy, psychology, and design, and that is why I think it has generated so much interest.
Pointing out the existence of the uncanny valley was more a piece of advice from me to robot designers than a scientific statement.
MM: It started to be picked up after the humanoids conference in 2005 (IEEE Robotics and Automation Society International Conference on Humanoid Robots). I was invited to that event, but I was unable to attend due to other commitments.
[In this Japanese folk story, a dog barks to let his owners know where to dig, and when they dig they find gold.] I seem to have a good nose for sniffing out interesting things, but I don't have the skill to dig them up.
NK: Your uncanny valley chart captures the concept nicely: The curve first goes up, as people's affinity toward robots increases the more humanlike they become, but only up to a point, where the curve suddenly plunges into the uncanny valley.
Do you still think that robot designers should aim for the first peak instead of aiming beyond the valley?
NK: There are people, like Rodney Brooks [former Massachusetts Institute of Technology professor and founder of the start-up Heartland Robotics], who think humans are also machines and that they can be built.
My friend Ichiro Kato [the late Waseda University professor and pioneer in humanoid research] used to think that the mind can be created, but I don't think so.
If a steep hill suddenly protrudes from the flatland, you can draw a line to show where the mountain starts, but Mt.
People now call them distributed autonomous systems, and there has been much progress in terms of their applications in the past ten years.
I had thought about it many years ago, and I built seven robots that moved as a swarm and exhibited them at Expo '75 in Okinawa.
I used to cut out these pictures and photographs from various magazines, and there are designs that my students and I drew back in those days.
I have never approached my research in a way where you decide on an area and then you try to dig out everything in that area.
I think the teachings of Buddha are the best way to understand humans, especially with regard to understanding the human mind.
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
We demonstrate a UV in subjects' explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces.
An “investment game” showed that the UV penetrated even more deeply to influence subjects’ implicit decisions concerning robots’ social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans.