New Experiments Reveal Why Human-Like Robots Creep Us Out


Robots, androids and artificial intelligence are being used more and more often to complete tasks humans once performed in the workplace. Machines that handle essential activities, such as a robot delivering medicines to different floors of a hospital, unburden medical staff. As long as we perceive them simply as machines, everything comes off without a hitch. But the more these machines come to resemble humans, the more unsettled we feel. And scientists have now discovered why.

The Uncanny Valley

Human replicas that closely resemble people tend to elicit eerie sensations—a zone scientists call “the uncanny valley.” Androids or robots with human-like features are often more appealing to people than those that resemble machines—but only up to a point. Many people experience an uneasy feeling in response to robots that are nearly lifelike and yet somehow not quite “right.” According to new research by psychologists at Emory University, that feeling of affinity can plunge into one of repulsion as a robot’s human likeness increases.

When we see a face in our morning coffee cream or in a cloud, we don’t freak out. Nor does giving names to inanimate objects such as our cars trouble us. That recognition led researcher Shensheng Wang to hypothesize that something more than simple anthropomorphizing may occur when we view an android. Since the uncanny valley was first described, a common hypothesis known as mind-perception theory has proposed that when we see a robot with human-like features, we automatically attribute a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory. But Wang found the opposite to be true. “It’s not the first step of attributing a mind to an android,” he explained, “but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”

The New Study

To tease apart the potential roles of mind-perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images—human faces, mechanical-looking robot faces and android faces that closely resembled humans—and asked to rate each for “aliveness,” or animacy. The exposure times of the images were systematically varied on the scale of milliseconds as the participants made their ratings.
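For readers who want a concrete picture of that design, the Python sketch below simulates the kind of trial structure the paragraph describes. Only the three face categories and the idea of millisecond-scale exposure times come from the article; the specific durations, the 1–9 rating scale, the number of trials and the rate_animacy function are hypothetical stand-ins, not the authors’ actual procedure or data.

```python
import random
from statistics import mean
from collections import defaultdict

# Face categories from the article; exposure durations (ms) are illustrative guesses.
FACE_TYPES = ["human", "mechanical_robot", "android"]
EXPOSURES_MS = [50, 100, 250, 500, 1000]

def rate_animacy(face_type, exposure_ms):
    """Placeholder for a participant's 1-9 animacy rating of one briefly shown face.

    The simulated pattern merely mimics the reported result: android faces lose
    perceived animacy as viewing time grows, while the other categories stay stable.
    """
    base = {"human": 8.0, "mechanical_robot": 2.0, "android": 7.0}[face_type]
    drop = 3.0 if face_type == "android" and exposure_ms >= 100 else 0.0
    noise = random.gauss(0, 0.5)
    return max(1.0, min(9.0, base - drop + noise))

# Build a randomized trial list crossing face type with exposure duration.
trials = [(f, e) for f in FACE_TYPES for e in EXPOSURES_MS for _ in range(20)]
random.shuffle(trials)

# Collect ratings, then summarize mean animacy per condition.
ratings = defaultdict(list)
for face_type, exposure_ms in trials:
    ratings[(face_type, exposure_ms)].append(rate_animacy(face_type, exposure_ms))

for face_type in FACE_TYPES:
    summary = ", ".join(
        f"{e} ms: {mean(ratings[(face_type, e)]):.1f}" for e in EXPOSURES_MS
    )
    print(f"{face_type:>16}  {summary}")
```

Averaging the ratings within each face-type-by-duration cell is what lets a pattern like the one reported below, a decline for androids only as exposure time grows, show up in the data.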

The results, published in the September issue of the journal Perception, showed that perceived animacy decreased significantly as a function of exposure time for android faces but not for mechanical-looking robot or human faces. For android faces, the drop in perceived animacy occurred between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.

A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.

“The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”

According to Wang, the findings have implications for both the design of robots and for understanding how we perceive one another as humans. “Robots are increasingly entering the social domain for everything from education to healthcare,” Wang explains. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”

The research may also help unravel the mechanisms involved in mind-blindness, the inability to distinguish between humans and machines, as seen in cases of extreme autism or some psychotic disorders, according to the researchers. Projecting human qualities onto objects is common. “We often see faces in a cloud, for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”

Reference

Wang, S. et al. (2020). The uncanny valley phenomenon and the temporal dynamics of face animacy perception. Perception. DOI: 10.1177/0301006620952611


