SOCIAL ROBOTS AND FACIAL EMOTIONS

Humans express emotion through various media: the face presents internal affective states, and speech conveys emotion as well. Among these channels, facial expressions are predominant. The various expressions displayed on human faces reflect different cognitive processes, and in communicative or interactive settings, humans can recognize the emotion being expressed. This ability to recognize and describe the emotion perceived on another’s face lets humans integrate communicative cues without saying a single word.

For instance, humans can easily see anger or pain in another’s facial movements; this perception triggers a responsive loop of helping or empathizing with the other person. Further, we convey our acceptance or rejection in a social environment through facial expressions more readily than through any other mode of emotional expression.

As social robots become ubiquitous, at least in research settings, it is vital to equip them with the ability to recognize facial expressions of emotion. Imagine an elderly care home where social robots are deployed to perform mundane, repetitive tasks around the residents; such robots must understand how the elderly express emotions through their faces so that they can give appropriate feedback or responses. However, building a social robot that can recognize arbitrary facial expressions is not easy, because each person expresses emotion through subtle facial cues that differ from one individual to the next. Social robots therefore need to adapt to individual facial expressions and personalize the feedback or responses they give to humans.
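
As a concrete illustration, the Python sketch below shows one way such per-user adaptation might look: a face is detected in a camera frame, an expression classifier scores it, and a running per-user baseline lets the robot respond to deviations from how that person usually looks rather than to raw scores. The face detector is OpenCV’s bundled Haar cascade; the classifier stub, the PersonalizedRecognizer class, and the baseline smoothing factor are illustrative assumptions, not a method proposed in this text.

# Illustrative sketch: per-user facial-expression recognition for a
# social robot's feedback loop. Only the Haar cascade is a real,
# shipped component; the classifier and calibration are placeholders.
from collections import defaultdict

import cv2
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

# OpenCV ships this cascade file; it detects frontal faces in grayscale images.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def classify_expression(face_patch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pretrained expression model.

    Replace with a real classifier; here it returns uniform scores so
    the sketch runs end to end.
    """
    return np.full(len(EMOTIONS), 1.0 / len(EMOTIONS))


class PersonalizedRecognizer:
    """Keeps a running per-user baseline so the robot reacts to deviations
    from how each person usually looks, rather than to raw scores."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha  # smoothing factor for the per-user baseline
        self.baselines = defaultdict(
            lambda: np.full(len(EMOTIONS), 1.0 / len(EMOTIONS))
        )

    def observe(self, user_id: str, frame: np.ndarray) -> str | None:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None  # no face in view; the robot stays quiet
        x, y, w, h = faces[0]
        scores = classify_expression(gray[y : y + h, x : x + w])

        # Personalization: compare against this user's running baseline,
        # then fold the new observation into the baseline.
        deviation = scores - self.baselines[user_id]
        self.baselines[user_id] = (
            (1 - self.alpha) * self.baselines[user_id] + self.alpha * scores
        )
        return EMOTIONS[int(np.argmax(deviation))]

The design choice worth noting is the per-user baseline: because people express the same emotion with different intensities, a robot that reacts to deviations from an individual’s habitual expression is less likely to misread a naturally stern face as anger, for example.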