Theory of Mind to infer mental states

As mentioned in an older post, Theory of Mind is the ability to infer one’s own and others’ mental states, such as knowledge, beliefs, intentions, or desires [1]. This notion has only recently been studied in robotics, where it enables autonomous systems to predict human behaviour during interaction and adapt their actions accordingly. Such adaptations are challenging and require the robot to be fully aware of its users and surroundings.

Context

In the PERSEO project, we are working on adapting robot behaviour to human actions based on their mental states. We focus on understanding “false beliefs,” that is, situations where a human’s beliefs don’t align with reality. To this end, we conducted an HRI experiment [2] based on the “Sally-Anne test” [3] and the methodology of Buttelmann et al. [4]. The experiment involved the robot interacting with two humans (a searcher and a tricker), a toy, and two boxes. During the test, the searcher could enter and leave the room and move to one of the boxes, while the tricker could place the toy outside the boxes or in one of them. Based on these actions, we evaluated the robot’s response at the end of the experiment under two different conditions:

  • False belief condition: The searcher puts the toy in the first box (picture B1) and leaves the room (picture C1). In their absence, the tricker approaches the robot (picture D1) and moves the toy from the original box (picture E1) to the other one. Finally, the searcher returns to the table and stands next to the first box, the one they believe still contains the toy (picture F1).
  • True belief condition: In contrast with the first condition, after placing the toy in the second box (B2), the searcher does not leave the room (C2). They are therefore aware of the tricker approaching the robot (D2) and swapping the toy’s location (E2). When the tricker leaves, the searcher steps back in front of the robot and stands next to the box where the toy was originally placed (F2).

In response, in the False belief condition the robot indicates the box where the toy is currently located (F1), inferring that the searcher still wants to play with the toy. In the True belief condition, the robot acknowledges that the searcher is no longer interested in playing with the toy.
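To make this decision rule concrete, here is a minimal, illustrative sketch in Python of how a robot could track the searcher’s belief separately from the actual toy location and pick a response. This is not the Bayesian model used in [2]; the event names, the State fields, and the robot_response helper are hypothetical simplifications for illustration only.

```python
from dataclasses import dataclass

@dataclass
class State:
    toy_location: str       # where the toy actually is ("box1" or "box2")
    searcher_belief: str    # where the searcher thinks the toy is
    searcher_present: bool  # whether the searcher can observe the scene

def apply_event(state: State, event: tuple) -> State:
    """Update the world state and the searcher's belief for one observed event."""
    kind = event[0]
    if kind == "place_toy":              # ("place_toy", box)
        state.toy_location = event[1]
        if state.searcher_present:       # the searcher sees their own action
            state.searcher_belief = event[1]
    elif kind == "searcher_leaves":
        state.searcher_present = False
    elif kind == "searcher_returns":
        state.searcher_present = True
    elif kind == "move_toy":             # ("move_toy", box), performed by the tricker
        state.toy_location = event[1]
        if state.searcher_present:       # belief only updates if the swap was seen
            state.searcher_belief = event[1]
    return state

def robot_response(state: State) -> str:
    """Decide what the robot does once the searcher stands by a box."""
    if state.searcher_belief != state.toy_location:
        # False belief: the searcher still wants the toy but looks in the wrong box,
        # so the robot indicates the box that actually contains it.
        return f"point to {state.toy_location}"
    # True belief: the searcher saw the swap, so standing by the empty box is read
    # as no longer wanting to play with the toy.
    return "do nothing (searcher not interested)"

# False belief condition (B1-F1): toy in box1, searcher leaves, tricker moves it.
fb = State(toy_location="", searcher_belief="", searcher_present=True)
for ev in [("place_toy", "box1"), ("searcher_leaves",),
           ("move_toy", "box2"), ("searcher_returns",)]:
    fb = apply_event(fb, ev)
print(robot_response(fb))   # -> point to box2

# True belief condition (B2-F2): the searcher stays and watches the swap.
tb = State(toy_location="", searcher_belief="", searcher_present=True)
for ev in [("place_toy", "box2"), ("move_toy", "box1")]:
    tb = apply_event(tb, ev)
print(robot_response(tb))   # -> do nothing (searcher not interested)
```

The key design choice, shared with the experiment above, is that the searcher’s belief is only updated by events they can observe; a mismatch between belief and reality is what the robot treats as a false belief.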

Future of ToM in robotics

The scenario above is a good starting point for robots to use their reasoning capabilities to assist humans. However, its simplicity prevents the robot from being deployed in more complex, real-world situations. Future studies will focus on richer environments with more demanding tasks, such as collaborative scenarios.

References:

[1] D. Premack and G. Woodruff, “Does the chimpanzee have a theory of mind?,” Behavioral and Brain Sciences, vol. 1, no. 4, pp. 515–526, 1978.

[2] M. Hellou, S. Vinanzi, and A. Cangelosi, “Bayesian theory of mind for false belief understanding in human-robot interaction,” in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1893–1900, 2023.

[3] S. Baron-Cohen, A. M. Leslie, and U. Frith, “Does the autistic child have a ‘theory of mind’?,” Cognition, vol. 21, no. 1, pp. 37–46, 1985.

[4] D. Buttelmann, M. Carpenter, and M. Tomasello, “Eighteen-month-old infants show false belief understanding in an active helping paradigm,” Cognition, vol. 112, pp. 337–342, Aug. 2009.