In the last two blog posts, we saw that Theory of Mind (ToM) has been extensively studied in psychological research to explore how people infer their own and others’ mental states. More recently, there has been growing interest in applying ToM to robotics, particularly to social robots that interact with humans. The idea is to enable robots to reason about people’s mental states and adapt their behaviors accordingly, leading to better interactions. Consequently, robotics researchers have focused on developing cognitive models with ToM abilities and on applying them to complex situations that require the robot to comprehend people’s mental states.
In response to these challenges, different methods and models have emerged in recent years; they can be broadly grouped by the techniques they build on, as discussed below.
Probabilistic models
The first and most common technique for designing ToM-capable agents is the Bayesian Network (BN), a graphical model for data analysis and a popular tool for encoding uncertain expert knowledge in expert systems [1]. This type of model is well suited to representing the knowledge and learning of infants, who are thought to represent the world by constructing a causal map: an abstract, coherent, learned representation of the causal relations among events [2,3]. As an example, Vinanzi et al. [4] developed a robot learning architecture based on BNs that can estimate the trustworthiness of human partners from an understanding of their mental states. Another notable example comes from Baker et al. [5,6], who implemented a model called the “Bayesian Theory of Mind” (BToM), which uses Bayesian inverse planning to represent how people infer others’ goals or preferences. The model uses a partially observable Markov decision process (POMDP) to represent an agent’s planning and inference about the world, and then applies Bayesian inference to invert that planning process and reconstruct the agent’s joint belief state and reward function, conditioned on observations of the agent’s behaviour in the environment. The model relies on intentional agency [7], which assumes that agents act on their beliefs and desires in order to achieve specific goals. From this assumption, an external observer can reason about an agent’s goal or intention from the sequence of actions it performs and recover the mental states that explain the agent’s planning.
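To make the inverse-planning idea concrete, here is a minimal Python sketch of Bayesian goal inference: a toy agent moves along a one-dimensional corridor toward one of two candidate goals, and an observer inverts a softmax-rational planning model with Bayes’ rule to recover a posterior over goals. The corridor world, candidate goals, and rationality parameter are illustrative assumptions, not the implementation from [5,6].

```python
# Minimal sketch of Bayesian inverse planning for goal inference,
# in the spirit of BToM. The 1-D corridor, the candidate goals, and
# the softmax "rationality" parameter are illustrative assumptions.
import numpy as np

POSITIONS = list(range(5))          # a 1-D corridor: cells 0..4
GOALS = [0, 4]                      # candidate goals at either end
ACTIONS = [-1, +1]                  # step left or right
BETA = 3.0                          # how rational the agent is assumed to be

def action_likelihood(pos, action, goal, beta=BETA):
    """P(action | position, goal): the agent prefers actions that
    reduce its distance to the goal (softmax over negative distances)."""
    utilities = []
    for a in ACTIONS:
        next_pos = min(max(pos + a, 0), max(POSITIONS))
        utilities.append(-abs(next_pos - goal))
    utilities = np.array(utilities, dtype=float)
    probs = np.exp(beta * utilities)
    probs /= probs.sum()
    return probs[ACTIONS.index(action)]

def infer_goal(trajectory, prior=None):
    """Invert the planning model: P(goal | observed actions) via Bayes' rule."""
    posterior = np.array(prior if prior is not None else [1 / len(GOALS)] * len(GOALS))
    for pos, action in trajectory:
        likelihoods = np.array([action_likelihood(pos, action, g) for g in GOALS])
        posterior *= likelihoods
        posterior /= posterior.sum()
    return dict(zip(GOALS, posterior))

# Observing two steps to the right shifts belief toward the goal at cell 4.
print(infer_goal([(2, +1), (3, +1)]))
```

Even this toy version captures the core move of BToM: planning runs forward from goals to actions, and the observer runs it backward from observed actions to a distribution over goals.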
Machine Learning
Other methods commonly used in Artificial Intelligence, and especially in robotics, are Machine Learning (ML) methods, including Deep Learning (DL) and Reinforcement Learning (RL). We saw a glimpse of this in the previous section, where BToM uses POMDPs to represent an agent’s planning. Other models, however, rely on dedicated deep learning architectures such as neural networks. Rabinowitz et al. [8] presented the Machine Theory of Mind, or ToMnet, which learns to predict an agent’s behaviour. Trained on a dataset of different agents’ past behaviours, the model learns to predict a new agent’s future behaviour. In doing so, the authors introduced two concepts to describe the components of this observer network and their functional roles: (1) a general theory of mind – the learned weights of the network, which encapsulate predictions about the expected behaviour of all agents in the training set – and (2) an agent-specific theory of mind – the “agent embedding” formed from observations of a single agent at test time, which encapsulates what makes this agent’s character and mental state distinct from others’.
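The sketch below illustrates this two-network structure in a ToMnet-style observer: a “character net” compresses an agent’s past trajectories into an agent embedding, and a prediction net combines that embedding with the current observation to predict the next action. The layer sizes, feature dimensions, and class names are assumptions chosen for brevity, not the architecture published in [8].

```python
# A minimal ToMnet-style sketch: a character net embeds an agent's past
# (observation, action) history, and a prediction net conditions on that
# embedding plus the current observation to predict the next action.
# Dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, EMBED_DIM = 16, 4, 8

class CharacterNet(nn.Module):
    """Summarises past (observation, action) pairs into an agent embedding."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM + ACT_DIM, EMBED_DIM, batch_first=True)

    def forward(self, past_traj):          # (batch, steps, OBS_DIM + ACT_DIM)
        _, h = self.rnn(past_traj)
        return h.squeeze(0)                # (batch, EMBED_DIM)

class PredictionNet(nn.Module):
    """Predicts the agent's next action from the current observation
    and the agent-specific embedding."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(OBS_DIM + EMBED_DIM, 32),
            nn.ReLU(),
            nn.Linear(32, ACT_DIM),
        )

    def forward(self, obs, embedding):
        return self.mlp(torch.cat([obs, embedding], dim=-1))  # action logits

# The "general" ToM lives in the trained weights; the "agent-specific" ToM
# is the embedding computed from this particular agent's past behaviour.
character_net, prediction_net = CharacterNet(), PredictionNet()
past = torch.randn(1, 10, OBS_DIM + ACT_DIM)    # 10 past steps of one agent
obs = torch.randn(1, OBS_DIM)                   # current observation
logits = prediction_net(obs, character_net(past))
print(logits.softmax(dim=-1))                   # predicted action distribution
```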
On the other hand, some researchers combine different ML methods to define a relationship between ToM and trust. Patacchiola and Cangelosi [9] implemented a cognitive architecture for trust and ToM in humanoid robots: an actor-critic framework [10] used to interpret biological observations, combined with the ERA architecture [11], which gathers self-organizing maps as function approximators, and a BN that represents the values intrinsic to the environment. The model was then tested on a humanoid robot that learns new objects from two human teachers: one who gives the correct name for each object and one who does not. As mentioned above, Vinanzi et al. [4] built a different architecture to develop intention reading and trust in human-robot interaction (HRI). The authors established a system combining a convolutional neural network (CNN), an unsupervised clustering algorithm, and a BN to predict the user’s intended actions, while another BN estimates the robot’s trust toward the user. For the evaluation, the authors set up a collaborative game in which the robot and the user had to work together to arrange blocks of different colours into specific configurations.
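The following sketch reduces the trust-estimation idea to its simplest form: the robot maintains a Beta-Bernoulli belief over each informant’s reliability and updates it whenever an informant’s statement is later confirmed or contradicted. This is a deliberately simplified stand-in for the Bayesian-network-based trust models described in [4] and [9], not their actual implementation.

```python
# Simplified trust estimation: a Beta-Bernoulli belief over an informant's
# reliability, updated as their statements are confirmed or contradicted.
# This stands in for the BN-based trust models of the cited architectures.
from dataclasses import dataclass

@dataclass
class TrustEstimate:
    alpha: float = 1.0   # pseudo-count of confirmed statements
    beta: float = 1.0    # pseudo-count of contradicted statements

    def update(self, was_correct: bool) -> None:
        if was_correct:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        """Expected probability that the informant's next statement is correct."""
        return self.alpha / (self.alpha + self.beta)

# A reliable teacher names objects correctly; an unreliable one mostly does not.
reliable, unreliable = TrustEstimate(), TrustEstimate()
for outcome in [True, True, True, False, True]:
    reliable.update(outcome)
for outcome in [False, True, False, False, False]:
    unreliable.update(outcome)
print(f"trust(reliable)   = {reliable.trust:.2f}")
print(f"trust(unreliable) = {unreliable.trust:.2f}")
```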
Conclusion
In this blog post, we have explored different methods and architectures used to develop Theory of Mind (ToM) in machines. These algorithms enable systems to autonomously adapt their behaviors to different agents based on their mental states, including their beliefs, preferences, and intentions. However, numerous challenges and issues remain. For instance, cognitive models need to be generalized to incorporate not only people’s mental states but also other factors, such as emotions. Furthermore, while the papers discussed here touch upon various aspects of ToM, applications in realistic, real-world scenarios are still lacking. The ultimate goal is to equip robots with the capability to interact in complex environments involving external factors and multiple individuals. ToM is a means to achieve this goal in the coming years, but it requires further research, including more Human-Robot Interaction (HRI) studies with these cognitive architectures embedded in social robots. Such studies would offer valuable insights into how robots can better understand human behavior and enhance their ability to interact and collaborate with humans in real-world settings.
References
- [1] David Heckerman, Dan Geiger, and David M. Chickering, “Learning Bayesian networks: The combination of knowledge and statistical data,” in Uncertainty Proceedings 1994.
- [2] Noah D. Goodman, Chris L. Baker, Elizabeth Baraff Bonawitz, Vikash K. Mansinghka, Alison Gopnik, Henry Wellman, Laura Schulz, and Joshua B. Tenenbaum, “Intuitive theories of mind: a rational approach to false belief,” in Proceedings of the Twenty-Eighth Annual Conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum, 2006.
- [3] Alison Gopnik, Clark Glymour, David M. Sobel, Laura E. Schulz, Tamar Kushnir, and David Danks, “A Theory of Causal Learning in Children: Causal Maps and Bayes Nets,” 2004.
- [4] Vinanzi, S., Cangelosi, A., & Goerick, C. (2021). The collaborative mind: intention reading and trust in human-robot interaction. Iscience, 24(2).
- [5] Chris L. Baker, Rebecca R. Saxe, and Joshua B. Tenenbaum, “Bayesian theory of mind: Modeling joint belief-desire attribution,” in Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society, 2011, pp. 2469–2474.
- [6] Chris L. Baker, Julian Jara-Ettinger, Rebecca Saxe, and Joshua B. Tenenbaum, “Rational quantitative attribution of beliefs, desires and percepts in human mentalizing,” Nature Human Behaviour, vol. 1, 2017.
- [7] Dennett, Daniel C. The intentional stance. MIT press, 1989.
- [8] Rabinowitz, N., Perbet, F., Song, F., Zhang, C., Eslami, S. A., & Botvinick, M. (2018, July). Machine theory of mind. In International conference on machine learning (pp. 4218-4227). PMLR.
- [9] Patacchiola, M., & Cangelosi, A. (2020). A developmental cognitive architecture for trust and theory of mind in humanoid robots. IEEE Transactions on Cybernetics, 52(3), 1947-1959.
- [10] Konda, V., & Tsitsiklis, J. (1999). Actor-critic algorithms. Advances in neural information processing systems, 12.
- [11] Morse, A. F., De Greeff, J., Belpaeme, T., & Cangelosi, A. (2010). Epigenetic robotics architecture (ERA). IEEE Transactions on Autonomous Mental Development, 2(4), 325-339.