Trust in human-robot interaction: A measurement issue that stems from a theoretical gap?

Trust is probably one of the most complicated constructs I have ever faced as a psychologist. It is the kind of phenomenon that everyone experiences on a daily basis and can identify readily when experiencing it, yet it remains difficult to grasp scientifically. What is trust exactly? Where does it come from? How does it work? And why do we trust people, organizations, or even machines? These questions, and plenty of others, have become what academics call hot topics. In general, most authors agree on defining trust as the willingness (of a trustor) to accept risks and vulnerabilities in expectation of positive outcomes resulting from the intentions or behaviors of another (a trustee) [1]. Yet, this definition is far from having settled the debate about the nature and functioning of trust.

In the field of human-robot interaction (HRI), trust is also a hot topic. Discussions are still ongoing to characterize trust towards robots (e.g., how distinct is it from trust towards humans?), understand its determinants and consequences, determine methods to measure it, and establish which level of trust is appropriate to foster successful interactions with robots. Initially, trust towards robots was addressed as a matter of reliance on them. For instance, if you can easily accept leaving your Roomba to do its cleaning job without any supervision, that means you are relying on the ability of the Roomba to competently achieve its tasks. This example seems to be a good case of trust: you accept the risk of leaving the Roomba alone to do its job so you get more time to work on the essay you have due on short notice (or to procrastinate more in front of your favorite streaming platform).

Yet, some researchers have noticed that reliance seems to be based mostly on the perceived capacity of the robot, whereas, in many interactions with fellow humans, trust seems to be a subtle and variable mix of knowledge (e.g., “I know that Perkins has the skills to review my report”) and affect (e.g., “… but I have always found this guy kind of fishy”). This is what researchers call cognitive (i.e., knowledge-based) trust and affective (i.e., affect-driven) trust [2], [3]. Some authors have proposed that cognitive trust focuses on the trustee’s characteristics related to competence or reliability, while affective trust is based on the trustee’s characteristics related to benevolence. This is how trust scales usually measure trust in HRI: people are asked to what extent a robot is competent, reliable, or performant (competence), and how sincere, warm, or transparent (benevolence) it is*.

However, what is puzzling with this sort of approach is that it assumes that assessing a trustee’s competence is purely knowledge-driven (or cognitive), and that assessing a trustee’s benevolence is purely affect-driven. This creates an overlap between the mental processes of the trustor and the characteristics of the trustee. Yet, would we not use our knowledge of someone’s integrity and benevolence before entrusting this person with a position in healthcare or in a bank? Conversely, does it not feel good to know a person is reliable for a given task we entrust them with? Rare are the works verifying that cognitive trust is about a trustee’s competence, and affective trust about a trustee’s benevolence. In HRI, one study concluded that a trustee’s benevolence indeed determines affective trust, but that competence does not necessarily lead to heightened cognitive trust [5]. However, it must be noted that the scales used for cognitive and affective trust in that study are very similar to the scales assessing competence and benevolence. Besides, many items may be deemed irrelevant to HRI, as they were originally meant to assess trust towards coworkers, which may explain discrepancies between the measures of cognitive and affective trust and the measures of robots’ perceived benevolence and competence. Trust as a mental state (or psychological process) remains to be assessed properly, most probably by emphasizing the difference between “I believe/I know I can trust this agent” and “I feel I can trust this agent”.

Another neglected aspect of trust is that it is a decision-making process (i.e., one resulting in trusting the trustee or not) [3]. As such, trust may be influenced by both intuitive (or automatic) and deliberative (controlled) psychological processes. An alternative to the distinction between cognitive and affective trust may then be explicit trust (i.e., “I know why I trust this agent or not”) vs. implicit trust (i.e., “I have a gut feeling about whether I should trust this agent or not”) [6]. Such an approach to trust would be called dual-process, as it emphasizes two ways of processing information that lead to trust [7].

As we have seen, the way we measure trust (in psychology and in HRI) stems from our theoretical understanding of the phenomenon. The limitations of the measures I have pointed out thus suggest that we need more work on understanding trust as a psychological phenomenon. For that, we would need at least a measure distinguishing trust based on knowledge from trust based on affect; in other words, a measure of trust as a characteristic of the respondents themselves (instead of as a set of the trustee’s perceived characteristics in the mind of the respondent). This way, we could reach a better understanding of trust as a psychological process, leading to new, more accurate, reliable, and valid measures of trust.


*It must be noted that some scales distinguish more dimensions of a trustee’s characteristics. For instance, the second version of the MDMT scale [4] distinguishes trust related to reliability, competence, transparency, ethicality, and benevolence. An earlier version of the scale, however, distinguished two dimensions: performance trust and moral trust.

References:

[1]       A. Weiss, C. Michels, P. Burgmer, T. Mussweiler, A. Ockenfels, and W. Hofmann, ‘Trust in everyday life.’, Journal of Personality and Social Psychology, vol. 121, no. 1, pp. 95–114, Jul. 2021, doi: 10.1037/pspi0000334.

[2]       D. M. Rousseau, S. B. Sitkin, R. S. Burt, and C. Camerer, ‘Not So Different After All: A Cross-Discipline View Of Trust’, AMR, vol. 23, no. 3, pp. 393–404, Jul. 1998, doi: 10.5465/amr.1998.926617.

[3]       J. D. Lee and K. A. See, ‘Trust in automation: designing for appropriate reliance’, Hum Factors, vol. 46, no. 1, pp. 50–80, 2004, doi: 10.1518/hfes.46.1.50_30392.

[4]       D. Ullman and B. F. Malle, ‘Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust’, in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar. 2019, pp. 618–619. doi: 10.1109/HRI.2019.8673154.

[5]       N. Anzabi and H. Umemuro, ‘Effect of Different Listening Behaviors of Social Robots on Perceived Trust in Human-robot Interactions’, Int J of Soc Robotics, May 2023, doi: 10.1007/s12369-023-01008-x.

[6]       C. Burns, K. Mearns, and P. McGeorge, ‘Explicit and Implicit Trust Within Safety Culture’, Risk Analysis, vol. 26, no. 5, pp. 1139–1150, 2006, doi: 10.1111/j.1539-6924.2006.00821.x.

[7]       J. St. B. T. Evans and K. E. Stanovich, ‘Dual-Process Theories of Higher Cognition: Advancing the Debate’, Perspect Psychol Sci, vol. 8, no. 3, pp. 223–241, May 2013, doi: 10.1177/1745691612460685.