DOI: 10.1109/RO-MAN53752.2022.9900518

Evaluating the Impact of Emotional Apology on Human-Robot Trust

Published: 29 August 2022

ABSTRACT

Previous research has shown that robot mistakes or malfunctions have a significant negative impact on people's trust. One way to mitigate this negative impact is trust repair. Although trust repair has been studied extensively, it is still not known which strategies are effective at repairing trust in a time-sensitive driving scenario. Moreover, prior work on trust repair has not examined the effect of expressing emotion during a repair attempt. In this paper, we present the development of several trust repair methods for a time-sensitive scenario, using a simulated driving environment as a testbed for validation. These methods comprise a baseline apology, an emotional apology, and an explanation. We conducted an experiment to compare the impact of these repair methods on human-robot trust. Experimental results indicate that the emotional apology positively affected more participants than the no-repair, baseline-apology, and explanation conditions. Furthermore, this study identified the emotional apology as the most effective method for the time-sensitive driving scenario.
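
For readers who want to prototype a similar manipulation, the experimental design reduces to a mapping from condition to the utterance the driving agent delivers after its mistake. The minimal Python sketch below is illustrative only: the RepairCondition names mirror the conditions described above, but the example utterances are assumptions, not the wording used in the paper.

```python
from enum import Enum, auto
from typing import Optional


class RepairCondition(Enum):
    """The four conditions compared in the experiment."""
    NO_REPAIR = auto()
    BASELINE_APOLOGY = auto()
    EMOTIONAL_APOLOGY = auto()
    EXPLANATION = auto()


# Hypothetical utterances for illustration; the paper's actual scripts are not reproduced here.
REPAIR_UTTERANCES: dict[RepairCondition, Optional[str]] = {
    RepairCondition.NO_REPAIR: None,
    RepairCondition.BASELINE_APOLOGY: "I am sorry. I took the wrong route.",
    RepairCondition.EMOTIONAL_APOLOGY: "I am so sorry. I feel terrible that I took the wrong route.",
    RepairCondition.EXPLANATION: "I took the wrong route because my map data was out of date.",
}


def repair_message(condition: RepairCondition) -> Optional[str]:
    """Return what the simulated driving agent says after its mistake,
    or None when no repair attempt is made."""
    return REPAIR_UTTERANCES[condition]


if __name__ == "__main__":
    for condition in RepairCondition:
        print(f"{condition.name}: {repair_message(condition)}")
```

Keeping all four conditions in one table makes it straightforward to counterbalance assignment across participants and to log exactly which utterance each participant heard.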


Published in
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
August 2022, 1654 pages
Copyright © 2022
Publisher: IEEE Press

