DOI: 10.1109/RO-MAN53752.2022.9900857

Exploring First Impressions of the Perceived Social Intelligence and Construal Level of Robots that Disclose their Ability to Deceive

Published: 29 August 2022

ABSTRACT

If a robot tells you it can lie for your benefit, how would that change how you perceive it? This paper presents a mixed-methods empirical study investigating how a robot's disclosure of its deceptive or honest capabilities influences its perceived social intelligence and construal level. We first conduct a study with 198 Mechanical Turk participants, then replicate it with 15 undergraduate students in order to gather qualitative data. Our results show that how a robot introduces itself can have noticeable effects on how it is perceived, even after a single exposure. In particular, when a robot reveals that it can lie when it believes doing so is in a human's best interest, people find it noticeably less trustworthy than a robot that makes no mention of honesty or one that declares itself completely truthful. Moreover, robots that are forthcoming about their truthfulness are construed at a lower level than robots that are transparent about their deceptive abilities. These results add much-needed knowledge to the understudied area of robot deception and could inform designers and policy makers considering the deployment of robots that deceive.

Published in:
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2022, 1654 pages
Copyright © 2022
Publisher: IEEE Press
Qualifiers: research-article
