DOI: 10.1109/ARSO54254.2022.9802976
Research article

When a Robot Tells You That It Can Lie

Published: 28 May 2022

ABSTRACT

What happens when a robot introduces itself and tells you that it can lie when it determines lying is best for you? This work presents an empirical study of how people perceive the social intelligence of a robot that either is or is not transparent about its honesty and deceptive capabilities. We also investigate whether these perceptions are influenced by the robot's physical or virtual presence. Using a mixed-methods approach, we found no significant differences in aggregated perceived social intelligence with respect to either the presence factor or the introduction-transparency factor. However, individual components, such as trustworthiness, were rated significantly more negatively after a one-time first introduction by a robot that was transparent about its deceptive capabilities. These results add much-needed knowledge to the understudied area of robot deception and could inform designers and policy makers considering the deployment of robots that deceive.
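The abstract reports that an aggregated social-intelligence score showed no group differences, while an individual component (trustworthiness) was rated significantly lower after a transparent introduction. A minimal sketch of the kind of two-sample comparison underlying such a component-level result, using invented illustrative ratings (not the paper's data) and Welch's t statistic:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (sample variance in each group)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 7-point trustworthiness ratings, one per participant:
opaque      = [6, 5, 6, 7, 5, 6, 6, 5]   # robot said nothing about deception
transparent = [4, 3, 5, 4, 3, 4, 5, 3]   # robot disclosed that it can lie

t = welch_t(opaque, transparent)
print(f"Welch's t = {t:.2f}")  # large positive t -> transparent group rated lower
```

In the study itself the design crosses two factors (presence and introduction transparency), so the reported tests would compare such ratings across all four cells rather than a single pair of groups.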


Published in
2022 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)
May 2022, 147 pages
Copyright © 2022
Publisher: IEEE Press