ABSTRACT
If a robot tells you it can lie for your benefit, how does that change how you perceive it? This paper presents a mixed-methods empirical study investigating how disclosure of deceptive or honest capabilities influences the perceived social intelligence and construal level of a robot. We first conducted a study with 198 Mechanical Turk participants, then replicated it with 15 undergraduate students to gather qualitative data. Our results show that how a robot introduces itself can noticeably affect how it is perceived, even after a single exposure. In particular, when a robot reveals that it is able to lie when it believes doing so is in a human's best interest, people find it noticeably less trustworthy than a robot that either says nothing about honesty or declares itself completely truthful. Moreover, robots that are forthcoming about their truthful abilities are construed at a lower level than robots that are transparent about their deceptive abilities. These results add much-needed knowledge to the understudied area of robot deception and could inform designers and policymakers considering the deployment of robots that deceive.