Expressive Robots

Robot expressiveness, from LED light displays to motion, shapes how people perceive and interact with robots. We work to better understand this space.

Problem

During interactions with other people, speech is only one part of how we communicate; a rich, multidimensional space of nonverbal expression is also essential. Nuanced nonverbal expression matters in interactions with robots as well, but the research community has yet to fully understand this space and harness the full potential of nonverbal communication in human-robot interaction (HRI).

Solution

In studies involving robot light signals, sound, and facial expressions, we have begun to investigate how people respond to and perceive different types of nonverbal robot signals. One approach to unlocking the potential of nonverbal robot signaling is to quantify human responses to these behaviors and share open-source resources covering the explored space. Another is to create and validate models that can extend to future HRI work.

Upcoming Events

To bring together the research community working on robot sound (and related areas), we have proposed a workshop on Understanding and Harnessing Sound in Robotic Systems, to take place at the ICRA 2022 conference. More information is available here.

People

  • Rhian Preston (PhD Student)
  • Nnamdi Nwagwu (PhD Student)
  • Jai'La Crider (MS Student)
  • Ibrahim Syed (Undergraduate Researcher)
  • Adeline Schneider (Undergraduate Researcher)
  • Stayce Mockel (Undergraduate Researcher)
  • Lily Oliphant (Undergraduate Researcher)

Publications

  • Brian J. Zhang, Bastian Orthmann, Ilaria Torre, Roberto Breslin, Jason Fick, Iolanda Leite, and Naomi T. Fitter, "Hearing it Out: Guiding Robot Sound Design through Design Thinking," Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Korea, 2023. [BibTeX] [PDF]
  • Brian J. Zhang and Naomi T. Fitter, "Nonverbal Sound in Human-Robot Interaction: A Systematic Review," ACM Transactions on Human-Robot Interaction, 2023. [BibTeX] [PDF]
  • Rhian C. Preston, Nisha Raghunath, Christopher A. Sanchez, and Naomi T. Fitter, "“Armed” and Dangerous: How Visual Form Influences Perceptions of Robot Arms," Proceedings of the International Conference on Social Robotics (ICSR), Florence, Italy, 2022. [BibTeX] [PDF]
  • Brian J. Zhang, Christopher A. Sanchez, and Naomi T. Fitter, "Using the Price Sensitivity Meter to Measure the Value of Transformative Robot Sound," Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 2022. [BibTeX] [PDF]
  • Brian J. Zhang, Noel Sigafoos, Rabecka Moffit, Ibrahim Syed, Lili Adams, Jason Fick, and Naomi T. Fitter, "SonifyIt: Towards Transformative Sound for All Robots," IEEE Robotics and Automation Letters, 7(4):10566-10572, 2022. [BibTeX] [PDF]
  • Ibrahim Syed, Jason Fick, Brian J. Zhang, and Naomi T. Fitter, "Toward Generative Sound Cues for Robots Using Emotive Musification," Proceedings of the International Conference on Auditory Display (ICAD), Virtual, 2022. [BibTeX] [PDF]
  • Ibrahim Syed, Brian J. Zhang, Naomi T. Fitter, and Jason Fick, "Towards Generative Musical Cues for Emotive and Responsive Robot Sonification," Proceedings of the Sound for Robots Workshop at the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 2022. [BibTeX] [PDF]
  • Brian J. Zhang, Knut Peterson, Christopher A. Sanchez, and Naomi T. Fitter, "Exploring Consequential Robot Sound: Should We Make Robots Quiet and Kawaii-et?," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic (remote), 2021. [BibTeX] [PDF]
  • Lilian Chan, Brian J. Zhang, and Naomi T. Fitter, "Designing and Validating Expressive Cozmo Behaviors for Accurately Conveying Emotions," Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Vancouver, Canada (remote), 2021. [BibTeX] [PDF]
  • Brian J. Zhang, Nick Stargu, Samuel Brimhall, Lilian Chan, Jason Fick, and Naomi T. Fitter, "Bringing WALL-E out of the Silver Screen: Understanding How Transformative Robot Sound Affects Human Perception," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China (remote), 2021. [BibTeX] [PDF]
  • Brian J. Zhang, Christopher A. Sanchez, and Naomi T. Fitter, "Consequential Robot Sound: Should Robots Be Quiet and High-Pitched?," Proceedings of the Sound in Human-Robot Interaction Workshop, ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boulder, CO, USA (remote), 2021. [BibTeX] [PDF]
  • Caitlin Frazee, Brian J. Zhang, and Naomi T. Fitter, "Enabling Intentional Sound for Construction Cobots," Proceedings of the 2020 IROS Workshop on Building Construction and Architecture Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA (remote), 2020. [BibTeX] [PDF]
  • Elizabeth Cha, Naomi T. Fitter, Yunkyung Kim, Terry Fong, and Maja Matarić, "Generating Expressive Light Signals for Appearance-Constrained Robots," Proceedings of the International Symposium on Experimental Robotics (ISER), Buenos Aires, Argentina, 2018. [BibTeX] [PDF]
  • Elizabeth Cha, Naomi T. Fitter, Yunkyung Kim, Terry Fong, and Maja Matarić, "Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration," Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 2018. [BibTeX] [PDF]
  • Naomi T. Fitter and Katherine J. Kuchenbecker, "Designing and Assessing Expressive Open-Source Faces for the Baxter Robot," Proceedings of the International Conference on Social Robotics (ICSR), Kansas City, MO, USA, 2016. [BibTeX] [PDF]