Student: Sophia Pegolo
Advisor: Dr. Kristina Striegnitz (Associate Professor of Computer Science)
As a student researcher in Human-Robot Interaction (HRI), my project explored how non-linguistic utterances (NLUs), such as beeps, tones, or synthesized instrument sounds, can be designed to improve communication between humans and robots. While facial expressions and language have been widely studied as expressive tools in HRI, NLUs remain relatively underexplored despite their perceived emotional effectiveness in robotic media characters like WALL-E and R2-D2, as well as their strong connection to music, a medium known to convey emotion effectively. Our work draws inspiration from these expressive models and investigates how sound-based signals can convey emotions and intentions in robot communication without relying on human mimicry.
This summer, I had the opportunity to take an academic deep dive into this field and design a study centered on our on-campus social robot, Valerie. In a survey, participants are presented with three different character descriptions, a variety of human-robot interaction scenarios, and an array of instruments and synthesized sounds used to play the NLU melodies. These sounds represent different potential "voices" for our characters, and participants assess which sounds fit best given the context of character and scenario. Establishing an effective voice is a crucial foundation for the work that follows. Though not yet conducted, this study aims to lay the groundwork for future exploration of NLUs as tools to make robot communication more intuitive, emotionally rich, and effective.