Human-Robot Interaction Design
Human-Robot Interaction Course
3-Person Group Project | 4 Months
Personal Contributions: Study design, wiring & building, all Arduino programming, research & paper
How does the perceived importance of a robot’s task influence the likelihood and degree to which a human will intervene in the event of failure?
How can we create a human-robot encounter to test the above research question? As robot presence grows in everyday society, it may be important to understand if humans attach different levels of respect to robots built for different purposes and if that will impact their interaction with them.
We built a simple messenger robot with the apparent task of delivering envelopes across campus. We controlled the perceived importance of its task by switching whether or not the envelopes it carried were labeled as "Confidential." When it spilled the envelopes, we measured differences in whether and how passersby assisted it.
Guerrilla Testing & Research
After conducting a review of both human-robot interaction and psychology literature, my team and I wanted to design a way to test human willingness to assist a robot in the event of failure. After discussing several different questions, we chose this one because we wanted to better understand how to encourage humans to help robots in the future.
Burke's Dramatic Pentad
To decide on the robot's appearance, we first designed the entire human-robot interaction we needed to observe. We employed Burke's Dramatic Pentad to determine the agents, scene, and purpose of the interaction. We then designed the interaction from start to finish using Tang's Matrix of Engagement.
We originally thought that the robot should speak to passersby to ask for help, but decided that a messenger robot should be simple and relatively "dumb." Thus, we designed the robot to communicate with humans through 3 different colored blinking LEDs, sound effects, and some physical positioning.
Tang's Engagement Matrix
The robot did not need to be autonomous for the experiment; people just had to believe that it was. So, we built the robot from a remote control car, which allowed us to inconspicuously drive it around as we observed interactions. We labeled the robot as a messenger and attached a letter tray to its top to clearly communicate its purpose. To make the robot spill, we originally planned to include an actuator powered by the control. However, by installing the letter tray at a slight angle, we ensured that the envelopes could spill from a change in momentum (quick stops, bumping into things). We gave the robot 3 states, "Moving," "Waiting," and "Error," each marked by a different colored LED. We also included a big, green "Resume" button for people to press after returning the envelopes.
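The three states boil down to a small state-to-LED mapping. A minimal C++ sketch of that mapping is below; green for "Moving" and red for "Error" come from the build described here, while the "Waiting" color is an assumption, and on the actual Arduino each color was of course a separate LED pin rather than a string:

```cpp
#include <cassert>
#include <string>

// Hermes's 3 operating states, each signaled by a colored LED.
enum class State { Moving, Waiting, Error };

// Illustrative state-to-color mapping. Green ("Moving") and red
// ("Error") match the write-up; yellow for "Waiting" is assumed.
std::string ledColor(State s) {
    switch (s) {
        case State::Moving:  return "green";
        case State::Waiting: return "yellow";  // assumed color
        case State::Error:   return "red";
    }
    return "off";  // unreachable, keeps compilers happy
}
```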
We took apart the RC car and hacked its motor circuit to power an Arduino Uno, which controlled the speaker and LEDs. The remote control had a button to open and close the car doors, so we built a full-bridge rectifier to harness that power and remotely trigger the robot's "Error" state: a blinking red LED and a negative sound effect that we programmed. During experimentation, we caused robot "errors" and controlled its movement from a distance.
We wired the green "Resume" button to play a positive sound effect and turn on the green "Moving" LED. We lengthened the car's antenna wire to improve control range. We built a cardboard body onto the car base, which held all the components and housed the wires and the Arduino.
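Put together, the error trigger and the Resume button drive two simple transitions. This is a hedged C++ sketch of the control logic the Arduino looped over, with pin reads abstracted into booleans and the function name invented for illustration; the real firmware also fired the sound effects and blinked the red LED:

```cpp
#include <cassert>

enum class State { Moving, Waiting, Error };

// One pass of the control loop (pin reads abstracted as booleans).
// errorSignal: rectified pulse from the RC control's door button.
// resumePressed: the big, green "Resume" button on the robot.
State nextState(State current, bool errorSignal, bool resumePressed) {
    if (errorSignal)
        return State::Error;   // remote-triggered spill: red LED, negative sound
    if (current == State::Error && resumePressed)
        return State::Moving;  // helper pressed Resume: green LED, positive sound
    return current;            // otherwise keep doing what we were doing
}
```

The asymmetry is deliberate: only the hidden operator can put Hermes into "Error," and only a passerby pressing "Resume" can bring it back out.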
After assembly was complete, we stenciled eyes on the robot to indicate which direction it would move in. We named the robot "Hermes," after the Greek messenger god, and labeled it as such to give passersby more context about its task.
Initial testing showed that we would not have as much range for remote control as we thought, so we had to position ourselves relatively close to Hermes and hide the control.
We ran tests in 3 sessions for about 6 hours total. A surprisingly large number of passersby completely ignored or did not notice Hermes, although that could be attributed to the commonplace nature of robots at Carnegie Mellon. We prepared observation sheets in advance to record demographics, whether the passerby noticed the robot, how long they stopped to help, how many envelopes they replaced, whether they followed the robot after helping it, whether they pressed the "Resume" button, and any extra notes about the interaction.
We recorded 250 subjects: 103 for the normal envelopes and 147 for the confidential ones. We ran p- and t-tests on our results and found statistical significance in one case: males were more likely to follow the robot if its envelopes were marked "Confidential." Overall, we did not find strong evidence to support the hypothesis that the perceived importance of a task makes people more likely to assist a robot in the event of task failure. It's also possible that labeling the envelopes was not a strong enough manipulation of perceived importance.
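For a binary outcome like "followed the robot," a comparison between the two conditions can be run as a two-proportion z-test. The C++ sketch below shows the calculation; the function name is illustrative and the counts in the usage note are placeholders, not the study's actual per-outcome data (which isn't listed here):

```cpp
#include <cassert>
#include <cmath>

// Two-proportion z-test statistic: compares the rate of some outcome
// (e.g. following the robot) between two groups, such as passersby
// who saw "Confidential" envelopes (n = 147) vs. normal ones (n = 103).
double twoProportionZ(int hits1, int n1, int hits2, int n2) {
    double p1 = static_cast<double>(hits1) / n1;
    double p2 = static_cast<double>(hits2) / n2;
    // Pooled proportion under the null hypothesis of equal rates.
    double pooled = static_cast<double>(hits1 + hits2) / (n1 + n2);
    double se = std::sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2));
    return (p1 - p2) / se;  // compare |z| against a critical value, e.g. 1.96
}
```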
However, we did see many noteworthy interactions with the messenger robot. Many people stopped to take pictures (and then might or might not help it). One person went into a nearby building to get a long piece of tape and taped the envelopes to the tray to prevent them from spilling. Another passerby wedged a folder under the tray and over the envelopes to hold them in place. Once one person had stopped, more people were much more likely to stop. One group consisted of ~7 people (some would leave and others would join) who engaged with the robot for nearly 10 minutes. The most dedicated assistant (pictured here) read the envelope labels and personally pulled and carried Hermes all the way to the addressed room, through 2 campus buildings. The envelopes were not marked as confidential in this scenario. When she delivered Hermes, she repeatedly said, "He got lost, so I brought him back here." These interactions demonstrated that some humans will go decidedly out of their way to assist even a simple robot in its task.