James E. Young, an assistant professor at the University of Manitoba, has done a lot of interesting things with robots, from adding puppy tails to them to turning a Roomba into a mood ring, in order to better understand how humans perceive and react to bots. His recent work includes a fascinating study which demonstrated that almost half of human subjects were willing to knuckle under and persist in a boring task when asked to by a robot — even though they’d been informed by a human that they were free to leave at any time. I talked with him about what we know and don’t know about the way humans relate to bots — and whether today’s robot designers are taking our social instincts into consideration when they program our new co-workers.
Q: It seems like a lot of what you do deals with the interaction between humans and robots — how people perceive them, what their instinctive reaction to them is. Is that fair to say?
A: I think that’s exactly it. I’m in computer science, so I’m on the technology side of things, but I really think there’s a lot to be done just thinking about the psychology behind it, while we’re making this technology. [Robots] are going to be working with people, so we might as well include that from the get-go.
Q: What drew you to that aspect of the field? What was the first thing that made you think, “there really needs to be someone looking at this”?
A: Well, to be fair, I was initially led [to examining human-robot interactions] by my supervisor in grad school. But what really got me hooked on it is, I was, like many people — my students are the same — I was under the impression, “It’s just a machine. This is just a silly exercise. It’s fun, but it’s silly.” But then you start running studies, you start getting people from the general public — intelligent, educated, smart, normal people — and you bring them in and they suddenly change. They interact with these silly machines as if they’re real. They’re not just playing along; they really react to them. And I thought, wow, this is actually really powerful. Not just powerful in terms of how we can use this to [design bots], but powerful in the sense that it’s hard to turn it off. We’re humans. We’re trained to relate to the world socially. We’re so well trained to relate to other people socially. So when a machine starts acting even a little bit that way, [those social skills] just come out. So we kind of need to pay attention to it.
In a lot of our studies that involve programming a robot with some kind of emotion, inevitably we’ll have a [participant] — an engineer, or a physicist, someone who’s very good at the technical side of things — who will say something like, “okay, I’ll play your game, but we all know that robots are just machines. [They don’t have emotions,] that’s for humans, or animals.” So they come in with this strong opinion — but then you see them interacting with the robot, and that goes out the window. They act just like everybody else. So even people who are consciously aware that the idea of a robot having emotions is a bit much, they play into it too. We’re all humans after all.
Q: Do you feel like we have a good grasp on the psychology behind that? Do we humans have a whole set of instinctive ways of determining if something is alive or not, if something has intention or not? Do we know what qualities a bot can exhibit that would make people think, “okay, this is a being,” as opposed to an object?
A: I would say we don’t yet have a good grasp on that. In the field of human-robot interaction, we have a lot of people from psychology, from philosophy, looking at this exact question, “what is it?” I haven’t myself seen a convincing story yet. A lot of it is down to that basic brain level of, if something looks like it might be alive, and acts like it might be alive, part of our brain says, “well, I guess it’s alive.” In some of my work, and other people’s work as well, we have little disc-shaped robots that move around, and people will attribute emotion to them. There was a famous study by Heider and Simmel — this is going back a long time, to the 40s, I believe? — and what they did was they had moving shapes, a circle and two triangles, and they would move around a screen, and they showed that people would construct stories, just from these shapes moving around. So people will look at a robot moving around and say, okay, maybe this has intention. Like a kid looking at a shadow on the wall and imagining all kinds of monsters out to get them.
Q: Some of your work looks at how people will interpret a robot’s motion as an emotional cue. But in another study you did, people were not only attributing emotion to a robot, they were following its instructions, arguing with it. In the first study, it seemed like people were thinking of the bot like a pet, whereas in the second they were imbuing it with authority, were willing to obey it. Do you think designers are aware of that, when they’re creating these bots? That the way they design the bot, the way the thing looks, can change how people perceive it in this way?
A: Absolutely. In our case, [with the authority study], it was actually an undergraduate student who came to me with the idea, and I said, “yeah, it’s not going to work. Have you seen our robots? They’re like little children.” But hey, it’s an undergraduate project, let’s just try it out. And we were shocked at how many people obeyed the robot. Not only how many people obeyed, but how they went about it, whether they obeyed or disobeyed. Even people that didn’t obey, they got uncomfortable, they started rationalizing, arguing. It’s just a machine. You can walk away, you know? But because the machine is engaging them with these social cues, that’s very, very difficult to do. I do think there’d be stronger results with an intimidating, large robot. Our bot was a little bot — change it to a stronger voice, a more insistent tone, I think all those things would have an impact. For me it’s even more surprising that this worked at all, given everything working against it — a cute little bot with a cutesy little voice, talking softly.
We actually did a follow-up study looking at different types of bots, different shapes — a small humanoid robot, a disk-shaped robot, and a computer that talked — a big box, basically, that talked to you. Our hypothesis was that the more human-like it was, the more people would listen. But we didn’t find that at all. Negative result. So we’re still not exactly sure what’s going on, though we have lots of ideas. Even with the speaking box, we put one of those old-fashioned audio meters on it, so when it spoke the meter went from green to red. So it still had a bit of personality when it talked to them, and it used the exact same voice, the exact same words, that the other robots used. But it didn’t gesture, didn’t look at them. Yet people still obeyed, still argued with it. So we’re not sure exactly what’s going on. My gut feeling is that as you add more of these cues, you’ll get more impact. But in our follow-up study we didn’t find that yet. I actually have a student who’s going to come back and do her PhD, looking at exactly that.
It’s been cool looking at all this stuff. But as a computer scientist and a robot designer, I want to know what the parameters are. If we want a machine that people won’t follow, how do we design that? If we have a security guard robot that we want to be a bit more authoritative, how do we build that in? What are the parameters of it?
Q: I don’t know if you’ve seen Rethink Robotics’ newest bot, Sawyer? They’re well known for creating the Baxter robot, which is very humanoid-looking — two arms, shoulders, a face, these cartoony eyes. But their new bot Sawyer, it has a little face on it still, but it’s just an arm. It’s not very human-looking. Reading about your work, it made me wonder, would people interact with something like that differently?
Because in some ways, there are a lot of advantages to designing a bot that looks nothing like a person — in many circumstances, creating a machine to do things people can’t do is kind of the whole point. But would robots like that be perceived as more threatening? Intimidating? It sounds like from what you’re saying that there hasn’t been much work done on that yet.
A: We’re definitely still finding stuff out. Looking at this robot, it actually reminds me of a project I saw when I was at the University of Calgary, where I did my PhD — there was another PhD student, he had designed a bot that was basically just a long stick coming out of a box, which he could move around and rotate. And his work showed that people attributed a personality to that stick, just because of the way it moved — it looked like it had intention, it was trying to approach things, moving around in space. Looking at Sawyer here, I imagine the same thing would happen.
A lot of robot design is about what’s practically good for getting work done. If I’m working with a Sawyer, with an arm robot, it should look like it has intention — kind of like if you’re reaching into someone’s personal space, say coming over their shoulder to grab something from their desk. You have that body language going on so that they know what’s coming, they have a chance of blocking you. It’s the same thing with designing a Sawyer-type bot — making sure you use those eyes to signal intention, having it look in the direction it’s going to move, so you know what it’s paying attention to. It’s a good idea to build as much of this stuff into the bot as you can, just to make it easier to work with. You could turn these things off, if you wanted to, in the name of efficiency — get rid of the eyes. But efficiency doesn’t always mean getting as quickly as possible from A to B. If you’re scaring people who are working with the robot, or if they get hurt, that’s not very efficient either. So just because something doesn’t look like a human, we can still build in these social cues to make it easier to work with.
Q: As a layperson, to use your example of grabbing a pencil off someone’s desk — when I go to do that, as a human, I’m not consciously thinking, “okay, now stand two feet behind and to the left and make a small noise to attract attention.” But if you want to get a robot to do it, you have to program that in. Do you feel like designers today are thinking about all these unconscious cues people use, when they’re designing bots?
A: I think it’s starting to emerge. In the research community, particularly the human-robot interaction community, it’s pretty standard. In robotics as a whole, people that are building the motors, solving the hard problems with things like balance – I think it still needs some work there. Sometimes we have this problem — and I’ve been guilty of this myself in the past — of thinking, oh yeah, soft and fuzzy is great for the living room, but we just want to get this done. If I want to make a pet robot, sure, I want it to feel good. If I’m designing a rehabilitation robot for a care home, sure I want it to be warm and fuzzy. But in a factory, we kind of feel like, well, it’s about efficiency, it’s about solving the job. But as I tried to argue in my piece for the Wall Street Journal, getting the job done includes the warm and fuzzy. If you watch people working together, they’re not machines. We use a very detailed communication channel, and you don’t think about how you’re using it, it just comes out — but as we say in computer science, it’s high bandwidth. [Our body language] conveys a lot of information very quickly. If robots are going to be working with people, and you don’t pay attention to that, you’re missing out on a channel. It’d be like designing a video game and deciding not to use sound. You’re missing out on an additional channel of communication.
A student of mine went to an exhibition in Calgary recently, [he was doing a demonstration involving Baxter] and a lot of people came up to him afterward and asked, well, does the robot really need a face? Couldn’t you get rid of that? So while there are firms out there that are incorporating these ideas, I think there’s still work to be done to convince people that it’s not just cute, it’s not just fun, it’s actually important and efficient.