Fast Forward Q&A: How to Build Emotional Machines

Welcome to Fast Forward, where we have conversations about living in the future. At SXSW this year, we talked with Sophie Kleber, Executive Director of Product and Innovation at the digital agency Huge. In the interview below, we discuss conversational interfaces, making smarter machines, and the bot revolution, among other things. Watch our chat or read the transcript below.


Dan Costa: You gave a presentation about how we can make more emotional machines. What exactly does that mean, and why do we want to have more emotional machines?

I was asked this question in my talk. When I proposed emotionally intelligent human-computer interactions, people were like, "We don't even have emotionally intelligent humans, so is it like the blind leading the blind?"

The thing is, we're currently at a cusp in how we interact with machines. The time of what we call "the terminal world"—where we interact through terminals that got smaller and smaller but are really just screens—is over. The interactions that we have now are becoming much more intuitive.

Voice is leading the charge, but even the Internet of Things, in which computing power wraps around your life and does things for you, even small adjustments, without you consciously thinking about it, is a very, very new world of interacting with machines. The intelligence that's starting to come up as well is fascinating.

What we've noticed in research that we've done with people who have conversed with Alexa or Google is that the moment the machine starts talking, people assume a relationship. It's not just that these people say, "Oh, it's now easier." They also assume that the machine has some sort of empathy towards them, some sort of support. The moment to think about what that means and what that personality is, is now.

Then there's this other fear that's coming. It has been building since 1995, when Rosalind Picard at MIT introduced the idea of affective computing: machines that are able to understand, interpret, and decode emotions, and potentially to react emotionally themselves. There are these two sides to it, right?

Yes.

It's very, very important that we talk about it now because this is the first step into mind reading. By 2021, this is expected to be a $36.4 billion industry, so it's a massive industry of just detecting emotions. If you look at what these machines can detect, there are basically three methods: facial recognition, voice, and biometrics. Facial is the most advanced because of the micro-movements and micro-expressions that we have.

If you see what can be done, it's a little bit like mind reading. It's a massive opportunity and a massive responsibility. Someone I talked to at MIT said it's like nuclear power: now we have to see what we do with it. That's why we're talking about it now, because the moment is here. The intelligence is coming, the interaction models are changing, so we need to figure it out. People are ready for it, or assume it naturally, so we need to figure out what we do as an industry and how we get into it.

I think that phrase "terminal-based computing" is interesting. PCMag obviously covered the birth of the PC industry. We moved into the mobile space, but that was basically just a terminal that you could carry with you wherever you went. Siri and Google Now were okay interfaces, but with the rise of Alexa and Google Home, you find people at home asking questions and getting into conversations. They literally presume that there's some kind of emotional response. They read into it.

That's right.

That's because as humans we were built to look for those types of responses.

I have thought a lot about why Siri wasn't a success and Alexa is. I think there are two parts to it. The first one is that Siri was built as an alternative interface to an interface that was already there. Siri's on the phone, where there's already an interface, so why would I switch to voice, right?

Because the voice interface was worse than the screen-based interface, it didn't take off. Alexa doesn't have a screen interface. It's just a puck. It has only the voice interface. Plus, Alexa is in the comfort of your home, where you have a much different understanding of privacy. It's yours. You are much more comfortable just talking out loud to a machine versus in public, [where] I'm not going to go ask Siri weird, embarrassing questions.

I'm amazed at how many people do that, though. Complete voice dictation on a crowded train.

Yeah, it's starting to become a little bit weird because these people forget that they're not in a private space. They don't understand what everyone else can hear. We'll see how that goes. The assumption of emotions was pretty big. In our user research, we found everything from wanting a basic assistant, just doing the chores and reminding you of stuff, to one guy who said, "What if I could come home and unload onto my AI? I could just unload what happened in the day and the machine is like, 'Oh yeah, that's great.'"

He wanted it to listen.

Yes. He literally used the phrase "instead of paying the shrink." Then we asked people, in an ideal world, how would you describe the relationship between you and the machine? They said anything from assistant, to friendly assistant, to friend, to best friend, even mom. Two people had named their AIs: one had named it after their mom and the other after their child. These are very, very personal relationships, so it's already there.

I think when we think about what brands have to do, and could do, the first thing we need to think about is what we are trying to achieve here. Are we trying to make Prozac in computer form? I think, luckily, the research on happiness has evolved enough to understand that happiness is not the goal. People want a fulfilled life, to flourish. The way to flourish is not just being happy. It has to do with meaning. It has to do with meaningful relationships and accomplishment. This concept of resilience says you need to be down in order to get back up and feel like you have accomplished something.

Emotions are complicated as f**k. Happiness is complicated. We have to be very nuanced about what we want to achieve. If we assume for a second that we want to achieve flourishing, then we have to see, okay, what kind of inputs do we have? And what kind of ways do we have to react? I developed a framework, essentially, and there are two parts to it: the desire for emotion on the user side, and the permission to play on the brand side.

From a user perspective, you have to understand whether in this particular situation there's a desire for emotion. You look at the user's emotional state right now. Are they in an okay emotional state? What are their emotional ambitions? Do they want to go somewhere else from that state? What is the nature of the interaction? Is it a transaction? If I'm just transacting on American Airlines or whatever, there's no way for me to infuse emotions because it isn't an interaction that invites or requires that.

I think a lot of brands make a mistake right at that point. They're in a situation where it should be transactional and instead they're trying to have a conversation with their user and the user doesn't want to have a conversation.

The last 20 years perfected the internet as a transactional machine. If you look at all the companies that rose to power—Amazon, Google, all these travel sites—these are all transaction-based situations. We've perfected that, so we shouldn't mess with that.

In that framework, then, you have to see whether or not there's actually a desire for emotional interaction. Then you have to look at what the user's context is like. If we're sitting here having a conversation and an AI came in and said, "Oh, you seem stressed. Calm down"...

It's not going to go over well.

It's not the right moment in time. Exactly. Then on the brand side you have to think about a lot of things as well. I think it was in 2014 Facebook actually conducted an experiment.

A very controversial experiment.

A very controversial experiment. They wanted to understand whether, if you look at positive things in your News Feed, you get happier, and vice versa: if you look at negative things, you get sadder. They conducted this experiment where they showed thousands and thousands of people neutral-to-positive messages, and vice versa, neutral-to-negative messages, then measured the sentiment in their posts to find out whether they were happier or sadder.

Lo and behold, of course, they were. If you saw more negative messages, you would tend to post something more negative as well. The problem was they did that completely without humans agreeing to it. They did it completely without permission. They kind of fell back on their terms and conditions and said this is fine, but ...

Because they're engineers.

They're engineers.

They were just engineers beta testing. They were just A/B testing a theory.

Exactly.

They collected data and it was useful data.

It was useful data, but this is the danger zone. If you look at it from the other side, in 2014 Facebook intentionally made thousands and thousands of people sad. That is not ethically correct. We are currently teetering on this edge where a couple of companies are playing with it. We see companies going to market saying, "We can measure your stress level, and now we're just going to apply self-optimization and gamification to it and say you have a streak of being less stressed."

No one knows whether this is desired, so we've got to be very careful. You can't design what you can't understand. Cognitive interaction, and especially emotion, is something that we're only scratching the surface of. Every designer has to be very, very careful not to design something that we can't really understand. That, by the way, also prevents us from designing Ex Machina anytime soon, right?

Of course.

We can't design a machine yet that has ambitions when we don't really understand how ambitions are formed. We can't really design an emotional, manipulative machine when we don't understand how emotions work and why they work with one person and not with another. Then, of course, there are the laws of robotics that apply as well, the first law being: do not harm a human being or, through inaction, allow a human being to be harmed.

I'm not worried about the Ex Machina thing. I'm more worried about the lack of knowledge and therefore cheap tricks, like when you think about the Hershey Smile Machine, where you walk up and, if you smile, you get a free Hershey's.

There is something Pavlovian about that, more than anything else.

Exactly. We're not Pavlovian dogs, and I guarantee you this is not a genuine smile. The smile only turns genuine once you have the chocolate. We're turning things around in a weird way because we're playing, we're trying to figure it out.

I think you're right about the Hershey experiment, but there is technology that can tell whether or not that smile is genuine.

Right.

The question is who's going to be allowed to get that information? Obviously for the user, it would be great. That could be positive feedback. It could help you run your life better, but should your employer have access to that information? Should brands have access to that information? Then what are the rules about what they can do with that information? Because right now there are basically no rules.

If you look at the permission to play side of that framework: is it the right context? Do you have active permission from the user? I think at this point there needs to be an active agreement, so I can't just go and scan you from afar and say, "This person is happy or this person is sad," or monitor you in terms of employment. Of course it's tricky, because that environment is an owned environment, and you actively agree to enter that owned environment when you sign up for a job somewhere.

They read your emails, or they potentially could; you agree to that. Then, what is the purpose of that understanding? Is the purpose to change your emotions for emotional well-being, or is the purpose productivity? Those are not the same thing. Employers sometimes try to make them the same thing, because it would be beautiful if they could, but they aren't. I think in that framework the other thing is: does the company actually have a value proposition that allows it to play in that space?

Because we are just at the cusp of research, a lot of the work that's being done is in some sort of well-being space, like stress, weight loss, or with people who have difficulty decoding emotions. It's very much a well-being, health kind of space, but it's not going to stay there. What's the value proposition that lets you play there, and do you actually have the right intelligence? Do you actually have the right algorithms to decode what you're seeing, and then an understanding of what comes out of it?

When you look at that framework, you fill it with basically three different ways that a machine can interact. The first one is that it reacts like a machine. It understands the emotional input, but it outputs like a machine. Conversational IVRs do that, right? They understand your stress level, but they route you or expedite you to a human being, so it's much more like a switchboard than anything. Or safety features in cars: the car understands that you're dozing off or getting angry, and it reacts like a machine by pulling over or stopping.

They can read the emotion, they can acknowledge it, but then they react like a machine instead of reacting like a human.

Exactly. That's the first option. The second option is this idea of the machine reacting like an extension of self. There are two parts to it. One is to make the emotions visible to the user so it's a learning experience: telling you your stress level is high, telling you your anger level is high, things like that. [It's a] little bit [like] Big Hero 6...diagnostics.

It's this idea of just kind of exposing it, but the user is in full control of changing the emotions or acting in any way, shape, or form. This is the space that we are very comfortable in right now, but it moves very easily into a space of empathy. The question arises, would we ever be willing to pay for a service like that, with the premise that this service is uniquely here for us? The service doesn't take ads; like Apple, it isn't doing ads and things like that, and therefore we pay for it to be uniquely for us.

Then the last one is the idea of reacting like a human. That is the idea that I, as a user, enter an agreement with the machine allowing it to manipulate my emotions, still with the premise of my well-being, but I actively enter into the agreement that this machine can manipulate my emotions.

You give it some independence.

You give it some independence and you give it some permission to give you advice.

And to steer you.

To steer you, yes.

That Facebook experiment was very instructive. They figured out what makes people sad, what makes people happy. I can imagine people saying, "Well, I'll pay an extra $5 a month for a Facebook feed that makes me happy."

Right.

It makes me "happier," I should say.

It makes me happier, exactly. I didn't know this, but in the research I found when you ask Alexa or tell Alexa, "Alexa, I'm really sad," she actually reacts like a scene from Big Hero 6. She's like, "I'm sorry you feel that way. Sometimes listening to music or taking a walk or calling friends or talking to friends helps. I hope you feel better soon."

She's not qualified. This is like Google 101 research; I found this on the internet. But people are already thinking about it; engineers are thinking about it right now. It's not cognitive psychologists, it's not designers, it's not even marketers, but engineers who are thinking about these kinds of things. It comes very close, and the idea that you could very soon ask a machine for this type of advice is here. It's not tomorrow. This is right now.

Are there any brands or companies that are doing this well? That are providing a service that can do one of these things well?

There are a couple of companies. Because we're currently just moving from research into commercial applications, there are a few companies doing interesting things in these spaces. AutoEmotive is a company that does it in the driving space, in the auto space: okay, we're going to detect all of these things. But when you look at it, it's still a little bit wired together; it looks a little bit like a physical computing experiment. They're pretty well-funded as a start-up.

Affectiva, of course, does a lot of it in the commercial space by showing people ads and reading their micro-expressions. It's market research, but it's important and interesting.

There's one company called SimSensei, and it looks a little bit like talking to a Sim, but the idea is this: in PTSD treatment and things like that, soldiers, especially young men, have huge difficulties talking to therapists because of the stigma of the shrink and so forth.

They've developed an emotionally reacting, or empathetic, bot to start these conversations. While they're saying, okay, these conversations aren't necessarily the only therapy, they're just an auxiliary to real therapy, this has had huge success with this type of target audience. They feel much more comfortable talking to a machine because they think it's just out and over. It's like the guy in the research who was like, "I talked to my bot and it's done and it's out. It's over."

It's a way into something that many people previously didn't have access to, or that had a stigma around it, and that's where it actually starts being interesting.

That's fascinating. Let me get to my closing question. I ask all my guests this. What technological trend are you most concerned about going forward?

I do have to say it's the trend of understanding the emotions because, of course, that's why I'm talking about it.

You have concerns. You're worried about this.

I'm worried about this. Yale University and MIT just recently entered into an agreement to put $27 million aside to think about the ethics of this. When you see some of these detections and see what comes out, it is very intimate and it is very close to mind reading. I don't know your thoughts, but I know your feelings. There's something to that, and we have to see how we get comfortable with it.

I do, however, think that there's this constant adaptation curve between what technology can do and what humans want. It's kind of like playing tennis, right? Something comes out in terms of a technological capability and we're like, "Okay, cool." Then at some point we toss the ball back and we're like, "No, not cool. We don't want it."

I think a similar thing is going to happen here as well, and potentially, with the idea of exposing these emotions, we become more in tune with them as well.

We're talking about mind reading, but even short of mind reading it could be mass manipulation, technology-based manipulation.

Right.

Which is concerning.

It's concerning, and if you look at the balance, you know, we're talking about $27 million here and then an expected $36 billion on the other side. That's a little bit of chump change to put aside for the ethics of it. I do think that all technological advances make us think and make us adapt as humankind at lightning speed. I think that this is another piece of that.

In terms of positivity, positive trends, what do you see? What trend do you think gives you great hope, that you're really excited about?

I am very excited about the idea of not having to interact with machines through a screen anymore, because I do think that where we're going with conversational UIs, and additional UIs and non-UIs, truly non-UIs as well, is back to the idea of why we originally invented machines. That is the idea that we as humans get more of the thing we value most, which is time, and can live the life we want to live. We're moving away from the idea of serving a machine because we have to learn its commands, toward actually having the machine listen more. We're coming closer to that original promise, so I'm very, very happy to see where that goes.

Is there a gadget that you use everyday that you are just in love with, that changed your life?

My Philips Hue.

Really?

Yes. It's so crazy. I have it at home, only in one room, and I love being able to wake up with light versus waking up with an alarm, which is always a horrendous sound. I love changing the moods and things like that, and I love the potential it has once it's connected, once it might be connected to my biometrics, to just do things by itself. I think lighting is such a fantastic, small mood changer, and I love being able to connect these things.

I've heard a lot of people say that. I was very dismissive of intelligent light bulbs because they were very expensive and I didn't see the utility. Once I installed them in my house, I use them every day.

Yeah, it's crazy.

If people want to follow your work, find out what you're working on, where can they find you online?

The best place to follow my work is Twitter. I'm @BIBILASSI. Then of course at Huge Inc., we have a blog where we post on Medium; it's called Magenta. Sometimes when I get a real brain spark I post it there, too.

For more Fast Forward with Dan Costa, subscribe to the podcast. On iOS, download Apple's Podcasts app, search for "Fast Forward" and subscribe. On Android, download the Stitcher Radio for Podcasts app via Google Play.
