Fast Forward: Thad Starner Talks Google Glass, Helping Dogs Talk


I was in Austin, Texas, last month for SXSW Interactive, where I had the chance to sit down with a number of tech industry execs for my video series Fast Forward, including Chris Becherer, VP of Product at Pandora.

In this edition of Fast Forward, we're talking with Thad Starner, Professor of Computing at Georgia Tech. Thad was the technical lead on the project that eventually became Google Glass, which he was wearing during our chat. But he's been in the wearable space for much longer than that, and his work focuses on the very nature of the human-machine interface. His most recent work involves building wearables that help dogs communicate with their handlers. Check out our chat in the video and transcript below.


Dan Costa: It's a busy SXSW. There is a lot of talk of augmented reality. There's a lot of talk of virtual reality. Wearables are still a very hot topic. When most people think of wearables, they think of either Google Glass or a Fitbit that you wear on your wrist to track your steps. When you think of wearables, what do you think?

Thad Starner: Well, for me it's any body-worn computer that helps you while you're doing some other, primary task. [Referencing his Google Glass] It's designed to be secondary. It helps you with some other primary task, like answering questions during an interview.

My notes are on my iPad mini. I have to keep looking down to get the latest notes and to get your title correct. You've got that right in [Google Glass] itself.

Yes. All I have to do is glance up, grab a couple of words, and go right back to the conversation. My old systems (I don't know if you've seen some of my older pictures) had the display right here [gesturing directly in front of his left eye]. As a matter of fact, there was one from 1997 where it was even in the lens. That was made by a company called MicroOptical at the time. It actually allows you to be pretty facile with having an intelligent assistant that can help you from second to second. It really changes your confidence and your ability to do things on the fly that you might normally stumble over.

You were one of the original crew. It seems like you guys were almost in competition to see who could build the coolest, most sophisticated, most functional wearable system.

I started making my system back in 1990, but I failed at it over and over again. It wasn't until '93 that I finally had a system I could wear in my daily life. At the time, I was working for a company called BBN. They helped make the first routers for the internet. After I put the device on, I realized within a month that it wasn't just good research. It was a good lifestyle.

That's what these devices are about. It creates a killer lifestyle for you. Of course, back then I had a cell phone connection for the internet that was measured in feet, not in bit rate. In other words, how many feet could I walk before the analog cell phone call dropped and I had to redial the connection?

Eventually, I went back to the MIT Media Lab and started recruiting more and more students who were interested in this. We had a little convoy of people who made the devices for their particular needs, and we figured out how to live as a community of cyborgs. That changed what we thought about the technology. I'll give you an example: suppose you come to visit the lab as a VIP. We'll have one person talk with you about what your interests are. While that's happening, there will be messages going back and forth between the eyeballs of the other members of the group, trying to plan your day. "Oh, he's very interested in augmented senses, so you take him to lunch. I need to go to a doctor's appointment at three, so I'll get him at two." So, you have these people typing on their keyboards, sending messages to each other while the conversation is going on, trying to schedule a day based on what your interests are. It kind of gets to the stage of telepathy.

That sounds very much like Slack, where we've got real-time communication and different channels taking shape. But instead of making it terminal-based and connected to a computer, where you have to be in front of your laptop or phone, it's wherever you are. You can do two things at once. You can have that conversation and schedule lunch.

Well, no, you don't want to do that. Your IQ drops by 40 points when you try to do that. What you want to do is make the computer support the conversation. For example, we have a system called the Remembrance Agent. It has a little noise-canceling microphone that listens to my side of the conversation, and everything I say gets transcribed and put into a text buffer. The computer is continuously looking up previous emails, previous papers, previous conversations that might relate to what we're talking about. It pulls up little reminders in my display of the details I might want to include in this conversation. So it's supporting the conversation, providing me information.
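The retrieval loop Starner describes is simple to picture: treat the last few seconds of transcribed speech as a query against an index of everything you've written. Here is a minimal sketch of that just-in-time retrieval idea in Python; the notes folder, file layout, and remind() helper are hypothetical illustrations, not the Remembrance Agent's actual code.

```python
# Sketch of a Remembrance-Agent-style lookup over a folder of past
# notes/emails saved as plain-text files (a hypothetical layout).
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Index the "remembrance" corpus once at startup.
docs = {p.name: p.read_text() for p in Path("notes").glob("*.txt")}
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs.values())
names = list(docs.keys())

def remind(transcript_window: str, k: int = 3):
    """Return the k past documents most similar to the last few
    seconds of transcribed speech."""
    query = vectorizer.transform([transcript_window])
    scores = cosine_similarity(query, matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [(names[i], float(scores[i])) for i in top]

# Called continuously as the speech recognizer emits new text:
print(remind("speech recognition with wearables Filochat"))
```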

I had a student who was talking to me about speech recognition and how you'd want to use it with wearables. And I said, well, if you really want to do that (I pulled up my notes while we were having the conversation), you should really look at Steve Whittaker's talk on Filochat from 1995. I believe he had some great details about how up to 93 percent of people's day is spent in opportunistic communication. I provided him with all of these details. My student just looked at me like, "How do you know all this?" It's all right here [points to Google Glass]. I knew enough to look for Steve Whittaker, and then the computer provided me with all of the details. Having that kind of support in real time is really kind of awesome.


We've also done it before, when I was giving testimony at the National Academy of Sciences in Washington, D.C. about augmenting humans. I showed them what I was doing: I was in D.C., my grad students back at Georgia Tech were listening in over a live feed, and I put up both my presentation notes and the notes my students were suggesting in real time [on the Google Glass head-up display].

So as I was testifying, they'd say, "Oh, you should talk about this," and throw me a URL, or "You should talk about this," throwing stuff up, providing other information. The panel that was interviewing me started asking questions of the students, not of me.

I was just kind of the face man for the knowledge of this consortium of people back at the lab, who could quickly answer questions because they were experts in the field. They could look up information a whole lot faster than I could. They simply used me to talk through. These intellectual collectives, this ability to have experts help you out at a moment's notice, is quite a powerful thing. On the other hand, if you try to say "Read email" while having a conversation [on a different topic], you sound like an idiot. Humans cannot multitask very well.

It's not multi-tasking. It's really augmentation.

Right.

It seems like the nature of being smart, the nature of knowledge, the nature of intelligence has changed. You used to have to memorize all of these things to have a conversation about them without referring to your notes, and that's not the paradigm anymore. You just need to know what questions to ask and be connected to a system that can deliver the answers to you.

There is a certain amount of information you need to internalize so you can look things up quickly and think in those areas, but for a lot of other things, you just want the world's knowledge at your fingertips, delivered when you need it. This idea of just-in-time information retrieval is very powerful. That's what I think is going to happen within the next five or 10 years. We're going to have these intelligent assistants that can help us on a second-by-second basis.

That's one of the great things about wearable computers. What I think defines them is that the computer becomes part of us. You've probably used binoculars before, right? It's something you pick up. You look at something in the distance. You adjust the focus. They're kind of finicky, right? You have to hold them just right and refocus every time you change from one subject to another, putting them up and down. You don't think of your binoculars as part of you. But your eyeglasses, you put them on, and you use them all day. You see through them. You don't think about them. You just use them. So, what we have right now with most of these devices, most of these computers, our smartphones, our tablets: they're not part of us. They're other.

As soon as you get the devices close to the body, suddenly the technology gets out of the way. That's the paradox of wearable computers: by bringing the technology close to the body, you get it out of the way. I think that's where we're going in the future. We're going to see more and more ways for technology to assist us, to make us more powerful, more independent, more confident, and more informed on a split-second by split-second basis. I want to see these intelligent assistants go from something you're talking to, to something that's part of you, something that's assisting you from one moment to the next, very proactively.

I think that concept of experiencing the world through the technology, as opposed to just using the technology, is a crucial difference.

That's the experience I've been working towards. A lot of that is simply the speed of access. Larry Page said to me, "It's all about reducing the time between intention and action." I was stunned. I've been trying to articulate the importance of the speed of access of a device. Here he did it in one phrase. The time between intention and action is crucial.

One of the demos I do [when I'm giving a talk] is to have everybody [in the audience] complete two tasks. I have them raise their hand after they complete each task. So, I'll say something like, "Okay, what time is it?" I'll raise my hand because my time is on the front of my screen right now, and my audience also completes the task very quickly. Then, the second task I have them do is to answer a question like "where is Mount Pinatubo?" I can raise my hand because asking the question got me the answer on my [Google Glass] screen right now. It's in the Philippines, and as a matter of fact, I have a little map of where it is right now [showing] on my screen.

It's so hard for my audience to get to that information quickly that they don't even bother trying. I'm talking about technical audiences, like Xerox PARC or Georgia Tech or Stanford. So here I am, in an audience full of nerds, doing an experiment with them, and most of them don't even bother trying to get to that information because they know it's going to take too long.

What if we can change it from something that takes too long to something that's as easy as checking the time? Suddenly, you make people a whole lot more powerful. If you can reduce the time from intention to action to two seconds or less, suddenly people will do it without thinking about it. It's part of them. Once it gets beyond two seconds, use drops off exponentially, and people think of [the technology] as an "other."

Before we started recording, we were talking about patron saints. And we were trying to figure out what Saint Stanislaus was the patron saint of. I've got an internet-connected tablet in my hand. I wanted the information. But I did not bother opening up Google, typing it in, and searching for it because I didn't care that much. But just by having that conversation, it popped up on your screen.

I mean, I have it here. I just asked. This is more explicit. I simply asked, "Saint Stanislaus was the patron saint of?" and the answer, Poland, popped up a couple of seconds later. It's quite powerful. It reduces the reluctance to try to find information out in the world. It's great for me as a professor because it makes me seem more intelligent than I am.

This is something that happened when I was a student at MIT. I was taking a class from Justine Cassell, and it was about gesture. During a review before the final exam, she asked, "What was the importance of deixis?" I had all my class notes on my computer, and I could access them very quickly with this [Twiddler] keyboard. This was before speech recognition got very good. What I would do is construct my sentences so that I could find the answer in time to finish the sentence.

So I raised my hand to answer the question. She called on me. I said, "At the beginning of class, we said the importance of deixis was, uh, uh, uh, um," because I made a mistake. I dropped my keyboard. I got into the wrong mode of Emacs [my text editor]. I said, "I'll have to get back to you. I got in the wrong mode."

The entire class broke up laughing. They had spent all this time with me and had not realized I was doing this on a daily basis. A professor who was taking the class with me leaned over and said, "Now I want one. You do this all the time, don't you? You can pull up information so quickly. It seems like it's part of you." Yes, that's exactly why I do it.

It was amazing that these professors and students who I had been working with for years just never realized how much assistance the device gives me. The first time I gave a professional talk, one of the senior students came up and said to me, "Thad, that was the best talk I've ever heard you give, but why were you wearing your computer?"

She had no idea that while most people have 3-by-5 index cards, I have mine up here [in front of me]. Having that sort of assistance made me a better speaker. It was just mind-blowing to people at the time that you could use your machine this way.

So, let's make a small pivot. We've talked a lot about human wearables. You're working with dogs now. Explain the work you're doing.

The project is called FIDO: Facilitating Interactions for Dogs with Occupations. Yes, it's an acronym, but it's a cute one. The idea is that there are all these working dogs out there: allergy-alert dogs, medical-alert dogs, search-and-rescue dogs, bomb-detection dogs, dogs that inspect luggage for the TSA for contraband or agricultural products. These dogs do a lot of jobs, but they have a very hard time communicating what they're sensing and what they're thinking to their handlers.


We had one of our trainers [at SXSW] a couple of years ago to do a [live] demonstration. She's diabetic and one of these people who can just faint from low blood sugar. She has a dog that's trained to react to that alert situation. The dog goes and finds the nearest person, grabs them, and tries to bring them to her, because she's going to pass out soon.

Before she's about to faint, it goes and gets help?

Yes. It can actually sense that change in body chemistry from smell and get help. This happened at the security gate at the Atlanta airport, and the dog did what it was trained to do: it went up to a TSA security officer. The security officer just thought the dog was being friendly, but our trainer said, "I'm going to faint in a second. I've got to sit down." You can imagine it would be a whole lot better if the dog could speak.

So, we've made a vest that lets the dog communicate alerts by pulling on it. When the sensor is activated, [a recorded voice] says, "Please follow me. My owner needs your assistance." Of course, the first time you test this out, people think they're being pranked, or they just don't believe it. Did this dog just talk to me?

It's an audio recording of a human voice that actually says that sentence?

Yes. We have to train the dog to do it twice because the first time the humans don't believe it.

The second time, does the dog say, "No, really! I mean it this time"?

No, we just have it repeat the same thing, and that seems to work. We're still testing it out. We need to test it on some unsuspecting people at Georgia Tech and see what they say. If it happens to you, you know it's part of our testing. We'll have one of our assistance dogs in training coming here to do that demonstration.

That dog's got to learn new tricks by 2 p.m. this afternoon?

As a matter of fact, we initially thought we were going to have trouble bringing the dog to SXSW, so we're going to be doing a lot of training today to be able to show off what we want.

The idea that these dogs have all these perceptions that they can articulate to us is kind of new. We had some press on our FIDO vest, and we got a call from the Georgia Tech police asking if the vests could be used for dogs that do bomb sniffing. We were surprised to learn that they have bomb-sniffing dogs; [they] explained that the dogs are used to scan the sports venues before people come in. We're like, "Oh. Tell us about bomb sniffing." We were interested to learn more.


It turns out the dogs are trained for things like gunpowder and C4, but also things like peroxide bombs, which...smell very different. Peroxide bombs are also very unstable. If you get one of those, you don't want the dog or handler anywhere near it. Gunpowder and C4 are very stable, so they're not a problem. What the Georgia Tech police wanted was for the dogs to have a vest with two little pull toys. If the dog smells gunpowder, it pulls the gunpowder toy, which geolocates the dog and shows its location on the officer's display. Then they know there's a bomb, but it's a stable one, so they can command the dog to stay there and bark at the bomb until they can find it quickly.

If the dog pulls on the other side of the vest, it's a peroxide bomb. We know it's unstable, so you recall the dog to get it out of the way and send in a different type of bomb-disposal unit.
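The two-pull logic Starner describes maps naturally onto a tiny event loop. Here's a rough sketch, assuming a Raspberry-Pi-class controller with the two pull sensors wired as buttons (via the gpiozero library), some GPS source, and an audio player. The pin numbers, read_gps() stub, audio files, and reporting transport are all hypothetical stand-ins, not the FIDO project's actual hardware.

```python
# Sketch of a two-pull alert vest: each pull toy triggers a distinct
# alert, geotagged with the dog's location.
import subprocess
from signal import pause
from gpiozero import Button

stable_pull = Button(17)    # left toy: gunpowder / C4 (stable)
unstable_pull = Button(27)  # right toy: peroxide-based (unstable)

def read_gps():
    """Placeholder for reading the vest's GPS fix."""
    return (33.7756, -84.3963)  # hypothetical lat/lon

def report(kind: str):
    lat, lon = read_gps()
    # Send the alert and the dog's location to the handler's display;
    # the transport (radio, cellular, etc.) is left abstract here.
    print(f"ALERT {kind} at {lat:.4f},{lon:.4f}")
    subprocess.run(["aplay", f"{kind}.wav"])  # audible confirmation

stable_pull.when_pressed = lambda: report("stable")
unstable_pull.when_pressed = lambda: report("unstable")

pause()  # block forever, waiting for sensor events
```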

The same thing goes for search and rescue...and these TSA agriculture dogs and what they're smelling in people's luggage. You can even do this for inspecting farmers' fields. There are some things, like sweet potato rot, where you can't tell from the outside that there's a problem. It's a new thing in North Carolina that is just ravaging their crops. The dogs can smell it. You can send out the dogs and have them search the fields until they find it. When they find it, they alert. You see where they're located, go there, dig up the roots, confirm that it's actually rotting, and then you get rid of that part of the field and start again.

It turns out that once we start using these dogs for these things, more people come out of the woodwork and new applications are discovered. Dogs that sniff truffles, for example.

That's what I was thinking.

That's one of my favorites. We have a hobbyist who wants to play with us now on that one.

Can I borrow the dog for the weekend?

Exactly.

The thing I like about it is that you're giving humans access to the dog's perceptions, which are in many ways superior to ours, and just using the technology to let them communicate those perceptions.

Their sense of smell is 100 times better than anything we do electronically. They have a very good sense of hearing. Their eyesight is not as good as ours, but they perceive differently. If we just allow them to communicate better, we might be able to learn a whole lot more about what they're capable of doing and how they might do their jobs better. Of course, the dogs think this is great fun. This is all play for them.

Let's move on to haptic feedback and haptic learning and some systems you've done there. Is it true you can teach someone how to play the piano without actually playing piano?

It's pretty crazy. We call this passive haptic learning. It's one of the superpowers that wearable computers can give you: the idea that they can teach you a complex mechanical or manual task without your attention. It is kind of surprising. We've done this with piano several times.

Suppose you want to learn Beethoven's "Ode to Joy." It's one of the simpler ones we use for our demonstrations. We have you put on a glove, and the glove has little vibration motors on each finger. You put on your Bluetooth headset, and it just plays the song over and over again as you and I are talking or you're reading email, or you're driving, whatever. You ignore it and it just plays the song over and over again. As each note is played, it taps the finger that belongs to that note. After 20 minutes, you have the first stanza of "Ode to Joy."

In the next 20 minutes, you have the second stanza. We have to break it up into the right chunks, but after about an hour and 30 minutes of just this tapping, you have the whole song. Then you put your hand down on the piano (we mark where the first note is, because otherwise you wouldn't know), and your hand becomes possessed and just plays the song. It's one of these things where if you focus on it too much, you can't do it, but if I distract you, you can.
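The core of that training loop is simple: replay the melody on repeat and tap the finger assigned to each note. Below is a toy sketch using the opening of "Ode to Joy"; the FINGER mapping and the tap() motor driver are hypothetical stand-ins for the glove's actual vibration hardware.

```python
# Sketch of a passive-haptic-learning replay loop: one vibration
# motor per finger, fired in time with the melody.
import time

# "Ode to Joy" opening, as (note, beats), playable with five fingers.
MELODY = [("E4", 1), ("E4", 1), ("F4", 1), ("G4", 1),
          ("G4", 1), ("F4", 1), ("E4", 1), ("D4", 1)]

# One finger per note for a five-note span: thumb on C4 up to pinky on G4.
FINGER = {"C4": "thumb", "D4": "index", "E4": "middle",
          "F4": "ring", "G4": "pinky"}

def tap(finger: str, seconds: float):
    """Stand-in for driving that finger's vibration motor."""
    print(f"tap {finger} for {seconds:.2f}s")
    time.sleep(seconds)

def rehearse(bpm: int = 100, repetitions: int = 3):
    beat = 60.0 / bpm
    for _ in range(repetitions):  # the real system loops for ~20 minutes
        for note, beats in MELODY:
            tap(FINGER[note], beats * beat)

rehearse()
```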

You play the song and, I imagine, you play the sounds back in your head to try and hit the same notes.

We did this with Chad Myers on CNN once. He has no musical talent. We put it on his hand while he was tracking a hurricane in the Gulf, then brought him out live on CNN. You could see him have this stage fright. This guy is in front of 20 million people every day, yet he gets stage fright as he puts his hand on the piano. You could see he didn't even know what the song was. He just started tapping it out, and as he went on, he recognized the tune. Suddenly his rhythm got better, and he repeated it, getting better and better as he went. You could see this "oh my word" revelation on his face. Having done it to myself a couple of times, I will say it is a very odd thing.

The thing we're trying to do now is figure out how far we can go with it. We started with single-note, single-hand melodies. Then my student Caitlyn figured out how to do chords. Now our standard piece is Mozart's Turkish March. For people who are musicians, you know that the two hands play very different rhythms in it. It's a complex enough piece that learning it is non-trivial. We're taking complete novices, throwing the gloves on their hands, and after a few sessions, they're playing this complex piece. I'm not saying they're great musicians, but they know the note sequence. We then got interested in what else we might be able to teach.


One of the things we started looking at was typing. Since we figured out how to do chords, we tried to do braille. Braille involves these six fingers [showing his thumb, index, and middle fingers on both hands] for the dots, or eight fingers if you do the extended form of it. Most people who lose their vision as they get older don't bother learning braille. It's also a real crisis with students: about 40 percent of them never learn braille, at least not well enough to communicate with it. For a student who is blind, learning braille is one of the big indicators of future educational progress as well as economic success. And braille teachers are itinerant; they just go from school to school because there isn't enough demand to have a permanent teacher at a particular school.

We started looking at making gloves that could teach people braille passively. The answer was yes. In four hours, I can give you an earbud and these gloves, and while you're doing whatever you want, it just repeats back to you, "The quick brown fox jumps over the lazy dog." It does one letter at a time, and you feel the letter on your fingers. After about four hours of this, I say, "What's the letter G?" and you know it's that [showing the pattern on his fingers]. "What's the letter A?" You know it's that. Not only can you type it, but you can feel it. You run your finger over the pips and know how to type the letter. You go, "Okay, now how would I type that? Okay, so that's the G." To our surprise, it just works.
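The same replay loop adapts to braille: each letter becomes a chord of taps across the six typing fingers, paired with the spoken letter in the earbud. The dot patterns below are standard six-dot braille; the present() routine and its audio and motor calls are hypothetical stand-ins, as in the piano sketch.

```python
# Sketch of braille-style passive training: six vibration motors,
# one per typing finger, tapped as a chord for each letter.
import time

# Braille cells: dots 1-3 on the left hand, dots 4-6 on the right.
DOT_TO_FINGER = {1: "L-index", 2: "L-middle", 3: "L-ring",
                 4: "R-index", 5: "R-middle", 6: "R-ring"}

# Standard six-dot patterns for a few letters.
LETTERS = {"a": {1}, "b": {1, 2}, "g": {1, 2, 4, 5}, "q": {1, 2, 3, 4, 5}}

def present(letter: str):
    """Say the letter in the earbud, then tap its dot pattern."""
    fingers = [DOT_TO_FINGER[d] for d in sorted(LETTERS[letter])]
    print(f'audio: "{letter}"  taps: {", ".join(fingers)}')
    time.sleep(1.0)  # pause before the next letter

for ch in "gab":  # the real training loops a full pangram for ~4 hours
    present(ch)
```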

What do you think is happening in the brain to enable that kind of connection? Because it's not conscious, and it's not just intellectual either. It's something else between the mind and the body.

There's a lot I really don't know. But there's some speculation by a colleague of mine, who believes what's happening is that touch is a much earlier sense, evolutionarily. Almost all animals have a sense of touch. That seems to fit with the idea that touch is handled by a very low-functioning part of our brain, very low lizard-brain type stuff, and maybe it's the exposure that's conditioning us to recognize these patterns. Whereas if you try to learn something like language while you sleep, it doesn't work very well, because audio is a much higher-brain thing. That's why we didn't expect the text-entry experiment to work. Music is very low-brain stuff, whereas systems of language are thought of as higher brain function.

It is not language-associated.

Right. The fact that learning text entry worked was a surprise to us. We're not quite sure how far up we can go. Can we teach sign language? Can we teach dancing? Can we help experts become more efficient? One of the things we want to do is get some baseball pitchers and record their arm movements and see if we can play that back to them in their downtime and increase their consistency.

I have no idea if any of this stuff is going to work. One thing we did try, though, was Morse code. Braille is just a key sequence. Morse code is rhythm.

We found that we can use the bone-conduction transducer on Glass to tap the side of your head. Normally it works by vibrating your skull and sending the sound directly to your cochlea, bypassing your eardrum. That's why I can wear this and get little alerts without anything blocking my ears. But if you're hit with 15 hertz, it feels like a tap.

We started with "quick brown fox jumps over the lazy dog" style training, but now we're just tapping the side of your head in Morse Code and believe it or not in 4 hours again—it seems like the magic time for the alphabet—most of our participants went down to 0 error...when we ask them to type S. We can show them dashes and dots on paper and they can write down the letters for it. We can tap the side of their head again and they can know what message we're sending.

So it's four hours, and we have them do this during a video game. We make sure we distract them with a video game. Video games turn out to be the most efficient distraction.

We've known that for a very long time.

That's true. We tried graduate entrance exams: reading comprehension, math problems. It turns out the only thing that's distracting enough is video games.

It's an interesting thing. We were just talking to Poppy Crum earlier today about audio and mixing senses. There's a lot of AR at this show; there's a lot of virtual reality. Some of the most interesting things are happening not just in one sense, but in a collection of senses. In this conversation we've talked about the visual space, the audio space, and also the physical, the tactile sensation. When you put all those things together, really interesting things start to happen.

That's the whole thing. It's not just the synthesis of the senses. It's also the fact that it's something you can wear with you throughout your day. Once you have that sort of proximity to the body, you can do all sorts of things with the body we didn't think about before.

Passive haptic learning is one, and another thing that came up was passive haptic rehabilitation. This was also quite a surprise to us. We had an open house demonstrating our piano-playing system, and one of our colleagues from the Shepherd Center, which does a lot of work on spinal cord injury, came by and said, "Hey, Thad, I want this for my patients." I couldn't understand why she would want people with tetraplegia to practice piano. She said, "No, you don't understand. Tetraplegia just means all four limbs are affected. My patients with partial spinal cord injury still have some sensation and some dexterity. I bet if we put your gloves on them and trained them to play piano, they'll actually regain some sensation and improve their sensation and dexterity."

She was right. We taught about eight different songs over eight weeks. Participants would have the gloves on two hours a day, five days a week and then have piano lessons three times a week. At the end of the eight weeks, the participants who had the gloves did much better. They had significant improvements in their hand sensation and dexterity compared to the people who just had normal piano lessons.

So, this idea that we can do rehabilitation, that we can trick your brain into devoting more and more resources to explaining these signals coming from your hand, is just astonishing. I don't know if you've ever had a broken leg or had a parent who had hip surgery, but rehabilitation takes a long time. It's very boring, and it's hard to make it fun. It's hard to get compliance. If I can just put on a glove and have rehab happen, and, by the way, give you this superpower of learning to play piano by the end of it, that's kind of motivating for a lot of people.

That's what we're doing right now: trying to see if this might help with the recovery of hand sensation after stroke. If that turns out to work, there are a million people a year we might help. It's very speculative. We have no idea if it's going to work or not, but we'll see.

All right. Let's get to some questions I ask all my guests. What technology trends do you see right now that concern you the most? What keeps you up at night?

Very little keeps me up at night. I sleep quite well.

You have three jobs.

I have three jobs. I don't have time to sleep. A lot of people's worries about technology are what you see in the press. Reality is a whole lot more boring than what a lot of commentators, or the movies, would lead you to believe. I guess the thing I worry most about is the echo chamber that social media can cause. People can believe pseudo-science. People can be led to believe things that just aren't true. People make commentaries on my work, and I'm like, that's wrong.

I do a lot of work with the deaf community. It turns out that teaching your kid sign language will help them acquire language skills faster. For the longest time people believed that if you teach your kids sign language, it will delay the onset of normal speech. That's precisely wrong. Teaching sign language from birth seems to help improve everyone's language skills.

Most kids can't really form words coherently until they're 12 months old. But they can start forming signs, you know, things like milk or food, at six months. Having children be able to express themselves is what gives them short-term memory. You learn short-term memory from learning how to communicate. So having kids able to sign six months earlier than they can speak may be giving them an advantage in memory and language in general. The original research is 20 years old, but they're pretty sure about it now. It wasn't until recently that even the medical community said, "Oh, you should teach not just deaf kids but all kids sign language." It's a pretty powerful thing.

We have a game called PopSign that allows hearing parents of deaf children, or any parents, to learn sign language by playing an addictive game on the web or on their mobile phones.

So, coming back to your original question, I see other situations where people get into echo chambers, especially around rare knowledge. It's easy to get people going down the wrong path.

On the optimistic side, what are you most encouraged by and most hopeful for?

Well, I like the stuff that Elon Musk is doing to try to convert us over to solar power. I'm the kind of person who is waiting for those roof tiles to be available so I can convert my house over.

In my own space, wearable computing, the intelligent assistant is taking off. We're seeing people's lives changed by the availability of these devices, not just because they're improving their personal and professional lives, but because they're literally saving lives on the operating room table, thanks to the information we can deliver just in time to the surgeon.

So there's this idea that we're going to have these intelligent assistants that can help us on a second-by-second basis. Finally, we are getting to the stage where these things are becoming useful.

What gadget, service, or device that you use every day has changed your life? What do you love the most? You cannot pick Google Glass.

Can I pick my previous prototype system?

I guess I'll allow that.

The system I had before Glass was a MicroOptical SVB6 display, the Twiddler keyboard, and a shoulder-pack computer. The hardware wasn't the game changer; it was the ability to take notes during conversations and refer back to them that changed my life. If I sit across the table from a Nobel Prize winner at lunch, I want to be able to remember that conversation. I want to know just the five words that will spur my natural memory.

It turns out that having this keyboard so I can type underneath the table and take notes on that conversation has made me a much more effective human being and more socially graceful, too. One of the things that my students were shocked to see is that before I see somebody, I pull up my Rolodex on them. It has the last time we talked together and what we talked about. It really helps me reload my brain about our conversations.

Suppose I'm visiting an old friend. I have notes like, "Hey, yeah, his daughter's kid was going off to college. She was studying geology; I wanted to put her in contact with my wife. I should ask him if that happened." It really enables you, not just in your professional career, but socially as well.

Yeah, we talked a lot about these devices making us smarter and making us seem smarter. That extension of memory, where you're suddenly connected to more of your life because you can remember more of it, because you can access more of it and get those reminders in real time, is an interesting application.

In the early days of wearable computing, we did three things: augmented/alternate reality, intellectual collectives, and augmented memory. The intellectual collective is a kind of social media. Augmented reality is all the rage these days. Most people don't realize it was called artificial reality before we coined the term augmented reality. Actually, I can tell you why we changed the name.

Why did you do that?

In my early days of studying this artificial reality, there was a guy named Myron Krueger, who wrote a wonderful book called Artificial Reality II. I highly recommend it to anyone who wants to understand augmented reality. Anyway, I think it was Timothy Leary who started conducting press interviews where he talked about virtual reality and artificial reality being like LSD drug trips. At the time, I was a student writing a [US military] grant proposal, and I didn't want the reviewers to look up the term artificial reality and see it affiliated with LSD.

So I thought I needed to find another term that wouldn't pull up something on the web that might get me in trouble, and we made this conscious decision to call it augmented reality. Fortunately for me, some other guys at Boeing were also looking at using augmented reality for assembly work, for doing electrical wiring harnesses in making airplanes, and they came up with the same term. I think I had the first written version of it, and they had the first publication on it. The name augmented reality stuck.

Did you get the grant?

I did. It's why I'm here today. Thank you to the NDSEG, the National Defense Science and Engineering Graduate Fellowship, I do believe. If it weren't for you guys, I wouldn't be here. I wouldn't be a professor. That grant allowed my professor to take on an additional grad student and allowed me to make wearable computers happen. So thank you very much.


If people want to follow you and find out what you're working on how can they do so online?

The easiest way is to go to Google Scholar, type in my name, and you'll get access to all of my papers; just click on the most recent ones. If you want something that's more day-by-day, go to the Google+ community on wearable computing. It's the one I run, and I try to put the coolest stuff that's happening out there, at least from my research, and we're going to start reviewing some of the stuff that's happening in general.

For more Fast Forward with Dan Costa, subscribe to the podcast. On iOS, download Apple's Podcasts app, search for "Fast Forward" and subscribe. On Android, download the Stitcher Radio for Podcasts app via Google Play.
