It's hard to believe that it has been over a decade since the two of us started working together. One of the first major projects we worked on together was developing an interactive agent to use in some of our VR experiments. We called it Cristina. We'd like to talk a little bit about the challenges we faced and how we solved them, because a lot of it is still relevant today.

Cristina was able to carry on a conversation with a real person who was immersed in VR. Her graphical appearance wasn't great, even by the standards of that time. But what made her interesting was that her body language was quite realistic and, most important of all, it responded to the participant.

Speech interaction isn't easy now, but back then it was a big challenge. Getting a character to respond realistically with dialogue was beyond what we could do. Because Cristina was going to be used for experiments, where there would always be an experimenter present, we decided to use an approach called Wizard of Oz. Essentially, we faked interactive conversation by having an experimenter choose what Cristina was going to say from a number of options in a menu, just like the Wizard in the book.

On the other hand, this wouldn't work for a lot of the body language. Body language responds to other people very quickly, and the experimenter wouldn't be able to choose gestures and movements quickly enough. Also, we mostly produce body language subconsciously, so we aren't really aware of what we're doing. That means it is very hard to explicitly control it for a character. So, we decided to automate a lot of the body language, which involved sensing the behaviour of the participant and producing realistic responses.

The sensors we had available were very similar to what we have now in head-mounted displays: a head tracker and a microphone. The most important piece of information we needed was whether Cristina was speaking or listening. Speaking was easy: when the experimenter pressed a button, we just played back a particular piece of speech until it finished. For the audio, we recorded an actress reading the script. We then hand-animated body and facial animations to go with the audio. Our software played back the audio and animations together to make Cristina speak and gesture. In later experiments we motion captured actors as they read out the lines and used that animation, which gave more realistic results.

Listening behaviour was, if anything, more important. It created the illusion that Cristina was really paying attention to the participant and made her feel real. We used the microphone to detect whether the participant was speaking or not, and when they were speaking, our software would occasionally play back nods and smiles to give some friendly encouragement. Cristina's eyes were also very important. We had a simple automatic gaze animation where she would alternate between looking at the participant and looking away. Importantly, we changed the proportion of time she was looking depending on whether she was speaking or listening: she looked a lot more while listening, to show attention. This has proven to be quite effective; a short sketch of these listening and gaze rules follows the clip below.

Take a look at this video clip and please pay attention to the participant's body language.

"That shirt looks great on you. How much was it?"
"I think it was about 20 pounds, it was quite cheap. Thank you very much though. You look very nice too."
"Oh, thank you very much."

Here, the participant was clearly enjoying the interaction very much. So much so that he moved all the way to the back wall of the Cave in order to get closer to Cristina.
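To make those rules concrete, here is a minimal sketch in Python of how the listening and gaze behaviour might be driven. This is not our original code; the threshold, the gaze proportions, the backchannel rate and the function names are all illustrative assumptions.

import random

# A minimal sketch (assumed values, not the original system) of the rules above:
#   - crude voice-activity detection from microphone loudness,
#   - occasional nods/smiles while the participant is speaking,
#   - gaze that looks at the participant more while listening than while speaking.

SPEECH_THRESHOLD = 0.05   # assumed microphone amplitude threshold
BACKCHANNEL_RATE = 0.2    # assumed nods/smiles per second while listening

def participant_is_speaking(mic_amplitude):
    """Very crude voice-activity detection from microphone loudness."""
    return mic_amplitude > SPEECH_THRESHOLD

def choose_gaze_target(character_is_speaking):
    """Pick the next gaze target whenever a gaze shift is due.

    The character looks at the participant far more while listening,
    to show attention; both proportions are illustrative guesses.
    """
    look_at_proportion = 0.4 if character_is_speaking else 0.8
    return "participant" if random.random() < look_at_proportion else "away"

def update_listening(mic_amplitude, dt, play_animation):
    """Per-frame listening behaviour: occasional friendly encouragement."""
    if participant_is_speaking(mic_amplitude):
        if random.random() < BACKCHANNEL_RATE * dt:
            play_animation(random.choice(["nod", "smile"]))

The point of the sketch is how little is needed: a loudness threshold, a couple of probabilities and a handful of canned animations already give the impression of an attentive listener.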
What he didn't know was that we had also programmed Cristina to maintain a social distance from him. So the closer he moved, the further back Cristina moved. She would move to make sure that she was always at a comfortable distance from the participant. Cristina would also always turn to face the participant, as you would naturally do in a conversation. And in one experiment, we reduced the comfortable distance midway through the conversation, making her move forward and increasing the sense of intimacy. This had a strong emotional impact on our participants.

So, Cristina had quite realistic body language and had a strong impact on participants, using only a few simple rules that you can reproduce with current VR technology; a small sketch of the distance and facing rules is given below. The rest of this course is going to explain how to do some of these things and create socially skilled virtual characters.
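Before moving on, here is a similarly minimal Python sketch of those distance and facing rules. Again, the comfortable-distance value and the 2D floor-coordinate convention are assumptions for illustration, not the settings we actually used.

import math

# Sketch of the proxemics rules: keep the character at a comfortable distance
# from the participant's tracked head position, and always turn to face them.

COMFORTABLE_DISTANCE = 1.2   # metres; an illustrative value, not the original

def keep_social_distance(char_pos, participant_pos):
    """Step the character straight back if the participant gets too close."""
    dx = char_pos[0] - participant_pos[0]
    dz = char_pos[1] - participant_pos[1]
    dist = math.hypot(dx, dz)
    if 0.0 < dist < COMFORTABLE_DISTANCE:
        scale = COMFORTABLE_DISTANCE / dist
        return (participant_pos[0] + dx * scale,
                participant_pos[1] + dz * scale)
    return char_pos

def facing_yaw(char_pos, participant_pos):
    """Yaw angle (radians) that turns the character towards the participant."""
    return math.atan2(participant_pos[0] - char_pos[0],
                      participant_pos[1] - char_pos[1])

# Example: the participant steps very close, so the character backs away.
new_pos = keep_social_distance((0.0, 0.5), (0.0, 0.0))   # -> (0.0, 1.2)

Lowering COMFORTABLE_DISTANCE mid-conversation is all it takes to reproduce the "increased intimacy" manipulation described above.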