Our next topic is style by demonstration. The work we introduce here is called "Style by Demonstration: Teaching Interactive Movement Style to Robots." The goal is to teach interactive behavior to robots: the robot observes the user's behavior, the user does something, and the robot responds. We want to teach this kind of interactive motion.

Traditionally, doing this requires programming: the user has to explicitly hard-code how the robot should respond. However, traditional text-based programming in languages like C or Java is too difficult for designers, so our goal is to make it possible for artists to design robot behavior. To do this, we take a programming-by-demonstration approach. In the training phase, a user interacts with the robot while an operator controls it; the designer actually operates the robot as it interacts with the user. Then, at run time, using this demonstration as training data, the system automatically synthesizes robot motion, responding to the user's input in real time by referring to the training data. That is the basic idea.

Let me show you a video. Here is a user, here is the designer acting as the operator, and here is the robot. This is the training phase: the operator drives the robot in response to the user, and afterwards he leaves the scene. In this case, the designer is teaching an attacking behavior: the robot tries to push the user away from a region. By the way, everything is tracked with a motion capture system, so the user's position and the robot's position are continuously tracked and recorded as training data. After learning, at run time the robot replays the interactive motion while adapting it to the current situation. Here the robot is reproducing the burglar-attacking behavior on its own. Teaching this kind of motion with a standard programming language would be very tedious: you would have to take the user and robot positions as input, compute a lot of geometry, and hard-coding the motion is very difficult.

We also tested a tabletop configuration for teaching. Here the user is controlling the robot so that it follows the other character, that is, follows the user. In this case the user is teaching a stalking behavior: following a person from behind while staying hidden. This is the result. The system runs the learning algorithm on the training data, and that is why the robot follows the person like a stalker, keeping a certain distance. This is not just a replay of a predefined motion; it adapts to the current configuration. Okay, that is the video.

Now let me briefly describe the algorithm. Here is the situation. We have training data: the user's positional data and the robot's positional data, both as time series, so the training data contains many paired time series of positions. We also have the runtime situation, where the user's position and the robot's motion are continuously tracked. At the current time frame we have the history of user positions, the history of robot positions, and the user's current position. The task is to compute the robot's desired next position. So the input is all of this data, and the output is the desired position of the robot.
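To make this setup concrete, here is a minimal sketch of one way the tracked data and the synthesis query could be represented. The planar (x, y) positions, the window length, and every name here are illustrative assumptions, not the actual data format used in the paper.

```python
import numpy as np

# Illustrative stand-in for the motion capture recordings: in the real system
# these would be the tracked user and robot positions from a demonstration,
# sampled at a fixed frame rate. Here we just use random planar positions.
rng = np.random.default_rng(0)
demo_user = rng.uniform(0.0, 3.0, size=(600, 2))   # (num_frames, 2) user positions
demo_robot = rng.uniform(0.0, 3.0, size=(600, 2))  # (num_frames, 2) robot positions

WINDOW = 10  # assumed number of recent frames that describe the "current situation"

def current_situation(user_history, robot_history):
    """Stack the recent user and robot positions into one query vector."""
    return np.concatenate([
        np.asarray(user_history)[-WINDOW:].ravel(),
        np.asarray(robot_history)[-WINDOW:].ravel(),
    ])
```

The search-and-copy step that turns such a query into the robot's next position is described next.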
To synthesize motion similar to the training data, the first task is to search for a similar situation. The system takes the current configuration, that is, the recent user motions and the recent robot motions, and searches for the most similar situation in the data set. You can make this faster by indexing the data beforehand, but essentially it is a search for the most similar situation. After identifying it, the system basically copies and pastes: it takes the motion from that point in the training data and applies it to the current data. Of course, this is a very simplified view, but it describes the basic idea.

This kind of paired, example-based synthesis is inspired by a technique called Image Analogies, which was developed for image filtering. There, A, A′, and B are the input, and B′ is the result. The user wants to teach an image filter: A is an input image and A′ is the filtered output of that image, in this case a kind of watercolor filter that turns the input into something like a watercolor painting. The user provides A and A′ as the example, and also provides B as a source image. The system learns the filtering from A and A′, applies it to B, and produces B′ as the output. What happens internally is similar to the method we just described: for each pixel in B′, the system searches for a similar situation in A, A′, and B, and then copies the corresponding result from A′ into B′.

To summarize, in this short video we described how to teach interactive behavior to a robot by demonstration. The designer operates the robot at training time, and then the robot behaves based on the training data. Internally, the robot searches for the most similar data in the demonstration, or training data, and copies the previously demonstrated motion to the current situation. The algorithm is inspired by the Image Analogies technique developed for image filtering. To learn more, the original paper was published as "Style by Demonstration: Teaching Interactive Movement Style to Robots," and Image Analogies was published at SIGGRAPH 2001. The general concept of programming by demonstration, or programming by example, has been studied extensively in the HCI field; one recommended reading is the book "Watch What I Do: Programming by Demonstration," which introduces many interesting techniques and examples of programming by demonstration. Thank you.
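As a closing illustration, here is a minimal sketch of the search-and-copy synthesis step just described, continuing the illustrative data layout from the earlier sketch. The brute-force window matching, the Euclidean distance, and the use of a relative displacement for the copied motion are my own simplifications, not the actual system's features, matching, or indexing.

```python
import numpy as np

WINDOW = 10  # assumed length of the "recent history" used for matching

def synthesize_next_robot_position(demo_user, demo_robot, user_history, robot_history):
    """Nearest-neighbour sketch of the search-and-copy idea.

    demo_user, demo_robot:       demonstration time series, shape (T, 2).
    user_history, robot_history: runtime histories, arrays of shape (>= WINDOW, 2).
    Returns a desired next position for the robot.
    """
    query = np.concatenate([user_history[-WINDOW:].ravel(),
                            robot_history[-WINDOW:].ravel()])

    best_t, best_dist = WINDOW, np.inf
    # Slide a window over the demonstration and find the most similar situation.
    for t in range(WINDOW, len(demo_user) - 1):
        candidate = np.concatenate([demo_user[t - WINDOW:t].ravel(),
                                    demo_robot[t - WINDOW:t].ravel()])
        dist = np.linalg.norm(query - candidate)
        if dist < best_dist:
            best_t, best_dist = t, dist

    # "Copy and paste": reuse the robot motion the designer demonstrated right
    # after the matched situation, applied relative to the robot's current position.
    demonstrated_step = demo_robot[best_t] - demo_robot[best_t - 1]
    return robot_history[-1] + demonstrated_step

# Example usage with random stand-in data (the real input would come from
# the motion capture system at every frame).
rng = np.random.default_rng(1)
demo_user = rng.uniform(0.0, 3.0, size=(600, 2))
demo_robot = rng.uniform(0.0, 3.0, size=(600, 2))
user_history = rng.uniform(0.0, 3.0, size=(WINDOW, 2))
robot_history = rng.uniform(0.0, 3.0, size=(WINDOW, 2))
print(synthesize_next_robot_position(demo_user, demo_robot, user_history, robot_history))
```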