Our next topic is the actuated puppet. The work we introduce here is "An Actuated Physical Puppet as an Input Device for Controlling a Digital Manikin." The motivation is that 3D character posing is difficult. Suppose you have a character and you want to set its pose. The most popular approach, I think, is 3D widgets: on the screen you have a widget like this, and you use a mouse to control individual joint angles. However, controlling joint angles one by one is very tedious; each joint has three angles, so it can take a lot of time. Another approach is motion capture. You have a capture device and record a performance, but you need a dedicated environment and also a skilled actor. Yet another possibility is a physical input device. This is very convenient: you grab a puppet, pose it, and you see the result in the 3D view, and being able to use both hands is very good. However, standard physical input devices have a stiffness trade-off. If the joints are loose, the puppet cannot hold a pose; if they are stiff enough to hold a pose, they are very hard to move. You also have to manually specify all joint angles, which is inconvenient in some cases.

So our approach is to add a servomotor to each joint, giving an actuated physical puppet. We not only use the puppet as an input device; the device also reacts to the virtual data to provide active feedback. There are a couple of benefits. First, it is easy to load an existing pose into the robot, so you can start from an existing pose to accelerate posing. That is one benefit. Another is intelligent gravity compensation. Here is a little explanation. As I said, if all joints are free, the puppet is easy to move, but it cannot hold its posture, so fully relaxing the joints is inconvenient. The opposite approach is to keep the servomotors always on. Then the puppet can hold an existing pose, like these legs, but on the negative side it becomes very hard to change the pose. So what we do is adapt: this is the intelligent gravity compensation. If you do not touch this arm or these legs, each is held fixed by its servomotor, which produces torque against gravity. But when the user starts manipulating a joint, the system automatically detects it and turns off the relevant servomotors, so you can move the joint with a very light force. That is intelligent gravity compensation.

Finally, we also implemented active guidance using measured human data. This consists of joint coupling and data-driven inverse kinematics. Let me describe them a little more. Joint coupling means that two joints work together. An easy example is the hip joint and the knee joint. As you know, if you lift your leg by rotating the hip joint, the knee joint also follows; it is very hard to keep the leg straight as you rotate the hip joint. So when you rotate the hip joint, it is very natural to bend the knee joint together with it. With actuation we can reproduce this: when you move the hip joint, the system automatically rotates the knee joint. This joint coupling is very useful for generating natural human postures, because it automatically avoids unnatural configurations.
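To make the joint-coupling idea concrete, here is a minimal sketch in Python. The servo interface `set_joint_angle` and the linear 0.8 gain are hypothetical placeholders for illustration only; the actual system derives its guidance from measured human data.

```python
# A minimal sketch of hip-knee joint coupling. The servo interface
# and the 0.8 gain are hypothetical; a real mapping would be fitted
# from measured human poses rather than hard-coded.

def coupled_knee_angle(hip_angle_deg: float) -> float:
    """Toy coupling: bend the knee in proportion to hip flexion."""
    return max(0.0, 0.8 * hip_angle_deg)

def on_hip_joint_moved(hip_angle_deg: float, set_joint_angle) -> None:
    """When the user rotates the hip, drive the knee servo along with it."""
    set_joint_angle("knee", coupled_knee_angle(hip_angle_deg))

if __name__ == "__main__":
    # Stand-in for the real servo interface: just print the command.
    def set_joint_angle(name: str, angle_deg: float) -> None:
        print(f"servo {name} -> {angle_deg:.1f} deg")

    for hip in (0.0, 30.0, 60.0):
        on_hip_joint_moved(hip, set_joint_angle)
```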
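The intelligent gravity compensation described earlier can likewise be sketched as a small control loop. This is only a sketch under assumptions, not the paper's implementation: the servo method names are invented for illustration, and detecting the user's push is reduced to a simple deviation threshold.

```python
import time

DT = 0.01                    # control period: roughly 100 Hz
PUSH_THRESHOLD_DEG = 3.0     # deviation that means "user grabbed this joint"
REST_THRESHOLD_DPS = 5.0     # joint speed below which it counts as at rest

def gravity_compensation_loop(joints):
    """Adaptive hold/release loop over hypothetical servo objects.

    Each joint is assumed to provide measured_angle(), commanded_angle(),
    torque_enabled(), set_torque(on), and hold_at(angle); none of these
    names come from the paper.
    """
    last = {j: j.measured_angle() for j in joints}
    while True:
        for j in joints:
            angle = j.measured_angle()
            speed = abs(angle - last[j]) / DT
            if j.torque_enabled():
                # Servo is holding the pose against gravity. A large
                # deviation from the commanded angle means the user is
                # pushing, so release the torque and let the joint comply.
                if abs(angle - j.commanded_angle()) > PUSH_THRESHOLD_DEG:
                    j.set_torque(False)
            elif speed < REST_THRESHOLD_DPS:
                # The user let go and the joint has settled: re-engage
                # the servo so the new pose is held against gravity.
                j.hold_at(angle)
            last[j] = angle
        time.sleep(DT)
```

A real controller would low-pass filter the measurements and debounce both tests; the thresholds here are arbitrary.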
The other guidance component is data-driven inverse kinematics. Consider what happens if you control only a position. For example, suppose you grab the fingertip, pull it downward, and apply simple inverse kinematics. The result is mechanically and geometrically valid: the fingertip follows the user's control. However, only the joints along the arm rotate, and the resulting pose is strange and unnatural. Instead, we use data. We measure a large number of natural reaching gestures from a human and use them to synthesize an appropriate motion. So here, when the user pulls the fingertip downward, the puppet automatically moves with a very natural motion, driven by the data.

Let me show you a video. As I said, the physical puppet not only controls the virtual character; the virtual character also provides feedback to the physical puppet to guide natural pose editing. The puppet has 32 degrees of freedom and is connected to a PC. Primarily it is an input device: you control the pose, and you see the virtual character take the same pose. This is a very intuitive way of controlling character posture, which can be very tedious with a two-dimensional mouse. One benefit of using servomotors is uploading pre-defined postures into the puppet; this gives a very good starting point for design. Next is gravity compensation. If the torque is off, the puppet is relaxed but cannot hold a given posture: when you release a limb, it drops. On the other hand, if the torque is always on, the puppet holds the posture but is very rigid, and you need a strong force to rotate a joint. And this is the result of automatic gravity compensation: as you start pushing, the system turns off the power, so you can change the pose with a very light force, yet the system still holds the posture after you release. Then this is the result of data-driven IK. With data-driven IK turned off, the motion is very unnatural and robotic, because the puppet does not change its whole-body shape. With data-driven IK turned on, the resulting motion is very natural: the user controls the fingertip, and the system automatically bends the legs and moves the head and the other arm. And this is joint coupling: the user controls the hip joint, and the system automatically controls the knee joint.

Let me show you a couple of application scenarios. Suppose you are designing a car seat and visibility must be preserved; in this way, you can intuitively control the digital manikin inside the car environment. Here you can also check reachability: whether the manikin can reach a specific target in the car. This one is load assessment: the red color indicates where power is necessary to hold the posture, which is useful for analyzing the environment. And this is visibility analysis: by changing the posture of the puppet, we can see which parts of the environment the manikin can see. That is the idea.

Let me briefly describe the algorithm, specifically how to do data-driven inverse kinematics. As I said, data-driven IK works like this: if you move a fingertip, the puppet makes a very natural, human-like motion, and to do so we use data. But let me first describe inverse kinematics in general, since it is a fundamental problem in character animation and also in robotics.

Suppose you have an arm like this: a base, then joint one, joint two, and joint three, each with its own joint angle. Forward kinematics is the standard direction: given the joint angles, you compute the position of the fingertip. This is a deterministic, single computation. However, what people usually want to do is specify a target x, y position for the fingertip and then compute appropriate joint angles. This is a very frequently occurring problem: given x and y, compute the joint angles. It is an inverse problem, which is much harder to solve, and there are many algorithms for it. There are a couple of possible approaches: purely geometric (for example, minimizing the change from the current configuration), physically based (finding the most physically plausible shape), and data-driven, which is the method we use. We do something like this: in a motion-capture environment, we ask a person to reach many places around his body, and we measure all the postures together with the fingertip positions. So we get many samples, each pairing an (x, y, z) fingertip position with the joint angles of the whole body. At run time, the user input is a target fingertip location; we identify the nearby samples and blend them together, which gives the desired joint angles. That is what we do in data-driven inverse kinematics.
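To make both directions concrete, here is a minimal sketch in Python for a planar three-joint arm like the one just described: forward kinematics as the single deterministic pass, and data-driven inverse kinematics as a nearest-neighbor blend over captured (fingertip position, joint angles) samples. The inverse-distance weighting and the plain averaging of angles are simplifying assumptions for illustration; the paper's blending (and the radial-basis-function method cited at the end) is more sophisticated.

```python
import math
import numpy as np

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics: relative joint angles -> fingertip (x, y).

    This is the deterministic direction: one pass down the chain.
    """
    x = y = 0.0
    heading = 0.0
    for theta, length in zip(joint_angles, link_lengths):
        heading += theta
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

def data_driven_ik(target_pos, sample_positions, sample_angles, k=4):
    """Blend the k nearest captured poses to reach target_pos.

    sample_positions: (N, D) measured fingertip positions (D = 2 or 3).
    sample_angles:    (N, J) joint angles of the whole body per sample.
    Returns a (J,) blended joint-angle vector.
    """
    dists = np.linalg.norm(sample_positions - np.asarray(target_pos), axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; the epsilon avoids division by zero
    # when the target coincides exactly with a captured sample.
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    # Plain weighted averaging of angles is a simplification that only
    # behaves well when the neighboring poses are similar to each other.
    return w @ sample_angles[nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lengths = [1.0, 1.0, 0.5]
    # 200 "captured" poses: random angles plus their fingertip positions.
    angles = rng.uniform(0.0, math.pi / 2, size=(200, 3))
    positions = np.array([forward_kinematics(a, lengths) for a in angles])
    blended = data_driven_ik([1.2, 0.8], positions, angles)
    print("blended angles:", blended,
          "-> fingertip:", forward_kinematics(blended, lengths))
```

Because each sample stores the joint angles of the whole body, blending neighbors moves the legs, head, and other arm along with the controlled fingertip, which is why the result looks natural rather than robotic. Note that the blended pose only approximately reaches the target, so practical example-based systems typically add a correction step.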
Here is a summary. I presented an actuated puppet device for character posing. The user controls a virtual character using a physical puppet and also gets feedback from the system; the key features are intelligent gravity compensation and active guidance from the system. I also presented data-driven inverse kinematics: we take many example poses and blend them together to get a natural posture. To learn more, the original paper is titled "An Actuated Physical Puppet as an Input Device for Controlling a Digital Manikin." If you want to know more about earlier, non-actuated puppet devices, there are two papers: "Dinosaur Input Device" and "Of Mice and Monkeys: A Specialized Input Device for Virtual Body Animation." On inverse kinematics, as I said, there is a lot of work on this topic. An example-based method is "Artist-Directed Inverse-Kinematics Using Radial Basis Function Interpolation," which is a data-driven inverse kinematics technique. There is also a geometric approach, "Natural Motion Animation through Constraining and Deconstraining at Will," which introduces a purely geometric way to get a natural posture from fingertip positions. That's it for this week. Thank you.