[MUSIC] Our ability to interact with objects in the virtual world is essential in creating the plausibility illusion in VR. Being able to change the state of affairs in the virtual world makes us feel that we're not just existing in the virtual world, but rather that we're part of it. It makes us feel that we're no longer just observing the world, but playing an important role in it.

There are different ways we can control objects in the virtual world, just as we do in the real world. We could have direct control of an object by simply grabbing it, rotating it, and putting it somewhere else. In VR, in order for this to happen, my hands have to be tracked, perhaps through the VR controllers I hold in my hands. And ideally, when I touch an object, I should also get some feedback from the controllers.

Or we can control objects using physical controllers. For instance, I can press a button to turn on the radio or hold the steering wheel tightly to drive my car. In VR, the best way to simulate this is to also use real physical controllers. For instance, the most realistic VR driving experience is achieved with a real steering wheel and car controls, which again give the kind of feedback you would get when driving a real car. However, this is not always possible, because users normally have limited access to physical controllers. So in VR we also use virtual controllers. These are graphical representations of physical controllers which you can interact with. For instance, there could be a virtual button to turn on the radio, which you press by putting your hand on it. Again in this case, because the button doesn't physically exist, when the user touches the button virtually, they should ideally get some haptic feedback.

Finally, another way we control objects in the real world is by asking somebody else to do it for us. For instance, if I want to switch on the light but I'm far from the light switch, I might ask somebody sitting next to it to do it for me, or when I'm not feeling well, I might ask somebody else to bring me a glass of water. In VR we can also explore this option, often referred to as agent control. In real life, we often use voice and gestures to communicate our demands, and in the virtual world we could do the same. Because speech and gesture recognition is still fairly unreliable, this method is not commonly used, but there are creative ways to apply it. For instance, in one of the games I saw, a little robot would bring items to the users for them to choose from.

There are several basic tasks in the area of object interaction which many applications share. The first one is selection, where the user acquires or identifies a particular object or a subset of objects. The real-world equivalent would be to pick up or point at objects by hand, or to indicate them by speech. The second task is positioning, where the user changes the 3D position of an object, similar to the real world, where we move an object from a starting location to a target location. The third task is rotation, where the user changes the orientation of an object. The last one is scaling, where the user changes the size of an object. There is no real-world counterpart for this task, but it is commonly used in the virtual world, for instance when an interior designer tries to figure out the best size for a painting on the wall. One of the benefits of defining these basic tasks is that we can then look at factors that could significantly affect user performance and usability.
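As a rough illustration of this task taxonomy (a minimal sketch, not from the lecture; the types and function names such as VirtualObject, select and setScale are hypothetical), the four basic tasks can be thought of as operations on an object's transform:

```typescript
// Hypothetical sketch: the four basic object-interaction tasks as transform operations.
type Vec3 = { x: number; y: number; z: number };
type Quat = { x: number; y: number; z: number; w: number };

interface VirtualObject {
  id: string;
  position: Vec3;     // world-space position
  rotation: Quat;     // orientation as a quaternion
  scale: Vec3;        // per-axis size
  selected: boolean;
}

// Selection: acquire or identify a particular object (or subset of objects).
function select(obj: VirtualObject): void {
  obj.selected = true;
}

// Positioning: change the 3D position, e.g. move from a start to a target location.
function setPosition(obj: VirtualObject, target: Vec3): void {
  obj.position = { ...target };
}

// Rotation: change the orientation of the object.
function setRotation(obj: VirtualObject, orientation: Quat): void {
  obj.rotation = { ...orientation };
}

// Scaling: change the size; no direct real-world counterpart,
// but common in VR (e.g. resizing a painting on a wall).
function setScale(obj: VirtualObject, factor: number): void {
  obj.scale = {
    x: obj.scale.x * factor,
    y: obj.scale.y * factor,
    z: obj.scale.z * factor,
  };
}
```

Framing interactions this way is what lets us study each task's usability factors in isolation, which is what the next example turns to.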
For instance, when it comes to the selection task, the user's interaction strategy can differ greatly depending on how far they are from the target and on the direction of the target: a target right in front of them is easier to select than one behind them. It also depends on the size of the target, the density of objects around the target, and the number of targets to be selected. Finally, if the target is hidden behind another object, it becomes trickier for the user to select it.

These basic object interaction tasks help to simplify how we design VR interactions by breaking sophisticated tasks down into their most essential properties. However, because of this oversimplification, they are definitely not a recipe for all VR applications. Many popular VR applications have their own specific tasks. Examples include playing ping pong with a bat, painting in 3D space with a paintbrush, using a medical prop in a medical simulator, or driving with a real controller in a driving simulation, just to name a few.
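To make the selection factors above concrete, here is a minimal sketch (hypothetical, not from the lecture) of ray-based selection over targets approximated as spheres. The names Target, raySphere and pick are assumptions for illustration only. Distance and direction enter through the pointing ray, target size through the sphere radius, and occlusion through the rule that only the nearest intersected target wins:

```typescript
type Vec3 = { x: number; y: number; z: number };

interface Target {
  id: string;
  center: Vec3;
  radius: number; // larger targets are easier to hit
}

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// Distance along the ray to the first intersection with the sphere,
// or null if the ray misses it (dir is assumed to be normalized).
function raySphere(origin: Vec3, dir: Vec3, t: Target): number | null {
  const oc = sub(origin, t.center);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - t.radius * t.radius;
  const disc = b * b - 4 * c;
  if (disc < 0) return null;
  const hit = (-b - Math.sqrt(disc)) / 2;
  return hit >= 0 ? hit : null;
}

// Pick the nearest target along the pointing ray; targets hidden
// behind a closer object are never returned, modelling occlusion.
function pick(origin: Vec3, dir: Vec3, targets: Target[]): Target | null {
  let best: Target | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = raySphere(origin, dir, t);
    if (d !== null && d < bestDist) {
      bestDist = d;
      best = t;
    }
  }
  return best;
}
```

Even in a toy model like this, you can see why small, distant, or occluded targets are harder to select, and why dense clutter around the intended target increases the chance of picking the wrong one. [MUSIC]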