[MUSIC] What sort of code do you think we'll need to make a robust AR app, one that does something more than just display the geometry it has detected? For one thing, we should definitely have some code that watches the AR system state we talked about in the last module, so we know when the AR system is running and providing useful tracking info. And of course, we should probably put some sort of visual content into the environment, other than just the planes and points that help us visualize what the AR system has found. And finally, it would be cool if the user could interact with the environment as seen through their smartphone, to place objects or otherwise manipulate it.

First, let's tackle interacting with the environment. How can we take smartphone touch events and interpret them in the context of the AR environment? Normally in a game we would use the Physics.Raycast method to look for virtual objects along a specified viewing direction. But the trackables that the AR system finds normally don't have any geometry associated with them; unless we supply the AR managers with prefabs, they don't even have a visible presence in the scene. So to help us use the trackable data and interact with what it represents, Unity has provided new versions of the Raycast API calls that live in the ARSessionOrigin object. The ARSessionOrigin object is used because the raycasts need to be transformed from Unity space into AR session space.

Two kinds of AR raycasts are supported: one that looks for intersections with planes, and another that can also be used to look for points in the point cloud. The first method takes a screen position as input, which makes it well suited to finding planes where the user touches the screen. The second method requires you to construct a Ray with a 3D position and direction, but it is more flexible because you can also search for points in the point cloud. Since an infinitely thin ray can't be expected to hit infinitely small points, you can specify a pointCloudRaycastAngleInDegrees parameter to search with a narrow cone instead; any points that lie inside the cone will be found. This parameter is only useful if you need to search for points, though. Both of the new AR Raycast methods allow you to specify a trackable type so you can restrict the search. The trackable types include not only planes in general, but a restricted plane type called PlaneWithinPolygon. This is probably the type you should use when testing touch events, because it will only match a plane if you touch it within the visualized area, which is probably what the user expects.

If we do use the Unity plane visualizers, it's also possible to use the Physics.Raycast methods to detect AR planes, as long as the prefab you use has a collision mesh enabled. The prefab we created does have a collision mesh, so the visualized mesh we see in our app does respond to the normal raycasting methods in the Unity Physics API. However, the point cloud information we get doesn't include a collision mesh, so we can't detect points with a Physics.Raycast.

The project work for this lesson is to create a master script for your AR scene that you will develop through the rest of this module and then submit for peer review at the end. Your first task will be to set up a callback to monitor the AR session state and display it on the screen. Then we'll set up some callbacks for the plane and point cloud managers, and display some text on screen showing the number of tracked planes and the number of points in the point cloud.
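Here is a rough sketch of what that monitoring script could look like. It assumes a recent AR Foundation version, where ARSession exposes a static stateChanged event, the plane and point cloud managers raise planesChanged and pointCloudsChanged events, and ARPointCloud.positions is a nullable NativeSlice; older versions name some of these differently. The class name, field names, and the UI Text reference are illustrative placeholders, not part of the course project.

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;

// Hypothetical "master script" that watches the AR session state and
// reports how many planes and feature points have been detected.
public class ARStatusDisplay : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;           // assumed to be assigned in the Inspector
    [SerializeField] ARPointCloudManager pointCloudManager; // assumed to be assigned in the Inspector
    [SerializeField] Text statusText;                       // a UI Text element for on-screen output

    ARSessionState sessionState;
    int planeCount;
    int pointCount;

    void OnEnable()
    {
        // ARSession.stateChanged fires whenever the session state changes
        // (checking availability, initializing, tracking, and so on).
        ARSession.stateChanged += OnSessionStateChanged;
        planeManager.planesChanged += OnPlanesChanged;
        pointCloudManager.pointCloudsChanged += OnPointCloudsChanged;
    }

    void OnDisable()
    {
        ARSession.stateChanged -= OnSessionStateChanged;
        planeManager.planesChanged -= OnPlanesChanged;
        pointCloudManager.pointCloudsChanged -= OnPointCloudsChanged;
    }

    void OnSessionStateChanged(ARSessionStateChangedEventArgs args)
    {
        sessionState = args.state;
        UpdateText();
    }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // The manager keeps a live collection of trackables, so we can
        // simply re-count it whenever planes are added, updated, or removed.
        planeCount = planeManager.trackables.count;
        UpdateText();
    }

    void OnPointCloudsChanged(ARPointCloudChangedEventArgs args)
    {
        pointCount = 0;
        foreach (var cloud in pointCloudManager.trackables)
        {
            // positions is nullable in recent AR Foundation versions.
            if (cloud.positions.HasValue)
                pointCount += cloud.positions.Value.Length;
        }
        UpdateText();
    }

    void UpdateText()
    {
        statusText.text =
            $"Session state: {sessionState}\n" +
            $"Tracked planes: {planeCount}\n" +
            $"Points in cloud: {pointCount}";
    }
}
```

Note that the callbacks are unsubscribed in OnDisable so the component doesn't keep receiving events after it is disabled or destroyed.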
Finally, we'll use raycasting to determine when you are touching one of the detected planes, and create an animated robot at that position in the scene. So let's get back into the editor and do some AR scripting.
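As a preview of that placement step, here is a minimal sketch. The video describes Raycast overloads that live on ARSessionOrigin, which is how the AR Foundation version used in this course exposes them; more recent versions expose equivalent Raycast methods on ARRaycastManager, which is what this sketch assumes. The raycastManager reference and the robotPrefab field are assumptions to be wired up in the Inspector for your own scene.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical placement script: when the user touches a detected plane,
// spawn the (assumed) animated robot prefab at the touch point.
public class RobotPlacer : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager; // newer equivalent of the ARSessionOrigin Raycast calls
    [SerializeField] GameObject robotPrefab;          // assumed animated robot prefab assigned in the Inspector

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0)
            return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began)
            return;

        // PlaneWithinPolygon only matches touches inside a plane's
        // visualized boundary, which is usually what the user expects.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // Hits are sorted by distance, so the first one is the closest plane.
            Pose hitPose = hits[0].pose;
            Instantiate(robotPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```

The Ray-based overload mentioned in the video works the same way, except that you build the Ray yourself and can pass a cone angle when searching for feature points instead of planes.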