The next topic we discuss is the control of a robotic lighting system. The work we introduce here is titled Lighty: A Painting Interface for Room Illumination by Robotic Light Array. The problem we want to address is that it is difficult to control many lights, especially when they can change their orientations. Here is an example illustration: suppose we have many lights on the ceiling, and each individual light can change its orientation. This gives a very large control parameter space, and it is hard to find appropriate parameter values. A typical user interface looks like this: you have many sliders to change brightness, orientation, and so on, and this is not very useful for someone who wants to quickly sketch a desired lighting configuration.

So our approach is to use a painting interface. The configuration looks like this: we have an environment with many robotic lights that can change brightness and orientation, and we have a camera that captures the environment. In this camera view, the user paints the desired lighting result: this part should be bright, this part should be dark. You just paint the desired configuration. The system then runs an inverse simulation and obtains the corresponding parameter setting. That is the idea. Let me show you a video.

Again, this is the system overview. You have actuated lights on the ceiling, and you are in the environment. To control the lights, you pick up a tablet, see the environment on its screen, and paint on it. This is the prototype hardware we developed: we built a miniature room with miniature lights and miniature furniture, and this is the array of robotic lights. Each light can change its orientation individually and can also change its brightness, so each light has three degrees of freedom. With 12 lights, that means 36 degrees of freedom.

Here is the painting user interface. On this screen, the user picks a color and paints. A lot is happening here. This view is always a real-time capture from the camera, so you are looking at the live camera view. As the user paints, feedback on the painted target is shown as these contour lines: this area should be bright, this one a little darker. Given this contour-line request from the user, the system continuously runs an optimization to find the desired parameters and, in real time, moves the robotic lights, and you see the result immediately in the camera view. So there is a lot happening behind the scenes: the user paints, the system searches for the parameter setting interactively, and you see the lighting result immediately, always through the camera view. Depending on the user input, the system automatically computes the parameter set and drives the robotic system.

Here is a more rapid example. If you touch down and move around, the stroke is actually painted and committed, but if you hover the pen tip above the surface, you can see a preview of the painting without committing it.

This simulates the traditional approach: you have 12 lights, and you individually control brightness and orientation. This is a typical interface, and it is very tedious to control the lights one by one to get the desired result. We compared this interface with our painting interface. If you want to illuminate a particular spot, that is relatively easy.
You just turn on a light and direct it at the target. However, if you want to make some part dark, it suddenly becomes very difficult with the traditional interface, because it involves controlling many lights: you move a light sideways, point it away, or turn it off, but there are many interactions between the multiple lights, and that is the difficulty. Okay, that is the end of the video.

So a benefit of our system is that, in addition to making specific regions bright, you can also make specific regions dark. This is a kind of negative light: you ask for a specific region to get darker, the system does it, and the result looks as if a negative light had been placed there. This is a result from our user study: with the painting interface, when the user was asked to make the upper-left corner dark, they just painted that region darker and got this result. With the traditional direct controller, however, it takes time and it is very difficult to get this kind of result.

Let me briefly describe the algorithm behind the scenes. This is what happens. These are the light parameters; there are many of them, such as light orientations and brightnesses. If you drive the lights with these parameters, physics happens and you get this camera view. Separately, the user paints the desired lighting result. The system compares these two images, optimizes the parameter setting, and converges to it. Running this optimization requires many iterations, so instead of actually driving the physical lights each time, we run a simulation: given a parameter setting, the system simulates the illumination result, compares it to the user input, and then runs the simulation again, and so on. That is the idea.

The physical process is too complicated to obtain an analytic model, so to do this simulation we use a data-driven prediction method. We captured many, many illumination results: for many combinations of lighting parameters, such as the first light's brightness or the second light's orientation, we captured the corresponding camera views. Based on this data, we predict the resulting illumination for a new, given parameter set. To do this, we basically control the lighting parameters of each individual light separately and then add the results together.

However, an important point is that naive summation of image data, that is, of pixel values, does not work. Suppose you have the illumination result of light A here and the illumination result of light B here. If you add the pixel values directly, the sum does not correspond to the result of illuminating with lights A and B simultaneously. That is because of the non-linear relationship between radiance and pixel value: the physical brightness is not linearly related to the pixel value, because the camera response is non-linear. To handle this, we first need to convert pixel values into radiance values, that is, real-world brightness values. After converting pixel values to radiance, you can accurately predict the summation of the two lights' illumination, and then you convert back to pixel values to get the prediction. That is what you need to do to build this kind of system.
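To make the radiance-space summation concrete, here is a minimal sketch in Python. Everything specific in it is an assumption for illustration, not the actual implementation: the camera response is modeled as a simple gamma curve (the real system would need a calibrated response, for example recovered from multiple exposures), orientation is ignored so that each light contributes one fixed basis image, and each light's radiance contribution is assumed to scale linearly with its brightness setting.

```python
import numpy as np

# Placeholder camera response: a plain gamma curve stands in for the
# calibrated response curve a real system would use.
GAMMA = 2.2

def pixels_to_radiance(img):
    """Convert 8-bit pixel values to relative radiance values."""
    return (img.astype(np.float64) / 255.0) ** GAMMA

def radiance_to_pixels(rad):
    """Convert relative radiance values back to 8-bit pixel values."""
    rad = np.clip(rad, 0.0, 1.0)
    return np.round(255.0 * rad ** (1.0 / GAMMA)).astype(np.uint8)

def predict_image(basis_images, brightness):
    """Predict the camera view for a new brightness setting.

    basis_images: one captured image per light, each taken with only that
                  light on at full brightness (orientation fixed in this sketch).
    brightness:   per-light brightness factors in [0, 1].

    The per-light contributions are summed in radiance space, not pixel
    space, because pixel values are non-linear in radiance.
    """
    total = np.zeros_like(pixels_to_radiance(basis_images[0]))
    for img, b in zip(basis_images, brightness):
        total += b * pixels_to_radiance(img)
    return radiance_to_pixels(total)
```

The key design point is the order of operations: convert each captured image to radiance first, add and scale there, and only convert back to pixel values at the very end for display or comparison.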
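Building on that predictor, here is a hedged sketch of the outer loop that searches for light parameters matching the painted target. The paper's actual optimizer is not specified here, so this uses a simple greedy random search purely as a stand-in, reuses the hypothetical predict_image function from the sketch above, and again omits orientation parameters; the mask restricts the error to the regions the user actually painted.

```python
import numpy as np

def paint_error(predicted, target, mask):
    """Mean squared difference between the predicted camera view and the
    user's painted target, evaluated only where the user painted (mask=1)."""
    diff = (predicted.astype(np.float64) - target.astype(np.float64)) * mask
    return np.sum(diff ** 2) / max(np.sum(mask), 1.0)

def optimize_brightness(basis_images, target, mask, iterations=200, step=0.05, seed=0):
    """Stand-in optimizer: perturb the brightness vector, keep the change
    if the simulated view moves closer to the painted target, and repeat.
    Uses predict_image from the previous sketch."""
    rng = np.random.default_rng(seed)
    params = np.full(len(basis_images), 0.5)
    best = paint_error(predict_image(basis_images, params), target, mask)
    for _ in range(iterations):
        candidate = np.clip(params + rng.normal(0.0, step, size=params.shape), 0.0, 1.0)
        err = paint_error(predict_image(basis_images, candidate), target, mask)
        if err < best:
            params, best = candidate, err
    return params
```

Because the inner prediction is only a sum of pre-captured images, each evaluation is cheap, which is what makes running many such iterations interactively plausible; the physical lights only need to be driven once the optimizer settles on a parameter set.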
In summary, I showed a robotic lighting system with a painting interface, and I briefly discussed what you need to do to implement this kind of thing: you need to run the simulation in radiance space instead of pixel space, that is, pixel brightness values. The original paper was published as Design and Enhancement of Painting Interface for Room Lights. If you want to know more about radiance computation, one good starting point is the paper Recovering High Dynamic Range Radiance Maps from Photographs, which discusses how to compute the original radiance values from multiple photographs of the same scene. As for lighting control with painting interfaces, there have been a couple of experiments in 3D graphics; one example is Lighting with Paint, in which the user paints the desired lighting result and the system computes appropriate lighting for the computer graphics scene. What we do is a real-world version of that. That's it for this week.