One way to describe the characteristics of a sensor that's used for remote sensing is with spatial resolution. It's also a way of describing the image that's created by that sensor. If we look at this image here, this is an actual satellite image that I've zoomed in on really closely, so you can see the individual cells. If we describe the size of those cells in terms of their actual distance on the ground, that's what we're referring to as spatial resolution. So, in this case, these particular cells are 30 meters wide and 30 meters high. Typically, for our purposes for now, you can think of these as being square. There may be exceptions, or sensors whose cells are not quite square, but for us for now, just think of them as being perfectly square.

So, the spatial resolution has a big role in what you're able to see on the ground. Imagine you were trying to see something that's smaller than 30 meters, like a fire hydrant. Do you think that a sensor with a spatial resolution of 30 meters would be able to identify something that small, so that you could see it on the ground when you're looking at the image? Probably not. What about a car? Cars are much smaller than 30 meters, so it's unlikely you'll be able to see those either. What about a road? Well, yeah, most roads would be picked up pretty well; you'd be able to identify those because they're about 30 meters or more in size. Of course, anything bigger than 30 meters, say a building or a farmer's field, would definitely be picked up.

The whole idea here, remember, is that you're sensing one square at a time, one cell, and there's only one number assigned for that entire cell. It's how much light is being reflected from that square in its entirety. So, when you think about what might be in that square, if it's all one thing, then that's fairly straightforward: it's either reflecting a lot of light or not. Imagine it was, I don't know, a giant sheet of paper, so it was really white looking; then it's going to reflect a lot of light. Or what if it was something really dark, like a dark lake? Then it's probably absorbing a lot of that light. But what if it's mixed? What if half of it is a cornfield and half of it is a parking lot? Well, then you start getting mixtures of things, and really all the sensor can get out of that is how much light overall is being reflected from that square, and then it just decides on one number that it's going to give for that entire cell. So, not to make things too complicated, but the spatial resolution is easy to understand: it's just the size of the cells. But it has a huge effect on the amount of detail that you're going to be able to get out of an image, and really, detail translates into which specific objects you'll be able to identify.

Here's an example of a satellite image with what we would call a low spatial resolution. The cells that make up this image are one kilometer by one kilometer. Those are quite large cells in terms of what they're sensing on the ground, but that's not necessarily a bad thing, because this sensor was designed to be used over huge areas at the same time. This is from a weather satellite, and as you can see here, the whole idea is that it's able to detect things like cloud patterns, in this case over North America.
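Before we move on, here's a minimal sketch of that one-number-per-cell idea. The cover types and reflectance values are invented for illustration, not taken from any real sensor; the point is just that a mixed cell ends up with a single in-between value.

```python
# A minimal, illustrative sketch: the sensor records one value per cell,
# roughly an area-weighted average of the reflectance of everything inside it.
# The cover types and reflectance values below are invented for illustration.

def cell_value(covers):
    """covers: list of (fraction_of_cell, reflectance from 0 to 1) pairs."""
    total_fraction = sum(f for f, _ in covers)
    assert abs(total_fraction - 1.0) < 1e-6, "fractions must cover the whole cell"
    return sum(f * r for f, r in covers)

# A cell that is entirely one thing is straightforward:
print(cell_value([(1.0, 0.90)]))  # all bright "sheet of paper" -> 0.90
print(cell_value([(1.0, 0.05)]))  # all dark lake -> 0.05

# A mixed cell: half cornfield, half parking lot -> one in-between number
# gets assigned to the entire 30 m cell.
print(cell_value([(0.5, 0.40),    # cornfield (hypothetical reflectance)
                  (0.5, 0.15)]))  # asphalt parking lot (hypothetical)
```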
If we zoom out, this is actually what that satellite sees at any given time. It's able to see the entire disk, as they would call it, of the Earth at once. This satellite is very far away from the Earth, and it's designed in a way that having one-kilometer cells is actually perfectly fine, because the things it's trying to detect, such as clouds, are much larger than one kilometer.

This is a different image from a different sensor on a different satellite. The name of the satellite is Landsat, and the sensor on that satellite is the Thematic Mapper. Sometimes they'll have different names like that: the satellite is Landsat, the sensor is the Thematic Mapper. In this case, the spatial resolution of the Thematic Mapper is 30 meters. What we're looking at now is the Greater Golden Horseshoe area around Toronto. That's Toronto there, and this is Lake Ontario. Now we're able to see more detail at this spatial resolution than we would have seen with the previous one-kilometer resolution. Even zoomed out quite a bit like this, you can see things like main roads, bridges, individual farmer's fields, that kind of thing.

Here's another example. This is from the Indian Remote Sensing satellite, with a five-meter sensor. So, we're getting a little more detail now. You can zoom in more and see more specific types of objects. One of the reasons I like to show this one is to point out this big black area here, and there's a little bit there. So, what is that big black area? This is downtown Toronto. Is there a giant crater there? Or is there some kind of black hole? No. What could it be? Wait a minute, what's this thing we have here? Well, this looks like a giant white cotton ball. No, I don't think that's it. I think it might be a cloud. Yes, it's a cloud, and if you look at the way the sun is angled, you can see that this here is the shadow from that cloud.

So, why do I point that out? Because the type of remote sensing we're doing here is based on using the sunlight that's reflected off of the surface of the Earth. If there's a cloud in the way, then there is no sunlight being reflected off of the area that's in shadow, and if there's no light being reflected, then there's no information we can get from that area. Those cells are going to have low values, and all that looks like to us is a big black spot. So, clouds are a big factor in remote sensing when you're using visible light, or light that's coming from the sun. If you have a cloudy area, especially an area that's cloudy all the time, like say the Amazon River Basin, then it's very difficult with this type of sensor to collect information, because all of your images are cloudy. Even for Toronto, when I tried to get images for that, often there may only be a few a year that are cloud free.
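As a rough illustration of why clouds and their shadows are such a problem, here's a small sketch; the reflectance values and thresholds are made up for illustration, not taken from any real sensor or product. It simply flags very bright cells as likely cloud and very dark cells as likely shadow, and reports how much of the image is left to work with.

```python
import numpy as np

# Toy reflectance image with values from 0 to 1. In a real scene these would
# come from the sensor; here they are invented for illustration.
img = np.array([[0.30, 0.32, 0.95, 0.96],
                [0.28, 0.31, 0.94, 0.93],
                [0.02, 0.03, 0.29, 0.30],
                [0.01, 0.04, 0.27, 0.31]])

# Hypothetical thresholds: very bright -> probably cloud,
# very dark -> probably cloud shadow (little or no sunlight reflected back).
cloud_mask = img > 0.90
shadow_mask = img < 0.05

usable = ~(cloud_mask | shadow_mask)
print("Fraction of cells with usable information:", usable.mean())  # 0.5 here
```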
Often you can get images that have some clouds in them, and when we're looking for imagery, it will be ranked based on the percentage of cloud cover, because sometimes there are some clouds in there but they're not in the area I'm interested in. Other times the cloud cover can be quite extensive, and the satellite only goes over an area every once in a while, every few days, so you have to find that coincidence where the satellite happens to be overhead and it happens to be a clear day, so that you can get the maximum amount of information out of that image. Anyway, this is more to do with spatial resolution, but I thought this was a good time to mention the effect clouds can have on the quality of the information you're trying to get out of an image.

This is yet another satellite and sensor. This one is called IKONOS, and this is a four-meter spatial resolution. Now we're getting much more detail than we would have had before. You can see individual roads, buildings, parks, paths through a park; even residential streets are coming out fairly well. You can see swimming pools, things like that. And this is a different sensor on the same satellite: this is the one-meter sensor on the IKONOS satellite, and now you're really getting a lot of detail. Remember, the whole idea here is that you're trying to pick up information about smaller and smaller objects using smaller and smaller cells. If you have a one-meter cell, then any object that's around the size of one meter or greater, you will be able to see. Now we can actually pick out individual vehicles on the road, some better than others; larger vehicles are probably going to be easier to discern, but we're definitely able to see the outlines of rooftops. We can even see individual trees on the ground. So, we're getting a lot more detail from this much higher resolution image.

By the way, that's one way of describing this: higher spatial resolution versus lower spatial resolution. Higher spatial resolution means that the cells are smaller and you're getting more detail. Lower spatial resolution means that the cells are larger; sometimes this is referred to as a coarser spatial resolution, or more pixelated. They all mean the same thing. This next image is from an air photo that was taken from a plane with a 20-centimeter spatial resolution. Obviously, that's way better than the one kilometer. Well, I shouldn't say better than a one-kilometer spatial resolution; it's just that they're designed for different purposes. The one-kilometer one is great for weather patterns. This one is much better for things like urban planning. That's why cities often collect air photo data: because it's such high resolution, they can get a lot of detail off of it. They're able to look at things like the locations of trees, buildings, roads, and so on.

So, spatial resolution really has to do with what you want to use the data for. For example, here I have an image with a spatial resolution of 150 meters, and I've zoomed in to a scale of 1 to 50,000; that would be the map scale for this image. If I tell you that this is downtown Toronto and these are the Channel Islands, and I say, "Isn't this a beautiful image?", you kind of look at it and go, "How could you even tell that?" Yes, it's because I picked the image and I know where it is that I can identify it. But if you've never been to that area before, you might not even be able to tell what's there.
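Before we get back to that 150-meter image, here's a minimal sketch of the cloud-cover filtering idea from earlier. The scene records below are hypothetical; a real archive search goes through a catalogue, but the filter-and-sort idea is the same.

```python
# Hypothetical scene records: (scene id, acquisition date, percent cloud cover).
scenes = [
    ("scene_A", "2019-06-14", 82),
    ("scene_B", "2019-06-30", 12),
    ("scene_C", "2019-07-16", 3),
    ("scene_D", "2019-08-01", 55),
]

max_cloud = 20  # only keep scenes at or below this cloud-cover percentage

candidates = sorted(
    (s for s in scenes if s[2] <= max_cloud),
    key=lambda s: s[2],  # least cloudy first
)
for scene_id, date, cloud_pct in candidates:
    print(scene_id, date, f"{cloud_pct}% cloud cover")
```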
So, looking at that 150-meter image at 1 to 50,000, you might say that's a bad image, or that's a lousy image, or that's bad data. No, it's not. It's just not meant to be used at that map scale. If I take that same image, with the same spatial resolution, and just zoom out to a different map scale, so now we're at 1 to 46 million, suddenly you say, "Oh, well, that's actually kind of a nice looking image. That's all of North America. We can see the different mountain ranges and bodies of water." At this map scale, the same data with the same coarser, lower spatial resolution of 150 meters, which in a lot of ways we would say is not that great, is perfectly adequate for this particular purpose. This is all to say that when you're out looking for data and you're trying to figure out, "Well, what am I going to be mapping? What scale am I going to be working at? What are the types of objects that I want to be able to map?", then you have to think about the spatial resolution of the imagery, based on the satellite or the sensor that collected that data. So, a key factor when you're looking for data is: what is the spatial resolution?
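One way to make that resolution-versus-map-scale point concrete is to work out how large a single cell would appear when drawn at a given map scale. This is just the arithmetic behind the example above, sketched as a small helper rather than a formal cartographic rule.

```python
def cell_size_on_map_mm(resolution_m, scale_denominator):
    """How large one ground cell appears when drawn at a map scale of 1:scale_denominator."""
    return resolution_m / scale_denominator * 1000.0  # meters -> millimeters

# The same 150 m data at the two map scales discussed above:
print(cell_size_on_map_mm(150, 50_000))      # 3.0 mm per cell: visibly blocky at 1:50,000
print(cell_size_on_map_mm(150, 46_000_000))  # ~0.003 mm per cell: far too small to notice at 1:46 million
```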