The Schelling segregation model is a very simple agent-based model. It is simple in concept, and it is also simple to program on a computer. However, when we write computer programs, we might end up with bugs. To ensure that the computer program is actually simulating the Schelling segregation model, we need to verify the program. This verification stage is necessary for both exploratory and predictive agent-based models.

To do so, we note that in the Schelling segregation model, once an unhappy agent is identified at r = (x, y), in other words at the location of, say, a red agent, it will move to a new point r' = (x', y'), switching to an unoccupied spot in the next time step. To ensure that the computer program is simulating the agent-based model correctly, we need to check that the destination r' = (x', y') is empty before the move. Otherwise, with the move, we would erase another agent from the simulation, and if this continued we would end up with fewer and fewer agents as the simulation progresses. Furthermore, r = (x, y), in other words the old coordinates of the red agent, must become empty after the move. Otherwise, we would have forgotten to delete the unhappy agent from its old location, and over longer and longer simulation times that spot would become a source of unhappy agents, increasing the number of agents in the simulation. So the number of agents needs to remain constant, and the agents need to actually move.

After verifying that the computer program correctly implements the agent-based model, we move on to the calibration stage. If we are building a predictive agent-based model, this is done by adjusting p, the preference, which is the only parameter in the model. If we do not calibrate our agent-based model, our simulation results may agree qualitatively with the real results, but we will likely not have the quantitative agreement that we seek.

To calibrate an agent-based model, we can match the simulation results to aggregate or macroscopic data from the real world. For example, in the Schelling segregation model, we can measure an aggregate segregation index from real neighborhoods and then adjust the preference p in the model until the simulations produce the same aggregate segregation index, matching the data with the simulation results.

We can also calibrate the agent-based model using mesoscopic data. For example, for the Schelling segregation model, we can Fourier transform the densities in x and y for real neighborhoods to get n₀(kx, ky), and then adjust the parameter p in the model until the simulations produce an n(kx, ky) that best fits n₀(kx, ky). For example, suppose we have a 16-by-16 grid with alternating zeros and ones across a row or down a column; apart from the term recording the mean density, its Fourier transform has all of its weight at a single spatial frequency, because the pattern repeats every two cells (a short sketch of this example appears at the end of this passage).

Finally, we can also calibrate the agent-based model using microscopic data. For example, we can track the relocation histories of individual households in the real world, and then adjust the preference parameter p until the simulations produce the best match to these relocation histories. In particular, microscopic data will be required for calibration if the population of agents is heterogeneous and p takes on different values for different agents.

In this slide, let us say more about calibration at the mesoscopic scale. Suppose we have created a map showing how segregated a real community is, and we would like to know what value of p it corresponds to in the Schelling model. So here is the real-world data.
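As a concrete illustration of this mesoscopic idea, here is a minimal sketch (in Python with NumPy; the array and variable names are my own, not part of the model) of the 16-by-16 toy grid mentioned above. Apart from the k = 0 term, which only records the mean density, its two-dimensional Fourier transform has a single strong amplitude at half a cycle per cell, reflecting the repeat every two cells.

```python
import numpy as np

# Toy density map n(x, y): a 16-by-16 grid of alternating 0s and 1s across each
# row, i.e. stripes that repeat every two cells along x.
n_xy = np.zeros((16, 16))
n_xy[:, ::2] = 1.0

# Two-dimensional Fourier transform n(kx, ky) and its amplitudes.
amplitude = np.abs(np.fft.fft2(n_xy))

# Spatial frequencies along each axis, in cycles per grid cell.
kx = np.fft.fftfreq(16)
ky = np.fft.fftfreq(16)

# Drop the k = 0 term (the mean density), then locate the strongest
# remaining frequency.
amplitude[0, 0] = 0.0
iy, ix = np.unravel_index(np.argmax(amplitude), amplitude.shape)
print(f"strongest spatial frequency: kx = {kx[ix]}, ky = {ky[iy]}")
# Prints kx = -0.5, ky = 0.0 -- half a cycle per cell (equivalently, a repeat
# every two cells), exactly the stripe pattern we built in.
```

In practice the same np.fft.fft2 call would be applied to both the real and the simulated maps, and p adjusted until the two amplitude spectra agree.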
Well, we could run simulations of the Schelling model at two different values of p and obtain two different maps. Because of the stochastic nature of the model, we would not be able to obtain a simulated map that is identical to the real map down to the pixel. However, we can tell that the top simulated map is a better fit to the real map than the bottom simulated map. This judgment is a little subjective, though, and one simple way to make the comparison more quantitative is to make use of the Fourier transform (a small sketch of such a comparison appears at the end of this section).

In a nutshell, a Fourier transform reads in an image, like the real or the simulated map of a segregated community, and decomposes it into spatial oscillations at different frequencies and amplitudes. For example, if the red and blue neighborhoods in the map repeat in alternating blocks 100 meters apart, then we will see a strong signal at a spatial frequency of about 1/100 per meter. On the other hand, if the segregated neighborhoods do not exhibit repetition patterns that are 10 meters apart, then the amplitude associated with a spatial frequency of 1/10 per meter will be zero.

In the Fourier transform of the real segregated community, we find that the amplitudes are large only at small spatial frequencies; that is to say, the spatial scale of the segregated communities is large. On the other hand, when we Fourier transform the two simulated maps, we find that the Fourier transform of the top map also has large amplitudes at small spatial frequencies, whereas the Fourier transform of the bottom map has large and moderate amplitudes across a wide range of spatial frequencies. By comparing the Fourier transforms, we can then say that the top simulated map is the better fit to the real world, and thus that the preference p of the real world is close to the p value of the top simulation.

Finally, after verification and calibration, we are ready to test our agent-based model against the real world. This stage is known as validation, and it requires the simulations to be compared against additional data from the real world. Just as for the calibration stage, we can validate the agent-based model using macroscopic, mesoscopic, or microscopic data. More importantly, depending on the purpose of the agent-based model, the model can be calibrated against microscopic or mesoscopic data but validated against macroscopic data. A key reason for this is that during calibration we can collect more detailed data and take our time to train the agent-based model; we can also be more intrusive and collect more personal information, on a volunteer basis, for the training of the agent-based model. However, when we deploy the agent-based model for operational purposes, it is unlikely that we will be able to collect detailed microscopic data or recruit volunteers for extended periods of time. Because of these constraints, it therefore makes sense to validate the agent-based model, however it was trained, against macroscopic data.
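To show how the Fourier comparison described above might be automated, here is a minimal sketch (again Python with NumPy; spectral_mismatch is a helper name of my own, and the three maps are random placeholder arrays standing in for the real map and the two simulated maps, which in practice would come from census data and from Schelling runs at different values of p). Each simulated map is scored by how far its Fourier amplitude spectrum lies from that of the real map, and the map with the smaller score is taken to be the better fit.

```python
import numpy as np

def amplitude_spectrum(density_map):
    """Fourier amplitudes |n(kx, ky)| of a two-dimensional density map."""
    return np.abs(np.fft.fft2(density_map))

def spectral_mismatch(real_map, simulated_map):
    """Sum of squared differences between two amplitude spectra.

    A smaller value means the simulated map reproduces the spatial scales of
    the real map more closely, even though the maps never agree pixel by pixel.
    """
    return float(np.sum((amplitude_spectrum(real_map)
                         - amplitude_spectrum(simulated_map)) ** 2))

# Placeholder maps (1 where a red household sits, 0 elsewhere) standing in for
# the real community and for two simulations run at different preferences p.
rng = np.random.default_rng(seed=0)
real_map = (rng.random((64, 64)) < 0.5).astype(float)
sim_map_top = (rng.random((64, 64)) < 0.5).astype(float)
sim_map_bottom = (rng.random((64, 64)) < 0.5).astype(float)

scores = {"top": spectral_mismatch(real_map, sim_map_top),
          "bottom": spectral_mismatch(real_map, sim_map_bottom)}
print(scores, "-> better fit:", min(scores, key=scores.get))
```

The same score could be computed over a whole sweep of p values, with the calibrated p taken to be the one that minimizes the mismatch against the real map.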