So, here's a cartoon example of a log page. Here we've got the DRAM, and here we've got Flash. The CPU builds these log pages. So, here's log page zero: time and the event, time and the event, time and the event, etc. There would be so many of these entries. Think of it like a data structure that you're filling up, okay? Then, once this page is built and more events are occurring over time with the firmware running, the CPU would create the next log page, and the next, and the next. Now, since they're in DRAM, this is volatile data if you lose power, so periodically firmware would flush these log pages out to NAND Flash or NOR Flash, some kind of non-volatile memory. This can be very, very handy when you get one of your products returned or mailed back to you. You can hook up your debugger and all your diagnostic tools and read out these log pages, so you can see what happened, at least from the CPU's perspective, in terms of critical event logging. As I said, the primary trade-off is the granularity and frequency of creating log entries versus the space, i.e., the cost in dollars associated with storing the logs. So, there's this trade-off and it's a spectrum. There's no right or wrong answer; it depends on your product, what your customer's requirements are, etc. If you've done a good job designing the log page system, it should capture the cause of the failure. You can then look through your logs and figure out: this is what happened at this time, I see what happened, and you have very high confidence. You have other engineers look at it. It's usually not just one person; it would be a team of engineers. Some companies have a whole failure analysis department. That's all they do. They look at failures that come back. That's their job.
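The DRAM-resident log page idea above can be sketched in C. This is a minimal illustration, not any real product's firmware: the entry layout, the page size, and the `flash_write_page` stub are all hypothetical names chosen for the example.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout for one log entry and one log page; the field
   names and the entries-per-page count are illustrative only. */
#define ENTRIES_PER_PAGE 126u

typedef struct {
    uint32_t timestamp;   /* e.g. milliseconds since boot */
    uint16_t event_id;    /* what happened */
    uint16_t data;        /* small event-specific payload */
} log_entry_t;

typedef struct {
    uint32_t    page_number;              /* 0, 1, 2, ... */
    uint32_t    used;                     /* entries filled so far */
    log_entry_t entries[ENTRIES_PER_PAGE];
} log_page_t;

static log_page_t current_page;           /* lives in DRAM: volatile */

/* Stub standing in for a real flash driver; production firmware would
   write the page to NAND/NOR and handle bad blocks, wear leveling, etc. */
static void flash_write_page(const log_page_t *p) { (void)p; }

void log_event(uint32_t now_ms, uint16_t event_id, uint16_t data)
{
    log_entry_t *e = &current_page.entries[current_page.used++];
    e->timestamp = now_ms;
    e->event_id  = event_id;
    e->data      = data;

    /* Page full: flush to non-volatile memory and start the next one. */
    if (current_page.used == ENTRIES_PER_PAGE) {
        flash_write_page(&current_page);
        uint32_t next = current_page.page_number + 1;
        memset(&current_page, 0, sizeof current_page);
        current_page.page_number = next;
    }
}
```

The granularity-versus-space trade-off shows up directly here: a richer `log_entry_t` or more frequent `log_event` calls means more flash consumed per hour of operation.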
They get these failed products, and they're lined up on a desk or rack or whatever, and they just work their way through one after another, and they get back to the customer with why that particular product failed, or what we believe to be the root cause. That may or may not make them happy. It depends, again, on how many products are failing over time, but at least you have some idea. But be aware, some failures are very, very difficult to root cause, and some are never determined. A particle strikes, and there's nothing wrong with the device. It just happened to take an alpha particle strike on some critical flip-flop that sent the processor out into la-la land, trying to execute data. You run all your tests, you look at all the logs, you see what happened: illegal instruction exception, that was the last log entry, and then nothing. Everything was fine up until then. We're not sure why that product failed. The customer may or may not be happy with that answer, but it is a possibility that there are transients that happen. Everyone has experienced their computer all of a sudden just hitting the blue screen of death, and we don't know why. It even happens on my Mac every once in a while. It's very rare, but these particles are flying through us all the time, and every once in a while one of them strikes a critical flip-flop inside a chip, and the outcome is unanticipated, difficult to determine, and nearly impossible to reproduce. Those are the really tough ones. So, a step beyond examining logs: sometimes you get good information in your logs, but it indicates that some kind of a defect has developed with this particular unit for some reason. This can lead to disassembly of the product.
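That "illegal instruction exception was the last log entry" scenario implies firmware writing one final record from the fault handler before reset. Here's a hedged sketch of what such a last-gasp record might look like; the event code, struct layout, and function name are all made up for illustration.

```c
#include <stdint.h>

/* Illustrative event code; real firmware defines its own table. */
enum { EVT_ILLEGAL_INSTRUCTION = 0xDEADu };

typedef struct {
    uint32_t timestamp;  /* e.g. milliseconds since boot */
    uint16_t event_id;
    uint32_t fault_pc;   /* program counter at the time of the fault */
} fatal_record_t;

/* Reserved slot; on real hardware this might sit in battery-backed
   SRAM or be pushed straight to flash before the watchdog fires. */
static fatal_record_t last_gasp;

void record_fatal_event(uint32_t now_ms, uint16_t event_id, uint32_t pc)
{
    last_gasp.timestamp = now_ms;
    last_gasp.event_id  = event_id;
    last_gasp.fault_pc  = pc;
    /* A real handler would then flush this record to non-volatile
       memory and force a reset. */
}
```

After the unit comes back, the failure analysis team reads that record out with the debugger, which is exactly the situation described above: the last entry says what the CPU saw, even if the underlying cause (like a particle strike) can't be recovered.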
So, you start pulling parts off of the board, looking for cracked printed circuit board traces. Remember, we talked about thermally stressing your products before you ship them, just to check for cracked traces, cracked vias, and cracked buried vias. Solder joints crack; it's a very common failure. Everywhere I've ever worked there are solder joints, whether they're BGAs or flat packs with the little pins that come out on the side and get soldered down, and thermal stressing, shock, and vibration just wear and tear on these things. Just imagine how brutal it is for all of those electrical components in the Trimble environment, because it's outside, it's cold, and they're digging dirt. It's a very, very harsh environment. So, they probably go through a tremendous amount of work on the design side to keep printed circuit board traces from cracking, testing and validating that their solder joints are solid, and so forth. There are underfill techniques, and many other manufacturing techniques that can be brought to bear to make the product more resilient to mechanical vibration and thermal stressing. But metal migration inside of a chip? Who knows; raise your hand if you know what metal migration is. So, I worked at Sperry Univac for five years, and then I went to work for Artist Graphics designing 3D graphics chips. That's when I got to know a bunch of the folks at LSI Logic; that company has since been acquired by Avago. We were talking to the support engineers, and they started to explain metal migration to me. Really? So, get this: the interconnections, those tiny little wires inside a silicon chip that connect all the different transistors together, the output of a flip-flop going into the input of a NAND gate, for instance. It's a physical piece of wire that used to be aluminum, but now they're copper.
When electrons are flowing, they're actually banging into the copper atoms, and over time that can cause the wire to stretch out and get thin. Because the current flow is predominantly in one direction, the wire slowly gets stretched out and gets thinner and thinner and thinner until it breaks, just due to the electron flow through that piece of wire: the flowing electrons banging into the copper atoms actually exert a tiny, tiny amount of physical force on them. That can cause a short, or excuse me, not a short, but an open, and your circuit is broken at that point. So, you can look for things like metal migration: you de-cap the chip, slice the chip on its side, and take X-rays and micrographs from the side to try and figure out what went wrong. There are all kinds of things you can do to dig in. It all depends on how much time and money you want to spend root causing a failure. But sometimes it requires this level of digging, peeling the onion as it were, going deeper and deeper and deeper trying to figure out what's going on, especially if you've got high volume and lots of return material coming back to you. Not good, and your management is going to want to know what's going on, so you might have to resort to some of these more drastic measures to figure out why products are failing. In building chips, there are lots of rules that are applied to try to mitigate metal migration and these other, what are called, grown defects; there's a whole range of defects that silicon chips can develop over time. But the process rules are supposed to keep those from happening in high quantity for a period of four to five years. That's why you don't see many electronic products with warranty periods longer than five years: because they wear out, and even Flash wears out.
For those of you who remember when I was talking about Flash: the wear-out mechanism is those electrons tunneling through the oxide between the substrate and the floating gate. Electrons get stuck in that oxide in the middle, and pretty soon you can't program or erase the cell, and it's end of life; there's nothing you can do. So, hopefully you won't ever have to go to those depths of root cause analysis, but it certainly is a possibility for some products, especially products in high volume and products that are failing at a high frequency or high rate, where you're going to want to get to the bottom of what's going on.
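Because Flash wears out after a finite number of program/erase cycles, firmware typically keeps per-block cycle counts and retires blocks at end of life. A minimal sketch of that bookkeeping, assuming a made-up rated endurance (real NAND ratings vary widely by process and cell type):

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative endurance bookkeeping: track erase cycles per block and
   retire a block once it reaches the rated count. The block count and
   the 100k cycle figure are placeholders, not a real part's spec. */
#define NUM_BLOCKS   4u
#define RATED_CYCLES 100000u

static uint32_t erase_count[NUM_BLOCKS];

bool block_worn_out(uint32_t block)
{
    return erase_count[block] >= RATED_CYCLES;
}

bool erase_block(uint32_t block)
{
    if (block_worn_out(block))
        return false;           /* end of life: refuse the erase */
    erase_count[block]++;       /* a real driver erases the NAND here */
    return true;
}
```

A real Flash translation layer would also spread writes across blocks (wear leveling) so no single block hits that limit early, but the core idea is the same: the oxide damage is cumulative, so the driver has to count.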