[MUSIC] We're continuing to talk about the UART protocol, and right here we're going to talk about how synchronization happens. The transmitter and the receiver have to be synchronized. That is, the receiver has to know when to expect data from the transmitter, and it has to be fairly accurate about the timing it expects. UART is an asynchronous protocol, meaning it has no clock. Normally, with a synchronous protocol, the clock is how you synchronize: when the receiver sees the rising edge, it knows data is being sent. But UART doesn't have a clock, so it has to figure out when data is going to be sent some other way.

So, UART synchronization. What it means is that when the receiver is looking at the value on that serial wire, it has to know which bit it's reading, and whether it's supposed to be reading at all. It has to count the bits. So for instance, say it knows you're sending 8 bits. It has to know, okay, now is when I should sample bit number one, now is when bit number two should happen, now is when bit number three should happen, and so on. It has to know when to expect each bit to arrive, so it knows when to sample the signal on that wire to get a 1 or a 0. That's what synchronized means here: the receiver knows when to expect each bit sent from the transmitter. If it's not synchronized properly, the receiver will not sample the bits at the right times, and it'll receive the wrong data.

So here's an example. Right here we're showing a little timing diagram, and we're really showing three bits being sent: a 1, a 0, and a stop bit. That's what should be sent; that's what the transmitter is actually sending. Now, a stop bit is high, and the stop bit is at the end, where the signal is high. So it should be sending a 1, then a 0, then a stop bit. These are just the last bits of the communication: I'm showing bits 7 and 8, the last two bits of, say, an 8-bit set of data.
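To make the frame layout concrete, here's a small Python sketch. This isn't from the lecture; the function name is my own, and it assumes the common 8N1 format (8 data bits, no parity, 1 stop bit) with data sent least-significant-bit first. It builds the sequence of wire levels the transmitter would drive for one byte: a low start bit, the eight data bits, then a high stop bit.

```python
def uart_frame(byte):
    """Build the wire levels for one 8N1 UART frame.

    The idle line is high; the frame is: start bit (0),
    eight data bits LSB-first, then a stop bit (1).
    """
    bits = [0]                                   # start bit: line pulled low
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit: line back high
    return bits

# 0x41 ('A') = 0b01000001, so LSB-first the data bits are 1,0,0,0,0,0,1,0
print(uart_frame(0x41))  # → [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

The 1 at the end is the stop bit the lecture is talking about: whatever byte you send, the frame always finishes high.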
And then after the last two bits, you're going to expect the stop bit. So the expected bits, as they would be correctly observed: bit 7 would be 1, bit 8 would be 0, and then the next bit is the stop bit, where the signal is high, and the receiver would say, oh okay, this is the stop. If that happened, if the receiver correctly knew when to observe those signals, it would receive the correct bits: a 1 for bit 7, a 0 for bit 8, and then a high for the stop bit, which means the communication is working. The stop bit has to be high; if the stop bit is low, then there's a problem with the communication. So if the receiver knows exactly when to sample the signal, it will receive the right bits.

But let's say the receiver is off, it's not synchronized properly, and it started sampling too early. Then it might sample the values at the wrong times. So when that 1 comes through, instead of thinking, oh, this is bit 7, it might think it's already on bit 8. And if that were the case, then when the 0 comes through, instead of thinking it's bit 8, it would think that was the stop bit, since it's the one after bit 8. It would read a 0 for the stop bit, and that causes a failure, because a stop bit has to be 1. So if it's not synchronized right, it could read a 0 for the stop bit; that would cause a failure, it would lose the whole byte, and that byte would have to be resent.

So what I'm saying here is that the receiver has to be synchronized with the transmitter. It has to know when the transmitter is going to send a bit, and which bit is being sent. So it needs to synchronize on the beginning of the communication, and that's what the start bit is for. With an imprecise start time, the receiver will read the wrong bits and the communication may fail.
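The off-by-one failure described above can be simulated in a few lines of Python. This is my own sketch, not the lecture's code: a toy receiver that samples a 10-slot 8N1 frame starting at some offset, and flags a framing error when the slot where it expects the stop bit reads 0.

```python
def receive(levels, offset=0):
    """Sample a 10-bit 8N1 frame starting at slot `offset`.

    Returns (byte, framing_ok). Sampling the stop-bit slot
    as 0 is a framing error, as described in the lecture.
    """
    window = levels[offset:offset + 10]
    data = window[1:9]                      # the 8 bits after the start bit
    byte = sum(b << i for i, b in enumerate(data))
    framing_ok = (window[9] == 1)           # stop bit must be high
    return byte, framing_ok

# One correct frame for 0x41, with idle-high line on either side.
line = [1, 1] + [0, 1, 0, 0, 0, 0, 0, 1, 0, 1] + [1, 1]

print(receive(line, offset=2))  # → (65, True): in sync, 0x41 received
print(receive(line, offset=1))  # → (130, False): one slot early, stop bit reads 0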
So the start bit is how synchronization happens. Remember, at the beginning, before anything has actually been sent, the wire is high; the signal wire idles high. The start bit happens when the line goes from high to low. So when there's a falling edge on that signal, the receiver says, okay, this is the start bit, now is when I have to synchronize myself.

Now, we've got two examples, two pictures up here, like timing diagrams, showing two different situations. Remember that UART was made a long time ago, and it's made to be robust in the face of noise. You get all kinds of electromagnetic noise on these signals. So maybe this signal is supposed to be high, because you're not communicating, but there's some kind of glitch, some electromagnetic noise, which forces the signal low by mistake. When that happens, if the receiver is too quick, it might say, oh, the signal went low, I guess communication is starting. But maybe it's just a glitch, a temporary bounce down, and the receiver shouldn't consider that a real signal; it should ignore a short glitch.

So what happens is, the receiver measures the time that the signal is low. On the left, it's just a glitch: the signal goes low, but only for a very short time. If it's too short, the receiver should say, oh, that's not real, that's not really a start. On the right, though, the signal goes low and stays low for a significant amount of time, so the receiver should say, oh, this is a real start, I need to synchronize myself. So the receiver has to be able to distinguish between a glitch, which is not a real start signal, and a real start, and to do that it has to count how long the signal stays low. Now, see those little up arrows on the timing diagrams? Those are the sampling points, the points in time where the receiver is checking the signal value.
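This count-the-low-samples idea can be sketched as code. The sketch below is my own simplified software model of what a real UART does in hardware, assuming the common parameters the lecture uses: the receiver samples the line 16 times per bit period, and it only accepts a start bit if the line stays low for at least half a bit period, i.e. at least 8 of 16 samples. At 9600 baud, half a period is about 52 microseconds.

```python
BAUD = 9600
BIT_PERIOD_US = 1_000_000 / BAUD          # ~104.2 microseconds per bit
SAMPLES_PER_BIT = 16                      # receiver oversamples at 16x the baud rate
HALF_BIT_SAMPLES = SAMPLES_PER_BIT // 2   # must stay low >= half a period (8 samples)

def find_start_bit(samples):
    """Scan oversampled line levels; return the index of the falling
    edge of a genuine start bit, or None if none is found.

    A low dip shorter than half a bit period (8 samples at 16x)
    is treated as noise and ignored.
    """
    low_run = 0
    for i, level in enumerate(samples):
        if level == 0:
            low_run += 1
            if low_run >= HALF_BIT_SAMPLES:
                return i - low_run + 1    # index where the line first fell
        else:
            low_run = 0                   # the dip ended too soon: just a glitch
    return None

glitch = [1] * 10 + [0] * 3 + [1] * 20    # low for only 3 samples: noise
real   = [1] * 10 + [0] * 16 + [1] * 20   # low for a full bit period: real start

print(find_start_bit(glitch))  # → None: rejected as a glitch
print(find_start_bit(real))    # → 10: accepted, synchronize on this edge
```

The glitch dips low for only three samples, like the left-hand diagram, so it never reaches the eight-sample threshold and is ignored; the real start bit does, and the receiver locks onto the falling edge.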
And you'll notice that with the glitch, the signal is low for only three sample points, so the receiver would say, it's only low for three sample points, that's not enough. Whereas if you look at the one on the right, where the start is detected, it's low for, I don't know, eight sample points, and the receiver would say, okay, that's sufficient. So that's what happens on the receiver end: it samples over and over, faster than the baud rate, typically at least 16 times faster, and to find the start bit it counts how many samples the signal is low for. If it's low for enough samples, then it says, yes, that is a real start.

So detection of the start bit is what synchronizes the receiver and the transmitter: the receiver synchronizes on the falling edge of the signal, and it recognizes the start bit based on that falling edge. The 0 that follows must last long enough to screen out any kind of noise. The receiver has to sample faster than the baud rate; 16 times faster is common. The start bit is indicated by a 0 lasting at least half a bit period. Now, the length of the period depends on your baud rate. Let's say you're using a baud rate of 9600 baud; if you remember, the bit period at that rate is about 104 microseconds. So the whole period is 104 microseconds, and half a period would be 52 microseconds. If the receiver sees that signal go low and stay low for 52 microseconds, it says, okay, this is a real start signal, we're really starting communication, I need to synchronize myself now against that falling edge. Whereas if it's low for less than 52 microseconds, it says, oh, that's just a glitch, I can ignore it. Thank you. [MUSIC]