So in course one, we briefly touched upon Nyquist and his work at Bell Labs. Here's a more formal look at the sampling theorem, specifically applied to the signals that we are considering. So let's consider a real-world analog signal. We denote it by x(t); it's of duration T and bandwidth B. And x[n] then is the discrete representation of this analog signal x(t). From Fourier analysis, we can find the frequency content of the signal x(t); this we denote by X(f). And similarly, in the discrete domain, we have a discrete Fourier representation X(ω).

Now let's consider the Nyquist sampling theorem. What the Nyquist theorem says is that we can reconstruct x(t) from x[n] if we fulfill two criteria. The first one is that the maximum frequency component in x(t) is band-limited to B, that is, f_max, the maximum frequency component, is less than B. And two, we take samples in the analog domain at uniform locations spaced 1/(2B) apart. If we meet these two criteria, the uniform sampling and the signal being band-limited, there is an equivalence between X(f), the analog spectrum, and the discrete Fourier transformed spectrum that we can compute.

If we violate one, that is, the band-limited nature of the signal, then in the discrete domain X(ω) actually repeats multiple times, and these images start coming closer and closer. This is what we call aliasing. And this is the reason why we apply a low pass filter before we convert the signal into a discrete signal, so that we keep the images in the discrete Fourier transform far apart. So the low pass filter attenuates all the components above f_max, including some noisy terms, so that we do not cause any aliasing. And once we have the discrete signal x[n], using the discrete Fourier transform shown here, we can compute X(k).
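To make the aliasing point concrete, here is a small sketch in NumPy. The 8 Hz sampling rate, the 3 Hz and 11 Hz test tones, and the helper name are all my own illustrative choices: a tone below fs/2 lands at its true frequency, while a tone above fs/2 folds back into the band.

```python
# Minimal sketch of the Nyquist criterion (illustrative parameters).
# A 3 Hz sine sampled at fs = 8 Hz (> 2*3) shows up at the right bin;
# an 11 Hz sine sampled at the same rate aliases down to |11 - 8| = 3 Hz.
import numpy as np

fs = 8.0                    # sampling rate in Hz
N = 64                      # number of samples (8 seconds of signal)
t = np.arange(N) / fs       # uniform sample locations, spaced 1/fs apart

def peak_frequency(x):
    """Return the frequency bin with the largest magnitude in the DFT of x."""
    X = np.fft.rfft(x)                           # one-sided spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # bin centers, 0 to fs/2
    return freqs[np.argmax(np.abs(X))]

ok = np.sin(2 * np.pi * 3.0 * t)    # 3 Hz  < fs/2: sampled correctly
bad = np.sin(2 * np.pi * 11.0 * t)  # 11 Hz > fs/2: violates the criterion

print(peak_frequency(ok))   # 3.0 -- the true frequency
print(peak_frequency(bad))  # 3.0 -- the 11 Hz tone has aliased to 3 Hz
```

Once sampled, the aliased tone is indistinguishable from a genuine 3 Hz tone, which is exactly why the low pass filter has to run in the analog domain, before conversion.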
If we take N time domain samples, we will have N/2 + 1 samples in the frequency domain, so if you have 256 samples, it's 129 samples in the frequency domain, uniformly spaced between zero and f_max. This is indexed by the variable k. Note that X(k), the Fourier transform, has both amplitude and phase information. Then, given X(k), we can go back into the time domain exactly using the inverse DFT. In practice it is very common to do some sort of processing on X(k) before going back into the time domain, and we can also extract certain very interesting parameters directly in the time domain by operating on x[n].

So, I would like to tell you about some of the processing we do on signals. I would like to take speech as the example, because speech is one of the most complex signals I ever had to work with, and this is probably true for many others. So this is a very busy picture; it has multiple panes, and the bottom pane is the actual speech waveform. The female speaker is saying, "Don't ask me to carry an oily rag like that." You should listen to this. Let me pause for a minute, and we will listen to this sound and then come back. >> Don't ask me to carry an oily rag like that. >> So, what was being said, the transcription at the word level, is what I have written there. And just above that, the phones, the segments or alphabet of spoken language if you like, are shown. The sentence is from the TIMIT database, if you're curious. And the tool that I've used to generate this is an open source tool called WaveSurfer; I can make a link available to you if you're interested.

The next thing that I show here is the energy of a short term moving window. It tracks the envelope of the energy of the signal, and the Y axis is in dB scale. This has lots of information about the nature of the signal. The next pane is actually showing the pitch contours. You already know that male speakers have much lower pitch than female speakers.
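As a rough sketch of two of the points above, the 256-sample to 129-bin relationship and the short term energy pane: the window length, hop size, and dB floor below are illustrative assumptions of mine, not WaveSurfer's actual settings.

```python
# Minimal sketch of short-term (windowed) energy in dB, as in the energy
# pane; window and hop lengths here are illustrative choices.
import numpy as np

def short_term_energy_db(x, win=256, hop=128, floor=1e-12):
    """Energy of a sliding window, in dB; tracks the signal's envelope."""
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    energy = np.array([np.sum(f.astype(float) ** 2) for f in frames])
    return 10.0 * np.log10(energy + floor)  # dB scale, small floor avoids log(0)

# 256 time-domain samples -> 256/2 + 1 = 129 frequency bins, as stated above
x = np.random.default_rng(0).standard_normal(256)
print(len(np.fft.rfft(x)))  # 129
```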
The pitch is the tone at which we speak; it's a feature that conveys a lot about how we say something. Is this a statement, or is it a question? So the Y axis is in hertz, and we can extract the pitch period from the waveform signal using both time domain and frequency domain processing. You will notice that around 0.45 seconds there is a sound like kuh, as in the word ask. There is no pitch for this sound; these are not vocalic sounds.

And now, this is the most complex feature that I'm showing you in this particular graph. These are called formants. So when you say a word, like the word ask, there are resonances in the vocal tract at specific frequencies, depending on the sound that you say. Is it ah, or ooh? By looking at where the vocal tract is resonating, you can detect them. And you can see how they change in the words oily rag, around 1 to 1.3 seconds. This tool also plots formants for non-vocalic sounds like guh and kuh, at 1.6 and 1.8 seconds respectively; this is essentially junk. The tool is not perfect. There are no formants for these sounds, so we should ignore them.

The point is, depending on the sensor signals that we are working with and depending on the application, we get to decide what features we want to extract from the signal, and implement the DSP to extract them. And speech signal processing is a treasure trove to go and get all kinds of tools that you can implement or borrow for ECG, and so on and so forth. Other examples of features that you can extract are the number of zero crossings in a given window, or plotting histograms, or peak-picking the signals, the signal mean, variance, so on and so forth.

So I would like to talk about another very, very useful sensor that we will deal with, in addition to the microphone. That's the 3D accelerometer. For accelerometers, and in fact for many other sensors you may come across, the vendor is a very good source for learning about the sensor.
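The simple time domain features just mentioned, zero crossings, mean, and variance over a window, might be sketched like this; the function name and the test signal are my own illustrative choices.

```python
# Minimal sketch of per-window time-domain features: zero-crossing count,
# mean, and variance (names and window setup are illustrative).
import numpy as np

def window_features(x):
    """Return (zero_crossings, mean, variance) for one window of samples."""
    signs = np.sign(x)
    signs[signs == 0] = 1                      # treat exact zeros as positive
    zc = int(np.sum(signs[1:] != signs[:-1]))  # sign changes between samples
    return zc, float(np.mean(x)), float(np.var(x))

# A 5 Hz sine over one second crosses zero at t = 0.1, 0.2, ..., 0.9:
# nine interior crossings inside this window (it also starts at zero).
t = np.arange(1000) / 1000.0
zc, mu, var = window_features(np.sin(2 * np.pi * 5.0 * t))
print(zc)   # 9
```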
So here's the link to the accelerometer page from Analog Devices. I have used one of their accelerometers, the ADXL362. I might say a little bit more about that in the future. This is a very, very low power accelerometer; it worked quite well for me. Again, the vendors are also a very good source for getting to know how to use their devices. They put out very useful application notes on how to interface to their components. So depending on your capstone application, you would do very well to search the vendors' websites.

Here's a very accurate pedometer app note from Analog Devices that provides the complete algorithm for step counting. It uses a very simplified kinematic model of human gait. This is basically an inverted pendulum, where the center of gravity is above the fulcrum, and as you walk, you can look at the responses to compute various parameters. The app note actually describes a very simple low pass filter to reduce the noise before counting the steps. And knowing the height of the person, you can actually get the distance traveled, and if you know the weight, you can get a very crude approximation of the calories. So what I meant to say was, there are these x, y, and z axes. In one of the axes, the vertical axis, as you're walking, you get a much bigger amplitude. That's the signal that you filter, and you can do simple processing to look at the peaks and count the steps.

The reason I prefer something like this, as opposed to commercial fitness devices, is that they're all black boxes. You can never tell what algorithm is being used, and what we have found is that in the same scenario, two different devices give you completely different results.

So we looked at the speech signal as a way of illustrating the different types of signal processing tools that we can use, the accelerometer, one of the most important sensors, and where you can go to get information about the other sensors that you'll be working with.
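As a hedged sketch of that filter-then-peak-pick idea, and not the actual algorithm from the Analog Devices app note: smooth the vertical axis samples with a moving average low pass filter, then count local maxima above a threshold as steps. The sampling rate, the threshold, and the synthetic walking signal are all illustrative assumptions.

```python
# Illustrative step counter: moving-average low pass filter + peak picking
# on the vertical accelerometer axis (thresholds are assumptions, not the
# app note's values).
import numpy as np

def count_steps(az, threshold=1.2):
    """Count peaks above threshold in the vertical acceleration az (in g)."""
    kernel = np.ones(5) / 5.0                  # 5-point moving-average low pass
    smooth = np.convolve(az, kernel, mode="same")
    steps = 0
    for i in range(1, len(smooth) - 1):
        # a step shows up as a local maximum that exceeds the threshold
        if (smooth[i] > threshold
                and smooth[i] >= smooth[i - 1]
                and smooth[i] > smooth[i + 1]):
            steps += 1
    return steps

fs = 50.0                                      # assumed sampling rate, Hz
t = np.arange(int(5 * fs)) / fs                # 5 seconds of samples
az = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # 1 g gravity + ~2 strides/s
print(count_steps(az))                         # 10 steps in 5 seconds
```

With a step count in hand, multiplying by an estimated stride length (from the person's height) gives the distance traveled, as the app note's approach suggests.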