
Other time series are completely man-made, such as in economics.

Here you have the Dow Jones index,

which measures some sort of health state of the economy in certain circles.

And you can see once again that there were some trends, for instance,

the Dow Jones was pretty low for the better part of the last century and

then grew, again, kind of exponentially.

But what is interesting is probably you're familiar with the crash of 1929,

which is this little dip here,

and compare that, which went down in history as some sort of major disaster,

with the kind of swings that we have today when the Dow is at such high levels.

Okay, so we have seen some examples; now let's try to formalize the concept of a

discrete-time signal: for us, this is a sequence of complex numbers.

So it is a one-dimensional sequence, at least for now.

The notation is x[n], where n is in square brackets to indicate that n is an integer.

It's a two-sided sequence, so

n goes from minus infinity to plus infinity, and it's a mapping,

therefore, from Z, the set of integers, to C the set of complex numbers.

n is what we call dimensionless time, so we can think of it as time if we want,

but we have to make sure not to associate a physical unit with n.

n is dimensionless; it just sets an order on the sequence of samples.

Discrete-time signals can be created by an analysis process where we take periodic

measurements of a physical phenomenon,

think of the floods of the Nile if you want.

Or in a synthesis process, where we use, say, a computer program to generate data points

that simulate a physical phenomenon that we want to reproduce;

we will see an example very soon.

Let's now look at some prototypical signals that will appear again and

again in this class.

The simplest non-trivial signal that you can think of is a signal where every

sample is equal to 0, except for n equal to 0, where the sample is equal to 1.

This is called the delta signal, and

it exemplifies a physical phenomenon that has a very, very short duration in time.
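To make this concrete, here is a minimal sketch of the delta signal in Python with NumPy (the function name is my own, not the lecture's notation):

```python
import numpy as np

def delta(n):
    # Kronecker delta: 1 at n == 0, 0 everywhere else
    n = np.asarray(n)
    return np.where(n == 0, 1.0, 0.0)

# e.g. delta over the indices -3 .. 3 gives a single 1 in the middle
x = delta(np.arange(-3, 4))
```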

To help your memory, you can associate the delta signal to a clapper,

the device that is used in the movie industry,

although perhaps not in the mechanical form that you see here in this picture,

to synchronize the audio and the video tracks.

When you shoot a movie, the video and the audio are recorded on separate devices,

and then you have to synchronize the two tracks together.

So the way this is done, is by filming the clapper and

then having the top part of the clapper slam down on the bottom part.

This will generate a very short instantaneous sound that on the audio

track will look like a delta signal or a combination of positive and

negative delta signals.

When you need to synchronize audio and video, you will look for

this pattern in the audio track.

You will look for the delta, and associate it to the frame,

where the top part of the clapper is hitting the bottom part.

Another useful signal is the unit step.

This is a signal that is 0 for all negative values of the index.

So x[n] = 0 for n less than 0, and

is equal to 1 for n greater than or equal to 0.

This depicts a very simple phenomenon, the flipping of a switch.

So think of a Frankenstein-style knife switch: when it is pulled up, the contact

is made, and the signal goes from zero to one and stays at one forever.
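As a quick sketch (again, the function name is my own), the unit step in NumPy:

```python
import numpy as np

def unit_step(n):
    # u[n] = 0 for n < 0, 1 for n >= 0
    n = np.asarray(n)
    return np.where(n >= 0, 1.0, 0.0)
```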

Another common signal is the exponential decay.

We take a number a less than 1 in magnitude, and

we take its successive powers.

Because a is less than 1 in magnitude, the successive powers will decay

exponentially to 0 but, of course, will never reach 0 unless we go to infinity.

In order to prevent the signal from exploding when n is negative,

we multiply the signal by the unit step.

So we basically force to 0 all values of the sequence for

negative values of the index.

The exponential decay captures the behavior of a lot of physical systems, for

instance, it shows how your coffee cup gets cold.

Newton's law of cooling says that the rate of change of the temperature of a body

is proportional to the difference in temperature between the environment and

the body itself.

So if you solve this differential equation, you find out that the evolution

of the temperature follows indeed an exponentially decaying trend.

Of course, this is an idealized version of how a coffee cup gets cold,

because it assumes heat exchange by convection only and a large internal conductivity.

But in general, this is a common behavior for a lot of physical systems.

We have seen, for instance, that the rate of discharge of a capacitor in

an RC circuit is also an exponentially decaying curve.

In discrete-time, the exponential decay, a to the power of n,

models this kind of behavior.
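A minimal NumPy sketch of the decay, assuming a real a with |a| < 1 (the function name and default value are my own):

```python
import numpy as np

def exp_decay(n, a=0.9):
    # x[n] = a**n * u[n], with |a| < 1: zero for n < 0, decaying powers for n >= 0
    n = np.asarray(n, dtype=float)
    return np.where(n >= 0, float(a) ** n, 0.0)
```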

And finally, we have sinusoidal signals.

Here we have, for instance, an example using the sine function.

The discrete-time sequence is simply the sine of an angular frequency omega 0

times the index n, plus an initial phase theta: x[n] = sin(omega 0 n + theta).


Omega 0 is measured in radians.

Theta is measured in radians as well.

Because n is dimensionless,

the sum omega 0 n + theta is measured in radians.
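A small sketch of such a sinusoidal sequence in NumPy (function name is my own):

```python
import numpy as np

def sinusoid(n, omega0, theta=0.0):
    # x[n] = sin(omega0 * n + theta); omega0 and theta in radians, n dimensionless
    return np.sin(omega0 * np.asarray(n) + theta)
```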

There is certainly no need to stress the importance of oscillatory behavior

in nature, your heartbeat, engines, the motion of the waves,

the vibration of strings in musical instruments.

But in signal processing, oscillations are particularly important because

they are at the heart of Fourier analysis, as we will see very soon.

It is useful to divide discrete-time signals into four classes, finite-length

signals, infinite-length signals, periodic signals, and finite-support signals.

We will now look at them in turn.

Finite-length signals are signals that contain only capital N samples.

We indicate them with the notation x[n], as for standard sequences, but

we always specify the range of the index, n, that goes from 0 to capital N- 1.

Sometimes we will also use vector notation,

in this case, the signal is a column vector, like so.

And the connection between finite-length signals and

vectors will be clear very soon in one of the future lectures.

Finite-length signals are very practical entities, and they're good for

numerical packages.

You will always deal with arrays of data where the size of the array is finite.

However, it's not practical to develop the entire signal processing

theory concentrating only on finite-length signals, because

the length gets in the way.

Infinite-length signals are standard sequences where the index n ranges over

the entire set of integers from minus infinity to plus infinity.

And these are, of course,

abstract entities because they contain potentially an infinite amount of data.

But they're very good for theorems and

results that do not depend on the length of the data.


The amount of information of a finite-support sequence is the same

as a finite-length sequence of length capital N.

And they constitute another bridge between finite and infinite-length sequences.

In a way, we can always embed a finite-length sequence into

an infinite-length sequence, either by periodizing the finite-length sequence,

so turning that into a periodic signal, or

by turning it into a finite-support signal by appending 0s before and

after the interval from 0 to capital N minus 1.
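The two embeddings can be sketched as follows (my own helper names, assuming a real-valued signal for simplicity):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # a finite-length signal, N = 4

def periodic_embedding(x, n):
    # periodization: x_tilde[n] = x[n mod N], defined for every integer n
    return np.asarray(x)[np.mod(np.asarray(n), len(x))]

def finite_support_embedding(x, n):
    # finite support: zeros outside the interval 0 .. N-1
    n = np.atleast_1d(n)
    out = np.zeros(len(n))
    inside = (n >= 0) & (n < len(x))
    out[inside] = np.asarray(x)[n[inside]]
    return out
```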

Elementary operators for signals include scaling where we take a sequence,

and we multiply each element in the sequence by a factor alpha that belongs to

the field of complex numbers.

We can sum two signals together where we take a sequence and we add to each

element of the sequence the corresponding element of the second sequence.

The product is like the sum, except that we multiply each element in the first

sequence by the corresponding element in the second sequence.

And finally, the shift by k, where we anticipate or delay a signal

by shifting the sequence by an integer number of samples k, with k in Z, the set of integers.
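The first three operators map directly onto element-wise array operations; a minimal sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])
alpha = 2.0                      # scaling factor (may in general be complex)

scaled = alpha * x               # scaling: each sample multiplied by alpha
summed = x + y                   # sum: element-by-element addition
product = x * y                  # product: element-by-element multiplication
```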

The definitions of the first three operators are valid for

all classes of signals.

In the case of the shift, however,

we have to be careful when we apply a shift to a finite-length signal.

Remember that for a finite-length signal,

the index in x[n] can only range between 0 and N- 1.

Now, if we choose k too large or too small,

we can easily send the argument here outside of the prescribed bounds.

So in order to apply shift to a finite-length signal,

we have to decide how to embed that signal into an infinite-length sequence.

And we have two types of shifts according to the embedding that we choose.

Imagine we embed the finite-length signal into a finite-support sequence.

In that case, it's as if we were appending and

prepending 0s outside of the range of the signal.

So in this case, when we shift the signal, say towards the right,

0s will be pulled into the range that is valid for the finite-length signal.

And we will lose the last points in the signal.

So here graphically, we see what happens.

Here's the original signal, imagine it embedded into a finite-support signal, and

here is the result of the shift by 1, 2, 3, and so on and so forth.

As we shift, we pull in 0s, and we lose data.
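A sketch of this zero-pulling shift (my own helper, assuming real-valued samples; positive k shifts to the right):

```python
import numpy as np

def shift_finite_support(x, k):
    # shift a finite-length signal embedded in a finite-support sequence:
    # zeros are pulled in, and samples leaving the range 0 .. N-1 are lost
    N = len(x)
    y = np.zeros(N)
    if abs(k) >= N:
        return y                 # everything has been shifted out of range
    if k >= 0:
        y[k:] = x[:N - k]        # delay: data moves right, zeros enter on the left
    else:
        y[:N + k] = x[-k:]       # advance: data moves left, zeros enter on the right
    return y
```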


Conversely, if we imagine a periodic extension,

a periodization of the original sequence, the shift will become a circular shift.

If we shift say,

towards the right, what goes out here will come back on the other side.

And the result, as you can see here graphically,

is that we're circulating the data around the support of the signal.

We will see later that the periodic extension and therefore,

the circular shift is actually the natural way to interpret the shift for

a finite-length signal.
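The circular shift corresponds to NumPy's roll operation; a minimal sketch:

```python
import numpy as np

def circular_shift(x, k):
    # circular shift of a length-N signal: y[n] = x[(n - k) mod N],
    # so what leaves one end comes back in on the other
    return np.roll(x, k)
```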

We also have a definition of energy for a discrete-time signal.

This is the sum, over

all elements in the sequence, of the squared magnitudes of the elements.

If you think of the signal's values as voltages across a 1 ohm resistor,

then you can see that this definition of energy is consistent

with the physical interpretation of energy.

Many sequences have an infinite amount of energy, the unit step, for instance.

If you do the sum, you will see that Ex goes to infinity.
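The energy definition translates directly into code; a small sketch that also works for complex-valued samples (for a finite-length or finite-support signal, where the sum is actually computable):

```python
import numpy as np

def energy(x):
    # E_x = sum over all n of |x[n]|^2
    return np.sum(np.abs(np.asarray(x)) ** 2)
```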