Now, why go through all this trouble? Well, it turns out that if you substitute the expansion of v(t) in terms of the ei's into the differential equation for v, and then use the eigenvector equation as well as the orthonormality of the ei's, you can solve for each ci as a function of time.

And so here is the equation for ci as a function of time. Once you have a closed-form expression for ci as a function of time, you can substitute it back into our expansion for v. And therefore we have solved the differential equation: we now have a complete expression that characterizes how v changes as a function of time. If you want to get into all the mathematical detail of how we derived this expression for ci(t), I would encourage you to go to the supplementary materials on the course website.
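The derivation itself isn't reproduced here, but the closed form can be checked numerically. Below is a minimal sketch, assuming the standard form of the network equation, tau dv/dt = -v + h + Mv, with constant input h and a symmetric weight matrix M; all the specific numbers are illustrative:

```python
import numpy as np

# Sketch assuming the network equation tau * dv/dt = -v + h + M v,
# with constant input h and a symmetric recurrent matrix M.
tau = 1.0
M = np.array([[0.5, 0.2],
              [0.2, 0.5]])      # symmetric => orthonormal eigenvectors e_i
h = np.array([1.0, 0.5])        # constant input vector

lam, E = np.linalg.eigh(M)      # columns of E are the eigenvectors e_i

# Closed-form coefficients with v(0) = 0:
#   c_i(t) = (h . e_i) / (1 - lam_i) * (1 - exp(-t * (1 - lam_i) / tau))
def v_closed_form(t):
    c = (E.T @ h) / (1 - lam) * (1 - np.exp(-t * (1 - lam) / tau))
    return E @ c

# Euler integration of the differential equation, for comparison
v, dt, t_max = np.zeros(2), 1e-3, 50.0
for _ in range(int(t_max / dt)):
    v += dt / tau * (-v + h + M @ v)

print(np.allclose(v, v_closed_form(t_max), atol=1e-4))  # True: the two agree
```

Both eigenvalues here are below 1, so the simulated v converges to the same steady state that the closed form predicts.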

We can now show that the eigenvalues of the recurrent connection matrix determine whether the network is stable or not. To see this, suppose one of the eigenvalues, lambda i, is bigger than 1.

Well, what happens to the output of the network, given by v(t), which is a linear combination of the eigenvectors weighted by the coefficients ci? If one of the lambda i's is bigger than 1, let's say that this lambda i here is equal to 2, then this term ends up being a growing exponential function of time. So as time goes on, this term becomes larger and larger, and therefore ci of t also becomes larger and larger. The output of the network then also grows without bound, which means that v(t) explodes, and what you end up getting is an unstable network. On the other hand, if all the eigenvalues are less than 1, then you should be able to convince yourself, by plugging values of lambda i less than 1 into our equation for ci(t), that the network is stable, because v(t) is going to converge to some steady-state value.
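To make the two cases concrete, here is a small numerical sketch, again assuming the network equation tau dv/dt = -v + h + Mv; the diagonal matrices are chosen purely for illustration:

```python
import numpy as np

# Hypothetical sketch: simulate tau * dv/dt = -v + h + M v for two diagonal
# choices of M, one with an eigenvalue above 1 and one with all below 1.
tau, dt, t_max = 1.0, 1e-3, 30.0
h = np.array([1.0, 0.0])

def simulate(M):
    v = np.zeros(2)
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-v + h + M @ v)
    return v

unstable = simulate(np.diag([2.0, 0.5]))   # lambda_1 = 2 > 1: v explodes
stable   = simulate(np.diag([0.5, 0.5]))   # all eigenvalues < 1: v converges

print(np.linalg.norm(unstable))   # astronomically large
print(stable)                     # close to the steady state [2, 0]
```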

This steady-state value is given simply by the linear combination of the corresponding eigenvectors, each multiplied by the value to which its coefficient has now converged. Now we can answer the question that we

posed earlier in the lecture. What can a recurrent network do?

One thing that a linear recurrent network can do is amplify its inputs. To see this, suppose that all the eigenvalues lambda i are less than 1.

So we showed in the previous slide that the output of the network in the steady state is going to look like this. If one of these eigenvalues, let's say lambda 1, is very close to 1, and all the other eigenvalues are much, much smaller, then the lambda 1 term is going to dominate the sum. The steady-state output of the network is then going to be basically the projection of the input onto the first eigenvector, divided by 1 minus lambda 1, and multiplied by e1. What we have, then, is a network that is amplifying its input projection. If lambda 1, for example, is equal to 0.9, which is close to 1, then 1 over 1 minus lambda 1 is going to be 10.
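A small numerical sketch of this dominance effect; the eigenbasis, eigenvalues, and input below are all chosen purely for illustration:

```python
import numpy as np

# Hypothetical example: M with one eigenvalue near 1 (lambda_1 = 0.9) and the
# rest much smaller (0.1), built from a fixed orthonormal basis E.
E = np.linalg.qr(np.array([[1.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 1.0]]))[0]   # orthonormal columns e_1, e_2, e_3
lam = np.array([0.9, 0.1, 0.1])
M = E @ np.diag(lam) @ E.T

h = E @ np.array([1.0, 0.5, 0.0])          # projections: (h . e_1) = 1, (h . e_2) = 0.5
v_ss = np.linalg.solve(np.eye(3) - M, h)   # exact steady state (I - M)^-1 h

# The lambda_1 term, amplified by 1 / (1 - 0.9) = 10
dominant = (h @ E[:, 0]) / (1 - lam[0]) * E[:, 0]
print(np.linalg.norm(dominant) / np.linalg.norm(v_ss))  # close to 1: e_1 term dominates
```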

And so we have an amplification factor of 10 for this projection of the input onto e1. Now let's look at an example of a Linear

Recurrent Network. So let's assume that each of these output neurons codes for some angle between minus 180 and plus 180 degrees. So instead of labeling these neurons with

1, 2, 3, 4 and 5, we can label them according to some angles.

So for example, this could be minus 180 degrees, this neuron could be minus 90.

This neuron could be labeled with 0, this with plus 90, and this with 180.

Now, why are we labeling neurons with angles?

It's because we can now define the connection matrix M as a cosine function, for example, of the relative angle between the neurons' labels. In other words, M of theta, theta prime could be proportional to cosine of theta minus theta prime. What does this type of connectivity look

like? Well it results in neurons exciting other

neurons that are nearby, and inhibiting other neurons that are further away.

And here's a graphical depiction of the cosine based connectivity function.

So, for neurons that are close to any given neuron, you have excitation, and

for neurons that are further away, you have inhibition.

Now let's ask the question: is M, defined by such a connectivity function, symmetric? In other words, is M of theta, theta prime equal to M of theta prime, theta? Well, that's the same as asking whether

cosine of x is equal to cosine of minus x, which we know is true.

Which means that yes, the connectivity matrix is indeed symmetric.
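This symmetry is easy to verify directly; here is a small sketch using the five angle labels from the example:

```python
import numpy as np

# The five neurons labeled by angles, as in the example above.
theta = np.deg2rad([-180.0, -90.0, 0.0, 90.0, 180.0])
M = np.cos(theta[:, None] - theta[None, :])   # M[i, j] = cos(theta_i - theta_j)

print(np.allclose(M, M.T))   # True: cos(x) = cos(-x), so M is symmetric
# Row for the 0-degree neuron: strongest self-excitation (+1),
# strongest inhibition (-1) of the neurons 180 degrees away.
print(np.round(M[2], 3))
```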

Now, this type of connectivity function is interesting because there is some evidence that such connectivity is also found in the cerebral cortex.

Neurons in the cerebral cortex tend to excite other neurons that are near them,

and inhibit neurons that are further away.