We can characterize the input spike train in terms of what is known as the neural response function, rho_b. That's given by a summation over all the times at which a spike occurred: rho_b(t) = sum over i of delta(t - t_i).

So this is basically the delta function: every time you have a spike, you put in this delta function, which is essentially an infinite pulse at the location of the spike. Now, why would you really want to do that?

Well, it turns out that when we do the integral for the filtering, it's quite convenient to have the spike train written as one of these summations of delta functions. So basically this is a technical detail; don't get too worried about it right now.
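The convenience the narration alludes to is the sifting property of the delta function: integrating any filter K against the delta-function spike train just reads off the filter at each spike time, so the integral form and the "sum of shifted kernels" form are the same thing. As a short worked derivation:

```latex
\int_{-\infty}^{t} K(t-\tau)\,\rho_b(\tau)\,d\tau
  = \int_{-\infty}^{t} K(t-\tau)\sum_i \delta(\tau - t_i)\,d\tau
  = \sum_{t_i < t} K(t - t_i)
```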

So suppose that we have a spike train and we would like to model the effect of all the spikes on this particular neuron. How do we do that?

Well, let's first select what kind of synapse this particular synapse is. So suppose it's something like an AMPA synapse, as we discussed in the previous slide. The AMPA synapse behaves as if it were an exponential function, so we have something that looks like this.

This is K, plotted as a function of time t.

And so this can be used as a filter to model the effect of an input spike on the postsynaptic neuron. So now we have a filter.

So here is the filtering equation that models how the synaptic conductance changes on the postsynaptic neuron's side. Basically, what we are saying is that g_b, the synaptic conductance at synapse b, is essentially nothing but the maximum conductance times the summation of all of these exponential functions added together: g_b(t) = g_max * sum over spike times t_i of K(t - t_i). And if you like integrals, the summation can also be written as the linear filtering equation g_b(t) = g_max * integral of K(t - tau) rho_b(tau) d tau. And here is your favorite function, rho_b, the neural response function, with delta functions summed up at the locations where you have spikes.

Now, if you're still confused about this, there's actually a very easy way to interpret the summation, or this integral. So here is the spike train, and here's what the synaptic conductance g_b is going to look like.

So every time you have a spike, you put in one of your K functions, your synaptic filter. And then when you have another input spike, such as this one, you simply add a copy of the synaptic filter, and you do so for each of the input spikes. And so you're going to get a synaptic conductance that looks something like this. So this is what g_b(t) looks like for this particular input spike train. So that wasn't really too hard, was it?
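The picture just described can be sketched directly in code: drop one copy of the kernel K at each spike time and sum. This is a minimal illustration, not the lecture's actual code, and the time constant, conductance, and spike times below are made-up values.

```python
import numpy as np

def exp_kernel(t, tau=0.010):
    """Exponential synaptic filter K(t): zero for t < 0, decays with time constant tau."""
    return np.where(t >= 0, np.exp(-t / tau), 0.0)

def conductance(t_grid, spike_times, g_max=1.0, tau=0.010):
    """g_b(t) = g_max * sum_i K(t - t_i): one shifted kernel per spike, summed."""
    g = np.zeros_like(t_grid)
    for t_i in spike_times:
        g += exp_kernel(t_grid - t_i, tau)
    return g_max * g

t_grid = np.arange(0.0, 0.1, 0.001)   # 100 ms of time in 1 ms steps
spikes = [0.020, 0.025, 0.060]        # three input spike times (seconds)
g = conductance(t_grid, spikes)
# Before the first spike the conductance is zero; where spikes are closely
# spaced, the overlapping kernels add, so g exceeds a single kernel's peak.
```

Note how the two closely spaced spikes at 20 ms and 25 ms produce a conductance larger than either spike alone, which is exactly the "add a copy of the filter for each spike" picture.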

The moral of the story here, of course, is: don't be too intimidated by these types of complex equations. So, are you ready now to put everything you have learned so far together to create a network model?

Now let's do it. Here is a simple example: let's take two neurons, neuron 1 and neuron 2, and connect them together with excitatory synapses. Neuron 1 connects to neuron 2 with this excitatory synapse, and neuron 2 connects to neuron 1 with this excitatory synapse.

Now, each of these neurons is given by our favorite equation. Here is the equation for how the membrane potential changes as a function of time, and here's the time constant for the membrane.

And we're going to model these two neurons as Integrate-and-Fire neurons.

So, this is something you heard about in Adrian's lecture in a previous week.

And so the Integrate-and-Fire neuron essentially models the membrane potential, and when a particular threshold is reached (so, here's the threshold), the neuron fires a spike. So the neuron spikes here and is then reset back to a particular value, which in this case is minus 80 millivolts.

And the synapses are going to be modeled as alpha synapses. So you're going to use an alpha function, which, as we saw before, peaks slightly after zero and then decays back down to zero.
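One common form of the alpha function has the shape just described: zero at the spike time, a peak slightly afterward, then decay back to zero. The particular normalization and time constant below are illustrative assumptions, not necessarily the lecture's exact parameters.

```python
import numpy as np

def alpha_kernel(t, tau=0.005):
    """Alpha-function synaptic filter: (t/tau)*exp(1 - t/tau) for t >= 0,
    zero for t < 0. Rises from zero, peaks at exactly t = tau, then decays."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

print(alpha_kernel(0.0))     # zero at the spike time itself
print(alpha_kernel(0.005))   # the peak value, reached at t = tau
```

This shape captures the brief rise time of a real synaptic conductance, unlike the pure exponential, which jumps to its maximum instantaneously.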

And so we're going to first look at what happens if we model excitatory synapses, so neuron one excites neuron two and neuron two excites neuron one.

And here is what the behavior of the network looks like for these two neurons

when they're exciting each other. So you can see that neuron one fires

first in this case and then neuron two fires after and so on.

So they basically alternate firing from one to the other.

Now, what will happen if we change the synapses from excitatory to inhibitory? Here's something surprising that happens. We can change the synapses to inhibitory synapses by changing the equilibrium potential, also called the reversal potential of the synapse, to minus 80 millivolts. That's less than the resting potential of the neuron, which is minus 70 millivolts.

So you can see that when you change the synapses to be inhibitory, we get synchrony, which means the two neurons start firing at the same times: they synchronize with each other. And that's a really interesting property that people have also been looking at in certain brain regions.

Here's an example where a simple model of just two neurons, either exciting each other or inhibiting each other, gives rise to some interesting behaviors that might be of relevance to people trying to model particular circuits in the brain.
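To make the two-neuron setup concrete, here is a minimal sketch of two conductance-based Integrate-and-Fire neurons coupled by mutual synapses, in the spirit of the model described above. All parameter values (time constants, conductances, drive) are illustrative assumptions, and the synapse uses a simple exponential gating variable rather than a full alpha function. Setting the reversal potential E_syn to 0 mV makes the coupling excitatory; setting it to -80 mV makes it inhibitory.

```python
import numpy as np

def simulate_pair(E_syn, T=1.0, dt=0.0001):
    """Two mutually coupled integrate-and-fire neurons.
    E_syn: synaptic reversal potential in volts (0.0 excitatory, -0.080 inhibitory).
    Returns the list of spike times for each neuron."""
    tau_m   = 0.020    # membrane time constant (s)
    E_L     = -0.070   # leak / resting potential (V)
    V_th    = -0.054   # spike threshold (V)
    V_reset = -0.080   # reset potential after a spike (V)
    tau_s   = 0.010    # synaptic decay time constant (s)
    g_max   = 0.05     # peak synaptic conductance (relative to leak)
    I_drive = 0.018    # constant external drive (V), enough to cross threshold

    V = np.array([-0.070, -0.065])   # slightly different starts to break symmetry
    s = np.zeros(2)                  # synaptic gating variable driven by each neuron
    spikes = [[], []]
    for k in range(int(T / dt)):
        t = k * dt
        for i in range(2):
            j = 1 - i  # neuron i receives the synapse driven by neuron j
            I_syn = g_max * s[j] * (E_syn - V[i])
            V[i] += dt * (E_L - V[i] + I_drive + I_syn) / tau_m
        s += dt * (-s / tau_s)       # gating decays exponentially between spikes
        for i in range(2):
            if V[i] >= V_th:
                V[i] = V_reset       # integrate-and-fire reset
                s[i] = 1.0           # spike: kick the outgoing gating variable
                spikes[i].append(t)
    return spikes

exc_spikes = simulate_pair(E_syn=0.0)      # mutual excitation
inh_spikes = simulate_pair(E_syn=-0.080)   # mutual inhibition
# Both neurons fire repeatedly in both conditions; the interesting part, as in
# the lecture, is the relative timing (alternation versus synchrony) of the
# two spike trains, which you can inspect by comparing the spike-time lists.
```

This is only a sketch: a forward-Euler integration with a crude instantaneous gating kick, not a careful reproduction of the lecture's simulation, but it is enough to experiment with how the sign of the coupling shapes the relative spike timing.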

Okay, great. That wraps up this particular lecture segment. In the next lecture, we'll look at how we can go from spiking networks to networks based on firing rates.

And this, as we'll see, makes it much easier to simulate large networks of

neurons. So, until then, goodbye and ciao.