You guessed right.

There's a learning rule for perceptrons, and

it involves adjusting the weights and the threshold according to the output error.

The output error is given by vd - v.

vd here denotes the desired output, or the label that we get with each input.

And v denotes the output of the perceptron.

Here are the update rules for the weights and the threshold.

Epsilon, as you will recall, is the learning rate: a positive constant that determines how fast the weights are adapted.
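The slide's exact equations aren't reproduced in the transcript, but the standard perceptron rules being described can be sketched in code as follows. The function names, the learning-rate value, and the assumed form of the rules (w_i ← w_i + epsilon·(vd − v)·u_i, threshold ← threshold − epsilon·(vd − v)) are illustrative:

```python
def perceptron_output(w, u, theta):
    """Output +1 if the weighted sum of inputs exceeds the threshold, else -1."""
    s = sum(wi * ui for wi, ui in zip(w, u))
    return 1 if s > theta else -1

def update_step(w, theta, u, v_d, epsilon=0.1):
    """One application of the learning rule for input vector u with label v_d."""
    v = perceptron_output(w, u, theta)
    error = v_d - v                       # vd - v: 0 when correct, +2 or -2 when wrong
    # Each weight moves in the direction of (error * input)
    w = [wi + epsilon * error * ui for wi, ui in zip(w, u)]
    # The threshold moves in the opposite direction of the error
    theta = theta - epsilon * error
    return w, theta
```

Note that when the output is already correct, the error is zero and nothing changes; the rule only adjusts the perceptron when it makes a mistake.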

Let's see if we can understand this weight update rule,

in the case where the input was positive.

Now, in this case, you can see that the learning rule increases the weight if the error is positive. So what does that mean?

It means that vd was plus 1 and the output of the perceptron was minus 1.

So, in order for it to do the correct thing in this case, that is, to generate an output of plus 1, the perceptron needs to increase

the weighted sum, so that it's above the threshold.

So, it can do that by increasing the weight.

And so we can now see that the learning rule is doing the right thing in this particular case.

Now what if the error was negative?

So, in that case,

you can see that this learning rule is going to decrease the weight.

So is that the right thing to do?

Well, if the error is negative, it means that the desired output, the label,

was minus 1.

And the output of the perceptron must have been plus 1, so

that gives you a negative error.

So in this case, what we want the perceptron learning rule to do is to change the output from plus 1 to minus 1.

So you can make the output minus 1 by decreasing the weighted sum to be below the threshold.

And that's in fact what the learning rule does: it decreases the weight wi,

which in turn makes the weighted sum eventually go below the threshold.

The learning rule does the opposite for the case where ui is negative, and

you should be able to convince yourself that that's the right thing to do.
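The sign argument above can be checked with a quick numeric sketch; all of the values here are made up for illustration, and the assumed weight rule is w ← w + epsilon·(vd − v)·u:

```python
epsilon, theta = 0.5, 0.0

# Positive input, positive error (vd = +1, v = -1): the weight should increase.
w, u = -0.3, 1.0
v = 1 if w * u > theta else -1    # weighted sum is -0.3, below threshold, so v = -1
error = 1 - v                     # vd - v = +2
w_new = w + epsilon * error * u   # -0.3 + 0.5 * 2 * 1.0 = 0.7
print(w_new)                      # weight increased; weighted sum moved up

# Negative input, positive error: the rule instead decreases the weight,
# which still raises the weighted sum w * u, since u is negative.
w, u = 0.3, -1.0
v = 1 if w * u > theta else -1    # weighted sum is -0.3, so again v = -1
error = 1 - v
w_new = w + epsilon * error * u   # 0.3 + 0.5 * 2 * (-1.0) = -0.7
print(w_new * u)                  # weighted sum is now 0.7, above the threshold
```

In both cases the update pushes the weighted sum toward the correct side of the threshold, which is the point of the sign convention in the rule.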

In the case of the threshold, the update rule decreases the threshold if

the error is positive, and increases the threshold if the error is negative.

To see that this is again the right thing to do: in the case where the error is positive, it means that vd was plus 1, and the output of the perceptron was minus 1.

And so you can see that when you decrease the threshold, this in turn encourages

the output of the perceptron to go from minus 1 to plus 1, because now the threshold

has been decreased and so that again, is doing the correct thing.

Similarly, when the error is negative, you must have had the case that the desired

output was minus 1, and the perceptron's output was plus 1.

And so by increasing the threshold, we are now encouraging the perceptron to not output plus 1: the weighted sum is going to fall below the threshold, because the threshold has been increased.

And so once again, that's the right thing to do to make

sure that the perceptron's output matches the desired output.
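Putting the weight rule and the threshold rule together, a hypothetical end-to-end run might look like this. The dataset (the OR function with plus/minus 1 labels), the learning rate, and all names are made up for illustration:

```python
def train(data, epsilon=0.2, epochs=50):
    """Repeatedly apply the weight and threshold update rules over the dataset."""
    w, theta = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for u, v_d in data:
            s = sum(wi * ui for wi, ui in zip(w, u))
            v = 1 if s > theta else -1
            error = v_d - v                     # vd - v
            w = [wi + epsilon * error * ui for wi, ui in zip(w, u)]
            theta -= epsilon * error            # threshold moves opposite to the error
    return w, theta

# The OR function: inputs in {0, 1}, desired outputs (labels) in {-1, +1}.
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, theta = train(data)
outputs = [1 if sum(wi * ui for wi, ui in zip(w, u)) > theta else -1
           for u, _ in data]
print(outputs)  # -> [-1, 1, 1, 1], matching the labels
```

Because OR is linearly separable, the updates stop once every output matches its label, and the learned weights and threshold then classify all four inputs correctly.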