0:35

It is distributed. This actually is a very fuzzy word.

It's hard to exactly pin down its definition.

We are going to see a very distributed solution.

Today and later in the course we'll see many other variants of distributed

solutions. But roughly speaking, distributed means

that the network elements, in this case the transmitters and the receivers, do not need

to talk to each other too much with explicit message passing.

This is also an iterative algorithm. The near-far solution, for example, is

not iterative: you just estimate a channel and then you

tell the receiver what to do. It's one shot.

But most of the algorithms we'll be looking at will be iterative algorithms.

That means there's a certain time index. We usually use k or t to index the time.

Sometimes it's discrete, sometimes it's continuous.

Most likely to be discrete for what we'll be looking at.

Okay. So I have to index this.

And we hope that as time goes on and this index k or t becomes very large, something

good will happen. For example, the algorithm actually stops.

It converges, and it converges at a pretty good point,

solving some optimization or game problem. Alright, so that's enough background.

Now to this just-one-line iterative distributed algorithm.

So what is the algorithm? Let me write down the algorithm first.

It's so simple to write down, just one single line, and then let's

look at it intuitively before moving on to the mathematical analysis of it.
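The one line itself is written on the board rather than spoken in the transcript, so here is a sketch under the assumption that it is the standard distributed power control update, p_i[t+1] = (gamma_i / SIR_i[t]) * p_i[t]:

```python
def dpc_update(p, gamma, sir):
    """One DPC iteration for a single user (assumed standard form).

    p     -- the user's current transmit power
    gamma -- the user's own target SIR
    sir   -- the SIR currently measured at the intended receiver
    """
    # gamma/sir > 1 when SIR is below target (raise power),
    # gamma/sir < 1 when SIR is above target (lower power).
    return (gamma / sir) * p
```

Note that the update uses only two locally known quantities, and costs a single division and multiplication.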

3:42

Well, let's take a look. It is simple, first, in communication.

[inaudible] You don't need to know anything about the network, except your

own target SIR, which presumably you know and which doesn't vary over time, and

the current SIR that your intended receiver is getting.

That's all you need to know. Second, this is very simple in

computation: there is only one multiplication or division, right?

And third, it is very simple in configuration.

What do you mean by configuration? A lot of times, an algorithm has a lot

of parameters. A gain parameter is a typical example.

Step size is a special case of that. We'll see many parameterized algorithms

throughout the course. But this one actually has no parameters.

Right. The ratio between gamma and SIR, itself,

is the gain. You don't have to tune any algorithmic

parameters, and we'll see that simplicity in configuration is often the hallmark of

a successful technology transfer from research to adoption.

Many an algorithm is not used because there are too many parameters to

tune, and people don't know what to do with those parameters.

They aren't sure if the parameters tuned today will still be robust and good tomorrow.

But this DPC is very simple. In communication, in computation, in

configuration. But is it trying to do the right thing?

Well, intuitively, it is. I'm going to look at both

the equilibrium behavior and the convergence behavior, intuitively.

Equilibrium looks actually very good, all right.

What is equilibrium here? We're going to look at at least two more

definitions of equilibrium later, but in this case it just

means that nobody changes from one time to another anymore.

In other words, there's a certain time T, beyond which no one's transmit power is

moving anymore. Well, whenever that is the case, clearly,

that means that this ratio is one. That means, everybody's SIR, is exactly,

the target SIR, alright. So the question is will you ever get to

equilibrium. Will you ever stop?
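Before that, the equilibrium condition just stated can be written out in one step (again assuming the update takes the standard form p_i[t+1] = (gamma_i / SIR_i[t]) p_i[t]):

```latex
p_i[t+1] = p_i[t]
\quad\Longrightarrow\quad
\frac{\gamma_i}{\mathrm{SIR}_i[t]} = 1
\quad\Longrightarrow\quad
\mathrm{SIR}_i[t] = \gamma_i .
```

So a fixed point of the update is exactly the point where every user hits its own target.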

Let's think about that. That's actually not trivial, right?

You think, oh well, if my current SIR is lower than my target, this gain parameter

is bigger than one, that means my next transmit power will be bigger.

Well, that'll presumably help me lift my SIR closer to gamma next time.

On the other hand, if my current SIR is already bigger than the target gamma,

then I'd rather make my transmit power smaller, because, remember, we're talking

about a 2G voice call. So, if I meet my target gamma, I don't care

if my SIR is bigger than that or not. So I'd rather conserve power.

I want to transmit power in a minimal way that achieves the target gammas, and this

algorithm is doing the right thing. So intuitively, the direction is correct.

It is basically sending negative feedback that says: too much SIR, lower

your power; too little SIR, increase your power.

Too little meaning too small relative to the target.

This is all well and good, except you're not the only one in the room, or in this

case, you're not the only one in the network.

There are many other transmitter-receiver pairs going around here, okay?

And they all think the same way, so while you are moving, they are moving too, their

transmit powers that is. So it's not at all clear that this

ensemble, this network, this corral of transmitter-receiver pairs will collectively

converge and stop moving together. But at least it sounds plausible, because

the directions are somewhat correct. Now proving that convergence will happen

is not easy. In fact, convergence may not happen.

You can run this algorithm and it doesn't stop at all.

And that's because, if everybody requires a very big gamma, gamma 1 is big, gamma 2

is big, then maybe there's no way to achieve these gammas.

You can run the simple example, right? I want P1 G11 / (P2 G12 + N1) to

be really big. I also want P2 G22 / (P1 G21 + N2),

which is the received SIR for user two, to be really big.

And for certain G's and N's, you see that I

cannot find any P1 and P2, two numbers that can make both ratios very, very big.
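That two-user trade-off can be checked numerically. A minimal sketch (the gains G, noises N, and the target value are illustrative assumptions, not numbers from the lecture):

```python
import itertools

# G[i][j] is the channel gain from transmitter j to receiver i (assumed values).
G = [[1.0, 0.5],
     [0.5, 1.0]]
N = [0.1, 0.1]  # receiver noise (assumed)

def sir(p, i):
    """Received SIR of user i: P_i G_ii / (sum over j != i of P_j G_ij + N_i)."""
    interference = sum(G[i][j] * p[j] for j in range(2) if j != i)
    return G[i][i] * p[i] / (interference + N[i])

gamma = 3.0  # too ambitious a target for both users with these gains
grid = [10.0 ** k for k in range(-2, 7)]  # powers from 0.01 up to 1,000,000
feasible = any(sir([p1, p2], 0) >= gamma and sir([p1, p2], 1) >= gamma
               for p1, p2 in itertools.product(grid, repeat=2))
print(feasible)  # False: no power pair meets both targets
```

With these cross-gains, pushing user 1's SIR up to 3 forces so much interference on user 2 that user 2's SIR drops below 4/3, so no pair of powers works, no matter how large.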

There is intrinsic competition or trade off among these users.

So, for sufficiently large gammas, all of them being big, you can't even get

convergence. So you can't prove convergence in general, because it's simply not

true. But it turns out that we can show that

whenever the gammas are reasonable, that is, they are actually achievable.

That is, there is a vector of P's that achieves them for the given G's and N's.

Then we can prove that this DPC algorithm will converge to this desirable

equilibrium. That's provably true.
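And the convergence claim can at least be observed in a quick simulation, sketched here with illustrative numbers (weak cross-gains, so the targets are achievable; none of these values are from the lecture):

```python
# Two users with weak cross-interference; gamma = 2 is achievable for both.
G = [[1.0, 0.1],
     [0.1, 1.0]]   # G[i][j]: gain from transmitter j to receiver i (assumed)
N = [0.1, 0.1]
gamma = [2.0, 2.0]

def sir(p, i):
    interference = sum(G[i][j] * p[j] for j in range(2) if j != i)
    return G[i][i] * p[i] / (interference + N[i])

p = [1.0, 1.0]  # arbitrary starting powers
for t in range(100):
    # Every user applies the one-line update simultaneously, using only
    # its own target and its own currently measured SIR.
    p = [gamma[i] / sir(p, i) * p[i] for i in range(2)]

print([round(sir(p, i), 6) for i in range(2)])  # [2.0, 2.0]: both at target
```

Each user moves on its own, yet the ensemble settles at the equilibrium where every SIR equals its own target, which is what the advanced material proves in general for achievable gammas.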

But to show that would take us to advanced material.

And we'll come back to that after the normal period of this lecture.