You can study maximum flow in either undirected or directed graphs.

The directed graph case is in some sense the more general one.

Let me state that here. The input is a directed graph; one special vertex is called the source vertex s, and another vertex, t, is called the sink vertex. Finally, each edge e has a capacity, u_e, which is the maximum amount of flow, or traffic, that the edge can accommodate. Informally, the goal is to push as much stuff as possible from the source vertex s to the sink vertex t, respecting the capacities of the various links.

So you should think of examples like electrical current being generated at s and flowing to t, or you can think of water being injected into the network at s, flowing through these edges representing pipes, and then being drained out at t. The point is, you have conservation of flow at every vertex other than s and t. That is, the amount of stuff coming in has to equal the amount of stuff coming out, except at s, where stuff gets generated, and at t, where stuff gets drained.

As a simple example, suppose in this pink network on the right, I give each edge a capacity of 1. The maximum amount of stuff you can push from s to t in this network is 2 units. The way to do that is you send one unit of stuff over the top path, from s to v to t, and you send a second unit on the bottom path, using w as an intermediate stop.

So I'm happy to report that the maximum flow problem can be solved exactly, in polynomial time. There are many ways of doing it, many different cool algorithms that solve the max flow problem.

But the simplest ways are, in effect, greedy algorithms, where you route flow one path at a time. One thing worth pointing out is that the most obvious greedy approach to the maximum flow problem doesn't work. The most obvious thing to do is to just find a path with residual capacity along every edge, send flow along that path, and repeat.

So, the sub-optimality of the naive greedy algorithm is already evident in the four-vertex network that I've shown you on this slide. Suppose in the first iteration, to find your first path to push flow along, you wind up choosing the path that goes from s, to v, to w, to t. So, that has three edges on it. All of those edges have capacity 1, so you're free to push one unit of flow along this exact path. But if you do that, you've now blocked up the other paths as well, the s-v-t path and the s-w-t path. There is no way to send further flow along any of these paths. So this naive greedy algorithm only computes a flow of value 1, whereas we know the max flow value is equal to 2.
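You can verify the blockage directly. In this sketch (using the same edge-dictionary representation as an assumption, with edge names matching the lecture's network), we saturate the zigzag path s-v-w-t and then check that no s-t path with leftover forward capacity remains, so the forward-only greedy algorithm is stuck at value 1:

```python
# The four-vertex network, now including the edge v->w; all capacities are 1.
capacity = {("s","v"): 1, ("s","w"): 1, ("v","w"): 1, ("v","t"): 1, ("w","t"): 1}

# The unlucky first greedy iteration: saturate the zigzag path s->v->w->t.
flow = {e: 0 for e in capacity}
for e in [("s","v"), ("v","w"), ("w","t")]:
    flow[e] = 1

def reachable_forward(capacity, flow, s):
    """Vertices reachable from s using only edges with leftover forward capacity."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (a, b), c in capacity.items():
            if a == u and b not in seen and flow[(a, b)] < c:
                seen.add(b)
                stack.append(b)
    return seen

# t is unreachable going forward: naive greedy is stuck at flow value 1.
print("t" in reachable_forward(capacity, flow, "s"))  # False
```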

So instead, you have to allow a more generous notion of an augmenting path, where you can send flow in reverse, in effect undoing augmentations of previous iterations. But with this more generous definition of augmenting paths, the resulting greedy algorithm is indeed guaranteed to compute a maximum flow. Moreover, if you're smart about which augmenting path you use in each iteration of the greedy algorithm, you can prove a running time bound which is polynomial. There are also a number of non-augmenting-path-based polynomial-time algorithms for the maximum flow problem.
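To sketch how reverse residual edges rescue the greedy approach, here's a minimal augmenting-path implementation of my own (a BFS-based, Edmonds-Karp-style sketch, not code from the lecture). Starting from the bad state where the zigzag path s-v-w-t is already saturated, it finds the augmenting path s, w, v, t, whose middle hop uses the reverse edge (w, v) to undo the flow on v->w:

```python
from collections import defaultdict, deque

def augment_to_max_flow(residual, s, t):
    """Push flow along BFS augmenting paths in a residual graph until none
    remain. Reverse residual edges let later paths undo earlier flow."""
    value = 0
    while True:
        # BFS for an s->t path with positive residual capacity on every edge.
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for (a, b), r in residual.items():
                if a == u and r > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if t not in parent:
            return value
        # Recover the path, find its bottleneck capacity, and push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (a, b) in path:
            residual[(a, b)] -= bottleneck
            residual[(b, a)] += bottleneck  # reverse edge: flow can be undone
        value += bottleneck

capacity = {("s","v"): 1, ("s","w"): 1, ("v","w"): 1, ("v","t"): 1, ("w","t"): 1}

# Start from the bad greedy state: the zigzag path s->v->w->t is saturated.
residual = defaultdict(int)
for e, c in capacity.items():
    residual[e] = c
for (a, b) in [("s","v"), ("v","w"), ("w","t")]:
    residual[(a, b)] -= 1
    residual[(b, a)] += 1

# One more augmenting path exists, s->w->v->t, so we reach the max flow of 2.
extra = augment_to_max_flow(residual, "s", "t")
print(1 + extra)  # 2
```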

In fact, this diversity of solutions for the maximum flow problem makes it a little hard for me to give you a succinct bottom line about what running time we can get for this problem. But roughly speaking, we don't have algorithms which are near-linear time, but we have plenty of algorithms which are not much worse than that, in the quadratic regime. Think about, you know, Bellman-Ford-like running time bounds of O(mn), and a lot of these algorithms do better than quadratic in practice.

And one thing that's cool is that even though this maximum flow problem is totally classical, these augmenting path algorithms were first studied in the 1950s by Ford and Fulkerson, yet even in the 21st century we've been seeing some really nice new developments with maximum flow: cool new algorithms based on things like random sampling, or connections to electrical networks.

In some applications of flow networks, you want to think about flows as not

being computed by some centralized algorithm but rather as arising from the

behavior of a number of participants. Think, for example, of when you get in a car and drive from, say, home to work. Nobody's telling you which route you have to take; rather, you choose what route you want to take from home to work based on your own preferences. For example, perhaps you take the shortest route available. Such self-interested behavior in networks has been a major research topic in the 21st century.

Let me just show you one cute, cocktail-party-ready example of that, called Braess's paradox. So imagine we have a flow network.

And again you might want to think about road traffic as being a simple example.

Let's focus on a case where a bunch of morning commuters are leaving a common

suburb, represented by a vertex s, destined for a nearby city, let's call it

a vertex t, all leaving at roughly the same time and all of them get to pick

whichever routes they want through the network to get from s to t.

So, to better represent issues in transportation networks, instead of thinking of links as having capacities, let's think of them as having delay functions. For each edge, there's going to be a function which describes, as a function of how much traffic uses that edge, the resulting travel time that all of that traffic endures. As you know from your own experiences, the more congested a road is, the longer it takes you to get across that road. So as an illustration, in this pink

network on the right, let's give two of the roads the constant delay function

always equal to one. So these would be roads that have, in

effect, an infinite number of lanes but they're also pretty long.

So no matter how much traffic is using it, it always takes you an hour to

traverse one of these two roads. The other two roads, by contrast, are

going to have the travel time increase with the congestion.

And just to keep things really simple, let's use the identity function so that

if 100% of the traffic uses one of these roads, then it takes an hour.

If 50% of the traffic uses one of these roads, all of that traffic takes a half

an hour to traverse the road, and so on. So let's now look at this pink network.

Remember, we have this one unit of selfish traffic, representing hundreds or thousands of drivers, each picking their own route from s to t.

Let's assume that drivers just want to get to t as quickly as possible.

They just want to minimize their travel time.

And then the question is: which flow is going to arise from this aggregate self-interested behavior? Well, the two routes are exactly the same. Each one has one road that always takes an hour, and also has a second road whose travel time, in hours, is proportional to the fraction of traffic that uses it.

So because of this symmetry, once things settle down into a steady state, we expect half of the traffic to be using the top path and half of the traffic to be using the bottom path. With a 50/50 split of the traffic, each of the two routes takes an hour and a half overall.
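To check the arithmetic, here's a quick sketch (the delay functions are written as Python lambdas; the edge names match the pink network). With a 50/50 split, each route takes 0.5 + 1 = 1.5 hours:

```python
# Delay of each road as a function of x, the fraction of traffic using it.
delay = {
    ("s", "v"): lambda x: x,    # congestible road: time grows with traffic
    ("v", "t"): lambda x: 1.0,  # long, wide road: always takes one hour
    ("s", "w"): lambda x: 1.0,
    ("w", "t"): lambda x: x,
}

def path_time(path, load):
    """Total travel time along a path, given each edge's traffic load."""
    return sum(delay[e](load[e]) for e in path)

top, bottom = [("s", "v"), ("v", "t")], [("s", "w"), ("w", "t")]

# Steady state: half the traffic on each route.
load = {("s", "v"): 0.5, ("v", "t"): 0.5, ("s", "w"): 0.5, ("w", "t"): 0.5}
print(path_time(top, load), path_time(bottom, load))  # 1.5 1.5
```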

Now, I mentioned something called Braess's paradox. So what's the paradox? Well, imagine that we view this hour-and-a-half commute time as totally unacceptable. Moreover, imagine that we have a new technology: someone just invented a teleportation device, and we want to install it in this network so that people can get to their workplace faster than before.

So let's install one of these teleporters at the vertex v, to allow people to travel instantaneously to the vertex w. We'll represent that by adding to this pink network a fifth link, from v to w, with a constant delay function always equal to zero.

So what are the ramifications of adding this teleportation device, allowing you to go from v to w instantaneously? Well, suppose you were one of those drivers stuck with your current commute time of an hour and a half. No surprise, you would want to ditch your old route to make use of this new teleporter. So you would want to switch from your old path, whether it was s-v-t or s-w-t, to the new zigzag path, s-v-w-t. It's looking like, then, that you'd get to work in only an hour: you'd spend half an hour on the edge s-v, instantaneous teleportation, and another half an hour on the edge w-t.

But here's the catch: not only are you going to switch your path to use the teleportation device, so are the thousands of other drivers. So in the new steady state, after we've augmented the network with this teleportation device, everybody's going to be using the zigzag path s-v-w-t.

And now that everybody is doing exactly the same thing, all of the traffic uses the edge s-v, driving its travel time up to an hour, and everybody's using the edge w-t, driving its travel time up to an hour.

So now, despite the fact that we've made the network intuitively only better, in the unique new steady state, assuming everybody selfishly chooses their minimum-travel-time path, the commute time has actually gotten worse. It's jumped from an hour and a half for everybody to two hours for everybody.
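The before-and-after comparison is easy to verify numerically. This sketch (lambda-based delay functions of my own, with the teleporter modeled as a zero-delay edge) confirms that the all-zigzag state is a steady state, and that everyone's commute is now two hours, since deviating to either old path also takes two hours:

```python
# Delay functions, now including the teleporter as a fifth, zero-delay edge.
delay = {
    ("s", "v"): lambda x: x,
    ("v", "t"): lambda x: 1.0,
    ("s", "w"): lambda x: 1.0,
    ("w", "t"): lambda x: x,
    ("v", "w"): lambda x: 0.0,  # the teleporter
}

def path_time(path, load):
    """Total travel time along a path, given each edge's traffic load."""
    return sum(delay[e](load[e]) for e in path)

zigzag = [("s", "v"), ("v", "w"), ("w", "t")]
top    = [("s", "v"), ("v", "t")]
bottom = [("s", "w"), ("w", "t")]

# New steady state: all of the traffic takes the zigzag path.
load = {("s", "v"): 1.0, ("v", "w"): 1.0, ("w", "t"): 1.0,
        ("v", "t"): 0.0, ("s", "w"): 0.0}
print(path_time(zigzag, load))                         # 2.0: worse than before
print(path_time(top, load), path_time(bottom, load))   # 2.0 2.0: no gain from deviating
```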

That is Braess's paradox: improvements to networks with selfish users can make the outcome worse for everybody. Wow. So that's Braess's paradox, discovered by, you guessed it, Braess, a German mathematician, back in 1968. Amaze your friends and colleagues at the next nerdy cocktail party you find yourself at.

Actually, there's a physical realization of Braess's paradox you might find more useful for that purpose. Here's the idea: you're going to take, as ingredients, strings and springs. The strings are meant to perform the function of the constant delay functions. Strings, of course, are inelastic objects: the length of a string is independent of the amount of force you apply. Springs, on the other hand, are elastic; that is, their length is proportional to the force applied to the spring. So those play a role analogous to the linear delay functions in this network. So Braess's paradox shows that there exists a way to wire strings and springs together.

And then you hang this contraption of strings and springs from a fixed base, say the underside of a table, and you hang a weight from the bottom of these strings and springs, which naturally stretches it out.

Now, in general, when you have a contraption that's holding some weight and you start chopping, taking a pair of scissors and cutting things out of the middle of this contraption, you expect the contraption to be weaker, and therefore the weight is going to sag further toward the ground. But in the same way that Braess's paradox shows that removing a supposedly helpful teleportation device from a network can actually improve, can actually shorten, selfish commute times, it shows that cutting strings out of the middle of the contraption can actually get the weight to levitate further off of the ground.

Students in my classes have built these strings and springs contraptions, and

have demonstrated this levitation. If you search YouTube I'll bet you could

find some videos. Try it at home.