Hello and welcome back. We have been talking so much about active

contours during this week as one of the examples of the use of partial

differential equations in image processing that I want to conclude the

week with a bit more detail, now that we know all the background material that

we need. We know curve evolution, level sets,

calculus of variations, differential geometry.

We have all those concepts that we learned this week.

Let's put them together in one of the active contours type of techniques.

So, let's just discuss that a bit more. We normally talk first about edge

detection. Now, edge detection basically only talks

about local changes. So it's basically giving us places where

there's a big change in the image. And sometimes we compute edges

with gradients, for example, computing the gradient of the image.

Then at every location we might have the strength of the edge, which is the

absolute value of this, or the magnitude of this gradient.

We might have the basic direction of the edges.

Remember that the normal to the level sets is in this direction.

So basically, if we have an object, this is the level line, and then the

gradient is perpendicular to it, so we also get the direction of the edges, which

are in this direction. So we get a lot of information, but all

that information is local. As we see in this picture.
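As a minimal sketch of these local measurements, assuming a grayscale image stored as a NumPy array (the function name and the synthetic step image below are illustrations, not code from the lecture):

```python
import numpy as np

def edge_strength_and_direction(image):
    # Gradient by central finite differences: gy along rows, gx along columns.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)    # edge strength: |grad I|
    direction = np.arctan2(gy, gx)  # edge direction, in radians
    return magnitude, direction

# A tiny synthetic image with a vertical step edge:
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag, ang = edge_strength_and_direction(img)
# mag is largest along the step; ang there is 0 (gradient points in +x)
```

Both quantities are purely local, which is exactly the limitation that active contours will address.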

The goal of active contours, which was also, for example, the goal of

other techniques such as graph cuts, is to integrate those local measurements.

If we look at this picture, for example, we see local representations of

edges. There are some edges basically spread

around, but if we look carefully, it looks like there is an object here and probably

another object here. So there is some relationship between the

local edges, and that's what active contours are going to try to capture.

Let's just see that. That's what we want to do.

I want to run that again. We basically want to integrate these local

measurements of image changes. And that's what active contours do,

exactly in the way I just showed it. It's just curves that move with the

equations that we have seen and we are going to discuss again, curves that move

towards those local edges. So, active contours basically start from

an image. The first thing you do is you have to

compute edges. You have to do these local computations.

There's a lot of art in how to compute those edges.

But that's always the first step. So you end up with an image that has,

kind of, information about the local characteristics of the image, and often,

when we talk about active contours, we talk about this function.

So we smooth the image a bit, because by this time, we're in the sixth

week, we know images are noisy. And very often we have to just clean the

noise a bit. We take the gradient as an indicator of the

local edges; this is just to make sure that we don't divide by zero.

So we basically take one over a regularized gradient, or one over the

gradient of our regularized image. And that gives us kind of an indicator

function of where edges should be. If there are edges, the gradient is

very high, so one over it is very small. Once we have that, then the active

contours have to go and integrate that. Now, where are the active contour

equations that I showed you before, and am going to show you again, coming from?

One of the things is that, what is a curve?

What is a nice curve? A nice curve is a curve that goes around

areas where this image, which is basically this one, is very low.

Remember, once again: gradient very high, one over gradient very small.
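A common concrete choice for this stopping function is g = 1 / (1 + |grad(G_sigma * I)|^2), where G_sigma is a Gaussian; a sketch along those lines, with hypothetical names, using SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stopping_function(image, sigma=1.0):
    # 1) Regularize: smooth the noisy image with a Gaussian of width sigma.
    smoothed = gaussian_filter(image.astype(float), sigma)
    # 2) Gradient magnitude of the regularized image (local edge strength).
    gy, gx = np.gradient(smoothed)
    grad_mag = np.hypot(gx, gy)
    # 3) Inverse edge indicator: close to 1 in flat regions, close to 0 on
    #    strong edges. The "1 +" in the denominator avoids division by zero.
    return 1.0 / (1.0 + grad_mag ** 2)

img = np.zeros((32, 32))
img[:, 16:] = 100.0                # strong vertical edge
g = stopping_function(img, sigma=1.0)
# g is near 0 along the edge and near 1 in the flat regions
```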

So I want to find curves that travel around regions where this

image is very small, and this is represented here.

I am basically integrating the image g along the curve, and I want that

integral to be very small. That will be my curve for the active

contour. So, this is a functional of the curve, involving g.

And I'm looking for the curve that minimizes that functional.

We know how to do that: that's calculus of variations.

We're looking for a curve, that is, a function, that minimizes this.

So we plug g in here, we do calculus of

variations, and we get an equation for the active contours.

That's basically the whole trick. So, once again, we do the local edge

indicator. We are looking for curves that are what

are called curves of minimal weight, or shortest paths, in this space.

We do calculus of variations. And we get the curve deformation that

moves the curve towards the minima of this function.

Again, sometimes it is just a local minimum. We move it towards the minimum of that

function. And those are curves that are traveling

around low values of g. Which means, traveling around edges.

And that's basically how we get the equation for active contours.

Let's see that equation again. Before that, I want to show you just one

example; I think it's a nice example, in ultrasound.

Those green curves are original curves and they deform, according to the

equation that we obtained from the Euler-Lagrange, towards the red curve.

The red curve is sitting in a place where, if I integrate, if I sum G

all around it, the result is low, certainly much lower than for the initial one.

And that's the deformation that the calculus of variations gave me.

So let's talk about this function g. The function g, I'm going to just draw

some graphs here, is basically, this is an image.

It's a very simple binary image. These are the edges of the image.

Let's look at a profile. This is almost binary.

It looks on the screen as binary. The profile, when I travel here, goes

up and down. So it's about this.

Now when you compute edges in this image, you are going to get basically high

values around these regions. And then if you do this, those are going

to be low values, and we know that. Remember, and we're going to see the

equation again. Remember, first of all, we want to travel

in regions of low G, because we're doing the integral of G.

So we want to go around regions with low G, as we see here.

That's one thing and, remember, we want this G to be very small because we want

the curve, when it's evolving, to stop at the boundaries.

We don't want it to continue evolving. So that's one very important concept and

that's what we get from taking this G function, this potential function, to be

a function of the gradient, an inverse function of the gradient.

Now, when we do the Euler-Lagrange of the calculus of

variations of this, we get this equation.

That we see here. This is the equation that I showed you a number

of times, basically the equation from the calculus of variations.

So this equation is minimizing this energy.

And it is a curve evolution: the curve is evolving.

We have seen this a number of times

already. And of course, this is implemented with

the level set, and I showed you in one of the previous videos, at the end, a slide

with this equation in the level sets framework, with an implicit function phi.

So, this is the equation that is basically the gradient descent, or the

equation that basically minimizes this; the minimizing curve is also called a geodesic curve.
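As a sketch of that gradient descent in the level sets framework, here is one explicit Euler step of phi_t = g * kappa * |grad phi| + grad g . grad phi (the function name is hypothetical, and a real implementation would use upwind schemes, a narrow band, and reinitialization of phi):

```python
import numpy as np

def gac_step(phi, g, dt=0.1):
    """One explicit Euler step of the geodesic active contour level set
    evolution: phi_t = g * kappa * |grad phi| + grad g . grad phi."""
    gy, gx = np.gradient(phi)
    grad_mag = np.hypot(gx, gy) + 1e-8            # avoid division by zero
    # Curvature: kappa = div( grad phi / |grad phi| )
    nyy, _ = np.gradient(gy / grad_mag)
    _, nxx = np.gradient(gx / grad_mag)
    kappa = nxx + nyy
    # Attraction term grad g . grad phi: traps the curve at edges
    ggy, ggx = np.gradient(g)
    attraction = ggx * gx + ggy * gy
    return phi + dt * (g * kappa * grad_mag + attraction)

# Sanity check: with g == 1 this is pure curvature motion,
# so a circle (zero level set of a signed distance) must shrink.
yy, xx = np.mgrid[0:32, 0:32]
phi = np.hypot(xx - 16, yy - 16) - 10.0           # circle of radius 10
g = np.ones_like(phi)
for _ in range(20):
    phi = gac_step(phi, g)
```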

It's a curve of minimal weight, and these are also sometimes called geodesic

curves. Now we already saw in the previous slide

this function, and this is the function that gives us basically this, what we see

here. Now it comes out from the minimization of

these also, these term. What these term is doing, look, there's a

gradient. Of the potential function, not a gradient

of the image, a gradient of the potential function.

And those are represented here by these green lines.

So it's not only that G gets very close to zero, and is going to try to stop the

evolution. Remember, if we don't have G, this is

curvature motion and it shrinks it to a point.

This would try to stop the evolution, but if we don't have an ideal edge, this will

never be zero. Then it will slow down, but it won't stop.

It will come here and it will kind of move because G is not zero yet.

It will maybe just move around, or maybe even jump over, if G is not really,

really very low and the curvature wins. So this other term, which once again

comes naturally from the calculus of variations, is

basically pushing the curve

towards the gradient of G, this green

arrow. So it's saying: okay, I'm going to

trap you, because I have this term that is not

going to let you go away. So this is kind of an extra velocity.

And once again, this part is because, remember, when we have velocities that

are not perpendicular to the curve, all we care about is the perpendicular part,

the normal component, and this is what this operation does.

It projects it onto the normal component. So we have a stopping term that says:

come to me, come very smoothly to me, because you have curvature here.

Now, we know that curvature motion smoothes the curve; stop when you get

close to edges, or at least slow down. And this term traps you: it is pushing you,

because it is the gradient of this function, so you get stuck there.
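Putting the stopping term and this trapping term together, the geodesic active contour evolution can be written as follows, in one common sign convention (with N the unit inward normal and kappa the curvature; signs flip if the outward normal is used):

```latex
% Geodesic active contours: g-weighted curvature motion plus the
% attraction term grad g, projected onto the unit normal N.
\frac{\partial C}{\partial t}
  = g\,\kappa\,\mathcal{N} - \left(\nabla g \cdot \mathcal{N}\right)\mathcal{N}
```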

So that's basically what the active contours, or in particular the geodesic

active contours, do. Sometimes they are also called

geometric active contours, because the equation has this curvature and a lot of geometric

components of the curve. This is what these geodesic active

contours are doing: minimizing this energy, this geodesic energy, trying

to find curves of minimal distance, minimal weighted distance with weight G.

And they do that by doing calculus of variations, getting to this curve

evolution, implemented in the level sets

framework. So everything that we have

learned during this week comes together in these Geodesic Active Contours.