Hi, and welcome back to the introduction to deep learning.

You already know the mathematics and the basic principles behind deep learning.

Now, it's time to get your hands dirty.

Before we do that, I have a question for you.

What do you think are the requirements for a deep learning framework?

What capabilities or APIs do you expect from it?

Thank you for your answers.

For us, the answer looks like this: it should provide fast computation, in particular fast matrix operations;

it should provide symbolic differentiation;

and it should provide optimization routines and preferably run on GPUs.

There are several frameworks which are more or less equal.

We've chosen TensorFlow for this specialization because it has the largest user community,

the ability to run on distributed systems,

ease of integration into production,

and also terrific visualization capabilities, namely TensorBoard.

Let's go.

If you're not running our environment,

you should probably install TensorFlow first.

Now, we launch TensorBoard.

It's running locally.

It should be available under this link.

It's currently empty.

There is nothing to see here yet, but we'll be filling it up as we go.

So, the first thing we do,

we import TensorFlow and we create a session.

The session is our interface to the computing engine.

If you want to use GPU resources, you should pass the corresponding options here.

To get a feeling of what's going on,

let's do a simple exercise, namely use an ordinary function to compute the sum of squares of the numbers from zero to N-1.

And please don't tell me it has an analytical solution.

That's not the point here.

The point here is to do the same thing with TensorFlow.
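Before moving to TensorFlow, here is what the plain NumPy version of this exercise might look like (a minimal sketch; the variable names are mine, not from the lecture's notebook):

```python
import numpy as np

# Plain NumPy version of the exercise: the sum of squares of 0..N-1.
N = 10
result = np.sum(np.arange(N) ** 2)
print(result)  # 285
```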

Before, N was a plain integer.

Now, N is a tf.placeholder. It is strongly typed, so we know it should be an integer, but its value is not specified here.

Our result is then a TensorFlow expression, a sequence of operations:

we call tf.reduce_sum(tf.range(N)**2).

This hopefully looks just the same as in NumPy.

And when we run it, the computation also shows up in TensorBoard.

Well, what happened here?

TensorFlow is different from, say, NumPy

in the sense that the definition of your computation and its execution are separate.

If you're into functional programming,

you should like it a lot,

because the basic building block of TensorFlow is a symbolic graph.

A symbolic computation graph in which you define inputs and

the transformations to be applied to those inputs.

What we did here: we had N,

so this is our N. Remember,

we gave it a name when creating the placeholder.

That's why it appears like this.

Then we have Range.

Range is just another operation.

The vector produced by Range

gets passed to Power, which squares it,

and then gets passed to Sum, which reduces it to a single number.

So, to define such graphs,

we define placeholders for the inputs,

we build the graph by combining operations, and when

we need to run it, we just call the eval or run methods.
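This define-first, run-later workflow can be illustrated with a toy, plain-Python analogy (this is not TensorFlow code, just a sketch of the placeholder → graph → run pattern):

```python
# Toy illustration: definition and execution are separate steps.
def make_graph():
    # "Placeholder": the graph is a function of its future input n.
    def graph(n):
        # range -> square -> sum, mirroring the TensorFlow expression
        return sum(i ** 2 for i in range(n))
    return graph

graph = make_graph()  # definition: nothing is computed yet
print(graph(10))      # execution: 285
```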

Now, TensorFlow

supports all the standard numerical data types.

It also has most of the functions you'll find in NumPy.

So, since you're in the Advanced Machine Learning specialization, you should

not have trouble switching from one to the other.

One last point:

this is obviously not a complete introduction to TensorFlow.

It has many,

many features, some of them useful and high-level,

and you should take a look at tf.contrib before reinventing some wheel.

Let's see some more TensorFlow stuff.

I'll begin with just placeholders.

So this is a float input.

It can have any size, any shape.

We can require our input to be a vector,

a vector of any length:

if there is a None in the shape,

it means that dimension can take any size.

You can have a vector of fixed size,

or a matrix with a fixed number of columns but any number of rows.

You can have a multidimensional tensor.

And you can freely combine Nones and numerical dimensions.
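The meaning of None in a shape can be sketched with a small hypothetical helper (this function is mine, not part of TensorFlow; it just mimics the matching rule described above):

```python
import numpy as np

# Hypothetical helper: None matches any size along that dimension,
# while a number must match exactly.
def matches_shape(array, spec):
    if array.ndim != len(spec):
        return False
    return all(s is None or s == d for d, s in zip(array.shape, spec))

m = np.zeros((5, 10))
print(matches_shape(m, (None, 10)))  # any number of rows, 10 columns: True
print(matches_shape(m, (None, 3)))   # wrong number of columns: False
```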

The operations are defined in a very,

very user-friendly fashion.

So, here you take each element of the input vector and double it, element-wise.

Here we take each element of the input vector and compute its cosine.

And here, you take the input vector, square each component,

then subtract the original vector and add one to each component.
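The NumPy analogues of these element-wise transformations look essentially the same as the TensorFlow expressions (a sketch; the vector values are mine):

```python
import numpy as np

# Element-wise transformations described above.
v = np.array([1.0, 2.0, 3.0])
doubled = v * 2          # double each element
cosines = np.cos(v)      # cosine of each element
poly = v ** 2 - v + 1    # square, subtract the original, add one
print(doubled)  # [2. 4. 6.]
print(poly)     # [1. 3. 7.]
```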

Now, we run it. This is an example of a more complex transformation.

We define two vectors, and then we do

element-wise multiplication and element-wise division.

Here we see that

the result is again a tensor. Then we evaluate:

we take the transformation, we call eval,

and we pass a dictionary of input values.
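A rough plain-NumPy analogue of this pattern (not TensorFlow; the function and input values are mine): the transformation is defined once, then evaluated by passing a dictionary of inputs, much like eval with a feed dict.

```python
import numpy as np

def transformation(feed):
    # element-wise multiplication and division of the two input vectors
    return feed["a"] * feed["b"], feed["a"] / feed["b"]

product, quotient = transformation({"a": np.array([2.0, 4.0]),
                                    "b": np.array([1.0, 2.0])})
print(product)   # [2. 8.]
print(quotient)  # [2. 2.]
```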

So we write it to TensorBoard.

Here are our exercises in TensorBoard.

Well, if you ask me,

it looks like an unholy mess.

You might recognize this part, but everything else is very cluttered; moreover,

you can even expand it to see that

you have many placeholders.

There are ten nodes. Clutter, clutter, clutter.

How do we deal with it?

TensorFlow provides you with capabilities to group stuff and name stuff.

All those exercises that are the problem

we can put into a name scope, and instead of

an anonymous vector we can have

a vector with a name, to distinguish it from the other placeholders.

Now, let's begin from the beginning.

If we rerun the transformations and reload TensorBoard,

we shall see that, instead of a whole screen of clutter,

we now have all the placeholder examples

neatly grouped into a collapsible box,

and we have my transformation here, with

a vector we can easily recognize by name, and its operations.

Here is also a vector which still retains its default name.

Now, to summarize: TensorFlow

is the framework we'll be using throughout this specialization to implement deep learning,

so you should get familiar with it.

The major component of TensorFlow is the computation graph:

a graph of transformations

applied to numerical data.

Now to your assignment,

I suggest you implement the mean squared error computation in TensorFlow.

Here are some tests which will help you make sure

it's correct against the baseline implementation from scikit-learn.
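For reference, a plain NumPy implementation of mean squared error can serve as such a baseline (a sketch, with made-up example values; scikit-learn's `sklearn.metrics.mean_squared_error` computes the same quantity):

```python
import numpy as np

# NumPy reference implementation of mean squared error.
def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3 ≈ 1.333
```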

So, thank you and

see you in the next video,

where we'll learn how to implement actual machine learning models in TensorFlow.