0:13

To help you understand the lectures, we provide several complementary materials.

In this complementary material, I am going to introduce you to tensors.

So we will follow the English proverb: keep the task small.

So to study tensors,

we will consider two-dimensional tensors, which are much simpler,

so that most of the intermediate expressions can be written explicitly.

So we will consider two-dimensional tensors in flat space.

To introduce what a tensor is, let me start with a vector.

All of you know what a vector is.

A vector, in two dimensions, is something which carries two components.

But there is a distinction between a mere pair of numbers and a vector.

A vector is a quantity which transforms appropriately

under coordinate transformations.

Namely, a vector with components $V_1$, $V_2$ transforms according to the following rule:

$$\begin{pmatrix} \bar V_1 \\ \bar V_2 \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}.$$

Well, this is a coordinate transformation matrix.

We consider only linear transformations in flat space.

And most frequently we actually consider rotations, for which this matrix is the famously known

$$M = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}.$$

2:02

So this is just the matrix of rotation by angle $\varphi$.

So this is a two-dimensional vector.

Well, what is the difference?

In tensor notation, a vector is represented as follows.

It is a quantity $V_a$ which carries an index $a$, with $a$ ranging from 1 to 2.

Then in tensor notation

this equality is written as follows:

$$\bar V_a = M_{ab} V_b.$$

2:36

So in tensor notation,

a repeated index implies a summation.

So this literally means that there is

a sum over $b$ from 1 to 2: $\bar V_a = \sum_{b=1}^{2} M_{ab} V_b$.

And let us see that this is actually the matrix equation above.

Explicitly, this is just one equation for each value of $a$.

So we have $\bar V_1$, which is just $M_{1b} V_b$,

which is just $M_{11} V_1 + M_{12} V_2$.

And similarly, $\bar V_2 = M_{2b} V_b = M_{21} V_1 + M_{22} V_2$, and so forth, obviously.

So the tensor notation has exactly the same meaning as the matrix notation.
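This rule can be checked numerically. Below is a minimal sketch in plain Python (the helper names `rotation_matrix` and `transform` are mine, not from the lecture): the repeated index $b$ is summed over explicitly, reproducing the matrix action.

```python
import math

def rotation_matrix(phi):
    """The 2x2 rotation matrix M_ab for rotation by angle phi."""
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def transform(M, V):
    """V_bar_a = M_ab V_b: the repeated index b is summed over."""
    return [sum(M[a][b] * V[b] for b in range(2)) for a in range(2)]

# Rotating the vector (1, 0) by 90 degrees should give (0, 1).
M = rotation_matrix(math.pi / 2)
V_bar = transform(M, [1.0, 0.0])
```
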

3:39

So what is a tensor?

A tensor is a quantity which carries indices.

So for example,

a rank-$n$ tensor is a quantity

which has $n$ indices, $T_{a_1 \ldots a_n}$.

For now, I restrict myself to tensors which carry only lower indices.

We will see the difference between lower-index and

upper-index tensors a bit later.

So a tensor is some quantity which carries indices, similarly to a vector.

But there can be many indices.

So a tensor is a collection of quantities which transforms

under rotations, under coordinate transformations, as follows:

$$\bar T_{a_1 \ldots a_n} = M_{a_1 b_1} \cdots M_{a_n b_n}\, T_{b_1 \ldots b_n}.$$

So on each index, among the $n$ indices,

there is an action of the same matrix.

5:22

For example, the product of two vectors, $T_{ab} = V_a W_b$, carries two indices, and

under rotations it transforms according to this rule:

each of the two vectors transforms with the matrix $M$, so the rule is the same.
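The two-index rule $\bar T_{ab} = M_{ac} M_{bd} T_{cd}$ can be sketched as follows (a minimal check in plain Python; the variable names are mine): build $T_{ab}$ as an outer product of two vectors, transform it with two copies of $M$, and verify that this agrees with rotating each vector separately.

```python
import math

phi = 0.7  # an arbitrary rotation angle
M = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

V, W = [1.0, 2.0], [3.0, -1.0]
T = [[V[a] * W[b] for b in range(2)] for a in range(2)]  # T_ab = V_a W_b

# T_bar_ab = M_ac M_bd T_cd: each index is acted on by the same matrix M.
T_bar = [[sum(M[a][c] * M[b][d] * T[c][d]
              for c in range(2) for d in range(2))
          for b in range(2)] for a in range(2)]

# Rotating the vectors first and then taking the product must agree.
V_bar = [sum(M[a][b] * V[b] for b in range(2)) for a in range(2)]
W_bar = [sum(M[a][b] * W[b] for b in range(2)) for a in range(2)]
```
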

5:34

So in principle, if you have a tensor with two indices,

it is convenient to place it in a matrix, for example

$$\begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix}.$$

5:55

But if you have a tensor with more indices,

its placement into a matrix is already rather cumbersome.

In principle, we can place a three-index tensor in a cubic array of the following form:

the first layer is $T_{111}, T_{112}, T_{121}, T_{122}$,

and then there is a second layer, $T_{211}, T_{212}$, and so on.

But who needs this placement?

In fact, if you have a tensor with many indices,

you can place it in a hypercubic array.

But there is no point in doing that.

Even this placement is pointless, because

one should get rid of this way of writing things.

One should use tensor notation, because it is convenient for

many reasons, and we will explain why during this discussion.

7:06

To clarify why we need tensors, let me introduce the notion

of the scalar product, or norm, of spatial vectors.

So we all know that if we have two vectors,

we can form their scalar product,

which is just $(V, W) = V_1 W^1 + V_2 W^2$.

Let me just stress why I use upper and lower indices:

in this case it is, in a sense, meaningless, but

it stresses that we have a row $(V_1, V_2)$

multiplying a column $(W^1, W^2)$.

For a row, we use lower indices.

For a column, we use upper indices.

So this is equivalent to the expression above.

Now let us see how the scalar product is written in tensor notation.

8:27

Well, for that we use the metric tensor, the bilinear form which specifies the norm.

For example, the norm of a vector $V$,

that is, the scalar product of $V$ with itself, can be written

in several equivalent ways:

$$(V, V) = V_a V^a = \delta_{ab}\, V^a V^b = \delta^{ab}\, V_a V_b.$$

Now, how do we obtain an upper index from a lower one?

If we have a vector with a lower index, we can multiply it by the tensor

with upper indices $\delta^{ab}$ (what this is, I will explain in a moment).

Then we obtain an upper index: $V^a = \delta^{ab} V_b$.

And if we have an upper index, we multiply

by $\delta_{ba}$ and obtain $V_a = \delta_{ab} V^b$.

This tensor is the inverse of that one, which in our case is trivial:

$$\delta^{ab}\, \delta_{bc} = \delta^{a}{}_{c}.$$

So this is written tautologically. What is $\delta_{ab}$?

$\delta_{ab}$ is just the unit matrix.

9:51

$\delta^{bc}$ is just the inverse of $\delta_{ab}$.

So its matrix is also the unit matrix.

(We will encounter a somewhat different situation a bit later.)

And $\delta^{a}{}_{c}$ is

the Kronecker symbol.

So it is also just the unit matrix.

10:19

So using these tensors, we can map lower and

upper indices to each other.

That is the reason we need them.

Hence, the norm can be written in many different ways:

$V_a V^a$, $\delta_{ab} V^a V^b$, etc.

$V^b V_b$: these are different ways of writing the same thing.

And they all literally mean $(V^1)^2 + (V^2)^2$.

And notice that, due to these relations, the map

between upper and lower indices is trivial.

So if we have $V^a$, its components are $V^1$, $V^2$.

And if we have the vector $V_a$, its components are $V_1$ and $V_2$.

Due to these relations one can observe that $V_1$

is just equal to $V^1$, and $V_2$ is just equal to $V^2$.

So the difference between upper and

lower indices in this case is tautological, and

we just keep it to stress that $V^a$ transforms as a column,

while $V_a$ transforms as a row.

It means that the column transforms according to the matrix $M$,

while the row transforms according to the inverse matrix of $M$.

And the inverse matrix of $M$, in the case of a rotation, is just the transposed matrix.

11:55

So much for the definition of tensors.

And now one can see the reason why tensors are convenient.

For example, suppose we have a tensor with many indices,

say $A^{a}{}_{bc}{}^{de}{}_{f}{}^{g}$.

The order of the indices is important,

because the tensor does not have to be symmetric.

Some of the indices are upper, some of them lower.

Then we can take the product of this tensor with a different tensor,

say $B$, which carries the indices $g$, $a$, $d$ and $b$ in the opposite positions:

$$A^{a}{}_{bc}{}^{de}{}_{f}{}^{g}\; B^{\,b}{}_{gad}.$$

In this product, as one can see, the index $a$ is contracted, so we have a summation over it.

The index $b$ is contracted, so we have a summation over it.

The index $d$ is also contracted, and the index $g$ is contracted.

So the result is some tensor which carries three free indices, $c$, $e$ and $f$.

Because the transformation of each contracted upper index is compensated by the transformation of the matching lower index,

we have something which transforms according to the rule by which tensors with

three indices transform.

So in tensor notation,

all the transformation properties under rotations are obvious.

That is one of the reasons why tensors are convenient.

13:28

Well, another option: if we have a tensor with two indices, $T_{ab}$,

we can multiply it by $V^a W^b$, and $T_{ab} V^a W^b$ is a scalar quantity,

which does not carry any indices.

So this is another example of the same situation.
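That a fully contracted quantity carries no free indices means it should be unchanged by rotations, which can be checked directly. A minimal sketch in plain Python (helper names are mine): rotate $T_{ab}$, $V^a$ and $W^b$ and compare the scalar before and after.

```python
import math

phi = 1.1
M = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

T = [[2.0, -1.0], [0.5, 3.0]]   # an arbitrary two-index tensor
V, W = [1.0, -2.0], [0.3, 4.0]

def rotate_vec(M, V):
    return [sum(M[a][b] * V[b] for b in range(2)) for a in range(2)]

def rotate_tensor(M, T):
    return [[sum(M[a][c] * M[b][d] * T[c][d]
                 for c in range(2) for d in range(2))
             for b in range(2)] for a in range(2)]

def contract(T, V, W):
    """The scalar T_ab V^a W^b: both indices summed, none free."""
    return sum(T[a][b] * V[a] * W[b] for a in range(2) for b in range(2))

s = contract(T, V, W)
s_bar = contract(rotate_tensor(M, T), rotate_vec(M, V), rotate_vec(M, W))
```

The two values agree because each contracted index of $T$ is compensated by the index of the corresponding vector.
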

13:45

And what else?

Using these tensors we can lower and raise indices.

For example, if we have a tensor $T^{a}{}_{bc}$,

we can multiply it by the metric tensor,

and the result, $\delta_{da}\, T^{a}{}_{bc}$, will be a tensor with three lower indices.

And similarly, we can raise indices: for

example, given $T_{abc}$ we can multiply by $\delta^{bd}$, and this will be a tensor with indices $a$, $d$, $c$.

So these are the ways we can map lower and

upper indices to each other.

Perhaps we need to clarify the notation further.

Namely, a tensor with one upper and one lower index

transforms according to the following rule:

$$\bar T^{a}{}_{b} = M^{a}{}_{c}\, (M^{-1})^{d}{}_{b}\, T^{c}{}_{d},$$

and the difference between these two matrices

is that one is the inverse of the other.

Namely,

$$M^{a}{}_{b}\, (M^{-1})^{b}{}_{c} = \delta^{a}{}_{c}.$$

And what does it mean that we have an invariant expression?

It means that if we have a quantity like $T^{a}{}_{b} V^{b}$,

then we have the following transformation rule for this quantity:

$$\bar T^{a}{}_{b}\, \bar V^{b} = M^{a}{}_{c}\, T^{c}{}_{b}\, V^{b}.$$

Why do we have this relation? Because, according to the transformation rules above,

the transformation of the contracted index of $T$

compensates the transformation of the index of $V$.

Only one matrix $M$ remains,

which states that this quantity transforms as a vector with one index.

And all the rest follows.
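In matrix language, for a rotation the inverse is the transpose, so the mixed-index tensor transforms as $M T M^{T}$, and the claim that $T^{a}{}_{b} V^{b}$ transforms as a vector can be checked numerically. A sketch in plain Python (helper names are mine):

```python
import math

phi = 0.4
M = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

T = [[1.0, 2.0], [-3.0, 0.5]]   # an arbitrary mixed-index tensor T^a_b
V = [2.0, -1.0]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# For a rotation the inverse matrix is the transpose, so the
# mixed-index tensor transforms as T_bar = M T M^T.
Mt = [[M[j][i] for j in range(2)] for i in range(2)]
T_bar = [[sum(M[i][k] * T[k][l] * Mt[l][j]
              for k in range(2) for l in range(2))
          for j in range(2)] for i in range(2)]
V_bar = matvec(M, V)

# The contraction T^a_b V^b transforms with a single M, like a vector.
lhs = matvec(T_bar, V_bar)        # bar(T) acting on bar(V)
rhs = matvec(M, matvec(T, V))     # M acting on (T V)
```
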

16:23

And several remarks are in order.

So we have a metric.

It means that if we have, in our space,

two nearby points, $x$ and

$x + dx$, or in index notation, $x^a$ and $x^a + dx^a$,

16:47

then we can define the distance between these

two points according to the following formula:

$$dl^2 = dx^a\, dx_a = \delta_{ab}\, dx^a dx^b = \delta^{ab}\, dx_a dx_b.$$

These are all the same thing.

And as you know, this is just $(dx^1)^2 + (dx^2)^2$.

So, finally, we should stress that,

as we all know, under rotations this bilinear form does not change.

It means that, after a rotation, we have

$$d\bar x^a\, d\bar x_a = dx^a\, dx_a,$$

namely, $(d\bar x^1)^2 + (d\bar x^2)^2$ is equal to $(dx^1)^2 + (dx^2)^2$.

So under rotations the bilinear form does not change.

It means that $\delta_{ab}$, which is the metric tensor,

is a quantity which transforms according to the rule by which a tensor should transform,

$$\bar\delta_{ab} = M_{ac}\, M_{bd}\, \delta_{cd},$$

but the components of the barred tensor coincide with the original components.

So the matrix of $\bar\delta$ is the same as the matrix of $\delta$.

18:34

This is not the case for a generic tensor with two indices or

with many indices.

So this statement just means that $\delta_{ab}$ is an invariant tensor.

Another invariant tensor in two dimensions is the totally antisymmetric tensor:

$\epsilon_{ab}$ is invariant.

In fact, it has the property $\epsilon_{ab} = -\epsilon_{ba}$, so it is antisymmetric.

And if we specify that $\epsilon_{12} = 1$,

then from these properties one obviously finds that $\epsilon_{11} = 0$.

And $\epsilon_{22}$ is also 0, because under the exchange of its indices

it changes sign but is equal to itself, so

it vanishes; and $\epsilon_{21}$ is just $-1$.

And why is it an invariant tensor?

Because, given two vectors $V^a$ and $W^b$,

the corresponding quantity

$\epsilon_{ab} V^a W^b$ is nothing but

the (signed) area of the parallelogram spanned by them.

And after a rotation, not only does the area of the parallelogram not

change, but even the formula expressing it does not change.

19:57

So it means that this tensor is invariant under rotations.
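This invariance can be checked directly: $\epsilon_{ab} V^a W^b$ is the signed area of the parallelogram, and it is unchanged when both vectors are rotated by the same angle. A minimal sketch in plain Python (helper names are mine):

```python
import math

eps = [[0.0, 1.0], [-1.0, 0.0]]   # epsilon_12 = 1, epsilon_21 = -1

def area(V, W):
    """Signed area epsilon_ab V^a W^b of the parallelogram on V, W."""
    return sum(eps[a][b] * V[a] * W[b] for a in range(2) for b in range(2))

def rotate(phi, V):
    c, s = math.cos(phi), math.sin(phi)
    return [c * V[0] - s * V[1], s * V[0] + c * V[1]]

V, W = [2.0, 1.0], [0.5, 3.0]
A = area(V, W)                    # V^1 W^2 - V^2 W^1
A_bar = area(rotate(0.9, V), rotate(0.9, W))
```
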

Similarly, in three dimensions, we have the antisymmetric

tensor $\epsilon_{ijk}$, where $i$, $j$ and $k$ run from 1 to 3.

And this tensor is antisymmetric under

the exchange of any two of its indices, so for example $\epsilon_{ijk} = -\epsilon_{jik}$,

and likewise under the exchange of any other pair of indices, etc.

So, together with $\epsilon_{123} = 1$, it is uniquely fixed by its symmetry properties.

And it is also invariant, because $\epsilon_{ijk} V^i W^j U^k$, for

three vectors which are not collinear,

three non-collinear vectors in three dimensions,

specifies the volume of, how to say,

the parallelepiped,

the solid whose faces are parallelograms.

And under rotations the volume of this parallelepiped does not change, and

the formula expressing it does not change.

The situation is very similar in four dimensions:

in four-dimensional spacetime we also have $\epsilon_{\mu\nu\alpha\beta}$,

the totally antisymmetric tensor.

What else should I say here about tensors?

There is the difference between space and spacetime.

And that I will clarify in a moment.
