So far, we have defined the centrality of a node using its degree and its shortest distances to other nodes. Now we introduce a different idea. We would like to say that if you are important and I'm connected to you, then I must be somewhat important too. In other words, my centrality is proportional to the combined centrality of my neighbors. If we write that down mathematically, it looks like the top formula: the centrality of node v_i is the sum of the centralities of its neighbors. We can write that as an equation where lambda is a proportionality constant, and the resulting equation looks exactly like the eigenvector equation we have seen before. Again, you really do not have to understand every detail of how it works. It's enough to know that if we solve that equation, we get the eigenvalues lambda. We take the largest lambda and find the corresponding eigenvector, which gives us the centrality of each node.

Notice the difference between degree centrality and eigenvector centrality on the same graph. The yellow node in the middle has a low degree centrality compared to its neighbors. With eigenvector centrality, however, the node becomes comparably more important, because the centrality of its neighbors boosts its own. In contrast, consider the second highlighted node. It has the same unnormalized degree centrality as the previous node, but because its neighbors are low-centrality nodes, its eigenvector centrality goes down. This is the intended consequence of the eigenvector centrality measure.

Speaking of consequences, eigenvector centrality essentially says that if you know the right people, your importance goes up. Well, that's a somewhat risky proposition. Here is me in a social network, and let's say I'm connected to this somewhat dubious character, and I think it doesn't really matter. What I don't know is that my connection has its own set of connections, and look at who they are.
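The idea of taking the largest eigenvalue and its eigenvector can be sketched in a few lines of Python. The small undirected graph below is made up for illustration (it is not the graph from the slides); the eigenvector of the adjacency matrix's largest eigenvalue, made non-negative and normalized, gives each node's centrality.

```python
import numpy as np

# Small made-up undirected graph as an adjacency matrix (illustrative only).
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

# Solve A x = lambda x; A is symmetric, so eigh works and returns
# eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(A)
x = eigvecs[:, np.argmax(eigvals)]            # eigenvector of the largest lambda

# For a connected graph the leading eigenvector has entries of one sign,
# so taking absolute values and normalizing gives the centrality scores.
centrality = np.abs(x) / np.abs(x).sum()

print(dict(enumerate(centrality.round(3))))
```

Node 2, which has an extra low-degree neighbor, illustrates how the scores reflect more than raw degree: each node's value depends on the values of the nodes it touches.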
So, on the one hand, these shady characters that I am now connected to indirectly do raise my eigenvector centrality, but the connection also has quite a damaging effect on my reputation. My eigenvector centrality almost makes me look like a suspect. Think about that.

Now, Brin and Page, the founders of Google, had an interesting way to think about eigenvector centrality. They thought about a surfer. Well, no, not that kind of surfer. This kind of surfer: the kind that surfs the web. This is a special web surfer called a random surfer, and here is what he does. He picks a web page and looks at its links. Then he chooses a random link, goes to that page, and does the same thing again. Except sometimes he kind of gets bored and jumps to a totally new page. How often does he make this random jump? Let's say there is always a 15% chance that he will, or more generally, a probability of 1 minus alpha. Page and Brin's insight was that this surfer will visit a page with a high probability if the page is central. They came up with a measure for this stationary probability of a page being visited by the random surfer. They did not call it centrality; they called it PageRank. Let's see a YouTube video to understand how PageRank behaves.

Okay, now we are talking about the World Wide Web, which is a huge graph. How do we compute the PageRank of such a graph? The answer is iteratively, using a method called power iteration. This method can be used because we are looking for the largest eigenvalue. Let me show you. Let's take a small graph and initialize the still-unknown PageRank to zero for all nodes. Now, the PageRank of A combines a 0.15 chance that the surfer was already at A with a 0.85 chance that he came to A from B or from C. However, at this point, the PageRanks of B and C are still zero. So, in the first iteration, the PageRank of A is 0.15.
Now for B: the surfer can only come to B from A, but B cannot claim all of A's PageRank, because from A there is only a half chance that the surfer goes to B; he could just as well go from A to C. This half of A's PageRank, weighted by 0.85, plus the 0.15 chance that the surfer is already at B, makes B's PageRank 0.15 + 0.85 × (0.15 / 2) ≈ 0.21. After a few rounds of this computation, say between 50 and 100 iterations, the values will converge. This computation has been demonstrated to perform well in the MapReduce framework. In module four, we'll talk about another way of computing this metric.
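The walkthrough above can be sketched as a short power-iteration loop. The three-node graph is reconstructed from the lecture's numbers (assumed edges: A links to B and C, and B and C each link back to A), with damping 0.85 and the same in-place update order A, B, C, so the first pass reproduces PR(A) = 0.15 and PR(B) ≈ 0.21.

```python
# Edges reconstructed from the walkthrough (an assumption):
# A -> B, A -> C, B -> A, C -> A.
out_links = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
alpha = 0.85                       # 1 - alpha = 0.15 chance of a random jump

pr = {v: 0.0 for v in out_links}   # initialize all PageRanks to zero

for _ in range(100):               # 50-100 iterations are usually enough
    for v in out_links:            # update in place, in the lecture's order
        # Each in-neighbor u contributes its PageRank divided by its out-degree.
        incoming = sum(pr[u] / len(out_links[u])
                       for u in out_links if v in out_links[u])
        pr[v] = (1 - alpha) + alpha * incoming

print({v: round(p, 3) for v, p in pr.items()})
```

Note that this form of the update is unnormalized: the converged values sum to the number of nodes, and dividing by that count recovers the surfer's stationary probability distribution.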