We will go over the logic of Dijkstra's algorithm without writing code. If you are an advanced student and already know the algorithm, you can skip to the next lecture. The basic plan is to start from node I and progressively traverse a sequence of nodes. Whenever the method chooses the next node to traverse to, it picks the node for which the total weight of the path to that node is the lowest.

In the beginning, the algorithm is on the start node I. The distance from I to I is 0, and the distance to all other nodes is infinity, because the algorithm has not discovered them yet. A second table, called a priority queue, initially contains exactly the same values as the distance row of the first table.

The algorithm starts by processing node I, the source node. That means it finds the nodes it can reach from I, namely F and J, and notes their respective total weights, 5 for F and 15 for J, in the distance row. Then it marks I as done; we have colored node I green because it has been processed. Next it looks at row d to find the least distance, which is 5. The corresponding vertex is F, so the method traverses to F next.

Now the algorithm is on node F and has determined that, out of its possible destinations E, G, and J, J is the least expensive. The total weight of the path to J is 10: 5 from I to F, plus 5 from F to J. The diagram shows that the priority queue is now shorter, because it has popped out the already processed node, I.

At step three, we are processing node J but face the following situation. We could go back to F from J, but that would cost 10 plus 15, that is 25. 25 is worse than the cost of the current path to F, which is 5 directly from I. Thus we do not go from J to F, and we do not update the distances shown. The other option is to go from J to G, which incurs a cost of 10 plus 5, that is 15. This does not improve the current cost of reaching G through F, which is already 15, so we do not update the distance for G either.
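The walkthrough above can be sketched in code, even though the lecture itself stays code-free. This is a minimal Python sketch using a heap as the priority queue; the graph below uses the edge weights mentioned in the lecture, while the weights for F to E and E to B are not stated and are assumed here purely for illustration.

```python
import heapq

def dijkstra(graph, source):
    # Distance table: 0 for the source, infinity for undiscovered nodes.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    prev = {}                       # predecessor of each node on its best path
    pq = [(0, source)]              # priority queue of (distance, node) pairs
    done = set()                    # nodes already processed (marked "green")
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue                # stale queue entry; u was already processed
        done.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:     # a cheaper path to v: update the table
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# Edges from the lecture; F->E and E->B weights are assumed.
graph = {
    "I": [("F", 5), ("J", 15)],
    "F": [("E", 20), ("G", 10), ("J", 5)],
    "J": [("F", 15), ("G", 5)],
    "G": [("C", 20), ("D", 10)],
    "E": [("B", 10)],
    "B": [], "C": [], "D": [],
    "H": [],                        # H has no incoming edge, so it stays at infinity
}
dist, prev = dijkstra(graph, "I")
print(dist["F"], dist["J"], dist["G"])   # 5 10 15, matching the walkthrough
```

Note how processing J changes nothing: relaxing J's edges to F and G fails to beat the distances already in the table, exactly as described above.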
So at this point, we see that although J was processed, it had no impact on the traversal. We consult the distance row again and find that the next node to expand is G, which is reached through F. Continuing as before, G is processed. It opens up the possibility of diverging to C, at a cost of 35, or to D, at a cost of 25. But now we have an issue: there are two competing nodes, E coming from F and D coming from G, that are both expansion candidates. At this point the algorithm can only make a random choice, because it has no other information. Let's say we make an arbitrary choice and expand to E next. In an optional video, we will see how we can use additional information to make a more informed decision.

After expanding to E, we find that we have reached the node B, so we are done. The other choice, that is, going to D from G, costs less than the path to B, but that no longer matters, because the destination has been reached. Just for the sake of completeness: if we let the algorithm continue to operate, it would terminate when all reachable nodes have been reached. We say all reachable nodes, because some nodes, like H, are not reachable; H has no incoming edge. Such a node, as we said before, is called a root node of the graph. In general, a graph can have more than one root node.

Now that we have reached our destination, we need to construct the shortest path. We start by taking the destination B and finding its predecessor, E. Then we go to node E and check its predecessor, which is F. Finally, we find the predecessor of F to obtain I, which is the source node for this task. These nodes can then be stitched together in reverse, building the path I to F, to E, to B, which is highlighted in the figure.

So, how well does this algorithm work for big graphs? Actually, not very well. We often measure the performance of an algorithm in terms of the size of the data.