Hello, bonjour, mingalarbar, kem che, welcome to the ninth tutorial of Statistical Mechanics: Algorithms and Computations, from the Physics Department of Ecole Normale Supérieure. The Monte Carlo method that we have mastered over the last two and a half months, in classical and quantum systems, has far outgrown its origins. In this week's lecture we already studied time-dependent phenomena and faster-than-the-clock algorithms. Today, Monte Carlo methods are literally everywhere in science and engineering, and the time has come to present at least one application of these methods in a wider context. The application that we have chosen is concerned with optimization methods. For concreteness we consider a problem that is familiar to all of you and to all of us. Do you know what it is? Well, it is the packing problem that builds the logo of our course.

What you see here is a red unit sphere and a bunch of blue unit spheres around it. All of the blue spheres touch the red one, but none of the blue spheres overlap with each other. So the question is: how many blue unit spheres can we place around the red unit sphere?

So, let's find a first solution to this packing problem. What you see here are six blue spheres grouped around a central red sphere in its equatorial plane. Now we add two more layers, one on the top and one on the bottom; in fact we put three more spheres on the top and three more spheres on the bottom. All of these spheres touch each other and touch the red sphere, and we have 3 + 6 + 3: we have found a packing with 12 spheres. As you already saw in the logo, we can distort this packing a little bit and create space around the blue spheres. Let us now find a second solution, not with three layers, but with four layers. We start with one sphere at the bottom, we add five spheres on the second level, then five spheres on the third level, and then a final sphere on top.
Now we have 1 + 5 + 5 + 1 spheres; all of them touch the red sphere, but in this icosahedral packing none of them touches any other. Again, we make a little space, and the question arises whether we have enough space to add a 13th blue sphere to this ensemble of 12 spheres. This is the famous 13-spheres problem, and it was first discussed by Newton and Gregory in 1694. So clearly there are different possible packings, and you understand that a sampling problem is just around the corner. From a wider perspective, the subject of this tutorial is the relationship of the sampling approach, for once, not with integration but with searching and orientation in complicated configuration spaces. Next in this tutorial, Michael will show us how to formulate the packing problem not in terms of spheres on the surface of a sphere, but in terms of disks on the surface of the sphere. Then we will go on to discuss strategies to find solutions that were once so well hidden that, before the advent of Monte Carlo, it took years and decades to find them.

Let us transform the problem a little bit and search for the maximal radius R of 13 blue outer spheres of radius R around a central red unit sphere. If this maximal radius is larger than or equal to 1, then there is enough space for Newton's 13th sphere; otherwise not. As Werner just explained, we can simplify the problem a bit by considering the projections of the 13 blue outer spheres onto disks inside the red sphere. Let us start by considering the simple case of only two blue outer spheres touching each other and the central red sphere, as illustrated here. The centers of the two blue spheres are a distance 2R apart, while the centers of the projection disks are a distance 2r apart. You can see that 2R relates to 2r as (1 + R) relates to 1; you might remember this relation as Thales' theorem.
This gives us a simple relation between the radii of disks touching on a sphere and spheres touching on a sphere: R = 1 / (1/r - 1). Likewise, we can see that the disk centers have to be at least a distance 2r apart, lest the disks overlap. So let us now follow our winning strategy of the last two months: we randomly sample the 13 positions of the blue disks on the surface of the unit sphere. In a first naive approach, we uniformly sample these positions and check for overlaps. Of course, we are only interested in legal configurations, that is, configurations of non-overlapping disks, and we reject in case of an overlap. In Python, this direct-sampling approach can be implemented as shown here. Of course this program works and will eventually give us legal configurations of 13 disks on a sphere. It works very well for small radii r, but it suffers from a huge rejection rate for larger disks. Already for disk radius r = 0.32 the program will generate 10000 illegal configurations before finding a legal one; for disks of radius r = 0.34 it will generate hundreds of thousands of illegal configurations; and for r = 0.36 it will generate more than 10000000 illegal configurations. This value, r = 0.36, corresponds to a sphere radius R = 0.5625, and you will see in a few minutes that this is still far away from the close-packing configuration. The problem of finding legal configurations of disks on a sphere is related to the problem of finding legal configurations of hard disks in a box, which we considered in week 2 of this course. You will remember that in tutorial 2 we discussed the program direct_disks_any.py, which allowed us to first place the points in the plane and only then compute the minimal distance and density. We can adapt this program to our needs by placing the points on the sphere, using the Gaussian random numbers we learned about in week 4 of this course, and then computing the minimal distance. This gives the Python program shown here.
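The course programs themselves are only shown on screen, not in this transcript. As a minimal sketch of the second, rejection-free approach (the names direct_sphere_disks_any, random_point_on_sphere and sphere_radius are illustrative placeholders, not the course code), one might write:

```python
import math
import random

def random_point_on_sphere():
    # Three independent Gaussians, normalized, give a point that is
    # uniformly distributed on the surface of the unit sphere (week 4).
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

def direct_sphere_disks_any(N):
    # Place the N points first, then read off the largest legal disk
    # radius: two disks overlap if their centers are closer than 2r,
    # so r_max is half the minimal pairwise distance between centers.
    points = [random_point_on_sphere() for _ in range(N)]
    min_dist = min(math.dist(p, q) for i, p in enumerate(points)
                                   for q in points[i + 1:])
    return points, min_dist / 2.0

def sphere_radius(r):
    # The Thales relation quoted in the text: R = 1 / (1/r - 1),
    # i.e. R = r / (1 - r); for r = 0.36 this gives R = 0.5625.
    return r / (1.0 - r)
```

Every sampled configuration is legal here, because the disk radius is computed after placing the points instead of being fixed in advance.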
So, is there a smarter way to generate large-radius configurations? Of course there is. Alberto will in a few moments explain how to use a Markov-chain algorithm for this purpose, but before moving on, please take a moment to download, run and modify the programs direct_sphere_disks.py and its alternative version direct_sphere_disks_any.py that we discussed in this section.

Now, after the direct sampling, let's go ahead with the Markov-chain sampling, following our nine-week-old strategy for tackling difficult problems. In Python, this gives markov_sphere_disks.py, where we start from a legal configuration of points on the sphere and propose little moves, which are accepted if there is no overlap, so that during our Markov-chain simulation the disks never touch. This suggests that we should slightly swell the disks at certain times during the simulation, by a percentage gamma of the maximal increase that would keep the configuration legal. In Python this is done in the function resize_disks, shown here, where we can set the percentage gamma of the maximal increase that we wish to use. Alternating long runs of markov_sphere_disks.py with resize_disks allows us to construct a very efficient program, and this is actually a very general optimization strategy. In our example of 13 spheres, we can reach densities that we never dreamt of using direct sampling. The idea of combining long Markov chains with careful resizing is reminiscent of what in metallurgy is called annealing: a metal is cooled down from high temperature very slowly in order to drive out the grain boundaries and other imperfections, and this makes the metal less brittle. On the computer, the name simulated annealing was coined by Kirkpatrick, Gelatt and Vecchi more than 30 years ago, in a very famous paper.
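As a rough sketch of the two ingredients just described (the function names markov_step and resize_disks and all parameters are illustrative, assumed for this transcript, not the course's own code), the Markov move and the swelling step might look like this:

```python
import math
import random

def markov_step(points, r, sigma):
    # Pick one disk, displace its center by a small Gaussian move,
    # push it back onto the unit sphere, and accept the move only if
    # the disk overlaps none of the others (centers at least 2r apart).
    k = random.randint(0, len(points) - 1)
    new = [c + random.gauss(0.0, sigma) for c in points[k]]
    norm = math.sqrt(sum(c * c for c in new))
    new = [c / norm for c in new]
    if all(math.dist(new, p) > 2.0 * r
           for j, p in enumerate(points) if j != k):
        points[k] = new
        return True    # move accepted
    return False       # move rejected: it would create an overlap

def resize_disks(points, r, gamma):
    # Swell the disks by a fraction gamma of the maximal increase that
    # keeps the configuration legal: r may grow at most to half the
    # minimal distance between any two centers.
    r_max = min(math.dist(p, q) for i, p in enumerate(points)
                                for q in points[i + 1:]) / 2.0
    return r + gamma * (r_max - r)
```

By construction, markov_step never accepts an overlapping configuration, so the chain stays legal and resize_disks always finds a strictly positive margin to swell into.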
In our problem of 13 disks on a sphere, each time step is composed of a resizing and 10000 iterations of the Markov chain. After each resizing, the step size of the Markov chain, which is set by the standard deviation sigma of the Gaussians, is automatically adjusted in order to keep the acceptance probability in accordance with the 1/2 rule. We see that the packing density eta slowly increases during a run, until the disks settle into a jammed configuration and the parameter sigma goes to zero. Naturally, there are many inequivalent jammed configurations, and some of them are non-optimal, but they can trap the simulated-annealing routine forever; this is a clear limitation of the method. However, if you run the simulated-annealing routine many times with different random numbers, keeping gamma small, you will observe that in most cases the solution is the highest-density one; this means that the optimal packing configuration has a large basin of attraction. For our problem of 13 disks on a unit sphere, the maximal value of R that we can obtain, and that you will actually obtain in this week's homework, is R = 0.916467... With our method we cannot actually exclude the value R = 1; in fact, this is excluded by a proof given by the mathematicians Schutte and van der Waerden in a famous paper of 1951. Before listening to Vivien (welcome back, Vivien), please download, run and modify the program simulated_annealing.py that we discussed in this section. Please make sure to understand how the Markov chain is implemented, and also the function resize_disks.

Let us now generalize what Alberto has explained to us; this will allow us to illustrate the successes of the simulated-annealing method, but also its limitations. As a first generalization, let us consider not 13 disks on the sphere, but any number N of spheres around the inner sphere.
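Putting the pieces together, the whole annealing loop can be sketched in a compact, self-contained form. This is only an illustration of the structure described above (all function names, parameters and the particular sigma-adjustment factors are assumptions, not the course's simulated_annealing.py):

```python
import math
import random

def min_pair_distance(points):
    # Smallest distance between any two disk centers.
    return min(math.dist(p, q) for i, p in enumerate(points)
                               for q in points[i + 1:])

def legal_start(N, r):
    # Naive direct sampling of a legal starting configuration
    # (fine for small initial r, as discussed earlier).
    while True:
        pts = []
        for _ in range(N):
            x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
            norm = math.sqrt(x * x + y * y + z * z)
            pts.append([x / norm, y / norm, z / norm])
        if min_pair_distance(pts) > 2.0 * r:
            return pts

def simulated_annealing(N=13, r=0.1, gamma=0.1, sigma=0.2,
                        chain_length=1000, n_resizings=50):
    pts = legal_start(N, r)
    for _ in range(n_resizings):
        accepted = 0
        for _ in range(chain_length):
            k = random.randint(0, N - 1)
            new = [c + random.gauss(0.0, sigma) for c in pts[k]]
            norm = math.sqrt(sum(c * c for c in new))
            new = [c / norm for c in new]
            if all(math.dist(new, p) > 2.0 * r
                   for j, p in enumerate(pts) if j != k):
                pts[k] = new
                accepted += 1
        # 1/2 rule: enlarge sigma if more than half of the moves were
        # accepted, shrink it otherwise.
        sigma *= 1.1 if accepted > chain_length // 2 else 0.9
        # Swell the disks by a fraction gamma of the maximal increase.
        r += gamma * (min_pair_distance(pts) / 2.0 - r)
    return r
```

Since the configuration is always legal, the swelling increment is strictly positive, so r increases monotonically until the disks jam and sigma collapses.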
The program simulated_annealing.py can easily be modified, and you obtain the following result for the maximal density eta_max as a function of N. You observe that eta_max grows slowly with N, but it seems that it does not approach the value eta_close_packing = pi / (2 sqrt(3)), which is the maximal density for close packing in two dimensions. However, we would expect to reach this asymptote, since many small disks on a sphere would feel as if they lived in a two-dimensional plane, and would thus adopt the hexagonal ordering. In fact, the method does approach this asymptote, and it can be proven that the convergence is very slow: the ratio between the maximal density eta_max and eta_close_packing behaves as 1 - const / N^(1/3). Let us now make an analogy: we can imagine the jammed configurations as local minima in a vast and complex landscape; the optimal configuration corresponds to the global minimum, which we can call the ground state. In fact, many physical problems can be recast in this description, and in general the method of simulated annealing will work, because if one takes an equilibrium system and sends the temperature to zero, it will adopt the ground state. However, this argument is purely formal, because in practice simulated annealing cannot be run at the very small cooling rate that would be required for this method to always work. The nice behavior we have discussed is in fact limited to simple cases: disks, spheres, ... For instance, in the case of polydisperse disks the method of simulated annealing does not find the optimal configuration, but it still finds good configurations. In real life, simulated annealing must be practiced with care, especially in cases where there are imperfections and disorder, because then there are many metastable states and the system may get stuck in them. If the imperfections are too pronounced, the result of the algorithm is very often a glass rather than a crystal.
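For reference, the two-dimensional close-packing bound quoted above can be evaluated with a few lines (the derivation in the comment is standard geometry, not taken from the course programs):

```python
import math

# In the hexagonal close packing of disks in the plane, each disk of
# radius r sits in a hexagonal cell of area 2 * sqrt(3) * r**2, so the
# covered fraction is pi * r**2 / (2 * sqrt(3) * r**2) = pi / (2 sqrt(3)).
eta_close_packing = math.pi / (2.0 * math.sqrt(3.0))
print(round(eta_close_packing, 4))  # 0.9069
```

So even very good packings of disks on a sphere stay below a density of about 0.907, and the 1 - const / N^(1/3) correction tells us how slowly that bound is approached.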
For the case of polydisperse disks on a sphere, that is to say disks with varying radii, this is what happens: the disorder prevents the system from reaching its optimal configuration and leads it into one of the jammed configurations, whose basins of attraction are much larger than in the monodisperse case. Before continuing, let me grab the object of the week, the flip die, and let's flip faster than the clock.

So, in conclusion, we have studied in this Tutorial 9 one of the many connections between statistical mechanics and the rest of life. You will continue on the subject of packing in this week's homework, but don't forget that there is a connection of sampling not only with integration, as we have studied throughout this course, but also with searching and orientation in complicated spaces. So finally, good luck and have a lot of fun with Homework Session 9, the final homework session of this course. Next week, we will have a lecture on the alpha and the omega of Monte Carlo, and the final tutorial will be devoted to a review and to a big party. And thanks again for your interest, and for the very active part that you are taking in our course.