So I want to discuss with you the fundamentals of dendritic cable theory. But before that, I want to highlight early theoretical ideas about the neuron as a computational device. The most influential early paper in this field, trying to connect the neuron to computation, the neuron as a microchip that computes, is the work by McCulloch and Pitts, what we call today the McCulloch and Pitts neuron. They opened the field with a very influential paper with a very beautiful name: "A logical calculus of the ideas immanent in nervous activity," from 1943. The ideas of this particular paper were, by the way, very influential for the development of computer science, and less influential in neuroscience until recently. The paper was inspired by two properties of the neuron, which you already know. One is the all-or-none nature of the neuron, the fact that a neuron either fires a spike or not. The other is the fact that neurons receive two types of synapses: the excitatory synapse, E, and the inhibitory synapse, I. You already know both. So you basically have all the ingredients that inspired the heavily mathematical paper of McCulloch and Pitts, but let me give you a very brief summary of what they said. McCulloch and Pitts said the following. Let's look at a point neuron. There is no structure, no dendrites, no axon, just a point neuron. This point neuron has an all-or-none property: either it fires or it does not fire. And let's say that this neuron receives synapses: excitatory synapse number 1, excitatory synapse number 2, excitatory synapse number 3, and an inhibitory synapse.
And let's also assume, and it is an assumption we can change, that a single active excitatory synapse, any one of them, is sufficient to fire a spike. We know that this is typically not the case, but let's say that one synapse depolarizes the membrane enough to reach the threshold for firing of this cell. And let's assume that the inhibitory synapse vetoes the activity of the cell. So, inhibition vetoes cell firing, and excitation attempts to fire the cell. These are all the assumptions needed. Okay? So, assuming that a single excitatory synapse E can reach spike threshold, and that a single inhibitory synapse I can veto all the synaptic activity so that no spike is fired, McCulloch and Pitts realized that you can now write a logical sentence as follows. You can say that the cell will generate an output, call it state 1. The output 1 is generated if e1 or e2 or e3 is active, and not i. So either the first, the second, or the third excitatory synapse is active, but not the inhibitory synapse. Here is a logical statement. They wanted to look at the neuron as a logical device with a threshold: an output is generated only if this logical statement is satisfied. So now you see the big jump, from looking at a neuron that produces spikes and receives excitatory and inhibitory synapses, to the neuron as a logical device. And then, indeed, in this very elaborate mathematical paper, they showed that using this very simple McCulloch and Pitts neuron, with excitation, inhibition, and the all-or-none property, you can really build a computing machine, a universal computing machine that can compute very sophisticated computations using connected networks of such McCulloch and Pitts neurons.
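To make this concrete, here is a minimal sketch of the McCulloch and Pitts neuron described above (my own illustration, not code from the lecture): binary inputs, any active inhibitory input vetoes firing, and otherwise the cell fires if enough excitatory inputs are active. With threshold 1 it computes exactly the logical statement (e1 OR e2 OR e3) AND NOT i; raising the threshold turns the same unit into an AND gate, which hints at how networks of such units can build up arbitrary logic.

```python
def mp_neuron(excitatory, inhibitory, threshold=1):
    """A McCulloch-Pitts point neuron on binary (0/1) inputs.

    Returns 1 (spike) or 0 (no spike). Any active inhibitory
    input vetoes firing; otherwise the cell fires if the number
    of active excitatory inputs reaches the threshold.
    """
    if any(inhibitory):                      # inhibition vetoes cell firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# The lecture's example: output 1 iff (e1 OR e2 OR e3) AND NOT i
print(mp_neuron([1, 0, 0], [0]))             # one excitatory active -> 1
print(mp_neuron([1, 1, 0], [1]))             # inhibition active, veto -> 0
print(mp_neuron([0, 0, 0], [0]))             # nothing active -> 0

# The same unit with threshold 2 computes e1 AND e2:
print(mp_neuron([1, 1], [], threshold=2))    # -> 1
print(mp_neuron([1, 0], [], threshold=2))    # -> 0
```

Because OR, AND, and NOT (via the veto) are all available, wiring such units together in networks is enough for the universal computation McCulloch and Pitts described.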
And this, of course, had a very strong influence on the generation of the modern digital computer, which also has some aspects of zero-one states and of summing inputs coming from other zero-one gates, and so on. So this is just to show you that you can think of a neuron as a computational element, and this, I think, is one of the first direct ideas implemented in a paper showing that you can be inspired by the neuron to build computing machines. By the way, it's interesting, and I don't know if many of you know this, that a lot of the mathematics behind computers was influenced by neurons. The modern digital computing machines were very much influenced by this paper. So the brain already helped, back then, to inspire another machine: the digital computer. Of course, today the computers help us a lot to understand the brain. So there is this interesting crosstalk: ideas inspired by a machine that we already have, our brain, our neurons, our spikes, used to build another machine, the digital computer. We shall discuss this cross-fertilization later, when we talk in the next lesson about the Blue Brain Project, where the digital computer is used heavily to understand the brain. This, I think, is basically the first example of looking at neurons as computing devices. But of course, we know that neurons are not point neurons. We know that the neuron is a distributed system: it has dendrites, it has an axon, it has a cell body. So the question is, of course, what can a distributed electrical system, not a point system, not an isopotential system, compute? Does the fact that you have a distributed system, with dendrites and axons, add, at least in theory, to the computational capability of the nervous system, or of a single neuron? This is the question.
What are the computational implications of having dendritic neurons, of having a distributed system? Before I go into the development of ideas based on the mathematics of dendrites, I want to repeat something that I already mentioned in the first lesson: the importance of mathematics in understanding complicated systems. So again, I want to discuss why we model mathematically, why we use mathematics as a tool to model complicated systems like the brain. This is again Lord Kelvin, and Lord Kelvin said something very important: "I am never content until I have constructed a mathematical model of what I am studying. If I succeed in making one, I understand; otherwise I do not." The basic claim here is that if you want to describe a physical system like the brain, or any other complicated physical system, words and graphs and data collection are not enough. You basically have to approach it with a very rigorous, very systematic mathematical approach in order to compactly describe the system. I already showed you the Hodgkin-Huxley model, and I hope you are convinced that this model really made a conceptual jump in understanding the spike. The spike was always there; people recorded the spike, including Hodgkin and Huxley. But only after they wrote the mathematics of the spike could we very clearly say that we understand the spike. So let me summarize three highlights, three basic reasons for modeling mathematically, because there are levels to why we use mathematics. Okay. So let me say a few words about why we model mathematically in general, and in particular why we take details into account. We will discuss the details later, but I want to discuss the general notion of modeling, or theory, and also the issue of details. So what are, in general, the three reasons for doing mathematical modeling of complex systems?
The first thing, and the clearest, is that you want somehow to interpret your experimental findings. You find something experimentally and you want to have some interpretation of this experiment. And not only interpretation; you also want to have some predictions. So you take all that you have, as in the case of Hodgkin and Huxley for the squid axon, a lot of experimental results, and you want to give these experiments some meaning, some interpretation, in order to cross from the microscopic details that you measured, in the case of Hodgkin and Huxley the ions, the membrane conductances, and so on, to the macroscopic phenomenon of the spike. So you want this interpretation of how the details, how the experiments, explain the phenomenon. And not only that: what kind of predictions can you make using the model that you build? The purpose of a good model is not only to replicate the experiments in a compact way, but also to provide some predictions. And indeed, Hodgkin and Huxley did predict the refractory period. They did predict, although we did not discuss it, the velocity of the spike along the axon. They predicted aspects that did not go directly into building the model. So interpretation and prediction are a very important part of a good model. Another part of a good model is the issue of finding the key biophysical parameters. The emphasis is on the word "key," because there are thousands of parameters that affect the phenomenon, but there are key parameters, the major parameters. These are the parameters such that, if you have them in your model, the model behaves appropriately and you do not need more parameters to explain the gross phenomenon, the spike in the case of the Hodgkin-Huxley model. So Hodgkin and Huxley found the key parameters that shape the spike.
These are the membrane conductance for sodium and the membrane conductance for potassium, together with the kinetic, voltage-dependent gating parameters m, n and h, and so on. These are the key parameters, and they are enough to describe the spike. This enabled Hodgkin and Huxley to compactly describe the phenomenon, because they had several key parameters rather than all the parameters, so to speak. So the role of a good model is to compactly describe the phenomenon, and for this you need the key parameters. The third level of a good model is that it enables you to cross, so to speak, to another level of description. In this case, you want to cross from the biophysical level of parameters and make the conceptual jump of linking, in the case of the neuron, the biophysics to computation, to function. This is a big conceptual jump, like the one McCulloch and Pitts made. They took the synapses and the all-or-none property of the spike, made a conceptual jump, and used these ingredients, the spike and the synapses, for a computation, a logical computation in their case. So a good model enables you to think about how these, so to speak, low-level parameters, the spikes, the synapses, the conductances, and so on, can be used for computation. This is the third important aspect of a good theory: it enables you to link levels, from biophysics to computation. Next I want to show you how Rall, like McCulloch and Pitts, took some very physical parameters, in this case including the dendrites, and used these notions, dendrites, synapses, and so on, to suggest computations that can be performed by the extended dendritic tree.
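To show how few key parameters are really involved, here is a minimal sketch (my own illustration, not from the lecture) of the Hodgkin-Huxley membrane current balance. The only quantities that appear are the maximal conductances for sodium and potassium, a small leak, the reversal potentials, and the gating variables m, h and n; the numerical constants are the standard textbook squid-axon values, and the resting gating values passed in below are approximate.

```python
# Sketch of the Hodgkin-Huxley current balance: a handful of key
# parameters (g_Na, g_K, g_L and the gating states m, h, n) determine
# how the membrane voltage changes. Classic squid-axon constants.

C_M = 1.0                    # membrane capacitance, uF/cm^2
G_NA, E_NA = 120.0, 50.0     # max Na+ conductance (mS/cm^2), reversal (mV)
G_K,  E_K  = 36.0, -77.0     # max K+ conductance, reversal
G_L,  E_L  = 0.3, -54.4      # leak conductance, reversal

def dv_dt(v, m, h, n, i_ext=0.0):
    """Rate of change of membrane voltage (mV/ms) for given gating states."""
    i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
    i_k  = G_K  * n**4     * (v - E_K)    # potassium current
    i_l  = G_L             * (v - E_L)    # leak current
    return (i_ext - i_na - i_k - i_l) / C_M

# Near rest (v about -65 mV, approximate resting gating values),
# the currents nearly cancel, so dV/dt is small:
print(dv_dt(-65.0, m=0.05, h=0.6, n=0.32))

# An external current drives the voltage up toward threshold:
print(dv_dt(-65.0, m=0.05, h=0.6, n=0.32, i_ext=10.0))
```

In the full model, m, h and n evolve in time according to their own voltage-dependent kinetics; the point here is only that the equation is built entirely from the key parameters named above.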
But before I go into the dendrites and into Rall's cable theory for dendrites, I want to show you some thoughts of Ramón y Cajal about theorists. You already know that I appreciate this great anatomist Ramón y Cajal very much; I have mentioned him several times in my talks. This time I want to show you a set of phrases that he used about theorists; he did not like theorists like myself. So this is Ramón y Cajal, whom you have already met. He wrote a really nice little book, and from this book I collected some of his antagonism toward theory, toward theorists like myself. "Theorists are highly cultivated, wonderfully endowed minds," so this is good, "whose wills suffer from a particular form of lethargy. They claim to view things on the grand scale; they live in the clouds. When faced with a difficult problem, they feel the irresistible urge to formulate a theory rather than question nature. The essential thing for them is the beauty of the concept. It matters very little to them whether the concept is based on thin air." And finally, basically, the theorist, like myself, "is a lazy person masquerading as a diligent one. He unconsciously obeys the law of minimal effort, because it is easier to fashion a theory than to discover a phenomenon." This is beautifully written. We may agree with him or not. It is very hard to agree with him, of course, after the 1905 set of theoretical papers by Einstein, who really made a huge jump, in the very same period, by the way, in which Ramón y Cajal was working. So Ramón y Cajal did not think much of theory, but I am going to show you what the impact of theory is in understanding neuroscience.