In this video, I'm going to talk about the different ways that the brain encodes sound location. Sound location is really interesting because, as you've already seen, the ear can't form an image of the location of sound the way the eye can form an image of the location of visual stimuli. So the brain has to figure out where sounds are located. We've already talked about some of the cues that the brain uses for doing that. But what we haven't talked about is what kind of representation the brain forms by synthesizing these cues.

An early model that addressed this question was proposed by Lloyd Jeffress in 1948. He suggested that the brain constructs a representation of sound location in part from interaural timing differences, using a set of delay lines. I touched on these delay lines briefly in an earlier video. The idea is that neurons would receive input from both the left and the right ears, but the input they received would be delayed by a certain amount, based perhaps on how long the axons are that bring the signals together, and those delays would vary across different neurons. This neuron, for example, would receive input with a short delay from the right ear and a longer delay from the left ear. If the input from the right ear is delayed a little bit and the input from the left ear by a larger amount, and if this neuron responds best when its inputs from the two ears arrive coincidentally, that would make the neuron most responsive to sounds located to the left. If the sound is located to the left, it arrives at the left ear first, so the action potentials fired by neurons receiving input from the left ear get a head start to travel down the longer path, whereas the action potentials evoked when the sound reaches the right ear start later but have a shorter distance to go. The shorter neural path on the right compensates for the longer delay in the world for a sound located to the left. So this neuron would respond better to leftward sounds, and this neuron, with the opposite pattern of inputs, would respond better to rightward sounds. Jeffress and subsequent scientists proposed that by having a wide range of different neurons with different delay-line patterns, the brain could form a map of auditory space.
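To make the delay-line idea more concrete, here is a minimal toy sketch in Python. It is only an illustration of the logic just described, not code from any of the studies discussed in this video, and the spike trains, delays, and numbers are all made up. Each "neuron" is modeled as a coincidence detector that counts how often delayed copies of the left-ear and right-ear inputs arrive together; the detector whose internal delays cancel the external interaural time difference responds the most.

import random

def delayed(signal, delay):
    # Shift a spike train later in time by `delay` samples, padding the front with zeros.
    return [0] * delay + signal[:len(signal) - delay]

def coincidence_response(left, right, d_left, d_right):
    # Spike count of one coincidence detector with internal delays d_left and d_right.
    shifted_left = delayed(left, d_left)
    shifted_right = delayed(right, d_right)
    return sum(a * b for a, b in zip(shifted_left, shifted_right))

# Toy stimulus: the sound is to the listener's left, so it reaches the left ear first
# and the right ear's spike train is a delayed copy of the left ear's.
random.seed(0)
left_ear = [1 if random.random() < 0.2 else 0 for _ in range(2000)]
itd = 3                                 # interaural time difference, in samples
right_ear = delayed(left_ear, itd)

# A bank of detectors, each with a different internal delay difference d = d_left - d_right.
responses = {}
for d in range(-5, 6):
    d_left, d_right = max(d, 0), max(-d, 0)
    responses[d] = coincidence_response(left_ear, right_ear, d_left, d_right)

print(max(responses, key=responses.get))   # prints 3, matching the simulated ITD

In this toy version, the best-responding detector is the one whose longer internal delay from the left ear cancels the head start that the leftward sound gave the left ear, which is the sense in which a bank of such detectors could serve as a map of sound location.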
In barn owls, this does seem to occur. Barn owls are nocturnal hunters, they are experts at localizing sound, and their brains do seem to have maps of auditory space. But in other species, this does not appear to be the case. Primates seem to use meters for encoding sound location. I'll tell you a little bit about work done in my laboratory using monkeys; other folks have found similar results using brain imaging experiments in humans.

The kinds of experiments we've done have involved having monkeys listen to sounds presented from an array of different speakers, and we evaluate how neurons respond as a function of sound location. Do individual neurons respond selectively to only a narrow range of locations, or do they respond broadly, with a response pattern that is proportionate to the location of the sound within that broad domain of space? What we find is that neurons generally respond broadly and proportionately.

So here's an example of a response pattern from a particular brain region called the inferior colliculus. The inferior colliculus happens to be next to the superior colliculus that we've talked about in previous videos, but it performs a rather different role. The inferior colliculus is an early auditory structure, situated after the ear but before signals have reached auditory cortex. Neurons in the inferior colliculus respond roughly proportionately to the location of sounds along a particular direction, specifically the axis connecting the two ears. Neurons in the left inferior colliculus respond better when sounds are located to the right and more weakly when sounds are located to the left. Neurons in the inferior colliculus on the right side show the opposite pattern, responding more strongly to sounds located to the left and more weakly to sounds on the right. If you want more information on this, this is the paper that this particular graph comes from. We've also done a similar study in auditory cortex, finding generally similar results. And in humans, Nelly Salminen and her colleagues have tested this using magnetoencephalography and have come to a similar conclusion regarding how the human brain encodes sound location. There are links to these papers at the end of this video.
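To give a concrete, if cartoonish, picture of what a meter means here, the following toy Python sketch models each inferior colliculus as a single channel whose overall firing rate varies monotonically with sound azimuth, with the left channel preferring rightward locations and the right channel preferring leftward ones. The sigmoid shapes, the 20-degree scale factor, and the log-ratio readout are my own illustrative assumptions, not the tuning curves or analyses reported in the papers just mentioned.

import math

def left_ic_rate(azimuth_deg):
    # Toy firing rate of the left inferior colliculus: higher for sounds to the right.
    return 1.0 / (1.0 + math.exp(-azimuth_deg / 20.0))

def right_ic_rate(azimuth_deg):
    # Toy firing rate of the right inferior colliculus: higher for sounds to the left.
    return 1.0 / (1.0 + math.exp(azimuth_deg / 20.0))

# In a meter, the level of activity, not which neurons are active, carries the location,
# so comparing the two sides' overall rates is enough to recover azimuth in this toy model.
for azimuth in (-60, -20, 0, 20, 60):   # negative = left of the listener, positive = right
    left_rate = left_ic_rate(azimuth)
    right_rate = right_ic_rate(azimuth)
    decoded = 20.0 * math.log(left_rate / right_rate)   # inverts the two sigmoids above
    print(f"azimuth {azimuth:>4} deg   left IC {left_rate:.2f}   right IC {right_rate:.2f}   decoded {decoded:+5.0f} deg")

The point of the toy readout is simply that a downstream circuit could recover where a sound is by comparing overall activity levels across the two sides, without any individual neuron being narrowly tuned to a single location.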
So if visual information is encoded in a map and auditory information is encoded in a meter, this raises a really interesting question: how do we integrate what we see with what we hear? Integrating what we see and what we hear is so common that we're not even aware of doing it. We only notice it when we experience illusions like ventriloquism, where we're combining what we see and what we hear in a way that doesn't reflect the underlying physical reality in the world. Combining information from different sensory sources is a very helpful way of making sure that we understand exactly what the physical events and objects around us are. For example, right now you're using visual information, watching the movements of my mouth, to help you understand what I'm saying. But being able to combine visual and auditory information requires the brain to have a mechanism for matching stimuli together in space, to ensure that the correct aspects of the visual scene are associated with the relevant aspects of the auditory scene. If the two systems are using different coding formats, this can pose a real problem.

So the next experiment I'm going to tell you about involves trying to figure out how the brain might resolve this. Once again, I'll turn to my favorite brain area, the superior colliculus. Remember that I mentioned earlier that the superior colliculus contains both visual and auditory signals, so it's a really interesting place to look to see how these two types of sensory information are combined with each other. We did an experiment that investigated the nature of the representations of visual and auditory space in this structure. What we found was that visual information is indeed encoded in a map, as was previously thought, but that auditory information is encoded instead using a meter, very similar to its inputs but very different from the type of code employed by the same neurons in the same brain area to encode visual information.

This slide illustrates our finding schematically. Imagine that there's a visual stimulus at this particular location in space. This is a schematic depiction of the activity pattern across the entire population of neurons in the superior colliculus. What you can see here is that neurons whose receptive fields correspond to this particular location in space are quite active, and neurons whose receptive fields are located elsewhere are not very active. So you get a sort of hill of activity at a particular location in this population of neurons, a location that corresponds to the location of the visual stimulus. If the visual stimulus is located somewhere else, like here, you again get a hill of activity, but at a different location in the superior colliculus. And if the stimulus is located over here, you get a hill of activity at yet a third location.

Now, when we did the same experiment but using sounds instead of visual stimuli, we got a quite different pattern of activity. If we presented a sound at this location, we got a low level of activity across the superior colliculus. If we presented a sound at that location, we got a higher level of activity. And if the sound was over here, we got a still higher level of activity. So the overall level of activity co-varied with the location of the sound. That's exactly what we've been talking about as a meter representation: the level of activity indicates where something is.

These findings show that in the monkey's superior colliculus there are two different representations, one for visual stimuli, in the form of a map, and one for auditory stimuli, in the form of a meter. These are overlapping populations of neurons, so the same neuron can have very different response patterns depending on whether the stimulus is visual or auditory. What we think happens next is that these two forms of representation are both fed through the kind of transformation scheme that I talked about in an earlier video, and that this transformation scheme can operate on either type of input, on what you might call digital information or analog information, creating an output command in the form of a meter to control the movements of the eyes. So this is one way that visual and auditory information can be combined with each other to accomplish a common goal, namely causing the eyes to move to look at some stimulus of interest in the environment.

In the next video, we will talk about another really interesting difference between how visual and auditory information are encoded, and that is the reference frame that is used to define these locations. The reference frame is an important aspect of spatial coding in a variety of different contexts.
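To close with one more toy illustration, here is a short Python sketch of the general point made above, that a map and a meter can both be read out into the same kind of output command, in this case a desired eye-movement direction. The Gaussian hill of activity, the preferred locations, the linear meter, and the two readout rules are all illustrative assumptions of mine; this is not the transformation scheme from the earlier video or anything measured in these experiments.

import math

PREFERRED = list(range(-90, 91, 10))   # preferred locations of the "map" neurons, in degrees

def visual_map_activity(target_deg):
    # Map code: a hill of activity centered on the visual target's location.
    return [math.exp(-((p - target_deg) ** 2) / (2 * 15.0 ** 2)) for p in PREFERRED]

def auditory_meter_activity(target_deg):
    # Meter code: a single overall activity level that grows for more rightward locations.
    return 0.5 + target_deg / 180.0    # 0 at -90 degrees, 1 at +90 degrees

def command_from_map(activity):
    # Population-vector readout: activity-weighted average of the neurons' preferred locations.
    return sum(a * p for a, p in zip(activity, PREFERRED)) / sum(activity)

def command_from_meter(level):
    # Invert the toy rate-to-location mapping used above.
    return (level - 0.5) * 180.0

target = 30.0
print(command_from_map(visual_map_activity(target)))        # roughly 30: command from the visual map
print(command_from_meter(auditory_meter_activity(target)))  # roughly 30: same command from the auditory meter

Both readouts arrive at essentially the same command for the same target location, which is all the downstream eye-movement circuitry would need, even though the two input formats are very different.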