A third issue that we want to emphasize in performance evaluation is signal independence. To introduce it, I want to begin with the notion of the wisdom of crowds, which has become very popular in recent years. The name comes from a book by James Surowiecki. He draws on research that has been going on for probably a hundred years, but the fact that Surowiecki wrote this book led many academics to do more research in this area, and it has become a very hot area of research. The basic observation is that the average of a large number of forecasts reliably outperforms the average individual forecast.

The motivating example is the historical county fair, where you might have a large cow in a pen, and everyone who comes to the fair gets to guess the weight of the cow. The fascinating bit is that even though most people are quite wrong in their guesses, the average of their guesses is remarkably accurate. This has been shown at county fairs, but also in many other domains. The reason it works is that the idiosyncratic errors offset each other: I might guess a little high, you might guess a little low, and if there are enough of us making these guesses, the average tends to get very close to the truth. This result has now been studied and replicated in many domains, and it provides a way to get closer to the truth by gathering more signals: getting more people involved, getting more judgments from more people. This is something that some firms do in performance evaluation, and more firms should do more of.

But there is one very important caveat: the value of the crowd critically depends on the independence of the opinions. If, at the county fair, everyone who walked up to the booth talked with each other before making their guesses, the value of the crowd would greatly diminish. Those idiosyncratic errors would become less idiosyncratic; now they would be related to each other. If one person loudly argued that, because of the breed of this cow, or the last cow he saw, it must be a particular weight, he would influence everybody's opinion. Their opinions would no longer be independent, and the value of those independent signals would be washed away. So it is not merely a crowd that you need; you need a crowd that is, to the extent possible, independent of one another.

Independent here means uncorrelated. If the opinions are actually correlated, then the value of each additional opinion quickly diminishes. You may think you have the opinions of 100 people, but if those opinions are highly correlated, you might effectively have the opinions of only 5 or 10. It is striking how quickly you rob a crowd of its value when the opinions are correlated.

Here is a chart of that. It comes from a 1985 study by Bob Clemen and Bob Winkler down at Duke, in which they worked out mathematically the equivalent number of independent experts for a given number of experts and a given degree of correlation among them. Here is what they found. If the correlation is 0, in other words if the experts are perfectly independent, then every expert you add creates that much new value: the equivalent number of independent experts equals the actual number. But once the expert opinions become correlated, even at 0.2, which is a pretty low correlation, you quickly lose the value of adding experts.
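The relationship behind that chart can be written out directly. Here is a standard reconstruction, assuming n unbiased experts with equal variance σ² and a common pairwise correlation ρ; this is the usual derivation consistent with Clemen and Winkler's setup, not a quotation from their paper:

```latex
\[
\operatorname{Var}\!\left(\bar{X}_n\right)
  = \frac{\sigma^{2}\,\bigl(1 + (n-1)\rho\bigr)}{n},
\qquad
n_{\mathrm{eff}} = \frac{n}{1 + (n-1)\rho}.
\]
```

Setting the crowd's variance equal to σ²/n_eff, the variance you would get from averaging n_eff truly independent experts, gives the second expression: the number of independent experts your correlated crowd is actually worth.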
So for example, the chart shows that as you go from one expert to nine with a correlation of 0.2, you never quite get above four. The curve asymptotes there: you have nine judges, but because of that little correlation between them, the effective number of judges is only about three to three and a half. If the correlation is 0.4, it plateaus much closer to two. If the correlation is 0.8, you are not getting much more than one opinion, even though you have nine experts. This should be very sobering to us. We want to push you toward crowds, toward more opinions and more assessments, because that is going to help. But you have to simultaneously try to maintain the independence of those opinions.

In a recent study, Floyd and Zimmerman and colleagues from Zurich found that even when you tell people about the correlation, even when you tell them the exact structure of where the correlation is and how strong it is, people do not properly adjust for it. It is sobering, then, to think about how people deal with correlated opinions when the correlation is not laid out for them explicitly. This is not something people deal with very naturally.

Where does the correlation come from? How can we think about signal independence in terms we are familiar with in organizations? The most obvious source is people having discussed the issue together. When they have had conversations, when they have bounced their opinions off of each other, they no longer have independent opinions. It can also happen if they simply talk to the same people: maybe they don't talk to each other, but they both talk to the same third party, or they share the same friends, or they went to dinner with the same group on different nights. All of these things break down independence; all of these things increase correlation. It even happens when people have the same background: if they are from the same place, if they trained the same way, if they have the same advisors, if they have the same historical experiences. All of these things tend to increase the correlation in their opinions, and therefore all of these things break down independence and decrease the value of adding additional voices to whatever process you are running.

So we push toward a broader sample. We push toward more sources and more assessments, but with a very important caveat: you need to build those assessments as independently as possible, and then design processes to keep them as independent as possible.
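To make those chart numbers concrete, here is a minimal sketch that applies the n_eff formula above to the correlations discussed in the lecture; the function name is our own illustration, not anything from Clemen and Winkler:

```python
# Effective number of independent experts when n expert opinions
# share a common pairwise correlation rho:
#   n_eff = n / (1 + (n - 1) * rho)

def effective_experts(n: int, rho: float) -> float:
    """Independent experts equivalent to n experts with pairwise correlation rho."""
    return n / (1 + (n - 1) * rho)

if __name__ == "__main__":
    for rho in (0.0, 0.2, 0.4, 0.8):
        row = ", ".join(
            f"n={n}: {effective_experts(n, rho):.2f}" for n in (1, 3, 5, 9)
        )
        print(f"rho={rho}: {row}")
    # rho=0.0 -> every added expert counts in full (n_eff = n)
    # rho=0.2 -> nine experts are worth about 3.5 independent ones
    # rho=0.4 -> nine experts are worth about 2.1
    # rho=0.8 -> nine experts are worth about 1.2
```

Note the limit built into the formula: as n grows, n_eff can never exceed 1/ρ, which is why the ρ = 0.2 curve flattens out after a handful of judges no matter how many more experts you add.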