The fourth and final issue we're going to talk about on performance evaluation is process versus outcome. The first point to add here is that firms, and individuals, ought to consider a broader set of objectives than they typically do. Organizations usually care not just about what happens but about how it happens, how a person goes about their job. Perhaps most importantly: along the way to producing some outcome, what impact did that employee have on other employees? Can you fold that into the performance evaluation as well? This highlights that we care about more things than what is traditionally measured, and one prescription that follows is simply to measure more things.

An example of why: research by Bond, Carlson, and Keeney found that people consider too few objectives. In negotiation and decision-making settings, if you ask people what they're trying to accomplish, they'll list a few objectives. But when the researchers collected answers from a large group and then shared the full set back, people would say, oh yeah, I forgot that one, and add it in. In the end, people had listed on their own only about half of all the objectives they ultimately recognized as relevant. Left to our own devices, we're a little too casual, a little too informal, focusing on a few narrow issues instead of a broad set. The same tendency can lead firms to focus too narrowly.

An example of a firm that recognized this: Dell Computers in the early 2000s, famously hard-charging, famously results-oriented, changed its performance evaluations from 100% results-based to 50% results-based, what an employee accomplished, and 50% how he or she accomplished it. They were judging not only what the person did but also their impact on other people, because they had had too much experience with managers running over people to hit the numbers and get the bonus, when in fact the firm cared about things beyond the numbers being measured.

In general, the more uncertainty there is in an environment, the less control an employee has over exact outcomes, and the more a firm should emphasize process over outcome. The more noise, the less control the employee has, the more they should be evaluated on process and not on outcome.
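To make that weighting principle concrete, here is a minimal sketch in Python. This is my own illustration, not Dell's formula, and the scores and variance numbers are made up: the idea is to weight the outcome component by its reliability, the share of outcome variance the employee actually drives, and put the remaining weight on process.

```python
def composite_score(outcome, process, signal_var, noise_var):
    """Blend outcome and process ratings (both on a 0-100 scale).

    Reliability = share of the outcome measure's variance driven by
    the employee (signal) rather than by luck (noise). The noisier
    the environment, the less weight the outcome component gets.
    """
    w_outcome = signal_var / (signal_var + noise_var)
    return w_outcome * outcome + (1 - w_outcome) * process

# Stable environment: outcomes mostly reflect the employee.
print(composite_score(outcome=90, process=60, signal_var=9, noise_var=1))  # 87.0
# Noisy environment: identical ratings, far less weight on the outcome.
print(composite_score(outcome=90, process=60, signal_var=1, noise_var=9))  # 63.0
```

On this reading, Dell's 50/50 split corresponds to treating signal and noise as roughly equal in size. The general point is the direction: the noisier the outcome, the less evaluative weight it should carry.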
One way you can go about this is to use analytics to figure out which processes tend to produce the desired outcomes. What you're looking for is the more fundamental driver of the outcome. It may be that you're measuring only the last step in the process when in fact there are important intermediate steps that are more fundamental, and those might provide additional performance measures.

The sports analytics world gives us an illustration. In hockey, for a long time teams, and players, were evaluated based on goals. If you were trying to figure out whether a team was good or poor, you'd look at the number of goals it had been scoring. If you wanted to evaluate whether a player was strong or weak, you'd look at his contribution to the goals scored while he was on the ice. And that's fine, it's related, it's important. But can you do better? It turns out there aren't many goals scored in hockey, and sometimes a goal happens because the puck hit the post and bounced left, while another doesn't happen because it hit the post and bounced right. Pucks tip off skates; all kinds of crazy things happen. There is a lot of noise between what a player controls and what actually happens, and whenever there's that kind of noise you want to be careful about how much weight you put on the measure. It's not giving you a very reliable signal of the true effort or true ability underneath it.

So what did they do? They determined through analytics that a better predictor was not goals but shots, what they call shots on goal, shots directed at the net. That was a better predictor, a more reliable measure. One way to think about it is that it's more persistent: a player or a team that looks good on shots in one period is more likely to look good on shots in the next period than a player who looks good on goals is to look good on goals the next period. It's a more persistent, more fundamental measure.

You can take that even further. They subsequently realized that what really matters is possession, not even shots. There's again so much noise in shots that the best measure they could find of a team's or a player's contribution is possession. The teams that keep the puck are the ones most likely to shoot, and the ones most likely to shoot are most likely to score goals. But they had to figure that out working backwards, looking for ever more fundamental measures. You can think of these as process measures: they're getting away from the noisy, rare outcome measure, the goal, toward the more fundamental, more reliable process measure, possession.

What about non-hockey applications, you might reasonably ask? They can be hard to come up with, and I'd push you to think about what this means for your own organization, but one conversation I've had along these lines is with sales organizations. Can they come up with more fundamental measures than the traditional dollars booked? Can you, for example, consider the process that leads to those dollars booked? Is it the number of bids a salesperson gets her organization into? Or, before that, is it the relationship building she does in order to get the bids, in order to book the dollars? Or maybe it's earlier still, an earlier process yet: the number of contacts generated. We don't know exactly which of these would be most persistent; it will vary, of course, by organization and by industry. The idea is to take data to the problem and determine where in the process you might start adding assessments, to get away from these relatively rare, noisy outcome measures, or at the very least to supplement them with more fundamental drivers earlier in the process.
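Here is a minimal sketch of what that persistence test might look like in practice. The simulation is my own toy illustration, not the hockey analysts' actual model: give every player a latent skill, generate a rare metric (goals) and a frequent metric (shots) for two seasons, and check which metric correlates better with itself across seasons.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players = 500

# Latent skill: the unobserved ability every metric is trying to reveal.
skill = rng.normal(0.0, 1.0, n_players)

def simulate_season(skill, base_rate, rng):
    """One season of a count metric driven by skill plus luck.
    A higher base_rate means more events, so the luck averages out more."""
    return rng.poisson(base_rate * np.exp(0.3 * skill))

# Goals are rare (about 5 per season here); shots are ~10x more frequent.
goals_y1 = simulate_season(skill, 5, rng)
goals_y2 = simulate_season(skill, 5, rng)
shots_y1 = simulate_season(skill, 50, rng)
shots_y2 = simulate_season(skill, 50, rng)

def persistence(year1, year2):
    """Period-to-period correlation: how well the metric in one
    season predicts the same metric in the next."""
    return np.corrcoef(year1, year2)[0, 1]

print(f"goals persistence: {persistence(goals_y1, goals_y2):.2f}")  # noticeably lower
print(f"shots persistence: {persistence(shots_y1, shots_y2):.2f}")  # noticeably higher
```

The same split-sample check works for a sales funnel: compute the period-to-period correlation of contacts, bids, and dollars booked, and give more evaluative weight to whichever stage proves most persistent in your own data.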
I want to end this section with a quote about Shane Battier, another sports example, another Michael Lewis example. This comes from an article written in The New York Times Magazine a few years ago, when Battier was playing for the Houston Rockets, and the Rockets were famously heavy users of data analytics. Battier was famously a player who, despite not having flashy stats or obvious skills, hung around the NBA. Teams found him valuable, and he became even more valuable when he went to the Rockets.

They started looking at process; they started coaching him: here's something you can do differently that's going to influence the game. If you defend a player this way, you're going to decrease the likelihood that he makes the shot. If you play offense this way, if you attack the post this way, you're more likely to score. It's always likelihood with the Rockets, always likelihood in sports analytics. And it freed Battier up to focus on what he knew was right, and not worry so much about whether it happened to turn out well on any particular trip down the court or happened to turn out badly on another. He could focus on the process. The article put it this way: knowing the odds, Battier can pursue an inherently uncertain strategy with total certainty. He can devote himself to a process and disregard the outcome of any given encounter. This is critical, because in basketball, as in everything else, luck plays a role, and Battier cannot afford to let it distract him.
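The logic in that quote can be made concrete with a toy simulation. This is my own illustration, and the probabilities are assumptions, not the Rockets' numbers: a defensive choice that shaves a few points off an opponent's shooting percentage is indistinguishable from luck on any one possession, but unmistakable over a season.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed for illustration: the "right" process forces a tougher shot,
# trimming the opponent's make probability by a few percentage points.
p_ordinary = 0.48  # make probability against ordinary defense
p_process = 0.44   # make probability against the process-driven defense

# On any single possession, the right choice still gets scored on 44% of
# the time, so one outcome says almost nothing about the decision.
# Over a season's worth of possessions, the process edge dominates the luck.
n = 2000
allowed_ordinary = 2 * (rng.random(n) < p_ordinary).sum()
allowed_process = 2 * (rng.random(n) < p_process).sum()
print(f"points allowed, ordinary defense: {allowed_ordinary}")
print(f"points allowed, process defense:  {allowed_process}")
```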