So we know already that, in general, different tests are more or less effective at predicting how a candidate is going to perform on the job. What people analytics allows us to do is go beyond the general and start asking, for this specific job, what do we know about what makes people effective and what doesn't? In particular, with people analytics, what we're usually trying to do with hiring is take people's performance, how they do at the job, and try to understand what drives it.

On the one hand, we look at a variety of performance measures. Both Kate and Martine have talked a little about some of the challenges of using performance appraisals. We know that these are not great, but there is probably some information there, so predicting who is likely to get rated high versus low at least tells us who is going to be seen as fitting in this organization. On top of that, there are other things you might want to predict about who's going to be a high performer. Increasingly, an organization has a lot of objective information: obviously, if it's a sales job, you can think about who's going to have the highest sales. In call centers, you now have huge amounts of data about things like people's average handle time, their absenteeism, their availability. We also have quality assurance scores and customer satisfaction surveys. So in some organizations you have a great deal of information about each individual on a more objective basis. There are other things we might also want to be able to predict about people. Attrition, for example, is a fairly hard measure; it tells you something about who fit and who didn't, and it can be important in its own right. Some organizations spend a lot of money training people. If that's the case, one of the important things you want to think about in your hiring is: is this person going to stay long enough for us to get a return on our investment? You may also be interested in understanding which people are promotable.

So the idea is we say, okay, we want to know who's going to do well on these various measures, and we want to figure out, among the people who are applying, who is going to be the best bet. What you do is use those measures as the outcome variables you're trying to predict, and on the other side you put a series of characteristics of the individual that you know at the time they're applying. Which of these actually predicts performance? One thing you might look at is the resume: their background, their characteristics, and so on. Google has been a pioneer in this area, and here is one thing they found when they did this. Famously, for years they had been asking everybody, from junior to senior people, for college transcripts and looking at their GPAs. When they actually sat down and looked at what predicted performance, they found that once people had been out of college for more than a couple of years, GPA had no value whatsoever as a predictor. And so they said, okay, this is not something we should be screening on. Interestingly, an investment bank apparently did this recently, and in their case they found GPA was predictive of performance, which speaks in part to the value of doing this separately in different organizations, because what predicts performance is going to depend in part on the nature of the role.
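To make the basic setup concrete, here is a minimal sketch of a Google-style check, using entirely synthetic data and hypothetical column names (gpa, test_score, years_out, performance are illustrative, not from any real HR system): regress a performance measure on the things known at application time and see which predictors hold up, including whether GPA still matters once people are a few years out of college.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

df = pd.DataFrame({
    "gpa": rng.normal(3.2, 0.4, n),
    "test_score": rng.normal(50, 10, n),
    "years_out": rng.integers(0, 15, n).astype(float),
})

# Synthetic ground truth: performance depends on the test score, and on GPA
# only for people recently out of college (mirroring the Google finding).
recent = df["years_out"] <= 2
df["performance"] = (
    0.05 * df["test_score"]
    + np.where(recent, 1.0 * df["gpa"], 0.0)
    + rng.normal(0, 1, n)
)

# Regress performance on everything we knew at application time.
X = sm.add_constant(df[["gpa", "test_score", "years_out"]])
print(sm.OLS(df["performance"], X).fit().summary())

# Split by time since graduation: GPA predicts only for recent graduates.
for is_recent, grp in df.groupby(recent):
    label = "recent grads" if is_recent else "out 3+ years"
    print(label, grp["gpa"].corr(grp["performance"]).round(2))
```

The same pattern extends to any predictor you hold at application time; the point of the split is that a predictor can be valuable for one segment of hires and worthless for another.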
Another thing you might look at is test scores. If you run a series of intelligence tests, personality tests, job knowledge tests, and so on, which of those predict who performs well in the job? And then you can also look at interviews. In a structured interview, as I said, rather than just sitting down and getting to know somebody, what you're really trying to do is figure out where they score on various attributes. So you should have a series of questions that are meant to tap into those attributes, where you can rate candidates high, medium, or low on each one. You then have the possibility of going back after a year or two and saying, okay, which of these questions and types of questions actually seem to predict whether or not people do well on the job, and which don't?

Again, there's another nice story that Google shared with the public. They were famous for a series of questions that basically asked people to think outside the box: how many golf balls could you fit in a jumbo jet? How many call boxes are there in Manhattan? I remember when I was applying for consulting jobs about 20 years ago, somebody asked me how many ties are sold in Great Britain in an average year. I just thought, who knows? The idea of these questions is not that you know the answer, but rather that you try to think it through; they can stump you, and the interviewer sees how you respond to a question you haven't thought about before. Can you be creative? Can you structure an answer? All those sorts of things. So there is a nice idea behind them, that we can really see how smart people are. Again, it turned out not to work at all: these questions completely failed to predict performance. And so, on the basis of that, Google has been trying to move away from these questions and persuade people not to ask them.

So you can figure out what kinds of questions work. You can even start to look at which interviewers have done a better job of predicting who's going to be a high performer and who isn't. The idea is that, on the basis of this, you're better able to choose who interviews and even what kinds of tests to use. Some people have gone even further. JetBlue told a very nice story about how they're using these analytics. When they hire flight attendants, one of their big questions was: is it more important that they're friendly or that they're helpful? Because you can have people who smile a lot and say nice things, but will that person actually lift your bag, help you get it up, look out for people who need assistance, and that sort of thing? It really wasn't clear to them which was better. It's not clear to me either; my colleagues will tell you that on neither basis am I likely to get a job at JetBlue. To figure this out, they ran a little experiment. They went and asked their customers to rate their flight attendants: is this person friendly? Are they helpful? They also asked customers to rate their overall experience: are they likely to recommend JetBlue to somebody else? Then they looked at whether the rating of the airline was higher when the flight attendant was friendly or when they were helpful. On this basis they discovered that it's actually helpfulness that was more valuable. That enabled them to further fine-tune their hiring: once they knew what was really important in this job, they could think about how to go out and actually screen for it.
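This is not JetBlue's actual analysis, but here is a minimal sketch of the shape of it, with synthetic data and made-up variable names: put the friendliness and helpfulness ratings into the same regression, so each coefficient reflects its own distinct contribution to the customer's willingness to recommend the airline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

# Friendliness and helpfulness tend to go together, so simulate them as
# correlated; by construction here, helpfulness drives recommendations more.
friendly = rng.normal(0, 1, n)
helpful = 0.5 * friendly + rng.normal(0, 1, n)
recommend = 0.1 * friendly + 0.6 * helpful + rng.normal(0, 1, n)

df = pd.DataFrame({"friendly": friendly, "helpful": helpful,
                   "recommend": recommend})

# Entered alone, friendliness looks predictive, but only because it is
# correlated with helpfulness...
print(sm.OLS(df["recommend"],
             sm.add_constant(df[["friendly"]])).fit().params)

# ...entered together, the coefficients separate the two influences, and
# the larger one suggests which attribute to screen for.
print(sm.OLS(df["recommend"],
             sm.add_constant(df[["friendly", "helpful"]])).fit().params)
```

The two-predictor regression is the key design choice here: it is what lets you say "helpfulness, not friendliness" rather than just "nice crew members get better ratings," which previews the disentangling problem discussed next.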
So the idea, then, is basically to take these predictors and see, based on what we know about how the people in our organization are performing, which of these predictors tell us something about what a person's performance is likely to be. In doing this, there are a few things to bear in mind.

First, and most obviously, you want to be comparing apples with apples. If you're looking at differences in performance, you want to make sure the people are doing the same work, in the same place, at the same level, all of those sorts of things. You really want to hold that constant, because otherwise it's quite possible that what you're seeing is differences in what people are supposed to be doing rather than differences in their attributes driving performance. A pernicious version of this is time on the job, which you want to be very wary of. Generally we expect that once people have been in the job for a moderate period of time, they're going to be better at what they're doing. So are you really comparing people who are new against people who have been in the job a while? Or are you comparing people who are all at the same stage in the same job?

The second thing you want to be concerned about, and I'll get into this in more detail in a minute, is this idea of disentangling influences. Often people vary on multiple different attributes: their education, their experience levels, what it was they were doing, or, as we just saw, friendliness and helpfulness, all of these sorts of things. The challenge is, if you just look at one of these variables, is it that variable that's predicting performance, or is it just highly correlated with something else? You want to disentangle those influences to make sure you really are getting at the attributes that drive performance.

Another thing you want to do is probably just apply a bit of common sense. If you look at any dataset, you're going to see a bunch of different patterns. Some of them are real: you would see them in any similar dataset. Some of them are not. There are various statistical techniques for trying to figure out which are the real ones, but applying a bit of common sense matters too. Does this make sense? Can we figure out why this would be? That's an important thing to do.

The final, more tricky piece is also worth thinking about. What we're doing here is taking the people we hired and seeing how what we knew about them before we hired them predicts how they're performing today. But that's not the same sample as our general applicant pool: among the people we have now, we've already weeded out a bunch. So it's not exactly the same as predicting who, out of the general applicant pool, will perform well. This creates a number of statistical biases that, if you're really serious about this, you need to pay some attention to.
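To see that last selection problem concretely, here is a minimal sketch with purely synthetic numbers, illustrating what is usually called restriction of range: a test score genuinely predicts performance across all applicants, but if we can only observe performance for the people we hired (say, the top third of scorers), the correlation we measure is noticeably attenuated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# In the full applicant pool, the test score genuinely predicts performance.
score = rng.normal(0, 1, n)
performance = 0.5 * score + rng.normal(0, 1, n)
print(f"correlation, all applicants: "
      f"{np.corrcoef(score, performance)[0, 1]:.2f}")

# But performance is only observed for hires, here the top third of scorers,
# and within that restricted range the measured correlation shrinks.
hired = score >= np.quantile(score, 2 / 3)
print(f"correlation, hired only:     "
      f"{np.corrcoef(score[hired], performance[hired])[0, 1]:.2f}")
```

The practical implication is that a predictor that looks weak among current employees may still be doing real screening work in the applicant pool, which is why a naive analysis of hires alone can understate a predictor's value.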