Since so much of high-impact innovation really revolves around the scientific method and this idea of rapid experimentation, I wanted to come back for a few minutes to this idea of the hypothesis and what really makes a good hypothesis. We've seen some teams struggle with this, and while there's been a tremendous evolution of experimentation methods that let us test things much faster and at lower cost, really what we're testing is the hypothesis, the assumption, the thing that must be true for this new idea to work. And when the hypothesis isn't great, that can really impair the entire innovation process.

Generally, what you'll see is something along these lines. Someone will say, I believe that if we do the following, then something will result. In other words, I believe that if we clean the site where we're going to perform surgery, then we will have fewer surgical site infections. That's sort of on the way to a hypothesis, but it's certainly not clear enough to support the experimentation methods that we like to use. And so you can start asking some questions around that type of statement. You could ask, well, is there a specific prediction? Did someone say what they believe is going to happen? Is it clear what's driving the change, what the mechanism is that's leading to that change? Do you know who will be impacted and by when? These things usually unfold over a specific amount of time in specific ways. And in fact, can you design an experiment that can invalidate the statement? A good hypothesis can either be validated or invalidated, and you have to make sure you can design an experiment that could invalidate your assumption.

The components of a great hypothesis statement are within this set, and this is probably more comprehensive than you really need to be, but it's very important and very useful to at least acknowledge and understand what you might want to consider in a hypothesis. So, for example: I believe that if someone specific, the who, does something specific, which is the action, then some needle is going to move, which is what's going to change. The readmission rate is going to come down. Then the impact: by how much is it going to change? And we can start to ask some interesting questions about how good is good enough, how much something needs to move for it to be meaningful. Then the target population, so who will be affected and how many of them, within a certain time period. And then again the mechanism, because why do you think this is actually going to work?

So, let's take one of our projects. This is where we had an at-risk population, and it was around postpartum preeclampsia. What we believed we needed to do was send these women home with a blood pressure cuff and start texting them. We believed that if we texted them asking for their blood pressure values, they would reply and send us their blood pressure, we could see when blood pressures were getting elevated and, therefore, keep them safe. One of the things that we started with was a statement along these lines: I believe that if we start texting with women at risk for postpartum preeclampsia when they're discharged from the hospital, we can keep them safe. So, then you have to go back to that earlier frame: what's missing, and what's in there? Well, there are a lot of things missing in terms of specifics.
But even before we get to the specifics, what you see almost immediately is that there is an incredible cascade of assumptions, a whole chain of hypotheses lined up to get from where we are today to where we want to be in this future. So, when you say I believe that, what you're really saying is all of this: I believe that we can identify which women are at risk; I believe that we can actually afford those blood pressure cuffs to give out to the women; I believe they'll use them and they'll use them correctly; I believe we can get them to text us back, so they'll actually say it's okay to text me and I will text you back; that they'll be accurate, that they'll read the blood pressure value and send us the right value; that we can get it in time and that we'll do something with it. So many of the interventions that we see assume, and rely upon the fact, that someone on the care team will actually take a new piece of information and act differently, and change an intervention based on what they see. That's a big assumption. So, we're assuming they're going to close the loop, they're going to care, they're going to be able to adjust medication in many cases, that the patient will take those medications, that those medications will reduce the blood pressure, that reduced blood pressure is going to decrease the readmissions, and that morbidity is going to decline by a certain amount. Remember, we're saying by how much, so I have to make a specific prediction, and that we're going to see the change within a specific amount of time, so it's bounded.

So, that's quite a long line of hypotheses. This is why we introduced that concept of dimensionalizing them based on where you are really uncertain and which ones are critical, which are the ones that would be devastating if you're wrong. That intersection of uncertainty and criticality is where we start. You can pick the ones that you believe are in that camp, but maybe you'd start here. So, first of all, we believe that if the physician explains the health risk to at-risk patients, within 30 days we'll start to see a success rate of 90 percent of those patients, where success is defined as them allowing us to send the text messages, that they will in fact opt into this intervention. And the mechanism, the why, is that they trust their doctor, that they want to stay healthy, that they prefer texting because they don't like phone calls. They really like asynchronous communication because they may not have time to talk to us at that moment, and maybe they don't even want to talk to someone on the care team. So, that's one of the hypotheses along the way, but it still falls within the umbrella of the project. You ask, why are we doing this project? What is the reason for going down this path? And you have to always be working under this umbrella.

So, if we apply that structure that we talked about a few minutes ago, this is what it starts to look like. I believe that if women who are at risk for postpartum preeclampsia get a daily text from their doctor once they're discharged, so who's going to do what and when, and what we're doing is asking them to text us their blood pressure values,
then the readmission rate, which is the outcome, is going to drop by 50 percent within 90 days, so a bounded amount of time. And there's a mechanism: we'll be able to prescribe and adjust medication based on knowing their blood pressure, we'll be able to do this for at least 80 percent of the population, and we're going to get 90 percent adherence to the medications, which is what allows us to avoid the adverse events that actually drive the readmissions and morbidity. There's a lot baked in there, probably too much. If you're going to be pure about hypotheses, there's a lot going on here, and you could say those are different components that you should test individually. But the reality is this is the umbrella statement: if you're going to be working on this project, your prediction is that you'll be able to move the needle that much, in this way, and that's what's going to tell you whether to keep going, pivot, or stop. This is now a testable hypothesis that can in fact be invalidated.

A lot of times, what you see is that you just start with that first statement, which is maybe the insufficient hypothesis, and it's the conversation with the team, the back and forth between team members, that allows people to think about what we really know and what we really believe, just by asking each other questions. So, in a case where someone's trying to work on getting people to be on time to their appointments, and they have an idea in their head about an intervention to get them to those appointments on time, they might start with a pretty simple statement: "Having traffic information would help patients show up to their appointments on time." And they say, here's my hypothesis. And so the other members of the team can start to ask them questions about it. Well, what are we going to send them, and how are we going to get it to them? Okay, I think I'm going to be texting them, and I'm going to be sending them travel time information; I need to tell them when to leave their house based on the traffic so they get there on time. Okay, and what will be good about being on time? Well, we know that when patients are late it backs up the schedule, and it ends up affecting everyone for the rest of the day, and a large percentage of the patient population can experience long wait times. So, that might then lead to the next hypothesis, a little bit more refined: "Texting new patients who drive when to leave their house, based on current traffic and travel times to their doctor's office, will reduce the incidence of patients experiencing waits of over 10 minutes by 50 percent within a week." So, there we've added by how much and within what time frame, two core components of a good hypothesis, and again the mechanism. Why do you believe that will happen? Well, because by leaving their house on time, fewer patients are going to arrive late and back up the schedules for everyone else for the rest of the day. So, that interaction among the team can really evolve what started as a perfectly good statement, but one that was not specific enough to be tested with a good experiment design, and really not specific enough to be invalidated.

One of the questions that I do get asked from time to time is, do you always need to have a hypothesis? And frankly, I think the answer is no. There are a couple of cases where you don't. One of those is when you're just exploring. You have this idea, you just want to try it, and you know you can try something very quickly just because you're trying to learn.
So, you're asking a question along the lines of, I wonder what would happen if we did the following thing? I have no problem with people going out and running a quick experiment when it's only for initial learning purposes. The other reason, or context, in which I don't think you need a hypothesis is when you're doing exploratory research. In other words, you're asking the question, I wonder what's going on in that circumstance, in that practice, with those patients? I'm going to go do some contextual inquiry, some cultural anthropology type work where I'm just going to observe, and talk, and start to get smart about what's happening. Now, when you do these, when you go in without a hypothesis, you're doing that because your goal is to form a hypothesis that can then be explicitly tested.

So, if you remember from last time, there are three possible paths at this point. If you're doing good hypothesis creation, you're making that specific prediction, and you're making it specific enough to be validated or invalidated. If it's validated, if your prediction was correct, you keep going. You can also, of course, invalidate it: you made a prediction, it was false, and if you didn't learn anything new in the process, then you stop. But of course, if you were wrong and something really interesting happened, that's different. Let's say you were texting those patients to arrive on time, and it wasn't working out so well because you found that you really needed to text the caregiver, the person who was going to be providing them with transportation, giving them the ride; that's the person you needed to be texting. Well, your original hypothesis was false, it didn't work out, you made a prediction that didn't come true, but you have this new insight, so you create your new hypothesis and try again. That's the iteration. So, that is what we think about when we think about good hypothesis creation. And let the experimentation begin.
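If it helps to see all of those components gathered in one place, here is a minimal sketch of how you might capture the structure of a hypothesis statement; the field names, the falsifiability check, and the way the appointment-timing example is filled in are my own illustration, not part of any project described above.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One hypothesis statement, broken into the components discussed above."""
    who: str          # the specific actor
    action: str       # the specific thing they will do
    outcome: str      # the needle that is supposed to move
    impact: str       # by how much it should move
    population: str   # who will be affected, and how many of them
    time_period: str  # the bounded window in which the change should appear
    mechanism: str    # why we believe the change will happen

    def is_falsifiable(self) -> bool:
        # A usable hypothesis names a magnitude, a time bound, and a mechanism,
        # so an experiment can be designed to invalidate it.
        return all([self.impact, self.time_period, self.mechanism])

# The refined appointment-timing hypothesis from the example above.
on_time = Hypothesis(
    who="new patients who drive to their appointments",
    action="receive a text saying when to leave, based on current traffic",
    outcome="patients experiencing waits of over 10 minutes",
    impact="reduced by 50 percent",
    population="new patients who drive",
    time_period="within a week",
    mechanism="leaving on time means fewer late arrivals backing up the schedule",
)

assert on_time.is_falsifiable()
```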