You have now learned how to create and interpret run charts and control charts, or Shewhart charts. In this video, I'll recap the important aspects of these tools and show you how to use the results of these analyses to maximize the chances of improvement. First, you learned how to create a run chart using the median to establish a center line. You learned how to apply the run chart rules in order to detect unusual patterns over time warranting investigation. Then you learned how to create and interpret control charts using a set of control chart rules. The five control chart rules you have learned are: first, any single point outside the control limits; second, the shift rule, a run of eight or more consecutive points on one side of the center line; third, the trend rule, six consecutive increases or six consecutive decreases; fourth, two out of three consecutive points in the outer third of the region between the control limits; fifth, 15 or more consecutive points within the inner third of the control limits, hugging the center line.

You have learned the meaning of the terms special cause variation and common cause variation, as well as the concept of statistical control. Remember, all processes will exhibit common cause variation. The only question is whether or not there are any special causes of variation present. One or more rule breaks constitutes evidence of special cause variation, meaning that the process is out of statistical control and is unstable. An absence of rule breaks indicates only common cause variation; the process is in statistical control. It is stable and will continue to perform as is unless something changes.

This flowchart describes how to use control charts in improvement work. Having established an aim, defined measures, and collected baseline data, a control chart is created. If the chart shows evidence of special cause variation through rule breaks, then some detective work is required.
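As a brief sidebar, the first three of the rules recapped above can be expressed as simple checks on a series of measurements. This is an illustrative sketch, not part of the course materials; the function names, defaults, and the convention that a point exactly on the center line restarts a run are all assumptions.

```python
# Illustrative sketch (not from the video): checks for control chart
# rules 1-3 as recapped above. Treating a point exactly on the center
# line as restarting a run is one common convention (an assumption).

def outside_limits(values, lcl, ucl):
    """Rule 1: any single point outside the control limits."""
    return any(v < lcl or v > ucl for v in values)

def shift_rule_break(values, center_line, run_length=8):
    """Rule 2 (shift): a run of `run_length` or more consecutive
    points on the same side of the center line."""
    run, prev_side = 0, None
    for v in values:
        if v == center_line:          # on the line: restart the run
            run, prev_side = 0, None
            continue
        side = v > center_line        # True = above, False = below
        run = run + 1 if side == prev_side else 1
        prev_side = side
        if run >= run_length:
            return True
    return False

def trend_rule_break(values, n=6):
    """Rule 3 (trend): n consecutive increases or n consecutive decreases."""
    ups = downs = 0
    for prev, cur in zip(values, values[1:]):
        ups = ups + 1 if cur > prev else 0
        downs = downs + 1 if cur < prev else 0
        if ups >= n or downs >= n:
            return True
    return False
```

Rules 4 and 5 could be implemented in the same style once the region between the limits is divided into thirds.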
You should seek to understand what those special causes are and what to do about them. This could involve further data analysis, observing the care process, and talking to stakeholders. Having identified the cause, if it is beneficial, you might work to embed it in the process. If the special cause has an adverse effect on quality, you should seek to prevent it from happening, or improve the process so that it's robust to this particular problem in future. By continuing to collect data and update the chart over time, you'll be able to identify whether this has been successful.

On the other hand, if the chart does not show evidence of special cause variation, meaning there are no rule breaks, then the process is stable and will not deliver improved results unless something changes. If the process is stable and performing at a high level of quality, this may be considered sufficient, in which case the process could be monitored to ensure it doesn't deteriorate. If, however, the stable performance is not at an acceptable level of quality, i.e. improvement is needed, then the process must be redesigned. This redesign should draw on a wide range of stakeholder views and may benefit from additional data collection and analysis to further understand problems, as well as research literature to identify potential solutions. In either situation, continued data collection and analysis will reveal what works and what does not. Notice the similarity to the plan-do-study-act cycle here. This flowchart provides more detail on how the study phase can provide objective, statistically valid information about what type of action is most likely to result in improvement.

Now that you have seen how we should act based on whether or not there's special cause variation present, it should be clear that there are two types of mistake we can make when acting on variation in a measure. First, we can mistake common cause variation for special cause variation.
For instance, insisting that there is an upward trend when there are only three points in a row going up. This is relatively likely to happen by chance, just as a result of common cause variation. When we make this kind of mistake, we are reacting to signals that aren't there. Deming referred to this mistake as tampering and argued that it increases the amount of unwanted variation in a process. You can imagine how this might happen in a healthcare process if we start reacting to every up and down in a measure when, in fact, the process was just performing steadily and producing completely predictable levels of common cause variation.

The second type of mistake we can make is to mistake special cause variation for common cause variation. For instance, not investigating when a point lies outside the control limits. When we make this kind of mistake, we're missing important signals in the data. This could lead to missed opportunities for improvement. You might find it useful to note that these two types of mistake are analogous to type I and type II errors in frequentist hypothesis testing. Both kinds of mistake can hamper improvement efforts and potentially even make things worse for patients. The decision flowchart we have just seen can be useful in avoiding these mistakes.

In an improvement initiative, it is important to begin by establishing a clear baseline, capturing how the process is performing before any changes are made. This can be crucially important in making a convincing argument that an initiative has succeeded. Here you can see the baseline on the left-hand side of the chart. Once the baseline is established, the control limits and center line should be extended out into the future. This means that they are now fixed for the time being; new data is simply added on top without modifying the position of the limits or center line. You can see an example of this here, indicated by a dotted center line.
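The idea of freezing and extending baseline limits might be sketched as follows. This is a hypothetical illustration: for simplicity it derives the limits from three standard deviations of the baseline data, rather than from the moving-range calculation typically used for individuals charts, so treat it as a sketch of the freezing idea, not of any particular chart's limit formula.

```python
import statistics

def frozen_baseline(values, baseline_len=20):
    """Compute a center line and limits from the baseline window only.
    Later points are judged against these fixed limits rather than
    being allowed to shift them. Uses a simplified 3-standard-deviation
    limit (an assumption; individuals charts usually derive limits
    from moving ranges instead)."""
    baseline = values[:baseline_len]
    cl = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return cl, cl - 3 * sd, cl + 3 * sd
```

New observations are then compared against the frozen center line and limits exactly as the baseline points were, so any rule break signals a change relative to the baseline process.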
The data for this period is not incorporated into the center line or control limits, which are derived solely from the baseline. This allows the new data to be compared with the baseline process using the rules for special cause variation. If an improvement project succeeds, we should expect to see a shift rule break signaling the improvement. If the improvement is sustained, the shift will herald the start of a new stable process. To capture this, we can form a new set of center line and control limits to represent this new process. This is sometimes referred to as recalculating the control limits. In the example chart, this can be seen from April 2015 onwards. Note that a transient rule break, like the one occurring around December 2014, is not sufficient to warrant forming new limits; a new stable process does not follow on in this case. Only later, once the improvement is embedded, does the shift persist. This means that you should wait until you have sufficient data before establishing new limits. Twenty data points is sufficient for stable control limits. You may add just a center line to give an idea of how the new process is performing after 10 data points. This should then be updated as each subsequent data point is added until you have 20 data points in the new process, at which time the new limits can be locked and extended as before. You can see this on the right-hand side of the example.

In this example chart, which shows a percentage measure, you might have noticed that the control limits vary in their widths about the center line. Why do you think this might be? Think about what you know about sampling distributions. This type of chart is known as a p-chart, p for percentage or proportion. Next, you will learn why the limits vary in width, as well as how to create and interpret p-charts. Control charts, or Shewhart charts, are the main tools of statistical process control.
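As a preview of the answer to that question, the standard p-chart calculation bases each subgroup's limits on that subgroup's own sample size n: the limits sit three standard errors either side of the overall proportion, and the standard error shrinks as n grows, so months with more cases get narrower limits. A minimal sketch of that calculation (the function name and data are assumptions for illustration):

```python
import math

def p_chart_limits(numerators, denominators):
    """Standard p-chart: the center line is the overall proportion
    p_bar; each subgroup's 3-sigma limits use its own sample size n,
    so small subgroups get wide limits and large ones narrow limits."""
    p_bar = sum(numerators) / sum(denominators)
    limits = []
    for n in denominators:
        se = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * se),   # clamp to [0, 1]
                       min(1.0, p_bar + 3 * se)))
    return p_bar, limits
```

For example, a subgroup of 200 cases yields much narrower limits than one of 20, which is exactly the varying width visible on the chart.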
You have seen how these charts offer a robust and objective approach to understanding variation in healthcare processes over time, including how this can be used as part of the iterative approach to improvement engendered by the model for improvement and the plan-do-study-act cycle.