At this point in the course, you have a draft of all state and responsive descriptions in your design document. Congratulations, you've made it a very long way. As the learner explores and uses the interactive, they will be putting together pieces of information from these descriptions. Neither state descriptions nor responsive descriptions are experienced in isolation. These pieces of information, the details of the current state and the relevant changes to objects and context, help the learner construct their own understanding of the content, relationships, and learning goals of the interactive. It is precisely this weaving together of the state and responsive descriptions that results in an interactive, story-like experience. Our description design aims to create an intuitive and enjoyable story-like experience that supports the learner in developing an understanding of the learning goals of the interactive.

Now it's time to review how your state and responsive descriptions work together and to evaluate the story they can tell once implemented in a working prototype. As we mentioned when introducing state descriptions, refining your descriptions is a critically important part of description design. In fact, most of our time in designing and implementing descriptions is spent in successive iterations within this refinement phase. We use a variety of methods to review and evaluate our description designs when we have a working prototype, including informal discussions with learners, discussions with consultants, user-experience interviews, focus groups with teachers and learners, and user surveys. Without a working prototype, we use methods that support us in imagining the working system and imagining we are the learner. In this video, I'm going to introduce two methods that help us do that. The first is a heuristic evaluation using a set of description design heuristics.
These are particularly helpful for evaluating whether the state descriptions support understanding of the state of the sim and whether they set up the learner for the story-like experience as they interact. The second method is called the cognitive walkthrough. The cognitive walkthrough is particularly helpful for evaluating the weaving together of state and responsive descriptions. See the readings for this video for references on heuristic evaluation and the cognitive walkthrough.

Heuristic evaluation is the use of preset criteria to evaluate a user experience design. During description design, we do a heuristic evaluation of the state descriptions using the following four scenarios. For each scenario, we write out the descriptions that would be delivered in order and evaluate based only on this information.

The first scenario is a read-through of the summary section only. Remember, the summary section starts off the state descriptions, so it is basically what starts off the learner. We answer the following questions based only on the descriptions in the summary. Does the summary section provide a concise and inviting description of the important objects in the interactive? Does the interaction hint effectively encourage transitioning to interaction? Does the keyboard shortcuts sentence effectively indicate the presence of the keyboard shortcuts and where they're located, without over-emphasizing their importance?

The next scenario is a full read-through of the initial state descriptions. In other words, this is a read-through of the state descriptions of the interactive on startup. Once we've done that, we answer this question. Upon reaching the end of the read-through, is the next logical or most inviting action a productive one or an unproductive one?

For scenario number three, we do a read-through of the state description headings only. Then we answer these questions.
Do the headings alone convey a sense of what the sim is about? Do the headings convey a sense of where to start investigating next?

For the last scenario, we do a tab-through of the interactive objects only. These are the descriptions that would result from reading out the object name, the initial value or state of the object (if there is one), and the role or function of each interactive object in sequence. Once we've laid this out and read through it in order, we answer these questions. Do the object names sound fun or intriguing? Do the object names and values fit together? Do the initial object responses encourage interaction? Is the action to activate the interactive object obvious from the initial object response? If not, is clarifying help text nearby in the state descriptions?

If you like your answers, your interactive's description design is ready for the next step. If there are answers that identify some problem areas, return to those specific areas and see if you can address the problems. For help, review the state description design patterns and refer to the tips for the state description design tasks.

Next, we evaluate the interactive story using cognitive walkthroughs. A cognitive walkthrough is essentially what it sounds like: taking a mental walkthrough of the described interactive experience. We use this method to aid our thinking about what the experience of using our interactive will be like, allowing us to imagine ourselves as the learner using the interactive before we have a working prototype of a particular design or potential design change. There are a few different ways to do this method. I'm going to describe a variant that we have found helpful and use frequently. To do a cognitive walkthrough, we first specify a task or process to accomplish in the interactive. That's going to be what we do the walkthrough of.
The design of your interactive is intended to enable or encourage users to do one or multiple things. I suggest selecting a task related to accomplishing an important thing. For example, in Balloons and Static Electricity, we might start by selecting, as a task, rubbing the yellow balloon on the sweater. This task would involve finding and grabbing the yellow balloon, moving it to the sweater, and rubbing it on the sweater. For John Travoltage, we might start by selecting, as a task, shocking John, which involves finding and then rubbing John's foot on the rug until there's enough charge for a discharge event, and then finding and then moving John's arm closer to the doorknob.

We then write out, step by step, one way that this task could be accomplished in your interactive. List out the actions a learner must take to accomplish the task and all of the descriptions that would be read aloud as they take those actions. This needs to be a comprehensive list. List out everything. As we do this, we're imagining how the descriptions might be interpreted in the context we would be hearing them. For example, if there is not enough information provided to complete a task, then we would allow the walkthrough to lead to a dead end.

Let's look at a cognitive walkthrough in a design doc. Here's an example of a cognitive walkthrough for the simulation Friction. In this walkthrough, the task we're exploring is grabbing the chemistry book and rubbing it on the physics book until all of the particles have vaporized. This task is interesting and potentially challenging from a non-visual perspective. The chemistry book needs to be lowered onto the physics book to cause enough friction to result in particles vaporizing. The particles vaporize in discrete groups, leaving space between the books when a vaporization event happens.
The description needs to effectively indicate many things in this task, including how to grab the chemistry book, when the book needs to be pressed down more on the physics book to cause friction during rubbing, and what's happening to the particles as rubbing occurs.

Let's go through the walkthrough, focusing on how it's documented. The special notation helps communicate all aspects of the design as we imagine exactly how the experience of the task plays out. The special notation we use is listed at the top of the walkthrough. Descriptions in quotation marks indicate what is voiced by the screen reader. Descriptions in parentheses indicate description strings that should be silenced at this point in the interaction. Sonification notes go within asterisks and describe the sounds that happen. This helps us get a feeling for the actual experience. Keyboard presses are noted in bold and underlined text, and they define each step the learner takes using the keyboard. Since we are imagining a non-visual experience, keyboard presses are what drive the experience. Using these notations, we enact the experience in our mind and write down each step needed to complete the task and what happens at each step.

Blind learners typically start by reading the state descriptions. We're just going to put the summary and the interaction hint here, because that's what they're going to read before we start the walkthrough. "Chemistry book rests lightly on top of a physics book and is ready to be rubbed against it. In zoomed-in view of where books meet, surface temperature thermometer is cool, atoms jiggle a tiny bit. Grab chemistry book to play." As the learner, the summary and hint make me decide to press the Tab key to find the book. As a designer, I note the keypress and what I will hear as my keyboard focus lands on the grab chemistry book button. Learners surely know what to do with a button. I note the keypress: press the space bar to grab the book.
Then I write down the initial grab response within quotation marks. "Grabbed. Lightly on physics book. Move book with W, A, S, or D keys. Space to release. Atoms jiggle a tiny bit, temperature cool." Now, this is where the walkthrough gets interesting. I note that the user starts by pressing the left and right arrow keys and then presses left again. No friction happens, so no rubbing sound. I've just got this highlighted for myself. This is what the learner hears: "Left. Right. Move down to rub harder." I execute that hint. I press the down arrow key or the S key and I hear, "Down. Rub faster or slower." At the same time, I hear a bump, and I note the bump in the asterisks. I press the left and right arrow keys again, and this time I get productive horizontal movement. I hear, "Left. Right. Jiggling more, warmer." The rubbing sound begins and the jiggling sound intensifies. I note that with my sound notation. Then I put the left and right responses in parentheses. I don't want to hear those anymore. I want to focus on the friction. "Jiggling faster, now hotter. Jiggling even faster, even hotter. Very hot. Atoms break away from chemistry book."

At this point in my interviews, blind users kept rubbing back and forth, so I have a design note right here. If I continue rubbing left and right with the arrow keys, the horizontal movement has no friction and the cooling alerts start, so I note how that sounds. Left and right are still silenced in the parentheses, and I hear, "Jiggling less, temperature cooler." The rubbing sound has stopped and the jiggling sound lessens. Those are noted with my special sound notation. Then, on that third horizontal keypress, I hear "Move down to rub harder." I take that hint. I press the down arrow or the S key: "Down. Rub faster or slower," and I get that bump again. I press the arrow keys left and right. This time, I get productive rubbing. "Left. Right. Jiggling more, warmer." The rubbing sound starts and the jiggling responses start again.
"Jiggling faster, now hotter. Jiggling even faster, even hotter. Very hot. Atoms break away." Again, I have airspace, so we see a common pattern here. After each time the atoms break away, the user needs not only the jiggling responses, but also a reminder to move down to rub harder. I note that in the order that it happens. I follow the hint again: "Down. Rub faster or slower," and I get the bump sound.

As a designer, I know at some point in the interaction there will be no more atoms that can break away from the chemistry book. The next part of the walkthrough covers that. We have productive rubbing after move down and the bump sound, and we use the left and right arrow keys, so we have productive rubbing. "Left. Right. Jiggling more, warmer," and the rubbing sound starts. Then we have the jiggling responses. "Jiggling faster, now hotter. Jiggling even faster, even hotter. Very hot, atoms jiggling very fast. Jiggling super fast, super hot. Reset the sim to make more observations." Now that's something new. The learner stops pressing the arrow keys and listens, and the responses they hear describe the cooling: "Jiggling less, temperature cooler. Jiggling less, cooler."

The learner again starts with the left and right arrow keys, and productive rubbing happens because the books are still touching; there hasn't been a breakaway event. "Jiggling more, warmer. Jiggling faster, now hotter. Jiggling even faster, even hotter. Very hot. Atoms jiggling very fast. Jiggling super fast, super hot." At this point in my interaction, I'm wondering why there's no breakaway event, so I stop interacting and listen carefully. "Jiggling less, temperature cooler." The rubbing sound stops, the jiggling sound lessens, and I get all of my cooling events. Then I decide to release the book. I press the space bar to release the book. I hear, "Released." I use my screen reader shortcut keys to read through the scene summary again to get a hint of what's happening.
This is what the scene summary tells me: "Chemistry book has far fewer jiggling atoms as many have broken away. Chemistry book rests on top of a physics book. In zoomed-in view of where books meet, surface temperature thermometer is cool, atoms jiggle a tiny bit. Reset sim to make more observations." To finish out my interaction, I press the Tab key twice to move focus to the Reset All button. I hear, "Grab zoomed-in book, button. Reset All, button." I press the space bar and I hear, "Sim screen restarted. Everything reset." Then I use my screen reader shortcut key to start reading through the scene summary again, and I get the same description I heard at the very beginning of my interaction.

Next, we evaluate what happened in this walkthrough, asking questions like: Did the descriptions read aloud as part of each action make sense in context? Going from one action to the next, did the descriptions flow together? Did the information make sense when provided in that order? Did each description support accomplishing the task? Are there places where you can already guess that a learner might not have known what to do next, or might have gotten lost, confused, or been provided a misleading description? Was the description too long or too short? Was the language too technical, too simple, or maybe even ambiguous?

Let's go back to the Friction cognitive walkthrough example and consider some of these questions. In creating this cognitive walkthrough, we were able to refine the information to be conveyed during interaction and to communicate that from the designer to the developer. For example, in prior cognitive walkthroughs, I noticed that there were two situations where rubbing the books together would not result in friction because of the airspace between the books. This happens at startup, when the chemistry book rests lightly on top of the physics book, and after friction causes a vaporization event. In both cases, the learner may not be aware of the airspace.
We refined these descriptions to include the hint "move down to rub harder." In the beginning, we have "rests lightly on top of the physics book," and then, after the first grab and after horizontal movement, we have "move down to rub harder." Then, after continued rubbing results in a vaporization event and the airspace between the books exists again, we have the jiggling responses and "move down to rub harder" after three left and right horizontal movements. So that's clear in our cognitive walkthrough. We refined these descriptions to include the hint "move down to rub harder" at startup and when a vaporization event leaves a gap. Without that indicator, the learner could be in a situation where they are rubbing, but there is no friction, so either nothing happens or the books are cooling instead of heating up, and they would not know why. With the addition of "move down to rub harder," the learner is better supported to understand why there is no rubbing sound, or why the books may start cooling when they're still rubbing, and what a good next step is.

With this cognitive walkthrough, we can also check the state descriptions against the responsive descriptions and make sure that they work well together. For example, at the beginning we say the chemistry book rests lightly on top of a physics book, and then the learner has to move down to rub harder. Also, after all the atoms have vaporized, there's the hint, "reset the sim to make more observations." If they missed all of that and they've returned to the scene summary, the scene summary is again communicating that same information: the chemistry book has far fewer jiggling atoms as many have broken away, and the user has to reset the sim to make more observations. All of that works well together. We could also use the cognitive walkthrough to help us identify things that we didn't need to hear. For example, after a successful horizontal movement, we didn't need to keep hearing "left" and "right."
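The suppression logic this kind of walkthrough motivates, voicing the direction labels until productive back-and-forth rubbing is detected and silencing them afterwards, might be sketched like this. This is a minimal, hypothetical TypeScript sketch, not the actual sim code; the class name and response strings are illustrative only.

```typescript
// Hypothetical sketch (not actual sim code): voice "left"/"right" labels
// only until a successful back-and-forth rub is detected, then silence
// them so the learner can focus on the friction responses.

type Direction = 'left' | 'right';

class RubDescriber {
  private lastDirection: Direction | null = null;
  private backAndForthDetected = false;

  // Called on each horizontal move (left/right arrow, or the A/D keys).
  // frictionResponse is whatever response the sim would voice for rubbing.
  onHorizontalMove(direction: Direction, frictionResponse: string): string {
    // Decide whether to include the direction label BEFORE updating state,
    // so the press that completes the first back-and-forth still voices it.
    const includeDirection = !this.backAndForthDetected;

    if (this.lastDirection !== null && this.lastDirection !== direction) {
      // A reversal of direction means the learner is rubbing back and forth.
      this.backAndForthDetected = true;
    }
    this.lastDirection = direction;

    return includeDirection ? `${direction}. ${frictionResponse}` : frictionResponse;
  }
}
```

With this sketch, the first two presses would voice "left. Jiggling more, warmer." and "right. Jiggling more, warmer."; from the third press on, only the friction response is voiced.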
We were able to use the cognitive walkthrough to show the developer that we don't want to hear "left" and "right" after successful horizontal movement. We want the learner to focus on hearing the result of rubbing: the changes in the speed of the jiggling and the temperature. The developer implemented logic in the code that tracked the use of the left and right arrow keys (and, of course, the A and D keys) and removed the left and right indicators from the description strings after a successful back-and-forth movement was detected.

The cognitive walkthrough is very useful for helping us imagine the non-visual experience. Missing steps and potentially confusing scenarios reveal themselves very quickly as you go step by step through a task. Throughout the design process, we use many cognitive walkthroughs for tasks large and small, such as to brainstorm, resolve, and evaluate design ideas.

These two evaluation methods are particularly useful for determining whether the description design is ready for implementation. Implementation is a lengthy process too, so it's good to make sure that the descriptions are ready. Using these methods, we can review and refine until we feel that the state descriptions have a good chance of accomplishing their design goal, that is, to frame and encourage interaction; that the responsive descriptions have a good chance of accomplishing their design goal too, that is, to sustain interaction; and that the overall described experience feels like it comes together as an interactive story. Once implementation begins, you can use a similar approach to determine if a prototype is ready to be reviewed by a learner or someone outside the design team.