[SOUND] [MUSIC] >> Hello. As you know from the first lecture this week, one of the major differences between field visits and usability testing is that in field visits, the course of a participant's activity guides the moderator through the whole study. To decide whether a participant's action indicates a problem or not, you need to fully understand the motivation behind this action. For example, you may consider someone's action inefficient, while that person simply decided to perform another task first. That is why, for moderating field visits, moderation techniques that involve participants explaining their actions are used. We discussed such moderation techniques earlier. One of them is retrospection. As you remember, it implies that the full session is recorded on video. After the session has ended, the participant is shown this video and explains what she did and why, and the designer asks follow-up questions. Although this technique can be applied in many different situations, it's typically used when it is undesirable, for some reason, to distract the participant during task performance. For instance, recall our example of the tablet app for shop assistants, which was intended to be used in the course of communication with customers.

In this lecture, I'd like to talk about a moderation technique which I touched on earlier, while giving the overview of user research methods. Its name is the master-apprentice model. It's a part of contextual inquiry, a specialized variant of field visits proposed by Karen Holtzblatt and Hugh Beyer. It implies role playing where the designer takes the role of an apprentice, and the participant the role of a master who teaches the apprentice to do her job. Although the master-apprentice relationship is not common nowadays, it's still familiar enough that people know how to act in accordance with it. Holtzblatt and Beyer found that it's easy not only for people from the target audience of your app, but for designers too. People with no special background learn to conduct a study much more quickly by acting like an apprentice than by memorizing numerous rules for asking interview questions.

The master-apprentice model is based on four principles. Actually, you are already familiar with the first two of them. The principle of context essentially tells you, firstly, to get out of your office building and go where the interaction occurs, and secondly, to see the interaction with your own eyes, not just talk about it. During the study, you need to ensure that the participant talks about ongoing experience instead of summarizing the experience. Focus defines the point of view the designer takes in the course of observation sessions. We discussed it in detail in the lecture about preparation for field visits this week. The principle of partnership implies the following. During the study session, you alternate between watching and probing: you interrupt the participant's activity to ask follow-up questions, for instance, every time you see an indication of an interaction problem. Over time, the participant becomes accustomed to this, because you ask similar questions in the same situations, and she starts to explain what she is doing without your intervention. Thus you become partners with the participant by maintaining the same model of interaction. The last principle, interpretation, implies sharing your understanding of the causes of participants' behavior with them, in order to validate your interpretations.
Holtzblatt and Beyer stated that since participants are in the middle of doing their work, it's quite hard for them to agree with a wrong interpretation. That's actually quite true, but it works only when you use observations to study the usage context. You see, contextual inquiry, where the master-apprentice model originated from, was initially proposed as a user research method aimed at understanding work in an enterprise context. When we use the master-apprentice model for evaluation purposes, I do not recommend following the last principle. You can validate your interpretations just by asking open-ended questions, as we discussed in the previous lecture.

I think you must have noticed that the master-apprentice model shares some common features with active intervention, so it will be easy to explain it by describing the differences between these two moderation techniques. But before that, I'd like to show you the structure of the observation session. Many steps of this structure are similar to those from the structure of a usability test session. The introduction here is equivalent to the introductory instructions there. During this step, you need to tell the participant a little bit about the current study: explain what is going to happen, ask for permissions, etc. The conventional interview here is the same as the interview there. It is aimed at gathering more information about the participant and her experience in the subject domain. The transition implies assigning the roles by providing additional instructions, which we will discuss next. The observation session goes in accordance with the course of the activity in question. It's where we probe, prompted by indications of interaction problems and other things we find interesting to observe. The last two steps are equal to the corresponding steps of a usability test session.

All right, on to the differences between the two moderation techniques. As you already know, the first difference is related to the explicit definition of the roles of all three parties: the mobile application, the participant, and you, the moderator. At the transition step, you need to remind the participant once again that the app is the object of this study, or in other words, that you are testing the app, not her. Then you need to define the roles using instructions like the ones shown on this slide. This particular one is for observations of experienced users, but you may reformulate it a bit to make it more suitable for studies involving novice users. Anyway, the idea here is to treat the participant as an expert in what she does and the primary speaker. The moderator is an apprentice, but she may interrupt the activity to ask follow-up questions, similar to what we discussed in the previous lecture.

Another new thing is acknowledgement tokens. The effect is that, when asked to talk, the participant feels the need, as Boren and Ramey put it, to check the connection. Your job, as an apprentice, is to acknowledge understanding using verbal but not always lexical sounds. The choice of acknowledgement tokens is important, because they must be neutral enough not to distract the participant in any way. In their review of the scientific literature on the topic, they offered two variants of appropriate tokens: mm-hm and uh-huh. It's crucial to give them an interrogative intonation, to make them act as continuers. During the study session, participants may stop talking for many different reasons. For instance, high task complexity interferes with their ability to verbalize thoughts.
You are more likely to see that in interactions with unfamiliar apps or tasks. Recall our discussion of Norman's stages of action model from the previous week, and you'll get why. Reminders should be as unintrusive as possible. Boren and Ramey found that an appropriate way to encourage the participant to resume speaking is to use an acknowledgement token, such as mm-hm, first, and then say something like "And now?" or "So?" In fact, those are all the differences. Your behavior when you see an indication of an interaction problem should be the same as your behavior while applying active intervention.

And the last thing I'd like to bring to your attention. This week, we've presented usability testing and field visits as antagonists: in field visits, the course of a participant's activity guides the moderator through the study, as opposed to usability testing, where the context of use is simulated by giving participants tasks to perform. But in real projects, you may mix these methods. For example, you may combine interview-based tasks with your observations, in order to cover tasks that haven't occurred naturally. This is one of the reasons why I said that field visits are harder to conduct, and that is why I recommend starting with usability tests. Thank you, see you in the next lecture. [SOUND] [MUSIC]