Welcome back. We're continuing our discussion of texting as an interview mode, and shifting to measurement issues that arise when text is used for collecting data. There is really one study in the literature that compares texting to voice interviews, by Schober et al. It actually looks at four modes of data collection: two involve voice and two involve SMS. One of the voice modes is conducted by live human interviewers, the other by an automated system that speaks questions and recognizes respondents' spoken answers. The two text modes are administered in parallel fashion: by human interviewers, who send questions via text and receive the respondents' answers via text, and by an automated system that texts questions and accepts texted answers. In their study, the participants were all using iPhones, so the platform was the same across all participants. They were recruited through various online sources and then screened into the study, so they knew they were going to take part; they just didn't know in which mode the data would be collected.

Keeping in mind that they had already screened in, response rates were generally high, but reliably higher for text than for voice interviews. Why is this? The higher response rates in text could be due to the persistence of the message: there is no non-contact, in the sense that if the survey organization texts a message inviting a sample member to participate, that message will remain on the phone, and the organization can be confident it was received. It is very rare that a text message doesn't reach its destination, at least without some indication to the sender. The higher response rate might also be due to the fact that with text it is possible to participate when convenient, whereas in voice you are contacted and generally invited to participate at that moment; it might not be convenient, but with text it is possible to respond when it is convenient. And the higher response rate might come about because, with a text invitation, the sample member has more time to decide whether or not to participate. They are not put on the spot the way a sample member is with voice. So it may be that once they take some time, and find themselves in a different situation than the one they were in when the message arrived, they realize, yes, I could participate in this.

Another look at response rates concerns the speed with which the interviews that are eventually conducted are actually conducted. If we look at this figure, we can see that within one day of sample members being invited to participate, over 90% of the interviews in the automated text mode had been conducted, and over 70% in the human text mode. These are substantially and reliably faster than the two voice modes, in which about 60% and 50% of the cases had been completed within one day.

Although response rates are higher with text than voice, and the sample is completed faster with text than voice, breakoffs are higher with text than voice. By breakoffs here we mean cases that were not completed by the end of the field period. Why might this be? Well, it could be that there is less social presence in text than voice, less evidence of some kind of agent at the other end of the communication, particularly given the absence of a voice.
So it could be that without that social presence, participants feel freer to simply not answer the next question they receive, or they just drift off without remembering that they are in the middle of an interview. Related to that, the asynchronous character of text may be partly responsible. As we will see, there are benefits to that asynchrony, but it does mean that respondents don't need to answer right away, and that lack of time pressure could translate into taking an indefinitely long time to respond, which is essentially a breakoff. The breakoff rate is also higher in automated than in human modes, as well as higher in text than in voice. This is probably due to the fact that there is no human interviewer present to keep the respondent connected and engaged. The respondents who broke off, whether in automated versus human modes or text versus voice, are no different demographically from those who completed the questionnaire, except for slightly more females breaking off. So it doesn't seem that this is a demographically driven issue or that it is likely to introduce any kind of bias. Still, there are more breakoffs in text than in voice, and in automated than in human-administered modes.

Turning to measurement in the same study, the authors compared a couple of measures of data quality between text and voice. They used two measures of satisficing. One was the frequency of rounded numerical answers: for questions requiring a number as the response, answers divisible by five were treated as evidence of taking a shortcut. The idea is that if somebody says 100, which is divisible by five, versus 97, you would be more likely to assume that the person saying 97 engaged in a deliberate counting strategy, while 100 is more of a top-of-the-head estimate or approximation. The other satisficing measure is straightlining, in which respondents give the same answer to all of the questions using a single response scale running from strongly favor to strongly oppose. In this case the authors counted as straightlining a set of responses that included the same answer to six of seven questions using this scale. (A small sketch of how these two indicators might be computed appears below.) So there were those two measures of satisficing, and then also disclosure of sensitive information, the frequency of giving undesirable responses. Saying "less than one day" when asked how often you exercise would be an undesirable response, and if people give more such answers in one mode than another, that indicates they are giving higher-quality answers in that mode.

Text respondents on iPhones entered answers like this respondent. In the human-administered modes, the interviewers used an interface like this one, in which the question was displayed and they could send probes to the respondents by selecting from the windows in the lower part of the screen. And here is another view of the threaded exchange between the respondent and the interviewer.

What are the likely outcomes? With respect to satisficing, it could be that there is more satisficing with text than voice: respondents might import least-effort strategies from their usual texting practices, in which it is common to use truncated and abbreviated forms of communication, and there is effort involved in typing answers. But texting might lead to less satisficing because of the reduced time pressure: respondents can answer when they have time to think about their answers and possibly consult records, which would lead to higher-quality results. So, in the view of these authors, satisficing could go either way.
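To make the two satisficing indicators described above a bit more concrete, here is a minimal sketch of how one might flag them in response data. It is illustrative only, not the study's actual code: the data layout, the function names, and the use of a hypothetical seven-item battery with the six-of-seven rule are assumptions.

    # Illustrative sketch only (not the study's code): flag the two satisficing
    # indicators discussed above. Assumptions: numeric answers are plain integers,
    # and a "battery" is a list of answers to items sharing one response scale.
    from collections import Counter

    def is_rounded(numeric_answer):
        # An answer divisible by five (e.g., 100 but not 97) is treated as a
        # possible top-of-the-head approximation rather than a counted value.
        return numeric_answer % 5 == 0

    def is_straightlining(battery_answers, threshold=6):
        # Flag straightlining when the most frequent answer appears at least
        # `threshold` times across the battery (six of seven in the study's rule).
        most_common_count = Counter(battery_answers).most_common(1)[0][1]
        return most_common_count >= threshold

    # Made-up example responses, purely for illustration
    print(is_rounded(100))  # True
    print(is_rounded(97))   # False
    print(is_straightlining(["strongly favor"] * 6 + ["somewhat oppose"]))  # True

A survey organization would of course tailor the thresholds and data structures to its own instrument; the point is only that both indicators are simple, mechanical flags computed from the recorded answers.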
With respect to disclosure, that too could go either way. There could be more disclosure of sensitive information in text than in voice because of the reduced social presence, the fact that there is no interviewer face or voice; that would mean better-quality data. But there could be less disclosure, lower-quality data, because of concerns that the questions and the answers are permanent: they are visual and persistent, as we discussed in the previous segment.

What were the results? Turning first to satisficing, the authors reported that there was less satisficing in text (represented by the orange bars in this figure) than in voice, so better-quality data in text than voice: fewer numerical answers ending in zero or five. The other measure of satisficing, straightlining, told the same story: there was less straightlining in text than in voice. With respect to disclosure, texting also led to higher-quality data; there was more disclosure (the orange bars are taller than the blue bars) in text than in voice. There was also an advantage for automated over human-administered modes, quite similar to the advantage we saw for self-administration with computerized data collection, such as ACASI, when we talked about those kinds of modes in the context of interviews, and again for the same reasons: the fact that these modes are automated means that there is no interviewer present who might inhibit candid responding. So texting leads to less satisficing by two measures and more disclosure of sensitive information, all of which is positive when it comes to considering text as a mode of interviewing.

There are some properties of text interviews that are worth considering when evaluating this mode of interviewing for data collection. A key feature is that text interviews take longer than voice interviews, as this figure illustrates. In the lower part of the figure, the orange bars indicate questions that were texted, either by a human or by an automated system, and the black bars are the answers or anything else. As you can see, relative to the voice interviews in the top part of the panel, where the blue bars indicate questions asked by voice and the black bars indicate both answers and any other spoken content, the text message interviews are much more spread out in time: there is more space between the orange bars and the black bars that follow questions than there is between the blue bars for voice interviews and the black bars that follow the questions. Presumably this is because the text respondents are taking more time before answering to think about their answers, leading to the higher-quality results we have just discussed. (A simple way one might quantify this question-to-answer gap is sketched below.) The other distinction is that there are more spoken turns that are not necessarily answers in voice than in text interviews. If you look at the text interviews, they are really of the form question, answer, question, answer; but in voice, there is a lot more being said, primarily by the respondents. In essence, the voice interviews are less efficient, even though they are completed in a shorter period of time.
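As a rough illustration of how that question-to-answer spread might be quantified, here is a minimal sketch that computes the gap between each question turn and the answer that follows it in a timestamped log of turns. The turn-log format, a list of (kind, seconds) pairs, is an assumption for illustration, not the study's actual data structure.

    # Illustrative sketch only: measure how "spread out in time" an interview is,
    # using a hypothetical log of timestamped turns.
    from statistics import median

    def question_to_answer_gaps(turns):
        # turns: list of (kind, seconds) pairs, e.g. ("question", 0.0), ("answer", 95.0)
        gaps = []
        pending_question_time = None
        for kind, seconds in turns:
            if kind == "question":
                pending_question_time = seconds
            elif kind == "answer" and pending_question_time is not None:
                gaps.append(seconds - pending_question_time)
                pending_question_time = None
        return gaps

    # Made-up example: long pauses before texted answers
    text_turns = [("question", 0.0), ("answer", 95.0),
                  ("question", 100.0), ("answer", 400.0)]
    print(median(question_to_answer_gaps(text_turns)))  # 197.5 seconds

Comparing this median gap across modes would capture, in one number, the pattern the figure shows visually: longer pauses before answers in text than in voice.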
That concludes our discussion of texting as an interview mode and as a contact mode. In particular, texting as a contact or pre-contact mode seems to increase participation in both web and mobile data collection more than pre-contact in other modes, in particular email and paper invitations. As an interview mode, there are a number of promising features of texting. There was very little evidence of coverage error, just differences in age and education, and the evidence is that mobile phone owners who text are growing in number, so even these differences by age and education may vanish in the near future. With respect to nonresponse, response rates were higher in text than in voice, although breakoffs were higher in text than voice. And with respect to measurement, there was evidence by a number of measures that texting leads to higher-quality data than voice interviews: by two measures of satisficing there was better data, that is, less satisficing in text than voice, with fewer rounded numerical answers and less straightlining, and there was more disclosure of sensitive information in text than in voice interviews. Finally, with respect to efficiency, although the interviews take longer in text than voice, this is presumably because they are spread out in time as respondents think through their answers more carefully. The field period is shorter for text than voice; that is, the sample is completed more quickly with text interviews than voice, particularly for automated text, where we saw that over 90% of the interviews that were completed were completed within one day of the invitation being distributed.

Some things to think about with respect to texting as an interview mode: Is it only suitable for brief interviews? In the study we've talked about, the participants had all screened in and were compensated pretty well for their time, but it could be that respondents won't tolerate a very long interview when it's conducted via text. On the other hand, because they can participate when it's convenient, they may be willing to tolerate more questions than we might think. Another thing to think about is how texting might evolve as an interview mode. It is now actually quite easy to attach various media to a text message, or to speak a text message and have it converted to text. An early idea, proposed by Fuchs, suggested that when texting is used as a contact mode, one could attach a video of a human interviewer inviting sample members to participate. So it remains to be seen where texting goes, but the initial evidence is that it is promising as another mode of conducting surveys.

That concludes our treatment of mobile data collection, both mobile web and SMS. For our final topic, we'll address alternative sources of data, not necessarily involving self-report as in answering survey questions, but involving sources such as administrative records or social media, other types of what have been called "found" as opposed to "designed" data, in that these sources were never really intended for use in social measurement. But increasingly, researchers are finding ways to use these other data sources in social measurement. So in our final segment we'll address both of those, and I look forward to seeing you then.