
Reliability in Unobtrusive Research

20 October, 2015 - 10:17

LEARNING OBJECTIVES

  1. Define stability and describe strategies for overcoming problems of stability.
  2. Define reproducibility and describe strategies for overcoming problems of reproducibility.
  3. Define accuracy and describe strategies for overcoming problems of accuracy.

This final section of the chapter investigates a few particularities related to reliability in unobtrusive research projects (Krippendorff, 2009) that warrant our attention. These particularities have to do with how and by whom the coding of data occurs. Issues of stability, reproducibility, and accuracy all speak to the unique problems—and opportunities—with establishing reliability in unobtrusive research projects.

Stability refers to the extent to which the results of coding vary across different time periods. If stability is a problem, it will reveal itself when the same person codes the same content at different times and comes up with different results. Coding is said to be stable when the same content has been coded multiple times by the same person with the same result each time. If you discover problems of instability in your coding procedures, it is possible that your coding rules are ambiguous and need to be clarified. Ambiguities in the text itself might also contribute to problems of stability. While you cannot alter your original textual data sources, simply being aware of possible ambiguities in the data as you code may help reduce the likelihood of problems with stability. It is also possible that problems with stability may result from a simple coding error, such as inadvertently jotting a 1 instead of a 10 on your code sheet.
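One informal way to check stability, not described in this chapter, is to code a sample of your data twice, some time apart, and then calculate how often the two rounds agree. The sketch below assumes a simple two-category coding scheme; all category labels and values are hypothetical.

```python
# Minimal sketch of a stability (test-retest) check: the same coder codes the
# same ten items on two occasions. All category labels here are hypothetical.
time_1 = ["violent", "nonviolent", "violent", "violent", "nonviolent",
          "violent", "nonviolent", "nonviolent", "violent", "violent"]
time_2 = ["violent", "nonviolent", "violent", "nonviolent", "nonviolent",
          "violent", "nonviolent", "nonviolent", "violent", "violent"]

# Share of items coded identically in both rounds.
matches = sum(a == b for a, b in zip(time_1, time_2))
print(f"Test-retest agreement: {matches / len(time_1):.0%}")  # 90% in this example
```

A noticeably low agreement rate would point back to the problems described above: ambiguous coding rules, ambiguities in the text itself, or simple coding errors.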

Reproducibility, sometimes referred to as intercoder reliability (Lombard, Snyder-Duch, & Campanella Bracken, 2010), is the extent to which one’s coding procedures will yield the same results when the same text is coded by different people. Cognitive differences among the individuals coding data may cause problems with reproducibility, as could ambiguous coding instructions. Random coding errors might also cause problems. One way of overcoming problems of reproducibility is to have coders code together. While working as a graduate research assistant, I participated in a content analysis project in which four individuals shared the responsibility for coding data. To reduce the potential for reproducibility problems, we coded at the same time in the same room, sitting around a large, round table, so that we could consult one another when we ran into problems or had questions about what we were coding. Resolving those ambiguities together meant that we grew to have a shared understanding of how to code various bits of data.
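Reproducibility is also often reported numerically. The sketch below is an illustration, not the procedure used in the project described above; it computes simple percent agreement between two hypothetical coders, along with Cohen’s kappa, a common statistic that corrects for agreement expected by chance.

```python
from collections import Counter

# Two coders independently code the same ten items; all data are hypothetical.
coder_a = ["violent", "violent", "nonviolent", "violent", "nonviolent",
           "nonviolent", "violent", "nonviolent", "violent", "violent"]
coder_b = ["violent", "nonviolent", "nonviolent", "violent", "nonviolent",
           "nonviolent", "violent", "nonviolent", "violent", "nonviolent"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, based on how often each coder used each category.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Whatever statistic you use, low agreement is a signal to sit down with your fellow coders and revisit the coding instructions together.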

Finally, accuracy refers to the extent to which one’s coding procedures correspond to some pre-existing standard. This presumes that a standard coding strategy has already been established for whatever text you’re analyzing. Official standards may not have been set, but perusing the prior literature for the collective wisdom on coding in your particular area is time well spent. Scholarship focused on similar data or coding procedures will no doubt help you clarify and improve your own coding procedures.
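Where a pre-existing standard does exist, accuracy can be assessed in much the same way, by comparing your coding of a sample against the standard coding of those same items. The brief sketch below assumes a hypothetical standard; it is an illustration rather than an established protocol.

```python
# Hypothetical comparison of one coder's results against a pre-existing
# standard coding of the same five items.
standard = ["violent", "nonviolent", "violent", "violent", "nonviolent"]
my_codes = ["violent", "nonviolent", "violent", "nonviolent", "nonviolent"]

agreement = sum(s == m for s, m in zip(standard, my_codes)) / len(standard)
print(f"Agreement with the pre-existing standard: {agreement:.0%}")  # 80% here
```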

KEY TAKEAWAYS

  • Stability can become an issue in an unobtrusive research project when the results of coding by the same person vary across different time periods.
  • Reproducibility has to do with multiple coders’ results being the same for the same text.
  • Accuracy refers to the extent to which one’s coding procedures correspond to some pre-existing standard.

EXERCISE

  1. With a peer, create a code sheet that you can use to record instances of violence on the television program of your choice. Now, on your own, watch two or three episodes of that program, coding for instances of violence as you go along. Your peer should do the same. Finally, compare your code sheet to that of your peer. How similar were you in your coding? Where do your coding results differ, and why? Which issues of reliability may be relevant?