In attempting to discredit an Indiana University study about Bill O'Reilly's name-calling, Ron Mitchell, producer of The O'Reilly Factor, claimed that “the researchers admit they had to make several changes to their 'coding instrument' ... until the results fit the preconceived notion of name-calling on the Factor.” In fact, the changes were made during pre-testing to ensure the coding instrument measured reliably, not to make O'Reilly look worse.
Peas in a pod: In LA Times op-ed, O'Reilly producer misrepresented IU study to defend host
Written by Andrew Ironside & Paul Waldman
In a May 10 op-ed in the Los Angeles Times, Ron Mitchell, producer of Fox News' The O'Reilly Factor, misrepresented an Indiana University study that found that host Bill O'Reilly engages in name-calling once every 6.8 seconds during his “Talking Points Memo” segment. In a purported defense of O'Reilly, and in an effort to portray the study as “biased,” Mitchell asserted that Times columnist Rosa Brooks, who cited the report in her May 4 column, “failed to tell Times readers that the researchers admit they had to make several changes to their 'coding instrument' because the first attempts generated 'unacceptably low scores.' That's code for: they tried and tried until the results fit the preconceived notion of name-calling on the Factor.” Mitchell's claim that the study was changed because it found too few examples of O'Reilly's personal attacks is false. In fact, IU researchers did not make any changes to their coding instrument in order to make O'Reilly look worse. The “low scores” in question were not scores of O'Reilly's name-calling but, rather, intercoder reliability scores -- values of Krippendorff's alpha, a statistic used to assess how consistently different coders apply a content analysis instrument.
In his eagerness to impute malicious motives to the IU researchers, Mitchell displayed a misunderstanding of the techniques of content analysis. In their methodological note, the researchers described the process they went through to refine their coding instrument until it achieved “intercoder reliability.” This term refers to the degree to which different researchers code the same text in the same way. Achieving intercoder reliability means that the biases of individual coders are not coloring the results. In other words, an unreliable instrument is one with which different coders code the same text but produce different results. A reliable instrument is one with which different coders, no matter who they are, code the same text and produce identical or near-identical results.
The process the IU researchers went through, described in their article, is standard practice in most content analysis projects: An instrument is designed, then tested, and if the measures are found to yield unacceptably low levels of reliability between coders, the instrument is refined and/or the coders receive more training to remove ambiguity until an acceptable level of reliability is achieved.
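To make that workflow concrete, here is a minimal, purely illustrative Python sketch of a reliability pre-test (the labels, data, and threshold below are hypothetical and are not the IU researchers' actual code or data): two coders label the same surplus segments, their agreement is measured, and full-scale coding begins only once agreement clears a preset bar.

```python
# Hypothetical illustration of a reliability pre-test; not the IU study's code or data.

def percent_agreement(coder_a, coder_b):
    """Share of units two coders labeled identically -- a crude stand-in for the
    chance-corrected statistics (such as Krippendorff's alpha) used in real studies."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders label the same five surplus segments as name-calling ("N") or not ("-").
pretest_coder_1 = ["N", "-", "N", "-", "N"]
pretest_coder_2 = ["N", "N", "-", "-", "N"]

if percent_agreement(pretest_coder_1, pretest_coder_2) < 0.80:
    print("Unreliable: refine the codebook or retrain the coders, then re-test.")
else:
    print("Reliable enough to begin coding the study sample.")
```

Note that a low score in this loop says nothing about how often the behavior being coded actually occurs; it says only that the coders are not yet applying the codebook in the same way.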
Mitchell wrote the following in his op-ed:
Brooks also failed to tell Times readers that the researchers admit they had to make several changes to their 'coding instrument' because the first attempts generated 'unacceptably low scores.' That's code for: they tried and tried until the results fit the preconceived notion of name-calling on the Factor.
Mitchell seems to be under the impression that the “unacceptably low scores” mentioned by the researchers are scores of how often O'Reilly attacks opponents. This is false. The “scores” in question are reliability scores -- measures of how closely the coding results of different coders matched. The study states this clearly:
Four investigators held meetings to test the original coding instrument and its codebook. These meetings prompted several refinements of the coding instrument. Two of the investigators were appointed as coders and trained to collect data. Coder reliability pre-tests were conducted on surplus episodes of “Talking Points Memo.” The first two pretests produced unacceptably low scores (Krippendorff's alpha = 0.67 and 0.51). More coder training ensued.
Furthermore, the researchers used a particularly stringent measure -- Krippendorff's alpha -- to assess intercoder reliability. Their initial tests yielded Krippendorff's alphas between .51 and .67, which the researchers deemed an unacceptably low level of reliability. After further training of the coders, they eventually reached Krippendorff's alpha levels between .84 and 1.00.
As a Temple University website on intercoder reliability and content analysis explains: “Coefficients of .90 or greater are nearly always acceptable, .80 or greater is acceptable in most situations, and .70 may be appropriate in some exploratory studies for some indices. Higher criteria should be used for indices known to be liberal (i.e., percent agreement) and lower criteria can be used for indices known to be more conservative (Cohen's kappa, Scott's pi, and Krippendorff's alpha).” (emphasis added). Because Krippendorff's alpha is an extremely stringent measure of intercoder reliability, a coefficient of .67 would actually be considered acceptable by many researchers. The fact that the instrument used by Indiana University researchers yielded Krippendorff's alphas above .84 means that they were using a highly reliable coding system.
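To illustrate why Krippendorff's alpha counts as a conservative index, here is a short, self-contained Python sketch of the statistic for two coders and nominal categories, run on made-up data (the coders, labels, and numbers below are hypothetical and have nothing to do with the study's actual data):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for two coders and nominal (categorical) labels.
    Returns 1.0 for perfect agreement; values fall as disagreement rises."""
    # Coincidence matrix: each unit contributes both ordered pairs of its two labels.
    coincidences = Counter()
    for a, b in zip(coder_a, coder_b):
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1

    n = sum(coincidences.values())          # total pairable values (2 per unit here)
    marginals = Counter()
    for (category, _), count in coincidences.items():
        marginals[category] += count

    # Nominal disagreement metric: 0 when labels match, 1 when they differ.
    d_observed = sum(count for (c, k), count in coincidences.items() if c != k) / n
    d_expected = sum(marginals[c] * marginals[k]
                     for c, k in permutations(marginals, 2)) / (n * (n - 1))
    return 1.0 - d_observed / d_expected

# Hypothetical data: 10 segments coded as name-calling ("N") or not ("-").
coder_1 = ["N", "-", "N", "N", "-", "-", "N", "-", "N", "-"]
coder_2 = ["N", "-", "N", "-", "-", "-", "N", "-", "N", "-"]
print(round(krippendorff_alpha_nominal(coder_1, coder_2), 2))  # prints 0.81
```

On this toy data the two coders agree on 9 of 10 segments (90 percent raw agreement), yet alpha comes out to roughly 0.81, because the statistic discounts agreement that could have occurred by chance. That is why a .67 on this scale is hardly damning, and why an .84-to-1.00 range indicates a highly reliable instrument.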
According to the press release about the study issued by IU, the study was “published in the academic journal Journalism Studies,” a peer-reviewed journal affiliated with the Journalism Studies Division of the International Communication Association. “All articles in this journal have undergone rigorous peer review, based on initial editor screening and anonymised refereeing by two anonymous referees,” according to the Journalism Studies website. The press release added that "[a]n earlier version of the study won a top faculty award from the Journalism Studies Division of the International Communication Association."
Mitchell followed in the footsteps of the host he works for in attempting to discredit the IU study. As Media Matters for America noted, O'Reilly suggested the study's results were somehow tainted because philanthropist George Soros gave Indiana University a $5 million donation, and that “their research wound up in the hands of Media Matters, the smear Internet site partly funded by enterprises connected to George Soros,” which then “issued a press release” about it. In fact, Soros' donation was specifically earmarked for a project in Kyrgyzstan, the university has stated that "[t]he researchers received no grant funding for this study," and Media Matters (which O'Reilly has falsely claimed is funded by Soros) learned of the study the way many others did: from the university's press release.