The national media have certainly NOT been giving Barack Obama a rougher ride than the GOP candidates, but a new study by the Pew Project for Excellence in Journalism is fueling the myth that, as Politico’s Keach Hagey put it this morning, “Obama has received the most unremittingly negative press of any of the presidential candidates.”
To be sure, Hagey is repeating exactly what Pew is claiming, but there are at least three major problems with using their study to conclude that the media have an anti-Obama bias. First, they didn’t study what most people would consider “the media.” Second, their definition of “positive” and “negative” press doesn’t match what media experts consider “favorable” or “unfavorable” coverage.
And, third, the researchers didn’t really even look at the stories — they let a computer (using an algorithm dubbed “Crimson Hexagon”) churn through the words and determine whether an assertion was pro- or anti-Obama (or Perry, or Romney, etc.).
Despite these important caveats, media writers are already using Pew’s research to claim the media have an anti-Obama bias. “President Obama has received more negative press coverage in recent months than any of his prospective GOP challengers,” wrote the Boston Globe’s Donovan Slack.
CBSNews.com’s Brian Montopoli parroted: “President Obama ‘has suffered the most unrelentingly negative treatment’ of all presidential candidates over the past five months.” And Politico’s Hagey: “Obama has received the most unremittingly negative press of any of the presidential candidates by a wide margin, with negative assessments outweighing positive ones by four to one.”
Let’s take the issues one at a time:
■ Way Too Much Media: In what appears to be an effort to be comprehensive, the Pew researchers stretched the concept of “media” so wide that it’s really a study of nearly everything on the Internet. Influential and top-rated media outlets (like ABC, CBS and the New York Times) are buried in a sea of “coverage and commentary on more than 11,500 news outlets, based on their RSS feeds” and analyzed by a computer software program.
Think about it: There are three national broadcast networks that offer news coverage (four if you count PBS), plus another half-dozen cable networks that frequently discuss politics (including the business channels). The Audit Bureau of Circulations tracks the circulation of just 635 daily newspapers, plus another 553 Sunday-only newspapers. The number of news magazines tracked by Pew’s own annual State of the Media is six. Add in Google, Yahoo, AOL News, the Huffington Post and other online-only news outlets, and you still only have a couple of thousand news sources. The number of truly important news sources is probably only in the dozens.
So for a study to include 11,500 news outlets (English-language only, the report says), the researchers have cast their net so widely that their study necessarily includes a huge number of insignificant or derivative news outlets — hundreds of iterations of the same AP story on the Web sites of local TV stations, for example. Such a study design makes it impossible to discover how the candidates were covered by the relatively small number of news media outlets that reach hundreds of thousands or millions of people each day.
[Pew also separately looked at “hundreds of thousands” of blogs, which again means that the few dozen top-ranked influential blogs are buried in a mass of data that includes vast numbers of low-trafficked and irrelevant sites.]
To study the news media’s effect on the campaign, researchers need to isolate the news media sources that are having the most profound effect — either reaching the most viewers (like the big networks) or proving most influential at establishing a national narrative (like the New York Times or Politico). Throwing thousands of sources into one big pot — some with audiences in the millions, others reaching only a few hundred a day — just confuses the role that journalists actually have in setting the agenda and crafting a candidate’s image.
■ Positive and Negative Assessments. The debate over whether the media play favorites in campaigns is not new. The Pew Research Center for The People & The Press found in October 2008 that “by a margin of 70% - 9%, Americans say most journalists want to see Obama, not John McCain, win on Nov. 4.” In election after election, Pew has found large majorities of Americans believing that the media are favoring whomever the Democrats have nominated.
But journalists — at least those supposedly in the straight news business — rarely aid their favored candidates through outright editorializing. Instead, they exert influence over the news agenda (what stories to cover, what stories to ignore); source selection (which experts are asked to provide a soundbite on the evening news, and which ones are never called); and framing (i.e., a teacher’s union complains about cutbacks — is that a story about the damage budget cuts could do to the poor, or is it about the power of special interests to block needed spending reductions?).
Of course, if a reporter does make an overt editorial judgment in the guise of a news report, that’s obviously a good example of pro- or anti-candidate bias.
The Pew study was designed to review “assertions” within news coverage, and then tag each one as “positive,” “negative,” or “neutral” (or “irrelevant” if the assertion was off-topic). The key to understanding Pew’s numbers is that they incorporated “horse race” assessments into their measure of good and bad press. As their methodological explanation confirmed: “A story that is entirely about a poll showing Mitt Romney ahead of the Republican field — and that his lead is growing, would be a good example to put in the ‘positive’ category.”
Careful researchers would avoid blurring such “horse race” statements into an overall measure of good press/bad press. Back in August, both Rick Perry’s strongest supporters and his staunchest foes would agree that he was on top of the GOP preference polls — it’s not “the media” pushing a biased editorial line to say so. Standard measures of “good” and “bad” press include: assessments of a candidate’s personal integrity, ethics and job competence; evaluations of their policy proposals; and their capabilities as a candidate — in other words, those attributes that can make someone more or less likely to support their candidacy.
Including “horse race” assessments undoubtedly skewed the numbers in favor of Perry (who led most surveys until late September) and hurt President Obama, whose job approval ratings were on the decline. Plus, tallying overt “assertions” would also minimize the effect of daily news coverage (where the bias is usually more subtle), while boosting the effect of editorials and commentary with obvious opinions.
Our own work this campaign season shows that the national media consistently framed the debt story in a way that played to Obama’s agenda, and hit Republican candidates with mainly hostile questions premised on liberal policy assumptions. In an election context, those are big favors to the Democrats that cannot be tallied on a simple “positive” or “negative” scorecard.
■ Letting the Computer Do Most of the Work: Determining the tone of news coverage is based on a technique called “content analysis,” where researchers develop categories and rules to measure the content of news stories. A particular content analysis scheme is deemed “reliable” if other researchers can take the same set of rules and get similar results.
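The reliability check described above is usually quantified. A minimal sketch of the simplest such measure, raw percent agreement between two coders (the coder names and tags here are hypothetical; real studies often prefer chance-corrected statistics such as Cohen’s kappa):

```python
# Minimal sketch: inter-coder reliability as simple percent agreement.
# This is an illustration, not Pew's actual reliability procedure.

def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same tag."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical tags from two human coders on the same five assertions:
coder_a = ["positive", "negative", "neutral", "negative", "positive"]
coder_b = ["positive", "negative", "negative", "negative", "positive"]

print(percent_agreement(coder_a, coder_b))  # -> 0.8 (4 of 5 match)
```

The same arithmetic applies when one “coder” is a computer: Pew’s reported 97% figure is an agreement rate of this general kind between the algorithm and its human trainers.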
In theory, this would seem ideal work for a computer, which has no political prejudices and cannot be numbed by going over countless stories on the same topic, day after day. But in practice, I have discovered, the key is to have analysts who understand the context of the stories they are reading or watching. Campaign news changes from day to day, new issues arise, and new buzzwords can become a kind of journalists’ shorthand, referring to some episode or incident that has a shared definition among political insiders.
Pew reports that their human researchers worked up models for the computer algorithm, feeding it examples of “positive” and “negative” stories until the computer matched the human researchers “in 97% of the cases analyzed.” But with such a vast number of stories, human researchers could have cross-checked only a tiny fraction of the coverage. Nearly all of the “anti-Obama” or “pro-Perry” stories were never reviewed by an actual researcher to check the context and meaning of the keywords the computer was trained to spot.
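To see why context matters, here is a toy sketch of keyword-based tone classification. This is NOT Pew’s actual algorithm (the Crimson Hexagon system is proprietary, and the word lists here are invented); it only illustrates how context-free keyword matching can tag a neutral horse-race fact as “positive” press:

```python
# Toy illustration of keyword-based tone classification.
# The word lists are hypothetical; this is not Crimson Hexagon's method.

POSITIVE_WORDS = {"lead", "ahead", "surge", "growing", "wins"}
NEGATIVE_WORDS = {"falls", "behind", "scandal", "declining", "loses"}

def classify(assertion: str) -> str:
    """Tag an assertion positive/negative/neutral by counting keywords."""
    words = set(assertion.lower().split())
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# A pure horse-race fact gets tagged "positive" for Perry, even though
# supporters and foes alike would state it in exactly the same words:
print(classify("Perry takes a growing lead ahead of the GOP field"))
# -> positive
```

A human coder reading for context could separate “Perry leads the polls” (a fact both sides accept) from “Perry gave a strong debate performance” (an evaluative judgment); a keyword counter, by design, cannot.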
The use of computer technology did seem to preclude the examination of video, in favor of stories printed on the Web sites of television and radio outlets. Again, according to the methodological explanation: “Even though the television programs from Fox News are not in the sample directly, content from Fox News is present through the stories published on FoxNews.com.”
Such a choice eliminates the news as seen by the largest audiences, in favor of online stories that were read by far fewer people. While it’s true that most, if not all, of the stories that make it onto a network’s airwaves are available in some form on their Web sites, what’s lost is the editorial selection made by the on-air producers. ABCNews.com has included several items on the Obama administration’s “Fast and Furious” scandal, for example, but none have recently made it onto ABC’s airwaves.
The point of studying the media for potential bias is to make sure that journalists are not skewing the news before it reaches voters, so that the real decisions are in the hands of the people, not the media elites. For liberal journalists to hear that their profession is somehow skewed against President Obama can only encourage them to attempt to tilt the scales in the other direction. That’s a step away from the fair and balanced journalism that we need.