Day-after-election coverage is especially useful for differentiating news content on both the quality and bias metrics, because election stories are naturally full of concrete facts: who won, who lost, where, and by how much.
Further, most people can interpret these facts on their own, without someone else analyzing them and offering a take on what they mean. Winning is generally good, and flipping a seat is even better. Losing is generally bad, and having your seat flipped is worse. Winning by more than expected, or losing by less than expected, is also good. Anyone can interpret such facts this way. A story that simply reports what happened in an election gives you everything you need to understand what the results mean.
But because the story of an election can be told easily with just facts, many publications want to give readers additional reasons to turn to them instead of their competitors. As a result, sources that typically publish a high percentage of fact-reporting stories relative to analysis and opinion seem compelled to provide more analysis and opinion on the day after election day. Such is the case for the LA Times, Washington Post, Bloomberg, and AP News below. Sources lower on the quality spectrum of the chart, in turn, shift to nearly all analysis and opinion. Therefore, on the day after election day, one way to tell which sources are the most fact-based is to check whether you can still find a high percentage of fact-reporting stories (as opposed to opinion and analysis) on their front pages.
Bias is even easier to distinguish in such election stories and headlines. From a content-analysis perspective, two patterns emerged in highly biased analysis and opinion pieces around this particular election. The first concerns whether the election constituted a “wave” of any sort, with takes ranging from “it was definitely a very blue wave, in fact, the bluest” to “there was no wave at all, and if there was one, it was red.” How a source characterizes any sort of wave is highly indicative of left or right bias. The second pattern appears in analysis and opinion stories explaining why the results were actually much better than they seemed for one’s own side and much worse than they seemed for one’s opponent. See the examples below, which show that an enormous amount of the content people consume as news is dedicated to convincing one’s own side of what the results mean, rather than reporting what the results are.
Knowing that many people read content from only one ideological realm or the other, it’s no wonder that people on the left and the right have entirely different pictures of what last night’s results mean. This is not a problem that will be solved soon, I know. But it is one we can make strides toward solving. In fact, we must do something, or else continue to endure the effects of hyper-polarization: everything from an internal sense of despair, to broken relationships with family and friends, to politically motivated violence.
Thanks for your efforts to reduce polarization by helping people navigate the complex media landscape, thereby helping us all make better choices about the news we consume.
Los Angeles Times (Neutral/Skews Left, Fact Reporting/Analysis)
Washington Post (Skews Left, Fact Reporting/Analysis)
Associated Press (Neutral, Original Fact Reporting)
Bloomberg (Neutral/Balanced, Fact Reporting/Analysis)
Axios (Skews Left, Fact Reporting/Analysis)
Buzzfeed News (Skews Left, Fact Reporting/Analysis)
Daily Kos (Hyper-partisan Left, Opinion/Selective/Unfair)
Alternet (Hyper-partisan Left, Opinion/Selective/Unfair)
Palmer Report (Most Extreme Left, Misleading/Propaganda)
FoxNews.com (Hyper-partisan Right, Analysis/Opinion)
RedState (Hyper-partisan/Most Extreme Right, Opinion/Misleading/Unfair)
Daily Wire (Hyper-partisan Right, Opinion/Selective/Unfair)
Gateway Pundit (Most Extreme Right, Misleading/Propaganda)