Progressive skew underlies media stories and government actions

Public attention vs. media coverage of Vietnam war shows Progressive skew

Public attention vs. media coverage of Vietnam war [1]

Progressive skew didn’t win up through the 1890s

…crisis alone need not spawn Bigger Government. It does so only under favorable ideological conditions, and such conditions did not exist in the 1890s. Acting with substantial autonomy, governments even in a representative democracy may… refuse to accept or exercise powers that many citizens would thrust upon them.

American governments in the twentieth century, impelled by a more “progressive” ideology, readily accepted—indeed eagerly sought—expanded powers.[2]

Progressive skew starts with journalists’ worldviews

“Now the thing that God puts in a man that makes him a creative person makes him very sensitive to social nuances and that sort of thing. And overwhelmingly—not by a simple majority, but overwhelmingly—people with those tendencies tend to be on the liberal side of the spectrum. People on the conservative side of the political spectrum end up as vice presidents at General Motors.”

Individuals with strong political views will accept lower pay to do the type of reporting they believe in. Professionalism and peer review increase autonomy and independence in many fields.

…85 percent of Columbia Graduate School of Journalism students identified themselves as liberal, versus 11 percent conservative…

The journalists who voted for a major party candidate in presidential elections between 1964 and 1976 overwhelmingly went for Democrats: Lyndon Johnson 94 percent, Hubert Humphrey 87 percent, and George McGovern and Jimmy Carter 81 percent each.[3]

Progressive skew spins stories that show government people as heroes

Although people commonly suppose that news organizations report just the facts, journalists typically tell stories about current events. A report on a house fire, an earthquake, a factory closing, or a battle is actually a story about the event. It is no coincidence that we call news reports “stories.”

During times of foreign crisis and the early stages of a war, there is likely to be near-unanimous support for the war effort among the denizens of official Washington. The crucial expansion of government power can occur without the news media’s presenting the case against that expansion (for want of a prominent source).

…reporters place great reliance on government officials as sources. Members of the opposing party typically provide the “other” point of view, which limits the range of coverage.

Government… serves as a personalized hero, offering new policies to solve society’s problems. Thus, for example, a fiscal stimulus package to revive economic activity provides a happy ending to a story about a recession.[4]

Progressive skew tilts coverage towards government, and all the more during “crises”

…in this article media storms are operationalized as instances of a strong increase (≥150%) in attention to an issue/event that lasts at least 1 week and that attains a high share of the total agenda (≥20%) during at least that week.
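The operational definition above is mechanical enough to sketch as code. This is a minimal illustration of that definition, not the authors' actual measurement pipeline; the function name and data layout are assumptions.

```python
def find_media_storms(share, min_jump=1.5, min_share=0.20, min_weeks=1):
    """Flag media storms in a weekly attention series, following the
    paper's operational definition: attention jumps by at least 150%
    week-over-week, reaches at least 20% of the total agenda, and the
    elevated share lasts at least one week.

    `share` is a list of weekly attention shares (0..1) for one issue.
    Returns (start_week_index, duration_in_weeks) pairs.
    """
    storms = []
    for t in range(1, len(share)):
        prev, cur = share[t - 1], share[t]
        jumped = prev > 0 and (cur - prev) / prev >= min_jump
        if jumped and cur >= min_share:
            # count how long the share stays at or above the threshold
            length = 0
            while t + length < len(share) and share[t + length] >= min_share:
                length += 1
            if length >= min_weeks:
                storms.append((t, length))
    return storms
```

For example, a series that sits at 5% of the agenda, quintuples to 25% for two weeks, then falls back would register one storm starting at the jump week.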

New York Times front page story policy areas for 1996-2006 shows Progressive skew

New York Times front page story policy areas for 1996-2006: coverage is skewed, especially during media storms.[5]

Progressive skew of media “crises” is followed by disproportionately large government actions

We first replicated the well-known and general linear effect of media attention on political attention: when media attention goes up, politics follows.

More importantly, we found that, once in media storm mode, media attention has a significantly stronger effect on congressional hearings than when not in storm mode. Our findings—which were the first results of an empirical, systematic examination of incoming information—support the notion that punctuated political attention is due to a nonlinear processing of incoming information.[6]

  1. Neuman, W. Russell. “The threshold of public attention.” Public Opinion Quarterly 54.2 (1990): 159-176.
  2. Higgs, Robert. Crisis and Leviathan: Critical episodes in the growth of American government. Oxford University Press, 1987, pp. 78-79.
  3. Sutter, Daniel. “Can the media be so liberal? The economics of media bias.” Cato Journal 20.3 (2001): 431-451.
  4. Sutter, Daniel. “News media incentives, coverage of government, and the growth of government.” The Independent Review 8.4 (2004): 549-567.
  5. Boydstun, Amber E., Anne Hardy, and Stefaan Walgrave. “Two faces of media attention: Media storm versus non-storm coverage.” Political Communication 31.4 (2014): 509-531.
  6. Walgrave, Stefaan, et al. “The nonlinear effect of information on political attention: media storms and US Congressional Hearings.” Political Communication (2017).

Statistics bias and flaws are only human

Telephone tag dramatizes the growth of statistics bias and flaws

  • Random findings can be misidentified as significant.
  • Methodological problems can be overlooked by reviewers.
  • And popular reporting can misrepresent or even exaggerate the original findings.
  • The dangers are especially present if the finding is suggestive and the study is underpowered for the job of resolving the correlations in the data.[1]
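The last danger above can be demonstrated with a minimal simulation (my own sketch, not taken from the cited article): when a study is underpowered for a small true effect, the estimates that happen to clear a significance cutoff systematically exaggerate that effect.

```python
import random
import statistics

def significant_estimates(true_effect=0.1, sd=1.0, n=30, trials=2000, seed=0):
    """Simulate many small studies of a weak true effect and keep only
    the estimates that clear a conventional cutoff (|t| >= 2). With low
    power, the surviving estimates greatly overstate the effect."""
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        sample = [rng.gauss(true_effect, sd) for _ in range(n)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if abs(mean / se) >= 2:  # "statistically significant"
            kept.append(mean)
    return kept

hits = significant_estimates()
```

With a true effect of 0.1, only a small minority of the simulated studies reach significance, and the ones that do report estimates several times larger than the truth, the filtering effect that makes popular reporting of underpowered findings so unreliable.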

Statistics bias and flaws reflect strong incentives

Statistics are one of the standard types of evidence used by people in our society.

Activists trying to gain recognition for what they believe is a big problem will offer statistics that seem to prove that the problem is indeed a big one (and they may choose to downplay, ignore, or dispute any statistics that might make it seem smaller).

…experts… seem more important if their subject is a big, important problem.

The media favor disturbing statistics about big problems because big problems make more interesting, more compelling news…

Politicians use statistics to persuade us that they understand society’s problems and that they deserve our support.

Every statistic… is the product of choices—the choice between defining a category broadly or narrowly, the choice of one measurement over another, the choice of a sample. People choose definitions, measurements, and samples for all sorts of reasons: perhaps they want to emphasize some aspect of a problem; perhaps it is easier or cheaper to gather data in a particular way—many considerations can come into play.

Statistics bias and flaws can be checked out

The issue is whether a particular statistic’s flaws are severe enough to damage its usefulness.

It would be nice to have a checklist… potential problems with definitions, measurements, sampling, mutation, and so on.

  1. Who produced the number, and what interests might they have?
  2. What might be the sources for this number? How could one go about producing the figure?
  3. What are the different ways key terms might have been defined, and which definitions have been chosen? Is the definition so broad that it encompasses too many false positives (or so narrow that it excludes too many false negatives)? How would changing the definition alter the statistic?
  4. How might the phenomena be measured, and which measurement choices have been made?
  5. What sort of sample was gathered, and how might that sample affect the result?
  6. And how is the statistic used? Is it being interpreted appropriately, or has its meaning been mangled to create a mutant statistic?
  7. Are comparisons being made, and if so, are the comparisons appropriate? Are there competing statistics? If so, what stakes do the opponents have in the issue, and how are those stakes likely to affect their use of statistics? And is it possible to figure out why the statistics seem to disagree, what the differences are in the ways the competing sides are using figures?

In practice… the Critical need not investigate the origin of every statistic. When confronted with an interesting number, they may try to learn more, to evaluate, to weigh the figure’s strengths and weaknesses.

Statistics bias and flaws turn up in every kind of evidence

…this Critical approach… ought to apply to all the evidence we encounter when we scan a news report, or listen to a speech, whenever we learn about social problems.

Claims about social problems often feature dramatic, compelling examples; the Critical might ask whether an example is likely to be a typical case or an extreme, exceptional instance.

Claims about social problems often include quotations from different sources, and the Critical might wonder why those sources have spoken and why they have been quoted: Do they have particular expertise? Do they stand to benefit if they influence others?

Claims about social problems usually involve arguments about the problem’s causes and potential solutions. The Critical might ask whether these arguments are convincing. Are they logical? Does the proposed solution seem feasible and appropriate? And so on.

Being Critical—adopting a skeptical, analytical stance when confronted with claims—is an approach that goes far beyond simply dealing with statistics.[2]


  1. Gelman, Andrew, and David Weakliem. “Of beauty, sex and power: Too little attention has been paid to the statistical challenges in estimating small effects.” American Scientist 97.4 (2009): 310-316.
  2. Best, Joel. Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Updated edition, University of California Press, 2012, Scribd pp. 32, 178-182.

Data collection, sharing, and use are keys at Google

Google meeting with laptops everywhere and video shared online, illustrating Google's data collection, sharing, and use.[1]

Collecting data

“We need generalists… Lots of projects and companies grow without doing new things; they just get bigger teams. We want projects to end.”

Google… tackles most big projects in small, tightly focused teams, setting them up in an instant and breaking them down weeks later without remorse. “Their view is that there is much greater progress if you have many small teams going out at once.”

A typical task, from tweaking page designs to doing scientific research, involves six people. Hundreds of projects go on at the same time. Most teams throw out new software in six weeks or less and look at how users respond hours later.

With 82 million visitors and 2.3 billion searches in a month, Google can try a new user interface or some other wrinkle on just 0.1% of its users and get massive feedback, letting it decide a project’s fate in weeks. One success in ten tries is okay; one in five is superb.
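A 0.1% slice of 82 million monthly visitors is still roughly 82,000 users, which explains why feedback arrives so fast. The standard way to carve out such a slice, sketched below under my own assumptions (this is not Google's actual experiment framework), is to hash each user into a stable bucket so the same user always sees the same variant:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, fraction: float = 0.001) -> bool:
    """Deterministically assign a small fraction of users to an experiment
    by hashing (experiment, user_id) into [0, 1). The same user always
    gets the same answer, and different experiments bucket independently."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    # interpret the first 8 bytes of the digest as a number in [0, 1)
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < fraction

# Over many users, roughly `fraction` of them land in the experiment.
share = sum(in_experiment(str(i), "new_ui") for i in range(100_000)) / 100_000
```

Because assignment depends only on the hash, no per-user state needs to be stored, and shutting an experiment down is just a config change.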

Everyone from a failed venture moves to another urgent project. “If something is successful, you work it in, somehow… If it fails, you leave.”

Sharing data

Google… shares all the information it can with as many employees as possible…

It also pursues a rapid-fire food-fight strategy that throws out ideas as fast as possible, to see what sticks.

One key rule: You can’t call any idea “stupid.”

(Nor is most any idea too wild. On a recent day at the Google campus a bulletin board invited workers to a session on the dream of erecting a 200-mile-high elevator into space.)

Using data

One true god rules at Google: data. The more you collect, the more you know and the more certain your decisions can be, disciples believe…

“Often differences of opinion between smart people are differences of data…”

In some meetings people aren’t allowed to say “I think…” but instead must say “The data suggest…”

…the guy with the best data wins.[2]


  1. “Search Quality Meeting: Spelling for Long Queries (Annotated)” YouTube, 12 Mar. 2012, www.youtube.com/watch?v=JtRJXnXgE-A. Accessed 24 Nov. 2016.
  2. Hardy, Quentin. “Google Thinks Small.” Forbes 176.10 (14 Nov. 2005): 198-202.