Paul N. Edwards


On his book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming

Cover Interview of March 06, 2011

The wide angle

I went to graduate school in the 1980s, at the height of the Carter-Reagan Cold War.  It was a very scary time, and not only because the risk of nuclear war reached heights unseen since the Cuban missile crisis. First acid rain, then the ozone hole, then the issue of “nuclear winter”—a global climate catastrophe caused by the smoke and dust from a superpower nuclear war—made it clear that human activity could seriously affect the global atmosphere.

I wrote my dissertation about computers’ central role in the American side of the Cold War. In the 1950s, military projects from hydrogen bomb design to continental air defense to nuclear strategy all spurred computer development, with massive government support. Computers became icons for that era’s widespread technological hubris: the idea that technology could deliver panoptic surveillance, global control, and ultimate power. That story was the subject of my first book, The Closed World: Computers and the Politics of Discourse in Cold War America, published by MIT Press in 1996.

The nuclear winter controversy arose from applying climate models to the effects of nuclear war. So it wasn’t really a long step for me to begin studying how computer models interacted with the politics of climate change.

Even before I finished The Closed World, I was deeply engaged in that research. For years I worked intensively with famed climate scientist Stephen Schneider, who died last summer. I interviewed dozens of climatologists and computer modelers. I spent countless days at scientific meetings and visited climate labs around the world.

As I was researching the book during the 1990s, climate politics exploded. But by around 2000, the main scientific controversies had been settled, and the concerted campaign to cast doubt on climate science—heavily funded by the coal and oil industries—seemed to be losing steam. Then George W. Bush’s administration revived the false controversies. Political appointees doctored scientific reports and attempted to muzzle government scientists such as James Hansen.

Meanwhile, finishing my book took much, much more time than I’d expected. By the time I was finally wrapping up the manuscript of A Vast Machine in the summer of 2009, Barack Obama was president and carbon-pricing bills seemed likely to move swiftly through Congress. Once more, I thought the controversies had finally ended and that A Vast Machine would fizzle into obscurity.

Instead, in November 2009, less than a month after I submitted the final page proofs, “Climategate” made headlines around the world. Someone—a hacker, or perhaps a disaffected insider—released climate data and thousands of private emails among scientists from the Climatic Research Unit in the United Kingdom. Climate change skeptics—or denialists, as most of them should really be called—made a lot of noise about what they called “manipulation” of climate data.

Their allegations illustrated exactly the conundrum A Vast Machine reveals: as a historical science, the study of climate change will always involve revisiting, correcting, and remodeling old data, and thus revising our picture of the climatic past.

This does not mean we don’t know anything. (We do.) Nor does it mean that climate data or climate models might turn out to be wildly wrong. (They won’t.)

Climate science proceeds by constantly inverting its own infrastructure.  Making global data means turning the data collection process upside down to find out how old data were collected, interpreted, and combined with other data. This process can reveal errors, systematic instrument bias, or other problems. Scientists use this knowledge to delete mistaken readings, adjust for instrument bias, and combine newly discovered records with existing ones.

Following Geoffrey Bowker, I call this process “infrastructural inversion.” It’s fundamental to climate science. Infrastructural inversion means that there will never be a single, perfect, definitive global data set. Instead, we get what I call “shimmering”: global data converge—and they converge around a sharp warming trend over the last 40 years—but they never fully stabilize, because it is always possible to find more historical data and do a better job of data modeling. Unfortunately, infrastructural inversion can be abused to stoke controversy when it is misunderstood—or deliberately misportrayed—as a lack of knowledge rather than an essential process of knowledge production.