
Paul N. Edwards

Virtually everything we know about anything complex depends on computer models

Cover Interview of March 7, 2011

In a nutshell

The first decade of the 21st century has been the hottest on record. And NASA recently registered 2010 as the hottest year since instrumental observations began. On the planetary scale, the heat just keeps on rising.

But this January, snow fell on 49 of the 50 United States.

And yet this rare event is not as strange as it sounds. You can understand it in the context of global warming: it is the overall warming of our atmosphere that may be causing colder and snowier winters in the United States, by adding humidity to the air and by changing global circulation patterns.

How can we possibly know these things? Earth’s atmosphere is 50 billion cubic kilometers of swirling air and moisture, always in motion, whipped into complex, turbulent patterns by solar heat and planetary rotation. Tracking climate change means tracking what’s happening to all that air over long periods—from years to decades to centuries.

A Vast Machine traces the history of weather and climate science as a global knowledge infrastructure.

To study anything on a planetary scale, you have to make global data: collect measurements from everywhere, catalog them, store them, render them accessible for analysis.

Organized international weather observation began in the 1850s and grew rapidly into a near-global system, interrupted only briefly by the two world wars. It’s a remarkable story of long-term international cooperation, leading to a colossal kluge: a global communication system for weather and climate data, cobbled together from telegraphy, fax, shortwave radio, postal mail, computer networks, and half a dozen other media.

Making global data is hard enough—but it’s only the first step. The second, even harder step is to make data global.

Standards, instruments, and recording and reporting practices differ around the world and over time. You have to process noisy information, cope with incomplete coverage, and blend different types of measurements into a uniform whole, a data image of the entire planet.

To take just one example, most satellite instruments measure radiances at the top of the atmosphere. To combine that information with readings from ground stations and weather balloons, scientists have to translate those radiances into the variables that govern atmospheric behavior, such as temperature, pressure, and humidity. That requires complex data modeling.
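To make "data modeling" a little more concrete, here is a minimal sketch of one such translation step, with invented numbers: inverting the Planck function to turn a measured spectral radiance into a "brightness temperature." Real retrievals combine many channels and full radiative-transfer models with knowledge of the atmospheric state; this single-channel example only shows the flavor of the calculation.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength):
    """Invert the Planck function: given a spectral radiance
    (W m^-2 sr^-1 m^-1) at a single wavelength (m), return the
    temperature of a blackbody that would emit that radiance."""
    a = 2.0 * H * C**2 / wavelength**5
    return (H * C / (wavelength * K)) / math.log(1.0 + a / radiance)

# Illustrative (invented) numbers for an infrared channel near 11 micrometres.
wavelength = 11e-6          # m
radiance = 8.0e6            # W m^-2 sr^-1 m^-1, hypothetical measurement
print(f"{brightness_temperature(radiance, wavelength):.1f} K")
```

For these made-up numbers the result is roughly 288 K, a plausible value for radiation escaping to space through the 11-micrometre atmospheric "window."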

Thus virtually everything we know about weather and climate depends fundamentally on computer models.

“Climate science proceeds by constantly inverting its own infrastructure. All is turned upside down to find out how old data were collected, interpreted, and combined with other data—then the data are adjusted based on the findings.”

The wide angle

I went to graduate school in the 1980s, at the height of the Carter-Reagan Cold War. It was a very scary time, and not only because the risk of nuclear war reached heights unseen since the Cuban missile crisis. First acid rain, then the ozone hole, then the issue of “nuclear winter”—a global climate catastrophe caused by the smoke and dust from a superpower nuclear war—made it clear that human activity could seriously affect the global atmosphere.

I wrote my dissertation about computers’ central role in the American side of the Cold War. In the 1950s, military projects from hydrogen bomb design to continental air defense to nuclear strategy all spurred computer development, with massive government support. Computers became icons for that era’s widespread technological hubris: the idea that technology could deliver panoptic surveillance, global control, and ultimate power. That story was the subject of my first book, The Closed World: Computers and the Politics of Discourse in Cold War America, published by MIT Press in 1996.

The nuclear winter controversy arose from applying climate models to the effects of nuclear war. So it wasn’t really a long step for me to begin studying how computer models interacted with the politics of climate change.

Even before I finished The Closed World, I was deeply engaged in that research. For years I worked intensively with famed climate scientist Stephen Schneider, who died last summer. I interviewed dozens of climatologists and computer modelers. I spent countless days at scientific meetings and visited climate labs around the world.

As I was researching the book during the 1990s, climate politics exploded. But by around 2000, the main scientific controversies had settled out, and the concerted campaign to cast doubt on climate science—heavily funded by the coal and oil industries—seemed to be losing steam. Then George W. Bush’s administration revived the false controversies. Political appointees doctored scientific reports and attempted to muzzle government scientists such as James Hansen.

Meanwhile, finishing my book took much, much more time than I’d expected. By the time I was finally wrapping up the manuscript of A Vast Machine in the summer of 2009, Barack Obama was president and carbon-pricing bills seemed likely to move swiftly through Congress. Once more, I thought the controversies had finally ended and that A Vast Machine would fizzle into obscurity.

Instead, in November 2009, less than a month after I submitted the final page proofs, “Climategate” made headlines around the world. Someone—a hacker, or perhaps a disaffected insider—released climate data and thousands of private emails among scientists from the Climatic Research Unit in the United Kingdom. Climate change skeptics—or denialists, as most of them should really be called—made a lot of noise about what they call “manipulation” of climate data.

Their allegations illustrated exactly the conundrum A Vast Machine reveals: as a historical science, the study of climate change will always involve revisiting old data, correcting, modeling, and revising our picture of the climatic past.

This does not mean we don’t know anything. (We do.) And it also does not mean that climate data or climate models might turn out to be wildly wrong. (They won’t.)

Climate science proceeds by constantly inverting its own infrastructure. Making global data means turning the data collection process upside down to find out how old data were collected, interpreted, and combined with other data. This process can reveal errors, systematic instrument bias, or other problems. Scientists use this knowledge to delete mistaken readings, adjust for instrument bias, and combine newly discovered records with existing ones.

Following Geoffrey Bowker, I call this process “infrastructural inversion.” It’s fundamental to climate science. Infrastructural inversion means that there will never be a single, perfect, definitive global data set. Instead, we get what I call “shimmering”: global data converge—and they converge around a sharp warming trend over the last 40 years—but they never fully stabilize, because it is always possible to find more historical data and do a better job of data modeling. Unfortunately, infrastructural inversion can be abused in order to stoke controversy, if it’s misunderstood—or deliberately mis-portrayed—as a lack of knowledge rather than an essential process of knowledge production.
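As a toy illustration of the kind of adjustment such inversion produces, the sketch below uses an invented station record, reference series, and breakpoint to remove the artificial jump introduced by a documented instrument change: it estimates the station-minus-reference offset before and after the change and shifts the earlier segment to match the later one. Operational homogenization methods, such as pairwise comparisons across whole station networks, are far more sophisticated.

```python
import numpy as np

def adjust_for_breakpoint(station, reference, break_index):
    """Toy homogenization: estimate the artificial shift introduced at a
    known breakpoint (e.g., an instrument change) from station-minus-
    reference differences, then shift the earlier segment to match."""
    diff = station - reference
    offset = diff[break_index:].mean() - diff[:break_index].mean()
    adjusted = station.copy()
    adjusted[:break_index] += offset   # align the old segment with the new instrument
    return adjusted, offset

# Invented example: 20 annual means; a new instrument in year 10 adds a
# spurious +0.6 degC to every later reading.
rng = np.random.default_rng(0)
trend = 14.0 + 0.02 * np.arange(20)            # gentle warming, degC
reference = trend + rng.normal(0, 0.05, 20)    # nearby homogeneous station
station = trend + rng.normal(0, 0.05, 20)
station[10:] += 0.6                            # artificial instrument jump
adjusted, offset = adjust_for_breakpoint(station, reference, break_index=10)
print(f"Estimated instrument offset: {offset:.2f} degC")
```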


A close-up

Here’s something fascinating about meteorology, from Chapter 10: most of the data in a modern weather forecast aren’t collected from instruments. Instead, they’re created by a computer simulation of the atmosphere.

As new data come in from ground stations, satellites, and other platforms, software known as the “data assimilation system” compares them with its previous forecast for the current period. Where it finds discrepancies, the data assimilation system adjusts the forecast accordingly—but not always in favor of the “real” data from instruments. When incoming data are inconsistent with the forecast, it’s sometimes the case that the problem isn’t the computer simulation, but errors in the instruments, the reporting system, or the interpretation of signals.

As one meteorologist put it in 1988, a modern data assimilation system “can be viewed as a unique and independent observing system that can generate information at a scale finer than that of the conventional observing system.” In other words—to exaggerate only slightly—simulated data are better than real data.
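A minimal sketch of that blending step, with invented numbers: a single-variable analysis update of the kind used, in vastly more elaborate form, in operational data assimilation. The analysis is a weighted average of the model forecast and the observation, with the weight set by their assumed error variances, so an observation believed to be noisy moves the forecast only a little.

```python
def analysis_update(forecast, observation, forecast_var, obs_var):
    """Blend one forecast value with one observation of the same quantity.
    The gain weights the observation by how trustworthy it is relative
    to the forecast (smaller error variance -> larger weight)."""
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (observation - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Invented example: the model predicts 271.0 K at a grid point; a nearby
# station reports 268.5 K, but the station is known to be noisy.
analysis, analysis_var = analysis_update(
    forecast=271.0, observation=268.5,
    forecast_var=1.0,   # assumed model error variance, K^2
    obs_var=4.0,        # assumed observation error variance, K^2
)
print(f"Analysis: {analysis:.2f} K (uncertainty variance {analysis_var:.2f} K^2)")
```

Here the noisy observation pulls the forecast only a fifth of the way toward it, which is exactly the sense in which the system does not always side with the "real" data.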

The tremendous success of data assimilation systems in weather forecasting has a corollary for climate science, worked out over the last two decades. It might just be possible, some scientists think, to take the world’s entire collection of weather data and run it through a modern data assimilation system and forecast model to produce a kind of movie of global weather from about 1900 on.

Called “reanalysis,” this process (the subject of Chapter 12) has already produced some very important climate data sets for the past 40-50 years. So far, they’re less accurate than our historical climate data—but they’re also far more detailed, because of the information the models can actually generate.
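The reanalysis idea can be sketched as a loop over history: step a model forward, blend in whatever observations exist for that date, and record the result. The toy "model," "assimilation," and observation dictionary below are invented stand-ins; real reanalyses use full forecast models, millions of observations per day, and gain-weighted updates like the one shown earlier.

```python
def reanalyze(obs_by_step, model_step, assimilate, initial_state, n_steps):
    """Toy reanalysis: march a model through the past, folding in
    observations where they exist, to produce a continuous record."""
    state, record = initial_state, []
    for t in range(n_steps):
        state = model_step(state)              # carry information forward in time
        if t in obs_by_step:                   # coverage is incomplete: not every step has data
            state = assimilate(state, obs_by_step[t])
        record.append(state)                   # the "movie" of past weather
    return record

# Invented single-variable stand-ins for a forecast model and an assimilation step.
model_step = lambda x: 0.9 * x + 1.5                       # toy dynamics
assimilate = lambda fcst, obs: fcst + 0.5 * (obs - fcst)   # simple halfway blend
sparse_obs = {3: 16.2, 7: 14.8, 8: 15.1}                   # observations at only a few steps
print(reanalyze(sparse_obs, model_step, assimilate, initial_state=15.0, n_steps=10))
```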

“Virtually everything we know about weather and climate depends fundamentally on computer models. Simulation has become a principal technique of investigation—not merely a supplement to theory, but an embodiment of complexities that can’t be fully reduced to well-understood principles.”

Lastly

Computer-based information infrastructures continue to evolve, with strong effects on climate science and politics.

Once upon a time, climate models and data were held and interpreted only by a tiny elite. But rapidly spreading ideals of transparency now combine with technological capabilities to permit or even require the open sharing of both. The Climate Code Foundation advocates the publication of climate model code, and many laboratories already do that. Most major climate data centers now make basic data readily available online (though using and interpreting such data still requires considerable expertise).

Meanwhile, anyone with a head for math can master powerful statistical analysis tools, and knowledge of computer programming is widespread. These capabilities have facilitated the rise of blogs such as Climate Audit, which purports to independently evaluate climate data.

These projects hold great potential for scientific benefits—but they also have enormous drawbacks. By moving technical discussions of climate knowledge into a public universe beyond the boundaries of accredited science, they make it exponentially harder for journalists and the general public to distinguish between genuine experts, mere pretenders, and disinformation agents.

Anyone who still doubts that denialism about climate change is deliberately manufactured by vested interests should read Naomi Oreskes and Erik Conway’s Merchants of Doubt and James Hoggan’s Climate Cover-Up.

A Vast Machine is mainly about the history of climate knowledge. But on another level, the book concerns an even deeper problem, namely how we know anything about any complex, large-scale phenomenon.

Not so long ago, scientific ideology held data as the ultimate test of truth and theory as the ultimate source of knowledge. Models and simulations were seen as mere heuristics, poor imitations of reality useful mainly to spark ideas about how to improve theory and generate more data.

In the last three decades things have changed. In today’s sciences, many kinds of data are understood to be imperfect and in need of adjustment through modeling. Meanwhile, simulation has become a principal technique of investigation—not merely a supplement to theory, but an embodiment of complexities that can’t be fully reduced to well-understood principles.

Think of anything big you care about: earthquakes, rain forests, the global economy, world population, the ozone hole. Today you’ll find computer simulations and data modeling at the heart of the science that studies it. Climate science led the way to this colossal shift in the scientific method, whose ramifications we are still discovering.

The ubiquity of simulation and data modeling in modern science—and further afield, in financial models, polling, economic forecasts, Google searches, and the myriad other places models and data touch each other in contemporary life—requires us to seek a more realistic and sophisticated picture of how knowledge is made, and what it means to say that we know something.

© 2011 Paul Edwards
Paul N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. MIT Press, 528 pages, 9 x 6¼ inches. ISBN 978-0262013925

Paul N. Edwards is Professor of Information and History at the University of Michigan. He writes and teaches about the history, politics, and culture of information infrastructures. In addition to A Vast Machine, featured in his Rorotoko interview, Edwards is the author of The Closed World: Computers and the Politics of Discourse in Cold War America (MIT Press, 1996) and co-editor (with Clark Miller) of Changing the Atmosphere: Expert Knowledge and Environmental Governance (2001). He also maintains a personal website and one with additional information and downloads related to A Vast Machine.
