Paul N. Edwards

 

On his book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming

Cover Interview of March 07, 2011

Lastly

Computer-based information infrastructures continue to evolve, with strong effects on climate science and politics.

Once upon a time, climate models and data were held and interpreted only by a tiny elite. But rapidly spreading ideals of transparency now combine with technological capabilities to permit or even require the open sharing of both. The Climate Code Foundation advocates the publication of climate model code, and many laboratories already do so. Most major climate data centers now make basic data readily available online (though using and interpreting such data still requires considerable expertise).

Meanwhile, anyone with a head for math can master powerful statistical analysis tools, and knowledge of computer programming is widespread. These capabilities have facilitated the rise of blogs such as Climate Audit, which purport to evaluate climate data independently.
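To give a sense of how low the barrier has become, here is a minimal sketch in Python of the kind of analysis a motivated outsider can now run at home: fitting a linear trend to an annual temperature-anomaly series. The numbers are synthetic stand-ins invented for illustration; in practice one would download a published series such as GISTEMP or HadCRUT from its data center.

```python
import numpy as np

# Synthetic annual global-mean temperature anomalies (degrees C),
# invented for illustration only; these are not real observations.
years = np.arange(1980, 2011)
rng = np.random.default_rng(0)
anomalies = 0.017 * (years - 1980) + 0.25 + rng.normal(0.0, 0.08, years.size)

# Ordinary least-squares fit of anomaly against year.
slope, intercept = np.polyfit(years, anomalies, deg=1)

print(f"Estimated trend: {slope * 10:.2f} degrees C per decade")
```

The point is not that such a fit settles anything (interpreting it still requires knowing about station coverage, adjustments, and uncertainty), but that the computation itself is now trivial for anyone with basic programming skills.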

These projects hold great potential benefits for science, but they also have enormous drawbacks. By moving technical discussion of climate knowledge into a public arena beyond the boundaries of accredited science, they make it far harder for journalists and the general public to distinguish genuine experts from mere pretenders and disinformation agents.

Anyone who still doubts that denialism about climate change is deliberately manufactured by vested interests should read Naomi Oreskes and Erik Conway’s Merchants of Doubt and James Hoggan’s Climate Cover-Up.

A Vast Machine is mainly about the history of climate knowledge. But on another level, the book concerns an even deeper problem: how we know anything about any complex, large-scale phenomenon.

Not so long ago, scientific ideology held data to be the ultimate test of truth and theory to be the ultimate source of knowledge. Models and simulations were seen as mere heuristics: poor imitations of reality, useful mainly to spark ideas about how to improve theory and generate more data.

In the last three decades things have changed. In today’s sciences, many kinds of data are understood to be imperfect and in need of adjustment through modeling. Meanwhile, simulation has become a principal technique of investigation—not merely a supplement to theory, but an embodiment of complexities that can’t be fully reduced to well-understood principles.
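To make concrete what "adjustment through modeling" can look like, here is a deliberately toy sketch of a single data-assimilation step: blending an imperfect model estimate with an imperfect observation, weighting each by its uncertainty. This is my own minimal illustration of the general idea, not a method drawn from the book, and the numbers in it are hypothetical; operational systems do the same kind of thing for millions of values at a time.

```python
def assimilate(model_value, model_var, obs_value, obs_var):
    """Blend a model estimate with an observation, weighted by variance.

    This is the scalar form of an optimal-interpolation (Kalman-style)
    update: the gain shifts the result toward whichever source is less
    uncertain.
    """
    gain = model_var / (model_var + obs_var)
    analysis = model_value + gain * (obs_value - model_value)
    analysis_var = (1.0 - gain) * model_var
    return analysis, analysis_var

# Hypothetical example: a model says 14.2 C with variance 0.4, while a
# sparse observation says 13.7 C with variance 0.1. The blended value
# lands much nearer the more certain observation.
print(assimilate(14.2, 0.4, 13.7, 0.1))  # roughly (13.8, 0.08)
```

The result is neither the raw observation nor the pure model output but a modeled blend of the two, which is precisely the sense in which modern global datasets are made rather than simply collected.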

Think of anything big you care about: earthquakes, rain forests, the global economy, world population, the ozone hole. Today you’ll find computer simulations and data modeling at the heart of the science that studies it. Climate science led the way to this colossal shift in the scientific method, whose ramifications we are still discovering.

The ubiquity of simulation and data modeling in modern science—and further afield, in financial models, polling, economic forecasts, Google searches, and the myriad other places models and data touch each other in contemporary life—requires us to seek a more realistic and sophisticated picture of how knowledge is made, and what it means to say that we know something.


© 2011 Paul Edwards