Paul N. Edwards


On his book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming

Cover Interview of March 07, 2011

A close-up

Here’s something fascinating about meteorology, from Chapter 10: most of the data in a modern weather forecast aren’t collected from instruments. Instead, they’re created by a computer simulation of the atmosphere.

As new data come in from ground stations, satellites, and other platforms, software known as the “data assimilation system” compares them with its previous forecast for the current period. Where it finds discrepancies, the data assimilation system adjusts the forecast accordingly, but not always in favor of the “real” data from instruments. When incoming data conflict with the forecast, the fault sometimes lies not with the computer simulation but with the instruments, the reporting system, or the interpretation of their signals.
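To see how a forecast and an observation get weighed against each other, here is a minimal, hypothetical sketch in Python of one assimilation step at a single grid point. It is not the code of any operational system; the function name, the temperatures, and the error variances are all illustrative assumptions. The principle, though, is the one described above: the forecast is adjusted toward the data only in proportion to how much the data are trusted.

```python
def assimilate(forecast, obs, forecast_var, obs_var):
    """Blend a model forecast with an observation, weighting each by its
    estimated error variance (the core idea behind Kalman-filter-style
    data assimilation). When the observation is judged noisier than the
    forecast, the analysis stays closer to the forecast."""
    gain = forecast_var / (forecast_var + obs_var)  # weight given to the observation
    analysis = forecast + gain * (obs - forecast)   # adjust toward the data, but only partly
    analysis_var = (1.0 - gain) * forecast_var      # the blend is more certain than either input alone
    return analysis, analysis_var

# Illustrative numbers only: a 6-hour forecast of 15.0 °C at one grid point,
# and an incoming station report of 17.5 °C from a sensor judged fairly noisy.
analysis, var = assimilate(forecast=15.0, obs=17.5, forecast_var=0.5, obs_var=2.0)
print(f"analysis: {analysis:.2f} °C (variance {var:.2f})")
# -> analysis: 15.50 °C. The "real" observation moved the estimate only a fifth
#    of the way, because the forecast was judged more trustworthy than the sensor.
```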

As one meteorologist put it in 1988, a modern data assimilation system “can be viewed as a unique and independent observing system that can generate information at a scale finer than that of the conventional observing system.” In other words—to exaggerate only slightly—simulated data are better than real data.

The tremendous success of data assimilation systems in weather forecasting has a corollary for climate science, worked out over the last two decades. It might just be possible, some scientists think, to take the world’s entire collection of weather data and run it through a modern data assimilation system and forecast model to produce a kind of movie of global weather from about 1900 on.

Called “reanalysis,” this process (the subject of Chapter 12) has already produced some very important climate data sets for the past 40-50 years. So far they’re less accurate than our historical climate data, but they’re also far more detailed, because the models generate information for every point on the grid, including times and places where no instrument ever reported.
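A reanalysis can be pictured as running that same forecast-and-correct cycle forward through the entire historical archive. The toy sketch below, with a deliberately simplistic stand-in “model” (relaxation toward a climatological mean) and a hypothetical, sparse observation archive, shows only the shape of the loop; a real reanalysis uses a full atmospheric model and a vastly richer data store.

```python
from datetime import datetime, timedelta
import random

def reanalyze(obs_archive, start, end, step_hours=6):
    """Toy reanalysis loop: march one forecast/assimilation cycle through
    archived observations to produce a continuous record, even at times
    when nothing was observed. The "model" here is just relaxation toward
    a climatological mean; real reanalyses use a full forecast model."""
    state = 15.0                           # assumed starting temperature, degrees C
    step = timedelta(hours=step_hours)
    t, frames = start, []
    while t <= end:
        obs = obs_archive.get(t)           # None when nothing was reported at this time
        if obs is not None:
            state += 0.2 * (obs - state)   # assimilation: nudge the state toward the data
        frames.append((t, state))          # one "frame" of the global weather movie
        state += 0.05 * (15.0 - state)     # toy forecast step: drift back toward climatology
        t += step
    return frames

# Hypothetical, sparse archive: an observation only every seventh analysis time,
# roughly like the thin station coverage of the early twentieth century.
start = datetime(1900, 1, 1)
archive = {start + timedelta(hours=6 * k): 15.0 + random.gauss(0, 1)
           for k in range(0, 120, 7)}
movie = reanalyze(archive, start, start + timedelta(days=30))
print(f"{len(movie)} analysis frames built from {len(archive)} observations")
```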