NYT Book Review Questions Global Warming: Nature Too Complex For Computer Models

I must admit that I never thought I’d see this kind of book review at the New York Times. This is especially true given the recent zealotry surrounding global warming, and how much of the media-driven hysteria is based on computer models created to predict future climate events.

Much to my elated surprise, the Times published an article Tuesday titled “The Problems in Modeling Nature, With Its Unruly Natural Tendencies.”

I imagine many readers are checking that link about now as they question my veracity. Go ahead. I can take it.

Let’s cut to the chase, shall we (emphasis mine throughout):

Now [coastal geologist Orrin H.] Pilkey and his daughter Linda Pilkey-Jarvis, a geologist in the Washington State Department of Geology, have expanded this view into an overall attack on the use of computer programs to model nature. Nature is too complex, they say, and depends on too many processes that are poorly understood or little monitored — whether the process is the feedback effects of cloud cover on global warming or the movement of grains of sand on a beach.

Yep. You read that correctly. Go back and read it again, for it largely refutes all the dire predictions by global warming alarmists that are based on computer models. Furthermore, it explains the recent study out of Ohio State University concerning lower temperatures in Antarctica than had been predicted by such models. The article continued:

Their book, “Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future,” originated in a seminar Dr. Pilkey organized at Duke to look into the performance of mathematical models used in coastal geology. Among other things, participants concluded that beach modelers applied too many fixed values to phenomena that actually change quite a lot. For example, “assumed average wave height,” a variable crucial for many models, assumes that all waves hit the beach in the same way, that they are all the same height and that their patterns will not change over time. But, the authors say, that’s not the way things work.

Also, modelers’ formulas may include coefficients (the authors call them “fudge factors”) to ensure that they come out right. And the modelers may not check to see whether projects performed as predicted.

Hmmm. Modelers’ formulas may include coefficients to ensure that they come out right. Sounds exactly like what folks who refer to anthropogenic global warming as “junk science” have been claiming for years, wouldn’t you agree? Regardless, the article continued:

Eventually, the seminar participants widened the project, concluding that erroneous assumptions, fudge factors and the reluctance to check predictions against unruly natural outcomes produce models with, as the authors put it, “no demonstrable basis in nature.” Among other problems, they cite much-modeled but nevertheless collapsed North Atlantic fishing stocks, poisonous pools unexpectedly produced by open pit mining, and invasive plants and animals that routinely outflank their modelers.

Stay with this, for it gets even better:

Two issues, the authors say, illustrate other problems with modeling. One is climate change, in which, they say, experts’ justifiable caution about model uncertainties can encourage them to ignore accumulating evidence from the real world.

Hmmm. Scientists actually ignore accumulating evidence from the real world. I wonder how many of the scientists involved in the recent United Nations Intergovernmental Panel on Climate Change report fit this description. The article continued:

But, the authors say it is important to remember that model sensitivity assesses the parameter’s importance in the model, not necessarily in nature. If a model itself is “a poor representation of reality,” they write, “determining the sensitivity of an individual parameter in the model is a meaningless pursuit.”

And here comes the marvelous payoff:

Given the problems with models, should we abandon them altogether? Perhaps, the authors say. Their favored alternative seems to be adaptive management, in which policymakers may start with a model of how a given ecosystem works, but make constant observations in the field, altering their policies as conditions change. But that approach has drawbacks, among them requirements for assiduous monitoring, flexible planning and a willingness to change courses in midstream. For practical and political reasons, all are hard to achieve.

Besides, they acknowledge, people seem to have such a powerful desire to defend policies with formulas (or “fig leaves,” as the authors call them), that managers keep applying them, long after their utility has been called into question.

So the authors offer some suggestions for using models better. We could, for example, pay more attention to nature, monitoring our streams, beaches, forests or fields to accumulate information on how living things and their environments interact. That kind of data is crucial for models. Modeling should be transparent. That is, any interested person should be able to see and understand how the model works — what factors it weighs heaviest, what coefficients it includes, what phenomena it leaves out, and so on. Also, modelers should say explicitly what assumptions they make.

How does this pertain specifically to global warming?

And instead of demanding to know exactly how high seas will rise or how many fish will be left in them or what the average global temperature will be in 20 years, they argue, we should seek to discern simply whether seas are rising, fish stocks are falling and average temperatures are increasing. And we should couple these models with observations from the field. Models should be regarded as producing “ballpark figures,” they write, not accurate impact forecasts.

“If we wish to stay within the bounds of reality we must look to a more qualitative future,” the authors write, “a future where there will be no certain answers to many of the important questions we have about the future of human interactions with the earth.”

Kind of sounds like they’re suggesting that models should not be used to establish a consensus on future climate events, doesn’t it?

Alas, it seems unlikely the global warming alarmists are going to accept any of this. After all, their goals have nothing at all to do with science. As such, why should they concern themselves with something so arcane as the value of computer models?

Noel Sheppard