Question 3

Can computer-based models meaningfully replicate the impact of all of the natural factors that may significantly influence climate?


1st expert response by Dr. Christopher Essex
Professor of Applied Mathematics, Department of Applied Mathematics, The University of Western Ontario, and co-author of ‘Taken by Storm: the troubled science, policy and politics of global warming’.

Model Misconceptions

There are many serious, persistent misconceptions about the science behind climate. Some arise from ignorance and naivety, while others are convenient for certain interests to maintain. Here are some of these misconceptions, particularly those relating to climate modelling.

1. The climate problem is a solved problem.
False:
Powerful interests find this misconception desirable. Others are misled by naïve and misguided childhood distortions that make climate seem simple. But naivety about climate models has helped this misconception flourish perhaps more than anything else. Climate is anything but simple, and climate models have not bested it. Climate is one of the most challenging open problems in modern science: the oceans and atmosphere form the definitive example of a complex system. Some knowledgeable scientists believe that the climate problem can never be solved. Even the IPCC, which purports to represent the consensus opinion of scientists, says as much. Here is a quote directly from them (Third Assessment Report, 2001, Section 14.2.2.2, page 774):
“In climate research and modelling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that long-term prediction of future climate states is not possible.”
That is perhaps a bit extreme even for me. However, as many as two of the seven Clay Millennium Prize problems (famous unsolved problems of mathematics) sit at the foundations of climate science. You would win a million dollars if you were to actually solve the primary governing equations that we need to forecast climate. That’s right: we don’t know how to solve the main equations for forecasting climate. What we do instead is put approximations to them onto computers and hope for the best.

2. Computer climate models only solve the exact physical equations.
False:
I am amazed how many influential academics are so naïve about computation and models that they are prepared to state this misconception openly. Some have even used climate model output as future “observations” — even in refereed publications. This misconception is driven by the mystique of computers. We have Hollywood special effects and fictional stories of alternative virtual realities based on computers. Children now learn all of their mathematics aided by computers. It is hard to imagine that there is anything wrong with what a computer says, when everything one experiences suggests that computers contain all of mathematics and physics. However, things are the other way around: computer technology is an application of mathematics and physics.
Mathematics is too big to fit into computers. Only an infinitely large computer could represent all numbers. There is an entire, highly sophisticated research field known as numerical analysis that attempts to cope with the very real limitation of finitely sized computers. When we teach students how to do computations properly, we teach them that computations always leave something out. The proper way is not to leave out anything that matters. Even proper computation of real physical problems is at best an approximation. It is not hard to estimate how long it would take a contemporary computer to forecast climate 10 years into the future by following the simple rules of proper computation. The computer would take at least 10²⁰ years to do this. That’s longer than the age of the Universe by a factor of 10¹⁰. Thus we cannot produce computer climate forecasts by doing classical proper computing on the basic physical equations, let alone solve those equations without approximation.
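
A rough back-of-the-envelope sketch in Python illustrates where a number of that order can come from. Every input below (cell size, time step, operations per cell, machine speed) is an illustrative assumption, not a figure taken from the text; the point is only that resolving the small scales that actually matter pushes the cost to absurd orders of magnitude.

```python
# Rough, illustrative estimate of how long "proper" computation of the
# atmosphere would take, i.e. computation that resolves the small scales
# that actually matter. Every number below is an assumption for illustration.

SECONDS_PER_YEAR = 3.15e7

earth_surface_m2 = 5.1e14   # surface area of the Earth
fluid_depth_m    = 1.0e4    # roughly 10 km of atmosphere (oceans ignored)
cell_size_m      = 1.0e-3   # millimetre-scale cells to resolve turbulent dissipation
time_step_s      = 1.0e-3   # a time step small enough to match such cells
ops_per_cell     = 1.0e3    # arithmetic operations per cell per step (crude guess)
machine_ops_s    = 1.0e15   # a petaflop-class supercomputer

forecast_years = 10
n_cells   = earth_surface_m2 * fluid_depth_m / cell_size_m**3
n_steps   = forecast_years * SECONDS_PER_YEAR / time_step_s
total_ops = n_cells * n_steps * ops_per_cell

wallclock_years = total_ops / machine_ops_s / SECONDS_PER_YEAR
print(f"cells: {n_cells:.1e}, steps: {n_steps:.1e}, operations: {total_ops:.1e}")
print(f"wall-clock time: {wallclock_years:.1e} years")  # ~5e19 years with these inputs, i.e. of order 10**20
```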

3. Models reproduce observations accurately.
It Depends:
Since computers cannot do proper computation of those equations that are essential for forecasting climate, the only alternative is “improper” computation. By “improper” I mean that we are reduced to using that part of the computation that approximates the correct physics, while knowingly leaving out things that do matter. In practice what is left out is substantial: nearly all of what we think of as weather gets tossed out, including clouds, thunderstorms, all vertical motion, convection, rain and more. Nearly all processes that move energy in the vertical must be overlooked. The resolution (hundreds of kilometres) of these models is just too coarse for these things. However, results would be nonsense without some fix for such omissions. The fix is fakery. Fake physics can be much faster to compute than true physics. As long as it’s faster, and it seems to produce close to the correct results, it’s okay. The fake physics and the true physics differ in important ways. Fake physics does not conserve the same things as true physics. Fake physics has constants that can be adjusted to make the model agree with observations as nearly as possible. We say that models with such constants are empirical. This will be a surprise to some people. Models don’t use true physics! The adjustable, falsely conserved constants are called parameters. Thus these fake laws of physics are called parameterizations. The parameters are adjusted to best fit observations. John von Neumann, one of the fathers of modern computers, among other things, famously said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
If empirical models don’t agree with Nature, they can be fixed so that they do agree. Nonetheless, climate models do not agree with observations in a number of notable ways. Some model temperature fields are known to be systematically wrong. They are also notoriously bad at clouds, rain and many other things. However, if modelers really wanted to fix these things, they could. It is a testament to their honesty that they do not. They want to make their models as accurate as possible without too much “fitting.” That means using as few adjustable parameters as possible and using physical intuition to develop parameterizations. But the resulting parameterizations, no matter how good, are no implementation of the correct physics.
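
As a toy illustration of what adjusting parameters to fit observations looks like in practice, the sketch below fits a made-up two-constant “parameterization” to hypothetical observations by least squares. It bears no relation to any real climate-model parameterization, but the logic is the same: tune the free constants until the output matches the data.

```python
import numpy as np

# Toy "parameterization": an unresolved sub-grid process is replaced by a
# simple formula, flux = a * wind_speed + b.  The constants a and b have no
# fundamental physical status; they are tuned so that the formula reproduces
# the observations as nearly as possible.  (All data here are made up.)
rng = np.random.default_rng(0)
wind_speed = np.linspace(1.0, 15.0, 30)                          # hypothetical grid-cell winds
observed_flux = 2.3 * wind_speed + 5.0 + rng.normal(0, 1.5, 30)  # hypothetical observations

# Least-squares adjustment of the two free parameters.
a, b = np.polyfit(wind_speed, observed_flux, 1)
print(f"tuned parameters: a = {a:.2f}, b = {b:.2f}")

# The tuned formula now "agrees with observations", but nothing guarantees it
# conserves what the true physics conserves, or that it still holds outside
# the range of the data it was fitted to.
predicted_flux = a * wind_speed + b
```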

4. Any difficulties for climate models can be fixed.
False:
Empirical models, not using the full physics, can be effective nonetheless by fitting to, or “training” on, observations. Disagreements between empirical models and observations are fixed by adjusting the parameterizations so that they reproduce the observations correctly. Empirical models in applied fields like engineering work this way. However, the problem of forecasting climate is not an engineering problem. What if the next 50 years of observations are not like the last 50 years? Will all of the parameterizations, such as they are, work as well? No one knows. The next 50 years could be different simply because of the internal nature of the system, or they could be different because the climate really does change. Either way, the models would likely have to be retrained to work properly under the new circumstances. Knowing in advance that such a shift is coming, or redoing the parameterizations in advance, cannot be done empirically. No one really knows what to do with climate model calculations over timescales longer than we have observations.
What do we train models on over those timescales? The conventional wisdom is that, on those timescales, models should behave like a hot brick rather than like a ringing bell. Thus nothing happens in models on long timescales; in that domain, all change must come from outside. There is no empirical way to know whether Nature actually works that way. It could easily change on its own, without external influence. However, long timescales are what forecasting climate is all about. Climate models cannot be fixed.

5. Models have value.
True:
Climate models have so many game-ending limitations that one might be tempted to suggest that we should just get rid of big climate models. I have heard some frustrated people say such things. But models are important academically, certainly in terms of understanding how the various components that make up climate fit together. That is no trivial problem. While serious long-range forecasting with them is really an exotic form of extrapolation at best, they are truly the best we have. That is not because the people building the models have not done a good job; it is because the scale of the problem is really beyond our capability, and that will remain so for the foreseeable future. But make no mistake: models are no substitute for a theory of climate. We do not have a theory for climate—yet.

2nd Expert response by J Scott Armstrong and Kesten Green
J Scott Armstrong is a professor at the Wharton School of the University of Pennsylvania and Kesten Green is a senior lecturer at the International Graduate School of Business at the University of South Australia. They are leading researchers on forecasting.

Forecasting experts’ simple model leaves expensive climate models cold

A simple model was found to produce forecasts that are over seven times more accurate than forecasts from the procedures used by the United Nations Intergovernmental Panel on Climate Change (IPCC).

This important finding is reported in an article titled “Validity of climate change forecasting for public policy decision making” in the latest issue of the International Journal of Forecasting. It is the result of collaboration among forecasters J. Scott Armstrong of the Wharton School, Kesten C. Green of the University of South Australia, and climate scientist Willie Soon of the Harvard-Smithsonian Center for Astrophysics.

In an earlier paper (http://www.forecastingprinciples.com/files/WarmAudit31.pdf), Armstrong and Green found that the IPCC’s approach to forecasting climate violated 72 principles of forecasting. To put this in context: would you put your children on a trans-Atlantic flight if you knew that the plane had failed engineering checks for 72 out of 127 relevant items on the checklist?

The IPCC violations of forecasting principles were partly due to their use of models that were too complex for the situation. Contrary to everyday thinking, complex models provide forecasts that are less accurate than forecasts from simple models when the situation is complex and uncertain.

Confident that a forecasting model that followed scientific forecasting principles would provide more accurate forecasts than those provided by the IPCC, Green, Armstrong and Soon used a model that was more consistent with forecasting principles and knowledge about climate.

The forecasting model was the so-called “naïve” model. It assumes things will remain the same. Being such a simple model, people are generally not aware of its power. In contrast to the IPCC’s central forecast that global mean temperatures will rise by 3°C over a century, the naïve model simply forecasts that temperatures next year, the year after, and so on for each of 100 years into the future will remain the same as the temperature in the year prior to the start of the forecasting exercise. Picture a graph of temperature over time: the naïve forecasts would appear as a flat line.
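
In code, the naïve model is almost trivially simple. The sketch below uses a short, hypothetical series of annual temperature anomalies; it illustrates the persistence idea and is not the authors’ software.

```python
import numpy as np

def naive_forecasts(series, horizon=100):
    """Forecast every future year to equal the last observed value."""
    return np.full(horizon, series[-1])

# Hypothetical annual global mean temperature anomalies (°C)
history = np.array([-0.37, -0.42, -0.40, -0.35, -0.31])
print(naive_forecasts(history, horizon=5))  # [-0.31 -0.31 -0.31 -0.31 -0.31]
```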

The naïve model approach is confusing to non-forecasters who are aware that temperatures have always varied. Moreover, much has been made of the observation that the temperature series that the IPCC uses shows a broadly upward trend since 1850 and that this coincides with increasing industrialization and associated increases in manmade carbon dioxide gas emissions.

To test the naïve model, we started with the actual global average temperature for the year 1850 and simulated making annual forecasts from one to 100 years after that date – i.e. for every year from 1851 to 1950. We then started with the actual 1851 temperature and made simulated forecasts for each of the next 100 years after that date – i.e. for every year from 1852 to 1951. This process was repeated over and over, starting with the actual temperature in each subsequent year up to 2007 and simulating forecasts for the years that followed (a full 100 years of forecasts for each series starting up to 1908, after which the number of years remaining in the temperature record diminishes as we approach the present). This produced 10,750 annual temperature forecasts across all time horizons from one to 100 years, which we then compared with forecasts for the same periods from the IPCC forecasting procedures. It was the first time that the IPCC’s forecasting procedures had been subjected to a large-scale test of the accuracy of their forecasts.
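
The rolling-origin test described above can be sketched in a few lines of Python. The sketch below is a simplified reconstruction using placeholder data and a constant warming rate (0.03°C per year, chosen here for illustration) as a stand-in for the IPCC projection; it is not the authors’ actual code, data or benchmark.

```python
import numpy as np

def rolling_origin_mae(actuals, max_horizon=100, warming_per_year=0.03):
    """Mean absolute error by horizon for the naive model and for a
    constant-warming benchmark (a stand-in for the IPCC projection)."""
    naive_errs = [[] for _ in range(max_horizon)]
    trend_errs = [[] for _ in range(max_horizon)]
    for origin in range(len(actuals) - 1):          # each year is a forecast origin
        base = actuals[origin]
        for h in range(1, max_horizon + 1):
            target = origin + h
            if target >= len(actuals):              # record runs out near the present
                break
            naive_errs[h - 1].append(abs(actuals[target] - base))
            trend_errs[h - 1].append(abs(actuals[target] - (base + warming_per_year * h)))
    mae = lambda errs: np.array([np.mean(e) if e else np.nan for e in errs])
    return mae(naive_errs), mae(trend_errs)

# Usage with a placeholder annual series standing in for 1850-2008 observations:
rng = np.random.default_rng(1)
placeholder = np.cumsum(rng.normal(0.0, 0.1, 159))   # 159 years of made-up anomalies
naive_mae, trend_mae = rolling_origin_mae(placeholder)
print(naive_mae[49], trend_mae[49])                  # e.g. the 50-year-ahead errors
```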

Over all the forecasts, the IPCC error was 7.7 times larger than the error from the naïve model.

While the superiority of the naïve model was modest for one to ten-year-ahead forecasts (where the IPCC error was 1.5 times larger), its superiority was enormous for the 91- to 100-year-ahead forecasts, where the IPCC error was 12.6 times larger.

Is it proper to conduct validation tests?

In many cases, such as the climate change situation, people claim: “Things have changed! We cannot use the past to forecast.” While they may think that their situation is unique, there is no logic to this argument. The only way to forecast the future is by learning from the past. In fact, those who are proclaiming the dangers of global warming also base their assumptions on their analyses of the past.

Could one improve upon the naïve model? While the naïve model is much more consistent with forecasting principles than the IPCC’s approach to forecasting climate, it does violate some principles. For example, the naïve model violates the principle that one should use as long a time series as possible, because it bases all forecasts simply on the global average temperature for the single year just prior to making the forecasts. It also fails to combine forecasts from different reasonable methods. The authors planned to start simple with this self-funded project and then to obtain funding for a more ambitious forecasting effort to ensure that all principles were followed. This would no doubt improve accuracy. However, the forecasts from the naïve model were very accurate. For example, the mean absolute error for the 108 fifty-year-ahead forecasts was only 0.24°C. It is difficult to see any economic value in reducing such a small forecast error.
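
Combining forecasts from different reasonable methods, one of the principles mentioned above, is also easy to sketch. The example below is purely illustrative: it averages the persistence forecast with a forecast based on the long-run mean of the series, using equal weights, which is a common default when there is no strong reason to prefer one method over another.

```python
import numpy as np

def combined_forecast(series, horizon=50):
    """Equal-weight combination of two simple benchmark forecasts."""
    persistence = np.full(horizon, series[-1])       # naive: last value persists
    long_run    = np.full(horizon, np.mean(series))  # long-run mean of the series
    return (persistence + long_run) / 2.0

history = np.array([-0.37, -0.42, -0.40, -0.35, -0.31, -0.28])  # hypothetical anomalies
print(combined_forecast(history, horizon=3))
```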

We concluded our most recent paper with the following thoughts:

Global mean temperatures have been remarkably stable over policy-relevant horizons. The benchmark forecast is that the global mean temperature for each year for the rest of this century will be within 0.5°C of the 2008 figure.

There is little room for improving the accuracy of forecasts from our benchmark model. In fact, it is questionable whether practical benefits could be gained by obtaining perfect forecasts. While the Hadley temperature data drifts upwards over the last century or so, the longer series shows that such trends can occur naturally over long periods before reversing.

Moreover, there is some concern that the upward trend observed over the last century and a half might be, at least in part, an artifact of measurement errors rather than genuine global warming.

Even if one accepts the Hadley data as a fair representation of temperature history (and that is debatable, especially given the recent revelations about possible irregularities in temperature data handling by the Climatic Research Unit at the University of East Anglia), our analysis shows that errors from the naïve model would have been so small that decision makers who had assumed that temperatures would not change would have had no reason for regret.

For further information, contact J. Scott Armstrong or Kesten C. Green.
