Forecasting versus reality: we should never forget a forecast is not a fact

by Martin Livermore
article from Monday 4 December 2017

AS THEY SAY, forecasting is very difficult, particularly about the future. Hackneyed as this may be, it nicely encapsulates the need to take what anyone – however expert – says about the future with a large pinch of salt. This is particularly important as we are bombarded with projections about the future these days, largely because today’s IT makes it easier both to do the maths and to share the results.

This doesn’t mean that we don’t need forecasts, simply that we should put them in the right context and not assume they are automatically right. Those making the forecasts should be well aware of their limitations (although human nature may sometimes get in the way of objectivity), but the rest of us usually receive forecasts through the filter of at least one layer of interpretation and rewording, which often makes them sound more certain than they really are.

Weather forecasts are a perfect example of the problem. It is quite possible to look at three different forecasts for your local area and get three different pictures of what the weather will be like. The Met Office and others make their projections for rain in terms of the percentage chance of precipitation and whether it is likely to be heavy or light. Clouds, by their nature, are also difficult to predict with certainty. This inherent uncertainty inevitably leads to different interpretations.

Some providers of forecasts to consumers, such as the main broadcast and internet media, put the projections in more black-and-white terms, based on their own interpretation. We, in turn, may then decide whether or not to take an umbrella with us. So even if temperatures, wind speed and direction and overall amounts of sunshine turn out to be pretty much as predicted, if the pattern of rainfall is significantly different we regard the forecast as wrong.
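The point about percentage forecasts can be made concrete with a toy simulation. This sketch (purely illustrative; the 70% figure is an assumption, not a Met Office product) shows why a single dry day does not make a "70% chance of rain" forecast wrong: only the long-run frequency can test it.

```python
import random

random.seed(42)

# A hypothetical forecaster issues a 70% chance of rain every day.
# Simulate many such days: rain should fall on roughly 70% of them.
POP = 0.7        # probability of precipitation
days = 100_000
rainy = sum(random.random() < POP for _ in range(days))

print(f"Forecast chance of rain: {POP:.0%}")
print(f"Observed rain frequency over {days} days: {rainy / days:.1%}")
# Roughly 30% of these days stay dry, exactly as the forecast implies,
# yet each individual dry day tempts us to call the forecast "wrong".
```

The design point is that a probabilistic forecast is a statement about frequencies, so it can only be judged against many outcomes, never a single one.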

Arguably of even more importance are economic forecasts, on the basis of which important policy decisions are made by central banks and governments. Weather forecasts, even with sophisticated data collection and use of supercomputers, rapidly lose accuracy beyond a horizon of just a few days. Economic forecasts are not only complex, but are based on incomplete data and cover a period of months or years. Not surprisingly, they are always subject to later review and revision. Even official historical figures are in the first instance a best estimate and are revised later, in some cases meaning that a supposed recession never happened.

We should never forget the difference between a forecast and a fact. Forecasting is a very useful tool, but it only tells us one possible outcome, valid if our assumptions are correct and our understanding of a particular system (the weather or the economy, for example) is good enough to reproduce the right trends via computer modelling. In other words, this is a ‘what if’ rather than a certain view of the future.

This understanding of forecasting is well illustrated by a recent study on air pollution (Clean air target ‘could be met more quickly’). Air pollution in urban areas, and particularly the role of cars in elevating levels of nitrogen dioxide, has become a big issue for many European governments. EU rules are being regularly breached, and combinations of new engine emissions standards, encouragement of electric and hybrid vehicle purchase and restrictions on older cars entering inner city areas have been introduced to deal with this.

Setting aside for the time being the fact that even eliminating the internal combustion engine from cities would not solve the NO2 and particulates problems, it seems that the UK government has based its projections of how long it will take to reduce air pollution below the legal limit on inaccurate data. Researchers at the Universities of York and Leicester have found that catalytic converters fitted to reduce emissions of particulates age in such a way that older cars actually produce less nitrogen dioxide than when they first roll off the production line. Government policy takes no account of this real-world trend, and so makes unduly pessimistic assumptions about the time the policy will take to achieve its goal.

This kind of thing undoubtedly goes on all the time. It’s quite understandable: given the complexity of many of the studies, it’s likely that only one group of researchers will do each one, and going back to check assumptions after all the hard work is something that doesn’t often happen. Equally, it is unlikely that another group of researchers will provide an independent forecast unless they already suspect something to be awry with the initial one.

This issue is, of course, vitally important for climate change and energy policy. The weight of real-world observation is gradually forcing a rethink of one critical assumption, the climate sensitivity factor. This is the increase in average temperature arising from a doubling of carbon dioxide in the atmosphere. The mainstream view, encapsulated in the various IPCC reports and driving international emissions policy, is that there is a positive feedback mechanism for every additional amount of CO2 injected into the air. What may be termed the lukewarmist view is that this positive feedback is either not real or very weak.

Despite this legitimate difference of opinion, we still hear talk of the number of gigatonnes of carbon dioxide it is safe to release without breaching the somewhat arbitrary target of a 2°C rise in temperature, above which warming would be considered dangerous. That such forecasts (actually projections, as the modellers would call them) are treated as if they were fact is understandable as a way to pressure governments into cutting emissions, but it doesn’t make this an acceptable practice.
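The leverage of the sensitivity assumption is easy to see with the standard logarithmic relation between CO2 concentration and equilibrium warming. A minimal sketch (the 280 ppm pre-industrial baseline and the sensitivity values are illustrative assumptions, not figures from this article):

```python
import math

def warming(co2_ppm, baseline_ppm=280.0, sensitivity=3.0):
    """Equilibrium warming (deg C) for a given CO2 concentration,
    using the standard logarithmic relation: `sensitivity` is the
    warming per doubling of CO2. The 3.0 default is an illustrative
    mainstream-range estimate, not a settled fact."""
    return sensitivity * math.log2(co2_ppm / baseline_ppm)

# Doubling from ~280 ppm returns the sensitivity value by construction:
print(warming(560))                      # 3.0
# A lower "lukewarmist" sensitivity halves the projected warming:
print(warming(560, sensitivity=1.5))     # 1.5
```

Because any carbon budget for a 2°C limit must be derived from a relation like this, halving the assumed sensitivity roughly doubles the emissions ‘allowed’, which is why treating one such projection as fact matters.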

Governments do (or should) make policy based on scientific advice. It is incumbent upon the advisors to make clear the uncertainties and unknowns in the evidence on which their advice is based, and to properly assess new and possibly conflicting studies. Unfortunately, it is much harder to modify an existing policy position than to make the original one from scratch, so the inertia of the system tends to militate against improvement. The UK government’s response to the new evidence on air pollution could be an interesting test case.

Martin Livermore writes for the Scientific Alliance, which advocates the use of rational scientific knowledge in the development of public policy.
