Long-term persistence and trend significance

Update 28 April 2014: Climate Dialogue summaries now online
The summary of the second Climate Dialogue discussion on long-term persistence is now online (see below). We have made two versions: a short and an extended version. We apologize for the delay in publishing the summary.

Both versions can also be downloaded as pdf documents:
Summary of the Climate Dialogue on Long-term persistence
Extended summary of the Climate Dialogue on Long-term persistence
[End update]

In this second Climate Dialogue we focus on long-term persistence (LTP) and the consequences it has for trend significance.

We slightly changed the procedure compared to the first Climate Dialogue (which was about Arctic sea ice). This time we asked the invited experts to write a first reaction to each other's guest blogs, describing their points of agreement and disagreement. We publish the guest blogs and these first reactions at the same time.

We apologise again for the long delay. As we explained in our first evaluation, it turned out to be extremely difficult to find the right people (representing a range of views) for the dialogues we had in mind.

Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Bart Verheggen

Introduction long-term persistence

How persistent is the climate and what is its implication for the significance of trends?

The Earth is warmer now than it was 150 years ago. This fact itself is uncontroversial. How to interpret this warming, however, is not trivial. The attribution of the warming to anthropogenic causes relies heavily on an accurate characterization of the natural behavior of the system. Here we will discuss how statistical assumptions influence the interpretation of the measured global warming.

Most experts agree that three types of processes (internal variability, natural and anthropogenic forcings) play a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (< 10 %) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’, which include not only observations but also physical insights into the climate processes.

Detection and attribution
The IPCC definitions of detection and attribution are:

“Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”

“Attribution of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence.”

The phrase ‘change in some defined statistical sense’ in the definition of detection turns out to be the starting point for our discussion: what is the ‘right’ statistical model (assumption) for concluding whether a change is significant or not?

According to AR4, “An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” The magnitude of internal variability can be estimated in different ways, e.g. by control runs of global climate models, by characterising the statistical behaviour of the time series, by inspection of the spatial and temporal fingerprints in observations, and by comparing models and observations (e.g. via their respective power spectra, cf. fig 9.7 in AR4).

Long-term persistence
Critics argue, though, that most if not all changes in climatological time series are an expression of long-term persistence (LTP). Long-term persistence means there is a long memory in the system, although unlike a random walk it remains bounded in the very long run. There are stochastic (unforced) fluctuations on all time scales. More technically, the autocorrelation function decays to zero algebraically (i.e. very slowly). These critics argue that taking LTP into account reduces trend significance by orders of magnitude compared to statistical models that assume short-term persistence (AR(1)), as was applied e.g. in the illustrative trend estimates in table 3.2 of AR4. [1,2]
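The contrast between short-term and long-term persistence can be sketched numerically: an AR(1) process has an exponentially decaying autocorrelation function, while an LTP process (here fractional Gaussian noise) has an algebraically decaying one. A minimal illustration, where the parameter values phi = 0.6 and H = 0.87 are assumptions chosen for display, not fits to any climate series:

```python
import numpy as np

def ar1_acf(lags, phi=0.6):
    """ACF of an AR(1) (short-term persistence) process: rho(k) = phi**k."""
    return phi ** lags

def fgn_acf(lags, H=0.87):
    """ACF of fractional Gaussian noise with Hurst parameter H (an LTP model):
    rho(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)),
    which decays algebraically, roughly as k^(2H-2) for large k."""
    k = np.asarray(lags, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H)
                  - 2 * np.abs(k) ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

lags = np.arange(101)
# At lag 50 the AR(1) correlation is negligible (~1e-11), while the
# LTP correlation is still clearly non-zero (~0.2).
print(ar1_acf(lags)[50], fgn_acf(lags)[50])
```

The algebraic tail is what inflates the uncertainty of trend estimates relative to the AR(1) assumption.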

This has consequences for attribution as well, since long-term persistence is often assumed to be a sign of unforced (internal) variability (e.g. Cohn and Lins, 2005; Rybski et al., 2006). In reaction to Cohn and Lins (2005), Rybski et al. (2006)[3] concluded that even when LTP is taken into account, at least part of the recent warming cannot be solely related to natural factors and the recent clustering of warm years is very unusual (see also Zorita (2008)[4]). Bunde and Lennartz also looked extensively at the consequences of LTP for trend significance and concluded that globally averaged temperature data over land do show a significant trend, but the sea surface temperatures do not. [5,6]

These different conclusions translate directly into the question of how important the statistical model used is for determining the significance of the observed trends.

Climate Dialogue
Although the IPCC definition for detection seems to be clear, the phrase ‘change in some defined statistical sense’ leaves a lot of wiggle room. For the sake of a focussed discussion we define here the detection of climate change as showing that some of this change is outside the bounds of internal climate variability. The focus of this discussion is how to best apply statistical methods and physical understanding to address this question of whether the observed changes are outside the bounds of internal variability. Discussions about the physical mechanisms governing the internal variability are also welcome.

Specific questions

1. What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?

2. Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?

3. What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?

4. How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?

5. Based on your statistical model of preference, do you conclude that there is a significant warming trend?

6. Based on your statistical model of preference, what is the probability that 11 of the warmest years in a 162-year-long time series (HadCRUT4) all lie in the last 12 years?

7. If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?

[1] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi:10.1029/2005GL024476
[2] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resour. Res., 43, W05429, doi:10.1029/2006WR005592
[3] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591
[4] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi:10.1029/2008GL036228
[5] Lennartz, S., and A. Bunde (2009), Trend evaluation in records with long-term memory: Application to global warming, Geophys. Res. Lett., 36
[6] Lennartz, S., and A. Bunde (2011), Distribution of natural trends in long-term correlated records: A scaling approach, Phys. Rev. E, 84, 021129

Guest blog Rasmus Benestad

The debate about trends gets lost in statistics...

Rasmus E. Benestad, senior researcher, Norwegian Meteorological Institute

Figure 1. The recorded changes in the global mean temperature over time (red). The grey curve shows a calculation of the temperature based on greenhouse gases, ozone, and changes in the sun.

From time to time, the question pops up whether the global warming recorded by a network of thermometers around the globe is a result of natural causes, or if the warming is forced by changes in the atmospheric concentrations of greenhouse gases (GHGs).

In 2005, a scientific paper [1] suggested that statistical models describing random long-term persistence (LTP) could produce trends similar to those seen in the global mean temperature. I wrote a comment on this paper at the time on RealClimate.org with the title Naturally trendy?.

More recently, another paper [2] followed somewhat similar ideas, although for the Arctic temperature rather than the global mean. A discussion ensued after my posting of ‘What is signal and what is noise?’ on RealClimate.org. A meeting is planned in Tromsø, Norway at the beginning of May to discuss our differences - much in the spirit of Climate Dialogue.

Weather statistics
It is easy to make a statement about the Earth’s climate, but what is the story behind the different views? And what is really the problem?

If we start from scratch, we first need a clear idea of what we mean by climate and climate change. Often, climate is defined as the typical weather, described by the weather statistics: what range of temperature and rainfall we can expect, and how frequently. Experts usually call these kinds of statistics ‘frequency distributions’ or ‘probability density functions’ (pdfs).

A climate change happens when the weather statistics are shifted: weather that was typical in the past is no longer typical. It involves a sustained change rather than short-lived variations. New weather patterns emerge during a climate change.

At the same time, there is little doubt that some primary features of our climate involve short-lived natural ‘spontaneous’ fluctuations, caused by the climate itself. The natural fluctuations are distinct from a forced climate change in terms of their duration [3].

A diagnostic for climate change involves statistical analyses to assess whether the present range and frequency of temperature and precipitation are different from those of the past.

Natural variations
However, the presence of slower natural variations in the climate makes it difficult to make a correct diagnosis.

Long-term persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large spatial scales take more time than local changes.

The diffusion of heat and the transport of mass and energy are subject to finite flow speeds, and the time it takes for heat to leak into the surroundings is well understood. Often such a response decays exponentially over time.

There may be more complex behaviour when different physical processes intervene and affect each other, as with the oceans and the atmosphere [4]. The oceans are sluggish, and the density and heat capacity of water are much higher than those of air. Hence the ocean acts like a flywheel: once it gets moving, it will go on for a while.

There are some known examples of LTP processes, such as the ice ages, changes in the ocean circulation, and the El Niño Southern Oscillation.

We also know that the Earth’s atmosphere is non-linear (‘chaotic’) and may settle into different states, known as ‘weather regimes’ [5-7]. Such behaviour may also produce LTP. Changes in the oceans through the overturning of water masses can result in different weather regimes.

Laws of physics
It is also possible to show that LTP arises when many processes are combined which individually do not have LTP. For example, river flow may exhibit some LTP characteristics, resulting from the collection of rainfall over several watersheds.

We should not forget that long-term changes in the forcing from GHGs also result in LTP behaviour [3].

The climate involves more than just observations and statistics, and like everything else in the universe, it must obey the laws of physics. From this angle, climate change is an imbalance in terms of energy and heat.

It is fairly straightforward to estimate the heat stored in the oceans, because warmer water expands. The global sea level therefore provides an indicator of Earth’s accumulation of heat over time [3].

Hence, the diagnosis (“detection”) of a climate change is not purely a matter of statistics. The laws of physics set fundamental constraints which let us narrow the search down to a small number of ‘suspects’.

Energy and mass budgets are central, but also the hydrological cycle is entangled with the temperatures and the oceans [8]. Modern measurements provide a comprehensive picture: we see changes in the circulation and rainfall patterns.

Classical mistakes
There are two classical mistakes made in the debate about climate change and LTP: (a) looking only at a single aspect (one single time series) isolated from the rest of Earth’s climate and (b) confusing signal for noise.

The term ‘signal’ can have different meanings depending on the question, but here it refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.

There are always physical processes driving both LTP and spontaneous changes on Earth (known as the weather), and these must be subject to study if we want to separate noise from signal.

If an upward trend in the temperature were to be caused by internal ocean overturning, then this would imply a movement of heat from the oceans to the air and land surface. Energy must be conserved. When the heat content increases in both the oceans [9] and the air, and the ice is melting, then it is evident that the trend cannot be explained in terms of LTP.

The other mistake is neglecting the question: What is signal and what is noise? Some researchers have adapted statistical trend models to mimic the evolution seen in measured data [1,2]. These models must be adapted to the data by adjusting a number of parameters so that they can reproduce the LTP behaviour.

In science, we often talk about a range of different types of models, and they come in all sorts of sizes and shapes. A climate model calculates the temperature, air flow, rainfall, and ocean currents over the whole globe, based on our knowledge of the physics. Statistical trend models, on the other hand, take past measurements and try to find statistical curves that follow the data.

Statistical models are sometimes fed random numbers in order to produce a result that looks like noise [10]. It is concluded that the measured data are a result of a noisy process if these models produce results that look the same. In other words, these models are used to establish a benchmark for assessing trends.

Statistical trend models may, however, produce misguided answers if proper care is not taken. For example, models adapted to data containing both signal and noise cannot be used to infer whether the observed trends are unusual or not. The LTP associated with GHGs can be quite substantial, and forced trends in measured temperatures will fool statistical models which assume the LTP is due to noise.

The misapplication of statistical trend models can easily be demonstrated by subjecting them to certain tests. The statistical trend models describing LTP make use of information embedded in the data, revealed by their respective autocorrelation function (ACF).

Figure 2 below shows a comparison between two ACFs for temperature for the Arctic (area mean above 60N), taken from two different climate model simulations. One simulation represents the past (black) driven with historical changes in GHGs. The other (grey) describes a hypothetical world where the GHGs were constant, representing a ‘stable’ climate in terms of external forcings.

It is clear that the ACFs differ, and the statistical models used to assess trends and LTP rely on the shape of the ACF.
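The ACFs compared here are sample estimates of this kind. As a sketch of how such an estimate is computed (the AR(1) series below is a synthetic stand-in with assumed persistence phi = 0.7, not the Arctic data):

```python
import numpy as np

def sample_acf(x, max_lag=40):
    """Biased sample autocorrelation estimate, as commonly used in ACF plots."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x * x)
    return np.array([np.sum(x[:n - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

# Generate a synthetic AR(1) series: x[t] = phi * x[t-1] + noise
rng = np.random.default_rng(0)
phi, n = 0.7, 5000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

acf = sample_acf(x)
print(acf[0], acf[1])  # acf[0] is exactly 1; acf[1] should be close to phi = 0.7
```

Applying the same estimator to forced and unforced model runs is what produces curves like those in Figure 2, whose shapes can then be compared.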

Figure 2. The upper graph shows the annual mean temperature for the Arctic simulated by a climate model. The black curve shows the year-to-year variations for a run where the model was given the observed GHG measurements. The grey curve shows a similar run, but where the GHGs do not change. The bottom panel shows the ACF, and the black curve indicates that the effect of GHGs on temperature is to increase the LTP. For the assessment of trends, the statistical models should be trained on the grey curve, for which we know there is no forced trend and where all the variations are due to changes in the oceans.

Circular reasoning
It is the way models are used that really matters, rather than the specific model itself. All models are based upon a set of assumptions, and if these are violated, then the models tend to give misleading answers. Statistical LTP-noise models used for the detection of trends involve circular reasoning if they are adapted to measured data, because these data embed both signal and noise.

For real-world data, we do not know what portion of the variations is LTP noise and what is signal.

We can nevertheless specify the degree of forcing in climate model simulations, and then use these results to test the statistical models. Even if the climate models are not completely perfect, they serve as a test bench [11].

The important assumption is therefore that the statistical trend models, against which the data are benchmarked, really provide a reliable description of the noise.

We need more than a century-long time series to make a meaningful inference about LTP if natural variations have time scales of 70-90 years. Most thermometer records do not go much further back in time than a century, although there are some exceptions.

It is possible to remedy the lack of thermometer records to some extent by supplementing the information with evidence based on e.g. tree rings, sediment samples, and ice cores [3,12,13]. Still, such evidence tends to be limited to temperature, whereas climate change involves a whole range of elements.

For all intents and purposes, however, it is important to account for both natural and forced climate change on these time scales. Most people would worry more about the combined effect of these components, as natural variations may be just as devastating as forced ones. For most people, the distinction between trend and noise is an academic question. For politicians, it's a question about cutting GHG emissions.

Regression
Another side to the story is that the magnitude of the unforced LTP variations may give us an idea about the climate's sensitivity to changing conditions. Often, such cycles are caused by delayed but reinforcing processes. Conditions which amplify an initial response are inherent to the atmosphere, and may act on both the forced response (GHGs) and internal variations. Damping mechanisms will also tend to quench oscillations, which is well known from many different physical systems.

The combination of statistical information and physics knowledge leads to only one plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat. The explanation is the increased concentrations of GHGs.

We can also use another statistical technique for diagnosing a cause [11], one also used in the medical sciences and known as ‘regression analysis’. Figure 1 shows the results of a multiple regression analysis with inputs representing expected physical connections to Earth’s climate. In this case, the regression analysis used GHGs, ozone and changes in the sun’s intensity as inputs, and the results followed the HadCRUT4 data closely.

The probability that this fit is accidental is practically zero if we assume that the temperature variations from year to year are independent of each other. LTP and the ocean's inertia imply that the number of degrees of freedom is lower than the number of data points, making such a fit somewhat more likely to occur by chance.
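The regression approach described here can be sketched with ordinary least squares. Everything below is synthetic and purely illustrative: the "forcing" series and their coefficients are invented stand-ins, not the actual GHG, ozone or solar data behind Figure 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 160  # years, roughly the length of HadCRUT4

# Synthetic stand-ins for the forcing inputs (NOT the real series):
ghg = np.linspace(0.0, 1.0, n) ** 2                     # accelerating GHG forcing
solar = 0.1 * np.sin(2 * np.pi * np.arange(n) / 11.0)   # 11-year solar cycle
ozone = np.linspace(0.0, -0.1, n)                       # small negative trend

# Synthetic "temperature": a weighted sum of the inputs plus noise
temp = 0.8 * ghg + 0.5 * solar + 1.0 * ozone + 0.05 * rng.standard_normal(n)

# Multiple linear regression via ordinary least squares
X = np.column_stack([np.ones(n), ghg, solar, ozone])
coef, _, _, _ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
residual_std = np.std(temp - fitted)
print(coef, residual_std)  # coefficients recovered close to 0.8, 0.5, 1.0
```

Note that, as the text stresses, a close fit by itself does not settle the degrees-of-freedom question; serial dependence in the residuals would widen the uncertainty of these coefficients.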

Furthermore, taking paleoclimatic information into account, there is no evidence of similar temperature excursions in the past ~1000 years [12-14]. If the present warming were a result of natural fluctuations, it would imply a high climate sensitivity, and similar variations in the past. Moreover, it would suggest that any known forcing, such as GHGs, would be amplified accordingly. The climate sensitivity may be a common denominator for natural fluctuations and forced trends (Figure 3).

Figure 3. Comparison between trend estimates and the amplitude of 10-year low-passed internal variability in state-of-the-art global climate models.

There may be some irony here: the warming 'hiatus' during the last decade is due to LTP noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity – because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with a stronger transient climate response (TCR, a rough indicator for climate sensitivity).

For complete probability assessment, we need to take into account both the statistics and the physics-based information, such as the fact that GHGs absorb infrared light and thus affect the vertical energy flow through the atmosphere.

Summary
In summary, we do not really know what the LTP in the real world would be like without GHG forcings, and we don’t know the real degrees of freedom in the measured data record. The lack of such information limits our ability to make a statistics-based probability estimate. On the other hand, we know from past reconstructions and physical reasoning that present warming is highly unusual.

Biosketch
Rasmus Benestad is a physicist by training. Benestad has a D.Phil in physics from Atmospheric, Oceanic & Planetary Physics at Oxford University in the United Kingdom. He has affiliations with the Norwegian Meteorological Institute.
Recent work involves a good deal of statistics (empirical-statistical downscaling, trend analysis, model validation, extremes and record values), but he has also had some experience with electronics, cloud micro-physics, ocean dynamics/air-sea processes and seasonal forecasting.
In addition, Benestad wrote the book ‘Solar Activity and Earth’s Climate’ (2002), published by Praxis-Springer, and together with two colleagues the text book ‘Empirical-Statistical Downscaling’ (2008; World Scientific Publishers). He has also written a number of R-packages for climate analysis posted on http://cran.r-project.org.
Benestad was a member of the council of the European Meteorological Society for the period (2004-2006), representing the Nordic countries and the Norwegian Meteorology Society, and he has served as a member of the CORDEX Task Force on Regional Climate Downscaling.
He is a regular contributor to the well-known climate blog RealClimate.org.

References

1. Cohn, T. A. & Lins, H. F. Nature’s style: Naturally trendy. Geophys. Res. Lett. 32, L23402 (2005).

2. Franzke, C. On the statistical significance of surface air temperature trends in the Eurasian Arctic region. Geophys. Res. Lett. 39 (2012).

3. Climate Change: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (Cambridge University Press, 2007).

4. Anderson, D. L. T. & McCreary, J. P. Slowly Propagating Disturbances in a Coupled Ocean-Atmosphere Model. J. Atmospheric Sci. 42, 615–629 (1985).

5. Gleick, J. Chaos. (Cardinal, 1987).

6. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).

7. Shukla, J. Predictability in the Midst of Chaos: A Scientific Basis for Climate Forecasting. Science 282, 728–733 (1998).

8. Durack, P. J., Wijffels, S. E. & Matear, R. J. Ocean Salinities Reveal Strong Global Water Cycle Intensification During 1950 to 2000. Science 336, 455–458 (2012).

9. Balmaseda, M. A., Trenberth, K. E. & Källén, E. Distinctive climate signals in reanalysis of global ocean heat content. Geophys. Res. Lett. (2013). doi:10.1002/grl.50382

10. Paeth, H. & Hense, A. Sensitivity of climate change signals deduced from multi-model Monte Carlo experiments. Clim. Res. 22, 189–204 (2002).

11. Benestad, R. E. & Schmidt, G. A. Solar Trends and Global Warming. JGR 114, D14101 (2009).

12. Mann, M. E., Bradley, R. S. & Hughes, M. K. Global-scale temperature patterns and climate forcing over the past six centuries. Nature 392, 779–787 (1998).

13. Moberg, A., Sonechkin, D. M., Holmgren, K., Datsenko, N. M. & Karlén, W. Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. Nature 433, 613–617 (2005).

14. Marcott, S. A., Shakun, J. D., Clark, P. U. & Mix, A. C. A Reconstruction of Regional and Global Temperature for the Past 11,300 Years. Science 339, 1198–1201 (2013).

15. Easterling, D. R. & Wehner, M. F. Is the climate warming or cooling? Geophys Res Lett 36, L08706 (2009).

16. Foster, G. & Rahmstorf, S. Global temperature evolution 1979–2010. Environ. Res. Lett. 6, 044022 (2011).

Guest blog Demetris Koutsoyiannis

LTP: Looking Trendy—Persistently

Demetris Koutsoyiannis, National Technical University of Athens, Greece

Stochastics and its importance in studying climate

Probability, statistics and stochastic processes, lately described by the collective term stochastics, provide essential concepts and tools for dealing with uncertainty, useful in all scientific disciplines. However, there is at least one scientific discipline whose very domain relies on stochastics: climatology. To refer to a popular definition of this domain by the IPCC[1] (also quoted in Wikipedia):

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

“Average”, “statistical description”, “mean”, “variability”, are all statistical terms. Several questions related to climate also involve probability, as exemplified in question 6 of the Introduction of this Climate Dialogue theme:[2]

Based on your statistical model of preference, what is the probability that 11 of the warmest years in a 162-year-long time series (HadCRUT4) all lie in the last 12 years?

Interestingly, similar probabilistic and statistical notions are implied in a recent President Obama statement:[3]

Yes, it’s true that no single event makes a trend. But the fact is, the 12 hottest years on record have all come in the last 15.

The latter statement highlights how important statistical questions are for policy matters and presumes some public perception of probability and statistics, which determines how the message is received.

I have no doubt that the average human being has some understanding of probability and statistics, not only thanks to education, but because life is uncertain and each of us needs to develop an understanding of uncertainty and the skills to cope with it. However, common experience and perception are mostly related to overly simple uncertainties, as in coin tosses, dice throws and roulette wheels. Also, education is mainly based on classical statistics, in which:

· Consecutive events are independent of each other: the outcome of an event does not affect that of the next one.

· As a result, time averages tend to stabilize relatively fast: their variability, expressed by the probabilistic concept of variance, is inversely proportional to the length of the averaging period.
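This classical scaling of the variance of time averages, inversely proportional to the averaging length, is easy to verify numerically. A sketch with standard-normal "annual" values (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def var_of_time_average(m, n_series=2000):
    """Sample variance of the average of m independent standard-normal values."""
    x = rng.standard_normal((n_series, m))
    return x.mean(axis=1).var()

# Under independence, var(average of m values) = 1/m:
for m in (1, 10, 100):
    print(m, var_of_time_average(m))  # roughly 1, 0.1, 0.01
```

It is exactly this fast stabilization that fails for long-term persistent processes, where the variance of averages shrinks much more slowly with m.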

Adhering to classical statistics when dealing with climate and other complex geophysical processes is not just a problem of laymen. There are numerous research publications adopting, tacitly or explicitly, the independence assumption for systems in which it is totally inappropriate. Even the very definition of climate quoted above, particularly the phrase “The classical period is 30 years”, historically reflects a perception of a constant climate[4][5] and a hope that 30 years would be enough for a climatic quantity to stabilize to a constant value, and this is roughly supported by classical statistics. In this perception a constant climate would be the norm and a deviation from the norm would be something caused by an extraordinary agent. The same static-climate conviction is evident in the “weather vs. climate” dichotomy (e.g. “climate is what you expect, weather is what you get”; see critical discussions in Refs. [6], [7], [5]).

Figure 1. Probability that a 12-year period contains the specified number of warmest years (n) or more in a 162-year long period, as calculated assuming a random climate and a Hurst-Kolmogorov (HK) climate with Hurst parameter H = 0.92 (see text below for explanation of the latter).

Now let us pretend that, as in classical statistics, climate was synthesized by averaging random events without dependence, and let us study the above question on this basis (slightly modified for reasons that will be explained later). So, what is the probability that, in a 162-year long time series, at least n (where n = 1, 2, …, 12) of the warmest years all lie in a 12-year long period? The reply is depicted in Figure 1, labelled “Random climate”. The first seven points are calculated by Monte-Carlo simulation. For n = 7 years this probability is 0.00005 (1/20 000). A Monte Carlo simulation would require too much time to find the probability that all 12 warmest years are consecutive (n = 12), because this probability is an astonishingly small number; but I was able to find it analytically and plotted it on the graph. I also approximated with analytical calculations the probability that at least 11 warmest years are clustered within 12 years. From the graph we can conclude that it is quite unlikely that more than 8-9 warmest years would cluster, even throughout the entire Earth’s life (4.5 billion years divided into segments of 162 years). To have the 11 warmest events clustering in a 12-year period we would need, on average, one hundred thousand times the age of the Universe.
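For the small-n end of such a calculation, the Monte-Carlo approach is straightforward. The sketch below estimates, under the independence ("random climate") assumption, the probability that at least n of the 12 warmest years fall in the last 12 years of a 162-year series (the unmodified form of question 6). As noted in the text, the large-n tail is far too rare to simulate and must be obtained analytically.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_top12_in_last12(n_top, trials=50_000, length=162):
    """Monte-Carlo probability that at least n_top of the 12 warmest years
    of an independent series fall in its last 12 years."""
    hits = 0
    for _ in range(trials):
        ranks = rng.permutation(length)            # warmth ranks of the years
        # the 12 warmest years are those with rank >= length - 12;
        # count how many of them occupy the last 12 positions
        in_last = np.sum(ranks[-12:] >= length - 12)
        if in_last >= n_top:
            hits += 1
    return hits / trials

print(prob_top12_in_last12(1))   # fairly likely (about 0.6)
print(prob_top12_in_last12(3))   # already uncommon (about 0.05)
```

Because the ranks of an independent continuous series form a uniform random permutation, these estimates can be cross-checked against the hypergeometric distribution.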

Is this overwhelming evidence that something extraordinary has occurred during our lives, or that the independence assumption leads to blatantly irrational results?

No one would believe that the weather this hour does not depend on the weather an hour ago. It is natural to assume that there is time dependence in weather. Therefore, we must study weather not on the basis of classical statistics; we should rather use the notion of a stochastic process. Now, if we average the process to another scale (daily, monthly, annual, decadal, centennial, etc.) we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases the variability decreases, but not as much as implied by classical statistics. Naturally, the dependence makes clustering of similar events more likely.

The first who studied clustering in natural processes was Harold Edwin Hurst, a British hydrologist who worked on the Nile for more than 60 years. In 1951 he published a seminal paper[8] in which he stated:

Although in random events groups of high or low values do occur, their tendency to occur in natural events is greater. This is the main difference between natural and random events.

Herodotus said that the Egyptian land is "a gift of the Nile". The Nile also gave hydrology and climatology invaluable gifts: one of them is the longest record of instrumental observations in history. Its water levels were measured in so-called Nilometers and archived for many centuries. In the 1920s Omar Toussoun, Prince of Alexandria, published a book[9] containing, among other things, annual minimum and maximum water levels of the Nile at the Roda Nilometer from AD 622 to 1921. Figure 2 depicts the time series of annual minimum levels up to 1470 (849 values; unfortunately, after 1470 there are substantial gaps). Climatic, i.e. 30-year average, values are also plotted. One may say that these values are not climatic in a strict sense. But they are strongly linked to the variability of the climate of a large area, from the Mediterranean to the tropics. And they are instrumental.

The clustering of similar events, more formally described as Long-Term Persistence (LTP), is obvious. For example, around AD 780 we have a group of low values producing a low climatic value, and around 1110 and 1440 we have groups of high values. Such grouping would not appear in a climate that was the synthesis of independent random events. The latter would be flatter, as illustrated by the synthetic example of Figure 3.

Another way of viewing the long-term variability of the Nile in Figure 2 is by using the notion of trends, irregularly changing from positive to negative and from mild to steep. The long instrumental Nile series may help those who prefer the view of variability in terms of trends to recognize “Nature's style [as] Naturally trendy” to invoke the title of a celebrated recent paper.[10]

Figure 2. Nile River annual minimum water level at Roda Nilometer (from Ref. 9, here converted into water depths assuming a datum for the river bottom at 8.80 m), along with 30-year averages (centred). A few missing values at years 1285, 1297, 1303, 1310, 1319, 1363 and 1434 are filled in using a simple method from Ref. [11]. The estimated statistics are mean = 2.74 m, standard deviation = 1.00 m, Hurst parameter = 0.87.

Figure 3. A synthetic time series from an independent (white noise) process with same statistics as those of the Nilometer series shown in the caption of Figure 2.

Seeking a proper stochastic model for climate

Variability over different time scales, trends, clustering and persistence are all closely linked to each other. The first of these is the most rigorous concept: it is mathematically described by the variance (or the standard deviation) of the averaged process as a function of the averaging time scale, also known as the climacogram.[6],[12] The variability over scale (the climacogram) is also related one-to-one (by transformation) to the stochastic concepts of dependence in time (the autocorrelation function) and the spectral properties (the power spectrum) of the process of interest.[6],[12]

In white noise, i.e. the process characterized by complete independence in time, the variability is infinite at the instantaneous time scale (in technical terms, its autocorrelation in continuous time is a Dirac delta function). No variability is added at any finite time scale. Clearly, this is a mathematical construct which cannot occur in nature (the adjective "white", suggestive of white light as a mixture of frequencies, is misleading; the spectral density of white noise is flat, while that of white light is not).

A seemingly realistic stochastic process, which has been widely used for climate, is the Markov process, whose discrete-time version is more widely known as the AR(1) process. This process has two characteristic properties:

· Its past has no influence on the future whenever the present is known (in other words, only the latest known value matters for the future).[13]

· It assumes a single characteristic time scale at which change or variability is created (but in contrast to white noise, this time scale is non-zero; technically it is expressed by the denominator of the exponent in the exponential function that constitutes its autocorrelation function). As a result, when the time scale of interest is considerably larger than this characteristic scale, the process behaves like white noise.
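The exponential decay of the AR(1) memory is easy to verify numerically. A minimal sketch (all parameter values are illustrative, not taken from any climate record):

```python
import numpy as np

rng = np.random.default_rng(42)
a, n = 0.7, 200_000          # illustrative lag-1 coefficient and record length

# AR(1): x[i] = a * x[i-1] + white noise; theory gives autocorrelation a**s at lag s
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.standard_normal()

def acf(z, s):
    """Sample autocorrelation at lag s."""
    z = z - z.mean()
    return np.dot(z[:-s], z[s:]) / np.dot(z, z)

print(acf(x, 1), a**1)   # ~0.70
print(acf(x, 5), a**5)   # ~0.17: the memory dies off exponentially
```

At lags much larger than the characteristic scale the sample autocorrelation is practically zero, which is precisely the white-noise-like behaviour described above.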

It is difficult to explain why this model has become dominant in climatology. These two theoretical properties alone should have hampered its popularity. How could the future be affected by just the latest value and not by the entire past? Could any geophysical process, including climate, be determined by just one mechanism acting on a single time scale?

The flow in a river (not necessarily the Nile) may help us understand better the multiplicity of mechanisms producing change and the multiplicity of the relevant time scales (see also Ref. [14]):

· Next second: the hydraulic characteristics (water level, velocity) will change due to turbulence.

· Next day: the river discharge will change (even dramatically, in case of a flood).

· Next year: The river cross-section will change (erosion-deposition of sediments).

· Next century: The river basin characteristics (e.g. vegetation, land use) will change.

· Next millennium: All could be very different (e.g. the area could be glaciated).

· Next millions of years: The river may have disappeared (owing to tectonic processes).

Of course none of these changes would be a surprise; rather, it would be a surprise if things remained static. Yet, although anticipated, none of these changes is predictable.

Does a plurality of mechanisms acting on different scales require a complex stochastic model? Not necessarily. A decade before Hurst detected LTP in natural processes, Andrey Kolmogorov[15] devised a mathematical model which describes this behaviour using one parameter only, i.e. no more than the Markov model. We call this model the Hurst-Kolmogorov (HK) model (also known as fGn, for fractional Gaussian noise, or as a simple scaling process), while its parameter has become known as the Hurst parameter and is typically denoted by H. In this model, change is produced at all scales and thus the process never becomes similar to white noise, whatever the time scale of averaging.

Specifically, the variance will never become inversely proportional to time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise). A characteristic property of the HK process is that its autocorrelation function is independent of time scale. In other words if there is some correlation in the average temperature between a year and the next one (and in fact there is), the same correlation will be between a decade and the next one, a century and the next one, and so on to infinity. Why? Because there will always be another natural mechanism acting on a bigger scale, which will create change, and thus positive correlation at all lower scales (the relationship of change with autocorrelation is better explained in Ref. 6). The HK behaviour seems to be consistent with the principle of extremal entropy production.[16]
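The stated scaling of variance with time scale can be checked directly from the HK (fGn) autocorrelation function. The following sketch (my illustration, using the Nilometer value H = 0.87) verifies numerically that the variance of the n-year mean equals n^(2H–2) times the annual variance:

```python
import numpy as np

H, n = 0.87, 30   # Hurst parameter (Nilometer estimate) and averaging scale

# fGn autocorrelation: rho(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
k = np.arange(n)
rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))

# Variance of the n-value mean = (1/n**2) * sum of all pairwise correlations
lags = np.abs(np.subtract.outer(k, k))
var_mean = rho[lags].sum() / n**2

print(var_mean, n**(2*H - 2))  # equal: variance decays as n^(2H-2), not as 1/n
```

For H = 0.87 and n = 30, the 30-year mean retains about 41% of the annual variance, whereas classical statistics (H = 0.5) would predict only 1/30 of it.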

The Nilometer record described above is consistent with the HK model with H = 0.87. Are there other records of geophysical processes consistent with the HK behaviour? A recent overview paper[17] cites numerous studies where this behaviour has been verified. It also examines several instrumental and proxy climate data series related to temperature and, by superimposing the climacograms of the different series, it obtains an overview of the variability for time scales spanning almost nine orders of magnitude—from 1 month to 50 million years. The overall climacogram supports the presence of HK dynamics in climate with H at least 0.92. The orbital forcing (Milankovitch cycles) is also evident in the combined climacogram at time scales between 10 and 100 thousand years.
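For readers who wish to experiment, a synthetic HK record can be generated exactly from its covariance matrix, and its Hurst parameter re-estimated from the empirical climacogram. A rough sketch (Cholesky generation and the record length are my choices here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
H, n = 0.87, 1024   # true Hurst parameter and record length (both illustrative)

# Exact fGn sample: Cholesky factor of the HK covariance matrix times white noise
k = np.arange(n)
rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
C = rho[np.abs(np.subtract.outer(k, k))]
x = np.linalg.cholesky(C) @ rng.standard_normal(n)

# Climacogram (aggregated-variance) estimate of H: the slope of log-variance
# of the s-scale means versus log-scale equals 2H - 2
scales = [1, 2, 4, 8, 16, 32]
v = [np.var(x[: n - n % s].reshape(-1, s).mean(axis=1)) for s in scales]
slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
H_hat = 1 + slope / 2
print(H_hat)   # noisy estimate of H (here the true value is 0.87)
```

Note that such naive estimation is noisy and biased on short records; the study cited above uses a more careful estimator.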

Statistical assessment of current climate evolution

Re-examining the statistical problem of the 11 warmest years lying within a 12-year span of a 162-year period, now within an HK framework with H = 0.92, we find spectacularly different results from those of a random climate, as shown in Figure 1. We may see, for example, that what, according to the classical statistical perception, would require the entire age of the Earth to occur once (i.e. clustering of 8-9 events) is a regular event for an HK climate, with probability on the order of 1-10%.

This dramatic difference can help us understand why the choice of a proper stochastic model is relevant for the detection of changes in climate. It may also help us realize how easy it is to fool ourselves, given that our perception of probability may heavily rely on classical statistics.

Figure 4 gives a close-up of the results, excluding the very low probabilities and also generalizing the "12-year period" to an "N-year period" so that it can accommodate, in addition to the Climate Dialogue statistical question 6, the results for the "Obama version" thereof as quoted above. In addition, Figure 4 is based on a slightly higher value of the Hurst coefficient, H = 0.94, as estimated by the Least Squares based on Standard Deviation method[18] for the HadCrut4 record. Both versions result in about the same answer: the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.

Figure 4. Probability that an N-year period, where N = 12 or 15, contains the specified number, n, of warmest years or more in a 162-year long period, calculated in the same manner as in Figure 1 with H = 0.94.

If we used the IPCC AR4 terminology[19] we would say that either of these events is exceptionally unlikely to have a natural cause. Interestingly, the present results do not contradict those of a recent study of Zorita, Stocker and von Storch,[20] who examined a similar question and concluded that:

Under two statistical null-hypotheses, autoregressive and long-memory, this probability turns to be very low: for the global records lower than p = 0.001…

I note, though, that there are differences between the methodology followed here and that of Zorita et al.; for example, the analysis here did not consider whether the N-year period (where the n warmest years are clustered) is located at the end of the examined observation period or anywhere else in it (the reason will be explained below).

One may note that the above results, as well as those of Zorita et al., are affected by uncertainty, associated with the parameter estimation but also with the data set itself. The data are altered all the time as a result of continuous adaptations and adjustments. Even the ranks of the different years change: for example, in the CRU data examined by Koutsoyiannis and Montanari (2007)[21], 1998 was rank 1 (the warmest of all) and 2005 was rank 2, whereas now the ranking of these two years has been inverted. But most importantly, the analysis is affected by the Mexican Hat Fallacy (MHF), if I am allowed to use this name to describe a neat example of statistical misuse offered by von Storch,[22] in which the conclusion is drawn that:

The Mexican Hat is not of natural origin but man-made.

Von Storch [22] aptly describes the fallacy in these words:

The fundamental error is that the null hypothesis is not independent of the data which are used to conduct the test. We know a-priori that the Mexican Hat is a rare event, therefore the impossibility of finding such a combination of stones cannot be used as evidence against its natural origin. The same trick can of course be used to “prove” that any rare event is “non-natural”, be it a heat wave or a particularly violent storm - the probability of observing a rare event is small.

I believe that rephrasing "11 of the warmest years … all lie in the last 12 years" into "11 of the warmest years … all lie in a 12-year long period" reduces the MHF effect, but I do not think it eliminates it. That is why I prefer other statistical methods of detecting changes[23], such as the tests proposed by Hamed[24] and by Cohn and Lins[10]. The former relies on a test statistic based on the ranks of all data, rather than a few of them, while the latter considers also the magnitude of the actual change, not merely the change in the ranks.

Another test statistic was proposed by Rybski et al.,[25] and was modified to include the uncertainty in the estimation of standard deviation by Koutsoyiannis and Montanari [21], who also applied it for the CRU temperature data up to 2005. Note that, to make the test simple, the uncertainty in the estimation of H was not considered even in the latter version (thus it could rather be called a pseudo-test). Here I updated the application of this test and I present the results in Figure 5.

Figure 5 Updated Fig. 2 in Koutsoyiannis and Montanari [21] testing lagged climatic differences based on the HadCrut4 data set (1850-2012; see explanation in text).

The method has the advantages that it uses the entire series (not a few values), it considers the actual climatic values (not their ranks), and it avoids specifying a mathematical form of the trend (e.g. linear). Furthermore, it is simple: first we calculate the climatic value of each year as the average of the current and the previous 29 years. This is plotted as a pink continuous line in Figure 5, where we can see, among other things, that the latest climatic value is 0.31°C (at 2012, being the average of the HadCrut4 values for 1983-2012), while the earliest one was –0.30°C (at 1879, being the average of 1850-79). Thus, during the last 134 years the climate has warmed by 0.61°C. Note that no subjective smoothing is made here (in contrast to the graphs by CRU), and thus the climatic series has length 134 years (but with only 5 non-overlapping values), while the annual series has length 163.

Our (pseudo)test relies on climatic differences for different time lags (not just the difference between the latest and earliest values). For example, assuming a lag of 30 years (equal to the period for which we defined a climatic value), the climatic difference between 2012 and 1982 is 0.31°C – (–0.05°C) = 0.36°C, where the value –0.05°C is the average of the years 1953-82. The value 0.36°C is plotted as a green triangle in Figure 5 at year 2012. Likewise, we find climatic differences for years 2011, 2010, …, 1909, all for lag 30. Plotting all these we get the series of green triangles shown in Figure 5. We repeat the same procedure for time lags that are multiples of 30 years, namely 60 years (red points), 90 years (blue points) and 120 years (purple points).
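The bookkeeping of climatic values and lagged differences is straightforward to reproduce. Here is a sketch on a synthetic, linearly warming series (illustrative data, not HadCrut4; the critical values of Ref. 21 are not computed):

```python
import numpy as np

def climatic(x, window=30):
    """Trailing 30-year averages: the value for year i is the mean of years i-29..i."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def lagged_diffs(clim, lag):
    """Climatic differences clim[i] - clim[i-lag] for every admissible year."""
    return clim[lag:] - clim[:-lag]

# Synthetic annual series, 1850-2012, warming linearly by 0.01 deg/yr (illustrative)
years = np.arange(1850, 2013)
x = 0.01 * (years - years[0])

clim = climatic(x)            # 134 climatic values, the first one for 1879
d30 = lagged_diffs(clim, 30)  # for a linear series every lag-30 difference is 0.3
print(len(clim), d30[0])
```

As in the text, a 163-year annual series yields 134 climatic values, and the lagged differences are then compared against the critical values of the test.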

Finally, we calculate, in a way described in Ref. 21, the critical values of the test statistic, which is none other than the lagged climate difference. The critical values are different for each lag and are plotted as flat lines with the same colour as the corresponding points. Technically, the (pseudo)test was made as two-sided for significance level 1% but only the high critical values are plotted in the graph. Practically, as long as the points lie below the corresponding flat lines, nothing significant has happened. This is observed for the entire length of the lag-30 and lag-60 differences. A few of the last points of the lag-90 series exceed the critical value; this corresponds to the combination of high temperatures in the 2000s and low temperatures in the 1910s. But then all points of the lag-120 series lie again below the critical value, indicating no significant change.

Concluding remarks

Assuming that the data set we used is representative and does not contain substantial errors, the only result that we can present as fact is that in the last 134 years the climate has warmed by 0.6°C (nb., this is a difference of climatic—30-year average—values, while other, often higher, values that appear in the literature refer to trends based on annual values). Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is; again, I cannot be certain, as the pseudo-test did not consider the uncertainty in H. Note that the 1% significance level corresponds to ±2.58 standard deviations from the mean; if we required ±3, everything would become insignificant.

Irrespective of statistical significance, paleoclimate and instrumental data provide evidence that the natural climatic variability, the natural potential for change, is high and concerns all time scales. The mechanisms producing change are many and, in practice, it is more important to quantify their combined effects, rather than try to describe and model each one separately.

From a practical point of view, it could be dangerous to pretend that we are able to provide a credible quantitative description of the mechanisms, their causes and effects, and their combined consequences: we know that the mechanisms and their interactions are nonlinear, and that climate model hindcasts are poor.[26],[27] Indeed, it has been demonstrated that, particularly for runoff, which is most relevant for water availability and flood risk, deterministically projected future traces can be too flat in comparison to the changes that can be expected (and stochastically generated) admitting stationarity and natural variability characterized by HK dynamics.[28]

Biosketch
Demetris Koutsoyiannis received his diploma in Civil Engineering from the National Technical University of Athens (NTUA) in 1978 and his doctorate from NTUA in 1988. He is professor of Hydrology and Analysis of Hydrosystems at the Department of Water Resources and Environmental Engineering of NTUA (and former Head of the Department). He is also Co-Editor of Hydrological Sciences Journal and member of the editorial board of Hydrology and Earth System Sciences (and formerly of Journal of Hydrology and Water Resources Research). He teaches undergraduate and postgraduate courses in hydrometeorology, hydrology, hydraulics, hydraulic works, water resource systems, water resource management, and stochastic modelling. He is an experienced researcher in the areas of hydrological modelling, hydrological and climatic stochastics, analysis of hydrosystems, water resources engineering and management, hydro-informatics, and ancient hydraulic technologies. His record includes about 650 scientific and technological contributions, spanning from research articles to engineering studies, among which 96 publications in peer reviewed journals. He received the Henry Darcy Medal 2009 by the European Geosciences Union for his outstanding contributions to the study of hydro-meteorological variability and to water resources management.

References



[1] IPCC (2007), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp (Annex 1, Glossary: http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-annexes.pdf).

[2] Climate Dialogue Editorial Staff (2013), How persistent is the climate system and what is its implication for the significance of observed trends and for internal variability? (This blog post).

[3] Obama’s 2013 State of the Union Address, The New York Times, http://www.nytimes.com/2013/02/13/us/politics/obamas-2013-state-of-the-union-address.html?pagewanted=3&_r=0

[4] Lamb, H. H. (1972), Climate: Past, Present, and Future, Vol. 1, Fundamentals and Climate Now, Methuen and Co.

[5] Lovejoy, S., and D. Schertzer (2013), The climate is not what you expect, Bull. Amer. Meteor. Soc., doi: 10.1175/BAMS-D-12-00094.

[6] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495.

[7] Koutsoyiannis, D. (2010), A random walk on water, Hydrology and Earth System Sciences, 14, 585–601.

[8] Hurst, H.E. (1951), Long term storage capacities of reservoirs, Trans. Am. Soc. Civil Engrs., 116, 776–808.

[9] Toussoun, O. (1925). Mémoire sur l’histoire du Nil, in Mémoires a l’Institut d’Egypte, vol. 18, pp. 366-404.

[10] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.

[11] Koutsoyiannis, D., and A. Langousis (2011), Precipitation, Treatise on Water Science, edited by P. Wilderer and S. Uhlenbrook, 2, 27–78, Academic Press, Oxford ( p. 57).

[12] Koutsoyiannis, D. (2013), Encolpion of stochastics: Fundamentals of stochastic processes, 30 pages, National Technical University of Athens, Athens, http://itia.ntua.gr/1317/, accessed 2013-04-17.

[13] Papoulis, A. (1991), Probability, Random Variables and Stochastic Processes, 3rd edn., McGraw-Hill, New York (p. 635).

[14] Koutsoyiannis, D. (2013), Hydrology and Change, Hydrological Sciences Journal (accepted with minor revisions; currently available in the form of an IUGG Plenary lecture, Melbourne 2011, http://itia.ntua.gr/1135/, accessed 2013-04-17).

[15] Kolmogorov, A. N. (1940). Wiener spirals and some other interesting curves in a Hilbert space, Dokl. Akad. Nauk SSSR, 26, 115-118. English translation in: Tikhomirov, V.M. (ed.) 1991. Selected Works of A. N. Kolmogorov: Mathematics and mechanics, Vol. 1, Springer, 324-326.

[16] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.

[17] Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.

[18] Tyralis, H., and D. Koutsoyiannis (2011), Simultaneous estimation of the parameters of the Hurst-Kolmogorov stochastic process, Stochastic Environmental Research & Risk Assessment, 25 (1), 21–33.

[19] As Ref. 1, p. 23.

[20] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi: 10.1029/2008GL036228.

[21] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.

[22] von Storch, H. (1999), Misuses of statistical analysis in climate research, in Analysis of Climate Variability, Applications of Statistical Techniques, Proceedings of an Autumn School organized by the Commission of the European Community, Edited by H. von Storch and A. Navarra, 2nd updated and extended edition, http://www.hvonstorch.de/klima/books/SNBOOK/springer.pdf, accessed 2013-04.

[23] Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.

[24] Hamed, K. H. (2008), Trend detection in hydrologic data: The Mann-Kendall trend test under the scaling hypothesis, Journal of Hydrology, 349(3-4), 350-363.

[25] Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi: 10.1029/2005GL025591.

[26] Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.

[27] Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.

[28] Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches, Journal of Hydrometeorology, 8 (3), 261–281.

Guest blog Armin Bunde

How can the significance of global warming be estimated when the long-term persistence of temperature is explicitly taken into account?

Armin Bunde, Institut für Theoretische Physik, Universität Giessen, Germany

1. Long-term Persistence in climate and its detection

Long-term persistence (LTP), also called long-term correlations or long-term memory, plays an important role in characterizing records in physiology (e.g. heartbeats), computer science (e.g. internet traffic) and financial markets (volatility). The first hint that LTP is important in climate was given in Hurst's classic papers more than 50 years ago, in his study of the historic levels of the Nile River.

We can distinguish between uncorrelated records ("white noise"), short-term persistent (STP) records and long-term persistent records. In white noise all data points x1, x2, ..., xN are independent of each other. In STP records, each data point xi depends on a short subset of previous points xi-1, xi-2, ..., xi-m, i.e., the memory has a finite range m. In LTP records, in contrast, xi depends on all previous points. The simplest model for STP is the "AR1 process", where xi is proportional to the foregoing data point xi-1, plus a white noise component ηi-1:

xi = a xi-1 + ηi-1, with 0 < a < 1.

Despite the evidence that temperature anomalies cannot be characterized by the AR1 process, most climate scientists have used the AR1 model when trying to describe temperature fluctuations and estimating the significance of a trend. This usually leads to a considerable overestimation of the external trend and its significance.

There are several methods to quantify the memory in a given sequence (for a recent review see [1] and references therein). The first one is the autocorrelation function C(s), where s = 1, 2, 3, … is the lag time between two data points. For white noise, there is no memory and C(s) = 0. For the AR1 process, C(s) decays exponentially, C(s) ~ exp(-s/S), where S = 1/|ln a| is the "persistence length". For infinitely long stationary LTP data, C(s) decays algebraically,

C(s) ~ s^-ϒ, 0 < ϒ < 1,

where ϒ is called the correlation exponent.

The first figure shows parts of an uncorrelated (left) and a synthetic long-term persistent (right) record, with ϒ = 0.4. The full line is the moving average over 30 data points. For the uncorrelated data, the moving average is close to zero, while for the LTP data, the moving average can have large deviations from the mean, forming some kind of mountain-valley structure that looks as if it contained some external deterministic trend. The figure shows that it is not a straightforward task to separate the natural fluctuations from an external trend, and this makes the detection of external trends in LTP records a difficult task. I will return to this later.

Figure 1.

One can show analytically [2] that in LTP records of finite length N, the algebraic dependence of C(s) on s can be seen only for very small time lags s, satisfying the inequality (s/N)^ϒ << 1. Already for ϒ = 1/2 and records of length 600 (which corresponds to 50 years of monthly data), this condition can only be met for very small lag times, roughly s < 6. For larger time lags, C(s) decays faster than algebraically. This is an artifact of the method, called the finite-size effect. If one is not aware of this effect, one may be led to the wrong conclusion that there exists no long-term memory in sequences of finite length.

A similar mistake may happen when one uses the second traditional method for detecting LTP, the power spectrum (spectral density) S(f). The discrete frequency f is equivalent to an inverse lag time, f = 1/s, and a multiple of 1/N. For white noise, S(f) is constant. For STP data, S(f) is constant for f well below m/N (since the data are uncorrelated at time lags s above m), and then decreases monotonically.

For LTP records, S(f) decreases by a power law,

S(f) ~ f^-β, with β = 1 - ϒ,

so one may detect LTP also by considering the power spectrum. However, due to the discreteness of f, the algebraic decay cannot be clearly observed at frequencies below 50/N, which again may lead to the wrong conclusion that there is no long-term memory.

In addition to these remarkable finite-size and discreteness effects, both methods lead to an overestimation of the LTP in the presence of external deterministic trends.

In recent years, several methods (see, e.g., [1,3]) have been developed with which long-term correlations can be detected in the presence of deterministic polynomial trends. These methods include the detrended fluctuation analysis (DFA2) and Haar-wavelet analysis (WT2), where linear trends are eliminated systematically. DFA2 is quite accurate in the time window 8 ≤ s < N/4, while WT2 is accurate for 1 ≤ s < N/50. In both methods, one determines a fluctuation function F(s) which measures the fluctuations of the record in time windows of length s around a trend line. For LTP records with correlation exponent ϒ, F(s) increases as

F(s) ~ s^α, with α = 1 - ϒ/2,

where α is usually called the Hurst exponent. By combining DFA2 and WT2 one can obtain a consistent picture on time scales between s = 1 and s = N/4. For a meaningful analysis, the records should consist of more than N = 500 data points. I would like to emphasize again that, in case an external deterministic trend cannot be excluded, the evaluation of the LTP and the determination of the Hurst exponent must be done with trend-eliminating methods such as DFA2 and WT2, as described above.

The second figure summarizes the results of our earlier analysis (for references, see [1]) for a large number of atmospheric and sea surface temperatures as well as precipitation and river run-offs. Each histogram shows how many stations have Hurst exponents around 0.5, 0.55, 0.6, 0.65 and so on.

Figure 2.

For the daily precipitation records and the continental atmospheric temperatures the distribution of Hurst exponents is quite narrow. For daily precipitation, the exponent is close to 0.5, indicating the absence of persistence (see also [3]), while for the daily continental temperature records, the exponent is close to 0.65, indicating a "universal" persistence law. Both laws can be used very efficiently as a test bed for climate models and paleo reconstructions (for references, see [1] and [3]).

There are also more intuitive measures of LTP, and one of them is the distribution of the persistence lengths l in a record (see, e.g., [3]). In temperature data, l describes the lengths of warm or cold periods, where the temperature anomalies (deviations of the daily or monthly temperature from their seasonal mean) are above or below zero, respectively. It is easy to show that the distribution P(l) of the persistence length decays exponentially for uncorrelated data, i.e., ln P(l) ~ -l. For LTP data, P(l) decays as a stretched exponential, ln P(l) ~ -l^ϒ, where ϒ is the correlation exponent (see [1]). Accordingly, in LTP records large persistence lengths are more frequent, which is intuitively clear.
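The exponential law for uncorrelated data is easy to check numerically. A sketch (white noise only; producing the stretched-exponential LTP counterpart would require a synthetic long-term correlated record):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(200_000)     # uncorrelated "anomalies" (white noise)

# Persistence lengths: lengths of consecutive runs above / below zero
s = (x > 0).astype(np.int8)
change = np.flatnonzero(np.diff(s))              # indices where the sign flips
runs = np.diff(np.concatenate(([0], change + 1, [len(s)])))

lengths, counts = np.unique(runs, return_counts=True)
P = counts / counts.sum()
# For white noise P(l) = 2**-l, i.e. ln P(l) ~ -l (exponential decay)
print(P[:4])   # ~[0.5, 0.25, 0.125, 0.0625]
```

Each extra year of a warm or cold spell halves its probability under independence; under LTP the decay is slower, so long spells are markedly more frequent.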

2. Detection of external trends in LTP data

For the detection and estimation of external trends (the “detection problem”) one needs a statistical model. For monthly (and annual) temperature records the best statistical model is long-term persistence, as we have seen in the foregoing section. The main features of a long-term persistent record of length N are determined by the Hurst exponent α. Synthetic LTP records characterized by these two parameters (N and α) can easily be generated by Fourier transformation (see, e.g., [1]) with the help of random number generators.

When using LTP as the statistical model we assume that there are no additional short-term correlations, generated by ''Großwetterlagen'' (blocking situations). Since the persistence length of these short-term correlations is below 14 days, they are not present in monthly data sets.

For the detection problem, one then needs to know the probability W(x) that in a long-term correlated record of length N and Hurst exponent α, the relative trend exceeds x. For temperature data, the relative trend is the ratio between the temperature change (determined by a simple regression analysis) and the standard deviation σ around the trend line. For Gaussian LTP data, an analytical expression for W(x), for given α and N, has been derived in [4]; it is easy to implement and can serve also as a very good approximation for non-Gaussian data. In order to decide whether a measured relative trend xm may be natural or not, one has to determine the exceedance probability at xm. If W(xm) is below 0.025, the trend is usually called significant (within the 95 percent confidence interval); if it is below 0.005, the trend is called highly significant (within the 99 percent confidence interval).
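The relative trend xm itself is simple to compute. A sketch on synthetic data (the analytical expression for W(x) from [4] is not reproduced here, and all numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 163                                        # e.g. annual values, 1850-2012
t = np.arange(N)
y = 0.005 * t + 0.3 * rng.standard_normal(N)   # synthetic record: trend + noise

slope, intercept = np.polyfit(t, y, 1)         # simple regression
resid = y - (slope * t + intercept)
sigma = resid.std(ddof=2)                      # standard deviation around the trend line

x_m = slope * (N - 1) / sigma                  # relative trend: temperature change / sigma
print(x_m)
```

The value x_m would then be compared with the exceedance probability W(x) of the LTP model to judge significance.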

From the condition W(y) = 0.025 one may derive error bars ±y (within the 95 percent confidence interval) for the expected external trend: it lies between xm - y and xm + y.

If xm is slightly below y, then the minimum value of the external trend is negative and thus the trend is not significant. But the maximum value of the external trend can be large, and thus an external trend cannot be excluded, even though the trend is not significant. Accordingly, if a trend is not significant because W(xm) is above 0.025, this does not mean that one can exclude the possibility of an external deterministic trend. It only means that one is not forced to assume an external trend in order to describe the variability of the record properly. For example, if we observe a small insignificant positive trend, then this trend may arise either from the superposition of a strong positive natural fluctuation (as in Fig. 1b) and a small negative external trend, or from a strong negative fluctuation (as in Fig. 1b, but downwards) and a large positive external trend.

These conclusions are independent of the model used and hold also for the STP model. In previous significance analyses, climate scientists usually used the STP model, where the model parameter a is determined by measuring C(1) of the data (see Sect. 1). The significance of a trend (see below) is clearly overestimated by this model.
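The recipe above (compute the relative trend of a record, then its exceedance probability under a pure-LTP null) can be sketched numerically. The Python below is a minimal Monte Carlo illustration, not the analytical W(x) of [4]: the fGn generator is a rough spectral-synthesis approximation, and all function names are ours.

```python
import numpy as np

def fgn(n, H, rng):
    """Approximate fractional Gaussian noise (Hurst exponent H) by spectral
    synthesis: the power spectrum of fGn scales as f**(1 - 2H)."""
    f = np.fft.rfftfreq(n)[1:]                      # positive frequencies
    amp = f ** ((1.0 - 2.0 * H) / 2.0)              # sqrt of target spectrum
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    x = np.fft.irfft(np.concatenate(([0.0], spec)), n)
    return (x - x.mean()) / x.std()

def relative_trend(x):
    """Temperature change over the record (from a linear fit) divided by the
    standard deviation sigma of the fluctuations around the trend line."""
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    sigma = (x - (slope * t + intercept)).std()
    return slope * (x.size - 1) / sigma

def exceedance_prob(xm, n, H, trials=2000, seed=0):
    """Monte Carlo estimate of W(xm): the probability that a pure-LTP record
    of length n shows a relative trend of magnitude at least |xm|."""
    rng = np.random.default_rng(seed)
    hits = sum(abs(relative_trend(fgn(n, H, rng))) >= abs(xm)
               for _ in range(trials))
    return hits / trials

# A measured relative trend xm is called significant if W(xm) < 0.025
# and highly significant if W(xm) < 0.005.
```

The larger H is, the fatter the tail of the natural-trend distribution, so the same measured relative trend yields a larger W(xm) and is less significant.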

3. Detection of climate change within the LTP model

Using our terminology of “significant” and “highly significant” we have obtained a mixed picture of the significance of temperature records, partly reviewed in [1].

(i) The global sea surface temperature increased by about 0.6 degrees over the past 100 years; this increase is not significant. The reason is the large persistence of the oceans, reflected by a large Hurst exponent.

(ii) The global land air temperature increased by about 0.8 degrees over the past 100 years. We find this increase to be highly significant. The reason is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.

(iii) Local temperatures: In local temperature records it is more difficult to detect external trends, due to their large variability. We have studied a large number of local stations around the globe. For stations at high elevation, like Sonnblick in Austria, and for stations in Siberia, we found highly significant trends. For about half of the other stations, we could not find a significant trend. However, when averaging the records in a certain area, this picture changed: due to the averaging, the fluctuations around the trend line decrease and the temperature increases become more significant.

Our estimates are basically in line with earlier, less rigorous trend estimations in LTP data by Rybski et al. [5], and in line with the conclusions of Zorita et al. [6], who estimated the probability that 11 of the warmest years in a 162-year record all lie in the last 12 years.

My conclusion is that the AR1 process falsely used by climate scientists to describe temperature variability leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower. Yet even the LTP model does not reject the hypothesis of anthropogenic climate change.

Biosketch

Armin Bunde is professor of theoretical physics in Giessen. After receiving his PhD in theoretical solid state physics at Giessen University, he spent several years as a postdoc in Antwerp, Saarbrücken, and Konstanz. He received a prestigious Heisenberg Fellowship in 1984 and spent three years at Boston University and Bar Ilan University in Israel, where he worked with H. Eugene Stanley (Boston) and Shlomo Havlin (Israel). In 1985 he received the Carl-Wagner Award. Between 1987 and 1993 he was professor of theoretical physics at Hamburg University; since 1993 he has been back in Giessen.

In the last 20 years, his main research areas have been disordered materials, percolation theory and applications in materials science, as well as fractals and time series analysis in different disciplines, among them geoscience, where he is mainly interested in long-term persistence, extreme events and climate networks. In geoscience, he has cooperated intensively with Hans-Joachim Schellnhuber, Donald Turcotte, Hans von Storch, and Shlomo Havlin. He has published more than 300 papers and his Hirsch index (Google) is well above 50.

References

[1] A. Bunde and S. Lennartz, Acta Geophysica 60, 562 (2012) and references therein

[2] S. Lennartz and A. Bunde, Phys. Rev. E (2009)

[3] A. Bunde, U. Büntgen, J. Ludescher, J. Luterbacher, and H. von Storch, Nature Climate Change, 3, 174 (2013), see also: online supplementary information

[4] S. Lennartz and A. Bunde, Phys. Rev. E 84, 021129 (2011)

[5] D. Rybski, A. Bunde, S. Havlin, and H. von Storch, Geophys. Res. Lett. 33, L06718 (2006)

[6] E. Zorita, T.F. Stocker, and H. von Storch, Geophys. Res. Lett. 35, L24706 (2008)

Summary of the Climate Dialogue on long-term persistence


Authors: Marcel Crok, Bart Strengers (PBL), Bart Verheggen (PBL), Rob van Dorland (KNMI)

This summary is based on the contributions of Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis who participated in this Climate Dialogue that took place in May 2013.


Long term persistence and trend significance
“Is global average temperature increase statistically significant?” To answer this question one needs to make assumptions about the statistical nature of the temperature time series and to choose the statistical model that is most appropriate.

If the temperature of this year is not related to that of last year or next year, we can use classical statistics to determine whether the increase in global temperature is significant or not. In such an “uncorrelated climate”, i.e. if the temperature of each year is fully independent of the other years, the moving average quickly settles at zero (or some fixed value) and deviations from the mean are short-lived. However, if there is (strong) temporal dependence, the moving average can show large and long-lasting deviations from the mean. This is called long-term dependence or long-term persistence (LTP).

The three participants agree that LTP exists in the climate (Table 1), but they disagree about the exact definition and the physical processes that lie behind it. Benestad and Bunde describe LTP in terms of “long memory”. Koutsoyiannis holds the opinion that LTP is mainly the result of the irregular and unpredictable changes that take place in the climate (Table 2). Both Bunde and Koutsoyiannis are in favour of a formal (mathematical) definition of LTP, which states that on longer time scales climate variability decreases—but not as much as implied by classical statistics.

Benestad said that ice ages and the El Niño Southern Oscillation (ENSO) are examples of LTP processes. Bunde and Koutsoyiannis disagreed (Table 3).

Table 1

|                                | Benestad | Bunde | Koutsoyiannis |
|--------------------------------|----------|-------|---------------|
| Does LTP exist in the climate? | Yes      | Yes   | Yes           |

Table 2

|               | What is long-term persistence (LTP)? |
|---------------|--------------------------------------|
| Benestad      | LTP describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. |
| Bunde         | LTP is a process with long memory; the value of a parameter (e.g. temperature) today depends on all previous points. |
| Koutsoyiannis | It is unfortunate that LTP has been interpreted as memory; it is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP, while the autocorrelation results from change. |

Table 3

|                                                                | Benestad | Bunde | Koutsoyiannis |
|----------------------------------------------------------------|----------|-------|---------------|
| Quasi-oscillatory phenomena like ENSO can be described as LTP. | Yes      | No    | No            |

Is LTP relevant for the detection of climate change?

There was confusion about the exact meaning of the IPCC definition about detection. The definition reads: “Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”

Bunde and Koutsoyiannis both think detection is mainly a matter of statistics while Benestad thinks it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.

Table 4

|                                             | Benestad                                        | Bunde | Koutsoyiannis                |
|---------------------------------------------|-------------------------------------------------|-------|------------------------------|
| Is detection purely a matter of statistics? | No, laws of physics set fundamental constraints | Yes   | Not purely but primarily yes |

LTP versus AR(1)
Bunde and Koutsoyiannis argue that LTP is the proper model to describe temperature variability and that climate scientists in general use a short-term persistence (STP) model like AR(1), which leads to a strong overestimation of the significance of trends (Table 5). Koutsoyiannis showed that the clustering of warm years, for example, is orders of magnitude more likely under an LTP model. Benestad agrees that the AR(1) model may not necessarily be the best model. He argues, however, that statistical models in general, and LTP in particular, involve circular reasoning when used for trend detection in the so-called instrumental period, because in this period the data embed both “signal” and “noise”; in his opinion LTP, STP, or any other statistical model is meant to describe the “noise” only (Table 6). In response, Koutsoyiannis gave a few examples of why, in his opinion, the charge of circular reasoning is not justified in this case (see Extended Summary).

Table 5

|                                                                        | Benestad                          | Bunde          | Koutsoyiannis  |
|------------------------------------------------------------------------|-----------------------------------|----------------|----------------|
| Is LTP relevant/important for the statistical significance of a trend? | Yes (though physics still needed) | Yes, very much | Yes, very much |

Table 6

|               | What is the relevance of LTP for the detection of climate change? |
|---------------|-------------------------------------------------------------------|
| Benestad      | Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data. State-of-the-art detection and attribution is needed. |
| Bunde         | For detection and estimation of external trends (“detection problem”) one needs a statistical model and LTP is the best model to do this. |
| Koutsoyiannis | LTP is the only relevant statistical model for the detection of changes in climate. |

Table 7

|                                                                                                            | Benestad                                                               | Bunde | Koutsoyiannis |
|------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------|-------|---------------|
| Is the AR(1) model a valid model to describe the variability in time series of global average temperature? | No, if physics-based information is neglected                          | No    | No            |
| Does the AR(1) model lead to overestimation of the significance of trends?                                 | Yes, if you don’t also take into account the physics-based information | Yes   | Yes           |

LTP and chaos
There was disagreement about the relation between LTP and chaos (Table 8). According to Benestad, chaos theory implies that the memory of the initial conditions is lost after a finite time interval. Benestad interprets “the system loses memory” as “LTP is not a useful concept”. Koutsoyiannis considers memory a bad interpretation of LTP: it is the change which produces the LTP, and thus LTP is fully consistent with the chaotic behaviour of climate.

Table 8

|                                                                                     | Benestad                                      | Bunde                                                                                                 | Koutsoyiannis                              |
|-------------------------------------------------------------------------------------|-----------------------------------------------|-------------------------------------------------------------------------------------------------------|--------------------------------------------|
| Is the climate chaotic?                                                             | Yes                                           | Yes                                                                                                   | Yes                                        |
| Does chaos mean memory is lost and does this apply for climatic timescales as well? | Yes                                           | Chaos is not a useful concept for describing the variability of climate records on longer time scales | No; LTP is not memory                      |
| Does chaos exclude the existence of LTP?                                            | Yes, at both weather and climatic time scales | No                                                                                                    | No; on the contrary, chaos can produce LTP |
| Does chaos contribute to the existence of LTP?                                      | No, but chaos may give an impression of LTP   | Yes                                                                                                   | Yes, LTP does involve chaos                |

Signal and noise
There was disagreement about concepts like signal and noise. According to Benestad the term “signal” refers to manmade climate change; “noise” usually means everything else, and LTP is ‘noise in slow motion’ (Table 9). Koutsoyiannis argued that the “signal” vs. “noise” dichotomy is subjective and that everything we see in the climate is signal; to isolate one factor and call its effect “signal” may be misleading in view of the nonlinear chaotic behavior of the system. Bunde does assume there is an external deterministic trend from the greenhouse gases, but he calls the remaining part of the total climate signal natural “fluctuations”, not noise (Table 9). All three seem to agree that one cannot use LTP to make a distinction between forced and unforced changes in the climate (Table 10).

Table 9

|               | Signal versus noise |
|---------------|---------------------|
| Benestad      | The signal is manmade climate change; the rest is noise and LTP is noise in slow motion. |
| Bunde         | My working hypothesis: there is a deterministic external trend; the rest are natural fluctuations which are best described by LTP. |
| Koutsoyiannis | Excepting observation errors, everything we see in climate is signal. |

Table 10

|                                                                                           | Benestad | Bunde | Koutsoyiannis |
|-------------------------------------------------------------------------------------------|----------|-------|---------------|
| Is the signal versus noise dichotomy meaningful?                                          | Yes      | Yes   | No*           |
| Can LTP distinguish between forced and unforced components of the observed change?        | No       | No    | *             |
| Can LTP distinguish between natural fluctuations (including natural forcings) and trends? | No       | Yes   | *             |

* Koutsoyiannis thinks that even the formulation of these questions, which implies that the description of a complex process can be made by partitioning it into additive components and trying to know the signature of each component, indicates a linear view of a system that is intrinsically nonlinear.

Forced versus unforced
According to Bunde, natural forcing plays an important role in LTP and is omnipresent in the climate. Koutsoyiannis agreed that (changing) forcing can introduce LTP and that forcing is omnipresent, but added that LTP can also emerge from the internal dynamics alone.

Table 11

|                                                         | Benestad | Bunde                                                                             | Koutsoyiannis                                                                         |
|---------------------------------------------------------|----------|-----------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Does forcing introduce LTP?                             | Yes      | Yes                                                                               | Yes                                                                                   |
| Is forcing omnipresent in the real world climate?       | Yes      | Yes                                                                               | Yes                                                                                   |
| What according to you is the main mechanism behind LTP? | Forcings | Natural forcing plays an important role for the LTP and is omnipresent in climate | I believe it is the internal dynamics that determines whether or not LTP would emerge |

Is the warming significant?
The three participants gave different answers to the key question of this Climate Dialogue, namely whether the warming of the past 150 years is significant or not, and they used different methods to answer it. Benestad is most confident that both the changes in land and in sea temperatures are significant. Bunde concludes that, due to strong long-term persistence, the increase in sea temperature is not significant, but that the increases in land and global temperature are. Koutsoyiannis concludes that for most time lags the warming is not significant; in some cases it may be.

Table 12

|                                                                                                            | Benestad (I) | Bunde (II) | Koutsoyiannis (III) |
|------------------------------------------------------------------------------------------------------------|--------------|------------|---------------------|
| Is the rise in global average temperature during the past 150 years statistically significant?             | Yes          | Yes (IV)   | No (V)              |
| Is the rise in global average sea surface temperature during the past 150 years statistically significant? | Yes          | No         | No                  |
| Is the rise in global average land surface temperature during the past 150 years statistically significant? | Yes         | Yes        | No                  |

(I) Benestad’s conclusions are based on the difference between GCM simulations with and without anthropogenic forcing (Box 10.1 or Figs 10.1 & 10.7 in AR5).

(II) Based on the Detrended Fluctuation Analysis (DFA) and/or the wavelet technique (WT).

(III) Based on the climacogram and different time lags (30, 60, 90 and 120 years).

(IV) This change is 99% significant according to Bunde.

(V) For a 90-year time lag and a 1% significance level it may be significant (see guest blog).

Is there a large contribution of greenhouse gases to the warming?
Bunde is more convinced than Koutsoyiannis of a substantial role for greenhouse gases in the climate, although he admits he cannot rule out that the warming on land is (partly) due to urban heating. Bunde said he may not fully agree with Koutsoyiannis: “We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.” Koutsoyiannis believes the influence of greenhouse gases is relatively weak, “so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes”. Benestad, on the other hand, said the increased concentrations of GHGs are the only plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and the accumulation of ocean heat.

Table 13

|                                                | Benestad                                                                                                       | Bunde                            | Koutsoyiannis                          |
|------------------------------------------------|----------------------------------------------------------------------------------------------------------------|----------------------------------|----------------------------------------|
| Is the warming mainly of anthropogenic origin? | The combination of statistical information and physics knowledge leads to only one plausible explanation: GHGs | Yes, it is my working hypothesis | No, I think the effect of CO2 is small |

Extended summary of the Climate Dialogue on long-term persistence


Author: Marcel Crok
With contributions from Bart Verheggen, Rob van Dorland (KNMI) and Bart Strengers (PBL)


Introduction

This summary is based on the contributions of the three invited scientists who participated in the dialogue entitled “Long-term persistence and trend significance”. We want to thank Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis for their participation.

The summary is not meant to be a consensus statement. It’s just a summary of the discussion and should give a good overview of how these three scientists view the topic at this moment. This summary was written by Marcel Crok and then reviewed and adjusted by the other editors of Climate Dialogue and the advisory board members. In some cases the editors disagreed about the text. In the summary we make clear when this is the case.

The summary was then reviewed by the three invited participants. They do not necessarily endorse the full text or our selection of the dialogue. We did ask them to check the claims in all the tables though in order to make these consistent with their views.

Long term persistence and trend significance
In science one often asks whether a change in some parameter, variable or process is statistically significant. So we could ask: is the increase in global average temperature statistically significant? Whether an observed trend is significant or not is related to its chance of occurrence, and thus depends on the underlying variability, noise and errors, as well as on their temporal stochastic structure.


Figure 2.14 from the IPCC AR5 WGI report. Global annual average land-surface air temperature (LSAT) anomalies relative to a 1961–1990 climatology from the latest versions of four different data sets (Berkeley, CRUTEM, GHCN and GISS).

The temperature time series in the figure above shows variation on annual and multi-decadal scales. But how do we know whether this trend is part of the natural variability of the climate system or due to some forced change, and whether it is significant or not? Is the increase very unlikely, or quite normal in terms of natural variability?

This is a statistical problem, and thus the way to look at it is to make a statistical analysis of the time series to determine the amplitude of natural variability. For instance, to answer the question for the year-to-year variability we would need to know, for every single time step (year), what the chance is to go up or down and how strong these excursions can be. The difficulty is that we don’t have a data set for the “undisturbed” climate, i.e. the climate without anthropogenic influences, which could be used as a reference to assess the significance of the recent warming trend. There are, however, many proxy data sets that can be used to infer the stochastic structure of natural climatic variability. These data sets do not describe the climate precisely, but they can certainly give information on its stochastic structure, and they are free of anthropogenic influences.

With such issues we enter the arena of the Climate Dialogue about long-term persistence (LTP).

Definition of long-term persistence
Both Bunde and Koutsoyiannis showed figures in their guest blogs to explain the difference between independent or uncorrelated data and long-term correlated data. Below is the graph that Bunde showed:


Figure 1 of Bunde’s guest blog showing the difference between uncorrelated data (left) and data with long-term persistence (right). The coefficient γ (gamma) is a measure of persistence.

As he explained: “For the uncorrelated data, the moving average is close to zero, while for the LTP data, the moving average can have large deviations from the mean, forming some kind of mountain-valley structure that looks as if it contained some external deterministic trend. The figure shows that it is not a straightforward task to separate the natural fluctuations from an external trend, and this makes the detection of external trends in LTP records a difficult task.”

Koutsoyiannis said it as follows: “No one would believe that the weather this hour does not depend on that an hour ago. It is natural to assume that there is time dependence in weather. (…) Now, if we average the process to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases the variability decreases—but not as much as implied by classical statistics.”
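Koutsoyiannis's scaling claim, that averaging to larger scales reduces variability more slowly than classical statistics implies, is easy to illustrate numerically. The sketch below is ours, not from the dialogue; it uses the standard toy device of superposing a few AR(1) processes with widely separated time scales, which mimics LTP over a range of scales (and incidentally shows one way internal dynamics alone can produce it).

```python
import numpy as np

def ar1(n, a, rng):
    """AR(1) process x[t] = a*x[t-1] + noise, with unit stationary variance."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    eps = rng.standard_normal(n) * np.sqrt(1.0 - a * a)
    for t in range(1, n):
        x[t] = a * x[t - 1] + eps[t]
    return x

def std_of_scale_averages(x, m):
    """Standard deviation of the series averaged over blocks of length m."""
    k = x.size // m
    return x[:k * m].reshape(k, m).mean(axis=1).std()

rng = np.random.default_rng(0)
n = 2 ** 16
white = rng.standard_normal(n)                       # "uncorrelated climate"
# Superposing AR(1) components with well-separated time scales gives an
# LTP-like process over a wide range of aggregation scales.
mix = sum(ar1(n, a, rng) for a in (0.6, 0.95, 0.995)) / np.sqrt(3)

for m in (1, 16, 256):
    print(m, std_of_scale_averages(white, m), std_of_scale_averages(mix, m))
# For white noise the std of m-averages falls like m**-0.5 (classical
# statistics); for the mixture it falls much more slowly.
```

For the white series the standard deviation at scale 256 is about one sixteenth of the scale-1 value; for the mixture it stays a sizeable fraction of it, which is exactly the "variability decreases, but not as much as classical statistics implies" behaviour.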

Benestad gave the following description of LTP: “Long-term persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large space take longer time than local changes.”

So they all accept that LTP ‘exists’ in the climate, or is part of it. There were disagreements though, even about the concept of LTP. Bunde and Koutsoyiannis are both in favour of a formal (mathematical) definition of LTP, which captures what Koutsoyiannis said above: that on longer time scales variability decreases—but not as much as implied by more “classical statistics” like AR(1)[1].

Benestad found this proposition “somewhat artificial” when dealing with temperature time series. He said a great deal of variance is usually removed before the data is analysed, like seasonal variations and the diurnal cycle. “Most of the variance is tied up to these well-known cycles, forced by regional changes in incoming sunlight. Furthermore, ENSO has a time scale that is ~3-8 years, and is associated with most of the variance after the seasonal and diurnal scales are neglected.” Elsewhere Benestad said: “There are some known examples of LTP processes, such as the ice ages, changes in the ocean circulation, and the El Niño Southern Oscillation.” Bunde disagreed that ENSO is an LTP process: “Rasmus [Benestad] will recognize that ENSO is not an example of LTP, in the same way as other quasi-oscillatory phenomena cannot be described as LTP.”

So there is confusion/disagreement about what LTP really “is”. The reason could be that for Bunde and Koutsoyiannis LTP is a statistical property of climatic time series and according to Bunde, as such, it is not an “abstract issue”.

A key issue seemed to be whether it is possible to talk about LTP in terms of physical processes. Koutsoyiannis thinks the system is just too complex to talk about simple physical causes for observed changes and he does not accept the dichotomy physics vs. statistics as in complex physical systems a statistical description is the most pertinent and the most powerful.

According to Koutsoyiannis it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. For him it is “the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP”. For Benestad LTP is a manifestation of memory in the climate system.

Summary
To answer the question “is the increase in global average temperature statistically significant?” one needs to make assumptions about the statistical nature of the time series and one needs to choose what statistical model is the most appropriate.

If the temperature of this year is not related to that of last year or next year, we can use classical statistics to determine whether the increase in global temperature is significant or not. In such an “uncorrelated climate” the moving average quickly settles at zero (or some fixed value) and deviations from the mean are short-lived. If there is (strong) temporal dependence, though, the moving average can show large and long-lasting deviations from the mean. Bunde and Koutsoyiannis claim the climate displays such long-term dependence or long-term persistence (LTP).

The three participants agree that LTP exists in the climate (Table 1). They disagree about the exact definition though and about the physical processes that lie behind it. Benestad and Bunde describe LTP in terms of “long memory”. Koutsoyiannis says that in his opinion LTP is mainly the result of the irregular and unpredictable changes that take place in the climate (Table 2). Bunde and Koutsoyiannis are both in favour of a formal (mathematical) definition of LTP, which states that on longer time scales climate variability decreases—but not as much as implied by “classical statistics” such as AR(1).

Benestad said that ice ages and the El Niño Southern Oscillation are examples of LTP processes. Bunde and Koutsoyiannis disagreed and said that quasi-oscillatory phenomena cannot be described as LTP (Table 3).

Table 1

|                                | Benestad | Bunde | Koutsoyiannis |
|--------------------------------|----------|-------|---------------|
| Does LTP exist in the climate? | Yes      | Yes   | Yes           |

Table 2

|               | What is long-term persistence (LTP)? |
|---------------|--------------------------------------|
| Benestad      | LTP describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. |
| Bunde         | LTP is a process with long memory; the value of a parameter (e.g. temperature) today depends on all previous points. |
| Koutsoyiannis | It is unfortunate that LTP has been interpreted as memory; it is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP, while the autocorrelation results from change. |

Table 3

|                                                                | Benestad | Bunde | Koutsoyiannis |
|----------------------------------------------------------------|----------|-------|---------------|
| Quasi-oscillatory phenomena like ENSO can be described as LTP. | Yes      | No    | No            |


Is LTP relevant for the detection of climate change?

The full IPCC definitions of detection and attribution in AR5 are (our emphasis)[2]:

“Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”

Attribution is defined as “the process of evaluating the relative contributions of multiple causal factors to a change or event with an assignment of statistical confidence”. As this wording implies, attribution is more complex than detection, combining statistical analysis with physical understanding.

The definition of detection has been differently interpreted by the members of the Editorial Staff:

Interpretation 1 (Rob van Dorland, Bart Verheggen): the second part clarifies the first part: you need (some defined) statistical model to distinguish between forced and unforced (internal variability) change. The first part states that this is done without knowing the cause of the forced change, i.e. whether it is anthropogenic or natural (sun, volcanoes etc.).

Interpretation 2 (Marcel Crok): The first part of the definition suggests that you only need statistics to do detection. The second part suggests you need more than statistics (physical models), unless a statistical method would be able to distinguish between forced changes and internal variability. So the definition is self-contradictory.

Bunde and Koutsoyiannis both think detection is mainly a matter of statistics while Benestad thinks it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.

Bunde wrote: “For detection and estimation of external trends (“detection problem”) one needs a statistical model.” Koutsoyiannis preferred the word “primarily” instead of “purely”: “I would say it is primarily a statistical problem, but I would not use the adverb “purely”. Besides, as we wrote in Koutsoyiannis/Montanari (2007)[3], even the very presence of LTP should not be discussed using merely statistical arguments.”

Benestad wrote: “Hence, the diagnosis (“detection”) of a climate change is not purely a matter of statistics. The laws of physics set fundamental constraints which let us narrow down to a small number of ‘suspects’. For complete probability assessment, we need to take into account both the statistics and the physics-based information, such as the fact that GHGs absorb infrared light and thus affect the vertical energy flow through the atmosphere.”

Bart Verheggen wrote the following analysis of this part of the discussion: “This discussion showed that the participants used a slightly different operational definition of detection. Benestad followed the first interpretation of the IPCC definition, i.e. testing the significance of observed changes relative to what is expected from only unforced internal variability. Bunde and Koutsoyiannis take detection to mean testing the significance of observed change w.r.t. some reference period without anthropogenic forcings (but with natural forcings). The latter definition in effect sets a higher bar for detection than the former (as the observed trend has to exceed not just unforced internal variability, but also the effect of natural forcings). These differences are probably rooted in different perceptions of what internal variability is (and whether or not it is different in principle from natural forcings).”

Summary
There was confusion about the exact meaning of the IPCC definition about detection. The definition reads: “Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”

Bunde and Koutsoyiannis both think detection is mainly a matter of statistics and that it is very relevant for the detection of climate change. Benestad on the other hand thinks detection is not mainly a statistical issue and that it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.

Table 4

|                                             | Benestad                                        | Bunde | Koutsoyiannis                |
|---------------------------------------------|-------------------------------------------------|-------|------------------------------|
| Is detection purely a matter of statistics? | No, laws of physics set fundamental constraints | Yes   | Not purely but primarily yes |


LTP versus AR(1)

Bunde’s main conclusion in his guest blog was: “My conclusion is that the AR1 process falsely used by climate scientists to describe temperature variability leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower.” The AR(1) process is the simplest model for short-term persistence (STP). So Bunde is saying several things here: 1) LTP is the proper model to describe temperature variability; 2) climate scientists in general use an STP model like AR(1); and 3) this leads to a strong overestimation of the significance of trends.

In a comment Bunde added that “This crucial mistake appeared also in the IPCC report [AR4] since the authors were (…) not aware of the LTP of the climate. They assumed STP [in table 3.2] and thus got the trend estimations wrong by overestimating the significance.”

Koutsoyiannis agrees with Bunde’s conclusions. In figure 1 of his guest blog he showed that the clustering of warm years, for example, is orders of magnitude more likely to happen if you use an LTP model. “We may see, for example, that what, according to the classical statistical perception, would require the entire age of the Earth to occur once (i.e. clustering of 8-9 events) is a regular event for an HK [Hurst-Kolmogorov] climate[4], with probability on the order of 1-10%.” He added that “this dramatic difference can help us understand why the choice of a proper stochastic model is relevant for the detection of changes in climate.” In a comment he also said that “a Markov [AR(1)] process […] finally produces a static climate […]. The truth is, however, that climate on Earth has never been static.”

Benestad agrees that “the AR1 model may not necessarily be the best model”, but adds that “it is difficult to know exactly what the noise looks like in the presence of a forced signal.” Elsewhere he wrote: “The important assumptions are therefore that the statistical trend models, against which the data are benchmarked, really provide a reliable description of the noise.”

In the discussion of this summary Benestad disagreed with Bunde’s claim that the AR(1)-process is falsely used by climate scientists and the IPCC. According to Benestad, Table 3.2 in AR4 as mentioned by Bunde is not seriously arguing that internal variability is AR(1), but merely uses this method as a crude estimate of the trend significance for that particular plot. Benestad: “The relevant question is whether the trend is anthropogenic or due to LTP (or signal versus noise) and to answer this question, you must look at chapter 9 in AR4 on detection and attribution and in particular figure 9.5 on the comparison between global mean surface temperature anomalies from observations and model simulations, and not at Table 3.2. In chapter 9 there are zero hits on ‘AR(1)’.”

He warns of the danger of circular reasoning when using statistical models. “It is the way models are used that really matters, rather than the specific model itself. All models are based upon a set of assumptions, and if these are violated, then the models tend to give misleading answers. Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data, because these data embed both signal and noise.”

This is a key argument of Benestad’s. He claims statistical models are useless when applied to what is called the instrumental period, because in this period the data embed both “signal” and “noise”, whereas in his view statistical models, whether LTP or STP, are meant to describe “the noise” only. Benestad therefore favours other methods: “State-of-the-art detection and attribution work do not necessarily rely on the AR1 concept, but use results from climate models and error-covariance matrices based on the model results to evaluate trends, rather than simple AR(1) methods.”

Koutsoyiannis in response gave a few examples of why in his opinion the concern about circular reasoning is not justified in this case. In his first example he divided the global average time series into two parts: “The HadCrut4 data set is 163 years long. So, let us exclude the last 63 years and try to estimate H [the Hurst coefficient][5] based on the 100-year long period 1850-1949. The Hurst coefficient estimate becomes 0.93 instead of 0.94 for the entire period.” So he disagrees that the global average temperature time series cannot be used because the record is ‘contaminated’ by anthropogenic forcing. He also referred to analyses of proxies by Koutsoyiannis and Montanari (2007), who estimated high values of the Hurst coefficient (H between 0.86 and 0.93) for the period 1400-1855, and by Markonis and Koutsoyiannis (2013)[6], who showed that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years.
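This kind of subperiod robustness check is easy to sketch. Below is a classical rescaled-range (R/S) estimator of the Hurst coefficient, a simplified stand-in for the estimators the participants actually use, applied to a synthetic white-noise record and to its first half; for white noise both estimates should land in the same neighbourhood, near 0.5 (the small-sample R/S estimator is known to be biased slightly upward):

```python
import numpy as np

rng = np.random.default_rng(2)

def rescaled_range(x):
    # R/S statistic of one window: range of cumulative deviations over std
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst_rs(x, scales=(32, 64, 128, 256)):
    # average R/S over non-overlapping windows; the log-log slope estimates H
    avg = [np.mean([rescaled_range(x[i:i + k])
                    for i in range(0, len(x) - k + 1, k)]) for k in scales]
    return np.polyfit(np.log(scales), np.log(avg), 1)[0]

x = rng.standard_normal(4096)           # H = 0.5 by construction
print(hurst_rs(x), hurst_rs(x[:2048]))  # full record vs first half
```

Koutsoyiannis’ point is that the HadCrut4 estimate behaves the same way: discarding the allegedly ‘contaminated’ last 63 years barely moves the estimate (0.93 versus 0.94), which he reads as evidence that the LTP is not an artefact of the anthropogenic signal.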

However, Benestad is in favour of separating forced and unforced climate change (as the definition of detection implies), and part of the 1850-1949 temperature change is due to (natural) forcing. This implies, in his view, that it is difficult to draw conclusions such as Koutsoyiannis’s. There was no further discussion on this issue.

Summary
Bunde and Koutsoyiannis argue that LTP is the proper model to describe temperature variability, that climate scientists (and the IPCC) in general use an STP model like AR(1) and that this leads to a strong overestimation of the significance of trends (Tables 5 and 7). Koutsoyiannis showed that the clustering of warm years, for example, is orders of magnitude more likely to happen if you use an LTP model. Benestad agrees that the AR(1) model may not necessarily be the best model, but in his view statistical models are in general useless when applied to what is called the instrumental period, because in this period the data embed both “signal” and “noise”, whereas statistical models, whether LTP or STP, are meant to describe “the noise” only: “Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data because this data embed both signal and noise.” (Table 6). Koutsoyiannis in response gave a few examples of why in his opinion the concern about circular reasoning is not justified in this case.


Table 5

Is LTP relevant/important for the statistical significance of a trend?
Benestad: Yes (though physics still needed)
Bunde: Yes, very much
Koutsoyiannis: Yes, very much

Table 6

What is the relevance of LTP for the detection of climate change?
Benestad: Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data. State-of-the-art detection and attribution is needed.
Bunde: For detection and estimation of external trends (“detection problem”) one needs a statistical model and LTP is the best model to do this.
Koutsoyiannis: LTP is the only relevant statistical model for the detection of changes in climate.

Table 7

Is the AR(1) model a valid model to describe the variability in time series of global average temperature?
Benestad: No, if physics-based information is neglected
Bunde: No
Koutsoyiannis: No

Does the AR(1) model lead to an overestimation of the significance of trends?
Benestad: Yes, if you don’t also take into account the physics-based information
Bunde: Yes
Koutsoyiannis: Yes

LTP and chaos

There was disagreement about the relation between LTP and chaos. Obviously, the participants agree that the climate system possesses a chaotic component, but they differ on the extent and time scales of this component. For Benestad the implication is that LTP cannot be a valid concept for the climate on longer time scales. For example, in his first reaction to Bunde’s guest blog Benestad wrote: “I presently think one major weakness in your reasoning is [when you say that] ‘in LTP records, in contrast, xi depends on all previous points.’ This cannot be true if the weather evolution is chaotic, where the weather system loses the memory of the initial state after some bifurcation point.”

In another comment Benestad wrote: “The so-called ‘butterfly effect’, an aspect of ‘chaos’ theory, is well-established with meteorology, which means there is a fundamental limit to the predictability of future weather due to the fact that the system loses the memory of the initial state after a certain time period. (…) For geophysical processes, chaos plays a role and may give an impression of LTP, and still the memory of the initial conditions is lost after a finite time interval.”

Koutsoyiannis referred to some of his papers and said that yes, “these publications show that LTP does involve chaos.” In another comment that dealt with untangling the different causes of climate change he said: “in chaotic systems described by nonlinear equations, the notion of a cause may lose its meaning as even the slightest perturbation may lead, after some time, to a totally different system trajectory (cf. the butterfly effect).” So Koutsoyiannis and Benestad largely agree about how chaotic systems behave. The main difference is that Benestad interprets “the system loses memory” as “LTP is not a useful concept”, and here Koutsoyiannis and Bunde disagree with him. In particular, Koutsoyiannis considers memory a poor interpretation of LTP: it is change which produces the LTP, and thus LTP is fully consistent with the chaotic behaviour of climate.

Summary
There was disagreement about the relation between LTP and chaos (Table 8). According to Benestad chaos theory implies that the memory of the initial conditions is lost after a finite time interval; he interprets “the system loses memory” as “LTP is not a useful concept”. Koutsoyiannis considers memory a poor interpretation of LTP: it is change which produces the LTP, and thus LTP is fully consistent with the chaotic behaviour of climate.

Table 8

Is the climate chaotic?
Benestad: Yes
Bunde: Yes
Koutsoyiannis: Yes

Does chaos mean memory is lost and does this apply for climatic timescales as well?
Benestad: Yes
Bunde: Chaos is not a useful concept for describing the variability of climate records on longer time scales
Koutsoyiannis: No; LTP is not memory

Does chaos exclude the existence of LTP?
Benestad: Yes, at both weather and climatic time scales
Bunde: No
Koutsoyiannis: No; on the contrary, chaos can produce LTP

Does chaos contribute to the existence of LTP?
Benestad: No, but chaos may give an impression of LTP
Bunde: Yes
Koutsoyiannis: Yes, LTP does involve chaos


Signal and noise

There was disagreement about concepts like signal and noise. In his guest blog Benestad wrote: “The term ‘signal’ can have different meanings depending on the question, but here it refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.”

Koutsoyiannis disagreed with this distinction: “I would never agree with your term “noise” to describe the natural change. Nature’s song cannot be called “noise”. Most importantly, your “signal” vs. “noise” dichotomy is something subjective, relying on incapable deterministic (climate) models and on, often misused or abused, statistics.”

In another comment Koutsoyiannis elaborated on this point: “The climate evolution is consistent with physical laws and is influenced by numerous factors, whether these are internal to what we call climate system or external forcings. To isolate one of them and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system.”

Bunde seems to take a position in between Benestad and Koutsoyiannis. He does assume, as a working hypothesis, that there is an external deterministic trend from the greenhouse gases, but he calls the remaining part of the total climate signal natural “fluctuations” and not noise. Bunde: “we have to note that we distinguish between natural fluctuations and trends. When looking at a LTP curve, we cannot say a priori what is trend and what is LTP. (...) The LTP is natural, the trend is external and deterministic.”

The distinction between signal and noise is another way of stating what detection aims to do: distinguishing whether the (forced) changes are significantly outside the bounds of the unforced or internal variability. All three appear to agree that purely on the basis of LTP, this distinction cannot be made.

Summary
There was disagreement about concepts like signal and noise. According to Benestad the term ‘signal’ refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’ (Table 9). Koutsoyiannis argued that the “signal” vs. “noise” dichotomy is subjective and that everything we see in the climate is signal. To isolate one factor and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system. Bunde does assume there is an external deterministic trend from the greenhouse gases but he calls the remaining part of the total climate signal natural “fluctuations” and not noise (Table 9). All three seem to agree that one cannot use LTP to make a distinction between forced and unforced changes in the climate (Table 10).

Table 9

Signal versus noise
Benestad: The signal is manmade climate change; the rest is noise and LTP is noise in slow motion.
Bunde: My working hypothesis: there is a deterministic external trend; the rest are natural fluctuations which are best described by LTP.
Koutsoyiannis: Excepting observation errors, everything we see in climate is signal.

Table 10

Is the signal versus noise dichotomy meaningful?
Benestad: Yes
Bunde: Yes
Koutsoyiannis: No*

Can LTP distinguish between forced and unforced components of the observed change?
Benestad: No
Bunde: No
Koutsoyiannis: *

Can LTP distinguish between natural fluctuations (including natural forcings) and trends?
Benestad: No
Bunde: Yes
Koutsoyiannis: *

* Koutsoyiannis thinks that even the formulation of these questions, which implies that the description of a complex process can be made by partitioning it into additive components and trying to know the signature of each component, indicates a linear view of a system that is intrinsically nonlinear.


Forced versus unforced

In our introduction we introduced the three climate influences that climate scientists distinguish:

“Most experts agree that three types of processes (internal variability, natural and anthropogenic forcings) play a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (< 10 %) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’ which include not only observations but also physical insights in the climate processes.”

There was a lot of discussion about the physical mechanisms behind LTP. Bart Verheggen of the Climate Dialogue team asked a series of questions about this: Can we agree that forcing introduces LTP? Can we agree that forcing is omnipresent for the real world climate? Is LTP mainly internal variability or the result of a combination of internal variability and natural forcings?

Bunde replied that “Natural Forcing plays an important role for the LTP and is omnipresent in climate (so yes and yes to first two questions).” Koutsoyiannis also agreed that “(changing) forcing can introduce LTP and that it [forcing] is omnipresent. But LTP can also emerge from the internal dynamics alone as the above examples show. Actually, I believe it is the internal dynamics that determine whether or not LTP would emerge.”

Verheggen concluded: “All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP cannot be used to distinguish forced from unforced changes in global average temperature. The omnipresence of both unforced and forced changes means that it’s very difficult (if not impossible) to know the LTP signature of each. Therefore, LTP by itself doesn’t seem to provide insight into the causal relationships of change. It is however relevant for trend significance, but fraught with challenges since the unforced LTP signature is not known.”

Summary
According to Bunde natural forcing plays an important role for LTP and is omnipresent in climate. Koutsoyiannis agreed that (changing) forcing can introduce LTP and that forcing is omnipresent, but LTP can also emerge from the internal dynamics alone.

Table 11

Does forcing introduce LTP?
Benestad: Yes
Bunde: Yes
Koutsoyiannis: Yes

Is forcing omnipresent in the real-world climate?
Benestad: Yes
Bunde: Yes
Koutsoyiannis: Yes

What, according to you, is the main mechanism behind LTP?
Benestad: Forcings
Bunde: Natural forcing plays an important role for the LTP and is omnipresent in climate
Koutsoyiannis: I believe it is the internal dynamics that determines whether or not LTP would emerge

Is the warming significant?

This brings us to one of the key questions in this climate dialogue: do you conclude there is a significant warming trend? The participants used different models and methods to answer this question, and understanding their different views requires a detailed understanding of these methods, which is outside the scope of this dialogue. So here we just mention the differences and focus on the results.

Benestad preferred to use a regression analysis of the global average temperature against known climate forcings, as these may be considered additional information with respect to any statistical model. The results are shown in his figure 1:


Benestad’s Figure 1. The recorded changes in the global mean surface temperature over time (red). The grey curve shows a model calculation of this temperature based on greenhouse gases (GHGs), ozone (O3), and changes in the sun (S0).

Benestad: “The probability that this fit [in the regression analysis] is accidental is practically zero if we assume that the temperature variations from year to year are independent of each other. LTP and the oceans’ inertia will imply that the degrees of freedom is lower than the number of data points, making it somewhat more likely to happen even by chance.” Benestad says it is very likely that the main physical causes of the change are clear and that greenhouse gases are the main contributors to the warming since the middle of the 20th century (as also illustrated by figure 9.15 in AR4 or figure 10.7 in AR5).
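Benestad’s point about reduced degrees of freedom corresponds to a standard correction. A minimal sketch, assuming the noise is AR(1) (the correction for genuine LTP is stronger), uses the common formula n_eff = n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation:

```python
import numpy as np

def n_effective(x):
    # effective sample size under an AR(1) assumption:
    # n_eff = n * (1 - r1) / (1 + r1), r1 = lag-1 autocorrelation
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1 - r1) / (1 + r1)

rng = np.random.default_rng(0)
n, phi = 500, 0.7
x = np.zeros(n)
for i in range(1, n):                  # AR(1) noise with phi = 0.7
    x[i] = phi * x[i - 1] + rng.standard_normal()
print(n_effective(x))  # far fewer effective data points than n = 500
```

Under LTP the reduction is more severe: Koutsoyiannis has argued that the effective sample size then scales roughly as n^(2 - 2H), so for H near 0.9 a 150-year record carries the information of only a few independent values.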

Koutsoyiannis asked Benestad whether his model shown in his Figure 1 is free of circular reasoning, i.e. whether he at least split the data into two periods, one for modelling and one for validation. Benestad left the question unanswered and there was no further discussion on this issue.

Bunde and Koutsoyiannis use different statistical methods. Bunde explained that “nowadays, there is a large number of methods available that is able to detect the natural fluctuations in the presence of simple monotonous trends. Two of them are the detrended fluctuation analysis (DFA) and the wavelet technique (WT).”
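For readers unfamiliar with the method, first-order detrended fluctuation analysis can be sketched in a few lines (a simplified illustration, not Bunde’s actual implementation): integrate the series, remove a linear fit within each window, and read the scaling exponent off the log-log slope of the fluctuation function F(s):

```python
import numpy as np

def dfa1(x, scales=(8, 16, 32, 64)):
    # DFA-1: integrate the series, detrend each window linearly,
    # and measure the RMS fluctuation F(s) at each window size s
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        t = np.arange(s)
        resid = []
        for i in range(0, len(y) - s + 1, s):
            seg = y[i:i + s]
            fit = np.polyval(np.polyfit(t, seg, 1), t)
            resid.append(np.mean((seg - fit) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    # F(s) ~ s**alpha; for stationary noise alpha estimates the Hurst coefficient
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)
print(dfa1(x))  # close to 0.5 for white noise; LTP gives values nearer 1
```

A persistent record yields an exponent well above 0.5, and Bunde’s significance bounds for natural fluctuations then follow from that exponent.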

Based on these methods Bunde reached the following conclusions:
“(i) The global sea surface temperature increased, in the past 100y, by about 0.6 degree, which is not significant. The reason for this is the large persistence of the oceans, reflected by a large Hurst exponent.
(ii) The global land air temperature, in the past 100 years, increased by about 0.8 degrees. We find this increase even highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.”

Koutsoyiannis used a different method to identify and quantify the LTP which he calls a climacogram. Koutsoyiannis is the most ‘skeptical’ of the three participants when it comes to the significance of trends: “Assuming that the data set we used is representative and does not contain substantial errors, the only result that we can present as fact is that in the last 134 years the climate has warmed by 0.6°C (this is a difference of climatic—30-year average—values while other, often higher, values that appear in the literature refer to trends based on annual values). Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is.”


Koutsoyiannis’ Figure 5: Testing lagged climatic differences based on the HadCrut4 data set (1850-2012). Differences are not statistically significant according to Koutsoyiannis, except maybe for the 90-year lag.
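The climacogram itself is a simple object: the variance of the time-averaged process as a function of the averaging scale. A minimal sketch with synthetic data (our illustration): for independent noise the variance of k-year means falls off like 1/k, whereas under LTP it falls off like k^(2H-2), i.e. much more slowly, which is why climatic (30-year) averages can wander much further by chance than classical statistics suggests:

```python
import numpy as np

def climacogram(x, scales):
    # variance of non-overlapping k-sample averages, for each scale k
    return [np.var([x[i:i + k].mean() for i in range(0, len(x) - k + 1, k)])
            for k in scales]

rng = np.random.default_rng(5)
scales = [1, 2, 5, 10, 30]
white = rng.standard_normal(6000)
for k, v in zip(scales, climacogram(white, scales)):
    # independent noise tracks the classical 1/k law; an LTP process with
    # Hurst coefficient H would instead decay like k**(2*H - 2)
    print(k, round(v, 3), round(1 / k, 3))
```

With Koutsoyiannis’ estimate of H around 0.94 for HadCrut4, the variance of climatic averages decays roughly like k^(-0.12) instead of k^(-1), which is one way to see why he finds most lagged climatic differences within natural bounds.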

When asked specifically if his results mean that “detection” has not yet taken place, Koutsoyiannis replied: “Yes, I believe it has not taken place. Whether it comes close: It is likely.”

Koutsoyiannis argues that the current temperature signal is not outside the bounds of what could be expected from natural forced and unforced changes, thereby using a stricter criterion (1%) than the standard definition of “detection” (5%). He bases his statement on a higher Hurst coefficient than Bunde uses, which partly explains why Koutsoyiannis and Bunde don’t reach exactly the same conclusions.

Summary
The three participants gave different answers to the key question of this Climate Dialogue, namely whether the warming in the past 150 years is significant or not. They used different methods to answer the question. Benestad is most confident that the changes in both land and sea temperatures are significant. Bunde concludes that due to a high Hurst parameter the rise in sea temperatures is not significant, but that the rises in land and global temperatures are. Koutsoyiannis concludes that for most time lags the warming is not significant; in some cases it may be.

Table 12

Is the rise in global average temperature during the past 150 years statistically significant?
Benestad (I): Yes
Bunde (II): Yes (IV)
Koutsoyiannis (III): No (V)

Is the rise in global average sea surface temperature during the past 150 years statistically significant?
Benestad: Yes
Bunde: No
Koutsoyiannis: No

Is the rise in global average land surface temperature during the past 150 years statistically significant?
Benestad: Yes
Bunde: Yes
Koutsoyiannis: No

(I) Benestad’s conclusions are based on the difference between GCM simulations with and without anthropogenic forcing (Box 10.1 or Figs 10.1 & 10.7 in AR5).
(II) Based on the detrended fluctuation analysis (DFA) and/or the wavelet technique (WT).
(III) Based on the climacogram and different time lags (30, 60, 90 and 120 years).
(IV) This change is 99% significant according to Bunde.
(V) For a 90-year time lag and a 1% significance level it may be significant (see Koutsoyiannis’ guest blog).

Is there a large contribution of greenhouse gases to the warming?

While Bunde and Koutsoyiannis share similar views about the importance of LTP for detection, there are some differences as well which are reflected in the table above. Koutsoyiannis for example does not agree with Bunde’s conclusion that the increase in the global land air temperature in the past 100 years is significant.

Bunde and Koutsoyiannis seem to disagree about the level of LTP (i.e. the value of the Hurst coefficient) in land surface temperature records. Koutsoyiannis believes that on climatic time scales, sea surface temperatures (SSTs) and land surface temperatures (LSTs) should be highly correlated: “I believe if you accept that the sea surface temperature has strong LTP, then logically the land temperature will have too, so I cannot agree that the latter has “comparatively low persistence”. (…) I believe climates on sea and land are not independent to each other—particularly on the long term.” Bunde though thinks that the persistence in the SSTs is higher than that in LSTs.

Bunde is more convinced of a substantial role for greenhouse gases in the climate than Koutsoyiannis, although he admits he cannot rule out that the warming is (partly) due to urban heating. “First of all, from our trend significance calculations we can see, without any doubt, that there is an external temperature trend which cannot be explained by the natural fluctuations of the temperature anomalies. We cannot distinguish between Urban Warming and GHG here, but there are places on the globe where we do not expect urban warming but we still see evidence for an external trend, so we may conclude that it is GHG.”

In another comment Bunde wrote: “as a consequence of the LTP in the temperature data, the error bars are very large, considerably larger than for short-term persistent records. But nevertheless, except for the global sea surface temperature, we have obtained strong evidence from this analysis that the present warming has an anthropogenic origin.” And in another comment: “Regarding GHG [greenhouse gases] I may not fully agree with Demetris [Koutsoyiannis]: We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.”

When we asked Koutsoyiannis whether he believes the influence of greenhouse gases is small he answered: “Yes, I believe it is relatively weak, so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes. In a perpetually varying climate system, GHG and temperature are not connected by a linear, one-way and one-to-one, relationship. I believe climate models and the thinking behind them have resulted in oversimplifying views and misleading results. As far as climate models are not able to reproduce a climate that (a) is chaotic and (b) exhibits LTP, we should avoid basing conclusions on them.”

Benestad on the other hand wrote: “The combination of statistical information and physics knowledge lead to only one plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat. The explanation is the increased concentrations of GHGs.”

Summary
Bunde is more convinced of a substantial role for greenhouse gases in the climate than Koutsoyiannis, although he admits he cannot rule out that the warming on land is (partly) due to urban heating. Bunde said he may not fully agree with Koutsoyiannis: “We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.” Koutsoyiannis believes the influence of greenhouse gases is relatively weak, “so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes”. Benestad on the other hand said the increased concentrations of GHGs are the only plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat.

Table 13

Is the warming mainly of anthropogenic origin?
Benestad: The combination of statistical information and physics knowledge leads to only one plausible explanation: GHGs
Bunde: Yes, it is my working hypothesis
Koutsoyiannis: No, I think the effect of CO2 is small



[1] In statistics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it describes certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values. It is a special case of the more general ARMA model of time series. For more details, see Wikipedia.

[2] Section 10.2.1 in AR5.

[3] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.

[4] Hurst-Kolmogorov is a term that Koutsoyiannis has introduced and which is synonymous with LTP. In his guest blog he explains where it comes from: “A decade before Hurst detected LTP in natural processes, Andrey Kolmogorov devised a mathematical model which describes this behaviour using one parameter only, i.e. no more than in the Markov [AR(1)] model. We call this model the Hurst-Kolmogorov (HK) model.”

[5] The Hurst coefficient H is a measure for long-term persistence. H is a number between 0.5 and 1. The closer to 1, the more persistent a system is.

[6] Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.


Expert comments to Long-term persistence and trend significance

  • Marcel Crok

    Welcome back to Climate Dialogue for our second dialogue. We slightly changed our procedure. This time we asked the invited experts to give a first reaction on the guest blogs of the two other discussants. We hope this will provide a guide for the discussion that follows.
    Marcel Crok

  • Rasmus Benestad

    First comments on the trend discussion by Armin Bunde:

    I think that Bunde provides a nice overview of the statistics behind long-term persistence (LTP) and the mathematical theory. He also provides some examples which suggest that there are a number of areas where processes exhibit LTP. However, I’m not convinced that the climate system necessarily fulfils all the assumptions made in the LTP analysis.
    It is especially the proposition that any observed number xi in an LTP record depends on all previous points which has not been established for geophysical processes. Lorenz [1] published in 1963 the concept of non-deterministic flow, also known as ‘chaos’ [2].

    The so-called ‘butterfly effect’, an aspect of ‘chaos’ theory, is well-established in meteorology, which means there is a fundamental limit to the predictability of future weather due to the fact that the system loses the memory of the initial state after a certain time period. The reason for this is that the future outcome is sensitive to infinitesimally small differences in the description of the state of the atmosphere. This sensitivity can be estimated through a set of Lyapunov exponents.

    Failure to predict long-range climate variability with statistical models suggests that there is little predictable precursory signal (memory) on time scales longer than months over large parts of the world. El Niños are notoriously hard to predict from one year to the next.

    Nevertheless, chaotic systems will look like LTP processes because the strange attractors describing the state of the system tend to reside in certain ‘regimes’ for some time before flipping over to another regime. So does it matter if a chaotic process looks like LTP but the memory of the initial conditions is ‘lost’ after some time? I haven’t really thought about this before.

    It is true that e.g. temperature cannot be characterized by the AR1 process, and this is because we know a priori that it is not just noise. The temperature is physically forced by a number of processes, be it from changes in the Earth’s orbit around the sun, changes in the sun, changes in geology, changes in volcanic eruptions, or changes in the ocean currents. We can simulate such changes with climate models.

    While the AR1 model may not necessarily be the best model, it is difficult to know exactly what the noise looks like in the presence of a forced signal. State-of-the-art detection and attribution work does not necessarily rely on the AR1 concept, but uses results from climate models and error-covariance matrices based on the model results to evaluate trends, rather than simple AR(1) methods.

    Although there are ways to deal with trends in data in the context of LTP analyses, there are drawbacks associated with the uncertainties of the different models, e.g. specifying polynomial trends and their coefficients. Furthermore, a combination of a Fourier transformation and a random number generator (‘phase scrambling’) will produce numbers which mimic the statistical characteristics of the original data, but will not easily distinguish between signal and noise.

    I would not be surprised if trend tests suggest that the long-term increase in global mean sea surface temperatures does not qualify as statistically significant, and the reason for that is a number of ocean processes rather than the inertia that resides in the oceans. We know that ocean circulation patterns such as the El Niño Southern Oscillation, the Atlantic Meridional Overturning Circulation, and the Pacific Decadal Oscillation are responsible for slow variations with time scales of months to decades.

    We can look at different quantities such as the ocean heat content and the global mean sea level, where the trends are more prominent while such variations are small in comparison.

    1. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).
    2. Gleick, J. Chaos. (Cardinal, 1987).

  • Rasmus Benestad

    First comments on the trend discussion by Demetris Koutsoyiannis:

    Koutsoyiannis offers a definition of ‘climate’, which can be summarised as ‘what are the ranges and how often can I expect a particular weather event to take place?’ However, climate is not just statistics, but also about physics: the flow of energy and transport of matter.

    Modern climatology has drawn on experience from weather forecasting and physics in addition to statistics, and the success of daily operational weather forecasting provides convincing evidence suggesting that we do have a good understanding of the role various processes in the atmosphere and the oceans play.

    The question of whether 30 years is an appropriate time scale for defining climatic normals may seem a bit academic; it’s a practical time horizon in terms of the time scale of a generation or the lifetime of a construction. To a great extent, climatology evolved out of the need to provide society with guidance on how to design infrastructure and plan agriculture. However, this is different from the discussion regarding climate change.

    As far as I know, it is well-known that the real number of degrees of freedom is less than the number of observations due to persistence and auto-correlation. This has long been tacit knowledge, and the seeming persistence has to do with the chaotic nature of the weather system.

    I disagree with him and think there is no static-climate assumption behind the “weather vs. climate” dichotomy – it’s a question of probabilities and predicting the probability density function (pdf). Nobody says that the pdf needs to be constant. In fact, the definition of a climate change is just that: a shift in the pdf over time.

    The question of whether we are now seeing a trend may be answered differently depending on one’s assumptions. Does it mean that the pdf for the temperature now is different from that of the past? If it is, then that is of relevance to society and may call for actions to adapt. Is it something we ought to expect because this happens all the time – as Koutsoyiannis suggests? Alternatively, we may ask whether the recent 11 warmest temperatures would have been possible at all without a forcing such as GHGs?

    I think that we may be asking different questions.
    Furthermore, we make different assumptions. I think that the following assertion is invalid in the context of geophysical processes: ‘by averaging to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one’. The reason is that different known physical processes take place with different preferred time scales.

    I also do not see that the idea of long-term persistence (LTP) is omnipresent. Take classical statistical physics, for instance: classical statistics work quite well there, and the concept of temperature is indeed an aggregate based on classical statistics in the absence of LTP.

    For geophysical processes, chaos plays a role and may give an impression of LTP, and still the memory of the initial conditions is lost after a finite time interval.
    While the case of the Nile is interesting, I do not think it is a valid analogy for the global mean temperature – the physical processes are just not the same.
    The Nile river flow is influenced by the precipitation over one catchment, which again is determined by the transport of airborne moisture through the atmospheric circulation over the eastern African region. It is affected by monsoons and geography, in addition to the management of the river.

    The global mean temperature, on the other hand, is set by the balance between the energy received from the sun and the loss to space. Furthermore, it is sensitive to the altitude at which the heat escapes to space. This is to some extent linked to the global hydrological cycle, however.

    Trends in the Earth’s mean temperature have implications for the energy budget, as the natural system is constrained by the laws of physics. The Earth’s atmosphere and oceans represent a closed system in space where only energy enters and leaves and where the physics keeps the state in check. There are fewer restoring forces for river levels, and the physical constraints on rainfall over a limited region are not as strict.

    I too think that white noise is not really a realistic assumption, and I think that most of my colleagues agree. It is news to me that climate is determined on just one time scale. Where is this stated?

    Another difference of opinion: we know that the autocorrelation function is not independent of time scale. At hourly scales, there is a 24-hour cycle; at a daily scale, there is a distinct 365.25-day cycle; and beyond the annual cycle, the presence of long-term variability becomes vaguer.

    There are some signatures of volcanic eruptions, El Niños, and decadal variations, but these are irregular rather than regular. At geological scales, there have been a number of ice ages, but there is no evidence suggesting that they have lasted more than a few million years. Thus we cannot assume that there will always be another natural mechanism acting on a bigger scale.

    I also think that it is a big mistake to use the Hurst coefficient with HadCRUT4 without distinguishing forced changes from internal changes – what is noise and what is signal? Or what is the probability that we would see similarly high temperatures without forcings? In my mind, Koutsoyiannis mixes forced variations with noise, which muddles the understanding and produces misleading results.

  • Demetris Koutsoyiannis

    First comments on Armin Bunde’s post

    Dear Armin,

    It is a great pleasure to contribute, together with you, to this dialogue and to comment on your post, following the suggestion of the CD Editorial Team to identify points of agreement and disagreement. I am particularly glad to report my agreement on the major issues, with a few exceptions to which I will refer below. I endorse your statements:

    For monthly (and annual) temperature records the best statistical model is long-term persistence.

    My conclusion is that the AR1 process falsely used by climate scientists to describe temperature variability leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower.

    I decided not to use mathematical equations in my post, so I welcome the inclusion of equations in yours. From first glance they seem consistent with mine, even though we use different notation. For example, I denote the Hurst coefficient by H (the only mathematical symbol that I used in my post) and it seems that it is identical to your α, while it is related to your γ by γ = 2(1 – H).
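    The exponent relations mentioned here are easy to mix up, so here is a tiny, hedged bookkeeping sketch using the symbols as defined in this exchange: the Hurst coefficient H (equal to Bunde's α for stationary records) and the power-law decay exponent γ of the autocorrelation, related by γ = 2(1 − H):

```python
# Conversions between the LTP exponents discussed in this thread:
# Hurst coefficient H (= DFA exponent alpha for stationary records) and
# the decay exponent gamma of the autocorrelation, C(s) ~ s^(-gamma),
# related by gamma = 2 * (1 - H).

def gamma_from_hurst(H: float) -> float:
    """Decay exponent of the autocorrelation for Hurst coefficient H."""
    return 2.0 * (1.0 - H)

def hurst_from_gamma(gamma: float) -> float:
    """Inverse relation: H = 1 - gamma / 2."""
    return 1.0 - gamma / 2.0

# Example: H = 0.75 corresponds to gamma = 0.5, i.e. C(s) ~ s^(-1/2).
print(gamma_from_hurst(0.75))  # 0.5
print(hurst_from_gamma(0.5))   # 0.75
```

Note that H = 0.5 gives γ = 1, the boundary below which (γ < 1, H > 0.5) the autocorrelation is non-summable and the process is long-term persistent.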

    On the other hand, while I agree with your notion of the “finite size effect”, I would not agree with treating it in terms of inequalities, as you do, because inequalities give the false impression that in some areas (for large sample size N or for small lag s, to use your own notation) your statistical estimations are perfectly safe. They never are, particularly because LTP magnifies the uncertainty and also introduces (negative) bias, sometimes substantial bias, in estimators which according to classical statistics are unbiased (e.g. that of the variance; see Koutsoyiannis, 2003, and Koutsoyiannis and Montanari, 2007). Instead of using inequalities to identify seemingly safe areas, I think it is better to explicitly take the bias into account in all cases. I will come back to this later.

    In terms of your results for individual stations, as shown in your second figure, these are mostly consistent with mine; however, recent analyses of thousands of rain gauges worldwide (Iliopoulou et al., 2013) suggest that precipitation records have an average H closer to 0.6 than to the 0.5 you report. It also seems that LTP is again more appropriate for them than the AR(1) model.

    Now I am coming to your conclusions numbered (i) and (ii). First, while in my post I refer to the combined land and sea surface temperatures (the HadCrut4 data set as mentioned in the introductory entry by the CD Editorial Team), you examine separately the sea surface temperature and the land temperature. I have made analyses also for these (based on the HadSST2 and CRUTEM4 data respectively). So, I can agree with your conclusion (i), i.e.:

    The global sea surface temperature increased, in the past 100y, by about 0.6 degree, which is not significant.

    Indeed, the behaviour I see for SST is not different from that of my Figure 5, except that the climatic difference for the entire 134-year period (1879-2012) is 0.5°C (but if you limit it to the last 100 years it indeed becomes 0.6°C as you report).

    Now coming to the land temperatures, again I find a similar behaviour as in my Figure 5, the only difference being that the few points going out of the critical values in this case refer to the lag 120 years rather than to the lag 90 years shown in the figure (the latter all remain below the critical values). This does not agree with your conclusion:

    (ii) The global land air temperature, in the past 100y, increased by about 0.8 degrees. We find this increase even highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.

    For the computational part of the disagreement, please take into account that the standard deviation of the land temperature is more than 50% higher than that of the sea surface temperature. Also please recall our discussions when the paper by Koutsoyiannis and Montanari (2007) was published, in which we criticized your approach in Rybski et al. (2006) and, in particular, the fact that you did not take the uncertainty/bias in the standard deviation into account in your calculations. I had the impression that you had agreed on that, but perhaps you have forgotten it by now, as I infer from your list of references. :-)

    But there is a disagreement also on the logical part of your conclusion (ii). I believe that if you accept that the sea surface temperature has strong LTP, then logically the land temperature will have it too, so I cannot agree that the latter has “comparatively low persistence”. We are speaking about long-term persistence, which manifests itself on decadal, centennial, etc., time scales. I cannot imagine that, in the long term, the land would not be affected by the long-term fluctuations of the sea temperature. I believe the climates of sea and land are not independent of each other, particularly in the long term.

    Demetris

    References

    Iliopoulou, T., S.M. Papalexiou, and D. Koutsoyiannis (2013), Assessment of the dependence structure of the annual rainfall using a large data set, European Geosciences Union General Assembly 2013, Geophysical Research Abstracts, Vol. 15, Vienna, EGU2013-5276, European Geosciences Union.

    Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.

    Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.

    Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi: 10.1029/2005GL025591.

  • Demetris Koutsoyiannis

    First comments on Rasmus Benestad’s post

    Dear Rasmus,

    Your introduction, and particularly your reference to the paper by Cohn and Lins (2005) and to your post “Naturally trendy?”, reminded me of our discussion on your post at RealClimate seven years ago. That was my first contribution to a blog, and thanks to it and its subsequent reposting at ClimateAudit by Steve McIntyre, it brought me in contact with many colleagues, including yourself, as well as Tim Cohn and Harry Lins. So, I thank you for that post.

    I also welcome your recognition and explanation of LTP in your current post; these are useful for all of us (particularly for myself, as I estimate that from now on I will not have as many difficulties publishing papers related to LTP as in the case of the paper by Koutsoyiannis and Montanari, 2007—see its “prehistory” at http://itia.ntua.gr/781/). :-)

    Most of all, I welcome your Figure 2, which supports all that I have been saying for years. The autocorrelation of synthetic climate, produced by a climate model without GHG change, becomes 0 for a lag as small as 5 and stays around 0 for all subsequent lags. As I have described in my post, with zero autocorrelation the climate would be flat. But a static climate has never been the case on Earth. In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.

    I believe it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. It is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP. The high autocorrelation is just the reflection of change upon that mathematical concept. The first who understood that was Klemes (1974).

    I guess there is a difference in the way we view change. Perhaps what I call change you view as “variation” or “internal variability”. But I have difficulty understanding what you call change. You say:

    A climate change happens when the weather statistics are shifted.

    I hope you agree that these statistics are a human invention to describe Nature, not a natural property per se. Furthermore, their assigned values depend on assumptions, like the time scale of averaging, e.g. 10 or 30 years. Whatever these assumptions are, I cannot imagine any two time periods, adjacent or not, whose statistics (estimated from data, e.g. the temporal average) would be the same. In other words, changes, call them shifts if you wish, occur all the time. This shows that your distinction between “variation” and “change” is ambiguous.

    You are clearer when you distinguish “signal” from “noise”, as you associate the former with “man-made climate change”. But I would never agree with your term “noise” to describe natural change. Nature’s song cannot be called “noise”. Most importantly, your “signal” vs. “noise” dichotomy is subjective, relying on incapable deterministic (climate) models and on often misused or abused statistics.

    I found very interesting the argument you offer with respect to potential misuse of statistics:

    Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data. Because this data embed both signal and noise.

    I agree that circular reasoning can be a real risk. I also accept your deliberation that you need 70-90 years for a meaningful inference about LTP. Based on this, I can offer several ways to avoid the circular reasoning.

    a. The HadCrut4 data set is 163 years long. So, let us exclude the last 63 years and try to estimate H based on the 100-year period 1850-1949. The Hurst coefficient estimate becomes 0.93 instead of the 0.94 of the entire period. Is LTP artificial then?

    b. Look at Koutsoyiannis and Montanari (2007), Table 1. It examines several proxies for temperature for the last 500-2000 years and provides two sets of H estimates: One for the entire period covered by each of the proxies and one for the period 1400-1855, common for all proxies. Do you see any noteworthy difference (say, greater than 0.03) in the estimates of H between the two periods? Don’t these high values of H (0.86-0.93 for the period 1400-1855) indicate LTP? Can they be the result of anthropogenic origin? Does your “circular logic” argument apply to them?

    c. Look at Markonis and Koutsoyiannis (2013), Fig. 9. This shows that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years. Is this a result of your “signal” which you identify with “man-made climate change”?
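    As an aside for readers who want to see how a single-series H estimate like those in points (a)-(c) can be obtained, here is a minimal aggregated-variance sketch in Python. It is only an illustration, not necessarily the estimator used in the cited papers, and it omits the bias corrections discussed above; on white noise it should return H close to 0.5:

```python
# Minimal sketch of the aggregated-variance approach to estimating the
# Hurst coefficient: the standard deviation of k-scale averages of an
# LTP process scales as k^(H - 1), so H is read off a log-log slope.
# Illustrative only; no bias correction is applied.
import numpy as np

def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
    x = np.asarray(x, dtype=float)
    logk, logsd = [], []
    for k in scales:
        n = len(x) // k
        if n < 8:
            break
        means = x[: n * k].reshape(n, k).mean(axis=1)  # k-scale averages
        logk.append(np.log(k))
        logsd.append(np.log(means.std(ddof=1)))
    slope = np.polyfit(logk, logsd, 1)[0]  # slope = H - 1
    return slope + 1.0

# Sanity check on white noise, for which H should be close to 0.5:
rng = np.random.default_rng(42)
H = hurst_aggregated_variance(rng.standard_normal(20000))
print(round(H, 2))
```

For an LTP series the same slope comes out above -0.5, i.e. H > 0.5, which is exactly the slow decay of climatic-scale variability discussed in this dialogue.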

    Furthermore, if you trust only instrumental series, you may look at the Nile example in my post, which I trust you can assume to be free of “man-made climate change” as it does not go beyond the 15th century. With respect to instrumental temperature records, you are right that most thermometer records do not go back longer than a century. However, there are some that do, some exceptions as you say. As handy examples, I can offer those of Vienna (see Fig. 3 in Koutsoyiannis, 2011) and Berlin/Tempelhof (see Fig. 13 in Koutsoyiannis et al., 2007). Again the LTP is evident, even without considering the last period (for example, as I recall from the latter publication, the first one-third of the record, years 1756-1839, gives H = 0.83, while the total period gives H = 0.77).

    In any case, not only is the risk of circular reasoning a real one, but it also concerns much wider areas than LTP. As another example, consider the “Mexican Hat Fallacy” that I referred to in my post. The circular reasoning here is that I formulate a hypothesis after I have seen the data. Deterministic modelling can also be affected by circular reasoning. The hydrological community has given importance to avoiding circular logic. It has thus been standard practice in modelling to follow the split-sample technique (Klemes, 1986). We split the available observations into two (sometimes three) segments. We use one segment for building and calibrating a model and the other for validating it. Has such a model validation technique been used in climate models? From my experience (Koutsoyiannis et al., 2008, 2011; Anagnostopoulos et al., 2010) I can only imagine a negative answer.
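    The split-sample idea of Klemes (1986) described here can be sketched in a few lines. The "model" below is a deliberately trivial linear trend fit on synthetic data; any calibrated model could take its place, and all numbers are illustrative:

```python
# Hedged sketch of split-sample testing (Klemes, 1986): calibrate a
# model on one segment of a record and judge it only on the held-out
# segment. The "model" here is a simple linear trend fit, purely for
# illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
series = 0.01 * t + rng.standard_normal(200)  # synthetic record

# Split: first half for calibration, second half for validation.
t_cal, y_cal = t[:100], series[:100]
t_val, y_val = t[100:], series[100:]

coef = np.polyfit(t_cal, y_cal, 1)   # calibrate on segment 1 only
pred = np.polyval(coef, t_val)       # predict the unseen segment 2
rmse_val = np.sqrt(np.mean((y_val - pred) ** 2))
print(rmse_val)  # the only score that counts under split-sample testing
```

The point of the technique is precisely that the calibration segment never enters the score, which is what breaks the circular reasoning discussed above.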

    At the beginning of your post you present a graph that compares the HadCRUT4 data series with the results of a regression model “based on greenhouse gases, ozone and changes in the sun”, as you say. You do not give a citation with the details. So, the natural question is: Did you use a split-sample technique for your model, or any similar validation technique, in which a data segment is kept out when building the model and calculating the regression parameters? If yes, then the danger of circular reasoning is minimal, though not zero, because in reality you have seen all the data. So, what do you say?

    Demetris

    References

    Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.

    Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.

    Klemes, V. (1974) The Hurst phenomenon: A puzzle?, Water Resources Research, 10 (4), 675-688.

    Klemes, V. (1986), Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31(1), 13–24.

    Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.

    Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches, Journal of Hydrometeorology, 8 (3), 261–281.

    Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides (2008), On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671-684.

    Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.

    Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.

    Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.

  • Armin Bunde

    First comments on Benestad’s and Koutsoyiannis’ guest blogs

    I can actually agree with most of what Demetris writes on LTP in his interesting and pedagogic blog. I only think that the tools he used (similar to ours in the 2006 paper by Rybski et al.) are not the optimal tools. Among climate scientists, the best-accepted tool is the exceedance probability (as I wrote in my blog), from which the significance of a trend can be derived. Unfortunately, since the exceedance probability for LTP records was not known before we published our main results in 2009 and 2011, climate scientists used the wrong assumption of an AR1 process to estimate the significance of a trend and considerably overestimated it this way. I am sure that when Demetris uses the exceedance probability and the analytical result we published in 2011, he will arrive at our conclusions, too.

    In contrast, I cannot agree with a large fraction of Rasmus’ blog. My disagreement is not based on philosophical arguments but simply on mathematics and modern time series analysis. LTP is not an abstract issue, but a process with long memory whose autocorrelation (for stationary records) decays in time by a simple power law. Accordingly, the ENSO, for example, is not an example of LTP. It is an example of a very complex and very difficult quasi-oscillatory phenomenon which is difficult to forecast, but not LTP. I think it is very important in science, and also in climate science, to be precise; without precision, modeling is impossible and progress hard to achieve.

    It is very easy to generate LTP records numerically; one only needs to know how to generate Gaussian random numbers and how to make a Fourier transform. This way, one can very efficiently study the properties of LTP records theoretically. By doing this properly one will find that the autocorrelation function (ACF) as used by Rasmus is unfortunately not an appropriate tool to detect LTP in records with a length below 50,000. As we showed in 2009 in Physical Review E, there are strong finite-size effects which are even worse than anticipated by Rasmus. But this is NOT a problem of LTP, only of the employed method. The second problem of the ACF can be seen in Rasmus’ Fig. 1: it depends on the external trend. Therefore, if one only knows the ACF as a tool for detecting LTP, one may be led to think, as Rasmus does in his summary, that we do not really know what the LTP in the real world would be like without GHG forcing.
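    The Fourier recipe Armin alludes to can be sketched as follows. This is a minimal, illustrative version (spectral filtering of Gaussian white noise towards a power-law spectrum S(f) ~ f^(-β) with β = 2H − 1), not the exact code behind the published results:

```python
# Minimal sketch of generating an LTP record by Fourier filtering:
# take Gaussian white noise, reshape its spectrum to the power law
# S(f) ~ f^(-beta) with beta = 2H - 1, and transform back. The result
# is an approximately Gaussian record with long-term persistence.
import numpy as np

def ltp_record(n, H, rng):
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid dividing by zero at f = 0
    beta = 2.0 * H - 1.0                # spectral exponent for Hurst H
    spec *= freqs ** (-beta / 2.0)      # impose S(f) ~ f^(-beta)
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()     # standardise to mean 0, std 1

rng = np.random.default_rng(1)
x = ltp_record(4096, H=0.8, rng=rng)
print(len(x), round(x.std(), 2))
```

Records generated this way are exactly the kind of synthetic surrogates on which finite-size effects of the ACF, DFA, and related estimators can be studied.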

    If I had worked in this field 20 years ago, I would have agreed with this statement. But nowadays, there is a large number of methods available that are able to detect the natural fluctuations in the presence of simple monotonous trends. Two of them are detrended fluctuation analysis (DFA) and the wavelet technique (WT). These are techniques which have been applied in many different disciplines, ranging from physiology via computer science to the financial markets, and tested extensively. By combining them one can quantify LTP on time scales up to N/4, where N is the record length, and N should be above 500. This means that from a monthly record of 40y one can detect LTP when using the proper methods. Again, when using the ACF, 150y are not enough because of the tremendous finite-size effects.
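    For readers unfamiliar with DFA, here is a minimal, illustrative DFA1 sketch (linear detrending in each window). It is not the authors' code, and real applications combine it with the wavelet technique and larger records as described above; for white noise the fitted exponent α should come out near 0.5:

```python
# Minimal sketch of detrended fluctuation analysis (DFA1). The
# fluctuation function F(s) of a stationary LTP record scales as
# s^alpha with alpha = H; for white noise alpha should be near 0.5.
import numpy as np

def dfa1(x, scales):
    y = np.cumsum(x - np.mean(x))       # profile (integrated series)
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        res = []
        for seg in segs:                # linear detrend in each window
            coef = np.polyfit(t, seg, 1)
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(2)
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa1(rng.standard_normal(16384), scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))
```

Because the profile is detrended window by window, a simple monotonous external trend affects F(s) far less than it affects the raw ACF, which is the methodological point Armin makes above.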

    We appreciate today that monthly temperature data are LTP on all time scales, with Hurst exponents around 0.65 for continental stations and around 0.8 for island stations and sea surface temperatures. Long historical runs from AOGCMs are able to reproduce this behavior. Daily data additionally show short-term persistence on scales up to 2 weeks, which is averaged out in monthly data. Accordingly, the problem is NO LONGER to determine the Hurst exponent in the presence of anthropogenic warming, but to estimate the contribution of e.g. GHGs to the warming.

    Within time series analysis, this can be done as I wrote above and in my blog. But as a consequence of the LTP in the temperature data, the error bars are very large, considerably larger than for short-term persistent records. Nevertheless, except for the global sea surface temperature, we have obtained strong evidence from this analysis that the present warming has an anthropogenic origin.

    Best wishes,
    Armin

  • Demetris Koutsoyiannis

    Rasmus, you say:

    I too think that white noise is not really a realistic assumption, and I think that most of my colleagues agree. It is news to me that climate is determined on just one time scale. Where is this stated?

    I will be glad to offer news to you, but I have to clarify that my phrase:

    Could any geophysical process, including climate, be determined by just one mechanism acting on a single time scale?

    means that the climate CANNOT be determined by one scale.

    Those who use a Markov/AR(1) model assume that it can be determined by one scale, not me. Note that in a Markov model the autocovariance for lag t is c(t) = b exp(-t/a), where a is the SINGLE time scale for that model. It is trivial to show that on a climatic time scale D >> a, the climate produced by a Markov process has variance Var[x] ~ a/D. This is just the same as for white noise, where the climatic variance is inversely proportional to the time scale of averaging. Thus, those who assume Markov behaviour for hydroclimatic processes also accept white-noise (random) behaviour at climatic scales.
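    The scaling claimed here is easy to verify numerically. The sketch below computes the exact variance of the D-year mean of an AR(1) process with ρ(t) = exp(-t/a) and shows that D·Var settles to a constant for D >> a, i.e. the variance decays at the white-noise rate 1/D:

```python
# Numerical check: for a Markov/AR(1) process with autocorrelation
# rho(t) = exp(-t / a), the variance of the D-year average decays like
# 1/D once D >> a -- exactly the white-noise rate.
import math

def var_of_mean(D, a, sigma2=1.0):
    """Exact variance of the mean of D consecutive AR(1) values."""
    rho = math.exp(-1.0 / a)
    s = sum((D - k) * rho ** k for k in range(1, D))
    return sigma2 * (D + 2.0 * s) / D ** 2

a = 1.25  # characteristic time scale, in years
for D in (30, 60, 120):
    print(D, round(D * var_of_mean(D, a), 3))  # D * Var stays ~constant
```

The constant that D·Var approaches is σ²(1 + ρ)/(1 − ρ), proportional to a; an LTP process instead has D·Var growing like D^(2H − 1), which is the whole dispute.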

    To repeat it once more, it’s not me who assumes a Markov model, a random climate, or a single scale. On the contrary, I stress that all these assumptions are wrong.

    So, I hope with these clarifications my “news” is more informative for you.

    D.

  • Rasmus Benestad

    Thanks for your comments on my post, Demetris. I think we need to live with our disagreements on a few points – but that’s fine. This is what drives science forward. I think one objective was to try to identify the exact points where we diverge in our interpretations. Here is my take on that: let’s start with this paragraph of yours:

    “In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.”

    I do not see how you can claim that climate models are inconsistent with the real world climate. We know that they do reproduce the main important aspects of Earth’s climate, such as circulation patterns, wind patterns, and the past temperature trends. We also know that these kinds of models taught us the fundamentals of chaos theory.

    I also argue that we do not know if LTP really is stochastic, and my demonstration showed that in fact external forcings such as GHGs do introduce LTP characteristics. Long time series do not help you when you do not know what is stochastic and what is forced.

    Many processes in nature may look like LTP, and a good deal of it is due to physical processes and a chain of causality – not necessarily randomness. Autocorrelation functions can give you some indication about the degree of persistence, but cannot provide insights into the physics in isolation.

    I want to ask if you consider non-linear chaos – which does not have long-term memory – to be an aspect of LTP. It does not satisfy the condition that the value x(t’) is influenced by all x(t < t’).

    If you think LTP includes chaos, then it arises from the internal physical processes. If LTP does not involve chaos, then I wonder how you’d distinguish between the two.

  • Rasmus Benestad

    I appreciate Armin’s explanation, and it’s OK to disagree. I’m not convinced that there are methods that can distinguish noise from signal – or trends from fluctuations – when there is an external forcing. I do not believe that there is anything magical about LTP; like any other process, it is caused by physical processes and internal dynamics, such as changes in the oceans and non-linear chaos.

    I guess we could set up some double-blind experiments, with systems where a forced trend or LTP is designed into the results, and mimic the analysis as I did with the ACF for two time series, one with and one without trends.
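    The kind of experiment sketched here can be mimicked in a few lines: generate a short-memory AR(1) series, add a linear trend to a copy, and compare the apparent persistence. All numbers below are illustrative choices, not taken from anyone's analysis:

```python
# Illustrative version of the proposed experiment: one short-memory
# AR(1) series with no trend, and the same series with a linear trend
# added. The trend alone inflates the apparent (lag-1) persistence.
import numpy as np

def acf_lag1(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(3)
n, phi = 2000, 0.4
noise = np.empty(n)
noise[0] = rng.standard_normal()
for i in range(1, n):                   # AR(1): short memory only
    noise[i] = phi * noise[i - 1] + rng.standard_normal()

trended = noise + 0.002 * np.arange(n)  # identical noise plus a trend
print(round(acf_lag1(noise), 2), round(acf_lag1(trended), 2))
```

The trended copy shows a markedly higher autocorrelation than its own noise, which is the point at issue: a persistence estimator applied blindly to forced data cannot tell trend from memory.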

  • Rasmus Benestad

    Question to both:
    We know that the error bars of the global mean temperature estimate T(2m) are greater in the early part of the record, when the sample of thermometer records from the globe was smaller. Hence the variance associated with T(2m) is greater in the early part of the record compared to the later part, due to greater statistical fluctuations. The reduction in the fluctuations associated with the sampling, however, must be considered ‘artificial’ because it does not reflect changes in the real world.
    So the question: how do the LTP methods deal with imperfect data?

    Also, we know that the forcing is a combination of contributions from greenhouse gases (GHGs), volcanoes and solar forcing. They have different time scales: volcanic eruptions leave an imprint which may last for a few years, whereas GHGs have long-lasting effects. We also expect the trends to change over time, and splitting the temperature record into, say, ~60-year intervals will result in different segments where the forcing is different. How do the LTP methods account for the presence of such forcings if you do not know a priori exactly what they are?

  • Demetris Koutsoyiannis

    Rasmus,

    I fully agree with your assessment that we disagree. I also fully agree with your statement:

    This [disagreement] is what drives science forward.

    The latter is an important agreement, given a recent opposite trend, i.e. towards consensus building, which has unfortunately affected climate science (and not only climate science).

    So, let us focus on our disagreements and illustrate them with a few numbers, when possible. You refer to my statement about climate models and you say:

    I do not see how you can claim that climate models are inconsistent with the real world climate.

    Recall that my statement was based on your Figure 2, lower panel. I took your grey curve, which refers to non-changing conditions with respect to forcing. I was able to see that this is a perfectly Markovian curve, with autocorrelation ρ(t) = exp(-t/a), where a = 1.25 years (I guess your lag is in years, right?). Now, considering a climatic scale with averaging time scale D = 30 years, or D/a = 24, we can infer a characteristic correlation ρ(D) = exp(-D/a) = 3.8 x 10^-11, so small that it could be regarded as zero. As a result, even without equating it to zero, two consecutive values of climate (for two consecutive 30-year periods) are virtually uncorrelated (with exact calculations, the correlation of two consecutive variables of your “climate” is of the order of 10^-2).

    Yes, I contend that this behaviour, in which climate appears as an uncorrelated process, is inconsistent with the real world climate, in which consecutive variables are correlated. To summarize the results of these calculations in three points:

    (a) Your climate models produced a series whose stochastic structure at annual scale is Markovian with characteristic time scale a = 1.25 years.
    (b) At a climatic scale of 30 years this corresponds to a climate uncorrelated in time.
    (c) An uncorrelated climate (see my Figure 3 and Armin’s Figure 1, left panel) is static and inconsistent with the real world climate, which exhibits correlation and is changing (see my Figure 2 and Armin’s Figure 1, right panel).
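    The two numbers in this calculation can be checked directly: the characteristic correlation exp(-D/a) for D = 30 and a = 1.25, and the exact correlation between the means of two consecutive 30-year blocks of the same AR(1) process, obtained here by a brute-force double sum:

```python
# Checking the two quoted numbers: the naive characteristic correlation
# exp(-D/a), and the exact correlation between the means of two
# consecutive 30-year blocks of an AR(1) process with a = 1.25 years.
import math

a, D = 1.25, 30
rho = math.exp(-1.0 / a)   # annual lag-1 correlation, exp(-0.8)

print(math.exp(-D / a))    # characteristic correlation, ~3.8e-11

# Exact block-mean correlation via the double sum over the two blocks:
cov = sum(rho ** (j - i) for i in range(D) for j in range(D, 2 * D)) / D**2
var = sum(rho ** abs(j - i) for i in range(D) for j in range(D)) / D**2
print(cov / var)           # of the order 10^-2, as stated
```

Both computations come out as stated in the comment: the characteristic correlation is vanishingly small, while the exact correlation between consecutive 30-year block means is of the order of 10^-2, still negligible compared with the persistence of the observed record.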

    D.

  • Demetris Koutsoyiannis

    Rasmus

    In my previous comment I replied to your sentence:

    I do not see how you can claim that climate models are inconsistent with the real world climate.

    I wish to continue with the subsequent two, so that I have answered at least one paragraph of one of your comments. You say:

    We know that they do reproduce the main important aspects of Earth’s climate, such as cirulation patterns, wind patterns, and the past temperature trends.

    Right, I do not dispute that they reproduce the patterns you indicate, nor their usefulness as simulation tools. What I doubt is their ability to reproduce trends (as I explained above, they produce a static climate) and the credibility of their predictions (or projections, if you prefer the IPCC idiom) for the future.

    We also know that these kinds of models taught us about the fundamentals of chaos theory.

    Did they really teach us about chaos? What is your take on the following statement by Schmidt (2007, The physics of climate modeling, Physics Today, 60 (1), 72–73; emphasis added):

    Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so. Climate is instead a boundary value problem—a statistical description of the mean state and variability of a system, not an individual path through phase space. Current climate models yield stable and nonchaotic climates, which implies that questions regarding the sensitivity of climate to, say, an increase in greenhouse gases are well posed and can be justifiably asked of the models.

    But if you imply that, irrespective of what climate models yield, the real climate is chaotic, then I agree with you.

    You can see further information about my views and contributions with respect to climate and chaos in:

    Scientific dialogue on climate: is it giving black eyes or opening closed eyes?, 2011 (it contains further discussion of the above quotation by Schmidt, 2007).
    A random walk on water, 2010.
    A toy model of climatic variability with scaling behaviour, 2006.

    These publications show that LTP does involve chaos.

    D.

  • Rasmus Benestad

    Dear Demetris,

    I took your grey curve, which refers to non-changing conditions with respect to forcing.

    The grey curve represents a climate simulation with no forcings, and provides a benchmark for the one with forcings. We know a priori that this simulation is artificial in the sense that it will have different LTP behaviour from the real world, where real forcings are present. Hence, it’s a mistake to assume that this simulation is equivalent to that of the real world. Again, I think your problem is that you do not know what is signal and what is noise (you may call the latter music or whatever, but the normal term is noise).

    Current climate models yield stable and nonchaotic climates

    You also misunderstand the way climate models work. Of course they simulate time evolutions which are chaotic – because they simulate weather fluctuations. When you regard the climate as a boundary value problem, as Gavin does, then you do indeed see that climate becomes predictable and in that sense nonchaotic. Take this example: I do not know the weather for an exact day in December because of fundamental limitations due to chaos. However, I know that the weather statistics for December will show lower temperatures than now.

    Moreover, the LTP issue concerns the time evolution and hence how the simulated weather changes from one day to the next. It concerns the ACF. The simulated weather in the climate models is chaotic. Such behaviour has been studied by numerous scholars, from simplified models to full-scale weather models. But the fact that the weather is chaotic does not imply that the response to a systematic forcing (boundary condition) is chaotic.

    So my point is that the weather evolution is chaotic both in the climate model simulations and in the real world. My question to you then is whether you think that such chaos and LTP are fundamentally different, and how you would distinguish between them.

  • Demetris Koutsoyiannis

    Dear Rasmus,

    The grey curve represents a climate simulation with no forcings, and provides a benchmark for the one with forcings. We know a priori that this simulation is artificial in the sense that it will have different LTP behaviour from the real world, where real forcings are present.

    Do you believe I had not understood that? I feel I have to repeat what I said. If you do not feed these models with changing forcings, then they produce a static climate; that is clear from your graph. The real-world climate has never been static. The anthropogenic GHG forcing was not present, say, during the Medieval Warm Period, but the climate was changing nonetheless. You have implied this yourself, e.g. in your EGU talk Climatic and Hydrological perspectives on long-term changes: a northern European view (2008), where for example you stated:

    Historical evidence is discussed in terms of favorable climatic conditions coinciding with part of the Viking era and the Greenland settlement.

    Had the climate been static, it would not have generated favourable climatic conditions for the Vikings.

    Of course models are not identical with natural processes. Of course models produce artificial simulations—this is what they should and can do. We should never confuse models with nature. But we should demand that models reproduce in their simulations some important elements of reality. Climate models whose simulations are characterized by short-lived autocorrelations, and which as a result produce a static climate and need external agents to produce change, are not consistent with the real-world climate, in which change is the rule.

    D.

  • Rasmus Benestad

    If you do not feed these models with changing forcings, then they produce a static climate; that is clear from your graph.

    This is exactly my point too. But there are forcings in the real world, be they changes in Earth’s orbit around the sun, geological or volcanic activity, changes in the sun, or changes in the concentrations of greenhouse gases. So my point is that the LTP in this simulation without forcing will not be the same as in the real world, where there is forcing. The forcing introduces LTP. We see this by comparing the two simulations – the black and the grey curves.

    Can we agree that forcing introduces LTP?

    Can we agree that forcing is omnipresent in the real-world climate?

    So perhaps the real world climate would be static in the absence of forcings. We cannot tell from observations alone.

    As far as I know, the favourable climatic conditions for the Vikings were limited to Greenland and the North Atlantic. When we look at regional variations – as opposed to global ones – we see more pronounced variations linked to changes in the winds and ocean currents. The early Holocene was also influenced by a difference in Earth’s orbit, whereby the high north received more sunlight (forcing).

    Could we agree on a few ideas?
    In my view, a correct explanation implies that both the assumptions are valid and the mathematics is correct. It is possible to have a beautiful and valid mathematical model/formula, but if the underlying assumptions are wrong, the answer will not be correct (unless one is extremely lucky). Furthermore, one can start from a set of valid assumptions and then use incorrect mathematical equations. This too will lead to the wrong answer.

    So I wonder if it’s useful to identify which aspect we are discussing here. I feel that both Demetris and Armin have impressive mathematical frameworks (although I’m a bit concerned about the issue regarding LTP vs chaos), but that their assumptions lead them down the wrong path.

  • Thank you Rasmus, Demetris and Armin, for three very informative essays on the statistical properties of global average temperature time series. In these, Rasmus argued that physical considerations are important in the interpretation of statistics. Demetris and Armin mainly discussed the application of statistical methodology to climate data. That leaves me curious how the latter two view the physical interpretation of their statistical analyses.

    Rasmus showed (fig 2), using GCM data, that a deterministic trend causes long-term persistence to increase. Demetris welcomed this figure as “supporting what he has been saying for years”, though apparently with a different interpretation than Rasmus, namely that the unforced climate as simulated by GCMs (low LTP) is unrepresentative of the real world (high LTP). However, as Rasmus pointed out, the real world has been impacted by natural and anthropogenic forcings, so one can’t compare an unforced model run with observations that are impacted by forcings: of course they don’t agree! Going further back in time (e.g. over the past millennia), climate forcings (e.g. changes in the sun, volcanism, land use, greenhouse gases) likely also played a substantial role in influencing global average temperature. This severely hampers the quantification of internal climate variability based purely on the presence of LTP, since forced changes increase this persistence.

    Yet implicitly or explicitly, Demetris seems to equate the presence of LTP to natural internal (unforced) variability (also phrased as chaotic behavior). How do you square this interpretation with the fact that forced changes to climate also increase this persistence?

    Rasmus asks very much the same question in his latest comment:
    “Can we agree that forcing introduces LTP?”
    “Can we agree that forcing is omnipresent in the real-world climate?”

    These are important questions in order to establish areas of agreement and disagreement. I invite Demetris and Armin to comment on these.

    As to the physical interpretation: internal variability would in all likelihood mean a redistribution of energy within the earth system. That ought to result in some components of this system cooling down while the surface is warming up. This is however not what is being observed: recently, the deep oceans (at least down to 2000 m depth) have also been observed to be warming (Balmaseda et al., 2013). The energy balance provides a powerful constraint on the global average temperature; that has been the case for past changes in Earth’s climate and it is still the case today. Ignoring this aspect can lead to very strange interpretations (as I tried to show in my April 1st blog post a couple of years ago, by applying statistical reasoning devoid of any physical underpinning to imaginary data of my body weight).

    Another question would thus be: If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal variability, where does this increase in energy come from?

    • Armin Bunde

      Dear Bart,
      sorry that I could not answer earlier. You raised several questions, as did Rasmus, but most of them have already found an answer in the literature. For example, the fact that the ACF is strongly affected by trends is not new at all; we discussed it extensively in our 1998 PRL, where we used detrending methods to determine the true LTP of temperature data. What we called FA at that time is a method equivalent to the ACF and strongly affected by trends. For a more extensive discussion, see our 2001 Physica A paper (Kantelhardt et al.). Scientists who have no experience with LTP and are not aware of the better detection methods usually think that the deficit of the ACF is a deficit of LTP, but this is just wrong. I would like Rasmus to read our early papers on LTP, published over the last 15 years in Phys. Rev. Lett., Phys. Rev. E, Journal of Geophys. Res. D, GRL, as well as Nature Climate Change, to become more familiar with LTP, its definition, the methods to detect it, and its consequences for the occurrence of extremes. It is easy to find my references: just go to Google Scholar and type in my name.

      It is important to use the same definition of LTP, but from what Rasmus writes, I understand that he has something in mind which differs remarkably from the well-established definition of LTP, which you can also find in the pioneering contributions of Benoit Mandelbrot. On this basis it is nearly impossible to have a meaningful discussion.
      I would like to know from Rasmus, for example, what the evidence is for El Niño being LTP (scientists usually consider this a quasi-oscillatory phenomenon), and whether he then also considers other oscillatory (seasons) or quasi-oscillatory (sunspots) phenomena as LTP. If yes, there is a problem, since he then mixes trends and fluctuations, which is the worst one can do in this field.

      But let me come to the question of the origin of LTP. One of the origins is certainly the coupling of the atmosphere to the oceans. But the natural forcings also play a role. We showed in our 2002 PRL that models that only use GHG forcing cannot reproduce the proper LTP. We discussed extensively the role of the different forcings in our 2004 GRL, where we found that the natural forcings are important to reproduce the proper LTP. In contrast, GHG forcing did not contribute to the LTP when using the proper methods (NOT the ACF!). So we need the natural forcings to obtain the correct LTP. We showed this also in our 2008 JGR-D, where we compared millennium runs with and without natural forcings. So we answered your two questions a long time ago: natural forcing plays an important role for the LTP and is omnipresent in climate.

      For describing the resulting LTP quantitatively we do NOT need, in contrast to Rasmus’s claim, to understand in detail the role of the many different natural forcings. We just need to understand the mathematical structure of the LTP and how we can model it. Then we can use the methods that we developed in our 2011 PRE to quantify whether a trend is natural or not.
      Of course, it is nice to talk about the effect of the different forcings. But since everything is interwoven in a linear and even nonlinear way, it is nearly impossible to separate the effects of the different forcings on the LTP in a satisfying manner. The pragmatic way is to learn what LTP is, to learn and even improve the detection methods that can separate the natural fluctuations from external deterministic trends, and then use the method we developed to estimate the effect of the trend. This way is doable and is actually common in climatology. The difference is only that in previous attempts the natural fluctuations have been considered, incorrectly, as short-term persistent, which significantly overestimates the trend significance.
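      For readers unfamiliar with the detrending methods mentioned above, here is a minimal, illustrative DFA1 sketch (this is not the authors' published code, and the parameter choices are arbitrary): it integrates the series, fits a local polynomial in each segment, and reads a scaling exponent off the fluctuation function. The local detrending is what lets the method separate additive deterministic trends from the natural fluctuations, in contrast to a plain ACF.

      ```python
      import numpy as np

      def dfa(x, scales, order=1):
          """Minimal DFA: fluctuation function F(n) of a 1-D series x.

          For a series with Hurst exponent H, F(n) ~ n^H; the local
          polynomial detrending (order >= 1) removes additive trends
          that would contaminate a plain ACF estimate of persistence.
          """
          y = np.cumsum(x - x.mean())              # the "profile"
          F = []
          for n in scales:
              segs = len(y) // n
              res = []
              for i in range(segs):
                  seg = y[i * n:(i + 1) * n]
                  t = np.arange(n)
                  fit = np.polyval(np.polyfit(t, seg, order), t)
                  res.append(np.mean((seg - fit) ** 2))
              F.append(np.sqrt(np.mean(res)))
          return np.array(F)

      # Sanity check: white noise has H = 0.5
      rng = np.random.default_rng(0)
      x = rng.standard_normal(20000)
      scales = np.array([16, 32, 64, 128, 256])
      H = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
      print(f"estimated H: {H:.2f}")   # close to 0.5
      ```

      An LTP series would instead give a log-log slope above 0.5, even in the presence of an additive linear trend.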

      Best wishes,
      Armin
      PS: If it is difficult to download my papers, I can email them to those who are interested.

  • Rob van Dorland

    Demetris, Rasmus, Armin,

    Thank you very much for your points of view and your responses to each other. Although it is worthwhile to discuss the performance of climate models, the key point of this discussion is the determination of the significance of observed trends (in global mean temperature) beyond what would be expected from internal variability alone. We can distinguish the following opposing views:

    1) Using statistical models only
    2) Using statistical models in combination with constraints using physical knowledge of the climate system (e.g. internal variability, energy balance)

    I think Demetris is going for option 1, while Rasmus favors option 2. I am not so sure about Armin’s opinion. Armin, can you make a statement on this subject?

    A second and related point is the choice of the statistical model, in particular with respect to the distinction between trends and fluctuations. So, let’s take the method of analysis of the statistical properties applied to the global mean temperature as described in section 5 of the Markonis & Koutsoyiannis (2012) paper (hereafter MK2012):

    In order to construct the climacogram, MK2012 compute the standard deviations σ(k) by cutting the time series into N samples of length k (in years). As a test of their method, let’s consider two time series: 1) an upward trend in temperatures (white line) and 2) a fluctuation with a slope twice as strong in the first half of the time interval and the same, but negative, slope in the second half of the time interval (blue line, see figure).

    Both time series have the same standard deviation for any value of k: σ(k) = √(1/12)·b·k·N (N ≫ 2), where b is the slope of the trend line. Therefore, the trend and the fluctuation both project onto the same curve in the climacogram and are indistinguishable.

    In terms of energy changes of the climate system (in case no external forcing is compensating) signal x(t) implies an increasing energy loss, while signal y(t) implies no net energy loss since at time t=kN the mean global temperature has returned to the initial value.
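    The construction above can be checked numerically; a hypothetical sketch (the slope b, record length and averaging scales are arbitrary choices) showing that the trend and the up-then-down fluctuation yield nearly the same climacogram values:

    ```python
    import numpy as np

    def sigma(x, k):
        """One climacogram point: std of non-overlapping k-averages of x."""
        n = (len(x) // k) * k
        return x[:n].reshape(-1, k).mean(axis=1).std()

    kN = 1200                      # total record length (hypothetical units)
    b = 0.01                       # slope of the monotonic trend
    t = np.arange(kN)
    trend = b * t                                              # x(t): straight trend
    fluct = np.where(t < kN // 2, 2 * b * t, 2 * b * (kN - t))  # y(t): up, then down

    for k in (10, 30, 60):
        s_trend, s_fluct = sigma(trend, k), sigma(fluct, k)
        print(k, round(s_trend, 3), round(s_fluct, 3))   # nearly identical per scale
    ```

    Both come out close to b·kN/√12, as the formula predicts, even though only the fluctuation returns to its initial value.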

    Therefore, I would like you to react on the next three statements in an attempt to focus the discussion:

    1) The method of MK2012 doesn’t distinguish between trends and fluctuations. In other words, trends in global mean temperatures are considered to be fluctuations in the climacogram. Conclusions about the behavior of the climate system using this climacogram lack energy balance considerations.

    2) The variation in time of the global mean temperature in the absence of external forcing causes an energy imbalance that works to restore the temperature to its equilibrium value (where outgoing energy equals incoming energy).

    3) If energy considerations are taken into account, i.e. on the basis of estimated external forcings, significance levels of measured temperature trends will be met sooner than on the basis of the pure statistical method of MK (2012), because the latter is prone to misinterpreting forced changes as internal variability. In other words, the MK2012 method is not suitable for determining the significance of an observed trend.

    [Figure: trend and fluctuation]

  • Demetris Koutsoyiannis

    Until now I have avoided reference to the fundamentals of science, although several of Rasmus’s comments offered me this temptation. Instead, I tried to refer to some of my papers which present my views on such fundamentals, for example A Random Walk on Water. But as I understand from Bart’s comment, and particularly the link to his Weight Gain Problem example, my references did not work. It is thus unavoidable to clarify my views on some fundamental issues. I will try to clarify my positions in terms of several dichotomies, some of which have been used or implied by Rasmus and, most recently, by Bart.

    1. Models vs. reality

    In my view this is a true dichotomy.

    My grandson has taught me that the virtual reality, e.g. in computer games, can be fascinating. Of course he and his playmates are able to distinguish it from the real-world reality; for example they are well aware that, in contrast to their computer games, reality does not offer a pool of additional lives if one dies.

    On the other hand, in scientific conferences I have often seen graphs mixing up observational data of the past with model projections of the future and speakers presenting model projections as if they were reality. I have seen IPCC texts, scientific publications and policy documents speaking about the conditions in 2100 using “will” without adverbs like “likely”, “probably”, etc., e.g. “extreme events will become more frequent”.

    Furthermore, it has been very common to use concepts of stochastics, i.e. concepts pertinent to models, as if they were real objects. Concepts like the probability density function, the autocorrelation function, stationarity, and many more apply to the world of models, not to the real world. They build upon the concepts of a random variable, a stochastic process, an ensemble, etc. These are abstract mathematical objects, not objects of real life. Large-scale real-world processes, like the climatic processes, have a single life, a unique evolution, and are not repeatable. There are no ensembles (pools of many lives) in real-world processes. The idea of an ensemble is a useful one for defining abstract concepts like stationarity, but we should be aware that it applies to models only.

    2. Physics vs. statistics

    In my view this is NOT a true dichotomy.

    I think the language of physics is (or at least includes) mathematics. In mathematics we write 1 + 2 = 3, 1 x 2 = 2, etc. Likewise, in physics, 1 kg + 2 kg = 3 kg or 1 kg x 2 m/s^2 = 2 N.

    Sometimes addition and multiplication are not enough to study physical phenomena. Thus, we may use for instance differential equations. We may also use more abstract concepts, like random variables to represent uncertain quantities, or stochastic processes to represent uncertain quantities evolving in time. Further, we may use statistical methods to estimate these quantities from measurements. This does not mean that we have departed from physics and landed on another continent called statistics or stochastics. We still live in physics. As long as we have the feeling that we are doing physics when we add certain numbers of kilograms, we may well have the same feeling whenever these numbers of kilograms are uncertain and we opt to treat them as random variables.

    As we accept that regular addition should be done correctly, we should also use correct mathematics when we use random variables. For example, if x, y and z are random variables related by x + y = z, we should be aware that Var[z] = Var[x] + Var[y] + 2 Cov[x, y]. This is different from regular addition. Neglecting the last term, the covariance, may result in dramatic errors.
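    A quick numerical check of this identity (the coefficients 0.8 and 0.6 are arbitrary choices that produce correlated unit-variance variables):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.standard_normal(n)
    y = 0.8 * x + 0.6 * rng.standard_normal(n)   # correlated with x (rho = 0.8)
    z = x + y

    lhs = np.var(z)
    rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y)[0, 1]
    print(round(lhs, 2), round(rhs, 2))
    # The two sides agree (~3.6 here); dropping the covariance term
    # would instead give ~2.0, a dramatic underestimate.
    ```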

    Furthermore, we may give these peculiar quantities, the covariances, a sound physical meaning. This is the case, for example, with the Reynolds stresses in fluid flows, which are none other than covariances of fluid velocities.

    Finally, we may build sound physical theories based on statistics. For example, the big progress in thermodynamics, which also constitutes the foundation of climatology, happened when statistical thermophysics was able to prevail over the funny notion of the caloric fluid.

    In other words, statistics is physics. For simple physical problems, in which quantities are exact, statistical considerations are not necessary. In complex systems statistics within physics is indispensable.

    3. Signal vs. noise

    In my view this is NOT a true dichotomy in geophysical sciences, while it is meaningful in electrical engineering and telecommunications.

    Excepting observation errors, everything we see in climate is signal. The climate evolution is consistent with physical laws and is influenced by numerous factors, whether these are internal to what we call the climate system or external forcings. To isolate one of them and call its effect the “signal” may be misleading in view of the nonlinear chaotic behaviour of the system (see also dichotomy 5 below).

    My reasoning as to why I regard this dichotomy as misleading has been set out in earlier comments. To repeat it as briefly as possible: let us assume that it is possible to remove the “signal” (the anthropogenic influence) from the “noise” (whatever this is). Would the stochastic properties of the “noise” be different from those of the whole climate? As I wrote extensively earlier, the properties of the “noise” alone can easily be seen in older periods, those not affected by the “signal”. And it seems that the stochastic properties remained unaltered. In contrast, climate models, which allegedly can distinguish “signal” from “noise”, yield a “noise” which is fully inconsistent with past climate.

    4. Trends vs. fluctuations

    In my view this is NOT a true dichotomy.

    I am not aware of a rigorous definition of the term “trend”. I think it is used in a loose and colloquial manner. Indeed, loosely speaking and with reference to my Figure 2 above, we could say that between AD 640 and 780 there was a falling trend in the Nile’s water level. However, in view of the longer 849-year record, we may rather say that this was part of a long-term fluctuation. On the other hand, people who lived some decades after the recording of the Nile River level had started, i.e. in the 7th-8th century, may have called this an unprecedented trend, and may have worried that it would continue in the future, that it would have catastrophic effects, etc.

    From a scientific point of view, if we do not provide rigorous definitions of the terms we use, it may be difficult to have a constructive discussion.

    5. Linearity vs. nonlinearity

    In my view this is a true dichotomy.

    It is possible that our understanding of complex natural phenomena has been influenced by that of simple systems, particularly those which can be effectively modelled by linear differential equations. In those systems, solutions corresponding to different causes/perturbations can be added together to form the solution corresponding to the combined effect of all causes. Reversely, we can allocate a weight, with respect to the combined effect, to each of the causes.

    In more complex systems (yet the most common ones), whose study requires abandoning linear models, the contribution of each cause or forcing is not straightforward. A colleague from Mexico offered me this illustration: “A typical example that I give to my students, because it is (unfortunately…) well-understood here in Mexico, is the following: If someone is being machine-gunned by two people at the same time, it is objectively impossible to quantify the contribution of each killer to that person’s death, since the wounds caused by each one of the two would have killed the person anyway, even in the absence of the other killer”. If we wish to go further back to the causes of the causes, examining for instance how these killers acquired the machine guns, how the victim happened to be in that place at that time, etc., things become even trickier.

    Even worse than this, in chaotic systems described by nonlinear equations, the notion of a cause may lose its meaning as even the slightest perturbation may lead, after some time, to a totally different system trajectory (cf. the butterfly effect).

    6. Stochastic vs. deterministic models

    In my view this is a true dichotomy.

    According to my view, things do not happen spontaneously. Nor are natural phenomena infected by a virus of randomness which, after infection, turns them from deterministic to random. There is no such virus. Rather, physical laws hold true all the time.

    Whenever we are able to use deterministic models to describe nature and to find solutions which are in good agreement with reality, we don’t need any stochastic descriptions. We use the latter only when the deterministic solutions fail.

    Let us consider Bart’s Weight Gain Problem. The problem may have different versions. For example, one version would be to explain why he gained weight during the last decade. If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.

    Another version of the problem would appear if he tried to predict the future. In this case such a deterministic model may not work. Perhaps he would then think of constructing a macroscopic, rather than detailed, model. He would perhaps recognize that, in principle, whatever effort he puts in, the model will not be accurate. Consequently, he may think of changing the representation of his weight from an exact variable to a random variable. Using a representation by a random variable will not mean that his weight was infected by the virus of randomness or that the law of mass conservation ceased to hold. A random variable is just an uncertain variable. Nothing more and nothing less than this. The modelling framework has now become stochastic. Once he uses a stochastic framework, he may also use the stochastic toolbox, which contains tools such as averages, variances, covariances, autocorrelations, power spectra, and many more. He may try to infer the stochastic properties of the future after fitting the stochastic model on stochastic properties (not necessarily specific values) of the past. He may further try to find and study time series of other people’s weights, perhaps of greater ages, in order to see whether age matters in the evolution of weight, etc. All these may help more than the law of mass conservation in the modelling, although we can be sure that this law will always hold true.

    In a few words stochastic modelling is modelling under uncertainty. It is not denial of physical laws. With reference to dichotomy 3 above, stochastics describes the signal and does not need a decomposition “signal + noise” to work.

    Dear colleagues, you may feel free not to adopt my views, to think that they are heretical, or to characterize them however you wish. However, I hope you will recognize that I cannot contribute to discussing questions that distinguish statistics from physics; I cannot accept that using a stochastic modelling approach violates or contradicts physics; I cannot offer much in a discussion about the separation of signals and noises; and I cannot offer much in answering questions that are formulated using undefined terms and concepts.

  • Demetris Koutsoyiannis

    Dear Rob,

    Thanks very much for the excellent example. However, I believe that instead of illustrating some weakness in Markonis and Koutsoyiannis (2013), as you may imply, it illustrates that restricting our vision and making improper use of stochastics can be dangerous.

    From your graph I assume that what you call a “fluctuation” is a periodic pattern with period kN, and what you call a “trend” is a line that continues unaltered into the future and the past. Without having the future and the past in mind, we cannot call your “fluctuation” a fluctuation; it could well be two consecutive trends, continuing into the past and the future by extrapolating the two segments as straight lines.

    So you chose a time window equal to one period (kN) to view the two cases, and you find that the two yield the same climacograms, which is correct. If one wished to restrict the vision further, one could take a time window of size kN/2 and see two straight-line trends, without any hint of a fluctuation.

    But what I propose is the opposite: to widen our vision, say, an order of magnitude further (e.g. 10kN). Or, even better, to see how the two cases behave asymptotically, as time tends to infinity.

    We will see that the “trend” case gives a constant climacogram (n.b., here, as in Markonis and Koutsoyiannis 2013, I speak about the temporally averaged process, not the cumulative one), not changing with time scale k. For small time scales, the “fluctuation” case will also yield a fairly constant climacogram. However, for large time scales, not only will it give a decreasing climacogram, but the rate of decrease will be very steep, much steeper than in the case of white noise. I have not made the exact calculations for this case, but I guess the behaviour will be virtually the same as what you see in Figure 8 of Markonis and Koutsoyiannis, for the series labelled “harmonic”. That is, I guess we will have an envelope curve with a slope equal to -1 (vs. -0.5 for white noise).
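    This guessed asymptotic behaviour can be probed numerically; a sketch (the period and the averaging scales are arbitrary choices, not taken from the papers) comparing the log-log climacogram slopes of white noise and a pure harmonic at scales well beyond the period:

    ```python
    import numpy as np

    def climacogram(x, scales):
        """Std of non-overlapping k-averages of x, for each scale k."""
        out = []
        for k in scales:
            n = (len(x) // k) * k
            out.append(x[:n].reshape(-1, k).mean(axis=1).std())
        return np.array(out)

    def loglog_slope(x, scales):
        return np.polyfit(np.log(scales), np.log(climacogram(x, scales)), 1)[0]

    rng = np.random.default_rng(2)
    n = 2 ** 16
    scales = np.array([128, 256, 512, 1024])    # all much larger than the period

    white = rng.standard_normal(n)
    harmonic = np.sin(2 * np.pi * np.arange(n) / 48.0)   # period of 48 steps

    print(round(loglog_slope(white, scales), 2))      # near -0.5
    print(round(loglog_slope(harmonic, scales), 2))   # near -1
    ```

    Averaging a harmonic over windows much longer than its period leaves a residual of order 1/k, hence the -1 slope, while white noise averages decay like 1/√k.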

    However, there are additional problems here. In the “trend” case we have abused the notion of standard deviation, and hence that of the climacogram. Indeed, taking some points on a line (straight or otherwise) we can calculate a numerical value of a standard deviation using the classical statistical formula. But does this represent anything in a stochastic theoretical framework? The answer is negative. The process is simply nonstationary, so we do not have the right to treat the points of the time series as if they were generated by a stationary and ergodic process, nor to use the statistical formula that applies to stationary processes.

    Another point is that we would not need to treat this case in a stochastic framework at all. It would suffice to describe it deterministically, avoiding any reference to standard deviation.

    A final point is that, if a “trend” like this were a realistic representation of the climate, we would not be here to discuss it. Loosely speaking (as in my previous comment), local trends have appeared throughout Earth’s history, but these were parts of fluctuations. A trend sustained indefinitely would lead to runaway behaviour. That is why it is better to use stationary descriptions within stochastic models of nature (see additional justification in Koutsoyiannis 2006, 2011).

    The “fluctuation” case could, loosely speaking, classify as a stationary process, under the terms explained in Appendix 1 of Markonis and Koutsoyiannis (2013). However, again we would not need to use stochastics if the “fluctuation” were regular and hence describable in deterministic terms. The need for a stochastic description arises when the period and amplitude of the fluctuation vary, and when we have many scales of fluctuation. The synthesis of many scales of fluctuation results in a process with LTP, as described in the section “A physical explanation” of Koutsoyiannis (2002).

    My conclusion is: let us not restrict our vision. Note that Markonis and Koutsoyiannis used the widest possible time windows for the available instrumental and proxy records.

    D.

    References

    Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47 (4), 573–595, 2002.

    Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 324, 239–254, 2006.

    Koutsoyiannis, D., Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495, 2011.

    Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.

  • Thanks for your clarifications, Demetris. I do note however that you didn’t answer the questions posed by Rasmus and myself:

    Can we agree that radiative forcing (be it natural or anthropogenic) introduces LTP?

    Can we agree that radiative forcing (be it natural or anthropogenic) is omnipresent for the real world climate?

    If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal (unforced) variability, where does this increase in energy come from?

    Could you please briefly comment on these specific questions?

    Regarding the different dichotomies that you mention: I certainly agree that physics and statistics are not a dichotomy: Both are needed to make sense of the climate system (and of the world around us in general). That is also the point that Rasmus is making, if I understood him correctly. In your reply to Armin, you questioned a statistical result (a strong difference in LTP of land and sea data) based on physical reasoning. That is also what I am doing when I question the dominance of unforced internal variability in explaining the current warming (which is what I interpret your position to be), since I find it inconsistent with basic energy balance considerations (hence the third question above).

    Regarding my weight gain analogy, you wrote “If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.” However, based purely on statistical analyses, the evolution of the time series was interpreted to be due to stochastic variations. This would presumably still be the case even if I had detailed records of all inputs and outputs (since the argument was made purely on statistical grounds). Now with the climate system, we do have some (imperfect) idea about energy inputs and outputs (see e.g. Trenberth et al or Hansen et al). These indicate that the energy content of the whole climate system has increased (i.e. radiative forcing acting on the system).

    How could this not have influenced the global average temperature of the planet?

  • Demetris Koutsoyiannis

    Dear Bart,

    I do not avoid answering your questions, or anyone else’s. I am just subject to energy and time constraints. There is a lot of interesting stuff to read and comment on. As you see, I am working to contribute, but I have limitations (and additional duties). Thus I must set priorities. My first priority is that we should understand each other on general principles before we can discuss more specific issues. This is not easy. For example, when, after my extensive essay on dichotomies, in your last comment you say “the evolution of the time series was interpreted to be due to stochastic variations”, I feel that I have to stress my general view again: there is no such thing as “stochastic variation”. The variation is real; it was caused by a real cause, e.g. because you ate too much chocolate. In this respect I would never call it noise. It is signal.

    Stochastic can be just the model that we use to describe the real variation. And as I said, such type of model is useful only if the deterministic description fails. Whenever one is obliged to use terms like average, variance, autocorrelation, significance testing etc., he must be aware that, most probably, he has already departed from a deterministic description and resorted to stochastic description.

    A stochastic model should fully respect all laws and be consistent with all available information. If everything is known, then the stochastic model should reduce to the deterministic one. If there is uncertainty, then the stochastic model can describe it in terms of probability.

    D.

  • Dear Demetris,

    I’m trying to focus the discussion, and do not in any way mean to imply that you’re “avoiding to answer the questions”.

    The dichotomy that is at the centre of this discussion is that between internal (unforced) variability and forced changes (where the latter could be natural or anthropogenic). And what LTP could tell us about this distinction (if anything).

    Internal variability usually involves a redistribution of energy within the climate system (where the ensuing changes in surface temperature cause an energy imbalance which works to restore the system back to its equilibrium value where outgoing energy equals incoming energy; Rob’s second point in an earlier comment). Climate forcings involve a change in the energy balance (e.g. from changes in the sun, the albedo or the greenhouse gas loading), which causes the earth’s temperature to change.

    One apparent disagreement about this distinction is whether the presence of LTP is related to the degree of internal variability. Rasmus argued that this is not necessarily the case, since forcings also contribute to LTP and because forcings are omnipresent in the temperature data.

    I would be curious as to your and Armin’s view on this, time permitting of course.

  • Rob van Dorland

    Dear Demetris,

    My point in showing a “trend” and a “fluctuation” over a limited time span is that by applying your method of constructing a climacogram you lose information on the signature (time evolution) of the investigated time series.

    What you are actually doing is cutting the time series into N pieces (averages) of length k years and determining the standard deviation of their distribution. For the same distribution, you may reconstruct time series with different signatures by putting the pieces back in a different order. This implies that you lose information on the signature.
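    The construction Rob describes (cut the series into non-overlapping k-year averages, then take their standard deviation) can be sketched in a few lines. This is a minimal illustration on synthetic white noise; the function and variable names are mine, not from MK2012:

```python
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the non-overlapping k-term averages, for each scale k."""
    sd = []
    for k in scales:
        n = len(x) // k                           # number of complete windows
        means = x[:n * k].reshape(n, k).mean(axis=1)
        sd.append(means.std(ddof=1))
    return np.array(sd)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                         # white noise for illustration
scales = [1, 2, 4, 8, 16, 32]
sd = climacogram(x, scales)
# For white noise the climacogram falls off roughly as k**-0.5:
# each doubling of the scale divides the standard deviation by about sqrt(2).
```

    Note that, as Rob says, reshuffling the k-year pieces leaves `sd` unchanged, which is exactly the loss of "signature" information under discussion.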

    Time series (of global mean temperature) with different signatures may have different effects on the energy budgets of the climate system. The energy considerations then, mathematically speaking, constrain the degrees of freedom you find with your statistical method.

    So, can you confirm that by using your statistical method (MK2012)
    1) you lose information on the signature of the time series
    2) by adding energy considerations to your method, you will lose less information about the signature of the time series of global mean temperature?

    • Demetris Koutsoyiannis

      Dear Rob,

      Yes, of course, by using the standard deviation you lose information in comparison to the complete catalogue of values in the time series. However, using standard deviations at a continuum of time scales (the climacogram), you lose less information, or, if you allow me to put it differently, you lose the information that you want to lose.

      I think it is a general practice in the modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the real-world system. We try to find what the essential elements of the system are and represent those, ignoring the less important ones. For example, in describing a mole of a gas we usually do not care about the positions and momenta of the 6 x 10^23 molecules it contains and prefer to describe the system in terms of macroscopic quantities like pressure and temperature, which, by the way, are statistical quantities. Certainly, the use of temperature and pressure is associated with a loss of information, but this is intentional, isn't it?

      Stochastic models are macroscopic descriptions. Now, the climacogram is one-to-one related to the autocovariance function and the power spectrum of a stochastic process. Thus, if you assume a model for the power spectrum of a process, you have a model for the climacogram; no more and no less information with respect to the modelled power spectrum.
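      The one-to-one relation between climacogram and autocovariance can be verified directly: for a stationary process with autocovariance c(j), the variance of the k-term average is gamma(k) = (1/k)[c(0) + 2·sum_{j=1}^{k-1}(1 - j/k)·c(j)]. A small check of this identity for an AR(1) process (my own illustrative sketch; the parameter values are arbitrary):

```python
import numpy as np

# Climacogram point gamma(k) computed from the autocovariance c(j) = rho**|j|
# of a unit-variance AR(1) process, using
#   gamma(k) = (1/k) * ( c(0) + 2 * sum_{j=1}^{k-1} (1 - j/k) * c(j) ).
rho, k = 0.7, 8
c = rho ** np.arange(k)
gamma_k = (c[0] + 2 * np.sum((1 - np.arange(1, k) / k) * c[1:])) / k

# Empirical check by simulation (arbitrary series length).
rng = np.random.default_rng(0)
n = 400_000
eps = rng.normal(size=n) * np.sqrt(1 - rho**2)
x = np.empty(n)
x[0] = rng.normal()                      # start from the stationary distribution
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]
empirical = x[: n // k * k].reshape(-1, k).mean(axis=1).var(ddof=1)
# `empirical` should agree with `gamma_k` up to sampling error.
```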

      The loss of information in stochastic modelling can be controlled. I believe if you can quantify the “signatures”, the “energy budgets”, the “energy considerations” and the “degrees of freedom”, you refer to, they can be represented in stochastic modelling. Of course this will need some work, but I think in principle it is possible.

      If we accept that these quantities (the quantified “signatures”, “budgets” etc.) are somewhat reflected in the data, they are already there in the existing analysis. But if you can provide me with additional, quantified constraints, I will think how these could be taken into account.

      D.

  • Demetris Koutsoyiannis

    Arthur, you say:

    Koutsoyiannis appears to agree he has little to contribute to substantive discussion here:

    I cannot contribute in discussing questions that distinguish statistics from physics; I cannot accept that using a stochastic modelling approach violates or contrasts physics; I cannot offer too much in a discussion about separation of signals and noises;

    I’m sorry to hear this; dialogue is pretty difficult when one party completely refuses to respond to critical issues like these.

    I am sorry that you feel like this. Perhaps Rob, Marcel and Bart made a bad choice by inviting a person who has little to contribute. On the other hand, as I see from his comment, Bart agreed that physics and statistics are not a dichotomy and he did not express disagreements on what I wrote for the other dichotomies, so what is your point?

    Science is about explaining things we observe, not just sitting back and watching the world do whatever it does.

    Please note that I am an engineer by profession and I would be unprofessional if I sat back and watched. Do you feel that I am sitting back and watching right now, or am I trying to interact with you and the other people involved in the climate dialogue?

    The existence of long-term persistence in an observable implies some underlying cause that has slow variation: either an internal physical variable of the system under study that changes slowly, or an external parameter with similarly slow change. It is critical for our understanding to know which of these is the cause, and deeper analysis of the system has to be done to ferret that out.

    Of course it has. But a slow variation at a single time scale does not suffice to generate LTP. A single time scale of variation results in Markovian dependence. You need many scales of change to get LTP. I refer you to my post above; I devoted effort and time to write it, and I hope it will not be a waste of time if you read it. In addition, I can refer you to some of my publications already mentioned above, investigating the relation of LTP with extremal entropy production and with changes over a continuum of scales.
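    The point that one characteristic scale gives only Markovian (STP) dependence, while a synthesis of many scales yields LTP-like behaviour, can be illustrated numerically. This is a rough sketch in the spirit of the “A physical explanation” section of Koutsoyiannis (2002); the particular coefficients and function names are my own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2**14

def ar1(rho, n, rng):
    """AR(1) (Markovian) process: a single characteristic time scale, STP only."""
    x = np.empty(n)
    x[0] = rng.normal()                       # start from the stationary distribution
    eps = rng.normal(size=n) * np.sqrt(1 - rho**2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

# Superposing AR(1) components whose characteristic times span several
# orders of magnitude mimics "many scales of fluctuation" and produces
# LTP-like scaling over the covered range of scales.
composite = sum(ar1(rho, n, rng) for rho in (0.2, 0.8, 0.95, 0.99, 0.998))

def avg_std(x, k):
    """Standard deviation of the non-overlapping k-term averages."""
    m = len(x) // k
    return x[:m * k].reshape(m, k).mean(axis=1).std(ddof=1)

# White noise drops by a factor of 8 between scales 1 and 64 (slope -0.5);
# the composite decays far more slowly, as an LTP process would.
drop = avg_std(composite, 64) / avg_std(composite, 1)
```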

    Just to illustrate with the Nile river level example: let’s say we have two contrasting possible fundamental explanations for the long-term persistence seen there:
    (1) changes in human use of the water and land around the river that vary over centuries
    (2) variations in ocean currents and behavior also on a century-scale, with an impact on climate and rain-fall over the Nile basin.

    If you haven’t done the work to distinguish between (1) and (2), then observing LTP in the Nile river example tells us nothing useful about the climate; it is just as much garbage as a scientific experiment in the lab under conditions where important variables are not being controlled.

    It is not difficult to infer that the dominant mechanism is the second, because we speak about the 7th-15th centuries, during which the exploitation of the Nile was minimal in comparison to that of today. The dams were built only in the 20th century. Note, we also have a flow record of the Nile for the modern period (starting in 1870). The record is naturalized, meaning that hydrologists were able, from water budget data, to find what the natural flow discharge was, taking into account human withdrawals etc. The modern naturalized record verifies LTP. We also have records at other sites within the Nile basin, which again verify LTP.

    If this is garbage to you, I am sorry; the only thing I can do is to invite you to make a better analysis and shed more light.

    D.

  • Demetris Koutsoyiannis

    Dear Spencer,

    I have no words to thank you for your comments. I could not reach the clarity (not to mention the level of English) of your comments. In particular, I warmly endorse your last comment assessing that my opinion may not be classed as “option 1” and clarifying that random walk cannot stand as a physically realistic model, while LTP can. I trust Bart, who seems to dislike unit-root processes, will appreciate the latter feature.

    Not only does your comment make my life easier, in terms of effort to reply, but it also gives me a warm feeling of sharing similar ideas with a knowledgeable colleague, who, notably, has a signal processing background that I do not have.

    Demetris

    • Demetris, in your reply to Spencer you state that LTP can stand as a physically realistic model. What do you mean by that? What is your physical interpretation of the presence of LTP? Is a physical interpretation based solely on the presence of LTP possible at all? I do note that Spencer equates LTP with internal variability (rather than being the combined result of internal variability and radiative forcing). What is your view on that?

      You state that I dislike unit root processes. That would be nonsensical, as LTP and unit roots are just statistical features of a time series. What I alluded to in my April 1st post is that certain interpretations of statistical features may be unphysical.

  • Demetris Koutsoyiannis

    Dear Bart,

    You say:

    You state that I dislike unit root processes. That would be nonsensical, as LTP and unit roots are just statistical features of a time series. What I alluded to in my April 1st post is that certain interpretations of statistical features may be unphysical.

    First, I am sorry if my failed attempt to inject a little bit of humour in the discussion annoyed you and if you found it non-sensical. Therefore, I’ll try to be serious in this comment.

    Second, I would formulate your statement in a different way: Strictly speaking, LTP and unit roots are not features of a time series but of a stochastic model. A time series can be different things, e.g. a series of observations of a natural process, a series of outputs of a deterministic or a stochastic model, etc. The time series alone cannot tell us that it exhibits LTP, STP, unit-root behaviour, or whatever. If it could, we would not be having this discussion.

    If we do not know the exact dynamics that gave rise to a certain time series, different (infinitely many) stochastic models could be used to represent it. Statistical analysis of the time series will then tell us which of the models, e.g. an LTP model, an STP model, a unit-root model, etc., is more consistent with the statistical properties of the time series. Such comparisons are unavoidably inductive procedures, not strict mathematical proofs.

    Third, I fully agree with you that certain interpretations (and models) may be unphysical. This implies an additional, very powerful means to reject some of the models, the deductive logic.

    As a first example, let us examine a unit root process. I must admit that I dislike even the term “unit root”, which over-stresses a minor mathematical feature of a process. The important characteristic of such a process is that it is nonstationary. A nonstationary process in which the mean and/or the standard deviation tends to infinity with increasing time can be rejected on the basis of deduction. There is no need to compare the statistical properties of the time series with the theoretical properties of the model.

    The simplest case of such a nonstationary process is the random walk process, closely related to the Brownian motion. Spencer has explained why we should reject it:

    So, for example, we can reject the idea that the climate is a random walk, since a random walk does not have a finite defined population mean, so is inconsistent with the principle of conservation of energy.

    I also quote another explanation by Lubos Motl from his blog:

    Such a model of the climate would resemble the Brownian motion. After a very long time “t”, the typical temperature anomaly would scale like “sqrt(t)”. It would drift away. That’s unrealistic because “sqrt(4 billion years)” would still be many thousands of degrees. ;-) The number “sqrt(t)” is very small compared to “t” but it still diverges if “t” goes to infinity.
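    Both points, the sqrt(t) divergence of the random walk and the saturation of a damped (Markovian) version, are easy to check numerically. A sketch with arbitrarily chosen parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
steps = rng.normal(size=(500, 4096))         # 500 independent realisations

# Pure random walk: x_t = x_{t-1} + eps_t. The ensemble spread grows
# like sqrt(t), so the process has no finite stationary variance.
walk = steps.cumsum(axis=1)
spread_early = walk[:, 63].std()             # should be near sqrt(64)  = 8
spread_late = walk[:, -1].std()              # should be near sqrt(4096) = 64

# Damped ("mean-reverting") version: x_t = rho*x_{t-1} + eps_t.
# The spread saturates near sqrt(1/(1-rho**2)), i.e. the process is
# stationary -- but Markovian, hence STP rather than LTP.
rho = 0.99
x = np.zeros(500)
for t in range(4096):
    x = rho * x + steps[:, t]
damped_spread = x.std()                      # bounded, unlike the walk
```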

    Lubos adds some “damping” to the random walk to make it more physically realistic. He thus gets a Markovian process. This is indeed more realistic, but it classifies as an STP process. An STP process is not easy to reject on a deductive basis. However, as I explained in my main post, a Markovian process has some theoretical problems; quoting my main post:

    • Its past has no influence on the future whenever the present is known (in other words, only the latest known value matters for the future).

    • It assumes a single characteristic time scale in which change or variability is created. [...] As a result, when the time scale of interest is fairly larger than this characteristic scale, the process behaves like white noise.

    For these reasons, explained in more detail in my main post, among nonstationary, stationary STP, and stationary LTP processes, the least unphysical are the stationary LTP ones.

    So, dear Bart, do my main post and the additional explanations in my comments above give answers to your following questions:

    Demetris, in your reply to Spencer you state that LTP can stand as a physically realistic model. What do you mean by that? What is your physical interpretation of the presence of LTP? Is a physical interpretation based solely on the presence of LTP possible at all?

    If not, could you also see some of the papers I give in my references, particularly those I mention in my reply to Arthur (also cited in Spencer’s comments)?

    I left one sentence of your comment unanswered. I will answer this soon (I have something else to do in the meantime).

    D.

  • Demetris Koutsoyiannis

    Bart, you say:

    I do note that Spencer equates LTP with internal variability (rather than being the combined result of internal variability and radiative forcing). What is your view on that?

    I am not sure if Spencer equates LTP with internal variability. In my reading he says that internal variability can lead to LTP and he uses for that a nice example, the cloud cover.

    I have tried to investigate the question whether a system could exhibit LTP if its inputs are constant. You may read two papers about this question (2006 and 2010). As you can see, even a fully constant input can give rise to LTP.

    This brings us to Rasmus’s questions, which you also quoted in one of your other comments:

    Can we agree that forcing introduces LTP?
    Can we agree that forcing is omnipresent for the real-world climate?

    My answer to these would be: I agree that (changing) forcing can introduce LTP and that it is omnipresent. But LTP can also emerge from the internal dynamics alone, as the above examples show. Actually, I believe it is the internal dynamics that determine whether or not LTP would emerge. As a counterexample, I can imagine a hypothetical system with strong damping/stabilizing mechanisms, in which even an incoming signal exhibiting LTP would not be manifested in the system state. For the Earth system, the radiative forcing itself, being a balance of incoming and outgoing radiation, depends on the internal variability.

    D.

    • Dear Demetris,

      Thanks for answering the questions as posed by Rasmus (and repeated by me): “I agree that (changing) forcing can introduce LTP and that it is omnipresent.”

      From this I conclude that you agree that the presence of LTP is not necessarily indicative of internal variability being dominant (since forcings also introduce LTP).

      Is this a correct interpretation of your views?

      Regarding physics vs statistics: They are both useful and needed to make sense of the world around us, hence there is no dichotomy. Statistics in the context of climate science is a tool to understand the physics. My repeated questions were an attempt to have you clarify what your statistical analyses mean in physical terms. Your answers re forcings and LTP (as quoted above) and about conservation of energy (earlier) are steps in that direction which I warmly welcome.

      • Demetris Koutsoyiannis

        Dear Bart, as I wrote above

        Actually, I believe it is the internal dynamics that determine whether or not LTP would emerge.

        Please see also the publications I mentioned which demonstrate that the internal variability is sufficient to introduce LTP, even without external variability.

        Otherwise, I welcome your recognition of statistics as a tool to understand physics.

        D.

      • Demetris Koutsoyiannis

        Also, see Armin’s excellent comment about it.

  • Dear Demetris,

    Thanks for clarifying the nature of the different stochastic models; very useful to read. I appreciate that you accept that conservation of energy prevents the climate from behaving as a random walk. That’s a strong physical constraint and it’s important to be clear about its implications; thanks for adding this clarity on your part too.

    You further state that stationary LTP is the least unphysical process by which to interpret the evolution of global average temperature. However, that doesn’t tell me anything about the physical processes which you deem responsible for the observed changes in temperature. Are these changes predominantly due to a redistribution of energy within the climate system, or due to (natural and anthropogenic) climate forcings, or due to something else (e.g. fast feedbacks wandering off)? Does LTP imply anything about the warming being governed by unforced vs forced changes, or by anthropogenic vs natural factors? I’m hoping for an answer in physical, not statistical terms.

    Dear Armin,

    Thanks for joining in again. I appreciate you answering the questions posed earlier by Rasmus and myself: “Natural Forcing plays an important role for the LTP and is omnipresent in climate”. Is it correct to deduce from this that the presence of LTP cannot by itself distinguish between unforced and forced changes in climate? (i.e. LTP is not necessarily indicative of internal (unforced) variability)

    You claim that natural forcings contribute to LTP while GHG forcing does not. That seems in contradiction to what Rasmus claimed (his fig 2). Hopefully Rasmus can respond with his view on this.

    I am somewhat surprised that a trend (e.g. from GHG forcing or from whatever other cause) would not contribute to LTP; how could it not? Since the analysis method is purely based on the statistical behavior, I would guess that it’s purely coincidental that LTP is not increased by one (GHG) forcing whereas it is increased by other (natural) forcings. After all, if the GHG were from natural origin, it would have the same (according to you negligible) impact on LTP. So from that perspective the presence of LTP would not be a strong indication of anthropogenic versus natural causation, right? Moreover, human forcing is not limited to GHG, but also includes aerosol forcing and land use (albedo) forcing (both of which are thought to be net negative, i.e. moderating the warming from GHG). I’m unclear as to how that factors into your analysis and interpretation (but that’s perhaps a detail).

    Dear Rasmus,

    Could you expand on your view of LTP in light of Armin’s comments? Where does your definition differ from or agree with his? I am also curious to hear some more details re your fig 2: you mention only the in- and exclusion of GHG forcing (black and grey line, resp.). However, does the black line include all known (natural and anthropogenic) forcings and the grey line none of these (i.e. only showing the modeled internal variability)?

  • Rob van Dorland

    Dear Demetris,

    Thank you very much for your answers to my questions. These may lead us to the quintessence of our differences in insight. But let’s start where I agree with you.

    You stated: “I think it is a general practice in modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the real-world system. We try to find which the essential elements of the system are and we try to represent those, ignoring the less important.”

    I agree with this statement, although I find it a little surprising that you say in your comments of 1 May 2013 8:07 pm that “Models vs. reality is a true dichotomy”. In my view the two statements contradict each other.

    But let’s focus on two kinds of models: Atmosphere-Ocean General Circulation Models (AOGCMs) and your statistical model (described in MK2012). AOGCMs are the most comprehensive models climate scientists are using, in the sense that most (relevant) physical processes following physical laws are incorporated. Every gridbox of the model obeys the first law of thermodynamics, i.e. conservation of energy. This is in my view an essential element.

    It implies on the macroscopic scale that if there is a positive radiative forcing acting for a long time (e.g. the 30% increase in total solar irradiance in the course of the history of the earth), the climate system will gain energy until the outgoing energy flux of the system is in equilibrium with the influx. This will be reflected in global mean temperature changes.
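    The equilibrium argument can be illustrated with a zero-dimensional energy balance sketch, C dT/dt = F - lambda*T, where a sustained positive forcing F drives the temperature toward F/lambda, at which point the extra outgoing flux balances the influx. All parameter values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Zero-dimensional energy balance model: C dT/dt = F - lam*T.
C = 8.0          # heat capacity (W yr m^-2 K^-1), hypothetical
lam = 1.2        # climate feedback parameter (W m^-2 K^-1), hypothetical
F = 3.7          # constant radiative forcing (W m^-2), illustrative
dt = 0.1         # time step (yr)

T = 0.0
for _ in range(int(200 / dt)):          # integrate forward 200 years (Euler)
    T += dt * (F - lam * T) / C

equilibrium = F / lam                   # temperature at which outflux balances influx
# After ~30 e-folding times (C/lam is about 6.7 yr here), T has converged
# to the equilibrium value.
```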

    How about your model? You process time series (of global mean temperatures) as if they consist of multiple fluctuations at all time scales. By ignoring the possibility of (long-lasting) trends (clearly the case for the brightness increase of the sun), energy is not conserved, for the reasons I explained in my last comment.

    In my view physics should be an essential part of any (climate) model. Statistics can be a tool for analyzing observations or model data; it is not the same as physical laws (contrary to what you stated in your comments of 1 May 2013 8:07 pm: “In other words, statistics is physics.”).

    So, I am very interested to hear whether it is possible that under a myriad of external forcings (natural and in the last centuries also anthropogenic) conservation of energy is not violated in a model which includes fluctuations only.

    Rob

  • Demetris Koutsoyiannis

    Dear Bart,

    The first paragraph of your last comment gave me the impression that we understand each other better now, but then the second part made me worry:

    However, that doesn’t tell me anything about the physical processes which you deem responsible for the observed changes in temperature. Are these changes predominantly due to a redistribution of energy within the climate system or due to (natural and anthropogenic) climate forcings or due to something else (e.g. fast feedbacks wondering off)? Does LTP imply anything about the warming being governed by unforced vs forced changes, or by anthropogenic vs natural factors? I’m hoping for an answer in physical, not statistical terms.

    I believe that if something does not tell you or me anything about the observed changes, etc., it is irrelevant to our discussion. What you or I understand as physical processes depends on your or my understanding (nb., understanding is subjective). Physical processes are not just those explained by Newton’s laws. Some people are able to understand systems governed by the Second Law of thermodynamics (which, by the way, is a statistical mechanical law) and classify them as physical systems. Some are even able to understand, and classify as physical, quantum systems governed by the Schrödinger equation, in which the concept of probability is central. Some even dare to speak about flows of probability, and imply a law of conservation of probability. Most of these concepts I have difficulty understanding, but I would not characterize them as unphysical. I would avoid applying the Procrustean idea that physical is only what fits in my own mind. I think the climate system is quite difficult to study, and I find that naïve attempts to describe (and understand) it in deterministic terms can only lead to deadlocks. Therefore I welcome any ideas from the stochastic toolbox, such as probability and its fluxes, the principle of maximum entropy, Bayesian statistics, and the like. All these become physical as long as they are applied to physical systems.

    Furthermore, I cannot understand how these two extracts from your own comments can be consistent with each other:

    Quote 1 (earlier comment)

    I certainly agree that physics and statistics are not a dichotomy.

    Quote 2 (last comment)

    I’m hoping for an answer in physical, not statistical terms.

    But I hope the explanations given in the last excellent comment by Spencer answer your questions “in physical terms”.

    D.

  • Demetris Koutsoyiannis

    Arthur,

    I certainly do not attack you or your straw man, whose existence I was not aware of. My impression is that we are having a dialogue, as the title of this forum prompts us to, right? As “dialogue” happens to be a Greek word, I believe I have a good understanding of its meaning. Well, “climate” is also a Greek word, but certainly it is more difficult to understand :-)

    You are right that I did not provide citations covering all my assertions about the Nile. So here is a paper which examines the modern record of the Nile: Medium-range flow prediction for the Nile: a comparison of stochastic and deterministic methods. You may see in Table 2 that the Hurst coefficient of the annual series is 0.85, which indicates LTP. I hope that after the scheduled event on Harold Hurst additional citations will become available (in particular, about other sites in the Nile basin). Note that I have published about the Nilometer record several times, but Figure 2 in my main post above is for a longer period and is an original figure (as are all the other figures), not copied from previous publications.
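    For readers who want to reproduce this kind of estimate, here is a sketch (my own illustrative code, not from the cited paper): it generates synthetic fractional Gaussian noise with a prescribed H by circulant embedding and then recovers H from the log-log slope of the climacogram, using slope = H - 1 (so white noise gives a slope of -0.5, i.e. H = 0.5):

```python
import numpy as np

def fgn(n, H, rng):
    """Fractional Gaussian noise via circulant embedding (Davies-Harte)."""
    k = np.arange(n + 1)
    acf = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
    row = np.concatenate([acf, acf[-2:0:-1]])     # circulant first row, length 2n
    lam = np.fft.fft(row).real                    # eigenvalues of the circulant
    lam[lam < 0] = 0.0                            # guard against round-off
    m = len(row)
    z = rng.normal(size=m) + 1j * rng.normal(size=m)
    y = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(2 * m)
    return np.sqrt(2) * y.real[:n]

def hurst(x, scales=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate H from the climacogram slope: slope of log(sd) vs log(k) is H - 1."""
    sd = [x[:len(x)//k*k].reshape(-1, k).mean(axis=1).std(ddof=1) for k in scales]
    return 1.0 + np.polyfit(np.log(scales), np.log(sd), 1)[0]

rng = np.random.default_rng(3)
H_hat = hurst(fgn(2**14, 0.85, rng))              # should land near the prescribed 0.85
```

    A value of H well above 0.5, as for the Nile’s 0.85, is the quantitative fingerprint of LTP that the discussion refers to.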

    About the question of what “physical insight” is, please refer to my answer to Bart above. But I appreciate that you “have read enough about maximum entropy”, and I respect your wish to see another example. So here it is: “Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches”.

    This is about a catchment which is important for the Athens water supply system. As you will see, LTP is present in both rainfall and temperature and is further amplified in river flow because of the interaction of the river with groundwater processes as well as because of the changes in the human withdrawals of water (the latter is of minor importance).

    With reference to modelling principles, which I referred to in my first comment to Rasmus, please notice the following extract from that paper:

    For setting up the hydrologic model, the detailed dataset described in section 2b was split into two subsets (or subperiods), consisting of the calibration subperiod of six years (October 1984–September 1990) and the validation subperiod of four years (October 1990–September 1994). In a second step the model was run for the period 1908–2003 with the long-term historical dataset in order to assess its behavior in comparison with the historical runoff time series.

    You may also notice in the last panel of Fig. 12 how poor the performance of climate models is, even after downscaling, and how stable the future projections, obtained by using climate model projections, are. Of course in the system management we have ignored GCM results as detailed in another paper, Hurst-Kolmogorov dynamics and uncertainty.

    D.

  • Demetris Koutsoyiannis

    Dear Rob,

    Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question :-)

    In other words, my reply on whether statistics is physics is already there, as well as in many of my other comments and in many of my publications, and there is little to add. Of course you may feel free to banish statistical thermophysics and quantum physics from physics, but I will not follow you. Assuming that you banish them, can you prove the ideal gas law (PV = nRT = NkT) without using statistics?

    As for myself, given that I regard statistics as physics, I can use the law of large numbers, the central limit theorem and the principle of maximum entropy as powerful tools to make inferences in physics. These say that you do not need to study the details in order to know the macroscopic behaviour of a system comprising myriads of variables. This is the case in the example I gave before: in a monoatomic gas you have 6 (variables/molecule) x 6 x 10^23 (molecules/mol) = a great many myriads of variables, and you do not care to know exactly even a single one of them. For you can infer the behaviour of the system using the three probability laws above. The conservation of momentum and energy are put as constraints for the entire system, neglecting the details of energy exchange between individual molecules.
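
    [As an aside, the macroscopic-inference argument can be sketched numerically. The toy Python script below is an editorial illustration, not part of the dialogue: it draws Maxwell-Boltzmann velocity components in units where kT = 1 and shows the mean kinetic energy per molecule settling at 3/2 by the law of large numbers, with no knowledge of any individual molecule.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Law of large numbers: the mean kinetic energy per molecule stabilises as the
# number of molecules grows, even though no single velocity is ever inspected.
for n in (10**2, 10**4, 10**6):
    v = rng.normal(0.0, 1.0, size=(n, 3))      # 3 velocity components, kT = 1
    mean_ke = 0.5 * (v**2).sum(axis=1).mean()  # average kinetic energy
    print(n, round(mean_ke, 3))                # converges towards 3/2
```

    [The macroscopic quantity (3/2 kT per molecule) emerges from the constraint structure alone; the 3n individual velocities are never needed.]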

    The situation is quite similar in climate. Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?

    D.

  • All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP cannot be used to distinguish forced from unforced changes in global average temperature. The omnipresence of both unforced and forced changes means that it’s very difficult (if not impossible) to know the LTP signature of each. Therefore, LTP by itself doesn’t seem to provide insight into the causal relationships of change. It is however relevant for trend significance, though fraught with challenges since the unforced LTP signature is not known.
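
    [The point that LTP matters for trend significance can be illustrated with a small Monte Carlo. This is an editorial sketch only: it uses generic Fourier-filtered noise as a stand-in for LTP, not any participant’s method, and the record length and H values are arbitrary choices. Least-squares trends fitted to unit-variance records scatter far more widely under LTP (H = 0.85) than under white noise (H = 0.5), so a trend that is significant against a white-noise null may not be against an LTP null.]

```python
import numpy as np

def synth(n, H, rng):
    """Fourier-filtered noise with spectrum ~ f^-(2H-1); H = 0.5 gives white noise."""
    f = np.fft.rfftfreq(n)[1:]                      # positive frequencies
    spec = np.concatenate(([0], f ** (-(2*H - 1) / 2) *
                           np.exp(2j * np.pi * rng.uniform(size=len(f)))))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()                 # unit sample variance

rng = np.random.default_rng(5)
n, trials = 100, 2000
t = np.arange(n)

spread = {}
for H in (0.5, 0.85):
    slopes = [np.polyfit(t, synth(n, H, rng), 1)[0] for _ in range(trials)]
    spread[H] = np.std(slopes) * n                  # spread of fitted 100-step trends
    print(H, round(spread[H], 2))
```

    [Because both nulls have the same variance, the wider trend distribution under H = 0.85 comes entirely from the persistence, which is exactly why the unforced LTP signature matters for significance testing.]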

    I would also like to put forward the issue that was voiced by Lennart van der Linde in the public comments, echoing Rasmus: the stronger the natural variability, the more strongly spontaneous changes in the earth system dynamics are amplified, and hence the higher the climate sensitivity. This limits the fraction of warming that can be attributed to natural variability, since strong variability would simultaneously enhance the fraction attributable to a given amount of radiative forcing.

  • Rob van Dorland

    Dear Demetris,

    “Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question”

    Yes, that’s quite a coincidence, I tend to believe you. Your answer contains interesting information about physics and statistics.

    In my view there is – to speak with your words – a dichotomy between statistics, derived from fundamental physics, and statistics, used as an analyzing tool. Let’s call it statistics1 and statistics2, respectively.

    Statistics1:
    For example: molecular velocities following the Maxwell-Boltzmann distribution, photon energies following the Bose-Einstein distribution, the gas law linking temperature, pressure and volume (as you mentioned), and Heisenberg’s uncertainty principle in quantum mechanics (also mentioned in your earlier comment). All of these can be derived from fundamental physics. These distributions and/or probabilities are fundamental properties of nature.

    Statistics2:
    For example decomposing time series into harmonics (Fourier transformation) or into other functions (e.g. Laplace transformation), wavelet analysis and regression. These are not based on fundamental physics, but are mathematical tools for analyzing experimental results and/or observations. Your climacogram is also an analyzing tool, as you have characterized it yourself in the abstract of your paper with Markonis (MK2012):

    “In our analysis, we use a simple tool, the climacogram, which is the logarithmic plot of standard deviation versus time scale, and its slope can be used to identify the presence of HK dynamics”

    All of these analyzing techniques are based on assumptions and have their limitations. They can be useful, but they are by no means fundamental properties of nature. Therefore statistics1 is fundamentally different from statistics2.
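
    [For readers unfamiliar with the climacogram described in MK2012, a minimal numerical sketch follows. This is an editorial illustration with arbitrary record length and scale range: average the series at increasing time scales, plot log standard deviation against log scale, and read the Hurst coefficient off the slope plus one. For white noise the slope is -0.5, i.e. H = 0.5.]

```python
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the time-averaged process at each time scale."""
    sd = []
    for k in scales:
        n = len(x) // k
        means = x[:n * k].reshape(n, k).mean(axis=1)  # averages at scale k
        sd.append(means.std(ddof=1))
    return np.array(sd)

rng = np.random.default_rng(42)
x = rng.normal(size=2**16)                  # white noise: H = 0.5 expected
scales = np.array([1, 2, 4, 8, 16, 32, 64])
sd = climacogram(x, scales)

# Under Hurst-Kolmogorov scaling, log(sd) vs log(scale) has slope H - 1.
H = np.polyfit(np.log(scales), np.log(sd), 1)[0] + 1
print(round(H, 2))                          # near 0.5 for white noise
```

    [An LTP series would give a shallower slope, hence H above 0.5; this is the sense in which the tool “can be used to identify the presence of HK dynamics”.]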

    Can you agree with the distinction I make between statistics1 and statistics2?

    To respond to your questions:
    “Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?”

    As explained in my earlier comment, your approach doesn’t say anything about conservation of energy. Considering system Earth, this is an important constraint. Including this you might be able to distinguish between forced and unforced climate (global mean temperature) change. As you already mentioned the climacogram cannot make the distinction. My guess is that taking energy constraints into account will affect the conclusions on trend significance. So it’s not a matter of violation, but of omission instead.

    Rob

  • Rasmus Benestad

    Dear Armin,

    I think we’ve reached agreement on the issue of whether forcings affect LTP, if I understand you right. And forcing is omnipresent.

    If we say that trends, too, have an LTP nature in addition to long-term variability, then we still haven’t managed to distinguish the two.

    Can we agree that any use of LTP in hypothesis testing has to distinguish between intrinsic (“noise”) and externally forced LTP before we can say whether a trend is part of the natural internal variability?

    You may not think it “is nearly impossible to have a meaningful discussion”, but I think you’re wrong.

    Firstly, climate analysis and trend testing are not just a question of LTP. Furthermore, one always needs to make a set of assumptions before making sense out of the mathematics, even when applying LTP models.

    It is easy to produce a bunch of meaningless numbers even with the most elegant mathematical model, for instance the mathematical structure of the LTP.

    We just need to understand the mathematical structure of the LTP and how we can model it. Then we can use the methods that we developed in our 2011 PRE to quantify if a trend is natural or not.

    Can you really demonstrate this? I’m sorry, I haven’t read your paper, although the papers I’ve read on LTP so far have not really convinced me. Too idealistic and not sufficiently practical. These papers also neglect a large scope of additional and relevant information.

    In order to really understand LTP, you need to unveil the physical mechanisms which are at play.

    All correlations can give you misleading numbers when based on a finite sample, and the stronger the persistence, the greater the danger of getting an accidentally good fit. With low-level chaos and different (unpredictable) weather regimes, I wonder if LTP assumptions and models may be tricked.

    One way to test your methods is to carry out a large set of double-blind tests with data samples for which the answers are already known, for instance from climate/weather models (“surrogate” data or “pseudo-reality”).

  • Rasmus Benestad

    Dear Demetris,

    Let me respond to some of your points:

    1. Models vs. reality – I think we agree that models are not the real world, but they are nevertheless useful concepts. They are very useful for providing a ‘pseudo-reality’ against which statistical models can be tested, mainly because we can make decisions about the simulated world, e.g. forcing or no forcing.

    2. Physics vs. statistics – I agree that physics and statistics both are important ways of understanding the universe. I also think that physical laws constrain possible outcomes and this must be reflected in the statistics.

    4. Trends vs. fluctuations – I agree that the concept ‘trend’ is not very definite, and it’s important to state the scientific question more clearly. The question that I think we are dealing with is whether the current global warming (“trend”) could arise from no forcing – or just natural forcings.

    5. Linearity vs. nonlinearity – The difference between linearity and non-linearity is not controversial. You could add other more relevant phenomena, such as tropical cyclones. On the other hand, there are aspects which are more linear, and we know that there are some types of forcings which cause a somewhat linear response. Take the seasonal cycle for instance. It is fairly clear what effect changes in the incoming solar energy have on the temperature statistics. We also see the effect from volcanoes, although it may be more complicated. Slow changes in the Earth’s orbit around the sun also appear to produce a response which is fairly clear. And we know that the greenhouse gases have an effect – we can look to the other planets in our solar system. There may be non-linear effects from the ocean-atmosphere system causing decadal variations, and the question is whether these are sufficient for explaining the exceptional warming that we see now. Data from the past suggest this warming is highly unusual, and even if it were a spontaneous freak event, it would still be quite unlikely.

    6. Stochastic vs. deterministic models – on the quantum physics level, everything is stochastic, but the ‘laws of statistics’ make classical-scale physics considerably more deterministic. Still, the presence of chaotic dynamics makes some processes impossible to predict on longer time scales. I also agree that not all stochastic models violate the laws of physics, but there are many stochastic models which do. Furthermore, there may be some stochastic models which provide useful predictions for certain scientific questions and still give misleading results for others.

    Models describing the mean surface temperature on a planet must account for the energy budget, thermodynamics, and dynamics. There are some non-stationary models, such as “random walk” models, which may be hard to reconcile with the planet’s energy balance and hydrostatic stability. In such circumstances there is a true dichotomy – but in many cases there is no dichotomy.

  • Rasmus Benestad

    You claim that natural forcings contribute to LTP while GHG forcing does not. That seems in contradiction to what Rasmus claimed (his fig 2). Hopefully Rasmus can respond with his view on this.

    The way I view this is that any trend will affect the auto-correlations. In addition, there is a question about how the climate system reacts to even a linear trend forcing. We agree that there is a presence of internal dynamics and that there are feedbacks which result in inter-annual and decadal variations. In a chaotic system, changes in the external conditions may imply changes in the system’s evolution at points of bifurcation.

    I’ve earlier stated that I believe the weather is ‘chaotic’ whereas the climate is not, just as the individual days of the next year are unpredictable whereas the seasonal statistics are fairly well known (seasonal forecasts try to say something about how much the future will diverge from the ‘normal’ state). This, however, makes an assumption about time scales. We know that there are fluctuations in the global mean temperature, but these are distinct from the slow changes.

    In other words, I’m not convinced that GHG forcing does not contribute to LTP.

  • Rasmus Benestad

    Just to comment on ‘unphysical models‘ to clarify my position.

    In my view, a model is ‘unphysical’ if it violates one or more laws of physics. A model may have some aspects of physics (e.g. maximising entropy), but if it violates other physical constraints, such as the conservation of mass, energy, or charge on the classical scale, then it’s ‘unphysical’.

    If the global mean temperature undergoes large excursions, then this will clearly have an impact on other parts of the climate system, as it implies that heat is shifted around. Conservation of energy implies that.

    When it comes to LTP, trends, and internal variations, it may be useful to look at sea level pressure (SLP) or SLP indices. We know a priori that the SLP is not likely to have any long-term trend (or perhaps just a tiny one due to changes in the atmospheric composition), as the barometric pressure is a measure of the mass of air above the point of measurement. If we can rule out the possibility that the atmosphere increases in mass, then SLP is only expected to exhibit LTP but no trend.

    It’s a bit speculative – agreed – that the SLP should have similar LTP as the global mean temperature.

    Another diagnostic could be the global mean sea level change or the total ocean heat content. See the recent post on RealClimate.org. The question is whether this measure shows more of a trend and less decadal variability. How does the LTP detection differ in these three cases and what can we learn from that? As I stated in my opening post, I think it’s a great mistake to look at only one indicator – and in a sense, this is where one ‘unphysical’ property resides.

  • Armin Bunde

    Dear Bart,
    sorry for answering late. To answer your question, we have to note that we distinguish between natural fluctuations and trends. When looking at an LTP curve, we cannot say a priori what is trend and what is LTP. When analysing LTP in the IMPROPER way, which is still one of the best-liked ways among many climate scientists, namely by the ACF, then we also cannot distinguish them, and this is the reason why Rasmus comes to the wrong conclusion that external trends contribute to the LTP.

    The real crux is that many of our colleagues just have problems reading and understanding literature on LTP that requires some mathematical understanding, and so they cannot appreciate the enormous progress made in this field in the past 15 years. By definition, an external trend does not contribute to the LTP of a record. The LTP is natural; the trend is external and deterministic. When using the PROPER methods like DFA and WT, you can indeed quantify the natural LTP in a record even in the presence of an external monotonic trend!

    Of course, if you use the improper ACF as Rasmus did, you cannot!!
    When testing to what extent GHG is responsible for LTP, we found it is not; please have a look at our 2004 GRL, where we also specified the methods.
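
    [Armin’s claim that DFA is insensitive to a superposed trend while the ACF is not can be checked with a toy experiment. The sketch below is an editorial illustration of standard second-order DFA, not the code from the cited papers, and the trend size and scale range are arbitrary: DFA2 fits and removes a local quadratic from the integrated profile, which absorbs any linear trend in the record, whereas the raw lag-1 autocorrelation is strongly inflated by the same trend.]

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis; order-2 removes linear trends in x."""
    y = np.cumsum(x - x.mean())                      # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t))**2)
               for seg in segs]                      # detrended variance per segment
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.normal(size=2**14)                           # white noise: alpha = 0.5
trend = 0.0005 * np.arange(len(x))                   # deterministic linear trend
scales = np.array([32, 64, 128, 256, 512])

alphas, r1s = {}, {}
for label, series in [("plain", x), ("with trend", x + trend)]:
    alphas[label] = np.polyfit(np.log(scales), np.log(dfa(series, scales)), 1)[0]
    z = series - series.mean()
    r1s[label] = (z[:-1] * z[1:]).mean() / z.var()   # raw lag-1 autocorrelation

print(alphas)   # DFA exponent essentially unchanged by the trend
print(r1s)      # lag-1 autocorrelation strongly inflated by the trend
```

    [The DFA exponent alpha plays the role of the Hurst coefficient for noise-like records; the linear trend leaves it untouched while pushing the naive lag-1 autocorrelation towards 1.]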

    It is very unfortunate that Rasmus does not seem to be able to read this and our other articles on LTP. I am sure he would appreciate them and see that LTP is not a beautiful but needless kind of theory made by strange theoreticians to make life harder for real climate scientists (forgive me, I am joking a bit). If he read them, he would certainly understand (1) from our 2009 PRE that the ACF is a poor tool for analysing LTP in a short record (we have specified the ACF analytically as a function of the record length and the correlation exponent; he only needs to read the formula), and (2) from our 1998 PRL, 2001 Physica A, and several other papers including our 2013 Nature Climate Change that there are much better tools (DFA and WT) than the ACF, and he would also see the consequences of LTP, e.g. the clustering of extremes.

    (3) Finally, when reading our 2009 GRL and the more extensive 2011 PRE he would highly appreciate that it is easy to determine the significance of a trend in a long-term correlated record, because we have specified analytic formulas for the significance as well as for the confidence intervals. This way, he would recognize that natural LTP is not some strange idea of theoreticians, but is of real use.

    The WRONG alternative, and when reading the nonspecific comments of Rasmus I think he may favour that one, is to assume that natural climate variability is an AR1 process. Indeed, climate scientists liked it in the past, because it is easy to handle, easy to generate, and a significance analysis was not too difficult to do. Indeed, mathematicians had already developed tools for this in the 1930s. What these climate scientists do not know is that an LTP process can also be generated very easily (one only needs to know Fourier transforms) and that it is even easier than for an STP process to perform a significance analysis. But I am sure this will change in the next years.
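
    [Armin’s remark that an LTP process is easy to generate with Fourier transforms can be sketched as follows. This is an editorial illustration using a standard spectral-synthesis approximation to fractional Gaussian noise, not necessarily the specific method of his papers; H = 0.85 is chosen to echo the Nile example, and the aggregated-variance check is only an approximate estimator.]

```python
import numpy as np

def fgn_fourier(n, H, rng):
    """Approximate fractional Gaussian noise by Fourier filtering:
    impose a power-law spectrum S(f) ~ f^-(2H-1) with random phases."""
    f = np.fft.rfftfreq(n)[1:]                       # positive frequencies
    amp = f ** (-(2 * H - 1) / 2)                    # amplitude ~ sqrt(spectrum)
    phases = rng.uniform(0, 2 * np.pi, len(f))
    spec = np.concatenate(([0], amp * np.exp(1j * phases)))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()                  # standardise

rng = np.random.default_rng(7)
x = fgn_fourier(2**15, H=0.85, rng=rng)              # H = 0.85, as for the Nile

# Rough check: aggregated-variance (climacogram) slope + 1 estimates H.
scales = np.array([1, 2, 4, 8, 16, 32, 64])
sd = [x[:(len(x)//k)*k].reshape(-1, k).mean(axis=1).std() for k in scales]
H_est = np.polyfit(np.log(scales), np.log(sd), 1)[0] + 1
print(round(H_est, 2))   # estimate of H (nominal value 0.85)
```

    [The whole generator is a few lines of Fourier filtering, which is the point Armin makes: producing LTP surrogates is no harder than producing AR1 surrogates.]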

    Finally, Rasmus will recognize that ENSO is not an example of LTP, in the same way as other quasi-oscillatory phenomena cannot be described as LTP.
    When reading this again, I cannot exclude the possibility that for Rasmus, LTP is STP plus trend, but this would be a serious mistake.

    Best wishes,
    Armin

  • Armin Bunde

    Dear Rasmus,
    thank you. I apologize that I can only answer today. I have written a quite lengthy answer to Bart’s post from May 3, where I comment quite a lot on your ideas. After reading it, I am convinced that reading and trying to understand our papers would really help you much in this issue. Please do this!

    You will see that in some of them, highly recognized climate scientists like HJ Schellnhuber and H v Storch contributed. You will learn new methods, you will see quite practically, by real formulas, how poor the ACF really is, and you will see how easily in LTP records the significance of external trends can be estimated, in the same way as for STP records, but easier. Everything very practical! The only thing you have to do, is to read these articles!

    We scientists are, of course, open to new ideas, and so it must be a great pleasure for you to get acquainted with all this new stuff. Just take the chance!!

    Best wishes,
    Armin

  • Rasmus Benestad

    Dear Armin,

    Thank you for your response. I have already had some discussions on LTP, but if you can recommend one particular paper of yours, I’ll of course read it.

    In the meanwhile, I urge you to use all the information available, and not just rely on one time series and its sole LTP characteristics. That is in my view ‘unphysical’, because we know that the various geophysical data are interlinked in the climate system. I would like to stress that we need to take a comprehensive view when we want to address questions such as whether the global warming we now witness is due to natural fluctuations (LTP) or is indeed forced by GHGs.

    I realise that highly recognized climate scientists like HJ Schellnhuber and H v Storch have contributed to LTP work, and that this topic is very interesting. So far, I think these findings suggest possible properties, rather than the exclusion of causalities. Here the practical question which we originally addressed was whether we can decide if the current trends are ‘unnatural’.

    Yes, you can use some ‘noise models’ (e.g. FARIMA, fGn, or whatever), fit them to the data, and say that, given such properties, the trends could easily have happened by chance. My doubts about this strategy concern the design of the analytical strategy.

    My point regarding the simple ACF is that the forcing will also induce LTP properties. You can of course try to remove the trend, but then you will have to make assumptions. You will need to provide convincing demonstrations that show an objective method that is not biased towards one answer.

    Maybe it’s easy to test the strategy against pseudo-data: results from climate models subject to different forcings. I’d be curious to know what results you’d get then.

    I also appreciate that LTP is present in many different physical processes, but we know that there are many types of physical processes with different properties. We know that the Earth’s temperature is not self-similar or power-law, and both the diurnal cycle and annual cycle have well-defined temporal scales. However, we expect a convoluted response to these.

    If you can provide some indication as to what mechanisms would be responsible for LTP in the global temperature, that could shed some light on our differences. Again, I’d suggest subjecting e.g. the mean sea-level pressure to similar tests for LTP, since we expect there to be no trend there, as the atmospheric mass is (more or less) constant. We could also subject different ocean data to the same type of test.

  • Dear Armin,

    You wrote “Natural Forcing plays an important role for the LTP and is omnipresent in climate.” and later on you wrote “By definition, an external trend does not contribute to the LTP of a record. The LTP is natural, the trend is external and deterministic.”

    I have difficulties reconciling these two statements. Does radiative forcing (e.g. from a change in solar irradiance) contribute to LTP? From the former quote above I gather your answer would be yes; from the latter I gather you would answer no.

    If yes, why would a natural forcing (like a change in solar irradiance, or Milankovitch forcing) contribute to LTP and anthropogenic forcing (like a change in GHG or aerosols, which could also change -though typically over longer time scales- due to natural processes) not?

  • Demetris Koutsoyiannis

    Lennart, you said:

    Rasmus said at the end of his guest blog:
    There may be some irony here: The warming ‘hiatus’ during last decade is due to LTP-noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity – because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with stronger transient climate response (TCR, a rough indicator for climate sensitivity).

    I’m wondering what Armin and Demetris think of this statement. Do you agree, and if not, why not?

    I thought it was clear from what I wrote earlier that I fully disagree with such statements. There is no “LTP-noise”. LTP is a property of the real-world climate, which emerges from its dynamics. Well, these dynamics may be difficult to infer and express deterministically in their details, but they have some macroscopic characteristics. The macroscopic characteristics are consistent with the Hurst-Kolmogorov (stochastic) dynamics.

    All remaining parts of the statement are hypotheses stemming from a belief that climate models tell us the truth. For example, how do we know about the “GHG-driven trend”? Because the climate models tell us so. But there is a problem here, as they did not predict the “warming ‘hiatus’ during last decade”; so, let’s invent some “noise” to rectify this.

    But as I clearly described above and clarified later, the “noise” properties of the climate models are inconsistent with LTP.

    Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.

    If they need to add some “noise” to their deterministic models to match reality, they should first have taken care that their “noise” be compatible with their own models. To this aim, they could run their models in “unforced” conditions, infer the “noise” properties from there, and then use this “noise”. Can a Markovian noise with a characteristic time scale of 1.25 years explain a climatic ‘hiatus’? I do not think so.
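
    [Demetris’s rhetorical question can be checked directly with a toy simulation (an editorial sketch, assuming an annual time step and unit marginal variance): for an AR(1) process with a 1.25-year characteristic time, values a decade apart are correlated at exp(-8), about 0.0003, and even the decadal means of a long simulated record are nearly uncorrelated with each other, so such noise cannot sustain a multi-decadal excursion.]

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.exp(-1 / 1.25)        # AR(1) with a 1.25-year characteristic time
n, k = 10**6, 10               # long synthetic record, decadal (10-step) averages

eps = rng.normal(size=n) * np.sqrt(1 - phi**2)   # innovations: unit marginal variance
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

dec = x.reshape(-1, k).mean(axis=1)              # decadal means
d = dec - dec.mean()
r1_dec = (d[:-1] * d[1:]).mean() / d.var()       # correlation of successive decades

print(round(phi**10, 4))   # year-to-year correlation a decade apart: ~0.0003
print(round(r1_dec, 2))    # successive decadal means: nearly uncorrelated
```

    [Since consecutive decades of such Markovian noise are essentially independent, a decade-long excursion carries no memory into the next decade, in contrast with an LTP process, whose decadal means remain correlated across decades.]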

    To me climate is by definition a stochastic concept (see my main post), its physics is describable only in stochastic terms, and the foundation and tools for decent climate modelling are offered by stochastics. A first step in such modelling is to explore, based on instrumental data, proxies, and other quantified information, the stochastic behaviour of the real climate. I hope my publications, some of which I referred to in the above comments and my initial post, have contributed to this step.

    D.

  • Demetris Koutsoyiannis

    I have been looking at several climate blogs to see if this Climate Dialogue discussion has attracted the interest of bloggers and climate discussers. I would say the coverage is thin: only Bishop Hill devoted an entry to this discussion, while, perhaps coincidentally, William Briggs discussed the Mexican Hat Fallacy, also discussed in my main post above. But I saw a few comments in other independent posts, among which I think one is interesting. Before I provide the link to this comment, I will tell two stories which I recalled after seeing some of the comments in blogs (including in this forum).

    Once I was explaining to a colleague that in the Navier-Stokes equations (which, by the way, are used to model water flows, the weather, the ocean currents, etc.) the turbulent stresses are stochastic quantities (covariances of velocities). The colleague told me something like: Look, you may be right but I do not care. As a student in the university I learned some of this stuff, differential equations stuff and stochastic stuff. Now I have repelled them from my mind as I do not need anything more than the high school physics background. Actually, whenever I am not able to downgrade my explanations to elementary school level, people do not believe me.

    Another colleague, in a discussion about climate which started as scientific and ended up as political, told me: I do not care whether climate change is real or not. Even if it was not, we should have to invent it, in order to save the planet from the various threats. (By the way, my own view is that, of course, climate change is real—climate had been changing all the time—and that we should beware saviors).

    Now the comment I wish to refer to is by Dr. Robert G. Brown, posted at Watts Up With That?. He offers some interesting (advanced level) physical insights and then (after the phrase in which, very rightly, he puts two words in bold) also discusses political aspects of the climate agenda.

  • Demetris Koutsoyiannis

    The Orthodox Easter break gave me the chance to try to see this discussion from a more macroscopic point of view; some statistics about word usage (shown in the graph below) helped me.

    I looked again at the title of the forum entry and verified that it is “Long-term persistence and trend significance”. Both constituents of the title are statistical terms, yet from the beginning of the dialogue I felt that some of the discussers view statistics as disjoint from physics and identify climate with physics.

    No doubt climate fully obeys physical laws, but, as I wrote in my introductory post, it happens that climate is based on statistics even in its very definition, so by depreciating statistics we also depreciate the scientific basis of climatology. More generally, in complex systems we need statistics to express or derive physical laws. I am happy that, finally, my persistence on this thesis resulted in recognition, by most of the discussers, of statistics as an essential part of physics. I leave aside the neologisms of Statistics1 and Statistics2 and the implied new dichotomy—I take it as a joke. Of course in every scientific problem there is good and bad use of statistics, mathematics, and logic. But eventually the correct use will prevail.

    Another interesting point I noted in the discussion is a tactic like this: if something is not consistent with our ideas, let us call it unphysical, that is, violating physical laws. Of course, if a theory or analysis violates physical laws, then it should be rejected. But we have to prove which law it violates and how. A stochastic model whose fitting is based on data cannot be pronounced unphysical just because it did not explicitly consider a specific physical law, e.g. conservation of energy. Inasmuch as conservation of energy is reflected in the data, it is indirectly respected by the model as well. Unless we also convict the data (e.g. those used in my calculations or elsewhere) and call them unphysical because they are not consistent with what we trust as being physical (in this case, what the climate models are telling us).

    Reading this discussion one would perhaps develop an impression that conservation of energy is the only important physical law and that it can explain everything. Of course—I repeated it several times—it is an important law, it is never violated, but on the other hand it cannot explain everything. It does not explain, for example, why my room has a roughly uniform temperature. Infinitely many non-uniform options would not violate the conservation of energy—they would have the same total energy content (for example, if half of the room were below freezing point and the other half much warmer). The most powerful laws in physics are the variational principles rather than the conservation laws: Fermat’s principle (for determining the path of light), the principle of extremal action (for determining the trajectories of simple physical systems), the principle of maximum entropy (for more complex systems). It is the latter principle which explains the uniformity of temperature in my room, not merely the conservation of energy. Conservation of energy offers just one equation for the interacting parts. In contrast, the variational principle offers as many equations as there are unknowns. That is why I believe we should employ variational principles in climate. In particular, the one I believe is most relevant in a changing climate is the principle of extremal entropy production. This gives rise to the LTP, as I explained in the paper I have already mentioned several times.
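
    [The room-temperature example can be made concrete with a toy calculation (an editorial sketch using the monatomic ideal-gas entropy up to constants; the compartment sizes and total energy are arbitrary): conservation of energy permits any split of the total energy between two compartments, but maximising the entropy singles out the split with equal temperatures.]

```python
import numpy as np

# Two gas compartments with N1 and N2 molecules share a fixed total energy E.
# Conservation of energy alone allows any split E1 + E2 = E; maximising the
# total entropy S ~ (3N/2) ln(E_i/N_i) selects equal temperatures T_i = E_i/(3N_i/2).
N1, N2, E = 2.0, 3.0, 10.0
E1 = np.linspace(0.01, E - 0.01, 100000)             # candidate energy splits
S = 1.5 * N1 * np.log(E1 / N1) + 1.5 * N2 * np.log((E - E1) / N2)

E1_star = E1[np.argmax(S)]                           # entropy-maximising split
T1, T2 = E1_star / (1.5 * N1), (E - E1_star) / (1.5 * N2)
print(round(T1, 2), round(T2, 2))                    # equal temperatures
```

    [Conservation of energy is satisfied along the whole E1 axis; it is the variational (maximum entropy) condition that picks out the single observed state, which is the distinction drawn above.]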

    This brings me to another point of this macroscopic overview. Of course, the material contained in this forum is not a formal peer-reviewed publication. But it is better if the arguments put forward can be supported by peer-reviewed publications, and if the discussers read these publications and refer to them. Each of the discussers has his own publications. I have tried to refer to some of my own several times, but perhaps I was not convincing. I am afraid that Armin had the same feeling when he wrote to Rasmus:

    I am convinced that reading and trying to understand our papers really would help you much in this issue. Please do this!

    Looking at the word cloud above, I see that one of the most popular issues remains one I did not cover in this brief overview: that of forcing. But I think there is nothing to add to what I wrote earlier in other comments, except that I fully endorse Spencer’s comment, from which I wish to quote this part:

    There is no “forced LTP” and “unforced LTP”. There is the LTP that is present in the climate system. And that LTP has defined characteristics, parameters which define what we might term “natural” climate variability.

  • Armin Bunde

    Dear Rasmus (and Bart),

    sorry for answering late, I just did not have time before. Rasmus, from your comments I really can see that you must become more familiar with LTP, with the distinction between “external trends” and “natural fluctuations”, and with the classical techniques based on exceedance probabilities and confidence intervals to evaluate whether some temperature increase is significant or not. Significant means that it cannot be explained by the natural fluctuations in the system. You know, these things are right at the heart of this Climate Dialogue, and in order to have a meaningful discussion we must make sure that we share the same basic knowledge.

    One paper to read is certainly not enough here; this is like “a drop on a hot stone”. Maybe you could start with our introductory review from 2012 in Acta Geophysica and with the SI in our recent Nature Climate Change paper, coauthored by Hans von Storch. The references are in my blog. Then, for a deeper insight into the methods which are essential in this field (it is the same here as in physics: the appropriate techniques and methods are essential!!), you should read the 2001 Physica A article on Detrended Fluctuation Analysis. If you want to go further, you can even read our 2002 article in Physica A, again by Kantelhardt et al., on multifractality. This article will soon exceed 1000 citations and maybe you will like it, but it contains a large mathematics part. But you know, mathematics is the heart of science, so you should not worry.

    After that, I suggest you read four of our articles with HJ Schellnhuber, i.e. our joint PRLs (PRL is the most prestigious physics journal, as you certainly know) from 1998, 2002, and 2004 (Comment), as well as the paper by Eichner et al. in PRE. In addition (and this article is very important for you; I mentioned it already several times), you should look at our 2004 GRL, where we discuss, for the second time after our 2002 PRL, to which extent the different forcings contribute to the (quasi-universal) persistence law for continental temperatures. We show that with GHG forcing alone, as in our 2002 PRL, the persistence law cannot be reproduced by the AOGCM; the natural forcings are essential to get it right, and in particular volcanic forcings seemed to play an important role. You see, these are quite PRACTICAL PAPERS, but they involve some kind of mathematics.

    Having read these papers, you can move on to our 2005 PRL, where we showed that LTP implies a clustering of extreme events, and that the clustering of extremes that we PREDICTED on the basis of LTP can indeed be seen in climate records. Again, this paper involves mathematics, but is again VERY PRACTICAL, isn’t it? After that, you will be in the right mood to look at our 2009 PRE on the ACF. This paper is more demanding in mathematics, but you do not need to go through it in detail; just look at the formulas for the ACF and how it depends on the system size. After reading this article, I am sure, you will not continue to analyse data by the ACF, because the finite-size effects are drastic and hide the proper behavior. So you see, this is also a VERY PRACTICAL paper, which will help you tremendously.

    Finally, I suggest you look at our 2009 GRL or, better, our 2011 PRE, which is at the heart of this Climate Dialogue. The paper is also mathematically demanding, but not too much; and since we are scientists, this does not create problems for us, since mathematics is the language of science. I hope you agree. (Otherwise, we would be philosophers.) Reading all these papers will take you some time, but this will be a very good investment!!! Just do it!!! If you have specific questions, not philosophical ones, I will be more than happy to help you. Also, if you can’t get all the papers from the Internet, just tell me and I will email them to you. Getting familiar with LTP and the significance of anthropogenic trends is enormously important and at the heart of this CD, and the PREREQUISITE for a meaningful discussion. So take the chance!!

    Now let me come to the confusion that arises when you and Bart talk about LTP. First of all, LTP is specific and can be described by mathematics in a well defined manner, unlike your El Nino example, which for sure is not LTP. If I want to find LTP there, I have to analyse the time intervals between ENSO events (you know they are well defined) and then check if these intervals are LTP. This cannot be done satisfactorily because we have far fewer than 100 intervals. I said in my first response to you already that, since we are scientists and not philosophers, we have to be specific; otherwise there is no progress in this complex field.

    Now, Bart and Rasmus, what do we understand as natural fluctuations of the climate? For sure not climate without natural forcings. Natural forcings belong to the climate and make the persistence law right (see our 2004 GRL). I wrote already before that climate is highly complex, linear and nonlinear, and everything is interwoven. The natural forcings are not responsible for external deterministic trends, just for the persistence, and one does not need to ask what the single contributions of the various forcings to temperature changes are, since everything is interwoven and probably impossible to separate. It is very naive to think that mathematical techniques could or should be able to distinguish between the various forcings. So, the natural forcings together with the unforced climate system (which is certainly not white noise) make the ups and downs of temperatures, which you can see easily when you look at the data with a sliding average, and we do not actually need to specify where they come from. These ups and downs, which look like mountains and valleys on all time scales, are synonymous with LTP. I find it intriguing that we can classify these fluctuations, in a very satisfying manner, by a single number, the Hurst exponent or correlation exponent. Some climatologists, due to a lack of mathematical understanding, think that these fluctuations can be described by an AR1 process, but this is simply nonsense and in disagreement with all facts.
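The AR1-versus-LTP contrast can be made concrete with a minimal numerical sketch (an illustration, not taken from any of the papers cited in this thread): the exact autocorrelation function of fractional Gaussian noise with Hurst exponent H decays as a power law, while an AR(1) process matched to the same lag-1 correlation decays exponentially.

```python
import numpy as np

def acf_fgn(k, H):
    """Exact autocorrelation of fractional Gaussian noise at lag k."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))

H = 0.65
lags = np.array([1, 10, 100, 1000])
rho_ltp = acf_fgn(lags, H)    # power-law decay, asymptotically ~ k**(2H - 2)
phi = rho_ltp[0]              # AR(1) with the same lag-1 correlation
rho_ar1 = phi**lags           # exponential decay

# At lag 1000 the AR(1) correlation is numerically zero,
# while the LTP correlation is still of order 1e-3.
```

For H = 0.5 the formula gives zero correlation at all nonzero lags, i.e. white noise; the slow power-law tail for H > 0.5 is what a sliding average renders visible as "mountains and valleys" on all time scales.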

    Now, in addition to the natural forcings, there are anthropogenic forcings, mainly by GHG, but urban effects must also be considered. The question now is what the effect of these forcings on temperature is. This is a highly relevant question and regards the climate sensitivity. I am not an expert in this, but from my colleagues who are experts I learnt that this is a difficult and not fully settled question, in particular when the temperature evolution of the past 15 years is considered.

    A PRAGMATIC way to approach this problem is what we are doing. We assume there are natural fluctuations (among others driven by the natural forcings) and a probably anthropogenic, monotonically increasing part which is kind of deterministic and which we call the external trend. You see, instead of vague speculations and philosophical discussions that lead nowhere, we make this simple assumption. This pragmatic procedure is not unusual in climate science; it is actually being used in most articles that are concerned with temperature increases and significance estimations, and you should be aware of it.

    Now, the big mistake made by our colleagues is that they used IMPERFECT methods, like you do, namely the ACF or the power spectrum, to conclude that the natural fluctuations, defined in the way above, can be described by an AR1 process. Then they used known mathematical techniques to estimate the significance of a trend. This crucial mistake appeared also in the IPCC report, since the authors were, unlike you after reading our papers, not aware of the LTP of the climate. They assumed STP and thus got the trend estimations wrong by overestimating the significance. Our 2009 GRL and 2011 PRE show how to do these estimations correctly for LTP records.
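The size of the effect can be sketched with the standard Hurst-Kolmogorov scaling (a back-of-the-envelope illustration, not a calculation from the papers themselves): for an LTP process the variance of the n-sample mean is sigma^2 * n**(2H - 2) instead of the classical sigma^2 / n, so the true standard error is inflated by a factor n**(H - 1/2) relative to what a white-noise analysis assumes.

```python
import numpy as np

def se_inflation(n, H):
    """Factor by which the standard error of the n-sample mean under
    Hurst-Kolmogorov scaling exceeds the classical iid value sigma/sqrt(n)."""
    # Var(mean) = sigma**2 * n**(2H - 2) for LTP, vs sigma**2 / n for iid.
    return np.sqrt(float(n)**(2*H - 1))

for H in (0.5, 0.65, 0.9):
    print(f"H = {H}: standard error inflated {se_inflation(100, H):.1f}x for n = 100")
```

For H = 0.9 and n = 100 the inflation is roughly a factor of six, which is why a significance test that assumes independence (or short-term persistence) can badly overstate the significance of an observed trend.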

    Now I am getting tired. I hope you will enjoy our articles,
    Cheers, Armin

  • Armin Bunde

    Thank you, Paul, I like your idea about the role of the volcanoes.

  • Armin Bunde

    Arthur, thank you for your comment. I think I answered it in my quite lengthy reply to Rasmus; please have a look at it. DFA and all the other methods can only eliminate monotonous trends. We now prefer DFA2 (together with WT2) because it eliminates linear trends in the original data. If you are more interested, have a look at our 2001 Physica A paper or the SI in our 2013 Nature Climate Change.
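For readers who have not seen the method, the core of DFA can be sketched in a few lines (a generic illustration of the standard algorithm, not code from the Kantelhardt et al. papers): integrate the mean-subtracted series into a "profile", fit a polynomial of the given order in each window (order 2 = DFA2), and read the scaling exponent from the log-log slope of the fluctuation function. The snippet also shows the point made above: under DFA2, adding a linear trend to the data leaves the fluctuation function unchanged.

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis: fluctuation function F(s) from
    polynomial detrending of the profile in non-overlapping windows of size s."""
    y = np.cumsum(x - np.mean(x))            # the 'profile'
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s, dtype=float)
        msq = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t))**2)
               for seg in segs]
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(42)
x = rng.standard_normal(8192)                # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]

# DFA2 removes a linear trend in the original data exactly: the trend adds a
# quadratic to the profile, which the order-2 fit absorbs in every window.
F_trend = dfa(x + 0.001 * np.arange(len(x)), scales)
```

An LTP series analysed the same way would yield a slope alpha noticeably above 0.5, approaching 1 for strongly persistent records.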

  • Demetris Koutsoyiannis

    Lennart,

    Thanks for your comment, for pointing out these publications and for your questions. I may not be the right person to judge these publications; for example my knowledge about deep-ocean heat uptake is zero. In addition, my university does not subscribe to Nature.Com journals and I do not have access to these publications. So, my reply will be general.

    In brief, based on the information you give about these studies, my answer to your question “Is this the kind of prediction you’re asking for …?” is negative. I believe that retrospect studies are useful to explore possible explanations of observed phenomena. But it may be dangerous to believe that a skill in retrospect explanation (I would not call it “prediction”) should imply a prediction skill. I will give a very relevant example from a report by Philip J. Klotzbach and William M. Gray, entitled “Extended Range Forecast of Atlantic Seasonal Hurricane Activity and Landfall Strike Probability for 2012” (7 December 2011). The abstract reads (emphasis added):

    We are discontinuing our early December quantitative hurricane forecast for the next year and giving a more qualitative discussion of the factors which will determine next year’s Atlantic basin hurricane activity. Our early December Atlantic basin seasonal hurricane forecasts of the last 20 years have not shown real-time forecast skill even though the hindcast studies on which they were based had considerable skill. Reasons for this unexpected lack of skill are discussed.

    Also, a retrospect explanation that seems to have some skill in numerical terms does not imply that the explanation given is correct. Infinitely many models could be fitted, with good performance, to a limited data set in retrospect. Even in blogs and in news we often (perhaps on a monthly or weekly basis) see diverse model fits that explain the global temperature evolution based on various explanatory variables (of solar, atmospheric or ocean origin), or explain various other phenomena based on global warming. In some cases we also see future predictions based on these models. I will not criticize any specific one of them; rather I will give a funny counterexample: if you google “proof of global warming”, you will find images implying global temperature being an “explanatory variable” of a hilarious “dependent variable”. I believe there may indeed be significant correlation between the two variables this counterexample refers to, but of course this is just a joke and is put as such, I guess. On the other hand, there is no shortage of studies of similar type but pretending to be serious and claiming causative relationships between global warming and numerous aspects of nature and life.

    I think that to take an analysis of this type seriously, a minimum prerequisite is to contain validation of the hypothesis made. I have explained this in my initial comment to Rasmus, with respect to his model presented in his Figure 1. I am copying here what I wrote to Rasmus, also adding hyperlinks to the references I used for your convenience.

    The circular reasoning here is that I formulate a hypothesis after I have seen the data. Deterministic modelling can also be affected by circular reasoning. The hydrological community has given importance to avoiding circular logic. Thus, it has been a standard practice in modelling to follow the split-sample technique (Klemes, 1986). We split the available observations into two (sometimes three) segments. We use one segment for building and calibrating a model and the other one for validating it. Has such a model validation technique been used in climate models? From my experience (Koutsoyiannis et al., 2008, 2011; Anagnostopoulos et al., 2010) I can only imagine a negative answer.

    In the beginning of your post you present a graph that compares the HadCRUT4 data series with results of a regression model “based on greenhouse gases, ozone and changes in the sun” as you say. You do not give a citation to see the details. So, the natural question is: Did you use a split-sample technique for your model, or any similar validation technique, in which a data segment is kept out when building your model and calculating the regression parameters? If yes, then the danger of circular reasoning is minimal—but not zero, because in reality you have seen all the data. So, what do you say?
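The technique itself is simple to state. Here is a toy sketch in the spirit of Klemes (1986), with invented data (the model and numbers are purely illustrative): parameters are estimated on the calibration half only, and skill is judged on the half the model never saw.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
y = 0.01 * t + rng.standard_normal(200)    # synthetic 'observations'

cal, val = slice(0, 100), slice(100, 200)  # split-sample: build vs. validate
coef = np.polyfit(t[cal], y[cal], 1)       # calibrate on the first half only
pred = np.polyval(coef, t[val])            # predict the unseen second half

rmse_cal = np.sqrt(np.mean((y[cal] - np.polyval(coef, t[cal]))**2))
rmse_val = np.sqrt(np.mean((y[val] - pred)**2))
# If rmse_val is much worse than rmse_cal, the apparent skill owed more
# to calibration than to genuine predictive ability.
```

The point of the exercise is exactly the one made above: a good fit on data the model has already seen says little; only the held-out segment tests prediction.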

    So, since you have seen the studies you point out, may I ask you two questions? Have these studies followed a split-sample technique, with a separate validation period? Do they also provide a future prediction to enable validation/falsification in a few years from now? If the replies to both questions are positive, then the papers are worthy of respect. Whether or not they tell the truth is another issue; we’ll know it later.

    D.

  • Rob van Dorland

    Dear Demetris,

    Apart from your really off-topic comment of May 8, 2013 at 7:29 pm, which gives, however, some nice insight into your motivation, I would like to respond to your on-topic comment of May 8, 2013 at 7:57 pm (which I consider a reply to my comment of May 5, 2013 at 12:59 pm).

    You say:

    No doubt that climate fully obeys physical laws, but, as I wrote in my introductory post, it happens that climate is based on statistics even in its very definition, so by depreciating statistics we also depreciate the scientific basis of climatology. More generally, in complex systems, to express/derive physical laws we need statistics. I am happy that, finally, my persistence on this thesis resulted in recognition, by most of the discussers, of statistics as essential part of physics. I let aside the neologisms of Statistics_1 and Statistics_2 and the implied new dichotomy—I take it as a joke. Of course in every scientific problem there is good and bad use of statistics, mathematics, logic. But eventually the correct use will prevail.

    It really surprised me that you didn’t get my point. You have a statistical method to analyze time series and applied this to the earth’s climate. I don’t think anyone is disputing that this is just one way of looking at the behavior of the climate system. There are, however, more ways to investigate climate, e.g. considering physical laws (and yes, this also includes fundamental physical properties which can only be expressed in terms of distributions or probabilities – but again, this is fundamentally different from the statistical analysis tools you are talking about).

    So, please, consider your method as one of the pieces of the (climate) puzzle. The more pieces you gather, the clearer view you get of the complete picture of the puzzle. In other words, if you have more independent information you can exclude (some) options you had with just one piece/method. I think in the public comments Arthur Smith gave an excellent example to illustrate how more information can lead to more constraints (http://www.climatedialogue.org/long-term-persistence-and-trend-significance/#comment-454). He stated:

    The ancient world’s epicycles allowed precise modeling of planetary motions, but those mathematical tools did not provide the physical explanatory (and predictive) power that came with Newton’s inverse square law of gravitation.

    In my view (and relevant for this discussion) this means that if you describe the movements of planets and sun in terms of mathematical formulae, you can do that with any assumption on the center of rotation. If you add physics (here: Newton’s laws of gravitation), there is only one possibility left: all planets rotate around the centre of gravity (which happens to be inside the sphere of the sun).

    Let’s return to the climate system. You say that in order to reveal the behavior of the climate system it is sufficient to consider the climacogram, because all the physics is contained in the investigated time series. In my view this is analogous to saying that all physics is contained in the movements of the planets and the sun. In other words: the results of the climacogram can be considered the epicycles of climate. We should be looking for physical laws to limit the possibilities.

    In order to clarify possible differences in view, I would very much like you to briefly answer the following questions:

    1. Do you claim that your statistical tool (climacogram) has a similar status as statistics derived from fundamental physics? Or in other words that the climacogram is a fundamental property of nature?

    2. Do you recognize that the climate system can be externally forced? For instance if the sun becomes brighter then it will affect the global energy balance?

    3. Do you agree that if there is change in the energy balance then physical processes in the climate system will be influenced? And more specifically, these might affect global mean temperatures?

    4. Do you agree that time series of global mean temperature have deterministic as well as chaotic elements?

    5. Do you agree that by including physical constraints you get a better picture of the behavior of the climate system than by only considering the climacogram?

    6. As you confirmed that information is lost in the climacogram analysis tool (as I showed, this concerns information on the signature of the investigated time series, e.g. no distinction between fluctuations and increasing signals), do you agree that adding physical insights may account for the lost information?

    Rob

  • Demetris Koutsoyiannis

    Rob,

    Arthur’s example of epicycles is an excellent one, so thanks for drawing attention to it again. First, if I remember well, the epicycle model is not only one of the ancient world; even Copernicus, who revived Aristarchus’s (3rd century BC) heliocentric model, used epicycles in his model.

    It is useful to think why this model was introduced and prevailed for so many years. It is usually asserted that the reason was that the ancient Greeks regarded the circle as a perfect shape, so that Nature could not follow anything else than this. This may be part of the truth but not the whole truth. Now, from the information we have from the Antikythera Mechanism, the ancient Greek analog computer used to model the planetary motions and eclipses, we can infer that this very model may have affected the physical insight. For it is easier to materialize epicycles using gears (the constituent elements of the Mechanism) than to materialize ellipses.

    Whatever the reason for the prevalence of epicycles was, a metaphysical view about Nature or an effect of the then available computer model, the example teaches us not to develop fixations about Nature, nor to adhere to available models.

    Now my replies to your questions:

    1. Do you claim that your statistical tool (climacogram) has a similar status as statistics derived from fundamental physics? Or in other words that the climacogram is a fundamental property of nature?

    No, the climacogram is not a fundamental property of nature. Variability and change are. The climacogram is a stochastic means to describe them.

    Furthermore, what you call “statistics derived from fundamental physics” may be the other way round, i.e. physics derived from fundamental statistics. But it is even better to say it in a more symmetric manner: The combined use of fundamental physics and fundamental probability enables description of complex physical systems. For example, if you take the principle of maximum entropy in its pure probabilistic formulation and the laws of conservation of momentum and energy, then you get a convenient and incredibly simple description of the pressure and temperature in your room.
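For readers who want to see the shape of such a derivation, here is the textbook sketch (standard statistical mechanics, stated only to illustrate the point being made): maximizing entropy subject to normalization and a mean-energy constraint singles out one distribution among the infinitely many that conserve energy.

```latex
\max_f \; S[f] = -\int f(\mathbf{v})\,\ln f(\mathbf{v})\,d\mathbf{v}
\quad \text{s.t.} \quad
\int f\,d\mathbf{v} = 1,
\qquad
\int \tfrac{1}{2} m \|\mathbf{v}\|^2 f\,d\mathbf{v} = \tfrac{3}{2} k T
\;\;\Rightarrow\;\;
f(\mathbf{v}) \propto e^{-m\|\mathbf{v}\|^2 / 2kT}
```

The maximizer is the Maxwell-Boltzmann velocity distribution, from which the ideal-gas relation p = nkT follows: the conservation laws supply only the constraints, while the variational principle selects the unique distribution that satisfies them.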

    2. Do you recognize that the climate system can be externally forced? For instance if the sun becomes brighter then it will affect the global energy balance?

    Of course it can. But the global energy balance depends also on the internal dynamics.

    3. Do you agree that if there is change in the energy balance then physical processes in the climate system will be influenced? And more specifically, these might affect global mean temperatures?

    Of course it will. But the internal dynamics are able to cause change as well.

    4. Do you agree that time series of global mean temperature have deterministic as well as chaotic elements?

    No, I do not agree. Deterministic and chaotic elements are not a dichotomy; it is better to say deterministic chaotic. The deterministic chaotic elements do not exclude a probabilistic description thereof. Rather, they necessitate it. Once again, see my Random Walk on Water.

    5. Do you agree that by including physical constraints you get a better picture of the behavior of the climate system than by only considering the climacogram?

    If the solution violates a physical constraint, yes, you should include it in a second step. But if your solution respects the constraint, then by explicitly adding it you won’t gain anything. You will find the same solution. For example, if you use the principle of least action to derive the equations of motion of a body, it is not necessary to include the conservation of mechanical energy; rather the latter will be derived as a result of the least action.

    6. As you confirmed that information is lost in the climacogram analysis tool (as I showed, this concerns information on the signature of the investigated time series, e.g. no distinction between fluctuations and increasing signals), do you agree that adding physical insights may account for the lost information?

    Physical insights are always welcome—but it depends on what you call physical insights. As a counterexample, in hydrology there used to be a view that by cutting a catchment into numerous pieces and applying first principles on each piece you would be able to make a model that does not need data and calibration. This reductionist thinking, which was named “physically-based modelling”, is receding now, as it was gradually understood that first principles alone cannot provide a decent model and that the smaller the pieces, the bigger the requirements for data and calibration.

    D.

  • Armin Bunde

    Just a short comment on the interesting review by Arthur Smith.

    Regarding the length of a record that you need to distinguish between white noise and LTP with H=0.65: for this simple distinction, you certainly need much less than 500 data points, which is about 40 years of monthly data. It is much more difficult to distinguish between LTP and an STP process. We have found recently that 600 data points (50 years of monthly data) are sufficient when using DFA2. The larger the Hurst exponent, the easier the distinction.

    Regarding ENSO and La Nina and other cycles: these are not LTP processes themselves, but they also contribute to LTP. When analysing temperature data you only eliminate the seasonal cycle, not the others.

    Regarding the warmest years: The estimation of the probability has been given by Zorita et al. quantitatively, as I wrote earlier.

    Regarding deterministic trends: in the global temperature, for example, the trend is highly significant on both 50-year and 100-year scales. The Hurst exponent here is close to 1. You can find these values in our 2011 PRE and in our 2012 review.

  • Armin Bunde

    Lennart,
    thank you for pointing out these references to me. I am not an expert in this topic, but I find the arguments in the Nature paper convincing. Regarding GHG I may not fully agree with Demetris: we cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.

  • Demetris Koutsoyiannis

    Today I stumbled upon a paper published this week in Digital Signal Processing, which I found on-topic: Navarro et al. (2013). Some may find this paper too technical, statistical, or even off-topic. Till now, I generally avoided being too technical in my comments, but, since Armin has raised several technical issues, this gives me the opportunity to speak about a few of them.

    The above paper supports what Armin had said about the appropriateness of the DFA method for identifying LTP properties. More specifically, the paper studied lognormally distributed data and concluded that three methods had best performance, namely DFA (detrended fluctuation analysis), DWT (discrete wavelet transform) and LSSD (least squares based on standard deviation). Quoting from the abstract:

    The LSSD technique was the most precise in general, but DFA was more robust with highly spiked patterns.

    Coming to what Armin has said about appropriate methods for identifying LTP and estimating parameters, I fully agree with him that the empirical autocorrelation function distorts the LTP properties and should be totally avoided. The reason is that empirical autocorrelation is highly biased as shown in my 2003 paper and graphically illustrated in slide 15 of a 2010 presentation.

    I also agree with Armin that the periodogram/empirical power spectrum is not an appropriate method. The reason is that it is too rough (spiked), whereas common smoothing techniques distort the information. For those who are familiar with spectrum and prefer to view phenomena in the frequency domain, I have recently (2013) proposed a pseudospectrum based on the climacogram, which has similar (or same) asymptotic slopes with the spectrum while avoiding its caveats.

    My colleague Hristos Tyralis and I have also tested roughly all of the related statistical methods and reported our results in a recent (2011) paper. Indeed DFA did not perform badly, as shown in our Table 1 (we use the name “Var. of residuals”). However, it is not one of those we recommend (sorry about that, Armin). As we show in this paper, the Hurst parameter and the standard deviation are correlated with each other and therefore their estimation cannot be done separately. Thus, the method of preference should respect this correlation. DFA does not have this property and treats the estimate of the standard deviation as if it were unbiased, when in fact it is highly biased (I mentioned this problem above in my first comment to Armin, asserting that this has also affected their results in Rybski et al.).

    The three methods we recommend in Tyralis and Koutsoyiannis (2011; Table 2) fully account for the interdependency of the parameters and also have the best performance. Not surprisingly, the maximum likelihood method (as streamlined in the paper in full analytical manner, without using approximations) provides best estimates. Yet it has three caveats: (a) as a fully parametric method, it is dependent on the marginal distribution function of the process, (b) it is computationally demanding, and (c) it does not provide graphical diagnostic means to assess the model suitability. The first problem can be tackled easily by normalizing (by an appropriate nonlinear transformation) the data before application. This will deal with the spiked patterns mentioned by Navarro et al. (2013); for example, for their lognormal data, first one should apply a logarithmic transformation to the data.

    Such normalization is also advisable (albeit not necessary) for the next two methods, both of which are based on the climacogram: the LSSD (already mentioned) and the LSV (least squares based on variance). These two are almost equivalent; they are simple, economical and fast: they do not use any concept beyond the standard deviation or variance, respectively, whose statistical behaviours are well known. They are also transparent. Thus, they provide a diagnostic tool, namely the comparison of empirical and theoretical climacograms, which is very easily constructed.
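The empirical climacogram behind these estimators is itself only a few lines of code. The naive version below is an illustration only (the actual LSSD/LSV estimators in Tyralis and Koutsoyiannis (2011) additionally correct the bias of the empirical standard deviation, which the naive fit ignores): aggregate the series at scale k, take the standard deviation of the scale-k means, and read a Hurst exponent from the log-log slope via H = 1 + slope.

```python
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the time-averaged process at each scale k."""
    sd = []
    for k in scales:
        n = len(x) // k
        means = x[:n * k].reshape(n, k).mean(axis=1)
        sd.append(means.std(ddof=1))
    return np.array(sd)

rng = np.random.default_rng(1)
x = rng.standard_normal(16384)               # white noise: expect H ~ 0.5
scales = np.array([1, 2, 4, 8, 16, 32])
sd = climacogram(x, scales)

# For a Hurst-Kolmogorov process sd(k) ~ k**(H - 1),
# so the log-log slope gives H directly:
H_est = 1 + np.polyfit(np.log(scales), np.log(sd), 1)[0]
```

The same plot of empirical against theoretical climacogram is the diagnostic tool mentioned above: a straight log-log line with slope H - 1 is the signature of LTP, while an AR(1) climacogram bends towards slope -1/2 at large scales.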

  • Demetris Koutsoyiannis

    Lennart, since you quoted Virginie’s reply, for completeness I am posting my reply to her (also copied to you).

    On 10/05/2013 19:08, Demetris Koutsoyiannis wrote:

    Dear Virginie, dear Lennart,

    Thanks very much for copying your exchange to me and for the clarifications. As I understand it, it is difficult to deal with thousands of parameters, but on the other hand, if with a few parameters one is able to make (perhaps artificially) good fittings in retrospect, a fortiori this will be possible with thousands of parameters too.

    Otherwise, I appreciate Virginie’s modest statements “Most of our retrospective predictions are not very good actually” and “This does not mean the predictions we start now for the next decade will be very good”. The latter looks compatible with my quotation from Klotzbach and Gray.

    Best regards,

    Demetris

    PS. Lennart, in my comment I also referred to Rasmus’s model, which looks like a statistical model and does not appear to contain thousands of parameters, whatever the latter means.

  • Rasmus Benestad

    Dear Demetris,

    You make an assertion about climate models with which I disagree:

    But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” so, let’s invent some “noise” to rectify this.

    Often, people compare the observed global mean temperature with the average of the results from many different climate model simulations, which are not corresponding quantities. That would be like comparing this year’s spring temperatures with climatology – of course you’d expect the day-to-day values to fluctuate about the normal values. And likewise, you’d expect the year-to-year variations in the real world to fluctuate about the slow trend due to ‘noise’ – or internal variations driven by the system’s non-linear dynamics.

    The paper by Easterling and Wehner (2009; GRL; DOI:10.1029/2009GL037810) provides a good discussion of this topic. We can also examine the 10-year intervals from 92 CMIP5 simulations (RCP4.5), and we see that there are indeed some models which indicate decades over which the global mean temperature does not increase. This is explained further on Realclimate.org.

    10-year variations in global mean temperature
    The figure above provides an example, where the black line is HadCRUT4. The red lines are the model simulations for which the temperature increases over 2002-2012, whereas the blue ones show those which decrease.

    I must state that it’s fairly meaningless to use such short intervals to say anything about long-term trends – just as Easterling and Wehner state.
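The point is easy to reproduce with a toy ensemble (all numbers invented for illustration, not CMIP5 output): give every member the same forced trend plus independent AR(1) "internal variability", and a noticeable fraction of members still shows a flat or cooling decade even though the forced trend, and the ensemble-mean trend, are positive.

```python
import numpy as np

rng = np.random.default_rng(7)
n_members, n_years, trend, phi = 92, 30, 0.02, 0.6

t = np.arange(n_years, dtype=float)
ens = np.empty((n_members, n_years))
for m in range(n_members):
    noise = np.empty(n_years)
    noise[0] = rng.standard_normal()
    for i in range(1, n_years):
        noise[i] = phi * noise[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
    ens[m] = trend * t + 0.15 * noise       # same forcing, different 'weather'

dec = slice(n_years - 10, n_years)          # the final decade
member_trends = np.array([np.polyfit(t[dec], ens[m, dec], 1)[0]
                          for m in range(n_members)])
mean_trend = np.polyfit(t[dec], ens.mean(axis=0)[dec], 1)[0]
n_flat = int(np.sum(member_trends <= 0))    # members with a flat/cooling decade
```

Comparing the observations with the ensemble mean alone hides exactly this spread: the mean averages the "weather" away, while any single realization, including the real world, keeps it.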

    But as I clearly described above and clarified later, the “noise” properties of the climate models are inconsistent with LTP.

    Perhaps, and this may be because all the LTP is due to external forcing. But you have not demonstrated this, Demetris, and I’m not so sure that you’re right. The models do after all embody a non-linear dynamical system which simulates slow variations due to ocean-atmosphere coupling. We also know that they simulate chaotic weather.

    I do not need to rely on climate models, but they are handy for testing out the different hypotheses. We know that these models do have some merit in e.g. predicting the ENSO phenomenon, and they reproduce most of the other phenomena that are observed in the real world.

    Another way to shed more light on our differences is for you to explain what mechanisms are involved in setting LTP in nature. If you cannot pinpoint the exact physics, I will regard the statement as speculations rather than facts.

    I think the following quote reveals that we are on different wavelength:

    Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.

    Please read the paper by Easterling and Wehner which I cited above. I do not think you have grasped the tacit understanding that the climate evolution is due to both chaotic variations and forced long-term trends.

    I agree there is a stochastic element in our climate, but there is also a substantial degree of determinism, depending on your scientific question and the scales you look at. For instance, I can confidently say that the mean December-February temperature in Oslo will be substantially lower than the June-August mean in 2015. But I cannot yet say what weather we will get on July 14 this year. These two statements represent two different scientific questions, and both are quite trivial. Nevertheless, they can illustrate the fact that our climate is not just stochastic, and that this observation is supported by real measurements.

  • Rasmus Benestad

    Dear Demetris,

    Please allow me to comment on this quote:

    Another interesting point I noted in the discussion is a tactic like this: if something is not consistent with our ideas, let us call it unphysical, that is, violating physical laws. Of course, if a theory or analysis violates physical laws, then it should be rejected. But we have to prove which law it violates and how. A stochastic model whose fitting is based on data cannot be pronounced unphysical just because it did not consider explicitly a specific physical law, e.g. conservation of energy. Inasmuch as conservation of energy is reflected in the data, it is indirectly respected by the model as well. Unless we convict also the data (e.g. those used in my calculations or other) and call them unphysical because they are not consistent with what we trust as being physical (in this case what the climate models are telling us).

    The idea that anything that violates the laws of physics is ‘unphysical’ is fairly straight-forward. Measured data are not unphysical, but the assumptions you make when analysing them may be inconsistent with physics. You can always find a mathematical framework for fitting a set of data, but that does not mean that this mathematical framework represents meaningful physics. One example is Fourier expansion and the Dirichlet condition.

    In our situation, the global mean temperature does not exist in isolation, but is one aspect of a more comprehensive climate system where processes are interconnected. Moreover, the temperature is a measure of heat and plays a role for energy fluxes and evaporation. There is much more relevant information about the global mean available, and I think you reach misguided conclusions just by looking at the LTP-behaviour of the time series and neglecting all the other knowledge. You cannot just look at the statistics, but need to consider both statistics and physics. You also need to consider other independent measurements.

  • Rasmus Benestad

    Dear Armin,

    Thank you for your reply. I’d very much like to read your paper ‘Long-term correlations in earth sciences’, but it’s behind a pay-wall (even though Norway is a rich country, it does not mean that science is awash with money). Could you please make it available for us to read?

    I see that you have a great number of papers, but I also expect that you should be able to explain your points without me and others having to read all your papers – please remember that we have many other things to do, and as long as you have not convinced us that LTP is ‘the magic bullet’, you cannot expect others to spend all their time following your example. Also, I think you underestimate my competence – just because we come from different angles. I do appreciate the mathematics, and I do have an understanding of the meaning of the Hurst exponent.

    I notice that your position is:

    It is very naiv[e] to think that mathematical techniques could or should be able to distinguish between the various forcings

    My take on this is that it depends on your scientific question/hypothesis. Also, we do have climate models and can carry out numerical experiments to explore the different effects. And if there is a deterministic response, you can use regression techniques to look for ‘fingerprints’.

    One of your comments caught my interest:

    show that LTP implies a clustering of extreme events, and could show that the clustering of extremes that we PREDICTED on the basis of LTP indeed can be seen in climate records

    I think it’s well appreciated that extreme events often come in clusters, and you can for instance see the flood marks and the years of flooding in old English towns. An alternative explanation is that the weather tends to follow a strange attractor (low-level chaos), which also leads to such clustering. Hence, the possibility that there is LTP behind the clustering does not exclude other explanations.

    I presently think one major weakness in your reasoning is

    In LTP records, in contrast, xi depends on all previous points.

    This cannot be true if the weather evolution is chaotic, where the weather system loses the memory of the initial state after some bifurcation point. You also need to examine the Lyapunov exponents to compare with alternative theories.
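    As an aside, the loss of memory of the initial state can be made concrete by estimating a leading Lyapunov exponent numerically. A minimal sketch, using the fully chaotic logistic map as a toy stand-in for a weather-like system (an illustrative aside, not a climate model):

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x):
#   lambda = lim (1/n) * sum_i ln|f'(x_i)|,  with f'(x) = r*(1 - 2x).
r, x, n = 4.0, 0.3, 100_000
for _ in range(1000):                 # discard transient iterations
    x = r * x * (1 - x)
lam = 0.0
for _ in range(n):
    # guard against log(0) at the measure-zero point x = 0.5
    lam += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
    x = r * x * (1 - x)
lam /= n
# lam > 0 indicates chaos; for r = 4 the exact value is ln 2 ~ 0.693,
# i.e. the memory of the initial state decays at about one bit per step
```

A positive exponent means nearby initial states diverge exponentially, which is the sense in which a chaotic system forgets its starting point.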

    Another weakness may arise in running Monte-Carlo simulations for LTP processes, as you assume that the random number generators are perfect. They have improved substantially over recent years, but I’m not sure if they are free from generating their own patterns.

    You do not always need sophisticated mathematics (I do like the math) to spot profound differences between the auto-correlation functions C(s). And we can look at other data than the global mean temperature, for which we expect a forced trend to be present. For instance, we do not expect there to be a trend in the sea level pressure (SLP), but we do expect that it should exhibit similar internal variability as the temperature (at least on regional scales).

    Another experiment can be to look at the different components of the global mean temperature. If we use standardised values and look at the hemispheric differences, we expect to see variations caused by geographical shifts and ocean over-turning. We can also examine the difference between the tropics and the higher latitudes, as in the lower panels below. We see that C(s) changes profoundly when we look at these geographical differences, rather than the global mean for which we expect a trend. Also note the strong fluctuations in the early part of the record, which are due to smaller data coverage (a higher degree of statistical fluctuation). Thus, this geophysical record is probably not homogeneous.

    Demonstration based on HadCRUT4

  • Marcel Crok

    The first question we asked was “What exactly is long-term persistence?”

    After two weeks of interesting discussions even this basic question doesn’t seem to be answered satisfactorily.

    In our introduction we wrote:

    Long-term persistence means there is a long memory in the system, although unlike a random walk it remains bounded in the very long run. There are stochastic/unforced fluctuations on all time scales. More technically, the autocorrelation function goes to zero algebraically (very slowly).

    I searched the different blog posts and comments for remarks about what LTP is. Here are some relevant fragments.

    Rasmus:

    Long-term persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large space take longer time than local changes. (…)
    The term ‘signal’ can have different meanings depending on the question, but here it refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.

    Armin:

    In STP records, each data point xi depends on a short subset of previous points xi-1, xi-2,.. xi-m, i.e., the memory has a finite range m. In LTP records, in contrast, xi depends on all previous points. (…)
    For the uncorrelated data, the moving average is close to zero, while for the LTP data, the moving average can have large deviations from the mean, forming some kind of mountain-valley structure that looks as if it contained some external deterministic trend. (…)
    LTP is not an abstract issue, but a process with long memory that (for stationary records) decays in time by a simple power law. (…)

    Demetris:

    No one would believe that the weather this hour does not depend on that of an hour ago. It is natural to assume that there is time dependence in weather. (…)
    Now, if we average the process to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases the variability decreases—but not as much as implied by classical statistics. Naturally the dependence makes clustering of similar events more likely. (…)
    Variability over different time scales, trends, clustering and persistence are all closely linked to each other. (…)
    A characteristic property of the HK process is that its autocorrelation function is independent of time scale. (…)
    I believe it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. It is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP.

    Later in two different comments (here and here) Armin wrote:

    It is important to use the same definition of LTP, but from what Rasmus writes, I understand that he has something in mind which remarkably differs from the well established definition of LTP, which you can find also in the pioneering contributions of Benoit Mandelbrot. So, on this basis it is nearly impossible to have a meaningful discussion.
    I would like to know from Rasmus, for example, what is the evidence for El Nino to be LTP (scientists usually consider this a quasi-oscillatory phenomenon), and does he then also consider other oscillatory (seasons) or quasi-oscillatory (sunspots) phenomena as LTP? If yes, there is a problem, since then he mixes trends and fluctuations, which is the worst one can do in this field.

    Now let me come to the confusion that arises when you and Bart talk about LTP. First of all, LTP is specific and can be described by mathematics in a well defined manner, unlike your El Nino example which for sure is not LTP. If I want to find LTP there, I have to analyse the time intervals between ENSO events (you know they are well defined) and then check if these intervals are LTP. This cannot be done satisfactorily because we have far fewer than 100 intervals. I said in my first response to you already that, since we are scientists and not philosophers, we have to be specific, otherwise there will be no progress in this complex field.

    I agree that it is important to first agree on the definition of LTP. As we can see above a lot of different things have been said about LTP. Armin (and I suppose Demetris agrees) said that LTP is well defined, already by Mandelbrot. Armin and/or Demetris, what is the formal definition of LTP?

  • Demetris Koutsoyiannis

    Dear Spencer,

    Thanks again for the great comments. I fully agree with the first one and, from first glance, with the second too, although I need some more time to assimilate the latter.

    To your former comment I wish to add two references, which I think could be very useful for those who wish to dig into the stochastic aspects of physics, are not too attached to stereotypes, and can devote some time to reading.

    These are two books that provide the mathematical basis for real physics of complex systems. I must note that they are very dense and need to be read several times for a good result:

    Michael C. Mackey, Time’s Arrow: The Origins of Thermodynamic Behavior, Dover, 1992 (a small one: 158 pages).

    Andrzej Lasota and Michael C. Mackey, Chaos, Fractals and Noise: Stochastic Aspects of Dynamics, Springer-Verlag, 1994 (a big one: 459 pages).

    D.

  • Marcel Crok

    Dear Demetris,

    I would like to elaborate a little more on what you show in your figure 5 and the conclusions you draw from it.

    In our introduction we mentioned the IPCC AR4 definition of “detection”: “Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”

    Would you say that the method you followed in Koutsoyiannis/Montanari (2007) is your preferred answer to the phrase “in some defined statistical sense”?
    Is “detection” purely a matter of statistics for you?
    Do your results mean that in your view “detection” has not yet taken place, although it comes close?

    If you indeed conclude that detection has not taken place yet, does this mean for you that the effect of GHGs on the climate (temperature) is relatively weak? I ask this with the following in mind: I suppose an external forcing on the climate could be so big or fast that a significant change in e.g. the global temperature is quickly achieved even when you take LTP into account – the impact of a big meteorite, for example, causing major cooling on the earth. So in theory the increase in greenhouse gases could also have this effect, do you agree? Does the fact that, despite the relatively rapid increase in GHGs, there is no significant change yet in the global temperature mean that the effect of GHGs is relatively weak?

    As an aside: did you ever analyse the CO2 record? I can imagine that the rise from 280 to now 400 ppm is very significant even when LTP is taken into account (I suppose this time series also has a Hurst parameter of at least 0.6).

    Marcel

  • Marcel Crok

    Dear Armin,

    In his first comment on your guest blog Demetris wrote:

    But there is a disagreement also on the logical part of your conclusion (ii). I believe if you accept that the sea surface temperature has strong LTP, then logically the land temperature will have it too, so I cannot agree that the latter has “comparatively low persistence”. We are speaking about long-term persistence, which manifests itself on decadal, centennial, etc., time scales. I cannot imagine that, in the long term, the land would not be affected by the long-term fluctuations of the sea temperature. I believe climates on sea and land are not independent of each other—particularly in the long term.

    As far as I know/remember I haven’t seen a response from you on this statement. So I am interested to hear your reaction.

    I read in one of your papers that the fact that you find a significant change in the land temperatures does not necessarily mean that greenhouse gases are the cause. It could also be the Urban Heat Island effect for example.

    This topic also reminded me of an interesting paper by Compo and Sardeshmukh from a couple of years ago. This paper shows that the land warming follows the ocean warming (if I remember correctly, the ocean temperatures were prescribed in their model). This comes close to what Koutsoyiannis is saying here as well, that there is or should be a close connection (especially in the longer term) between ocean and land temperatures.

    Marcel

  • Demetris Koutsoyiannis

    Marcel, you ask:

    Armin and/or Demetris, what is the formal definition of LTP?

    Actually, I have given a formal definition already. Quoting from my initial post:

    Specifically, the variance will never become inversely proportional to time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise).

    So, whenever 0.5 < H < 1 we have LTP. There are other equivalent definitions: replace in the above “variance” with “autocovariance” and “time scale” with “time lag” and you will have one. Another equivalent definition has been offered by Spencer, in terms of the power spectral density (PSD); he said:

    a PSD of (1/f^a) where a is greater than 0 and up to 1.

    where f is frequency and a = 2H – 1.

    All these are equivalent. One should have in mind that they refer to asymptotic properties, i.e. they should be valid for arbitrarily large time scale or time lag, and for arbitrarily small frequency. For this reason in more formal writing we use the concept of limit (“lim”) for the definition.

    My preference is the definition based on variance (or standard deviation) because it is the simplest and most economical, as well as because it provides the best interpretation, the one related to variability/change (rather than “memory” etc.).
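    A minimal numerical sketch of this variance-based definition (an illustrative aside using synthetic data; the function names are mine, not from any of the cited papers): generate an exact Hurst-Kolmogorov (fractional Gaussian noise) series and check that the variance of the time-averaged series decays as the power (2 – 2H) of the time scale, whereas for white noise it decays inversely proportionally to the time scale:

```python
import numpy as np

def fgn(n, H, rng):
    """Exact fractional Gaussian noise (an HK process) via Cholesky
    decomposition of its Toeplitz autocovariance matrix."""
    k = np.arange(n)
    # autocovariance of fGn: gamma(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * ((k + 1.0)**(2*H) - 2 * k.astype(float)**(2*H)
                   + np.abs(k - 1.0)**(2*H))
    C = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def variance_slope(x, scales):
    """Slope of log Var(time-averaged series) vs log time scale.
    Theory: -(2 - 2H) for an HK process, -1 for white noise."""
    v = [x[: (len(x) // s) * s].reshape(-1, s).mean(axis=1).var()
         for s in scales]
    return np.polyfit(np.log(scales), np.log(v), 1)[0]

rng = np.random.default_rng(42)
scales = [1, 2, 4, 8, 16, 32]
slope_hk = variance_slope(fgn(2048, 0.8, rng), scales)        # near 2*0.8 - 2 = -0.4
slope_wn = variance_slope(rng.standard_normal(2048), scales)  # near -1
```

The gentler slope for the HK series is exactly the statement above: variability decreases with averaging scale, but much more slowly than classical statistics implies.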

    More explanations about LTP and change, hopefully in simple words, can be found in my paper Hydrology and Change, published just today (in preprint format). Anybody who is interested but does not have access to the journal (linked above) can email me for a copy of the preprint.

    D.

    PS. Here is the abstract of the paper:

    Since “panta rhei” was pronounced by Heraclitus, hydrology and the objects it studies, such as rivers and lakes, offer grounds to observe and understand change and flux. Change occurs on all time scales, from minute to geological, but our limited senses and life span, as well as the short time window of instrumental observations, restrict our perception to the most apparent daily to yearly variations. As a result, our typical modelling practices assume that natural changes are just a short-term “noise” superimposed to the daily and annual cycles in a scene that is static and invariant in the long run. According to this perception, only an exceptional and extraordinary forcing can produce a long-term change. The hydrologist H. E. Hurst, studying the long flow records of the Nile and other geophysical time series, was the first to observe a natural behaviour, named after him, related to multi-scale change, as well as its implications in engineering designs. Essentially, this behaviour manifests that long-term changes are much more frequent and intense than commonly perceived and, simultaneously, that the future states are much more uncertain and unpredictable on long time horizons than implied by standard approaches. Surprisingly, however, the implications of multi-scale change have not been assimilated in geophysical sciences. A change of perspective is thus needed, in which change and uncertainty are essential parts.

  • Marcel Crok

    Dear Rasmus,

    Do you accept the formal definition given by Demetris in his latest comment and in his original post?

    Specifically, the variance will never become inversely proportional to time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise).

    In your post you wrote:

    The term ‘signal’ can have different meanings depending on the question, but here it refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.

    If the “signal” refers to “manmade climate change”, this suggests that time series before, let’s say, 1900 contain only noise. Is that how you see it?

    In his first comments to you Demetris wrote:

    a. The HadCrut4 data set is 163 year long. So, let us exclude the last 63 years and try to estimate H based on the 100-year long period 1850-1949. The Hurst coefficient estimate becomes 0.93 instead of 0.94 of the entire period. Is LTP artificial then?

    b. Look at Koutsoyiannis and Montanari (2007), Table 1. It examines several proxies for temperature for the last 500-2000 years and provides two sets of H estimates: One for the entire period covered by each of the proxies and one for the period 1400-1855, common for all proxies. Do you see any noteworthy difference (say, greater than 0.03) in the estimates of H between the two periods? Don’t these high values of H (0.86-0.93 for the period 1400-1855) indicate LTP? Can they be the result of anthropogenic origin? Does your “circular logic” argument apply to them?

    c. Look at Markonis and Koutsoyiannis (2013), Fig. 9. This shows that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years. Is this a result of your “signal” which you identify with “man-made climate change”?

    Furthermore, if you trust only instrumental series you may look at the Nile example in my post, which I trust you can assume to be free of “man-made climate change” as it does not go beyond the 15th century. With respect to instrumental temperature records, you are right that most thermometer records do not go back longer than a century. However, there are some that do—some exceptions, as you say. As handy examples, I can offer those of Vienna (see Fig. 3 in Koutsoyiannis, 2011) and Berlin/Tempelhof (see Fig. 13 in Koutsoyiannis et al., 2007). Again the LTP is evident, even without considering the last period (for example, as I recall from the latter publication, the first one-third of the record, years 1756-1839, gives H = 0.83, while the total period gives H = 0.77).

    At least before there was “manmade climate change”, LTP seemed to be the norm. Do you agree with that?
    Also, the examples of Demetris show there hasn’t been a huge change in LTP since GHGs started to rise. Do you accept this?

    Marcel

  • Demetris Koutsoyiannis

    Dear Marcel

    Thanks for asking questions most relevant to the topic of the dialogue. You say:

    In our introduction we mentioned the IPCC AR4 definition of “detection”: “Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”

    Would you say that the method you followed in Koutsoyiannis/Montanari (2007) is your preferred answer to the phrase “in some defined statistical sense”?

    Definitely yes. I explained the reasons in my introductory post:

    The method has the advantages that it uses the entire series (not a few values), it considers the actual climatic values (not their ranks) and it avoids specifying a mathematical form of trend (e.g. linear).

    As I noted, the method needs some further elaboration to include the uncertainty in the estimation of H.

    Is “detection” purely a matter of statistics for you?

    I would say it is primarily a statistical problem, but I would not use the adverb “purely”. Besides, as we wrote in Koutsoyiannis and Montanari (2007), even the very presence of LTP should not be discussed using merely statistical arguments.

    Do your results mean that in your view “detection” has not yet taken place, although it comes close?

    Yes, I believe it has not taken place. Whether it comes close: it is likely. I believe the present dialogue should have taken place a decade ago. If LTP had been studied more, we would perhaps know more by now. However, as we write in Koutsoyiannis, Montanari, Lins and Cohn: Climate, hydrology and freshwater: towards an interactive incorporation of hydrological experience into climate research (2009):

    Therefore, it is surprising that IPCC AR4, even in the chapters on Paleoclimatology (WG1, Ch. 6) and Freshwater (WG2, Ch. 3), does not contain any reference to Hurst. The only allusion to HK behaviour in AR4 appears in the last paragraph of Appendix 3.A (Low-pass filters and linear trends; WG1, Ch. 3) and indicates that the authors of the Appendix had no understanding of long-term persistence (i.e. HK behaviour) or of the substantial literature describing it.

    May I add that detection of a change through statistical significance is not the only thing that matters. The magnitude of the change is even more relevant. The observed climate warming is 0.6°C in 134 years. Assuming that this is statistically significant while a 0.5°C warming is not, does significance make a big difference? Thus, it is important to compare the observed change to what would be a normally expected change.

    [Note: It is interesting to see how the observed 0.6°C climate warming in 134 years corresponds to what people have in mind about current warming. My students are about 20 years old. In each of my new classes I ask the students the question: how much do you believe the global temperature has increased in the last 15 years (since you started to go to school)? A typical answer is 5°C, with the minimum usually being 2°C.]

    If you indeed conclude that detection has not taken place yet, does this mean for you that the effect of GHGs on the climate (temperature) is relatively weak? I ask this with the following in mind: I suppose an external forcing on the climate could be so big or fast that a significant change in e.g. the global temperature is quickly achieved even when you take LTP into account – the impact of a big meteorite, for example, causing major cooling on the earth. So in theory the increase in greenhouse gases could also have this effect, do you agree? Does the fact that, despite the relatively rapid increase in GHGs, there is no significant change yet in the global temperature mean that the effect of GHGs is relatively weak?

    Yes, I believe it is relatively weak, so weak that we cannot quantify with certainty the causative relationships between GHG and temperature changes. In a perpetually varying climate system, GHG and temperature are not connected by a linear, one-way and one-to-one relationship. I believe climate models and the thinking behind them have resulted in oversimplified views and misleading results. As long as climate models are not able to reproduce a climate that (a) is chaotic and (b) exhibits LTP, we should avoid basing conclusions on them.

    As an aside: did you ever analyse the CO2 record? I can imagine that the rise from 280 to now 400 ppm is very significant even when LTP is taken into account (I suppose this time series also has a Hurst parameter of at least 0.6).

    No, I have not analysed CO2 data. From paleoclimatic graphs I can guess that CO2 concentration exhibits LTP too, with a high H—as well as that it is correlated to temperature, but not through a one-way and one-to-one relationship. Unfortunately, the time scales of the paleo time series are too broad and the instrumental observations of CO2 are too short; thus coupling the two sources of information is difficult. But I believe the change from 280 to 400 ppm is significant.

    D.

  • Armin Bunde

    Dear Rasmus,
    thank you for your comments. Regarding the review we published last year, it is very unfortunate that it is not available without charge. I wrote a long article in this book, and am quite unhappy that I also have to pay for the other articles in it. Could you be so kind as to send me your email address? I will bundle a collection of our papers and mail them to you. It would be nice to continue the discussion on this fascinating and interesting topic after this Climate Dialogue finishes.
    Best wishes, Armin

  • Armin Bunde

    Dear Marcel and Demetris,
    I apologize that I could not answer earlier. As you may have seen, I share many thoughts with Demetris regarding the definition and detection of LTP and so on.
    But we do not agree in all points.
    First of all, from our trend significance calculations we can see, without any doubt, that there is an external temperature trend which cannot be explained by the natural fluctuations of the temperature anomalies. We cannot distinguish between Urban Warming and GHG here, but there are places on the globe where we do not expect urban warming but we still see evidence for an external trend, so we may conclude that it is GHG.
    Second, as you certainly know, there is a long discussion on atmospheric temperatures versus SST. It has been argued by Fraedrich and also by us that the inertia of the oceans is an important factor for the LTP, and thus we expected and confirmed that the Hurst exponent is larger for SST than for SAT. Based on this, Fraedrich even concluded that H should decrease continuously when departing from the coast, i.e. stations very far away from the coastline, like Urumqi in China, should have H = 1/2. I found this hypothesis interesting, but ultimately we could not support it from our own analysis. So there is a discussion on the point you make, but I think it is settled (and the models also show this) that SST has a higher persistence than SAT.
    Best wishes,
    Armin

  • Demetris Koutsoyiannis

    Rasmus, you say:

    Perhaps, and this [i.e. the “noise” properties of the climate models are inconsistent with LTP] may be because all the LTP is due to external forcing. But you have not demonstrated this, Demetris, and I’m not so sure that you’re right. The models do after all embody a non-linear dynamical system which simulates slow variations due to ocean-atmosphere coupling. We also know that they simulate chaotic weather.

    As I explained here and there, your own grey curve in your Figure 2 is fully consistent with a Markov model with a characteristic time scale of a = 1.25 years. See the graph below if you do not believe it: I fitted the red curve, which is an exponential decay with a = 1.25 years, and plotted it on your own graph.

    A climate with a = 1.25 years is a static climate (at scales of, say, 30 years or more), and any deviation from the mean is too small and purely random, without correlation with previous periods.
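    To make the contrast concrete, here is a small numerical aside (the values are taken from this dialogue: a = 1.25 years from the fit above, H = 0.94 from the HadCRUT4 estimate discussed earlier). The Markov autocorrelation decays as exp(−s/a), while the HK autocorrelation decays as the power (2H − 2) of the lag, so at a 30-year lag the two differ enormously:

```python
import math

a, H = 1.25, 0.94                        # Markov time scale (years) and Hurst coefficient
markov = lambda lag: math.exp(-lag / a)  # short-range (exponential) decay
hk = lambda lag: lag ** (2 * H - 2)      # long-range (power-law) decay

for lag in (1, 10, 30):                  # lags in years
    print(lag, markov(lag), hk(lag))
# at a 30-year lag: Markov ~ 4e-11 (effectively zero), HK ~ 0.66
```

This is what “static climate” means here: under the Markov model, 30-year periods are essentially uncorrelated, whereas under LTP they remain strongly linked.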

    From what you write, I guess you can accept that, during Earth’s history, there were periods in which your “signal” as you define it (you say “here it refers to manmade climate change”) or your “external forcing” was not present. During those periods, the climate models should behave as in your grey line, which is identical to my red line, which in turn signifies a Markov process, which finally produces a static climate (sorry for repeating trivial things). The truth is, however, that climate on Earth has never been static.

    Otherwise, I agree with you that models can simulate chaotic weather. The problem is whether or not they can simulate (a) a chaotic climate and (b) a climate consistent with LTP. From what I know, they fail at both.

  • Demetris Koutsoyiannis

    Dear Rasmus,

    You say:

    The idea that anything that violates the laws of physics is ‘unphysical’ is fairly straight-forward.

    I fully agree; actually, I think I had said already that in my phrase you quoted:

    Of course, if a theory or analysis violates physical laws, then it should be rejected.

    Furthermore, you say:

    Measured data are not unphysical, but the assumptions you make when analysing them may be inconsistent with physics. You can always find a mathematical framework for fitting a set of data, but that does not mean that this mathematical framework represents meaningful physics.

    Yes, they may be inconsistent, but they may also not be. Therefore, to claim that something is inconsistent with something else requires a proof. I may have said this several times in this blog, but I have not seen any proof of inconsistency. Sorry for having to repeat it once again. To save time, I will not comment on the rest of your comment by repeating things that I have already said.

    D.

  • Demetris Koutsoyiannis

    Dear Rasmus,

    Thanks for the interesting graph which shows that among numerous (hundreds?) climate model runs there were a few (six?) that did not suggest a warming climate in the last decade. This is a nice demonstration that Earth’s climate does not feel obliged to do what the majority of climate models dictate. It also demonstrates the vanity of deterministic modelling and, in my view, suggests the need to develop stochastic approaches to climate.

    I think the following conclusion from Koutsoyiannis et al. (2011) is relevant:

    By showing the poor skill of climate models in reproducing past climate evolution, we think that we are constructive rather than destructive. In particular, we hope to have contributed in showing that current modelling approaches can be dangerous, because, as they are unable to reproduce climatic variability, naturally they hide or underestimate future uncertainty (cf. Koutsoyiannis et al. 2007, Koutsoyiannis 2010). This may also contribute to the search for better alternatives, perhaps less algorithmic-intensive, needing less powerful supercomputers (which, despite being also money intensive, ultimately may not make any difference), and more thought- and knowledge-intensive.

    D.

  • Demetris Koutsoyiannis

    Dear Lennart,

    You say:

    Maybe Demetris has further comments after reading the paper that Guemas was kind enough to send us a copy of?

    Indeed, I have a few comments.

    First, I noticed the following phrase from the first paragraph (typeset in bold) of the paper:

    The ability to predict retrospectively this slowdown not only strengthens our confidence in the robustness of our climate models, but also enhances the socio-economic relevance of operational decadal climate predictions.

    However, I have difficulty reading this in connection with what Virginie emailed to us:

    Most of our retrospective predictions are not very good actually [...] This does not mean the predictions we start now for the next decade will be very good.

    Second, I noticed in this paper the phrase:

    In those retrospective predictions initialized every November from 1960 to 2011, the ensemble-mean SST averaged over the first 3 forecast years (Fig. 1a) is very close to the observed 3-year running mean SST in all of the predictions from 2000 onward…

    This suggests that the method followed was to reset the conditions every year in order to match reality. Of course, such a method is not feasible if we speak about future predictions, and I do not think it is useful even in hindcasts. This is because any model, even the funniest one, if regularly reset to the current conditions, will exhibit good performance in reproducing a process characterized by persistence.
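    [Editorial note: a toy sketch of this last point. Illustrative only: a trivial "model" that merely carries the re-initialised state forward, scored against a persistent AR(1) "truth":]

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    phi, n = 0.95, 600

    # A persistent "truth" series (AR(1) here, for simplicity)
    truth = np.empty(n)
    truth[0] = rng.standard_normal()
    for t in range(1, n):
        truth[t] = phi * truth[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

    # The "funniest" model: predict next year's value as this year's observed
    # value, i.e. re-initialise to reality at every step, with no dynamics at all
    forecast = truth[:-1]
    skill = np.corrcoef(forecast, truth[1:])[0, 1]
    print(f"correlation of the trivial reset model with truth: {skill:.2f}")
    ```

    Despite containing no physics at all, the reset "model" tracks the persistent series closely; good hindcast scores under frequent re-initialisation therefore say little about genuine predictive skill.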

    Third, as a colleague who saw the paper noticed, the way the results are graphically presented raises questions. You may see, for example, that in Fig. 1c of the paper, which refers to non-initialized forecasts, the points corresponding to consecutive years are connected to each other by lines, while in Fig. 1a, referring to initialized forecasts, the points are not connected by lines. (Perhaps by leaving the points disconnected, you get a feeling of better agreement). Furthermore, Fig. 1c, which is for 3-5 years ahead of initialization, does not indicate any impressive agreement with reality.

    For these reasons, I do not think the paper has explained the pause in warming.

    D.

  • Demetris Koutsoyiannis

    Paul S,

    Please read my comment again and, in particular, please notice that I used quotation marks for the terms I quoted from Rasmus, which are “signal” and “external forcing”. Your reply is about forcing in general. Of course there is forcing all the time, but it can be internal, produced by the climate system per se, not by external factors like Rasmus’s “signal”, which, as he himself defined it, “refers to manmade climate change”.

    D.

  • Rasmus Benestad

    Dear Marcel,

    I think the proposition

    Specifically, the variance will never become inversely proportional to time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise)

    is somewhat artificial in the case of temperatures, as we know that a great deal of variance is usually removed before the analysis: the seasonal variations and the diurnal cycle. Most of the variance is tied up in these well-known cycles, forced by regional changes in incoming sunlight. Furthermore, ENSO has a time scale of ~3-8 years and is associated with most of the remaining variance once the seasonal and diurnal scales are set aside.
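    [Editorial note: for reference, the scaling law in the quoted proposition can be tabulated directly. The H values here are illustrative; H = 0.94 is the estimate Demetris cites elsewhere in the discussion:]

    ```python
    # Variance of the k-scale average of a unit-variance process under
    # Hurst-Kolmogorov scaling: var(k) = k^(2H - 2); H = 0.5 is white noise.
    for H in (0.5, 0.75, 0.94):
        row = {k: round(k ** (2 * H - 2), 3) for k in (1, 10, 30, 100)}
        print(f"H = {H}: {row}")
    ```

    For white noise the variance of 100-year averages drops to 1% of the annual value; for H = 0.94 it is still about 58%, which is why trend significance depends so strongly on H.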

    For precipitation, the picture may be different.

    If the “signal” refers to “manmade climate change”, this suggests that time series before let’s say 1900 only have noise. Is that how you see it?

    Yes and no. The response to natural forcing is still not ‘internal’. We know there have been natural forcings before and we know that they have caused some variations on Earth. Now we have the best instruments ever, and we can measure natural forcings and infer their effects. I still think that external forcings do influence the analysis of LTP, and I have not seen any demonstration to the contrary. I believe we need to work through the numbers, and I would like to see numerical demonstrations of whether LTP exists without forcings, e.g. in the sea-level pressure and other variables.
    One of the major weaknesses, I think, in the arguments presented by Armin and Demetris is that they only look at one climate indicator, when we know in fact that climate involves many related aspects. It is important to draw on all available information, rather than neglecting related physics and observations and focusing only on the statistical aspects of just one index.

    Although the HadCRUT4 record spans 163 years, it does not represent the same locations over the entire record. In fact, the early part is calculated from a smaller sample of thermometers, and one may even discern by eye the change in sampling fluctuations associated with the changes in data coverage. Again, the temperature is affected by external forcings. I suggest using sea-level pressure. Another approach is to subtract the northern from the southern hemisphere, assuming that the forcings and the trends affect the whole planet and that the two hemispheres are affected somewhat similarly – as I’ve done and as is shown in #470

    I have not read Koutsoyiannis and Montanari (2007) nor Markonis and Koutsoyiannis (2013) – please provide the details here. The Nile is a completely different case from the global mean temperature: the physics is entirely different. LTP may hold for the Nile, but not for other situations. For local temperature measurements, I would not be surprised to see some long-term-persistence-like behaviour, but I would ascribe most of it to low-level chaos and natural forcings.

  • Demetris Koutsoyiannis

    This is a general comment (i.e. not addressed to a particular discusser) — and a rather pessimistic one.

    In one of my earlier comments to Rasmus I wrote:

    Sorry for having to repeat it once again. To save time, I will not comment on the rest of your comment by repeating things that I have already said as well.

    Earlier, in another comment I wrote:

    Each of the discussers has his own publications. I have tried to refer to some of my own several times, but perhaps I was not convincing. I am afraid that Armin had the same feeling when he wrote to Rasmus:

    I am convinced that reading and trying to understand our papers really would help you much in this issue. Please do this!

    Now Rasmus makes a statement which for me was shocking:

    I have not read Koutsoyiannis and Montanari (2007) nor Markonis and Koutsoyiannis (2013) – please provide the details here.

    Of course, I did not expect that Rasmus would have read the papers by my colleagues and me. However, I would expect that each of the discussers reads the comments in this blog, particularly those addressed to them — or at least uses the “Find” utility of the browser.

    So using the “Find” utility I was able to see that full details of Koutsoyiannis and Montanari (2007) are given several times in this blog:

    In Ref. [2] in the Introductory post by Rob, Marcel and Bart;

    In Ref [21] in my own main post;

    In my First comments on Armin Bunde’s post.

    Also link to the paper is contained in my First comments on Rasmus Benestad’s post.

    Furthermore, for Markonis and Koutsoyiannis (2013) full details are given:

    In Ref [17] in my main post;

    In my First comments on Rasmus Benestad’s post;

    In Spencer Stevens’ comment to Bart.

    Both details and a link are also contained in my reply to Rob.

    All the above makes me wonder if climate dialogue is possible.

  • Demetris,

    In a recent comment (http://www.climatedialogue.org/long-term-persistence-and-trend-significance/#comment-490 ) you wrote in response to a question from Marcel about whether detection (of global warming) had taken place:

    “I believe it has not taken place.”

    This I found surprising in light of what you wrote earlier:
    http://www.climatedialogue.org/long-term-persistence-and-trend-significance/#comment-351 : “The global land air temperature, in the past 100y, increased by about 0.8 degrees. We find this increase even highly significant.” And in your opening post: “the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.” (based on a value of the Hurst coefficient, H = 0.94, higher than others have found) and “Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is”

    I took your earlier responses to mean that, according to you, the recent warming is significant at the 99 or 99.9% level, depending on the exact metric used. You mentioned “highly significant”. And now you state “detection has not taken place”. Aren’t these statements mutually exclusive?

    As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” So how small do you think this chance is?

  • Demetris Koutsoyiannis

    Dear Bart,

    Sorry if you were misled, but it was not my fault. I usually use blockquotes in my comments, but if I remember correctly, the one you quote was from my initial comment. That was sent to the forum editors before the dialogue appeared, and the editors posted it for me without using blockquotes.

    So what I said, as you may see it above, in my comment to Armin, was this:

    Dear Armin,

    […]

    Now coming to the land temperatures, again I find a similar behaviour as in my Figure 5 […] This does not agree with your conclusion:

    (ii) The global land air temperature, in the past 100y, increased by about 0.8 degrees. We find this increase even highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.

    So, what you quote as if it were said by me was in fact said by Armin and is not supported by my calculations.

    You can check this if you read my entire comment, rather than the part of it which is actually a quotation to which I replied. The meaning is clear—even without blockquotes.

    D.

  • Demetris Koutsoyiannis

    Bart, please also read my pessimistic comment just before yours.

    D.

  • Demetris Koutsoyiannis

    Paul, I regard greenhouse gases as part of the climate system. For example, in my view, changes in water vapour concentration classify as internal forcing. As I wrote in an earlier comment (Section 5, Linearity vs. nonlinearity):

    In more complex systems (yet the most common ones) whose study needs to abandon linear models, the contribution of each cause or forcing is not straightforward.

    So I may not be able to calculate the particular contribution of external and internal forcings. For me it suffices to say that the climate was never static, which implies variability — particularly due to internal dynamics. If you can make such a separation, please do, and let me know if you find that in the entire Earth’s history the ever-changing climate has been driven by external forcing only.

  • Thanks Demetris, that clarifies part of the discrepancy, but the other two quotes remain, which I still cannot reconcile with your later statement. So my question still stands:

    What is the chance that the observed changes are due to internal variability? (meaning a redistribution of energy within the climate system – there seems to be quite some confusion about what the different terms forced vs unforced/internal var. mean, which I will come back to in a later comment)

  • Demetris Koutsoyiannis

    Bart, thanks for understanding. My answer stands too. If you read my main post you will see that I provide quantified answers for the “chance”. See in particular my graphs and their explanations. I hope Marcel can verify that what I replied to his comment (actually verifying his own reading of my post and comments) is consistent with what I wrote in my post and my later comments. So, I am afraid I cannot see what looks surprising to you.

  • Demetris Koutsoyiannis

    Some recent signs (lack of progress, repetitions) may indicate that this discussion approaches its end. I wish to thank the editorial team, Bart, Marcel and Rob, for inviting me, my co-guests Armin and Rasmus, and all contributors for the fascinating discussion during these three weeks.

    My best wishes for the continuation and further development of the Climate Dialogue forum. Even with the difficulties encountered, dialogue is the only way forward. Besides, as Heraclitus said, “Το αντίξουν συμφέρον και εκ των διαφερόντων καλλίστην αρμονίαν και πάντα κατ’ έριν γίνεσθαι” (Opposition unites, the finest harmony springs from difference, and all comes about by strife).

    If I may offer a simple suggestion for future dialogues, I would propose merging the two sections “Expert comments” and “Public comments”. First, these section titles are not very accurate; it would be more accurate to say “Editors and guests” rather than “experts”. My feeling is that everybody who contributes to this dialogue is an expert—both the eponymous and the pseudonymous discussers. Second, reading the comments would be more convenient and sensible if they were in chronological order rather than separated into two sections.

    D.

  • I’d like to offer the following observation of the discussion so far (more comments remain welcome, but are by no means demanded).

    There appear to be different interpretations of natural variability and of detection which may be a frequent cause of misunderstanding in this dialogue and beyond. Below I’ll try to describe these different interpretations in an effort to elucidate where the different opinions may (partly) be coming from.

    In general, the following processes involved in climate change can be distinguished:
    - natural unforced variability (e.g. internal variability involving a redistribution of energy)
    - natural forced variability (e.g. changes in the output of the sun or in volcanism)
    - anthropogenic forced variability (e.g. changes in greenhouse gas or aerosol concentrations)

    where a forcing refers to a process causing an energy imbalance, which in turn causes a temperature change. Internal variability, on the other hand, causes a temperature change arising from semi-random internal processes. This temperature change can then cause an energy imbalance (since outgoing energy scales as T^4), but the cause-effect chain linking temperature change and energy imbalance is the opposite of that of a radiative forcing.
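    [Editorial note: the T^4 point can be made concrete with a back-of-the-envelope sketch. This uses the linearised Stefan-Boltzmann response; the 255 K effective emission temperature and the 0.8 K warming are illustrative round numbers:]

    ```python
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFF = 255.0          # Earth's effective emission temperature, K (illustrative)
    DELTA_T = 0.8          # warming to test, K (illustrative)

    # Linearised change in outgoing longwave flux: dF = 4 * sigma * T^3 * dT.
    # Warming from pure internal redistribution would *increase* heat loss,
    # i.e. produce an energy deficit rather than the observed accumulation.
    d_flux = 4 * SIGMA * T_EFF ** 3 * DELTA_T
    print(f"extra outgoing flux for {DELTA_T} K of warming: {d_flux:.1f} W/m^2")
    ```

    An unforced (internal) warming would thus increase heat loss to space by roughly 3 W/m^2, creating an energy deficit; a radiative forcing instead creates the imbalance first, and the warming follows.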

    As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”

    In other words, detection is based on distinguishing the forced (natural and anthropogenic) from the unforced (natural) component.

    Demetris seems to argue that these different processes cannot be distinguished, or at least that internal (unforced) variability and natural forcings cannot be distinguished; anthropogenic forcings can only be distinguished by virtue of not having acted on the system prior to ~1850. Armin seems to take a somewhat similar view, combining natural unforced and forced changes in what he terms natural fluctuations. Rasmus seems to take the view I outlined above (the distinction into three main types of processes).

    Demetris argues that the current temperature signal is not outside the bounds of what could be expected from natural forced and unforced changes, thereby using a higher bar than the standard definition of “detection”. He also bases his statement on a higher Hurst coefficient than Armin does, which raises the bar further.

    This may clarify how the statements that climate forcings introduce LTP and that climate forcings are omnipresent (which all three agreed on) can still lead to different conclusions about whether the presence of LTP says anything about internal variability: different operational definitions of detection and internal variability (and perhaps also of LTP, as has been put forward by Armin) are used. In one view, internal variability is only the unforced component of change; in another, internal variability also includes the response to natural forcings.

    This brings up the question, if (according to Demetris) the recent warming is not outside of the bounds of natural forced and unforced variability, where does all the excess energy come from that is observed to accumulate in the climate system? It doesn’t seem to be due to natural forcings (which show no warming trend over the past 50 years), nor is there any sign of a redistribution of energy within the climate system (everywhere we look it’s warming). Where is the energy hiding, or where is it coming from (if not from excess greenhouse gases inhibiting planetary heat loss)?

  • Spencer Stevens

    Firstly, I’d like to thank Rob van Dorland, Marcel Crok, Bart Verheggen and the climate dialogue team for organising this discussion on what I consider to be an absolutely fundamental aspect of climate science. I’d also like to thank Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis for taking what must be a considerable amount of time to prepare their cases and contribute to the discussion. Much of what I would like to say has already been covered, but I would like to add a few brief comments.

    Rasmus asked the following question and I’d like to add my own perspective:

    So the question: how do the LTP methods deal with imperfect data?

    I’m not sure “LTP methods” is quite how I would view the issue raised here. Although some methods may assume LTP in a data set, I think methods are tools that operate on the data, and it is the data that carries the property “LTP” or “STP”. The more appropriate question, I think, is what the consequence is of applying a method (any method) to a data set with LTP present. This may not be clear, so I will give a simple example to help illustrate the point I am making.

    We can calculate the sample standard deviation of a data set easily, and this has an exact value. And we can do this whether the data has short- or long-term persistence. We then often interpret this sample standard deviation as an estimate for the population standard deviation. In this case, there is some error, because the sample standard deviation will not exactly match the population standard deviation.

    And we are all taught in statistics class that, when calculating the sample variance as an estimate of the population variance, we should divide by (n-1) rather than (n) to ensure the estimate is not biased. But this is only true where the samples have sufficient independence (for example, for white noise, or for autoregressive systems where the data set is much longer than the characteristic time constant). If we have a data set containing LTP, using the sample standard deviation to estimate the population standard deviation is biased even if we do divide by (n-1).

    The example I give here is a standard method that can be applied to all data sets. It is not an “LTP” method or an “STP” method, it is simply a method. What we need to take great care with is the consequence of applying this method to an LTP or STP data set because the properties of the method may differ between the two. And in this regard, there is great danger with rules that we often take for granted – such as dividing by (n-1) to get an unbiased estimate – because those are things most easily missed, and can easily mislead.

    How various methods interact with LTP data sets can be assessed relatively easily, for example through Monte Carlo simulations.
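    [Editorial note: a minimal sketch of such a Monte Carlo check, assuming exact fractional Gaussian noise generated from a Cholesky factor of its autocovariance; H = 0.9 and the sample size are illustrative:]

    ```python
    import numpy as np

    def fgn_cov(n, H):
        # Autocovariance of unit-variance fractional Gaussian noise:
        # gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
        k = np.arange(n)
        gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                       + np.abs(k - 1) ** (2 * H))
        i, j = np.indices((n, n))
        return gamma[np.abs(i - j)]

    rng = np.random.default_rng(42)
    n, H, reps = 200, 0.9, 2000

    # Draw exact fGn samples by colouring white noise with the Cholesky factor
    L = np.linalg.cholesky(fgn_cov(n, H))
    samples = rng.standard_normal((reps, n)) @ L.T

    # Sample variance with the usual (n-1) correction, averaged over replicates;
    # the population variance is exactly 1 by construction
    mean_s2 = np.mean(np.var(samples, axis=1, ddof=1))
    print(f"population variance: 1.0; mean sample variance: {mean_s2:.2f}")
    ```

    Even with the (n-1) correction, the average sample variance comes out well below 1 (around 0.66 for these settings, matching the theoretical value (n - n^(2H-1))/(n-1)), confirming the bias described above.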

  • Spencer Stevens

    Bart, from your comment:

    Rasmus asks very much the same question in his latest comment:
    “Can we agree on that forcing introduces LTP?”
    “Can we agree on that forcing is omnipresent for the real world climate?”

    These are important questions in order to establish areas of agreement and disagreement. I invite Demetris and Armin to comment on these.

    As to the physical interpretation: internal variability would in all likelihood mean a redistribution of energy within the earth system.

    Firstly: I do not understand why you think internal variability would be likely to result in a redistribution of energy alone. Let me give you an example of why I think this is not a useful assumption.

    A key factor in internal variability is cloud cover. I like to choose clouds as an example because they are a classic example of a natural fractal, often used when we teach courses on fractals: from the fluffy bits around the edges, to the clumps, to the whole cloud itself (and this is, of course, a product of LTP). We often just show a single cloud as an example, but the limits do not stop there. The fractal properties of clouds extend not just within a cloud, but spatially from cloud to cloud, in the cloud cover from region to region, and from continent to continent. Also temporally: from day to day, from year to year, from century to century, from epoch to epoch.

    Changing cloud cover is a great example of LTP arising from internal variability in the climate system. But of course, clouds do not just move energy around; they change the earth’s albedo, and change the quantity of energy in the system. On all time scales, with a simple relationship governed by Hurst-Kolmogorov dynamics.

    So I disagree with your assumption that internal variability would (likely) result in energy redistribution alone. I also suspect that if all external “forcings” could be held constant, the internal variability in the system would still see LTP-like temperature swings.

    On the topic of whether forcings “cause” the LTP: I also doubt this. I have a signal processing background and like to think of things in frequency space, and although this is not always the best space in which to understand LTP, I will use it as an example because I am comfortable with it – apologies for this!

    Imagine we could have an extremely long data set of climatic conditions, with high resolution; a huge number of points. Now we look at the power spectral density of that data set. In log-log space, we will see perhaps thousands of individual peaks. The pattern of these peaks is important, and on data sets I have seen they all follow a consistent, simple pattern in which the magnitudes of the peaks follow a simple straight line, with the lowest-frequency peaks being highest and the highest-frequency peaks being lowest (classic LTP behaviour).

    If each of these peaks had an associated forcing – for example, an orbital explanation, or some other mechanism – I would expect each forcing to be different, and the peaks not to line up. For example, we can easily estimate the forcing associated with the orbital parameters, and we should then expect the magnitudes of the peaks to follow the same ordering as those forcings.

    But in all cases I have seen – the climacogram in ref. 1 below being a great example – the peaks line up in a strict descending order. While it is theoretically possible that the “forcings” aligned themselves to do this, it would be an extreme and unlikely coincidence. Which is why I find the “forcing” explanation deeply unconvincing, and the “LTP” explanation far more credible.

    ref 1. Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
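    [Editorial note: the straight-line behaviour described above can be illustrated with a small spectral-synthesis sketch. H = 0.9 is illustrative, and the synthesis builds the power law in, so this only shows what the log-log periodogram of an LTP series looks like and that a fit recovers the slope 1 - 2H:]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    H, n = 0.9, 4096
    beta = 2 * H - 1                 # fGn spectrum: S(f) ~ f^(1 - 2H) = f^(-beta)

    # Spectral synthesis: complex Gaussian spectrum shaped by the power law
    f = np.fft.rfftfreq(n)[1:]
    noise = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)
    x = np.fft.irfft(np.concatenate(([0.0], f ** (-beta / 2) * noise)), n)

    # In log-log space the periodogram scatters around a line of slope -beta
    power = np.abs(np.fft.rfft(x)[1:]) ** 2
    slope = np.polyfit(np.log(f), np.log(power), 1)[0]
    print(f"fitted log-log spectral slope: {slope:.2f} (theory: {-beta:.2f})")
    ```

    If each spectral peak instead had its own independent forcing, there would be no reason for the fitted line to come out this clean.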

  • Arthur Smith

    I want to express my strong agreement with Rasmus and Bart’s request that Koutsoyiannis and Bunde need to address possible physical explanations of what they are claiming regarding “long-term persistence”. The Earth is not a purely mathematical system, it is a dynamical physical system with varying environmental conditions – anthropogenic effects among them. If you ignore differences in important physical variables at different times and run a purely mathematical analysis of the system, you are essentially producing garbage, just as would be true of any scientific analysis of an experiment with uncontrolled variables.

    Look at the Nile river level example Koutsoyiannis uses for instance. Koutsoyiannis mentions “land use” among the century-level changes likely to affect water levels – if that could be a significant factor, then that’s an important non-natural element that makes any statistical analysis of the water level non-informative about *natural* climatic long-term persistence. There could be something really interesting in here – perhaps the physical explanation for the ups and downs is some long-term oceanic oscillation cycle and the land-use changes are not relevant. But you have to do a *quantitative analysis* of these different factors to determine that – just as climate scientists do quantitative analyses of “forcings” to decide what’s important at the scale of the climate as a whole. Without that underlying quantitative analysis of the physical parameters of the system, and some indication of confidence that you have isolated away large-scale non-stationary change (local land use, solar and volcanic forcings, etc.) I can’t see how anything concrete can be obtained from this sort of analysis at all.

    Has this level of quantitative analysis been done? Or is there a better example of long-term persistence in climate-relevant observables that is more isolated from land use and other human and climate-forcing factors?

    Or better yet – if standard climate models can’t reproduce this sort of long-term-persistence that they think they see – what sort of physical model *would* allow for it? What changes would be needed in models to allow for this sort of thing? Right now the arguments seem very hand-wavy and unconvincing to me at least.

  • Paul S

    Thanks to everyone for their time and comments.

    The introduction mentions warming over the past 150 years in the same paragraph as referencing an IPCC conclusion ‘it is extremely unlikely (<5%) that recent global warming is due to internal variability alone'. However, it should be made clear that this attribution statement was specifically addressing 'global climate change of the past 50 years', which would be ~1955-2005, not the past 150 years. This is important because it ties in with Bart's comments regarding warming of the oceans – the reason a strong attribution statement was made specifically only for the last 50 years was the availability of observations relating to ocean warming below the surface.

    With that in mind it seems important, if these statistical approaches are attempting to offer an alternative attribution framework from the IPCC's, that they address relevant climate change indicators other than near-surface and surface temperature.

    I'd like to also emphasise Rasmus' point regarding the MWP in relation to an unforced preindustrial control run: the climatic conditions of the Medieval period did occur in the context of forcings – almost all natural of course: orbital, solar, volcanic, small variations in methane, CO2 and probably aerosols (the latter three could be regarded as biogeochemical feedbacks in this context). Therefore an unforced model run doesn't say much of anything about the model's ability to reproduce the climate of the Medieval period.

  • Jos Hagelaars

    Thanks a lot for all the interesting and very informative posts.
    I would think that, besides the statistical significance of temperature trends, physical and chemical laws would be important. A ‘long-term memory’ in the climate system should also be based on the laws of nature. Dr. Benestad pays a lot of attention to the physics of the climate system, but I miss that in the posts of Dr. Koutsoyiannis and Prof. Bunde.
    So some general questions came to my mind.

    Is it a mere coincidence that temperatures seem to rise after the rise of greenhouse gas concentrations in the atmosphere, when physics tells us they should rise?
    Or, is it a mere coincidence that, e.g., the amount of sea ice in the world is dropping and that sea level is rising, when physics tells us this should happen?
    From the recent Pages2K paper and Marcott et al. 2013 I conclude that, during the second part of the Holocene, global temperatures were gradually dropping until some 150 years ago, when this decreasing trend changed into an increasing trend. Can this happen by chance, when it seems very plausible from the change in radiative forcings?

    What is the statistical probability that all parameters seem to change in one direction, the direction that physics tells us? It is obvious that several parameters in the climate system are linked to temperature. Should all these parameter changes be taken into account when looking at the relative magnitude of the three types of processes: internal variability, natural forcings and anthropogenic forcings?

  • Spencer Stevens

    I am a little surprised that people are claiming that LTP is somehow “statistics only”, given that Demetris has published a paper [1] in the peer-reviewed literature demonstrating how LTP arises from the principle of entropy maximisation, and, more importantly, both Armin and Demetris have demonstrated that LTP is a far better match to observations than STP. Also, it is rather unhelpful to simply assert that LTP needs a physical justification, given that one exists. If people wish to question the physical basis, it would be helpful if they explained which aspect of it they disagree with; I can see three possible areas: the principle of entropy maximisation, the analysis Demetris conducted, or whether the necessary conditions (constraints) outlined by Demetris in his paper apply. See ref 1. below.

    I will add some more detail now, based on Rob van Dorland’s quote below, as it is at least specific enough to address the points raised.

    We can distinguish the following opposite views:

    1) Using statistical models only
    2) Using statistical models in combination with constraints using physical knowledge of the climate system (e.g. internal variability, energy balance)

    I think Demetris is going for option 1, while Rasmus favors option 2.

    I do not see how Demetris’ view can be classed as “option 1”. Let me compare the two approaches.

    Rasmus is using GCMs as the basis. These are deterministic models built on a dynamical core of numerical solution to the fluid mechanics problem of the earth’s climate, overlaid with other relationships (insolation, ice and clouds, aerosols etc), some of which are based on analytical theory, others approximations or parameterisations, and others again missing. Rasmus has run these models and demonstrated that STP internal variability is seen, as shown in his plots above.

    Demetris has used a different approach. He has developed a stochastic model, based on the principle of entropy maximisation, combined with some simple, testable constraints, and identified under which constraints we expect to see white noise variability, STP variability, and LTP variability. We can see from the principles and constraints that we expect climate to exhibit LTP internal variability.

    So both Demetris and Rasmus have applied physical laws and constraints of the system. Both have expectations of internal variability from these constraints. But Rasmus has concluded STP internal variability, Demetris has concluded LTP internal variability.

    We can turn to the data to determine which of these theories is consistent with the real, observed climate, something Armin (and many others, such as Cohn, Lins, Montanari, Halley, Rybski, von Storch, etc.) has done for us. And the conclusion is that we see LTP present in the real climate system. This shows us that Rasmus’ model is wrong, and falsified. It does not mean Demetris’ model is correct; but it seems the best model we have today.

    The second point Rob makes regards energy balance. Any time series with a finite, defined population mean has a point of “energy balance”. So, for example, we can reject the idea that the climate is a random walk, since a random walk does not have a finite defined population mean, so is inconsistent with the principle of conservation of energy.
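    [Editorial note: the random-walk argument above can be illustrated with a quick numerical contrast, not taken from the dialogue itself. The sketch below shows that a random walk's variance grows without bound (so no finite population mean exists), while a stationary AR(1) ("STP") process keeps a bounded variance; both are illustrative models, not climate simulations.]

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.standard_normal((200, 10_000))   # 200 independent realisations

# Random walk: variance grows roughly linearly with time, so there is no
# finite population variance and no defined "energy balance" point.
walk = steps.cumsum(axis=1)
var_walk_early, var_walk_late = walk[:, 100].var(), walk[:, -1].var()

# Stationary AR(1) ("STP"): variance stays bounded near 1/(1 - 0.9**2).
ar1 = np.empty_like(steps)
ar1[:, 0] = steps[:, 0]
for t in range(1, steps.shape[1]):
    ar1[:, t] = 0.9 * ar1[:, t - 1] + steps[:, t]
var_ar1_early, var_ar1_late = ar1[:, 100].var(), ar1[:, -1].var()

print(f"random walk variance: {var_walk_early:8.1f} -> {var_walk_late:8.1f}")
print(f"AR(1) variance:       {var_ar1_early:8.1f} -> {var_ar1_late:8.1f}")
```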

    But LTP can be shown, analytically, to have a finite, defined population mean. So there is an “energy balance” present in a system with LTP internal variability. STP also has a finite, defined population mean, so this also passes the energy balance test. The difference being that the sample mean is a poor estimator of the population mean for an LTP series (for much the same reasons as my discussion of standard deviation above).
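    [Editorial note: the point that the sample mean converges slowly under LTP can be checked numerically. The sketch below, an illustration rather than any participant's analysis, samples fractional Gaussian noise by circulant (Davies-Harte) embedding and verifies that the variance of the sample mean follows the LTP scaling n**(2H-2), far larger than the white-noise 1/n.]

```python
import numpy as np

def fgn(n, hurst, rng):
    """Sample unit-variance fractional Gaussian noise by circulant
    embedding of the fGn autocovariance (Davies-Harte method)."""
    k = np.arange(n + 1)
    gamma = 0.5 * ((k + 1.0) ** (2 * hurst) - 2 * k ** (2 * hurst)
                   + np.abs(k - 1.0) ** (2 * hurst))
    circ = np.concatenate([gamma, gamma[-2:0:-1]])   # circulant row, length 2n
    eig = np.fft.fft(circ).real
    eig[eig < 0] = 0.0                               # guard against round-off
    z = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)
    return (np.fft.fft(np.sqrt(eig) * z) / np.sqrt(2 * n)).real[:n]

H, n, reps = 0.8, 1024, 400
rng = np.random.default_rng(0)
means = np.array([fgn(n, H, rng).mean() for _ in range(reps)])
var_mean_ltp = means.var()

print(f"Monte Carlo Var(sample mean), H=0.8: {var_mean_ltp:.4f}")
print(f"LTP theory n**(2H-2):                {n ** (2 * H - 2):.4f}")
print(f"white-noise theory 1/n:              {1 / n:.4f}")
```

    For H = 0.8 and n = 1024 the theoretical variance of the sample mean is about 0.0625, roughly 64 times the white-noise value, which is exactly the "poor estimator" effect described above.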

    So I strongly disagree that adoption of LTP represents option 1 above. The adoption of LTP has a physical basis; it is a stochastic model that is consistent with the constraints of the climate system; the expectation operator yields a meaningful energy balance term. Clearly and unambiguously, LTP falls into option “2” above.

    [1]. Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.

  • Spencer Stevens

    Bart, you say:

    Internal variability usually involves a redistribution of energy within the climate system (where the ensuing changes in surface temperature cause an energy imbalance which works to restore the system back to its equilibrium value where outgoing energy equals incoming energy; Rob’s second point in an earlier comment).

    And I have already explained above why this assumption that internal variability can only redistribute energy is flawed. Clouds, as just one example (there are many more), exhibit LTP and sensitivity to initial conditions, so they can only meaningfully be described as internal variability, and yet they can change the albedo of the planet. There is then nothing to “restore the system back to its equilibrium”; there is just the future trajectory of the climate, which will continue to exhibit LTP and will continue to be highly nonlinear and sensitive to initial conditions. And of course the climate will continue to have a finite, defined population mean (the “energy balance”) which will continue to be difficult to estimate from the sample mean.

    I can understand that you would prefer Demetris’ answer to mine, as he is the expert on this topic and I am not, but it does become frustrating when the same fundamental misconceptions have to be addressed over and over again, especially when we are all limited by the time and effort we can put into this discussion.

  • Jim Cripwell

    Spencer Stevens writes “Changing cloud cover is a great example of LTP arising from internal variability in the climate system.”

    Spencer takes zero notice of Svensmark’s hypothesis, and the recent work done at CERN, on the potential connection between GCRs and cloud cover. He states absolutely that “Changing cloud cover” is a “great example” of “internal variability”. Ignoring alternative hypotheses, which have the support of some empirical data, is not my idea of what the scientific method is all about.

  • Arthur Smith

    Koutsoyiannis appears to agree he has little to contribute to substantive discussion here:

    I cannot contribute in discussing questions that distinguish statistics from physics; I cannot accept that using a stochastic modelling approach violates or contrasts physics; I cannot offer too much in a discussion about separation of signals and noises;

    I’m sorry to hear this; dialogue is pretty difficult when one party completely refuses to respond to critical issues like these. Science is about explaining things we observe, not just sitting back and watching the world do whatever it does. The existence of long-term persistence in an observable implies some underlying cause that has slow variation: either an internal physical variable of the system under study that changes slowly, or an external parameter with similarly slow change. It is critical for our understanding to know which of these is the cause, and deeper analysis of the system has to be done to ferret that out.

    Just to illustrate with the Nile river level example: let’s say we have two contrasting possible fundamental explanations for the long-term persistence seen there:
    (1) changes in human use of the water and land around the river that vary over centuries
    (2) variations in ocean currents and behavior, also on a century scale, with an impact on climate and rainfall over the Nile basin.

    If (1) can be eliminated as a cause, leaving us only (2), or if there is other evidence supporting (2) (tree rings or other rainfall evidence in the area that matches the river level changes, for example), then that is a strong piece of evidence for real internal long-term persistence in the climate system, something which models evidently don’t capture.

    But if (1) is the explanation, which can be looked at for example by examining patterns of settlement and culture during the periods of river level changes, then this example has nothing to do with long-term-persistence in the climate as a whole, since the changes are caused by local human activity.

    If you haven’t done the work to distinguish between (1) and (2), then observing LTP in the Nile river example tells us nothing useful about the climate, it is just as much garbage as a scientific experiment in the lab under conditions where important variables are not being controlled.

  • Paul S

    Spencer Stevens,

    I don’t think you’re understanding Rasmus’ point. As far as I can tell he’s not at all arguing that the real world exhibits only “STP internal variability”. He’s saying that the statistical behaviour being labelled “LTP” is an emergent property of forced GCM realisations, and that the effect is much weaker when looking at unforced GCM realisations. That is, LTP is an expected consequence of forcing. As you’ve mentioned them, this was the primary finding of Rybski, Bunde and Von Storch (2008) when looking at forced simulations (though apparently not including orbital forcing) of the past 1000 years versus unforced control runs.

    Given that there is always forcing going on, “LTP” is something which should always be noticeable in real records of global or large-scale temperature. However, this behaviour is anticipated by GCMs, so I can’t see in what sense they would be falsified by that notion. I think this is why Rasmus talks about the potential for circular reasoning (though I’m not sure it’s the correct phrase in this circumstance): GHG forcing induces strong LTP behaviour, a strong LTP signal is identified in climate records, and GHG forcing is dismissed as a climate driver due to the presence of strong LTP behaviour.

    From another of your comments:

    If each of these peaks had an associated forcing – for example, an orbital explanation, or some other mechanism – I would expect each forcing to be different, and for the peaks not to line up.

    What are you using to inform your expectations for the consequences of forcing?

  • Arthur Smith

    Demetris Koutsoyiannis – thanks for responding to my comment. However, you attack a straw man, not me, when you state:

    a slow variation at a single time scale does not suffice to generate LTP. A single time scale of variation results in Markovian dependence. You need to have many scales of change to get LTP. I refer you to my post above; I devoted effort and time to write it, and I hope it would not be waste of time if you read it.

    - all I stated was that LTP implies an “underlying cause that has slow variation”, I said nothing about time-scale other than the word “slow”, and certainly had no intention of implying a single time-scale, or even a single cause – there has to be at least one causative element though. It is unnecessary diversions like this one that derail getting to the actual heart of matters; you would do better to focus on things that are actually important and respond to what people actually say.

    That said, I would like to express appreciation for your concrete response to the questions I had regarding Nile river flow. Although there was no quantitative component to your response, and no citations on the topic, it sounds like qualitatively the issue of human land- and water-use change over the time period in question can be dismissed. If that’s true, the results are not garbage, so good.

    So, then, what are possible underlying explanations for LTP in the Nile river flow case, or if there’s a better example, that? I have read enough about “maximum entropy” and the like to know such claims provide little to no physical insight to me. Buzz words are not explanations.

    Is there any evidence for at least one thing in the physical system of river structure, precipitation, etc. that shows the sort of long-term-persistent slow-change behavior that could explain these observations? If precipitation changes are causative, do we have precipitation records that can be compared and show similar (and preferably matching) persistence over the century-level and longer time scales? And what could explain the long-term memory regarding precipitation?

    Of course over hundreds of thousands of years we have the orbitally-induced ice age changes, which certainly explain slow change at that level – and we know from longer term temperature records there has been a general cooling trend for the past 11,000 or so years (until recently), presumably orbitally related as well. Those are the sort of forcing changes Rasmus discusses. We have volcanic and solar forcing changes on shorter time-scales. It’s very unclear from what you’ve done that you have any evidence of LTP originating from anything outside of these external factors – it would be very nice if you could make a clear statement on whether or not that’s true.

  • Arthur Smith

    Armin Bunde – I’m interested that you state (in a comment):

    We appreciate today that monthly temperature data are LTP on all time scales, with Hurst exponents around 0.65 for continental and around 0.8 for island stations and sea surface temperatures. Long historical runs from AOGCMs are able to reproduce this behavior.

    - do I understand this to mean that historical runs from climate models DO show LTP “on all time scales” with these sort of Hurst exponents? Demetris Koutsoyiannis seems to think that’s not possible. What’s the reason for this discrepancy? Is it the issue of whether or not external forcings are being varied (a historical run presumably uses historical values of the forcings)?
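    [Editorial note: for readers wondering how a Hurst exponent like 0.65 or 0.8 is actually estimated from a record, a minimal aggregated-variance estimator can be sketched as below. This is an illustration only; practitioners typically use detrended fluctuation analysis or maximum likelihood, and this simple estimator is biased if a deterministic trend is present.]

```python
import numpy as np

def hurst_aggvar(x, min_block=4, min_nblocks=32):
    """Aggregated-variance Hurst estimator: for an LTP series the
    variance of block means scales as m**(2H - 2), so the slope of
    log(Var) against log(m) gives H = 1 + slope/2."""
    x = np.asarray(x, dtype=float)
    sizes, variances = [], []
    m = min_block
    while len(x) // m >= min_nblocks:        # keep enough blocks per size
        nblocks = len(x) // m
        block_means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        sizes.append(m)
        variances.append(block_means.var())
        m *= 2
    slope = np.polyfit(np.log(sizes), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

# Sanity check: white noise has no persistence, so H should be near 0.5.
rng = np.random.default_rng(42)
h_white = hurst_aggvar(rng.standard_normal(2 ** 15))
print(f"white noise: H ~ {h_white:.2f}")
```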

  • Spencer Stevens

    Bart, in your latest comment you ask some interesting questions that may help to get to the root of the differences in our understanding of this topic. I’ll try to give some reasonably clear answers, although again these are my thoughts and you may be more interested in what Demetris has to say.

    What is your physical interpretation of the presence of LTP? Is a physical interpretation based solely on the presence of LTP possible at all?

    LTP is, in my view, a consequence of the dynamics (i.e., the physics) of the climate system. But LTP is pervasive. That is, once LTP is imprinted on a system, as we move to longer and longer timescales (e.g. climate), LTP will affect everything.

    So, I don’t think I understand quite what you mean by “is a physical interpretation based solely on the presence of LTP possible”, it is a slightly strange way to phrase the problem. I think once LTP is present, it becomes necessary to interpret climate through the presence of LTP, since LTP is so pervasive.

    I do note that Spencer equates LTP with internal variability (rather than being the combined result of internal variability and radiative forcing).

    I have used a number of terms in this discussion (internal variability, radiative forcing, energy balance), often with scare quotes (!). In doing so, I am trying to link concepts between conventional climate science and how these things manifest themselves in a non-linear system with Hurst-Kolmogorov (LTP) dynamics.

    Internal variability I think can be a useful definition, but I perhaps need to be clear on what I mean. When I refer to internal variability I mean anything which is part of the non-linear, interacting climate system. If something can be both affected by climate, and affect the climate, and be sensitive to initial conditions, then I consider it to be a part of internal variability. Because of the interaction and sensitivity to initial conditions, the horizon of predictability of internal variability is very limited.

    Any factor that can affect climate, but which climate does not affect, I think could reasonably be considered an external factor. The orbit of the earth is a good example. We have a good understanding of the earth’s orbit; we have good evidence that it will be quite predictable out to perhaps a hundred million years from now, and that prediction does not require knowledge of climate. So I think this can be considered an external factor. Note that a change in an external factor will influence the climate, and send the climate on a different trajectory to that which it would have followed had the external factor not changed.

    I have talked about radiative forcing but it is not, I think, a terribly useful concept for climate diagnostics. Radiative forcing as I understand it is simply a change in irradiance. In this sense, it is a simple and measurable value – I can place a pyranometer outside my door on a sunny day, and if it clouds over I can measure a change in irradiance of the order of hundreds of watts per square metre.

    In this sense I need to clarify my comment. Of course, there will be a change in irradiance, because there are changes in irradiance all of the time. What I refer to when I talk about radiative forcing here is really what we might consider the external controls that climate scientists believe drive changes in equilibrium, such as solar output or GHGs. If we make no “artificial” changes to these, and just allow the climate to run, I think we would still see LTP (but with changes of irradiance happening all of the time).

    I think the confusion is increased because although radiative forcing is in a sense measurable, it is a very poor climate diagnostic and it is often used (misused?) with the assumption that climate is linearly related to a suite of radiative forcings at the climatic scale. I think this assumption is problematic and can easily mislead. It becomes even worse when some changes of irradiance are declared as “feedbacks” rather than forcings when, in practice, essentially all internal variables are “feedbacks” in the context of a complex, non-linear system exhibiting LTP.

  • Spencer Stevens

    Demetris, you are too kind; despite English being your second language, your English is often better than mine (I see many grammatical errors on re-reading my own posts). But your insight into physical processes is far better than mine, and I am still very much learning from your work; it is a privilege to do so.

  • Spencer Stevens

    - do I understand this to mean that historical runs from climate models DO show LTP “on all time scales” with these sort of Hurst exponents? Demetris Koutsoyiannis seems to think that’s not possible.

    Arthur, this is unfair; Demetris has not stated it is “not possible” to get LTP from models. Indeed, Demetris has published a “toy” model of climate that exhibits LTP.

    What we have talked about are specific cases, e.g. the example Rasmus kindly provided, which is an example of a model which clearly does not exhibit LTP.

  • Paul S

    I’d like to ask Armin Bunde a couple of questions concerning his trend significance statements for sea surface temperature:

    1) When you calculate significance from your statistical LTP model, are you effectively asking the question: “Could the observed trend have occurred simply as a result of LTP, even in the absence of contemporaneous forcing?”

    2) If that is the case presumably there would be a trend magnitude which would be significant? Can you say what that would be?

    3) Can you say whether it is the rate of change or the absolute amount of change which is more important to gaining significance in your model? For example, would a 1.2°C SST warming over 200 years (a continuation of the past 100-year trend) remain insignificant? Would a 1.2°C SST trend over the next 100 years be significant?

    4) Do you accept Demetris Koutsoyiannis’ point that separating land and sea trends in the way you have is not physically realistic? For reference see Compo and Sardeshmukh (2009) and a blog post by Isaac Held for a concise summary.

    5) Last, but definitely not least: The graph at the top of Rasmus Benestad’s opening blog shows a pretty standard idea of expected climate evolution in response to known forcings. The scale of the plotted response is somewhat arbitrary but it is clear that the proportional shape and timing of the observed temperature evolution is a very good match for our expectations based on prior physical understanding of historical forcings.

    Unless you want to say differently (that would be the first sub-question), I can’t see a reason why a trend induced by LTP would favour the path seen in observed global average temperature. So under an LTP statistical model the observed temperature evolution would simply be one possibility out of hundreds of very different paths, with the same probability of occurrence as any other.

    You mention that GCMs reproduce the LTP behaviour described, but they also unanimously expect increasing SSTs over the historical period in which we have observed increasing SSTs. If GCMs do reproduce LTP behaviour and you’re using LTP to inform your significance calculations, surely the unanimity of trend direction in GCM historical runs should play a role in our understanding of trend significance.

  • Bart Verheggen

    Dear Spencer,

    Regarding the nature of internal variability, I wrote that “in all likelihood” or “usually” it involves a redistribution of energy within the earth system. I included these caveats so as not to 100% exclude the possibility of spontaneous changes to e.g. planetary albedo or other factors that directly impact the energy balance (rather than via the surface temperature). The reason that I think such processes are very unlikely is that cloudiness and humidity are fast feedbacks in the climate system: they respond quickly and strongly to changes in temperature. I do not know of a plausible mechanism to spontaneously increase or decrease the earth’s cloudiness over multi-decadal timescales. Moreover, the fact that nights have warmed more than days precludes the main mechanism being a change in albedo (or solar irradiation for that matter).

    From your recent comment I gather that what in climate science is known as a feedback, you classify as internal variability.

    You write: ” I think once LTP is present, it becomes necessary to interpret climate through the presence of LTP, since LTP is so pervasive.” For trend significance, yes, if possible. Though the difficulty there is that we do not know the LTP characteristics of the unforced climate (since forcings are omnipresent in the data), so circular reasoning is an important pitfall.

    You also write: “If we make no “artificial” changes to these, and just allow the climate to run, I think we would still see LTP (but with changes of irradiance happening all of the time).”

    I think the omnipresence of forcings makes this impossible to know for sure. Rasmus showed that a GCM without forcings does not exhibit LTP. To what extent this is indicative of the real world, we don’t know. Comparing it to real world data would require taking into account that the latter is impacted by forcings. The introductory text of this blog refers to an attempt to do so by comparing the power spectra of models and observations (fig 9.7 in AR4). It can imho not be done by a straight comparison of the amount of LTP in real world data (impacted by forcings) and model data (without forcings): apples and oranges.

  • Frank

    Rasmus, Arthur and others rightly point out that LTP must be derived from the physics of the earth’s climate system. However, the earth’s climate system is known to be chaotic. What is the difference between LTP and a chaotic system that exhibits variation on many different time scales?

    We do know that various locations on the planet exhibit long-term variation in climate, such as the MWP in the North Atlantic region. Regional variation on the century time scale (probably of greater magnitude than recent global changes) certainly has taken place – driven by the laws of physics. There is disagreement about the global magnitude of this variation, but physics certainly causes phenomena that exhibit LTP on the century time scale in regions. If restoration of normal conditions simply required transporting less or more heat up to the characteristic emission altitude, such deviations wouldn’t persist for long.

    Before 2000, most climate models required flux adjustment (holding the modeled climate near the present-day climate) during spin-up, or the system (driven by physics) would settle into a quasi-equilibrium state far from today’s. Doesn’t the whole spin-up process (with and without flux adjustment) suggest that variation on decade-to-century time scales can be driven by physics? For that matter, “committed warming” taking place over decades is another example of physics.

  • Jos Hagelaars

    @Jim Cripwell 2013-05-02 13:44:16
    From the website you link to:
    http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
    The slope of the red line, the daily global sea ice anomaly, is -40000 sq km/year.
    The global sea ice extent is dropping.

  • Lennart van der Linde

    Rasmus said at the end of his guest blog:
    “There may be some irony here: The warming ‘hiatus’ during the last decade is due to LTP-noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity – because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with stronger transient climate response (TCR, a rough indicator for climate sensitivity).”

    I’m wondering what Armin and Demetris think of this statement. Do you agree, and if not, why not?

  • Eli Rabett

    A much improved dialog.

    That being said, statistical analysis of a time series is fraught with traps for the unphysical, and is dependent on the nature of the data itself. A nice example of this is the comment by Hendry and Pretis on Beenstock, Reingewertz and Paldor’s “Polynomial cointegration tests of the anthropogenic impact on global warming”. For example, entropy maximization is subject to energetic constraints, which, in turn, are subject to energy flows into and out of the system and between components of the system, things which climate models at least attempt to follow. A model which simply maximizes entropy for the system as a whole will underestimate it, because the sum of the entropy gains/losses in the several components, oceans, atmosphere, biosphere etc. (the earth is an open system after all), has to be larger.

    It is true that 1 + 1 = 2, but 1. kg + 1. kg = 2. kg is a very different beast. For the former you need to know about integers. For the latter about scales, standards, physics, real numbers and other stuff.

  • Spencer Stevens

    Dear Bart,

    Firstly many thanks for responding to me down here in the cheap seats :). The structure of the debate does sometimes leave me feeling a little disconnected from the main debate (although I am grateful for Demetris’ references to my comments to introduce them).

    You say:

    Regarding the nature of internal variability, I wrote that “in all likelihood” or “usually” it involves a redistribution of energy within the earth system. I included these caveats to not 100% exclude the possibility of spontaneous changes to e.g. planetary albedo or other factors that directly impact the energy balance (rather than via the surface temperature).

    I find your response here a little confusing. Both you and Rob put forward arguments against LTP on the basis of an equilibrium and a restoration to that equilibrium. In fact, if the assumptions of a static equilibrium and restoration are correct, this would be a strong case for STP fluctuations in climate. So it is an interesting point of discussion.

    However, the fact is that the equilibrium is not static, but is imprinted with the dynamics of the system, since albedo is an internal variable of the climate system. If the system dynamics are LTP, then LTP will be imprinted on the equilibrium point. If the dynamics are STP, then the equilibrium would be STP.

    So the question of “usually” or “likely” becomes irrelevant. Either the equilibrium is static, or it is a function of the internal dynamics. Since we appear to be agreed that it is a function of the internal dynamics, the discussion has to move on to those internal dynamics. And this, of course, is why I chose clouds, because clouds are popular examples of fractals; but you seem to argue clouds are not good examples of fractals, because they have a characteristic time constant.

    I was very careful with my definitions of radiative forcing, internal variability and external factors. They are all, I believe, objective definitions, and therefore scientifically meaningful. I am aware of the distinction that climate scientists make between “radiative forcings” and “feedbacks”. Unfortunately this distinction rests on a metric which does not exist for LTP systems.

    The distinction between a “forcing” and a “feedback” is typically made on the basis of a time constant. For example, you refer to a “fast feedback”, with reference to a speed (and therefore a time). In this example from RealClimate, they state:

    the issue that makes it a feedback (rather than a forcing) is the relatively short residence time for water in the atmosphere (around 10 days).

    The reason this is a problem is that for a system with LTP dynamics, there is no defined time constant. In fact, by introducing a time constant, we immediately insert an assumption of STP into our analysis. Under these circumstances, it is no surprise that we conclude STP dynamics in the climate, because we have assumed it to begin with; the argument is tautological.
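    [Editorial note: the contrast between a process with a time constant and one without can be made concrete. The sketch below, an illustration with arbitrary parameter values, compares the theoretical autocorrelation of an AR(1) ("STP") process, which decays exponentially and therefore has a characteristic time constant, with that of fractional Gaussian noise ("LTP"), which decays as a power law and has none.]

```python
import numpy as np

H, rho = 0.8, 0.7                       # illustrative parameters
lags = np.array([1, 10, 100, 1000], dtype=float)

# AR(1): autocorrelation rho**k decays exponentially, which is exactly
# what defines a characteristic time constant, -1/log(rho).
acf_ar1 = rho ** lags

# fGn: autocorrelation decays as a power law at large lags,
# ~ H*(2H-1)*k**(2H-2), so no single time constant describes it.
acf_fgn = H * (2 * H - 1) * lags ** (2 * H - 2)

for k, a, b in zip(lags, acf_ar1, acf_fgn):
    print(f"lag {k:6.0f}: AR(1) {a:.2e}   fGn {b:.2e}")
```

    At lag 100 the AR(1) autocorrelation is effectively zero while the fGn autocorrelation is still of order 0.1, which is why imposing a time constant amounts to assuming STP from the outset.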

    Even worse, the RealClimate article supports this assumption with a run from a GCM – which, as Rasmus has already shown, does not produce the LTP behaviour we see in observations, as discussed by Demetris. This is compounded by a second error. As discussed above, the equilibrium is not static. But in their GCM test, they remove the water vapour content, displacing the system far from the equilibrium, and assess how quickly it returns to the equilibrium on a short timescale. This is an error also: the LTP is imprinted on the movement of the equilibrium point (amongst other things), and if you move far from that equilibrium point, you hide that behaviour.

    In summary: in assuming that water vapour has a time constant and is a “fast feedback”, we introduce STP into our assumptions, and as such finding STP in our conclusions is uninteresting and tells us nothing about whether STP or LTP is a better model for climate. However, I will add a second post detailing why I strongly disagree with your specific physical claims regarding humidity.

  • Spencer Stevens

    Bart, you say,

    The reason that I think such processes are very unlikely is that cloudiness and humidity are fast feedbacks in the climate system: They respond quickly and strongly to changes in temperature. I do not know of a plausible mechanism to spontaneously increase or decrease the earth’s cloudiness over multi-decadal timescales.

    Bart, it is not difficult to find that humidity and cloudiness are a function not just of temperature, and not a linear function of temperature, but a non-linear function of the complete climate state. Although, to be fair, I am not a hydrologist; but we do have an expert hydrologist on the panel, and I am sure he will correct any errors that I make!

    Although humidity and cloudiness are related to temperature, it is immediately obvious from observations that more than temperature alone influences these parameters. Very cold places must be dry; the driest locations on earth are the deserts of Antarctica. However, if we look at a list of the driest places in the world, they are far from uniformly cold: Death Valley in the US and Aoulef in Algeria are both extremely dry and extremely hot. Yet tropical rainforests are hot and humid.

    What we see is that the simplistic view that humidity responds quickly to temperature, and therefore does not fluctuate on a multi-decadal scale, is quite wrong. Humidity and cloudiness are a function of many things: soil moisture content, evaporation, evapotranspiration, precipitation, etc. And land use, and therefore things like soil moisture content, are far from constant on the multi-decadal timescale; therefore humidity and cloudiness are far from constant on a multi-decadal timescale.

    Now you could argue that things like land use changes are forcings. But they are unpredictable forcings, sensitive to initial conditions, that exhibit LTP. And we know that even without human influence, desert regions and the like are far from static. They change at all timescales, naturally.

    Sadly, the assumptions that humidity and cloudiness have a characteristic time constant, and respond narrowly to temperature on a short time frame in a manner that is linearly separable from the rest of climate, are not consistent with what we know about the earth’s climate system.

  • Spencer Stevens

    Bart, you say:

    All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP can not be used to distinguish forced from unforced changes in global avg temperature. The omnipresence of both unforced and forced changes means that it’s very difficult (if not impossible) to know the LTP signature of each.

    I confess I am astonished at this claim. It makes no sense to me.

    There is no “forced LTP” and “unforced LTP”. There is the LTP that is present in the climate system. And that LTP has defined characteristics, parameters which define what we might term “natural” climate variability.

    The uncertainty in estimating these parameters is driven not by the difficulty of separating two things which are not different, but by the difficulty of estimating the parameters of the LTP from historical data that is either of limited length (e.g. instrumental records) or burdened with confounding factors or artefacts (e.g. paleoclimate reconstructions).

    It is quite possible to detect changes in climate that exceed the range of typical behaviour, and even assign a probability to it – as both Demetris and Armin have done here.

    I think I have said enough for now, but as a parting note I would like to echo part of Armin’s very nice comment here, which expresses the heart of the issue more succinctly than my efforts:

    Of course, it is nice to talk about the effect of the different forcings. But since everything is interwoven in a linear and even non linear way it is nearly impossible to separate the effects from the different forcings on the LTP in a satisfying manner. The pragmatic way is to learn what LTP is, to learn and even improve the detection methods that can separate the natural fluctuations from external deterministic trends and then use the method we developped to estimate the effect of the trend. This way is doable and is actually common in climatology. The difference is only that in previous attempts the natural fluctuations have been considered incorrectly as Short Term Persistent, which significantly overestimates the trend significance.

  • Arthur Smith

    I’d like to second Rob van Dorland’s well-stated question about “statistics1” vs “statistics2”. When Demetris Koutsoyiannis says there is “no dichotomy” between physics and statistics, I have no idea what he is talking about. The two words have different meanings. Just because there are statistical approaches within physics (I have taught a Statistical Mechanics class), that’s no more significant than noticing there is such a thing as ‘chemical physics’, and yet chemistry and physics are two different subjects. Mathematics and physics mean different things, and mathematical physics is a particular domain of its own. Doing statistics doesn’t mean you’re doing physics, and an explanation via statistical model is not necessarily at all a physical explanation. The ancient world’s epicycles allowed precise modeling of planetary motions, but those mathematical tools did not provide the physical explanatory (and predictive) power that came with Newton’s inverse square law of gravitation.

    So what is the physical explanation for LTP in the Earth system? Since it is not seen in unforced climate models, it must be some other physical element. One possibility, discussed by Bart, Rob, Rasmus, etc above, is that the observed LTP comes from the forcings, and a good piece of evidence for that is that forced climate models evidently DO show LTP (as Armin Bunde notes, as I queried above). What other option is there? Spencer’s clouds and humidity don’t help, because climate models *do* include clouds and humidity changes. Perhaps there is something wrong in the way those models handle them – what specifically would need to be changed to fix that problem? Or perhaps it is something about ocean dynamics that climate models get wrong? Or perhaps it is a whole range of things? But the reluctance of Demetris and Spencer to distinguish between forced and unforced changes suggests that no plausible physical model for unforced LTP has ever been found.

  • WebHubTelescope

    I agree with the Statistics_1 vs Statistics_2 statement. For example, the Ornstein-Uhlenbeck (O-U) process is the physics model behind the statistical AutoRegressive model. The O-U process will show persistence and provide a physical explanation for what may be happening, and, yet, that is but a simplified explanation of a more elaborate climate model.

    So I suggest that before suggesting H-K dynamics, one first go through the basic statistical physics, which could include MaxEnt etc., if applicable. This will lay the foundation for departures from first-order physics models and statistical mechanics.
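    The O-U/AR(1) correspondence mentioned above can be made concrete. A minimal sketch, with all parameters (theta, sigma, dt) purely illustrative: the exact discretisation of an Ornstein-Uhlenbeck process at step dt is an AR(1) with coefficient phi = exp(-theta*dt), and its correlations decay exponentially, i.e. short-term persistence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck: dx = -theta * x dt + sigma dW.
# Its exact discretisation at step dt is an AR(1):
#   x[n+1] = phi * x[n] + eps,  with phi = exp(-theta * dt)
theta, sigma, dt, n = 1.0, 0.5, 0.1, 200_000   # illustrative values
phi = np.exp(-theta * dt)
# noise scale chosen so the AR(1) has the O-U stationary variance sigma^2/(2*theta)
s = sigma * np.sqrt((1 - phi**2) / (2 * theta))

x = np.empty(n)
x[0] = 0.0
eps = rng.normal(0.0, s, n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + eps[i]

# The sample lag-1 autocorrelation should match phi (short-term persistence)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"phi = {phi:.3f}, sample lag-1 autocorrelation = {r1:.3f}")
```

    The point of the sketch is only that this simple physics model produces persistence with a single characteristic timescale (1/theta), in contrast to the multi-scale persistence of H-K dynamics.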

  • Arthur Smith

    I just read a few of Armin Bunde’s papers, and I’m quite confused…

    For example, this 2005 PRL – http://prl.aps.org/abstract/PRL/v94/i4/e048701 – uses the MBH’99 Northern Hemisphere temperature reconstruction as one of the examples of long-term persistence, among several other long-term climate-related records (including the Nile river level one Demetris Koutsoyiannis cited). The LTP power-law exponent, gamma, in the case of MBH’99 was very low – just 0.1, indicating very slow decay of the correlations. This was with their “DFA2” approach, which detrends by removing a quadratic polynomial fit from the data. However, removing a quadratic polynomial (or any other polynomial trend) is quite different from a removal of external forcings from the picture. We know that the long-term temperature record has been subject to a series of changes in forcings associated with Earth’s orbit and the sun (sunspot records provide at least some long-term record of that change), as well as greenhouse gas changes. Those have not followed any simple polynomial pattern – in fact they have fluctuations on a wide range of timescales in themselves.

    So while DFA certainly removes the effect of any steady trend from the data, I don’t see how it can be claimed to isolate “natural” behavior as distinct from externally forced behavior. Yes, you could call variations in orbit and sun “natural” if you want – but you could call variations in greenhouse gases just as “natural” (humans are after all part of this world). The interesting question that the LTP analysis appears to address, but which I don’t see how it could, is how the Earth system behaves in isolation from changes in external or human influences. Because that isolation has not been done in this sort of analysis – at least I don’t see it.
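    For readers unfamiliar with the method, here is a minimal sketch of the detrended fluctuation analysis procedure referred to above, in its standard textbook form (the window sizes and the white-noise input are illustrative; order=2 corresponds to the "DFA2" per-window quadratic detrending).

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis: F(s) ~ s**alpha.

    order=2 removes a quadratic trend per window ("DFA2"), so slow
    polynomial trends do not inflate the scaling exponent."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)   # local polynomial fit
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    # slope of log F(s) vs log s estimates the scaling exponent alpha
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
scales = np.array([16, 32, 64, 128, 256])
alpha = dfa(rng.normal(size=20_000), scales)
print("alpha for white noise ~", round(alpha, 2))   # ~0.5 for no persistence
```

    Note what the detrending does and does not do: it removes smooth polynomial trends from each window, but, as argued above, it does not remove the imprint of non-polynomial forcing histories from the record.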

    What I think we’re after is something you might call the Green’s function of the Earth – how does the Earth in itself respond over time to an initial delta-function perturbation? Does that response, after initial transients, decay exponentially (AR-1) or does it decay more slowly with a long tail (LTP)? Climate models evidently show exponential decay. What are they missing if the real answer is LTP? And how can we actually come up with any conclusive evidence from observations that LTP is the right answer? I don’t see it in the discussion here.
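    The exponential-vs-long-tail distinction posed in this question can be made concrete with a toy comparison (phi and gamma here are illustrative values, not fitted to any climate record):

```python
import numpy as np

# Two hypothetical response decays after an initial perturbation:
# AR(1)/exponential (what climate models evidently show) vs power-law (LTP).
lags = np.arange(1, 101)
phi = 0.8                       # assumed AR(1) coefficient
ar1 = phi ** lags               # exponential decay
gamma = 0.3                     # assumed power-law exponent
ltp = lags ** (-gamma)          # slow, long-tailed decay

# By lag 50 the exponential response is negligible,
# while the power-law tail is still substantial.
print(ar1[49], ltp[49])
```

    The practical import: under LTP, a perturbation from centuries ago still contributes noticeably to today's state, which is precisely why record length matters so much for distinguishing the two.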

  • Dan H.

    Lennart,
    The undulations from natural variability (or LTP) do not necessarily indicate high climate sensitivity. While high climate sensitivity attributed to GHGs does require strong amplification from feedback mechanisms, natural causes do not. Read the section about circular reasoning and models. Just because a model with high climate sensitivity shows more pronounced natural variability does not mean it matches reality. Also, separating natural from man-made causes may not be possible, as described earlier, since they may be influencing each other.

  • Paul S

    Bart,

    I’ve tracked down Arxiv versions of papers to which Armin has referred (I think):

    Volcanic forcing improves Atmosphere-Ocean Coupled General Circulation Model scaling performance
    Power-Law Persistence in the Atmosphere: Analysis and Applications

    The basis for Armin’s statement about GHG and LTP can, I think, be best traced to Figures 3 and 4 of the volcanic forcing paper, though the other paper is also relevant. The figures show results of Armin’s fluctuation analysis applied to outputs from a GCM (NCAR PCM) from a variety of scenario runs involving different forcing types, including no forcing (i.e. the control run).

    Looking at the GHG-only run in Figure 3, we can see the derived LTP exponent is not “improved” – not closer to the typical 0.65 value from observations – compared to the control run. I would suggest this is the source of Armin’s contention that GHG forcing does not contribute to LTP. By contrast, volcanic forcing, as the paper title suggests, tends to shift the exponent upwards, closer to observed values. However, it isn’t a natural vs. anthropogenic thing: your example of solar irradiance is even less LTP-potent than GHGs, according to this analysis.

    I think there is some physical basis for this state of affairs. Stouffer et al. 2004 showed that halving CO2 induced a longer response timescale than doubling CO2 in a GCM, so there could be some timescale asymmetry for cooling versus warming. The characteristics of volcanic-induced forcing could also play a significant role – there is an abrupt large negative forcing followed by an abrupt positive forcing. The transient effect at the surface appears to dissipate within a decade, but that doesn’t change the fact that a cold pulse has effectively been injected into the oceans, which we can reasonably expect to exert an influence over a longer timescale.

  • Lennart van der Linde

    Demetris,

    You say:
    “[H]ow do we know about “GHG-driven trend”? Because the climate models tell us so. But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” [...] Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.”

    Maybe you’ve seen Guemas et al 2013? They claim to make a “retrospective prediction” of the ‘hiatus’:
    http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate1863.html

    They conclude:
    “Our results hence point at the key role of the ocean heat uptake in the recent warming slowdown. The ability to predict retrospectively this slowdown not only strengthens our confidence in the robustness of our climate models, but also enhances the socio-economic relevance of operational decadal climate predictions.”

    Is this the kind of prediction you’re asking for above, or do you mean something else? How convincing do you find this ‘retrospective prediction’?

    Also Meehl et al 2011 may be relevant, who claim to find “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods”:
    http://www.nature.com/nclimate/journal/v1/n7/full/nclimate1229.html

    They conclude:
    “The model provides a plausible depiction of processes in the climate system causing the hiatus periods, and indicates that a hiatus period is a relatively common climate phenomenon and may be linked to La Niña-like conditions.”

    How would you comment on this conclusion?

  • Lennart van der Linde

    Armin,

    You say:
    “[I]n addition to the natural forcings, there are anthropogenic forcings, mainly by GHG but also urban effects must be considered. The question is now, what is the effect of these forcings on temperature. This is a highly relevant question and regards the climate sensitivity. I am not expert in this, but from my colleagues who are experts I learnt that this is a difficult and not fully settled question, in particular when the temperature evolution of the past 15y are considered.
    A PRAGMATIC way to approach this problem is what we are doing. We assume there are natural fluctuations (among others driven by the natural forcings) and a probably anthropogenic monotonously increasing part which is kind of deterministic and which we call external trend.”

    Demetris said earlier:
    “[H]ow do we know about “GHG-driven trend”? Because the climate models tell us so. But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” [...] Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.”

    Do you agree with this statement? If not, why not? If you do agree, could you also comment on the two studies I cited in my above comment to Demetris (Guemas et al 2013; Meehl et al 2011)?

  • Arthur Smith

    I note that the editors initially laid out a series of questions which have been mostly indirectly answered in the discussion here, but I haven’t seen direct answers. Let me repeat the questions and my view on where we are at this point:

    1. What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?

    - which is two questions, and the phrase “climate change” isn’t defined (“climate always changes”!). I think all are agreed on at least the rough meaning of LTP: power-law decay of (detrended or otherwise filtered) correlations in observed quantities. It is relevant for the detection of climate change simply because a number of different climate-relevant datasets show some form of LTP, although the power laws (e.g. Hurst exponents) differ between observables for reasons that are not at all clear. The existence of LTP has to be factored into analysis of the significance of any deviations of observables from historical patterns, since it changes the underlying statistics, at least in principle. However, as Rasmus points out, trying to detect climate change from observations properly should include ALL the observations – there are many things besides surface air temperatures, all with trends pointing to recent warming. If LTP decreases the significance of one or two of these, it doesn’t make much difference to the overall picture – a proper Bayesian analysis taking into account all the evidence is probably the best approach, if we really want to decide whether we have enough evidence to “detect” climate change.

    2. Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?

    Again two questions. On the first, Rasmus Benestad answered no; I’m not sure about the others. To me, “detection” should be based on Bayesian arguments that include any other explanatory factors we have (including physics), so no, it can’t be just statistics. For example, global temperature trends are much clearer if you factor out the phase of the ENSO cycle from annual temperature values; just looking at the statistics of temperatures without including our knowledge of ENSO (or volcanoes, or other recent forcing factors we can measure) is throwing out useful information. So, if you can “detect” purely with statistics, great, but “detection” ought to include all that we know, if there’s any uncertainty on the pure statistics side.

    The second question goes right to the heart of this debate we seem to have been having in the comments on “forced vs unforced” change. I note the editors specifically mentioned this regarding attribution at the start:

    This has consequences for attribution as well, since long term persistence is often assumed to be a sign of unforced (internal) variability

    But there’s been essentially no further discussion of attribution in the comments. It seems very clear from what Bunde and Koutsoyiannis have stated that the observed LTP is *NOT* a sign of unforced (internal) variability, but conflates both unforced and historical forced changes. There is no way to get around that. In fact, from the discussion it looks like volcanic activity is a major component of the observed LTP in temperatures. So an LTP statistical model bears *NO* relation to our understanding of internal variability.

    3. What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?

    LTP has been argued for here, but I’m afraid this is very unconvincing because present-day forcings and conditions like ENSO are not being included in the analysis. See above discussion on “detection”. If you insist on using a purely statistical model for detection you’re tying your hands behind your back – but I suppose some people like to do this.

    4. How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?

    See above discussion – I don’t think any meaningful inference can be made about LTP regarding internal variability for the Earth as a whole, because the external influences largely drive changes in the variables and are simply not being controlled or accounted for in the statistical analysis. For any time series that is properly controlled, for a Hurst exponent of 0.65 or so to be distinguished from the uncorrelated-noise value of 0.5, you would likely need a range of at least 3 orders of magnitude in timescales – that is, 1000 years for annual data (though that depends on how uncertain/variable the data is too). I’m doubtful of any claims regarding shorter time series than that unless the observed exponent is a lot higher.

    5. Based on your statistical model of preference do you conclude that there is a significant warming trend?

    This question seems to assume positive answers to previous questions, which I don’t think are justified. Nevertheless it’s been answered several times here that some time series do show significant warming even when including LTP.

    6. Based on your statistical model of preference what is the probability that 11 of the warmest years in a 162 year long time series (HadCrut4) all lie in the last 12 years?

    Benestad I believe answered this qualitatively:

    The probability that this fit is accidental is practically zero if we assume that that the temperature variations from year-to-year are independent of each other. LTP and the oceans inertia will imply that the degrees of freedom is lower than the number of data points, making it somewhat more likely to happen even by chance.

    Koutsoyiannis showed a number of graphs that appear to agree:

    Under two statistical null-hypotheses, autoregressive and long-memory, this probability turns to be very low: for the global records lower than p = 0.001…

    Bunde doesn’t seem to have addressed this question, only looking at trends (and not at combined land-ocean, but separately at ocean and land temperatures); however he stated a high degree of significance in the warming trend over land.

    7. If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?

    This is an odd question which I don’t think anybody addressed – maybe because all agree that climate change IS “detected” by these statistical methods after all. Maybe the editors want to rephrase it, given the answers provided for the others…
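    The record-length point raised under question 4 can be sketched numerically. This is a rough illustration only, not the estimator any participant used: a crude spectrally-shaped surrogate stands in for fractional Gaussian noise with H = 0.65, and a simple aggregated-variance estimate of H is computed at two record lengths. All block sizes and lengths are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fgn(n, H):
    # Rough fGn surrogate: shape white noise so the PSD goes as
    # f**(1 - 2H). Illustrative only, not an exact fGn generator.
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                  # avoid divide-by-zero at DC
    amp = f ** ((1 - 2 * H) / 2)                 # amplitude = sqrt(PSD)
    spec = amp * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def hurst_aggvar(x, sizes):
    # Aggregated-variance estimator: Var(block means of size m) ~ m**(2H - 2)
    v = [np.var(x[: len(x) // m * m].reshape(-1, m).mean(axis=1)) for m in sizes]
    slope = np.polyfit(np.log(sizes), np.log(v), 1)[0]
    return 1 + slope / 2

sizes = [4, 8, 16, 32, 64, 128]
ests = [hurst_aggvar(fgn(n, 0.65), sizes) for n in (1_000, 100_000)]
print("H estimates for n = 1,000 and n = 100,000:", [round(e, 2) for e in ests])
```

    On short records the estimate scatters widely run-to-run; only on the long record does it settle near the true value, which is the thrust of the "1000 years for annual data" remark above.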

  • Lennart van der Linde

    Demetris,

    Thanks for your reply and your question. Since I also only have access to the abstracts, I passed your question on to Virginie Guemas herself.

    I hope she’ll reply. I did find the first page of her paper on the net, where she talks a bit about her methods:
    http://www.readcube.com/articles/10.1038/nclimate1863

    Maybe this can already (partly) answer your question?

  • Lennart van der Linde

    Demetris,

    The full paper of Meehl et al 2011 is here:
    http://www.cgd.ucar.edu/cas/Staff/Fasullo/my_pubs/Meehl2011etalNCC.pdf

    But I suppose this one is less relevant to your question, since they only studied projections for the rest of this century?

    For your information: I’m just an interested layman, not a professional scientist, so the technical details of this discussion are beyond my comprehension. I’m only trying to understand the implications of the differing contributions as well as I can, so that’s why I ask my questions.

  • Lennart van der Linde

    Armin, thanks for the clarification.

    In reply to Demetris’ question Virginie Guemas writes:
    “We use a general circulation model which comprises millions of lines of codes and thousands of parameters. Each parameter is tuned on a restricted observation campaign and then we run the model and see how well it performs. In this sense, we apply some techinque similar to the split-sample but we actually underdetermine contrary to the split-sample technique.”

    She’s working on predictions for the future, but expects them to not be very good yet.

    Maybe Demetris has further comments after reading the paper that Guemas was kind enough to send us a copy of?

  • J.Seifert

    LTP… long-term persistence… a highly important topic! We can agree on the
    following:
    (1) The longer the time frame over which the model is persistent, the better
    the model quality. What does “long” mean? A millennium is insufficient, because centennial
    cycles may be excluded; at least 2, or better 3, millennia are required in order to
    identify them. Models based on a 150-year time span (1750-2000) are pure hogwash when claiming
    LTP for millennia…
    (2) The idea of applying statistics in order to filter out a trend WITHOUT knowing the
    underlying driving forces of the trend is a dream of statisticians… for example the recent
    Marcott et al paper, trying to filter a hockey stick out of spaghetti graphs:
    http://i49.tinypic.com/lbogh.jpg or http://47.tinypic.com/2uylgh3.png
    This is not even pseudoscience, but the lowest of the low…
    The real climate drivers are the five macrodrivers, as given in the paper in (3).
    (3) The notion of a linear trend line… more hogwash. There cannot be a linear
    trend in climate, see 27-37 ka BP or the Holocene temp evolution in
    http://www.knowledgeminer.eu/eoo_paper.html
    The temp evolution is not linear… but curvilinear.
    (4) A non-linear and CURVILINEAR trend is THE real temp trend. For this reason, the
    present temp plateau of the past 15 years is in force. Not in force is the
    linear trend of the 20 years 1980-2000, which does NOT continue with a 0.2 C increase
    per decade, as claimed by the IPCC in AR4, WG1, chapter 2. The IPCC claim of a 0.2 C
    increase per decade is an outright LIE by climate villains.
    (5) The real trend is sinusoidal, as shown in the above-quoted paper.
    (6) The long-term persistence analysis over the ENTIRE Holocene shows the detailed
    natural 60-year Nicola Scafetta cycle for over 10,000 years. Fabricated hockey-stick
    trends exclude this most important 60-year cycle (described today as a
    quasi-cycle of the PDO and AMO cycles).
    Therefore, linear temp trend lines are pure climate confusion.
    Best regards, JSei.

  • Eli Rabett

    Armin Bunde said May 10, 2013 at 5:24 pm

    “Regarding the length of a record, that you need to distinguish between white noise and LTP with H=0.65: For this simple distinction, you need certainly much less than 500 data points, which is about 40y monthly. It is much more difficult to distiguish between LTP and an STP process. We have found recently that 600 data points (50y monthly data) are sufficient when using DFA2. The larger the Hurst exponent is, the easier the distinction.”

    This, of course, is a strong argument against placing much value on any single one-dimensional time series – which, while the basis of many published statistically based arguments, is merely one averaged output of a physical system. Serious attribution studies require judgements about the entirety of the outputs. Of course, no GCM is perfect, but one gains confidence in averaged results such as global temperature time series by looking at the many well-known features that emerge from the GCMs, such as circulation patterns. No such thing can be said about one-dimensional statistical models: one can certainly find multiple statistical explanations for just about any one-dimensional time series of finite length. (Also, it is well known that the global temperature records do not display white noise, so that is a bit of a red herring.)

  • Spencer Stevens

    I think there are still a number of fundamental misunderstandings at the moment about the position of those who favour LTP-type natural variability. I suspect I will now add to the confusion, but I’ll try to explain my view of it as best I can.

    Firstly, with regard to determinism, stochastics and chaos. Systems exhibiting chaotic behaviour are intrinsically deterministic – they follow a trajectory, and the deterministic trajectory can be measured after the event (and even predicted a very short way out). But the extreme sensitivity to initial conditions and the exponential divergence of trajectories arbitrarily close to one another in phase space mean that beyond a time horizon we cannot deterministically predict the trajectory of the system. This uncertainty about the future is the essence of why randomness arises from deterministic systems.

    We can characterise this uncertainty using stochastic, rather than deterministic, modelling. But the stochastic modelling is not “just statistics” – it is imprinted on the very physics of the system, through the chaotic solution to the equations governing the system in question. And again, my signal processing background means I tend to prefer to picture this as a power spectral density function; others prefer autocorrelation functions, but they amount to the same thing – a picture of the uncertainty.

    This uncertainty is what we represent by the stochastic function. Rob asks where we see deterministic and chaotic behaviour; of course the answer is that climate exhibits both of these simultaneously, as the two are related. Rasmus asks if it is possible LTP proponents are mistaking LTP for chaotic behaviour; but the point of LTP is to characterise the unpredictable component of the chaotic solution to the equations governing the system.

    The discussion has been interesting, but I think the discussion is being severely held back by these misunderstandings. It is only natural that such misunderstanding/miscommunications happen, but I think it is necessary to resolve them before the discussion can move forward; as Demetris notes, his lecture and associated paper “A Random Walk on Water” may help to bridge some of these gaps.

  • Spencer Stevens

    I have an additional comment on the discussion earlier regarding ENSO, but it touches on a point made by Rasmus in a recent comment. I once again will selfishly discuss stochastic models from a power spectral density (PSD) perspective, as this is more intuitive to me due to my signal processing background.

    A brief recap of the PSD functions of different stochastic models: white noise has a PSD independent of frequency. STP or autoregressive systems have a flat PSD up to a characteristic frequency and a decaying spectrum beyond it. LTP processes have a PSD proportional to (1/f^a), with a between 0 and 1. A random walk has a PSD of (1/f^2).
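    This recap can be illustrated numerically. A sketch with illustrative parameters, comparing averaged periodograms of white noise and an AR(1) (STP) series: the white spectrum is flat everywhere, while the AR(1) spectrum is flat only below its characteristic frequency and decays above it.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 16

def avg_psd(x, seg=4096):
    # Average windowed periodograms over non-overlapping segments
    segs = x[: len(x) // seg * seg].reshape(-1, seg)
    win = np.hanning(seg)
    p = np.abs(np.fft.rfft(segs * win, axis=1)) ** 2
    return np.fft.rfftfreq(seg)[1:], p.mean(axis=0)[1:]

white = rng.normal(size=n)

phi = 0.9                      # assumed AR(1) coefficient; break near (1-phi)/(2*pi)
ar1 = np.zeros(n)
e = rng.normal(size=n)
for i in range(1, n):
    ar1[i] = phi * ar1[i - 1] + e[i]

f, p_white = avg_psd(white)
_, p_ar1 = avg_psd(ar1)

# White noise: flat everywhere. AR(1): elevated plateau below the break,
# decaying spectrum at high frequency.
lo = slice(0, 9)               # lowest resolved frequencies (below AR(1) break)
hi = slice(-200, None)         # highest frequencies
ratio_white = p_white[lo].mean() / p_white[hi].mean()
ratio_ar1 = p_ar1[lo].mean() / p_ar1[hi].mean()
print("white  low/high power ratio:", round(ratio_white, 2))
print("AR(1)  low/high power ratio:", round(ratio_ar1, 1))
```

    An LTP series would show a third behaviour: power rising as 1/f^a all the way down, with no plateau at any resolved frequency.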

    People have noted that ENSO does not appear to exhibit LTP. I would agree with this, but only because of the way it is analysed: we force LTP data to become STP. Let me use the classic ENSO index, the SOI, as an example. It is the difference between the sea level pressure at Darwin and Tahiti.

    An important feature of ENSO is that even though models can create ENSO-like behaviour, ENSO is not deterministically predictable. A valiant effort was made a few years ago to make predictions of ENSO in the 3-6 month time window; and I am most grateful to the scientists involved for standing up and admitting that their predictions were without skill beyond a naive baseline. So, a stochastic model is the only meaningful model; but what are we modelling, and what should we use?

    As discussed, LTP proponents argue the internal dynamics of the climate system exhibit LTP behaviour, and sea level pressure is no different. And it is also important to understand that the LTP exists both spatially and temporally. So, we expect a PSD of (1/f^a) where a is greater than 0 and up to 1. But what happens when we take two samples from a system and difference them?

    From a signal processing perspective, a difference like this is the simplest form of high-pass filter. Such a filter will have a spatial bandwidth governed by the separation of the points. Fluctuations with low frequency will be rejected by the filter, and fluctuations above the filter break point will pass through unchanged. The filter gain characteristics are proportional to the frequency below the break point, and unity above.

    An advantage of working in frequency space is that we can determine the resultant power spectral density from the filter frequency response and the PSD of the initial system prior to the filter. Before the filter break point, we multiply the LTP PSD (~1/f) by the filter gain (~f) and we find a constant PSD output from the filter. Above the filter breakpoint, the LTP PSD (~1/f) is multiplied by unity (~1), and we retain the 1/f relationship beyond this point. (NB. I could really do with adding some graphics here to aid this explanation. I may try later.)

    The PSD I have described post filter is exactly the PSD of an STP system. But it is the action of the filter – the method of analysis – that has created the STP behaviour, not something that is an intrinsic part of the system. The exact same is true for the NH-SH example Rasmus provides.

    So from a first-principles analysis, the result from ENSO and from the differencing Rasmus carries out is exactly what we would expect from a system exhibiting LTP. STP requires a single characteristic frequency; this does not arise in nature itself, but it often arises – perhaps in ways not always intuitive to the user – from the analysis we apply to nature. In this case, the single characteristic frequency of ENSO and of the NH-SH difference comes from the effective filter we are applying to the data, not from the data itself.
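    The filtering argument above can be sketched numerically. Assumptions: a synthetic 1/f series stands in for the LTP field, and the two-station difference is modelled as subtracting a copy of the series shifted by tau samples (a circular shift, which is exact for this periodic synthetic signal). The resulting power gain is 4*sin(pi*f*tau)**2, illustrating the break near 1/(2*tau) and the low-frequency suppression described.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau = 2 ** 14, 64           # illustrative length and separation

# Synthetic 1/f ("LTP-like") series via spectral shaping -- illustrative only
f = np.fft.rfftfreq(n)
f[0] = f[1]                    # avoid division by zero at DC
spec = f ** -0.5 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
x = np.fft.irfft(spec, n)

# Differencing two samples separated by tau is a high-pass filter with
# power gain 4*sin(pi*f*tau)**2 and a break near f ~ 1/(2*tau)
y = x - np.roll(x, tau)

fr = np.fft.rfftfreq(n)
px = np.abs(np.fft.rfft(x)) ** 2
py = np.abs(np.fft.rfft(y)) ** 2
gain = py[1:-1] / px[1:-1]     # empirical power gain per frequency bin

low = fr[1:-1] < 1 / (8 * tau)   # well below the filter break
high = fr[1:-1] > 1 / tau        # above the break
print("mean gain below break:", round(gain[low].mean(), 3))   # suppressed
print("mean gain above break:", round(gain[high].mean(), 2))  # passed (~2)
```

    Below the break the 1/f power is filtered away, leaving the flat-then-decaying shape of an STP spectrum, even though the underlying field is LTP: the characteristic frequency belongs to the filter, not the system.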

  • Spencer Stevens

    Armin,

    Many thanks for your comments. I agree with most of what you have said, including the presence and definitions of LTP, and your comments have been thoughtful and interesting. The topic of whether the recent changes are significantly different from earlier periods in the presence of LTP should be the most interesting part of this debate, but it inevitably gets overshadowed by the STP vs. LTP debate, which is less interesting in my view, since every paper I have read that has looked at this question in any depth has concluded LTP.

    I have a small observation to make on your latest comment though:

    Second, as you certainly know there is a long discussion on athmospheric temperatures versus SST. It has been argued by Fraedrich and also by us that the inertia of the oceans is an important factor for the LTP and thus we expected and confirmd that the Hurst exponent is larger for SST than for SAT.

    I do not think this is the best way to understand LTP. LTP is a phenomenon that spans many scales; Demetris’ climacogram shows LTP over 9 scales. The oceans’ inertia exists only at a subset of those scales, so attributing LTP to ocean inertia makes little sense to me. At the 1-day scale, the atmosphere has inertia. At the decadal scale, the atmosphere may be considered fast and the oceans have inertia. At the 10-million-year scale, the oceans may be considered fast and the land masses have inertia. At the billion-year scale, perhaps we may even consider the land masses fast-responding.

    In this context, it makes no more sense to describe the oceans as a factor than to describe the atmosphere, or the continental drift of the land masses, as a factor. Likewise, it makes little sense to describe volcanoes as a “cause” of LTP. All of these perspectives are single-scale perspectives, and to explain LTP we need a multi-scale perspective.

    Which is why I do not agree, at this stage, with the approach of attributing specific factors or causes of LTP in this way. The cause of LTP has to be the dynamic interaction between all of these things, because that interaction is the only thing that operates at all scales.
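    To make the scale argument above concrete, here is a minimal sketch (my own illustration, not taken from any of the papers under discussion) of a climacogram-style estimate of the Hurst exponent H: for an LTP process, the standard deviation of the series averaged over scale k behaves like k^(H-1), so the slope of a log-log fit of sd against scale recovers H. For plain white noise, which has no persistence, the estimate should come out near H = 0.5.

    ```python
    import numpy as np

    def climacogram_H(x, scales):
        """Estimate the Hurst exponent from the climacogram: the standard
        deviation of the scale-k averaged series. For an LTP process
        sd(k) ~ k^(H-1), so the log-log slope of sd vs. scale gives H - 1."""
        sds = []
        for k in scales:
            n = (len(x) // k) * k              # trim so length divides by k
            agg = x[:n].reshape(-1, k).mean(axis=1)  # scale-k averages
            sds.append(agg.std(ddof=1))
        slope = np.polyfit(np.log(scales), np.log(sds), 1)[0]
        return slope + 1.0

    rng = np.random.default_rng(42)
    white = rng.standard_normal(100_000)       # no persistence expected
    scales = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    print(round(climacogram_H(white, scales), 2))
    ```

    The same estimator applied to a strongly persistent series would give a slope much shallower than -0.5, hence H well above 0.5, which is the signature being debated here.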

    Based on this, Fraedrich even concluded that H should decrease continuously when departing from the coast, i.e. stations very far away from the coast line, like Ürümqi in China, should have H=1/2. I found this hypothesis interesting but in the end we could not support it from our own analysis. So there is a discussion on the point you make, but I think it is settled (and the models also show this) that SST has a higher persistence than SAT.

    I also disagree with this reasoning for two reasons. The first I am clear about, the second is based on some thoughts I have which are less clear at this time, so you may choose to ignore my second observation :-)

    My first point: as Demetris has noted, as scale increases, LTP from the oceans must necessarily influence SATs over land at some point. I’d also like to echo my earlier comment; LTP is pervasive at increasing scale. And as I note above, if you explain this difference by the inertia of the oceans, then you have a bigger problem, as the inertia of the land mass (through continental drift) is even greater; surely the persistence over land should be even greater by this explanation. I think we must find other ways to explain LTP than single-scale perspectives.

    My second point, which I accept is a little sketchy at this time: I note the location you describe, Ürümqi, is a mid-latitude location with a continental climate. As such it exhibits a very large seasonal temperature variation. Such a large temperature variation would of course upset the estimate of H and does not reflect the LTP variability, so we remove it through the anomaly method prior to statistical estimation of parameters.

    But this leaves a problem: the seasonal variation does not just change the first moment, which we correct for in the anomaly calculation, it changes the second moment also, as a function of the first moment. This seasonal variation of the second moment also causes large errors in the estimation of H. I do not believe the estimates of H by Fraedrich are correct, and it does not surprise me that you find slightly inconsistent results. I suspect your results are also unreliable unless you have somehow managed this change in the second moment.

    I expect SATs over land to be strongly influenced by LTP. I know our current estimates of LTP in these locations are highly unreliable, and this is to some degree confirmed to me from the disparity between your results and those of Fraedrich, but I am unsure at this point how to advance this issue. I suspect we either need to remove the seasonal effects on the second moment or we need a method of estimating H in the presence of both the first and second moments of the seasonal variations. But my thinking is still immature on this point.
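    As an illustration of the second-moment correction discussed above, here is a minimal sketch (the function name, the monthly-data assumption and the synthetic series are mine): standardize each calendar month by its own mean and its own standard deviation, so that the seasonal cycle is removed from both the first and the second moment before H is estimated.

    ```python
    import numpy as np

    def standardized_anomalies(x, period=12):
        """Remove the seasonal cycle from both moments: subtract each
        calendar month's mean and divide by that month's standard
        deviation, yielding a series with no seasonal variation in
        either mean or variance."""
        x = np.asarray(x, dtype=float)
        out = np.empty_like(x)
        for m in range(period):
            vals = x[m::period]                       # all values for month m
            out[m::period] = (vals - vals.mean()) / vals.std(ddof=1)
        return out

    # Synthetic monthly series with a seasonal cycle in mean AND variance,
    # mimicking a continental mid-latitude station.
    rng = np.random.default_rng(0)
    months = np.arange(50 * 12)
    seasonal_mean = 10.0 * np.sin(2 * np.pi * months / 12)
    seasonal_sd = 2.0 + 1.5 * np.sin(2 * np.pi * months / 12)
    series = seasonal_mean + seasonal_sd * rng.standard_normal(len(months))

    anom = standardized_anomalies(series)
    ```

    The usual anomaly method stops after subtracting the monthly means; dividing by the monthly standard deviation as well is one simple way to address the variance seasonality before any estimate of H is attempted.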

  • Paul S

    From what you write, I guess you can accept that, during Earth’s history, there were periods in which your “signal” as you define it (you say “here it refers to manmade climate change”) or your “external forcing”, was not present.

    Actually, I think Rasmus has been consistent and clear that the opposite is the case. See a quote from him here on this very topic:

    This is exactly my point too. But there are forcings in the real world, be it changes in Earth’s orbit around the sun, geological, volcanic, changes in the sun, or in the concentrations of the greenhouse gases. So my point is that LTP in this simulation without forcing will not be the same as in the real world where there is forcing. The forcing introduces LTP. We see this by comparing the two simulations – the black and the grey curves.

    I guess it’s possible there could have been a substantial period of Earth’s history with zero forcing, though I would suggest it’s an extremely unlikely occurrence. Let’s say there was a period with zero forcing. Given that we don’t know when that was or, presumably, have any sort of reliable climatic records for this hypothetical period I don’t know how you’ve arrived at the strong conclusion ‘The truth is, however, that climate on Earth has never been static.’ Do you have an example of a period which experienced zero forcing to support this apparent certainty?

  • Paul S

    Demetris,

    No, my comment was only about “external” forcing and I quoted Rasmus providing a basic list of external forcings: ‘changes in Earth’s orbit around the sun, geological, volcanic, changes in the sun, or in the concentrations of the greenhouse gases’.

    So, my question again is: Can you give an example of a period in Earth’s history which has not been influenced by external forcing?

  • UndislosedUsername

    This is a very unsatisfying discussion.

    To me it lacks a clear definition of long-term persistence. Two of the invited debaters use the term, but neither seems able to explain in simple terms what it is and how it comes about. Why is that? The rest of the discussion seems to be a lot of talking past one another on different levels. Telling others to read the papers doesn’t help if you can’t explain succinctly what you are on about. I do notice the blog operators trying to get something more out of it, but I feel their efforts failed. Pity…

Off-topic comments
  • Long-term persistence and internal variability in the climate system | Klimaatverandering

    [...] long silence, a new discussion has now started on ClimateDialogue.org. The subject is long-term persistence in the global temperature data and its possible implications for …. This discussion is related to the question of whether the warming could be a ‘random walk’, [...]

  • Climate Dialogue now truly under way

    [...] and Armin Bunde. Long-term persistence. After a much too long pause, our national climate pride climatedialogue.org has started up again with a discussion. This time a much more scientific theme, but no less interesting for that: how does [...]

  • Jim Cripwell


    Jos Hagelaars writes “the amount of sea ice in the world is dropping.” This statement does not seem to accord with the most recent data we have; see http://arctic.atmos.uiuc.edu/cryosphere/. I think it would be a good idea for those who comment on this blog to ensure that their statements are in accordance with the most recent empirical data.

  • Eli Rabett

    Actually, although there has been a small gain in sea ice extent in the Antarctic, the loss in the Arctic has more than counterbalanced that, and it is worse if you look at sea ice volume. For example:

    http://blog.chron.com/sciguy/2012/09/does-the-expanding-antarctic-sea-ice-disprove-global-warming/

  • Jim Cripwell

    Eli, you give me a reference. What a way to distort the data!!! They show graphs for the Arctic and Antarctic BOTH at Aug 2012. In other words, the height of summer in the Arctic and the height of winter in the Antarctic. The rest of the article is similar scientific garbage.

    “If you torture data long enough, it will confess.” Ronald Coase.

  • Eli Rabett

    “I have been looking at several climate blogs to see if this Climate Dialogue discussion has attracted the interest of bloggers and climate discussers. I would say the coverage is thin:”

    Given that the first discussion was a complete train wreck, this is not astounding. Even here the two-on-one nature of the “dialog” has made for difficulties, but this is a great improvement.

  • Eli Rabett

    Jim Cripwell
    2013-05-05 13:39:28

    Eli, you give me a reference. What a way to distort the data!!! They show graphs for the Arctic and Antarctic BOTH at Aug 2012. In other words, the height of summer in the Arctic and the height of winter in the Antarctic. The rest of the article is similar scientific garbage.

    ———————————————
    Whatever. Also this and this

  • Long term persistence and internal climate variability | CfpDir.com

    [...] a long hiatus, Climate Dialogue has just opened a second discussion. This time it’s about the presence of long term persistence in timeseries of global average temperature, and its implicati…. This discussion is strongly related to the question of whether global warming could just be a [...]

  • Gerbrand Komen on the SPM « De staat van het klimaat

    [...] The whole piece is worth reading and also refers to the Climate Dialogue on long-term persistence. [...]


