Long-term persistence and trend significance
Update 28 April 2014: Climate Dialogue summaries now online
The summary of the second Climate Dialogue discussion on long-term persistence is now online (see below). We have made two versions: a short and an extended version. We apologize for the delay in publishing the summary.
Both versions can also be downloaded as pdf documents:
Summary of the Climate Dialogue on Long-term persistence
Extended summary of the Climate Dialogue on Long-term persistence
[End update]
In this second Climate Dialogue we focus on long-term persistence (LTP) and the consequences it has for trend significance.
We slightly changed the procedure compared to the first Climate Dialogue (which was about Arctic sea ice). This time we asked the invited experts to write a first reaction to the guest blogs of the others, describing their agreement and disagreement with them. We publish the guest blogs and these first reactions at the same time.
We apologise again for the long delay. As we explained in our first evaluation, it turned out to be extremely difficult to find the right people (representing a range of views) for the dialogues we had in mind.
Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Bart Verheggen
Introduction long-term persistence
How persistent is the climate and what is its implication for the significance of trends?
The Earth is warmer now than it was 150 years ago. This fact itself is uncontroversial. How to interpret this warming, though, is not trivial. The attribution of this warming to anthropogenic causes relies heavily on an accurate characterization of the natural behavior of the system. Here we will discuss how statistical assumptions influence the interpretation of the measured global warming.
Most experts agree that three types of processes (internal variability, natural and anthropogenic forcings) play a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (<10%) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’, which include not only observations but also physical insight into the climate processes.
Detection and attribution
The IPCC definitions of detection and attribution are:
“Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”
“Attribution of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence.”
The phrase ‘change in some defined statistical sense’ in the definition of detection turns out to be the starting point for our discussion: what is the ‘right’ statistical model (assumption) for concluding whether a change is significant or not?
According to AR4, “An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” The magnitude of internal variability can be estimated in different ways, e.g. by control runs of global climate models, by characterising the statistical behaviour of the time series, by inspection of the spatial and temporal fingerprints in observations, and by comparing models and observations (e.g. via their respective power spectra, cf. fig 9.7 in AR4).
Longterm persistence
Critics argue, though, that most if not all changes in the climatological time series are an expression of long-term persistence (LTP). Long-term persistence means there is a long memory in the system, although unlike a random walk it remains bounded in the very long run. There are stochastic/unforced fluctuations on all time scales. More technically, the autocorrelation function goes to zero algebraically (very slowly). These critics argue that taking LTP into account reduces trend significance by orders of magnitude compared to statistical models that assume short-term persistence (AR1), as was applied e.g. in the illustrative trend estimates in table 3.2 of AR4. [1,2]
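To make the distinction concrete, here is a minimal numerical sketch contrasting the two correlation structures; the lag range and parameter values are illustrative assumptions, not taken from the studies cited here.

```python
# A minimal sketch contrasting short-term (AR1) and long-term (power-law)
# persistence; parameter values are illustrative, not from any cited study.
import numpy as np

lags = np.arange(1, 101).astype(float)

a = 0.6                     # AR1 coefficient: C(s) = a**s decays exponentially
acf_ar1 = a ** lags

gamma = 0.4                 # LTP: C(s) ~ s**(-gamma) decays algebraically
acf_ltp = lags ** (-gamma)

for s in (1, 10, 100):
    print(f"lag {s:3d}:  AR1 = {a**s:.2e}   LTP = {float(s)**(-gamma):.2e}")
# At lag 100 the AR1 correlation is numerically zero, while the power-law
# correlation is still about 0.16: the long memory that makes naive
# significance tests overstate their confidence.
```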
This has consequences for attribution as well, since long-term persistence is often assumed to be a sign of unforced (internal) variability (e.g. Cohn and Lins, 2005; Rybski et al., 2006). In reaction to Cohn and Lins (2005), Rybski et al. (2006)[3] concluded that even when LTP is taken into account, at least part of the recent warming cannot be solely related to natural factors and the recent clustering of warm years is very unusual (see also Zorita (2008)[4]). Bunde and Lennartz also looked extensively at the consequences of LTP for trend significance and concluded that globally averaged temperature data on land do show a significant trend, but the sea surface temperatures don't. [5,6]
These different conclusions translate directly into the question of how important the statistical model used is for determining the significance of the observed trends.
Climate Dialogue
Although the IPCC definition for detection seems to be clear, the phrase ‘change in some defined statistical sense’ leaves a lot of wiggle room. For the sake of a focussed discussion we define here the detection of climate change as showing that some of this change is outside the bounds of internal climate variability. The focus of this discussion is how to best apply statistical methods and physical understanding to address this question of whether the observed changes are outside the bounds of internal variability. Discussions about the physical mechanisms governing the internal variability are also welcome.
Specific questions
1. What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?
2. Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?
3. What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?
4. How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?
5. Based on your statistical model of preference, do you conclude that there is a significant warming trend?
6. Based on your statistical model of preference, what is the probability that 11 of the warmest years in a 162-year-long time series (HadCrut4) all lie in the last 12 years?
7. If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?
[1] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi:10.1029/2005GL024476
[2] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resour. Res., 43, W05429, doi:10.1029/2006WR005592
[3] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591
[4] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi:10.1029/2008GL036228
[5] Lennartz, S., and A. Bunde (2009), Trend evaluation in records with long-term memory: Application to global warming, Geophys. Res. Lett., 36(16)
[6] Lennartz, S., and A. Bunde (2011), Distribution of natural trends in long-term correlated records: A scaling approach, Phys. Rev. E, 84(2), 021129
Guest blog Rasmus Benestad
The debate about trends gets lost in statistics...
Rasmus E. Benestad, senior researcher, Norwegian Meteorological Institute
Figure 1. The recorded changes in the global mean temperature over time (red). The grey curve shows a calculation of the temperature based on greenhouse gases, ozone, and changes in the sun.
From time to time, the question pops up whether the global warming recorded by a network of thermometers around the globe is a result of natural causes, or if the warming is forced by changes in the atmospheric concentrations of greenhouse gases (GHGs).
In 2005, there was a scientific paper [1] suggesting that statistical models describing random long-term persistence (LTP) could produce trends similar to those seen in the global mean temperature. At the time I wrote a comment on this paper on RealClimate.org with the title Naturally trendy?.
More recently, another paper [2] followed somewhat similar ideas, although for the Arctic temperature rather than the global mean. A discussion ensued after my posting of ‘What is signal and what is noise?’ on RealClimate.org. A meeting is planned in Tromsø, Norway at the beginning of May to discuss our differences, much in the spirit of Climate Dialogue.
Weather statistics
It is easy to make a statement about the Earth’s climate, but what is the story behind the different views? And what is really the problem?
If we start from scratch, we first need to have a clear idea about what we mean by climate and climate change. Often, climate is defined as the typical weather, described by the weather statistics: what range of temperature and rainfall we can expect, and how frequently. Experts usually call this kind of statistics ‘frequency distribution’ or ‘probability density functions’ (pdfs).
A climate change happens when the weather statistics are shifted: weather that was typical in the past is no longer typical. It involves a sustained change rather than short-lived variations. New weather patterns emerge during a climate change.
At the same time, there is little doubt that some primary features of our climate involve short-lived natural ‘spontaneous’ fluctuations, caused by the climate itself. The natural fluctuations are distinct from a forced climate change in terms of their duration [3].
A diagnostic for climate change involves statistical analyses to assess whether the present range and frequency of temperature and precipitation are different from those of the past.
Natural variations
However, the presence of slower natural variations in the climate makes it difficult to make a correct diagnosis.
Long-term persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large spatial scales take longer than local changes.
The diffusion of heat and the transport of mass and energy are subject to finite flow speeds, and the time it takes for heat to leak into the surroundings is well understood. Often the response decays exponentially (with a negative exponent).
There may be more complex behaviour when different physical processes intervene and affect each other, such as the oceans and the atmosphere [4]. The oceans are sluggish, and the density and heat capacity of water are much higher than those of air. Hence the ocean acts like a flywheel: once it gets moving, it will go on for a while.
There are some known examples of LTP processes, such as the ice ages, changes in the ocean circulation, and the El Niño Southern Oscillation.
We also know that the Earth’s atmosphere is nonlinear (‘chaotic’) and may settle into different states, known as ‘weather regimes’ [5-7]. Such behaviour may also produce LTP. Changes in the oceans through the overturning of water masses can result in different weather regimes.
Laws of physics
It is also possible to show that LTP takes place when many processes are combined, which individually do not have LTP. For example, the river flow may exhibit some LTP characteristics, resulting from a collection of rainfall over several watersheds.
We should not forget that long-term changes in the forcing from GHGs also result in LTP behaviour [3].
The climate involves more than just observations and statistics, and like everything else in the universe, it must obey the laws of physics. From this angle, climate change is an imbalance in terms of energy and heat.
It is fairly straightforward to measure the heat stored in the oceans, as warmer water expands. The global sea level provides an indicator for Earth’s accumulation of heat over time [3].
Hence, the diagnosis (“detection”) of a climate change is not purely a matter of statistics. The laws of physics set fundamental constraints which let us narrow the search down to a small number of ‘suspects’.
Energy and mass budgets are central, but the hydrological cycle is also entangled with the temperatures and the oceans [8]. Modern measurements provide a comprehensive picture: we see changes in the circulation and rainfall patterns.
Classical mistakes
There are two classical mistakes made in the debate about climate change and LTP: (a) looking only at a single aspect (one single time series) isolated from the rest of Earth’s climate and (b) confusing signal for noise.
The term ‘signal’ can have different meanings depending on the question, but here it refers to man-made climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.
There are always physical processes driving both LTP and spontaneous changes on Earth (known as the weather), and these must be subject to study if we want to separate noise from signal.
If an upward trend in the temperature were to be caused by internal ocean overturning, then this would imply a movement of heat from the oceans to the air and land surface. Energy must be conserved. When the heat content increases in both the oceans [9] and the air, and the ice is melting, then it is evident that the trend cannot be explained in terms of LTP.
The other mistake is neglecting the question: what is signal and what is noise? Some researchers have adapted statistical trend models to mimic the evolution seen in measured data [1,2]. These models must be adapted to the data by adjusting a number of parameters so that they can reproduce the LTP behaviour.
In science, we often talk about a range of different types of models, and they come in all sorts of sizes and shapes. A climate model calculates the temperature, air flow, rainfall, and ocean currents over the whole globe, based on our knowledge about the physics. Statistical trend models, on the other hand, take past measurements for which they try to find statistical curves that follow the data.
Statistical models are sometimes fed random numbers in order to produce a result that looks like noise [10]. It is concluded that the measured data are a result of a noisy process if these models produce results that look the same. In other words, these models are used to establish a benchmark for assessing trends.
Statistical trend models may, however, produce misguided answers if proper care is not taken. For example, those adapted to data containing both signal and noise cannot be used to infer whether the observed trends are unusual or not. The LTP associated with GHGs can be quite substantial, and forced trends in measured temperatures will fool statistical models which assume the LTP is due to noise.
The misapplication of statistical trend models can easily be demonstrated by subjecting them to certain tests. The statistical trend models describing LTP make use of information embedded in the data, revealed by their respective autocorrelation function (ACF).
Figure 2 below shows a comparison between two ACFs for temperature in the Arctic (area mean above 60°N), taken from two different climate model simulations. One simulation represents the past (black), driven with historical changes in GHGs. The other (grey) describes a hypothetical world where the GHGs were constant, representing a ‘stable’ climate in terms of external forcings.
It is clear that the ACFs differ, and the statistical models used to assess trends and LTP rely on the shape of the ACF.
Figure 2. The upper graph shows the annual mean temperature for the Arctic simulated by a climate model. The black curve shows the year-to-year variations for a run where the model was given the observed GHG measurements. The grey curve shows a similar run, but where the GHGs do not change. The bottom panel shows the ACF, and the black curve indicates that the effect of GHGs on temperature is to increase the LTP. For the assessment of trends, the statistical models should be trained on the grey curve, for which we know there is no forced trend and where all the variations are due to changes in the oceans.
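A rough sketch of the kind of comparison behind Figure 2 might look as follows; the two input series here are synthetic stand-ins, since the original curves come from climate model runs with and without changing GHGs.

```python
# Estimate and compare the ACF of a "forced" and a "control" series.
# Inputs are synthetic placeholders for the two climate model runs.
import numpy as np

def acf(x, max_lag=50):
    """Sample autocorrelation function of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-s], x[s:]) / (len(x) * var)
                     for s in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
n = 150                                  # ~150 annual values
control = rng.normal(size=n)             # unforced stand-in: white noise
forced = control + 0.01 * np.arange(n)   # same noise plus a slow forced trend

# The forced trend inflates the apparent persistence: the "forced" ACF
# decays far more slowly, which is why noise models must not be trained
# on data that already contain the signal.
print("control:", acf(control, 5).round(2))
print("forced: ", acf(forced, 5).round(2))
```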
Circular reasoning
It is the way models are used that really matters, rather than the specific model itself. All models are based upon a set of assumptions, and if these are violated, the models tend to give misleading answers. Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data, because these data embed both signal and noise.
For real-world data, we do not know what fraction of the variations is LTP noise and what is signal.
We can nevertheless specify the degree of forcing in climate model simulations, and then use these results to test the statistical models. Even if the climate models are not completely perfect, they serve as a test bench [11].
The important assumption is therefore that the statistical trend models, against which the data are benchmarked, really provide a reliable description of the noise.
We need more than a century-long time series to make a meaningful inference about LTP if natural variations have time scales of 70-90 years. Most thermometer records do not go much further back in time than a century, although there are some exceptions.
It is possible to remedy the lack of thermometer records to some extent by supplementing the information with evidence based on e.g. tree rings, sediment samples, and ice cores [3,12,13]. Still, such evidence tends to be limited to temperature, whereas climate change involves a whole range of elements.
For all intents and purposes, however, it is important to account for both natural and forced climate change on these time scales. Most people would worry more about the combined effect of these components, as natural variations may be just as devastating as forced ones. For most people, the distinction between trend and noise is an academic question. For politicians, it's a question about cutting GHG emissions.
Regression
Another side to the story is that the magnitude of the unforced LTP variations may give us an idea about the climate's sensitivity to changing conditions. Often, such cycles are caused by delayed but reinforcing processes. Conditions which amplify an initial response are inherent to the atmosphere, and may act on both the forced response (GHGs) and internal variations. Damping mechanisms will also tend to strangle oscillations, which is well known from many different physical systems.
The combination of statistical information and physics knowledge leads to only one plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat: increased concentrations of GHGs.
We can also use another statistical technique for diagnosing a cause [11], also used in the medical sciences and known as ‘regression analysis’. Figure 1 shows the results of a multiple regression analysis with inputs representing expected physical connections to Earth’s climate. In this case, the regression analysis used GHGs, ozone and changes in the sun’s intensity as inputs, and the results followed the HadCRUT4 data closely.
The probability that this fit is accidental is practically zero if we assume that the temperature variations from year to year are independent of each other. LTP and the ocean's inertia imply that the number of degrees of freedom is lower than the number of data points, making such a fit somewhat more likely to happen even by chance.
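As an illustration of the regression diagnosis described above, here is a hedged sketch; the predictor series below are placeholders invented for illustration, whereas Benestad and Schmidt (2009) used actual forcing histories and observed temperatures.

```python
# A hedged sketch of a multiple-regression attribution diagnosis.
# All predictor series are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 163                                      # annual values, 1850-2012
ghg = np.linspace(0.0, 2.5, n)               # illustrative GHG forcing ramp
ozone = 0.3 * np.sin(np.linspace(0, 3, n))   # placeholder ozone series
solar = 0.1 * np.sin(np.linspace(0, 15, n))  # placeholder solar series
temp = 0.4 * ghg + 0.2 * solar + rng.normal(scale=0.1, size=n)  # toy "observations"

X = np.column_stack([np.ones(n), ghg, ozone, solar])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)   # ordinary least squares
print("intercept, GHG, ozone, solar:", beta.round(3))

# Caveat from the text: with LTP the effective number of independent points
# is smaller than n, so naive p-values would overstate the significance.
```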
Furthermore, taking paleoclimatic information into account, there is no evidence of similar temperature excursions in the past ~1000 years [12-14]. If the present warming were a result of natural fluctuations, it would imply a high climate sensitivity, and similar variations in the past. Moreover, it would suggest that any known forcing, such as GHGs, would be amplified accordingly. The climate sensitivity may be a common denominator for natural fluctuations and forced trends (Figure 3).
Figure 3. Comparison between trend estimates and the amplitude of 10-year low-passed internal variability in state-of-the-art global climate models.
There may be some irony here: the warming 'hiatus' during the last decade is due to LTP noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity, because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with a stronger transient climate response (TCR, a rough indicator for climate sensitivity).
For complete probability assessment, we need to take into account both the statistics and the physicsbased information, such as the fact that GHGs absorb infrared light and thus affect the vertical energy flow through the atmosphere.
Summary
In summary, we do not really know what the LTP in the real world would be like without GHG forcings, and we do not know the real number of degrees of freedom in the measured data record. The lack of such information limits our ability to make a statistics-based probability estimate. On the other hand, we know from past reconstructions and physical reasoning that the present warming is highly unusual.
Biosketch
Rasmus Benestad is a physicist by training. He has a D.Phil in physics from Atmospheric, Oceanic & Planetary Physics at Oxford University in the United Kingdom, and is affiliated with the Norwegian Meteorological Institute.
Recent work involves a good deal of statistics (empirical-statistical downscaling, trend analysis, model validation, extremes and record values), but he has also had some experience with electronics, cloud microphysics, ocean dynamics/air-sea processes and seasonal forecasting.
In addition, Benestad wrote the book ‘Solar Activity and Earth’s Climate’ (2002), published by Praxis-Springer, and together with two colleagues the textbook ‘Empirical-Statistical Downscaling’ (2008; World Scientific Publishers). He has also written a number of R packages for climate analysis posted on http://cran.r-project.org.
Benestad was a member of the council of the European Meteorological Society for the period 2004-2006, representing the Nordic countries and the Norwegian Meteorology Society, and he has served as a member of the CORDEX Task Force on Regional Climate Downscaling.
He is a regular contributor to the wellknown climate blog RealClimate.org.
References
1. Cohn, T. A. & Lins, H. F. Nature’s style: Naturally trendy. Geophys. Res. Lett. 32, L23402 (2005).
2. Franzke, C. On the statistical significance of surface air temperature trends in the Eurasian Arctic region. Geophys. Res. Lett. 39 (2012).
3. Climate Change: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (Cambridge University Press, 2007).
4. Anderson, D. L. T. & McCreary, J. P. Slowly Propagating Disturbances in a Coupled OceanAtmosphere Model. J. Atmospheric Sci. 42, 615–629 (1985).
5. Gleick, J. Chaos. (Cardinal, 1987).
6. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).
7. Shukla, J. Predictability in the midst of chaos: A scientific basis for climate forecasting. Science 282, 728–733 (1998).
8. Durack, P. J., Wijffels, S. E. & Matear, R. J. Ocean salinities reveal strong global water cycle intensification during 1950 to 2000. Science 336, 455–458 (2012).
9. Balmaseda, M. A., Trenberth, K. E. & Källén, E. Distinctive climate signals in reanalysis of global ocean heat content. Geophys. Res. Lett. (2013). doi:10.1002/grl.50382
10. Paeth, H. & Hense, A. Sensitivity of climate change signals deduced from multimodel Monte Carlo experiments. Clim. Res. 22, 189–204 (2002).
11. Benestad, R. E. & Schmidt, G. A. Solar Trends and Global Warming. JGR 114, D14101 (2009).
12. Mann, M. E., Bradley, R. S. & Hughes, M. K. Globalscale temperature patterns and climate forcing over the past six centuries. Nature 392, 779–787 (1998).
13. Moberg, A., Sonechkin, D. M., Holmgren, K., Datsenko, N. M. & Karlén, W. Highly variable Northern Hemisphere temperatures reconstructed from low and highresolution proxy data. Nature 433, 613–617 (2005).
14. Marcott, S. A., Shakun, J. D., Clark, P. U. & Mix, A. C. A Reconstruction of Regional and Global Temperature for the Past 11,300 Years. Science 339, 1198–1201 (2013).
15. Easterling, D. R. & Wehner, M. F. Is the climate warming or cooling? Geophys Res Lett 36, L08706 (2009).
16. Foster, G. & Rahmstorf, S. Global temperature evolution 1979–2010. Environ. Res. Lett. 6, 044022 (2011).
Guest blog Demetris Koutsoyiannis
LTP: Looking Trendy—Persistently
Demetris Koutsoyiannis, National Technical University of Athens, Greece
Stochastics and its importance in studying climate
Probability, statistics and stochastic processes, lately described by the collective term stochastics, provide essential concepts and tools to deal with uncertainty, useful for all scientific disciplines. However, there is at least one scientific discipline whose very domain relies on stochastics: Climatology. To quote a popular definition of this domain by the IPCC[1] (also quoted in Wikipedia):
Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization.
“Average”, “statistical description”, “mean”, “variability”, are all statistical terms. Several questions related to climate also involve probability, as exemplified in question 6 of the Introduction of this Climate Dialogue theme:[2]
Based on your statistical model of preference, what is the probability that 11 of the warmest years in a 162-year-long time series (HadCrut4) all lie in the last 12 years?
Interestingly, similar probabilistic and statistical notions are implied in a recent President Obama statement:[3]
Yes, it’s true that no single event makes a trend. But the fact is, the 12 hottest years on record have all come in the last 15.
The latter statement highlights how important statistical questions are for policy matters and presumes some public perception of probability and statistics, which determines how the message is received.
I have no doubt that the average human being has some understanding of probability and statistics, not only thanks to education, but because life is uncertain and each of us needs to develop an understanding of uncertainty and skills to cope with it. However, common experience and perception are mostly related to overly simple uncertainties, as in coin tosses, dice throws and roulette wheels. Education, too, is mainly based on classical statistics, in which:
· Consecutive events are independent of each other: the outcome of an event does not affect that of the next one.
· As a result, time averages tend to stabilize relatively fast: their variability, expressed by the probabilistic concept of variance, is inversely proportional to the length of the averaging period.
Adhering to classical statistics when dealing with climate and other complex geophysical processes is not just a problem of laymen. There are numerous research publications adopting, tacitly or explicitly, the independence assumption for systems in which it is totally inappropriate. Even the very definition of climate quoted above, particularly the phrase “The classical period is 30 years”, historically reflects a perception of a constant climate[4][5] and a hope that 30 years would be enough for a climatic quantity to stabilize to a constant value—and this is roughly supported by classical statistics. In this perception a constant climate would be the norm and a deviation from the norm would be something caused by an extraordinary agent. The same static-climate conviction is evident in the “weather vs. climate” dichotomy (e.g. “climate is what you expect, weather is what you get”; see critical discussions in Refs. [6], [7], [5]).
Figure 1. Probability that a 12-year period contains the specified number of warmest years (n) or more in a 162-year-long period, as calculated assuming a random climate and a Hurst-Kolmogorov (HK) climate with Hurst parameter H = 0.92 (see text below for an explanation of the latter).
Now let us pretend that, as in classical statistics, climate is synthesized by averaging random events without dependence, and let us study the above question on this basis (slightly modified for reasons that will be explained later). So, what is the probability that, in a 162-year-long time series, at least n (where n = 1, 2, …, 12) of the warmest years all lie in a 12-year-long period? The reply is depicted in Figure 1, labelled “Random climate”. The first seven points are calculated by Monte Carlo simulation. For n = 7 years this probability is 0.00005 (1/20 000). The Monte Carlo simulation would require too much time to find the probability that all 12 warmest years are consecutive (n = 12), because this probability is really an astonishingly small number; but I was able to find it analytically and plotted it on the graph. I also approximated with analytical calculations the probability that at least 11 warmest years are clustered within 12 years. From the graph we can conclude that it is quite unlikely that more than 8-9 warmest years would cluster, even throughout the entire Earth’s life (4.5 billion years divided into segments of 162 years). To have 11 warmest events clustering in a 12-year period we would need, on average, one hundred thousand times the age of the Universe.
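For readers who want to reproduce the “Random climate” curve, a Monte Carlo sketch of the counting problem is given below; the window-scanning logic and the trial count are my own implementation choices, not taken from the original calculation.

```python
# Monte Carlo sketch: draw 162 exchangeable values and ask whether some
# 12-year window contains at least n of the 12 warmest years.
import numpy as np

rng = np.random.default_rng(42)
N, W, TOP = 162, 12, 12
trials = 200_000
counts = np.zeros(W + 1)

for _ in range(trials):
    ranks = rng.permutation(N)            # iid climate: all orderings equally likely
    top_years = np.where(ranks < TOP)[0]  # positions of the 12 warmest years (sorted)
    # largest number of top years falling inside any W-year window
    best = max(np.searchsorted(top_years, y + W) - i
               for i, y in enumerate(top_years))
    counts[:best + 1] += 1                # event "at least n in some window"

for n in range(1, 8):
    print(f"P(at least {n} of the 12 warmest in some {W}-yr window) = {counts[n]/trials:.1e}")
# For n = 7 this reproduces the order of magnitude quoted in the text
# (about 5e-5); for n = 11 or 12 direct simulation is hopeless, as the
# text notes, and analytical calculation is needed.
```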
Is this overwhelming evidence that something extraordinary has occurred during our lives, or that the independence assumption leads to blatantly irrational results?
No one would believe that the weather this hour does not depend on that of an hour ago. It is natural to assume that there is time dependence in weather. Therefore, we must study weather not on the basis of classical statistics; rather, we should use the notion of a stochastic process. Now, if we average the process to another scale (daily, monthly, annual, decadal, centennial, etc.) we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases, the variability decreases—but not as much as implied by classical statistics. Naturally, the dependence makes clustering of similar events more likely.
The first who studied clustering in natural processes was Harold Edwin Hurst, a British hydrologist who worked in the Nile for more than 60 years. In 1951 he published a seminal paper[8] in which he stated:
Although in random events groups of high or low values do occur, their tendency to occur in natural events is greater. This is the main difference between natural and random events.
Herodotus said that the Egyptian land is "a gift of the Nile". The Nile also gave hydrology and climatology invaluable gifts: one of them is the longest record of instrumental observations in history. Its water levels were measured in so-called Nilometers and archived for many centuries. In the 1920s Omar Toussoun, Prince of Alexandria, published a book[9] containing, among other things, annual minimum and maximum water levels of the Nile at the Roda Nilometer from AD 622 to 1921. Figure 2 depicts the time series of annual minimum levels up to 1470 (849 values; unfortunately, after 1470 there are substantial gaps). Climatic, i.e. 30-year average, values are also plotted. One may say that these values are not climatic in the strict sense. But they are strongly linked to the variability of the climate of a large area, from the Mediterranean to the tropics. And they are instrumental.
The clustering of similar events, more formally described as Long-Term Persistence (LTP), is obvious. For example, around AD 780 we have a group of low values producing a low climatic value, and around 1110 and 1440 we have groups of large values. Such grouping would not appear in a climate that was the synthesis of independent random events. The latter would be flatter, as illustrated by the synthetic example of Figure 3.
Another way of viewing the long-term variability of the Nile in Figure 2 is by using the notion of trends, irregularly changing from positive to negative and from mild to steep. The long instrumental Nile series may help those who prefer to view variability in terms of trends to recognize “Nature's style [as] naturally trendy”, to invoke the title of a celebrated recent paper.[10]
Figure 2. Nile River annual minimum water level at Roda Nilometer (from Ref. 9, here converted into water depths assuming a datum for the river bottom at 8.80 m), along with 30year averages (centred). A few missing values at years 1285, 1297, 1303, 1310, 1319, 1363 and 1434 are filled in using a simple method from Ref. [11]. The estimated statistics are mean = 2.74 m, standard deviation = 1.00 m, Hurst parameter = 0.87.
Figure 3. A synthetic time series from an independent (white noise) process with same statistics as those of the Nilometer series shown in the caption of Figure 2.
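The contrast between Figures 2 and 3 can be emulated with synthetic series. The sketch below generates fractional Gaussian noise with the standard Davies-Harte circulant-embedding algorithm; this is a generic textbook method, not necessarily the one used to produce the original figures.

```python
# Generic Davies-Harte circulant-embedding generator for fractional
# Gaussian noise (fGn), used to emulate the Figure 2 vs Figure 3 contrast.
import numpy as np

def fgn(n, H, rng):
    """Exact unit-variance fractional Gaussian noise of length n."""
    k = np.arange(n)
    rho = 0.5 * ((k + 1.0)**(2*H) - 2*k**(2.0*H) + np.abs(k - 1.0)**(2*H))
    row = np.concatenate([rho, rho[-2:0:-1]])      # circulant first row, length m = 2n-2
    lam = np.clip(np.fft.fft(row).real, 0.0, None) # eigenvalues; clip round-off
    m = len(row)
    xi = np.empty(m, dtype=complex)                # Hermitian-symmetric Gaussian spectrum
    xi[0], xi[m // 2] = rng.normal(), rng.normal()
    a = rng.normal(size=m // 2 - 1)
    b = rng.normal(size=m // 2 - 1)
    xi[1:m // 2] = (a + 1j * b) / np.sqrt(2)
    xi[m // 2 + 1:] = np.conj(xi[1:m // 2][::-1])
    return np.fft.fft(np.sqrt(lam / m) * xi).real[:n]

rng = np.random.default_rng(7)
nile_like = 2.74 + 1.00 * fgn(849, H=0.87, rng=rng)  # Nilometer-like HK series
white = 2.74 + 1.00 * rng.normal(size=849)           # white-noise counterpart
kern = np.ones(30) / 30                              # 30-year moving average
print("range of 30-yr averages, HK:   ", np.ptp(np.convolve(nile_like, kern, "valid")).round(2))
print("range of 30-yr averages, white:", np.ptp(np.convolve(white, kern, "valid")).round(2))
# The HK averages wander far from the mean (as in Figure 2); the
# white-noise averages stay comparatively flat (as in Figure 3).
```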
Seeking a proper stochastic model for climate
Variability over different time scales, trends, clustering and persistence are all closely linked to each other. The first is the most rigorous concept and is mathematically described by the variance (or the standard deviation) of the averaged process as a function of the averaging time scale, aka the climacogram.[6],[12] The variability over scale (the climacogram) is also one-to-one related (by transformation) to the stochastic concepts of dependence in time (the autocorrelation function) and the spectral properties (the power spectrum) of the process of interest.[6][12]
In white noise, i.e., the process characterized by complete independence in time, the variability is infinite at the instantaneous time scale (in technical terms its autocorrelation in continuous time is a Dirac Delta function). No variability is added at any finite time scale. Clearly, this is a mathematical construct which cannot occur in nature (the adjective “white”, suggestive of the white light as a mixture of frequencies, is misleading; the spectral density of white noise is flat, while that of the white light is not).
A seemingly realistic stochastic process, which has been widely used for climate, is the Markov process, whose discrete-time version is more widely known as the AR(1) process. This process has two characteristic properties:
· Its past has no influence on the future whenever the present is known (in other words, only the latest known value matters for the future).[13]
· It assumes a single characteristic time scale at which change or variability is created (but in contrast to white noise, this time scale is nonzero; technically it is expressed by the denominator of the exponent in the exponential function that constitutes its autocorrelation function). As a result, when the time scale of interest is considerably larger than this characteristic scale, the process behaves like white noise.
It is difficult to explain why this model has become dominant in climatology. Even these two theoretical properties should have hampered its popularity. How could the future be affected just by the latest value and not by the entire past? Could any geophysical process, including climate, be determined by just one mechanism acting on a single time scale?
The flow in a river (not necessarily the Nile) may help us understand better the multiplicity of mechanisms producing change and the multiplicity of the relevant time scales (see also Ref. [14]):
· Next second: the hydraulic characteristics (water level, velocity) will change due to turbulence.
· Next day: the river discharge will change (even dramatically, in case of a flood).
· Next year: The river cross-section will change (erosion-deposition of sediments).
· Next century: The river basin characteristics (e.g. vegetation, land use) will change.
· Next millennium: All could be very different (e.g. the area could be glaciated).
· Next millions of years: The river may have disappeared (owing to tectonic processes).
Of course none of these changes will be a surprise; rather, it would be a surprise if things remained static. Despite being anticipated, all these changes are not predictable.
Does a plurality of mechanisms acting on different scales require a complex stochastic model? Not necessarily. A decade before Hurst detected LTP in natural processes, Andrey Kolmogorov[15] devised a mathematical model which describes this behaviour using one parameter only, i.e. no more than in the Markov model. We call this model the Hurst-Kolmogorov (HK) model (aka fGn, for fractional Gaussian noise, simple scaling process, etc.), while its parameter has been known as the Hurst parameter and is typically denoted by H. In this model, change is produced at all scales, and thus it never becomes similar to white noise, whatever the time scale of averaging.
Specifically, the variance will never become inversely proportional to the time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise). A characteristic property of the HK process is that its autocorrelation function is independent of the time scale. In other words, if there is some correlation in the average temperature between a year and the next one (and in fact there is), the same correlation will hold between a decade and the next one, a century and the next one, and so on to infinity. Why? Because there will always be another natural mechanism acting on a bigger scale, which will create change, and thus positive correlation, at all lower scales (the relationship of change with autocorrelation is better explained in Ref. 6). The HK behaviour seems to be consistent with the principle of extremal entropy production.[16]
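The scaling property just described can be checked numerically with a climacogram: average the series over increasing scales and examine how the standard deviation decays. The sketch below assumes the fgn() generator from the earlier sketch is in scope; the scales and record length are arbitrary choices.

```python
# Climacogram check: for an HK process the standard deviation of the
# scale-k average decays as k**(H-1), so the log-log slope gives H.
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the scale-k averaged process, for each k."""
    sd = []
    for k in scales:
        m = len(x) // k
        sd.append(np.std(x[:m * k].reshape(m, k).mean(axis=1), ddof=1))
    return np.array(sd)

scales = np.array([1, 2, 4, 8, 16, 32])
x = fgn(4096, H=0.9, rng=np.random.default_rng(3))   # fgn() from earlier sketch
sd = climacogram(x, scales)

slope = np.polyfit(np.log(scales), np.log(sd), 1)[0]  # slope = H - 1
print("estimated H =", round(1 + slope, 2))           # should be near 0.9
# A white-noise series would instead give a slope of -0.5, i.e. H = 0.5.
```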
The Nilometer record described above is consistent with the HK model with H = 0.87. Are there other records of geophysical processes consistent with the HK behaviour? A recent overview paper[17] cites numerous studies where this behaviour has been verified. It also examines several instrumental and proxy climate data series related to temperature and, by superimposing the climacograms of the different series, it obtains an overview of the variability for time scales spanning almost nine orders of magnitude—from 1 month to 50 million years. The overall climacogram supports the presence of HK dynamics in climate with H at least 0.92. The orbital forcing (Milankovitch cycles) is also evident in the combined climacogram at time scales between 10 and 100 thousand years.
Statistical assessment of current climate evolution
Re-examining the statistical problem of 11 warmest years in 12 within a 162-year period, now within an HK framework with H = 0.92, we find spectacularly different results from those of the random climate, as shown in Figure 1. We may see, for example, that what, according to the classical statistical perception, would require the entire age of the Earth to occur once (i.e. clustering of 8-9 events) is a regular event for an HK climate, with probability on the order of 1-10%.
This dramatic difference can help us understand why the choice of a proper stochastic model is relevant for the detection of changes in climate. It may also help us realize how easy it is to fool ourselves, given that our perception of probability may heavily rely on classical statistics.
Figure 4 gives a close-up of the results, excluding the very low probabilities and also generalizing the “12-year period” to an “N-year period” so that it can host, in addition to the Climate Dialogue statistical question 6, the results for the “Obama version” thereof as quoted above. In addition, Figure 4 is based on a slightly higher value of the Hurst coefficient, H = 0.94, as estimated by the Least Squares based on Standard Deviation method[18] for the HadCrut4 record. Both versions result in about the same answer: the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.
Figure 4. Probability that an N-year period, where N = 12 or 15, contains the specified number, n, of warmest years or more in a 162-year-long period, calculated in the same manner as in Figure 1 with H = 0.94.
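An HK counterpart of the earlier “Random climate” Monte Carlo can be sketched by feeding the same window-counting logic with fractional Gaussian noise at H = 0.94; again the fgn() helper from the earlier sketch is assumed, and the trial count is an arbitrary choice.

```python
# HK Monte Carlo: same counting problem, now with long-term persistence.
import numpy as np

def max_in_window(values, W=12, TOP=12):
    """Largest number of the TOP warmest years inside any W-year window."""
    top_years = np.sort(np.argsort(values)[-TOP:])
    return max(np.searchsorted(top_years, y + W) - i
               for i, y in enumerate(top_years))

rng = np.random.default_rng(11)
trials = 50_000
counts = np.zeros(13)
for _ in range(trials):
    counts[:max_in_window(fgn(162, H=0.94, rng=rng)) + 1] += 1  # fgn() from earlier sketch

print("P(at least 11 of 12 warmest in some 12-yr window) =", counts[11] / trials)
# Expect order 1e-3, as in the text, versus ~1e-17 under independence.
```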
If we used the IPCC AR4 terminology[19] we would say that either of these events is exceptionally unlikely to have a natural cause. Interestingly, the present results do not contradict those of a recent study of Zorita, Stocker and von Storch,[20] who examined a similar question and concluded that:
Under two statistical nullhypotheses, autoregressive and longmemory, this probability turns to be very low: for the global records lower than p = 0.001…
I note, though, that there are differences between the methodology followed here and that of Zorita et al.; for example, the analysis here did not consider whether the N-year period (in which the n warmest years are clustered) is located at the end of the examined observation period or anywhere else in it (the reason will be explained below).
One may note that the above results, as well as those by Zorita et al., are affected by uncertainty, associated with the parameter estimation but also with the data set itself. The data are altered all the time as a result of continuous adaptations and adjustments. Even the ranks of the different years change: for example, in the CRU data examined by Koutsoyiannis and Montanari (2007)[21], 1998 had rank 1 (the warmest of all) and 2005 had rank 2, while now the ranking of these two years is inverted. But most importantly, the analysis is affected by the Mexican Hat Fallacy (MHF), if I am allowed to use this name to describe a neat example of statistical misuse offered by von Storch,[22] in which the conclusion is drawn that:
The Mexican Hat is not of natural origin but manmade.
Von Storch [22] aptly describes the fallacy in these words:
The fundamental error is that the null hypothesis is not independent of the data which are used to conduct the test. We know a priori that the Mexican Hat is a rare event, therefore the impossibility of finding such a combination of stones cannot be used as evidence against its natural origin. The same trick can of course be used to “prove” that any rare event is “non-natural”, be it a heat wave or a particularly violent storm – the probability of observing a rare event is small.
I believe that rephrasing “11 of the warmest years … all lie in the last 12 years” into “11 of the warmest years … all lie in a 12-year-long period” reduces the MHF effect, but I do not think it eliminates it. That is why I prefer other statistical methods of detecting changes[23], such as the tests proposed by Hamed[24] and by Cohn and Lins [10]. The former relies on a test statistic based on the ranks of all data, rather than a few of them, while the latter also considers the magnitude of the actual change, not just the change in the ranks.
Another test statistic was proposed by Rybski et al.[25] and was modified to include the uncertainty in the estimation of the standard deviation by Koutsoyiannis and Montanari [21], who also applied it to the CRU temperature data up to 2005. Note that, to make the test simple, the uncertainty in the estimation of H was not considered even in the latter version (thus it could rather be called a pseudo-test). Here I update the application of this test and present the results in Figure 5.
Figure 5. Updated Fig. 2 in Koutsoyiannis and Montanari [21], testing lagged climatic differences based on the HadCrut4 data set (1850-2012; see explanation in text).
The method has the advantages that it uses the entire series (not a few values), it considers the actual climatic values (not their ranks), and it avoids specifying a mathematical form of trend (e.g. linear). Furthermore, it is simple: first we calculate the climatic value of each year as the average of the present and the previous 29 years. This is plotted as a pink continuous line in Figure 5, where we can see, among other things, that the latest climatic value is 0.31°C (at 2012, being the average of HadCrut4 data values for 1983-2012), while the earliest one was –0.30°C (at 1879, being the average of 1850-79). Thus, during the last 134 years the climate has warmed by 0.61°C. Note that no subjective smoothing is made here (in contrast to the graphs by CRU), and thus the climatic series has length 134 years (but with only 5 non-overlapping values), while the annual series has length 163.
Our (pseudo-)test relies on climatic differences for different time lags (not just that between the latest and earliest values). For example, assuming a lag of 30 years (equal to the period for which we defined a climatic value), the climatic difference between 2012 and 1982 is 0.31°C – (–0.05°C) = 0.36°C, where the value –0.05°C is the average of the years 1953-82. The value 0.36°C is plotted as a green triangle in Figure 5 at year 2012. Likewise, we find climatic differences for years 2011, 2010, …, 1909, all for lag 30. Plotting all these we get the series of green triangles shown in Figure 5. We repeat the same procedure for time lags that are multiples of 30 years, namely 60 years (red points), 90 years (blue points) and 120 years (purple points).
Finally, we calculate, in a way described in Ref. 21, the critical values of the test statistic, which is none other than the lagged climatic difference. The critical values are different for each lag and are plotted as flat lines with the same colour as the corresponding points. Technically, the (pseudo-)test was made two-sided for a significance level of 1%, but only the high critical values are plotted in the graph. Practically, as long as the points lie below the corresponding flat lines, nothing significant has happened. This is observed for the entire length of the lag-30 and lag-60 differences. A few of the last points of the lag-90 series exceed the critical value; this corresponds to the combination of high temperatures in the 2000s and low temperatures in the 1910s. But then all points of the lag-120 series lie again below the critical value, indicating no significant change.
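The mechanics of the statistic (though not the critical values, which require the derivation in Ref. 21) can be sketched as follows; the input series is a placeholder for the HadCrut4 annual anomalies.

```python
# Lagged-climatic-difference sketch: trailing 30-year averages, then
# differences at lags 30, 60, 90, 120 years. Critical values from Ref. 21
# are not reproduced here.
import numpy as np

def climatic_values(temps, window=30):
    """Average of each year and the previous window-1 years (length len-29)."""
    kern = np.ones(window) / window
    return np.convolve(temps, kern, mode="valid")

def lagged_differences(temps, lag, window=30):
    c = climatic_values(temps, window)
    return c[lag:] - c[:-lag]

# Placeholder data; substitute the real HadCrut4 annual series (1850-2012).
temps = np.cumsum(np.random.default_rng(5).normal(scale=0.1, size=163))
for lag in (30, 60, 90, 120):
    d = lagged_differences(temps, lag)
    print(f"lag {lag:3d}: {len(d)} differences, latest = {d[-1]:+.2f}")
```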
Concluding remarks
Assuming that the data set we used is representative and does not contain substantial errors, the only result that we can present as fact is that in the last 134 years the climate has warmed by 0.6°C (nb. this is a difference of climatic—30-year average—values, while other, often higher, values that appear in the literature refer to trends based on annual values). Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is; again I cannot be certain, as the pseudo-test did not consider the uncertainty in H. Note that the 1% significance level corresponds to ±2.58 standard deviations away from the mean; if we made it ±3, everything would become insignificant.
Irrespective of statistical significance, paleoclimate and instrumental data provide evidence that the natural climatic variability, the natural potential for change, is high and concerns all time scales. The mechanisms producing change are many and, in practice, it is more important to quantify their combined effects, rather than try to describe and model each one separately.
From a practical point of view, it could be dangerous to pretend that we are able to provide a credible quantitative description of the mechanisms, their causes and effects, and their combined consequences: we know that the mechanisms and their interactions are nonlinear, and that the climate model hindcasts are poor [26,27]. Indeed, it has been demonstrated that, particularly for runoff, which is most relevant for water availability and flood risk, deterministically projected future traces can be too flat in comparison to the changes that can be expected (and stochastically generated) admitting stationarity and natural variability characterized by HK dynamics [28].
Biosketch
Demetris Koutsoyiannis received his diploma in Civil Engineering from the National Technical University of Athens (NTUA) in 1978 and his doctorate from NTUA in 1988. He is professor of Hydrology and Analysis of Hydrosystems at the Department of Water Resources and Environmental Engineering of NTUA (and former Head of the Department). He is also Co-Editor of Hydrological Sciences Journal and member of the editorial board of Hydrology and Earth System Sciences (and formerly of Journal of Hydrology and Water Resources Research). He teaches undergraduate and postgraduate courses in hydrometeorology, hydrology, hydraulics, hydraulic works, water resource systems, water resource management, and stochastic modelling. He is an experienced researcher in the areas of hydrological modelling, hydrological and climatic stochastics, analysis of hydrosystems, water resources engineering and management, hydroinformatics, and ancient hydraulic technologies. His record includes about 650 scientific and technological contributions, spanning from research articles to engineering studies, among which 96 publications in peer-reviewed journals. He received the Henry Darcy Medal 2009 from the European Geosciences Union for his outstanding contributions to the study of hydrometeorological variability and to water resources management.
References
[1] IPCC (2007), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp (Annex 1, Glossary: http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-annexes.pdf).
[2] Climate Dialogue Editorial Staff (2013), How persistent is the climate system and what is its implication for the significance of observed trends and for internal variability? (This blog post).
[3] Obama’s 2013 State of the Union Address, The New York Times, http://www.nytimes.com/2013/02/13/us/politics/obamas2013stateoftheunionaddress.html?pagewanted=3&_r=0
[4] Lamb, H. H. (1972), Climate: Past, Present, and Future, Vol. 1, Fundamentals and Climate Now, Methuen and Co.
[5] Lovejoy, S., and D. Schertzer (2013), The climate is not what you expect, Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-12-00094.1.
[6] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495.
[7] Koutsoyiannis, D. (2010), A random walk on water, Hydrology and Earth System Sciences, 14, 585–601.
[8] Hurst, H.E. (1951), Long term storage capacities of reservoirs, Trans. Am. Soc. Civil Engrs., 116, 776–808.
[9] Toussoun, O. (1925), Mémoire sur l'histoire du Nil, in Mémoires à l'Institut d'Egypte, vol. 18, pp. 366-404.
[10] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.
[11] Koutsoyiannis, D., and A. Langousis (2011), Precipitation, Treatise on Water Science, edited by P. Wilderer and S. Uhlenbrook, 2, 27–78, Academic Press, Oxford ( p. 57).
[12] Koutsoyiannis, D. (2013), Encolpion of stochastics: Fundamentals of stochastic processes, 30 pages, National Technical University of Athens, Athens, http://itia.ntua.gr/1317/, accessed 2013-04-17.
[13] Papoulis, A. (1991), Probability, Random Variables and Stochastic Processes, 3rd edn., McGrawHill, New York (p. 635).
[14] Koutsoyiannis, D. (2013), Hydrology and Change, Hydrological Sciences Journal (accepted with minor revisions; currently available in the form of an IUGG Plenary lecture, Melbourne 2011, http://itia.ntua.gr/1135/, accessed 2013-04-17).
[15] Kolmogorov, A. N. (1940), Wiener spirals and some other interesting curves in a Hilbert space, Dokl. Akad. Nauk SSSR, 26, 115-118. English translation in: Tikhomirov, V. M. (ed.) (1991), Selected Works of A. N. Kolmogorov: Mathematics and Mechanics, Vol. 1, Springer, 324-326.
[16] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.
[17] Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.
[18] Tyralis, H., and D. Koutsoyiannis (2011), Simultaneous estimation of the parameters of the Hurst-Kolmogorov stochastic process, Stochastic Environmental Research & Risk Assessment, 25 (1), 21–33.
[19] As Ref. 1, p. 23.
[20] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi: 10.1029/2008GL036228.
[21] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
[22] von Storch, H. (1999), Misuses of statistical analysis in climate research, in Analysis of Climate Variability, Applications of Statistical Techniques, Proceedings of an Autumn School organized by the Commission of the European Community, Edited by H. von Storch and A. Navarra, 2nd updated and extended edition, http://www.hvonstorch.de/klima/books/SNBOOK/springer.pdf, accessed 2013-04.
[23] Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.
[24] Hamed, K. H. (2008), Trend detection in hydrologic data: The Mann-Kendall trend test under the scaling hypothesis, Journal of Hydrology, 349(3-4), 350-363.
[25] Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591.
[26] Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.
[27] Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.
[28] Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenariobased approaches, Journal of Hydrometeorology, 8 (3), 261–281.
Guest blog Armin Bunde
How to estimate the significance of global warming when explicitly taking into account the long-term persistence in temperature?
Armin Bunde, Institut für Theoretische Physik, Universität Giessen, Germany
1. Long-term Persistence in climate and its detection
Long-term persistence (LTP), also called long-term correlations or long-term memory, plays an important role in characterizing records in physiology (e.g. heartbeats), computer science (e.g. internet traffic) and financial markets (volatility). The first hint that LTP is important in climate was given in the classic papers by Hurst, more than 50 years ago, on the historic water levels of the Nile River.
We can distinguish between uncorrelated records (''white noise''), short-term persistent (STP) records and long-term persistent records. In white noise all data points x_{1}, x_{2}, ..., x_{N} are independent of each other. In STP records, each data point x_{i} depends on a short subset of previous points x_{i-1}, x_{i-2}, ..., x_{i-m}, i.e., the memory has a finite range m. In LTP records, in contrast, x_{i} depends on all previous points. The simplest model for STP is the ''AR1 process'', where x_{i} is proportional to the foregoing data point x_{i-1}, plus a white noise component η_{i-1}:

x_{i} = a·x_{i-1} + η_{i-1}, with 0 < a < 1.
Despite the evidence that temperature anomalies cannot be characterized by the AR1 process, most climate scientists have used the AR1 model when trying to describe temperature fluctuations and to estimate the significance of a trend. This usually leads to a considerable overestimation of the external trend and its significance.
There are several methods to quantify the memory in a given sequence (for a recent review see [1] and references therein). The first one is the autocorrelation function C(s), where s = 1, 2, 3, ... is the lag time between two data points. For white noise, there is no memory and C(s) = 0. For the AR1 process, C(s) decays exponentially, C(s) ~ exp{-s/S}, where S = -1/ln a is the ''persistence length''. For infinitely long stationary LTP data C(s) decays algebraically,
C(s) ~ s^{-γ}, 0 < γ < 1,
where γ is called the correlation exponent.
The first figure shows parts of an uncorrelated (left) and a synthetic long-term persistent (right) record, with γ = 0.4. The full line is the moving average over 30 data points. For the uncorrelated data, the moving average is close to zero, while for the LTP data, the moving average can have large deviations from the mean, forming some kind of mountain-valley structure that looks as if it contained some external deterministic trend. The figure shows that it is not a straightforward task to separate the natural fluctuations from an external trend, and this makes the detection of external trends in LTP records difficult. I will return to this later.
Figure 1.
One can show analytically [2] that in LTP records of finite length N, the algebraic dependence of C(s) on s can be seen only for very small time lags s, satisfying the inequality (s/N)^{γ} << 1. Already for γ = 1/2 and records of length 600 (which corresponds to 50 years of monthly data), this condition can only be met for very small time lags, roughly s < 6. For larger time lags, C(s) decays faster than algebraically. This is an artifact of the method, called the ''finite-size'' effect. If one is not aware of this effect, one may be led to the wrong conclusion that there exists no long-term memory in sequences of finite length.
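The numbers quoted above are easy to verify; the following sketch (an added illustration, not part of the original text) evaluates the finite-size condition for γ = 1/2 and N = 600:

```python
# (s/N)**gamma must be much smaller than 1 for the algebraic decay of C(s)
# to be visible; with gamma = 0.5 and N = 600 this already fails above s ~ 6.
N, gamma = 600, 0.5
for s in (3, 6, 30, 60):
    print(s, round((s / N) ** gamma, 2))   # -> 0.07, 0.1, 0.22, 0.32
```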
A similar mistake may happen when one uses the second traditional method for detecting LTP, the power spectrum (spectral density) S(f). The discrete frequency f is equivalent to an inverse lag time, f = 1/s, and is a multiple of 1/N. For white noise, S(f) is constant. For STP data, S(f) is constant for f well below 1/m (since the data are uncorrelated at time lags s above m), and then decreases monotonically.
For LTP records, S(f) decreases by a power law,
S(f) ~ f^{-β}, with β = 1 - γ,
so one may detect LTP also by considering the power spectrum. However, due to the discreteness of f, the algebraic decay cannot be clearly observed at frequencies below 50/N, which again may lead to the wrong conclusion that there is no long-term memory.
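As a hedged illustration of this spectral route (a sketch assuming a plain periodogram fit, not the authors' own procedure), the exponent β can be estimated from the slope of the periodogram in a log-log plot:

```python
import numpy as np

def spectral_exponent(x):
    """Crude estimate of beta in S(f) ~ f**(-beta) from the periodogram.
    Subject to the discreteness caveat above: the lowest frequencies of a
    short record are unreliable."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    f = np.fft.rfftfreq(len(x))[1:]          # discrete frequencies k/N, k >= 1
    P = np.abs(np.fft.rfft(x)[1:]) ** 2      # periodogram
    slope, _ = np.polyfit(np.log(f), np.log(P), 1)
    return -slope                            # beta estimate; gamma = 1 - beta
```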
In addition to these remarkable finite-size and discreteness effects, both methods lead to an overestimation of the LTP in the presence of external deterministic trends.
In recent years, several methods (see, e.g., [1,3]) have been developed with which long-term correlations can be detected in the presence of deterministic polynomial trends. These methods include the detrended fluctuation analysis (DFA2) and the Haar-wavelet analysis (WT2), where linear trends are eliminated systematically. DFA2 is quite accurate in the time window 8 ≤ s < N/4, while WT2 is accurate for 1 ≤ s < N/50. In both methods, one determines a fluctuation function F(s) which measures the fluctuations of the record in time windows of length s around a trend line. For LTP records with correlation exponent γ, F(s) increases as
F(s) ~ s^{α}, with α = 1 - γ/2,
where α is usually called the Hurst exponent. By combining DFA2 and WT2 one can obtain a consistent picture on time scales between s = 1 and s = N/4. For a meaningful analysis, the records should consist of more than N = 500 data points. I would like to emphasize again that in case an external deterministic trend cannot be excluded, the evaluation of the LTP and the determination of the Hurst exponent must be done with trend-eliminating methods such as DFA2 and WT2, as described above.
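For readers who want to experiment, a compact DFA2 sketch follows. It implements the textbook recipe (profile, segment-wise quadratic fits, root-mean-square fluctuations) and is only an illustration, not the exact code behind the figures in this post:

```python
import numpy as np

def dfa2(x, scales):
    """Second-order detrended fluctuation analysis: returns F(s) for each
    window size s. For LTP records F(s) ~ s**alpha with alpha = 1 - gamma/2."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # profile
    F = []
    for s in scales:
        segments = len(y) // s
        sq = []
        for k in range(segments):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 2), t)     # quadratic fit of the
            sq.append(np.mean((seg - trend) ** 2))           # profile removes linear
        F.append(np.sqrt(np.mean(sq)))                       # trends in the record
    return np.array(F)

# alpha is the log-log slope, fitted e.g. in the window 8 <= s < N/4:
# alpha = np.polyfit(np.log(scales), np.log(dfa2(x, scales)), 1)[0]
```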
The second figure summarizes the results of our earlier analysis (for references, see [1]) for a large number of atmospheric and sea surface temperatures as well as precipitation and river runoffs. Each histogram shows how many stations have Hurst exponents around 0.5, 0.55, 0.6, 0.65 and so on.
Figure 2.
For the daily precipitation records and the continental atmospheric temperatures the distribution of Hurst exponents is quite narrow. For daily precipitation, the exponent is close to 0.5, indicating the absence of persistence (see also [3]), while for the daily continental temperature records, the exponent is close to 0.65, indicating a “universal” persistence law. Both laws can be used very efficiently as a test bed for climate models and paleo reconstructions (for references, see [1] and [3]).
There are also more intuitive measures of LTP, and one of them is the distribution of the persistence lengths l in a record (see, e.g., [3]). In temperature data, l describes the lengths of warm and cold periods, respectively, where the temperature anomalies (deviations of the daily or monthly temperature from its seasonal mean) are above or below zero. It is easy to show that the distribution P(l) of the persistence length decays exponentially for uncorrelated data, i.e., ln P(l) ~ -l. For LTP data, P(l) decays via a stretched exponential, ln P(l) ~ -l^{γ}, where γ is the correlation exponent (see [1]). Accordingly, in LTP records large persistence lengths are more frequent, which is intuitively clear.
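The persistence lengths are straightforward to extract from an anomaly record. The sketch below (an added illustration; the input array is hypothetical) collects the lengths of consecutive warm or cold runs, whose histogram can then be compared with the exponential and stretched-exponential forms above:

```python
import numpy as np

def persistence_lengths(anomalies):
    """Lengths l of consecutive runs of same-sign anomalies (warm or cold)."""
    sign = np.sign(anomalies)
    changes = np.flatnonzero(sign[1:] != sign[:-1]) + 1   # run boundaries
    edges = np.concatenate(([0], changes, [len(sign)]))
    return np.diff(edges)

# For uncorrelated data a histogram of these lengths decays exponentially;
# for LTP data it decays as a stretched exponential, so long warm or cold
# periods occur much more often.
```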
2. Detection of external trends in LTP data
For the detection and estimation of external trends (the “detection problem”) one needs a statistical model. For monthly (and annual) temperature records the best statistical model is long-term persistence, as we have seen in the foregoing section. The main features of a long-term persistent record are determined by its length N and the Hurst exponent α. Synthetic LTP records characterized by these two parameters can easily be generated by a Fourier transformation (see, e.g., [1]) with the help of random number generators.
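A minimal version of this Fourier-filtering recipe is sketched below. It is an illustration of the standard method, not necessarily the exact implementation referenced in [1]: white noise is Fourier-transformed, the amplitudes are rescaled to the power-law spectrum S(f) ~ f^{-β} with β = 2α - 1, and the result is transformed back:

```python
import numpy as np

def synth_ltp(N, alpha, rng=None):
    """Synthetic Gaussian LTP record of length N with Hurst exponent alpha."""
    rng = rng or np.random.default_rng()
    beta = 2.0 * alpha - 1.0                 # spectral exponent, beta = 1 - gamma
    f = np.fft.rfftfreq(N)
    amp = np.ones_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)         # impose S(f) ~ f**(-beta)
    spec = amp * np.fft.rfft(rng.standard_normal(N))
    x = np.fft.irfft(spec, n=N)
    return (x - x.mean()) / x.std()          # zero mean, unit variance

x = synth_ltp(1200, alpha=0.65)              # e.g. 100 years of monthly data
```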
When using LTP as the statistical model we assume that there are no additional short-term correlations generated by ''Großwetterlagen'' (blocking situations). Since the persistence length of these short-term correlations is below 14 days, they are not present in monthly data sets.
For the detection problem, one then needs to know the probability W(x) that in a long-term correlated record of length N and Hurst exponent α the relative trend exceeds x. For temperature data, the relative trend is the ratio between the temperature change (determined by a simple regression analysis) and the standard deviation σ around the trend line. For Gaussian LTP data, an analytical expression for W(x), for given α and N, has been derived in [4]; it is easy to implement and also serves as a very good approximation for non-Gaussian data. In order to decide whether a measured relative trend x_{m} may be natural or not, one has to determine the exceedance probability at x_{m}. If W(x_{m}) is below 0.025, the trend is usually called significant (within the 95 percent confidence interval); if it is below 0.005, the trend is called highly significant (within the 99 percent confidence interval).
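The analytical expression of [4] is not reproduced here, but the meaning of W(x) can be illustrated by a brute-force Monte Carlo sketch (an added illustration that reuses synth_ltp from the previous snippet; the number of surrogates is an arbitrary choice):

```python
import numpy as np

def relative_trend(x):
    """Total change over the record from a linear fit, divided by the
    standard deviation sigma around the trend line."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    resid = x - (slope * t + intercept)
    return slope * (len(x) - 1) / resid.std()

def exceedance_probability(x_m, N, alpha, n_surrogates=2000, rng=None):
    """Monte Carlo estimate of W(x_m): the fraction of pure LTP records whose
    relative trend exceeds the measured value x_m."""
    rng = rng or np.random.default_rng(0)
    trends = [relative_trend(synth_ltp(N, alpha, rng)) for _ in range(n_surrogates)]
    return float(np.mean(np.array(trends) > x_m))

# W(x_m) < 0.025 corresponds to "significant", W(x_m) < 0.005 to "highly
# significant" in the terminology used above.
```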
From the condition W(y) = 0.025 one may derive error bars y (within the 95 percent confidence interval) for the expected external trend: the external relative trend lies in the interval between x_{m} - y and x_{m} + y. If x_{m} is slightly below y, then the minimum value of the external trend is negative and thus the trend is not significant. But the maximum value of the external trend can be large, and thus an external trend cannot be excluded, even though the trend is not significant. Accordingly, if a trend is not significant because W(x_{m}) is above 0.025, this does not mean that one can exclude the possibility of an external deterministic trend. It only means that one is not forced to assume an external trend in order to describe the variability of the record properly. For example, if we observe a small insignificant positive trend, then this trend may arise either from the superposition of a strong positive natural fluctuation (as in Fig. 1b) and a small negative external trend, or from a strong negative fluctuation (as in Fig. 1b, but downwards) and a large positive external trend.
These conclusions are independent of the model used and hold also for the STP model. In previous significance analyses, climate scientists usually used the STP model, with the model parameter a determined from measuring C(1) of the data (see Sect. 1). The significance of a trend (see below) is clearly overestimated by this model.
3. Detection of climate change within the LTP model
Using our terminology of “significant” and “highly significant” we have obtained a mixed picture of the significance of temperature records, partly reviewed in [1].
(i) The global sea surface temperature increased, in the past 100 years, by about 0.6 degrees, which is not significant. The reason for this is the large persistence of the oceans, reflected by a large Hurst exponent.
(ii) The global land air temperature, in the past 100 years, increased by about 0.8 degrees. We find this increase to be even highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.
(iii) Local temperatures: In local temperature records it is more difficult to detect external trends due to their large variability. We have studied a large number of local stations around the globe. For stations at high elevation like Sonnblick in Austria or in Siberia, we found highly significant trends. For about half of the other stations, we could not find a significant trend. However, when averaging the records in a certain area, this picture changed. Due to the averaging, the fluctuations around the trend line decrease and the temperature increases become more significant.
Our estimations are basically in line with earlier, less rigorous trend estimations in LTP data by Rybski et al. [5], and in line with the conclusions of Zorita et al. [6], who estimated the probability that 11 of the warmest years in a 162-year-long record all lie in the last 12 years.
My conclusion is that the AR1 process falsely used by climate scientists to describe temperature variability leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower. But even the LTP model does not reject the hypothesis of anthropogenic climate change.
Biosketch
Armin Bunde is professor of theoretical physics in Giessen. After receiving his PhD in theoretical solid state physics at Giessen University, he spent several years as a postdoc in Antwerp, Saarbrücken, and Konstanz. He received a prestigious Heisenberg Fellowship in 1984 and spent three years at Boston University and Bar-Ilan University in Israel, where he worked with H. Eugene Stanley (Boston) and Shlomo Havlin (Israel). In 1985 he received the Carl-Wagner Award. Between 1987 and 1993 he was professor of theoretical physics at Hamburg University; since 1993 he has been back in Giessen.
In the last 20 years, his main research areas have been disordered materials, percolation theory and applications in materials science, as well as fractals and time series analysis in different disciplines, among them geoscience, where he is mainly interested in long-term persistence, extreme events and climate networks. In geoscience, he has cooperated intensively with Hans-Joachim Schellnhuber, Donald Turcotte, Hans von Storch, and Shlomo Havlin. He has published more than 300 papers and his Hirsch index (Google) is well above 50.
References
[1] A. Bunde and S. Lennartz, Acta Geophysica 60, 562 (2012), and references therein
[2] S. Lennartz and A. Bunde, Phys. Rev. E (2009)
[3] A. Bunde, U. Büntgen, J. Ludescher, J. Luterbacher, and H. von Storch, Nature Climate Change 3, 174 (2013); see also the online supplementary information
[4] S. Lennartz and A. Bunde, Phys. Rev. E 84, 021129 (2011)
[5] D. Rybski, A. Bunde, S. Havlin, and H. von Storch, Geophys. Res. Lett. 33, L06718 (2006)
[6] E. Zorita, T. F. Stocker, and H. von Storch, Geophys. Res. Lett. 35, L24706 (2008)
Summary of the Climate Dialogue on Long-term Persistence
Authors: Marcel Crok, Bart Strengers (PBL), Bart Verheggen (PBL), Rob van Dorland (KNMI)
This summary is based on the contributions of Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis who participated in this Climate Dialogue that took place in May 2013.
Long-term persistence and trend significance
“Is the increase in global average temperature statistically significant?” To answer this question one needs to make assumptions about the statistical nature of the temperature time series and to choose which statistical model is most appropriate.
If the temperature of this year is not related to that of last year or next year, we can use statistics to determine whether the increase in global temperature is significant or not. In such an “uncorrelated climate”, i.e. if the temperature of this year is fully independent of that of other years, the average value becomes zero (or a fixed value) quickly and deviations from the mean last only a short time. However, if there is (strong) temporal dependence, the moving average can have large deviations from the mean. This is called long-term dependence or long-term persistence (LTP).
The three participants agree that LTP exists in the climate (Table 1), but they disagree about the exact definition and the physical processes that lie behind it. Benestad and Bunde describe LTP in terms of “long memory”. Koutsoyiannis holds the opinion that LTP is mainly the result of the irregular and unpredictable changes that take place in the climate (Table 2). Both Bunde and Koutsoyiannis are in favour of a formal (mathematical) definition of LTP, which states that on longer time scales climate variability decreases—but not as much as implied by classical statistics.
Benestad said that ice ages and the El Niño Southern Oscillation (ENSO) are examples of LTP processes. Bunde and Koutsoyiannis disagreed (Table 3).
Table 1
Does LTP exist in the climate?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes

Table 2
What is long-term persistence (LTP)?
  Benestad: LTP describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’.
  Bunde: LTP is a process with long memory; the value of a parameter (e.g. temperature) today depends on all previous points.
  Koutsoyiannis: It is unfortunate that LTP has been interpreted as memory; it is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP, while the autocorrelation results from change.

Table 3
Quasi-oscillatory phenomena like ENSO can be described as LTP.
  Benestad: Yes
  Bunde: No
  Koutsoyiannis: No
Is LTP relevant for the detection of climate change?
There was confusion about the exact meaning of the IPCC definition of detection. The definition reads: “Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”
Bunde and Koutsoyiannis both think detection is mainly a matter of statistics while Benestad thinks it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.
Table 4
Is detection purely a matter of statistics?
  Benestad: No, the laws of physics set fundamental constraints
  Bunde: Yes
  Koutsoyiannis: Not purely, but primarily yes
LTP versus AR(1)
Bunde and Koutsoyiannis argue that LTP is the proper model to describe temperature variability, and that climate scientists in general use a short-term persistence (STP) model such as AR(1), which leads to a strong overestimation of the significance of trends (Table 5). Koutsoyiannis showed that the clustering of warm years, for example, is orders of magnitude more likely to happen if you use an LTP model. Benestad agrees that the AR(1) model may not necessarily be the best model. He argues that statistical models in general, and LTP in particular, involve circular reasoning when used for the detection of trends in what is called the instrumental period, because in this period the data embed both “signal” and “noise”, while in his opinion LTP, STP or any other statistical model is meant to describe “the noise” only (Table 6). Koutsoyiannis in response gave a few examples of why, in his opinion, the danger of circular reasoning does not apply in this case (see Extended Summary).
Table 5
Is LTP relevant/important for the statistical significance of a trend?
  Benestad: Yes (though physics is still needed)
  Bunde: Yes, very much
  Koutsoyiannis: Yes, very much

Table 6
What is the relevance of LTP for the detection of climate change?
  Benestad: Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data. State-of-the-art detection and attribution is needed.
  Bunde: For detection and estimation of external trends (the “detection problem”) one needs a statistical model, and LTP is the best model to do this.
  Koutsoyiannis: LTP is the only relevant statistical model for the detection of changes in climate.

Table 7
Is the AR(1) model a valid model to describe the variability in time series of global average temperature?
  Benestad: No, if physics-based information is neglected
  Bunde: No
  Koutsoyiannis: No
Does the AR(1) model lead to overestimation of the significance of trends?
  Benestad: Yes, if you don’t also take into account the physics-based information
  Bunde: Yes
  Koutsoyiannis: Yes
LTP and chaos
There was disagreement about the relation between LTP and chaos (Table 8). According to Benestad, chaos theory implies that the memory of the initial conditions is lost after a finite time interval. Benestad interprets “the system loses memory” as “LTP is not a useful concept”. Koutsoyiannis considers memory a bad interpretation of LTP: it is the change which produces the LTP, and thus LTP is fully consistent with the chaotic behaviour of climate.
Table 8
Is the climate chaotic?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
Does chaos mean memory is lost, and does this apply on climatic timescales as well?
  Benestad: Yes
  Bunde: Chaos is not a useful concept for describing the variability of climate records on longer time scales
  Koutsoyiannis: No; LTP is not memory
Does chaos exclude the existence of LTP?
  Benestad: Yes, at both weather and climatic time scales
  Bunde: No
  Koutsoyiannis: No; on the contrary, chaos can produce LTP
Does chaos contribute to the existence of LTP?
  Benestad: No, but chaos may give an impression of LTP
  Bunde: Yes
  Koutsoyiannis: Yes, LTP does involve chaos
Signal and noise
There was disagreement about concepts like signal and noise. According to Benestad the term “signal” refers to man-made climate change; “noise” usually means everything else, and LTP is ‘noise in slow motion’ (Table 9). Koutsoyiannis argued that the “signal” vs. “noise” dichotomy is subjective and that everything we see in the climate is signal; to isolate one factor and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system. Bunde does assume there is an external deterministic trend from the greenhouse gases, but he calls the remaining part of the total climate signal natural “fluctuations” rather than noise (Table 9). All three seem to agree that one cannot use LTP to make a distinction between forced and unforced changes in the climate (Table 10).
Table 9
Signal versus noise
  Benestad: The signal is man-made climate change; the rest is noise, and LTP is noise in slow motion.
  Bunde: My working hypothesis: there is a deterministic external trend; the rest are natural fluctuations which are best described by LTP.
  Koutsoyiannis: Excepting observation errors, everything we see in climate is signal.

Table 10
Is the signal versus noise dichotomy meaningful?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: No*
Can LTP distinguish between forced and unforced components of the observed change?
  Benestad: No
  Bunde: No
  Koutsoyiannis: *
Can LTP distinguish between natural fluctuations (including natural forcings) and trends?
  Benestad: No
  Bunde: Yes
  Koutsoyiannis: *
* Koutsoyiannis thinks that even the formulation of these questions, which implies that the description of a complex process can be made by partitioning it into additive components and trying to know the signature of each component, indicates a linear view of a system that is intrinsically nonlinear.
Forced versus unforced
According to Bunde, natural forcing plays an important role for LTP and is omnipresent in climate. Koutsoyiannis agreed that (changing) forcing can introduce LTP and that forcing is omnipresent, but noted that LTP can also emerge from the internal dynamics alone.
Table 11
Does forcing introduce LTP?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
Is forcing omnipresent in the real-world climate?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
What, according to you, is the main mechanism behind LTP?
  Benestad: Forcings
  Bunde: Natural forcing plays an important role for the LTP and is omnipresent in climate
  Koutsoyiannis: I believe it is the internal dynamics that determines whether or not LTP would emerge
Is the warming significant?
The three participants gave different answers to the key question of this Climate Dialogue, namely whether the warming in the past 150 years is significant or not. They used different methods to answer the question. Benestad is most confident that the changes in both land and sea temperatures are significant. Bunde concludes that, due to strong long-term persistence, the increase in sea temperatures is not significant, but the increases in land and global temperatures are. Koutsoyiannis concludes that for most time lags the warming is not significant; in some cases it may be.
Table 12
Is the rise in global average temperature during the past 150 years statistically significant?
  Benestad^{I}: Yes
  Bunde^{II}: Yes^{IV}
  Koutsoyiannis^{III}: No^{V}
Is the rise in global average sea surface temperature during the past 150 years statistically significant?
  Benestad: Yes
  Bunde: No
  Koutsoyiannis: No
Is the rise in global average land surface temperature during the past 150 years statistically significant?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: No
^{I} Benestad’s conclusions are based on the difference between GCM simulations with and without anthropogenic forcing (Box 10.1 or Figs 10.1 & 10.7 in AR5).
^{II} Based on the detrended fluctuation analysis (DFA) and/or the wavelet technique (WT).
^{III} Based on the climacogram and different time lags (30, 60, 90 and 120 years).
^{IV} This change is 99% significant according to Bunde.
^{V} For a 90-year time lag and a 1% significance level it may be significant (see guest blog).
Is there a large contribution of greenhouse gases to the warming?
Bunde is more convinced than Koutsoyiannis of a substantial role for greenhouse gases in the climate, although he admits he cannot rule out that the warming on land is (partly) due to urban heating. Bunde said he may not fully agree with Koutsoyiannis: “We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.” Koutsoyiannis believes the influence of greenhouse gases is relatively weak, “so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes”. Benestad, on the other hand, said the increased concentrations of GHGs are the only plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and the accumulation of ocean heat.
Table 13
Is the warming mainly of anthropogenic origin?
  Benestad: The combination of statistical information and physics knowledge leads to only one plausible explanation: GHGs
  Bunde: Yes, it is my working hypothesis
  Koutsoyiannis: No, I think the effect of CO2 is small
Extended summary of the Climate Dialogue on Long-term Persistence
Author: Marcel Crok
With contributions from Bart Verheggen, Rob van Dorland (KNMI) and Bart Strengers (PBL)
Introduction
This summary is based on the contributions of the three invited scientists who participated in the dialogue entitled “Long-term persistence and trend significance”. We want to thank Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis for their participation.
The summary is not meant to be a consensus statement. It’s just a summary of the discussion and should give a good overview of how these three scientists view the topic at this moment. This summary was written by Marcel Crok and then reviewed and adjusted by the other editors of Climate Dialogue and the advisory board members. In some cases the editors disagreed about the text. In the summary we make clear when this is the case.
The summary was then reviewed by the three invited participants. They do not necessarily endorse the full text or our selection of the dialogue. We did ask them to check the claims in all the tables though in order to make these consistent with their views.
Long-term persistence and trend significance
In science one often asks whether a change in some parameter, variable or process is statistically significant. So we could ask: is the increase in global average temperature statistically significant? Whether an observed trend is significant or not is related to its chance of occurrence, and thus to the underlying variability, noise and errors, as well as to their temporal stochastic structure.
Figure 2.14 from the IPCC AR5 WGI report. Global annual average land-surface air temperature (LSAT) anomalies relative to a 1961–1990 climatology from the latest versions of four different data sets (Berkeley, CRUTEM, GHCN and GISS).
The temperature time series in the figure above shows variation on annual and multidecadal scales. But how do we know whether this trend is part of the natural variability of the climate system or due to some forced change, and whether it is significant or not? Is the increase very unlikely, or quite normal in terms of natural variability?
This is a statistical problem, and thus the way to look at it is to make a statistical analysis of the time series to determine the amplitude of natural variability. For instance, to answer the question for the year-to-year variability we would need to know, for every single time step (year), what the chance is to go up or down and how strong these excursions can be. The difficulty is that we don’t have a data set for the “undisturbed” climate, i.e. the climate without anthropogenic influences, which could be used as a reference period to assess the significance of the recent warming trend. It is noted, though, that there are many proxy data sets which can be used to infer the stochastic structure of natural climatic variability. These data sets do not describe the climate precisely, but they can certainly give information on its stochastic structure, and they are free of anthropogenic influences.
With such issues we enter the arena of the Climate Dialogue about long-term persistence (LTP).
Definition of long-term persistence
Both Bunde and Koutsoyiannis showed figures in their guest blogs to explain the difference between independent or uncorrelated data and longterm correlated data. Below is the graph that Bunde showed:
Figure 1 of Bunde’s guest blog showing the difference between uncorrelated data (left) and data with long-term persistence (right). The coefficient γ (gamma) is a measure of persistence.
As he explained: “For the uncorrelated data, the moving average is close to zero, while for the LTP data, the moving average can have large deviations from the mean, forming some kind of mountain-valley structure that looks as if it contained some external deterministic trend. The figure shows that it is not a straightforward task to separate the natural fluctuations from an external trend, and this makes the detection of external trends in LTP records a difficult task.”
Koutsoyiannis said it as follows: “No one would believe that the weather this hour does not depend on that an hour ago. It is natural to assume that there is time dependence in weather. (…) Now, if we average the process to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases the variability decreases—but not as much as implied by classical statistics.”
Benestad gave the following description of LTP: “Longterm persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large space take longer time than local changes.”
So they all accept that LTP ‘exists’ in the climate or is part of climate. There were disagreements though, even about the concept of LTP. Bunde and Koutsoyiannis are both in favour of a formal (mathematical) definition of LTP, which describes what Koutsoyiannis said above, that on longer time scales variability decreases—but not as much as implied by more “classical statistics” like AR(1)[1].
Benestad found this proposition “somewhat artificial” when dealing with temperature time series. He said a great deal of variance is usually removed before the data are analysed, like seasonal variations and the diurnal cycle. “Most of the variance is tied up to these well-known cycles, forced by regional changes in incoming sunlight. Furthermore, ENSO has a time scale of ~3-8 years, and is associated with most of the variance after the seasonal and diurnal scales are neglected.” Elsewhere Benestad said: “There are some known examples of LTP processes, such as the ice ages, changes in the ocean circulation, and the El Niño Southern Oscillation.” Bunde disagreed that ENSO is an LTP process: “Rasmus [Benestad] will recognize that ENSO is not an example of LTP, in the same way as other quasi-oscillatory phenomena cannot be described as LTP.”
So there is confusion/disagreement about what LTP really “is”. The reason could be that for Bunde and Koutsoyiannis LTP is a statistical property of climatic time series and according to Bunde, as such, it is not an “abstract issue”.
A key issue seemed to be whether it is possible to talk about LTP in terms of physical processes. Koutsoyiannis thinks the system is just too complex to talk about simple physical causes for observed changes, and he does not accept the dichotomy of physics vs. statistics, since in complex physical systems a statistical description is the most pertinent and the most powerful.
According to Koutsoyiannis it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. For him it is “the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP”. For Benestad LTP is a manifestation of memory in the climate system.
Summary
To answer the question “is the increase in global average temperature statistically significant?” one needs to make assumptions about the statistical nature of the time series and one needs to choose what statistical model is the most appropriate.
If the temperature of this year is not related to that of last year or next year, we can use classical statistics to determine whether the increase in global temperature is significant or not. In such an “uncorrelated climate” the average value becomes zero (or a fixed value) quickly and deviations from the mean last only a short time. If there is (strong) temporal dependence, though, the moving average can have large deviations from the mean. Bunde and Koutsoyiannis claim the climate displays such long-term dependence or long-term persistence (LTP).
The three participants agree that LTP exists in the climate (Table 1). They disagree about the exact definition though and about the physical processes that lie behind it. Benestad and Bunde describe LTP in terms of “long memory”. Koutsoyiannis says that in his opinion LTP is mainly the result of the irregular and unpredictable changes that take place in the climate (Table 2). Bunde and Koutsoyiannis are both in favour of a formal (mathematical) definition of LTP, which states that on longer time scales climate variability decreases—but not as much as implied by “classical statistics” such as AR(1).
Benestad said that ice ages and the El Niño Southern Oscillation are examples of LTP processes. Bunde and Koutsoyiannis disagreed and said that quasioscillatory phenomena cannot be described as LTP (Table 3).
Table 1
Does LTP exist in the climate?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes

Table 2
What is long-term persistence (LTP)?
  Benestad: LTP describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’.
  Bunde: LTP is a process with long memory; the value of a parameter (e.g. temperature) today depends on all previous points.
  Koutsoyiannis: It is unfortunate that LTP has been interpreted as memory; it is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP, while the autocorrelation results from change.

Table 3
Quasi-oscillatory phenomena like ENSO can be described as LTP.
  Benestad: Yes
  Bunde: No
  Koutsoyiannis: No
Is LTP relevant for the detection of climate change?
The full IPCC definitions of detection and attribution in AR5 are (our emphasis)[2]:
“Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”
Attribution is defined as “the process of evaluating the relative contributions of multiple causal factors to a change or event with an assignment of statistical confidence”. As this wording implies, attribution is more complex than detection, combining statistical analysis with physical understanding.
The definition of detection has been differently interpreted by the members of the Editorial Staff:
Interpretation 1 (Rob van Dorland, Bart Verheggen): the second part clarifies the first part, namely that you need a (defined) statistical model to distinguish between forced and unforced (internal variability) change. The first part states that this is done without knowing the cause of the forced change, i.e. whether it is anthropogenic or natural (sun, volcanoes etc.).
Interpretation 2 (Marcel Crok): The first part of the definition suggests that you only need statistics to do detection. The second part suggests you need more than statistics (physical models), unless a statistical method would be able to distinguish between forced changes and internal variability. So the definition is self-contradictory.
Bunde and Koutsoyiannis both think detection is mainly a matter of statistics while Benestad thinks it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.
Bunde wrote: “For detection and estimation of external trends (“detection problem”) one needs a statistical model.” Koutsoyiannis preferred the word “primarily” over “purely”: “I would say it is primarily a statistical problem, but I would not use the adverb “purely”. Besides, as we wrote in Koutsoyiannis/Montanari (2007)[3], even the very presence of LTP should not be discussed using merely statistical arguments.”
Benestad wrote: “Hence, the diagnosis (“detection”) of a climate change is not purely a matter of statistics. The laws of physics set fundamental constraints which let us narrow down to a small number of ‘suspects’. For complete probability assessment, we need to take into account both the statistics and the physicsbased information, such as the fact that GHGs absorb infrared light and thus affect the vertical energy flow through the atmosphere.”
Bart Verheggen wrote the following analysis of this part of the discussion: “This discussion showed that the participants used a slightly different operational definition of detection. Benestad followed the first interpretation of the IPCC definition, i.e. testing the significance of observed changes relative to what is expected from only unforced internal variability. Bunde and Koutsoyiannis take detection to mean testing the significance of observed change w.r.t. some reference period without anthropogenic forcings (but with natural forcings). The latter definition in effect sets a higher bar for detection than the former (as the observed trend has to exceed not just unforced internal variability, but also the effect of natural forcings). These differences are probably rooted in different perceptions of what internal variability is (and whether or not it is different in principle from natural forcings).”
Summary
There was confusion about the exact meaning of the IPCC definition of detection. The definition reads: “Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”
Bunde and Koutsoyiannis both think detection is mainly a matter of statistics and that it is very relevant for the detection of climate change. Benestad on the other hand thinks detection is not mainly a statistical issue and that it also involves a physical interpretation of distinguishing unforced internal variability from forced changes.
Table 4
Is detection purely a matter of statistics?
  Benestad: No, the laws of physics set fundamental constraints
  Bunde: Yes
  Koutsoyiannis: Not purely, but primarily yes
LTP versus AR(1)
Bunde’s main conclusion in his guest blog was: “My conclusion is that the AR1 process falsely used by climate scientists to describe temperature variability leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower.” The AR(1) process refers to the simplest model for short-term persistence (STP). So Bunde is saying several things here: 1) LTP is the proper model to describe temperature variability; 2) climate scientists in general use an STP model like AR(1); and 3) this leads to a strong overestimation of the significance of trends.
In a comment Bunde added that “This crucial mistake appeared also in the IPCC report [AR4] since the authors were (…) not aware of the LTP of the climate. They assumed STP [in table 3.2] and thus got the trend estimations wrong by overestimating the significance.”
Koutsoyiannis agrees with Bunde’s conclusions. In figure 1 of his guest blog he showed that the clustering of warm years, for example, is orders of magnitude more likely to happen if you use an LTP model. “We may see, for example, that what, according to the classical statistical perception, would require the entire age of the Earth to occur once (i.e. clustering of 8-9 events) is a regular event for an HK [Hurst-Kolmogorov] climate[4], with probability on the order of 1-10%.” He added that “this dramatic difference can help us understand why the choice of a proper stochastic model is relevant for the detection of changes in climate.” In a comment he also said that “a Markov [AR(1)] process […] finally produces a static climate […]. The truth is, however, that climate on Earth has never been static.”
Benestad agrees that “the AR1 model may not necessarily be the best model”, but adds that “it is difficult to know exactly what the noise looks like in the presence of a forced signal.” Elsewhere he wrote: “The important assumptions are therefore that the statistical trend models, against which the data are benchmarked, really provide a reliable description of the noise.”
In the discussion of this summary Benestad disagreed with Bunde’s claim that the AR(1) process is falsely used by climate scientists and the IPCC. According to Benestad, Table 3.2 in AR4 as mentioned by Bunde is not seriously arguing that internal variability is AR(1), but merely uses this method as a crude estimate of the trend significance for that particular plot. Benestad: “The relevant question is whether the trend is anthropogenic or due to LTP (or signal versus noise) and to answer this question, you must look at chapter 9 in AR4 on detection and attribution and in particular figure 9.5 on the comparison between global mean surface temperature anomalies from observations and model simulations, and not at Table 3.2. In chapter 9 there are zero hits on ‘AR(1)’.”
He warns of the danger of circular reasoning when using statistical models. “It is the way models are used that really matters, rather than the specific model itself. All models are based upon a set of assumptions, and if these are violated, then the models tend to give misleading answers. Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data, because these data embed both signal and noise.”
This is a key argument of Benestad. He claims statistical models are useless when applied to what is called the instrumental period, because in this period the data embed both “signal” and “noise”, while in his opinion LTP, STP or any other statistical model is meant to describe “the noise” only. Benestad therefore favours other methods: “State-of-the-art detection and attribution work do not necessarily rely on the AR1 concept, but use results from climate models and error-covariance matrices based on the model results to evaluate trends, rather than simple AR(1) methods.”
Koutsoyiannis in response gave a few examples of why, in his opinion, the danger of circular reasoning does not apply in this case. In his first example he divided the global average time series into two parts: “The HadCrut4 data set is 163 years long. So, let us exclude the last 63 years and try to estimate H [the Hurst coefficient][5] based on the 100-year-long period 1850-1949. The Hurst coefficient estimate becomes 0.93 instead of 0.94 for the entire period.” So he disagrees that the global average temperature time series cannot be used because the record is ‘contaminated’ by anthropogenic forcing. He also referred to analyses of proxies by Koutsoyiannis and Montanari (2007), who estimated high values of the Hurst coefficient (H between 0.86 and 0.93) for the period 1400-1855, and by Markonis and Koutsoyiannis (2013)[6], who showed that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years.
However, Benestad is in favour of separating forced and unforced climate change (as the definition of detection implies), and part of the 1850-1949 temperature changes are due to (natural) forcing. This, he argues, makes it difficult to draw conclusions like Koutsoyiannis did. There was no further discussion on this issue.
Summary
Bunde and Koutsoyiannis argue that LTP is the proper model to describe temperature variability, that climate scientists (and the IPCC) in general use an STP model like AR(1), and that this leads to a strong overestimation of the significance of trends (Tables 5 and 7). Koutsoyiannis showed that the clustering of warm years, for example, is orders of magnitude more likely to happen if you use an LTP model. Benestad agrees that the AR(1) model may not necessarily be the best model, but in his opinion statistical models in general are useless when applied to what is called the instrumental period, because in this period the data embed both “signal” and “noise”, while LTP, STP or any other statistical model is meant to describe “the noise” only: “Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data because this data embed both signal and noise.” (Table 6). Koutsoyiannis in response gave a few examples of why, in his opinion, the danger of circular reasoning does not apply in this case.
Table 5
Is LTP relevant/important for the statistical significance of a trend?
  Benestad: Yes (though physics is still needed)
  Bunde: Yes, very much
  Koutsoyiannis: Yes, very much

Table 6
What is the relevance of LTP for the detection of climate change?
  Benestad: Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data. State-of-the-art detection and attribution is needed.
  Bunde: For detection and estimation of external trends (the “detection problem”) one needs a statistical model, and LTP is the best model to do this.
  Koutsoyiannis: LTP is the only relevant statistical model for the detection of changes in climate.

Table 7
Is the AR(1) model a valid model to describe the variability in time series of global average temperature?
  Benestad: No, if physics-based information is neglected
  Bunde: No
  Koutsoyiannis: No
Does the AR(1) model lead to overestimation of the significance of trends?
  Benestad: Yes, if you don’t also take into account the physics-based information
  Bunde: Yes
  Koutsoyiannis: Yes
LTP and chaos
There was disagreement about the relation between LTP and chaos. Obviously, the participants agree that the climate system possesses a chaotic component, but they differ on the extent and time scales of this component. For Benestad the implication is that LTP cannot be a valid concept for the climate on longer time scales. For example, in his first reaction to Bunde’s guest blog Benestad wrote: “I presently think one major weakness in your reasoning is [when you say that] ‘in LTP records, in contrast, x_{i} depends on all previous points.’ This cannot be true if the weather evolution is chaotic, where the weather system loses the memory of the initial state after some bifurcation point.”
In another comment Benestad wrote: “The so-called ‘butterfly effect’, an aspect of ‘chaos’ theory, is well-established with meteorology, which means there is a fundamental limit to the predictability of future weather due to the fact that the system loses the memory of the initial state after a certain time period. (…) For geophysical processes, chaos plays a role and may give an impression of LTP, and still the memory of the initial conditions is lost after a finite time interval.”
Koutsoyiannis referred to some of his papers and said that yes, “these publications show that LTP does involve chaos.” In another comment that dealt with untangling the different causes of climate change he said: “in chaotic systems described by nonlinear equations, the notion of a cause may lose its meaning as even the slightest perturbation may lead, after some time, to a totally different system trajectory (cf. the butterfly effect).” So Koutsoyiannis and Benestad largely agree about how chaotic systems behave. The main difference though is that Benestad interprets “the system loses memory” as “LTP is not a useful concept” and here Koutsoyiannis and Bunde disagree with him. In particular, Koutsoyiannis considers memory as a bad interpretation of LTP: it is the change which produces the LTP and thus LTP is fully consistent with the chaotic behaviour of climate.
Summary
There was disagreement about the relation between LTP and chaos (Table 8). According to Benestad, chaos theory implies that the memory of the initial conditions is lost after a finite time interval. Benestad interprets “the system loses memory” as “LTP is not a useful concept”. Koutsoyiannis considers memory a bad interpretation of LTP: it is the change which produces the LTP, and thus LTP is fully consistent with the chaotic behaviour of climate.
Table 8
Is the climate chaotic?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
Does chaos mean memory is lost, and does this apply on climatic timescales as well?
  Benestad: Yes
  Bunde: Chaos is not a useful concept for describing the variability of climate records on longer time scales
  Koutsoyiannis: No; LTP is not memory
Does chaos exclude the existence of LTP?
  Benestad: Yes, at both weather and climatic time scales
  Bunde: No
  Koutsoyiannis: No; on the contrary, chaos can produce LTP
Does chaos contribute to the existence of LTP?
  Benestad: No, but chaos may give an impression of LTP
  Bunde: Yes
  Koutsoyiannis: Yes, LTP does involve chaos
Signal and noise
There was disagreement about concepts like signal and noise. In his guest blog Benestad wrote: “The term ‘signal’ can have different meanings depending on the question, but here it refers to man-made climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.”
Koutsoyiannis disagreed with this distinction: “I would never agree with your term “noise” to describe the natural change. Nature’s song cannot be called “noise”. Most importantly, your “signal” vs. “noise” dichotomy is something subjective, relying on incapable deterministic (climate) models and on, often misused or abused, statistics.”
In another comment Koutsoyiannis elaborated on this point: “The climate evolution is consistent with physical laws and is influenced by numerous factors, whether these are internal to what we call climate system or external forcings. To isolate one of them and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system.”
Bunde seems to take a position in between Benestad and Koutsoyiannis. He does assume – as a working hypothesis – that there is an external deterministic trend from the greenhouse gases, but he calls the remaining part of the total climate signal natural “fluctuations” and not noise. Bunde: “we have to note that we distinguish between natural fluctuations and trends. When looking at a LTP curve, we cannot say a priori what is trend and what is LTP. (...) The LTP is natural, the trend is external and deterministic.”
The distinction in signal and noise is another way of stating what detection aims to do: distinguishing whether the (forced) changes are significantly outside of the bounds of the unforced or internal variability. All three appear to agree that purely based on LTP, this distinction can’t be made.
Summary
There was disagreement about concepts like signal and noise. According to Benestad the term ‘signal’ refers to man-made climate change; ‘noise’ usually means everything else, and LTP is ‘noise in slow motion’ (Table 9). Koutsoyiannis argued that the “signal” vs. “noise” dichotomy is subjective and that everything we see in the climate is signal; to isolate one factor and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system. Bunde does assume there is an external deterministic trend from the greenhouse gases, but he calls the remaining part of the total climate signal natural “fluctuations” rather than noise (Table 9). All three seem to agree that one cannot use LTP to make a distinction between forced and unforced changes in the climate (Table 10).
Table 9
Signal versus noise
  Benestad: The signal is man-made climate change; the rest is noise, and LTP is noise in slow motion.
  Bunde: My working hypothesis: there is a deterministic external trend; the rest are natural fluctuations which are best described by LTP.
  Koutsoyiannis: Excepting observation errors, everything we see in climate is signal.

Table 10
Is the signal versus noise dichotomy meaningful?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: No*
Can LTP distinguish between forced and unforced components of the observed change?
  Benestad: No
  Bunde: No
  Koutsoyiannis: *
Can LTP distinguish between natural fluctuations (including natural forcings) and trends?
  Benestad: No
  Bunde: Yes
  Koutsoyiannis: *
* Koutsoyiannis thinks that even the formulation of these questions, which implies that the description of a complex process can be made by partitioning it into additive components and trying to know the signature of each component, indicates a linear view of a system that is intrinsically nonlinear.
Forced versus unforced
In our introduction we introduced the three climate influences that climate scientists distinguish:
“Most experts agree that three types of processes (internal variability, natural and anthropogenic forcings) play a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (< 10 %) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’ which include not only observations but also physical insights in the climate processes.”
There was a lot of discussion about the physical mechanisms behind LTP. Bart Verheggen of the Climate Dialogue team asked a series of questions about this: Can we agree that forcing introduces LTP? Can we agree that forcing is omnipresent for the real world climate? Is LTP mainly internal variability or the result of a combination of internal variability and natural forcings?
Bunde replied that “Natural Forcing plays an important role for the LTP and is omnipresent in climate (so yes and yes to first two questions).” Koutsoyiannis also agreed that “(changing) forcing can introduce LTP and that it [forcing] is omnipresent. But LTP can also emerge from the internal dynamics alone as the above examples show. Actually, I believe it is the internal dynamics that determine whether or not LTP would emerge.”
Verheggen concluded: “All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP cannot be used to distinguish forced from unforced changes in global average temperature. The omnipresence of both unforced and forced changes means that it’s very difficult (if not impossible) to know the LTP signature of each. Therefore, LTP by itself doesn’t seem to provide insight into the causal relationships of change. It is however relevant for trend significance, but fraught with challenges since the unforced LTP signature is not known.”
Summary
According to Bunde, natural forcing plays an important role for LTP and is omnipresent in climate. Koutsoyiannis agreed that (changing) forcing can introduce LTP and that forcing is omnipresent, but noted that LTP can also emerge from the internal dynamics alone.
Table 11
Does forcing introduce LTP?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
Is forcing omnipresent in the real-world climate?
  Benestad: Yes
  Bunde: Yes
  Koutsoyiannis: Yes
What, according to you, is the main mechanism behind LTP?
  Benestad: Forcings
  Bunde: Natural forcing plays an important role for the LTP and is omnipresent in climate
  Koutsoyiannis: I believe it is the internal dynamics that determines whether or not LTP would emerge
Is the warming significant?
This brings us to one of the key questions in this Climate Dialogue: do you conclude there is a significant warming trend? The participants used different models and methods to answer this question, and understanding their different views requires a detailed understanding of these methods, which is outside the scope of this dialogue. So here we just mention the differences and focus on the results.
Benestad preferred to use a regression analysis of the global average temperature against known climate forcings as these may be considered as additional information with respect to any statistical model. The results are shown in his figure 1:
Benestad’s Figure 1. The recorded changes in the global mean surface temperature over time (red). The grey curve shows a model calculation of this temperature based on greenhouse gases (GHGs), ozone (O_{3}), and changes in the sun (S_{0}).
Benestad: “The probability that this fit [in the regression analysis] is accidental is practically zero if we assume that the temperature variations from year to year are independent of each other. LTP and the ocean’s inertia will imply that the degrees of freedom are lower than the number of data points, making it somewhat more likely to happen even by chance.” Benestad says it is very likely that the main physical causes of the change are clear and that greenhouse gases are the main contributors to the warming since the middle of the 20th century (as also illustrated by figure 9.15 in AR4 or figure 10.7 in AR5).
Koutsoyiannis asked Benestad whether his model shown in his Figure 1 is free of circular reasoning, which means that at least he has split the data into two periods for modelling and validation. Benestad left the question unanswered and there was no further discussion on this issue.
Bunde and Koutsoyiannis use different statistical methods. Bunde explained that “nowadays, there is a large number of methods available that is able to detect the natural fluctuations in the presence of simple monotonous trends. Two of them are the detrended fluctuation analysis (DFA) and the wavelet technique (WT).”
Based on these methods Bunde reached the following conclusions:
“(i) The global sea surface temperature increased, in the past 100 years, by about 0.6 degrees, which is not significant. The reason for this is the large persistence of the oceans, reflected by a large Hurst exponent.
(ii) The global land air temperature, in the past 100 years, increased by about 0.8 degrees. We find this increase even highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.”
Koutsoyiannis used a different method to identify and quantify the LTP which he calls a climacogram. Koutsoyiannis is the most ‘skeptical’ of the three participants when it comes to the significance of trends: “Assuming that the data set we used is representative and does not contain substantial errors, the only result that we can present as fact is that in the last 134 years the climate has warmed by 0.6°C (this is a difference of climatic, i.e. 30-year average, values, while other, often higher, values that appear in the literature refer to trends based on annual values). Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is.”
Koutsoyiannis’ Figure 5: testing lagged climatic differences based on the HadCrut4 data set (1850–2012). Differences are not statistically significant according to Koutsoyiannis, except maybe for the 90-year lag.
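The basic aggregated-variance idea behind a climacogram can be sketched in a few lines. The code below is our illustration under simplifying assumptions; it omits the estimator bias corrections that Koutsoyiannis emphasizes, so it is not his exact procedure.

```python
import numpy as np

def hurst_from_climacogram(x, scales):
    """Rough Hurst estimate from the slope of log std(k-means) vs log k.

    For a stationary Hurst-Kolmogorov process, the std of k-term averages
    scales as k**(H - 1), so H = 1 + slope. Bias corrections are omitted.
    """
    sig = []
    for k in scales:
        n = len(x) // k
        sig.append(x[:n * k].reshape(n, k).mean(axis=1).std())
    slope = np.polyfit(np.log(scales), np.log(sig), 1)[0]
    return 1.0 + slope

# white noise gives H ~ 0.5; persistent records give H closer to 1
x = np.random.default_rng(1).standard_normal(8192)
print(hurst_from_climacogram(x, [4, 8, 16, 32, 64]))
```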
When asked specifically if his results mean that “detection” has not yet taken place, Koutsoyiannis replied: “Yes, I believe it has not taken place. Whether it comes close: It is likely.”
Koutsoyiannis argues that the current temperature signal is not outside the bounds of what could be expected from natural forced and unforced changes, thereby using a stricter criterion (1%) than the standard definition of “detection” (5%). He bases his statement on a higher Hurst coefficient than Bunde does, which partly explains why Koutsoyiannis and Bunde do not reach exactly the same conclusions.
Summary
The three participants gave different answers to the key question of this Climate Dialogue, namely whether the warming in the past 150 years is significant or not. They used different methods to answer the question. Benestad is most confident that the changes in both land and sea temperatures are significant. Bunde concludes that, due to a high Hurst parameter, the rise in sea temperatures is not significant, but the rises in land and global temperatures are. Koutsoyiannis concludes that for most time lags the warming is not significant; in some cases it may be.
Table 12

Question | Benestad (I) | Bunde (II) | Koutsoyiannis (III)
Is the rise in global average temperature during the past 150 years statistically significant? | Yes | Yes (IV) | No (V)
Is the rise in global average sea surface temperature during the past 150 years statistically significant? | Yes | No | No
Is the rise in global average land surface temperature during the past 150 years statistically significant? | Yes | Yes | No

(I) Benestad’s conclusions are based on the difference between GCM simulations with and without anthropogenic forcing (Box 10.1 or Figs 10.1 & 10.7 in AR5).
(II) Based on the detrended fluctuation analysis (DFA) and/or the wavelet technique (WT).
(III) Based on the climacogram and different time lags (30, 60, 90 and 120 years).
(IV) This change is 99% significant according to Bunde.
(V) For a 90-year time lag and a 1% significance level it may be significant (see Koutsoyiannis’ guest blog).
Is there a large contribution of greenhouse gases to the warming?
While Bunde and Koutsoyiannis share similar views about the importance of LTP for detection, there are some differences as well which are reflected in the table above. Koutsoyiannis for example does not agree with Bunde’s conclusion that the increase in the global land air temperature in the past 100 years is significant.
Bunde and Koutsoyiannis seem to disagree about the level of LTP (i.e. the value of the Hurst coefficient) in land surface temperature records. Koutsoyiannis believes that on climatic time scales, sea surface temperatures (SSTs) and land surface temperatures (LSTs) should be highly correlated: “I believe if you accept that the sea surface temperature has strong LTP, then logically the land temperature will have too, so I cannot agree that the latter has “comparatively low persistence”. (…) I believe climates on sea and land are not independent to each other—particularly on the long term.” Bunde though thinks that the persistence in the SSTs is higher than that in LSTs.
Bunde is more convinced of a substantial role for greenhouse gases on the climate than Koutsoyiannis although he admits he cannot rule out that the warming is (partly) due to urban heating. “First of all, from our trend significance calculations we can see, without any doubt, that there is an external temperature trend which cannot be explained by the natural fluctuations of the temperature anomalies. We cannot distinguish between Urban Warming and GHG here, but there are places on the globe where we do not expect urban warming but we still see evidence for an external trend, so we may conclude that it is GHG.”
In another comment Bunde wrote: “as a consequence of the LTP in the temperature data, the error bars are very large, considerably larger than for shortterm persistent records. But nevertheless, except for the global sea surface temperature, we have obtained strong evidence from this analysis that the present warming has an anthropogenic origin.” And in another comment: “Regarding GHG [greenhouse gases] I may not fully agree with Demetris [Koutsoyiannis]: We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.”
When we asked Koutsoyiannis whether he believes the influence of greenhouse gases is small he answered: “Yes, I believe it is relatively weak, so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes. In a perpetually varying climate system, GHG and temperature are not connected by a linear, oneway and onetoone, relationship. I believe climate models and the thinking behind them have resulted in oversimplifying views and misleading results. As far as climate models are not able to reproduce a climate that (a) is chaotic and (b) exhibits LTP, we should avoid basing conclusions on them.”
Benestad on the other hand wrote: “The combination of statistical information and physics knowledge lead to only one plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat. The explanation is the increased concentrations of GHGs.”
Summary
Bunde is more convinced of a substantial role for greenhouse gases on the climate than Koutsoyiannis, although he admits he cannot rule out that the warming on land is (partly) due to urban heating. Bunde said he may not fully agree with Koutsoyiannis: “We cannot show in our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and that we find is significant, but it is my working hypothesis.” Koutsoyiannis believes the influence of greenhouse gases is relatively weak, “so weak that we cannot conclude with certainty about quantification of causative relationships between GHG and temperature changes”. Benestad on the other hand said the increased concentrations of GHGs are the only plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat.
Table 13

Question | Benestad | Bunde | Koutsoyiannis
Is the warming mainly of anthropogenic origin? | The combination of statistical information and physics knowledge leads to only one plausible explanation: GHGs | Yes, it is my working hypothesis | No, I think the effect of CO2 is small
[1] In statistics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it describes certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values. It is a special case of the more general ARMA model of time series. For more details, see Wikipedia.
[2] Section 10.2.1 in AR5.
[3] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
[4] Hurst-Kolmogorov is a term that Koutsoyiannis has introduced and which is synonymous with LTP. In his guest blog he explains where it comes from: “A decade before Hurst detected LTP in natural processes, Andrey Kolmogorov devised a mathematical model which describes this behaviour using one parameter only, i.e. no more than in the Markov [AR(1)] model. We call this model the Hurst-Kolmogorov (HK) model.”
[5] The Hurst coefficient H is a measure of longterm persistence. H is a number between 0.5 and 1. The closer to 1, the more persistent a system is.
[6] Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Welcome back to Climate Dialogue for our second dialogue. We slightly changed our procedure. This time we asked the invited experts to give a first reaction on the guest blogs of the two other discussants. We hope this will provide a guide for the discussion that follows.
Marcel Crok
First comments on the trend discussion by Armin Bunde:
I think that Bunde provides a nice overview of the statistics behind longterm persistence (LTP) and the mathematical theory. He also provides some examples which suggest that there are a number of areas where processes exhibit LTP. However, I’m not convinced that the climate system necessarily fulfils all the assumptions made in the LTP analysis.
It is especially the proposition that any observed number x_i in an LTP record depends on all previous points which has not been established for geophysical processes. Lorenz [1] published in 1963 the concept of deterministic nonperiodic flow, now known as ‘chaos’ [2].
The so-called ‘butterfly effect’, an aspect of chaos theory, is well established in meteorology. It means there is a fundamental limit to the predictability of future weather, because the system loses the memory of its initial state after a certain time period. The reason for this is that the future outcome is sensitive to infinitesimally small differences in the description of the state of the atmosphere. This sensitivity can be estimated through a set of Lyapunov exponents.
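As an illustration of this last point, a sketch like the following (our code, using the classic Lorenz-63 system rather than a weather model) estimates the largest Lyapunov exponent from the divergence of two nearly identical trajectories.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    # one fourth-order Runge-Kutta step
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                 # spin-up onto the attractor
    a = rk4_step(a, dt)
b = a + np.array([1e-9, 0.0, 0.0])    # infinitesimally perturbed twin

log_sep = []
for _ in range(1500):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    log_sep.append(np.log(np.linalg.norm(a - b)))

# the slope of log-separation vs. time estimates the largest Lyapunov exponent
t = dt * np.arange(1, len(log_sep) + 1)
print(np.polyfit(t, log_sep, 1)[0])   # ~0.9 for the Lorenz-63 system
```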
Failure to predict long-range climate variability with statistical models suggests that there is little predictable precursory signal (memory) on time scales longer than months over large parts of the world. El Niños are notoriously hard to predict from one year to the next.
Nevertheless, chaotic systems will look like LTP processes because the strange attractors describing the state of the system tend to reside in certain ‘regimes’ for some time before flipping over to another regime. So does it matter if a chaotic process looks like LTP even though the memory of the initial conditions is ‘lost’ after some time? I haven’t really thought about this before.
It is true that e.g. temperature cannot be characterized by the AR(1) process, and this is because we know a priori that it is not just noise. The temperature is physically forced by a number of processes, be it changes in the Earth’s orbit around the sun, changes in the sun, geological changes, volcanic eruptions, or changes in the ocean currents. We can simulate such changes with climate models.
While the AR(1) model may not necessarily be the best model, it is difficult to know exactly what the noise looks like in the presence of a forced signal. State-of-the-art detection and attribution work does not necessarily rely on the AR(1) concept, but uses results from climate models and error-covariance matrices based on the model results to evaluate trends, rather than simple AR(1) methods.
Although there are ways to deal with trends in data in the context of LTP analyses, there are drawbacks associated with the uncertainties of the different models, e.g. specifying polynomial trends and their coefficients. Furthermore, a combination of a Fourier transformation and a random number generator (‘phase scrambling’) will produce numbers which mimic the statistical characteristics of the original data, but will not easily distinguish between signal and noise.
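Phase scrambling itself is straightforward to demonstrate. The sketch below (illustrative code) produces a surrogate series with the same power spectrum, and hence the same apparent persistence, as the input, while scrambling any deterministic signal into noise.

```python
import numpy as np

def phase_scramble(x, rng=None):
    # surrogate with the same power spectrum as x but randomized Fourier phases
    rng = np.random.default_rng() if rng is None else rng
    mean = x.mean()
    X = np.fft.rfft(x - mean)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0        # keep the zero-frequency (mean) component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0   # keep the Nyquist component real for even lengths
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + mean

# the surrogate reproduces the spectrum/autocorrelation of x, but any
# deterministic structure (e.g. a forced trend) is scrambled into 'noise'
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(512))   # a persistent-looking toy series
surrogate = phase_scramble(x, rng)
```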
I would not be surprised if trend tests suggest that the longterm increase in global mean sea surface temperatures does not qualify as statistically significant, and the reason for that is a number of ocean processes rather than the inertia that resides in the oceans. We know that ocean circulation patterns such as the El Niño Southern Oscillation, the Atlantic Meridional Overturning Circulation, and the Pacific Decadal Oscillation are responsible for slow variations with time scales of months to decades.
If we look at other quantities, such as the ocean heat content and the global mean sea level, the trends are more prominent while such variations are small in comparison.
1. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).
2. Gleick, J. Chaos. (Cardinal, 1987).
First comments on the trend discussion by Demetris Koutsoyiannis:
Koutsoyiannis offers a definition of ‘climate’, which can be summarised as ‘what are the ranges and how often can I expect a particular weather event to take place?’ However, climate is not just statistics, but also physics: the flow of energy and transport of matter.
Modern climatology has drawn on experience from weather forecasting and physics in addition to statistics, and the success of daily operational weather forecasting provides convincing evidence that we do have a good understanding of the role various processes in the atmosphere and the oceans play.
The question of whether 30 years is an appropriate time scale for defining climatic normals may seem a bit academic; it is a practical time horizon in terms of the time scale of a generation or the lifetime of a construction. To a great extent, climatology evolved from the need to provide society with guidance on how to design infrastructure and plan agriculture. However, this is different from the discussion regarding climate change.
As far as I know, it is well known that the real number of degrees of freedom is less than the number of observations due to persistence and autocorrelation. This has long been tacit knowledge, and the seeming persistence has to do with the chaotic nature of the weather system.
I disagree with him and think there is no static-climate assumption behind the “weather vs. climate” dichotomy; it is a question of probabilities and predicting the probability density function (pdf). Nobody says that the pdf needs to be constant. In fact, the definition of a climate change is just that: a shift in the pdf over time.
The question of whether we are now seeing a trend may be answered differently depending on one’s assumptions. Does it mean that the pdf for the temperature now is different from that of the past? If it is, then that is of relevance to society and it may call for actions to adapt. Is it something we ought to expect because it happens all the time, as Koutsoyiannis suggests? Alternatively, we may ask whether the recent 11 warmest temperatures would have been possible at all without a forcing such as GHGs.
I think that we may be asking different questions.
Furthermore, we make different assumptions. I think that the following assertion is invalid in the context of geophysical processes: “by averaging to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one”. The reason is that different known physical processes take place with different preferred time scales.
I also do not see that the idea of longterm persistence (LTP) is omnipresent. Take classical statistical physics, for instance: classical statistics works quite well, and the concept of temperature is indeed an aggregate based on classical statistics in the absence of LTP.
For geophysical processes, chaos plays a role and may give an impression of LTP, and still the memory of the initial conditions is lost after a finite time interval.
While the case of the Nile is interesting, I do not think it is a valid analogy for the global mean temperature – the physical processes are just not the same.
The Nile river flow is influenced by the precipitation over one catchment, which again is determined by the transport of airborne moisture through the atmospheric circulation over the eastern African region. It is affected by monsoons and geography, in addition to the management of the river.
The global mean temperature, on the other hand, is set by the balance between the energy received from the sun and the energy lost to space. Furthermore, it is sensitive to the altitude from which the heat escapes to space. This is to some extent linked to the global hydrological cycle, however.
Trends in the Earth’s mean temperature will have implications for the energy budget, as the natural system is constrained by the laws of physics. The Earth’s atmosphere and oceans represent a closed system in space where only energy enters and leaves and where the physics keeps the state in check. There are fewer restoring forces for river levels, and the physical constraints on rainfall over a limited region are less strict.
I too do not think that white noise is a realistic assumption, and I think that most of my colleagues agree. It is news to me that climate is determined on just one time scale. Where is this stated?
Another difference in opinion: we know that the autocorrelation function is not independent of time scale. At hourly scales, there is a 24-hour cycle; at a daily scale, there is a distinct 365.25-day cycle; and beyond the annual cycle, the presence of longterm variability becomes more vague.
There are some signatures of volcanic eruptions, El Niños, and decadal variations, but these are irregular rather than regular. At geological scales, there have been a number of ice ages, but there is no evidence suggesting that they have lasted more than a few million years. Thus we cannot assume that there will always be another natural mechanism acting on a bigger scale.
I also think that it is a big mistake to use the Hurst coefficient with the HadCRUT4 data without distinguishing forced changes from internal changes: what is noise and what is signal? Or what is the probability that we would see similarly high temperatures without forcings? In my mind, Koutsoyiannis mixes forced variations with noise, which muddles the understanding and produces misleading results.
First comments on Armin Bunde’s post
Dear Armin,
It is a great pleasure to contribute, together with you, to this dialogue and to make this comment on your post, following the suggestion of the CD Editorial Team to identify points of agreement and disagreement. I am particularly glad to report my agreement on the major issues, with a few exceptions to which I will refer below. I endorse your statements:
I decided not to use mathematical equations in my post, so I welcome the inclusion of equations in yours. At first glance they seem consistent with mine, even though we use different notation. For example, I denote the Hurst coefficient by H (the only mathematical symbol that I used in my post) and it seems that it is identical to your α, while it is related to your γ by γ = 2(1 − H).
On the other hand, while I agree with your notion of the “finite size effect”, I would not agree with treating it in terms of inequalities, as you do, because inequalities give the false impression that in some areas (for large sample size N or for small lag s, to use your own notation) your statistical estimations are perfectly safe. They never are, particularly because LTP magnifies the uncertainty and also introduces (negative) bias, sometimes substantial, in estimators which according to classical statistics are unbiased (e.g. that of the variance; see Koutsoyiannis, 2003, and Koutsoyiannis and Montanari, 2007). Instead of using inequalities to identify seemingly safe areas, I think it is better to explicitly take the bias into account in all cases. I will come back to this later.
Your results for individual stations, as shown in your second figure, are mostly consistent with mine; however, from recent analyses of thousands of raingauges worldwide (Iliopoulou et al., 2013) it seems that precipitation records have an average H closer to 0.6 than to the 0.5 which you report. It also seems that LTP is again a more appropriate model for them than AR(1).
Now I am coming to your conclusions numbered (i) and (ii). First, while in my post I refer to the combined land and sea surface temperatures (the HadCrut4 data set as mentioned in the introductory entry by the CD Editorial Team), you examine separately the sea surface temperature and the land temperature. I have made analyses also for these (based on the HadSST2 and CRUTEM4 data respectively). So, I can agree with your conclusion (i), i.e. that the increase of the global sea surface temperature in the past 100 years by about 0.6 degrees is not significant.
Indeed, the behaviour I see for SST is not different from that of my Figure 5, except that the climatic difference for the entire 134-year period (1879–2012) is 0.5°C (but if you limit it to the last 100 years it indeed becomes 0.6°C as you report).
Now coming to the land temperatures, again I find a similar behaviour as in my Figure 5, the only difference being that the few points going out of the critical values in this case refer to the lag of 120 years rather than to the lag of 90 years shown in the figure (the latter all remain below the critical values). This does not agree with your conclusion (ii), that the increase in the global land air temperature in the past 100 years is highly significant.
For the computational part of the disagreement, please take into account that the standard deviation in the land temperature is more than 50% higher than in the sea surface temperature. Also please recall our discussions when the paper by Koutsoyiannis and Montanari (2007) was published, in which we criticized your approach in Rybski et al. (2006) and, in particular, the fact that you did not take the uncertainty/bias in the standard deviation into account in your calculations. I had the impression that you had agreed on that, but perhaps you have forgotten it by now, as I infer from your list of references.
But there is a disagreement also on the logical part of your conclusion (ii). I believe if you accept that the sea surface temperature has strong LTP, then logically the land temperature will have too, so I cannot agree that the latter has “comparatively low persistence”. We are speaking about longterm persistence, which manifests itself in decadal, centennial, etc., time scales. I cannot imagine that, on the long term, the land would not be affected by the longterm fluctuations of the sea temperature. I believe climates on sea and land are not independent to each other—particularly on the long term.
Demetris
References
Iliopoulou, T., S.M. Papalexiou, and D. Koutsoyiannis (2013), Assessment of the dependence structure of the annual rainfall using a large data set, European Geosciences Union General Assembly 2013, Geophysical Research Abstracts, Vol. 15, Vienna, EGU20135276, European Geosciences Union.
Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.
Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi: 10.1029/2005GL025591.
First comments on Rasmus Benestad’s post
Dear Rasmus,
Your introduction, and particularly your reference to the paper by Cohn and Lins (2005) and to your post “Naturally trendy?”, reminded me of our discussion on your post at RealClimate seven years ago. That was my first contribution to a blog and, thanks to it and its subsequent reposting at ClimateAudit by Steve McIntyre, it brought me in contact with many colleagues, including yourself, as well as Tim Cohn and Harry Lins. So, I thank you for that post.
I also welcome your recognition and explanation of LTP in your current post; these are useful for all of us (particularly for myself, as I estimate that from now on I will not have as many difficulties in publishing papers related to LTP as in the case of the paper by Koutsoyiannis and Montanari, 2007; see its “prehistory” at http://itia.ntua.gr/781/).
Most of all, I welcome your Figure 2, which supports all that I have been saying for years. The autocorrelation of synthetic climate, produced by a climate model without GHG change, becomes 0 for lag as small as 5 and keeps a value around 0 for all subsequent lags. As I have described in my post, with zero autocorrelation the climate would be flat. But a static climate has never been the case on Earth. In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.
I believe it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. It is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP. The high autocorrelation is just the reflection of change upon that mathematical concept. The first who understood that was Klemes (1974).
I guess there is a difference in the way we are viewing change. Perhaps what I call change you view as “variation” or “internal variability”. But I have difficulty understanding what you call change. You say:
A climate change happens when the weather statistics are shifted.
I hope you agree that these statistics are a human invention to describe Nature, not a natural property per se. Furthermore, their assigned values depend on assumptions, like the time scale of averaging, e.g. 10 or 30 years. Whatever these assumptions are, I cannot imagine any two, adjacent or not, time periods whose statistics (estimated from data, e.g. temporal average) would be the same. In other words, changes, call them shifts if you wish, occur all the time. This shows that your distinction of “variation” and “change” is ambiguous.
You are clearer when you distinguish “signal” from “noise” as you associate the former with “manmade climate change”. But I would never agree with your term “noise” to describe the natural change. Nature’s song cannot be called “noise”. Most importantly, your “signal” vs. “noise” dichotomy is something subjective, relying on incapable deterministic (climate) models and on, often misused or abused, statistics.
I found very interesting the argument you offer with respect to potential misuse of statistics:
Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data, because these data embed both signal and noise.
I agree that circular reasoning can be a real risk. I also accept your deliberation that you need 70–90 years for a meaningful inference about LTP. Based on this, I can offer several ways to avoid the circular reasoning.
a. The HadCrut4 data set is 163 years long. So, let us exclude the last 63 years and try to estimate H based on the 100-year-long period 1850–1949. The Hurst coefficient estimate becomes 0.93 instead of the 0.94 of the entire period. Is LTP artificial then?
b. Look at Koutsoyiannis and Montanari (2007), Table 1. It examines several proxies for temperature for the last 500–2000 years and provides two sets of H estimates: one for the entire period covered by each of the proxies and one for the period 1400–1855, common to all proxies. Do you see any noteworthy difference (say, greater than 0.03) in the estimates of H between the two periods? Don’t these high values of H (0.86–0.93 for the period 1400–1855) indicate LTP? Can they be the result of anthropogenic origin? Does your “circular logic” argument apply to them?
c. Look at Markonis and Koutsoyiannis (2013), Fig. 9. This shows that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years. Is this a result of your “signal” which you identify with “manmade climate change”?
Furthermore, if you trust only instrumental series you may look at the Nile example in my post, which I trust you can assume to be free of “manmade climate change” as it does not go beyond the 15th century. With respect to instrumental temperature records, you are right that most thermometer records do not go back in time longer than a century. However, there are some that do (some exceptions, as you say). As handy examples, I can offer those of Vienna (see Fig. 3 in Koutsoyiannis, 2011) and Berlin/Tempelhof (see Fig. 13 in Koutsoyiannis et al., 2007). Again the LTP is evident, even without considering the last period (for example, as I recall from the latter publication, the first one-third of the record, years 1756–1839, gives H = 0.83, while for the total period H = 0.77).
In any case, not only is the risk of circular reasoning a real one, but it also concerns much wider areas than LTP. As another example, consider the “Mexican Hat Fallacy” that I referred to in my post. The circular reasoning here is that I formulate a hypothesis after I have seen the data. Deterministic modelling can also be affected by circular reasoning. The hydrological community has given importance to avoiding circular logic. Thus, it has been a standard practice in modelling to follow the split-sample technique (Klemes, 1986). We split the available observations into two (sometimes three) segments. We use one segment for building and calibrating a model and the other for validating it. Has such a model validation technique been used for climate models? From my experience (Koutsoyiannis et al., 2008, 2011; Anagnostopoulos et al., 2010) I can only imagine a negative answer.
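For readers unfamiliar with the split-sample technique, the toy sketch below illustrates the idea; the “forcing” and “temperature” series and the simple linear regression are invented for the example and are not Benestad’s actual model or data.

```python
import numpy as np

# Hypothetical illustration only: synthetic 'forcing' and 'temperature'
# series; the coefficients and noise level are made up.
rng = np.random.default_rng(3)
years = np.arange(1850, 2013)
forcing = 0.005 * (years - 1850) + 0.1 * np.sin((years - 1850) / 11.0)
temp = 1.2 * forcing + 0.1 * rng.standard_normal(years.size)

calib = years < 1950                        # build the model on 1850-1949 only
coef = np.polyfit(forcing[calib], temp[calib], 1)

pred = np.polyval(coef, forcing[~calib])    # validate on the held-out 1950-2012
rmse = np.sqrt(np.mean((pred - temp[~calib]) ** 2))
print(coef, rmse)   # skill on unseen data is what guards against circularity
```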
In the beginning of your post you present a graph that compares the HadCRUT4 data series with results of a regression model “based on greenhouse gases, ozone and changes in the sun”, as you say. You do not give a citation where we can see the details. So, the natural question is: did you use a split-sample technique for your model, or any similar validation technique, in which a data segment is kept out when building your model and calculating the regression parameters? If yes, then the danger of circular reasoning is minimal (but not zero, because in reality you have seen all the data). So, what do you say?
Demetris
References
Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.
Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.
Klemes, V. (1974), The Hurst phenomenon: A puzzle?, Water Resources Research, 10 (4), 675–688.
Klemes, V. (1986), Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31(1), 13–24.
Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.
Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenariobased approaches, Journal of Hydrometeorology, 8 (3), 261–281.
Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides (2008), On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671–684.
Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.
Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.
First comments on Benestad’s and Koutsoyiannis’ guest blogs
I actually can agree with most of what Demetris writes on LTP in his interesting and pedagogic blog. I only think that the tools he used (similar to ours in the 2006 paper by Rybski et al.) are not the optimum tools. Among climate scientists, the best accepted tool is the exceedance probability (as I wrote in my blog), from which the significance of a trend can be derived. Unfortunately, since the exceedance probability for LTP records was not known before we published our main results in 2009 and 2011, climate scientists used the wrong assumption of an AR(1) process to estimate the significance of a trend and considerably overestimated it this way. I am sure that when Demetris uses the exceedance probability and the analytical result we published in 2011, he will arrive at our conclusions, too.
In contrast, I cannot agree with a large fraction of Rasmus’ blog. My disagreement is not based on philosophical arguments but simply on mathematics and modern time series analysis. LTP is not an abstract issue, but a process with long memory whose autocorrelation (for stationary records) decays in time by a simple power law. Accordingly, the ENSO, for example, is not an example of LTP. It is an example of a very complex quasi-oscillatory phenomenon which is difficult to forecast, but it is not LTP. I think it is very important in science, and also in climate science, to be precise; without being precise, modeling is impossible and progress hard to achieve.
It is very easy to generate LTP records numerically; one only needs to know how to generate Gaussian random numbers and how to make a Fourier transform. This way, one can very efficiently study the properties of LTP records theoretically. By doing this properly one will find that the autocorrelation function (ACF), as used by Rasmus, unfortunately is not an appropriate tool to detect LTP in records with a length below 50,000. As we have shown in 2009 in Physical Review E, there are strong finite size effects which are even worse than anticipated by Rasmus. But this is NOT a problem of LTP, but only of the employed method. The second problem of the ACF can be seen also in Rasmus’ Fig. 1: it depends on the external trend. Therefore, if one only knows the ACF as a tool for detecting LTP, one may be led to think, as Rasmus does in his summary, that we do not really know what the LTP in the real world would be like without GHG forcing.
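This recipe takes only a few lines. The sketch below is an illustration of the general spectral-synthesis approach (white Gaussian noise filtered to the power-law spectrum of fractional Gaussian noise), not Bunde’s own code; exact fGn generators such as circulant embedding need more machinery.

```python
import numpy as np

def generate_ltp(n, H, rng=None):
    """Approximate LTP record of length n via spectral synthesis.

    White Gaussian noise is filtered in Fourier space so that the power
    spectrum falls off as f**(1 - 2H), the spectrum of fractional
    Gaussian noise with Hurst exponent H.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                          # avoid division by zero at f = 0
    X *= f ** (-(2.0 * H - 1.0) / 2.0)   # amplitude = sqrt of target spectrum
    x = np.fft.irfft(X, n=n)
    return (x - x.mean()) / x.std()      # standardized record

x = generate_ltp(4096, H=0.8)   # a record with Hurst exponent ~0.8
```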
If I had worked in this field 20 years ago, I would have agreed with this statement. But nowadays, there is a large number of methods available that are able to detect the natural fluctuations in the presence of simple monotonous trends. Two of them are the detrended fluctuation analysis (DFA) and the wavelet technique (WT). These are techniques which have been applied in many different disciplines, ranging from physiology via computer science to the financial markets, and tested extensively. By combining them one can quantify LTP on time scales up to N/4, where N is the record length, and N should be above 500. This means that from a monthly record of 40 years one can detect LTP when using the proper methods. Again, when using the ACF, 150 years are not enough because of the tremendous finite size effects.
We appreciate today that monthly temperature data are LTP on all time scales, with Hurst exponents around 0.65 for continental stations and around 0.8 for island stations and sea surface temperatures. Long historical runs from AOGCMs are able to reproduce this behavior. Daily data additionally show short term persistence on scales up to 2 weeks, which is averaged out in monthly data. Accordingly, the problem is NO LONGER to determine the Hurst exponent in the presence of anthropogenic warming, but to estimate the contribution of e.g. GHG to the warming.
Within time series analysis, this can be done as I wrote above and in my blog. But as a consequence of the LTP in the temperature data, the error bars are very large, considerably larger than for shortterm persistent records. But nevertheless, except for the global sea surface temperature, we have obtained strong evidence from this analysis that the present warming has an anthropogenic origin.
Best wishes,
Armin
Rasmus, you say:
I will be glad to offer news to you, but I have to clarify that my phrase:
means that the climate CANNOT be determined by one scale.
Those who use a Markov/AR(1) model assume that it can be determined by one scale, not me. Note: in a Markov model the autocovariance for lag t is c(t) = b exp(−t/a), where a is the SINGLE time scale for that model. It is trivial to show that on a climatic time scale D >> a, the climate produced by a Markov process has variance Var[x] ~ a/D. This is just the same as for white noise, where the climatic variance is inversely proportional to the time scale of averaging. Thus, those who assume a Markov behaviour for hydroclimatic processes also accept a white noise (random) behaviour at climatic scales.
To repeat it once more, it’s not me who assumes a Markov model, a random climate, or a single scale. On the contrary, I stress that all these assumptions are wrong.
So, I hope with these clarifications my “news” is more informative for you.
D.
Thanks for your comments on my post, Demetris. I think we need to live with our disagreements on a few points – but that’s fine. This is what drives science forward. I think one objective was to try to identify the exact points where we diverge in our interpretations. Here is my take on that; let’s start with this paragraph of yours:
“In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.”
I do not see how you can claim that climate models are inconsistent with the real world climate. We know that they do reproduce the main important aspects of Earth’s climate, such as circulation patterns, wind patterns, and the past temperature trends. We also know that these kinds of models taught us the fundamentals of chaos theory.
I also argue that we do not know if LTP really is stochastic, and my demonstration showed that in fact external forcings such as GHGs do introduce LTP characteristics. Having long time series does not help when you do not know what is stochastic and what is forced.
Many processes in nature may look like LTP, and a good deal of it is due to physical processes and a chain of causality – not necessarily randomness. Autocorrelation functions can give you some indication about the degree of persistence, but cannot provide insights into the physics in isolation.
I want to ask if you consider nonlinear chaos – which does not have a longterm memory and does not satisfy the condition that the value x(t′) is influenced by all x(t < t′) – as an aspect of LTP. If you think LTP includes chaos, then it arises from the internal physical processes. If LTP does not involve chaos, then I wonder how you would distinguish between the two.
I appreciate Armin’s explanation, and it’s OK to disagree. I’m not convinced that there are methods that can distinguish noise and signal – or trends and fluctuations – when there is an external forcing. I do not believe that there is anything magic about LTP; as with any other process, it is caused by physical processes and internal dynamics, such as changes in the oceans and nonlinear chaos.
I guess we could set up some double-blind experiments, with systems where a forced trend or LTP were designed into the results, and mimic the analysis as I did with the ACF for two time series, one with and one without trends.
Question to both:
We know that the error bars of the global mean temperature estimate T(2m) are greater in the early part of the record, when the sample of thermometer records from the globe was smaller. Hence the variance associated with T(2m) is greater in the early part of the record compared to the later part, due to greater statistical fluctuations. The reduction in the fluctuations associated with the sampling, however, must be considered ‘artificial’ because it does not reflect changes in the real world.
So the question: how do the LTP methods deal with imperfect data?
Also, we know that the forcing is a combination of contributions from greenhouse gases (GHGs), volcanoes and solar forcing. They have different time scales: volcanic eruptions leave an imprint which may last for a few years, whereas GHGs have long-lasting effects. We also expect the trends to change over time, so that splitting the temperature record into, say, ~60-year intervals will result in different segments where the forcing is different. How do the LTP methods account for the presence of such forcings if you do not know a priori exactly what they are?
Rasmus,
I fully agree with your assessment that we disagree. I also fully agree with your statement.
The latter is an important agreement, given a recent opposite trend, i.e. towards consensus building, which unfortunately has affected climate science (and not only).
So, let us focus on our disagreements and illustrate them with a few numbers, when possible. You refer to my statement about climate models and you say:
Recall that my statement was based on your Figure 2, lower panel. I took your grey curve, which refers to non-changing conditions with respect to forcing. I was able to see that this is a perfectly Markovian curve, with autocorrelation ρ(t) = exp(−t/a), where a = 1.25 years (I guess your lag is in years, right?). Now, considering a climatic scale with averaging time scale D = 30 years, or D/a = 24, we can infer a characteristic correlation ρ(D) = exp(−D/a) = 3.8 × 10^-11, so small that it could be regarded as zero. As a result, even without equating it to zero, two consecutive values of climate (for two consecutive 30-year periods) are virtually uncorrelated (with exact calculations, the correlation of two consecutive variables of your “climate” is of the order of 10^-2).
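This back-of-the-envelope calculation is easy to check by simulation. The sketch below (illustrative code, not part of the dialogue) generates a long AR(1) series with a = 1.25 years and correlates consecutive 30-year averages.

```python
import numpy as np

rng = np.random.default_rng(4)
a = 1.25                       # characteristic time scale in years
phi = np.exp(-1.0 / a)         # annual lag-1 autocorrelation, ~0.45
n_years, D = 600_000, 30       # long synthetic run, 30-year averaging

# simulate the Markov/AR(1) process x[t] = phi * x[t-1] + noise
x = np.empty(n_years)
x[0] = rng.standard_normal()
eps = np.sqrt(1.0 - phi**2) * rng.standard_normal(n_years)
for t in range(1, n_years):
    x[t] = phi * x[t - 1] + eps[t]

clim = x.reshape(-1, D).mean(axis=1)            # consecutive 30-year 'climates'
print(np.corrcoef(clim[:-1], clim[1:])[0, 1])   # ~0.02: of the order of 10**-2
```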
Yes, I contend that this behaviour, in which climate appears as an uncorrelated process, is inconsistent with the real world climate, in which consecutive variables are correlated. To summarize the results of these calculations in three points:
(a) Your climate models produced a series whose stochastic structure at annual scale is Markovian with characteristic time scale a = 1.25 years.
(b) At a climatic scale of 30 years this corresponds to a climate uncorrelated in time.
(c) An uncorrelated climate (see my Figure 3 and Armin’s Figure 1, left panel) is static and inconsistent with the real world climate, which exhibits correlation and is changing (see my Figure 2 and Armin’s Figure 1, right panel).
D.
Rasmus
In my previous comment I replied to your sentence:
I wish to continue with the subsequent two, so that I have answered at least one paragraph of one of your comments. You say:
Right, I do not dispute that they reproduce the patterns you indicate, nor their usefulness as simulation tools. What I doubt is their ability to reproduce trends (as I explained above, they produce static climate) and the credibility of their predictions (or projections if you prefer the IPCC idiom) for the future.
Did they really teach us about chaos? What is your take on the following statement by Schmidt (2007, The physics of climate modeling, Physics Today, 60 (1), 72–73; emphasis added):
But if you imply that, irrespective of what climate models yield, the real climate is chaotic, then I agree with you.
You can see further information about my views and contributions with respect to climate and chaos in:
Scientific dialogue on climate: is it giving black eyes or opening closed eyes?, 2011 (it contains further discussion of the above quotation by Schmidt, 2007).
A random walk on water, 2010.
A toy model of climatic variability with scaling behaviour, 2006.
These publications show that LTP does involve chaos.
D.
Dear Demetris,
The grey curve represents a climate simulation with no forcings, and provides a benchmark for the one with forcings. We know a priori that this simulation is artificial in the sense that it will have different LTP behaviour to the real world, where real forcings are present. Hence, it’s a mistake to assume that this simulation is equivalent to the real world. Again, I think your problem is that you do not know what is signal and what is noise (you may call the latter music or whatever, but the normal term is noise).
You also misunderstand the way climate models work. Of course they simulate time evolutions which are chaotic, because they simulate weather fluctuations. When you regard the climate as a boundary problem, as Gavin does, then you do indeed see that climate becomes predictable and in that sense non-chaotic. Take this example: I do not know the weather for an exact day in December because of fundamental limitations due to chaos. However, I know that the weather statistics for the month of December will show lower temperatures than now.
Moreover, the LTP issue concerns the time evolution and hence how the simulated weather changes from one day to the next. It concerns the ACF. The simulated weather in the climate models is chaotic. Such behaviour has been studied by numerous scholars, from simplified models to full-scale weather models. But the fact that the weather is chaotic does not imply that the response to a systematic forcing (boundary condition) is chaotic.
So my point is that the weather evolution is chaotic both in the climate model simulations and in the real world. My question to you then is whether you think that such chaos and LTP are fundamentally different, and how you would distinguish between them?
Dear Rasmus,
Do you believe I had not understood that? I feel I have to repeat what I said. If you do not feed these models with changing forcings, then they produce a static climate; that is clear from your graph. The real world climate has never been static. The anthropogenic GHG forcing was not present, say, during the Medieval Warm Period, but again the climate was changing. You had implied that, e.g., in your EGU talk Climatic and Hydrological perspectives on longterm changes: a northern European view (2008), where for example you stated:
Had the climate been static, it would not generate favourable climatic conditions for Vikings.
Of course models are not identical with natural processes. Of course models produce artificial simulations; this is what they should and can do. We should never confuse models with nature. But we should demand that models reproduce in their simulations some important elements of reality. Climate models whose simulations are characterized by short-lived autocorrelations, which as a rule produce a static climate and need external agents to produce change, are not consistent with the real world climate, in which change is the rule.
D.
This is exactly my point too. But there are forcings in the real world, be it changes in Earth’s orbit around the sun, geological, volcanic, changes in the sun, or in the concentrations of the greenhouse gases. So my point is that LTP in this simulation without forcing will not be the same as in the real world where there is forcing. The forcing introduces LTP. We see this by comparing the two simulations – the black and the grey curves.
So perhaps the real world climate would be static in the absence of forcings. We cannot tell from observations alone.
As far as I know, the favourable climatic conditions for Vikings were limited to Greenland and the North Atlantic. When we look at regional variations, as opposed to global, we see more pronounced variations linked to changes in the winds and ocean currents. The early Holocene was also influenced by a difference in Earth’s orbit, whereby the high north received more sunlight (forcing).
Could we agree on a few ideas?
In my view, a correct explanation implies that both the assumptions are valid and the mathematics is correct. It is possible to have a beautiful and valid mathematical model/formula, but if the underlying assumptions are wrong, the answer will not be correct (unless one is extremely lucky). Furthermore, one can start from a set of valid assumptions, and then use incorrect mathematical equations. This too will lead to the wrong answer.
So I wonder if it’s useful to identify which aspect we are discussing here. I feel that both Demetris and Armin have impressive mathematical frameworks (although I’m a bit concerned with the issue regarding LTP vs chaos), but that their assumptions lead them down the wrong path.
Thank you Rasmus, Demetris and Armin, for three very informative essays on the statistical properties of global average temperature timeseries. In these, Rasmus argued that physical considerations are important in the interpretation of statistics. Demetris and Armin mainly discussed the application of statistical methodology to climate data. That leaves me curious how the latter two view the physical interpretation of their statistical analyses.
Rasmus showed (fig 2), using GCM data, that a deterministic trend causes long term persistence to increase. Demetris welcomed this figure as “supporting what he has been saying for years”, though apparently with a different interpretation than Rasmus, namely that the unforced climate as simulated by GCMs (low LTP) is unrepresentative of the real world (high LTP). However, as Rasmus pointed out, the real world has been impacted by natural and anthropogenic forcings, so one can’t compare an unforced model run with observations that are impacted by forcings: of course they don’t agree! Also going further back in time (e.g. the past millennia), climate forcings (e.g. changes in the sun, volcanism, land use, greenhouse gases) likely played a substantial role in influencing global average temperature. This severely hampers the quantification of internal climate variability based purely on the presence of LTP, since forced changes increase this persistence.
Yet implicitly or explicitly, Demetris seems to equate the presence of LTP to natural internal (unforced) variability (also phrased as chaotic behavior). How do you square this interpretation with the fact that forced changes to climate also increase this persistence?
Rasmus asks very much the same question in his latest comment:
“Can we agree on that forcing introduces LTP?”
“Can we agree on that forcing is omnipresent for the real world climate?”
These are important questions in order to establish areas of agreement and disagreement. I invite Demetris and Armin to comment on these.
As to the physical interpretation: internal variability would in all likelihood mean a redistribution of energy within the earth system. That ought to result in some components of this system cooling down while the surface is warming up. This is however not what is being observed: recently, also the deep oceans (at least down to 2000 m depth) have been observed to be warming (Balmaseda et al., 2013). The energy balance provides a powerful constraint on the global average temperature; that has been the case for past changes in earth’s climate and it’s currently still the case. Ignoring this aspect can lead to very strange interpretations (as I tried to show in my April 1st blog post a couple of years ago, by applying statistical reasoning void of any physical underpinning to imaginary data of my body weight).
Another question would thus be: If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal variability, where does this increase in energy come from?
Dear Bart,
Sorry that I could not answer earlier. There are several questions you raised, which Rasmus also raised, but most of them have already found an answer in the literature. For example, the fact that the ACF is affected strongly by trends is not new at all; we discussed it extensively in our 1998 PRL, where we used detrending methods to determine the true LTP of temperature data. What we called FA at that time is a method equivalent to the ACF and strongly affected by trends. For a more extensive discussion, see our 2001 Physica A paper (Kantelhardt et al.). Scientists who do not have experience with LTP and are not aware of the better detection methods usually think that the deficit of the ACF is a deficit of LTP, but this is just wrong. I would like Rasmus to read our early papers on LTP that were published in the last 15 years in Phys. Rev. Lett., Phys. Rev. E, Journal of Geophys. Res. D, GRL, as well as Nature Climate Change, to become more familiar with LTP, its definition, the methods to detect it, and its consequences for the occurrence of extremes. It is easy to find my refs; just go to Google Scholar and type in my name.
It is important to use the same definition of LTP, but from what Rasmus writes, I understand that he has something in mind which differs remarkably from the well established definition of LTP, which you can also find in the pioneering contributions of Benoit Mandelbrot. So, on this basis it is nearly impossible to have a meaningful discussion.
I would like to know from Rasmus, for example, what the evidence is for El Niño being LTP (scientists usually consider this a quasi-oscillatory phenomenon), and whether he then also considers other oscillatory (seasons) or quasi-oscillatory (sun spots) phenomena as LTP. If yes, there is a problem, since then he mixes trends and fluctuations, which is the worst one can do in this field.
But let me come to the question of the origin of LTP. One of the origins is certainly the coupling of the atmosphere to the oceans. But the natural forcings also play a role. We have shown in our 2002 PRL that models that only use GHG forcing cannot reproduce the proper LTP. We discussed extensively the role of the different forcings in our 2004 GRL, where we found that the natural forcings are important to reproduce the proper LTP. In contrast, GHG forcing did not contribute to the LTP when using the proper methods (NOT the ACF!). So we need the natural forcings for obtaining the correct LTP. We have shown this also in our 2008 JGRD, where we compared millennium runs with and without natural forcings. So we answered your two questions a long time ago: natural forcing plays an important role for the LTP and is omnipresent in climate.
For describing the resulting LTP quantitatively we do NOT need, in contrast to the claim of Rasmus, to understand in detail the role of the many different natural forcings. We just need to understand the mathematical structure of the LTP and how we can model it. Then we can use the methods that we developed in our 2011 PRE to quantify whether a trend is natural or not.
Of course, it is nice to talk about the effect of the different forcings. But since everything is interwoven in a linear and even nonlinear way, it is nearly impossible to separate the effects of the different forcings on the LTP in a satisfying manner. The pragmatic way is to learn what LTP is, to learn and even improve the detection methods that can separate the natural fluctuations from external deterministic trends, and then use the method we developed to estimate the effect of the trend. This way is doable and is actually common in climatology. The difference is only that in previous attempts the natural fluctuations have been incorrectly considered as shortterm persistent, which significantly overestimates the trend significance.
Best wishes,
Armin
ps: If it is difficult to download my papers, I can email them to those who are interested in them.
Demetris, Rasmus, Armin,
Thank you very much for your points of view and your responses to each other. Although it is worthwhile to discuss the performance of climate models, the key point of this discussion is the determination of the significance of observed trends (in global mean temperature) beyond what would be expected from internal variability alone. We can distinguish the following opposing views:
1) Using statistical models only
2) Using statistical models in combination with constraints using physical knowledge of the climate system (e.g. internal variability, energy balance)
I think Demetris is going for option 1, while Rasmus favors option 2. I am not so sure about Armin’s opinion. Armin, can you make a statement on this subject?
A second and related point is the choice of the statistical model, in particular with respect to the distinction between trends and fluctuations. So, let’s take the method of analysis of the statistical properties applied to the global mean temperature as described in section 5 of the Markonis & Koutsoyiannis (2012) paper (hereafter MK2012):
In order to construct the climacogram, MK2012 compute the standard deviations σ(k) by cutting the time series into N samples of length k (in years). As a test of their method, let's consider two time series: 1) an upward trend in temperatures (white line) and 2) a fluctuation with a two times stronger slope in the first half of the time interval and the same but negative slope in the second half (blue line, see figure).
Both time series have the same standard deviation for any value of k: σ(k) = bkN/√12 (for N >> 2), where b is the slope of the trend line. Therefore, the trend and the fluctuation project onto the same curve in the climacogram and are indistinguishable.
In terms of energy changes of the climate system (in case no external forcing is compensating), signal x(t) (the trend) implies an increasing energy loss, while signal y(t) (the fluctuation) implies no net energy loss, since at time t = kN the mean global temperature has returned to its initial value. A short numerical check of the climacogram equivalence follows below.
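For concreteness, a minimal numerical check of this equivalence (a sketch of my own, with arbitrary slope and record length; the climacogram helper is an illustrative implementation, not code from MK2012): the monotonic trend and the up-down fluctuation yield virtually the same climacogram at every time scale.

    import numpy as np

    def climacogram(x, scales):
        # standard deviation of non-overlapping k-averages, for each time scale k
        return [np.std([x[i:i + k].mean() for i in range(0, len(x) - k + 1, k)])
                for k in scales]

    n = 4096                  # record length (illustrative)
    b = 0.01                  # slope of the trend line (illustrative)
    t = np.arange(n)
    trend = b * t                                   # the "trend" case, x(t)
    fluct = 2 * b * np.where(t < n // 2, t, n - t)  # the "fluctuation" case, y(t)

    scales = [2, 8, 32, 128]
    for k, s1, s2 in zip(scales, climacogram(trend, scales), climacogram(fluct, scales)):
        print(k, round(s1, 2), round(s2, 2))        # the two columns nearly coincide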
Therefore, I would like you to react on the next three statements in an attempt to focus the discussion:
1) The method of MK2012 doesn’t distinguish between trends and fluctuations. In other words, trends in global mean temperatures are considered to be fluctuations in the climacogram. Conclusions about the behavior of the climate system using this climacogram lack energy balance considerations.
2) The variation in time of the global mean temperature in the absence of external forcing causes an energy imbalance that works to restore the temperature to its equilibrium value (where outgoing energy equals incoming energy).
3) If energy considerations are taken into account, i.e. on the basis of estimated external forcings, significance levels of measured temperature trends will be met sooner than on the basis of the purely statistical method of MK2012, because the latter is prone to misinterpreting forced changes as internal variability. In other words, the MK2012 method is not suitable for determining the significance of an observed trend.
Until now I have avoided referring to fundamentals of science, although several of Rasmus's comments offered me this temptation. Instead, I tried to refer to some of my papers which present my views on such fundamentals, for example the Random Walk on Water. But as I understand from Bart's comment, and particularly the link to the example in his Weight Gain Problem, my references did not work. It is thus unavoidable to clarify my views on some fundamental issues. I will try to clarify my positions in terms of several dichotomies, some of which have been used or implied by Rasmus and most recently by Bart.
1. Models vs. reality
In my view this is a true dichotomy.
My grandson has taught me that virtual reality, e.g. in computer games, can be fascinating. Of course he and his playmates are able to distinguish it from real-world reality; for example, they are well aware that, in contrast to their computer games, reality does not offer a pool of additional lives if one dies.
On the other hand, in scientific conferences I have often seen graphs mixing up observational data of the past with model projections of the future and speakers presenting model projections as if they were reality. I have seen IPCC texts, scientific publications and policy documents speaking about the conditions in 2100 using “will” without adverbs like “likely”, “probably”, etc., e.g. “extreme events will become more frequent”.
Furthermore, it has been very common to use concepts of stochastics, i.e. concepts pertinent to models, as if they were real objects. Concepts like probability density function, autocorrelation function, stationarity, and many more apply to the world of models, not to the real world. They build upon the concepts of a random variable, a stochastic process, an ensemble, etc. These are abstract mathematical objects, not objects of real life. Large-scale real-world processes, like the climatic processes, have a single life, a unique evolution, and are not repeatable. There are no ensembles (pools of many lives) in real-world processes. The idea of an ensemble is a useful one, to define abstract concepts like stationarity, but we should be aware that it applies to models only.
2. Physics vs. statistics
In my view this is NOT a true dichotomy.
I think the language of physics is (or at least includes) mathematics. In mathematics we write 1 + 2 = 3, 1 x 2 = 2, etc. Likewise, in physics, 1 kg + 2 kg = 3 kg or 1 kg x 2 m/s^2 = 2 N.
Sometimes addition and multiplication are not enough to study physical phenomena. Thus we may use, for instance, differential equations. We may also use more abstract concepts, like random variables to represent uncertain quantities, or stochastic processes to represent uncertain quantities evolving in time. Further, we may use statistical methods to estimate these quantities from measurements. This does not mean that we have departed from physics and landed on another continent called statistics or stochastics. We still live in physics. As long as we have the feeling that we are doing physics when we add certain numbers of kilograms, we may well have the same feeling whenever these numbers of kilograms are uncertain and we opt to treat them as random variables.
As we accept that regular addition should be done correctly, we should also use correct mathematics when we use random variables. For example, if x, y and z are random variables related by x + y = z, we should be aware that Var[z] = Var[x] + Var[y] + 2 Cov[x, y]. This is different from regular addition. Neglecting the last term, the covariance, may result in dramatic errors.
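A short numerical check of this identity, with made-up correlated variables (nothing here is from the dialogue itself):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    y = 0.6 * x + rng.normal(size=100_000)   # y is deliberately correlated with x
    z = x + y

    print(z.var())                                             # Var[z]
    print(x.var() + y.var() + 2 * np.cov(x, y, ddof=0)[0, 1])  # matches Var[z]
    print(x.var() + y.var())                                   # wrong: covariance dropped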
Furthermore, we may give these peculiar quantities, the covariances, a sound physical meaning. This is the case for example in Reynolds stresses in fluid flows, which are none other than covariances of fluid velocities.
Finally, we may build sound physical theories based on statistics. For example, the great progress in thermodynamics, which also constitutes the foundation of climatology, happened when statistical thermophysics prevailed over the funny notion of the caloric fluid.
In other words, statistics is physics. For simple physical problems, in which quantities are exact, statistical considerations are not necessary. In complex systems statistics within physics is indispensable.
3. Signal vs. noise
In my view this is NOT a true dichotomy in geophysical sciences, while it is meaningful in electrical engineering and telecommunications.
Excepting observation errors, everything we see in climate is signal. The climate evolution is consistent with physical laws and is influenced by numerous factors, whether these are internal to what we call the climate system or external forcings. To isolate one of them and call its effect the “signal” may be misleading in view of the nonlinear chaotic behaviour of the system (see also dichotomy 5 below).
My reasoning as to why I regard this dichotomy as misleading has been set out in earlier comments. To repeat it as briefly as possible: let us assume that it is possible to separate the “signal” (the anthropogenic influence) from the “noise” (whatever this is). Would the stochastic properties of the “noise” be different from those of the whole climate? As I wrote extensively earlier, the properties of the “noise” alone can easily be seen in older periods, those not affected by the “signal”. And it seems that the stochastic properties remained unaltered. In contrast, climate models, which allegedly can distinguish “signal” from “noise”, yield a “noise” which is fully inconsistent with past climate.
4. Trends vs. fluctuations
In my view this is NOT a true dichotomy.
I am not aware of a rigorous definition of the term “trend”. I think it is used in a loose and colloquial manner. Indeed, loosely speaking and with reference to my Figure 2 above, we could say that between AD 640 and 780 there was a falling trend in the Nile's water level. However, in view of the longer 849-year record, we may rather say that this was part of a longterm fluctuation. On the other hand, people who lived some decades after the recording of the Nile River level had started, i.e. in the 7th-8th century, might have called this an unprecedented trend, and might have worried that it would continue in the future, that it would have catastrophic effects, etc.
From a scientific point of view, if we do not provide rigorous definitions of the terms we use, it may be difficult to discuss in a constructive manner.
5. Linearity vs. nonlinearity
In my view this is a true dichotomy.
It is possible that our understanding of complex natural phenomena has been influenced by that of simple systems, particularly those which can be effectively modelled by linear differential equations. In those systems, solutions corresponding to different causes/perturbations can be added together to form the solution corresponding to the combined effect of all causes. Reversely, we can allocate a weight, with respect to the combined effect, to each of the causes.
In more complex systems (yet the most common ones), whose study requires abandoning linear models, the contribution of each cause or forcing is not straightforward. A colleague from Mexico offered me this illustration: “A typical example that I give to my students, because it is (unfortunately…) well understood here in Mexico, is the following: if someone is being machine-gunned by two people at the same time, it is objectively impossible to quantify the contribution of each killer to that person's death, since the wounds caused by each one of the two would have killed the person anyway, even in the absence of the other killer.” If we wish to go backward to the causes of the causes, examining for instance how these killers acquired the machine guns, how the victim happened to be in that place at that time, etc., things become even trickier.
Even worse than this, in chaotic systems described by nonlinear equations, the notion of a cause may lose its meaning as even the slightest perturbation may lead, after some time, to a totally different system trajectory (cf. the butterfly effect).
6. Stochastic vs. deterministic models
In my view this is a true dichotomy.
According to my view, things do not happen spontaneously. Nor are natural phenomena infected by a virus of randomness which, after infection, turns them from deterministic to random. There is no such virus. Rather, physical laws hold true all the time.
Whenever we are able to use deterministic models to describe nature and to find solutions which are in good agreement with reality, we don’t need any stochastic descriptions. We use the latter only when the deterministic solutions fail.
Let us consider Bart’s Weight Gain Problem. The problem may have different versions. For example, one version would be to explain why he gained weight during the last decade. If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.
Another version of the problem would appear if he tried to predict the future. In this case such a deterministic model may not work. Perhaps he would then think of constructing a macroscopic, rather than detailed, model. He would perhaps recognize that, in principle, whatever effort he puts in, the model will not be accurate. Consequently, he may think of changing the representation of his weight from an exact variable to a random variable. Using a representation by a random variable does not mean that his weight was infected by the virus of randomness or that the law of mass conservation ceased to hold. A random variable is just an uncertain variable, nothing more and nothing less. The modelling framework has now become stochastic. Once he uses a stochastic framework, he may also use the stochastic toolbox, which contains tools such as averages, variances, covariances, autocorrelations, power spectra, and many more. He may try to infer the stochastic properties of the future after fitting the stochastic model on stochastic properties (not necessarily specific values) of the past. He may further try to find and study time series of other people's weights, perhaps of people of greater age, in order to see whether age matters in the evolution of weight, etc. All these may help more than the law of mass conservation in the modelling, although we can be sure that this law will always hold true.
In a few words stochastic modelling is modelling under uncertainty. It is not denial of physical laws. With reference to dichotomy 3 above, stochastics describes the signal and does not need a decomposition “signal + noise” to work.
Dear colleagues, you may feel free not to adopt my views, to think that they are heretic, or to characterize them however you wish. However, I hope you will recognize that I cannot contribute to discussing questions that distinguish statistics from physics; I cannot accept that using a stochastic modelling approach violates or contrasts with physics; I cannot offer much in a discussion about separation of signals and noises; and I cannot offer much in answering questions that are formulated using undefined terms and concepts.
Dear Rob,
Thanks very much for the excellent example. However, I believe instead of illustrating some weakness in Markonis and Koutsoyiannis (2013) as you may imply, it illustrates that restricting our vision and making improper use of stochastics can be dangerous.
From your graph I assume that what you call “fluctuation” is a periodic pattern with period kN and what you call “trend” is a line that continues unaltered into the future and the past. Without having the future and the past in mind, we cannot call your “fluctuation” a fluctuation; it could well be two consecutive trends, continuing into the past and the future by extrapolating the two segments as straight lines.
So you chose a time window equal to one period (kN) to view the two cases, and you find that the two yield the same climacograms, which is correct. If one wished to restrict the vision further, one could perhaps take a time window of size kN/2 and see two straight-line trends, without any hint of fluctuation.
But what I propose is the opposite: to widen our vision to, say, an order of magnitude farther (e.g. 10kN). Or, even better, to try to see how the two cases behave asymptotically, as time tends to infinity.
We will see that the “trend” case gives a constant climacogram (nb., here, as in Markonis and Koutsoyiannis 2013, I speak about the temporally averaged process, not the cumulative one), not changing with time scale k. For small time scales, the “fluctuation” case will also yield a fairly constant climacogram. However, for large time scales, not only will it give a decreasing climacogram, but the rate of decrease will be very steep, much steeper than in the case of white noise. I have not made the exact calculations for this case, but I guess the behaviour will be virtually the same as what you see in Figure 8 of Markonis and Koutsoyiannis, the series labelled “harmonic”. That is, I guess we will have an envelope curve with a slope equal to -1 (vs. -0.5 for white noise).
However, there are additional problems here. For the “trend” case we have abused the notion of standard deviation and hence that of the climacogram. Indeed, taking some points on a line (straight or otherwise) we can calculate a numerical value of a standard deviation using the classical statistical formula. But does this represent anything in a stochastic theoretical framework? The answer is negative. The process is simply nonstationary, so we do not have the right to treat the points of the time series as if they were generated by a stationary and ergodic process, nor to use the statistical formula that applies to stationary processes.
Another point is that we would not need at all to treat this case in a stochastic framework. It would suffice to describe it deterministically, fully avoiding reference to standard deviation.
A final point is that, if a “trend” like this were a realistic representation of the climate, we would not be here to discuss it. Loosely speaking (as in my previous comment), local trends appeared throughout all of Earth's history, but these were parts of fluctuations. A consistent trend would lead to runaway behaviour. That is why it is better to use stationary descriptions within stochastic models of nature (see additional justification in Koutsoyiannis 2006, 2011).
The “fluctuation” case could, loosely speaking, classify as a stationary process, under the terms explained in Appendix 1 of Markonis and Koutsoyiannis (2013). However, again, we would not need to use stochastics if the “fluctuation” were regular and hence describable in deterministic terms. The need for stochastic description arises when the period and amplitude of the fluctuation vary, and when we have many scales of fluctuation. The synthesis of many scales of fluctuation results in a process with LTP, as described in the section “A physical explanation” in Koutsoyiannis (2002).
My conclusion is: let us not restrict our vision. Note that Markonis and Koutsoyiannis used the widest possible time windows for the available instrumental and proxy records.
D.
References
Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47 (4), 573–595, 2002.
Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 324, 239–254, 2006.
Koutsoyiannis, D., Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495, 2011.
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Thanks for your clarifications, Demetris. I do note however that you didn’t answer the questions posed by Rasmus and myself:
Can we agree that radiative forcing (be it natural or anthropogenic) introduces LTP?
Can we agree that radiative forcing (be it natural or anthropogenic) is omnipresent for the real world climate?
If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal (unforced) variability, where does this increase in energy come from?
Could you please briefly comment on these specific questions?
Regarding the different dichotomies that you mention: I certainly agree that physics and statistics are not a dichotomy; both are needed to make sense of the climate system (and of the world around us in general). That is also the point that Rasmus is making, if I understood him correctly. In your reply to Armin, you questioned a statistical result (a strong difference in LTP of land and sea data) based on physical reasoning. That is also what I am doing when I question the dominance of unforced internal variability in explaining the current warming (which is what I interpret your position to be), since I find it inconsistent with basic energy balance considerations (hence the third question above).
Regarding my weight gain analogy, you wrote: “If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.” However, based purely on statistical analyses, the evolution of the time series was interpreted to be due to stochastic variations. This would presumably still be the case even if I had detailed records of all inputs and outputs (since the argument was made purely on statistical grounds). Now, with the climate system, we do have some (imperfect) idea about energy inputs and outputs (see e.g. Trenberth et al or Hansen et al). These indicate that the energy content of the whole climate system has increased (i.e. radiative forcing acting on the system).
How could this not have influenced the global average temperature of the planet?
Dear Bart,
I do not avoid answering your questions, or anyone else's. I am just subject to energy and time constraints. There is a lot of interesting stuff to read and comment on. As you see, I am working to contribute, but I have limitations (and additional duties). Thus I must set priorities. My first priority is that we should understand each other on general principles before we can discuss more specific issues. This is not easy. For example, when, after my extensive essay on dichotomies, you say in your last comment that “the evolution of the time series was interpreted to be due to stochastic variations”, I feel that I have again to stress my general view: there is no such thing as “stochastic variation”. The variation is real; it was caused by a real cause, e.g. because you ate too much chocolate. In this respect I would never call it noise. It is signal.
Stochastic can be just the model that we use to describe the real variation. And as I said, such type of model is useful only if the deterministic description fails. Whenever one is obliged to use terms like average, variance, autocorrelation, significance testing etc., he must be aware that, most probably, he has already departed from a deterministic description and resorted to stochastic description.
A stochastic model should fully respect all laws and be consistent with all available information. If everything is known, then the stochastic model should reduce to the deterministic one. If there is uncertainty, then the stochastic model can describe it in terms of probability.
D.
Dear Demetris,
I’m trying to focus the discussion, and do not in any way mean to imply that you’re “avoiding to answer the questions”.
The dichotomy that is at the centre of this discussion is that between internal (unforced) variability and forced changes (where the latter could be natural or anthropogenic). And what LTP could tell us about this distinction (if anything).
Internal variability usually involves a redistribution of energy within the climate system (where the ensuing changes in surface temperature cause an energy imbalance which works to restore the system back to its equilibrium value where outgoing energy equals incoming energy; Rob’s second point in an earlier comment). Climate forcings involve a change in the energy balance (e.g. from changes in the sun, the albedo or the greenhouse gas loading), which causes the earth’s temperature to change.
One apparent disagreement about this distinction is whether the presence of LTP is related to the degree of internal variability. Rasmus argued that this is not necessarily the case, since forcings also contribute to LTP and because forcings are omnipresent in the temperature data.
I would be curious as to your and Armin’s view on this, time permitting of course.
Dear Demetris,
My point in showing a “trend” and a “fluctuation” over a limited time span is that by applying your method of constructing a climacogram you lose information on the signature (time evolution) of the investigated time series.
What you are actually doing is cutting the time series into N pieces (averages) of length k years and determining the standard deviation of their distribution. For the same distribution, you may reconstruct time series with different signatures by putting the pieces back in a different order. This implies that you lose information on the signature.
Time series (of global mean temperature) with different signatures may have different effects on the energy budget of the climate system. The energy considerations then, mathematically speaking, constrain the degrees of freedom you find with your statistical method.
So, can you confirm that by using your statistical method (MK2012)
1) you lose information on the signature of the time series
2) by adding energy considerations to your method, you will lose less information about the signature of the time series of global mean temperature
Dear Rob,
Yes, of course, by using the standard deviation you lose information in comparison to the complete catalogue of values in the time series. However, using standard deviations at a continuum of time scales (the climacogram), you lose less information, or, if you allow me to put it differently, you lose the information that you want to lose.
I think it is a general practice in the modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the real-world system. We try to find which the essential elements of the system are and represent those, ignoring the less important. For example, in describing a mole of a gas we usually do not care about the positions and momenta of the 6 x 10^23 molecules it contains and prefer to describe the system in terms of macroscopic quantities like pressure and temperature, which, by the way, are statistical quantities. Certainly, the use of temperature and pressure is associated with a loss of information, but this is intentional, isn't it?
Stochastic models are macroscopic descriptions. Now, the climacogram is one-to-one related to the autocovariance function and the power spectrum of a stochastic process. Thus, if you assume a model for the power spectrum of a process, you have a model for the climacogram; no more and no less information with respect to the modelled power spectrum.
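For readers unfamiliar with this correspondence, a standard form of the relation (for a stationary process in continuous time, written in the plain notation used elsewhere in this discussion) is:

    γ(k) = Var[(1/k) ∫_0^k x(t) dt] = (2/k^2) ∫_0^k (k − τ) c(τ) dτ

where γ(k) is the climacogram (the variance of the k-averaged process) and c(τ) is the autocovariance function; the power spectrum is in turn the Fourier transform of c(τ), so specifying any one of the three determines the other two.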
The loss of information in stochastic modelling can be controlled. I believe that if you can quantify the “signatures”, the “energy budgets”, the “energy considerations” and the “degrees of freedom” you refer to, they can be represented in stochastic modelling. Of course this will need some work, but I think in principle it is possible.
If we accept that these quantities (the quantified “signatures”, “budgets” etc.) are somewhat reflected in the data, they are already there in the existing analysis. But if you can provide me with additional, quantified constraints, I will think how these could be taken into account.
D.
Arthur, you say:
I am sorry that you feel like this. Perhaps Rob, Marcel and Bart made a bad choice by inviting a person who has little to contribute. On the other hand, as I see from his comment, Bart agreed that physics and statistics are not a dichotomy and he did not express disagreements on what I wrote for the other dichotomies, so what is your point?
Please note that I am an engineer by profession and I would be unprofessional if I sat back and watched. Do you feel that I am sitting back and watching right now, or am I trying to interact with you and the other people involved in the climate dialogue?
Of course it has. But a slow variation at a single time scale does not suffice to generate LTP. A single time scale of variation results in Markovian dependence. You need many scales of change to get LTP. I refer you to my post above; I devoted effort and time to writing it, and I hope it would not be a waste of time for you to read it. In addition, I can refer you to some of my publications already mentioned above, investigating the relation of LTP with extremal entropy production, or with changes over a continuum of scales.
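As a quick illustration of this point (a toy construction of my own, not taken from the papers cited above): superposing a few Markovian (AR(1)) components with widely separated characteristic time scales produces a record whose apparent Hurst coefficient sits well above the 0.5 of a single short-memory process, mimicking LTP over a finite range of scales.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 2 ** 15

    def ar1(a, n):
        # AR(1): x[t] = a*x[t-1] + noise -- a single characteristic time scale
        x, e = np.zeros(n), rng.normal(size=n)
        for t in range(1, n):
            x[t] = a * x[t - 1] + e[t]
        return x

    def hurst(x, scales):
        # crude estimate: slope of log std(k-averages) vs log k equals H - 1
        sd = [np.std([x[i:i + k].mean() for i in range(0, len(x) - k + 1, k)])
              for k in scales]
        return np.polyfit(np.log(scales), np.log(sd), 1)[0] + 1.0

    scales = [2 ** j for j in range(3, 10)]
    single = ar1(0.7, n)                                  # one time scale
    mixed = ar1(0.7, n) + ar1(0.97, n) + ar1(0.999, n)    # many time scales

    print(round(hurst(single, scales), 2))  # close to 0.5 at these scales
    print(round(hurst(mixed, scales), 2))   # well above 0.5: LTP-like scaling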
It is not difficult to infer that the dominant mechanism is the second, because we speak about the 7th-15th centuries, during which the exploitation of the Nile was minimal in comparison to that of today. The dams were built only in the 20th century. Note that we also have a flow record of the Nile for the modern period (starting in 1870). The record is naturalized, meaning that hydrologists were able, from water budget data, to determine what the natural flow discharge was, taking into account human withdrawals etc. The modern naturalized record verifies LTP. We also have records at other sites within the Nile basin, which again verify LTP.
If this is garbage to you, I am sorry; the only thing I can do is to invite you to make a better analysis and shed more light.
D.
Dear Spencer,
I have no words to thank you for your comments. I could not reach the clarity (not to mention the level of English) of your comments. In particular, I warmly endorse your last comment assessing that my opinion should not be classed as “option 1” and clarifying that a random walk cannot stand as a physically realistic model, while LTP can. I trust Bart, who seems to dislike unit-root processes, will appreciate the latter feature.
Not only does your comment make my life easier, in terms of effort to reply, but it also gives me a warm feeling of sharing similar ideas with a knowledgeable colleague, who, notably, has a signal processing background that I do not have.
Demetris
Demetris, in your reply to Spencer you state that LTP can stand as a physically realistic model. What do you mean by that? What is your physical interpretation of the presence of LTP? Is a physical interpretation based solely on the presence of LTP possible at all? I do note that Spencer equates LTP with internal variability (rather than being the combined result of internal variability and radiative forcing). What is your view on that?
You state that I dislike unit-root processes. That would be nonsensical, as LTP and unit roots are just statistical features of a time series. What I alluded to in my April 1 post is that certain interpretations of statistical features may be unphysical.
Dear Bart,
You say:
First, I am sorry if my failed attempt to inject a little bit of humour in the discussion annoyed you and if you found it nonsensical. Therefore, I’ll try to be serious in this comment.
Second, I would formulate your statement in a different way: strictly speaking, LTP and unit roots are not features of a time series but of a stochastic model. A time series can be different things, e.g. a series of observations of a natural process, a series of outputs of a deterministic or a stochastic model, etc. The time series alone cannot tell us that it exhibits LTP, STP, unit-root behaviour, or whatever. If it could, we would not be having this discussion.
If we do not know the exact dynamics that gave rise to a certain time series, different (infinitely many) stochastic models could be used to represent it. Statistical analysis of the time series will then tell us which of the models, e.g. an LTP model, an STP model, a unit-root model, etc., is more consistent with the statistical properties of the time series. Such comparisons are unavoidably inductive procedures, not strict mathematical proofs.
Third, I fully agree with you that certain interpretations (and models) may be unphysical. This implies an additional, very powerful means to reject some of the models, the deductive logic.
As a first example, let us examine a unit-root process. I must admit that I dislike even the term “unit root”, which overstresses a minor mathematical feature of a process. The important characteristic of such a process is that it is nonstationary. A nonstationary process in which the mean and/or the standard deviation tends to infinity with increasing time can be rejected on the basis of deduction. There is no need to compare the statistical properties of the time series with the theoretical properties of the model.
The simplest case of such a nonstationary process is the random walk process, closely related to the Brownian motion. Spencer has explained why we should reject it:
I also quote another explanation by Lubos Motl from his blog:
Lubos adds some “damping” to the random walk to make it more physically realistic. He thus gets a Markovian process. This is indeed more realistic, but it classifies as an STP process. An STP process is not easy to reject on a deductive basis. However, as I explained in my main post, a Markovian process has some theoretical problems; quoting my main post:
For these reasons, explained in more detail in my main post, among nonstationary, stationary STP, and stationary LTP processes, the less unphysical are the stationary LTP.
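To make this deductive reasoning concrete, here is a minimal numerical sketch (arbitrary parameters and synthetic data, purely illustrative): the variance of a random walk grows without bound, which is the basis for rejecting it, while the damped, Markovian (AR(1)) alternative settles to a finite variance.

    import numpy as np

    rng = np.random.default_rng(1)
    runs, n = 2000, 1000
    eps = rng.normal(size=(runs, n))

    walk = np.cumsum(eps, axis=1)        # random walk: variance grows like t

    a = 0.9                              # damping coefficient, |a| < 1
    ar1 = np.zeros((runs, n))            # AR(1): x[t] = a*x[t-1] + eps[t]
    for t in range(1, n):
        ar1[:, t] = a * ar1[:, t - 1] + eps[:, t]

    for t in (10, 100, 1000):
        print(t, round(walk[:, t - 1].var(), 1), round(ar1[:, t - 1].var(), 1))
    # walk variance keeps rising; AR(1) variance levels off near 1/(1 - a**2)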
So, dear Bart, do my main post and the additional explanations in my comments above give answers to your following questions:
If not, could you also see some of the papers I give in my references, particularly those I mention in my reply to Arthur (also cited in Spencer’s comments)?
I left one sentence of your comment unanswered. I will answer this soon (I have something else to do in the meantime).
D.
Bart, you say:
I am not sure if Spencer equates LTP with internal variability. In my reading he says that internal variability can lead to LTP and he uses for that a nice example, the cloud cover.
I have tried to investigate the question whether a system can exhibit LTP if its inputs are constant. You may read two papers about this question (2006 and 2010). As you can see, even an entirely constant input can give rise to LTP.
This brings us to Rasmus’s questions, which you also quoted in one of your other comments:
My answer to these would be: I agree that (changing) forcing can introduce LTP and that it is omnipresent. But LTP can also emerge from the internal dynamics alone, as the above examples show. Actually, I believe it is the internal dynamics that determine whether or not LTP emerges. As a counterexample, I can imagine a hypothetical system with strong damping/stabilizing mechanisms, in which even an incoming signal exhibiting LTP would not be manifested in the system state. For the Earth system, the radiative forcing itself, being a balance of incoming and outgoing radiation, depends on the internal variability.
D.
Dear Demetris,
Thanks for answering the questions as posed by Rasmus (and repeated by me): “I agree that (changing) forcing can introduce LTP and that it is omnipresent.”
From this I conclude that you agree that the presence of LTP is not necessarily indicative of internal variability being dominant (since forcings also introduce LTP).
Is this a correct interpretation of your views?
Regarding physics vs statistics: They are both useful and needed to make sense of the world around us, hence there is no dichotomy. Statistics in the context of climate science is a tool to understand the physics. My repeated questions were an attempt to have you clarify what your statistical analyses mean in physical terms. Your answers re forcings and LTP (as quoted above) and about conservation of energy (earlier) are steps in that direction which I warmly welcome.
Dear Bart, as I wrote above
Please see also the publications I mentioned which demonstrate that the internal variability is sufficient to introduce LTP, even without external variability.
Otherwise, I welcome your recognition of statistics as a tool to understand physics.
D.
Also, see Armin’s excellent comment about it.
Dear Demetris,
Thanks for clarifying the nature of the different stochastic models; very useful to read. I appreciate that you accept that conservation of energy prevents the climate from behaving as a random walk. That’s a strong physical constraint and it’s important to be clear about its implications; thanks for adding this clarity from your part too.
You further state that stationary LTP is the least unphysical process by which to interpret the evolution of global avg temperature. However, that doesn't tell me anything about the physical processes which you deem responsible for the observed changes in temperature. Are these changes predominantly due to a redistribution of energy within the climate system, due to (natural and anthropogenic) climate forcings, or due to something else (e.g. fast feedbacks wandering off)? Does LTP imply anything about the warming being governed by unforced vs forced changes, or by anthropogenic vs natural factors? I'm hoping for an answer in physical, not statistical terms.
Dear Armin,
Thanks for joining in again. I appreciate you answering the questions posed earlier by Rasmus and myself: “Natural Forcing plays an important role for the LTP and is omnipresent in climate”. Is it correct to deduce from this that the presence of LTP cannot by itself distinguish between unforced and forced changes in climate? (i.e. LTP is not necessarily indicative of internal (unforced) variability)
You claim that natural forcings contribute to LTP while GHG forcing does not. That seems in contradiction to what Rasmus claimed (his fig 2). Hopefully Rasmus can respond with his view on this.
I am somewhat surprised that a trend (e.g. from GHG forcing or from whatever other cause) would not contribute to LTP; how could it not? Since the analysis method is purely based on the statistical behavior, I would guess that it’s purely coincidental that LTP is not increased by one (GHG) forcing whereas it is increased by other (natural) forcings. After all, if the GHG were from natural origin, it would have the same (according to you negligible) impact on LTP. So from that perspective the presence of LTP would not be a strong indication of anthropogenic versus natural causation, right? Moreover, human forcing is not limited to GHG, but also includes aerosol forcing and land use (albedo) forcing (both of which are thought to be net negative, i.e. moderating the warming from GHG). I’m unclear as to how that factors into your analysis and interpretation (but that’s perhaps a detail).
Dear Rasmus,
Could you expand on your view of LTP in light of Armin's comments? Where does your definition differ from or agree with his? I am also curious to hear some more details re your fig 2: you mention only the inclusion and exclusion of GHG forcing (black and grey line, resp.). However, does the black line include all known (natural and anthropogenic) forcings and the grey line none of these (i.e. showing only the modeled internal variability)?
Dear Demetris,
Thank you very much for your answers on my questions. These may lead to the quintessence of differences in insights. But let’s start where I agree with you.
You stated: “I think it is a general practice in modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the realworld system. We try to find which the essential elements of the system are and we try to represent those, ignoring the less important.”
I agree with this statement, although I find it a little surprising given that you said in your comment of 1 May 2013, 8:07 pm, that “Models vs. reality is a true dichotomy”. In my view the two statements contradict each other.
But let's focus on two kinds of models: Atmosphere-Ocean General Circulation Models (AOGCMs) and your statistical model (described in MK2012). AOGCMs are the most comprehensive models climate scientists use, in the sense that most (relevant) physical processes following physical laws are incorporated. Every grid box of the model obeys the first law of thermodynamics, i.e. conservation of energy. This is in my view an essential element.
It implies on the macroscopic scale that if a positive radiative forcing acts for a long time (e.g. the 30% increase in total solar irradiance over the history of the earth), the climate system will gain energy until the outgoing energy flux of the system is in equilibrium with the influx. This will be reflected in changes in global mean temperature.
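A minimal zero-dimensional sketch of this energy-balance argument (all parameter values are illustrative round numbers, not fitted to anything): with effective heat capacity C, feedback parameter λ (written lam below) and a constant forcing F switched on at t = 0, the temperature anomaly relaxes to the equilibrium value F/λ, as conservation of energy demands.

    import numpy as np

    C, lam, F = 8.0, 1.2, 3.7   # W yr m^-2 K^-1, W m^-2 K^-1, W m^-2 (illustrative)
    dt, n = 0.1, 1000           # time step (yr) and number of steps

    T = np.zeros(n)
    for i in range(1, n):
        # energy balance: C dT/dt = F - lam * T
        T[i] = T[i - 1] + dt * (F - lam * T[i - 1]) / C

    print(round(T[-1], 2), round(F / lam, 2))   # the two agree: T settles at F/lam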
How about your model? You treat time series (of global mean temperature) as if they consist of multiple fluctuations at all time scales. By ignoring the possibility of (long-lasting) trends (clearly the case for the brightness increase of the sun), energy is not conserved, for the reasons I explained in my last comment.
In my view physics should be an essential part of any (climate) model. Statistics can be a tool for analyzing observations or model data; it is not the same as physical laws (as you stated in your comment of 1 May 2013, 8:07 pm: “In other words, statistics is physics.”).
So, I am very interested to hear whether it is possible that under a myriad of external forcings (natural and in the last centuries also anthropogenic) conservation of energy is not violated in a model which includes fluctuations only.
Rob
Dear Bart,
The first paragraph of your last comment gave me the impression that we understand each other better now, but then the second part made me worry:
I believe that if something does not tell you or me anything about the observed changes, etc., it is irrelevant to our discussion. What you or I understand as physical processes depends on your or my understanding (nb., understanding is subjective). Physical processes are not just those explained by Newton's laws. Some people are able to understand systems governed by the Second Law of thermodynamics (which, by the way, is a statistical mechanical law) and classify them as physical systems. Some are even able to understand, and classify as physical, quantum systems governed by the Schrödinger equation, in which the concept of probability is central. Some even dare to speak about flows of probability, and imply a law of conservation of probability. Most of these concepts I have difficulties understanding, but I would not characterize them as unphysical. I would avoid applying the Procrustean idea that physical is only what fits in my own mind. I think the climate system is quite difficult to study, and I find that naïve attempts to describe (and understand) it in deterministic terms can only lead to deadlocks. Therefore I welcome any ideas from the stochastic toolbox, such as probability and its fluxes, the principle of maximum entropy, Bayesian statistics, and the like. All these become physical as long as they are applied to physical systems.
Furthermore, I cannot understand how these two extracts from your own comments can be consistent to each other:
Quote 1 ( earlier comment)
Quote 2 ( last comment)
But I hope the explanations given in the last excellent comment by Spencer answer your questions “in physical terms”.
D.
Arthur,
I certainly do not attack you or your straw man, of whose existence I was not aware. My impression is that we are engaged in dialogue, as we are prompted by the title of this forum, right? As “dialogue” happens to be a Greek word, I believe I have a good understanding of its meaning. Well, climate is also a Greek word, but certainly this one is more difficult to understand.
You are right that I did not provide citations covering all my assertions about the Nile. So here is a paper which examines the modern record of the Nile: Medium-range flow prediction for the Nile: a comparison of stochastic and deterministic methods. You may see in Table 2 that the Hurst coefficient of the annual series is 0.85, which indicates LTP. I hope that after the scheduled event on Harold Hurst additional citations will be available (in particular, about other sites in the Nile basin). Note that I have published about the Nilometer record several times, but Figure 2 in my main post above is for a longer period and is an original figure (as are all other figures), not copied from previous publications.
About the question what “physical insight” is, please refer to my answer to Bart above. But I appreciate that you “have read enough about maximum entropy”, and I respect your wish to see another example. So here it is: Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches.
This is about a catchment which is important for the Athens water supply system. As you will see, LTP is present in both rainfall and temperature and is further amplified in river flow because of the interaction of the river with groundwater processes as well as because of the changes in the human withdrawals of water (the latter is of minor importance).
With reference to modelling principles, which I referred to in my first comment to Rasmus, please notice the following extract from that paper:
You may also notice in the last panel of Fig. 12 how poor the performance of climate models is, even after downscaling, and how stable the future projections obtained by using climate model projections are. Of course, in the system management we have ignored GCM results, as detailed in another paper, Hurst-Kolmogorov dynamics and uncertainty.
D.
Dear Rob,
Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question.
In other words, my reply as to whether statistics is physics is already there, as well as in many of my other comments and many of my publications, and there is little to add. Of course you may feel free to banish statistical thermophysics and quantum physics from physics, but I will not follow you. Assuming that you banish them, can you prove the ideal gas law (PV = nRT = NkT) without using statistics?
As for myself, given that I regard statistics as physics, I can use tools like the law of large numbers, the central limit theorem and the principle of maximum entropy as powerful means of making inference in physics. These say that you do not need to study the details in order to know the macroscopic behaviour of a system comprising myriads of variables. This is the case in the example I gave before: in a monoatomic gas you have 6 (variables/molecule) x 6 x 10^23 (molecules/mol) = a great many myriads of variables, and you do not care to know exactly even a single one of them. For you can infer the behaviour of the system using the three probability laws above. The conservation of momentum and energy are imposed as constraints for the entire system, neglecting the details of energy exchange between individual molecules.
The situation is quite similar in climate. Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?
D.
All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP cannot be used to distinguish forced from unforced changes in global avg temperature. The omnipresence of both unforced and forced changes means that it is very difficult (if not impossible) to know the LTP signature of each. Therefore, LTP by itself doesn't seem to provide insight into the causal relationships of change. It is, however, relevant for trend significance, though fraught with challenges since the unforced LTP signature is not known.
I would also like to put forward the issue that was voiced by Lennart van der Linde in the public comments, echoing Rasmus: the stronger the natural variability, the more strongly spontaneous changes in the earth system dynamics are amplified, and hence the higher the climate sensitivity. This limits the fraction of warming that can be attributed to natural variability, since it would simultaneously enhance the fraction attributable to a given amount of radiative forcing.
Dear Demetris,
“Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question”
Yes, that’s quite a coincidence, I tend to believe you. Your answer contains interesting information about physics and statistics.
In my view there is – to use your words – a dichotomy between statistics derived from fundamental physics and statistics used as an analyzing tool. Let's call them statistics1 and statistics2, respectively.
Statistics1:
For example, molecular velocities following the Maxwell-Boltzmann distribution, photon energies following the Bose-Einstein distribution, the gas law linking temperature, pressure and volume (as you mentioned), and Heisenberg's uncertainty principle in quantum mechanics (also mentioned in your earlier comment). All of these can be derived from fundamental physics. These distributions and/or probabilities are fundamental properties of nature.
Statistics2:
For example decomposing time series into harmonics (Fourier transformation) or into other functions (e.g. Laplace transformation), wavelet analysis and regression. These are not based on fundamental physics, but are mathematical tools for analyzing experimental results and/or observations. Your climacogram is also an analyzing tool as you have characterized it yourself in the abstract of your paper with Markonis (MK2012):
“In our analysis, we use a simple tool, the climacogram, which is the logarithmic plot of standard deviation versus time scale, and its slope can be used to identify the presence of HK dynamics”
All of these analyzing techniques are based on assumptions and have their limitations. They can be useful, but they are by no means fundamental properties of nature. Therefore statistics1 is fundamentally different from statistics2.
Can you agree with the distinction I make between statistics1 and statistics2?
To respond to your questions:
“Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?”
As explained in my earlier comment, your approach doesn't say anything about conservation of energy. Considering system Earth, this is an important constraint. Including it, you might be able to distinguish between forced and unforced climate (global mean temperature) change. As you already mentioned, the climacogram cannot make the distinction. My guess is that taking energy constraints into account will affect the conclusions on trend significance. So it's not a matter of violation, but of omission.
Rob
Dear Armin,
I think we’ve reached an agreement on the issue whether forcings affect LTP, if I understand you right. And forcing is omnipresent.
If we say that trends, too, have an LTP nature in addition to longterm variability, then we still haven't managed to distinguish the two.
Can we agree that any use of LTP in hypothesis testing has to distinguish between intrinsic (“noise”) and externally forced LTP before we can say whether a trend is part of the natural internal variability?
You may not think it “is nearly impossible to have a meaningful discussion”, but I think you’re wrong.
Firstly, climate analysis and trend testing is not just a question of LTP. Furthermore, one always needs to make a set of assumptions before making sense of the mathematics, even when applying LTP models.
It is easy to produce a bunch of meaningless numbers even with the most elegant mathematical model, for instance the mathematical structure of the LTP.
Can you really demonstrate this? I'm sorry, I haven't read your paper, although the papers I've read on LTP so far have not really convinced me: too idealistic and not sufficiently practical. These papers also neglect a large scope of additional and relevant information.
In order to really understand LTP, you need to unveil the physical mechanisms which are at play.
All correlations can give you misleading numbers when based on a finite sample, and if there is persistence, the danger of getting an accidentally good fit is greater. With low-level chaos and different (unpredictable) weather regimes, I wonder if LTP assumptions and models may be tricked.
One way to test your methods is to carry out a large set of double-blind tests with data samples for which the answers are already known, for instance from climate/weather models (“surrogate” data or “pseudo-reality”).
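In that spirit, here is a minimal sketch of such a test on synthetic data with a known answer (the crude climacogram-slope estimator, the record length and the trend size are all illustrative choices of mine): the estimator recovers H ≈ 0.5 for pure white noise, but reports strongly inflated persistence once a deterministic trend is added to the very same noise.

    import numpy as np

    def hurst_climacogram(x, scales):
        # crude Hurst estimate: slope of log std(k-averages) vs log k equals H - 1
        sd = [np.std([x[i:i + k].mean() for i in range(0, len(x) - k + 1, k)])
              for k in scales]
        return np.polyfit(np.log(scales), np.log(sd), 1)[0] + 1.0

    rng = np.random.default_rng(42)
    scales = [2 ** j for j in range(1, 8)]
    noise = rng.normal(size=4096)               # known answer: H = 0.5
    trended = noise + 0.001 * np.arange(4096)   # same noise plus a small trend

    print(round(hurst_climacogram(noise, scales), 2))    # ~0.5, as it should be
    print(round(hurst_climacogram(trended, scales), 2))  # inflated: trend mimics LTP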
Dear Demetris,
Let me respond to some of your points:
1. Models vs. reality – I think we agree that models are not the real world, but they are nevertheless useful concepts. They are very useful for providing a ‘pseudo-reality’ against which statistical methods can be tested, mainly because we can make decisions about the simulated world, e.g. forcing or no forcing.
2. Physics vs. statistics – I agree that physics and statistics both are important ways of understanding the universe. I also think that physical laws constrain possible outcomes and this must be reflected in the statistics.
4. Trends vs. fluctuations – I agree that the concept of a ‘trend’ is not very definite, and it's important to state the scientific question more clearly. The question that I think we are dealing with is whether the current global warming (“trend”) could arise from no forcing – or from natural forcings alone.
5. Linearity vs. nonlinearity – The difference between linearity and nonlinearity is not controversial. You could add other more relevant phenomena, such as tropical cyclones. On the other hand, there are aspects which are more linear, and we know that there are some types of forcings which cause a somewhat linear response. Take the seasonal cycle, for instance: it is fairly clear what effect changes in the incoming solar energy have on the temperature statistics. We also see the effect from volcanoes, although it may be more complicated. Slow changes in the Earth's orbit around the sun also appear to produce a response which is fairly clear. And we know that greenhouse gases have an effect – we can look to the other planets in our solar system. There may be nonlinear effects from the ocean-atmosphere system causing decadal variations, and the question is whether these are sufficient to explain the exceptional warming that we see now. Data from the past suggest this warming is highly unusual, and even if it were a spontaneous freak event, it would still be quite unlikely.
6. Stochastic vs. deterministic models – on the quantum physics level, everything is stochastic, but the ‘laws of statistics’ make classical-scale physics considerably more deterministic. Still, the presence of chaotic dynamics makes some processes impossible to predict on longer time scales. I also agree that not all stochastic models violate the laws of physics, but there are many stochastic models which do. Furthermore, there may be some stochastic models which provide useful predictions for certain scientific questions and still give misleading results for others.
Models describing the mean surface temperature on a planet must account for the energy budget, thermodynamics, and dynamics. There are some nonstationary models, such as “random walk” models, which may be hard to reconcile with the planet's energy balance and hydrostatic stability. In such circumstances there is a true dichotomy – but in many cases there is no dichotomy.
The way I view this is that any trend will affect the autocorrelations. In addition, there is a question about how the climate system reacts to even a linear trend in the forcing. We agree that there is a presence of internal dynamics and that there are feedbacks which result in interannual and decadal variations. In a chaotic system, changes in the external conditions may imply changes in the system's evolution at points of bifurcation.
I've earlier stated that I believe that the weather is ‘chaotic’ whereas the climate is not, just as the individual days of the next year are unpredictable whereas the seasonal statistics are fairly well known (seasonal forecasts try to say something about how much the future will diverge from the ‘normal’ state). This, however, makes an assumption about time scales. We know that there are fluctuations in the global mean temperature, but these are distinct from the slow changes.
In other words, I’m not convinced that GHG forcing does not contribute to LTP.
Just to comment on ‘unphysical models’ to clarify my position.
In my view, a model is ‘unphysical’ if it violates one or more laws of physics. A model may incorporate some aspects of physics (e.g. maximising entropy), but if it violates other physical constraints, such as the conservation of mass, energy, or charge on the classical scale, then it is ‘unphysical’.
If the global mean temperature undergoes large excursions, then this will clearly have an impact on other parts of the climate system, as it implies that heat is shifted around. Conservation of energy implies that.
When it comes to LTP, trends, and internal variations, it may be useful to look at sea level pressure (SLP) or SLP indices. We know a priori that the SLP is not likely to have any longterm trend (or perhaps just a tiny one due to changes in the atmosphere's composition), as the barometric pressure is a measure of the mass of air above the point of measurement. If we can rule out the possibility that the atmosphere increases in mass, then SLP is expected to exhibit only LTP but no trend.
It’s a bit speculative – agreed – that the SLP should have similar LTP as the global mean temperature.
Another diagnostic could be the global mean sea level change or the total ocean heat content. See the recent post on RealClimate.org. The question is whether this measure shows more of a trend and less decadal variability. How does the LTP detection differ in these three cases, and what can we learn from that? As I stated in my opening part, I think it's a great mistake to look at only one indicator – and in a sense, this is where one 'unphysical' property resides.
Dear Bart,
sorry for answering late. For answering your question, we have to note that we distinguish between natural fluctuations and trends. When looking at an LTP curve, we cannot say a priori what is trend and what is LTP. When analysing LTP in the IMPROPER way, which is still one of the best-liked ways among many climate scientists, namely by the ACF, then we also cannot distinguish them, and this is the reason why Rasmus comes to the wrong conclusion that external trends contribute to the LTP.
The real crux is that many of our colleagues simply have problems reading and understanding the literature on LTP that requires some mathematical understanding, and so they cannot appreciate the enormous progress made in this field in the past 15y. By definition, an external trend does not contribute to the LTP of a record. The LTP is natural; the trend is external and deterministic. When using the PROPER methods like DFA and WT, you can indeed quantify the natural LTP in a record even in the presence of an external monotonic trend!
Of course, if you use the improper ACF as Rasmus did, you cannot!!
When testing to what extent GHG is responsible for LTP, we found it is not; please have a look at our 2004 GRL, where we also specified the methods.
It is very unfortunate that Rasmus does not seem to be able to read this and our other articles on LTP. I am sure he would appreciate them and see that LTP is not a beautiful but needless kind of theory made by strange theoreticians to make life harder for real climate scientists (forgive me when I am joking a bit). If he read them, he would certainly understand (1) from our 2009 PRE that the ACF is a poor tool for analysing LTP in a short record (we have specified the ACF analytically as a function of the record length and the correlation exponent; he only needs to read the formula); (2) from our 1998 PRL, 2001 Physica A, and several other papers including our 2013 Nature Climate Change, that there are much better tools (DFA and WT) than the ACF, and he would also see the consequences of LTP, e.g. the clustering of extremes; and (3), when reading our 2009 GRL and the more extensive 2011 PRE, he would highly appreciate that it is easy to determine the significance of a trend in a longterm correlated record, because we have specified analytic formulas for the significance as well as for the confidence intervals. This way, he would recognize that natural LTP is not some strange idea of theoreticians, but is of real use.
The WRONG alternative – and when reading the nonspecific comments of Rasmus I think he may favour that one – is to assume that natural climate variability is an AR1 process. Indeed, climate scientists liked it in the past, because it is easy to handle, easy to generate, and a significance analysis was not too difficult to do. Indeed, mathematicians already developed tools for this in the 1930s. What these climate scientists do not know is that an LTP process can also be generated very easily (one only needs to know Fourier transforms) and that performing a significance analysis is even easier than for an STP process. But I am sure this will change in the next years.
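As a rough sketch of the Fourier route mentioned above (an approximate power-law surrogate via spectral synthesis; the helper name and the standardisation step are my own choices, not Armin's published algorithm):

```python
import numpy as np

def ltp_surrogate(n, hurst, rng):
    """Approximate LTP series by spectral synthesis: random Fourier phases
    with amplitudes chosen so the power spectrum follows f**-(2H - 1)."""
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                               # avoid division by zero at f = 0
    amplitude = f ** (-(2 * hurst - 1) / 2)   # square root of the target spectrum
    spectrum = amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()           # standardise to zero mean, unit variance

rng = np.random.default_rng(1)
x = ltp_surrogate(4096, hurst=0.65, rng=rng)
print(x[:4])
```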
Finally, Rasmus will recognize that ENSO is not an example of LTP, in the same way as other quasi-oscillatory phenomena cannot be described as LTP.
When reading this again, I cannot exclude the possibility that for Rasmus, LTP is STP plus trend, but this would be a serious mistake.
Best wishes,
Armin
Dear Rasmus,
thank you. I apologize that I can only answer today. I have written a quite lengthy answer to the post of Bart from May 3, where I comment quite a lot on your ideas. After reading this, I am convinced that reading and trying to understand our papers really would help you much in this issue. Please do this!
You will see that highly recognized climate scientists like HJ Schellnhuber and H v Storch contributed to some of them. You will learn new methods, you will see quite practically, through real formulas, how poor the ACF really is, and you will see how easily the significance of external trends can be estimated in LTP records – in the same way as for STP records, only easier. Everything very practical! The only thing you have to do is read these articles!
We scientists are, of course, open to new ideas, and so it must be a great pleasure for you to get acquainted with all this new stuff. Just take the chance!!
Best wishes,
Armin
Dear Armin,
Thank you for your response. I have already had some discussions on LTP, but if you can recommend one particular paper of yours, I’ll of course read it.
In the meantime, I would urge you to use all the information available, and not just rely on one time series and its sole LTP characteristics. That is in my view 'unphysical', because we know that the various geophysical data are interlinked in the climate system. I would like to stress that we need to take a comprehensive view when we want to address questions such as whether the global warming we now witness is due to natural fluctuations (LTP) or if it is indeed forced by GHGs.
I realise that highly recognized climate scientists like HJ Schellnhuber and H v Storch have contributed to LTP work, and that this topic is very interesting. So far, I think these findings suggest possible properties, rather than the exclusion of causalities. Here the practical question which we originally addressed was whether we can decide if the current trends are 'unnatural'.
Yes, you can use some 'noise models' (e.g. FARIMA, fGn, or whatever), fit them to the data, and say that, given such properties, the trends could easily have happened by chance. My doubts about this strategy concern the design of the analytical strategy.
My point regarding the simple ACF is that the forcing will also induce LTP properties. You can of course try to remove the trend, but then you will have to make assumptions. You will need to provide convincing demonstrations that show an objective method that is not biased towards one answer.
Maybe it's easy to test the strategy against pseudo-data: results from climate models subject to different forcings. I'd be curious to know what results you would get then.
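A minimal version of such a pseudo-data test, assuming a spectral-synthesis surrogate in place of actual model output and a naive DFA implementation (my own sketch, not the published algorithms): superpose a known linear trend on a synthetic LTP series and check whether the estimated exponent moves.

```python
import numpy as np

def ltp_surrogate(n, hurst, rng):             # spectral synthesis, as sketched earlier
    f = np.fft.rfftfreq(n); f[0] = f[1]
    spec = f ** (-(2 * hurst - 1) / 2) * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def dfa(x, order=2):
    """Naive DFA: the fluctuation exponent alpha estimates H for a stationary
    LTP series.  Polynomial detrending of the profile with the given order
    removes polynomial trends of order (order - 1) from the series itself,
    so DFA2 should be insensitive to a superposed linear trend."""
    y = np.cumsum(x - x.mean())                                   # the profile
    scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for s in scales:
        segs = y[: (len(y) // s) * s].reshape(-1, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]       # slope = alpha

rng = np.random.default_rng(2)
x = ltp_surrogate(4096, hurst=0.65, rng=rng)
trend = np.linspace(0.0, 3.0, x.size)         # a known superposed "forcing" trend
print("alpha, no trend:  ", round(dfa(x), 2))
print("alpha, with trend:", round(dfa(x + trend), 2))
```

Both printed exponents should come out near the prescribed 0.65, since the quadratic segment fit removes a linear trend in the series exactly.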
I also appreciate that LTP is present in many different physical processes, but we know that there are many types of physical processes with different properties. We know that the Earth's temperature is not self-similar or power-law; both the diurnal cycle and the annual cycle have well-defined temporal scales. However, we expect a convoluted response to these.
If you can provide some indication as to what mechanisms would be responsible for LTP in the global temperature, that could shed some light on our differences. Again, I'd suggest subjecting e.g. the mean sea-level pressure to similar tests for LTP, since we expect there to be no trend there, as the atmospheric mass is constant (more or less). We could also subject different ocean data to the same type of test.
Dear Armin,
You wrote "Natural Forcing plays an important role for the LTP and is omnipresent in climate", and later on you wrote "By definition, an external trend does not contribute to the LTP of a record. The LTP is natural, the trend is external and deterministic."
I have difficulties reconciling these two statements. Does radiative forcing (e.g. from a change in solar irradiance) contribute to LTP? From the former quote above I gather your answer would be yes; from the latter I gather you would answer no.
If yes, why would a natural forcing (like a change in solar irradiance, or Milankovitch forcing) contribute to LTP, while anthropogenic forcing (like a change in GHG or aerosols, which could also change due to natural processes, though typically over longer time scales) would not?
Lennart, you said:
I thought it was clear from what I wrote earlier that I fully disagree with such statements. There is no "LTP noise". LTP is a property of the real-world climate, which emerges from its dynamics. These dynamics may be difficult to infer and express deterministically in their details, but they have some macroscopic characteristics. The macroscopic characteristics are consistent with the Hurst-Kolmogorov (stochastic) dynamics.
All remaining parts of the statement are hypotheses stemming from a belief that climate models tell us the truth. For example, how do we know about a "GHG-driven trend"? Because the climate models tell us so. But there is a problem here, as they did not predict the warming 'hiatus' during the last decade – so, let's invent some "noise" to rectify this.
But as I clearly described above and clarified later, the “noise” properties of the climate models are inconsistent with LTP.
Those who believe that climate evolution can be described in deterministic terms should have provided us with models that predict the 'hiatus' in deterministic terms.
If they need to add some "noise" to their deterministic models to match reality, they should first have taken care that their "noise" be compatible with their own models. To this aim, they could run their models in "unforced" conditions, infer the "noise" properties from there, and then use this "noise". Can a Markovian noise with a characteristic time scale of 1.25 years explain a climatic 'hiatus'? I do not think so.
To me climate is by definition a stochastic concept (see my main post); its physics is describable only in stochastic terms, and the foundation and tools for decent climate modelling are offered by stochastics. A first step in such modelling is to explore, based on instrumental data, proxies, and other quantified information, the stochastic behaviour of the real climate. I hope my publications, some of which I referred to in the above comments and in my initial post, have contributed to the latter step.
D.
I have been looking at several climate blogs to see if this Climate Dialogue discussion has attracted the interest of bloggers and climate discussers. I would say the coverage is thin: only Bishop Hill devoted an entry to this discussion, while, perhaps coincidentally, William Briggs discussed the Mexican Hat Fallacy, also discussed in my main post above. But I saw a few comments in other independent posts, among which I think one is interesting. Before I provide the link to this comment, I will tell two stories which I recalled after seeing some of the comments in blogs (including in this forum).
Once I was explaining to a colleague that in the Navier-Stokes equations (which, by the way, are used to model water flows, the weather, the ocean currents, etc.) the turbulent stresses are stochastic quantities (covariances of velocities). The colleague told me something like: Look, you may be right, but I do not care. As a student in the university I learned some of this stuff, differential equations stuff and stochastic stuff. Now I have expelled them from my mind, as I do not need anything more than the high-school physics background. Actually, whenever I am not able to downgrade my explanations to elementary-school level, people do not believe me.
Another colleague, in a discussion about climate which started as scientific and ended up as political, told me: I do not care whether climate change is real or not. Even if it were not, we should have to invent it, in order to save the planet from the various threats. (By the way, my own view is that, of course, climate change is real—climate has been changing all the time—and that we should beware of saviors.)
Now the comment I wish to refer to is by Dr. Robert G. Brown, posted at Watts Up With That?. He offers some interesting (advanced-level) physical insights and then (after the phrase in which, very rightly, he puts two words in bold) also discusses political aspects of the climate agenda.
The Orthodox Easter break gave me the chance to try to see this discussion from a more macroscopic point of view; some statistics about word usage (shown in the graph below) helped me.
I looked again at the title of the forum entry and verified that it is "Longterm persistence and trend significance". Both constituents of the title are statistical terms, yet from the beginning of the dialogue I felt that some of the discussers view statistics as disconnected from physics and identify climate with physics.
No doubt climate fully obeys physical laws, but, as I wrote in my introductory post, it happens that climate is based on statistics even in its very definition, so by depreciating statistics we also depreciate the scientific basis of climatology. More generally, in complex systems, to express or derive physical laws we need statistics. I am happy that, finally, my persistence on this thesis resulted in recognition, by most of the discussers, of statistics as an essential part of physics. I leave aside the neologisms of Statistics_1 and Statistics_2 and the implied new dichotomy—I take it as a joke. Of course in every scientific problem there is good and bad use of statistics, mathematics, logic. But eventually the correct use will prevail.
Another interesting point I noted in the discussion is a tactic like this: if something is not consistent with our ideas, let us call it unphysical, that is, violating physical laws. Of course, if a theory or analysis violates physical laws, then it should be rejected. But we have to prove which law it violates and how. A stochastic model whose fitting is based on data cannot be pronounced unphysical just because it did not explicitly consider a specific physical law, e.g. conservation of energy. Inasmuch as conservation of energy is reflected in the data, it is indirectly respected by the model as well. Unless we also convict the data (e.g. those used in my calculations or elsewhere) and call them unphysical because they are not consistent with what we trust as being physical (in this case, what the climate models are telling us).
Reading this discussion one would perhaps develop the impression that conservation of energy is the only important physical law and that it can explain everything. Of course—I have repeated it several times—it is an important law, it is never violated, but on the other hand it cannot explain everything. It does not explain, for example, why my room has a roughly uniform temperature. Infinitely many nonuniform options would not violate the conservation of energy—they would have the same total energy content (for example, if half of the room was below freezing point and the other half much warmer). The most powerful laws in physics are the variational principles rather than the conservation laws: Fermat's principle (for determining the path of light), the principle of extremal action (for determining the trajectories of simple physical systems), the principle of maximum entropy (for more complex systems). It is the latter principle which explains the uniformity of temperature in my room, not merely the conservation of energy. Conservation of energy offers just one equation for the interacting parts. In contrast, the variational principle offers as many equations as there are unknowns. That is why I believe we should employ variational principles in climate. In particular, the one I believe is most relevant in a changing climate is the principle of extremal entropy production. This gives rise to the LTP, as I explained in the paper I have already mentioned several times.
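In symbols, a minimal sketch of the room-temperature argument (standard equilibrium thermodynamics, added here for concreteness): split the room into two halves with energies E_1 and E_2; conservation of energy fixes only their sum, while entropy maximisation singles out the uniform-temperature state among all admissible splits.

```latex
\begin{align*}
  &\text{maximise } S = S_1(E_1) + S_2(E_2)
     \quad\text{subject to}\quad E_1 + E_2 = E \\
  &\Rightarrow\ \frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2},
     \qquad\text{and since}\quad \frac{\partial S_i}{\partial E_i} = \frac{1}{T_i},
     \qquad T_1 = T_2 .
\end{align*}
```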
This brings me to another point of this macroscopic overview. Of course, the content of this forum is not a formal peer-reviewed publication. But it is better if the arguments put forward can be supported by peer-reviewed publications, and if the discussers read these publications and refer to them. Each of the discussers has his own publications. I have tried to refer to some of my own several times, but perhaps I was not convincing. I am afraid that Armin had the same feeling when he wrote to Rasmus:
Looking at the word cloud above, I think there remains one of the most popular issues, which I did not cover in this brief overview: that of forcing. But I think there is nothing to add to what I wrote earlier in other comments, except that I fully endorse Spencer's comment, from which I wish to quote this part:
Dear Rasmus (and Bart),
sorry for answering late, I just did not have time before. Rasmus, from your comments I can really see that you must get more familiar with LTP and with the distinction between "external trends" and "natural fluctuations", as well as with the classical techniques based on exceedance probabilities and confidence intervals to evaluate whether some temperature increase is significant or not. Significant means it cannot be explained by the natural fluctuations in the system. You know, these things are right at the heart of this Climate Dialogue, and in order to have a meaningful discussion we must make sure that we share the same basic knowledge.
One paper to read is certainly not enough here; this would be like "a drop on a hot stone". Maybe you could start with our introductory review from 2012 in Acta Geophysica and with the SI in our recent Nature Climate Change paper, coauthored by Hans von Storch. The references are in my blog. Then you should, for a deeper insight into the methods which are essential in this field (it is the same here as in physics: the appropriate techniques and methods are essential!!), read the 2001 Physica A article on Detrended Fluctuation Analysis. If you want to go further, you can even read our 2002 article in Physica A, again by Kantelhardt et al., on multifractality. This article will soon exceed the limit of 1000 citations and maybe you will like it, but it contains a large mathematics part. But you know, mathematics is the heart of science, so you should not worry.
After that, I suggest you read four of our articles with HJ Schellnhuber, i.e. our joint PRLs (PRL is the most prestigious physics journal, as you certainly know) from 1998, 2002, and 2004 (Comment), as well as the paper by Eichner et al. in PRE. In addition – and this article is very important for you; I have mentioned it already several times – you should look at our 2004 GRL, where we discuss, for the 2nd time after our 2002 PRL, to which extent the different forcings contribute to the (quasi-universal) persistence law for continental temperatures. We show that with GHG forcing alone, as in our 2002 PRL, the persistence law cannot be reproduced by the AOGCM; the natural forcings are essential to get it right, and in particular volcanic forcings seemed to play an important role. You see, these are quite PRACTICAL PAPERS, but they involve some kind of mathematics.
Having read these papers, you can pass to our 2005 PRL, where we could show that LTP implies a clustering of extreme events, and that the clustering of extremes that we PREDICTED on the basis of LTP can indeed be seen in climate records. Again, this paper involves mathematics, but is again VERY PRACTICAL, isn't it? After that, you will be in the right mood to look at our 2009 PRE on the ACF. This paper is more demanding in mathematics, but you do not need to go through it in detail; just look at the formulas for the ACF and how it depends on the system size. After reading this article, I am sure you will not continue to analyse data by the ACF, because the finite-size effects are drastic and hide the proper behavior. So you see, this is also a VERY PRACTICAL paper, which will help you tremendously.
Finally then, I suggest you look at our 2009 GRL or, better, our 2011 PRE, which is at the heart of this Climate Dialogue. The paper is also mathematically demanding, but not too much, and since we are scientists, this does not create problems for us, since mathematics is the language of science. I hope you agree. (Otherwise, we would be philosophers.) Reading all these papers will take you some time, but this will be a very good investment!!! Just do it!!! If you have specific questions, not philosophical ones, I will be more than happy to help you. Also, if you can't get all the papers from the Internet, just tell me and I will email them to you. Getting familiar with LTP and the significance of anthropogenic trends is enormously important, at the heart of this CD, and the PREREQUISITE for a meaningful discussion. So take the chance!!
Now let me come to the confusion that arises when you and Bart talk about LTP. First of all, LTP is specific and can be described by mathematics in a well-defined manner, unlike your El Niño example, which for sure is not LTP. If I want to find LTP there, I have to analyse the time intervals between ENSO events (you know they are well defined) and then check if these intervals are LTP. This cannot be done satisfactorily, because we have far fewer than 100 intervals. I said in my first response to you already that, since we are scientists and not philosophers, we have to be specific; otherwise there will be no progress in this complex field.
Now, Bart and Rasmus, what do we understand as natural fluctuations of the climate? For sure not climate without natural forcings. Natural forcings belong to the climate and make the persistence law right (see our 2004 GRL). I wrote already before that climate is highly complex, linear and nonlinear, and everything is interwoven. The natural forcings are not responsible for external deterministic trends, just for the persistence, and one does not need to ask what the single contributions of the various forcings to temperature changes are, since everything is interwoven and probably impossible to separate. It is very naive to think that mathematical techniques could or should be able to distinguish between the various forcings. So, the natural forcings together with the unforced climate system (which is certainly not white noise) make the ups and downs of temperatures which you can easily see when you look at the data with a sliding average, and we do not actually need to specify where they come from. These ups and downs, which look like mountains and valleys on all time scales, are synonymous with LTP. I find it intriguing that we can classify these fluctuations, in a very satisfying manner, by a single number, the Hurst exponent or correlation exponent. Some climatologists, due to a lack of mathematical understanding, think that these fluctuations can be described by an AR1 process, but this is simply nonsense and in disagreement with all the facts.
Now, in addition to the natural forcings, there are anthropogenic forcings, mainly by GHG, but urban effects must also be considered. The question now is what the effect of these forcings on temperature is. This is a highly relevant question and regards the climate sensitivity. I am not an expert in this, but from my colleagues who are experts I have learnt that this is a difficult and not fully settled question, in particular when the temperature evolution of the past 15y is considered.
A PRAGMATIC way to approach this problem is what we are doing. We assume there are natural fluctuations (among others driven by the natural forcings) and a probably anthropogenic, monotonically increasing part which is kind of deterministic and which we call the external trend. You see, instead of vague speculations and philosophical discussions that lead nowhere, we make this simple assumption. This pragmatic procedure is not unusual in climate science; it is actually used in most articles that are concerned with temperature increases and significance estimations, and you should be aware of it.
Now, the big mistake made by our colleagues is that they used IMPERFECT methods like you do, namely the ACF or the power spectrum, to conclude that the natural fluctuations, defined in the way above, can be described by an AR1 process. Then they used known mathematical techniques to estimate the significance of a trend. This crucial mistake appeared also in the IPCC report, since the authors were, unlike you after reading our papers, not aware of the LTP of the climate. They assumed STP and thus got the trend estimations wrong by overestimating the significance. Our 2009 GRL and 2011 PRE show how to do these estimations correctly for LTP records.
Now I am getting tired. I hope you will enjoy our articles,
Cheers, Armin
Thank you Paul, I like your idea about the role of the volcanoes.
Arthur, thank you for your comment. I think I answered it in my quite lengthy reply to Rasmus; please have a look at it. DFA and all the other methods can only eliminate monotonic trends. We now prefer DFA2 (together with WT2) because it eliminates linear trends in the original data. If you are more interested, have a look at our 2001 Physica A paper or the SI in our 2013 Nature Climate Change.
Lennart,
Thanks for your comment, for pointing out these publications, and for your questions. I may not be the right person to judge these publications; for example, my knowledge about deep-ocean heat uptake is zero. In addition, my university does not subscribe to Nature.com journals and I do not have access to these publications. So my reply will be general.
In brief, based on the information you give about these studies, my answer to your question "Is this the kind of prediction you're asking for …?" is negative. I believe that retrospective studies are useful to explore possible explanations of observed phenomena. But it may be dangerous to believe that skill in retrospective explanation (I would not call it "prediction") should imply predictive skill. I will give a very relevant example from a report by Philip J. Klotzbach and William M. Gray, entitled "Extended Range Forecast of Atlantic Seasonal Hurricane Activity and Landfall Strike Probability for 2012" (7 December 2011). The abstract reads (emphasis added):
Also, a retrospective explanation that seems to have some skill in numerical terms does not imply that the explanation given is correct. Infinitely many models could be fitted, with good performance, to a limited data set in retrospect. Even in blogs and in the news we often (perhaps on a monthly or weekly basis) see diverse model fits that explain the global temperature evolution based on various explanatory variables (of solar, atmospheric or oceanic origin), or explain various other phenomena based on global warming. In some cases we also see future predictions based on these models. I will not criticize any specific one of them; rather, I will give a funny counterexample: if you google "proof of global warming", you will find images implying global temperature being an "explanatory variable" of a hilarious "dependent variable". I believe there may indeed be significant correlation between the two variables this counterexample refers to, but of course this is just a joke and is put as such, I guess. On the other hand, there is no shortage of studies of a similar type pretending to be serious and claiming causative relationships between global warming and numerous aspects of nature and life.
I think that to take an analysis of this type seriously, a minimum prerequisite is to contain validation of the hypothesis made. I have explained this in my initial comment to Rasmus, with respect to his model presented in his Figure 1. I am copying here what I wrote to Rasmus, also adding hyperlinks to the references I used for your convenience.
So, since you have seen the studies you point out, may I ask you two questions? Have these studies followed a split-sample technique, with a separate validation period? Do they also provide a future prediction to enable validation/falsification a few years from now? If the replies to both questions are positive, then the papers are worthy of respect. Whether or not they tell the truth is another issue; we'll know that later.
D.
Dear Demetris,
Apart from your really off-topic comment of May 8, 2013 at 7:29 pm, which gives, however, some nice insight into your motivation, I would like to respond to your on-topic comment of May 8, 2013 at 7:57 pm (which I consider a reply to my comment of May 5, 2013 at 12:59 pm).
You say:
It really surprised me that you didn't get my point. You have a statistical method to analyze time series and applied it to the earth's climate. I don't think anyone is disputing that this is just one way of looking at the behavior of the climate system. There are, however, more ways to investigate climate, e.g. considering physical laws (and yes, this also includes fundamental physical properties which can only be expressed in terms of distributions or probabilities – but again, this is fundamentally different from the statistical analysis tools you are talking about).
So, please, consider your method as one of the pieces of the (climate) puzzle. The more pieces you gather, the clearer a view you get of the complete picture of the puzzle. In other words, if you have more independent information, you can exclude (some) options you had with just one piece/method. I think that in the public comments Arthur Smith gave an excellent example to illustrate how more information can lead to more constraints (http://www.climatedialogue.org/longtermpersistenceandtrendsignificance/#comment454). He stated:
In my view (and relevant for this discussion) this means that if you describe the movements of the planets and the sun in terms of mathematical formulae, you can do that with any assumption about the center of rotation. If you add physics (here: Newton's laws of gravitation), there is only one possibility left: all planets rotate around the center of gravity (which happens to be inside the sphere of the sun).
Let's return to the climate system. You say that in order to reveal the behavior of the climate system it is sufficient to consider the climacogram, because all the physics is contained in the investigated time series. In my view this is analogous to saying that all the physics is contained in the movement of the planets and the sun. In other words: the results of the climacogram can be considered the epicycles of climate. We should be looking for physical laws to limit the possibilities.
In order to clarify possible differences in view, I would very much like you to briefly answer the following questions:
1. Do you claim that your statistical tool (the climacogram) has a status similar to statistics derived from fundamental physics? Or, in other words, that the climacogram is a fundamental property of nature?
2. Do you recognize that the climate system can be externally forced? For instance, if the sun becomes brighter, will it affect the global energy balance?
3. Do you agree that if there is a change in the energy balance, then physical processes in the climate system will be influenced? And, more specifically, that these might affect global mean temperatures?
4. Do you agree that time series of global mean temperature have deterministic as well as chaotic elements?
5. Do you agree that by including physical constraints you get a better picture of the behavior of the climate system than by only considering the climacogram?
6. As you confirmed that information is lost in the climacogram analyzing tool (as I showed, this concerns information on the signature of the investigated time series, e.g. no distinction between fluctuations and increasing signals), do you agree that adding physical insights may account for the lost information?
Rob
Rob,
Arthur's example of epicycles is an excellent one, so thanks for drawing attention to it again. First, if I remember well, the epicycle model is not only one of the ancient world; even Copernicus, who revived Aristarchus's (3rd century BC) heliocentric model, used epicycles in his model.
It is useful to think about why this model was introduced and prevailed for so many years. It is usually asserted that the reason was that the ancient Greeks regarded the circle as a perfect shape, so that Nature could not follow anything else. This may be part of the truth, but not the whole truth. Now, from the information we have from the Antikythera Mechanism, the ancient Greek analog computer used to model the planetary motions and eclipses, we can infer that this very model may have affected the physical insight. For it is easier to materialize epicycles using gears (the constituent elements of the Mechanism) than to materialize ellipses.
Whatever the reason for the prevalence of epicycles was, a metaphysical view about Nature or an effect of the then-available computer model, the example teaches us not to develop fixations about Nature, nor to adhere to available models.
Now my replies to your questions:
1. No, the climacogram is not a fundamental property of nature. Variability and change are. The climacogram is a stochastic means to describe them.
Furthermore, what you call “statistics derived from fundamental physics” may be the other way round, i.e. physics derived from fundamental statistics. But it is even better to say it in a more symmetric manner: The combined use of fundamental physics and fundamental probability enables description of complex physical systems. For example, if you take the principle of maximum entropy in its pure probabilistic formulation and the laws of conservation of momentum and energy, then you get a convenient and incredibly simple description of the pressure and temperature in your room.
2. Of course it can. But the global energy balance also depends on the internal dynamics.
3. Of course it will. But the internal dynamics are able to cause change as well.
4. No, I do not agree. Deterministic and chaotic are not a dichotomy; it is better to say deterministic chaotic. Deterministic chaotic elements do not exclude a probabilistic description; rather, they necessitate it. Once again, see my Random Walk on Water.
5. If the solution violates a physical constraint, yes, you should include it in a second step. But if your solution respects the constraint, then by explicitly adding it you won't gain anything: you will find the same solution. For example, if you use the principle of least action to derive the equations of motion of a body, it is not necessary to include the conservation of mechanical energy; rather, the latter will be derived as a result of the least action.
6. Physical insights are always welcome—but it depends on what you call physical insights. As a counterexample, in hydrology there used to be a view that by cutting a catchment into numerous pieces and applying first principles to each piece you would be able to make a model that does not need data and calibration. This reductionist thinking, which was named "physically-based modelling", is receding now, as it was gradually understood that first principles alone cannot provide a decent model, and that the smaller the pieces, the bigger the requirements for data and calibration.
D.
Just a short comment on the interesting review by Arthur Smith.
Regarding the length of record that you need to distinguish between white noise and LTP with H=0.65: for this simple distinction, you certainly need much less than 500 data points (which is about 40 years of monthly data). It is much more difficult to distinguish between LTP and an STP process. We have found recently that 600 data points (50 years of monthly data) are sufficient when using DFA2. The larger the Hurst exponent is, the easier the distinction.
Regarding ENSO and La Niña and other cycles: these are not LTP processes, but they also contribute to LTP. When analysing temperature data, you eliminate only the seasonal cycle but not the others.
Regarding the warmest years: the estimation of the probability has been given quantitatively by Zorita et al., as I wrote earlier.
Regarding deterministic trends:
In the global temperature, for example, the trend is highly significant on both 50y and 100y scales. The Hurst exponent here is close to 1.
You can find these values in our 2011 PRE and in our 2012 review.
Lennart,
thank you for pointing out these references to me. I am not an expert in this topic, but I find the arguments in the Nature paper convincing.
Regarding GHG, I may not fully agree with Demetris: we cannot show from our analysis of instrumental temperature data that GHG are responsible for the anomalously strong temperature increase that we see and find to be significant, but it is my working hypothesis.
Today I stumbled upon a paper published this week in Digital Signal Processing, which I found on-topic: Navarro et al. (2013). Some may find this paper too technical, statistical, or even off-topic. Till now, I have generally avoided being too technical in my comments, but, since Armin has raised several technical issues, this gives me the opportunity to speak about a few of them.
The above paper supports what Armin has said about the appropriateness of the DFA method for identifying LTP properties. More specifically, the paper studied lognormally distributed data and concluded that three methods had the best performance, namely DFA (detrended fluctuation analysis), DWT (discrete wavelet transform) and LSSD (least squares based on standard deviation). Quoting from the abstract:
Coming to what Armin has said about appropriate methods for identifying LTP and estimating parameters, I fully agree with him that the empirical autocorrelation function distorts the LTP properties and should be totally avoided. The reason is that the empirical autocorrelation is highly biased, as shown in my 2003 paper and graphically illustrated in slide 15 of a 2010 presentation.
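As a rough numerical illustration of this bias (my own sketch: the surrogate is only an approximate power-law process and the reference value is the fGn autocorrelation, so the comparison is indicative rather than exact):

```python
import numpy as np

def fgn_acf(lag, hurst):
    """Theoretical autocorrelation of fractional Gaussian noise at a given lag."""
    return 0.5 * ((lag + 1) ** (2 * hurst) - 2 * lag ** (2 * hurst)
                  + abs(lag - 1) ** (2 * hurst))

def ltp_surrogate(n, hurst, rng):             # spectral synthesis, as sketched earlier
    f = np.fft.rfftfreq(n); f[0] = f[1]
    spec = f ** (-(2 * hurst - 1) / 2) * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(3)
n, hurst, lag = 120, 0.8, 5                   # a short record with strong LTP
est = []
for _ in range(2000):
    x = ltp_surrogate(n, hurst, rng)
    x = x - x.mean()                          # subtracting the sample mean is the
    est.append(np.mean(x[:-lag] * x[lag:]) / x.var())  # known source of the bias
print("fGn rho(5):           ", round(fgn_acf(lag, hurst), 3))
print("mean empirical rho(5):", round(float(np.mean(est)), 3))
```

The empirical estimate should come out clearly below the theoretical value, consistent with the downward bias of the sample ACF in short longterm-correlated records.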
I also agree with Armin that the periodogram/empirical power spectrum is not an appropriate method. The reason is that it is too rough (spiked), whereas common smoothing techniques distort the information. For those who are familiar with the spectrum and prefer to view phenomena in the frequency domain, I have recently (2013) proposed a pseudospectrum based on the climacogram, which has similar (or the same) asymptotic slopes as the spectrum while avoiding its caveats.
My colleague Hristos Tyralis and I have also tested roughly all of the related statistical methods and reported our results in a recent (2011) paper. Indeed, DFA did not perform badly, as shown in our Table 1 (we use the name "Var. of residuals"). However, it is not one of those we recommend (sorry about that, Armin). As we show in this paper, the Hurst parameter and the standard deviation are correlated with each other, and therefore their estimation cannot be done separately. Thus, the method of preference should respect this correlation. DFA does not have this property and treats the estimate of the standard deviation as if it were unbiased, when in fact it is highly biased (I mentioned this problem above in my first comment to Armin, asserting that it has also affected their results in Rybski et al.).
The three methods we recommend in Tyralis and Koutsoyiannis (2011; Table 2) fully account for the interdependence of the parameters and also have the best performance. Not surprisingly, the maximum likelihood method (as streamlined in the paper in a fully analytical manner, without using approximations) provides the best estimates. Yet it has three caveats: (a) as a fully parametric method, it depends on the marginal distribution function of the process; (b) it is computationally demanding; and (c) it does not provide graphical diagnostic means to assess model suitability. The first problem can be tackled easily by normalizing the data (by an appropriate nonlinear transformation) before application. This will deal with the spiked patterns mentioned by Navarro et al. (2013); for example, for their lognormal data, one should first apply a logarithmic transformation.
Such normalization is also advisable (albeit not necessary) for the next two methods, both of which are based on the climacogram: the LSSD (already mentioned) and the LSV (least squares based on variance). These two are almost equivalent; they are simple, economical and fast: they do not use any concept beyond the standard deviation or variance, respectively, whose statistical behaviours are well known. They are also transparent. Thus, they provide a diagnostic tool, which is the comparison of empirical and theoretical climacograms, very easily constructed.
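For readers wanting to see what a climacogram computation involves, here is a bare-bones sketch (my own simplification: a naive log-log slope fit that deliberately omits the bias corrections which make LSV/LSSD proper estimators):

```python
import numpy as np

def climacogram(x, scales):
    """Empirical climacogram: the variance of the time-averaged process
    at each aggregation scale k (non-overlapping blocks)."""
    return [np.var([x[i:i + k].mean() for i in range(0, len(x) - k + 1, k)],
                   ddof=1) for k in scales]

def hurst_naive(x, scales):
    """For an LTP process gamma(k) ~ k**(2H - 2), so the log-log slope of
    the climacogram gives H = 1 + slope / 2 (no bias correction here)."""
    slope = np.polyfit(np.log(scales), np.log(climacogram(x, scales)), 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(4)
scales = [1, 2, 4, 8, 16, 32, 64]
# White noise: gamma(k) = sigma**2 / k, slope -1, hence H should come out near 0.5.
print(round(hurst_naive(rng.standard_normal(8192), scales), 2))
```

Applied to a series with genuine LTP, the same slope fit would return an H above 0.5, though without the corrections it is biased for short records.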
Lennart, since you quoted Virginie's reply, for completeness I am posting my reply to her (also copied to you).
Dear Demetris,
You make an assertion about climate models with which I disagree:
Often, people compare the observed global mean temperature with the average of the results from many different climate model simulations, which are not corresponding quantities. That would be like comparing this year's spring temperatures with climatology – of course you'd expect the day-to-day values to fluctuate about the normal values. And likewise, you'd expect the year-to-year variations in the real world to fluctuate about the slow trend due to 'noise' – or internal variations driven by the system's nonlinear dynamics.
The paper by Easterling and Wehner (2009; GRL; DOI:10.1029/2009GL037810) provides a good discussion of this topic. We can also examine 10-year intervals from 92 CMIP5 simulations (RCP4.5), and we see that there are indeed some models which indicate decades over which the global mean temperature does not increase. This is explained further on RealClimate.org.
The figure above provides an example, where the black line is HadCRUT4. The red lines are the model simulations for which the temperature increases over 2002–2012, whereas the blue ones show those which decrease.
I must state that it's fairly meaningless to use such short intervals to say anything about longterm trends – just as Easterling and Wehner state.
Perhaps, and this may be because all the LTP is due to external forcing. But you have not demonstrated this, Demetris, and I'm not so sure that you're right. The models do, after all, embody a nonlinear dynamical system which simulates slow variations due to ocean–atmosphere coupling. We also know that they simulate chaotic weather.
I do not need to rely on climate models, but they are handy for testing the different hypotheses. We know that these models have some merit in e.g. predicting the ENSO phenomenon, and they reproduce most of the phenomena that are observed in the real world.
Another way to shed more light on our differences is for you to explain what mechanisms are involved in setting the LTP in nature. If you cannot pinpoint the exact physics, I will regard the statement as speculation rather than fact.
I think the following quote reveals that we are on different wavelengths:
Please read the paper by Easterling and Wehner which I cited above. I do not think you have grasped the tacit understanding that the climate evolution is due both to chaotic variations and to forced longterm trends.
I agree there is a degree of stochastic behaviour in our climate, but there is also a substantial degree of determinism, depending on your scientific question and the scales you look at. For instance, I can confidently say that the mean December–February temperature in Oslo will be substantially lower than the June–August mean in 2015. But I cannot yet say what weather we will get on July 14 this year. These two statements represent two different scientific questions, and both are quite trivial. Nevertheless, they illustrate the fact that our climate is not just stochastic, and this observation is supported by real measurements.
Dear Demetris,
Please allow me to comment this quote:
The idea that anything that violates the laws of physics is ‘unphysical’ is fairly straightforward. Measured data are not unphysical, but the assumptions you make when analysing them may be inconsistent with physics. You can always find a mathematical framework for fitting a set of data, but that does not mean that this mathematical framework represents meaningful physics. One example is Fourier expansion and the Dirichlet condition.
In our situation, the global mean temperature does not exist in isolation, but is one aspect of a more comprehensive climate system where processes are interconnected. Moreover, the temperature is a measure of heat and plays a role in energy fluxes and evaporation. There is much more relevant information about the global mean available, and I think you reach misguided conclusions by looking only at the LTP behaviour of the time series and neglecting all the other knowledge. You cannot just look at the statistics; you need to consider both statistics and physics. You also need to consider other independent measurements.
Dear Armin,
Thank you for your reply. I'd very much like to read your paper 'Longterm correlations in earth sciences', but it's behind a paywall (even though Norway is a rich country, that does not mean that science is awash with money). Could you please make it available for us to read?
I see that you have a great number of papers, but I also expect that you should be able to explain your points without me and others having to read all of them – please remember that we have many other things to do, and as long as you have not convinced us that LTP is 'the magic bullet', you cannot expect others to spend all their time following your example. Also, I think you underestimate my competence – just because we come from different angles. I do appreciate the mathematics, and I do have an understanding of the meaning of the Hurst exponent.
I notice that your position is:
My take on this is that it depends on your scientific question/hypothesis. Also, we do have climate models and can carry out numerical experiments to explore the different effects. And if there is a deterministic response, you can use regression techniques to look for ‘fingerprints’.
One of your comments caught my interest:
I think it's well appreciated that extreme events often come in clusters, and you can for instance see the flood marks and the years of flooding in old English towns. An alternative explanation is that the weather tends to follow a strange attractor (low-level chaos), which also leads to such clustering. Hence, pointing to the possibility that there is LTP behind the clustering does not exclude other explanations.
I presently think one major weakness in your reasoning is
This cannot be true if the weather evolution is chaotic, where the weather system loses the memory of its initial state after some bifurcation point. You also need to examine the Lyapunov exponents to compare with alternative theories.
Another weakness may arise in running Monte Carlo simulations for LTP processes, as you assume that the random number generators are perfect. They have improved substantially over recent years, but I'm not sure they are free from generating their own patterns.
You do not always need sophisticated mathematics (I do like the math) to spot profound differences in the autocorrelation C(s). And we can look at other data than the global mean temperature, for which we expect a forced trend to be present. For instance, we do not expect there to be a trend in the sea level pressure (SLP), but we do expect it to exhibit similar internal variability as the temperature (at least on regional scales).
Another experiment can be to look at the different components of the global mean temperature. If we use standardised values and look at the hemispheric differences, we expect to see variations caused by geographical shifts and ocean overturning. We can also examine the difference between the tropics and the higher latitudes, as in the lower panels below. We see that C(s) changes profoundly when we look at these geographical differences, rather than at the global mean, for which we expect a trend. Also note the strong fluctuations in the early part of the record, which are due to smaller data coverage (a higher degree of statistical fluctuation). Thus, this geophysical record is probably not homogeneous.
The first question we asked was “What exactly is longterm persistence?”
After two weeks of interesting discussion, even this basic question doesn't seem to have been answered satisfactorily.
In our introduction we wrote:
I searched the different blog posts and comments for remarks about what LTP is. Here are some relevant fragments.
Rasmus:
Armin:
Demetris:
Later in two different comments (here and here) Armin wrote:
I agree that it is important to first agree on the definition of LTP. As we can see above a lot of different things have been said about LTP. Armin (and I suppose Demetris agrees) said that LTP is well defined, already by Mandelbrot. Armin and/or Demetris, what is the formal definition of LTP?
Dear Spencer,
Thanks again for the great comments. I fully agree with the first one and, at first glance, with the second too, although I need some more time to assimilate the latter.
To your former comment I wish to add two references, which I think could be very useful for those who wish to delve into the stochastic aspects of physics, are not too attached to stereotypes, and can devote some time to reading.
These are two books that provide the mathematical basis for the real physics of complex systems. I must note that they are very dense and need to be read several times for a good result:
Michael C. Mackey, Time’s Arrow: The Origins of Thermodynamic Behavior, Dover, 1992 (a small one: 158 pages).
Andrzej Lasota and Michael C. Mackey, Chaos, Fractals and Noise: Stochastic Aspects of Dynamics, Springer-Verlag, 1994 (a big one: 459 pages).
D.
Dear Demetris,
I would like to elaborate a little more on what you show in your figure 5 and the conclusions you draw from it.
In our introduction we mentioned the IPCC AR4 definition of “detection”: “Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”
Would you say that the method you followed in Koutsoyiannis/Montanari (2007) is your preferred answer to the phrase “in some defined statistical sense”?
Is “detection” purely a matter of statistics for you?
Do your results mean that in your view "detection" has not yet taken place, although it comes close?
If you indeed conclude that detection has not taken place yet, does this mean for you that the effect of GHGs on the climate (temperature) is relatively weak? I ask this with the following in mind: I suppose an external forcing on the climate could be so big or fast that a significant change in e.g. the global temperature is quickly achieved even when you take LTP into account – the impact of a big meteorite, for example, causing major cooling on the earth. So in theory the increase in greenhouse gases could also have this effect, do you agree? Does the fact that, despite the relatively rapid increase in GHGs, there is no significant change yet in the global temperature mean that the effect of GHGs is relatively weak?
As an aside: did you ever analyse the CO2 record? I can imagine that the rise from 280 to now 400 ppm is very significant even when LTP is taken into account (I suppose this time series also has a Hurst parameter of at least 0.6).
Marcel
Dear Armin,
In his first comment on your guest blog Demetris wrote:
As far as I know/remember, I haven't seen a response from you to this statement. So I am interested to hear your reaction.
I read in one of your papers that the fact that you find a significant change in the land temperatures does not necessarily mean that greenhouse gases are the cause. It could also be the Urban Heat Island effect for example.
This topic also reminded me of an interesting paper by Compo and Sardeshmukh from a couple of years ago. This paper shows that the land warming follows the ocean warming (if I remember well, the ocean temperatures were prescribed in their model). This comes close to what Koutsoyiannis is saying here as well: that there is, or should be, a close connection (especially on the longer term) between ocean and land temperatures.
Marcel
Marcel, you ask:
Actually, I have given a formal definition already. Quoting from my initial post:
So, whenever 0.5 < H < 1 we have LTP. There are other equivalent definitions: replace "variance" with "autocovariance" and "time scale" with "time lag" in the above and you will have one. Another equivalent definition has been offered by Spencer, in terms of the power spectral density (PSD); he said:
where f is frequency and a = 2H – 1.
All these are equivalent. One should keep in mind that they refer to asymptotic properties, i.e. they should be valid for arbitrarily large time scales or time lags, and for arbitrarily small frequencies. For this reason, in more formal writing we use the concept of the limit ("lim") in the definition.
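In symbols, a compact way to collect the three equivalent asymptotic conditions (writing γ(k) for the variance of the time-averaged process at scale k, c(s) for the autocovariance at lag s, and S(f) for the power spectral density; the consolidated notation is an editorial addition, following the definitions in the surrounding comments):

```latex
\begin{align*}
  \gamma(k) &\propto k^{2H-2}   &&\text{as } k \to \infty
     \quad\text{(variance at time scale } k\text{)}\\
  c(s)      &\propto s^{2H-2}   &&\text{as } s \to \infty
     \quad\text{(autocovariance at lag } s\text{)}\\
  S(f)      &\propto f^{-(2H-1)} = f^{-a} &&\text{as } f \to 0
     \quad\text{(power spectral density, } a = 2H-1\text{)}
\end{align*}
```

with LTP corresponding to 0.5 < H < 1 in all three forms.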
My preference is the definition based on variance (or standard deviation), because it is the most economical and simplest, as well as because it provides the best interpretation, the one related to variability/change (rather than "memory" etc.).
More explanations about LTP and change, hopefully in simple words, can be found in my paper Hydrology and Change, just published (today; in preprint format). If anybody is interested in seeing it and does not have access to the journal (linked above), he/she can email me for a copy of the preprint.
D.
PS. Here is the abstract of the paper:
Dear Rasmus,
Do you accept the formal definition given by Demetris in his latest comment and in his original post?
In your post you wrote:
If the "signal" refers to "manmade climate change", this suggests that time series from before, let's say, 1900 contain only noise. Is that how you see it?
In his first comments to you Demetris wrote:
At least before there was “manmade climate change”, LTP seemed to be the norm. Do you agree with that?
Also, the examples of Demetris show there hasn't been a huge change in LTP since GHGs started to rise. Do you accept this?
Marcel
Dear Marcel
Thanks for asking questions most relevant to the topic of the dialogue. You say:
Definitely yes. I explained the reasons in my introductory post:
As I noted, the method needs some further elaboration to include the uncertainty in the estimation of H.
I would say it is primarily a statistical problem, but I would not use the adverb "purely". Besides, as we wrote in Koutsoyiannis and Montanari (2007), even the very presence of LTP should not be discussed using merely statistical arguments.
Yes, I believe it has not taken place. Whether it comes close: it is likely. I believe the present dialogue should have been held a decade ago. If LTP had been studied more, we would perhaps know more by now. However, as we write in Koutsoyiannis, Montanari, Lins and Cohn: Climate, hydrology and freshwater: towards an interactive incorporation of hydrological experience into climate research (2009):
May I add that detection of a change through statistical significance is not the only thing that matters. The magnitude of the change is even more relevant. The observed climate warming is 0.6°C in 134 years. Assuming that this is statistically significant while a 0.5°C warming is not, does significance make a big difference? Thus, it is important to compare the observed change to what would be a normally expected change.
[Note: It is interesting to see how the observed 0.6°C climate warming in 134 years corresponds to what people have in mind about current warming. My students' age is about 20 years. In each of my new classes I ask the students: how much do you believe the global temperature has increased in the last 15 years (since you started going to school)? A typical answer is 5°C, with the minimum usually being 2°C.]
Yes, I believe it is relatively weak – so weak that we cannot conclude with certainty about the quantification of causative relationships between GHG and temperature changes. In a perpetually varying climate system, GHG and temperature are not connected by a linear, one-way, one-to-one relationship. I believe climate models and the thinking behind them have resulted in oversimplified views and misleading results. As long as climate models are not able to reproduce a climate that (a) is chaotic and (b) exhibits LTP, we should avoid basing conclusions on them.
No I have not analysed CO2 data. From paleoclimatic graphs I can guess that CO2 concentration exhibits LTP, too, with a high H—as well as that it is correlated to temperature, but not with a oneway and onetoone relationship. Unfortunately, the time scales of the paleo time series are too broad and the instrumental observations of CO2 are too short; thus coupling the two sources of information is too difficult. But I believe the change from 280 to 400 ppm is significant.
D.
Dear Rasmus,
thank you for your comments. Regarding the review we published last year, it is very unfortunate that it is not available free of charge. I wrote a long article in this book, and am quite unhappy that I also have to pay for the other articles in it. Would you be so kind as to send me your email address? I will bundle a collection of our papers and mail them to you. It would be nice to continue the discussion of this fascinating and interesting topic after this Climate Dialogue finishes.
Best wishes, Armin
Dear Marcel and Demetris,
I apologize that I could not answer earlier. As you may have seen, I share many thoughts with Demetris regarding the definition and detection of LTP and so on.
But we do not agree on all points.
First of all, from our trend-significance calculations we can see, without any doubt, that there is an external temperature trend which cannot be explained by the natural fluctuations of the temperature anomalies. We cannot distinguish between urban warming and GHGs here, but there are places on the globe where we do not expect urban warming and still see evidence for an external trend, so we may conclude that it is due to GHGs.
Second, as you certainly know, there is a long discussion on atmospheric temperatures versus SST. It has been argued by Fraedrich and also by us that the inertia of the oceans is an important factor for the LTP, and thus we expected, and confirmed, that the Hurst exponent is larger for SST than for SAT. Based on this, Fraedrich even concluded that H should decrease continuously when departing from the coast, i.e. stations very far from the coastline, like Urumqi in China, should have H = 1/2. I found this hypothesis interesting, but in the end we could not support it with our own analysis. So there is a discussion on the point you make, but I think it is settled (and the models also show this) that SST has a higher persistence than SAT.
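For readers who wish to experiment with such estimates, here is a minimal sketch of a DFA2-style estimate of H (a simplified illustration of the kind of method we use, assuming an anomaly series x with the seasonal cycle already removed; a careful analysis requires attention to scale ranges and finite-size effects):

import numpy as np

def dfa2_fluctuation(x, scales):
    """Simplified DFA2; the slope of log F(s) versus log s estimates H."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 2)       # quadratic detrending (DFA2)
            rms.append(np.mean((seg - np.polyval(coef, t))**2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# Usage sketch:
# scales = np.unique(np.logspace(0.8, 2.5, 15).astype(int))
# H = np.polyfit(np.log(scales), np.log(dfa2_fluctuation(x, scales)), 1)[0]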
Best wishes,
Armin
Rasmus, you say:
As I explained here and there, your own grey curve in your Figure 2 is fully consistent with a Markov model with a characteristic time scale of a = 1.25 years. See the graph below if you do not believe it: I fitted the red curve, an exponential decrease with a = 1.25 years, and plotted it on your own graph.
A climate with a = 1.25 years is a static climate (at scales of, say, 30 years or more): any deviation from the mean is small and purely random, without correlation with previous periods.
From what you write, I guess you can accept that, during Earth’s history, there were periods in which your “signal” as you define it (you say “here it refers to manmade climate change”), or your “external forcing”, was not present. During those periods, the climate models should behave as in your grey line, which is identical to my red line, which in turn signifies a Markov process, which finally produces a static climate (sorry for repeating trivial things). The truth is, however, that climate on Earth has never been static.
Otherwise, I agree with you that models can simulate chaotic weather. The problem is whether or not they can simulate (a) a chaotic climate and (b) a climate consistent with LTP. From what I know, they fail at both.
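To quantify the difference numerically, here is a small sketch (assuming unit annual variance; a = 1.25 years as fitted above, H = 0.94 for the HK comparison):

import numpy as np

k, a, H = 30, 1.25, 0.94        # climatic scale (yr), Markov time (yr), Hurst
r = np.exp(-1.0 / a)            # lag-one autocorrelation of the AR(1) model

# Variance of the k-year mean of a unit-variance AR(1) process
tau = np.arange(1, k)
var_ar1 = (k + 2 * np.sum((k - tau) * r**tau)) / k**2

print(f"AR(1), a = 1.25 yr: Std of 30-year mean = {np.sqrt(var_ar1):.2f}")
print(f"White noise:        Std of 30-year mean = {k**-0.5:.2f}")
print(f"HK, H = 0.94:       Std of 30-year mean = {k**(H - 1):.2f}")

The Markov model leaves the 30-year means almost as quiet as white noise (~0.29 vs. ~0.18 of the annual standard deviation), whereas the HK model keeps them varying at ~0.82 of it: this is what I mean by a static versus an ever-changing climate.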
Dear Rasmus,
You say:
I fully agree; actually, I think I had already said that in the phrase of mine you quoted:
Furthermore, you say:
Yes, they may be inconsistent, but then again they may not be. Therefore, to claim that something is inconsistent with something else requires a proof. I may have said this several times in this blog, but I have not seen any proof of inconsistency. Sorry for having to repeat it once again. To save time, I will not respond to the rest of your comment by repeating things I have already said.
D.
Dear Rasmus,
Thanks for the interesting graph, which shows that among numerous (hundreds?) climate model runs there were a few (six?) that did not suggest a warming climate in the last decade. This is a nice demonstration that Earth’s climate does not feel obliged to do what the majority of climate models dictate. It also demonstrates the vanity of deterministic modelling and, in my view, suggests the need to develop stochastic approaches to climate.
I think the following conclusion from Koutsoyiannis et al. (2011) is relevant:
D.
Dear Lennart,
You say:
Indeed, I have a few comments.
First, I noticed the following phrase from the first paragraph (typeset in bold) of the paper:
However, I have difficulty reading this in connection with what Virginie emailed to us:
Second, I noticed in this paper the phrase:
This suggests that the method followed was to reset the conditions every year in order to match reality. Of course, such a method is not feasible for future predictions, and I do not think it is useful even in hindcasts. This is because any model, even the silliest one, if regularly reset to the current conditions, will appear to perform well in reproducing a process characterized by persistence.
Third, as a colleague who saw the paper noticed, the way the results are presented graphically raises questions. You may see, for example, that in Fig. 1c of the paper, which refers to non-initialized forecasts, the points corresponding to consecutive years are connected to each other by lines, while in Fig. 1a, referring to initialized forecasts, the points are not connected by lines. (Perhaps by leaving the points disconnected, one gets a feeling of better agreement.) Furthermore, Fig. 1c, which is for 3–5 years ahead of initialization, does not indicate any impressive agreement with reality.
For these reasons, I do not think the paper has explained the pause in warming.
D.
Paul S,
Please read my comment again and notice, in particular, that I used quotation marks for the terms I quoted from Rasmus, namely “signal” and “external forcing”. Your reply is about forcing in general. Of course there is forcing all the time, but it can be internal, produced by the climate system per se, not by external factors such as Rasmus’s “signal”, which, as he defined it himself, “here it refers to manmade climate change”.
D.
Dear Marcel,
I think the proposition
is somewhat artificial in the case of temperatures, as we know that a great deal of variance is usually removed before the analysis: the seasonal variations and the diurnal cycle. Most of the variance is tied up in these well-known cycles, forced by regional changes in incoming sunlight. Furthermore, ENSO has a time scale of ~3–8 years and is associated with most of the variance once the seasonal and diurnal cycles are removed.
For precipitation, the picture may be different.
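To be concrete about the kind of pre-processing meant here, a minimal sketch for a monthly series (the array name t is arbitrary; this is the standard anomaly construction, not any specific dataset’s procedure):

import numpy as np

def monthly_anomalies(t):
    """Subtract the mean seasonal cycle from a monthly series
    (length assumed to be a whole number of years)."""
    t = np.asarray(t, dtype=float)
    clim = t.reshape(-1, 12).mean(axis=0)    # mean annual cycle, per month
    return t - np.tile(clim, len(t) // 12)   # anomalies w.r.t. climatology

Only after such a step can one meaningfully ask how much variance remains for ENSO and slower processes.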
Yes and no. The response to natural forcing is still not ‘internal’. We know there have been natural forcings before, and we know that they have caused some variations on Earth. Now we have the best instruments ever, and we can measure natural forcings and infer their effects. I still think that external forcings do influence the analysis of LTP, and I have not seen any demonstration to the contrary. I believe we need to work through the numbers, and I would like to see numerical demonstrations of whether LTP exists without forcings, e.g. in the sea-level pressure and other variables.
One of the major weaknesses, I think, in the arguments presented by Armin and Demetris is that they look at only one climate indicator, when in fact we know that climate involves many related aspects. It is important to draw on all available information, rather than neglecting related physics and observations and focusing only on the statistical aspects of a single index.
Although the HadCRUT4 record spans 163 years, it does not represent the same locations over the entire record. In fact, the early part is calculated from a smaller sample of thermometers, and one may even discern by eye the change in the sampling fluctuations associated with the changes in data coverage. Again, the temperature is affected by external forcings; I suggest using sea-level pressure. Another approach is to subtract the northern from the southern hemisphere, assuming that the forcings and the trends affect the whole planet and that the two hemispheres are affected somewhat similarly – as I have done and as is shown in #470.
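In code the hemispheric-difference idea amounts to very little (a sketch with synthetic series; the noise level and trend are assumed numbers, and the sign convention does not matter for the variability analysis):

import numpy as np

# Sketch: if forcings and trends affect both hemispheres similarly, their
# difference cancels most of the common forced signal, leaving mainly
# internal variability that can then be examined for LTP.
years = np.arange(1850, 2013)
trend = 0.005 * (years - years[0])               # common forced trend (assumed)
nh = trend + 0.2 * np.random.randn(years.size)   # hypothetical NH mean series
sh = trend + 0.2 * np.random.randn(years.size)   # hypothetical SH mean series
diff = sh - nh                                   # forced trend cancels
diff -= diff.mean()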
I have not read Koutsoyiannis and Montanari (2007) or Markonis and Koutsoyiannis (2013) – please provide the details here. The Nile is a completely different case from the global mean temperature; the physics is entirely different. LTP may hold for the Nile, but not for other situations. For local temperature measurements, I would not be surprised to see some long-term-persistence-like behaviour, but I would ascribe most of it to low-level chaos and natural forcings.
This is a general comment (i.e. not addressed to a particular discusser) — and a rather pessimistic one.
In one of my earlier comments to Rasmus I wrote:
Earlier, in another comment I wrote:
Now Rasmus makes a statement which for me was shocking:
Of course, I did not expect that Rasmus would have read the papers by my colleagues and me. However, I would expect that each of the discussers reads the comments in this blog, particularly those addressed to them — or at least uses the “Find” utility of the browser.
So using the “Find” utility I was able to see that full details of Koutsoyiannis and Montanari (2007) are given several times in this blog:
In Ref. [2] in the Introductory post by Rob, Marcel and Bart;
In Ref [21] in my own main post;
In my First comments on Armin Bunde’s post.
Also link to the paper is contained in my First comments on Rasmus Benestad’s post.
Furthermore, for Markonis and Koutsoyiannis (2013) full details are given:
In Ref [17] in my main post;
In my First comments on Rasmus Benestad’s post;
In Spencer Stevens’ comment to Bart.
Also, both the details and a link are contained in my reply to Rob.
All the above makes me wonder if climate dialogue is possible.
Demetris,
In a recent comment (http://www.climatedialogue.org/longtermpersistenceandtrendsignificance/#comment490) you wrote, in response to a question from Marcel about whether detection (of global warming) had taken place:
“I believe it has not taken place.”
This I found surprising in light of what you wrote earlier:
http://www.climatedialogue.org/longtermpersistenceandtrendsignificance/#comment351: “The global land air temperature, in the past 100y, increased by about 0.8 degrees. We find this increase even highly significant.” And in your opening post: “the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.” (based on a value of the Hurst coefficient, H = 0.94, higher than others have found) and “Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is”
I took your earlier responses to mean that, according to you, the recent warming is significant at the 99 or 99.9% level, depending on the exact metric used. You mentioned “highly significant”. And now you state “detection has not taken place”. Aren’t these statements mutually exclusive?
As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” So how small do you think this chance is?
Dear Bart,
Sorry if you were misled, but it was not my fault. I usually use blockquotes in my comments, but if I remember correctly the one you quote was from my initial comment, which was sent to the forum editors before the dialogue appeared. The editors posted it for me without using blockquotes.
So what I said, as you may see it above, in my comment to Armin, was this:
So what you quote as if it were said by me was in fact said by Armin, and it is not supported by my calculations.
You can check this if you read my entire comment, rather than only the part of it which is actually a quotation to which I replied. The meaning is clear—even without blockquotes.
D.
Bart, please also read my pessimistic comment just before yours.
D.
Paul, I regard greenhouse gases as part of the climate system. For example, in my view, changes in the vapour concentration classify as internal forcing. As I wrote in an earlier comment (Section 5, Linearity vs. nonlinearity):
So I may not be able to calculate the separate contributions of external and internal forcings. For me it suffices to say that the climate was never static, which implies variability — particularly due to internal dynamics. If you can make such a separation, please do, and let me know whether you find that, over the entire history of the Earth, the ever-changing climate has been driven by external forcing only.
Thanks Demetris, that clarifies part of the discrepancy, but the other two quotes remain, which I still cannot reconcile with your later statement. So my question still stands:
What is the chance that the observed changes are due to internal variability? (meaning a redistribution of energy within the climate system – there seems to be quite some confusion about what the different terms forced vs unforced/internal var. mean, which I will come back to in a later comment)
Bart, thanks for understanding. My answer stands too. If you read my main post you will see that I provide quantified answers for the “chance”. See in particular my graphs and their explanations. I hope Marcel can verify that what I replied to his comment (actually confirming his own reading of my post and comments) is consistent with what I wrote in my post and my later comments. So I am afraid I cannot see what looks surprising to you.
Some recent signs (lack of progress, repetitions) may indicate that this discussion approaches its end. I wish to thank the editorial team, Bart, Marcel and Rob, for inviting me, my coguests Armin and Rasmus, and all contributors for the fascinating discussion during these three weeks.
My best wishes for the continuation and further development of the Climate Dialogue forum. Even with the difficulties encountered, dialogue is the only way forward. Besides, as Heraclitus said, “Το αντίξουν συμφέρον και εκ των διαφερόντων καλλίστην αρμονίαν και πάντα κατ’ έριν γίνεσθαι” (Opposition unites, the finest harmony springs from difference, and all comes about by strife).
If I may offer a simple suggestion for future dialogues, I would propose to merge the two sections “Expert comments” and “Public comments”. First, these section titles are not very accurate; it would be more accurate to say “Editors and guests” rather than “experts”. My feeling is that everybody who contributes to this dialogue is an expert—both the eponymous and the pseudonymous discussers. Second, reading the comments would be more convenient and sensible if they appeared in chronological order rather than separated into two sections.
D.
I’d like to offer the following observation of the discussion so far (more comments remain welcome, but are by no means demanded).
There appear to be different interpretations of natural variability and of detection which may be a frequent cause of misunderstanding in this dialogue and beyond. Below I’ll try to describe these different interpretations in an effort to elucidate where the different opinions may (partly) be coming from.
In general, the following processes involved in climate change can be distinguished:
- natural unforced variability (e.g. internal variability involving a redistribution of energy)
- natural forced variability (e.g. changes in the output of the sun or in volcanism)
- anthropogenic forced variability (e.g. changes in greenhouse gas or aerosol concentrations)
where a forcing refers to a process causing an energy imbalance, which in turn causes a temperature change. Internal variability, on the other hand, causes a temperature change arising from semi-random internal processes. This temperature change can then cause an energy imbalance (since outgoing energy scales as T^4), but the cause-effect chain (linking temperature change and energy imbalance) is the opposite of that of a radiative forcing.
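(As a side note, the T^4 point can be made quantitative with a one-line blackbody sketch; the effective emission temperature of 255 K is an assumed textbook value, and feedbacks modify the real-world number:)

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0        # assumed effective emission temperature of the Earth, K

# Derivative of SIGMA*T^4 with respect to T: extra outgoing radiation per K
print(f"{4 * SIGMA * T_eff**3:.1f} W m^-2 per K of warming")   # ~3.8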
As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”
In other words, detection is based on distinguishing the forced (natural and anthropogenic) from the unforced (natural) component.
Demetris seems to argue that these different processes cannot be distinguished, or at least that internal (unforced) variability and natural forcings cannot be distinguished. Anthropogenic forcings can only be distinguished by virtue of their not having acted on the system prior to ~1850. Armin seems to take a somewhat similar view, combining natural unforced and forced changes in what he terms natural fluctuations. Rasmus seems to take the view I outlined above (the distinction into three main types of processes).
Demetris argues that the current temperature signal is not outside the bounds of what could be expected from natural forced and unforced changes, thereby using a higher bar than the standard definition of “detection”. He also bases his statement on a higher Hurst coefficient than Armin does, which raises the bar further.
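To see how much a higher Hurst coefficient raises the bar, here is a minimal numerical sketch (using the standard HK variance inflation of the sample mean; n = 134 years as in the temperature record discussed above, with H = 0.94 as mentioned and the other H values as round comparison points):

# Under HK scaling, Var[mean of n values] = sigma^2 * n^(2H - 2),
# equivalent to an effective sample size n_eff = n^(2 - 2H).
n = 134
for H in (0.50, 0.65, 0.94):
    print(f"H = {H:.2f}: effective sample size ~ {n**(2 - 2*H):.1f} of {n}")

With H = 0.94, 134 years of data carry roughly the information of only about two independent values, which is why the same observed warming can be highly significant under one statistical model and undetected under another.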
This may clarify how the premises that climate forcings introduce LTP and that climate forcings are omnipresent (which all three agreed on) can still lead to different conclusions on whether the presence of LTP says anything about internal variability: different operational definitions of detection and internal variability (and perhaps also of LTP, as put forward by Armin) are being used. In one view, internal variability is only the unforced component of change; in another, it also includes the response to natural forcings.
This brings up the question: if (according to Demetris) the recent warming is not outside the bounds of natural forced and unforced variability, where does all the excess energy come from that is observed to accumulate in the climate system? It does not seem to be due to natural forcings (which show no warming trend over the past 50 years), nor is there any sign of a redistribution of energy within the climate system (everywhere we look, it is warming). Where is the energy hiding, or where is it coming from (if not from excess greenhouse gases inhibiting planetary heat loss)?