The (missing) tropical hot spot

The (missing) tropical hot spot is one of the long-standing controversies in climate science. Climate models show amplified warming high in the tropical troposphere due to greenhouse forcing. However, data from satellites and weather balloons don’t show much amplification. What to make of this? Have the models been ‘falsified’, as critics say, or are the errors in the data so large that we cannot conclude much at all? And does it matter if there is no hot spot?

We are really glad that three of the main players in this controversy have accepted our invitation to participate: Steven Sherwood of the University of New South Wales in Sydney, Carl Mears of Remote Sensing Systems and John Christy of the University of Alabama in Huntsville.

Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Bart Verheggen 

Introduction: The (missing) tropical hot spot

The (missing) hot spot in the tropics

Based on theoretical considerations and simulations with General Circulation Models (GCMs), it is expected that any warming at the surface will be amplified in the upper troposphere. The reason for this is quite simple. More warming at the surface means more evaporation and more convection. Higher in the troposphere the (extra) water vapour condenses and heat is released. Calculations with GCMs show that the lower troposphere warms about 1.2 times faster than the surface. For the tropics, where most of the moisture is, the amplification is larger, about 1.4.

This change in the thermal structure of the troposphere is known as the lapse rate feedback. It is a negative feedback, i.e. it attenuates the surface temperature response, whatever its cause, because the additional condensation heat released in the upper air results in more radiative heat loss to space.

IPCC published the following figure in its fourth assessment report (AR4, 2007):

Source: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-9-1.html (based on Santer 2003)

The figure shows the response of the atmosphere to different forcings in a GCM. As one can see, over the past century, the greenhouse forcing was expected to dominate all other forcings. The expected warming is highest in the tropical troposphere, dubbed the tropical hot spot.

The discrepancy between the strength of the hot spot in the models and in the observations has been a controversial topic in climate science for almost 25 years. The controversy[i] goes all the way back to the first paper by Roy Spencer and John Christy[ii] about their UAH tropospheric temperature dataset in the early nineties. At the time their data didn’t show warming of the troposphere. Later a second group (Carl Mears and Frank Wentz of RSS) joined in, using the same satellite data to construct their own time series of tropospheric temperature. Over the years several corrections were made, e.g. for the orbital changes of the satellites, resulting in a warming trend. However, the controversy remains, because the tropical troposphere still shows a smaller amplification of the surface warming than expected.

Positions
Some researchers claim that observations don’t show the tropical hot spot and that the differences between models and observations are statistically significant[iii]. On top of that they note that the warming trend itself is much larger in the models than in the observations (see figure 2 below and also ref.[iv]). Other researchers conclude that the differences between the trends of tropical tropospheric temperatures in observations and models are statistically not inconsistent with each other[v]. They note that some radiosonde and satellite datasets (RSS) do show warming trends comparable with the models (see figure 3 below).

The debate is complex because there are several observational datasets, based on satellite measurements (UAH and RSS) as well as on radiosonde measurements (weather balloons). Which of the datasets is “best”, and how does one determine the uncertainty in both the datasets and the model simulations?

The controversy flared up in 2007/2008 with the publication of two papers[vi][vii] by the opposing groups. Key graphs in both papers are the best way to give an impression of the debate. First Douglass et al. came up with the following graph showing the disagreement between models and observations:

Figure 2. Temperature trends for the satellite era. Plot of temperature trend (°C/decade) against pressure (altitude). The HadCRUT2v surface trend value is a large blue circle. The GHCN and the GISS surface values are the open rectangle and diamond. The four radiosonde results (IGRA, RATPAC, HadAT2, and RAOBCORE) are shown in blue, light blue, green, and purple respectively. The two UAH MSU data points are shown as gold-filled diamonds and the RSS MSU data points as gold-filled squares. The 22-model ensemble average is a solid red line. The 22-model average ±2σSE are shown as lighter red lines. MSU values of T2LT and T2 are shown in the panel to the right. UAH values are yellow-filled diamonds, RSS are yellow-filled squares, and UMD is a yellow-filled circle. Synthetic model values are shown as white-filled circles, with 2σSE uncertainty limits as error bars. Source: Douglass et al. 2008

Santer et al. criticized Douglass et al. for underestimating the uncertainties in both model output and observations and also for not showing all radiosonde datasets. They came up with the following graph:

Figure 3. Vertical profiles of trends in atmospheric temperature (panel A) and in actual and synthetic MSU temperatures (panel B). All trends were calculated using monthly-mean anomaly data, spatially averaged over 20 °N–20 °S. Results in panel A are from seven radiosonde datasets (RATPAC-A, RICH, HadAT2, IUK, and three versions of RAOBCORE; see Section 2.1.2) and 19 different climate models. The grey-shaded envelope is the 2σ standard deviation of the ensemble-mean trends at discrete pressure levels. The yellow envelope represents 2σSE, DCPS07’s estimate of uncertainty in the mean trend. The analysis period is January 1979 through December 1999, the period of maximum overlap between the observations and most of the model 20CEN simulations. Note that DCPS07 used the same analysis period for model data, but calculated all observed trends over 1979–2004. Source: Santer (2008)

The grey-shaded envelope is the 2σ standard deviation of the ensemble-mean trends of Santer et al., while the yellow band is the estimated uncertainty of Douglass et al. Some radiosonde series in the Santer graph (like the RAOBCORE 1.4 dataset) show even more warming higher up in the troposphere than the model mean.

Updates
Not surprisingly the debate didn’t end there. In 2010 McKitrick et al.[viii] extended the comparison of Santer et al. (2008), which was limited to the period 1979-1999, through 2009. They concluded that over the interval 1979–2009, model-projected temperature trends are two to four times larger than observed trends in both the lower and the mid troposphere, and that the differences are statistically significant at the 99% level.

Christy (2010)[ix] analysed the different datasets used and concluded that some should be discarded in the tropics:

Figure 4. Temperature trends in the lower tropical troposphere for different datasets and for slightly differing periods (79-05 = 1979-2005). UAH and RSS are the estimates based on satellite measurements. HadAT, RATPAC, RC1.4 and RICH are based on radiosonde measurements. C10 and AS08[x] are based on thermal wind data. The other three datasets give trends at the surface (ERSST being for the oceans only, while the other two combine land and ocean data). Source: Christy (2010)

Christy (2010) concluded that part of the tropical warming in the RSS series is spurious. They also discarded the indirect estimates that are based on thermal wind. Not surprisingly Mears (2012) disagreed with Christy’s conclusion about the RSS trend being spurious, writing that “trying to determine which MSU [satellite] data set is “better” based on short-time period comparisons with radiosonde data sets alone cannot lead to robust conclusions”.[xi]

Scaling ratio
Christy (2010) also introduced what they called the “scaling ratio”, the ratio of tropospheric to surface trends and concluded that these scaling ratios clearly differ between models and observations. Models show a ratio of 1.4 in the tropics (meaning troposphere warming 1.4 times faster than the surface), while the observations have a ratio of 0.8 (meaning surface warming faster than the troposphere). Christy speculated that an alternate reason for the discrepancy could be that the reported trends in temperatures at the surface are spatially inaccurate and are actually less positive. A similar hypothesis was tested by Klotzbach (2009).[xii]
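
To make the “scaling ratio” concrete, here is a minimal sketch (illustrative only, not code from any of the studies cited; the synthetic series stand in for real monthly tropical averages from e.g. UAH/RSS and a surface dataset) that computes least-squares trends from monthly anomalies and takes their ratio:

```python
import numpy as np

def trend_per_decade(anom):
    """Least-squares linear trend of a monthly anomaly series, in deg C per decade."""
    t = np.arange(len(anom)) / 12.0          # time in years
    return np.polyfit(t, anom, 1)[0] * 10.0  # slope per year -> per decade

def scaling_ratio(tropo_anom, surface_anom):
    """Ratio of tropospheric to surface temperature trends (the 'scaling ratio')."""
    return trend_per_decade(tropo_anom) / trend_per_decade(surface_anom)

# Illustrative synthetic anomalies (placeholders for real satellite and surface data)
rng = np.random.default_rng(0)
months = 34 * 12                                  # roughly 1979-2012
time_yr = np.arange(months) / 12.0
surface = 0.012 * time_yr + rng.normal(0.0, 0.10, months)        # ~0.12 C/decade plus noise
tropo = 1.4 * 0.012 * time_yr + rng.normal(0.0, 0.15, months)    # amplified by 1.4
print(round(scaling_ratio(tropo, surface), 2))    # close to 1.4 for these synthetic series
```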

In an extensive review article about the controversy published in early 2011 Thorne et al. ended with the conclusion that “there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively”. However in the same year Fu et al.[xiii] concluded that while “satellite MSU/AMSU observations generally support GCM results with tropical deep‐layer tropospheric warming faster than surface, it is evident that the AR4 GCMs exaggerate the increase in static stability between tropical middle and upper troposphere during the last three decades”. More papers then started to acknowledge that the consistency of tropical tropospheric temperature trends with climate model expectations remains contentious.[xiv][xv][xvi][xvii]

Climate Dialogue
We will focus the discussion on the tropics, as the hot spot is most pronounced there in the models. Core questions are of course whether a hot spot can be (or has been) detected in the observations and, if not, what the implications are for the reliability of GCMs and for our understanding of the climate.

Specific questions

1) Do the discussants agree that amplified warming in the tropical troposphere is expected?

2) Can the hot spot in the tropics be regarded as a fingerprint of greenhouse warming?

3) Is there a significant difference between modelled and observed amplification of surface trends in the tropical troposphere (as diagnosed by e.g. the scaling ratio)?

4) What could explain the relatively large difference in tropical trends between the UAH and the RSS dataset?

5) What explanation(s) do you favour regarding the apparent discrepancy surrounding the tropical hot spot? A few options come to mind: a) satellite data show too little warming; b) surface data show too much warming; c) within the uncertainties of both there is no significant discrepancy; d) the theory (of moist convection leading to more tropospheric than surface warming) overestimates the magnitude of the hot spot.

6) What consequences, if any, would your explanation have for our estimate of the lapse rate feedback, water vapour feedback and climate sensitivity?


[i] Thorne, P. W. et al., 2011, Tropospheric temperature trends: History of an ongoing controversy. WIREs Climate Change, 2: 66-88

[ii]Spencer RW, Christy JR. Precise monitoring of global temperature trends from satellites. Science 1990, 247:1558–1562.

[iii] Christy, J. R., B. M. Herman, R. Pielke Sr., P. Klotzbach, R. T. McNider, J. J. Hnilo, R. W. Spencer, T. Chase, and D. H. Douglass (2010), What do observational datasets say about modeled tropospheric temperature trends since 1979?, Remote Sens., 2, 2148–2169, doi:10.3390/rs2092148.

[iv] http://www.drroyspencer.com/wp-content/uploads/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png

[v]Thorne, P.W. Atmospheric science: The answer is blowing in the wind. Nature Geosci. 2008, doi:10.1038/ngeo209

[vi] Douglass DH, Christy JR, Pearson BD, Singer SF. A comparison of tropical temperature trends with model predictions. Int J Climatol 2008, 27:1693–1701

[vii] Santer, B.D.; Thorne, P.W.; Haimberger, L.; Taylor, K.E.; Wigley, T.M.L.; Lanzante, J.R.; Solomon, S.; Free, M.; Gleckler, P.J.; Jones, P.D.; Karl, T.R.; Klein, S.A.; Mears, C.; Nychka, D.; Schmidt, G.A.; Sherwood, S.C.; Wentz, F.J. Consistency of modelled and observed temperature trends in the tropical troposphere. Int. J. Climatol. 2008, doi:10.1002/joc.1756

[viii] McKitrick, R. R., S. McIntyre and C. Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets.” Atmospheric Science Letters, 11(4) pp. 270-277, October/December 2010 DOI: 10.1002/asl.290

[ix] Christy, J. R., B. M. Herman, R. Pielke Sr., P. Klotzbach, R. T. McNider, J. J. Hnilo, R. W. Spencer, T. Chase, and D. H. Douglass (2010), What do observational datasets say about modeled tropospheric temperature trends since 1979?, Remote Sens., 2, 2148–2169, doi:10.3390/rs2092148

[x] Allen RJ, Sherwood SC. Warming maximum in the tropical upper troposphere deduced from thermal winds. Nat Geosci 2008, 1:399–403

[xi] Mears, C. A., F. J. Wentz, and P. W. Thorne (2012), Assessing the value of Microwave Sounding Unit–radiosonde comparisons in ascertaining errors in climate data records of tropospheric temperatures, J. Geophys. Res., 117, D19103, doi:10.1029/2012JD017710

[xii] Klotzbach PJ, Pielke RA Sr., Pielke RA Jr., Christy JR, McNider RT. An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J Geophys Res 2009, 114:D21102. DOI:10.1029/2009JD011841

[xiii] Fu, Q., S. Manabe, and C. M. Johanson (2011), On the warming in the tropical upper troposphere: Models versus observations, Geophys. Res. Lett., 38, L15704, doi:10.1029/2011GL048101

[xiv] Seidel, D. J., M. Free, and J. S. Wang (2012), Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations, Geophys. Res. Lett., 39, L22701, doi:10.1029/2012GL053850

[xv] Po-Chedley, S., and Q. Fu (2012), Discrepancies in tropical upper tropospheric warming between atmospheric circulation models and satellites, Environ. Res. Lett.

[xvi] Benjamin D. Santer, Jeffrey F. Painter, Carl A. Mears, Charles Doutriaux, Peter Caldwell, Julie M. Arblaster, Philip J. Cameron-Smith, Nathan P. Gillett, Peter J. Gleckler, John Lanzante, Judith Perlwitz, Susan Solomon, Peter A. Stott, Karl E. Taylor, Laurent Terray, Peter W. Thorne, Michael F. Wehner, Frank J. Wentz, Tom M. L. Wigley, Laura J. Wilcox, and Cheng-Zhi Zou, Identifying human influences on atmospheric temperature, PNAS 2013 110 (1) 26-33; published ahead of print November 29, 2012, doi:10.1073/pnas.1210514109

[xvii] Thorne, P. W., et al. (2011), A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes, J. Geophys. Res., 116, D12116, doi:10.1029/2010JD015487

Guest blog Carl Mears

Thoughts and plots about the tropical tropospheric hot spot.

Carl Mears, Remote Sensing Systems

In the deep tropics, in the troposphere, the lapse rate (the rate of decrease of temperature with increasing height above the surface) is largely controlled by the moist adiabatic lapse rate (MALR). This is true both in complicated simulations performed by General Circulation Models, and in simple, back of the envelope calculations (Santer et al, 2005). The reasoning behind this is simple. If the lapse rate were larger than MALR, then the atmosphere would be unstable to convection. Convection (a thunderstorm) would then occur, and heat the upper troposphere via the release of latent heat as water vapor condenses into clouds, and cool the surface via evaporation and the presence of cold rain/hail. If the lapse rate were smaller than MALR, then convection would be suppressed, allowing the surface to heat up without triggering a convective event. On average, these processes cause the lapse rate to be very close to the MALR. Note that this argument does not apply outside the tropics, because the dynamics become more complex due to the Coriolis force and the presence of large north/south temperature gradients, or in regions with very low relative humidity, such as deserts, where the atmosphere may be far from saturated near the surface and thus the MALR does not apply.
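
A back-of-the-envelope version of this argument can be written down directly from the standard moist adiabatic lapse rate formula. The sketch below is a minimal illustration, not code from any of the studies discussed; the constants and the Bolton saturation-vapour-pressure fit are standard textbook values. It shows that the MALR is much smaller for warm, saturated near-surface air than for colder air aloft, which is the basic reason warming is amplified with height.

```python
import math

# Physical constants (SI units)
g   = 9.81      # gravity, m s^-2
Rd  = 287.0     # gas constant, dry air, J kg^-1 K^-1
Rv  = 461.5     # gas constant, water vapour, J kg^-1 K^-1
cpd = 1004.0    # specific heat, dry air, J kg^-1 K^-1
Lv  = 2.5e6     # latent heat of vaporisation, J kg^-1
eps = Rd / Rv

def sat_mixing_ratio(T, p):
    """Saturation mixing ratio (kg/kg) at temperature T (K) and pressure p (Pa)."""
    es = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))  # Bolton (1980) fit, Pa
    return eps * es / (p - es)

def moist_adiabatic_lapse_rate(T, p):
    """Moist adiabatic lapse rate (K/km) for a saturated parcel at T (K), p (Pa)."""
    rs = sat_mixing_ratio(T, p)
    num = 1.0 + Lv * rs / (Rd * T)
    den = cpd + Lv**2 * rs * eps / (Rd * T**2)
    return g * num / den * 1000.0   # convert K/m to K/km

print(round(moist_adiabatic_lapse_rate(300.0, 100000.0), 1))  # ~3.7 K/km for warm saturated air
print(round(moist_adiabatic_lapse_rate(250.0, 50000.0), 1))   # ~8.1 K/km for colder air aloft
```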

Because the MALR decreases with temperature, any temperature increase at the surface becomes even larger high in the troposphere. This causes the so-called hot spot, a region high in the troposphere that shows more warming (or cooling) than the surface. Note that at this point, I haven’t said a thing about greenhouse gases. In fact, this effect has nothing to do with the source of the warming, as long as it arises near the surface. Surface warming due to any cause would show a tropospheric hotspot in the absence of other changes to the heating and cooling of the atmosphere. Nevertheless, the tropospheric hotspot is often presented as some sort of lynchpin of global warming theory. It is not. It is just a feature of a close-to-unstable moist atmosphere.

Now, I will turn my attention to one of the core questions of this discussion – “can we detect/have detected a hot spot in the observations?”. On monthly time scales, there is no question. If we average across the tropics, the temperature of the upper troposphere is strongly correlated with the temperature of the surface, only with a larger amplitude (Santer et al., 2005). On decadal time scales, the results obtained depend on the datasets chosen, as Santer et al. (2005) showed for the RSS and UAH satellite datasets and a few homogenized radiosonde datasets. Here we expand this a little further to include more homogenized radiosonde datasets and two of the more recent reanalysis datasets, MERRA and ERA-Interim. Figure 1 shows the ratio of the mid to upper tropospheric temperature trends to surface temperature trends in the deep tropics (20S to 20N). Each point on the graph is based on trends starting in January 1979 and ending at the date on the x-axis. The surface temperature is from HadCRUT4. The mid to upper tropospheric data is the “temperature tropical troposphere” product, or TTT, first introduced by Fu and Johanson (2005). For MSU/AMSU, it is equal to 1.1*TMT – 0.1*TLS. This combination has the effect of adjusting for the cooling effect of the stratosphere on TMT by subtracting off part of the stratospheric cooling measured by TLS. The weighting function for this product is centered in the mid to upper tropical troposphere, where we expect the hot spot to be most pronounced.
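
As a minimal sketch of the calculations behind Fig. 1 (not the actual RSS processing code), the functions below form the TTT combination from TMT and TLS anomalies and compute the trend ratio for a sequence of ending years; the anomaly arrays are placeholders for monthly, tropically averaged (20S-20N) series.

```python
import numpy as np

def ttt_from_msu(tmt_anom, tls_anom):
    """TTT combination for MSU/AMSU anomalies (after Fu and Johanson, 2005):
    1.1*TMT - 0.1*TLS, which removes much of the stratospheric influence on TMT."""
    return 1.1 * np.asarray(tmt_anom) - 0.1 * np.asarray(tls_anom)

def trend_per_decade(anom):
    """Least-squares linear trend of a monthly anomaly series, deg C per decade."""
    t = np.arange(len(anom)) / 12.0
    return np.polyfit(t, anom, 1)[0] * 10.0

def trend_ratio_vs_end_year(ttt_anom, tsurf_anom, start_year=1979, min_years=5):
    """Ratio of the TTT trend to the surface trend, for trends that all start in
    January of start_year and end in successive Decembers (as in Fig. 1)."""
    ratios = {}
    for n_years in range(min_years, len(ttt_anom) // 12 + 1):
        n = n_years * 12
        ratios[start_year + n_years - 1] = (trend_per_decade(ttt_anom[:n]) /
                                            trend_per_decade(tsurf_anom[:n]))
    return ratios
```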

Fig. 1. Ratio of trends in TTT to trends in TSurf as a function of the ending year of the trend analysis. The starting point is January 1979. The surface dataset used is HadCRUT4. The pink horizontal line is at a value of 1.4, the amplification factor for TTT expected from the models (Santer et al., 2005).

Two conclusions can easily be reached from this plot. First, it takes about 25 years (or more) for the measured trend ratios to settle down to reasonably constant values. This is due to the effects of both measurement errors and “weather noise”. I think that this is part of the cause of the controversy surrounding this topic – we began discussing such trend ratios before we had enough data for the ratios to be stable over time. Second, the values that are ultimately reached depend strongly on which upper air dataset is used. For some datasets (HadAT, UAH, IUK, RAOBCORE 1.5, ERA-Interim), the trend ratio is less than 1.0, indicating the lack of a tropospheric hotspot. For other datasets (RICH, RAOBCORE 1.4, RSS, MERRA, and STAR), the ratio is greater than one, indicating tropospheric amplification and the presence of a hotspot. CMIP-3 climate models predicted an amplification value of about 1.4 for the TTT temperature product used here (Santer et al., 2005). Some upper air datasets are in relatively close agreement with these expectations, such as the RSS and STAR satellite data, the older version of RAOBCORE (V1.4), and the MERRA reanalysis (which uses the STAR data as one of its inputs, so it is not completely independent of STAR). Often one or more of these datasets is used to argue that a tropical hotspot exists or does not exist. A more balanced analysis shows that it is difficult to prove or disprove the presence of the tropospheric hotspot given the current state of the data.

In Fig. 2, I have reproduced panel D of Fig. 4 from Santer et al. (2005), except with updated measured data, the addition of the reanalysis data, and the use of CMIP-5 model results. The CMIP-5 model results for 1979-2012 were made by splicing together results from 20th century simulations (before 2005, using measured values of the forcings) and RCP8.5 21st century predictions (after 2005, using predicted values for the various forcings). For details on this process, see Santer et al., 2012.

Fig. 2. Scatter plot of trend (1979-2012) in TTT as a function of trend in TSurf. The model results cluster around a line with a slope of 1.45, indicating a tropospheric hotspot. For the observed results, HadCRUT4 is used for the surface temperatures, and various sources of tropospheric temperature (satellites, radiosondes, and reanalyses) are used for TTT.

The general story around the hotspot remains unchanged from Santer et al 2005, except that the expected scaling ratio has increased from 1.40 to 1.45 with the use of CMIP-5 data. Two sources of measured data (I realize that a reanalysis is not really a measurement), STAR and MERRA, are reasonably close to the fitted line, while others, such as the HADAT Radiosonde dataset and the UAH satellite dataset, are far from the line. Other datasets are distributed in between. Note that the RSS data point has error bars both in the X and Y direction. These are 90% uncertainty ranges derived from the error ensembles that have been recently produced for the RSS dataset (Mears et al., 2011) and the HadCRUT4 dataset (Morice et al, 2012). (These error ensembles are made up of different realizations of the datasets that are consistent with the estimated errors, including measurement, sampling, and construction errors. The correlations of the errors across both time and location are thus automatically included if the error ensemble members are processed by the user in the same way as the baseline data.) This is the first time that we have been able to put error bars on the observed points on this plot, in both directions, in such a consistent manner.
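
To illustrate how such error ensembles can be used, here is a minimal sketch. It assumes the ensemble members are available as plain monthly anomaly arrays and pairs every member of one ensemble with every member of the other, which is one simple way to combine them; the actual RSS/HadCRUT4 processing may differ.

```python
import numpy as np

def trend_per_decade(anom):
    """Least-squares linear trend of a monthly anomaly series, deg C per decade."""
    t = np.arange(len(anom)) / 12.0
    return np.polyfit(t, anom, 1)[0] * 10.0

def scaling_ratio_range(ttt_members, tsurf_members, pct=(5, 95)):
    """Process every ensemble member exactly like the baseline data, form all
    pairwise scaling ratios, and report the distribution and its ~90% range."""
    ratios = np.array([trend_per_decade(ttt) / trend_per_decade(ts)
                       for ttt in ttt_members for ts in tsurf_members])
    lo, hi = np.percentile(ratios, pct)
    return ratios, (lo, hi)
```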

Looking at Fig. 2, it is obvious that the observed trends in both temperature datasets are at the extreme low end of the model predictions. This problem has grown over time as the length of the measured data grows. (As the comparison time period gets longer, the uncertainty in linear trends in both the measured and modeled time series decreases simply because of the longer time period.) For the time being, I am tabling the discussion of this problem and focusing on the discussion of the hot spot. In my mind, the problem of the trend magnitude is more interesting than the argument about the hotspot, and I hope to return to it later in this process. But for now I will stay focused on the hotspot.

Fig. 3. Histograms of the troposphere/surface trend scaling ratio from the RSS/HadCRUT4 error ensembles, and from 33 CMIP-5 model runs.

In Fig. 3, we explore the implications of the RSS and HadCRUT4 error ensembles further. The top histogram shows the range of scaling ratios consistent with the RSS satellite data and the HadCRUT4 surface data when the estimated errors in each are taken into consideration. The bottom histogram shows the range of scaling ratios found in the 33 CMIP-5 model runs. The two distributions overlap, indicating consistency of this set of observations with the models, though the mean value shown by the observations is clearly lower than that predicted by the models.

It has been suggested that the lack of a tropospheric hotspot (if there is such a lack) is mostly due to errors in the surface temperature datasets, which are (in this story line) suspected of being biased in the direction of too much warming. This seems unlikely. Clearly, the spread in results shown above for the different upper air datasets reveals considerable structural uncertainty (Thorne et al., 2005) in the upper air data, and the error bar on the RSS trend value is much larger than the error bar on the HadCRUT4 value. Also, the various surface datasets are much more similar to one another than the upper air datasets are. To show this, I redo the analysis of Figure 1 using a different surface dataset constructed by NOAA (GHCN-ERSST). The final trend ratios are almost identical to those found using HadCRUT4, and the conclusions reached are unchanged.

Figure 4. Ratio of trends in TTT to trends in TSurf as a function of the ending year of the trend analysis. The starting point is January 1979. The surface dataset used is GHCN-ERSST.

Conclusion: Taken as a whole, the errors in the measured tropospheric data are too great to either prove or disprove the existence of the tropospheric hotspot. Some datasets are consistent (or even in good agreement) with the predicted values for the hotspot, while others are not. Some datasets even show the upper troposphere warming less rapidly than the surface.

Biosketch
Dr. Mears has a B.S. in Physics from the University of Washington (1985), and a Ph.D. in Physics from the University of California, Berkeley (1991), where his thesis research involved the development of quantum-noise-limited superconducting microwave heterodyne receivers. He joined Remote Sensing Systems in 1998. Since then, he has validated SSM/I and TMI winds versus in situ measurements, and developed and validated a rain-flagging algorithm for the QuikScat scatterometer. Over the past several years he has constructed and maintained a climate-quality data record of atmospheric temperatures from MSU and AMSU, and studied human-induced change in atmospheric water vapor and oceanic wind speed using measurements from passive microwave imagers. Dr. Mears was a convening lead author for the U.S. Climate Change Science Program Synthesis and Assessment Product 1.1 (the first CCSP report to reach final form), and a contributing author to the IPCC 4th assessment report. He is a member of two international working groups, the Global Climate Observing System Working Group on Atmospheric Reference Observations and the WCRP Stratospheric Trends Working Group, which is part of the Stratospheric Processes and their Role in Climate (SPARC) project.

References

B. D. Santer et al., “Amplification of surface temperature trends and variability in the tropical atmosphere,” Science, vol. 309, no. 5740, pp. 1551-1556, 2005.

C. A. Mears and F. J. Wentz, “Construction of the Remote Sensing Systems V3.2 atmospheric temperature records from the MSU and AMSU microwave sounders,” Journal of Atmospheric and Oceanic Technology, vol. 26, pp. 1040-1056, 2009.

J. R. Christy, R. W. Spencer, W. B. Norris, W. D. Braswell, and D. E. Parker, “Error estimates of version 5.0 of MSU-AMSU bulk atmospheric temperatures,” Journal of Atmospheric and Oceanic Technology, vol. 20, no. 5, pp. 613-629, 2003.

P. W. Thorne et al., “Revisiting radiosonde upper-air temperatures from 1958 to 2002,” Journal of Geophysical Research, vol. 110, 2005. (This is the HADAT dataset)

L. Haimberger, “Homogenization of radiosonde temperature time series using innovation statistics,” Journal of Climate, vol. 20, no. 7, pp. 1377-1403, 2007. (This describes the RAOBCORE dataset)

L. Haimberger, C. Tavolato, and S. Sperka, “Towards the elimination of warm bias in historic radiosonde records -- some new results from a comprehensive intercomparison of upper air data,” Journal of Climate, vol. 21, pp. 4587-4606, 2008. (This describes the RICH dataset)

S. C. Sherwood, C. L. Meyer, R. J. Allen, and H. A. Titchner, “Robust tropospheric warming revealed by iteratively homogenized radiosonde data,” Journal of Climate, vol. 21, no. 20, pp. 5336-5352, Oct. 2008. (This describes the IUK dataset)

R. H. Rienecker et al., “MERRA - NASA’s Modern-Era Retrospective Analysis for Research and Applications,” Journal of Climate, 2011. (This describes the MERRA dataset)

Q. Fu and C. M. Johanson, “Satellite-derived vertical dependence of tropospheric temperature trends,” Geophysical Research Letters, vol. 32, 2005. (This introduces the concept of TTT)

Mears, CA, FJ Wentz, P Thorne and D. Bernie, 2011, “Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique”, Journal of Geophysical Research, 116, D08112, doi:10.1029/2010JD014954. (This discussed RSS error ensembles).

Thorne, P. W., D. E. Parker, J. R. Christy, and C. A. Mears, 2005, “Uncertainties in Climate Trends: Lessons From Upper-Air Temperature Records”, Bulletin of the American Meteorological Society, 86, 1437-1442. (This discusses the idea of structural uncertainty)

Morice, C. P., J. J. Kennedy, N. A. Rayner, and P. D. Jones (2012),Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set, J. Geophys. Res., 117, D08101, doi:10.1029/2011JD017187. (This discussed HadCRUT4 and the HadCRUT4 error ensembles)

Santer, B. D., J. F. Painter, C. A. Mears, C. Doutriaux, P. Caldwell, J. M. Arblaster, P. J. Cameron-Smith, N. P. Gillett, P. J. Gleckler, J. Lanzante, J. Perlwitz, S. Solomon, P. A. Stott, K. E. Taylor, L. Terray, P. W. Thorne, M. F. Wehner, F. J. Wentz, T. M. L. Wigley, L. J. Wilcox, and C. Z. Zou, 2012: Identifying Human Influences on Atmospheric Temperature. Proceedings of the National Academy of Sciences, 110, 26-33, doi:10.1073/pnas.1210514109. (This describes, among other things, the construction of the 1979-2012 model datasets by combining 20th century simulations with RCP8.5 21st century predictions)

Guest blog Steven Sherwood

The tropical upper-tropospheric warming “hot spot”: is it missing, and what if it were?

Prof. Steven Sherwood, Director, Climate Change Research Centre, University of New South Wales, Sydney Australia.

In this post I’ll address two issues: first, confidence in tropical lapse-rate* changes and what they would mean for our understanding of atmospheric physics; second, the broader implications for global warming. My main positions on this issue could be summarised as: a) lapse-rate changes differing significantly from those expected from basic thermodynamic arguments would be very interesting, but, b) they would have no clear implications for global warming, and c) evidence that they have occurred is not reliable (which in a way is too bad, because of (a)). A side point is that there are other model-observation discrepancies that I think are more worthy of attention (and are accordingly receiving more attention from the mainstream scientific community).

Confidence and Implications for Atmospheric Physics
I first became interested in the tropical lapse rate (now alternatively known as “hot spot”) issue around 2001, shortly after it was raised in a prominent paper in Science (Gaffen et al. 2000). I had been using radiosonde data to look at wind fields and temperature trends near the tropical tropopause and lower stratosphere. This new problem drew my attention because I was interested in how tropical atmospheric convection (e.g. storms) responds to its environment, one of the grand unsolved problems in atmospheric modelling. Tropical convection was supposed to prevent the kind of lapse-rate changes that were being reported, so what was going on?

I considered various ways that aerosols might alter the convective lapse rate and how to test these hypotheses. Before going too far with this, however, I wanted to assure myself that the reported trends were robust, and began my own analysis of the radiosonde data (or in fact continued it, since I was already using radiosondes to understand change in the lower stratosphere). By 2005, based on my own work and others’ plus a better understanding of the basic challenges, I no longer thought there was credible evidence for any unexpected changes in atmospheric temperature structure. Consequently I dropped this as a research topic (my student who was thinking along these lines, Bob Allen, changed gears to examine the possible impacts of aerosol on the general circulation which did lead to some very interesting results published later; he also showed that wind trends in the tropics were consistent with the hot spot).

Small changes
Although there has been more to-ing and fro-ing in the literature since then, as described in the opening article for this exchange, I still remain unconvinced that we can observe the small changes in temperature structure that are being discussed. Tests of radiosonde homogenisation methods (e.g., Thorne et al. 2011) show that they are often unreliable. MSU is not well calibrated and its homogenisation issues are also serious, as shown by the range of results previously obtained from this instrument series. To obtain upper-tropospheric trends from Channel 2 of MSU requires subtracting out a large contribution to trends in this channel coming from lower-stratospheric cooling. The latter remains highly uncertain due to a discrepancy between cooling rates in radiosondes and MSU. Tropical ozone trends are sufficiently uncertain so as to render either of these physically plausible (Solomon et al. 2012). I used to think (as do most others) that the radiosondes were wrong, but in Sherwood et al. 2008 we found (to my surprise) that when we homogenised the global radiosonde data they began to show cooling in the lower stratosphere that was very similar to that of MSU Channel 4 at each latitude, except for a large offset that varied smoothly with latitude. Such a smoothly varying and relatively uniform offset is very different from what we’d expect from radiosonde trend biases (which tend to vary a lot from one station to the next) but is consistent with an uncorrected calibration error in MSU Channel 4. If that were indeed responsible, it would imply that there has been more cooling in the stratosphere than anyone has reckoned on, and that the true upper-tropospheric warming is therefore stronger than what any group now infers from MSU data. By the way, our tropospheric data also came out very close to those published at the time by RSS, both in global mean and in the latitudinal variation (Sherwood et al., 2008).

Changes in tropical lapse rate remain an interesting problem in principle, because we know that convective schemes in global atmospheric models need improving, and this could be informative as to what is wrong. Current schemes enforce the theoretical “moist-adiabatic” lapse rate quite strongly. It would not surprise me much if it turned out that they are too heavy-handed in this respect, and that a better model would anchor the upper tropospheric temperature less firmly to the surface temperature. Indeed there is reason to believe that other problems with these models, such as difficulties in generating proper hurricanes or a tropical phenomenon known as the Madden-Julian oscillation, may also derive from the schemes triggering convection too easily and enforcing these lapse rates too vigorously. So I would not at all discount the possibility of these lapse-rate changes occurring, but one needs strong evidence, and we just don’t have that.

Broader implications for global warming
Perhaps the most remarkable and puzzling thing about the “hot spot” question is the tenacity with which climate contrarians have promoted it as evidence against climate models, and against global warming in general.

If I were looking for climate model defects, there are far more interesting and more damning ones around. For example, no climate model run for the IPCC AR4 (c. 2006) was able to reproduce the losses of Arctic sea ice that had been observed in recent decades (and which have continued accelerating since). No model, to my knowledge, produces the large asymmetry in warming between the north and south poles observed since 1980. Models underpredict the observed poleward shifts of the atmospheric circulation and climate zones by about a factor of three over this same period (Allen et al. 2012); cannot explain the warmings at high latitudes indicated by palaeoclimate data in past warm climates such as the Pliocene (Fedorov et al. 2013); appear to underpredict observed trends in the hydrological cycle (Wentz et al. 2007, Min et al. 2011) and in their simulated climatologies tend to produce rain that is too frequent, too light, and on land falls at the wrong time of day (Stephens et al. 2010). Finally, the tropical oceans are not warming as much as the land areas, or as much as predicted by most models, and this may be the root cause of why the recent warming of the tropical atmosphere is slower than predicted by most models (there is a nice series of posts about this on Isaac Held’s blog). What makes the “hot spot” more important than these other discrepancies which, in many cases, are supported by more convincing evidence? Is it because the “missing hot spot” can be spun into a tale of model exaggeration, whereas all the other problems suggest the opposite problem?

Nil
Let us suppose for the moment that the “hot spot” really has been missing while the surface has warmed. What would the implications be?

The implications for attribution of observed global warming are nil, as far as I can see. The regulation of lapse rate changes by atmospheric convection is expected to work exactly the same way whether global temperature changes are natural or forced (say, by greenhouse gases from fossil fuel burning).

The implications for climate sensitivity are also roughly nil. The total feedback from water vapour and lapse-rate changes depends only on the changes in relative humidity in the upper troposphere, not on the lapse rate itself (see Ingram, 2013). In fact, in climate models where the lapse rate becomes relatively steeper as climate warms (as would be the case with a missing hot spot), the total warming feedback is very slightly stronger because the increased lapse rate increases the greenhouse effect of carbon dioxide and other well-mixed greenhouse gases. So a missing hot spot would not mean less surface warming, at least according to our current understanding.

Moreover, the discrepancy with models was the opposite over 1958-1979 (Gaffen et al. 2000), that is to say, the observed tropical upper-tropospheric warming was evidently stronger than expected. But the world was warming then too. So if this interesting phenomenon is real, it probably is not connected to global warming.

Fig. 1. Weaker upper-tropospheric warming and hence weaker water-vapour feedback actually implies, on average, slightly stronger overall positive feedback due to lapse rate and water vapour combined (from Ingram 2013).

Anyone who wants to argue that the “missing hot spot” implies something as to the future (say, that global warming will be less than current models predict) needs to come up with an alternative model of climate that agrees just as well with observations, obeys physical laws, predicts the absence of a “hot spot,” and predicts less future global warming (or whatever other novel outcome). This is how science advances: through the consideration of multiple hypotheses. If a new one comes along that fits the observations, I’ll gladly consider it.

Currently none of the explanations I can see for the “missing hot spot” would change our estimate of future warming from human activities, except one: that the overall warming of the tropics is simply slower than expected. It does seem that global-mean surface warming is starting to fall behind predictions, and this is particularly so in the tropical oceans (though not, curiously, on land). Possible causes are (a) aerosols, solar or other forcings have recently exerted a stronger (temporary) cooling influence than we think; (b) negative feedbacks from clouds have kicked in; or (c) the oceans are burying the heat faster than we expected. If (b) were true, we would revise our estimates of climate sensitivity downward. There are observations supporting options (c) and to a small extent (a), but there is plenty of room for new surprises. If it is (c) (which appears most likely), we then have to decide whether this is a natural variation or if it is a feature of global warming. In the former case the heat will soon come back; in the latter, the oceans will delay climate change more effectively than we thought. Another decade or so of observations should reveal the answer.

*for readers unfamiliar with the term “lapse rate,” it is the rate at which air temperature decreases with altitude.

Biosketch
Dr. Steven Sherwood is professor at the Climate Change Research Centre of the University of New South Wales in Sydney. He received his M.S. degree (1989) in Eng. Physics/Fluid Mechanics at the University of California San Diego, USA and his Ph.D. degree (1995) in Oceanography at the Scripps Institution of Oceanography.
Sherwood studies how the various processes in the atmosphere conspire to establish climate, how these processes might be expected to control the way climate changes, and how the atmosphere will ultimately interact with the oceans and other components of Earth. Clouds and water vapour in particular remain poorly understood in many respects, but are very important not only in bringing rain locally, but also to global climate through their effect on the net energy absorbed and emitted by the planet. Tropospheric convection (disturbed weather) is a key process by which the atmosphere transports water and energy and in the process creates clouds, but it is also a turbulent phenomenon for which we have no basic theory and which observations cannot yet fully characterise.
Sherwood leads a research group that applies basic physics to complex problems by a combination of simple theoretical ideas and hypotheses and directed analyses of observations.

References
Allen, R. J., S. C. Sherwood, J. R. Norris and C. Zender, Recent Northern Hemisphere tropical expansion primarily driven by black carbon and tropospheric ozone, Nature, Vol. 485, 2012, 350-355.

Fedorov, A. V., C. M. Brierley, K. T. Lawrence, Z. Liu, P. S. Dekens and A. C. Ravelo (2013). "Patterns and mechanisms of early Pliocene warmth." Nature 496(7443): 43-49.

Gaffen, D. J., B. D. Santer, J. S. Boyle, J. R. Christy, N. E. Graham and R. J. Ross, Multidecadal changes in the vertical temperature structure of the tropical troposphere, Science, 2000, V. 287, 1242-1245.

Ingram, W. (2013). "Some implications of a new approach to the water vapour feedback." Climate Dynamics 40: 925-933.

Min, S. K., X. B. Zhang, F. W. Zwiers and G. C. Hegerl (2011). "Human contribution to more-intense precipitation extremes." Nature 470(7334): 378-381.

Sherwood, S. C., C. L. Meyer, R. J. Allen, and H. A. Titchner, Robust tropospheric warming revealed by iteratively homogenized radiosonde data. Journal of Climate, Vol. 21, 2008, 5336-5352.

Solomon, S., P. J. Young and B. Hassler (2012). "Uncertainties in the evolution of stratospheric ozone and implications for recent temperature changes in the tropical lower stratosphere." Geophysical Research Letters 39.

Stephens, G. L., T. L'Ecuyer, R. Forbes, A. Gettlemen, J.-C. Golaz, A. Bodas-Salcedo, K. Suzuki, P. Gabriel and J. Haynes (2010). "Dreary state of precipitation in global models." Journal of Geophysical Research 115: D24211.

Thorne, P. W. et al., A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes, J. Geophys. Res., Vol. 116, 2011, D12116.

Wentz, F. J., L. Ricciardulli, K. Hilburn and C. Mears, 2007, How much more rain will global warming bring?, Science, Vol. 317, 233-235.

Guest blog John Christy

Why should we care about the tropical temperature?

John R. Christy, Distinguished Professor, Department of Atmospheric Science, Director Earth System Science Center, The University of Alabama in Huntsville

One important part of climate change research is to document the amount of change that can already be attributed to human activity. In other words we want to know the answer to the question, “How has the climate changed specifically because of the enhancement of the natural greenhouse effect caused by extra emissions due to human progress?” These rising emissions come primarily from energy production using carbon-based fuels which emit, as a by-product, the ubiquitous and life-sustaining greenhouse gas carbon dioxide (CO2). The concentration of CO2 has risen from about 280 ppm in the 19th century to about 400 ppm today.

So, what have the extra CO2 and other greenhouse gases done to the climate as of today? Climate model simulations indicate that a prominent and robust response to extra greenhouse gases is the warming of the tropical troposphere, a layer of air from the surface to about 16 km altitude in the region of the globe from 20°S to 20°N. A particularly obvious feature of this expected warming, and a key focus of this blog post, is that the warming increases with altitude: the rate of warming at 10 km altitude is over twice the rate at the surface. This clear model response should be detectible by now (i.e. 2012), which gives us an opportunity to check whether the real world is responding as the models simulate for a large-scale, easy-to-compare quantity. This is why we care about the tropical atmospheric temperature.

Accumulating heat
There are two aspects to this tropical warming that are sometimes confused. One aspect is the simple magnitude of the warming rate, or temperature trend, of the entire troposphere. This metric quantifies the amount of heat that is accumulating in the bulk atmosphere. A well-established result of adding greenhouse gases to the atmosphere is that heat energy (in units of joules) will accumulate in the troposphere which can be detected as a rise in temperature. [The fundamental issue of the effects of greenhouse warming is: how many joules of heat are accumulating in the climate system per year?]

We don’t know at what rate that accumulation might occur as other processes may come into play which reduce or magnify it. For example, with extra greenhouse gases, the rate at which the joules are allowed to escape to space may be reduced by additional responses, causing even more heating. On the other hand, there could be an increase in cloudiness which may limit the number of joules (from the sun) which enter the climate system, thus causing a cooling influence. A reaction of the climate system to extra CO2 that promotes even more accumulation over what would have happened due to CO2 alone is a positive feedback, while one that limits the accumulation of joules is a negative feedback. In the climate system, there are numerous feedbacks of both signs, all interdependent and intertwined.

The second aspect of enhanced temperature change is the amount of amplification the higher altitude layers will experience relative to the surface warming as noted earlier – which is linked to the first aspect and is discussed as a complement to it. In simple thinking, if enough joules are added to the troposphere to increase its temperature by 1 °C throughout, one would expect a uniform 1 °C warming from the surface to the top of the troposphere. However, as seen in the way the real atmosphere behaves on monthly and yearly time scales, the surface temperature change tends to be less than 1 °C while the upper troposphere warms by more than 1 °C. Since there is a reduction in the expected increase of the surface temperature given the number of joules added, this phenomenon is called a negative lapse-rate feedback on surface temperature (even though the upper air heats up more). So the models anticipate that there will be a strong amplification of the surface temperature change as one ascends through the troposphere. [So, if someone claims that surface and upper air trends agree in magnitude, then they are also claiming that this is not consistent with the enhanced greenhouse effect since, according to models, the temperature trends of those two levels should not agree.]

Thus, there are two ideas to test in the tropics, (1) the overall magnitude of the layer-average temperature rise and (2) the magnification or amplification of the surface temperature change with height.

Balloons and satellites
Measurements of tropical tropospheric temperature have been performed by balloons that ascend through the air and radio back the atmosphere’s vital statistics, like temperature, humidity, etc. Due to a number of changes in these instruments through the years, research organizations have spent a lot of effort removing such problems and creating homogeneous, or consistent, databases of these readings. For this study we shall assume that the average of four major and well-published datasets (known as RATPAC, RAOBCORE, RICH and HadAT2) will serve as the “best guess” of the tropical temperatures at the various elevations (see Christy and Hnilo, 2007, Christy et al. 2010, 2011 for descriptions and earlier results).

For a layer-average of the tropospheric temperature there are two satellite-based tropospheric datasets (known as UAH and RSS) which have by independent methods combined the readings from several spacecraft carrying microwave instruments into a time series beginning in late 1978. There are dozens of publications which detail the methods used by the various groups to generate both balloon and satellite products. Through the years each group has updated their products as new information has come to light, and we use the latest versions as of June 2013.

The time frame we shall consider here begins in Jan 1979 and ends in Dec 2012, as this is the period for which we have output from models and from observations, both balloons and satellites. It is also the period in which the greatest accumulation of heat energy (joules) should be evident, due to the increasing impact of rising greenhouse gas concentrations.

To examine the simple magnitude of full-tropospheric trends we look at two layers as measured by the satellites: roughly the average temperature from the surface to about 10 km (lower troposphere or TLT) and from the surface to about 17 km (mid-troposphere or TMT). TMT gives more weight to the region between 500 hPa (5.5 km) and 200 hPa (12 km) where the warming is expected to be most pronounced according to models, so the figures will focus on TMT. We can simulate the satellite layers using both balloon data and model output for direct, apples-to-apples comparisons (Fig. 1).
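
As a rough sketch of what “simulating the satellite layer” involves, the snippet below applies a static vertical weighting function to level temperatures from balloons or models. The weights shown are placeholders for illustration only; the real TMT weighting function comes from radiative transfer calculations for the MSU/AMSU channels.

```python
import numpy as np

# Illustrative pressure levels (hPa) and layer weights; these numbers are
# placeholders, not the published MSU/AMSU weighting function.
levels_hpa = np.array([1000, 850, 700, 500, 400, 300, 250, 200, 150, 100])
tmt_weights = np.array([0.05, 0.10, 0.12, 0.15, 0.15, 0.14, 0.11, 0.09, 0.06, 0.03])
tmt_weights = tmt_weights / tmt_weights.sum()   # normalize to sum to one

def simulated_layer_temperature(temps_by_level):
    """Weighted vertical average of level temperatures (from balloon profiles or
    model output), mimicking the deep-layer temperature a satellite channel senses."""
    return float(np.dot(tmt_weights, temps_by_level))
```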

Figure 1. Time series of the mid-tropospheric temperature (TMT) of 73 CMIP-5 climate models (RCP8.5) compared with observations (circles are averages of the four balloon datasets and squares are averages of the two satellite datasets). Values are running 5-year averages for all quantities. [There are four basic RCP emission scenarios applied to CMIP-5 models, but their divergence occurs after 2030. Thus, for our comparison which ends in 2012, there are essentially no differences among the RCP scenarios.] The model output for all figures was made available by the KNMI Climate Explorer.

We see that all 73 models anticipated greater warming than actually occurred for the period 1979-2012. Of importance here too is that the balloons and satellites represent two independent observing systems, yet they display extremely consistent results. This provides a relatively high level of confidence that the observations as depicted here have small errors. The observational trends from both systems are slightly less than +0.06 °C/decade, a value not significantly different from zero. The mean TMT model trend is +0.26 °C/decade, which is significantly positive in a statistical sense. The observed satellite and balloon TLT trends (not shown) are +0.10 and +0.09 °C/decade respectively, and the mean model TLT trend is +0.28 °C/decade. In a strict hypothesis test, the mean model trend can be shown to be statistically different from that of the observations, so that one can say the model mean has been falsified (a result stated in a number of publications already for earlier sets of model output). In other words, the model-mean tropical tropospheric temperature trend is warming significantly faster than the observations (see Douglass and Christy 2013 for further information).
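
Here is a minimal sketch of the kind of hypothesis test described above, comparing an observed trend with the multi-model mean trend and its standard error (in the spirit of the Douglass et al. approach). The model trends below are randomly generated placeholders, and observational and internal-variability uncertainties are deliberately ignored, which is exactly the point of contention with Santer et al.

```python
import numpy as np

def model_mean_consistency(model_trends, observed_trend):
    """Is the observed trend within +/- 2 standard errors of the multi-model
    mean trend? (Observational and weather-noise uncertainty are not included.)"""
    trends = np.asarray(model_trends)
    mean = trends.mean()
    se = trends.std(ddof=1) / np.sqrt(len(trends))
    consistent = abs(observed_trend - mean) <= 2.0 * se
    return mean, se, consistent

# Illustrative numbers echoing the text: model-mean TMT trend ~ +0.26 C/decade,
# observed ~ +0.06 C/decade; the individual model trends here are placeholders.
example_model_trends = np.random.default_rng(1).normal(0.26, 0.10, 73)
print(model_mean_consistency(example_model_trends, 0.06))
```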

Amplification
Regarding the second aspect of temperature change, we show the vertical structure of those changes in Fig. 2 where we display the temperature trend by vertical height (pressure) as indicated by the four balloon datasets (circles), their average (large circle) and 73 model simulations (lines of various types).

Figure 2 Temperature trends in °C/decade by pressure level with 1000 hPa being the surface and 100 hPa being around 16 km. Circles represent the four observational balloon datasets, the largest circle being their mean. The lines represent 73 CMIP-5 model simulations (identities in Fig. 3) with the non-continuous lines representing models sponsored by the U.S. The large black dashed line is the 73-model mean. The pressure values are very close to linear with respect to mass but logarithmic with respect to altitude, so that 500 hPa is near 5.5 km altitude, 300 hPa near 9 km altitude and 200 hPa about 12 km altitude.

Figure 3 Caption for Fig. 2, identifying model runs and observational datasets.

The models (especially) show increasing trends as altitude increases to 250 hPa (about 10 km) before decreasing toward the stratosphere (~90 hPa). In comparing model simulations with the observations it is clear that between 850 and 200 hPa, all model results are warmer than the average of the balloon observations, a result not unexpected given the information in Fig. 1.

The amplification of the surface trend with elevation in Fig. 2 is somewhat difficult to discern because each model has its own surface trend magnitude. To better compare the amplification effect, we normalize the pressure-level trend values by the trend of the surface value for each dataset and model simulation.
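
The normalization itself is straightforward; a minimal sketch with illustrative numbers (not taken from the datasets or models shown) follows.

```python
import numpy as np

def amplification_profile(trends_by_level, surface_trend):
    """Trend at each pressure level divided by the surface trend: the
    amplification factors plotted in Fig. 4."""
    return np.asarray(trends_by_level) / surface_trend

# Illustrative values only (deg C/decade), loosely shaped like a modelled profile
levels_hpa = [850, 700, 500, 300, 250, 200]
model_like_trends = [0.22, 0.24, 0.28, 0.38, 0.40, 0.36]
print(amplification_profile(model_like_trends, surface_trend=0.20))  # ratios > 1 aloft
```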

Figure 4. Value of the 1979-2012 temperature trend at various upper levels divided by the magnitude of the respective surface trend, i.e. the ratio of upper air trends to surface trends. Model simulations are lines with the average of the models as the dotted line. Squares are individual balloon observations (green – RATPAC, gray RAOBCORE, purple – RICH and orange – HadAT2) with the averages of observations the gray circles.

Figure 4 displays the ratio, or amplification factor, that observations and models depict for 1979-2012 in the tropics (see Christy et al. 2010 for further information). The mean observational result indicates the values are between +0.5 and +1.5 through the lower and middle troposphere (850 to 250 hPa). [The observational results tend to have greater variability due to the denominator (surface trend) being relatively small. Viewing Fig. 2 shows that the observations are rather tightly bunched for absolute trends in comparison to the model spread.] The models indicate a systematic increase in the ratio from 1.0 at the surface, with amplification factors well above +1.5 from 500 to 200 hPa. What this figure clearly indicates is that the second aspect of this discussion, namely the rising temperatures with increasing altitude, is also overdone in the climate models. The differences of the means between observations and models are significant.

Overwarm
While there is much that can be discussed from these results, we wonder simply why the models overwarm the troposphere compared with observations by such large amounts (on average) during a period when we have the best understanding of the processes that cause the temperature to change. During a period when the mid-troposphere warmed by +0.06 °C/decade, why does the model average simulate a warming of +0.26 °C/decade?

Unfortunately, a complete or even satisfactory answer cannot be provided. Each model is constrained by its own sets of equations and assumptions that prevent simple answers, especially when all of the individual processes are tangled together through their unique complex of interactions. The real world also presents some baffling characteristics since it is constrained by the laws of physics which are not fully and accurately known for this wickedly complex system.

An interesting feature of the models is that almost all show greater year-to-year variability than the observations (Fig. 1). The average model annual variance (detrended) of anomalies is 60 percent greater than that of the observational datasets. This is a clue that suggests the models’ atmospheres are more sensitive to forcing than is the real climate system, so that an increase in greenhouse forcing in models will lead to a greater temperature response than experienced by the actual climate system. But saying the climate models are too sensitive only identifies another symptom of the issue, not the cause.
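
For readers who want to reproduce this kind of variance comparison, here is a minimal sketch (assuming monthly anomaly series as input; the 60 percent figure quoted above comes from the text, not from this code):

```python
import numpy as np

def detrended_annual_variance(monthly_anom):
    """Variance of detrended annual-mean anomalies: the metric used above to
    compare modelled and observed year-to-year variability."""
    years = len(monthly_anom) // 12
    annual = np.asarray(monthly_anom[:years * 12]).reshape(years, 12).mean(axis=1)
    t = np.arange(years)
    residual = annual - np.polyval(np.polyfit(t, annual, 1), t)  # remove linear trend
    return residual.var(ddof=1)

# Schematically, the model-to-observation variance ratio would then be:
# np.mean([detrended_annual_variance(m) for m in model_series]) /
# np.mean([detrended_annual_variance(o) for o in obs_series])
```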

We want to know why the extra joules of energy that increasing CO2 concentrations should be trapping in the climate system are not found in Nature’s atmosphere compared with what the models simulate.

Could the extra joules be absorbed by the deep ocean and prevented from warming the atmosphere (Guemas et al. 2013)? This requires extremely accurate measurements of the deep ocean (better than 0.01 °C precision), which are not currently available with comprehensive coverage in space and time. Current studies based only on observations suggest this enhanced sequestration of heat is not happening.
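To see why such precision is demanding, a rough back-of-envelope calculation helps; all numbers below are assumed round values for illustration, not measurements.

# Rough estimate of the ocean temperature signal implied by a hypothetical
# unaccounted-for energy flux into the ocean. Assumed round values only.
imbalance_w_m2 = 0.5      # assumed "missing" flux into the ocean (W/m^2)
years = 10
seconds = years * 3.15e7
depth_m = 2000.0          # layer over which the heat is assumed to be mixed
rho = 1025.0              # seawater density (kg/m^3)
cp = 3990.0               # seawater specific heat (J/(kg K))

energy_per_m2 = imbalance_w_m2 * seconds              # J/m^2
delta_t = energy_per_m2 / (rho * cp * depth_m)        # K

print(f"Implied warming of the 0-{depth_m:.0f} m layer over {years} years: "
      f"{delta_t:.3f} °C")
# Roughly 0.02 °C, i.e. detecting it requires precision of order 0.01 °C.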

Could there be a separate process like enhanced solar reflection by aerosols that is keeping the number of joules available for absorption at a smaller level relative to the past? The interaction of aerosols with the entire array of climate processes is another fundamental area of research that has more questions than answers. How do aerosols affect cloudiness (more? less? brighter? darker?)? What is the precise, time-varying distribution of all types of aerosols, and what exactly does each type do in terms of affecting the absorption and reflection of joules at all frequencies? The IPCC typically shows very large error ranges for our knowledge of aerosol effects, so there is a possibility that models have significant and consistent errors in dealing with them (IPCC AR4, 2007, Fig. SPM.2).

Clouds and water vapor
Could there be a complex feedback response in the way the real atmosphere handles water vapor and clouds that acts to enhance the expulsion of joules to space under extra greenhouse forcing, so that they don’t accumulate very rapidly? Of the many processes that models struggle to represent, none are more difficult than clouds and water vapor. As recently shown by Stevens and Bony (2013), different models driven by an identical, simplified forcing produced very different results for cloudiness. This is my favorite option in terms of explaining the lack of joule accumulation. As my colleague Roy Spencer reminds us, if you think about it, the atmosphere should have 100 percent humidity because it has an essentially infinite source of water in the oceans. However, precipitation prevents that from happening, so precipitation processes are apparently in control of water vapor concentrations – the greenhouse gas with the largest impact on temperature. This means the way precipitation and clouds behave (both in causing changes and in responding to them) when slight changes occur in the environment is key in my view. In our global temperature measurements we have actually observed large temperature swings that were preceded by changes in cloudiness. So a response to the extra CO2 forcing by clouds and water vapor, which have a massive impact on temperature, could be the reason for the rather modest temperature rise we’ve experienced (Spencer and Braswell, 2010).

Or, could there be natural variations that completely overcome small enhancements in greenhouse-joule-trapping? These variations have demonstrated the ability to drive large temperature swings in the past, but we cannot simulate or predict them well at all. For that we need extremely accurate ocean simulations along with accurate representations of clouds, precipitation and water vapor (among other things).

The bottom line is that, while I have some ideas based on some evidence, I don’t know why models are so aggressive at warming the atmosphere over the last 34 years relative to the real world. The complete answer is probably different for each model. To answer that question would take a tremendous model evaluation program run by independent organizations that has yet to be formulated and funded.

What I can say from the standpoint of applying the scientific method to a robust response-feature of models is that the average model result is inconsistent with the observed rate of change of tropical tropospheric temperature, inconsistent both in absolute magnitude and in vertical structure (Douglass and Christy 2013). This indicates our ignorance of the climate system is still enormous and, as suggested by Stevens and Bony, this performance by the models indicates we need to go back to the basics. From this statement it is only a short step to the next: the use of climate models in policy decisions is, in my view, not to be recommended at this time.

Biosketch
J.R. Christy is Distinguished Professor of Atmospheric Science at the University of Alabama in Huntsville and Director of the Earth System Science Center. He is Alabama’s State Climatologist. In 1989 he and Dr. Roy Spencer, then of NASA, published the first global, bulk-atmospheric temperatures from microwave satellite sensors. For this achievement they were recognized with NASA’s Medal for Exceptional Scientific Achievement and the American Meteorological Society’s Special Award for developing climate datasets from satellites. Christy has served on the IPCC panels as Contributor, Key Contributor and Lead Author and has testified before the U.S. Congress, federal court, many state legislatures and regulatory boards on climate issues.

References

Christy, J. R., W. B. Norris, R. W. Spencer, and J. J. Hnilo, 2007: Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements. J. Geophys. Res., 112, D06102, doi:10.1029/2005JD006881.

Christy, J.R., B. Herman, R. Pielke, Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, (2010): What do observational datasets say about modeled tropospheric temperature trends since 1979? Remote Sens. 2, 2138-2169.

Christy, J.R., R.W. Spencer and W.B. Norris, 2011: The role of remote sensing in monitoring global bulk atmospheric temperatures. Int. J. Remote Sens., 32, 671-685, DOI:10.1080/01431161.2010.517803.

Douglass, D. and J.R. Christy, 2013: Reconciling observations of global temperature change: 2013. Energy and Env., 24 No. 3-4, 414-419.

Guemas, V., F.J. Doblas-Reyes, I. Andreu-Burillo and M. Asif, 2013: Retrospective prediction of the global warming slowdown in the past decade. Nature Clim. Ch., 3, 649-653, DOI:10.1038/nclimate1863.

Spencer, R.W. and W.D. Braswell, 2010: On the diagnosis of radiative feedback in the presence of unknown radiative forcing. J. Geophys. Res., 115, DOI:10.1029/2009JD013371.

Stevens, B. and S. Bony, 2013: What Are Climate Models Missing? Science, 31 May 2013, doi:10.1126/science.1237554.

Are regional models ready for prime time?

The third Climate Dialogue is about the value of models on the regional scale. Do model simulations at this level have skill? Can regional models add value to the global models?

We have three excellent participants joining this discussion: Bart van den Hurk of KNMI in the Netherlands, who is actively involved in the KNMI climate scenarios; Jason Evans of the University of New South Wales, Sydney, Australia, who is coordinator of the Coordinated Regional Climate Downscaling Experiment (CORDEX); and Roger Pielke Sr., who through his research articles and his weblog Climate Science is well known for his outspoken views on climate modelling.

Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Bart Verheggen

Introduction regional modelling

Are climate models ready to make regional projections?

Climate models are vital tools for helping us understand long-term changes in the global climate system. These models allow us to make physically plausible projections of how the climate might evolve in the future under given greenhouse gas emission scenarios.

Global climate projections for 2050 and 2100 have, amongst other purposes, been used to inform potential mitigation policies, i.e. to get a sense of the challenge we are facing in terms of CO2 emission reductions. The next logical step is to use models for adaptation as well. Stakeholders have an almost insatiable demand for future regional climate projections. These demands are driven by practical considerations related to freshwater resources, especially ecosystems and water-related infrastructure, which are vulnerable to climate change.

Global climate models (GCMs), though, have grid scales that are quite coarse (>100 km). This hampers the reconstruction of climate change at smaller scales (regional to local). Regions (the size of e.g. the Netherlands) are usually covered by only a few grid points. A crucial question therefore is whether information from global climate models at this spatial scale is realistic and meaningful, in hindcast and/or for the future.

Hundreds of studies have been published in the literature [1] presenting regional projections of climate change for 2050 and 2100. The output of such model simulations is then used by the climate impacts community to investigate what potential future benefits or threats could be expected. However, several recent studies cast doubt on whether global model output is realistic on a regional scale, even in hindcast. [2-5]

So a legitimate question is whether global and/or regional climate models are ready to be used for regional projections. Is the information reliable enough to use for all kinds of medium- to long-term adaptation planning? Or should we adopt a different approach?

To improve the resolution of the models, other techniques, such as regional climate models (RCMs) or downscaling methods, have been developed. Nesting a regional climate model (with higher spatial resolution) into an existing GCM is one way to downscale data. This is called dynamical downscaling. A second way of downscaling climate model data is through the use of statistical regression. Statistical downscaling is based on relationships, derived from observations, linking large-scale atmospheric variables from either GCMs or RCMs (predictors) to local/regional climate variables (predictands). [6]
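As a toy illustration of the statistical approach, the sketch below fits a simple linear transfer function between synthetic large-scale predictors and a synthetic station predictand, and then applies it to "future" model output; the variables and coefficients are invented and do not come from any of the cited studies.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" record: large-scale predictors from a model grid box
# (temperature and sea level pressure) and a local station temperature.
n_days = 1000
large_scale_t = rng.normal(15.0, 5.0, n_days)      # grid-box temperature (°C)
large_scale_slp = rng.normal(1013.0, 8.0, n_days)  # sea level pressure (hPa)
station_t = (0.9 * large_scale_t
             - 0.05 * (large_scale_slp - 1013.0)
             + rng.normal(0.0, 1.0, n_days))

# Fit the transfer function on the historical record (ordinary least squares).
X = np.column_stack([np.ones(n_days), large_scale_t, large_scale_slp])
coef, *_ = np.linalg.lstsq(X, station_t, rcond=None)

# Apply the same relationship to future model output. The key assumption is
# that this statistical relationship does not change in a changed climate.
future_X = np.column_stack([np.ones(2), [18.0, 20.0], [1010.0, 1016.0]])
print("downscaled station temperatures:", future_X @ coef)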

Both methods are widely used inside the regional modelling community. The higher spatial resolution allows a more detailed representation of relevant processes, which will hopefully, but not necessarily, result in a “better” prediction. However, RCMs operate under a set of boundary conditions that are dependent on the parent GCM. Hence, if the GCM does not do an adequate job of reproducing the climate signal of a particular region, the RCM will simply mimic those inaccuracies and biases. A valid question therefore is if and how the coupling of an RCM to a GCM can provide more refined insights. [7,8]

Recently Kerr [9] caused quite a stir in the regional modelling community by raising doubts about the reliability of regional model output. A debate about the reliability of model simulations is quickly seen as one between proponents and sceptics of anthropogenic global warming. However as Kundzewicz [10] points out “these are pragmatic concerns, raised by hydrologists and water management practitioners, about how useful the GCMs are for the much more detailed level of analysis (and predictability) required for site-specific water management decisions (infrastructure planning, design and operations).”

Climate dialogue
The focus of this Climate Dialogue will be on the reliability of climate simulations for the regional scale. An important question will be if there is added value from regional climate downscaling.

More specific questions:

1) How realistic are simulations by GCMs on the regional scale?

2) Are some parameters (e.g. temperature) simulated better than others (e.g. precipitation)?

3) Are some regions simulated better than others?

4) To what extent can regional climate models simulate the past?

5) What is the best way to determine the skill of the hindcast?

6) Is there added value of regional models in comparison with global models?

7) What are the relative merits of dynamical and statistical downscaling?

8) How should one judge projections of these regional models?

9) Should global/regional climate models be used for decisions concerning infrastructure development? If so how? If not, what should form a better scientific base for such decisions?

References:

[1] The CMIP3 and CMIP5 list of publications is a good starting point, see http://www-pcmdi.llnl.gov/ipcc/subproject_publications.php and http://cmip.llnl.gov/cmip5/publications/allpublications

[2] G.J. van Oldenborgh, F.J. Doblas Reyes, S.S. Drijfhout, and E. Hawkins, "Reliability of regional climate model trends", Environmental Research Letters, vol. 8, pp. 014055, 2013. http://dx.doi.org/10.1088/1748-9326/8/1/014055

[3] Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110

[4] Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532

[5] J. Bhend, and P. Whetton, "Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation", Climatic Change, 2013. http://dx.doi.org/10.1007/s10584-012-0691-2

[6] Wilby, R. L. (2010) Evaluating climate model outputs for hydrological applications – Opinion. Hydrol. Sci. J. 55(7), 1090–1093

[7] Kundzewicz, Zbigniew W. and Stakhiv, Eugene Z. (2010) 'Are climate models “ready for prime time” in water resources management applications, or is more research needed?', Hydrological Sciences Journal, 55(7), 1085–1089

[8] Pielke, R. A., Sr., and R. L. Wilby, 2012: Regional climate downscaling: What’s the point? Eos Trans. AGU, 93, 52, doi:10.1029/2012EO050008

[9] R.A. Kerr, "Forecasting Regional Climate Change Flunks Its First Test", Science, vol. 339, pp. 638-638, 2013. http://dx.doi.org/10.1126/science.339.6120.638

[10] Kundzewicz, Zbigniew W. and Stakhiv, Eugene Z. (2010) 'Are climate models “ready for prime time” in water resources management applications, or is more research needed?', Hydrological Sciences Journal, 55(7), 1085–1089

Guest blog Bart van den Hurk

The added value of Regional Climate Models in climate change assessments

Regional downscaling of climate information is a popular activity in many applications that assess the possible effects of a systematic change of the climate characteristics at the local scale. Adding local information, not captured in the coarse-scale climate model or observational archives, can provide an improved representation of the relevant processes at this scale, and thus yield additional information, for instance concerning topography, land use or small-scale features such as sea breezes or the organisation of convection. A necessary step in applying the tools used for this regional downscaling is a critical assessment of their quality: are the regional climate models (RCMs) used for this downscaling of climate information good enough for the task?

It is important to distinguish the various types of analyses that are carried out with RCMs, and likewise to assess the ability of RCMs to perform the tasks assigned to them. These types of analyses clearly cover a wider range than plain prediction of the local climate!

Regional climate prediction

Pielke and Wilby (2012) discuss the lack of potential of RCMs to increase the skill of climate predictions at the regional scale. Obviously, these RCM predictions rely heavily on the quality of the boundary conditions provided by global climate models, and fail to represent dynamically the spatial interaction between the region of interest and the rest of the world. However, various “big brother”-type experiments, in which the ability of RCMs to reproduce a filtered signal provided by the boundary conditions is tested (Denis et al, 2002), for instance those carried out by colleagues at KNMI, do show that a high-resolution regional model can add value to a coarse-resolution boundary condition by improving the spatial structure of the projected mean temperatures. Also the spatial structure of changes in precipitation, linked to altered surface temperature by convection, can be improved by using higher-resolution model experiments, although the relative gain here is generally small (Di Luca et al, 2012).

Van Oldenborgh et al (2013) point out that the spatial structure of the mean temperature trend in the recent CMIP5 model ensemble compares fairly well with observations, but anomalies from the mean temperature trend aren’t well captured. This uncertainty clearly limits the predictability of temperatures at the regional scale beyond the mean trend. Van Haren et al (2012) also nicely illustrate the dependence of regional skill on lateral boundary conditions: simulations of (historic) precipitation trends for Europe failed to match the observed trends when lateral boundary conditions were provided from an ensemble of CMIP3 global climate model simulations, while a much better correspondence with observations was obtained when reanalyses were used as boundary condition. Thus, a regional prediction of a trend can only be considered to be skilful when the boundary forcings represent the signal to be forecasted adequately. And this does apply to mean temperature trends for most places in the world, but not for anomalies from these mean trends, nor for precipitation projections.

For regional climate predictability, the added value of RCMs should come from better resolving the relationship between mean (temperature) trends and key indicators that are supposedly better represented in the high-resolution projections utilizing additional local information, such as temperature or precipitation extremes. Also here, evidence of added skill has not been unequivocally demonstrated. Min et al (2013) evaluate the ability of RCMs driven by reanalysis data to reproduce observed trends in European annual maximum temperatures, and conclude that there is a clear tendency to underestimate the observed trends. For Southern Europe, biases in maximum temperatures could be related to errors in the surface flux partitioning (Stegehuis et al, 2012), but no such relationship was found for NW Europe by Min et al (2013).

Thus indeed, the limitations to the predictability of regional climate information by RCMs, as discussed by Pielke and Wilby (2012) and others, are valid, and care must be taken when interpreting RCM projections as predictive assessments. But is this the only application of RCMs? Not really. We will discuss two other applications, together with the degree to which the limitations in RCM skill apply and are relevant.

Bottom up environmental assessments

A fair point of critique of exploring a cascade of model projections, ranging from the global scale down to the local scale of a region of interest to developers of adaptation or mitigation policies, is the virtually unmanageable increase in the number of degrees of freedom, also referred to as “uncertainty”. Uncertainty arises from imperfect models, inherent variability, and the unknown evolution of external forcings. In fact, the process of (dynamical) downscaling adds another level of uncertainty, related to the choice of downscaling tools and methodologies. The reverse approach, which starts from the vulnerability of a region or sector of interest to changes in environmental conditions (Pielke et al, 2012), does not eliminate all sources of uncertainty, but it allows a focus on the relevant part of the spectrum, including those elements that are not related to greenhouse gas induced climate change.

But also here, RCMs can be of great help, not necessarily by providing reliable predictions, but by supporting evidence about the salience of planned measures or policies (Berkhout et al, 2013). A nice example is a near-flooding situation in the Northern Netherlands (January 2012), caused by the combined occurrence of a saturated soil due to excessive antecedent precipitation, a heavy precipitation event in the coastal area, and a storm surge lasting several days that hindered the discharge of excess water from the area. This is typically a “real weather” event that is not necessarily exceptional but does expose a local vulnerability to superfluous water. The question asked by the local water managers was whether the combination of the individual events (wet soil, heavy rain, storm surge) has a causal relationship, and whether the frequency of occurrence of such compound events can be expected to change in the future. Observational analyses do suggest a link between heavy precipitation and storm surge, but the available dataset was too short to explore the statistical relationships in the relevant part of the frequency distribution. A large set of RCM simulations is now being explored to increase the statistical sample but, more importantly, to provide a physically comprehensive picture of the boundary conditions leading up to an event like this. Enabling the policy makers to communicate this physically comprehensive picture provides public support for measures undertaken to adapt to this kind of event. This exploration of model-based, synthetic, future weather is a powerful method to assess the consequences of possible changes in regional climate variability for local water management.

Process understanding

Apart from being a tool to predict a system given its initial state and the boundary forcings acting on it, a model is a collection of our understanding of the system itself. Its usefulness lies not only in its ability to predict, but also in its ability to describe the dynamics of a system governed by internal processes and interactions with its environment. Regional climate models should likewise be considered “collections of our understanding of the regional climate system”, and can likewise be used to study this system and learn about it. There are numerous studies in which regional climate model experiments have increased our understanding of the mechanisms of the climate system acting on a regional scale. A couple of examples:

  • Strong trends in coastal precipitation, and particularly a series of extreme precipitation events in the Netherlands, could successfully be attributed to anomalies in sea surface temperature (SST) in the nearby North Sea (Lenderink et al, 2009). Strong SST gradients close to the coast needed to be imposed on the RCM simulations carried out to reveal this mechanism. The relationship between SSTs and spatial gradients in changes in (extreme) precipitation is an important finding for analysing the measures necessary to anticipate future changes in the spatial and temporal distribution of rainfall in the country.
  • During the past century land use change has given rise to regional changes in the local surface climatology, particularly the mean and variability of near-surface temperature (Pitman et al, 2012). A set of GCM simulations dedicated to quantifying the effect of land use change relative to changes in the atmospheric greenhouse gas concentration over the past century revealed that the land use effect is largely limited to the area of land use change. Wramneby et al (2010) explored the regional interaction between climate and vegetation response using an RCM set-up, and highlighted the importance of this interaction for assessing the mean temperature response, particularly at high latitudes (due to the role of vegetation in snow-covered areas) and in water-limited evaporation regimes (due to the role of vegetation in controlling surface evaporative cooling).
  • On many occasions the degree to which anomalies in the land surface affect the overlying atmosphere depends on the resolved spatial scale. As an example, Hohenegger et al (2009) investigated the triggering of precipitation in response to soil moisture anomalies with a set of regional models differing in physical formulation and resolution. This issue deserves a lot of attention in the literature due to the possible existence of (positive) feedbacks that may affect the occurrence or intensity of hydrological extremes such as heatwaves. In her study, RCMs operating at the typical 25 – 50 km resolution tend to overestimate the positive soil moisture – precipitation feedback in the Alpine area, which is better represented by higher-resolution models. It is a study that points at a possible mechanism that needs to be adequately represented for generating reliable projections.

Each of these examples (and many more that could be cited) generates additional insight into the processes controlling local climate variability by allowing researchers to zoom in on these processes using RCMs. They thus contribute to setting the research agenda for improving our understanding of the drivers of regional change.

Climate predictions versus climate scenarios

The notion that a tool – an RCM – may possess shortcomings in its predictive skill, but simultaneously prove to be a valuable tool to support narratives that are relevant to policy making and spatial planning, can in fact be extended to highlighting the difference between “climate predictions” and “climate scenarios”. Scenarios are typically used when deterministic or probabilistic predictions show too little skill to be useful, either because of the complexity of the considered system, or because of fundamental limitations to its predictability (Berkhout et al, 2013). A scenario is a “what if” construction, a tool to create a mental map of possible future conditions assuming a set of driving boundary conditions. For a scenario to be valuable it does not necessarily need to have predictive skill, although a range of scenarios can be and is being interpreted as a probability range for future conditions. A (single) scenario is mainly intended to feed someone’s imagination with a plausible, comprehensible and internally consistent picture. Used this way, even RCMs with limited predictive skill can be useful tools for scenario development and for providing supporting narratives that generate public awareness of or support for preparatory actions. For this, the RCM should be trustworthy in producing realistic and consistent patterns of regional climate variability, and abundant application, verification and improvement is necessary practice. Further development of RCMs into a Regional Earth System Exploration tool, by linking the traditional meteorological models to hydrological, biogeophysical and socio-economic components, can further increase their usefulness in practice.

Biosketch
Bart van den Hurk obtained a PhD on land surface modelling in Wageningen in 1996. Since then he has worked at the Royal Netherlands Meteorological Institute (KNMI) as a researcher, involved in studies addressing the modelling of land surface processes in regional and global climate models, data assimilation of soil moisture, and the construction of regional climate change scenarios. He is strongly involved with the KNMI global modelling project EC-Earth, and is co-author of the land surface modules of the European Centre for Medium-Range Weather Forecasts (ECMWF). Since 2005 he has been part-time professor of “Regional Climate Analysis” at the Institute for Marine and Atmospheric Research (IMAU) at Utrecht University, where he teaches master’s students, supervises PhD students and is involved in several research networks. Between 2007 and 2010 he was chair of the WCRP-endorsed Global Land-Atmosphere System Studies (GLASS) panel; since 2006 he has been a member of the council of the Netherlands Climate Changes Spatial Planning programme, and since 2008 a member of the board of the division “Earth and Life Sciences” of the Dutch Research Council (NWO-ALW). He is a convenor at a range of incidental and periodic conferences, and an editor for Hydrology and Earth System Sciences (HESS).

References

Berkhout, F., B. van den Hurk, J. Bessembinder, J. de Boer, B. Bregman and M. van Drunen (2013), Framing climate uncertainty: using socio-economic and climate scenarios in assessing climate vulnerability and adaptation; submitted to Regional Environmental Change.

Denis, B., R. Laprise, D. Caya and J. Côté, 2002: Downscaling ability of one-way-nested regional climate models: The Big-Brother experiment. Clim. Dyn. 18, 627-646.

Di Luca, A., Elía, R. & Laprise, R., 2012. Potential for small scale added value of RCM’s downscaled climate change signal. Climate Dynamics. Available at: http://www.springerlink.com/index/10.1007/s00382-012-1415-z

Hohenegger, C., P. Brockhaus, C. S. Bretherton, C. Schär (2009): The soil-moisture precipitation feedback in simulations with explicit and parameterized convection, in: Journal of Climate 22, pp. 5003–5020.

Lenderink, G., E. van Meijgaard and F. Selten (2009), Intense coastal rainfall in the Netherlands in response to high sea surface temperatures: analysis of the event of August 2006 from the perspective of a changing climate; Clim. Dyn., 32, 19-33, doi:10.1007/s00382-008-0366-x.

Min, E., W. Hazeleger, G.J. van Oldenborgh and A. Sterl (2013), Evaluation of trends in high temperature extremes in North-Western Europe in regional climate models; Environmental Research Letters, 8, 1, 014011, doi:10.1088/1748-9326/8/1/014011.

Pielke, R. A., Sr., and R. L. Wilby, 2012: Regional climate downscaling: What’s the point? Eos Trans. AGU, 93, 52, doi:10.1029/2012EO050008

Pielke, R. A., Sr., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding (2012), Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective, in Extreme Events and Natural Hazards: The Complexity Perspective, Geophys. Monogr. Ser., vol. 196, edited by A. S. Sharma et al., pp. 345–359, AGU, Washington, D. C., doi:10.1029/2011GM001086.

Pitman, A., N. de Noblet, F. Avila, L. Alexander, J.P. Boissier, V. Brovkin, C. Delire, F. Cruz, M.G. Donat, V. Gayler, B.J.J.M. van den Hurk, C. Reick and A. Voldoire (2012): Effects of land cover change on temperature and rainfall extremes in multi-model ensemble simulations; Earth System Dynamics, 3, 213-231, doi:10.5194/esd-3-213-2012.

Stegehuis, A., R. Vautard, R. Teuling, P. Ciais, M. Jung and P. Yiou (2012), Summer temperatures in Europe and land heat fluxes in observation-based data and regional climate model simulations; Climate Dynamics, in press, doi:10.1007/s00382-012-1559-x.

Van Haren, R., G.J. van Oldenborgh, G. Lenderink, M. Collins and W. Hazeleger (2012), SST and circulation trend biases cause an underestimation of European precipitation trends; Clim. Dyn., doi:10.1007/s00382-012-1401-5.

Van Oldenborgh, G.J., F.J. Doblas-Reyes, S.S. Drijfhout and E. Hawkins (2013), Reliability of regional climate model trends; Environmental Research Letters, 8, 1, 014055, doi:10.1088/1748-9326/8/1/014055.

Wramneby, A., B. Smith, and P. Samuelsson (2010), Hot spots of vegetation-climate feedbacks under future greenhouse forcing in Europe, J. Geophys. Res., 115, D21119, doi:10.1029/2010JD014307.

Guest blog Jason Evans

Are climate models ready to make regional projections?

Global Climate Models (GCMs) are designed to provide insight into the global climate system. They have been used to investigate the impacts of changes in various climate system forcings such as volcanoes, solar radiation, and greenhouse gases, and have proved themselves to be useful tools in this respect. The growing interest in GCM performance at regional scales, rather than global, has come from at least two different directions: the climate modelling community and the climate change adaptation community.

Due, in part, to the ever-increasing computational power available, GCMs are being continually developed and applied at higher spatial resolutions. Many GCM modelling groups have been increasing the resolution from ~250 km grid boxes 7 years ago to ~100 km grid boxes today. This increase in model resolution leads naturally to model development and evaluation exercises that pay closer attention to smaller scales, in this case regional instead of global scales. The fifth phase of the Coupled Model Intercomparison Project (CMIP5) provides a large ensemble of GCM simulations, many of which are at resolutions high enough to warrant evaluation at regional scales. Over the next few years these GCM simulations will be extensively evaluated, problems will be found (as seen in some early evaluations1,2), followed hopefully by solutions that lead to further model development and improved simulations. This step of finding a solution to an identified problem is the hardest in the model development cycle, and I applaud those who do it successfully.

Probably the stronger demand for regional-scale information from climate models is coming from the climate change adaptation community. Given only modest progress in climate change mitigation, adaptation to future climate change is required. Some sectors, such as those involved in large water resource projects (e.g. building a new dam), are particularly vulnerable to climate change. They are planning to invest large amounts of money (millions) in infrastructure, with planned lifetimes of 50-100 years, that directly depends on climate to be successful. Over such long lifetimes, greenhouse gas driven climate change is expected to increase temperature by a few degrees, and may cause significant changes in precipitation, depending on the location. Many of the systems required to adapt are more sensitive to precipitation than temperature, and projections of precipitation often have considerably more uncertainty associated with them. The question for the climate change adaptation community is whether the uncertainty (including model errors) in the projected climate change is small enough to be useful in a decision-making framework.

Regional projections
From a GCM perspective, then, the answer to “Are climate models ready to make regional projections?” is two-fold. For the climate modelling community the answer is yes. GCMs are being run at high enough resolution to make regional-scale evaluations and projections (so long as your regions are many hundreds of kilometres across) useful for informing model development and hopefully improving future simulations. For the climate change adaptation community, whose spatial scale of interest is often much finer than current high-resolution GCMs can capture, the answer in general is no. The errors in the simulated regional climate and the inter-model uncertainty in regional climate projections from GCMs are often too large to be useful in decision making. These climate change adaptation decisions need to be made, however, and in an effort to produce useful regional-scale climate information that embodies the global climate change a number of “downscaling” techniques have been developed.

It is worth noting that some climate variables, such as temperature, tend to be simulated better by climate models than other variables, such as precipitation. This is at least partly due to the scales and non-linearity of the physical processes which affect each variable. This is demonstrated in the fourth IPCC report, which mapped the level of agreement in the sign of the change in precipitation projected by GCMs. This map showed large parts of the world where GCMs disagreed about the sign of the change in precipitation. However, this vastly underestimated the agreement between the GCMs3: much of this area of disagreement actually consists of areas where the GCMs agree that the change will be small (or zero). That is, if the actual projected change is zero then, by chance, some GCMs will project small increases and some small decreases. This does not indicate disagreement between the models; rather, they all agree that the change is small.
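A toy illustration of this distinction, using a synthetic ensemble of projected precipitation changes at a single grid point (the values and the 5% “small change” threshold are assumptions, not taken from the cited study):

import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble of projected precipitation changes (%) at one grid point.
# The ensemble-mean change is close to zero, so by chance roughly half the
# models show drying and half show wetting.
changes = rng.normal(0.5, 3.0, size=30)

sign_agreement = max((changes > 0).mean(), (changes < 0).mean())
small_threshold = 5.0  # assumed threshold (%) below which a change is "small"
agree_small = (np.abs(changes) < small_threshold).mean()

print(f"fraction agreeing on the sign of the change: {sign_agreement:.2f}")
print(f"fraction agreeing the change is small: {agree_small:.2f}")
# Low sign agreement here does not mean the models disagree: most of them
# agree that the projected change is small.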

Regional climate models
Before describing the downscaling techniques, it may be useful to consider this question: What climate processes, that are important at regional scales, may be missing in GCMs?

The first set of processes relates directly to how well resolved land surface features such as mountains and coastlines are. Mountains cause local deviations in low-level air flow. When air is forced to rise to get over a mountain range it can trigger precipitation, and because of this, mountains are often a primary region for the supply of fresh water resources. At GCM resolution mountains are often under-represented in terms of height or spatial extent and so the models do not accurately capture this relationship with precipitation. In fact, some regionally important mountain ranges, such as the eastern Mediterranean coastal range or the Flinders Ranges in South Australia, are too small to be represented at all in some GCMs. Using higher spatial resolution to better resolve the mountains should improve the model's ability to capture this mountain-precipitation relationship.

Similarly, higher resolution allows model coastlines to be closer to the location of actual coastlines, and improves the ability to capture climate processes such as sea breezes.

The second set of processes is slightly more indirect and often involves an increase in the vertical resolution as well. These processes include the daily evolution of the planetary boundary layer, and the development of low-level and mountain barrier jets.

A simple rule of thumb is that one can expect downscaling to higher resolution to improve the simulation of regional climate in locations that include coastlines and/or mountain ranges (particularly where the range is too small to be well resolved by the GCM but large enough to be well resolved at the higher resolution) while not making much difference over large homogeneous, relatively flat regions (deserts, oceans,...).

So, there are physical reasons one might expect downscaling to higher resolution will improve the simulation of regional climate. How do we go about this downscaling?

Downscaling
Downscaling techniques can generally be divided into two types: statistical and dynamical. Statistical techniques generally use a mathematical method to form a relationship between the modelled climate and observed climate at an observation station. A wide variety of mathematical methods can be used but they all have two major limitations. First, they rely on long historical observational records to calculate the statistical relationship, effectively limiting the variables that can be downscaled to temperature and precipitation, and the locations to those stations where these long records were collected. Second, they assume that the derived statistical relationship will not change due to climate change.

Dynamical downscaling, or the use of Regional Climate Models (RCMs), does not share the limitations of statistical downscaling. The major limitation in dynamical downscaling is the computational cost of running the RCMs. This generally places a limit on both the spatial resolution (often tens of kilometres) and the number of simulations that can be performed to characterise uncertainty. RCMs also contain biases, both inherited from the driving GCM and generated within the RCM itself. It is worth noting that statistical downscaling techniques can be applied to RCM simulations as easily as to GCM simulations to obtain projections at station locations.

The RCM limitations are actively being addressed through current initiatives and research. Like GCMs, RCMs benefit from the continued increase in computational power, allowing more simulations to be run at higher spatial resolution. The need for more simulations to characterise uncertainty is being further addressed through international initiatives to have many modelling groups contribute simulations to the same ensembles (e.g. CORDEX - COordinated Regional climate Downscaling EXperiment http://wcrp-cordex.ipsl.jussieu.fr/). New research into model independence is also pointing toward ways to create more statistically robust ensembles4. Novel research to reduce (or eliminate) the bias inherited from the driving GCM is also showing promise5,6.
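To give a flavour of such bias-reduction approaches, here is a minimal sketch of a mean-state correction of a GCM field against a reference climatology before it is used to drive an RCM; it is a simplified stand-in for, not a reproduction of, the methods in refs 5 and 6, and all arrays are synthetic.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly-mean temperature climatologies (month, lat, lon).
reanalysis_clim = 10.0 + rng.normal(0.0, 2.0, (12, 20, 30))  # reference
gcm_hist = reanalysis_clim + 1.5 + rng.normal(0.0, 0.5, (12, 20, 30))  # biased GCM
gcm_future = gcm_hist + 2.0  # GCM projection with a +2 °C change signal

# The bias is estimated over the historical period and removed from both
# periods, so the climate-change signal (future minus historical) is preserved.
bias = gcm_hist - reanalysis_clim
gcm_hist_corrected = gcm_hist - bias
gcm_future_corrected = gcm_future - bias

print("mean bias removed:", float(bias.mean().round(2)), "°C")
print("signal preserved:",
      float((gcm_future_corrected - gcm_hist_corrected).mean().round(2)), "°C")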

Above I used simple physical considerations to suggest there would be some added value from regional models compared to global models. Others have investigated this from an observational viewpoint7,8 as well as through direct evaluation of model results at different scales9,10,11. In each case the results agreed with the rule of thumb given earlier. That is, in the areas with strong enough regional climate influences we do see improved simulations of regional climate at higher resolutions. Of course we are yet to address the question of whether the regional climate projections from these models have low enough uncertainty to be useful in climate change adaptation decision making.

Performance
To date RCMs have been used in many studies12,13 and a wide variety of evaluations of RCM simulations against observations have been performed. In attempting to examine the fidelity of the simulated regional climate, a variety of variables (temperature, precipitation, wind, surface pressure,...) have been evaluated using a variety of metrics14,15, with all the derived metrics then being combined to produce an overall measure of performance. When comprehensive assessments such as these are performed it is often found that different models have different strengths and weaknesses as measured by the different metrics. If one has a specific purpose in mind, e.g. building a new dam, one may wish to focus on metrics directly relevant to that purpose. Often the projected climate change is of interest, so the evaluation should include a measure of the model's ability to simulate change, often given as a trend over a recent historical period16,17. In most cases the RCMs are found to do a reasonably good job of simulating the climate of the recent past, though there are usually places and/or times where the simulation is not very good - not surprising for an active area of research and model development.

Given the evaluation results found to date, it is advisable to carefully evaluate each RCM's simulations before using any climate projections they produce. Being aware of where and when they perform well (or poorly) is important when assessing the climate change that the model projects. It is also preferable for the projections themselves to be examined with the aim of understanding the physical mechanisms causing the projected changes. Good process-level understanding of the causes behind the changes provides another mechanism through which to judge their veracity.

Ready for adaptation decisions?
Finally we come to the question of whether regional climate projections should be used in climate change adaptation decisions concerning infrastructure development. In the past such decisions were made assuming a stationary climate, such that observations of the past were representative of the future climate. So the real question here is: will the use of regional climate projections improve the decisions made, compared to the use of historical climate observations?

If the projected regional change is large enough that it falls outside the historical record even when considering the associated model errors and uncertainties, then it may indeed impact the decision. Such decisions are made within a framework that must consider uncertainty in many factors other than climate including future economic, technological and demographic pathways. Within such a framework, risk assessments are performed to inform the decision making process, and the regional climate projections may introduce a risk to consider that is not present in the historical climate record. If this leads to decisions which are more robust to future climate changes (as well as demographic and economic changes) then it is worthwhile including the regional climate projections in the decision making process.

Of course, this relies on the uncertainty in the regional climate projection being small enough for the information to be useful in a risk assessment process. Based on current models, this is not the case everywhere, and continued model development and improvement is required to decrease the uncertainty and increase the utility of regional climate projections for adaptation decision making.

Biosketch
Jason Evans has a B. Science (Physics) and a B. Math (hons) from the University of Newcastle, Australia. He has a Ph.D. in hydrology and climatology from the Australian National University. He worked for several years as a research associate at Yale University, USA, before moving to the University of New South Wales, Sydney, Australia. He is currently an Associate Professor in the Climate Change Research Centre there. His research involves general issues of regional climate and water cycle processes over land. He focuses at the regional (or watershed) scale and studies processes including river flow, evaporation/transpiration, water vapour transport and precipitation. He is currently Co-Chair of the GEWEX Regional Hydroclimate Panel, and coordinator of the Coordinated Regional Climate Downscaling Experiment (CORDEX), both elements of the World Climate Research Programme.

References

[1] van Oldenborgh GJ, Doblas Reyes FJ, Drijfhout SS, Hawkins E (2013) Reliability of regional climate model trends. Environmental Research Letters 8

[2] Bhend J, Whetton P (2013) Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation. Climatic Change

[3] Power SB, Delage F, Colman R, Moise A (2012) Consensus on twenty-first-century rainfall projections in climate models more widespread than previously thought. Journal of Climate 25:3792–3809

[4] Bishop CH, Abramowitz G (2012) Climate model dependence and the replicate Earth paradigm. Clim Dyn:1–16

[5] Xu Z, Yang Z-L (2012) An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations. Journal of Climate 25:6271–6286

[6] Colette A, Vautard R, Vrac M (2012) Regional climate downscaling with prior statistical correction of the global climate forcing. Geophys Res Lett 39:L13707

[7] di Luca A, Elía R de, Laprise R (2012) Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations. Clim Dyn 38:1229–1247

[8] di Luca A, Elía R de, Laprise R (2013) Potential for small scale added value of RCM’s downscaled climate change signal. Climate Dynamics 40:601–618

[9] Christensen OB, Christensen JH, Machenhauer B, Botzet M (1998) Very High-Resolution Regional Climate Simulations over Scandinavia—Present Climate. Journal of Climate 11:3204–3229

[10] Önol B (2012) Effects of coastal topography on climate: High-resolution simulation with a regional climate model. Clim Res 52:159–174

[11] Evans J, McCabe M (2013) Effect of model resolution on a regional climate model simulation over southeast Australia. Climate Research 56:131–145

[12] Wang Y, Leung L, McGregor J, Lee D, Wang W, Ding Y, Kimura F (2004) Regional climate modeling: Progress, challenges, and prospects. Journal of the Meteorological Society of Japan 82:1599–1628

[13] Evans JP, McGregor JL, McGuffie K (2012) Future Regional Climates. In: Henderson-Sellers A, McGuffie K (eds) The future of the World’s climate. Elsevier, p 223–252

[14] Christensen J, Kjellstrom E, Giorgi F, Lenderink G, Rummukainen M (2010) Weight assignment in regional climate models. Climate Research 44:179–194

[15] Evans J, Ekström M, Ji F (2012) Evaluating the performance of a WRF physics ensemble over South-East Australia. Climate Dynamics 39:1241–1258

[16] Lorenz P, Jacob D (2010) Validation of temperature trends in the ENSEMBLES regional climate model runs driven by ERA40. Climate Research 44:167–177

[17] Bukovsky MS (2012) Temperature trends in the NARCCAP regional climate models. Journal of Climate 25:3985–3991

Guest blog Roger Pielke Sr.

Are climate models ready to make regional projections?

The question that is addressed in my post is, with respect to multi-decadal model simulations, are global and/or regional climate models ready to be used for skillful regional projections by the impacts and policymaker communities?

This could also be asked as

Are skillful (value-added) regional and local multi-decadal predictions of changes in climate statistics for use by the water resource, food, energy, human health and ecosystem impact communities available at present?

As summarized in this post, the answer is NO.

In fact, the output of these models is routinely being provided to the impact communities and policymakers as robust scientific results when, in fact, it only provides an illusion of skill. Simply plotting high spatial resolution model results is not, by itself, a skillful product!

Skill is defined here as accurately predicting changes in climate statistics over this multi-decadal time period. This skill must be assessed by predicting the global, regional and local average climate, and any climate change that was observed over the last several decades (i.e. “hindcast model predictions”).

One issue that we need to make sure is clearly understood is the use of the terms “prediction” and “projection”. The term “projection”, of course, is just another word for a “prediction” when specified forcings are prescribed, e.g. different CO2 emission scenarios - see Pielke (2002). Thus “projection” and “prediction” are synonyms.

Dynamic and statistical downscaling is widely used to refine predictions from global climate models to smaller spatial scales. In order to classify the types of dynamic and statistical downscaling, Castro et al (2005) defined four categories. These are summarized in Pielke and Wilby (2012) and in Table 1 from that paper.

In the current post, I am referring specifically to Type 4 downscaling. For completeness, I list below all types of downscaling. The intent of downscaling is to achieve an accurate, higher spatial resolution representation of weather and other components of the climate system than is achievable with the coarser spatial resolution global model. Indeed, one test of a downscaled result is whether it agrees more with observations than do the global model results simply interpolated to a finer terrain and landscape map.
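One simple way such an added-value test could be framed is sketched below: compare the error against observations of the downscaled field with that of the coarse global field interpolated to the fine grid. The fields are synthetic one-dimensional stand-ins, not output from any real model.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" transect with fine-scale structure.
n = 200
obs = np.sin(np.linspace(0, 6 * np.pi, n)) + rng.normal(0, 0.1, n)

# Coarse global model: only every 20th point is resolved, then interpolated.
gcm_interp = np.interp(np.arange(n), np.arange(0, n, 20), obs[::20])
# Downscaled (regional) field: resolves the fine structure but has its own errors.
rcm = obs + rng.normal(0, 0.25, n)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"RMSE, interpolated global model: {rmse(gcm_interp, obs):.3f}")
print(f"RMSE, downscaled regional model: {rmse(rcm, obs):.3f}")
# Added value is claimed only if the downscaled RMSE is clearly smaller.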

Types of Downscaling

Type 1 downscaling is used for short-term, numerical weather prediction. In dynamic Type 1 downscaling, the regional model includes initial conditions from observations. In Type 1 statistical downscaling the regression relationships are developed from observed data and the Type 1 dynamic model predictions.

The Type 1 application of downscaling is operationally used by weather services worldwide. They provide very significant added regional and local prediction skill beyond what is available from the parent global model (e.g. see http://weather.rap.ucar.edu/model/). Millions of Type 1 forecasts are made every year and verified with real world weather, thus providing an opportunity for extensive quantitative testing of their predictive skill (Pielke Jr. 2010) .

Type 2 dynamic downscaling refers to regional weather (or climate) simulations in which the regional model’s initial atmospheric conditions are forgotten (i.e., the predictions do not depend on the specific initial conditions), but results still depend on the lateral boundary conditions from a global numerical weather prediction where initial observed atmospheric conditions are not yet forgotten, or are from a global reanalysis. Type 2 statistical downscaling uses the regression relationships developed for Type 1 statistical downscaling except that the input variables are from the Type 2 weather (or climate) simulation. Downscaling from reanalysis products (Type 2 downscaling) defines the maximum forecast skill that is achievable with Type 3 and Type 4 downscaling.

Type 2 downscaling provides an effective way to provide increased value-added information on regional and local spatial scales. It is important to recognize, however, that Type 2 downscaling is not a prediction (projection) for the cases where the global data come from a reanalysis (a reanalysis being a combination of real-world observations folded into a global model). Examples of this type of application are reported in Feser et al. (2011), Pielke (2013) and Mearns et al. (2012, 2013b).

When Type 2 results are presented to the impacts communities as a valid analysis of the skill of Type 4 downscaling, those communities are being misled on the actual robustness of the results in terms of multi-decadal projections. Type 2 results, even from global models used in a prediction mode, still retain real world information in the atmosphere (such as from long wave jet stream patterns), as well as sea surface temperatures, deep soil moisture, and other climate variables that have long term persistence.

Type 3 dynamic downscaling takes lateral boundary conditions from a global model prediction forced by specified real world surface boundary conditions, such as for seasonal weather predictions based on observed sea surface temperatures, but the initial observed atmospheric conditions in the global model are forgotten. Type 3 statistical downscaling uses the regression relationships developed for Type 1 statistical downscaling, except using the variables from the global model prediction forced by specified real-world surface boundary conditions.

Type 3 downscaling is applied, for example, for seasonal forecasts where slowly changing anomalies in the surface forcing (such as sea surface temperature) provide real-world information to constrain the downscaling results. Examples of the level of limited, but non-zero, skill achievable are given in Castro et al (2007) and Veljovic et al (2012).

Type 4 dynamic downscaling takes lateral boundary conditions from an Earth system model in which coupled interactions among the atmosphere, ocean, biosphere, and cryosphere are predicted [e.g., Solomon et al., 2007]. Other than terrain, all other components of the climate system are calculated by the model except for human forcings, including greenhouse gas emissions scenarios, which are prescribed. Type 4 dynamic downscaling is widely used to provide policy makers with impacts from climate decades into the future. Type 4 statistical downscaling uses transfer functions developed for the present climate, fed with large scale atmospheric information taken from Earth system models representing future climate conditions. It is assumed that statistical relationships between real-world surface observations and large scale weather patterns will not change.

The level of skill achievable deteriorates from Type 1 to Type 2 to Type 3 to Type 4 downscaling, as fewer observations are used to constrain the model realism. For Type 4, except for the prescribed forcing (such as added CO2), there are no real world constraints.

It is also important to realize that the models, while including aspects of basic physics (such as the pressure gradient force, advection, gravity), are actually engineering code. All of the parameterizations of the physics, chemistry and biology include tunable parameters and functions. For the atmospheric part of climate models, this engineering aspect of weather models is discussed in depth in Pielke (2013a). Thus, such engineering (parameterized) components can result in the drift of the model results away from reality when observations are not there to constrain this divergence from reality. The climate models are not basic physics.

There are two critical tests for skillful Type 4 model runs:

1. The model must provide accurate replications of the current climatic conditions on the global, regional and local scale. This means the model must be able to accurately predict the statistics of the current climate. This test, of course, needs to be performed in hindcast mode.

2. The model must also provide accurate predictions of changes in climatic conditions (i.e. the climatic statistics) on the regional and local scale. This means the model must be able to replicate the changes in climate statistics over this time period. This can also, of course, only be assessed by running the models in hindcast mode. A minimal sketch of how these two checks could be framed follows below.
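A minimal sketch, on synthetic regional temperature series, of how these two checks (mean-state bias and replication of the observed change) could be computed; all numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1979, 2013)

# Synthetic regional annual-mean temperatures (°C): observations and a hindcast.
obs = 14.0 + 0.010 * (years - years[0]) + rng.normal(0, 0.2, years.size)
model = 14.5 + 0.025 * (years - years[0]) + rng.normal(0, 0.3, years.size)

# Test 1: bias in the simulated mean climate over the hindcast period.
mean_bias = model.mean() - obs.mean()

# Test 2: simulated versus observed change, measured here as a linear trend.
obs_trend = np.polyfit(years, obs, 1)[0] * 10      # °C/decade
model_trend = np.polyfit(years, model, 1)[0] * 10  # °C/decade

print(f"mean-state bias:  {mean_bias:+.2f} °C")
print(f"observed trend:   {obs_trend:+.2f} °C/decade")
print(f"modelled trend:   {model_trend:+.2f} °C/decade")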

For Type 4 runs [i.e. multidecadal projections], the models being used include the CMIP5 projections.

As reported by the CMIP5 (Coupled Model Intercomparison Project Phase 5) project:

CMIP5 promotes a standard set of model simulations in order to:

· evaluate how realistic the models are in simulating the recent past,

· provide projections of future climate change on two time scales, near term (out to about 2035) and long term (out to 2100 and beyond), and

· understand some of the factors responsible for differences in model projections, including quantifying some key feedbacks such as those involving clouds and the carbon cycle

The CMIP5 runs, unfortunately, perform poorly with respect to the first bullet listed above, as documented below.

If the models are not sufficiently realistic in simulating the climate in the recent past, they are not ready to be used to provide projections for the coming decades!

A number of examples from the peer reviewed literature illustrate this inadequate performance.

Summary of a Subset of Peer-Reviewed Papers That Document the Limitations of the CMIP5 model projections with respect to Criterion #1.

· Taylor et al, 2012: Afternoon rain more likely over drier soils. Nature. doi:10.1038/nature11377. Published online 12 September 2012

“…the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to `lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.”

· Fyfe, J. C., W. J. Merryfield, V. Kharin, G. J. Boer, W.-S. Lee, and K. von Salzen (2011), Skillful predictions of decadal trends in global mean surface temperature, Geophys. Res. Lett.,38, L22801, doi:10.1029/2011GL049508

”….for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.”

· Xu, Zhongfeng and Zong-Liang Yang, 2012: An improved dynamical downscaling method with GCM bias corrections and its validation with 30 years of climate simulations. Journal of Climate 2012 doi: http://dx.doi.org/10.1175/JCLI-D-12-00005.1

”…the traditional dynamic downscaling (TDD) [i.e. without tuning] overestimates precipitation by 0.5-1.5 mm d-1.....The 2-year return level of summer daily maximum temperature simulated by the TDD is underestimated by 2-6°C over the central United States-Canada region.”

· Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110

".... local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale."

· Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.

"...models produce precipitation approximately twice as often as that observed and make rainfall far too lightly.....The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system .......little skill in precipitation [is] calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer‐scale resolution has little foundation and relevance to the real Earth system.”

· Sun, Z., J. Liu, X. Zeng, and H. Liang (2012), Parameterization of instantaneous global horizontal irradiance at the surface. Part II: Cloudy-sky component, J. Geophys. Res., doi:10.1029/2012JD017557, in press.

“Radiation calculations in global numerical weather prediction (NWP) and climate models are usually performed in 3-hourly time intervals in order to reduce the computational cost. This treatment can lead to an incorrect Global Horizontal Irradiance (GHI) at the Earth’s surface, which could be one of the error sources in modelled convection and precipitation. …… An important application of the scheme is in global climate models….It is found that these errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds….”

· Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, Matthew Collins and Wilco Hazeleger, 2012: SST and circulation trend biases cause an underestimation of European precipitation trends Climate Dynamics 2012, DOI: 10.1007/s00382-012-1401-5

“To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.”

· Driscoll, S., A. Bozzo, L. J. Gray, A. Robock, and G. Stenchikov (2012), Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions, J. Geophys. Res., 117, D17105, doi:10.1029/2012JD017607. published 6 September 2012.

“The models generally fail to capture the NH dynamical response following eruptions. ……The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings.”

· Mauritsen, T., et al. (2012), Tuning the climate of a global model, J. Adv. Model. Earth Syst., 4, M00A01, doi:10.1029/2012MS000154. published 7 August 2012

“During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system…..The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution.”

· Jiang, J. H., et al. (2012), Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations, J. Geophys. Res., 117, D14105, doi:10.1029/2011JD017237. published 18 July 2012.

“The modeled mean CWCs [cloud water] over tropical oceans range from ∼3% to ∼15× of the observations in the UT and 40% to 2× of the observations in the L/MT. For modeled H2Os, the mean values over tropical oceans range from ∼1% to 2× of the observations in the UT and within 10% of the observations in the L/MT….Tropopause layer water vapor is poorly simulated with respect to observations. This likely results from temperature biases.”

· Van Oldenborgh, G.J., F.J. Doblas-Reyes, B. Wouters, W. Hazeleger (2012): Decadal prediction skill in a multi-model ensemble. Clim. Dyn. doi:10.1007/s00382-012-1313-4, who report quite limited predictive skill in two regions of the oceans on the decadal time period, but no regional skill elsewhere, when they conclude that

“A 4-model 12-member ensemble of 10-yr hindcasts has been analysed for skill in SST, 2m temperature and precipitation. The main source of skill in temperature is the trend, which is primarily forced by greenhouse gases and aerosols. This trend contributes almost everywhere to the skill. Variation in the global mean temperature around the trend do not have any skill beyond the first year. However, regionally there appears to be skill beyond the trend in the two areas of well-known low-frequency variability: SST in parts of the North Atlantic and Pacific Oceans is predicted better than persistence. A comparison with the CMIP3 ensemble shows that the skill in the northern North Atlantic and eastern Pacific is most likely due to the initialisation, whereas the skill in the subtropical North Atlantic and western North Pacific are probably due to the forcing.”

· Sakaguchi, K., X. Zeng, and M. A. Brunke (2012), The hindcast skill of the CMIP ensembles for the surface air temperature trend, J. Geophys. Res., 117, D16113, doi:10.1029/2012JD017765. published 28 August 2012

“…the skill for the regional (5° × 5° – 20° × 20° grid) and decadal (10 – ∼30-year trends) scales is rather limited…. The mean bias and ensemble spread relative to the observed variability, which are crucial to the reliability of the ensemble distribution, are not necessarily improved with increasing scales and may impact probabilistic predictions more at longer temporal scales.”

· Kundzewicz, Z. W., and E.Z. Stakhiv (2010) Are climate models “ready for prime time” in water resources management applications, or is more research needed? Editorial. Hydrol. Sci. J. 55(7), 1085–1089, who conclude that “Simply put, the current suite of climate models were not developed to provide the level of accuracy required for adaptation-type analysis.”

Even on the global average scale, the multi-decadal global climate models are performing poorly since 1998, as very effectively shown in an analysis by John Christy in the post by Roy Spencer which is reproduced below. As reported in Roy’s post, these plots by John are based upon data from the KNMI Climate Explorer with a comparison of 44 climate models versus the UAH and RSS satellite observations for global lower tropospheric temperature variations, for the period 1979-2012 from the satellites, and for 1975 – 2025 for the models.

Thus the necessary criterion #1 is not satisfied. Obviously, the first criterion must be satisfactorily addressed before one can have any confidence in the second criterion, the claim to skillfully predict changes in climate statistics.

Summary
To summarize the current state of modeling and the use of regional models to downscale for multi-decadal projections, as reported in Pielke and Wilby (2012):

1. The multi-decadal global climate model projections must include all first-order climate forcings and feedbacks, which, unfortunately, they do not.

2. Current global multi-decadal predictions are unable to skillfully simulate regional forcing by major atmospheric circulation features such as El Niño and La Niña and the South Asian monsoon, much less changes in the statistics of these climate features. These features play a major role in climate impacts at the regional and local scales.

3. While regional climate downscaling yields higher spatial resolution, the downscaling is strongly dependent on the lateral boundary conditions and the methods used to constrain the regional climate model variables to the coarser spatial scale information from the parent global models. Large-scale climate errors in the global models are retained and could even be amplified by the higher-spatial-resolution regional models. If the global multi-decadal climate model predictions do not accurately predict large-scale circulation features, for instance, they cannot provide accurate lateral boundary conditions and interior nudging to regional climate models. The presence of higher spatial resolution information in the regional models, beyond what can be accomplished by interpolation of the global model output to a finer grid mesh, is only an illusion of added skill.

4. Apart from variable grid approaches, regional models do not have the domain scale (or two-way interaction between the regional and global models) to improve predictions of the larger-scale atmospheric features. This means that if the regional model significantly alters the atmospheric and/or ocean circulations, there is no way for this information to affect larger scale circulation features that are being fed into the regional model through the lateral boundary conditions and nudging.

5. The lateral boundary conditions for input to regional downscaling require regional-scale information from a global forecast model. However, the global model does not have this regional-scale information due to its limited spatial resolution. This creates a logical paradox: the regional model needs information that can be supplied only by a regional model (or by regional observations). The acquisition of lateral boundary conditions with the needed spatial resolution therefore becomes logically impossible. Thus, even with the higher resolution analyses of terrain and land use in the regional domain, the errors and uncertainty from the larger model still persist, rendering the added simulated spatial details inaccurate.

6. There is also an assumption that although global climate models cannot predict future climate change as an initial value problem, they can predict future climate statistics as a boundary value problem [Palmer et al., 2008]. However, for regional downscaling (and global) models to add value (beyond what is available to the impacts community via the historical, recent paleorecord and a worst case sequence of days), they must be able to skillfully predict changes in regional weather statistics in response to human climate forcings. This is a greater challenge than even skillfully simulating current weather statistics.

It is therefore inappropriate to present Type 4 results to the impacts community as reflecting more than a subset of possible (plausible) future climate risks. As I wrote in Pielke (2011) with respect to providing multi-decadal climate predictions to the impacts and policy communities, there is a

“serious risk of overselling what [can be] provide[d] to policy makers. A significant fraction of the funds they are seeking for prediction could more effectively be used if they were spent on assessing risk and ways to reduce the vulnerability of local/regional resources to climate variability and change and other environmental issues using the bottom-up, resources-based perspective discussed in Pielke and Bravo de Guenni (2004), Pielke (2004), and Pielke et al. (2009). This bottom-up focus is “of critical interest to society.”

We wrote this recommendation also in Pielke and Wilby (2012):

As a more robust approach, we favor a bottom-up, resource-based vulnerability approach to assess the climate and other environmental and societal threats to critical assets [Wilby and Dessai, 2010; Kabat et al., 2004]. This framework considers the coping conditions and critical thresholds of natural and human environments beyond which external pressures (including climate change) cause harm to water resources, food, energy, human health, and ecosystem function. Such an approach could assist policy makers in developing more holistic mitigation and adaptation strategies that deal with the complex spectrum of social and environmental drivers over coming decades, beyond carbon dioxide and a few other greenhouse gases.

This is a more robust way of assessing risks, including those from climate, than the approach adopted by the Intergovernmental Panel on Climate Change (IPCC), which is based primarily on downscaling from multi-decadal global climate model projections. A vulnerability assessment using the bottom-up, resource-based framework is a more inclusive approach: it allows policy makers to adopt effective mitigation and adaptation methodologies that deal with the full spectrum of social and environmental extreme events expected in the coming decades, because the whole range of threats is assessed rather than just CO2 and a few other greenhouse gases, as emphasized in the IPCC assessments.

This need to develop a broader approach was even endorsed in the climate research assessment and recommendations in the “Report Of The 2004-2009 Research Review Of The Royal Netherlands Meteorological Institute”.

In this 2011 report, we wrote

The generation of climate scenarios for plausible future risk, should be significantly broadened in approach as the current approach assesses only a limited subset of possible future climate conditions. To broaden the approach of estimating plausible changes in climate conditions in the framing of future risk, we recommend a bottom-up, resource-based vulnerability assessment for the key resources of water, food, energy, human health and ecosystem function for the Netherlands. This contextual vulnerability concept requires the determination of the major threats to these resources from climate, but also from other social and environmental issues. After these threats are identified for each resource, then the relative risk from natural- and human-caused climate change (estimated from the global climate model projections, but also the historical, paleo-record and worst case sequences of events) can be compared with other risks in order to adopt the optimal mitigation/adaptation strategy.

Since the 2011 report (I was a member of the committee that wrote it), I have come to feel that using the global climate model projections, downscaled or not, to provide regional and local impact assessments on multi-decadal time scales is not an effective use of money and other resources. If the models cannot even accurately simulate current climate statistics when they are not constrained by real-world data, the expense of running them to produce detailed spatial maps is not worthwhile. Indeed, it is counterproductive, as it gives the impact community and policymakers an erroneous impression of their value.

A robust approach is to use historical, paleo-record and worst-case sequences of climate events. To this list can be added perturbation scenarios that start from a regional reanalysis (e.g. arbitrarily adding a 1 °C increase in minimum temperature in winter, a 10-day increase in the growing season, a doubling of major hurricane landfalls on the Florida coast, etc.). There is no need to run the multi-decadal global and regional climate projections to achieve these realistic (plausible) scenarios.

Hopefully, our debate on this weblog will foster a movement away from the overselling of multi-decadal climate model projections to the impact and policy communities. I very much appreciate the opportunity to present my viewpoint in this venue.

Biosketch
Roger A. Pielke Sr. is currently a Senior Research Scientist in CIRES and a Senior Research Associate in the Department of Atmospheric and Oceanic Sciences (ATOC) at the University of Colorado in Boulder (November 2005 - present). He is also an Emeritus Professor of Atmospheric Science at Colorado State University and held a five-year appointment (April 2007 - March 2012) on the Graduate Faculty of Purdue University in West Lafayette, Indiana.
Pielke has studied terrain-induced mesoscale systems, including the development of a three-dimensional mesoscale model of the sea breeze, for which he received the NOAA Distinguished Authorship Award for 1974. Dr. Pielke has worked for NOAA's Experimental Meteorology Lab (1971-1974), The University of Virginia (1974-1981), and Colorado State University (1981-2006). He served as Colorado State Climatologist from 1999-2006. He was an adjunct faculty member in the Department of Civil and Environmental Engineering at Duke University in Durham, North Carolina (July 2003-2006). He was a visiting Professor in the Department of Atmospheric Sciences at the University of Arizona from October to December 2004.
Roger Pielke Sr. was elected a Fellow of the AMS in 1982 and a Fellow of the American Geophysical Union in 2004. From 1993-1996, he served as Editor-in-Chief of the US National Science Report to the IUGG (1991-1994) for the American Geophysical Union. From January 1996 to December 2000, he served as Co-Chief Editor of the Journal of Atmospheric Science. In 1998, he received NOAA's ERL Outstanding Scientific Paper (with Conrad Ziegler and Tsengdar Lee) for a modeling study of the convective dryline. He was designated a Pennsylvania State Centennial Fellow in 1996, and named the Pennsylvania State College of Earth and Mineral Sciences Alumni of the year for 1999 (with Bill Cotton). He is currently serving on the AGU Focus Group on Natural Hazards (August 2009-present) and the AMS Committee on Planned and Inadvertent Weather Modification (October 2009-present). He is one of three faculty listed by ISI HighlyCited in Geosciences at Colorado State University and one of four members so listed at the University of Colorado at Boulder.
Dr. Pielke has published over 370 papers in peer-reviewed journals, 55 chapters in books, co-edited 9 books, and made over 700 presentations during his career to date. A listing of papers can be viewed at the project website: http://cires.colorado.edu/science/groups/pielke/pubs/. He also launched a science weblog in 2005 to discuss weather and climate issues. This weblog was named one of the 50 most popular Science blogs by Nature Magazine on July 5, 2006 and is located at http://pielkeclimatesci.wordpress.com/.

References

Castro, C.L., R.A. Pielke Sr., and G. Leoncini, 2005: Dynamical downscaling: Assessment of value retained and added using the Regional Atmospheric Modeling System (RAMS). J. Geophys. Res. - Atmospheres, 110, No. D5, D05108, doi:10.1029/2004JD004721. http://pielkeclimatesci.wordpress.com/files/2009/10/r-276.pdf

Castro, C.L., R.A. Pielke Sr., J. Adegoke, S.D. Schubert, and P.J. Pegion, 2007: Investigation of the summer climate of the contiguous U.S. and Mexico using the Regional Atmospheric Modeling System (RAMS). Part II: Model climate variability. J. Climate, 20, 3866-3887. http://pielkeclimatesci.wordpress.com/files/2009/10/r-307.pdf

Mearns, Linda O. , Ray Arritt, Sébastien Biner, Melissa S. Bukovsky, Seth McGinnis, Stephan Sain, Daniel Caya, James Correia, Jr., Dave Flory, William Gutowski, Eugene S. Takle, Richard Jones, Ruby Leung, Wilfran Moufouma-Okia, Larry McDaniel, Ana M. B. Nunes, Yun Qian, John Roads, Lisa Sloan, Mark Snyder, 2012: The North American Regional Climate Change Assessment Program: Overview of Phase I Results. Bull. Amer.Met Soc. September issue. pp 1337-1362.

Mearns, L.O. , R. Leung, R. Arritt, S. Biner, M. Bukovsky, D. Caya, J. Correia, W. Gutowski, R. Jones, Y. Qian, L. Sloan, M. Snyder, and G. Takle 2013: Reply to R. Pielke, Sr. Commentary on Mearns et al. 2012. Bull. Amer. Meteor. Soc., in press. doi: 10.1175/BAMS-D-13-00013.1. http://pielkeclimatesci.files.wordpress.com/2013/02/r-372a.pdf

Pielke, R.A. Jr., 2010: The Climate Fix: What Scientists and Politicians Won't Tell You About Global Warming. Basic Books. http://www.amazon.com/Climate-Fix-Scientists-Politicians-Warming/dp/B005CDTWBS#_

Pielke Sr., R.A., 2002: Overlooked issues in the U.S. National Climate and IPCC assessments. Climatic Change, 52, 1-11. http://pielkeclimatesci.wordpress.com/files/2009/10/r-225.pdf

Pielke Sr., R.A., 2004: A broader perspective on climate change is needed. Global Change Newsletter, No. 59, IGBP Secretariat, Stockholm, Sweden, 16–19. http://pielkeclimatesci.files.wordpress.com/2009/09/nr-139.pdf

Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union. http://pielkeclimatesci.wordpress.com/files/2009/12/r-354.pdf

Pielke Sr., R.A., 2010: Comments on “A Unified Modeling Approach to Climate System Prediction”. Bull. Amer. Meteor. Soc., 91, 1699–1701, DOI:10.1175/2010BAMS2975.1, http://pielkeclimatesci.files.wordpress.com/2011/03/r-360.pdf

Pielke Sr, R.A., 2013a: Mesoscale meteorological modeling. 3rd Edition, Academic Press, in press.

Pielke Sr., R.A. 2013b: Comment on “The North American Regional Climate Change Assessment Program: Overview of Phase I Results.” Bull. Amer. Meteor. Soc., in press. doi: 10.1175/BAMS-D-12-00205.1. http://pielkeclimatesci.files.wordpress.com/2013/02/r-372.pdf

Pielke, R. A. Sr.and L. Bravo de Guenni, 2004: Conclusions. Vegetation, Water, Humans and the Climate: A New Perspective on an Interactive System, P. Kabat et al., Eds., Springer, 537–538. http://pielkeclimatesci.files.wordpress.com/2010/01/cb-42.pdf

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008. http://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf

Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairaku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. Extreme Events and Natural Hazards: The Complexity Perspective Geophysical Monograph Series 196 © 2012. American Geophysical Union. All Rights Reserved. 10.1029/2011GM001086. http://pielkeclimatesci.files.wordpress.com/2012/10/r-3651.pdf

Pielke Sr, R.A., Editor in Chief., 2013: Climate Vulnerability, Understanding and Addressing Threats to Essential Resources, 1st Edition. J. Adegoke, F. Hossain, G. Kallos, D. Niyoki, T. Seastedt, K. Suding, C. Wright, Eds., Academic Press, 1570 pp. Available May 2013.

Pielke et al, 2013: Introduction. http://pielkeclimatesci.files.wordpress.com/2013/04/b-18intro.pdf

Veljovic, K., B. Rajkovic, M.J. Fennessy, E.L. Altshuler, and F. Mesinger, 2010: Regional climate modeling: Should one attempt improving on the large scales? Lateral boundary condition scheme: Any impact? Meteorologische Zeits., 19:3, 237-246. DOI 10.1127/0941-2948/2010/0460.

Long-term persistence and trend significance

In this second Climate Dialogue we focus on long-term persistence (LTP) and the consequences it has for trend significance.

We slightly changed the procedure compared to the first Climate Dialogue (which was about Arctic sea ice). This time we asked the invited experts to write a first reaction on the guest blogs of the others, describing their agreement and disagreement with it. We publish the guest blogs and these first reactions at the same time.

We apologise again for the long delay. As we explained in our first evaluation it turned out to be extremely difficult to find the right people (representing a range of views) for the dialogues we had in mind.

Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Bart Verheggen

Introduction long-term persistence

How persistent is the climate and what is its implication for the significance of trends?

The Earth is warmer now than it was 150 years ago. This fact itself is uncontroversial. How to interpret this warming, however, is not trivial. The attribution of this warming to anthropogenic causes relies heavily on an accurate characterization of the natural behavior of the system. Here we will discuss how statistical assumptions influence the interpretation of the measured global warming.

Most experts agree that three types of processes (internal variability, natural and anthropogenic forcings) play a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (< 10 %) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’ which include not only observations but also physical insights in the climate processes.

Detection and attribution
The IPCC definitions of detection and attribution are:

“Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”

“Attribution of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence.”

The phrase ‘change in some defined statistical sense’ in the definition for detection turns out to be the starting point for our discussion. Because what is the ‘right’ statistical model (assumption) to conclude whether a change is significant or not?

According to AR4, “An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” The magnitude of internal variability can be estimated in different ways, e.g. by control runs of global climate models, by characterising the statistical behaviour of the time series, by inspection of the spatial and temporal fingerprints in observations, and by comparing models and observations (e.g. via their respective power spectra, cf. fig 9.7 in AR4).

Long-term persistence
Critics argue though that most if not all changes in the climatological time series are an expression of long-term persistence (LTP). Long-term persistence means there is a long memory in the system, although unlike a random walk it remains bounded in the very long run. There are stochastic/unforced fluctuations on all time scales. More technically, the autocorrelation function goes to zero algebraically (very slowly). These critics argue that by taking LTP into account trend significance is reduced by orders of magnitude compared to statistical models that assume short-term persistence (AR1), as was applied e.g. in the illustrative trend estimates in table 3.2 of AR4 [1,2].
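For orientation, these two assumptions can be written compactly (standard textbook forms, not taken from the cited papers). A short-memory AR(1) process has an autocorrelation that decays exponentially with lag k,

\rho(k) = \varphi^{k},

whereas an LTP (Hurst-Kolmogorov) process has an autocorrelation that decays only algebraically,

\rho(k) \approx c\, k^{2H-2}, \qquad 0.5 < H < 1,

so that correlations at large lags remain non-negligible, the effective number of independent data points is much smaller, and estimated trends are correspondingly less significant.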

This has consequences for attribution as well, since long term persistence is often assumed to be a sign of unforced (internal) variability (e.g. Cohn and Lins, 2005; Rybski et al, 2006). In reaction to Cohn and Lins (2005), Rybski et al. (2006)[3] concluded that even when LTP is taken into account at least part of the recent warming cannot be solely related to natural factors and that the recent clustering of warm years is very unusual (see also Zorita (2008)[4]). Bunde and Lennartz also looked extensively at the consequences of LTP for trend significance and concluded that globally averaged temperature data on land do show a significant trend, but the sea surface temperatures don’t. [5,6]

These different conclusions translate directly into the question of how important the statistical model used is for determining the significance of the observed trends.

Climate Dialogue
Although the IPCC definition for detection seems to be clear, the phrase ‘change in some defined statistical sense’ leaves a lot of wiggle room. For the sake of a focussed discussion we define here the detection of climate change as showing that some of this change is outside the bounds of internal climate variability. The focus of this discussion is how to best apply statistical methods and physical understanding to address this question of whether the observed changes are outside the bounds of internal variability. Discussions about the physical mechanisms governing the internal variability are also welcome.

Specific questions

1. What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?

2. Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?

3. What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?

4. How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?

5. Based on your statistical model of preference do you conclude that there is a significant warming trend?

6. Based on your statistical model of preference what is the probability that 11 of the warmest years in a 162 year long time series (HadCrut4) all lie in the last 12 years?

7. If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?

[1] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi:10.1029/2005GL024476
[2] Koutsoyiannis, D., Montanari, A., Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resour. Res., Vol. 43, W05429, doi:10.1029/2006WR005592, 2007
[3] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591
[4] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi:10.1029/2008GL036228
[5] Lennartz, S., and A. Bunde. "Trend evaluation in records with long‐term memory: Application to global warming." Geophysical Research Letters 36.16 (2009)
[6] Lennartz, Sabine, and Armin Bunde. "Distribution of natural trends in long-term correlated records: A scaling approach." Physical Review E 84.2 (2011): 021129

Guest blog Rasmus Benestad

The debate about trends gets lost in statistics...

Rasmus E. Benestad, senior researcher, Norwegian Meteorological Institute

Figure 1. The recorded changes in the global mean temperature over time (red). The grey curve shows a calculation of the temperature based on greenhouse gases, ozone, and changes in the sun.

From time to time, the question pops up whether the global warming recorded by a network of thermometers around the globe is a result of natural causes, or if the warming is forced by changes in the atmospheric concentrations of greenhouse gases (GHGs).

In 2005, a scientific paper [1] suggested that statistical models describing random long-term persistence (LTP) could produce trends similar to those seen in the global mean temperature. I wrote a comment on this paper at the time on RealClimate.org with the title Naturally trendy?.

More recently, another paper [2] followed somewhat similar ideas, although for the Arctic temperature rather than the global mean. A discussion ensued after my posting of ‘What is signal and what is noise?’ on RealClimate.org. A meeting is planned in Tromsø, Norway in the beginning of May to discuss our differences - much in the spirit of Climate Dialogue.

Weather statistics
It is easy to make a statement about the Earth’s climate, but what is the story behind the different views? And what is really the problem?

If we start from scratch, we first need to have a clear idea about what we mean by climate and climate change. Often, climate is defined as the typical weather, described by the weather statistics: what range of temperature and rainfall we can expect, and how frequently. Experts usually call this kind of statistics ‘frequency distribution’ or ‘probability density functions’ (pdfs).

A climate change happens when the weather statistics are shifted: weather that was typical in the past is no longer typical. It involves a sustained change rather than short-lived variations. New weather patterns emerge during a climate change.

At the same time, there is little doubt that some primary features of our climate involve short-lived natural ‘spontaneous’ fluctuations, caused by the climate itself. The natural fluctuations are distinct from a forced climate change in terms of their duration [3].

A diagnostic for climate change involves statistical analyses to assess whether the present range and frequency of temperature and precipitation are different from those of the past.

Natural variations
However, the presence of slower natural variations in the climate makes it difficult to make a correct diagnosis.

Long-term persistence (LTP) describes how slow physical processes change over time, where the gradual nature is due to some kind of ‘memory’. This memory may involve some kind of inertia, or the time it takes for physical processes to run their course. Changes over large spatial scales take longer than local changes.

The diffusion of heat and transport of mass and energy are subject to finite flow speeds, and the time it takes for heat to leak into the surroundings is well understood. Often the response decays exponentially over time.

There may be more complex behaviour when different physical processes intervene and affect each other, such as the oceans and the atmosphere [4]. The oceans are sluggish, and the density and heat capacity of water are much higher than those of air. Hence the ocean acts like a flywheel: once it gets moving, it will go on for a while.

There are some known examples of LTP processes, such as the ice ages, changes in the ocean circulation, and the El Niño Southern Oscillation.

We also know that the Earth’s atmosphere is non-linear (‘chaotic’) and may settle into different states, known as ‘weather regimes’ [5-7]. Such behaviour may also produce LTP. Changes in the oceans through the overturning of water masses can result in different weather regimes.

Laws of physics
It is also possible to show that LTP can arise when many processes that individually do not have LTP are combined. For example, river flow may exhibit LTP characteristics that result from the aggregation of rainfall over several watersheds.

We should not forget that long-term changes in the forcing from GHGs also result in LTP behaviour [3].

The climate involves more than just observations and statistics, and like everything else in the universe, it must obey the laws of physics. From this angle, climate change is an imbalance in terms of energy and heat.

It is fairly straightforward to track the heat stored in the oceans, as warmer water expands. The global sea level thus provides an indicator of Earth’s accumulation of heat over time [3].

Hence, the diagnosis (“detection”) of a climate change is not purely a matter of statistics. The laws of physics set fundamental constraints which let us narrow the possibilities down to a small number of ‘suspects’.

Energy and mass budgets are central, but also the hydrological cycle is entangled with the temperatures and the oceans [8]. Modern measurements provide a comprehensive picture: we see changes in the circulation and rainfall patterns.

Classical mistakes
There are two classical mistakes made in the debate about climate change and LTP: (a) looking only at a single aspect (one single time series) isolated from the rest of Earth’s climate and (b) confusing signal for noise.

The term ‘signal’ can have different meanings depending on the question, but here it refers to manmade climate change. ‘Noise’ usually means everything else, and LTP is ‘noise in slow motion’.

There are always physical processes driving both LTP and spontaneous changes on Earth (known as the weather), and these must be subject to study if we want to separate noise from signal.

If an upward trend in the temperature were to be caused by internal ocean overturning, then this would imply a movement of heat from the oceans to the air and land surface. Energy must be conserved. When the heat content increases in both the oceans [9] and the air, and the ice is melting, then it is evident that the trend cannot be explained in terms of LTP.

The other mistake is neglecting the question: What is signal and what is noise? Some researchers have adapted statistical trend models to mimic the evolution seen in measured data [1,2]. These models must be adapted to the data by adjusting a number of parameters so that they can reproduce the LTP behaviour.

In science, we often talk about a range of different types of models, and they come in all sorts of sizes and shapes. A climate model calculates the temperature, air flow, rainfall, and ocean currents over the whole globe, based on our knowledge about the physics. Statistical trend models, on the other hand, take past measurements for which they try to find statistical curves that follow the data.

Statistical models are sometimes fed random numbers in order to produce a result that looks like noise [10]. It is concluded that the measured data are a result of a noisy process if these models produce results that look the same. In other words, these models are used to establish a benchmark for assessing trends.
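A minimal sketch of this benchmarking idea follows (an illustration only, not the method of any of the cited papers; the AR(1) coefficient, series length and number of simulations are arbitrary placeholders). Surrogate noise series with a prescribed autocorrelation are generated, and the spread of their fitted linear trends provides the benchmark against which an observed trend can be judged.

import numpy as np

def ar1_series(n, phi, sigma=1.0, rng=None):
    # Generate an AR(1) noise series: x[t] = phi * x[t-1] + eps[t]
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def noise_trend_benchmark(n_years=134, phi=0.6, n_sim=5000, seed=1):
    # Distribution of linear trends (per year) arising purely from AR(1) noise.
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    return np.array([np.polyfit(t, ar1_series(n_years, phi, rng=rng), 1)[0]
                     for _ in range(n_sim)])

slopes = noise_trend_benchmark()
# An observed trend exceeding e.g. the 97.5th percentile of |slopes| would be judged
# unlikely to arise from this particular noise model alone.
print(np.percentile(np.abs(slopes), 97.5))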

Statistical trend models may, however, produce misguided answers if proper care is not taken. For example, those adapted to data containing both signal and noise cannot be used to infer whether the observed trends are unusual or not. The LTP associated with GHGs can be quite substantial, and forced trends in measured temperatures will fool the statistical models which assume the LTP is due to noise.

The misapplication of statistical trend models can easily be demonstrated by subjecting them to certain tests. The statistical trend models describing LTP make use of information embedded in the data, revealed by their respective autocorrelation function (ACF).
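For reference, the sample ACF itself can be estimated in a few lines (a generic sketch, not the calculation behind Figure 2):

import numpy as np

def acf(x, max_lag=30):
    # Sample autocorrelation of a 1-D series for lags 0..max_lag.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])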

Figure 2 below shows a comparison between two ACFs for temperature for the Arctic (area mean above 60N), taken from two different climate model simulations. One simulation represents the past (black) driven with historical changes in GHGs. The other (grey) describes a hypothetical world where the GHGs were constant, representing a ‘stable’ climate in terms of external forcings.

It is clear that the ACFs differ, and the statistical models used to assess trends and LTP rely on the shape of the ACF.

Figure 2. The upper graph shows the annual mean temperature for the Arctic simulated by a climate model. The black curve shows the year-to-year variations for a run where the model was given the observed GHG measurements. The grey curve shows a similar run, but where the GHGs do not change. The bottom panel shows the ACF, and the black curve indicates that the effect of GHGs on temperature is to increase the LTP. For the assessment of trends, the statistical models should be trained on the grey curve, for which we know there is no forced trend and where all the variations are due to changes in the oceans.

Circular reasoning
It is the way models are used that really matters, rather than the specific model itself. All models are based upon a set of assumptions, and if these are violated, then the models tend to give misleading answers. Statistical LTP-noise models used for the detection of trends involve circular reasoning if they are adapted to measured data, because these data embed both signal and noise.

For real-world data, we do not know what fraction of the variations is LTP-noise and what fraction is signal.

We can nevertheless specify the degree of forcing in climate model simulations, and then use these results to test the statistical models. Even if the climate models are not completely perfect, they serve as a test bench [11].

The important assumption is therefore that the statistical trend models, against which the data are benchmarked, really provide a reliable description of the noise.

We need more than a century-long time series to make a meaningful inference about LTP if natural variations have time scales of 70-90 years. Most thermometer records do not go back much further than a century, although there are some exceptions.

It is possible to remedy the lack of thermometer records to some extent by supplementing the information with evidence based on e.g. tree rings, sediment samples, and ice cores [3,12,13]. Still, such evidence tends to be limited to temperature, whereas climate change involves a whole range of elements.

For all intents and purposes, however, it is important to account for both natural and forced climate change on these time scales. Most people would worry more about the combined effect of these components, as natural variations may be just as devastating as forced ones. For most people, the distinction between trend and noise is an academic question. For politicians, it is a question about cutting GHG emissions.

Regression
Another side to the story is that the magnitude of the unforced LTP variations may give us an idea about the climate’s sensitivity to changing conditions. Often, such cycles are caused by delayed but reinforcing processes. Conditions which amplify an initial response are inherent to the atmosphere, and may act both on the forced response (GHGs) and on internal variations. Damping mechanisms will also tend to suppress oscillations, as is well known from many different physical systems.

The combination of statistical information and physics knowledge leads to only one plausible explanation for the observed global warming, global mean sea level rise, melting of ice, and accumulation of ocean heat. That explanation is the increased concentrations of GHGs.

We can also use another statistical technique for diagnosing a cause [11], one that is also used in the medical sciences and is known as ‘regression analysis’. Figure 1 shows the results of a multiple regression analysis with inputs representing expected physical connections to Earth’s climate. In this case, the regression analysis used GHGs, ozone and changes in the sun’s intensity as inputs, and the results followed the HadCRUT4 data closely.
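As a generic illustration of the technique (this is not the actual analysis behind Figure 1, and the forcing series below are synthetic placeholders), such a multiple regression can be set up as an ordinary least-squares fit of temperature on several forcing-related predictors:

import numpy as np

def fit_multiple_regression(temperature, predictors):
    # Least-squares fit: temperature ~ intercept + sum(beta_i * predictor_i)
    # predictors: dict mapping a name to a 1-D array of the same length as temperature.
    X = np.column_stack([np.ones(len(temperature))] + list(predictors.values()))
    beta, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    names = ["intercept"] + list(predictors.keys())
    return dict(zip(names, beta)), X @ beta   # coefficients and fitted series

# Placeholder synthetic inputs; a real analysis would use HadCRUT4 and published forcing series.
rng = np.random.default_rng(2)
years = np.arange(1880, 2013)
ghg = (years - years[0]) ** 1.5 / 1000.0               # stand-in for GHG forcing
solar = np.sin(2 * np.pi * (years - years[0]) / 11.0)  # stand-in for the solar cycle
temp = 0.8 * ghg + 0.05 * solar + rng.normal(0.0, 0.1, len(years))
coefficients, fitted = fit_multiple_regression(temp, {"ghg": ghg, "solar": solar})
print(coefficients)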

The probability that the fit in Figure 1 is accidental is practically zero if we assume that the temperature variations from year to year are independent of each other. LTP and the ocean’s inertia imply that the number of degrees of freedom is lower than the number of data points, making such a fit somewhat more likely to happen by chance.
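A standard rule of thumb from time-series statistics (stated here for orientation, not taken from the references above) is that for a series of length n with lag-1 autocorrelation \rho_1 the effective number of independent values is roughly

n_{\mathrm{eff}} \approx n\,\frac{1-\rho_1}{1+\rho_1},

and LTP reduces it further still, because the correlations persist well beyond lag 1.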

Furthermore, taking paleoclimatic information into account, there is no evidence that there have been similar temperature excursions in the past ~1000 years [12-14]. If the present warming were a result of natural fluctuations, it would imply a high climate sensitivity, and similar variations in the past. Moreover, it would suggest that any known forcing, such as GHGs, would be amplified accordingly. The climate sensitivity may be a common denominator for natural fluctuations and forced trends (Figure 3).

Figure 3. Comparison between trend estimates and the amplitude of 10-year low-passed internal variability in state-of-the-art global climate models.

There may be some irony here: the warming 'hiatus' during the last decade is due to LTP-noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity, because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with stronger transient climate response (TCR, a rough indicator for climate sensitivity).

For complete probability assessment, we need to take into account both the statistics and the physics-based information, such as the fact that GHGs absorb infrared light and thus affect the vertical energy flow through the atmosphere.

Summary
In summary, we do not really know what the LTP in the real world would be like without GHG forcings, and we do not know the real number of degrees of freedom in the measured data record. The lack of such information limits our ability to make a statistics-based probability estimate. On the other hand, we know from past reconstructions and physical reasoning that the present warming is highly unusual.

Biosketch
Rasmus Benestad is a physicist by training. Benestad has a D.Phil in physics from Atmospheric, Oceanic & Planetary Physics at Oxford University in the United Kingdom. He has affiliations with the Norwegian Meteorological Institute.
Recent work involves a good deal of statistics (empirical-statistical downscaling, trend analysis, model validation, extremes and record values), but he has also had some experience with electronics, cloud micro-physics, ocean dynamics/air-sea processes and seasonal forecasting.
In addition, Benestad wrote the book ‘Solar Activity and Earth’s Climate’ (2002), published by Praxis-Springer, and together with two colleagues the text book ‘Empirical-Statistical Downscaling’ (2008; World Scientific Publishers). He has also written a number of R-packages for climate analysis posted on http://cran.r-project.org.
Benestad was a member of the council of the European Meteorological Society for the period (2004-2006), representing the Nordic countries and the Norwegian Meteorology Society, and he has served as a member of the CORDEX Task Force on Regional Climate Downscaling.
He is a regular contributor to the well-known climate blog RealClimate.org.

References

1. Cohn, T. A. & Lins, H. F. Nature’s style: Naturally trendy. Geophys. Res. Lett. 32, L23402 (2005).

2. Franzke, C. On the statistical significance of surface air temperature trends in the Eurasian Arctic region. Geophys. Res. Lett. 39 (2012).

3. Climate Change: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (Cambridge University Press, 2007).

4. Anderson, D. L. T. & McCreary, J. P. Slowly Propagating Disturbances in a Coupled Ocean-Atmosphere Model. J. Atmospheric Sci. 42, 615–629 (1985).

5. Gleick, J. Chaos. (Cardinal, 1987).

6. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).

7. Shukla, J. Predictability in the Midst of Chaos: A Scientific Basis for Climate Forecasting. Science 282, 728–733 (1998).

8. Durack, P. J., Wijffels, S. E. & Matear, R. J. Ocean Salinities Reveal Strong Global Water Cycle Intensification During 1950 to 2000. Science 336, 455–458 (2012).

9. Balmaseda, M. A., Trenberth, K. E. & Källén, E. Distinctive climate signals in reanalysis of global ocean heat content. Geophys. Res. Lett. (2013). doi:10.1002/grl.50382

10. Paeth, H. & Hense, A. Sensitivity of climate change signals deduced from multi-model Monte Carlo experiments. Clim. Res. 22, 189–204 (2002).

11. Benestad, R. E. & Schmidt, G. A. Solar Trends and Global Warming. JGR 114, D14101 (2009).

12. Mann, M. E., Bradley, R. S. & Hughes, M. K. Global-scale temperature patterns and climate forcing over the past six centuries. Nature 392, 779–787 (1998).

13. Moberg, A., Sonechkin, D. M., Holmgren, K., Datsenko, N. M. & Karlén, W. Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. Nature 433, 613–617 (2005).

14. Marcott, S. A., Shakun, J. D., Clark, P. U. & Mix, A. C. A Reconstruction of Regional and Global Temperature for the Past 11,300 Years. Science 339, 1198–1201 (2013).

15. Easterling, D. R. & Wehner, M. F. Is the climate warming or cooling? Geophys Res Lett 36, L08706 (2009).

16. Foster, G. & Rahmstorf, S. Global temperature evolution 1979–2010. Environ. Res. Lett. 6, 044022 (2011).

Guest blog Demetris Koutsoyiannis

LTP: Looking Trendy—Persistently

Demetris Koutsoyiannis, National Technical University of Athens, Greece

Stochastics and its importance in studying climate

Probability, statistics and stochastic processes, lately described by the collective term stochastics, provide essential concepts and tools for dealing with uncertainty, and are useful for all scientific disciplines. However, there is at least one scientific discipline whose very domain relies on stochastics: climatology. To refer to a popular definition of this domain by the IPCC[1] (also quoted in Wikipedia):

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

“Average”, “statistical description”, “mean”, “variability”, are all statistical terms. Several questions related to climate also involve probability, as exemplified in question 6 of the Introduction of this Climate Dialogue theme:[2]

Based on your statistical model of preference what is the probability that 11 of the warmest years in a 162 year long time series (HadCrut4) all lie in the last 12 years?

Interestingly, similar probabilistic and statistical notions are implied in a recent President Obama statement:[3]

Yes, it’s true that no single event makes a trend. But the fact is, the 12 hottest years on record have all come in the last 15.

The latter statement highlights how important statistical questions are for policy matters and presumes some public perception of probability and statistics, which determines how the message is received.

I have no doubt that the average human being has some understanding of probability and statistics, not only thanks to education, but because life is uncertain and each of us needs to develop an understanding of uncertainty and skills to cope with it. However, common experience and perception are mostly related to rather simple uncertainties, such as coin tosses, dice throws and roulette wheels. Education, too, is mainly based on classical statistics, in which:

· Consecutive events are independent of each other: the outcome of an event does not affect that of the next one.

· As a result, time averages tend to stabilize relatively fast: their variability, expressed by the probabilistic concept of variance, is inversely proportional to the length of the averaging period.

Adhering to classical statistics when dealing with climate and other complex geophysical processes is not just a problem of laymen. There are numerous research publications adopting, tacitly or explicitly, the independence assumption for systems in which it is totally inappropriate. Even the very definition of climate quoted above, particularly the phrase “The classical period is 30 years”, historically reflects a perception of a constant climate[4][5] and a hope that 30 years would be enough for a climatic quantity to stabilize to a constant value, which is roughly supported by classical statistics. In this perception a constant climate would be the norm and a deviation from the norm would be something caused by an extraordinary agent. The same static-climate conviction is evident in the “weather vs. climate” dichotomy (e.g. “climate is what you expect, weather is what you get”; see critical discussions in Refs. [6], [7], [5]).

Figure 1. Probability that a 12-year period contains the specified number of warmest years (n) or more in a 162-year long period, as calculated assuming a random climate and a Hurst-Kolmogorov (HK) climate with Hurst parameter H = 0.92 (see text below for explanation of the latter).

Now let us pretend that, as in classical statistics, climate was synthesized by averaging random events without dependence, and let us study on this basis the above question (slightly modified for reasons that will be explained later). So, what is the probability that, in a 162-year long time series, at least n (where n = 1, 2, …12) of the warmest years all lie in a 12-year long period? The reply is depicted in Figure 1, labelled “Random climate”. The first seven points are calculated by Monte Carlo simulation. For n = 7 years this probability is 0.00005 (1/20 000). The Monte Carlo simulation would require too much time to find the probability that all 12 warmest years are consecutive (n = 12), because this probability is really an astonishingly small number; but I was able to find it analytically and plotted it on the graph. I also approximated with analytical calculations the probability that at least 11 warmest years are clustered within 12 years. From the graph we can conclude that it is quite unlikely that more than 8-9 warmest years would cluster, even throughout the entire Earth's life (4.5 billion years separated in segments of 162 years). To have 11 warmest events clustering in a 12-year period we would need, on average, one hundred thousand times the age of the Universe.
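A minimal sketch of such a Monte Carlo experiment under the independence ("random climate") assumption is shown below. It is an illustration of the mechanics only: the exact event definition used for Figure 1 is not reproduced here, so the sketch adopts one plausible reading, namely that the n warmest years of the record all fall within some window of 12 consecutive years, and the number of simulations is an arbitrary placeholder.

import numpy as np

def prob_warmest_in_window(n_years=162, window=12, n_top=7, n_sim=200_000, seed=0):
    # Probability that the n_top warmest years of an i.i.d. record of length n_years
    # all fall within some block of `window` consecutive years.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # Under independence, the positions of the n_top warmest years are a
        # uniformly random subset of the n_years positions.
        pos = np.sort(rng.choice(n_years, size=n_top, replace=False))
        if pos[-1] - pos[0] < window:
            hits += 1
    return hits / n_sim

print(prob_warmest_in_window())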

Is this overwhelming evidence that something extraordinary has occurred during our lives, or that the independence assumption leads to blatantly irrational results?

No one would believe that the weather this hour does not depend on that of an hour ago. It is natural to assume that there is time dependence in weather. Therefore, we must study weather not on the basis of classical statistics, but rather using the notion of a stochastic process. Now, if we average the process to another scale, daily, monthly, annual, decadal, centennial, etc., we get other stochastic processes, not qualitatively different from the hourly one. Of course, as the scale of averaging increases the variability decreases, but not as much as implied by classical statistics. Naturally, the dependence makes clustering of similar events more likely.
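In symbols (a standard result, given here for orientation): for an independent process the variance of the average over time scale k decreases as

\operatorname{Var}\!\left[\bar{x}^{(k)}\right] = \frac{\sigma^2}{k},

whereas for a Hurst-Kolmogorov process with Hurst parameter H it decreases much more slowly,

\operatorname{Var}\!\left[\bar{x}^{(k)}\right] = \frac{\sigma^2}{k^{2-2H}}, \qquad 0.5 < H < 1.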

The first to study clustering in natural processes was Harold Edwin Hurst, a British hydrologist who worked on the Nile for more than 60 years. In 1951 he published a seminal paper[8] in which he stated:

Although in random events groups of high or low values do occur, their tendency to occur in natural events is greater. This is the main difference between natural and random events.

Herodotus said that the Egyptian land is "a gift of the Nile". The Nile also gave hydrology and climatology invaluable gifts: one of them is the longest record of instrumental observations in history. Its water levels were measured in so-called Nilometers and archived for many centuries. In the 1920s Omar Toussoun, Prince of Alexandria, published a book[9] containing, among other things, annual minimum and maximum water levels of the Nile at the Roda Nilometer from AD 622 to 1921. Figure 2 depicts the time series of annual minimum levels up to 1470 (849 values; unfortunately, after 1470 there are substantial gaps). Climatic, i.e. 30-year average, values are also plotted. One may say that these values are not climatic in a strict sense. But they are strongly linked to the variability of the climate of a large area, from the Mediterranean to the tropics. And they are instrumental.

The clustering of similar events, more formally described as Long-Term Persistence (LTP), is obvious. For example, around AD 780 we have a group of low values producing a low climatic value, and around 1110 and 1440 we have groups of large values. Such grouping would not appear in a climate that was the synthesis of independent random events. The latter would be flatter, as illustrated by the synthetic example of Figure 3.

Another way of viewing the long-term variability of the Nile in Figure 2 is by using the notion of trends, irregularly changing from positive to negative and from mild to steep. The long instrumental Nile series may help those who prefer the view of variability in terms of trends to recognize “Nature's style [as] Naturally trendy” to invoke the title of a celebrated recent paper.[10]

Figure 2. Nile River annual minimum water level at Roda Nilometer (from Ref. 9, here converted into water depths assuming a datum for the river bottom at 8.80 m), along with 30-year averages (centred). A few missing values at years 1285, 1297, 1303, 1310, 1319, 1363 and 1434 are filled in using a simple method from Ref. [11]. The estimated statistics are mean = 2.74 m, standard deviation = 1.00 m, Hurst parameter = 0.87.

Figure 3. A synthetic time series from an independent (white noise) process with same statistics as those of the Nilometer series shown in the caption of Figure 2.

Seeking a proper stochastic model for climate

Variability over different time scales, trends, clustering and persistence are all closely linked to each other. The first of these is the most rigorous concept and is mathematically described by the variance (or the standard deviation) of the averaged process as a function of the averaging time scale, also known as the climacogram.[6],[12] The variability over scale (the climacogram) is also related one-to-one (by transformation) to the stochastic concepts of dependence in time (the autocorrelation function) and to the spectral properties (the power spectrum) of the process of interest.[6],[12]
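A minimal numerical sketch of the climacogram (for illustration only, not the estimator of Refs. 6 and 12): average the series over non-overlapping blocks of increasing length k and record the standard deviation of the block averages. The array nile_minima in the usage comment is hypothetical.

```python
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the time-averaged process at each scale k."""
    x = np.asarray(x, dtype=float)
    sigma = {}
    for k in scales:
        m = len(x) // k                    # number of non-overlapping blocks
        if m < 2:
            break
        block_means = x[:m * k].reshape(m, k).mean(axis=1)
        sigma[k] = block_means.std(ddof=1)
    return sigma

# e.g. climacogram(nile_minima, scales=[1, 2, 5, 10, 30])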

In white noise, i.e., the process characterized by complete independence in time, the variability is infinite at the instantaneous time scale (in technical terms its autocorrelation in continuous time is a Dirac Delta function). No variability is added at any finite time scale. Clearly, this is a mathematical construct which cannot occur in nature (the adjective “white”, suggestive of the white light as a mixture of frequencies, is misleading; the spectral density of white noise is flat, while that of the white light is not).

A seemingly realistic stochastic process, which has been widely used for climate, is the Markov process, whose discrete-time version is more widely known as the AR(1) process. This process has two characteristic properties:

· Its past has no influence on the future whenever the present is known (in other words, only the latest known value matters for the future).[13]

· It assumes a single characteristic time scale at which change or variability is created (but in contrast to white noise, this time scale is non-zero and technically is expressed by the denominator of the exponent in the exponential function that constitutes its autocorrelation function). As a result, when the time scale of interest is considerably larger than this characteristic scale, the process behaves like white noise (a minimal numerical sketch follows below).
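A minimal numerical sketch of these two properties (the parameter value a = 0.6 is purely illustrative): the autocorrelation of the simulated AR(1) series decays as a^s = exp(-s/S), with characteristic scale S = 1/|ln a|, beyond which the process is practically indistinguishable from white noise.

```python
import numpy as np

def ar1_series(n, a=0.6, rng=None):
    """AR(1): x[i] = a * x[i-1] + white noise; memory limited to the last value."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + noise[i]
    return x

x = ar1_series(100_000)
for s in (1, 2, 5, 10, 20):
    empirical = np.corrcoef(x[:-s], x[s:])[0, 1]
    print(s, round(empirical, 3), round(0.6 ** s, 3))  # ~a**s; negligible beyond S = 1/|ln 0.6| ≈ 2
```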

It is difficult to explain why this model has become dominant in climatology. These two theoretical properties alone should have hampered its popularity. How could the future be affected only by the latest value and not by the entire past? Could any geophysical process, including climate, be determined by just one mechanism acting on a single time scale?

The flow in a river (not necessarily the Nile) may help us understand better the multiplicity of mechanisms producing change and the multiplicity of the relevant time scales (see also Ref. [14]):

· Next second: the hydraulic characteristics (water level, velocity) will change due to turbulence.

· Next day: the river discharge will change (even dramatically, in case of a flood).

· Next year: The river cross-section will change (erosion-deposition of sediments).

· Next century: The river basin characteristics (e.g. vegetation, land use) will change.

· Next millennium: All could be very different (e.g. the area could be glaciated).

· Next millions of years: The river may have disappeared (owing to tectonic processes).

Of course none of these changes will be a surprise; rather, it would be a surprise if things remained static. Yet, although anticipated, none of these changes is predictable.

Does a plurality of mechanisms acting on different scales require a complex stochastic model? Not necessarily. A decade before Hurst detected LTP in natural processes, Andrey Kolmogorov,[15] devised a mathematical model which describes this behaviour using one parameter only, i.e. no more than in the Markov model. We call this model the Hurst-Kolmogorov (HK) model (aka fGn—for fractional Gaussian noise, simple scaling process etc.), while its parameter has been known as the Hurst parameter and is typically denoted by H. In this model, change is produced at all scales and thus it never becomes similar to white noise, whatever the time scale of averaging is.

Specifically, the variance will never become inversely proportional to time scale; it will decrease at a lower rate, inversely proportional to the power (2 – 2H) of the time scale (nb. 0.5 ≤ H < 1, where the value H = 0.5 corresponds to white noise). A characteristic property of the HK process is that its autocorrelation function is independent of time scale. In other words if there is some correlation in the average temperature between a year and the next one (and in fact there is), the same correlation will be between a decade and the next one, a century and the next one, and so on to infinity. Why? Because there will always be another natural mechanism acting on a bigger scale, which will create change, and thus positive correlation at all lower scales (the relationship of change with autocorrelation is better explained in Ref. 6). The HK behaviour seems to be consistent with the principle of extremal entropy production.[16]
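A crude way to illustrate this scaling (for illustration only; the estimate used later in the text relies on the more careful LSSD method of Ref. 18, which accounts for estimation bias): the slope of the climacogram in double logarithmic axes is H − 1, so a naive estimate of H is that slope plus one.

```python
import numpy as np

def hurst_from_climacogram(x, scales=(1, 2, 4, 8, 16, 32)):
    """Naive Hurst estimate: slope of log std(block means) vs log scale, plus 1.
    Biased for short records; shown only to illustrate the scaling law."""
    x = np.asarray(x, dtype=float)
    ks, sds = [], []
    for k in scales:
        m = len(x) // k
        if m < 10:
            break
        block_means = x[:m * k].reshape(m, k).mean(axis=1)
        ks.append(k)
        sds.append(block_means.std(ddof=1))
    slope = np.polyfit(np.log(ks), np.log(sds), 1)[0]
    return slope + 1.0    # 0.5 for white noise, approaching 1 for strong persistence
```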

The Nilometer record described above is consistent with the HK model with H = 0.87. Are there other records of geophysical processes consistent with the HK behaviour? A recent overview paper[17] cites numerous studies where this behaviour has been verified. It also examines several instrumental and proxy climate data series related to temperature and, by superimposing the climacograms of the different series, it obtains an overview of the variability for time scales spanning almost nine orders of magnitude—from 1 month to 50 million years. The overall climacogram supports the presence of HK dynamics in climate with H at least 0.92. The orbital forcing (Milankovitch cycles) is also evident in the combined climacogram at time scales between 10 and 100 thousand years.

Statistical assessment of current climate evolution

Re-examining the statistical problem of 11 warmest years in 12 within a 162-year period, now within an HK framework with H = 0.92, we find spectacularly different results from those of the random climate, as shown in Figure 1. We may see, for example, that what, according to the classical statistical perception, would require the entire age of the Earth to occur once (i.e. clustering of 8-9 events) is a regular event for an HK climate, with probability on the order of 1-10%.

This dramatic difference can help us understand why the choice of a proper stochastic model is relevant for the detection of changes in climate. It may also help us realize how easy it is to fool ourselves, given that our perception of probability may heavily rely on classical statistics.

Figure 4 gives a close-up of the results, excluding the very low probabilities and also generalizing the “12-year period” to “N-year period” so that it can host, in addition to the Climate Dialogue statistical question 6, the results for the “Obama version” thereof as quoted above. In addition, Figure 4 is based on a slightly higher value of the Hurst coefficient, H = 0.94, as estimated by the Least Squares based on Standard Deviation method[18] for the HadCrut4 record. Both versions result in about the same answer: the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.

Figure 4. Probability that an N-year period, where N = 12 or 15, contains the specified number, n, of warmest years or more in a 162-year long period, calculated in the same manner as in Figure 1 with H = 0.94.

If we used the IPCC AR4 terminology[19] we would say that either of these events is exceptionally unlikely to have a natural cause. Interestingly, the present results do not contradict those of a recent study of Zorita, Stocker and von Storch,[20] who examined a similar question and concluded that:

Under two statistical null-hypotheses, autoregressive and long-memory, this probability turns to be very low: for the global records lower than p = 0.001…

I note, though, that there are differences between the methodology followed here and that of Zorita et al.; for example, the analysis here did not consider whether the N-year period (in which the n warmest years are clustered) is located at the end of the examined observation period or anywhere else in it (the reason will be explained below).

One may note that the above results, as well as those by Zorita et al., are affected by uncertainty, associated with the parameter estimation but also with the data set itself. The data are altered all the time as a result of continuous adaptations and adjustments. Even the ranks of the different years are changing: for example, in the CRU data examined by Koutsoyiannis and Montanari (2007)[21], 1998 was rank 1 (the warmest of all) and 2005 was rank 2, while now the ranking of these two years has been inverted. But most importantly, the analysis is affected by the Mexican Hat Fallacy (MHF), if I am allowed to use this name to describe a neat example of statistical misuse offered by von Storch,[22] in which the conclusion is drawn that:

The Mexican Hat is not of natural origin but man-made.

Von Storch [22] aptly describes the fallacy in these words:

The fundamental error is that the null hypothesis is not independent of the data which are used to conduct the test. We know a-priori that the Mexican Hat is a rare event, therefore the impossibility of finding such a combination of stones cannot be used as evidence against its natural origin. The same trick can of course be used to “prove” that any rare event is “non-natural”, be it a heat wave or a particularly violent storm - the probability of observing a rare event is small.

I believe that rephrasing “11 of the warmest years … all lie in the last 12 years” into “11 of the warmest years … all lie in a 12-year long period” reduces the MHF effect, but I do not think it eliminates it. That is why I prefer other statistical methods of detecting changes[23], such as the tests proposed by Hamed[24] and by Cohn and Lins [10]. The former relies on a test statistic based on the ranks of all data, rather than a few of them, while the latter also considers the magnitude of the actual change, not just the change in the ranks.

Another test statistic was proposed by Rybski et al.,[25] and was modified to include the uncertainty in the estimation of standard deviation by Koutsoyiannis and Montanari [21], who also applied it for the CRU temperature data up to 2005. Note that, to make the test simple, the uncertainty in the estimation of H was not considered even in the latter version (thus it could rather be called a pseudo-test). Here I updated the application of this test and I present the results in Figure 5.

Figure 5. Updated Fig. 2 in Koutsoyiannis and Montanari [21] testing lagged climatic differences based on the HadCrut4 data set (1850-2012; see explanation in text).

The method has the advantages that it uses the entire series (not a few values), it considers the actual climatic values (not their ranks) and it avoids specifying a mathematical form of trend (e.g. linear). Furthermore, it is simple: First we calculate the climatic value of each year as the average of the present and the previous 29 years. This is plotted as a pink continuous line in Figure 5, where we can see, among other things, that the latest climatic value is 0.31°C (at 2012, being the average of HadCrut4 data values for 1983-2012), while the earliest one was –0.30°C (at 1879, being the average of 1850-79). Thus, during the last 134 years the climate has warmed by 0.61°C. Note that no subjective smoothing is made here (in contrast to the graphs by CRU), and thus the climatic series has length 134 years (but with only 5 non-overlapping values), while the annual series has length 163.

Our (pseudo)test relies on climatic differences for different time lags (not just that of the latest and earliest values). For example, assuming a lag of 30 years (equal to the period for which we defined a climatic value), the climate difference between 2012 and 1982 is 0.31°C – (–0.05°C) = 0.36°C, where the value –0.05°C is the average of years 1953-82. The value 0.36°C is plotted as a green triangle in Figure 5 at year 2012. Likewise, we find climatic differences for years 2011, 2010, …, 1909, all for lag 30. Plotting all these we get the series of green triangles shown in Figure 5. We repeat the same procedure for time lags that are multiples of 30 years, namely 60 years (red points), 90 years (blue points) and 120 years (purple points).
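The two steps just described are simple enough to sketch in a few lines of Python (for illustration only; hadcrut_annual in the usage comment is a hypothetical array of annual HadCrut4 values for 1850-2012, and the numbers quoted in the text come from the actual data set):

```python
import numpy as np

def climatic_values(annual, window=30):
    """Climatic value of year t = mean of years t-29..t (plain 30-year average)."""
    annual = np.asarray(annual, dtype=float)
    return np.array([annual[i - window + 1:i + 1].mean()
                     for i in range(window - 1, len(annual))])

def lagged_differences(climatic, lag):
    """Difference between each climatic value and the one `lag` years earlier."""
    return climatic[lag:] - climatic[:-lag]

# clim = climatic_values(hadcrut_annual)               # 134 values, 1879-2012
# d30, d60, d90, d120 = (lagged_differences(clim, L) for L in (30, 60, 90, 120))
```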

Finally, we calculate, in a way described in Ref. 21, the critical values of the test statistic, which is none other than the lagged climate difference. The critical values are different for each lag and are plotted as flat lines with the same colour as the corresponding points. Technically, the (pseudo)test was made as two-sided for significance level 1% but only the high critical values are plotted in the graph. Practically, as long as the points lie below the corresponding flat lines, nothing significant has happened. This is observed for the entire length of the lag-30 and lag-60 differences. A few of the last points of the lag-90 series exceed the critical value; this corresponds to the combination of high temperatures in the 2000s and low temperatures in the 1910s. But then all points of the lag-120 series lie again below the critical value, indicating no significant change.

Concluding remarks

Assuming that the data set we used is representative and does not contain substantial errors, the only result that we can present as fact is that in the last 134 years the climate has warmed by 0.6°C (nb., this is a difference of climatic—30-year average—values, while other, often higher, values that appear in the literature refer to trends based on annual values). Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is; again, I cannot be certain, as the pseudo-test did not consider the uncertainty in H. Note that the 1% significance level corresponds to ±2.58 standard deviations away from the mean; if we made it ±3, everything would become insignificant.

Irrespective of statistical significance, paleoclimate and instrumental data provide evidence that the natural climatic variability, the natural potential for change, is high and concerns all time scales. The mechanisms producing change are many and, in practice, it is more important to quantify their combined effects, rather than try to describe and model each one separately.

From a practical point of view, it could be dangerous to pretend that we are able to provide a credible quantitative description of the mechanisms, their causes and effects, and their combined consequences: we know that the mechanisms and their interactions are nonlinear, and that the climate model hindcasts are poor.[26],[27] Indeed, it has been demonstrated that, particularly for runoff, which is most relevant for water availability and flood risk, deterministically projected future traces can be too flat in comparison to the changes that can be expected (and stochastically generated) when admitting stationarity and natural variability characterized by HK dynamics[28].

Biosketch
Demetris Koutsoyiannis received his diploma in Civil Engineering from the National Technical University of Athens (NTUA) in 1978 and his doctorate from NTUA in 1988. He is professor of Hydrology and Analysis of Hydrosystems at the Department of Water Resources and Environmental Engineering of NTUA (and former Head of the Department). He is also Co-Editor of Hydrological Sciences Journal and member of the editorial board of Hydrology and Earth System Sciences (and formerly of Journal of Hydrology and Water Resources Research). He teaches undergraduate and postgraduate courses in hydrometeorology, hydrology, hydraulics, hydraulic works, water resource systems, water resource management, and stochastic modelling. He is an experienced researcher in the areas of hydrological modelling, hydrological and climatic stochastics, analysis of hydrosystems, water resources engineering and management, hydro-informatics, and ancient hydraulic technologies. His record includes about 650 scientific and technological contributions, spanning from research articles to engineering studies, among which 96 publications in peer reviewed journals. He received the Henry Darcy Medal 2009 by the European Geosciences Union for his outstanding contributions to the study of hydro-meteorological variability and to water resources management.

References



[1] IPCC (2007), Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp (Annex 1, Glossary: http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-annexes.pdf).

[2] Climate Dialogue Editorial Staff (2013), How persistent is the climate system and what is its implication for the significance of observed trends and for internal variability? (This blog post).

[3] Obama’s 2013 State of the Union Address, The New York Times, http://www.nytimes.com/2013/02/13/us/politics/obamas-2013-state-of-the-union-address.html?pagewanted=3&_r=0

[4] Lamb, H. H. (1972), Climate: Past, Present, and Future, Vol. 1, Fundamentals and Climate Now, Methuen and Co.

[5] Lovejoy, S., and D. Schertzer (2013), The climate is not what you expect, Bull. Amer. Meteor. Soc., doi: 10.1175/BAMS-D-12-00094.

[6] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495.

[7] Koutsoyiannis, D. (2010), A random walk on water, Hydrology and Earth System Sciences, 14, 585–601.

[8] Hurst, H.E. (1951), Long term storage capacities of reservoirs, Trans. Am. Soc. Civil Engrs., 116, 776–808.

[9] Toussoun, O. (1925). Mémoire sur l’histoire du Nil, in Mémoires a l’Institut d’Egypte, vol. 18, pp. 366-404.

[10] Cohn, T. A., and H. F. Lins (2005), Nature's style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.

[11] Koutsoyiannis, D., and A. Langousis (2011), Precipitation, Treatise on Water Science, edited by P. Wilderer and S. Uhlenbrook, 2, 27–78, Academic Press, Oxford ( p. 57).

[12] Koutsoyiannis, D. (2013), Encolpion of stochastics: Fundamentals of stochastic processes, 30 pages, National Technical University of Athens, Athens, http://itia.ntua.gr/1317/, accessed 2013-04-17.

[13] Papoulis, A. (1991), Probability, Random Variables and Stochastic Processes, 3rd edn., McGraw-Hill, New York (p. 635).

[14] Koutsoyiannis, D. (2013), Hydrology and Change, Hydrological Sciences Journal (accepted with minor revisions; currently available in the form of an IUGG Plenary lecture, Melbourne 2011, http://itia.ntua.gr/1135/, accessed 2013-04-17).

[15] Kolmogorov, A. N. (1940). Wiener spirals and some other interesting curves in a Hilbert space, Dokl. Akad. Nauk SSSR, 26, 115-118. English translation in: Tikhomirov, V.M. (ed.) 1991. Selected Works of A. N. Kolmogorov: Mathematics and mechanics, Vol. 1, Springer, 324-326.

[16] Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.

[17] Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.

[18] Tyralis, H., and D. Koutsoyiannis (2011), Simultaneous estimation of the parameters of the Hurst-Kolmogorov stochastic process, Stochastic Environmental Research & Risk Assessment, 25 (1), 21–33.

[19] As Ref. 1, p. 23.

[20] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi: 10.1029/2008GL036228.

[21] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.

[22] von Storch, H. (1999), Misuses of statistical analysis in climate research, in Analysis of Climate Variability, Applications of Statistical Techniques, Proceedings of an Autumn School organized by the Commission of the European Community, Edited by H. von Storch and A. Navarra, 2nd updated and extended edition, http://www.hvonstorch.de/klima/books/SNBOOK/springer.pdf, accessed 2013-04.

[23] Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.

[24] Hamed, K. H. (2008), Trend detection in hydrologic data: The Mann-Kendall trend test under the scaling hypothesis, Journal of Hydrology, 349(3-4), 350-363.

[25] Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi: 10.1029/2005GL025591.

[26] Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.

[27] Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.

[28] Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches, Journal of Hydrometeorology, 8 (3), 261–281.

Guest blog Armin Bunde

How to estimate the significance of global warming when taking explicitly into account the long-term persistence in temperature?

Armin Bunde, Institut für Theoretische Physik, Universität Giessen, Germany

1. Long-term Persistence in climate and its detection

Long-term persistence (LTP), also called long-term correlations or long-term memory, plays an important role in characterizing records in physiology (e.g. heartbeats), computer science (e.g. internet traffic) and financial markets (volatility). The first hint that LTP is important in climate was given in the classic papers by Hurst more than 50 years ago, in his studies of the historic levels of the Nile River.

We can distinguish between uncorrelated records ("white noise"), short-term persistent (STP) records and long-term persistent records. In white noise all data points x1, x2, ..., xN are independent of each other. In STP records, each data point xi depends on a short subset of previous points xi-1, xi-2, ..., xi-m, i.e., the memory has a finite range m. In LTP records, in contrast, xi depends on all previous points. The simplest model for STP is the "AR1 process", where xi is proportional to the foregoing data point xi-1, plus a white noise component ηi-1:

xi = a xi-1 + ηi-1 (with |a| < 1).

Despite the evidence that temperature anomalies cannot be characterized by the AR1 process, most climate scientists have used the AR1 model when trying to describe temperature fluctuations and to estimate the significance of a trend. This usually leads to a considerable overestimation of the external trend and its significance.

There are several methods to quantify the memory in a given sequence (for a recent review see [1] and references therein). The first one is the autocorrelation function C(s), where s = 1, 2, 3, … is the lag time between two data points. For white noise there is no memory and C(s) = 0. For the AR1 process, C(s) decays exponentially, C(s) ~ exp{-s/S}, where S = 1/|ln a| is the "persistence length". For infinitely long stationary LTP data, C(s) decays algebraically,

C(s) ~ s^(-γ), with 0 < γ < 1,

where γ is called the correlation exponent.

The first figure shows parts of an uncorrelated (left) and a synthetic long-term persistent (right) record, with γ = 0.4. The full line is the moving average over 30 data points. For the uncorrelated data, the moving average is close to zero, while for the LTP data the moving average can deviate strongly from the mean, forming a kind of mountain-valley structure that looks as if it contained some external deterministic trend. The figure shows that it is not a straightforward task to separate the natural fluctuations from an external trend, and this makes the detection of external trends in LTP records a difficult task. I will return to this later.

Figure 1.

One can show analytically [2] that in LTP records of finite length N, the algebraic dependence of C(s) on s can be seen only for very small time lags s, satisfying the inequality (s/N)^γ << 1. Already for γ = 1/2 and records of length 600 (which corresponds to 50 years of monthly data), this condition can only be met for very small time lags, roughly s < 6. For larger time lags, C(s) decays faster than algebraically. This is an artifact of the method, called the "finite-size" effect. If one is not aware of this effect, one may be led to the wrong conclusion that there exists no long-term memory in sequences of finite length.

A similar mistake may happen when one uses the second traditional method for detecting LTP, the power spectrum (spectral density) S(f). The discrete frequency f is equivalent to an inverse lag time, f = 1/s, and is a multiple of 1/N. For white noise, S(f) is constant. For STP data, S(f) is constant for f well below m/N (since the data are uncorrelated at time lags s above m), and then decreases monotonically.

For LTP records, S(f) decreases according to a power law,

S(f) ~ f^(-β), with β = 1 - γ,

so one may detect LTP also by considering the power spectrum. However, due to the discreteness of f, the algebraic decay cannot be clearly observed at frequencies below 50/N, which again may lead to the wrong conclusion that there is no long-term memory.

In addition to these remarkable finite-size and discreteness effects, both methods lead to an overestimation of the LTP in the presence of external deterministic trends.

In recent years, several methods (see, e.g., [1,3]) have been developed with which long-term correlations can be detected in the presence of deterministic polynomial trends. These methods include the detrended fluctuation analysis (DFA2) and the Haar-wavelet analysis (WT2), where linear trends are eliminated systematically. DFA2 is quite accurate in the time window 8 ≤ s < N/4, while WT2 is accurate for 1 ≤ s < N/50. In both methods, one determines a fluctuation function F(s) which measures the fluctuations of the record in time windows of length s around a trend line. For LTP records with correlation exponent γ, F(s) increases as

F(s) ~ s^α, with α = 1 - γ/2,

where α is usually called the Hurst exponent. By combining DFA2 and WT2 one can obtain a consistent picture on time scales between s = 1 and s = N/4. For a meaningful analysis, the records should consist of more than N = 500 data points. I would like to emphasize again that when an external deterministic trend cannot be excluded, the evaluation of the LTP and the determination of the Hurst exponent must be done with trend-eliminating methods, e.g., DFA2 and WT2, as described above.
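To make the idea of a trend-eliminating fluctuation analysis concrete, here is a minimal DFA2 sketch in Python (an illustration of the generic procedure only, not the implementation used in Refs. 1 and 3):

```python
import numpy as np

def dfa2(x, scales):
    """Second-order detrended fluctuation analysis.
    Returns F(s) for each window length s; for LTP data F(s) ~ s**alpha."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrated anomalies
    F = {}
    for s in scales:
        n_win = len(profile) // s
        if n_win < 4:
            break
        segments = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        sq_res = []
        for seg in segments:
            coeff = np.polyfit(t, seg, 2)       # quadratic fit removes linear trends
            sq_res.append(np.mean((seg - np.polyval(coeff, t)) ** 2))
        F[s] = np.sqrt(np.mean(sq_res))
    return F

# alpha ~ slope of log F(s) versus log s over 8 <= s < N/4
```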

The second figure summarizes the results of our earlier analysis (for references, see [1]) for a large number of atmospheric and sea surface temperatures as well as precipitation and river run-offs. Each histogram shows how many stations have Hurst exponents around 0.5, 0.55, 0.6, 0.65 and so on.

Figure 2.

For the daily precipitation records and the continental atmospheric temperatures the distribution of Hurst exponents is quite narrow. For daily precipitation, the exponent is close to 0.5, indicating the absence of persistence (see also [3]), while for the daily continental temperature records, the exponent is close to 0.65, indicating a “universal” persistence law. Both laws can be used very efficiently as a test bed for climate models and paleo reconstructions (for references, see [1] and [3]).

There are also more intuitive measures of LTP, and one of them is the distribution of the persistence lengths l in a record (see, e.g., [3]). In temperature data, l describes the lengths of warm or cold periods, in which the temperature anomalies (deviations of the daily or monthly temperature from their seasonal mean) are above or below zero, respectively. It is easy to show that the distribution P(l) of the persistence length decays exponentially for uncorrelated data, i.e., ln P(l) ~ -l. For LTP data, P(l) decays as a stretched exponential, ln P(l) ~ -l^γ, where γ is the correlation exponent (see [1]). Accordingly, in LTP records large persistence lengths are more frequent, which is intuitively clear.
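Counting persistence lengths is straightforward; the following minimal sketch (illustrative only) simply measures the lengths of consecutive runs of positive and negative anomalies:

```python
import numpy as np
from itertools import groupby

def persistence_lengths(anomalies):
    """Lengths of consecutive runs of warm (>0) and cold (<=0) anomalies."""
    warm = np.asarray(anomalies) > 0
    return [sum(1 for _ in run) for _, run in groupby(warm)]
```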

2. Detection of external trends in LTP data

For detection and estimation of external trends (the “detection problem”) one needs a statistical model. For monthly (and annual) temperature records the best statistical model is long-term persistence, as we have seen in the foregoing section. The main features of a long-term persistent record are determined by its length N and the Hurst exponent α. Synthetic LTP records characterized by these two parameters can easily be generated by a Fourier transformation (see, e.g., [1]) with the help of random number generators, as sketched below.
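A minimal sketch of such a Fourier-filtering generator (illustrative only): it imposes the power-law spectrum S(f) ~ f^(-β), with β = 2α - 1, on Gaussian white noise.

```python
import numpy as np

def synthetic_ltp(n, alpha, rng=None):
    """Generate an approximately long-term persistent Gaussian record of length n
    with Hurst exponent alpha by filtering white noise in Fourier space."""
    rng = rng or np.random.default_rng()
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    beta = 2 * alpha - 1
    spec[1:] *= freqs[1:] ** (-beta / 2)   # scale amplitudes by f**(-beta/2)
    spec[0] = 0.0                          # remove the mean component
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std(ddof=1)  # standardize
```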

When using LTP as the statistical model we assume that there are no additional short-term correlations generated by "Großwetterlagen" (blocking situations). Since the persistence length of these short-term correlations is below 14 days, they are not present in monthly data sets.

For the detection problem, one then needs to know the probability W(x) that, in a long-term correlated record of length N and Hurst exponent α, the relative trend exceeds x. For temperature data, the relative trend is the ratio between the temperature change (determined by a simple regression analysis) and the standard deviation σ around the trend line. For Gaussian LTP data, an analytical expression for W(x), for given α and N, has been derived in [4]; it is easy to implement and also serves as a very good approximation for non-Gaussian data. In order to decide whether a measured relative trend xm may be natural or not, one has to determine the exceedance probability at xm. If W(xm) is below 0.025, the trend is usually called significant (within the 95 percent confidence interval); if it is below 0.005, the trend is called highly significant (within the 99 percent confidence interval).
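The analytical expression of Ref. [4] is not reproduced here; as an illustration only, the exceedance probability can also be approximated by brute-force Monte Carlo, reusing the synthetic_ltp generator sketched in the previous section (far slower than the analytical formula, and shown only to fix ideas):

```python
import numpy as np

def relative_trend(x):
    """Temperature change over the record (regression slope times record length)
    divided by the standard deviation of the residuals around the trend line."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    residuals = x - (slope * t + intercept)
    return slope * (len(x) - 1) / residuals.std(ddof=1)

def exceedance_probability(x_m, n, alpha, n_sim=2000, rng=None):
    """Monte Carlo estimate of W(x_m) for trend-free LTP records of length n."""
    rng = rng or np.random.default_rng(1)
    count = sum(relative_trend(synthetic_ltp(n, alpha, rng)) >= x_m
                for _ in range(n_sim))
    return count / n_sim

# A measured relative trend x_m would be called significant if
# exceedance_probability(x_m, n, alpha) < 0.025.
```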

From the condition W(y) = 0.025 one may derive error bars y (within the 95 percent confidence interval) for the expected external trend: external trend = xm ± y.

If xm is slightly below y, then the minimum value of the external trend is negative and thus the trend is not significant. But the maximum value of the external trend can be large, and thus an external trend cannot be excluded, even though the trend is not significant. Accordingly, if a trend is not significant since W(xm) is above 0.025, this does not mean that one can exclude the possibility of an external deterministic trend. It only means that one is not forced to assume an external trend in order to describe the variability of the record properly. For example, if we observe a small insignificant positive trend, then this trend may either arise from the superposition of a strong positive natural fluctuation (as in Fig.1b) and a small negative external trend or from a strong negative fluctuation (as in Fig. 1b, but downwards) and a large positive external trend.

These conclusions are independent of the model used and hold also for the STP model. In previous significance analyses, climate scientists usually used the STP model, where the model parameter a has been determined by measuring C(1) of the data, see Sect. 1. The significance of a trend (see below) is clearly overestimated by this model.

3. Detection of climate change within the LTP model

Using our terminology of “significant” and “highly significant” we have obtained a mixed picture of the significance of temperature records, partly reviewed in [1].

(i) The global sea surface temperature increased, in the past 100 years, by about 0.6 degrees, which is not significant. The reason for this is the large persistence of the oceans, reflected by a large Hurst exponent.

(ii) The global land air temperature increased, in the past 100 years, by about 0.8 degrees. We find this increase to be highly significant. The reason for this is the comparatively low persistence of the land air temperature, which makes large natural increases unlikely.

(iii) Local temperatures: In local temperature records it is more difficult to detect external trends due to their large variability. We have studied a large number of local stations around the globe. For stations at high elevation like Sonnblick in Austria or in Siberia, we found highly significant trends. For about half of the other stations, we could not find a significant trend. However, when averaging the records in a certain area, this picture changed. Due to the averaging, the fluctuations around the trend line decrease and the temperature increases become more significant.

Our estimates are basically in line with earlier, less rigorous trend estimations in LTP data by Rybski et al. [5] and with the conclusions of Zorita et al. [6] when estimating the probability that 11 of the warmest years in a 162-year long record all lie in the last 12 years.

My conclusion is that the AR1 process, falsely used by climate scientists to describe temperature variability, leads to a strong overestimation of the significance of external trends. When using the proper LTP model the significance is considerably lower. But the LTP model, too, does not reject the hypothesis of anthropogenic climate change.

Biosketch

Armin Bunde is professor of theoretical physics in Giessen. After receiving his PhD in theoretical solid state physics at Giessen University, he spent several years as a postdoc in Antwerp, Saarbrücken, and Konstanz. He received a prestigious Heisenberg Fellowship in 1984 and spent three years at Boston University and Bar-Ilan University in Israel, where he worked with H. Eugene Stanley (Boston) and Shlomo Havlin (Israel). In 1985 he received the Carl-Wagner Award. Between 1987 and 1993 he was professor of theoretical physics at Hamburg University; since 1993 he has been back in Giessen.

In the last 20 years, his main research areas have been disordered materials, percolation theory and applications in materials science, as well as fractals and time series analysis in different disciplines, among them geoscience, where he is mainly interested in long-term persistence, extreme events and climate networks. In geoscience, he has cooperated intensively with Hans-Joachim Schellnhuber, Donald Turcotte, Hans von Storch, and Shlomo Havlin. He has published more than 300 papers and his Hirsch index (Google) is well above 50.

References

[1] A. Bunde and S. Lennartz, Acta Geophysica 60, 562 (2012) and references therein

[2] S. Lennartz and A. Bunde, Phys. Rev. E (2009)

[3] A. Bunde, U. Büntgen, J. Ludescher, J. Luterbacher, and H. von Storch, Nature Climate Change, 3, 174 (2013), see also: online supplementary information

[4] S. Lennartz and A. Bunde, Phys. Rev. E 84, 021129 (2011)

[5] D. Rybski, A. Bunde, S. Havlin, and H. von Storch, Geophys. Res. Lett. 33, L06718 (2006)

[6] E. Zorita, T.F. Stocker, and H. von Storch, Geophys. Res. Lett. 35, L24706 (2008)

Melting of the Arctic sea ice

Update 25 February 2013: Climate Dialogue summary now online
The summary of the first Climate Dialogue discussion on the melting of the Arctic sea ice is now online (see below). We have made two versions: a short and an extended version. The discussion between the experts is now officially closed. The public comments remain open. We apologize for the delay in publishing the summary.

Both versions can also be downloaded as pdf documents:
Summary of the Climate Dialogue on Arctic sea ice
Extended summary of the Climate Dialogue on Arctic sea ice

Introduction
The Arctic sea ice extent has been decreasing steadily for the past three decades. In this dialogue, the invited scientists discuss the potential causes of this decrease.

Climate Dialogue editorial staff
Rob van Dorland, KNMI
Bart Strengers, PBL
Marcel Crok, science writer

Introduction Arctic sea ice

Melting of the Arctic
What are the causes of the decline in Arctic sea ice? Is it dominated by global warming or can it be explained by natural variability?

Introduction
Over the period 1979–2012, the Northern Hemisphere minimum sea ice extent for September—the end of the summer melt season—has declined by more than 11% per decade, and the trend appears to be steeper over the last decade, with record lows in 2007 and 2012. The decrease in winter sea ice extent is less strong, but the amount of thicker, older ice has decreased as well, and therefore the decrease in total sea ice volume is even stronger than that in sea ice extent, both in summer and winter.

What is the cause?
Several studies have suggested that the decline in Arctic sea ice is at least partly caused by global warming. An oft-cited paper by Stroeve (2007)[1] and also more recent studies (e.g. Rampal 2011)[2] show that greenhouse-forced climate models greatly underestimate the observed trend in Arctic sea ice. A more recent study by Stroeve (2012)[3] shows that progress has been made in this area, but the climate models still cannot account for the full extent of the Arctic sea ice decline.

This could be explained as “it’s worse than we thought.” However, it could also be interpreted as “models are as yet unable to realistically simulate the sea ice behavior”, in which case we cannot conclude much about the dominant role of global warming in the decline of Arctic sea ice. The low performance of climate models may be due to the difficulty in accounting for natural variations, or for the physics associated with positive feedbacks. For example, it is not yet fully clear what role different oscillations, such as the Arctic Oscillation, the North Atlantic Oscillation and the Pacific Decadal Oscillation, have played. Also, other anthropogenic forcings like black carbon could be important.

Discussion
A central question for the discussion is what is causing the recent decline in Arctic sea ice, and whether these processes can be related to the emission of anthropogenic greenhouse gases.

Questions that are relevant for the discussion:

1) What are the main processes causing the decline in Arctic sea ice?

2) How unusual is the current decline in historical perspective?

3) What is the evidence for a substantial role of “global warming” in the current Arctic sea ice decline?

4) What is the evidence for a substantial role of natural variability (AO, AMO, NAO, PDO)?

5) What percentage of the recent decline would you attribute to anthropogenic greenhouse gases?

6) Do you think the Arctic could be ice free in the (near) future and when do you think this could happen?



[1] Stroeve, J.; Holland, M. M.; Meier, W.; Scambos, T.; Serreze, M. (2007). "Arctic sea ice decline: Faster than forecast". Geophysical Research Letters 34 (9): L09501

[2] Rampal, P., J. Weiss, C. Dubois, and J.-M. Campin (2011), IPCC climate models do not capture Arctic sea ice drift acceleration: Consequences in terms of projected sea ice thinning and decline, J. Geophys. Res., 116, doi:10.1029/2011JC007110

[3] Stroeve, J., V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W. N. Meier (2012a), Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations, Geophys. Res. Lett., doi:10.1029/2012GL052676

Guest blog Walt Meier
Arctic sea ice decline: past, present, and future

Walt Meier, Research Scientist, National Snow and Ice Data Center

Over the last 30+ years, Arctic sea ice has declined precipitously, particularly during summer. Summer extent has decreased by ~50%, including most of the older, thicker ice [1, 2]. Warming global temperatures have been the primary cause of the long-term decline in Arctic sea ice [3]. Many processes affect the sea ice on several temporal and spatial scales – e.g., winds, ocean currents, clouds – but the multi-decadal decline in all seasons, and in virtually all regions (the Bering Sea in winter being an exception), cannot be explained without the long-term warming trend that has been attributed to anthropogenic greenhouse gas (GHG) emissions.

Unique
The current decline appears to be unique in at least the last 5000 years. While the consistent satellite record began only in 1979, earlier partial records indicate decreased extent in the Russian Arctic and the Greenland and Barents seas during the 1930s [4]. However, these reductions were regionally and temporally variable, unlike the pan-Arctic decline seen in recent decades. There are also indications that the North American side of the Arctic did not experience warm temperatures and thus low sea ice conditions during those years [5]. Thus, the 1930s period appears to be more of a regional event, as opposed to the pan-Arctic warming and sea ice decline we’re seeing now.

Earlier than the 1930s, proxy records from paleoclimate data (e.g., sediment cores) are essential to understand Arctic-wide sea ice conditions. These indicate reduced sea ice extent at levels near or possibly below current conditions during the Holocene Maximum, a period between 5000 and 10,000 years ago [6], though they are far from comprehensive. The next earliest potential period when ice conditions might have been as low as now was the Eemian period, the previous interglacial, about 130,000 years ago, when temperatures were quite warm.

Global warming
The evidence for a substantial role of “global warming” in the current sea ice decline comes from the fact that the decline (1) correlates with the warming global temperatures over the past several decades, (2) is outside the range of normal variability over the past several decades and likely over the past several centuries, and (3) is pan-Arctic, with all regions experiencing declines throughout all or most of the year. Also, model simulations of sea ice cover consistently show a response of declining sea ice to increasing GHGs (albeit slower than the observed decline); conversely, model runs over the last 30 years without GHG forcing do not show a decline [7, 8]. Finally, there does not appear to be a mechanism that can sufficiently explain the long-term decline without including the effect of GHGs [9].

Arctic Oscillation
Along with the long-term GHG forcing, there is substantial natural variability in the sea ice system. The winter mode of the Arctic Oscillation (AO) or North Atlantic Oscillation (NAO) has been linked to summer sea ice conditions and the amount of multiyear ice in the Arctic [10]. A strong winter positive mode results in greater outflow of multiyear ice, resulting in a summer ice cover more prone to melt. Conversely, a negative mode tends to retain more multiyear ice, resulting in higher summer extents. However, the AO typically has a 3-7 year cycle, which does not correspond to the long-term trend. In addition, in recent years, the influence of the AO appears to have been broken, or at least weakened. For example, the 2009-2010 winter had the lowest AO on record (since 1950), and yet the summer 2010 minimum was among the lowest in the satellite record [11].

The Atlantic Meridional Overturning Circulation (AMOC) has some influence on sea ice extents in the Greenland, Barents, and Kara Seas, and thereby some effect on the total ice extent, particularly in winter [12]; it may explain some of the recent declining trend in sea ice [13]. Likewise, the Pacific Decadal Oscillation (PDO) affects winter ice conditions in the Bering Sea, but not elsewhere. These naturally varying climate oscillations cannot sufficiently explain the long-term decline in sea ice.

Attribution
It is difficult to put a precise number on how much of the decline is due to GHGs. There is strong natural variability, which is seen in observations and in model simulations. It is likely that at least some of the acceleration of the loss of sea ice in the past ~10 years is due to natural variability. A modeling study [14] suggested that about half of the observed September sea ice trend from 1979-2005 could be explained by natural variability, with the rest attributable to GHGs. There may also be some influence of black carbon, though how much is unclear.

Ice-free
The Arctic has been seasonally ice-free in the past under temperatures not much higher than in recent years. With continued GHG forcing and resulting increased temperatures, the Arctic will again become seasonally ice-free*. When this will occur is highly uncertain due to a number of factors. First, the ice cover has high interannual variability. Model simulations indicate periods of rapid ice loss, such as we have seen in the last decade [15]. However, periods of stasis or even increasing extent over several years are possible [14].

A couple of recent model studies have indicated a long-term linear response to GHG forcing [16, 17], suggesting that the recent acceleration in the decline is temporary and due to natural variability. Such a response would first result in ice-free conditions near the end of the century. However, there are reasons to believe that a linear response is unlikely due to feedbacks and the response of the ocean to the loss of sea ice [18]. Over the satellite record, the observed extent has declined much faster than GCMs have indicated [7, 8]. IPCC models that match the historical record most closely indicate ice-free conditions sometime in the 2030-2050 timeframe [19, 20]. This range seems reasonable, though it may not encompass the full range of possibilities.

However, even after ice-free conditions are reached for the first time, whether it be 2050, 2030, or even earlier, high interannual variability will continue. It is likely that there will be some subsequent years with more than 1 million square kilometers of sea ice remaining at the end of summer. Thus, prediction of sea ice conditions, particularly on decadal scales will be a challenge. Regardless of when ice-free conditions first occur, impacts of the sea ice loss are already being felt within the Arctic (and likely outside of the Arctic). These impacts will continue to increase well before summer ice-free conditions occur.

*There will likely be at least some ice throughout the summer, thick ice that piles up along the Greenland coast, ice in protected bays and inlets. Thus, it is important to define what is meant by “ice-free”. Here I will accept the 1 million square kilometers used in Wang and Overland (2009, 2012) as a reasonable threshold.

Biosketch
Dr. Walt Meier is a research scientist at the National Snow and Ice Data Center (NSIDC), part of the University of Colorado Boulder’s Cooperative Institute for Research in Environmental Sciences. His research focuses on studying the changing sea ice cover using satellite sensors and investigating impacts of the declining Arctic sea ice on climate. Dr. Meier also serves as lead scientist for NSIDC’s sea ice datasets. He has participated in several national and international activities, including as a lead author on the AMAP Snow, Water, Ice and Permafrost in the Arctic (SWIPA) assessment report published earlier this year. He received a B.S. degree from the University of Michigan, Ann Arbor in 1991 and M.S. and Ph.D. degrees from the University of Colorado in 1992 and 1998 respectively. From 1999 to 2001 Dr. Meier served as a visiting scientist at the U.S. National Ice Center in Washington, DC, where he researched improved products for operational support of vessels in and near ice-infested waters. From 2001 through 2003 he was an adjunct assistant professor at the U.S. Naval Academy in Annapolis, MD, teaching undergraduate courses in remote sensing and polar science.



References

[1] NSIDC Arctic Sea Ice News and Analysis, http://nsidc.org/arcticseaicenews/.

[2] Maslanik, J., J. Stroeve, C. Fowler, and W. Emery (2011), Distribution and trends in Arctic sea ice age through spring 2011, Geophys. Res. Lett., 38, L13502, doi:10.1029/2011GL047735.

[3] Overland, J.E., M. Wang, and S. Salo (2008), The recent Arctic warm period, Tellus, doi: 10.1111/j.1600-0870.2008.00327.x.

[4] Mahoney, A. R., R. G. Barry, V. Smolyanitsky, and F. Fetterer (2008), Observed sea ice extent in the Russian Arctic, 1933–2006, J. Geophys. Res., 113, C11005, doi:10.1029/2008JC004830.

[5] Overland, J.E., M.C. Spillane, D.B. Percival, M. Wang, H.O. Mofjeld (2004), Seasonal and regional variation of pan-Arctic surface air temperature over the instrumental record, J. Climate, 17, 3263-3282.

[6] Polyak, L., and several others (2010), History of sea ice in the Arctic, Quaternary Sci. Rev., 29, 1757-1778, doi: 10.1016/j.quascirev.2010.02.010.

[7] Stroeve, J., M.M. Holland, W. Meier, T. Scambos, and M. Serreze (2007), Arctic sea ice decline: Faster than forecast, Geophys. Res. Lett., 34, L09501, doi:10.1029/2007GL029703.

[8] Stroeve, J.C., V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier (2012), Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations, Geophys. Res. Lett., 39, L16502, doi:10.1029/2012GL052676.

[9] Notz, D. and J. Marotzke (2012), Observations reveal external driver for Arctic sea-ice retreat, Geophys. Res. Lett., 39, L08502, doi:10.1029/2012GL051094.

[10] Rigor, I.G., J.M. Wallace, and R.L. Colony (2002), Response of sea ice to the Arctic Oscillation, J. Climate, 15, 2648-2663.

[11] Stroeve, J. C., J. Maslanik, M. C. Serreze, I. Rigor, W. Meier, and C. Fowler (2011), Sea ice response to an extreme negative phase of the Arctic Oscillation during winter 2009/2010, Geophys. Res. Lett., 38, L02502, doi:10.1029/2010GL045662.

[12] Mahajan, Salil, Rong Zhang, Thomas L. Delworth (2011), Impact of the Atlantic Meridional Overturning Circulation (AMOC) on Arctic surface air temperature and sea ice variability. J. Climate, 24, 6573–6581, doi: 10.1175/2011JCLI4002.1.

[13] Day, J.J., J.C Hargreaves, J.D. Annan, and A. Abe-Ouchi (2012), Sources of multi-decadal variability in Arctic sea ice extent, Env. Res. Lett., 7, 034011, doi: 10.1088/1748-9326/7/3/034011.

[14] Kay, J. E., M. M. Holland, and A. Jahn (2011), Inter-annual to multi-decadal Arctic sea ice extent trends in a warming world, Geophys. Res. Lett., 38, L15708, doi:10.1029/2011GL048008.

[15] Holland, M.M., Bitz, C.M. and Tremblay, B. (2006), Future abrupt reductions in the summer Arctic Sea ice. Geophys. Res. Lett. 33, L23503, doi:10.1029/2006GL028024.

[16] Tietsche, S., D. Notz, J. H. Jungclaus, and J. Marotzke (2011), Recovery mechanisms of Arctic summer sea ice, Geophys. Res. Lett., 38, L02707, doi:10.1029/2010GL045698.

[17] Amstrup, S.C., E.T. DeWeaver, D.C. Douglas, B.G. Marcot, G.M. Durner, C.M. Bitz, and D.A. Bailey (2010), Greenhouse gas mitigation can reduce sea-ice loss and increase polar bear persistence, Nature, 468, 955-958, doi: 10.1038/nature09653.

[18] Maslowski, W., J.C. Kinney, M. Higgins, and A. Roberts (2012), The future of Arctic sea ice, Ann. Rev. Earth and Planetary Sciences, 40, 625-654, doi: 10.1146/annurev-earth-042711-105345.

[19] Wang, M., and J. E. Overland (2009), A sea ice free summer Arctic within 30 years?, Geophys. Res. Lett., 36, L07502, doi:10.1029/2009GL037820.

[20] Wang, M. and J. E. Overland (2012), A sea ice free summer Arctic within 30 years: An update from CMIP5 models, Geophys. Res. Lett., 39, L18501, doi:10.1029/2012GL052868.

Guest blog Judith Curry

On the decline of Arctic sea ice

Judith Curry

I applaud the Dutch Ministry for establishing Climate Dialogue, and I am very pleased to participate in this inaugural dialogue on the decline of Arctic sea ice.

At my blog Climate Etc. http://judithcurry.com , I’ve written four lengthy articles on Arctic sea ice over the past 18 months:

Pondering the Arctic Ocean. Part I: Climate Dynamics

Likely causes of recent changes in Arctic sea ice

Reflections on the Arctic sea ice minimum: Part I

Reflections on the Arctic sea ice minimum: Part II

This essay presents an overview of my perspective on this topic; see the original articles for more details and references to scientific publications.

Observations

The conventional understanding of Arctic sea ice extent shows a general retreat of seasonal ice since about 1900, and accelerated retreat of both seasonal and annual ice during the latter half of the 20th century. Hints that this understanding may be overly simplistic, in view of the uncertainties and ambiguities in the period prior to satellites, are described in this presentation by John Walsh about plans for a gridded sea ice product back to 1870. Further, I’ve recently had some discussions about this with a historian who is investigating historical reports of sea ice extent during the period 1920-1950. He has found reports of reduced wintertime extent during this period, and a general lack of data from the Russian sector. While this material is not yet published, it reminds us that prior to 1979 we do not have a reliable data set of global sea ice extent. The lack of such a data set hampers our ability to test our ideas about the impact of natural variability versus anthropogenic forcing on sea ice variability and change.

Analysis of climate dynamics and sea ice physical processes

The following factors impact the sea ice fate during the melt season:

▪ Thickness and compactness of sea ice at the beginning of the melt season: ice that starts out thinner is more easily melted away. Further, first year ice has different optical and thermodynamic characteristics than multi-year ice.

▪ Transport of ice through the Fram Strait (between Greenland and Europe), which depends on a combination of atmospheric and ocean circulation patterns

▪ Weather patterns that act to either break up or consolidate the ice

▪ Radiative forcing (which is dominated by the cloud patterns)

▪ Melting from below by warm ocean currents.

▪ Melting from above by warm atmospheric temperatures.

▪ Geographic distribution of the sea ice, which depends on a combination of all of the above

And all this is complicated by the fact that the minimum sea ice extent in an individual season doesn’t simply reflect that season’s weather processes, but also reflects the decadal history of sea ice characteristics, sea ice export and atmospheric and oceanic circulation patterns. And the sea ice extent itself influences the atmospheric and oceanic circulation patterns. Hence, the sea ice characteristics tend to be out of equilibrium with the thermal forcing in a particular year.

Older ice
Here’s the basic story as I see it. During the late 1980s and early 1990s, the circulation patterns favored the motion of older, thicker sea ice out of the Arctic. This set the stage for the general decline in Arctic sea ice extent starting in the 1990s. In 2001/2002, a hemispheric shift in the teleconnection indices occurred, which accelerated the downward trend. A local regime shift occurred in the Arctic during 2007, triggered when summertime weather patterns conspired to warm and melt the sea ice. The loss of multi-year ice during 2007 has resulted in all the minima since then being well below normal, with a high-amplitude seasonal cycle. After 2007, there was another step loss in ice volume in 2010. In 2012, the basic pattern of this new regime was given a ‘kick’ by a large cyclonic storm in early August.

Anthropogenic
So, what is the contribution of anthropogenic global warming to all this? It’s difficult to separate it out. The polar regions are extra sensitive to CO2 forcing and water vapor feedback, owing to the low amounts of water vapor. However, any radiative forcing from greenhouse gases is swamped by inter-annual variability in cloud radiative forcing. In the bigger picture sense, greenhouse forcing is involved in complex nonlinear ways with the climate regime shifts. So there is undoubtedly a contribution from CO2 forcing, but it is difficult to find any particular signal in this year’s record minimum, other than the contribution of greenhouse warming to a longer term trend. In the overall scheme of what is going on with the sea ice, I think 2007 was the most significant event, followed by 2010. The big event in 2012 was the cyclonic storm, and the impact on ocean mixing may turn out to be more significant than the sea ice minimum.

There is a complex interplay between natural internal variability and CO2 forcing, with complex interactions among ocean dynamics and heat transport, sea ice dynamics forced both by atmospheric winds and ocean currents, and atmospheric thermodynamic forcing acting to determine recent variations in multi-year sea ice extent. Hence sorting dynamical versus thermodynamic factors and attribution to increased greenhouse gases is not at all straightforward.

So . . . what is the bottom line on the attribution of the recent sea ice melt? My assessment is that it is likely (>66% likelihood) that there is a 50-50 split between natural variability and anthropogenic forcing, with a +/-20% range. Why such a ‘wishy washy’ statement with large error bars? Well, observations are ambiguous, models are inadequate, and our understanding of the complex interactions of the climate system is incomplete.

Whence an ‘ice free’ Arctic?

‘Ice free’ is put in quotes, because ‘ice free’ as commonly used doesn’t mean free of ice, as in zero ice. The usual definition of an ‘ice free’ Arctic is ice extent below 1 M sq. km (the current minimum extent is around 3.5 M sq. km). This definition is used because it is very difficult to melt the thick ice around the Canadian Archipelago. And the issue of ‘ice free’ in the 21st century is pretty much a non-issue if you require this thick ice to disappear.

What do the climate models have to say? Several recent papers have analyzed the CMIP5 simulations, and find near ice free conditions by mid-century, and even as early as the 2030’s. Whereas sea ice models are becoming quite sophisticated, most recently in terms of the radiative transfer, melt ponds, and aerosols, prediction of sea ice is hostage to predictions of the chaotic atmospheric and oceanic dynamics.

For the next two decades, natural variability will almost certainly trump any direct effects from anthropogenic warming by a long shot. The current sea ice situation does not seem stable, but it is not at all clear whether we can expect a reversion to the (more recently) normal state or yet a larger ice loss.

The issue is whether the ice is now sufficiently thin that it would be difficult to reverse the decline. Growing and diminishing the sea ice pack are not symmetric processes: ice export that contributes to diminishing the sea ice pack does not have a reverse counterpart; at best you stop the export and stop the decrease.

So the question then becomes what processes could contribute to a recovery of the Arctic sea ice on the time scale of two decades?

Recovery (?)

So, can we infer that the Arctic sea ice is caught in an irreversible ‘spiral of death’? Here are some processes that would contribute to a recovery of the sea ice:

▪ Reduction of the sea ice export through the Fram Strait

▪ Reduction of warm water inflow from the Atlantic and Pacific

▪ Fewer clouds in winter and/or more clouds in summer

▪ Less snow fall on ice in autumn and more in spring

▪ Less soot transported to the Arctic

▪ No rainfall on snow covered ice before mid-June

▪ Fewer storms in summer causing ice breakup, and more storms in autumn/early winter causing ice ridging/rafting

These processes depend on both random weather patterns and the teleconnection climate regimes. Can I predict how this might go over the next two decades? Heck no, other than that I suspect that the cool phase of the PDO will persist and at some point probably within two decades we will switch to the cool phase of the AMO.

And then there are the known unknowns: what solar radiation will do (looks like cooling), volcanoes are always a wild card, and then there are the less known unknowns such as cosmic ray effects, magnetic field effects, etc. And in terms of climate shifts, there may be something happening on much longer time scales (e.g. the Atlantic Meridional Overturning Circulation) that could influence the next climate regime shift. Focusing on CO2 as the dominant influence on the time scale of two decades seems very misguided to me.

Does ‘ice free’ matter?

The first issue to debunk is that an ‘ice free’ Arctic is some sort of ‘tipping point.’ A number of recent studies find that in models, the loss of summer sea ice cover is highly reversible.

The impact of September sea ice loss on the ice albedo feedback mechanism is interesting. The minimum sea ice occurs during a period when the sun is at low elevation, so the direct ice albedo effect isn’t all that large. Less sea ice in autumn means more snowfall on the continents, which can have a larger impact on albedo. The impacts of the freeze-thaw over the annual cycle influence ocean circulations. But sea ice would continue to freeze and thaw on an annual cycle.

Clouds would change, atmospheric circulation patterns would change. The net effect on climate outside the Arctic Ocean would be what? More snow during winter on the continents is the most obvious expected change. But we really don’t know.

There would likely be regional triggers that could feedback onto larger scale regime shifts. Would any of these patterns or extreme events fall outside the envelope of what we have seen over the past century? Hard to know.

Would melting sea ice trigger some sort of clathrate methane release into the atmosphere? Well in terms of thawing permafrost, it seems like more snow fall on the continents would inhibit permafrost thawing. Same for the stability of the Greenland ice cap.

These are all qualitative speculations, but I am not seeing a big rationale for climate catastrophe if the sea ice melts.

Biosketch
Judith Curry obtained her Ph.D. in Geophysical Sciences from the University of Chicago. She is currently Professor and Chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, and is also President of Climate Forecast Applications Network LLC. Her research addresses a range of topics in atmospheric and climate science, including sea ice and the climate dynamics of the Arctic http://curry.eas.gatech.edu/climate/arctic.htm. She is a Fellow of the AAAS, the American Geophysical Union, and the American Meteorological Society. She has recently served on the U.S. National Research Council Space Studies Board and Climate Research Committee. She currently serves on the U.S. Department of Energy Biological and Environmental Research Committee and the Earth Science Subcommittee of the NASA Advisory Council. Curry is an active spokesperson on issues related to integrity in scientific research. She is the proprietor of the blog Climate Etc. (judithcurry.com), which is a forum to discuss topics related to climate science and the science-policy interface.

Guest blog Ron Lindsay

Melting of the Arctic
Ron Lindsay

I usually choose to focus on the mean ice thickness within the Arctic Ocean as opposed to all ice-covered seas because this limited system is better defined. The atmospheric forcing is more uniform over the region, the volume-thickness relationship is well established since it is a defined region and any residual summer ice will mostly be found there. In addition, ice thickness is a much more consistent climate indicator than ice area or extent. Trends in thickness explain much more of the inter-annual variability than sea ice area and even more than extent.

If we want to understand the fate of summer ice, it is the Arctic Ocean we need to look at. The peripheral seas in many respects don’t matter. I use the ice thickness estimates from the retrospective studies using the PIOMAS ice-ocean model. I am the first to say models are far from perfect, but the general pattern of ice thickness simulated by this model has been shown to be pretty good when compared to observations (Schweiger et al 2011[1]). If anything, the model may be a little conservative in estimating the decline in ice thickness.

Greenhouse gases
I believe fundamentally the main process causing the decline in Arctic sea ice is increasing greenhouse gases. Evidence for the role of greenhouse gases must come primarily from modeling studies. Only those can help us separate natural climate variations from variations caused by changes in greenhouse gases or other external forcing mechanisms (e.g. sun or volcanoes). Examining past climate records and asking questions about the forcing mechanisms responsible for changes can also help. For example there is some evidence that there was less sea ice about 9000 years ago when solar insolation was stronger.

But teasing out the actual mechanisms for the decline is very tricky (e.g. whether it is changes in the ocean or in the atmosphere or both or what processes are responsible). Observational evidence is difficult to interpret, since the decline itself modifies the lower atmosphere and the surface fluxes. What is cause and what is effect? One piece of evidence though is the high correlation (R = -0.72, 57 years through 2011) between the rate of melt (including export) in the Arctic Ocean and melt season (May to September) surface temperatures in the rest of the hemisphere, from 20N to 60N (NCEP-R1). The surface temperatures south of the Arctic are likely less influenced by ice loss and the trends are likely more influenced by global forcings. Sea ice basically responds to hemispheric conditions and is not on its own trajectory.
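To illustrate what a correlation of this kind involves, the sketch below computes a Pearson correlation between two annual series. The data in it are invented placeholders; the actual NCEP-R1 temperatures and Arctic Ocean melt rates Lindsay refers to are not reproduced here.

```python
# Illustrative sketch only: how a correlation such as Lindsay's R = -0.72 between
# hemispheric melt-season temperature and Arctic Ocean ice melt would be computed.
# The arrays below are placeholders, NOT the NCEP-R1 or PIOMAS data he used.
import numpy as np

years = np.arange(1955, 2012)                      # 57 years through 2011
rng = np.random.default_rng(0)

# Hypothetical series: warmer 20N-60N melt seasons go with more ice melt, so the
# melt (expressed as a negative volume change) correlates negatively with temperature.
t_anom = 0.02 * (years - years[0]) + rng.normal(0, 0.3, years.size)   # deg C
melt   = -0.9 * t_anom + rng.normal(0, 0.3, years.size)               # arbitrary units

r = np.corrcoef(t_anom, melt)[0, 1]
print(f"Pearson correlation over {years.size} years: r = {r:.2f}")
```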

Unknowable
But the actual detailed mechanisms for the decline are currently unknowable. The trend in the latent heat content of the ice in September is less than 0.5 W/m2 per year. It seems this means annual change in the net heat balance of the ice and the changes in the mean annual melt and growth rates are much too small to be accurately measured by any observational system that looks at the entire region. That the PIOMAS model gives reasonably good estimates of the mean ice thickness trends is because it has been tuned to the atmospheric forcing fields that we use. So while the total melt and growth rates must be about right (to get the correct thickness), an analysis of the individual components of the surface or oceanic heat fluxes and their trends likely won’t provide a useful answer. Because of the small shifts in energy needed to melt the ice it is perhaps no surprise that there is a wide scatter in sea ice trends between climate models.

Unusual
The current decline in ice extent and volume is highly unusual. Maybe the best way to show this is in the consistency in the trends. The linear trend in the September ice thickness from 1987 to 2012 explains 90% of the variability, much more than any other comparable interval since 1948. The observational record for sea ice is more spotty before this time, though researchers are piecing together a more comprehensive picture from various ship observations. So far that picture doesn’t suggest that large variations in sea ice extent were anything but regional over the last 120 years or so.
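For readers unfamiliar with the phrasing, "the linear trend explains 90% of the variability" means that the coefficient of determination (R²) of a straight-line fit to the September thickness series is about 0.9. The sketch below shows that calculation on an invented thickness series; the PIOMAS values themselves are not reproduced here.

```python
# Sketch of what "the linear trend ... explains 90% of the variability" means:
# the coefficient of determination (R^2) of a straight-line fit to the series.
# The September thickness values below are invented for illustration, not PIOMAS output.
import numpy as np

years = np.arange(1987, 2013)
rng = np.random.default_rng(1)
thickness = 2.8 - 0.06 * (years - 1987) + rng.normal(0, 0.12, years.size)  # metres

slope, intercept = np.polyfit(years, thickness, 1)
fitted = slope * years + intercept
residual_var = np.var(thickness - fitted)
r_squared = 1.0 - residual_var / np.var(thickness)
print(f"trend = {slope*10:.2f} m/decade, variance explained = {r_squared:.0%}")
```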

A reconstruction based on proxy records suggests that sea ice extent is now the lowest in 1450 years (Kinnard, 2011[2]). A recent review of the current state of knowledge (Polyak et al, 2010[3]) concludes: “This ice loss appears to be unmatched over at least the last few thousand years and unexplainable by any of the known natural variabilities.”

The evidence for a substantial role of “global warming” in the current Arctic sea ice decline is very strong, both from observations and from modeling studies. Of course neither can “prove” the role of greenhouse gases, but there is overwhelming evidence it is true. To refute the evidence from models, one would have to show that they wildly underestimate natural variability (with respect to sea ice). Even in the NCAR CCSM4, which is one of the CMIP5 models with the highest “natural” variability, the sea ice extent trend over the last 30 years is still 50% due to greenhouse gases (Kay et al. 2011[4]).

Natural variability
Those who think this is not the case need to show some evidence that there are alternative explanations. Comparing ice volume instead of sea ice extent greatly reduces the natural variability compared to the trend and shows an earlier and more definitive separation than ice area between models run with or without increased greenhouse gas forcings (Schweiger et al 2011[5]).

While natural variability is very important for determining the ice extent, primarily through the action of the winds, I see a very consistent trend in the mean ice thickness with relatively little year-to-year variation. So while natural variability can strongly influence the ice area and extent, I doubt there is a strong component in the variability of the mean ice thickness within the Arctic Ocean. In the peripheral seas the winds are very important for determining heavy or light ice years and hence in these areas the variability associated with circulation changes can be very large, both for ice extent and thickness. There is evidence that an extended positive phase of the AO during the early 1990s helped flush out older, thicker ice and helped set up the subsequent decline in sea ice (Rigor et al. 2002[6], Lindsay et al. 2005[7]). Since then the ice decline cannot be explained by variations in the AO. Evidence from models (Day et al. 2012[8]) indicates that the AO may not play much of a role in sea ice variability. That same study suggests that the AMO may indeed play a significant role in sea ice decline and that as much as 3%/decade of the 10%/decade trend in September sea ice extent between 1979-2010 may be due to AMO variability. There is also some observational evidence that sea ice extent may be influenced by the AMO, but none of this evidence suggests that an Arctic-wide change in ice extent as seen over the last decade is possible due to these types of modes of natural variability alone. It also appears ice thickness within the Arctic Ocean is less closely tied to the AMO.

As shown by Stroeve et al. (2012[9]), the CMIP5 models as a group underestimate the sea ice trend. But there is no requirement that reality should follow the group mean, nor is any model that reproduces the observed trend better than another therefore the preferable model. In fact, there are ensemble members that match reality fairly well (e.g. Schweiger et al., 2011) but that shouldn’t fool us into believing them more. Given that CCSM4, a model with rather large (and possibly excessive) natural variability, pins 50% of the sea ice loss on greenhouse gases, probably more than this is due to greenhouse emissions.

Ice free
The Arctic will likely be largely ice free at the end of some summers within a decade or two. Small bits of ice might remain some years, but they may not matter for much. Current research does not support the notion of any “tipping” points for summer sea ice so if we somehow magically could turn off the forcing that comes from greenhouse gases, sea ice would likely grow back relatively quickly. Unfortunately that is not likely to happen. Winter ice will remain for a long time, a century or more. How long probably depends mostly on the future rate of greenhouse gas emissions.

Biosketch
Ron Lindsay is an Arctic climatologist at the Polar Science Center Applied Physics Laboratory of the University of Washington in Seattle. Lindsay is interested in how the sea ice in the Arctic moves, grows, and decays in response to changing environmental conditions and how the changes in the ice pack are impacting the atmosphere above. To pursue these research themes he uses a wide variety of in situ and remote sensing data and numerical models. In support of these interests he has joined the IceBridge science team to help direct a NASA program to monitor ice thickness from aircraft. He is also developing a capability for modeling the response of the atmosphere to changing pack ice conditions in order to understand the extent to which the heat absorbed in the open water areas in the summer slows the growth of ice in the winter. Lindsay has been conducting Arctic research for over 35 years and has been with the Polar Science Center since 1988.



[1] Schweiger, A., R. Lindsay, J. Zhang, M. Steele, H. Stern, and R. Kwok. 2011. Uncertainty in Modeled Arctic Sea Ice Volume. J. Geophys. Res., doi:10.1029/2011JC007084

[2] Kinnard, C., C. Zdanowicz, D. Fisher, and E. Isaksson, 2011: Reconstructed changes in Arctic sea ice over the past 1,450 years, Nature, 509-512, doi:10.1038/nature10581.

[3] Polyak, L., and several others (2010), History of sea ice in the Arctic, Quaternary Sci. Rev., 29, 1757-1778, doi: 10.1016/j.quascirev.2010.02.010.

[4] Kay, J. E., M. M. Holland, and A. Jahn (2011), Inter-annual to multi-decadal Arctic sea ice extent trends in a warming world, Geophys. Res. Lett., 38, L15708, doi:10.1029/2011GL048008.

[5] Schweiger, A., R. Lindsay, J. Zhang, M. Steele, H. Stern, and R. Kwok. 2011. Uncertainty in Modeled Arctic Sea Ice Volume. J. Geophys. Res., doi:10.1029/2011JC007084

[6] Rigor, I.G., J.M. Wallace, and R.L. Colony, Response of Sea Ice to the Arctic Oscillation, J. Climate, v. 15, no. 18, pp. 2648 – 2668, 2002.

[7] Lindsay, R. W. and J. Zhang, 2005: The thinning of arctic sea ice, 1988-2003: have we passed a tipping point?. J. Climate, 18, 4879–4894.

[8] Day, J.J., J.C Hargreaves, J.D. Annan, and A. Abe-Ouchi (2012), Sources of multi-decadal variability in Arctic sea ice extent, Env. Res. Lett., 7, 034011, doi: 10.1088/1748-9326/7/3/034011.

[9] Stroeve, J.C., V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier (2012), Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations, Geophys. Res. Lett., 39, L16502, doi:10.1029/2012GL052676.

Summary of the Climate Dialogue on Arctic sea ice

The decline of Arctic sea ice is one of the most striking changes of the Earth’s climate in the past three decades. In September 2012 the sea ice extent reached a new record low after an earlier record in 2007. Both ice extent and volume have decreased steadily and if things continue this way the Arctic will be ice free in the summer at some point in the future.

Given the recent new record the melting of the Arctic was the logical choice as the first topic on this new Climate Dialogue platform. We are very glad that Walt Meier, Ron Lindsay and Judith Curry took up the challenge to engage with each other. We would also like to thank the many climate scientists and other interested readers who joined the discussion via the public comments. We had over 25,000 hits in the first three weeks, which exceeded our expectations for the first round of discussion.

This summary is solely based on the contributions of the three invited scientists Walt Meier, Ron Lindsay and Judith Curry. It’s not meant to be a consensus statement. It’s just a summary of the discussion and should give a good overview of how these three scientists view the topic at this moment, i.e. what they agree and disagree on, and why. In our introductory article we presented six questions and we will treat each one separately.

1. What are the main processes causing the decline in Arctic sea ice?

Over the last 30+ years, Arctic sea ice has declined precipitously, particularly during summer. Summer ice extent has decreased by ~50%, including most of the older, thicker ice.

Source: http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/

Sea ice volume has decreased even more: the monthly averaged ice volume for September 2012 was 3,400 km3, which is 72% lower than the 1979-2011 September mean.

Source: http://psc.apl.washington.edu/wordpress/research/projects/arctic-sea-ice-volume-anomaly/

There is no disagreement about these facts. However, it is less clear what the main processes are that caused the decline.
The discussants agree that relatively little heat (~0.5 W/m2) is necessary to explain the decline of Arctic sea ice in the past three decades. These changes are so small that our observational systems are unable (yet) to detect the main sources for this trend.
The discussants agree that in general melting from the ocean is much more effective than melting from the air. However, there is little evidence that transport from either the Atlantic or the Pacific contributed much to the melting in the past decades.
A number of processes seem relevant: earlier snow melting in spring leads to melt ponds on the sea ice, opening the Arctic Ocean to incoming solar radiation, which then melts the ice from both above and below. Clouds are also an important player in these processes, although not much is known about the trends in clouds in this area.
The discussants stress that it’s difficult to separate cause and effect. The major forcings and feedbacks influencing the Arctic sea ice can change from year to year.

                                                                              Meier  Curry  Lindsay
The decline in sea ice extent since 1979 is very well documented/undisputed    5      5      5
The decline in sea ice volume since 1979 is very well documented/undisputed    5      4      5
Two thirds of the melting each summer is taking place from below the ice       3      x      x
Earlier snow melt in spring is playing a big role in the summer melting        4      4      4
By far the largest component causing the seasonal melting is the solar flux    5      5      5
The influx of warmer waters from the Atlantic has played a minor role in
causing the decline in Arctic sea ice                                          x      x      x

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

2. How unusual is the current decline in historical perspective?

Lindsay and Meier have more confidence that the current decline is unprecedented in historical context. Curry stressed the lack of data before 1979, which hampers our understanding of the state of Arctic sea ice in the past. Meier, on the other hand, mentioned several studies that shed light on past sea ice conditions and how they differ from the current situation. The participants agree that during the Holocene Thermal Maximum (around 8000 ybp) the Arctic was likely also ice free or nearly ice free in the summer. At that time temperatures in the Arctic were similar to today’s or even higher.

                                                                              Meier  Curry  Lindsay
The current decline in ice extent is unprecedented in the last century         5      4      5
The current decline in ice extent is unprecedented in the last two millennia   3      x      3
The current decline in ice extent is unprecedented in the Holocene             3      x      2

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

3. Is there evidence for a substantial role of natural variability?

The discussants agree that a shift in the Arctic Oscillation (AO) in the late 80s seems to have started the decline. A positive AO, especially in winter, pushed older, thicker ice out of the Arctic through the Fram Strait. When the AO went back to normal, however, the decline in sea ice continued. Meier and Lindsay conclude from this that oscillations like the AO, but also the NAO and PDO, probably played a minor role in the continuing decline. Model simulations suggest that the AMO might have contributed between 5% and 30% of the melting. Curry is not so sure about this. She mentions a hemispheric climate shift in 2001 that accelerated the decline, followed by a local regime shift in 2007 that has resulted in all the minima since then being well below normal, with a high amplitude seasonal cycle. Lindsay and Meier also have more confidence in the models than Curry. Lindsay said it isn’t likely that they hugely underestimate natural variability, but this is exactly what Curry thinks the models do.

                                                                              Meier  Curry  Lindsay
A shift in the AO to positive values started the decline in the early 90s      4      4      4
Now that the ice is thinner, the effect of natural oscillations on the
sea ice trend is much smaller                                                   4      3      4
Models underestimate natural variability considerably                          2      5      1

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

4. What is the role of ‘global warming’?

There is disagreement about the role of global warming. Both Lindsay and Meier sum up evidence for a large role of “global warming” in the current decline in sea ice. Lindsay mentions the good correlation with Northern Hemisphere temperatures, showing that the sea ice is not on its own regional trajectory but follows the trend of a larger area. Meier notes the fact that the warming now is pan-Arctic and outside the range of natural variability for the last few centuries. Curry acknowledges a contribution of global warming to the longer-term trend. But at the same time she notes that locally any radiative forcing from greenhouse gases is swamped by inter-annual variability in cloud radiative forcing.

                                                                              Meier  Curry  Lindsay
The evidence for a substantial role of "global warming" in the current
Arctic sea ice decline is very strong                                          5      4      5

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

5. Quantification of the anthropogenic contribution to sea ice decline

The participants agree it is unlikely the contribution of greenhouse gases to the recent decline is lower than 30%. Curry even said she wouldn’t know of any publishing climate scientist who would go lower than 30%. Curry proposed a range of 30 to 70% greenhouse gas contribution to the recent decline in sea ice extent. Her best estimate would be 50%. Lindsay agreed with this best estimate of 50% for extent. He added, though, that sea ice volume is his preferred metric because it shows less year-to-year variability. For sea ice volume he would go higher, say 70%. Meier proposed a smaller range of 50 to 70%.

                                                                              Meier     Curry     Lindsay
What is your preferred range w.r.t. the contributions of anthropogenic
forcing to the decline in sea ice extent?                                     50-95%    30-70%    30-95%
What is your preferred range w.r.t. the contributions of anthropogenic
forcing to the decline in sea ice volume?                                     50-95%    30-70%    30-95%

6. Could the Arctic be ice free in the near future?

None of the participants is very enthusiastic about the idea that the Arctic could be ice free in the summer within a few years. Meier explained that so far the “easy” ice has melted but that now we’re getting to the “more difficult” ice north of Greenland and the Canadian Archipelago. “The predominant ice circulation pushes ice toward those coasts resulting in thick ice that tends to get replenished.”
Lindsay is most confident that even on a time scale of one or two decades greenhouse forcing should cause a further decline. Curry emphasized that on this time scale natural fluctuations will dominate the effect of CO2. For her a reversal of the trend is therefore possible. Meier “wholeheartedly” agreed with Curry that decadal prediction of sea ice is going to be very difficult.
Curry stated that the currently used definition of “ice free” (being less than 1 million km2 of ice) is misleading as it is not really ice free. Meier defended the definition as being valid for all practical purposes like ship navigation, the albedo feedback and impacts on the ecosystem.
None of the participants believe in a tipping point. Lindsay noted that if we magically could turn off the forcing the sea ice could recover pretty quickly. Lindsay: “Unfortunately that is not likely to happen.”

                                                                              Meier      Curry  Lindsay
The Arctic could be ice-free in a few years                                     1          1       1
The sea ice could (partly) recover in the next two decades due to
natural variability                                                             2          3       2
What is the most likely period that the Arctic will be ice free
for the first time?                                                          2030-2050     x    2020-2060

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

Extended summary of the Climate Dialogue on Arctic sea ice

The decline of Arctic sea ice is one of the most striking changes of the Earth’s climate in the past three decades. In September 2012 the sea ice extent reached a new record low after an earlier record in 2007. Both ice extent and volume have decreased steadily and if things continue this way the Arctic will be ice free in the summer at some point in the future.

Given the recent new record the melting of the Arctic was the logical choice as the first topic on this new Climate Dialogue platform. We are very glad that Walt Meier, Ron Lindsay and Judith Curry took up the challenge to engage with each other. We would also like to thank the many climate scientists and other interested readers who joined the discussion via the public comments. We had over 25,000 hits in the first three weeks, which exceeded our expectations for the first round of discussion.

This summary is solely based on the contributions of the three invited scientists Walt Meier, Ron Lindsay and Judith Curry. It’s not meant to be a consensus statement. It’s just a summary of the discussion and should give a good overview of how these three scientists view the topic at this moment, i.e. what they agree and disagree on, and why. In our introductory article we presented six questions and we will treat each one separately.

1. What are the main processes causing the decline in Arctic sea ice?

Over the last 30+ years, Arctic sea ice has declined precipitously, particularly during summer. Summer ice extent has decreased by ~50%, including most of the older, thicker ice.

Source: http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/

Sea ice volume (as determined by the PIOMAS model[i]) has decreased even more. The model mean annual cycle of sea ice volume over the period 1979-2011 ranges from 28,700 km3 in April to 12,300 km3 in September. However, monthly averaged ice volume for September 2012 was 3,400 km3. This value is 72% lower than the mean over this period, 80% lower than the maximum in 1979, and 2.0 standard deviations below the 1979-2011 trend.

Source: http://psc.apl.washington.edu/wordpress/research/projects/arctic-sea-ice-volume-anomaly/
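As a back-of-envelope check on these figures, the sketch below works only from the numbers quoted in the paragraph above; the 1979 September volume and the detrended standard deviation behind the "2.0 standard deviations" statement are not given in the text and are treated as unknowns.

```python
# Back-of-envelope check of the volume figures quoted above. Only the September
# mean (12,300 km3) and the September 2012 value (3,400 km3) are taken from the
# text; the 1979 value and the detrended standard deviation are NOT given there
# and are treated as unknowns/placeholders below.
sept_mean_1979_2011 = 12_300.0   # km3, from the text
sept_2012           = 3_400.0    # km3, from the text

below_mean = (sept_mean_1979_2011 - sept_2012) / sept_mean_1979_2011
print(f"2012 is {below_mean:.0%} below the 1979-2011 September mean")   # ~72%

# "80% lower than the maximum in 1979" implies a 1979 September volume of roughly
# sept_2012 / (1 - 0.80) ~= 17,000 km3; this is inferred, not stated in the text.
implied_1979 = sept_2012 / (1.0 - 0.80)
print(f"implied 1979 September volume: ~{implied_1979:,.0f} km3")

# "2.0 standard deviations below the 1979-2011 trend" means
# (trend value for 2012 - observed 2012) / std(detrended residuals) ~= 2.0;
# both the trend value and the residual spread would come from the PIOMAS series itself.
```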

There is no disagreement about these facts. However, it is less clear what the main processes are that caused the decline.

The discussants all agree that relatively little heat is necessary to get the melt rates of the recent past. Like Lindsay said: “[…] given the small amount of heat needed to melt the ice at the rate we have seen (less than 0.5 W/m2 annual average), is it a hopeless task to find a definitive mechanism, particularly since the dominant forcing for ice anomalies likely changes from year to year?”

These changes are too small to measure. Lindsay: “It seems this means annual change in the net heat balance of the ice and the changes in the mean annual melt and growth rates are much too small to be accurately measured by any observational system that looks at the entire region.” He adds: “[…] teasing out the actual mechanisms for the decline is very tricky (e.g. whether it is changes in the ocean or in the atmosphere or both or what processes are responsible). Observational evidence is difficult to interpret, since the decline itself modifies the lower atmosphere and the surface fluxes. What is cause and what is effect?”

Meier adds that given the small amount of heat necessary to melt the ice it is rather surprising that the sea ice has been relatively stable in the past. “In regards to the stability of the Arctic sea ice, I would agree with Ron [Lindsay] that it may be more surprising that it has been so stable over our years of observations, both in our modern satellite record (since 1979) and earlier. Given that the ice is quite thin (overall on average 2.5 meters or so, with multiyear being 3-4 meters), it doesn’t take much forcing, relatively speaking, to melt completely during summer (1 W/m2 or less).”
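A rough calculation makes clear why such small fluxes matter. Using the latent heat of fusion and the density of sea ice, and assuming (purely for illustration) a mean thinning rate of about 5 cm per year, the implied annually averaged heating is on the order of the 0.5 W/m2 mentioned above.

```python
# Rough check of why ~0.5 W/m2 of extra annually averaged heating is enough to
# produce the observed thinning. The ~5 cm/yr mean thickness loss is an
# illustrative assumption, not a number taken from the discussion above.
rho_ice   = 917.0      # kg/m3, density of sea ice (approximate)
L_fusion  = 3.34e5     # J/kg, latent heat of fusion of ice
dH_per_yr = 0.05       # m/yr, assumed mean thinning rate
seconds_per_year = 365.25 * 24 * 3600

energy_per_m2 = rho_ice * L_fusion * dH_per_yr            # J/m2 per year
flux = energy_per_m2 / seconds_per_year                   # W/m2
print(f"melting {dH_per_yr*100:.0f} cm/yr of ice needs ~{flux:.2f} W/m2 annually averaged")
```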

Ocean
The discussants agree that heat from the ocean is more effective in melting the ice than heat from the air. As Curry puts it: “Melting the ice from below is much more powerful than melting the ice from above, in terms of W/m2. Note that solar radiation penetrates into the mixed layer through open water leads and melt ponds and thin ice, so solar acts to melt from below as well as from above.”

Curry also referred to model calculations that tried to figure out how much heat from the atmosphere would be needed. “From the Arbetter et al. paper[ii], which asked how much IR forcing is needed to melt the ice from above, the answer depended on which sea ice model you used, but for a dynamic/thermodynamic sea ice model, the result was tens of W/m2.”

Meier: “The ocean is indeed a very important part of the sea ice melt story. Water is a much more effective mechanism to transfer heat to the ice compared to the atmosphere. This is seen in mass balance buoys presented by Don Perovich and colleagues at the U.S. Army Cold Regions Research and Engineering Lab that measure the relative contributions by the ocean and atmosphere to summer melt[iii]. […] near the ice edge bottom melting tends to dominate.”

It isn’t clear yet whether transport of warm water into the Arctic is contributing much to the melting from below. Lindsay: “I don’t know if it is possible to measure any additional heat being drawn from the warm Atlantic layer below the cold halocline.” Meier mentions some indications for transport into the Arctic: “This ocean contribution [to the melting] is due to in situ ocean warming or transport of warm water into the Arctic. There are some indications of influxes of warmer surface and near-surface water in the Pacific region (e.g. by W. Maslowski at the U.S. Naval Postgraduate School), but most of the heating is in situ due to solar insolation[iv]. Steele et al. also find that near the ice edge, bottom melt accounts for 2/3 of the thickness melt vs. 1/3 for surface melt from the atmosphere.”

This 1/3 vs. 2/3 ratio can’t be used for the Arctic as a whole yet. Lindsay: “I don’t agree that 2/3 of the melt comes from the ocean. How do we know that?” Adding: “If there is a trend in ocean heat flux I am not aware of it, except for solar heat absorbed by the ocean that subsequently melts ice.”

Solar insolation
Nevertheless a lot of the heat that is causing the seasonal melting now seems to come from in situ solar heating. Like Meier explains: “Even though the solar insolation maximum occurs when much of the Arctic Ocean is still ice-covered (i.e. June 21), a significant amount of heat is absorbed through the ocean. Buoy data[v] surface temperatures are >5°C, which is 7+°C above the melting point for the ocean water. These are surface temperatures, but the heat extends down several meters (via communication with Mike Steele). That is a lot of heat.”
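To put "a lot of heat" into rough numbers, the sketch below estimates how much ice the heat stored in a warmed near-surface layer could melt. The 10 m layer depth is an assumption for illustration; the 7°C above freezing comes from the buoy temperatures quoted above.

```python
# Illustrative only: how much ice the heat stored in a warmed near-surface layer
# could melt. The 10 m depth and 7 C warming above freezing are assumptions
# loosely based on the buoy temperatures quoted above, not measured values.
rho_sw  = 1025.0    # kg/m3, seawater density (approximate)
cp_sw   = 3990.0    # J/(kg K), seawater specific heat (approximate)
depth   = 10.0      # m, assumed warmed-layer depth
dT      = 7.0       # K above the freezing point, from the quote above

rho_ice  = 917.0    # kg/m3
L_fusion = 3.34e5   # J/kg

heat = rho_sw * cp_sw * dT * depth                 # J per m2 of ocean surface
ice_melted = heat / (rho_ice * L_fusion)           # m of ice that heat could melt
print(f"~{heat/1e6:.0f} MJ/m2 stored, enough to melt ~{ice_melted:.1f} m of ice")
```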

With lots of the heat coming from in situ solar heating in the summer, a few processes are relevant: early snow melting in spring leading to melt ponds on the sea ice, opening the Arctic Ocean to incoming solar radiation, which then melts the ice from both above and below. Clouds are also an important player in these processes, although not much is known about the trends in clouds in this area. Curry: “With regards to the summer minimum, the clouds are contributing and seem to have been a major factor during 2007. Clouds are much more powerful radiatively than CO2, so if we are talking about radiative forcing, clouds should be front and center in the discussion.” This doesn’t mean though that Curry is thinking that clouds in general are the main driver for summer melting: “This response does not imply that I think clouds are the only or even the main driver of the summer sea ice minimum.”

Lindsay: “It is very hard to really know what the long-term changes in the cloud radiative forcing is in the central Arctic, in part because satellite retrievals of cloud properties can be biased by changing surface properties, so if the surface changes (less ice) is the change in estimated cloud properties real or just an artifact?” Slightly later in the discussion Lindsay adds this: “As Judith says, clouds are the big player in radiative fluxes. How they are changing in response to changing ice in amount, composition, and vertical structure is still an open research question, so we don’t really know if cloud changes will be a positive or a negative feedback. Because they are so important I would not be surprised if they are found to be at least part of the source of the ice melt, but because cloud temperature and properties may change due to changing surface properties, sorting out cause and effect could be difficult.”

So snow and ice feedback could both be positive leading to larger melting in the summer. Lindsay: “Atmospheric fluxes are highly variable and accurately determining a long-term trend is difficult. A possible strong feedback is the lower albedo of melting snow in the spring so that an earlier onset of melt is amplified with earlier snow melt, earlier melt pond formation, and earlier lower albedo of bare ice[vi] which would amplify melt. In terms of melt, by far the largest component is the solar flux, so understanding how the surface albedo changes is crucial.”
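The leverage of surface albedo on the absorbed solar flux can be illustrated with a simple calculation: absorbed flux = (1 - albedo) x incoming shortwave. The incoming shortwave value and the albedo figures below are typical textbook numbers used purely for illustration, not values taken from the discussion.

```python
# Illustrative numbers only: how surface albedo controls the absorbed solar flux
# (absorbed = (1 - albedo) * incoming shortwave). The 200 W/m2 summer-mean
# incoming shortwave and the albedo values are typical textbook figures, not
# measurements from the discussion above.
incoming_sw = 200.0   # W/m2, assumed summer-mean downwelling shortwave

albedos = {
    "dry snow":     0.80,
    "melting snow": 0.70,
    "bare ice":     0.60,
    "melt pond":    0.30,
    "open water":   0.07,
}

for surface, albedo in albedos.items():
    absorbed = (1.0 - albedo) * incoming_sw
    print(f"{surface:12s}: albedo {albedo:.2f} -> ~{absorbed:.0f} W/m2 absorbed")
```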

Ice growth in winter
The extra heat in the ocean is quickly lost in the autumn, though, and first-year ice grows faster than multi-year ice. As Curry explains: “Ice volume increase during winter can actually be larger for first year (FY) ice than for a field dominated by multi-year (MY) ice. Thin ice grows at a much faster rate than thick ice. The change in ice volume for MY-dominated vs. FY dominated during summer is trickier. Thicker ice actually starts melting a bit earlier than thin ice owing to the larger sensible heat loss over the thin ice, but the thin ice may entirely disappear over the course of the summer by melting. A key issue is whether the FY ice can survive through the summer. This depends on the local thermodynamics, and also the export [through the Fram Strait] and also breakup induced by a big cyclonic storm. So following the time variation of the second year ice is another key to understanding what is going on (thermodynamics vs dynamics/export)[vii].”

Lindsay: “A lot of different aspects of the system are changing and I can’t say which one is dominant, if any. I am not aware of observations that show the heat flux from the ocean is increasing. A lot of solar heat is now being dumped into the ocean in late summer, but much of this heat is quickly lost to space in the fall, so the impact on winter growth may be modest, again the thin ice growth rate feedback. How much of this new summer heat is sequestered and slows ice growth all winter is an open research question.”
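Curry’s point that thin ice grows much faster than thick ice follows from the fact that conductive heat loss through an ice slab scales inversely with its thickness. The sketch below is a minimal Stefan-style illustration, assuming a fixed temperature difference across the ice, no snow cover and no ocean heat flux.

```python
# Minimal Stefan-style growth sketch illustrating why thin ice thickens faster
# than thick ice: conductive heat loss through the slab scales as 1/H. A constant
# surface temperature, no snow cover and no ocean heat flux are assumed purely
# for illustration.
k_ice    = 2.2      # W/(m K), thermal conductivity of sea ice (approximate)
rho_ice  = 917.0    # kg/m3
L_fusion = 3.34e5   # J/kg
dT       = 20.0     # K, assumed temperature difference across the ice in winter

def growth_rate_m_per_day(H):
    """Bottom growth rate of ice of thickness H under a fixed temperature gradient."""
    flux = k_ice * dT / H                       # conductive heat loss, W/m2
    return flux / (rho_ice * L_fusion) * 86400  # m of new ice per day

for H in (0.3, 1.0, 3.0):
    print(f"H = {H:>3.1f} m: ~{growth_rate_m_per_day(H)*100:.1f} cm/day")
```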

Summary
Over the last 30+ years, Arctic sea ice has declined precipitously, particularly during summer. Summer ice extent has decreased by ~50%, including most of the older, thicker ice. Sea ice volume has decreased even more: the monthly averaged ice volume for September 2012 was 3,400 km3, which is 72% lower than the 1979-2011 September mean. There is no disagreement about these facts. However, it is less clear what the main processes are that caused the decline.
The discussants agree that relatively little heat (~0.5 W/m2) is necessary to explain the decline of Arctic sea ice in the past three decades. These changes are so small that our observational systems are unable (yet) to detect the main sources for this trend.
The discussants agree that in general melting from the ocean is much more effective than melting from the air. However, there is little evidence that transport from either the Atlantic or the Pacific contributed much to the melting in the past decades.
A number of processes seem relevant: earlier snow melting in spring leads to melt ponds on the sea ice, opening the Arctic Ocean to incoming solar radiation, which then melts the ice from both above and below. Clouds are also an important player in these processes, although not much is known about the trends in clouds in this area.
The discussants stress that it’s difficult to separate cause and effect. The major forcings and feedbacks influencing the Arctic sea ice can change from year to year.

                                                                              Meier  Curry  Lindsay
The decline in sea ice extent since 1979 is very well documented/undisputed    5      5      5
The decline in sea ice volume since 1979 is very well documented/undisputed    5      4      5
Two thirds of the melting each summer is taking place from below the ice       3      x      x
Earlier snow melt in spring is playing a big role in the summer melting        4      4      4
By far the largest component causing the seasonal melting is the solar flux    5      5      5
The influx of warmer waters from the Atlantic has played a minor role in
causing the decline in Arctic sea ice                                          x      x      x

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

2. How unusual is the current decline in historical perspective?

Lindsay is most outspoken that the current decline is remarkable in historical perspective. “The current decline in ice extent and volume is highly unusual. Maybe the best way to show this is in the consistency in the trends. The linear trend in the September ice thickness from 1987 to 2012 explains 90% of the variability, much more than any other comparable interval since 1948. The observational record for sea ice is more spotty before this time, though researchers are piecing together a more comprehensive picture from various ship observations. So far that picture doesn’t suggest that large variations in sea ice extent were anything but regional over the last 120 years or so.”

Curry on the other hand is much more focused on the uncertainties before the satellite measurements (1979) start. “The conventional understanding of Arctic sea ice extent shows a general retreat of seasonal ice since about 1900, and accelerated retreat of both seasonal and annual ice during the latter half of the 20th century. Hints that this understanding may be overly simplistic in view of the uncertainties and ambiguities in the period prior to satellites are described in this presentation[viii] by John Walsh about plans for a gridded sea ice product back to 1870. Further, I’ve recently had some discussions about this with a historian who is investigating historical reports of sea ice extent during the period 1920-1950. He has found reports of reduced wintertime extent during this period, and a general lack of data from the Russian sector. While this material is not yet published, it reminds us that prior to 1979, we do not have a reliable data set of global sea ice extent. The lack of such a data set hampers our ability to test our ideas about the impact of natural variability versus anthropogenic forcing on sea ice variability and change.”

Meier is closer to Lindsay. He thinks the historical evidence points towards lower ice extent in the 1930s, but these changes were more regional: “While our most complete dataset, the one we have the highest confidence in, is the passive microwave record, there is fairly complete coverage from operational ice charts back to at least the mid-1950s. And there are Russian ice charts for the Eurasian Arctic back to the early 1930s. Though not complete, these do extend the record and I think provide some sense of the interannual and decadal natural variability of the ice. There are indications of lower ice in the 1930s in the Russian Arctic[ix], suggesting the influence of a multi-decadal cycle (AMO?), at least in the Russian Arctic. But the data show a different character in terms of the seasonality and regionality of the lower ice conditions compared to the recent decline.” In another comment he adds: “Before the 1950s, the 1930s are often mentioned as a warm period. However, this is primarily in the Atlantic region, where observations were more common. Ice charts from Denmark[x] and Russia indicate some periods of low summer ice, but on a more regional scale than we see now.”

Holocene
Going back further in time, both Lindsay and Meier refer to the reconstruction of Kinnard for the last 1,450 years: “A reconstruction based on proxy records suggests that sea ice extent is now the lowest in 1450 years[xi].” A review article by Polyak[xii] goes back even further and concludes: “This ice loss appears to be unmatched over at least the last few thousand years and unexplainable by any of the known natural variabilities.” That same review of longer records indicates that the last time the Arctic had little or no summer ice was during the Holocene Thermal Maximum (~8000 years before present).

There has been little discussion about the uniqueness of the recent decline and the discussants indicate they are not experts on this matter. Meier has looked into the question, though, because at his institute, NSIDC, they receive a lot of questions about this. He offers an interesting thought experiment about what it could mean that the Arctic has been ice free in the past (~8000 ybp). “There are (at least) two ways of looking at it: If the Arctic has been ice-free during summer in the past, obviously it was due to natural forcing, so the current decline could also be due to natural forcing. If the Arctic has been ice-free during summer in the past due to natural forcing then anthropogenic forcing of a similar magnitude will have a similar effect. The first view is not implausible on its face, but it is simplistic because it doesn’t consider that the same result could be due to different causes. Lightning starts forest fires, but that doesn’t rule out that a forest fire may be due to human actions. The second view is much more useful in my view because it has potential predictive value at least for the equilibrium state of the ice cover under a given forcing. For example, in the Holocene (~8000 ybp), the Arctic Ocean likely had ice-free or near ice-free summers and temperatures were similar to, or maybe still a bit higher than, Arctic temperatures in recent years. Thus, the decline we’re seeing is entirely expected and we would expect to see it continue to near-zero summer extent in the coming years. The timing is still uncertain but it changes things from an “if the Arctic loses summer sea ice” to “when the Arctic loses summer sea ice”.”

Summary
Lindsay and Meier have more confidence that the current decline is unprecedented in historical context. Curry stressed the lack of data before 1979, which hampers our understanding of the state of Arctic sea ice in the past. Meier, on the other hand, mentioned several studies that shed light on past sea ice conditions and how they differ from the current situation. The participants agree that during the Holocene Thermal Maximum (around 8000 ybp) the Arctic was likely also ice free or nearly ice free in the summer. At that time temperatures in the Arctic were similar to today’s or even higher.

                                                                              Meier  Curry  Lindsay
The current decline in ice extent is unprecedented in the last century         5      4      5
The current decline in ice extent is unprecedented in the last two millennia   3      x      3
The current decline in ice extent is unprecedented in the Holocene             3      x      2

Scores: x = don't know, 1 = very unlikely, 2 = unlikely, 3 = as likely as not, 4 = likely, 5 = very likely

3. Is there evidence for a substantial role of natural variability?

According to Lindsay and Meier, the short answer to this question is ‘no’. Curry is far less sure about this, emphasizing our lack of knowledge about the issue. So it is fair to say that there is disagreement about this topic.
Lindsay stated: “I think we all agree that the AO (Arctic Oscillation), NAO (North Atlantic Oscillation), and PDO (Pacific Decadal Oscillation) have little role in the long-term decline. I am less certain about the AMO (Atlantic Multidecadal Oscillation), although I am inclined to think it too is a minor player.” Curry on the other hand said: “Large scale atmospheric and ocean circulations vary on multi-decadal time scales, which influence circulation regions on shorter time scales as well, which among other things influences sea ice characteristics. Untangling how all this works has received far too little attention in my opinion.”

The kick
Let’s first discuss why Meier and Lindsay don’t see a large role for natural oscillations. They acknowledge that the AO and other oscillations have influence on the sea ice but stress that this influence now seems minor. For example, all three see a role for the AO when the recent decline started in the late 80s/early 90s. Curry wrote in her guest blog: “During the late 1980s and early 1990s, the circulation patterns favored the motion of older, thicker sea ice out of the Arctic. This set the stage for the general decline in Arctic sea ice extent starting in the 1990s.” Lindsay and Meier seem to agree with this view on the decline. In a comment Lindsay wrote: “[…] I once wrote a paper that raised the question of whether a tipping point had been passed in the late 1980s, when the current decline in ice thickness in the Arctic Ocean began in earnest, coincident with a shift in the AO, a shift I called “the kick” which started the decline. Despite a return to normal AO conditions the decline continued.” And Meier in a more general explanation wrote: “For example, the AO has been observed to have a large effect on summer ice conditions. When the winter AO is positive, thick ice tends to get pushed out of the Arctic through the Fram Strait, leaving a thinner ice pack the following summer that is more likely to melt completely. The converse is true for a negative AO. However, the AO typically has a 3-7 year cycle, which does not correspond to the long-term trend. In addition, in recent years, the influence of the AO appears to have been broken, or at least weakened. For example, the 2009-2010 winter had the lowest AO on record (since 1950), and yet the summer 2010 minimum was among the lowest in the satellite record[xiii].”

So the discussants agree that the AO likely played a role in the early 90s to start the decline by pushing older and thicker ice out of the Arctic through the Fram Strait. Since then the correlation between the AO and Arctic ice has diminished, convincing Meier and Lindsay it now has a smaller role. In his guest blog Lindsay wrote: “Evidence from models[xiv] indicates that the AO may not play much of a role in sea ice variability. That same study suggests that the AMO may indeed play a significant role in sea ice decline and that as much as 3%/decade of the 10%/decade trend in September sea ice extent between 1979-2010 may be due to AMO variability. There is also some observational evidence that sea ice extent may be influenced by the AMO, but none of this evidence suggests that an Arctic-wide change in ice extent as seen over the last decade is possible due to these types of modes of natural variability alone. It also appears ice thickness within the Arctic Ocean is less closely tied to the AMO.”

Thin ice
Meier thinks the thinner ice has a different interaction with natural oscillations. Meier: “What is different now? The ice is thinner. Thinner ice is more easily moved around and out of the Arctic, is more easily broken up into smaller floes (that are more susceptible to melt), and is more easily melted completely during summer. We’ve seen this in recent years in the Beaufort Sea. Historically, this region has been a nursery of old thick ice and the ice moved in a clockwise direction in the Beaufort Gyre aging and thickening over many years. However, in recent years, the ice in the gyre has not survived the summer. The nursery has become a graveyard. There is likely some influence of ocean waters, which may have a cyclical natural varying component, but the thinner, more broken ice is the larger factor. Our expectations for how the ice responds to natural variability are based upon a thicker ice cover, which may no longer be valid.”

Another reason Lindsay doesn’t see a large influence of natural variability is that the sea ice thickness and/or volume (the preferred metric for Lindsay) is much less influenced by natural variability than the sea ice extent. “While natural variability is very important for determining the ice extent, primarily through the action of the winds, I see a very consistent trend in the mean ice thickness with relatively little year-to-year variations. So while natural variability can strongly influence the ice area and extent, I doubt there is a strong component in the variability of the mean ice thickness within the Arctic Ocean.”

Curry on the other hand sees a large role for so-called climate shifts, a concept that has recently been picked up in papers by Swanson and Tsonis[xv] and David Douglass[xvi]. These shifts are related to several natural oscillations, although the physical processes behind them are far from clear yet. In her guest blog she wrote: “In 2001/2002, a hemispheric shift in the teleconnection indices occurred, which accelerated the downward trend. A local regime shift occurred in the Arctic during 2007, triggered by summertime weather patterns that conspired to warm and melt the sea ice. The loss of multi-year ice during 2007 has resulted in all the minima since then being well below normal, with a high amplitude seasonal cycle. After 2007, there was another step loss in ice volume in 2010. In 2012, the basic pattern of this new regime was given a ‘kick’ by a large cyclonic storm in early August.”

Models
Another difference of opinion is originating in the confidence one should have in models. Lindsay defends the models: “To refute the evidence from models, one would have to show that they wildly underestimate natural variability (with regard to sea ice). Even in the NCAR CCSM4 which is one of the CMIP5 models with the highest “natural” variability, the sea ice extent trend over the last 30 years is still 50% due to greenhouse gases[xvii].” Adding later: “It is possible there are large unknown sources of long-term natural variability, but I think the models and the observations show CO2 is the major cause of the decline. Other sources of large long-term variability (say 30 to 100 years) that could contribute substantially to the decline are mostly speculation.”

Curry on the other hand does indeed think that models underestimate natural variability: “The relatively high attribution to AGW comes from climate models, which have substantial problems in simulating the Arctic climate […]. Not to mention that these climate models underestimate natural internal variability on multi-decadal timescales. […] I note that my group has just submitted a paper for publication analyzing the CMIP5 simulations of arctic sea ice. I am frankly trying to figure out how these models manage to produce any kind of sensible sea ice given that most of these models are biased cold (many are biased cold by 2°C or more). Our old friend ‘model calibration’ I assume, whereby 5 wrongs might make a ‘right’.”

So there is pretty strong disagreement here, mainly between Lindsay and Curry. Meier offers some consolation, pointing out how complicated the sea ice system and its interaction with the AO might be: “One thing that I think is clear is that the sea ice system is complicated and that many factors that influence each other are changing, making separating them out difficult. Going back to a point I made earlier, I think an intriguing aspect is how the influence of natural variability on the sea ice (and vice versa) is changing. In other words, AO affects sea ice, but how is that effect changing with a changing ice cover and are the changes in the ice in turn influencing the AO?”

Summary
The discussants agree that a shift in the Arctic Oscillation (AO) in the late 80s seems to have started the decline. A positive AO, especially in winter, pushed older, thicker ice out of the Arctic through the Fram Strait. When the AO went back to normal, however, the decline in sea ice continued. Meier and Lindsay conclude from this that oscillations like the AO, but also the NAO and PDO, probably played a minor role in the continuing decline. Model simulations suggest that the AMO might have contributed between 5% and 30% of the melting. Curry is not so sure about this. She mentions a hemispheric climate shift in 2001 that accelerated the decline, followed by a local regime shift in 2007 that has resulted in all the minima since then being well below normal, with a high amplitude seasonal cycle. Lindsay and Meier also have more confidence in the models than Curry. Lindsay said it isn’t likely that they hugely underestimate natural variability, but this is exactly what Curry thinks the models do.

A shift in the AO to positive values started the decline in the early 90s: Meier 4, Curry 4, Lindsay 4
Now that the ice is thinner, the effect of natural oscillations is much smaller: Meier 4, Curry 3, Lindsay 4
Models underestimate natural variability considerably: Meier 2, Curry 5, Lindsay 1

Scores: don’t know=x, very unlikely=1, unlikely=2, as likely as not=3, likely=4, very likely=5

4. What is the role of ‘global warming’?

This question is of course closely related to the question about natural variability, so the positions here are similar, with Lindsay and Meier seeing more evidence for a large role of global warming than Curry does. Their view is based on evidence from both observations and models.

In his guest blog Lindsay points out that the Arctic ice trends are correlated with temperature trends in the Northern Hemisphere. “One piece of evidence […] is the high correlation (R = -0.72, 57 years through 2011) between the rate of melt (including export) in the Arctic Ocean and melt season (May to September) surface temperatures in the rest of the hemisphere, from 20N to 60N (NCEP-R1). The surface temperatures south of the Arctic are likely less influenced by ice loss and the trends are likely more influenced by global forcings. Sea ice basically responds to hemispheric conditions and is not on its own trajectory.”
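As a rough illustration of the kind of statistic Lindsay cites, the sketch below computes a Pearson correlation between two annual series. The data and variable names are hypothetical placeholders, not the NCEP-R1 analysis he refers to, and the sign of R depends on whether the melt term is expressed as an ice loss or as an ice balance.

```python
# Minimal sketch (hypothetical data): correlating an annual Arctic melt-season
# ice balance with 20N-60N May-September surface temperatures, in the spirit of
# the R = -0.72 figure Lindsay quotes. Not his actual NCEP-R1 analysis.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1955, 2012)                      # 57 years through 2011

# Hypothetical melt-season temperature anomaly, 20N-60N (K): warming trend + noise
t_nh = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

# Hypothetical melt-season ice balance (km^3): more warming -> larger loss,
# so the balance (growth minus melt and export) becomes more negative over time
ice_balance = -2000.0 - 3000.0 * t_nh + rng.normal(0.0, 500.0, years.size)

# Pearson correlation between the two annual series
r = np.corrcoef(ice_balance, t_nh)[0, 1]
print(f"R = {r:.2f} over {years.size} years")      # strongly negative for this synthetic data
```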

Meier has similar arguments: “The evidence for a substantial role of “global warming” in the current sea ice decline comes from the fact that the decline (1) correlates with the global warming temperatures over the past several decades, (2) is outside the range of normal variability over the past several decades and likely over the past several centuries, (3) the decline is pan-Arctic, with all regions experiencing declines throughout all or most of the year.” And later in a comment: “I base my view on the decline of not only summer extent but the rapid loss of old ice and the thinning of the ice cover, along with the apparent change in the response of the ice to natural fluctuations (such as the AO mode, as I discussed in my previous comment). Of course, natural variations have played a role and will continue to do so, but I don’t see a large enough effect to account for a substantial fraction of the observed change.”

Curry stated in her guest blog that the “lack of data [before 1979] hampers our ability to test ideas on contribution of natural variability versus anthropogenic forcing on sea ice decline”. However, during the discussion and also in her blog she highlights the role of natural variability in the current sea ice decline. In her post she talks about the “basic story” as she sees it and then sums up all kinds of natural factors and shifts. This leads to the following statement about the role of global warming: “So, what is the contribution of anthropogenic global warming to all this? It’s difficult to separate it out. The polar regions are extra sensitive to CO2 forcing and water vapor feedback, owing to the low amounts of water vapor. However, any radiative forcing from greenhouse gases is swamped by inter-annual variability in cloud radiative forcing. In the bigger picture sense, greenhouse forcing is involved in complex nonlinear ways with the climate regime shifts. So there is undoubtedly a contribution from CO2 forcing, but it is difficult to find any particular signal in this year’s (2012, ed.) record minimum, other than the contribution of greenhouse warming to a longer term trend.” With that last sentence, though, Curry’s view comes closer to that of Meier and Lindsay. She acknowledges a role for greenhouse (global) warming in the decline of Arctic sea ice, although, as we will see, she tends towards a lower contribution of greenhouse gases than Lindsay and Meier.

Substantial
Models again play a role in the view of Meier and Lindsay. As Meier wrote: “Also, model simulations of sea ice cover consistently show a response of declining sea ice to increasing GHGs (albeit slower than the observed decline); conversely, model runs over the last 30 years without GHG forcing do not show a decline[xviii]. Finally, there does not appear to be a mechanism to sufficiently explain the long-term decline without including the effect of GHGs[xix].”

Lindsay again is the clearest about a large role for global warming: “The evidence for a substantial role of “global warming” in the current Arctic sea ice decline is very strong, both from observations and from modeling studies. Of course neither can “prove” the role of greenhouse gases but there is overwhelming evidence it is true. […] For those that think this is not the case, they need to show some evidence that there are alternative explanations. Comparing ice volume instead of sea ice extent greatly reduces the natural variability compared to the trend and shows an earlier and more definitive separation than ice area between models run with or without increased greenhouse gas forcings[xx].”

Curry on the other hand doesn’t place a lot of trust in model simulations of Arctic climate (see section 3 above).

Summary
There is disagreement about the role of global warming. Both Lindsay and Meier sum up evidence for a large role of “global warming” in the current decline in sea ice. Lindsay mentions the good correlation with Northern Hemisphere temperatures, showing that the sea ice is not on its own regional trajectory but follows the trend of a larger area. Meier notes that the decline is pan-Arctic and outside the range of natural variability of the last few centuries. Curry acknowledges a contribution of global warming to the longer-term trend, but at the same time she notes that locally any radiative forcing from greenhouse gases is swamped by inter-annual variability in cloud radiative forcing.

The evidence for a substantial role of “global warming” in the current Arctic sea ice decline is very strong: Meier 5, Curry 4, Lindsay 5

Scores: don’t know=x, very unlikely=1, unlikely=2, as likely as not=3, likely=4, very likely=5

5. Quantification of the anthropogenic contribution to sea ice decline

Curry made what she called a ‘wishy washy’ statement about the attribution of the melting to anthropogenic forcing. She wrote: “So . . . what is the bottom line on the attribution of the recent sea ice melt? My assessment is that it is likely (>66% likelihood) that there is 50-50 split between natural variability and anthropogenic forcing, with +/-20% range. Why such a ‘wishy washy’ statement with large error bars? Well, observations are ambiguous, models are inadequate, and our understanding of the complex interactions of the climate system is incomplete.”

Later in a comment she simplified this statement a bit: “I think a simpler way to look at this would be to attempt to put bounds on the AGW contribution to the recent sea ice melt. I propose a range of 30-70%. Walt and Ron seem to be above 50% but not going higher than 70%.” And in another comment she said: “If I am interpreting their [Lindsay’s and Meier’s] assessments correctly, am I correct to infer that we are all in agreement within the range of uncertainty that we would each acknowledge? The disagreement seems to arise if we are each forced to pick a single attribution value: mine would be 50%; I infer that Lindsay’s in particular would be higher. But given the uncertainties, is there any particular reason to force this level of specificity and highlight disagreements?”

Meier replied that a 50-70% range would be reasonable. “The 50-70% range for GHGs that Judith mentions is probably a reasonable spread in capturing the potential range. I would lean more toward the high side of that range because I don’t see the AMO and PDO having a large magnitude influence on summer ice. The AO does have a larger influence, but that has largely been lost in recent years.”

Note that this is a different range from the one Curry proposed; her range, 30-70%, is wider. Lindsay replied that he also accepts a large uncertainty: “I agree with Judith that the percent decline in sea ice due to greenhouse gases is rather uncertain. It depends on the time intervals considered for the base case and current state (how much natural variability is averaged out).”

Curry added that she can hardly think of any climate scientist who would go either above 70% or below 30%. “One of the complaints from the comments is that the scientists with ‘extreme’ views (i.e. outside of this range) were not included. I wonder if Peter Wadhams would have gone above 70%? I am trying to think of any published scientists working on sea ice or Arctic climate dynamics that would go below 30%, and I can’t think of any.” However, both Lindsay and Meier quoted Day et al. (2012)[xxi] in support of their position, who estimate that natural variability caused between 5% and 30% of the decline, which would allow the anthropogenic contribution to be as high as 95%.

Lindsay separates his estimates for ice extent and ice volume, leaning towards high percentages for ice volume: “I come back to the observation that ice volume is a much more consistent measure of the ice cover, showing much less year to year variability compared to the trend than ice extent or area. The CCSM3 model, for example, shows a clear separation in ice volume between the control and the A1B scenario as early as 1985, 10 to 15 years before the separation in ice extent[xxii]. The decline in volume is consistent with the PIOMAS estimates of ice volume (which is tied to the observed past weather and ice extent), given the uncertainties in both data sources. CCSM4 simulations show about a 50% decline in ice volume since the 1960’s with a typical ensemble spread on the order of 15%, so the CCSM4 runs indicate the decline in ice volume is about 3 times the natural variability, or about 70% of the decline is due to greenhouse gases. The decline in volume seen in the PIOMAS simulations is also very consistent, particularly if one focuses just on the Arctic Ocean since the late 1980’s. So I would go on the high side of the percentage loss due to greenhouse gases for ice volume and less for ice extent, maybe near 50%.”
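One way to read the arithmetic in that comment is sketched below. The 50% decline and roughly 15% ensemble spread are the numbers Lindsay gives, but the way they are combined here is our interpretation of his back-of-envelope reasoning, not a published attribution calculation.

```python
# Sketch of one reading of Lindsay's CCSM4 back-of-envelope numbers.
decline = 0.50   # ~50% decline in ice volume since the 1960s (CCSM4)
spread = 0.15    # typical ensemble spread, taken here as a proxy for natural variability

# The decline is roughly three times the natural variability...
print(f"decline / spread = {decline / spread:.1f}")        # ~3.3

# ...and if natural variability explains at most ~15 percentage points of the
# 50% decline, the share attributable to greenhouse forcing is about 70%.
forced_share = (decline - spread) / decline
print(f"forced share = {forced_share:.0%}")                # 70%
```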

Speculate
Later in the discussion all three acknowledged a great deal of uncertainty when making attribution statements. Meier for example wrote: “There seems to be a lot of wrangling over exactly what fraction of the observed change is attributable to GHGs vs. natural and other human (e.g., black carbon). There is clearly still uncertainty in any estimates and the models and data are not to the point where we can pin a number with great accuracy. Judith is more on the lower end, rightly pointing out the myriad natural factors. Ron and I tend toward the higher end. I base my view on the decline of not only summer extent but the rapid loss of old ice and the thinning of the ice cover, along with the apparent change in the response of the ice to natural fluctuations (such as the AO mode, as I discussed in my previous comment). Of course, natural variations have played a role and will continue to do so, but I don’t see a large enough effect to account for a substantial fraction of the observed change.”

And Lindsay wrote: “About the attribution of the decline to AGW vs. natural. The fact is we don’t really know and all we can do is speculate. That is not really science. Also the question is a little ambiguous. Are we referring to a linear trend, and if so over what period, or to a change for a particular year and against what base line? Do we assume natural variability is only contributing to the decline, or might it also slow it? So trying to put definite numbers on the ratio for the observed climate, even the uncertainty, is likely a futile effort. This can be done for ensemble model simulations of the climate, with all of the caveats that go with such studies, and the answer is a statistical summary for the model used, not a real-world analysis.”

Curry defended her focus on uncertainty by saying that “[…] when my uncertainty level seems higher than others, it is because I have a longer list of known unknowns than most others, which is traced back to my own publications and my service on the committees listed above.”

Summary
The participants agree it is unlikely that the contribution of greenhouse gases to the recent decline is lower than 30%. Curry even said she doesn’t know of any publishing climate scientist who would go lower than 30%. Curry proposed a range of 30 to 70% for the greenhouse gas contribution to the recent decline in sea ice extent; her best estimate would be 50%. Lindsay agreed with this best estimate of 50% for extent, adding though that sea ice volume is his preferred metric because it shows less year-to-year variability; for sea ice volume he would go higher, say 70%. Meier proposed a narrower range of 50 to 70%.

What is your preferred range w.r.t. the contributions of anthropogenic forcing to the decline in sea ice extent? Meier 50-95%, Curry 30-70%, Lindsay 30-95%
What is your preferred range w.r.t. the contributions of anthropogenic forcing to the decline in sea ice volume? Meier 50-95%, Curry 30-70%, Lindsay 30-95%

6. Could the Arctic be ice free in the near future?

Meier explained why he is not very enthusiastic about Peter Wadhams’ prediction that the Arctic could be ice free within a few years. “While I won’t totally dismiss the possibility, I think it is a very low probability event. My rationale is that getting there relies on continuing to lose volume at the same rate as we’ve done over the last decade or so. I think there are a couple of very good reasons why this is unlikely. First, as Judith has particularly noted, there have been some extreme events in recent years that have helped kick down the summer ice and remove old ice. While I think there is room to debate how large of an effect these have had, we can’t depend on such events to happen in a timely manner in the next few years. But the larger reason for my doubts is that the rapid losses have come from the Siberian and Alaskan regions of the Arctic. The region along northern Greenland and Canadian Archipelago have not lost much summer ice – for good reason. The predominant ice circulation pushes ice toward those coasts resulting in thick ice that tends to get replenished. In other words, we’ve seen a rapid decline in the “easy” ice to lose, but now we’re getting to the “more difficult” ice. I think it’s likely that that will go more slowly.”

Although Lindsay agrees with Meier about the ‘Wadhams hypothesis’, he does think the summers could be ice free within a decade or two. “The Arctic will likely be largely ice free at the end of some summers within a decade or two. Small bits of ice might remain some years, but they may not matter for much.”

Not surprisingly, Curry is the most reluctant to provide an estimate. She writes in her blog that “prediction of sea ice is hostage to predictions of the chaotic atmospheric and oceanic dynamics.” In a comment she writes: “Pretending that extrapolating an observed trend or that CMIP5 simulations will produce a useful decadal prediction of sea ice is pointless (well there is a potential point but it is to mislead).” And in a comment to James Annan: “My point is that I don’t know with any high level of confidence what the sea ice will look like in 10 years or 100 years.” Finally she concludes that: “On timescales of two decades, I expect the natural variability to dominate, and this could still take us in either direction (more or less ice).”

Definition
Curry was also a bit annoyed by the generally accepted definition of ice-free. “[…] ‘ice free’ is usually taken to mean less than 1 million sq. km. This is far from ‘ice free’ in actuality (I find this use of ‘ice free’ to be highly misleading). So what is the point of talking about some sort of ‘tipping’ point, when even during summer, the Arctic Ocean is not really projected to be ice free?”

Following up on Curry’s point about what “ice-free” means, Meier agreed that this isn’t a particularly well-defined term, but he thinks it is nonetheless useful for thinking about changes in the ice cover and its impact. “It will indeed be difficult to reach completely ice-free (zero) ice because of the geography and dynamics of the system. Semi-enclosed bays and channels (e.g., the Canadian Archipelago) are more resilient to break-up and melt and, as I mentioned in a previous post, the Greenland side of the Arctic gets replenished by ice moving across from Siberia. This ice can pile up into thick (several meters) ridges that are difficult to break up and melt completely.” Curry agrees: “This definition is used because it is very difficult to melt the thick ice around the Canadian Archipelago. And the issue of ‘ice free’ in the 21st century is pretty much a non-issue if you require this thick ice to disappear.”

Meier still thinks the definition is valid for practical purposes. “Instead of simply saying “ice-free”, my view is that it should be described as “ice-free for all practical purposes”. To me this means: seeing blue instead of white throughout the Arctic Ocean (except along the coasts), allowing ships to operate within the Arctic Ocean with little chance of seeing substantial ice, having a significant effect on the Arctic ecosystem, and having a significant effect on Arctic. (The last two are likely already occurring.) […] So I’m not worried too much about what we mean by “ice-free” or what specific number we put on it – that’s for gamblers placing bets to quibble about.”
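As a trivial illustration of how that operational cut-off is applied, the snippet below flags a September extent as “ice-free for all practical purposes” when it drops below 1 million km2; the extent values are made-up placeholders.

```python
# The commonly used operational cut-off for "ice-free": < 1 million km^2 of extent.
ICE_FREE_THRESHOLD_KM2 = 1.0e6

# Hypothetical September minimum extents (km^2), purely illustrative
september_extent_km2 = {2012: 3.4e6, 2040: 0.8e6}

for year, extent in sorted(september_extent_km2.items()):
    label = "ice-free for practical purposes" if extent < ICE_FREE_THRESHOLD_KM2 else "not ice-free"
    print(year, f"{extent:.1e} km^2 ->", label)
```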

Tipping point
The discussants agree there doesn’t seem to be a tipping point and that sea ice could recover pretty soon if the circumstances are right. Lindsay: “Current research does not support the notion of any “tipping” points for summer sea ice so if we somehow magically could turn off the forcing that comes from greenhouse gases, sea ice would likely grow back relatively quickly. Unfortunately that is not likely to happen. Winter ice will remain for a long time, a century or more. How long probably depends mostly on the future rate of greenhouse gas emissions.” And later in a comment: “I now believe a tipping point for summer sea ice is not a good way to characterize the system. The sea ice in the Arctic responds to global forcing in the atmosphere and ocean and is not on its own trajectory. A number of studies have supported this conclusion.” Curry wrote something similar: “The first issue to debunk is that an ‘ice free’ Arctic is some sort of ‘tipping point.’ A number of recent studies find that in models, the loss of summer sea ice cover is highly reversible.”

Meier and Lindsay also point out that we might have several years with low ice extent (or even near ice-free conditions) followed by years with a higher extent, implying that a ‘first’ year of being ice-free is not particularly meaningful. Meier indicates in his blog that the IPCC models that match the historical record indicate ice-free conditions in 2030-2050.

For the near future anything can happen according to Curry: “Whereas sea ice models are becoming quite sophisticated, most recently in terms of the radiative transfer, melt ponds, and aerosols, prediction of sea ice is hostage to predictions of the chaotic atmospheric and oceanic dynamics. For the next two decades, natural variability will almost certainly trump any direct effects from anthropogenic warming by a long shot. The current sea ice situation does not seem stable, but it is not at all clear whether we can expect a reversion to the (more recently) normal state or yet a larger ice loss.” And then, Curry adds, there are the known unknowns: “what solar radiation will do (looks like cooling), volcanoes are always a wild card, and then there are the less known unknowns such as cosmic ray effects, magnetic field effects, etc. And in terms of climate shifts, there may be something happening on much longer time scales (e.g. the Atlantic Meridional Overturning Circulation) that could influence the next climate regime shift. Focusing on CO2 as the dominant influence on the time scale of two decades seems very misguided to me.”

Lindsay, though, thinks that CO2-induced global warming will continue to cause a decline in ice volume, even on a time scale of a decade or two. “I agree with Walt that the summer ice will come and go for a number of years, depending on the weather and the winds, but when it first goes to near zero is quite hard to predict.”

Meier “wholeheartedly” agrees with Curry that decadal prediction of sea ice is going to be very difficult. “I have been involved in several meetings about the issue including the one that is the basis of the NRC report Judith references (Ron was also a participant). I will note that among those most skeptical of the models’ capability to capture decadal variability are the modelers themselves.”

The discussants agree that for a long time the Arctic will refreeze in winter. Curry notes that “sea ice would continue to freeze and thaw on an annual cycle”. Later in a comment she explains: “The massive cooling during the polar night causes substantial heat loss, which on land results in surface temperatures of -40°C and colder. Sea water in the Arctic Ocean freezes at a temp slightly warmer than -2°C. The only conceivable way to keep the Arctic Ocean ice free is to bring in much warmer ocean water from lower latitudes. Unless the geography of the Arctic basin dramatically changes, e.g. Alaska disappears, there is no way for this to happen. I don’t think that anyone would dispute these points.” Lindsay: “Winter ice will remain for a long time, a century or more. How long probably depends mostly on the future rate of greenhouse gas emissions.”

Summary
None of the participants is very enthusiastic about the idea that the Arctic could be ice free in the summer within a few years. Meier explained that so far the “easy” ice has melted but that we are now getting to the “more difficult” ice north of Greenland and the Canadian Archipelago. “The predominant ice circulation pushes ice toward those coasts resulting in thick ice that tends to get replenished.”
Lindsay is most confident that even on a time scale of one or two decades greenhouse forcing should cause a further decline. Curry emphasized that on this time scale natural fluctuations will dominate the effect of CO2; for her a reversal of the trend is therefore possible. Meier “wholeheartedly” agreed with Curry that decadal prediction of sea ice is going to be very difficult.
Curry stated that the currently used definition of “ice free” (being less than 1 million km2 of ice) is misleading as it is not really ice free. Meier defended the definition as being valid for all practical purposes like ship navigation, the albedo feedback and impacts on the ecosystem.
None of the participants believe in a tipping point. Lindsay noted that if we magically could turn off the forcing the sea ice could recover pretty quickly. Lindsay: “Unfortunately that is not likely to happen.”

The Arctic could be ice-free in a few years: Meier 1, Curry 1, Lindsay 1
The sea ice could (partly) recover in the next two decades due to natural variability: Meier 2, Curry 3, Lindsay 1
What is the most likely period that the Arctic will be ice free for the first time? Meier 2030-2050, Curry x, Lindsay 2020-2050

Scores: don’t know=x, very unlikely=1, unlikely=2, as likely as not=3, likely=4, very likely=5



[i] http://psc.apl.washington.edu/wordpress/research/projects/arctic-sea-ice-volume-anomaly/

[ii] Arbetter, T., J.A. Curry, M.M. Holland, and J. M. Maslanik, 1997: Response of sea ice models to perturbations in surface heat flux. Ann. Glaciol., 25, 193-197.

[iii] http://www.chrispolashenski.com/docs/a57a188.pdf. Note Figure 1

[iv] Steele et al., 2010, http://www.agu.org/pubs/crossref/2010/2009JC005849.shtml

[v] Look at August and September ocean temperature data (from M. Steele and colleagues) here: http://psc.apl.washington.edu/UpTempO/Data.php

[vi] Perovich, D. K., S. V. Nghiem, T. Markus, and A. Schweiger (2007), Seasonal evolution and interannual variability of the local solar energy absorbed by the Arctic sea ice–ocean system, J. Geophys. Res., 112, C03005, doi:10.1029/2006JC003558

[vii] http://curry.eas.gatech.edu/currydoc/Schramm_JGR102.pdf

[viii] John Walsh presentation: http://nsidc.org/noaa/iicwg/presentations/IICWG_2011/Fetterer_Back_to_1870_Plans_for_a_Gridded_Sea_Ice_Product.pdf

[ix] Mahoney, A. R., R. G. Barry, V. Smolyanitsky, and F. Fetterer (2008), Observed sea ice extent in the Russian Arctic, 1933–2006, J. Geophys. Res., 113, C11005, doi:10.1029/2008JC004830.

[x] http://nsidc.org/data/docs/noaa/g02203-dmi/

[xi] Kinnard, C., C. Zdanowicz , D Fisher, and E. Isaksson, 2011: Reconstructed changes in Arctic sea ice over the past 1,450 years, Nature, 509-512, doi 10.1038/nature10581.

[xii] Polyak, L., and several others (2010), History of sea ice in the Arctic, Quaternary Sci. Rev., 29, 1757-1778, doi: 10.1016/j.quascirev.2010.02.010.

[xiii] Stroeve, J. C., J. Maslanik, M. C. Serreze, I. Rigor, W. Meier, and C. Fowler (2011), Sea ice response to an extreme negative phase of the Arctic Oscillation during winter 2009/2010, Geophys. Res. Lett., 38, L02502, doi:10.1029/2010GL045662.

[xiv] Day, J.J., J.C Hargreaves, J.D. Annan, and A. Abe-Ouchi (2012), Sources of multi-decadal variability in Arctic sea ice extent, Env. Res. Lett., 7, 034011, doi: 10.1088/1748-9326/7/3/034011.

[xv] Tsonis, Anastasios A., Kyle Swanson, and Sergey Kravtsov. "A new dynamical mechanism for major climate shifts." Geophysical Research Letters 34.13 (2007): L13705.

[xvi] http://judithcurry.com/2011/11/30/shifts-phase-locked-state-and-chaos-in-climate-data/

[xvii] Kay, J. E., M. M. Holland, and A. Jahn (2011), Inter-annual to multi-decadal Arctic sea ice extent trends in a warming world, Geophys. Res. Lett., 38, L15708, doi:10.1029/2011GL048008.

[xviii] Stroeve, J., M.M. Holland, W. Meier, T. Scambos, and M. Serreze (2007), Arctic sea ice decline: Faster than forecast, Geophys. Res. Lett., 34, L09501, doi:10.1029/2007GL029703 and Stroeve, J.C., V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier (2012), Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations, Geophys. Res. Lett., 39, L16502, doi:10.1029/2012GL052676.

[xix] Notz, D. and J. Marotzke (2012), Observations reveal external driver for Arctic sea-ice retreat, Geophys. Res. Lett., 39, L08502, doi:10.1029/2012GL051094.

[xx] Schweiger, A., R. Lindsay, J. Zhang, M. Steele, H. Stern, and R. Kwok. 2011. Uncertainty in Modeled Arctic Sea Ice Volume. J. Geophys. Res., doi:10.1029/2011JC007084

[xxi] Day, J.J., J.C Hargreaves, J.D. Annan, and A. Abe-Ouchi (2012), Sources of multi-decadal variability in Arctic sea ice extent, Env. Res. Lett., 7, 034011, doi: 10.1088/1748-9326/7/3/034011

[xxii] Schweiger, A., R. Lindsay, J. Zhang, M. Steele, H. Stern, and R. Kwok. 2011. Uncertainty in Modeled Arctic Sea Ice Volume. J. Geophys. Res., doi:10.1029/2011JC007084