Was early onset industrial-era warming anthropogenic, as Abram et al. claim?

A guest post by Nic Lewis

Introduction

A recent PAGES 2k Consortium paper in Nature,[i] Abram et al., which claims that human-induced, greenhouse-gas-driven warming commenced circa 180 years ago,[ii] has been attracting some attention. The study arrives at its start dates by using a change-point analysis method, SiZer, to assess when the most recent significant and sustained warming trend commenced. Commendably, the lead author has provided the data and Matlab code used in the study, including the SiZer code.[iii]
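For readers unfamiliar with the method: Abram et al.'s onset dates come from the Matlab SiZer package, which estimates smoothed local slopes at a range of bandwidths and flags where they are significantly positive or negative. The sketch below is not their code, just a crude Python approximation of the idea; the function names, the bandwidth grid and the simple sandwich standard error are my own simplifications.

```python
import numpy as np

def local_linear_slope(t, y, t0, h):
    """Gaussian-weighted local linear fit of y on t around t0, bandwidth h.
    Returns the slope estimate and a rough (sandwich) standard error."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    X = np.column_stack([np.ones_like(t), t - t0])
    XtW = X.T * w                        # X'W as a (2, n) array
    A = XtW @ X                          # X'WX
    beta = np.linalg.solve(A, XtW @ y)   # weighted least-squares coefficients
    resid = y - X @ beta
    sigma2 = np.sum(w * resid ** 2) / max(np.sum(w) - 2.0, 1.0)
    Ainv = np.linalg.inv(A)
    cov = sigma2 * Ainv @ ((X.T * w ** 2) @ X) @ Ainv
    return beta[1], np.sqrt(cov[1, 1])

def warming_onset(t, y, bandwidths, z=1.96):
    """For each bandwidth, the earliest year from which the local slope stays
    significantly positive through to the end of the record -- a crude
    stand-in for reading the 'significant warming' region of a SiZer map."""
    onsets = {}
    for h in bandwidths:
        fits = np.array([local_linear_slope(t, y, t0, h) for t0 in t])
        significant = fits[:, 0] - z * fits[:, 1] > 0
        not_sig = np.where(~significant)[0]
        if len(not_sig) == 0:
            onsets[h] = t[0]                    # significant throughout
        elif not_sig[-1] + 1 < len(t):
            onsets[h] = t[not_sig[-1] + 1]      # start of the final significant run
        else:
            onsets[h] = None                    # no sustained warming to the end
    return onsets
```

With annual reconstruction values in y and years in t, something like warming_onset(t, y, [10, 25, 50]) would give onset years at decadal-to-multidecadal smoothing scales.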

Their post-1500 AD proxy-based regional reconstructions are the PAGES2K reconstructions, which have been discussed and criticized on many occasions at CA (see tag), with the Gergis et al 2016 Australian reconstruction substituted for the withdrawn version. I won’t comment on the validity of the post-1500 AD proxy-based regional reconstructions on which the observational side of their study is based – Steve McIntyre is much better placed than me to do so.

However, analysis of those reconstructions can only provide evidence as to when sustained warming started, not as to whether the cause was natural or anthropogenic. In this post, I will examine and question the paper’s conclusions about the early onset of warming detected in the study being attributable to the small increase in greenhouse gas emissions during the start of the Industrial Age.

The authors’ claim that the start of anthropogenic warming can be dated to the 1830s is based on model simulations of climate change from 1500 AD on.[iv] A simple reality check points to that claim being likely to be wrong: it flies in the face of the best estimates of the evolution of radiative forcing. According to the IPCC 5th Assessment [Working Group I] Report (AR5) estimates, the change in total effective radiative forcing from preindustrial (which the IPCC takes as 1750) to 1840 was –0.01 W/m2, or +0.01 W/m2 if only changes in anthropogenic forcings, and not in solar and volcanic forcings, are included. Although the increase in forcing from all greenhouse gases (including ozone) is estimated to be +0.20 W/m2 by 1840, that forcing is estimated to be almost entirely cancelled out by negative forcing, primarily from anthropogenic aerosols and partly from land use change increasing planetary albedo.[v] Total anthropogenic forcing did not reach +0.20 W/m2 until 1890; in 1870 it was still under +0.10 W/m2.
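Expressed as a worked sum, using only the rounded AR5-based figures quoted above (the split of the offsetting negative term between aerosols and land use change is inferred by difference and not broken out here):

```latex
\Delta F_{\text{anthro}}(1840)
  \;=\; \underbrace{\Delta F_{\text{GHG incl. O}_3}}_{\approx\,+0.20}
  \;+\; \underbrace{\Delta F_{\text{aerosol}} + \Delta F_{\text{land use}}}_{\approx\,-0.19}
  \;\approx\; +0.01\ \mathrm{W\,m^{-2}}
```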


Re-examining Cook’s Mt Read (Tasmania) Chronology

In today’s post, I’m going to re-examine (or more accurately, examine de novo) Ed Cook’s Mt Read (Tasmania) chronology, a chronology recently used in Gergis et al 2016 and Esper et al 2016, as well as in numerous multiproxy reconstructions over the past 20 years.

Gergis et al 2016  said that they used freshly-calculated “signal-free” RCS chronologies for tree ring sites except Mt Read (and Oroko). For these two sites, they chose older versions of the chronology,  purporting to justify the use of old versions “for consistency with published results” – a criterion that they disregarded for other tree ring sites.   The inconsistent practice immediately caught my attention.  I therefore calculated an RCS chronology for Mt Read from measurement data archived with Esper et al 2016.  Readers will probably not be astonished that the chronology disdained by Gergis et al had very elevated values in the early second millennium and late first millennium relative to the late 20th century.

I cannot help but observe that Gergis’ decision to use the older flatter chronology was almost certainly made only after peeking at results from the new Mt Read chronology, yet another example of data torture (Wagenmakers 2011, 2012) by Gergis et al. At this point, readers are probably de-sensitized to criticism of yet more data torture. In this case, it appears probable that the decision impacts the medieval period of their reconstruction where they only used two proxies, especially when combined with their arbitrary exclusion of Law Dome, which also had elevated early values.

Further curious puzzles emerged when I looked more closely at the older chronology favored by Gergis (and Esper). This chronology originated with Cook et al 2000 (Clim Dyn), which clearly stated that they had calculated an RCS chronology and even provided a succinct description of the technique, citing Briffa et al 1991, 1992 as authority.  However, their reported chronology (both as illustrated in Cook et al 2000 and as archived at NOAA in 1998), though it has a very high correlation to my calculation, has negligible long-period variability.  In this post, I present the case that the chronology presented by Cook as an RCS chronology was actually (and erroneously) calculated using a “traditional” standardization method that did not preserve low-frequency variance.
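Since the distinction between RCS and "traditional" standardization is central to this argument, a minimal sketch of an RCS chronology calculation may help. The column names, the rolling-mean smoothing of the regional curve and the plain mean across trees are my assumptions; production implementations (including the "signal-free" variant used by Gergis et al 2016) use spline or negative-exponential regional curves, biweight means and iterative adjustments. The key point is that dividing every ring by a single regional ageing curve, rather than by a curve fitted tree-by-tree, leaves century-scale variance in the resulting chronology.

```python
import pandas as pd

def rcs_chronology(rings: pd.DataFrame) -> pd.Series:
    """Minimal RCS sketch. `rings` is assumed to have one row per ring with
    columns 'year' (calendar year), 'age' (cambial age) and 'width'."""
    # One common "regional curve": mean ring width at each cambial age,
    # lightly smoothed (a stand-in for the spline / negative-exponential
    # fits used in practice).
    regional = rings.groupby("age")["width"].mean()
    regional = regional.rolling(window=10, center=True, min_periods=1).mean()
    # Index each ring against the expected width for its cambial age, then
    # average the indices by calendar year to form the chronology.
    expected = rings["age"].map(regional)
    index = rings["width"] / expected
    return index.groupby(rings["year"]).mean()
```

By contrast, "traditional" standardization fits a growth curve to each tree individually before indexing, which removes most variance at timescales longer than the lifetime of a single tree.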

Although the Cook chronology has been used over and over, I seriously wonder whether any climate scientist has ever closely examined it in the past 20 years. Supporting this surmise are defects and errors in the Cook measurement dataset, which have remained unrepaired for over 20 years.  Cleaning the measurement dataset into usable form was very laborious, and one wonders why these defects have been allowed to persist for so long.

Esper et al 2016 and the Oroko Swamp

Jan Esper, prominent in early Climate Audit posts as an adamant serial non-archiver, has joined with 17 other tree ring specialists to publish “Ranking of tree-ring based temperature reconstructions of the past millennium” (pdf). This assesses 39 long tree ring temperature reconstructions. The assessment is accompanied by an archive containing 39 reconstruction versions, together with the underlying measurement data for 33 of the 39 reconstructions. (It seems odd that measurement data would continue to be withheld for six sites, but, hey, it’s climate science.)

Because I’ve recently been looking at data used in Gergis et al, I looked first at Esper’s consideration of Oroko, one of two long proxies retained in Gergis’ screening.  I’ve long sought Oroko measurement data, first requesting it from Ed Cook in 2003.  Cook refused. Though Oroko reconstructions have been used over the years in multiproxy studies and by IPCC, the underlying measurement data has never been archived.  The archive for Esper et al 2016 is thus the very first archive of Oroko measurement data (though unfortunately it seems that even the present archive is incomplete and not up to date).

Despite claims to use the most recent reconstruction, Esper’s Oroko temperature reconstruction is decidedly out of date.  Worse, it uses an Oroko “reconstruction” in which Cook replaced proxy data (which went down after 1960) with instrumental data (which went up) – a contemporary variation of what is popularly known as “Mike’s Nature trick”, though Mike’s Nature trick, as discussed at CA here, was a little different.
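Mechanically, the substitution described above amounts to nothing more than the following; the function name and the alignment on a shared year vector are mine, purely to illustrate the operation, and this is not Cook's actual code.

```python
import numpy as np

def splice_after(proxy, instrumental, years, splice_year=1960):
    """Replace proxy values after `splice_year` with instrumental values.
    All three arrays are assumed aligned on `years`; the 1960 cutoff matches
    the post-1960 decline mentioned above."""
    years = np.asarray(years)
    out = np.asarray(proxy, float).copy()
    late = years > splice_year
    out[late] = np.asarray(instrumental, float)[late]
    return out
```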

In today’s post, I’ll look at the “new” Oroko data, which, needless to say,  has some surprises.


Gergis and Law Dome

In today’s post, I’m going to examine Gergis’ dubious screening out of the Law Dome d18O series, a series that has been of long-standing interest at Climate Audit (tag).

Gergis et al 2016 stated that they screened proxies according to the significance of their correlation with local gridcell temperature (the basic calculation is sketched below). Law Dome d18O not only had a significant correlation to local temperature; it also had:

  • a higher correlation to local instrumental temperature than 24 of the 28 proxies retained by Gergis in her screened network;
  • a higher t-statistic than 19 of the 28 retained proxies;
  • a higher correlation and t-statistic than either of the other two long proxies (the Mt Read and Oroko Swamp tree ring chronologies).

Nonetheless, the Law Dome d18O series was excluded from the Gergis et al network. Gergis effected its exclusion not because of deficient temperature correlation, but through an additional, arbitrary screening criterion that excluded Law Dome d18O and no other proxy in the screened network.
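For concreteness, here is a minimal sketch of the screening statistics referred to above: the correlation of a proxy with its local gridcell temperature over the overlap period and the corresponding t-statistic. The function name, and the omission of the detrending and autocorrelation adjustments used in the actual papers, are my simplifications rather than Gergis' procedure.

```python
import numpy as np
from scipy import stats

def screening_stats(proxy, temp):
    """Correlation-based screening statistics for a proxy against its local
    gridcell temperature. `proxy` and `temp` are overlapping annual series of
    equal length. Returns (r, t, p)."""
    proxy = np.asarray(proxy, float)
    temp = np.asarray(temp, float)
    ok = np.isfinite(proxy) & np.isfinite(temp)
    r, p = stats.pearsonr(proxy[ok], temp[ok])
    n = ok.sum()
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)   # t-statistic of the correlation
    return r, t, p
```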

This is not the first occasion in which IPCC authors have arbitrarily excluded Law Dome d18O. CA readers may recall Climategate revelations on the contortions of IPCC AR4 Lead Authors to keep Law Dome out of the AR4 diagram illustrating long Southern Hemisphere proxies (see CA post here).

Law Dome d18O is intrinsically an extremely interesting proxy for readers interested in a Southern Hemisphere perspective on the Holocene (balancing the somewhat hackneyed commentary citing the Cuffey-Clow reconstruction based on GISP2 ice core in Greenland). The utility of Law Dome d18O is much reduced by inadequate publishing and archiving by the Australian Antarctic Division,  a criticism that I make somewhat reluctantly since they have been polite in their correspondence with me, but ultimately unresponsive.

Joelle Gergis, Data Torturer

In 2012, the then much-ballyhooed Australian temperature reconstruction of Gergis et al 2012 mysteriously disappeared from Journal of Climate after being criticized at Climate Audit. Now, more than four years later, a successor article has finally been published. Gergis says that the only problem with the original article was a “typo” in a single word. Rather than “taking the easy way out” and simply correcting the “typo”, Gergis instead embarked on a program that ultimately involved nine rounds of revision, 21 individual reviews, two editors and took longer than the American involvement in World War II.  However, rather than Gergis et al 2016 being an improvement on or confirmation of Gergis et al 2012, it is one of the most extraordinary examples of data torture (Wagenmakers, 2011, 2012) that any of us will ever witness.

Also see Brandon S’s recent posts here and here.


Are energy budget TCR estimates biased low, as Richardson et al (2016) claim?

A guest post by Nic Lewis

 

Introduction and Summary

In a recently published paper (REA16),[1] Mark Richardson et al. claim that recent observation-based energy budget estimates of the Earth’s transient climate response (TCR) are biased substantially low, with the true value some 24% higher. This claim is based purely on simulations by CMIP5 climate models. As I shall show, observational evidence points to any bias actually being small. Moreover, the related claims made by Kyle Armour, in an accompanying “news & views” opinion piece,[2] fall apart upon examination.

The main claim in REA16 is that, in models, surface air-temperature warming over 1861-2009 is 24% greater than would be recorded by HadCRUT4 because it preferentially samples slower-warming regions and water warms less than air. About 15 percentage points of this excess result from masking to HadCRUT4v4 geographical coverage. The remaining 9 percentage points are due to HadCRUT4 blending air and sea surface temperature (SST) data, and arise partly from water warming less than air over the open ocean and partly from changes in sea ice redistributing air and water measurements.

REA16 infer an observation-based best estimate for TCR of 1.66°C, 24% higher than the value of 1.34°C based on HadCRUT4v4. Since the scaling factor used is based purely on simulations by CMIP5 models, rather than on observations, the estimate is only valid if those simulations realistically reproduce the spatiotemporal pattern of actual warming for both SST and near-surface air temperature (tas), and changes in sea-ice cover. It is clear that they fail to do so. For instance, the models simulate fast warming, and retreating sea ice, in the sparsely observed southern high latitudes. The available evidence indicates that, on the contrary, warming in this region has been slower than average, pointing to the bias from sparse observations there being in the opposite direction to that estimated from model simulations. Nor is there good observational evidence that air over the open ocean warms faster than SST. Therefore, the REA16 model-based bias figure cannot be regarded as realistic for observation-based TCR estimates.
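For readers unfamiliar with the method, an observation-based energy-budget TCR estimate is essentially the warming over a period scaled by the ratio of the forcing from a doubling of CO2 to the forcing change over that period. The sketch below assumes a commonly used value of 3.71 W/m2 for the CO2-doubling forcing (that figure is not from REA16) and reproduces the 24% scaling using the numbers quoted above.

```python
F_2XCO2 = 3.71   # W/m2 forcing from doubled CO2; a commonly used value, not from REA16

def energy_budget_tcr(d_temp, d_forcing, f2x=F_2XCO2):
    """Energy-budget TCR estimate: observed warming over a period scaled to
    the forcing of a CO2 doubling."""
    return f2x * d_temp / d_forcing

# REA16's adjustment as quoted above: 1.34 C (HadCRUT4v4-based) x 1.24 ~= 1.66 C,
# the 24% comprising roughly 15 points from coverage masking and 9 from blending.
print(round(1.34 * 1.24, 2))   # 1.66
```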

Deflategate: Controversy is due to Scientist Error

I’ve submitted an article entitled “New Light on Deflategate: Critical Technical Errors” (pdf) to the Journal of Sports Analytics. It identifies and analyzes a previously unnoticed scientific error in the technical analysis included in the Wells Report on Deflategate. The article shows precisely how the “unexplained” deflation occurred prior to Anderson’s measurement and disproves the possibility of post-measurement tampering. At present, there is insufficient information to determine whether the scientific error arose because the law firm responsible for the investigation (Paul, Weiss) omitted essential information in their instructions to their technical consultants (Exponent) or whether the technical consultants failed to incorporate all relevant information in their analysis.  In either event, the error was missed by the NFL consultant Daniel Marlow of the Princeton University Department of Physics, by the authors of the Wells Report and by the NFL.
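The physics underlying the Wells Report's technical analysis is the Ideal Gas Law: at fixed volume, absolute pressure scales with absolute temperature, so a ball gauged in a warm officials' room reads lower once it equilibrates to a cold field. The sketch below uses the NFL minimum of 12.5 psi and illustrative room and field temperatures; the numbers are not taken from my article, and I am not suggesting this simple calculation is the specific error it identifies.

```python
ATM_PSI = 14.7   # standard atmospheric pressure in psi (converts gauge to absolute)

def f_to_rankine(temp_f):
    """Convert Fahrenheit to the absolute Rankine scale."""
    return temp_f + 459.67

def gauge_pressure_after_cooling(p_gauge_psi, t_warm_f, t_cold_f):
    """Ideal Gas Law at constant volume: absolute pressure scales with absolute temperature."""
    p_abs_warm = p_gauge_psi + ATM_PSI
    p_abs_cold = p_abs_warm * f_to_rankine(t_cold_f) / f_to_rankine(t_warm_f)
    return p_abs_cold - ATM_PSI

# Illustrative only: a ball gauged at 12.5 psi in a ~71 F room and equilibrated
# to a ~48 F field loses roughly 1.2 psi without any tampering.
print(round(12.5 - gauge_pressure_after_cooling(12.5, 71, 48), 2))   # ~1.18
```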

 

 


Schmidt’s Histogram Diagram Doesn’t Refute Christy

In my most recent post,  I discussed yet another incident in the long-running dispute about the inconsistency between models and observations in the tropical troposphere – Gavin Schmidt’s twitter mugging of John Christy and Judy Curry.   Included in Schmidt’s exchange with Curry was a diagram with a histogram of model runs. In today’s post, I’ll parse the diagram presented to Curry, first discussing the effect of some sleight-of-hand and then showing that Schmidt’s diagram, after removing the sleight-of-hand and when read by someone familiar with statistical distributions, confirms Christy rather than contradicting him.
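As a generic illustration of the statistical reading alluded to above (this is not Schmidt's calculation, Christy's, or any actual data), one can simply ask what fraction of model-run trends lie at or below the observed trend.

```python
import numpy as np

def ensemble_percentile(model_trends, observed_trend):
    """Fraction of model-run trends at or below the observed trend.
    A value near zero means the observation sits in the far lower tail of the
    model distribution -- the kind of reading discussed in the post."""
    model_trends = np.asarray(model_trends, float)
    return np.mean(model_trends <= observed_trend)
```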

Gavin Schmidt and Reference Period “Trickery”

In the past few weeks, I’ve been re-examining the long-standing dispute over the discrepancy between models and observations in the tropical troposphere.  My interest was prompted in part by Gavin Schmidt’s recent attack on a graphic used by John Christy in numerous presentations (see recent discussion here by Judy Curry).   Schmidt made the sort of offensive allegations that he makes far too often:

@curryja use of Christy’s misleading graph instead is the sign of partisan not a scientist. YMMV. tweet;

@curryja Hey, if you think it’s fine to hide uncertainties, error bars & exaggerate differences to make political points, go right ahead.  tweet.

As a result, Curry decided not to use Christy’s graphic in her recent presentation to a congressional committee.  In today’s post, I’ll examine the validity (or lack thereof) of Schmidt’s critique.

Schmidt’s primary dispute, as best as I can understand it, was about Christy’s centering of model and observation data to achieve a common origin in 1979, the start of the satellite period, a technique which (obviously) shows a greater discrepancy at the end of the period than if the data had been centered in the middle of the period.  I’ll show support for Christy’s method from his long-time adversary, Carl Mears, whose own comparison of models and observations used a short early centering period (1979-83) “so the changes over time can be more easily seen”. Whereas both Christy and Mears provided rational arguments for their baseline decision,  Schmidt’s argument was little more than shouting.
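Mechanically, both centering choices amount to subtracting each series' mean over a chosen reference window before plotting. The sketch below is a minimal illustration (function and argument names are mine), not anyone's actual plotting code.

```python
import numpy as np

def rebaseline(values, years, ref_start, ref_end):
    """Express a series as anomalies from its mean over [ref_start, ref_end]."""
    values = np.asarray(values, float)
    years = np.asarray(years)
    ref = (years >= ref_start) & (years <= ref_end)
    return values - values[ref].mean()

# e.g. rebaseline(model_run, years, 1979, 1983) versus a mid-period window such
# as rebaseline(model_run, years, 1995, 2005): an early baseline displays the
# full model-observation divergence accumulated by the end of the record, while
# a mid-period baseline splits it between the two ends.
```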
