Reconstructing the Esper Reconstruction

As discussed in a previous article, Esper et al (2024) (link), the newest hockey stick diagram, asserted that 2023 was the “warmest summer” in millennia by means of an updated version of “Mike’s Nature trick”: comparing the 2023 instrumental temperature to purported confidence intervals of temperature estimates from “ancient tree rings” for the past two millennia.  In today’s article, I will report on some detective work on Esper’s calculations, showing that the article is not merely a trick, but a joke.

Background

Esper et al 2024 provided only a sketchy and incomplete description of methodology and negligible supporting data.  Like Mann et al 1998.

Indeed, the only supporting data thus far released by Esper is a single table containing his final reconstruction (Recon.), the target instrumental temperature (Obs.) and the purported lower and upper confidence intervals (link).

Esper’s description of methodology was cursory to say the least, consisting of the following paragraph. Footnote 23 linked to Buentgen et al (Nature Communications 2021 link), a prior article by two of the Esper et al 2024 coauthors (Esper, Buentgen).  This article, unlike Esper et al 2024, had an associated data archive (link), which, while far from complete, provided a foothold for analysing Esper’s 2024 calculations.

Buentgen et al (2021) reported on what they called a “double blind” experiment in which they sent measurement data from 9 prominent tree ring sites to 15 different climate science groups, asking each of them to respond with a “reconstruction” of Northern Hemisphere (extratropical) temperature for the past 2000 years.  (Many of the nine tree ring sites will be familiar to Climate Audit readers from the 2005-2012 period: they include both bristlecone and Briffa 2008 sites, as I’ll discuss later.)   The 15 reconstructions varied dramatically (as will also be discussed below). Buentgen’s takeaway conclusion was that the ensemble “demonstrated the influence of subjectivity in the reconstruction process”:

Differing in their mean, variance, amplitude, sensitivity, and persistence, the ensemble members demonstrate the influence of subjectivity in the reconstruction process.

This was, to say the least, an understatement.  What the experiment actually demonstrated was that different climate groups could get dramatically different reconstructions from identical data.  Thus, over and above the many well known defects and problems in trying to use tree ring data to reconstruct past temperatures, there was yet one more source of uncertainty that had not been adequately canvassed: the inconsistency between climate groups presented with the same data.

The Buentgen (2021) Rmean reconstruction

Buentgen et al’s NOAA archive contained a sheet with all 15 reconstructions plus their mean (Rmean) and median (Rmedian). Comparison of the Buentgen Rmean reconstruction to the Esper et al 2024 reconstruction (“Recon.”) was an obvious first step. The Rmean reconstruction had an exact correlation (r=1) to the Esper reconstruction, but was both dilated (higher standard deviation) and displaced upwards, as shown in the diagram below.  The Buentgen 2021 reconstructions used a 1961-1990 reference period (matching the reference period of common instrumental temperature datasets); the Esper 2024 reconstruction used an 1851-1900 reference period. But how (and why) was the standard deviation changed?

In further detail, here are the steps required to go from the Rmean reconstruction to the Esper version:

  1. re-centering the Rmean reconstruction to a 1901-2010 reference period and re-scaling its standard deviation in the 1901-2010 period to match the corresponding 1901-2010 standard deviation of (“scaled against”) the Berkeley JJA 30-90N  instrumental series
  2. re-centering the resulting reconstruction to an 1851-1900 reference period (i.e. subtracting the 1851-1900 mean of the step 1 series so that it is centered at zero over 1851-1900); a code sketch of these two operations follows the list
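
For concreteness, here is a minimal sketch (in Python/pandas) of my reading of these two steps. Esper et al have not released code, so the function and variable names are mine; this emulates the description above, not their actual procedure.

```python
import pandas as pd

def esper_rescale_recon(rmean: pd.Series, berkeley: pd.Series) -> pd.Series:
    """Emulation of the apparent Esper et al 2024 transformation of the
    Buentgen Rmean reconstruction (my reading only; no code was released).

    rmean    : Buentgen Rmean reconstruction, annual values indexed by year
    berkeley : Berkeley JJA 30-90N instrumental series, indexed by year
    """
    # Step 1: re-center to 1901-2010 and re-scale the 1901-2010 standard
    # deviation to match that of the instrumental target ("scaled against")
    cal = rmean.loc[1901:2010]
    step1 = (rmean - cal.mean()) / cal.std() * berkeley.loc[1901:2010].std()

    # Step 2: re-center to an 1851-1900 reference period by subtracting
    # the step 1 series' own 1851-1900 mean
    return step1 - step1.loc[1851:1900].mean()
```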

If the Esper et al (2024) target instrumental series is re-centered on a 1961-1990 reference period, it is an almost exact match to the Tmean instrumental series of Buentgen et al 2021 (link).  I presume that the change to Berkeley JJA 30-90N was to extend the record to 2023.  I don’t have any objection to this, other than that I was unable to locate the Berkeley JJA 30-90N series in its native form. In the diagram below, I re-centered the archived Esper et al 2024 “Obs.” series to a 1961-1990 reference period, yielding the reconciliation shown; the check itself is sketched in the snippet that follows.
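
The check is simple enough to sketch. The file and column names below are hypothetical placeholders, since the Esper table and the Buentgen target sheet first have to be assembled into tables indexed by year.

```python
import pandas as pd

# Hypothetical file/column names; both tables indexed by year
esper = pd.read_csv("esper2024_table.csv", index_col="year")            # includes "Obs."
tmean = pd.read_csv("buentgen2021_target.csv", index_col="year")["Tmean"]

# Re-center the archived Esper "Obs." series on 1961-1990 and compare to Tmean
obs_6190 = esper["Obs."] - esper["Obs."].loc[1961:1990].mean()
common = obs_6190.index.intersection(tmean.index)
print("max abs difference:", (obs_6190[common] - tmean[common]).abs().max())
```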

To get from the underlying Berkeley JJA 30-90N instrumental series to the version archived in Esper et al (2024), Esper et al did the following:

  1. re-center the Berkeley JJA 30-90N to 1901-2010 (as part of their re-scaling of the Buentgen Rmean reconstruction)
  2. shift the step 1 instrumental series by the 1851-1900 mean of the step 1 reconstruction.  Note that, as a result, the instrumental series is NOT itself centered on 1851-1900 (see the sketch after this list)
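
Again a minimal sketch of my reading, emulating the description rather than any released code:

```python
import pandas as pd

def esper_recenter_instrumental(berkeley: pd.Series, step1_recon: pd.Series) -> pd.Series:
    """Emulation of the apparent treatment of the Berkeley JJA 30-90N target
    archived as "Obs." (my reading only; no code was released).

    berkeley    : Berkeley JJA 30-90N instrumental series, indexed by year
    step1_recon : the step 1 (re-scaled, not yet re-centered) reconstruction
    """
    # Step 1: re-center the instrumental series to 1901-2010
    instr1 = berkeley - berkeley.loc[1901:2010].mean()

    # Step 2: shift by the 1851-1900 mean of the step 1 *reconstruction*,
    # not of the instrumental series itself, so the archived "Obs." series
    # ends up NOT centered on 1851-1900
    return instr1 - step1_recon.loc[1851:1900].mean()
```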

The effect of these manipulations can be seen by comparing (left) the Buentgen et al 2021 Rmean and Tmean data (both on a 1961-1990 reference period) to Esper et al 2024 Extended Figure 3 (right).  Using the original Buentgen version of the data, the instrumental data (red) increases almost twice as quickly as the proxy reconstruction, while, in the Esper version, the two series rise at similar rates.   Had Esper re-centered the instrumental temperature to 1851-1900 (to correspond with the re-centering of the reconstruction), this would have reduced the visual coherence in the recent period of interest.


There is no statistical requirement for any of the above Esper et al 2024 re-scaling and re-centering operations. The only purpose appears to have been to force a reduction in the divergence between the Rmean reconstruction and instrumental temperature.

The “Confidence” Intervals

Esper et al 2024 stated that their confidence intervals were estimated by scaling the 15 Buentgen et al (2021) “ensemble members” against the instrumental temperature target in the 1901-2010 period, and that the “variance among ensemble members was used to approximate 95% confidence intervals”:

The most obvious interpretation of this cryptic description is to calculate the year-by-year variance and compare it to the reported confidence interval.  However, the Esper upper confidence interval is highly correlated (r=0.96) to the maximum of the Buentgen ensemble and the Esper lower confidence interval is highly correlated (r=0.97) to the minimum of the Buentgen ensemble (in each case, the values closely match after deducting the offset to the 1851-1900 reference period).   The emulation is shown below.  It appears that there is some additional re-scaling that I haven’t figured out yet.
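
The correlation check itself is straightforward; the sketch below assumes hypothetical file and column names for the Buentgen NOAA sheet and the Esper table, each assembled as a table indexed by year.

```python
import pandas as pd

# Hypothetical file/column names
ens = pd.read_csv("buentgen2021_recons.csv", index_col="year")    # columns R1..R15
esper = pd.read_csv("esper2024_table.csv", index_col="year")      # CI_low, CI_high, ...

ens_max = ens.max(axis=1)   # year-by-year maximum of the 15 reconstructions
ens_min = ens.min(axis=1)   # year-by-year minimum

common = ens.index.intersection(esper.index)
print("upper CI vs ensemble max:", esper.loc[common, "CI_high"].corr(ens_max[common]))
print("lower CI vs ensemble min:", esper.loc[common, "CI_low"].corr(ens_min[common]))
```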

Also note the asymmetry between the upper and lower “confidence intervals”.

I remind readers that these “confidence intervals” are nothing more than the range of answers obtained by 15 different climate groups from the same measurement datasets. There is no statistical basis for assuming that this range of inconsistent answers corresponds to an actual confidence interval.

The Buentgen “Ensemble”: Regression vs Averaging

In most walks of science, one expects that different groups of scientists will arrive at more or less the same results from the same data. But look at the enormous inconsistency among five Buentgen (2021) reconstructions in the period since 1980.  Reconstructions R8 and R10 increase by 1.2 and 1.6 deg C respectively, while reconstructions R13, R12 and R2 are unchanged or decline.  How is such inconsistency possible?

The next figure shows the R8 and R10 reconstructions against target instrumental temperature (reference 1961-1990).  Both R8 and R10 show an astounding – almost perfect – reconstruction of the target instrumental temperature.  The reconstructions are too perfect.  Indeed, the astounding accuracy of these two reconstructions raises an obvious question:  why didn’t Buentgen et al (2021) – and Esper et al (2024) – rely on these near-perfect reconstructions, rather than blending them into a mean with reconstructions (R2, R13, R14) that didn’t replicate modern instrumental temperature?


Additional details on the individual reconstructions are available in the Buentgen et al (2021) Supplementary Information (link).   It turns out that R10 “include[d] instrumental temperature measurements in the reconstruction”.  Esper et al conceded that “since R10 integrates instrumental temperature measurements during the calibration period, [R10] is not entirely independent of the target.”   This seriously understates the problem: R10 was so seriously dependent on the target as to disqualify its use in the calculation of confidence intervals.

….

As soon as I became aware of the near-perfection of the R8 reconstruction, my initial surmise was that it involved some sort of inverse regression of temperature onto the nine tree ring chronologies. This was confirmed in the Supplementary Information.  R8 carried out two inverse regressions, a “high-frequency” and a “low-frequency” regression, then combined the two.  This is clearly a recipe for overfitting: the construction of a model that fits “too well” in the calibration period, but is of negligible merit outside the calibration period.
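
To illustrate the mechanism (a generic toy, not R8’s actual procedure): regressing a smoothed “temperature” series onto nine smoothed “chronologies” that are pure noise typically yields a flattering calibration fit and a worthless verification fit, because smoothing leaves very few effective degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, w=30):
    # simple moving-average low-pass filter (stand-in for "low-frequency" processing)
    return np.convolve(x, np.ones(w) / w, mode="same")

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Toy data: 9 "chronologies" and a "temperature" series, all pure noise,
# smoothed and split into calibration and verification halves
n, p = 220, 9
X = np.apply_along_axis(smooth, 0, rng.standard_normal((n, p)))
y = smooth(rng.standard_normal(n))
cal, ver = slice(0, n // 2), slice(n // 2, n)

# "Low-frequency" inverse regression fitted on the calibration half only
Xc = np.column_stack([np.ones(n // 2), X[cal]])
beta, *_ = np.linalg.lstsq(Xc, y[cal], rcond=None)
Xv = np.column_stack([np.ones(n - n // 2), X[ver]])

print("calibration R^2 :", round(r2(y[cal], Xc @ beta), 2))
print("verification R^2:", round(r2(y[ver], Xv @ beta), 2))
```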

In contrast to the inverse regression of R8, R13 stated that it used a sort of average of the available tree ring chronologies:

Similarly, the R12 reconstruction was based on averaging chronologies, rather than inverse regression or splicing.
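
For readers unfamiliar with the distinction, an averaging approach looks something like the following generic composite-plus-scale sketch; it is illustrative only, not the specific R12 or R13 procedure.

```python
import pandas as pd

def composite_plus_scale(chronologies: pd.DataFrame, instr: pd.Series,
                         cal_start: int = 1901, cal_end: int = 2010) -> pd.Series:
    """Generic composite-plus-scale recipe (illustrative only).

    chronologies : one column per site chronology, indexed by year
    instr        : instrumental target, indexed by year
    """
    # Normalize each chronology and average into a single composite
    z = (chronologies - chronologies.mean()) / chronologies.std()
    composite = z.mean(axis=1)

    # Scale the composite to the mean and variance of the instrumental
    # target over the calibration period
    c_cal = composite.loc[cal_start:cal_end]
    t_cal = instr.loc[cal_start:cal_end]
    return (composite - c_cal.mean()) / c_cal.std() * t_cal.std() + t_cal.mean()
```

Because no regression coefficients are fitted to the target, an averaging approach cannot manufacture a near-perfect calibration fit the way a multi-predictor regression can, which helps explain why the averaging-based members of the ensemble track modern instrumental temperature far less closely.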

Conclusion

At first reading, Esper et al (2024) carried out multiple re-scaling and re-centering operations on Buentgen et al (2021) series that were already reconstructions centered on a 1961-1990 reference period.  The only purpose for these operations appears to have been to “improve” the coherence of the Buentgen Rmean reconstruction with temperature.   A sort of air-brushing of their hockey stick diagram.

And, at the end of the day, Esper et al (2024) is best described as climate pornography.  In the premier modern journal for climate pornography: Nature. And while climate partisans (and scientists) pretend to read the articles and the fine print, in reality, they, like Penthouse readers in the 1980s, are only interested in the centerfold. In the present case, an air brushed hockey stick diagram. A diagram that raises the same question that Penthouse readers asked back in the day: real or fake?


Appendix

Some notes and some figures not used in this note.

Buentgen et al (2021), the reference in Esper (2024) footnote 23, has an associated data archive at NOAA (link) as follows:

  • the results of the 15 reconstructions (link) plus the overall mean (Rmean) and overall median.  The reconstructions were “anomalized”, but the reference period is not stated in the archive and does not appear to be consistent across the reconstructions. Five of the reconstructions can be determined to be centered on 1961-1990 (a simple check is sketched after this list); I haven’t figured out the reference period for the others.
  • an archive (link) for target instrumental data: year, 15 columns for target instrumental data for each group, overall mean (Tmean) and overall median. These anomalies are all centered on a 1961-1990 reference period.
  • an archive of measurement data for eight of the nine tree ring measurement data sets, inexplicably leaving out one data set (Yamal).  Most of the datasets are familiar to Climate Audit readers from 2005-2012: two are from Graybill bristlecone sites (inclusive of updates by Salzer et al); three are based on (or identical to) measurement data from Briffa et al 2008 (which was under discussion when the Climategate emails were released).  I presume that they used the Yamal dataset from Briffa (2013) and neglected to include it in the archive.
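
The reference-period check mentioned above amounts to testing whether each series averages to roughly zero over a candidate reference window (file and column names here are hypothetical):

```python
import pandas as pd

# Hypothetical file name; columns R1..R15 plus Rmean/Rmedian, indexed by year
recons = pd.read_csv("buentgen2021_recons.csv", index_col="year")

# A series anomalized on 1961-1990 should have a 1961-1990 mean near zero
for col in recons.columns:
    m = recons[col].loc[1961:1990].mean()
    print(f"{col}: 1961-1990 mean = {m:+.3f}")
```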

Below is a comparison of corresponding diagrams for Buentgen (2021) and Esper (2024).  Buentgen (2021) appears to have a reference period of 1961-1990 and Esper (2024) a reference period of 1851-1900.


7 Comments

  1. sherro01
    Posted Jun 2, 2024 at 6:20 PM | Permalink | Reply

    How can these Esper words be accepted for publication in a scientific journal with a high past reputation, Nature?

    The words are not about Science, they are about subjective manipulation of some groups of numbers whose links to actual measurements have steadily become disconnected. The authors of these groups of numbers might as well make up their own numbers. Some of the math manipulations are not part of accepted, classical statistics. Surely there are regulatory guidelines as to which number manipulations are permitted and which are punished. Where are the regulators?

    As some of us have stated before, this type of subjective adjustment can be rewarded by a prison term in some fields. Imagine the outcry if these manipulations were applied to assays from an emerging gold mine.

    Geoff S. Geochemist

    • DaveS
      Posted Jun 3, 2024 at 12:19 PM | Permalink | Reply

      It might also be asked, where were the peer reviewers?

  2. Danley B. Wolfe
    Posted Jun 2, 2024 at 7:43 PM | Permalink | Reply

    One main reason that the “climate consensus” survives to this day is that once it gained momentum, they were all on the payroll; it’s not about seeking truth in science.

  3. Jeff Alberts
    Posted Jun 3, 2024 at 9:09 AM | Permalink | Reply

    Maybe I’m off base here, but has the “divergence problem” been airbrushed out as well?

    • Posted Jun 6, 2024 at 3:41 AM | Permalink | Reply

      That was my first thought. If the sensitivity to temperature changes over time, then the older part of the reconstruction should be more swingy. That may be hidden by the noise of the different series averaging out.

      Are those dates in the Esper diagram (246 and 536) just chance events, or is there any prospect that they represent real extremes?

  4. GK
    Posted Jun 8, 2024 at 11:08 AM | Permalink | Reply

    When I rescale the observations to a 1850-1900 baseline so that the comparison to the proxy anomaly is apples-to-apples, I get 2023 = 1.82. The proxy reconstruction for year 613 shows a 97.5 value of 1.88. That would appear to mean that the claim that 2023 warming was unprecedented no longer holds when the anomalies use the same baseline.

  5. Nicholas V
    Posted Jun 15, 2024 at 4:28 AM | Permalink | Reply

    I’m glad you are still at it, Steve, after all these years. Even if it feels like beating a dead horse now.

2 Trackbacks

  1. […] Reconstructing the Esper Reconstruction […]

  2. […] ClimateAudit.org, and has written sparingly but in great depth over the years. In the piece titled Reconstructing the Esper Reconstruction, he reviews the latest version of Michael Mann’s discredited Hockey Stick, Esper et al […]
