The effect of Atlantic internal variability on TCR estimation – an unfinished study

A guest article by Frank Bosse (posted by Nic Lewis)

A recent paper by Stolpe, Medhaug and Knutti (hereafter S. 17) deals with a longstanding question: by how much is global mean surface temperature (GMST) influenced by the internal variability of the Atlantic (AMV/AMO) and the Pacific (PMV/PDO/IPO)?

The authors analyze the impacts of the natural ups and downs of both basins on the HadCRUT4.5 temperature record.

A few months ago this post of mine, which considered the influence of Atlantic variability, was published.

I want to compare some of the results.

To begin, I want to draw out some further implications of S. 17.

The key figure of S. 17 (Fig. 7a) summarizes most of the results. It shows a variability-adjusted HadCRUT4.5 record:

Fig. 1: Figure 7a from S. 17, showing the GMST record (orange) between
1900 and 2005, adjusted for Atlantic and Pacific variability.
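
As a point of orientation, the generic idea of adjusting a GMST series for basin variability can be illustrated with a crude regression sketch. This is not S. 17's method, and the inputs are hypothetical placeholders rather than the paper's data:

    # Crude, generic illustration (not S. 17's method): regress GMST on
    # Atlantic and Pacific indices and subtract the index-congruent parts.
    # All inputs are hypothetical placeholders.
    adjust_gmst <- function(gmst, amo, pdo) {
      fit <- lm(gmst ~ amo + pdo)
      gmst - coef(fit)["amo"] * amo - coef(fit)["pdo"] * pdo
    }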


Continue reading


How dependent are GISTEMP trends on the gridding radius used?

A guest post by Nic Lewis

Introduction

Global mean surface temperature (GMST) changes and trends derived from the standard GISTEMP[1] record over its full 1880-2016 length exceed those per the HadCRUT4.5 and NOAA4.0.1 records, by 4% and 7% respectively. Part of these differences will be due to the use of different land and (in the case of HadCRUT4.5) ocean sea-surface temperature (SST) data, and part to methodological differences.

GISTEMP and NOAA4.0.1 both use data from the ERSSTv4 infilled SST dataset, while HadCRUT4.5 uses data from the non-infilled HadSST3 dataset. Over the full 1880-2016 GISTEMP record, the global-mean trends in the two SST datasets were almost the same: 0.56 °C/century for ERSSTv4 and 0.57 °C/century for HadSST3. And although HadCRUT4.5 depends (via its use of the CRUTEM4 record) on a different set of land station records from GISTEMP and NOAA4.0.1 (both of which use GHCNv3.3 data), there is great commonality in the underlying set of stations used.

Accordingly, it seems likely that differences in methodology may largely account for the slightly faster 1880-2016 warming in GISTEMP. Although the excess warming in GISTEMP is not large, I was curious to find out more about the methods it uses and their effects. The primary paper describing the original (land station only based) GISTEMP methodology is Hansen et al. 1987.[2] Ocean temperature data was added in 1996.[3] Hansen et al. 2010[4] provides an update and sets out changes in the methods.

Steve has written a number of good posts about GISTEMP in the past, locatable using the Search box. Some are not relevant to the current version of GISTEMP, but Steve’s post showing how to read GISTEMP binary SBBX files in R (using a function written by contributor Nicholas) is still applicable, as is a later post covering other related R functions that he had written. All the function scripts are available here.

How GISTEMP is constructed

Rather than using a regularly spaced grid, GISTEMP divides the Earth’s surface into 8 latitude zones, separated at latitudes 0°, ±23.58°, ±44.43° and ±64.16° (from now on rounded to the nearest degree). Moving from pole to pole, the zones have area weights of 10%, 20%, 30%, 40%, 40%, 30%, 20% and 10%, and are divided longitudinally into respectively 4, 8, 12, 16, 16, 12, 8 and 4 equal sized boxes. This partitioning results in 80 equal area boxes. Each box is then divided into 100 subboxes, with equal longitudinal extent, but graduated latitudinal extent so that they all have equal areas. Figure 1, reproduced from Hansen et al. 1987, shows the box layout. Box numbers are shown in their lower right-hand corners; the dates and other numbers have been superseded.

Figure 1. 80 equal area box regions used by GISTEMP. From Hansen et al. 1987, Fig.2.
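
Incidentally, the boundary latitudes follow directly from the equal-area requirement: the fraction of a hemisphere’s area lying equatorward of latitude L is sin(L), so boundaries enclosing cumulative area fractions of 0.4, 0.7 and 0.9 sit at the arcsines of those values. A quick check in R (an illustrative sketch, not GISTEMP code):

    # Equal-area zone boundaries: the hemispheric area fraction equatorward
    # of latitude L is sin(L), so invert the cumulative fractions.
    round(asin(c(0.4, 0.7, 0.9)) * 180 / pi, 2)
    # [1] 23.58 44.43 64.16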

Continue reading

Centenary of the End of the Battle of the Somme

November 18 marks the centenary of the end of the Battle of the Somme, an anniversary that passed essentially unnoticed, though the battle was a seminal event in the development of modern Canada. Its carnage was over 1.1 million casualties from a combined population (both sides) of about 170 million. (For scale, there have been approximately 35,000 U.S. casualties in Iraq from 2003-2016.)

I became interested in the Battle of the Somme earlier this year due to a sheaf of papers in the back of my mother’s china cabinet, which I noticed while she was moving.

The papers were copies of transcripts of letters from the front by the adjutant of the 75th Canadian Battalion (4th Canadian Division), one of the battalions which led the closing assault at the Battle of the Somme. While other war-time correspondence in family archives tended to be sincere but dreary epistles, these letters were full of interesting details about life at the front – not just mud and food, but flares, “dug-outs”, young men having horse races, sightseeing at Amiens Cathedral five days after a battle in which 25% of the battalion were killed or wounded, and the moral quandary of court-martialing soldiers who had wounded themselves to avoid further battle, typically because of what we would today call post-traumatic stress disorder, with penalties shocking to today’s sensibility. In this note, I’ve collated all of the available china cabinet letters, interweaving them with information from War Diaries, to provide a narrative (pdf).

In the transcript, neither the author nor the addressee was identified. From details in the letters, it is evident that the author was Miles Langstaff, then a recent graduate of Osgoode Law School. I presume that his correspondent, who had knitted him a sweater and walked with him in the valley of the Humber River in west Toronto, was my grandmother. Langstaff was killed on March 1, 1917 in an ill-conceived raid at Vimy Ridge, a month before the major victory there in April 1917.


Transcript and narrative here.


The Destruction of Huma Abedin’s Emails on the Clinton Server and their Surprise Recovery

Despite extraordinarily intense coverage of all aspects of Hillary Clinton’s emails, all commentary to date (to my knowledge), even the underlying FBI Report, has paid little to no attention to the destruction of Huma Abedin’s emails, also stored on the Clinton server.  Further, even with the greatly increased interest in Huma’s emails arising from the discoveries on Anthony Weiner’s laptop, speculation has mostly focused on the potential connection to deleted Hillary emails, rather than the potentially much larger tranche of deleted Huma emails from the Clinton server (many of which would, in all probability, be connected to Hillary in any event.)

Both Hillary and Huma had clintonemail accounts. Huma was unique in that respect among Hillary’s coterie. (Chelsea Clinton, under the pseudonym of Diane Reynolds, was the only other person with a clintonemail address.)

The wiping and bleaching of the Clinton server and backups can be conclusively dated to late March 2015.  All pst files for both Hillary and Huma’s accounts were deleted and bleached in that carnage.  While an expurgated version of Hillary’s archive (her “work-related” emails) had been preserved at her lawyer’s office (thereby giving at least a talking-point against criticism), no corresponding version of Huma’s archive was preserved from the Clinton server.

Huma had accessed her clintonemail account through a web browser and, to her knowledge, had not kept a local copy on her own computer. So when Huma’s pst files were deleted from the Clinton server and backups, those were the only known copies of her clintonemail.com emails.  When Huma was eventually asked by the State Department to provide any federal records “in her possession”, her lawyers took the position that emails on the Clinton server were not in Huma’s possession and made no attempt to search Huma’s account on the Clinton server (though such an attempt would have been fruitless by the time that they were involved). Huma’s ultimate production of non-gov emails was a meagre ~6K emails, while, in reality, the number of non-gov emails that she sent or received is likely to be an order of magnitude greater.

Hillary was also asked to return all federal records “in her possession”, but did not return Huma’s emails on the Clinton server.  In today’s post, I’ll examine Hillary’s affidavit and answers to interrogatories in the Judicial Watch proceedings, both made under “penalty of perjury”, to show the misdirection. You have to watch the pea very carefully.

With respect to the ~600K emails recently discovered on the Anthony Weiner laptop, my surmise is that many, if not most, will derive from Huma’s unwitting backup of her clintonemail account prior to its March 2015 destruction on the Clinton server. In other words, the March 2015 destruction of pst files from the Clinton server included several hundred thousand Huma emails from her tenure at the State Department, over and above the 30K Hillary emails about “yoga”.

Continue reading

Was early onset industrial-era warming anthropogenic, as Abram et al. claim?

A guest post by Nic Lewis

Introduction

A recent PAGES 2k Consortium paper in Nature,[i] Abram et al., which claims that human-induced, greenhouse-gas-driven warming commenced circa 180 years ago,[ii] has been attracting some attention. The study arrives at its start dates by using a change-point analysis method, SiZer, to assess when the most recent significant and sustained warming trend commenced. Commendably, the lead author has provided the data and Matlab code used in the study, including the SiZer code.[iii]
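
For readers unfamiliar with SiZer (SIgnificant ZERo crossings of derivatives), its core idea can be sketched in a few lines of R: at each of a range of smoothing bandwidths, fit a kernel-weighted linear regression around every time point and classify the local slope as significantly positive, significantly negative, or indeterminate. This is a rough conceptual sketch, not the authors’ Matlab implementation:

    # Rough sketch of the SiZer idea (not the paper's code): build a map of
    # local-slope significance over time points and smoothing bandwidths.
    sizer_map <- function(yr, y, bandwidths) {
      sapply(bandwidths, function(h) {
        sapply(yr, function(y0) {
          w <- dnorm((yr - y0) / h)  # Gaussian kernel weights centred on y0
          s <- summary(lm(y ~ yr, weights = w))$coefficients["yr", ]
          # +1: significant warming; -1: significant cooling; 0: neither
          if (abs(s["t value"]) > 2) sign(s["Estimate"]) else 0
        })
      })
    }

The onset of sustained warming then corresponds to the earliest time from which the map shows significantly positive slopes across a wide range of bandwidths.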

Their post-1500 AD proxy-based regional reconstructions are the PAGES2K reconstructions, which have been discussed and criticized on many occasions at CA (see tag), with the Gergis et al 2016 Australian reconstruction substituted for the withdrawn version. I won’t comment on the validity of the post-1500 AD proxy-based regional reconstructions on which the observational side of their study is based – Steve McIntyre is much better placed than me to do so.

However, analysis of those reconstructions can only provide evidence as to when sustained warming started, not as to whether the cause was natural or anthropogenic. In this post, I will examine and question the paper’s conclusion that the early onset of warming detected in the study is attributable to the small increase in greenhouse gas emissions at the start of the Industrial Age.

The authors’ claim that the start of anthropogenic warming can be dated to the 1830s is based on model simulations of climate change from 1500 AD on.[iv] A simple reality check points to that claim being likely to be wrong: it flies in the face of the best estimates of the evolution of radiative forcing. According to the IPCC 5th Assessment [Working Group I] Report (AR5) estimates, the change in total effective radiative forcing from preindustrial (which the IPCC takes as 1750) to 1840 was –0.01 W/m2, or +0.01 W/m2 if only changes in anthropogenic forcings, and not solar and volcanic forcings, are included. Although the increase in forcing from all greenhouse gases (including ozone) is estimated to be +0.20 W/m2 by 1840, that forcing is estimated to be almost entirely cancelled out by negative forcing, primarily from anthropogenic aerosols and partly from land use change increasing planetary albedo.[v] Total anthropogenic forcing did not reach +0.20 W/m2 until 1890; in 1870 it was still under +0.10 W/m2.
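
The bookkeeping implied by those AR5 figures can be laid out explicitly. In this R sketch, only the greenhouse gas and total forcing changes are quoted values; the other lines are the residuals those quotes imply:

    # AR5 effective radiative forcing changes, 1750 to 1840 (W/m2).
    ghg             <- +0.20                     # all GHGs incl. ozone (quoted)
    total_anthro    <- +0.01                     # total anthropogenic (quoted)
    aerosol_landuse <- total_anthro - ghg        # implied residual: about -0.19
    total_all       <- -0.01                     # incl. solar and volcanic (quoted)
    natural         <- total_all - total_anthro  # implied residual: about -0.02

Continue reading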

Re-examining Cook’s Mt Read (Tasmania) Chronology

In today’s post, I’m going to re-examine (or more accurately, examine de novo) Ed Cook’s Mt Read (Tasmania) chronology, a chronology recently used in Gergis et al 2016 and Esper et al 2016, and in numerous multiproxy reconstructions over the past 20 years.

Gergis et al 2016  said that they used freshly-calculated “signal-free” RCS chronologies for tree ring sites except Mt Read (and Oroko). For these two sites, they chose older versions of the chronology,  purporting to justify the use of old versions “for consistency with published results” – a criterion that they disregarded for other tree ring sites.   The inconsistent practice immediately caught my attention.  I therefore calculated an RCS chronology for Mt Read from measurement data archived with Esper et al 2016.  Readers will probably not be astonished that the chronology disdained by Gergis et al had very elevated values in the early second millennium and late first millennium relative to the late 20th century.

I cannot help but observe that Gergis’ decision to use the older, flatter chronology was almost certainly made only after peeking at results from the new Mt Read chronology – yet another example of data torture (Wagenmakers 2011, 2012) by Gergis et al. At this point, readers are probably desensitized to criticism of yet more data torture. In this case, it appears probable that the decision impacts the medieval period of their reconstruction, where they used only two proxies, especially when combined with their arbitrary exclusion of Law Dome, which also had elevated early values.

Further curious puzzles emerged when I looked more closely at the older chronology favored by Gergis (and Esper). This chronology originated with Cook et al 2000 (Clim Dyn), which clearly stated that they had calculated an RCS chronology and even provided a succinct description of the technique, citing Briffa et al 1991, 1992 as authority. However, their reported chronology (both as illustrated in Cook et al 2000 and as archived at NOAA in 1998), though it has a very high correlation to my calculation, has negligible long-period variability. In this post, I present the case that the chronology presented by Cook as an RCS chronology was actually (and erroneously) calculated using a “traditional” standardization method that did not preserve low-frequency variance.
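
For readers unfamiliar with the technique, the RCS idea can be sketched from scratch in a few lines of R (an illustrative form only; production implementations, such as the rcs() function in the dplR package, handle pith offsets and other details). The essential point is that every measurement is detrended against one common age-dependent curve, rather than a curve fitted to each individual tree:

    # Minimal illustrative sketch of RCS standardization (not Cook's code).
    # rw: ring widths; age: cambial age of each ring; year: calendar year.
    rcs_chronology <- function(rw, age, year) {
      rc   <- tapply(rw, age, mean, na.rm = TRUE)  # mean growth at each age
      ages <- as.numeric(names(rc))
      sm   <- fitted(loess(as.numeric(rc) ~ ages, span = 0.3))  # smoothed curve
      idx  <- rw / sm[match(age, ages)]            # one curve for all trees
      tapply(idx, year, mean, na.rm = TRUE)        # mean index per year
    }

A “traditional” standardization instead fits and removes a growth curve from each individual series, which also removes any long-period signal common to the site – consistent with the flatness of the archived Cook chronology.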

Although the Cook chronology has been used over and over, I seriously wonder whether any climate scientist has ever closely examined it in the past 20 years. Supporting this surmise are defects and errors in the Cook measurement dataset, which have remained unrepaired for over 20 years. Cleaning the measurement dataset to make it usable was very laborious, and one wonders why these defects have been allowed to persist for so long.
Continue reading

Esper et al 2016 and the Oroko Swamp

Jan Esper, prominent in early Climate Audit posts as an adamant serial non-archiver, has joined with 17 other tree ring specialists to publish “Ranking of tree-ring based temperature reconstructions of the past millennium” (pdf). This assesses 39 long tree ring temperature reconstructions. The assessment is accompanied by an archive containing 39 reconstruction versions, together with the underlying measurement data for 33 of the 39 reconstructions. (It seems odd that measurement data would continue to be withheld for six sites, but, hey, it’s climate science.)

Because I’ve been recently looking at data used in Gergis et al, I looked first at Esper’s consideration of Oroko, one of two long proxies retained in Gergis screening.  I’ve long sought Oroko measurement data, first requesting it from Ed Cook in 2003.  Cook refused. Though Oroko reconstructions have been used over the years in multiproxy studies and by IPCC, the underlying measurement data has never been archived. The archive for Esper et al 2016 is thus the very first archive of Oroko measurement data (though unfortunately it seems that even the present archive is incomplete and not up-to-date).

Despite the claim to use the most recent reconstructions, Esper’s Oroko temperature reconstruction is decidedly out of date.  Worse, it uses an Oroko “reconstruction” in which Cook replaced proxy data (which went down after 1960) with instrumental data (which went up) – a contemporary variation of what is popularly known as “Mike’s Nature trick”, though Mike’s Nature trick, as discussed at CA here, was a little different.

In today’s post, I’ll look at the “new” Oroko data, which, needless to say,  has some surprises.

Continue reading

Gergis and Law Dome

In today’s post, I’m going to examine Gergis’ dubious screening out of the Law Dome d18O series, a series that has been of long-standing interest at Climate Audit (tag).

Gergis et al 2016 stated that they screened proxies according to the significance of the correlation to local gridcell temperature. Law Dome d18O not only had a significant correlation to local temperature, it had:

  • a higher correlation to local instrumental temperature than 24 of the 28 proxies retained by Gergis in her screened network;
  • a higher t-statistic than 19 of the 28 retained proxies;
  • a higher correlation and t-statistic than either of the other two long proxies (the Mt Read and Oroko Swamp tree ring chronologies).

Nonetheless, the Law Dome d18O series was excluded from the Gergis et al network. Gergis effected this exclusion not on grounds of deficient temperature correlation, but through an additional, arbitrary screening criterion – one which excluded Law Dome d18O and no other proxy in the screened network.
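
For concreteness, the generic form of such a correlation-significance screen can be sketched in R (an illustrative sketch, not Gergis’s code): a proxy passes if the t-statistic of its correlation with local gridcell temperature is significant:

    # Generic correlation-screening sketch (illustrative, not Gergis's code).
    # A proxy passes if its correlation with local gridcell temperature is
    # significant: t = r * sqrt(n - 2) / sqrt(1 - r^2).
    screen_proxy <- function(proxy, temp, alpha = 0.05) {
      ok    <- complete.cases(proxy, temp)
      r     <- cor(proxy[ok], temp[ok])
      n     <- sum(ok)
      tstat <- r * sqrt(n - 2) / sqrt(1 - r^2)
      c(r = r, t = tstat, pass = abs(tstat) > qt(1 - alpha / 2, df = n - 2))
    }

On a test of this form, Law Dome d18O passes comfortably; its exclusion turned on the additional criterion mentioned above.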

This is not the first occasion in which IPCC authors have arbitrarily excluded Law Dome d18O. CA readers may recall Climategate revelations on the contortions of IPCC AR4 Lead Authors to keep Law Dome out of the AR4 diagram illustrating long Southern Hemisphere proxies (see CA post here).

Law Dome d18O is intrinsically an extremely interesting proxy for readers interested in a Southern Hemisphere perspective on the Holocene (balancing the somewhat hackneyed commentary citing the Cuffey-Clow reconstruction based on the GISP2 ice core in Greenland). The utility of Law Dome d18O is much reduced by inadequate publishing and archiving by the Australian Antarctic Division – a criticism that I make somewhat reluctantly, since they have been polite in their correspondence with me, but ultimately unresponsive.
Continue reading

Joelle Gergis, Data Torturer

In 2012, the then much-ballyhooed Australian temperature reconstruction of Gergis et al 2012 mysteriously disappeared from Journal of Climate after being criticized at Climate Audit. Now, more than four years later, a successor article has finally been published. Gergis says that the only problem with the original article was a “typo” in a single word. Rather than “taking the easy way out” and simply correcting the “typo”, Gergis instead embarked on a program that ultimately involved nine rounds of revision, 21 individual reviews, two editors and took longer than the American involvement in World War II.  However, rather than Gergis et al 2016 being an improvement on or confirmation of Gergis et al 2012, it is one of the most extraordinary examples of data torture (Wagenmakers, 2011, 2012) that any of us will ever witness.

Also see Brandon S’s recent posts here and here. Continue reading
