PAGES 2017: Arctic Lake Sediments

Arctic lake sediment series have been an important component of recent multiproxy studies.  These series have been discussed on many occasions at Climate Audit (see tag), mostly very critically.  PAGES 2017 (and the related Werner et al 2017) made some interesting changes to the Arctic lake sediment inventory of PAGES 2013, which I’ll discuss today.


PAGES2017: New Cherry Pie

Rosanne D’Arrigo once explained to an astounded National Academy of Sciences panel that you had to pick cherries if you wanted to make cherry pie.  That practice was followed by D’Arrigo and Jacoby, who, for their reconstructions, selected tree ring chronologies that went the “right” way and discarded those that went the wrong way – a technique that will produce hockey sticks even from random red noise.  Her statement caused a flurry of excitement among Climategate correspondents, but unfortunately the NAS panel neither addressed nor explained the defects of this technique to the lignumphilous paleoclimate community.
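The red-noise point is easy to demonstrate. Below is a minimal R sketch (all settings – series count, length, AR(1) coefficient, screening threshold – are illustrative choices of mine, not anyone’s published parameters): it generates signal-free red noise “proxies”, retains only those that correlate with a rising “instrumental” target, and averages the survivors. The composite reliably shows a 20th-century blade on a flat shaft.

# Illustrative R sketch: ex post screening of pure red noise yields a hockey stick.
# All settings (series count, length, AR coefficient, threshold) are arbitrary.
set.seed(123)
n_years  <- 1000                     # pseudo-proxy length, e.g. AD 1000-1999
n_series <- 500                      # number of red-noise "proxies"
calib    <- 901:1000                 # "instrumental" calibration period
target   <- seq(0, 1, length.out = length(calib))   # rising temperature target

# Signal-free AR(1) red noise: a 1000 x 500 matrix, one column per "proxy"
proxies <- replicate(n_series, as.numeric(arima.sim(list(ar = 0.5), n = n_years)))

# "Pick cherries": keep only series going the "right" way in the calibration period
r      <- apply(proxies, 2, function(x) cor(x[calib], target))
picked <- proxies[, r > 0.2]         # arbitrary screening threshold

# Average the survivors: flat shaft, 20th-century blade
recon <- rowMeans(picked)
plot(recon, type = "l", xlab = "year", ylab = "composite",
     main = sprintf("%d of %d red-noise series survive screening",
                    ncol(picked), n_series))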

My long-standing recommendation to the paleoclimate community has been to define a class of proxy using ex ante criteria (e.g. treeline black spruce chronologies, Antarctic ice cores), and, once the ex ante criterion is selected, to apply a “simple” method to all members of the class.  The benefits of such a procedure seem obvious, but the protocol is stubbornly resisted by the paleoclimate community. The PAGES paleoclimate community has recently published a major compilation of climate series from the past millennium, but, unfortunately, its handling of data which goes the “wrong” way is risible.

Comey’s Mishandling of Classified Information

Recently, there has been controversy over allegations that former FBI Director Comey leaked classified information, an issue that I mentioned on Twitter a month ago. The recent news cycle began with a story in The Hill, leading to a tweet by Trump, followed by a series of sneering “rebuttals” in the media (CNN, Slate, Politico, Vanity Fair).  Comey’s defenders (like Hillary Clinton’s) claim that classification was done “retroactively”:

In fact, the Hill’s John Solomon noted that it’s unclear whether the classified information in the memos was classified at the time the memos were written and Politico’s Austin Wright reports Monday afternoon that some of Comey’s memos were indeed classified only retroactively

Thus far undiscussed by either side is Comey’s testimony to the House Intelligence Committee on March 20, which dealt directly with both the classification of details of Comey’s January 6 meeting with Trump and Comey’s understanding of obligations in respect to classified information. (Comey’s questionable briefing and conduct in the January 6 meeting merit extremely close scrutiny, but that’s a story for another day.)  The net result is that it seems inescapable that Comey either misled the congressional committee or mishandled classified information.

The January 6 Meeting

There was much public anticipation of Trump’s January 6 intelligence briefing, presented to him by Comey, CIA Director John Brennan, Director of National Intelligence James Clapper and NSA Director Michael Rogers. An unclassified intel assessment was concurrently released on January 6 which, in respect to the hacking allegations, added nothing to the earlier Dec 29 intel assessment. The presence of Comey, Brennan, Clapper and Rogers at the intel briefing was widely reported.  Towards the end of the briefing, Comey asked for the opportunity to meet with Trump one-on-one.

After Trump aides and the other three intel directors left the room, Comey briefed Trump on the Steele Dossier, the story of which remains mostly untold. The Steele Dossier, paid for by a still unidentified “Democratic donor”, had been produced by a DC opposition research firm (Fusion GPS) directly connected with the “Kremlin-connected” lawyer, Natalia Veselnitskaya, who had met with Donald Trump Jr in June 2016. Although the Steele Dossier contained multiple fabrications, its lurid allegations were taken very seriously by both the CIA and FBI, which, together with other government agencies, had been investigating them for months. Although Comey later objected to Trump talking to him one-on-one without a DOJ minder present, it was Comey himself who initiated the practice.

According to Comey’s written evidence on June 5, the ostensible purpose of Comey’s briefing to Trump on unverified material was to “alert the incoming President to the existence of this material” because “we knew the media was about to publicly report the material” and, “to the extent there was some effort to compromise an incoming President, we could blunt any such effort with a defensive briefing.”  Even though the FBI, CIA and other agencies had been investigating allegations in the Steele Dossier for months, Comey, “without [Trump] directly asking the question”, “offered [Trump] assurance” “that we were not investigating him personally”, supposedly to avoid any “uncertain[ty]” on Trump’s part “about whether the FBI was conducting a counter-intelligence investigation of his personal conduct”.

Comey’s briefing to Trump on January 6 appears to have intentionally misled Trump about counter-intelligence investigations into the Steele dossier, in effect treating Trump like a perp, rather than a legitimately elected president.  It took a while for Trump to figure out that he was being played by Comey.

The outcome of Comey’s briefing about the Steele dossier was the exact opposite of Comey’s subsequent self-serving explanation. The information that Trump had been briefed on the Steele Dossier was immediately leaked to the press, which had long been aware of the questionable and unverifiable dossier but had thus far resisted the temptation to publish it. (Some details from the Steele Dossier had been published previously; these provide an interesting tracer on earlier leaks – a topic that I’ll discuss on another occasion.)

CNN broke the news that intel chiefs had “presented Trump with claims of Russian efforts to compromise him” – using the leaked information about the contents of Comey’s briefing to Trump as a hook to notify the public about the existence of the dossier. CNN, having thrown the bait into the water, sanctimoniously refrained from publishing the Steele Dossier itself on the grounds that it was unverified. Once CNN wedged the news, the dossier story went viral: within an hour, Buzzfeed published the controversial Steele Dossier itself. Once it was in the sunlight, secondary parties named in the dossier (Trump’s lawyer Michael Cohen, one-time Trump campaigner Carter Page, Webzilla) were able to challenge fabrications in the Steele Dossier which had seemingly gone undetected during months of investigation by the FBI, CIA and other agencies.  (The allegation of Putin’s “direct” involvement originated in the Steele Dossier. Although the intel agencies gild the accusation in secret “sources and methods”, it appears highly possible, and even probable, that there is no evidence for this allegation other than the Steele Dossier.)

Trump was (quite reasonably, in my opinion) livid that details of Comey’s briefing to him had been leaked. These concerns were a major issue in his next meeting with Comey – a narrative that I’ll discuss on another occasion.

Comey at the House Intelligence Committee, March 20

Skipping forward, Comey testified before the House Intelligence Committee on March 20. One of the major points of interest in this hearing was the January 6 briefing. Rep. King, who, like Trump, was both frustrated by and concerned about leaks from the intel community, focussed in on the CNN leak because it concerned a classified briefing and, unlike most other leaks, involved only a very small number of people. King (reasonably) thought that these considerations would make it relatively easy to track down the leaker.  King’s exchange with Comey is fascinating to re-read, knowing, as we do now, that the briefing on the Steele Dossier had been done by Comey himself one-on-one with Trump.

King asked Comey about the leak to CNN as follows:

Do you — does that violate any law? I mean you were at a classified briefing with the president-elect of the United States and it had to be a very, very small universe of people who knew that you handed them that dossier and it was leaked out within hours. Are you making any effort to find out who leaked it and do you believe that constitute a criminal violation?

Comey responded that “any unauthorized disclosure of classified conversations or documents” was very serious and that such incidents “should be investigated aggressively and if possible, prosecuted”:

COMEY: I can’t say, Mr. King except I can answer in general.

KING: Yes.

COMEY: Any unauthorized disclosure of classified conversations or documents is potentially a violation of law and a serious, serious problem. I’ve spent most of my career trying to figure out unauthorized disclosures and where they came from. It’s very, very hard.

Often times, it doesn’t come from the people who actually know the secrets. It comes from one hop out, people who heard about it or were told about it. And that’s the reason so much information that reports to be accurate classified information is actually wrong in the media. Because the people who heard about it didn’t hear about it right. But, it is an enormous problem whenever you find information that is actually classified in the media. We don’t talk about it because we don’t wanna confirm it, but I do think it should be investigated aggressively and if possible, prosecuted so people take as a lesson, this is not OK. This behavior can be deterred and its deterred by locking some people up who have engaged in criminal activity.

King then attempted to draw out from Comey who was “in the room”. King presumed that Comey, Clapper, Brennan and Rogers were “in the room” and wondered if there were any others:

KING: Well, could you say it was — obviously, Admiral Rogers was in the room, you were in the room, General Clapper was in the room and Director Brennan was in the room. Were there any other people in the room that could’ve leaked that out?

I mean this isn’t a report that was circulated among 20 people. This is an unmasking of names where you may have 20 people in the NSA and a hundred people in the FBI, its not putting together a report or the intelligence agency. This is four people in a room with the president-elect of the United States. And I don’t know who else was in that room and that was leaked out, it seemed within minutes or hours, of you handing him that dossier and it was so confidential, if you read the media reports that you actually handed it to him separately.

So believe me, I’m not saying it was you. I’m just saying, it’s a small universe of people that would’ve known about that. And if it is a disclosure of classified information, if you’re going to start with investigating the leaks, to me that would be one place where you could really start to narrow it down.

Comey (the only person “in the room”) refused to answer on the grounds that he did not want to confirm any details of “a classified conversation with a president or president-elect”:

COMEY: And again, Mr. King, I can’t comment because I do not ever wanna confirm a classified conversation with a president or president-elect. I can tell you my general experience. It often turns out, there are more people who know about something you expected. At first, both because there may be more people involved in the thing than you realized, not — not this particular, but in general. And more people have been told about it or heard about it or staff have been briefed on it. And those echoes are in my experience, what most often ends up being shared with reporters.

King persisted:

KING: Well, could you tell us who else was in the room that day?

COMEY: I’m sorry?

KING: Could you tell us who else was in the room with you that day?

But Comey would not be drawn in:

COMEY: No, because I’m not going to confirm that there was such a conversation because then, I might accidentally confirm something that was in the newspaper.

King then tried to find out whether there had even been a conversation about the Steele Dossier:

KING: But could you tell us who was in the room, whether or not there was a conversation?

Comey refused to even confirm that there was a “conversation” in an unclassified setting (while allowing that he might be more forthcoming in a “classified setting”):

COMEY: No, I’m not confirming there was a conversation. In a classified setting, I might be able to share more with you, but I’m not going to confirm any conversations with either President Obama or President Trump or when President Trump was the President-elect.

King then tried to get Comey to say “who was in the room for the briefing”:

KING: Well, not the conversation or even the fact that you gave it to him, but can you — can you tell us who was in the room for that briefing that you gave?

COMEY: That you’re saying later ended up in the newspaper?

KING: Yes.

Comey again refused, citing the classified setting of the event:

COMEY: So my talking about who was in the room would be a confirmation that was in the newspaper was classified information, I’m not going to do that. I’m not going to help people who did something that — that is unauthorized.

King then tried to elicit a comment on whether the four directors had gone to Trump Tower, with Comey still being coy but using the event as an example of protecting classified information:

KING: Yeah, but we all know that the four of you went to Trump Tower for the briefing, I mean that’s not classified, is it?

COMEY: How do we all know that, though?

KING: OK.

(LAUGHTER)

COMEY: Yeah.

KING: You know, you can — you see the predicament we’re in, here.

COMEY: I get it. I get it. But we are duty-bound to protect classified information, both in the first when we get it, and then to make sure we don’t accidentally jeopardize classified information by what we say about something that appears in the media.

Comey’s Written Evidence, June 5

After refusing to answer questions from the House Intel Committee on the January 6 meeting on the grounds that such details were classified, Comey, supposedly drawing on a contemporaneous memo on the meeting (which does not appear to have been filed in the FBI document system), provided numerous details on the classified meeting in his written evidence on June 5:

 I first met then-President-Elect Trump on Friday, January 6 in a conference room at Trump Tower in New York. I was there with other Intelligence Community (IC) leaders to brief him and his new national security team on the findings of an IC assessment concerning Russian efforts to interfere in the election. At the conclusion of that briefing, I remained alone with the President-Elect to brief him on some personally sensitive aspects of the information assembled during the assessment.

The IC leadership thought it important, for a variety of reasons, to alert the incoming President to the existence of this material, even though it was salacious and unverified. Among those reasons were: (1) we knew the media was about to publicly report the material and we believed the IC should not keep knowledge of the material and its imminent release from the President-Elect; and (2) to the extent there was some effort to compromise an incoming President, we could blunt any such effort with a defensive briefing.

The Director of National Intelligence asked that I personally do this portion of the briefing because I was staying in my position and because the material implicated the FBI’s counter-intelligence responsibilities. We also agreed I would do it alone to minimize potential embarrassment to the President-Elect. Although we agreed it made sense for me to do the briefing, the FBI’s leadership and I were concerned that the briefing might create a situation where a new President came into office uncertain about whether the FBI was conducting a counter-intelligence investigation of his personal conduct.

It is important to understand that FBI counter-intelligence investigations are different than the more-commonly known criminal investigative work. The Bureau’s goal in a counter-intelligence investigation is to understand the technical and human methods that hostile foreign powers are using to influence the United States or to steal our secrets. The FBI uses that understanding to disrupt those efforts. Sometimes disruption takes the form of alerting a person who is targeted for recruitment or influence by the foreign power. Sometimes it involves hardening a computer system that is being attacked. Sometimes it involves “turning” the recruited person into a double-agent, or publicly calling out the behavior with sanctions or expulsions of embassy-based intelligence officers. On occasion, criminal prosecution is used to disrupt intelligence activities.

Because the nature of the hostile foreign nation is well known, counterintelligence investigations tend to be centered on individuals the FBI suspects to be witting or unwitting agents of that foreign power. When the FBI develops reason to believe an American has been targeted for recruitment by a foreign power or is covertly acting as an agent of the foreign power, the FBI will “open an investigation” on that American and use legal authorities to try to learn more about the nature of any relationship with the foreign power so it can be disrupted. In that context, prior to the January 6 meeting, I discussed with the FBI’s leadership team whether I should be prepared to assure President-Elect Trump that we were not investigating him personally. That was true; we did not have an open counter-intelligence case on him. We agreed I should do so if circumstances warranted. During our one-on-one meeting at Trump Tower, based on President-Elect Trump’s reaction to the briefing and without him directly asking the question, I offered that assurance.

Had Rep. King known these details on March 20 – in particular, that Comey alone had briefed Trump on the Steele Dossier – it is evident that his questioning on the CNN leak would have gone in a very different direction. But Comey withheld that information from him.

Conclusion

Comey’s defenders have argued that the content of the memoranda was classified “retroactively”, thus supposedly rebutting any fault on Comey’s part, or, alternatively, that Comey wrote his memoranda so that no classified material was included.

However, neither argument applies to the January 6 meeting (and perhaps others). The January 6 meeting is the easier case because of Comey’s own evidence. In his evidence to the House Intel Committee, Comey unequivocally treated any and all details about the January 6 meeting as “classified” and used this as an excuse to refuse to answer questions on the meeting, thereby concealing his unique role in the briefing from the committee.  Having taken this position before the Committee, Comey is on the horns of a dilemma: either the details were indeed classified, in which case his detailed written evidence of June 5 disclosed classified information, or they were not, in which case he misled the Committee in refusing to answer.  Neither explanation is to Comey’s credit.


Does a new paper really reconcile instrumental and model-based climate sensitivity estimates?

A guest post by Nic Lewis

A new paper in Science Advances by Cristian Proistosescu and Peter Huybers, “Slow climate mode reconciles historical and model-based estimates of climate sensitivity” (hereafter PH17), claims that accounting for the decline in feedback strength over time that occurs in most CMIP5 coupled global climate models (GCMs) brings observationally-based climate sensitivity estimates from historical records into line with model-derived estimates. It is not the first paper to attempt to do so, but it makes a rather bold claim and, partly because Science Advances seeks press coverage for its articles, has been attracting considerable attention.

Some of the methodology the paper uses may look complicated, with its references to eigenmode decomposition and full Bayesian inference.  However, the underlying point it makes is simple. The paper addresses equilibrium climate sensitivity (ECS)[1] of GCMs as estimated from information corresponding to that available during the industrial period. PH17 terms such an estimate ICS; it is usually called effective climate sensitivity. Specifically, PH17 estimates ICS for GCMs by emulating their global surface temperature (GST) and top-of-atmosphere  radiative flux imbalance (TOA flux)[2] responses under a 1750–2011 radiative forcing history matching the IPCC AR5 best estimates.

In a nutshell, PH17 claims that for the current generation (CMIP5) GCMs, the median ICS estimate is only 2.5°C, well short of their 3.4°C median ECS and centred on the range of observationally-based climate sensitivity estimates, which they take as 1.6–3.0°C. My analysis shows that their methodology and conclusions are incorrect, for several reasons, as I shall explain. My analysis of their data shows that the median ICS estimate for GCMs is 3.0°C, compared with a median for sound observationally-based climate sensitivity estimates in the 1.6–2.0°C range. To justify my conclusion, I need first to explain how ECS and ICS are estimated in GCMs, and what PH17 did.

For most GCMs, ICS is smaller than ECS, where ECS is estimated from ‘abrupt4xCO2’ simulation data,[3] on the basis that their behaviour in the later part of the simulation will continue until equilibrium. That is because, when CO2 concentration – and hence forcing, denoted by F – is increased abruptly, most GCMs display a decreasing-over-time response slope of TOA flux (denoted by H in the paper, but normally by N) to changes in GST (denoted by T). That is, the GCM climate feedback parameter λ decreases with time after forcing is applied.[4] Over any finite time period, ICS will fall short of ECS in the GCM simulation. Most but not all CMIP5 coupled GCMs behave like this, for reasons that are not completely understood. However, there is to date relatively little evidence that the real climate system does so.

Figure 1, an annotated reproduction of Fig. 1 of PH17, illustrates the point. The red dots show annual mean T (x-coordinate) and H (y-coordinate) values during the 150-year long abrupt4xCO2 simulation by the NorESM1-M GCM.[5] The curved red line shows a parameterised ‘eigenmode decomposition’ fit to the annual data. The ECS estimate for NorESM1-M based thereon is 3.2°C, the x-axis intercept of the red line. The estimated forcing in the GCM for a doubling of CO2 concentration (F) is 4.0 Wm−2, the y-axis intercept of the red line. The ICS estimate used, per the paper’s methods section, is represented by the x-axis intercept of the straight blue line, being ~2.3°C. That line starts from the estimated F value and crosses the red line at a point corresponding approximately to the same ratio of TOA flux to F as currently exists in the real climate system. If λ were constant, then the red dots would all fall on a straight line with slope −λ and ICS would equal ECS; if ECS (and ICS) were 2.3°C the red dots would all fall on the blue line, and if ECS were 3.2°C they would all fall on the dashed black line. The standard method of estimating ECS for a GCM from its abrupt4xCO2 simulation data, as used in IPCC AR5, has been to regress H on T over all 150 years of the simulation and take the x-axis intercept. For NorESM1-M, this gives an ECS estimate of 2.8°C, below the 3.2°C estimate based on the eigenmode decomposition fit. Regressing over years 21–150, a more recent and arguably more appropriate approach, also gives an ECS estimate of 3.2°C.


Fig. 1. Reproduction of Fig. 1 of PH17, with added brown and blue lines illustrating ICS estimates
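For readers who want to experiment, the “regress H on T and take the x-axis intercept” method is a few lines of R. This is a hedged sketch under my own assumptions: it presumes annual-mean T and H series have already been extracted from an abrupt4xCO2 run, and the variable and function names are mine.

# Hedged R sketch of the regression-based ECS estimate described above. Assumes
# annual-mean vectors T_anom (GST anomaly, K) and H (TOA flux imbalance, W/m2)
# from a 150-year abrupt4xCO2 run have already been loaded.
gregory_ecs <- function(T_anom, H, years = 1:150) {
  fit <- lm(H[years] ~ T_anom[years])
  a <- coef(fit)[[1]]                # y-intercept: estimated 4xCO2 forcing (W/m2)
  b <- coef(fit)[[2]]                # slope: minus the feedback parameter lambda
  c(F_2x   = a / 2,                  # forcing per CO2 doubling
    lambda = -b,                     # feedback parameter (W/m2/K)
    ECS    = -a / b / 2)             # x-axis intercept, halved for doubled CO2
}

# AR5-style estimate (all 150 years) vs regression over years 21-150 only:
# gregory_ecs(T_anom, H, 1:150)
# gregory_ecs(T_anom, H, 21:150)

Regressing over years 21–150 down-weights the fast initial response, which is why it typically yields a higher ECS estimate for models whose feedback strength declines over time.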


The effect of Atlantic internal variability on TCR estimation – an unfinished study

A guest article by Frank Bosse (posted by Nic Lewis)

A recent paper by Stolpe, Medhaug and Knutti (hereafter S17) deals with a longstanding question: by how much are Global Mean Surface Temperatures (GMST) influenced by the internal variability of the Atlantic (AMV/AMO) and the Pacific (PMV/PDO/IPO)?

The authors analyze the impacts of the natural ups and downs of both basins on the HadCRUT4.5 temperature record.

A few months ago, I published a post considering the influence of Atlantic variability; I want to compare some of its results with those of S17. But first, I want to draw out some further implications of S17.

The key figure of S17 (Fig. 7a) encapsulates most of the results. It shows a variability-adjusted HadCRUT4.5 record:

Fig. 1: Fig. 7a from S17, showing the GMST record (orange) between 1900 and 2005, adjusted for Atlantic and Pacific variability.
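Conceptually, a variability-adjusted record of this kind removes the AMO/PDO-congruent component from GMST. The R sketch below shows one simple regression-based way to do this; it is an illustration of the idea only, not S17’s model-based method, and the variable names are mine.

# Hedged R sketch of a simple regression-based variability adjustment -- an
# illustration of the concept only, not S17's actual method. Assumes annual
# vectors gmst, amo, pdo (indices) and forc (a forced-response proxy) are loaded.
adjust_gmst <- function(gmst, amo, pdo, forc) {
  fit <- lm(gmst ~ forc + amo + pdo)
  # subtract the AMO- and PDO-congruent components, keeping the forced signal
  gmst - coef(fit)[["amo"]] * amo - coef(fit)[["pdo"]] * pdo
}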



How dependent are GISTEMP trends on the gridding radius used?

A guest post by Nic Lewis

Introduction

Global mean surface temperature (GMST) changes and trends derived from the standard GISTEMP[1] record over its full 1880-2016 length exceed those per the HadCRUT4.5 and NOAA4.0.1 records by 4% and 7% respectively.  Part of these differences will be due to the use of different land and (in the case of HadCRUT4.5) sea-surface temperature (SST) data, and part to methodological differences.

GISTEMP and NOAA4.0.1 both use data from the ERSSTv4 infilled SST dataset, while HadCRUT4.5 uses data from the non-infilled HadSST3 dataset. Over the full 1880-2016 GISTEMP record, the global-mean trends in the two SST datasets were almost the same: 0.56 °C/century for ERSSTv4 and 0.57 °C/century for HadSST3. And although HadCRUT4.5 depends (via its use of the CRUTEM4 record) on a different set of land station records from GISTEMP and NOAA4.0.1 (both of which use GHCNv3.3 data), there is great commonality in the underlying set of stations used.
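For anyone wanting to reproduce such comparisons, the trend calculation itself is straightforward in R. This hedged sketch assumes the annual GMST anomalies have already been downloaded and merged into a data frame; the column names are my own.

# Hedged R sketch of the trend comparison. Assumes a data frame 'gmst' with
# columns year, gistemp, hadcrut, noaa (annual anomalies, 1880-2016) has been
# assembled from the respective archives.
trend <- function(y, yr) coef(lm(y ~ yr))[[2]] * 100     # deg C per century

t_gis  <- trend(gmst$gistemp, gmst$year)
t_had  <- trend(gmst$hadcrut, gmst$year)
t_noaa <- trend(gmst$noaa,    gmst$year)

# percentage by which the GISTEMP trend exceeds the other two
round(100 * (t_gis / c(HadCRUT4.5 = t_had, NOAA4.0.1 = t_noaa) - 1), 1)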

Accordingly, it seems likely that differences in methodology may largely account for the slightly faster 1880-2016 warming in GISTEMP. Although the excess warming in GISTEMP is not large, I was curious to learn in more detail about the methods it uses and their effects. The primary paper describing the original (land station only based) GISTEMP methodology is Hansen et al. 1987.[2] Ocean temperature data was added in 1996.[3] Hansen et al. 2010[4] provides an update and sets out changes in the methods.

Steve has written a number of good posts about GISTEMP in the past, locatable using the Search box. Some are not relevant to the current version of GISTEMP, but Steve’s post showing how to read GISTEMP binary SBBX files in R (using a function written by contributor Nicholas) is still applicable, as is a later post covering other related R functions that he had written. All the function scripts are available here.

How GISTEMP is constructed

Rather than using a regularly spaced grid, GISTEMP divides the Earth’s surface into 8 latitude zones, separated at 0°, 23.58°, 44.43° and 64.16° (from now on rounded to the nearest degree).  Moving from pole to pole, the zones have area weights of 10%, 20%, 30%, 40%, 40%, 30%, 20% and 10%, and are divided longitudinally into respectively 4, 8, 12, 16, 16, 12, 8 and 4 equal sized boxes. This partitioning results in 80 equal area boxes. Each box is then divided into 100 subboxes, with equal longitudinal extent but graduated latitudinal extent, so that they all have equal areas. Figure 1, reproduced from Hansen et al. 1987, shows the box layout. Box numbers are shown in their lower right-hand corners; the dates and other numbers have been superseded.

Figure 1. 80 equal area box regions used by GISTEMP. From Hansen et al. 1987, Fig.2.
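The latitude boundaries are not arbitrary: they are the latitudes at which sin(latitude) equals 0.4, 0.7 and 0.9, which is what makes the stated zone weights come out as equal-area fractions of a hemisphere. A short R sketch verifies this and illustrates the subbox construction (the subbox example is my own illustration):

# R sketch verifying the GISTEMP zone geometry described above. The zone
# boundaries are the latitudes at which sin(latitude) = 0.4, 0.7 and 0.9,
# giving zones of 40/30/20/10% of a hemisphere from equator to pole.
zone_bounds <- asin(c(0.4, 0.7, 0.9)) * 180 / pi
round(zone_bounds, 2)                     # 23.58 44.43 64.16, as in the text

boxes_per_zone <- c(4, 8, 12, 16, 16, 12, 8, 4)   # pole to pole
sum(boxes_per_zone)                       # 80 equal-area boxes in total

# Subboxes: equal longitudinal width, latitudinal edges at equal increments of
# sin(latitude), so all 100 subboxes in a box have equal area. For example, the
# latitudinal edges of the 10 subbox rows in a box spanning 0 to 23.58 N:
row_edges <- asin(seq(0, 0.4, length.out = 11)) * 180 / pi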


Centenary of the End of the Battle of the Somme

November 18 marks the centenary of the end of the Battle of the Somme, an anniversary that passed essentially unnoticed, though the battle was a seminal event in the development of modern Canada. Its carnage amounted to over 1.1 million casualties from a combined population (both sides) of about 170 million. (For scale, there have been approximately 35,000 U.S. casualties in Iraq from 2003 to 2016.)

I became interested in the Battle of the Somme earlier this year due to a sheaf of papers in the back of my mother’s china cabinet, which I noticed while she was moving.

The papers were copies of transcripts of letters from the front by the adjutant of the 75th Canadian Battalion (4th Canadian Division), one of the battalions that led the closing assault at the Battle of the Somme.  While other war-time correspondence in family archives tended to be sincere but dreary epistles, these letters were full of interesting details about life at the front – not just mud and food, but flares, “dug-outs”, young men having horse races, sightseeing at Amiens Cathedral five days after a battle in which 25% of the battalion were killed or wounded, and the moral quandary of court-martialing soldiers who had wounded themselves to avoid further battle, typically because of what we today call post-traumatic stress disorder, with penalties shocking to today’s sensibility.  In this note, I’ve collated all of the available china cabinet letters, interweaving them with information from War Diaries, to provide a narrative (pdf).

In the transcript, neither the author nor the addressee was identified.  From details in the letters, it is evident that the author was Miles Langstaff, then a recent graduate of Osgoode Law School.  I presume that his correspondent, who had knitted him a sweater and walked with him in the valley of the Humber River in west Toronto, was my grandmother. Langstaff was killed on March 1, 1917 in an ill-conceived raid at Vimy Ridge, a month before the major victory there in April 1917.


Transcript and narrative here.


The Destruction of Huma Abedin’s Emails on the Clinton Server and their Surprise Recovery

Despite extraordinarily intense coverage of all aspects of Hillary Clinton’s emails, all commentary to date (to my knowledge), even the underlying FBI Report, has paid little to no attention to the destruction of Huma Abedin’s emails, also stored on the Clinton server.  Further, even with the greatly increased interest in Huma’s emails arising from the discoveries on Anthony Weiner’s laptop, speculation has mostly focused on the potential connection to deleted Hillary emails, rather than on the potentially much larger tranche of deleted Huma emails from the Clinton server (many of which would, in all probability, be connected to Hillary in any event).

Both Hillary and Huma had clintonemail accounts. Huma was all but unique in that respect among Hillary’s coterie: Chelsea Clinton, under the pseudonym Diane Reynolds, was the only other person with a clintonemail address.

The wiping and bleaching of the Clinton server and backups can be conclusively dated to late March 2015.  All pst files for both Hillary’s and Huma’s accounts were deleted and bleached in that carnage.  While an expurgated version of Hillary’s archive (her “work-related” emails) had been preserved at her lawyer’s office (thereby giving at least a talking-point against criticism), no corresponding version of Huma’s archive was preserved from the Clinton server.

Huma had accessed her clintonemail account through a web browser and, to her knowledge, had not kept a local copy on her own computer. So when Huma’s pst files were deleted from the Clinton server and backups, those were the only known copies of her clintonemail.com emails.  When Huma was eventually asked by the State Department to provide any federal records “in her possession”, her lawyers took the position that emails on the Clinton server were not in Huma’s possession and made no attempt to search Huma’s account on the Clinton server (though such an attempt would have been fruitless by the time that they were involved). Huma’s ultimate production of non-gov emails was a meagre ~6K emails, while, in reality, the number of non-gov emails that she sent or received is likely to be an order of magnitude greater.

Hillary was also asked to return all federal records “in her possession”, but did not return Huma’s emails on the Clinton server.  In today’s post, I’ll examine Hillary’s affidavit and answers to interrogatories in the Judicial Watch proceedings, both made under “penalty of perjury”, to show the misdirection. You have to watch the pea very carefully.

In respect to the ~600K emails recently discovered on the Anthony Weiner laptop, my surmise is that many, if not most, will derive from Huma’s unwitting backup of her clintonemail account prior to its March 2015 destruction on the Clinton server. In other words, the March 2015 destruction of pst files from the Clinton server included several hundred thousand Huma emails from her tenure at the State Department, over and above the 30K Hillary emails about “yoga”.


Was early onset industrial-era warming anthropogenic, as Abram et al. claim?

A guest post by Nic Lewis

Introduction

A recent PAGES 2k Consortium paper in Nature,[i] Abram et al., which claims that human-induced, greenhouse-gas-driven warming commenced circa 180 years ago,[ii] has been attracting some attention. The study arrives at its start dates by using a change-point analysis method, SiZer, to assess when the most recent significant and sustained warming trend commenced. Commendably, the lead author has provided the data and Matlab code used in the study, including the SiZer code.[iii]
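To give a flavour of the change-point approach, here is a greatly simplified R analogue. SiZer proper tests the sign of the smoothed derivative across a whole family of bandwidths; the sketch below uses a single sliding-window trend test, and the function and parameter choices are mine, purely for illustration.

# Greatly simplified R analogue of the change-point idea (SiZer tests the sign
# of the smoothed derivative across many bandwidths; this single-bandwidth
# sliding-window version is an illustration only). Assumes annual vectors
# 'recon' (reconstructed anomalies) and 'yr' (calendar years).
onset_of_warming <- function(recon, yr, window = 50) {
  n <- length(recon); half <- window %/% 2
  sig_pos <- rep(NA, n)
  for (i in (half + 1):(n - half)) {
    idx <- (i - half):(i + half)
    cf  <- summary(lm(recon[idx] ~ yr[idx]))$coefficients
    sig_pos[i] <- cf[2, 1] - 2 * cf[2, 2] > 0   # local trend significantly > 0?
  }
  # earliest year after which the local trend stays significantly positive
  yr[max(which(!sig_pos)) + 1]
}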

Their post-1500 AD proxy-based regional reconstructions are the PAGES2K reconstructions, which have been discussed and criticized on many occasions at CA (see tag), with the Gergis et al 2016 Australian reconstruction substituted for the withdrawn version. I won’t comment on the validity of the post-1500 AD proxy-based regional reconstructions on which the observational side of their study is based – Steve McIntyre is much better placed than me to do so.

However, analysis of those reconstructions can only provide evidence as to when sustained warming started, not as to whether the cause was natural or anthropogenic. In this post, I will examine and question the paper’s conclusion that the early onset of warming detected in the study is attributable to the small increase in greenhouse gas emissions at the start of the Industrial Age.

The authors’ claim that the start of anthropogenic warming can be dated to the 1830s is based on model simulations of climate change from 1500 AD on.[iv] A simple reality check points to that claim being likely to be wrong: it flies in the face of the best estimates of the evolution of radiative forcing. According to the IPCC 5th Assessment [Working Group I] Report (AR5) estimates, the change in total effective radiative forcing from preindustrial (which the IPCC takes as 1750) to 1840 was –0.01 W/m2, or +0.01 W/m2 if changes only in anthropogenic forcings, and not solar and volcanic forcings, are included. Although the increase in forcing from all greenhouse gases (including ozone) is estimated to be +0.20 W/m2 by 1840, that forcing is estimated to be almost entirely cancelled out by negative forcing, primarily from anthropogenic aerosols and partly from land use change increasing planetary albedo.[v]  Total anthropogenic forcing did not reach +0.20 W/m2 until 1890; in 1870 it was still under +0.10 W/m2.

Re-examining Cook’s Mt Read (Tasmania) Chronology

In today’s post, I’m going to re-examine (or, more accurately, examine de novo) Ed Cook’s Mt Read (Tasmania) chronology, a chronology recently used in Gergis et al 2016 and Esper et al 2016, as well as in numerous multiproxy reconstructions over the past 20 years.

Gergis et al 2016  said that they used freshly-calculated “signal-free” RCS chronologies for tree ring sites except Mt Read (and Oroko). For these two sites, they chose older versions of the chronology,  purporting to justify the use of old versions “for consistency with published results” – a criterion that they disregarded for other tree ring sites.   The inconsistent practice immediately caught my attention.  I therefore calculated an RCS chronology for Mt Read from measurement data archived with Esper et al 2016.  Readers will probably not be astonished that the chronology disdained by Gergis et al had very elevated values in the early second millennium and late first millennium relative to the late 20th century.

I cannot help but observe that Gergis’ decision to use the older, flatter chronology was almost certainly made only after peeking at results from the new Mt Read chronology – yet another example of data torture (Wagenmakers 2011, 2012) by Gergis et al. At this point, readers are probably de-sensitized to criticism of yet more data torture. In this case, it appears probable that the decision impacts the medieval period of their reconstruction, in which they used only two proxies, especially when combined with their arbitrary exclusion of Law Dome, which also had elevated early values.

Further curious puzzles emerged when I looked more closely at the older chronology favored by Gergis (and Esper). This chronology originated with Cook et al 2000 (Clim Dyn), which clearly stated that they had calculated an RCS chronology and even provided a succinct description of the technique, citing Briffa et al 1991, 1992 as authority.  However, their reported chronology (both as illustrated in Cook et al 2000 and as archived at NOAA in 1998), though it has a very high correlation to my calculation, has negligible long-period variability.  In this post, I present the case that the chronology presented by Cook as an RCS chronology was actually (and erroneously) calculated using a “traditional” standardization method that did not preserve low-frequency variance.
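For readers unfamiliar with the distinction, here is a bare-bones R sketch of the RCS idea. It is a conceptual illustration only – the data layout and smoothing choices are mine, and production implementations (e.g. in the dplR package) handle pith offsets and other refinements:

# Bare-bones R sketch of RCS standardization, as a conceptual illustration only.
# 'rings' is assumed to be a data frame with columns width (ring width),
# age (ring/cambial age) and year (calendar year).
rcs_chronology <- function(rings, spline_df = 10) {
  # 1. regional curve: mean ring width as a smooth function of ring age,
  #    fitted across ALL trees at once
  mean_by_age <- tapply(rings$width, rings$age, mean)
  ages <- as.numeric(names(mean_by_age))
  rc   <- smooth.spline(ages, mean_by_age, df = spline_df)

  # 2. index each ring against the single regional curve; because one curve
  #    serves all trees, low-frequency variance common to the trees survives
  rings$index <- rings$width / predict(rc, rings$age)$y

  # 3. chronology: mean index by calendar year
  tapply(rings$index, rings$year, mean)
}
# A "traditional" chronology instead fits a flexible curve to EACH tree, so
# century-scale variance common to the trees is absorbed into the curve fits.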

Although the Cook chronology has been used over and over, I seriously wonder whether any climate scientist has closely examined it in the past 20 years.  Supporting this surmise are defects and errors in the Cook measurement dataset, which have remained unrepaired for over 20 years.  Cleaning the measurement dataset into usable form was very laborious, and one wonders why these defects have been allowed to persist for so long.