Guccifer 2 and “Russian” Metadata

The DHS-FBI intel assessment of the DNC hack concluded with “high confidence” that Guccifer 2 was a Russian operation, but provided (literally) zero evidence in support of the attribution.  Ever since Guccifer 2’s surprise appearance on June 15, 2016 (one day after Crowdstrike’s announcement of the DNC hack by “Russia”), there has been a widespread consensus that Guccifer 2 was a Russian deception operation, with only a few skeptics (e.g. Jeffrey Carr, who questioned the evidence but not necessarily the conclusion, and Adam Carter, who challenged the attribution).

Perhaps the most prevalent argument for attribution has been the presence of “Russian” metadata in documents included in Guccifer 2’s original post – the theory being that the “Russian” metadata was left by mistake. I’ve looked at lots of metadata, both in connection with Climategate and more recently in connection with the DNC hack, and, in my opinion, the chance of this metadata being left by mistake is zero. Precisely what it means is a big puzzle though.



Guccifer 2 Email Time Zone

One of the major differences between Mr FOIA and Guccifer 2 is the latter’s use of email to correspond with journalists.

G2 contacted Gawker and Smoking Gun on June 15, corresponding further with Smoking Gun on June 21 and June 27. He corresponded with Vocativ on July 4-5 and with the Hill on July 11 and 14.  Both the content and metadata are available for the June 27, July 4-5, July 11 and July 14 emails. Threat Connect has been the most prominent in using email metadata in efforts to link Guccifer 2 to Russia: here, here, here.  Jeffrey Carr has been one of the most prominent critics of these metadata analyses.

In today’s post, I’m going to discuss some timestamp information that, to my knowledge, has not been previously canvassed.  The analysis turns on the timestamp information that accumulates in an email chain involving different time zones. Readers of Climategate emails will recall many such chains as emails passed back and forth between CRU and the US.

First, here is a screenshot of an email from guccifer20@aol.fr to The Smoking Gun offering emails on Hillary Clinton’s staff.  (For orientation, this is three weeks after Trump Jr’s meeting and one week after the first memo in the Steele dossier.) It’s received at 3:43 PM Eastern (Daylight).

TSG replied a few minutes later, expressing interest, resulting in a second email from Guccifer 2 (Stephan Orphan) at 4:18 PM (Eastern).   Within the thread, there is timestamp information on the timezone of G2’s computer: Guccifer 2 received his answer from Smoking Gun at 14:46, implying that his timezone reads one hour earlier than Eastern, i.e. Central.

The same applies to a subsequent email, where once again the receive time for Guccifer 2 appears to be in a timezone one hour earlier (Central).
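For readers who want to see the arithmetic, here is a minimal sketch of the inference, using illustrative header values (not the actual headers): an email sent at 15:46 Eastern Daylight that the recipient’s client stamps 14:46 implies a receiving computer set one hour behind Eastern.

```python
from email.utils import parsedate_to_datetime
from datetime import timedelta, timezone

# Hypothetical Date header on TSG's reply (Eastern Daylight, -0400);
# illustrative values, not the actual header.
sent = parsedate_to_datetime("Mon, 27 Jun 2016 15:46:00 -0400")

# G2's client stamped receipt at 14:46 with no zone indicator; if both
# clocks are accurate, his zone is whichever offset makes 14:46 the
# same instant as the send time.
recv_minutes = 14 * 60 + 46

def wall_minutes(dt):
    """Minutes past midnight on the local wall clock."""
    return dt.hour * 60 + dt.minute

matches = [
    hours for hours in range(-12, 13)
    if wall_minutes(sent.astimezone(timezone(timedelta(hours=hours)))) == recv_minutes
]
print(matches)  # [-5]: UTC-5, i.e. Central Daylight
```

The same check applied to the later emails in the thread yields the same one-hour offset.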

Discussion

The time zone information here is consistent with the time zone information on the cf.7z dossier. Computer time zones can be set and reset in a few seconds, so one cannot place much weight on this.  I don’t know how far a fake timezone setting in a computer is carried forward into email headers and metadata; I’d be interested in information on this.  While this indicium seems fairly slight, other indicia used to attribute Guccifer 2 are just as slight, if not worse.

Time Zone of Guccifer 2 cf.7z

In a recent post, I observed that the majority of the emails in the Wikileaks DNC archive were sent AFTER Crowdstrike installed their anti-Russian software on May 6.  In today’s post, I’ll look at a metadata issue concerning Guccifer 2, whom the US intel community attributed, with “high confidence”, to Russia, supposedly working under the personal direction of Putin.  I’m going to look closely at document metadata in the two 7z dossiers published by Guccifer 2 in fall 2016. Neither dossier contained any documents of any relevance to the 2016 election.

Earlier this year, Forensicator observed  that the ngpvan.7z dossier showed evidence of several copying and collating operations, including a copying operation in which the modification date-times of all documents were set to a 14 minute window on July 5, 2016. From analysis of the metadata, Forensicator plausibly argued that the copy-to computer was set to Eastern time zone. Forensicator didn’t comment on the other Guccifer 2 dossier (cf.7z).

I’ve closely examined both dossiers and noticed that the modification times in cf.7z display one hour earlier than the corresponding times in the dossier analysed by Forensicator (the displayed times match perfectly when the viewing computer is set to Atlantic Canada time).  I am much less knowledgeable than Forensicator and similar analysts in such details and am unable to present a solution.

Forensicator’s Analysis of ngpvan.7z Time Zones

The top directory of Guccifer 2’s ngpvan.7z dossier contained 13 .rar folders, 4 .zip folders and 5 documents (pdf, png).  All .rar folders had modification dates of Sept 1, 2016 – a few days before the announcement of the dossier on Sept 4, 2016.  All .zip files, documents in the top directory and documents in the .rar folders had modification dates of July 5, 2016.  Forensicator, working in the Pacific time zone, noticed that there was a 3 hour difference between the modification times displayed for documents within the .rar files and those in the top directory (as shown in the figure below). Forensicator explained (here) this difference as follows: 7z stores document times in UTC, while the .rar files, constructed using WinRAR 4, store local time; from this he deduced that the copy-to computer of the July 5 copy operation was in the Eastern time zone.

His explanation is terse. To fully understand his point in operational terms, I adjusted my computer to UTC and took equivalent observations. A file outside the RAR folders (e.g. sf3.pdf), which had been displayed as 15:46 Pacific, is displayed as 22:46 UTC, reflecting the 7 hour time difference. However, a file within the RAR folders (e.g. DonorsByMM.xlsx), which had been displayed as 18:51 Pacific, is still displayed as 18:51 UTC.  In other words, 7z doesn’t know the correct timezone of the RAR documents and incorrectly assumes they come from the timezone of the current user.  The two sets of times are only consistent if the RAR times are interpreted as Eastern Daylight (-0400).

Forensicator’s point is unequivocally correct.  I would prefer that he not have said “we need to adjust the .7z file times to reflect Eastern Time”.  Having spent time trying to parse through this, I would have said that “we need to adjust the RAR file times”, since it is the RAR timezone that 7z gets wrong, but that doesn’t impact the correctness, importance or originality of his observations.
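Forensicator’s UTC-versus-local point can be sketched in a few lines. This is a sketch under stated assumptions: 7z stores its own entries in UTC, while WinRAR 4 entries carry a zoneless local wall-clock time that 7z displays unchanged. The two example files above only land in the same July 5 copy window when the RAR time is read as Eastern Daylight:

```python
from datetime import datetime, timedelta, timezone

EDT = timezone(timedelta(hours=-4))  # Eastern Daylight, -0400

# Top-directory document (sf3.pdf): 7z stores its mtime in UTC.
sf3_utc = datetime(2016, 7, 5, 22, 46, tzinfo=timezone.utc)

# Document inside a .rar (DonorsByMM.xlsx): WinRAR 4 stored a zoneless
# local wall-clock time, which 7z displays unchanged in any viewer zone.
donors_raw = datetime(2016, 7, 5, 18, 51)

# Reading the RAR time as Eastern Daylight puts both files inside the
# 14-minute window (18:39-18:53 Eastern) of the July 5 copy operation.
assert sf3_utc.astimezone(EDT).strftime("%H:%M") == "18:46"
assert donors_raw.replace(tzinfo=EDT).astimezone(timezone.utc).strftime("%H:%M") == "22:51"
```

Trying any other offset for the RAR time pushes the two files into different windows, which is the substance of Forensicator’s deduction.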

 

July 5, 2016 Copying in cf.7z

Guccifer 2’s other 7z dossier (cf.7z) was released on October 4, 2016 in a blogpost promising (but not delivering) salacious details about the Clinton Foundation.  Like the previous dossier, the documents in cf.7z are mundane administrative details of the Democratic Party of Virginia (DPVA) – not even the DNC. Whereas the documents of ngpvan.7z were all extremely stale (the most recent documents from 2011), cf.7z consists of documents from 2013-2016. Its most recent document is from June 1-2, 2016, but documents originating after April 2016 are very sparse.

Three directories contain documents with modification dates of July 5, 2016.  From the time gaps in the ngpvan.7z dossier, Forensicator had postulated that a much larger copying operation had taken place on July 5.  The cf.7z documents with modification dates of July 5 seem to originate from this larger copy operation – but display exactly one hour earlier, indicating a difference in time zone display rather than a different origin. The earliest time in the ngpvan.7z dossier was 18:39; the documents in the cf.7z/OFA directory (152.6 MB) have modification times between 17:34 and 17:38, immediately preceding it after allowing for the postulated one hour time zone difference.
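The fit can be checked arithmetically; a sketch using the times quoted above shows that shifting the latest cf.7z/OFA display time forward by the postulated hour places it exactly one minute before the start of the ngpvan.7z window:

```python
from datetime import datetime, timedelta

ngpvan_start = datetime(2016, 7, 5, 18, 39)   # earliest ngpvan.7z time
ofa_last = datetime(2016, 7, 5, 17, 38)       # latest cf.7z/OFA display time

# Apply the postulated one-hour time zone difference to the cf.7z time.
shifted = ofa_last + timedelta(hours=1)
assert shifted < ngpvan_start
assert ngpvan_start - shifted == timedelta(minutes=1)
```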

The cf.7z/Donor Research and Prospecting directory contains documents with modification dates ranging from March 2015 to July 5, 2016 (plus one 2011 outlier). Some documents were copied in what Forensicator called the “Windows” style, while others, including the most recent batches (dated May 23, June 6 and July 5), were copied in what Forensicator called the “Unix” style that was used in the July 5 copy step of ngpvan.7z.  The July 5 tranche has modification times between 17:39 and 17:52, which again fit after allowing for the proposed one hour time zone difference. (Displayed times for a computer set to Atlantic Canada time match perfectly.)

Documents in a third directory (the very small cf.7z/emails directory) also match, allowing for the proposed one-hour time zone difference.

DonorsByMM.xlsx

It turns out that two documents in the cf.7z/Donor Research and Prospecting directory (DonorsByMM.xls and DonorsByMM_2.xls) were also uploaded to the ngpvan.7z/DonorAnalysis directory, where the postulated one hour time zone difference can be demonstrated to one second accuracy. More detailed properties can be obtained by right-clicking on the files, with results for each shown below. To the nearest second, the respective copy times are shown as 17:52:00 and 18:51:59, one hour apart to the second.

There are differences in technique in the preparation of the two dossiers. Times in the cf.7z dossier appear to be rounded to the nearest minute or second, while times in the ngpvan.7z are chopped off. Thus a file with a time ending in 59.6 seconds would be rounded up in one case and chopped in the other. One archive used an LZMA2:26 method, while the other used m3:22. The ngpvan.7z archive mentions Win32, not mentioned for cf.7z.
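The rounding-versus-chopping point explains how a single underlying copy time could display one second off from an exact hour apart. A sketch, assuming (hypothetically) an underlying time of 18:51:59.6 Eastern:

```python
from datetime import datetime, timedelta

# Hypothetical underlying copy time: 18:51:59.6 (Eastern display).
exact = datetime(2016, 7, 5, 18, 51, 59, 600000)

# ngpvan.7z chops off the fractional second:
chopped = exact.replace(microsecond=0)
# cf.7z rounds to the nearest second, and displays one hour earlier:
rounded = (exact + timedelta(microseconds=500_000)).replace(microsecond=0)
shifted = rounded - timedelta(hours=1)

assert chopped.strftime("%H:%M:%S") == "18:51:59"   # ngpvan.7z display
assert shifted.strftime("%H:%M:%S") == "17:52:00"   # cf.7z display
```

Under this assumption the observed pair (17:52:00 and 18:51:59) is consistent with a single copy event processed by two different tools.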

Conclusion and Question

It seems certain to me that the DonorsByMM_2.xlsx document in each archive originated in a single copy operation with metadata differences arising from later processing. The timezone of the cf.7z dossier has somehow been set one hour earlier than the time zone of the ngpvan.7z dossier, which Forensicator deduced as Eastern North America. This implies Central time zone. In addition, somewhat different techniques were used in the preparation of the two dossiers. I don’t know enough of the details of the copy operations to diagnose further and would welcome any ideas.

 

[Update Sep 19- removed an incorrect speculation on upload to mediafire, which reflected my location not anyone else’s]

Email Dates in the Wikileaks DNC Archive

Yesterday, Scott Ritter published a savage and thorough critique of the role of Dmitri Alperovitch and Crowdstrike, who are uniquely responsible for the attribution of the DNC hack to Russia. Ritter calls it “one of the greatest cons in modern American history”.  Ritter’s article gives a fascinating account of an earlier questionable incident in which Alperovitch first rose to prominence – his attribution of the “Shady Rat” malware to the Chinese government at a time when there was a political appetite for such an attribution. Ritter portrays the DNC incident as Shady Rat 2.  Read the article.

My post today is a riff on a single point in the Ritter article, using analysis that I had in inventory but had not written up.  I’ve analysed the dates of the emails in the Wikileaks DNC email archive: the pattern (to my knowledge) has never been analysed. The results are a surprise – standard descriptions of the incident are misleading.

Arctic Lake Sediments: Reply to JEG

Julien Emile-Geay (JEG) submitted a lengthy comment concluding with the tasteless observation that “Steve’s mental health issues are beyond PAGES’s scope. Perhaps the CA tip jar pay for some therapy?”  – the sort of insult that is far too characteristic of activist climate science.  JEG seems to have been in such a hurry to make this insult that he didn’t bother getting his facts right.

Inventory

In the article, I had inventoried Arctic lake sediment series introduced in four major multiproxy studies: Mann et al 2008, Kaufman et al 2009, PAGES 2013 and PAGES 2017, observing that a total of 32 different series had been introduced, showing the split in the first line of the table shown in the article (replicated below). In each case, the series had been declared “temperature sensitive” but 16 had been declared in a subsequent study to be not temperature sensitive after all. In the table, I listed withdrawals by row, showing (inter alia) that three had been withdrawn in P14 (McKay and Kaufman 2014), four in PAGES 2017 (which also reinstated two proxies used in earlier studies) and three in Werner et al 2017 (CP17).   In my comments on Werner et al 2017, I distinguished the three series that were discarded from series not used in that study because they were not annual (of which there were nine.)

[Table: arctic_inventory]

Here’s JEG’s comment on this table:

Responding to the post, not the innumerable comments (many of which are OT).

It is incorrect to claim that PAGES2k discarded 50% of the lake sediment records.

PAGES 2013, v1.0 had 23 arctic lake records
PAGES 2013, v1.1., rejected 3 (see https://www.nature.com/ngeo/journal/v8/n12/full/ngeo2566.html)
PAGES 2017, v2.0, we rejected another 4 and added 3, for reasons explained in Table S2.

Werner et al CPD 2017 is a climate field reconstruction based on a slightly earlier version of this dataset.
They excluded non-annually resolved records for reasons made clear in the manuscript – there is nothing “strange” about that – unless you want to misconstrue it. The entire point of a compilation like PAGES is that it is relatively permissive, so users who are more stringent can raise the bar and use only a subset of records for their own purposes.

So, out of the original 23, 7 (30.43%) were rejected because of more stringent inclusion criteria, with 3 additions. Anyone is welcome to see what impact this made to an Arctic composite or reconstruction using a method that meets CA standard.

None of his comments rebuts or contradicts anything in my post.  JEG says that 3 proxies were discarded in v1.1 – precisely as shown in the third row of the table and discussed in the article. JEG says that 4 proxies were discarded in PAGES 2017 – precisely as shown in the sixth row of the table.

Of Werner et al 2017, he says that they “excluded non-annually resolved records for reasons made clear in the manuscript – there is nothing “strange” about that – unless you want to misconstrue it.”   I didn’t “misconstrue” it. While I noted that “in their reconstruction, they elected not to use 9 series on the grounds that they lacked annual resolution”, I excluded those nine from the above table.  In addition to these nine, Werner et al 2017 discarded three annual series (Hvitarvatn, Blue Lake, Lehmilampi) as defective. JEG says that Werner et al used a “slightly earlier” version of the PAGES 2017 dataset.  Be that as it may, Werner et al 2017 did in fact discard these three series as shown in the table, on the grounds stated in my post (a “very nonlinear response, short overlap with instrumental, unclear interpretation”, the “exact interpretation unclear from original article” and “annual and centennial signal inconsistent”).

As a housekeeping point, I counted 22 Arctic sediment series in PAGES 2013 (not 23 as stated by JEG). I also counted a total of four additions to PAGES 2017 (two new and two re-instatements as shown in the table above), rather than the “three” additions claimed by JEG.

Most fundamentally, the denominator of my comparison was the inventory of series introduced in the four listed papers, not the inventory in PAGES 2013, which already represented a partial cull of Kaufman et al 2009 and Mann et al 2008. I do not understand why JEG misrepresented this simple point.

Finally, JEG says that the discarding was due to “more stringent inclusion criteria”. Three things.  1) The inclusion criteria in later studies are not necessarily “more stringent” – PAGES 2013 included some short series excluded from Kaufman et al 2009 (which required 1000 years) and PAGES 2017 some even shorter series.  Inclusion of short series that do not go back to the medieval period or even AD1500 is less stringent, not more stringent. 2) The stated reasons for exclusion of series in later studies are typically ones that indicate non-compliance with criteria set out in the earlier study, i.e. if a later study correctly determines that the interpretation of a record is “unclear”, its use in the earlier study was an error according to that study’s own criteria, not the result of “more stringent” criteria. 3) To keep things in clear perspective, greater stringency is not an antidote to problems arising from ex post screening (see also selection on the dependent variable) and is therefore irrelevant to the main issue. Jeff Id wrote some good posts on this.  Contrary to JEG, I do not advocate “greater stringency” in ex post screening as proper technique. On the contrary, I object to ex post screening (selection on the dependent variable).

Corrigendum

In my article, I said that “McKay and Kaufman (2014) conceded the [Hvitarvatn] error and issued an amended version of their Arctic reconstruction, but, like Mann, refused to issue a corrigendum to the original article.”

JEG responded:

Finally, it is entirely incorrect to claim that PAGES 2k did not issue a corrigendum to identify the errors in v1.0 that were corrected in v1.1. They did so here (https://www.nature.com/ngeo/journal/v8/n12/full/ngeo2566.html), where Steve McIntyre was acknowledged about as clearly as could have been done: “The authors thank D. Divine, S. McIntyre and K. Seftigen, who helped improve the Arctic temperature reconstruction by finding errors in the data set.”

I published my criticism of upside-down Hvitarvatn in April 2013, a few weeks after publication of PAGES 2013. (Varves, particularly Hvitarvatn, had been a prior interest at CA). McKay and Kaufman 2014, published 18 months later (Oct 2014), acknowledged this and other errors, but failed to acknowledge Climate Audit on this and other points. On October 7, 2014, I wrote Nature pointing out that McKay and Kaufman 2014 primarily addressed errors in PAGES 2013 (as opposed to being “original”) and suggested to them that such a “backdoor corrigendum” was no substitute for an on-the-record corrigendum attached to the original article. (In making this point, I was thinking about Mann’s sly walking-back of untrue statements in Mann et al 2008 deep in the SI to a different paper, while not issuing a corrigendum in the original paper.) Nature said that they would look into it.  I also objected to the appropriation of criticisms made at Climate Audit without acknowledgement.  I heard nothing further from them.

In November 2015, over a year later, PAGES 2013 belatedly issued a corrigendum as I had requested in October 2014, including a brief acknowledgement.  I was unaware of this until JEG brought it to my attention in his comment.  Nature had not informed me that they had agreed with my suggestion and none of the authors had had the courtesy to mention the acknowledgement. Needless to say, I’ve not waited 18 months to issue a correction and have done so right away.

Strange Accusations

JEG concluded his comment with a strange peroration accusing me of “continuing to whine about the lack of acknowledgement”, which he called a “delirium of persecution” and a “mental health issue”, suggesting “therapy”:

Continuing to whine about the lack of acknowledgement is beginning to sound like a delirium of persecution. We can certainly fix issues in the database, but Steve’s mental health issues are beyond PAGES’s scope. Perhaps the CA tip jar pay for some therapy?

Where did this come from?

I’ve objected from time to time about incidents in which climate scientists have appropriated commentary from Climate Audit without proper acknowledgement – in each case with cause.  I made no such complaint in the article criticized by JEG. Nowhere in the post is there any complaint about “lack of acknowledgement”, let alone anything that constitutes “continuing to whine about the lack of acknowledgement”.

The post factually and drily comments on the inventory of Arctic lake sediment proxies, correctly observing the very high “casualty rate” for supposed proxies:

This is a very high casualty rate given original assurances on the supposed carefulness of the original study. The casualty rate tended to be particularly high for series which had a high medieval or early portion (e.g. Haukadalsvatn, Blue Lake).

One should be able to make such comments without publicly-funded academics accusing one of having “mental health issues”, a “delirium of persecution” or requiring “therapy”.

PS. Following the finals of the US National Squash Doubles (Over 65s) in March, I severely exacerbated a chronic leg injury and am receiving therapy for it. Yes, some aches and pains come with growing older, just not the ones fabricated by JEG.

 

 

PAGES 2017: Arctic Lake Sediments

Arctic lake sediment series have been an important component of recent multiproxy studies.  These series have been discussed on many occasions at Climate Audit (tag), mostly very critically.  PAGES 2017 (and the related Werner et al 2017) made some interesting changes to the Arctic lake sediment inventory of PAGES 2013, which I’ll discuss today.

PAGES2017: New Cherry Pie

Rosanne D’Arrigo once explained to an astounded National Academy of Sciences panel that you had to pick cherries if you wanted to make cherry pie – a practice followed by D’Arrigo and Jacoby who, for their reconstructions, selected tree ring chronologies which went the “right” way and discarded those that went the wrong way – a technique which will result in hockey sticks even from random red noise.  Her statement caused a flurry of excitement among Climategate correspondents, but unfortunately the NAS panel didn’t address or explain the defects in this technique to the lignumphilous paleoclimate community.

My long-standing recommendation to the paleoclimate community has been to define a class of proxy using ex ante criteria e.g. treeline black spruce chronologies, Antarctic ice cores etc., but once the ex ante criterion is selected, use a “simple” method on all members of the class.  The benefits of such a procedure seem obvious, but the protocol is stubbornly resisted by the paleoclimate community. The PAGES paleoclimate community have recently published a major compilation of climate series from the past millennium, but, unfortunately, their handling of data which goes the “wrong” way is risible.

Comey’s Mishandling of Classified Information

Recently, there has been controversy over allegations that former FBI Director Comey leaked classified information, an issue that I mentioned on twitter a month ago. The recent news-cycle began with a story in The Hill,  leading to a tweet by Trump, followed by a series of sneering “rebuttals” in the media (CNN, Slate, Politico, Vanity Fair).  Comey’s defenders (like Hillary Clinton’s) claim that classification was done “retroactively”:

In fact, the Hill’s John Solomon noted that it’s unclear whether the classified information in the memos was classified at the time the memos were written and Politico’s Austin Wright reports Monday afternoon that some of Comey’s memos were indeed classified only retroactively

Thus far undiscussed by either side is Comey’s testimony to the House Intelligence Committee on March 20, which dealt directly with both the classification of details of Comey’s January 6 meeting with Trump and Comey’s understanding of obligations in respect to classified information. (Comey’s questionable briefing and conduct in the January 6 meeting merit extremely close scrutiny, but that’s a story for another day.)  The net result is that it seems inescapable that Comey either misled the congressional committee or mishandled classified information.

The January 6 Meeting

There was much public anticipation for Trump’s January 6 intelligence briefing, presented to him by Comey, CIA Director John Brennan, Director of National Intelligence James Clapper and NSA director Michael Rogers. An unclassified intel assessment was concurrently released on January 6, which, in respect to hacking allegations, added nothing to the earlier Dec 29 intel assessment on hacking. The presence of Comey, Brennan, Clapper and Rogers at the intel briefing was widely reported.  Towards the end of the briefing, Comey asked for the opportunity to meet with Trump one-on-one.

After Trump aides and the other three intel directors left the room, Comey briefed Trump on the Steele Dossier, the story of which remains mostly untold. The Steele Dossier, paid for by a still unidentified “Democratic donor”, had been produced by a DC opposition research firm (Fusion GPS) directly connected with the “Kremlin-connected” lawyer, Natalia Veselnitskaya, who had met with Donald Trump Jr in June 2016. Although the Steele Dossier contained multiple fabrications, its lurid allegations were taken very seriously by both the CIA and FBI, which, together with other government agencies, had been investigating them for months. Although Comey later objected to Trump talking to him one-on-one without a DOJ minder present, it was Comey himself who initiated the practice.

According to Comey’s written evidence on June 5, the ostensible purpose of Comey’s briefing to Trump on unverified material was to “alert the incoming President to the existence of this material” because “we knew the media was about to publicly report the material” and, “to the extent there was some effort to compromise an incoming President, we could blunt any such effort with a defensive briefing.”  Even though the FBI, CIA and other agencies had been investigating allegations in the Steele Dossier for months, Comey (“without [Trump] directly asking the question”) “offered [Trump] assurance” “that we were not investigating him personally”, supposedly to avoid any “uncertain[ty]” on Trump’s part “about whether the FBI was conducting a counter-intelligence investigation of his personal conduct”.

Comey’s briefing to Trump on January 6 appears to have intentionally misled Trump about counter-intelligence investigations into the Steele dossier, in effect treating Trump like a perp, rather than a legitimately elected president.  It took a while for Trump to figure out that he was being played by Comey.

The outcome of Comey’s briefing about the Steele dossier was the exact opposite of Comey’s subsequent self-serving explanation. The information that Trump had been briefed on the Steele Dossier was immediately leaked to the press, which had long been aware of the questionable and unverifiable dossier but had thus far resisted the temptation to publish it. (Some details from the Steele Dossier had been previously published; they provide an interesting tracer on previous leaks – a topic that I’ll discuss on another occasion.)

CNN broke the news that intel chiefs had “presented Trump with claims of Russian efforts to compromise him” – using the leaked information about the contents of Comey’s briefing to Trump as a hook to notify the public about the existence of the dossier. CNN, having thrown the bait into the water, sanctimoniously refrained from publishing the Steele Dossier itself as unverified. Once CNN wedged the news, the dossier story went viral. Within an hour, Buzzfeed published the controversial Steele Dossier itself. Once it was in the sunlight, secondary parties named in the dossier (Trump’s lawyer Michael Cohen, one-time Trump campaigner Carter Page, Webzilla) were able to challenge fabrications in the Steele Dossier, which had seemingly gone undetected during months of investigation by the FBI, CIA and other agencies.  (The allegation of Putin’s “direct” involvement originated in the Steele Dossier. Although the intel agencies gild the accusation in secret “sources and methods”, it appears highly possible and even probable that there is no evidence for this allegation other than the Steele Dossier.)

Trump was (quite reasonably, in my opinion) livid that details of Comey’s briefing to him had been leaked. These concerns were a major issue in his next meeting with Comey – a narrative that I’ll discuss on another occasion.

Comey at the House Intelligence Committee, March 20

Skipping forward, Comey testified before the House Intelligence Committee. One of the major points of interest in this meeting was the January 6 briefing. Rep. King, who, like Trump, was both frustrated and concerned with leaks from the intel community, focussed in on the CNN leak because it concerned a classified briefing and, unlike most other leaks, only a very small number of people were involved. King (reasonably) thought that these considerations would make it relatively easy to track down the leaker.  King’s exchange with Comey is fascinating to re-read, knowing, as we do now, that the briefing on the Steele Dossier had been done by Comey himself one-on-one with Trump.

King asked Comey about the leak to CNN as follows:

Do you — does that violate any law? I mean you were at a classified briefing with the president-elect of the United States and it had to be a very, very small universe of people who knew that you handed them that dossier and it was leaked out within hours. Are you making any effort to find out who leaked it and do you believe that constitute a criminal violation?

Comey responded that “any unauthorized disclosure of classified conversations or documents” was very serious and that such incidents “should be investigated aggressively and if possible, prosecuted”:

COMEY: I can’t say, Mr. King except I can answer in general.

KING: Yes.

COMEY: Any unauthorized disclosure of classified conversations or documents is potentially a violation of law and a serious, serious problem. I’ve spent most of my career trying to figure out unauthorized disclosures and where they came from. It’s very, very hard.

Often times, it doesn’t come from the people who actually know the secrets. It comes from one hop out, people who heard about it or were told about it. And that’s the reason so much information that reports to be accurate classified information is actually wrong in the media. Because the people who heard about it didn’t hear about it right. But, it is an enormous problem whenever you find information that is actually classified in the media. We don’t talk about it because we don’t wanna confirm it, but I do think it should be investigated aggressively and if possible, prosecuted so people take as a lesson, this is not OK. This behavior can be deterred and its deterred by locking some people up who have engaged in criminal activity.

King then attempted to draw out from Comey who was “in the room”. King presumed that Comey, Clapper, Brennan and Rogers were “in the room” and wondered if there were any others:

KING: Well, could you say it was — obviously, Admiral Rogers was in the room, you were in the room, General Clapper was in the room and Director Brennan was in the room. Were there any other people in the room that could’ve leaked that out?

I mean this isn’t a report that was circulated among 20 people. This is an unmasking of names where you may have 20 people in the NSA and a hundred people in the FBI, its not putting together a report or the intelligence agency. This is four people in a room with the president-elect of the United States. And I don’t know who else was in that room and that was leaked out, it seemed within minutes or hours, of you handing him that dossier and it was so confidential, if you read the media reports that you actually handed it to him separately.

So believe me, I’m not saying it was you. I’m just saying, it’s a small universe of people that would’ve known about that. And if it is a disclosure of classified information, if you’re going to start with investigating the leaks, to me that would be one place where you could really start to narrow it down.

Comey (the only person “in the room”) refused to answer on the grounds that he did not want to confirm any details of “a classified conversation with a president or president-elect”:

COMEY: And again, Mr. King, I can’t comment because I do not ever wanna confirm a classified conversation with a president or president-elect. I can tell you my general experience. It often turns out there are more people who know about something than you expected. At first, both because there may be more people involved in the thing than you realized, not — not this particular, but in general. And more people have been told about it or heard about it or staff have been briefed on it. And those echoes are, in my experience, what most often end up being shared with reporters.

King persisted:

KING: Well, could you tell us who else was in the room that day?

COMEY: I’m sorry?

KING: Could you tell us who else was in the room with you that day?

But Comey would not be drawn in:

COMEY: No, because I’m not going to confirm that there was such a conversation because then, I might accidentally confirm something that was in the newspaper.

King then tried to find out whether there had even been a conversation about the Steele Dossier:

KING: But could you tell us who was in the room, whether or not there was a conversation?

Comey refused to even confirm that there was a “conversation” in an unclassified setting (while allowing that he might be more forthcoming in a “classified setting”):

COMEY: No, I’m not confirming there was a conversation. In a classified setting, I might be able to share more with you, but I’m not going to confirm any conversations with either President Obama or President Trump or when President Trump was the President-elect.

King then tried to get Comey to say “who was in the room for the briefing”:

KING: Well, not the conversation or even the fact that you gave it to him, but can you — can you tell us who was in the room for that briefing that you gave?

COMEY: That you’re saying later ended up in the newspaper?

KING: Yes.

Comey again refused, citing the classified nature of the event:

COMEY: So my talking about who was in the room would be a confirmation that what was in the newspaper was classified information, and I’m not going to do that. I’m not going to help people who did something that — that is unauthorized.

King then tried to elicit a comment on whether the four directors had gone to Trump Tower, with Comey still being coy but using the event as an example of protecting classified information:

KING: Yeah, but we all know that the four of you went to Trump Tower for the briefing, I mean that’s not classified, is it?

COMEY: How do we all know that, though?

KING: OK.

(LAUGHTER)

COMEY: Yeah.

KING: You know, you can — you see the predicament we’re in, here.

COMEY: I get it. I get it. But we are duty-bound to protect classified information, both in the first when we get it, and then to make sure we don’t accidentally jeopardize classified information by what we say about something that appears in the media.

Comey’s Written Evidence, June 5

After refusing to answer questions from the House Intel Committee about the January 6 meeting on the grounds that such details were classified, Comey provided numerous details about that classified meeting in his written evidence on June 5, supposedly drawing on a contemporaneous memo on the meeting (which does not appear to have been filed in the FBI document system):

 I first met then-President-Elect Trump on Friday, January 6 in a conference room at Trump Tower in New York. I was there with other Intelligence Community (IC) leaders to brief him and his new national security team on the findings of an IC assessment concerning Russian efforts to interfere in the election. At the conclusion of that briefing, I remained alone with the President-Elect to brief him on some personally sensitive aspects of the information assembled during the assessment.

The IC leadership thought it important, for a variety of reasons, to alert the incoming President to the existence of this material, even though it was salacious and unverified. Among those reasons were: (1) we knew the media was about to publicly report the material and we believed the IC should not keep knowledge of the material and its imminent release from the President- Elect; and (2) to the extent there was some effort to compromise an incoming President, we could blunt any such effort with a defensive briefing.

The Director of National Intelligence asked that I personally do this portion of the briefing because I was staying in my position and because the material implicated the FBI’s counter- intelligence responsibilities. We also agreed I would do it alone to minimize potential embarrassment to the President-Elect. Although we agreed it made sense for me to do the briefing, the FBI’s leadership and I were concerned that the briefing might create a situation where a new President came into office uncertain about whether the FBI was conducting a counter-intelligence investigation of his personal conduct.

It is important to understand that FBI counter-intelligence investigations are different than the more-commonly known criminal investigative work. The Bureau’s goal in a counter-intelligence investigation is to understand the technical and human methods that hostile foreign powers are using to influence the United States or to steal our secrets. The FBI uses that understanding to disrupt those efforts. Sometimes disruption takes the form of alerting a person who is targeted for recruitment or influence by the foreign power. Sometimes it involves hardening a computer system that is being attacked. Sometimes it involves “turning” the recruited person into a double-agent, or publicly calling out the behavior with sanctions or expulsions of embassy-based intelligence officers. On occasion, criminal prosecution is used to disrupt intelligence activities.

Because the nature of the hostile foreign nation is well known, counterintelligence investigations tend to be centered on individuals the FBI suspects to be witting or unwitting agents of that foreign power. When the FBI develops reason to believe an American has been targeted for recruitment by a foreign power or is covertly acting as an agent of the foreign power, the FBI will “open an investigation” on that American and use legal authorities to try to learn more about the nature of any relationship with the foreign power so it can be disrupted. In that context, prior to the January 6 meeting, I discussed with the FBI’s leadership team whether I should be prepared to assure President-Elect Trump that we were not investigating him personally. That was true; we did not have an open counter-intelligence case on him. We agreed I should do so if circumstances warranted. During our one-on-one meeting at Trump Tower, based on President-Elect Trump’s reaction to the briefing and without him directly asking the question, I offered that assurance.

Had Rep. King known these details on March 20, in particular that Comey was the only person present at the briefing of Trump on the Steele Dossier, his questioning on the CNN leak would evidently have gone in a very different direction. But Comey withheld that information from him.

Conclusion

Comey’s defenders have argued that the content of the memoranda was classified “retroactively”, thus supposedly rebutting any fault on Comey’s part, or, alternatively, that Comey wrote his memoranda so that no classified material was included.

However, neither argument applies to the January 6 meeting (and perhaps others). The January 6 meeting is the easier case because of Comey’s own evidence. In his evidence to the House Intel Committee, Comey unequivocally stated that any and all details of the January 6 meeting were “classified” and used this as grounds to refuse to answer questions about the meeting, thereby concealing from the committee his unique role in the briefing. Having taken this position before the Committee, Comey is on the horns of a dilemma: either the details were classified (as he told the Committee), in which case his later written evidence disclosed classified information, or they were not, in which case he misled the Committee. Neither alternative is to Comey’s credit.

 

Does a new paper really reconcile instrumental and model-based climate sensitivity estimates?

A guest post by Nic Lewis

A new paper in Science Advances by Cristian Proistosescu and Peter Huybers, “Slow climate mode reconciles historical and model-based estimates of climate sensitivity” (hereafter PH17), claims that accounting for the decline in feedback strength over time that occurs in most CMIP5 coupled global climate models (GCMs) brings observationally-based climate sensitivity estimates from historical records into line with model-derived estimates. It is not the first paper to attempt to do so, but it makes a rather bold claim and, partly because Science Advances seeks press coverage for its articles, has been attracting considerable attention.

Some of the methodology the paper uses may look complicated, with its references to eigenmode decomposition and full Bayesian inference. However, the underlying point it makes is simple. The paper addresses equilibrium climate sensitivity (ECS)[1] of GCMs as estimated from information corresponding to that available during the industrial period. PH17 terms such an estimate ICS; it is usually called effective climate sensitivity. Specifically, PH17 estimates ICS for GCMs by emulating their global surface temperature (GST) and top-of-atmosphere radiative flux imbalance (TOA flux)[2] responses under a 1750–2011 radiative forcing history matching the IPCC AR5 best estimates.
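
The kind of eigenmode emulation PH17 describes can be sketched as a sum of exponential response modes convolved with annual forcing increments. The Python sketch below is purely illustrative: the mode fractions, feedbacks and timescales are placeholder values, not PH17's fitted estimates, and the linear ramp forcing only loosely mimics the scale of the AR5 1750–2011 best-estimate history.

```python
import numpy as np

# Two illustrative response modes (a fast mixed-layer mode and a slow
# deep-ocean mode). sigma: fraction of the forcing handled by each mode,
# lam: mode feedback (W m^-2 K^-1), tau: e-folding timescale (years).
# Placeholder values, not PH17's posterior estimates.
sigma = np.array([0.6, 0.4])
lam = np.array([2.0, 0.8])
tau = np.array([4.0, 250.0])

def step_T(t):
    """GST response (K) at times t (years) to a unit step forcing of 1 W m^-2."""
    return np.sum((sigma / lam) * (1.0 - np.exp(-t[:, None] / tau)), axis=1)

def step_H(t):
    """TOA imbalance (W m^-2) at times t to a unit step forcing of 1 W m^-2."""
    return np.sum(sigma * np.exp(-t[:, None] / tau), axis=1)

def emulate(forcing):
    """Emulate T and H under an arbitrary annual forcing history by
    convolving the forcing increments with the step responses."""
    n = len(forcing)
    dF = np.diff(forcing, prepend=0.0)
    t = np.arange(n, dtype=float)
    T = np.array([np.sum(dF[:i + 1] * step_T(t[:i + 1][::-1])) for i in range(n)])
    H = np.array([np.sum(dF[:i + 1] * step_H(t[:i + 1][::-1])) for i in range(n)])
    return T, H

# Linear ramp to 2.3 W m^-2 over 262 years (1750-2011), an illustrative
# stand-in for the AR5 best-estimate forcing history.
F_hist = np.linspace(0.0, 2.3, 262)
T, H = emulate(F_hist)
```

By construction each mode conserves energy (its H contribution plus its feedback flux equals its share of the forcing), so the emulator reproduces the curved H-versus-T behaviour that PH17 fits to GCM output; with a single mode the two would collapse onto a straight line.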

In a nutshell, PH17 claims that for the current generation (CMIP5) GCMs, the median ICS estimate is only 2.5°C, well short of their 3.4°C median ECS and centred on the range of observationally-based climate sensitivity estimates, which they take as 1.6–3.0°C. My analysis shows that their methodology and conclusions are incorrect for several reasons, as I shall explain. My analysis of their data shows that the median ICS estimate for GCMs is 3.0°C, compared with a median for sound observationally-based climate sensitivity estimates in the 1.6–2.0°C range. To justify my conclusion, I need first to explain how ECS and ICS are estimated in GCMs, and what PH17 did.

For most GCMs, ICS is smaller than ECS, where ECS is estimated from ‘abrupt4xCO2’ simulation data,[3] on the basis that their behaviour in the later part of the simulation will continue until equilibrium. That is because, when CO2 concentration – and hence forcing, denoted by F – is increased abruptly, most GCMs display a decreasing-over-time response slope of TOA flux (denoted by H in the paper, but normally by N) to changes in GST (denoted by T). That is, the GCM climate feedback parameter λ decreases with time after forcing is applied.[4] Over any finite time period, ICS will fall short of ECS in the GCM simulation. Most but not all CMIP5 coupled GCMs behave like this, for reasons that are not completely understood. However, there is to date relatively little evidence that the real climate system does so.
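
To make the ICS concept concrete: over any finite period, effective climate sensitivity follows from the standard energy-budget relation ICS = F2x × ΔT / (ΔF − ΔN), where ΔT, ΔF and ΔN are the changes in GST, forcing and TOA flux imbalance. A minimal sketch, using illustrative round numbers of roughly the observed magnitude rather than any specific published estimates:

```python
def effective_sensitivity(dT, dF, dN, F2x=3.7):
    """ICS (effective climate sensitivity, K) from finite-period changes:
    dT in K; dF (forcing) and dN (TOA imbalance) in W m^-2.
    F2x is the forcing for doubled CO2 (conventionally ~3.7 W m^-2)."""
    return F2x * dT / (dF - dN)

# Illustrative round numbers of roughly the observed magnitude
# (not any specific published estimates):
ics = effective_sensitivity(dT=0.85, dF=2.3, dN=0.6)
print(round(ics, 2))  # 1.85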

Figure 1, an annotated reproduction of Fig. 1 of PH17, illustrates the point. The red dots show annual mean T (x-coordinate) and H (y-coordinate) values during the 150-year long abrupt4xCO2 simulation by the NorESM1-M GCM.[5] The curved red line shows a parameterised ‘eigenmode decomposition’ fit to the annual data. The ECS estimate for NorESM1-M based thereon is 3.2°C, the x-axis intercept of the red line. The estimated forcing in the GCM for a doubling of CO2 concentration (F) is 4.0 Wm−2, the y-axis intercept of the red line. The ICS estimate used, per the paper’s methods section, is represented by the x-axis intercept of the straight blue line, being ~2.3°C. That line starts from the estimated F value and crosses the red line at a point corresponding approximately to the same ratio of TOA flux to F as currently exists in the real climate system. If λ were constant, then the red dots would all fall on a straight line with slope −λ and ICS would equal ECS; if ECS (and ICS) were 2.3°C the red dots would all fall on the blue line, and if ECS were 3.2°C they would all fall on the dashed black line. The standard method of estimating ECS for a GCM from its abrupt4xCO2 simulation data, as used in IPCC AR5, has been to regress H on T over all 150 years of the simulation and take the x-axis intercept. For NorESM1-M, this gives an ECS estimate of 2.8°C, below the 3.2°C estimate based on the eigenmode decomposition fit. Regressing over years 21–150, a more recent and arguably more appropriate approach, also gives an ECS estimate of 3.2°C.

 

Fig. 1. Reproduction of Fig. 1 of PH17, with added brown and blue lines illustrating ICS estimates
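
The regression approach described above is straightforward to illustrate. The sketch below generates a synthetic abrupt4xCO2 response from a two-mode decomposition (the parameter values are invented for illustration, not NorESM1-M's actual fit) and compares Gregory-regression ECS estimates over years 1–150 and 21–150 with the true equilibrium value; the curvature makes the full-period regression fall short, qualitatively matching the 2.8°C versus 3.2°C contrast described for NorESM1-M.

```python
import numpy as np

# Synthetic abrupt4xCO2 response from a two-mode decomposition.
# Parameters are illustrative, not NorESM1-M's fitted values.
F4x = 8.0                       # 4xCO2 forcing, W m^-2
sigma = np.array([0.6, 0.4])    # forcing fraction per mode
lam = np.array([2.0, 0.8])      # mode feedback, W m^-2 K^-1
tau = np.array([4.0, 250.0])    # mode timescale, years

yrs = np.arange(1, 151, dtype=float)
T = F4x * np.sum((sigma / lam) * (1.0 - np.exp(-yrs[:, None] / tau)), axis=1)
H = F4x * np.sum(sigma * np.exp(-yrs[:, None] / tau), axis=1)

def gregory_ecs(T, H):
    """Regress H on T, take the x-axis intercept, halve it for 2xCO2."""
    slope, intercept = np.polyfit(T, H, 1)
    return (-intercept / slope) / 2.0

ecs_true = F4x * np.sum(sigma / lam) / 2.0  # equilibrium value: 3.2 K here
ecs_all = gregory_ecs(T, H)                 # regression over years 1-150
ecs_late = gregory_ecs(T[20:], H[20:])      # regression over years 21-150
# ecs_all underestimates ecs_true, because the early, steeply-sloped years
# drag the fitted x-intercept inward; ecs_late recovers it almost exactly.
```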

Continue reading

The effect of Atlantic internal variability on TCR estimation – an unfinished study

A guest article by Frank Bosse (posted by Nic Lewis)

A recent paper by Stolpe, Medhaug and Knutti (hereafter S17) deals with a longstanding question: by how much is Global Mean Surface Temperature (GMST) influenced by the internal variability of the Atlantic (AMV/AMO) and the Pacific (PMV/PDO/IPO)?

The authors analyze the impact of the natural ups and downs of both basins on the HadCRUT4.5 temperature record.

A few months ago, a post of mine was published that considered the influence of Atlantic variability.

I want to compare some of the results.

To begin, I want to draw out some further implications of S17.

The key figure of S17 (Fig. 7a) conveys most of the results. It shows a variability-adjusted HadCRUT4.5 record:

Fig. 1: Fig. 7a from S17, showing the GMST record (orange) between
1900 and 2005, adjusted for Atlantic and Pacific variability.

 

Continue reading