DNC Hack due to Gmail Phishing??

In two influential articles in June 2016 (June 16 here and June 26 here), SecureWorks purported to link the then recently revealed DNC hack to Russia via a Gmail phishing campaign which they had been monitoring since 2015 and which they attributed to APT28 (Fancy Bear). They had observed multiple phishing targets at hillaryclinton.com, dnc.org and personal Gmail accounts of campaign officials, and surmised that one of these targets at the DNC must have been tricked by the phishing campaign, through which APT28 obtained access to the DNC server.

Their argument was quickly accepted by computer security analysts. In an influential article in October 2016, Thomas Rid, a prominent commentator on computer security, stated that this argument was the most important evidence in attribution of the DNC hack to Russia – it was what Rid called the “hackers’ gravest mistake”.

However, the connection of the DNC hack to the Gmail phishing campaign, as set out in the SecureWorks articles, was very speculative, even tenuous. In addition, subsequent evidence in the DNC emails themselves conclusively refuted even this thin connection. To be clear, the issues pertaining to the DNC hack are distinct from those of the Podesta hack – which, though unknown at the time of the June 2016 SecureWorks articles, can be convincingly attributed to Gmail phishing accompanied by Bitly link shorteners.

In today’s post, I’m going to look at the narrow issue of the connection between the Gmail phishing campaign and the DNC hack, and whether it contributes to Russian attribution of the DNC hack.

Continue reading

Emergent constraints on climate sensitivity in global climate models, Part 1

Their nature and assessment of their validity

A guest post by Nic Lewis

There have been quite a number of papers published in recent years concerning “emergent constraints” on equilibrium climate sensitivity (ECS) in comprehensive global climate models (GCMs), of both the current (CMIP5) and previous (CMIP3) generations. The range of ECS values in GCMs has remained almost unchanged since the early days of climate modelling; in the IPCC 5th Assessment Report (AR5) it was given as 2.1-4.7°C for CMIP5 models.[i]

From the IPCC 1st Assessment Report (FAR) to AR5, the main cause of the large uncertainty as to ECS in GCMs has been the difficulty of simulating clouds and their behaviour.[ii] This has led to cloud feedback differing between GCMs even as to its sign – and to little confidence that the true level of cloud feedback lies within its range in GCMs. Progress in understanding cloud behaviour and related convective dynamics and feedbacks has been painfully slow. We shall see in this 3-part article that emergent constraint approaches have the potential to offer useful insights into cloud behaviour; however, the main focus will be on the extent to which they narrow the uncertainty range of ECS in GCMs. Continue reading

Arrest of the “Lurk” Banking Trojan Gang

On June 2, 2016, in a major police operation in Russia, 50 hackers from the Lurk banking trojan gang were arrested following 86 raids (Security Week here). Their malware was used for bank fraud (especially in Russia) and ransomware all over the world. The full extent of their activities became clear only after their arrest. In today’s post, I’m going to look back at U.S. computer security analysis (especially by Cisco Talos) prior to the arrests by Russia.  The post contains an Easter egg relating to attribution of the DNC hack, but that will be a story for a different day. Continue reading

Marvel et al.’s new paper on estimating climate sensitivity from observations

A guest post by Nic Lewis

Introduction and summary

Recently a new model-based paper on climate sensitivity was published by Kate Marvel, Gavin Schmidt (the head of NASA GISS) and others, titled ‘Internal variability and disequilibrium confound estimates of climate sensitivity from observations’.[1] It appears to me that the novel part of its analysis is faulty, and that the part which isn’t faulty isn’t novel.

As some readers may recall, I found six serious errors in a well-publicised 2016 paper by Kate Marvel and other GISS climate scientists on the topic of climate sensitivity.[2] Two of the six errors were subsequently corrected.

With regards to the new Marvel et al paper, I find that:

  • the low ECS estimates Marvel et al. obtain when using current (CMIP5) climate models’ historical simulation data arise from using a period with unbalanced volcanic forcing, with the low bias disappearing when that problem is addressed; and
  • the low ECS estimates they obtain when using data from AMIP simulations (those where models are driven by observed evolving sea-surface temperature patterns as well as evolving forcing) are not news. They more likely indicate problems with CMIP5 models’ ocean modules than (as Marvel et al. suggest) that internal variability in recent decades was particularly unusual.

Continue reading

Reply to Patrick Brown’s response to my article commenting on his Nature paper


I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: the magnitude of the seasonal cycle in OLR. He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues. Continue reading

Polar Bears, Inadequate data and Statistical Lipstick


A recent paper, Internet Blogs, Polar Bears, and Climate-Change Denial by Proxy, by Jeffrey A. Harvey and 13 others, has been creating somewhat of a stir in the blogosphere. The paper’s abstract purports to achieve the following:

Increasing surface temperatures, Arctic sea-ice loss, and other evidence of anthropogenic global warming (AGW) are acknowledged by every major scientific organization in the world. However, there is a wide gap between this broad scientific consensus and public opinion. Internet blogs have strongly contributed to this consensus gap by fomenting misunderstandings of AGW causes and consequences. Polar bears (Ursus maritimus) have become a “poster species” for AGW, making them a target of those denying AGW evidence. *Here, focusing on Arctic sea ice and polar bears, we show that blogs that deny or downplay AGW disregard the overwhelming scientific evidence of Arctic sea-ice loss and polar bear vulnerability.* By denying the impacts of AGW on polar bears, bloggers aim to cast doubt on other established ecological consequences of AGW, aggravating the consensus gap. To counter misinformation and reduce this gap, scientists should directly engage the public in the media and blogosphere.

Reading further into the paper, we find that this seems to be yet another piece of propaganda to push a Climate Change agenda. In line with the high standards of climate science “communication”, there are over 50 occurrences of various forms of the derogatory labels “denier” or “deny” in a mere five pages of text and two pages of references. Such derogatory language has become commonplace in the climate change academic world and reflects badly on the authors who use it.

Continue reading

Brown and Caldeira: A closer look shows global warming will not be greater than we thought

A guest post by Nic Lewis


Last week a paper predicting greater than expected global warming, by scientists Patrick Brown and Ken Caldeira, was published by Nature.[1]  The paper (henceforth referred to as BC17) says in its abstract:

“Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change.”

Patrick Brown’s very informative blog post about the paper gives a good idea of how they reached these conclusions. As he writes, the central premise underlying the study is that climate models that are going to be the most skilful in their projections of future warming should also be the most skilful in other contexts like simulating the recent past. It thus falls within the “emergent constraint” paradigm. Personally, I’m doubtful that emergent constraint approaches generally tell one much about the relationship to the real world of aspects of model behaviour other than those which are closely related to the comparison with observations. However, they are quite widely used.

In BC17’s case, the simulated aspects of the recent past (the “predictor variables”) involve spatial fields of top-of-the-atmosphere (TOA) radiative fluxes. As the authors state, these fluxes reflect fundamental characteristics of the climate system and have been well measured by satellite instrumentation in the recent past – although (multi) decadal internal variability in them could be a confounding factor. BC17 derive a relationship in current generation (CMIP5) global climate models between predictors consisting of three basic aspects of each of these simulated fluxes in the recent past, and simulated increases in global mean surface temperature (GMST) under IPCC scenarios (ΔT). Those relationships are then applied to the observed values of the predictor variables to derive an observationally-constrained prediction of future warming.[2]
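The across-model regression step can be illustrated with a deliberately simplified sketch. This is not BC17’s actual multi-predictor method operating on spatial flux fields; it is a one-predictor toy with invented numbers, intended only to show the mechanics of fitting a relationship across models and then evaluating it at an observed value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy emergent-constraint mechanics (all numbers invented): across a set
# of models, regress projected warming on a single observable predictor,
# then evaluate the fitted relationship at the observed predictor value.
n_models = 30
true_slope, true_intercept = 0.8, 2.0
predictor = rng.normal(5.0, 1.0, n_models)  # each model's simulated "observable"
warming = true_intercept + true_slope * predictor + rng.normal(0.0, 0.2, n_models)

# Across-model linear fit of projected warming against the predictor
slope, intercept = np.polyfit(predictor, warming, 1)

# Constrain the projection using a hypothetical observed predictor value
obs_predictor = 4.0
constrained_warming = intercept + slope * obs_predictor

raw_mean = warming.mean()  # unconstrained multi-model mean, for comparison
```

In this toy setup an observed value below the model average pulls the constrained projection below the raw multi-model mean; in BC17 the same logic, applied to observed TOA fluxes, pulls the projection upward instead.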

The paper is well written, the method used is clearly explained in some detail and the authors have archived both pre-processed data and their code.[3] On the face of it, this is an exemplary study, and given its potential relevance to the extent of future global warming I can see why Nature decided to publish it. I am writing an article commenting on it for two reasons. First, because I think BC17’s conclusions are wrong. And secondly, to help bring to the attention of more people the statistical methodology that BC17 employed, which is not widely used in climate science. Continue reading

US East Coast Sea Level Rise: An Adjustocene Hockey Stick

In 2011, Andy Revkin wrote an article (archive) entitled “Straight Talk on Rising Seas in a Warming World” (among other articles on the topic), in which he optimistically sought guidance on the topic from a then recent study of U.S. East Coast sea level coauthored by Mann (Kemp et al, 2011).  Joshua Willis told Revkin “that, using patterns in layered salt marsh sediment, [they] found a sharp recent uptick in the rate of sea-level rise after 2,000 years of fairly stable conditions — a pattern Willis refers to as a “sea-level hockey stick” — an allusion to the suite of studies finding a similar pattern for global surface temperatures (albeit a hockey stick with a warped shaft)”.

However, as so often, the supposed “hockey stick” appeared only after the data had been severely adjusted. The difference is shown in the figure at right. Unadjusted (raw) relative sea level (i.e. how sea level appears locally – the concern of state planners and policy-makers) in North Carolina increased steadily through the last two millennia, with somewhat of an upward inflection in the 19th century; it is only after heavy adjustment that a HS shape appears.

In this case, the relevant data for local and regional planners is the data prior to adjustment by climate warriors, as I’ll discuss below: this is not a hockey stick but an ongoing increase through the Holocene.

Continue reading

New Antarctic Temperature Reconstruction

Stenni et al (2017), Antarctic climate variability on regional and continental scales over the last 2000 years, was published (pdf) this week by Climate of the Past. It includes multiple variations of a new Antarctic temperature reconstruction, in which 112 d18O and dD isotope series are combined into regional and continental reconstructions. Its abstract warns that “projected warming of the Antarctic continent during the 21st century may soon see significant and unusual warming develop across other parts of the Antarctic continent [besides the peninsula]”, but there are no Steigian red spots of supposedly unprecedented warming.

Long-time CA readers will be aware of my long-standing interest in Antarctic ice core proxies, in particular, the highly resolved Law Dome d18O series. One of my first appearances in Climategate emails was a request for Law Dome data to Tas van Ommen in Australia, who immediately notified Phil Jones in Sauron’s Tower of this disturbance in the equilibrium of Middle-earth. Jones promptly consulted the fiercest of his orcs, who urged that the data be withheld as follows: “HI Phil, Personally, I wouldn’t send him [McIntyre] anything. I have no idea what he’s up to, but you can be sure it falls into the “no good” category.” I’ve discussed incidents involving Law Dome data on several occasions in the past. This is what the data looked like as of 2004: elevated values in the early first millennium, declining up to and including the 20th century.


Law Dome – Holocene Perspective

Recently, I’ve commented on many occasions on the benefits of looking at proxy data in a Holocene (10000-year) context rather than just the last 2000 years. A longer perspective permits one to see Milankovitch factors at work, and this is true for Law Dome d18O as well. Although Law Dome d18O analyses were carried out nearly 20 years ago, results have been archived only for the deglacial period (~20000-9000 BP) and for the last 2000 years – shown in the graphic below. The inset shows (unarchived) Law Dome dD values over the Holocene, available only in a panel in a 2000 survey of Antarctic cores (Masson et al 2000). Though the data is frustratingly (and pointlessly) incomplete, the story is clear: d18O values were very low at the Last Glacial Maximum, then increased fairly steadily for 10000 years, reaching a maximum ~9000-10000 BP (in the early Holocene), then declined over the past 9000 years. Modern values are neither as high as in the early Holocene, nor as low as at the Last Glacial Maximum. Variation over the past two millennia is relatively modest.

Accumulation during the Holocene is more than four times greater than in the glacial period. Elevation of Law Dome has decreased over the Holocene – an important factor which needs to be accounted for in temperature estimation. Vinther et al 2008 made a really excellent effort at disentangling elevation changes in Greenland d18O data, but no one seems to have made a corresponding effort in Antarctica (including Stenni et al 2017).
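For readers unfamiliar with what such an elevation correction involves: in its simplest form, one assumes an isotope-altitude gradient and removes the elevation-driven component of the measured d18O signal, leaving a residual more directly attributable to climate. The gradient and elevation change below are invented placeholders; a serious treatment along Vinther’s lines would be considerably more elaborate:

```python
# Minimal sketch of an elevation correction to ice-core d18O, assuming a
# constant isotope-altitude gradient (per mille per 100 m). Both the
# gradient and the elevation history are invented for illustration.
D18O_GRADIENT = -0.6  # per mille per 100 m of elevation gain (assumed)

def correct_d18o(d18o_measured, elevation_change_m):
    """Remove the elevation-driven component of a measured d18O value.

    elevation_change_m: site elevation relative to the reference epoch
    (positive = site stood higher than today when the ice was deposited).
    """
    return d18o_measured - D18O_GRADIENT * (elevation_change_m / 100.0)

# Example: a sample deposited when the site stood 150 m higher than today
corrected = correct_d18o(-22.0, 150.0)  # -22.0 - (-0.6 * 1.5) = -21.1
```

The point of the exercise is that an uncorrected series conflates climate change with ice-sheet thinning, which is why the absence of such a correction in Antarctic work matters.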

Stenni et al 2017 Reconstruction

Stenni et al 2017 calculated a variety of composites from the 112 series considered in their reconstruction, featuring reconstructions weighted by positive correlation to “target” temperature series (which had strong increases in West Antarctica and weak increases in East Antarctica), with negatively correlated isotope series screened out (given weight 0). This is disclosed in their SI.

The problem with this recipe is that, when the target has an upward trend (as the key instrumental target series do), this methodology has the effect of enhancing the blade-ness of the resulting composite. The blade bias arises because the series are intrinsically very noisy – series with a sufficiently “big” blade are left in, while series which go down are left out. The defective procedure is made worse when there are a lot of short series, as here. At least this methodology doesn’t turn series upside down (Mann-gnam style).
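The screening bias described above is easy to demonstrate with a toy simulation: feed trendless noise into a composite that keeps only series positively correlated with a trending target, and a blade appears from nothing. Everything below is synthetic (the series count simply echoes the 112 of Stenni et al):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy demonstration of correlation screening: the proxies are pure noise
# (no real blade at all), while the "target" has an upward trend.
n_series, n_years = 112, 200
target = np.linspace(0.0, 1.0, n_years)              # trending instrumental target
proxies = rng.normal(0.0, 1.0, (n_series, n_years))  # trendless noise series

# Keep only series positively correlated with the target (weight 0 otherwise)
r = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
screened = proxies[r > 0]

# Compare the linear trend of the composite with and without screening
t = np.arange(n_years)
unscreened_trend = np.polyfit(t, proxies.mean(axis=0), 1)[0]
screened_trend = np.polyfit(t, screened.mean(axis=0), 1)[0]
```

Because correlation with a linear target has the same sign as a series’ own trend, the screened composite acquires a positive trend even though the underlying population has none. That is the blade bias in miniature.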

Stenni et al 2017 are somewhat evasive about their results, and their graphics contribute to the evasion. I’ve re-plotted their Antarctic continental reconstruction (decadal version) from archived data in the figure below. Like the Law Dome series, the composite shows elevated values in the first millennium, declining through the last millennium, with the decline continuing well into the 20th century. Values in 1950 and 1960 were among the coldest in the past two millennia, with a very late uptick (1980-2000). Stenni et al show this series as the dashed orange series in their Figure 8, which has negligible vertical resolution (see inset below). The very modest blade at the end of this series is almost certainly exaggerated by the defective screening and weighting procedures noted above. But even with their fingers on the scales (so to speak), the main message of the series is that values in the first millennium are consistently elevated above modern values.

Their main reconstruction graphic (their Figure 7) is, if anything, much worse than the panel shown in the above inset, as shown below. It too shows elevated first millennium values, though you’d barely know it from looking at the figure. Its 10:1 horizontal-to-vertical panel size disguises rather than highlights the difference between the first millennium and modern values.


By now, we’re all familiar with the fevered prose of abstracts when climate reconstructions supposedly show “unprecedented” modern results. Needless to say, Stenni et al does not contain colorful and excited descriptions of high first millennium values. The lede to their abstract is relentlessly flat:

“Climate trends in the Antarctic region remain poorly characterized, owing to the brevity and scarcity of direct climate observations and the large magnitude of interannual to decadal-scale climate variability. Here, within the framework of the PAGES Antarctica2k working group, we build an enlarged database of ice core water stable isotope records from Antarctica, consisting of 112 records.”

Continuing the abstract, they report “a significant cooling trend” to 1900 CE, followed by “significant warming trends” after 1900 CE in three regions which are “robust” to something or other and which are “significant” in the weighted reconstructions.

Our new reconstructions confirm a significant cooling trend from 0 to 1900 CE across all Antarctic regions where records extend back into the 1st millennium, with the exception of the Wilkes Land coast and Weddell Sea coast regions. Since 1900 CE, significant warming trends are identified for the West Antarctic Ice Sheet, the Dronning Maud Land coast and the Antarctic Peninsula regions, and these trends are robust across the distribution of records that contribute to the unweighted isotopic composites and also significant in the weighted temperature reconstructions.

This is a pretty outrageous spin, given that the continental Antarctic reconstruction continues the downward trend to 1950-60 – despite the use of a defective method which will enhance the most meager blade.  Despite these adverse results, they close with the obligatory warning of “significant and unusual warming” – none of which is evident in their data.

However, projected warming of the Antarctic continent during the 21st century may soon see significant and unusual warming develop across other parts of the Antarctic continent.


As noted above, Law Dome has been a long-standing issue at Climate Audit.

It astonishes me that there is no technical journal article on Law Dome d18O data either for the Holocene or for the past 2000 years. Van Ommen planned to publish the data according to my earliest correspondence with him (2004).  It’s disquieting that longer Holocene data for such an important site remains unpublished.

The characterization of Antarctic ice cores in the 2006 NAS report (discussed at CA here, especially at the press conference) was integral to their attempt to distinguish past warming from modern warming:

This [additional] evidence [of the unique nature of recent warmth in the context of the last one or two millennia] includes …the fact that ice cores from both Greenland and coastal Antarctica show evidence of 20th century warming (whereas only Greenland shows warming during medieval times).

However, this assertion in respect of Antarctica was not supported by their data or analysis. I tried unsuccessfully at the time to obtain a source. The Law Dome series, which was in circulation at the time, showed opposite results: warmth in the late first and very early second millennia, and no evidence of 20th century warming.

Drafts of IPCC AR4 showed a panel diagram of Southern Hemisphere proxies, but conspicuously omitted the Law Dome series. As an AR4 reviewer, I asked that it be included in the diagram (knowing of course that it showed a result that was opposite to what they were claiming.) The IPCC AR4 lead authors knew this as well and refused to show it in their diagram, concocting a ludicrous excuse. There was a revealing discussion in Climategate emails (discussed at CA here).

The Law Dome proxy series was important in the Gergis reconstruction as well. It met ex ante criteria for inclusion in her reconstruction. It was one of only three Gergis proxies with values in the Medieval period; if it were included in the network, medieval values would have been raised significantly. Rather than let this happen, Gergis concocted ex post screening criteria which excluded Law Dome from her network – see CA discussion here.


Reconciling Model-Observation Reconciliations

Two very different representations of consistency between models and observations are popularly circulated. On the one hand, John Christy and Roy Spencer have frequently shown a graphic which purports to show a marked discrepancy between models and observations in the tropical mid-troposphere, while, on the other hand, Zeke Hausfather, among others, has shown graphics which purport to show no discrepancy whatever between models and observations. I’ve commented on this topic on a number of occasions over the years, including two posts discussing AR5 graphics (here, here), with an update comparison in 2016 (here) and in 2017 (tweet).

There are several moving parts in such comparisons: troposphere or surface, tropical or global. The choice of reference period affects the rhetorical impression of time series plots. Boxplot comparisons of trends avoid this problem. I’ve presented such boxplots in the past and update them for today’s post.
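A minimal sketch of that boxplot approach, using entirely synthetic series whose trends loosely echo the numbers discussed in this post: fit an OLS slope to each run, then compare the trend distributions, which no choice of reference period can shift:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative only: compare per-run OLS trends (deg C/decade) instead of
# anomaly time series, so no reference-period choice can shift the picture.
years = np.arange(1979, 2018)

def decadal_trend(series, years):
    """OLS slope in deg C per decade."""
    return 10.0 * np.polyfit(years, series, 1)[0]

# Synthetic "model runs" around a 0.28 deg C/decade trend, plus one
# synthetic "observed" series near 0.13 deg C/decade
model_runs = [0.028 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
              for _ in range(102)]
obs = 0.013 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

model_trends = np.array([decadal_trend(r, years) for r in model_runs])
obs_trend = decadal_trend(obs, years)

# The five-number summary is what a boxplot draws
q1, med, q3 = np.percentile(model_trends, [25, 50, 75])
```

With trends rather than anomalies, the question “is the observed trend inside the model spread?” has a single answer regardless of baseline.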

I’ll also comment on another issue. Cowtan and Way argued several years ago that much of the apparent discrepancy in trends at the surface arose because the most common temperature series (HadCRUT4, GISS etc) spliced air temperature over land with sea surface temperatures. This is only a problem because there is a divergence within CMIP5 models between trends for air temperature (TAS) over ocean and for sea surface temperature (TOS). They proposed that the relevant comparandum for HadCRUT4 ought to be a splice as well: of TOS over ocean areas and TAS over land. When this was done, the discrepancy between HadCRUT4 and CMIP5 models was apparently resolved.

While their comparison was well worth doing, there was an equally logical approach which they either didn’t consider or didn’t report: splicing observations rather than models. There is an independent and long-standing dataset for night marine air temperatures (ICOADS). Combining this data with surface air temperature over land would avoid the problem identified by Cowtan and Way. Further, NMAT data is relied upon to correct/adjust inhomogeneity in SST series arising from changes in observation techniques, e.g. Karl et al 2015:

previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures.

Thus, there seem to be multiple reasons to look just as closely at a comparison resulting from this approach as at one based on splicing model data, as proposed by Cowtan and Way. I’ll show the resulting comparisons without prejudging.


Spencer and Christy’s comparisons are for satellite data (lower troposphere). They typically show the tropical troposphere, for which the discrepancy is somewhat larger than for the global (GLB) troposphere (shown below). The median value from models is 0.28 deg C/decade, slightly more than double observed trends in UAH (0.13 deg C/decade) or RSS version 3.3 (0.14 deg C/decade). RSS recently adjusted their methodology, resulting in a 37% increase in trend (now 0.19 deg C/decade). The UAH and RSS3.3 trends are below all but one of the 102 model-run combinations. Even the adjusted RSS4 trend is less than all but two of the 102 model-run combinations.

The obvious visual differences in this diagram illustrate the statistically significant difference between models and observations. Many climate scientists, e.g. Gavin Schmidt, are deniers of mainstream statistics and argue that there is no statistically significant difference between models and observations. (See CA discussion here.)

CMIP5 and HadCRUT4

IPCC AR5 compared CMIP5 projections of air temperature (TAS) to HadCRUT4 and corresponding surface temperature indices (all obtained by a weighted average of air temperature over land and SST over ocean). In this case, the discrepancy is not as marked, but still significant. The median model trend was 0.241 deg C/decade (less than for the troposphere), while the HadCRUT4 trend was 0.181 deg C/decade (Berkeley 0.163 deg C/decade). Berkeley was lower than all but six runs, HadCRUT4 lower than all but ten. Both were outside the range of the major models. As noted above, the basis of this comparison was criticized by Cowtan and Way, with the criticism re-iterated by Hausfather.

Cowtan and Way Variation

As noted above, Cowtan and Way (followed by Hausfather) combined CMIP5 models for TAS over land and TOS over ocean, for their comparison to HadCRUT4 and similar temperature data. This had the effect of lowering the median model trend to 0.189 deg C/decade (from 0.241 deg C/decade), indicating a reconciliation with observations (0.181 deg C/decade for HadCRUT4) for surface temperatures (though not for tropospheric temperatures, which they didn’t discuss.)


The ICOADS marine air temperature series is closely related to the SST series. There is certainly no prima facie discrepancy which disqualifies one versus the other as a valid index. There are major and obvious differences in trends between the ocean series and the land series. The difference is larger than in models, but models do project an increasing difference over the next century.

One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series. As an experiment, I constructed “MATCRU” as an area-weighted average of ICOADS and CRUTEM. Rather than the consistency reported by Cowtan-Way and Hausfather, this showed a dramatic inconsistency – not unlike the inconsistency in tropospheric series prior to the recent bodge of RSS data.
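The “MATCRU” construction is simple enough to sketch. The blend below uses invented anomaly values and a crude fixed ocean fraction in place of proper gridded area weights:

```python
import numpy as np

# Hypothetical sketch of a "MATCRU"-style blend: an area-weighted average
# of a marine air temperature series (ICOADS-like) and a land air
# temperature series (CRUTEM-like). All anomaly values are invented.
OCEAN_FRAC = 0.71  # approximate ocean fraction of Earth's surface
LAND_FRAC = 1.0 - OCEAN_FRAC

# Invented annual anomalies (deg C) for illustration only
marine_air = np.array([0.10, 0.12, 0.15, 0.14, 0.18])
land_air = np.array([0.20, 0.25, 0.30, 0.28, 0.35])

matcru = OCEAN_FRAC * marine_air + LAND_FRAC * land_air
```

The blended series necessarily lies between its two inputs, so if marine air temperatures run cooler than SST, a MATCRU-style index will run cooler than a conventional SST-plus-land index built with the same weights.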




What does this all mean? Are models consistent with observations or not? Up to the recent very large El Nino, it seemed that even climate scientists were on the verge of conceding that models were running too hot, but the El Nino has given them a reprieve. After the very large 1998 El Nino, there was about 15 years of apparent “pause”. Will there be a similar pattern after the very large 2015-16 El Nino?

When one looks closely at the patterns as patterns, rather than to prove an argument, there are interesting inconsistencies between models and observations that do not necessarily show that the models are WRONG!!!, but neither are they very satisfying in proving that the models are RIGHT!!!!

  • According to models, tropospheric trends should be greater than surface trends. This is true over ocean, but not over land. Does this indicate that the surface series over land may have baked in non-climatic factors, as commonly argued by “skeptics”, such that the increase, while real, is exaggerated?
  • According to models, marine air temperature trends should be greater than SST trends, but the opposite is the case. Does this indicate that SST series may have baked in some non-climatic factors, such that the increase, while real, is exaggerated?

From a policy perspective, I’m not convinced that any of these issues – though much beloved by climate warriors and climate skeptics alike – matters much. Whenever I hear that 2016 (or 2017) is the warmest year EVER, I can’t help but recall that human civilization is flourishing as never before. So we’ve taken these “blows” and not only survived, but prospered. Even the occasional weather disaster has not changed this trajectory.