Bristlecone Addiction in Shi et al 2013

Recently, Robert Way drew attention to Shi et al 2013 (online here), a multiproxy study cited in AR5, but not yet discussed at CA.

The paper by Shi et al (2013) is fairly convincing as to at least the last 1,000 years in the Northern Hemisphere. I am actually surprised that paper has not been discussed here since it aims at dealing with many of the criticisms of paleoclimate research. They use 45 annual proxies which are all greater than 1,000 years in length and all have a “demonstrated” temperature relationship based on the initial authors’ interpretations.

Robert correctly observed that Shi et al was well within the multiproxy specialization of Climate Audit and warranted coverage here. However, now that I’ve examined it, I can report that it is reliant on the same Graybill bristlecone chronologies that were used in Mann et al 1998-99. While critics of Climate Audit have taken exception to my labeling the dependence of paleoclimatologists on bristlecone chronologies as an “addiction”, until paleoclimatologists cease the repeated use of this problematic data in supposedly “independent” reconstructions, I think that the term remains justified.

While Robert reported that all these series had a “demonstrated” temperature relationship according to the initial authors’ interpretation, this is categorically untrue for Graybill’s bristlecone chronologies, where the original authors said that the bristlecone growth pulse was not due to temperature and sought an explanation in CO2 fertilization. (The preferred CA view is that the pulse is due to mechanical deformation arising from high incidence of strip barking in the 19th century, but that is a separate story.) Indeed, the bristlecone chronologies by and large failed even Mann’s pick-two test.

Shi et al also show a nodendro reconstruction, which yields a much weaker semi-stick and mainly uses a subset of Kaufman et al 2009 data. In a forthcoming post, I’ll show that even this weak result is questionable due to their use of contaminated data and upside-down data (not Tiljander; something different).

Data Coverage in Cowtan and Way

As I was reading section 3 (Global temperature reconstruction) of the Cowtan and Way paper, I came across this text:

The HadCRUT4 map series was therefore renormalised to match the UAH baseline period of 1981-2010. For each map cell and each month of the year, the mean value of that cell during the baseline period was determined. If at least 15 of the 30 possible observations were present, an offset was applied to every value for that cell and month to bring the mean to zero; otherwise the cell was marked as unobserved for the whole period.

Renormalization is not a neutral step – coverage is very slightly reduced, however the impact of changes in coverage over recent periods is also reduced. Coverage of the renormalized HadCRUT4 map series is reduced by about 2%.

This raised the question of the overall coverage of the HadCRUT dataset used in the study, as well as the effect of any changes in that coverage due to the methodology of the analysis in that paper. To address these issues, the data available on the paper’s website was downloaded and analysed using the R statistical package. The specific data used was contained in the file HadCRUT., which can be downloaded as part of the full data used in the paper.
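The renormalisation rule quoted above is straightforward to emulate. Below is a minimal sketch (in Python rather than R, and operating on a made-up anomaly array rather than the actual HadCRUT file, whose layout I won't parse here): for each cell and calendar month, the 1981-2010 mean is subtracted if at least 15 of the 30 baseline values are present; otherwise the cell-month is masked, which is where the roughly 2% coverage loss comes from.

```python
import numpy as np

def renormalize(data, years, base=(1981, 2010), min_obs=15):
    """Re-baseline a (month, cell) anomaly series per the quoted rule.

    data  : 2-D array, shape (n_months, n_cells), NaN = unobserved
    years : year of each monthly slice, length n_months
    """
    data = data.copy()
    in_base = (years >= base[0]) & (years <= base[1])
    n_months, n_cells = data.shape
    for m in range(12):                       # each calendar month separately
        rows = np.arange(m, n_months, 12)
        base_rows = rows[in_base[rows]]
        for c in range(n_cells):
            vals = data[base_rows, c]
            n = np.sum(~np.isnan(vals))
            if n >= min_obs:                  # enough baseline obs: shift mean to zero
                data[rows, c] -= np.nanmean(vals)
            else:                             # otherwise mark cell/month unobserved
                data[rows, c] = np.nan
    return data

def coverage(data):
    """Fraction of cell-months carrying an observation."""
    return float(np.mean(~np.isnan(data)))
```

Since the rule only ever removes data, `coverage(renormalize(...))` can never exceed `coverage(data)` – consistent with the quoted "coverage is very slightly reduced".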


Behind the SKS Curtain

As a preamble and reprise, I think that it is reasonable for Cowtan and Way to take exception to HadCRU’s failure to estimate temperature in Arctic gridcells and to propose methods for estimating this temperature. At a time when the climate community argued that differences between the major indices and accessibility to CRU data didn’t “matter”, I thought that both mattered. One of the reasons for transparency in CRU data and methods was so that interested parties could carry out their own assessments, as Cowtan and Way have done. They have diagnosed a downward bias in recent HadCRU results. On previous occasions, I’ve observed that the community is more alert to errors that go the “wrong way” than to errors that go the “right way”, and this opinion remains unchanged. As noted in my previous post, it doesn’t appear to me that their slight upward revision in temperature estimates has a material impact on the discrepancy between models and observations – a discrepancy which remains, despite efforts to spin otherwise.

In today’s post, I’ve re-examined Robert Way’s contributions to the secret SKS forum, where both he and Cowtan (Kevin C) have been long-time contributors. In my first post, I took exception to Way calling me a “conspiracy wackjob”. However, relative to the tenor of other SKS posts, in which their colleagues fantasize about “ripping” out Anthony Watts’ throat and about Anthony and me being perp-walked in handcuffs, Way’s language was relatively mild.

In addition, re-reading the relevant threads, other than a couple of occasions (ones to which I had taken exception), Way’s language was mostly temperate and well-removed from the conspiratorial fantasies about the “Denial Machine” that pervade too much of the SKS forum. In addition, this re-reading showed that, on numerous occasions, Way had agreed with Climate Audit critiques, sometimes in very forceful terms and usually against SKS forum opposition. Way typically accompanied these agreements with sideswipes to evidence his disdain for Climate Audit, but seldom, if ever, contradicted things that I had actually said.

I think that readers will be surprised at the degree of Way’s endorsement of the Climate Audit critique of Team paleoclimate practices.


Cowtan and Way 2013

There has been some discussion of Cowtan and Way’s 2013 take on HadCRUT4 at Lucia’s, Judy Curry’s, Nick Stokes’ and elsewhere. HadCRUT4 has run cooler than other datasets (including UAH satellite) in recent years. Cowtan and Way observe that HadCRU does not estimate temperature in many Arctic gridcells. Because Arctic temperatures have risen more than low-latitude temperatures, they state that recent HadCRU temperatures are biased low. (Since GISS extrapolates into the Arctic, it is less affected by this bias.)

In the context of IPCC SOD Figure 1.5 (or similar comparisons of models and observations), CW13 is slightly warmer than HadCRUT4, but the difference is small relative to the discrepancy between models and observations; the CW13 variation is also outside the Figure 1.5 envelope.

Figure 1. Cowtan and Way 2013 hybrid plotted onto IPCC AR5SOD Figure 1.5

Next, here is a simple plot showing the difference between the CW13 hybrid and HadCRUT4. Up to the end of 2005, the difference between the two had essentially zero trend; the divergence has arisen entirely since 2005.

Figure 2. Delta between CW Hybrid (basis 1961-1990) and HadCRUT4.
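The "zero trend to 2005" observation is easy to check given the delta series. Here is a toy sketch (illustrative numbers only, not the actual CW13/HadCRUT4 data): fit an OLS trend to the difference series before and after the breakpoint.

```python
import numpy as np

def trend_per_decade(t, y):
    """OLS slope of y against time t (in years), expressed per decade."""
    slope, _ = np.polyfit(t, y, 1)
    return 10 * slope

# Illustrative delta series: flat through 2005, then drifting upward,
# mimicking the behaviour described in the post.
t = np.arange(1979, 2013) + 0.5
delta = np.where(t < 2006, 0.0, 0.01 * (t - 2006))
pre, post = t < 2006, t >= 2006
```

Applying the same two fits to the real delta series would quantify Figure 2's behaviour: near-zero trend before 2006 and a positive trend afterwards.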

In their online commentary, Cowtan and Way praise Hansen for being the first person to report the effect of missing Arctic data on global temperature. However, according to Cowtan and Way’s own data, no material discrepancy between their index and HadCRUT4 had arisen as of 2005; Hansen was thus observing a discrepancy that had not yet emerged, making their praise of Hansen seem somewhat premature:

Probably the first mention of an underestimation of recent warming due to poor Arctic coverage comes from Hansen in 2006, who sought to explain why the NASA temperature data showed 2005 as being a record breaking warm year, in contrast to the Met Office temperature record.

That there are continuing defects in HadCRU methodology should hardly come as a surprise to CA readers. Attempts to reconcile and/or explain discrepancies between HadCRU and GISS also seem worthwhile to me.

Nor do efforts to apply kriging seem misplaced to me in principle. On the contrary, for someone with experience in ore reserves, it seems entirely natural; see, for example, some of Jeff Id’s discussion of Antarctica. I notice that their methodology results in changes to the Central England gridcell. While I don’t object to the use of kriging or similar methods to estimate values in missing gridcells, I don’t see any benefit to altering values in known gridcells, if that’s what’s happening here. (I haven’t parsed their methods and don’t plan to do so at this time.)

Co-author Way was an active participant at the secret SKS forum, where he actively fomented conspiracy theory allegations. Uniquely among participants in the secret SKS forum, he conceded that Climate Audit was frequently correct in its observations (“The fact of the matter is that a lot of the points he [McIntyre] brings up are valid”) and urged care in contradicting Climate Audit (“I wouldn’t want to go up against that group, between them there is a lot of statistical power to manipulate and make the data say what it needs to say.”) [Update Nov 21: While Way did wrongly associate me with conspiracy theory on a couple of occasions, including a tasteless accusation of being a "conspiracy wackjob", the vast majority of his language is temperate and reasonable and shows remarkable appreciation of the statistical points of our critique, with the slurs being a sort of incidental sideswipe. See the next post.]

Update: Here is an annotation of IPCC AR5 SOD Figure 11.12 comparing observations to CMIP5 RCP4.5, with both HadCRUT4 and CW13 outside the envelope.

Bart Verheggen compared CMIP5 RCP8.5 to observations, saying that “recent observations are at the low side of the CMIP5 model range”.

However, my own calculations using RCP8.5 show that observations are outside the envelope. Verheggen’s calculations are not consistent with similar calculations by others (including IPCC) and I presume that he’s made an error somewhere.
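Operationally, an "outside the envelope" check amounts to computing a percentile band across the model ensemble at each time step and testing whether observations fall outside it. A minimal sketch (the 5-95% band is my assumption; the actual envelope construction in the IPCC figures may differ):

```python
import numpy as np

def outside_envelope(obs, ensemble, lo=5, hi=95):
    """True where obs falls outside the [lo, hi] percentile band of the
    ensemble. ensemble shape: (n_runs, n_times); obs shape: (n_times,)."""
    lower = np.percentile(ensemble, lo, axis=0)
    upper = np.percentile(ensemble, hi, axis=0)
    return (obs < lower) | (obs > upper)
```

Run against a real CMIP5 anomaly ensemble and an observational series on a common baseline, this returns a boolean mask marking the years in which observations lie outside the model range.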


Another Absurd Lewandowsky Correlation

Lewandowsky’s recent article, “Role of Conspiracist Ideation”, continues his pontification on populations of 2, 1 and zero.

As observed here a couple of days ago, there were no respondents in the original survey who simultaneously believed that Diana faked her own death and was murdered. Nonetheless, in L13Role, Lewandowsky not only cited this faux example, but used it as a “hallmark” of conspiracist ideation:

For example, whereas coherence is a hallmark of most scientific theories, the simultaneous belief in mutually contradictory theories—e.g., that Princess Diana was murdered but faked her own death—is a notable aspect of conspiracist ideation [30].

However, this example is hardly an anomaly. The most cursory examination of L13 data shows other equally absurd examples.

One of the more amusing ones pertains to one of Lewandowsky’s signature assertions in Role, in which he claimed, echoing an almost identical assertion in Hoax, that “denial of the link between HIV and AIDS frequently involves conspiracist hypotheses, for example that AIDS was created by the U.S. Government [22–24].”

Lew reported a correlation of -0.111 between CYAIDS and CauseHIV, citing this correlation (together with negative correlations related to smoking and climate change) as follows:

The correlations confirm that rejection of scientific propositions is often accompanied by endorsement of scientific conspiracies pertinent to the proposition being rejected.

However, as with the fake Diana claims, Lewandowsky’s assertions are totally unsupported by his own data.

In the Role survey (1101 respondents), there were 53 who purported to disagree with the proposition that HIV caused AIDS (a vastly higher proportion than in the climate blog survey – a point that I will discuss separately). Of these 53 respondents, only two (3.8% of the 53 and 0.2% of the total) also purported to believe the proposition that the government caused AIDS. It is therefore simply untrue for Lewandowsky to assert, based on this data, that denial of the link between HIV and AIDS was either “frequently” or “often” accompanied by belief in the government AIDS conspiracy. It would be more accurate to say that it was “seldom” accompanied by such belief. Although Lewandowsky did not mention this, both of the two respondents who purported to believe this unlikely juxtaposition also believed that CO2 had caused serious negative damage over the past 50 years.
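The arithmetic behind these proportions is just a cross-tabulation. Here is a sketch of the counting, using a hypothetical 1-4 agree/disagree coding (Lewandowsky's actual response coding may differ, and the data below is constructed to reproduce the quoted counts, not his raw file):

```python
import numpy as np

def reject_and_conspire(cause_hiv, cy_aids):
    """Count respondents who disagree that HIV causes AIDS, and how many
    of those also agree that the government created AIDS.
    Assumes a 1-4 coding: 1-2 = disagree, 3-4 = agree (my assumption)."""
    reject = cause_hiv <= 2
    both = reject & (cy_aids >= 3)
    return int(reject.sum()), int(both.sum())

# Hypothetical data matching the quoted figures: 1101 respondents,
# 53 rejecting CauseHIV, of whom 2 also endorse CYAIDS.
cause_hiv = np.full(1101, 4); cause_hiv[:53] = 1
cy_aids = np.full(1101, 1); cy_aids[:2] = 4
```

On this data, `reject_and_conspire` returns (53, 2): 3.8% of the rejecters and 0.2% of the full sample, i.e. "seldom", not "frequently".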

Lewandowsky’s assertion in Role about a supposed link between denial of a connection between HIV and AIDS and a government AIDS conspiracy had previously been made in Hoax not just once, but twice:

Likewise, rejection of the link between HIV and AIDS has been associated with the conspiratorial belief that HIV was created by the U.S. government to eradicate Black people (e.g., Bogart & Thorburn, 2005; Kalichman, Eaton, & Cherry, 2010)…

Thus, denial of HIV’s connection with AIDS has been linked to the belief that the U.S. government created HIV (Kalichman, 2009)

However, Lewandowsky’s false claim received even less support in the survey of stridently anti-skeptic Planet 3.0 blogs. Even with fraudulent responses, only 16 of 1145 (1.4%) purported to disagree with the proposition that HIV caused AIDS, and of these 16, only 2 (12.5%) also purported to endorse the CYAIDS conspiracy. These two respondents were the two respondents who implausibly purported to believe in every fanciful conspiracy. Even Tom Curtis of SKS argued that these responses were fraudulent. Without these two fraudulent responses, the real proportion in the blog survey is 0. Either way, the data contradicts Lewandowsky’s assertion that disagreement with the HIV-AIDS proposition is “often” or “frequently” accompanied by belief in the government AIDS conspiracy at the climate blogs surveyed by Lewandowsky.

Even though there were even fewer respondents supposedly subscribing to the unlikely propositions in the blog survey, the negative correlation between CYAIDS and CauseHIV propositions was even more extreme: a seemingly significant -0.31, though only the two fake respondents purported to hold the two unlikely propositions.

Update: I’ve added some plots below to illustrate how Lewandowsky’s calculations of correlation go awry.

The contingency table of CauseHIV and CYAIDS for the L13Hoax data is shown below, with the size of each circle proportional to the count in the contingency table. Most of the responses are identical – thus the large circle. Because there are only two respondents purporting to hold the two most unlikely views, this is a very faint dot. A correlation coefficient implies a linear fit and normality of residuals: visually this is obviously not the case. There are a variety of tests that could be applied and the supposed Lewandowsky correlation will fail all of them.


If one goes back to the underlying definition of a correlation coefficient, it is a dot-product of two standardized vectors. In the context of a contingency table, this means that the contribution of each square in the contingency table to the correlation can be separately identified. I’ve done this in the graphic shown below, since the points, while elementary, are not immediately intuitive in these small-population situations. For each square in the contingency table, I’ve calculated the dot-product contribution and multiplied it by the count in the square, thereby giving the contribution to the correlation coefficient (which is the sum of the dot-product contributions). The area of each circle shows the contribution to the correlation coefficient: pink shows a negative contribution.

There are a few interesting points to observe. In a setup where nearly all the responses are identical and at one extreme, these responses make a positive contribution to the correlation coefficient. Responses in which the respondent strongly disagrees with CYAIDS but only agrees with CauseHIV, or in which the respondent strongly agrees with CauseHIV but only disagrees with CYAIDS, make a negative contribution to the correlation. Respondents with simple agreement with CauseHIV and simple disagreement with CYAIDS make a strong contribution to the correlation coefficient. The two (fake) respondents make a very large contribution to the correlation coefficient despite being only two responses.

Figure: Contributions to the CauseHIV vs CYAIDS correlation coefficient (Hoax data).

A Scathing Indictment of Federally-Funded Nutrition Research

Edward Archer of the University of South Carolina, lead author of a scathing examination of U.S. federally-funded nutrition research, has written an even more scathing editorial in The Scientist (here) (H/t Margaret Wente of the Toronto Globe and Mail here.)

Some quotes:

We may be witnessing the confluence of two inherent components of the human condition: incompetence and self-interest

And while the self-correcting nature of science necessitates failure, the vast majority of nutrition’s failures were engendered by a complete lack of familiarity with the scientific method.

Rather than training graduate students in the scientific method, and allowing their research to serve the needs of society, the field’s leaders choose to train their mentees to serve only their own professional needs—namely, to obtain grant funding and publish their research.

But by not training mentees in the basics of science and skepticism, the nutrition field has fostered the use of measures that are so profoundly dissonant with scientific principles that they will never yield a definitive conclusion. As such, we now have multiple generations of nutrition researchers who dominate federal nutrition research and the peer review of that work, but lack the critical thinking skills necessary to critique or conduct sound scientific research.

The subjective data yielded by poorly formulated nutrition studies are also the perfect vehicle to perpetuate a never-ending cycle of ambiguous findings leading to ever-more federal funding.

Archer culminates with the following allegation (going much further than any of my comparatively mild critiques of climate scientists):

Perhaps more importantly, to waste finite health research resources on pseudo-quantitative methods and then attempt to base public health policy on these anecdotal “data” is not only inane, it is willfully fraudulent… The fact that nutrition researchers have known for decades that these techniques are invalid implies that the field has been perpetrating fraud against the US taxpayers for more than 40 years—far greater than any fraud perpetrated in the private sector (e.g., the Enron and Madoff scandals).

The study was not funded by the U.S. federal government, but by an “unrestricted research grant” from Coca-Cola.

This study was funded via an unrestricted research grant from The Coca-Cola Company. The sponsor of the study had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.

I wonder if federally-funded nutrition scientists will respond with attacks on the Coke Brothers.

The Zen of Population (N=0)

Mann rose to prominence by supposedly being able to detect “faint” signals using “advanced” statistical methods. Lewandowsky has taken this to a new level: using lew-statistics, lew-scientists can deduce properties of population with no members. Josh summarizes the zen of lew-statistics as follows:


More False Claims from Lewandowsky

Another bogus claim from Lewandowsky would hardly seem to warrant a blog post, let alone one about people holding contradictory beliefs. The ability of many climate scientists to hold contradictory beliefs at the same time has long been a topic of interest at climate blogs (Briffa’s self-contradiction being a particular source of wonder at this blog). Thus no reader of this blog would preclude the possibility that undergraduate psychology students might also express contradictory beliefs in a survey.

Nonetheless, I’ve been mildly interested in Lewandowsky’s claims about people subscribing to contradictory beliefs at the same time, as for example, the following:

While consistency is a hallmark of science, conspiracy theorists often subscribe to contradictory beliefs at the same time – for example, that MI6 killed Princess Diana, and that she also faked her own death.

Lewandowsky’s assertions about Diana are based on an article by Wood et al. entitled “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”. A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.

The most cursory examination of the data contradicted Lewandowsky’s claim. One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.


A New Climate Costumed Vigilante

A trivia question today for CA readers.

Rosenthal et al 2013

There has been considerable recent attention to Rosenthal et al 2013: WUWT here, Judy Curry here, Andy Revkin here.

The article itself presents a Holocene temperature reconstruction that is very much at odds both with Marcott et al 2013 and Mann et al 2008. And, only a few weeks after IPCC expressed great confidence in the non-worldwideness of the Medieval Warm Period, Rosenthal et al 2013 argued that the Little Ice Age, Medieval Warm Period and Holocene Optimum were all global events.

Although (or perhaps because) the article apparently contradicts heroes of the revolution, Rosenthal et al 2013 included a single sentence of genuflection to CAGW:

The modern rate of Pacific OHC change is, however, the highest in the past 10,000 years (Fig. 4 and table S3).

In the Columbia and Rutgers press releases accompanying the article, this claim was ratcheted up into the much more grandiose assertion that modern warming is “15 times faster” than in previous warming cycles over the past 10,000 years (though the phrase “15 times faster” does not actually appear in the peer-reviewed article):

In a reconstruction of Pacific Ocean temperatures in the last 10,000 years, researchers have found that its middle depths have warmed 15 times faster in the last 60 years than they did during apparent natural warming cycles in the previous 10,000.

Rather than quoting the article itself, Michael Mann, an academic activist at Penn State University, repeated the claim from the press release in an article at Huffington Post entitled “Pacific Ocean Warming at Fastest Rate in 10,000 Years”.

However, both the claim in the press release and the somewhat weaker claim in the article appear to be unsupported by the actual data.

