Bishop Hill draws attention to the publication of Trenberth’s comment on Spencer and Braswell 2011 in Remote Sensing. Unlike Trenberth’s presentation to the American Meteorological Society earlier this year (see here, here and here), Trenberth et al 2011 was not plagiarized.
The review process for Trenberth was, shall we say, totally different from the review process for O’Donnell et al 2010 or the comment by Ross and me on Santer et al 2008. The Trenberth article was accepted on the day that it was submitted:
Received: 8 September 2011 / Accepted: 8 September 2011 / Published: 16 September 2011
CA readers are well aware of long-term obstruction by the Team not simply regarding details of methodology, but even data. Trenberth objects to incompleteness of methodological description in Spencer and Braswell 2011 as follows:
Moreover, the description of their method was incomplete, making it impossible to fully reproduce their analysis. Such reproducibility and openness should be a benchmark of any serious study.
Obviously these are principles that have been advocated at Climate Audit for years. I’ve urged the archiving of both data and code for articles at the time of publication to avoid such problems. However, these suggestions have, all too often, been resolutely opposed by the Team. Even supporting data, all too often, remains unavailable. I haven’t had time to fully parse Spencer and Braswell as to reproducibility, but note that Spencer promptly provided supporting data to me when requested (as did Dessler). In my opinion, Spencer and Braswell should have archived data as used and source code concurrent with publication, as I’ve urged others to do. However, their failure to do so is hardly unique within the field. That Trenberth was able to carry out a sensitivity study as quickly as he did suggests to me that their methodology was substantially reproducible, but, as I noted above, I haven’t parsed the article.
Trenberth observes that “minor changes” in assumptions yielded “major changes” in results, concluding that the claims in Lindzen and Choi 2009 were not robust:
The work of Trenberth et al., for instance, demonstrated a basic lack of robustness in the LC09 method that fundamentally undermined their results. Minor changes in that study’s subjective assumptions yielded major changes in its main conclusions.
I am not in a position to comment on the truth or falsity of Trenberth’s claims as applied to Lindzen and Choi 2009. However, this sort of argument has been a staple of our criticisms of paleo reconstructions, both at Climate Audit and in our published articles. Instead of commending us for such observations in respect to MBH, Trenberth publicly disparaged Ross and me personally for daring to criticize Mann et al. I agree with the principle that Trenberth enunciated here, but not with his hypocritical application of it.
Trenberth criticizes Spencer and Braswell for inadequate statistical analysis:
For instance, SB11 fail to provide any meaningful error analysis in their recent paper and fail to explore even rudimentary questions regarding the robustness of their derived ENSO-regression in the context of natural variability.
To a considerable degree, Spencer and Braswell 2011 was a commentary on Dessler 2010. Neither article carried out satisfactory statistical analysis. Dessler 2010 reported a regression with an adjusted r2 of ~0.01 and nonetheless purported to attach “confidence intervals” to it. UC carried out the “rudimentary” statistical operation of re-calculating the slope using the y-variable as regressand, for consistency, and obtained different results. Results using CERES clear sky were opposite to results using ERA clear sky. Whatever the merits of CERES versus ERA, this is exactly the sort of sensitivity that should have been reported. This is not to say that the statistical analysis of Spencer and Braswell 2011 was superior to that of Dessler 2010. It wasn’t. Neither article met the criteria enunciated by Trenberth.
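The sensitivity of a near-zero-r2 regression to the choice of regressand is easy to illustrate. The sketch below uses purely synthetic data (not the actual CERES, ERA or Dessler series; all variable names are illustrative) to show that when r2 is small, regressing y on x and regressing x on y imply very different slopes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies with a weak linear relationship (r2 on the
# order of 0.01), standing in for a flux-vs-temperature regression.
n = 120
x = rng.normal(size=n)            # e.g. temperature anomaly (illustrative)
y = 0.1 * x + rng.normal(size=n)  # e.g. flux anomaly, mostly noise

cov = np.cov(x, y)[0, 1]
slope_yx = cov / np.var(x, ddof=1)      # OLS slope, y regressed on x
slope_xy_inv = np.var(y, ddof=1) / cov  # implied y-vs-x slope from x-on-y fit

r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"r2 = {r2:.3f}")
print(f"slope (y on x)         = {slope_yx:.3f}")
print(f"implied slope (x on y) = {slope_xy_inv:.3f}")
# The two directions differ by a factor of 1/r2, so with r2 ~ 0.01 the
# fitted slope depends overwhelmingly on the choice of regressand.
```

The product of the two directed slopes equals r2 exactly, which is why a regression with r2 near zero cannot pin down a slope: the two directions bracket an enormous range.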
If Trenberth really wants to get into the question of failures to explore “rudimentary questions” of robustness, I invite him to examine the infamous CENSORED directory of MBH98 or to search for the verification r2 results of early steps of MBH98.
Trenberth observes that “correlation does not mean causation” – a principle that is important at Climate Audit:
Moreover, correlation does not mean causation. This is brought out by Dessler, who quantifies the magnitude and role of clouds and shows that cloud effects are small even if highly correlated.
Unfortunately, this principle is applied opportunistically in paleoclimate. Team methodology, for example, makes no attempt to verify that 6-sigma bulges in strip bark bristlecone pine are due to temperature (as opposed to a mechanical effect of strip barking itself.) Team methodology accepts Yamal as a temperature proxy without explaining the decline in ring widths in the majority of nearby sites.
Trenberth wildly overstates Dessler 2011 as well by saying that it “quantifies the magnitude and role of clouds and shows that cloud effects are small”. “Quantifying the magnitude and role of clouds” is an enormous undertaking that would take hundreds of pages of analysis. Dessler 2011 is a short article addressing a narrow issue. It did not pretend to “quantify the magnitude and role of clouds”, nor did it do so.
Clouds were the major source of uncertainty in climate models in Charney 1979 and remained so in IPCC AR4 (2007). If Dessler 2011 did in fact show that “cloud effects are small”, this would be an epochal achievement in climate science. Given that a preprint of Dessler 2011 only became available on Sept 2, 2011, there has been little opportunity to analyse its results so far. Whether Dessler 2011 really proves that “cloud effects are small” remains to be seen. If, like Dessler 2010, it makes such assertions based on an r2 of ~0.01, I think people could reasonably disagree on whether such far-reaching claims had been firmly established.
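How little a regression with r2 of ~0.01 constrains its slope can also be sketched numerically. Again, this uses synthetic data, not any of the actual series discussed above; it simply shows that at this r2, a two-standard-error interval on the slope is comparable in width to the slope itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic series in which the true slope is buried in noise (r2 ~ 0.01).
n = 120
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)

# OLS fit and the conventional standard error of the slope.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean()) ** 2))

# An approximate 95% interval (slope +/- 2 standard errors): with r2 this
# low, the interval spans a range as wide as the estimate itself or wider.
lo, hi = slope - 2 * se, slope + 2 * se
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"r2 = {r2:.3f}, slope = {slope:.3f}, ~95% CI = [{lo:.3f}, {hi:.3f}]")
```

A “confidence interval” quoted from such a fit is formally computable, but it says almost nothing about the sign or size of the underlying relationship.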