Warmest in a Millll-yun Years

We’re going to be hearing more about this – see the press release here, for example. I’ll add headnotes to this later but, for now, I’ll post up some URLs that some of you may find handy.

The underlying article by Hansen et al is at PNAS here. They thank Ralph Cicerone for his review comments. The article itself is a bizarre and undisciplined hodgepodge in which they discuss Hansen’s congressional testimony in 1988 for a while, then an ocean sediment record in the Western Equatorial Pool, then sea levels and species extinctions, musing on Dangerous Anthropogenic Interference and the Framework Convention – which Stephen Schneider set out as an objective some time ago. (Ross twigged to the increasing mentions of Dangerous Anthropogenic Interference – the trigger phrase for the Framework Convention – to which the U.S. is a party.)

The underlying ocean sediment article is online here, with some related literature here.

U.S. CCSP Recommends Audit Trails

The U.S. CCSP report on temperature trends includes the following remarkable recommendations on audit trails:

The independent development of data sets and analyses by several independent scientists or teams will serve to quantify structural uncertainty and to provide objective corroboration of the results. In order to encourage further independent scrutiny, data sets and their full metadata (footnote 1) should be made openly available. Comprehensive analyses should be carried out to ascertain the causes of remaining differences between data sets and to refine uncertainty estimates.

In their text, they say:

To ascertain unambiguously the causes of differences in data sets generally requires extensive metadata for each data set (C4; NRC, 2000b). Appropriate metadata, whether obtained from the peer-reviewed literature or from data made available on-line, should include, for data on all relevant spatial and temporal scales:
- Documentation of the raw data and the data sources used in the data set construction to enable quantification of the extent to which the raw data overlap with other similar data sets;
- Details of instrumentation used, the observing practices and environments and their changes over time to help assessments of, or adjustments for, the changing accuracy of the data;
- Supporting information such as any adjustments made to the data and the numbers and locations of the data through time;
- An audit trail of decisions about the adjustments made, including supporting evidence that identifies non-climatic influences on the data and justifies any consequent adjustments to the data that have been made; and
- Uncertainty estimates and their derivation.
This information should be made openly available to the research community.

For this chapter, the Convening Lead Author was Chris K. Folland, U.K. Met. Office; the Lead Authors were D.E. Parker, U.K. Met. Office; R.W. Reynolds, NOAA; S.C. Sherwood, Yale Univ.; and P.W. Thorne, U.K. Met. Office.

Nice to see the term “audit trail” in a climate science publication. But how do these guys reconcile these pieties with acquiescence in total obstruction by Phil Jones and the Hockey Team? Now Folland himself is an important originator of SST data – maybe someone should see how they do with requesting data and metadata from him?

New Satellite Data

Get ready for a new round of satellite disputes. Spencer and Christy have version 6.0 out in draft form as of Sept 9, 2006. Here’s a plot of the data from the website. The SH result is particularly intriguing – now showing virtually no change. In the recent U.S. CCSP report, as Fred Singer pointed out in Stockholm, in addition to the low trend in tropospheric temperature, there is the fingerprint problem: tropospheric temperature trends are supposed to run hotter than surface temperature trends, but observations show the opposite.

I’m sure that there will be a new furore. There is a readme at the site about the diurnal corrections in version 6.0.

Trip Report – Holland

A late report on my visit to Holland. I don’t think that I’ve talked as much in a month as I did in 36 hours in Holland. I had two main presentations – one at KNMI in the morning; one at the Free University in the afternoon. I also had two long newspaper interviews and a long meeting on Friday morning with a Dutch mathematician. After the KNMI presentation, I had lunch with Rob van Dorland, Nanne Weber and Jos de Laat of KNMI, all of whom were very cordial, and spent much of the afternoon talking with them.

Throughout I was very cordially entertained and guided by Marcel Crok of NWT (and his charming wife). Any success that I had was largely due to Marcel’s initiative.

At the Free University lecture, there were some CA readers: Ferdinand Engelbeen, Hans Erren, Hans Labohm; and perhaps a couple of others. (Larry and Lena Hulden were at the Stockholm conference and I had a chance to talk to them there.) Ferdinand kindly gave me some very interesting Belgian beers; Hans Erren a 2003 Burgundy – from the hot summer. All in all, I felt very welcome in Holland.

IDAG 2004

Here’s a survey article by the International Detection and Attribution Group url. I’ll post a few threads like this, but don’t plan to comment myself. If someone wants to contribute a brief summary, it would be appreciated.

Stern Review – Technical Appendix

As both David H.’s observe, the Stern Review is expected next month. Their science views are summarized here. I may post some headnotes at a later time, but it’s an interesting browse and some of you may wish to comment on it specifically.

Allen and Tett 1999

I’ve posted up a pdf of Allen and Tett 1999 here, as this seems to be a frequently cited article that said that "optimal fingerprinting" was linear regression, and it gives a flavor of the literature. The approach looks to me like pretty garden-variety methodology, such as one would see in the fall term of an econometrics course. It’s hard to believe that this is the Royal Society’s "advanced statistical methods" – I wonder if they checked this with any statisticians.

Royal Society and Detection/Attribution

The U.K. Royal Society has recently sent a letter to ExxonUK which has attracted commentary about whether it is an interference with free speech (see Roger Pielke and the discussion there). I’m interested in a different aspect of this letter – their reliance on IPCC detection and attribution discussion. The Royal Society takes exception to the following comment in Exxon reporting:

I think that there’s considerable justification for saying that IPCC conclusions rest on "expert judgement" rather than "objective, reproducible statistical methods". I don’t think that there’s anything necessarily wrong with people making decisions based on "expert judgement" – this is done all the time. Indeed the NAS Panel based its impressionistic assessment of temperature history on expert judgement rather than confidence interval estimation – a point made clearly at the NAS press conference. However, the Royal Society took great umbrage at the above characterization. They went on as follows:

In my canvassing of Hockey Team results, I would say the opposite: Team results are not based on "objective, reproducible statistical methods". However, the Royal Society does not seem to have Hockey Team reconstructions in mind (although U.K. official opinion seems to be that the HS has not been dented); their idea of "objective, reproducible statistical methods" is the Detection and Attribution chapter of IPCC, chapter 12 – note the Appendix, in which they report:

Appendix 12.1: Optimal Detection is Regression
The detection technique that has been used in most “optimal detection” studies performed to date has several equivalent representations (Hegerl and North, 1997; Zwiers, 1999). It has recently been recognised that it can be cast as a multiple regression problem with respect to generalised least squares (Allen and Tett, 1999; see also Hasselmann, 1993, 1997) ….

The "detection and attribution" literature uses terms like "optimal fingerprinting", which seems to be high-falutin term for multiple regression (or "multiregression" as it is sometimes referred to in this literature.) Prominent authors in this vein are Hegerl, Stott, Tett and Myles Allen of the climateprediction,et 11.5 deg C press release. I’ve browsed this literature and been put off by the opaque and inflated language, which sometimes makes Mann seem like Hemingway in clarity and purpose. However, it’s probably time to start grasping this particular nettle and see what actually lies underneath these "detection and attribution studies". I’m going to try to pin down which studies are "seminal" in this field. I would be interested in contributions from anyone who is successful in translating any of this turgid prose into conventional statistical concepts – ideally we would start with about 6-8 notes on specific studies that have been cited in the field.

more bender on Emanuel

Here is the (big) change in correlation as the Emanuel data are progressively smoothed, 0x, 1x, 2x. (Here I exclude the endpoints, which is what one ought to do.)

> cor(SST, PDI)
[1] 0.5050575
> cor(SST.1, PDI.1)
[1] 0.6278249
> cor(SST.2, PDI.2)
[1] 0.7484223

Square those to obtain r^2.

Here are two graphs of Emanuel’s SST and PDI, showing the effect of 1x and 2x smoothing on the PACF and spectra for SST and PDI respectively.
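For readers who want to see the mechanism at work, here is a minimal Python sketch of why repeated smoothing drives the correlation up: a shared low-frequency signal passes through the filter while the independent high-frequency noise is attenuated. The series below are synthetic stand-ins, not Emanuel’s data, and the 1-2-1 filter is simply an illustrative choice of smoother.

```python
import numpy as np

# Synthetic stand-ins for SST and PDI: a shared low-frequency signal
# plus independent white noise in each series.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 20)       # shared low-frequency component
sst = signal + rng.normal(size=n)
pdi = signal + rng.normal(size=n)

def smooth(x):
    # 1-2-1 filter; drops the endpoints, as bender does above
    return 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]

def cor(a, b):
    return np.corrcoef(a, b)[0, 1]

r0 = cor(sst, pdi)                                   # 0x smoothing
r1 = cor(smooth(sst), smooth(pdi))                   # 1x
r2 = cor(smooth(smooth(sst)), smooth(smooth(pdi)))   # 2x
print(r0, r1, r2)   # correlation typically rises with each pass
```

Each pass leaves the shared sinusoid nearly intact but cuts the white-noise variance by more than half, so the correlation (and hence r^2) climbs even though nothing about the underlying relationship has changed.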

More bender and Willis on Emanuel

Here’s Willis’ most recent summary of the ongoing dialogue on the Emanuel story.