Is Briffa Finally Cornered?

In 2000, Keith Briffa, lead author of the millennial section of AR4, published his own versions of Yamal, Taymir and Tornetrask, all three of which have been staples of all subsequent supposedly “independent” reconstructions. The Briffa version of Yamal has a very pronounced HS and is critical to the modern-medieval differences in several studies. However, the Briffa version of Yamal differs substantially from the version published by the originating authors (Hantemirov, Holocene 2002), yet it is the one used in the multiproxy studies (though it’s hard to tell, since Hantemirov is usually the one cited). Studies listed in AR4 that use the Briffa versions include not just Briffa 2000, but Mann and Jones 2003, Moberg et al 2005, D’Arrigo et al 2006, Hegerl et al 2007, as well as Osborn and Briffa 2006.

Of the 8 proxies shown in the proxy spaghetti graph (as opposed to the reconstruction spaghetti graph), 3 are from the Briffa 2000 study (called NW Russia, N Russia and N Sweden) and are demonstrably the Briffa versions of these sites.

An important characteristic of tree ring chronologies is that they are sensitive to the method used. Chronologies can be quickly and easily calculated from measurement data. Rob Wilson, for example, will nearly always run his own chronologies from measurement data so that he knows for sure how they were done and so that they are done consistently across sites.
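As a rough illustration of this sensitivity (a sketch only, using the dplR package rather than Briffa’s actual method, which is not archived; the measurement file name is hypothetical):

```r
# Same measurement data, two standard detrending choices, two chronologies.
library(dplR)
rwl <- read.rwl("site.rwl")   # ring-width measurement data (hypothetical file)
crn.negexp <- chron(detrend(rwl, method = "ModNegExp"))  # negative-exponential fit
crn.spline <- chron(detrend(rwl, method = "Spline"))     # spline standardization
# The two chronologies can differ materially, particularly near the ends of
# the series - which is why running them consistently across sites matters.
```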

Osborn and Briffa 2006 was published in Science, which has a policy requiring the availability of data. It used Briffa’s versions of Yamal, Taymir and Tornetrask. At the time, I requested the measurement data, which had still not been archived 6 years after the original publication of Briffa 2000, despite the availability of excellent international archive facilities at WDC-A (www.ncdc.noaa.gov/paleo). Briffa refused. I asked Science to require Briffa to provide the data. After some deliberation, they stated that Osborn and Briffa 2006 had not used the measurement data directly, but had only used the chronologies from an earlier study, and that I should take up the matter with the author of the earlier study, pointedly not identifying the author, who was, of course, Briffa himself. I wrote Briffa again, this time in his capacity as author of the 2000 article in Quaternary Science Reviews, and was blown off. (See here for my last account of efforts to get Briffa data via Science mag.)

So, years later, the measurement data for key studies, used in the canonical multiproxy studies and illustrated in AR4 Box 6.4 Figure 1 (along, remarkably, with Mann’s PC1), remains unarchived, with Briffa resolutely stonewalling efforts to have him archive it.

But has Briffa, after all these years, finally made a misstep?

Ward Hunt Island: Unprecedented since 2005

Bernie draws our attention to an article in the Globe and Mail on another break-off of the Ellesmere Island ice shelf:

The Globe and Mail has an excellent map of the “collapse” of this ice shelf. Apparently its collapse has been proceeding for about 100 years.

Update- The break is said to be unprecedented since as long ago as 2005:

Scientists say the break, the largest on record since 2005 but still small when compared with others

This topic is in the news from time to time – there was a similar story a couple of years ago. At the time, I looked into the matter and wrote several posts on the topic of Ellesmere Island ice shelves, which people interested in this topic may wish to re-visit.

Ellesmere Island Ice Shelves
Ellesmere Island Driftwood
Ayles Ice Shelf
Ward Hunt Ice Shelf Stratigraphy
Ice Island T-3

Bradley and England 2008, “The Younger Dryas and the sea of ancient ice”, is a highly readable and interesting discussion of Ice Age climate which, inter alia, contains an account of late 19th century descriptions of the Ellesmere Island ice shelves, which Bradley and England propose as an analogue for the much larger extent of “paleocrystic ice” that they posit for the LGM. Their Figure 1 (shown below) is an 1878 watercolor of an Ellesmere Island ice shelf, about which they say (my bold):

We believe this painting provides an eye-witness view of some of the paleocrystic floes that formed during the Little Ice Age, but were breaking up by the end of the 19th century.

Here’s their Figure 1:
[Figure 1 – 1878 watercolor of an Ellesmere Island ice shelf]

Reference:
Bradley, R.S. and J.H. England, 2008. The Younger Dryas and the sea of ancient ice. Quaternary Research (in press). doi:10.1016/j.yqres.2008.03.002

Koutsoyiannis et al 2008: On the credibility of climate predictions

As noted by Pat Frank, Demetris Koutsoyiannis’ new paper has been published, evaluating 18 years of climate model predictions of temperature and precipitation at 8 locales distributed worldwide. Demetris notified me of this today as well.

The paper is open access and can be downloaded here: http://www.tandfonline.com/doi/pdf/10.1623/hysj.53.4.671

Here’s the citation: D. Koutsoyiannis, A. Efstratiadis, N. Mamassis & A. Christofides, “On the credibility of climate predictions”, Hydrological Sciences Journal – Journal des Sciences Hydrologiques, 53(4) (2008).

Abstract “Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.”

Pat Frank observes: “In essence, they found that climate models have no predictive value.”

Hansen Update

No single topic seems to arouse as much blog animosity as any discussion of Hansen’s projections. Although NASA employees are not permitted to do private work for their bosses off-hours (a favor-currying prohibition, I suppose; secretaries, for example, are not supposed to do their boss’s typing), over at realclimate Gavin Schmidt, in his “private time”, which flexibly includes 9 to 5, has provided bulldog services on behalf of his boss, James Hansen, on a number of occasions.

In January 2008, I discussed here and here how Hansen’s projections compared against the most recent RSS and MSU data, noting a downtick which resulted in a spread not merely between observations and Scenario A, but between observations and Scenario B, sometimes said to have been vindicated. For my January 16, 2008 post, I used the then most recent RSS data (as well as the UAH version, which showed a lesser downtick). However, a few days later, RSS revised their data to be more in line with UAH. On January 23, 2008, I updated my graphic using the revised RSS data, which caused a slight modification. Some blog commenters have suggested that I made an error in my Jan 16, 2008 post, but these suggestions have no purpose other than defamation. I had used the then-current data and promptly updated my graphic within a few days of RSS revising their data. In the latter post, I criticized RSS for not issuing a notice of the change.

In a post today, Andrew Bolt used the earlier version of this graphic from January 16, 2008, rather than the January 23, 2008 version. A couple of blog commenters have criticized Bolt for using the earlier graphic, with Tim Lambert additionally criticizing me for not having placed a notice of the update on the Jan 16, 2008 post (which I’ve now done).

However, rather than engaging in further exegesis of the January versions, I thought it would be useful to update the graphics to include satellite data up to June 2008 and GISS data up to May 2008. Ironically, the new data has resulted in a downtick that is more substantial than either of the versions published in January. Lucia has also done many posts on this topic and I urge readers to visit her blog.

Tornetrask Digital Version – Hooray!

I’ve been trying for nearly a year to get a digital version of the Tornetrask reconstruction of Grudd (2008), also referred to in his thesis.

Last week, Hakan Grudd sent me a digital version of the MXD series – hooray! Previously, I had previewed this new version using clips from the articles or thesis, e.g. here.

Because Tornetrask is used in every recon, I needed this new version to test the overall impact of the newer versions of key series. In the first figure below, I’ve plotted the three successive versions of Tornetrask: top, from Briffa et al 1992, the scene of what Per referred to as a “gobsmacking” adjustment, discussed here; middle, from Briffa 2000, a version used in some later studies (though Juckes et al 2007 used the 1992 version on the pretext that “scientific” procedure mandated the use of older data); bottom, from Grudd 2008.

The three versions are overlaid in smoothed form in the fourth panel, showing that the Grudd version has an elevated MWP relative to the modern period that is not present in the Briffa versions.

Some Quick Thoughts on CSIRO Drought Info

First of all, the most important issue in this study is acknowledging Hennessy et al 2008. I had to agree to acknowledge them about 10 separate times in order to download data, and so I do so. Acknowledging Hennessy et al 2008 seems to be more important to the authors than the results themselves. I hereby acknowledge:

Hennessy et al. (2008): “An assessment of the impact of climate change on the nature and frequency of exceptional climatic events”. A consultancy report by CSIRO and the Australian Bureau of Meteorology for the Australian Bureau of Rural Sciences, 33 pp.

Their data archive is a total pig. The archive is all in little micro series, which are tarred, so getting the data into a form that you can use is highly annoying and time-consuming. Some of it consists of Excel files within tar files, making the extraction even more time-consuming, as each Excel file has to be saved as a csv file in order to be read into R or Matlab for statistical analysis. I’ve managed to organize the data into a few usable R-objects so that, instead of downloading multiple tar files and manually unzipping each one, you can load the collated data directly. I hereby acknowledge:

Hennessy et al. (2008): “An assessment of the impact of climate change on the nature and frequency of exceptional climatic events”. A consultancy report by CSIRO and the Australian Bureau of Meteorology for the Australian Bureau of Rural Sciences, 33 pp.

If you don’t want to waste endless amounts of time wading through the goofy tar files, I’ll save you the effort of repeating all the aggravating hoops that I had to go through (I hereby acknowledge Hennessy et al 2008): use the following commands (watch the quotation signs from WordPress):

download.file("http://data.climateaudit.org/data/csiro/csiro.cy.tab","temp.dat",mode="wb"); load("temp.dat")
download.file("http://data.climateaudit.org/data/csiro/rainf.tab","temp.dat",mode="wb"); load("temp.dat")
download.file("http://data.climateaudit.org/data/csiro/tempf.tab","temp.dat",mode="wb"); load("temp.dat")

The first yields an R-object “csiro”, which is a list of 8 objects with names

# rain.5pc rain.95pc tmax.5pc tmax.95pc tmean.5pc tmean.95pc tmin.5pc tmin.95pc

each of which collates the regional and total time series (see names in the column heads). The second and third collate the rainfall and temperature forecasts into objects “rainf” and “tempf”, also lists, this time of 7 objects by region, with the columns being the different model forecasts. The collation script is shown in a comment.
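As a sketch of how to pull one series out (the component structure and column names here are inferred from the description above and may differ from the actual collation):

```r
# After loading csiro.cy.tab as above:
names(csiro)             # rain.5pc rain.95pc tmax.5pc ... tmin.95pc
X <- csiro$rain.5pc      # regional and total series in the columns
colnames(X)              # region names are in the column heads
plot(X[, 1], type = "l",
     ylab = "% area with rainfall below 5th percentile")
```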

I took a look at the results for the percentage of area with rainfall under the 5th percentile for two regions, picked at random: 4 – Queensland; and 1 – Murray-Darling.

According to my calculations, the average intermodel correlation of the results for Murray-Darling was 0.009 (Qld: 0.027), while the average correlation of the model results to observations for Murray-Darling was -0.013 (Qld: -0.017). [David Stockwell observes that the timing of droughts would be stochastic for a given model, which is fair enough. However, as I observe below, looking at Qld vs GISS, aside from distributions, the GISS model shows a 20th century increase in drought while the data show a decrease. So if the finding promoted to the public is trend, there seems to be a mismatch for this region at a pretty basic level for this model.]
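The calculation behind these numbers can be sketched as follows (illustrative, not my actual script; it assumes a matrix X whose columns are the model forecasts for one region, and an observed series obs aligned to the same years):

```r
# Average intermodel correlation: mean of the off-diagonal pairwise correlations
C <- cor(X, use = "pairwise.complete.obs")
avg.intermodel <- mean(C[lower.tri(C)])

# Average correlation of the model results to observations
avg.model.obs <- mean(cor(X, obs, use = "pairwise.complete.obs"))
```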

Shown below are two plots: the first comparing Murray-Darling historical data to CSIRO Mk 3.5, and the second comparing Queensland historical data to GISS. Oh, yes, I hereby acknowledge:

Hennessy et al. (2008): “An assessment of the impact of climate change on the nature and frequency of exceptional climatic events”. A consultancy report by CSIRO and the Australian Bureau of Meteorology for the Australian Bureau of Rural Sciences, 33 pp.

I’ve attached my scripts, both for the collation and for the calcs. I’ve done this quickly; I’m not experienced with these data sets; they are poorly organized for statistical analysis; and I might have collated apples and oranges along the way, in which case I’ll amend the calcs.

[Figure: Murray-Darling historical vs CSIRO Mk 3.5]

[Figure: Queensland historical vs GISS]

You Can’t Make This Stuff Up

Speaking of record handling, here’s a particularly amusing defence of scientists failing to maintain proper records over at Connolley’s: they are less lazy than filmmakers.

Mosher noted the irony of the thoroughness of the Ofcom examination of Swindle records as compared to the haphazard and obstructive availability of data that we’ve seen too frequently in the parts of climate science examined here so far. Mosher:

I was pleased to that the filmakers actually kept good records of their emails and their raw footage. I was also pleased to see that they supplied this material when asked. It’s a standard that we should hold climate science to.

imagine that. science held to the same standards as schlock documentary makers.

Posted by: steven mosher | July 24, 2008 12:39 PM

Here is a defence:

Keep in mind that the retention of records may be a sign of laziness rather than high standards. I’ve got thousands of emails still on our email servers, much to our network admin’s chagrin. I really should delete them, but I’m lazy. Imagine that. Scientists may be less lazy than me and that Durkin fellow.

Posted by: pough | July 24, 2008 12:51 PM

You couldn’t make this up.

Besonen et al 2008 on Hurricane Proxies

There have been a couple of recent mentions of Besonen et al 2008 (including by Ray Bradley), which discusses varve sediment thickness in Lower Mystic Lake, New England, as a hurricane proxy, reported as a “1,000-year, annually-resolved record of hurricane activity from Boston, Massachusetts”.

Before discussing the article, I checked to see whether any of the data had been archived at WDCP. Bradley had told the House Energy and Commerce Committee

When I or my students have generated data sets they are generally sent to the WDC-A (World Data Center for Paleoclimatology) once the results have been published. This is the normal procedure followed in my field.

Unfortunately, a search under Besonen did not show any contributions. In fact, a search under Bradley likewise showed no contributions other than ones where he was joint author with Mann (or in one case Jones). I guess WDC-A forgot to include Bradley’s contributions in its index. (See CA discussion from 2005 here.)

The article refers to SI located at ftp://ftp.agu.org/apend/gl/2008GL033950; however, there is no such directory, and ftp://ftp.agu.org/apend/gl/ showed no relevant candidates. The Supplementary Information linked in the HTML version of the article consisted only of radiocarbon dates – useful, but hardly complete.

The graph of sediment thickness shown in the article noticeably resembles our favorite shape, as shown below. However, as observed in the article, there is (unsurprisingly) substantial non-climatic disturbance of sedimentation patterns in the Boston area, and post-1870 data is removed from the analysis. Accordingly, the authors say:

We confined our analysis to the period prior to 1870 given the significant anthropogenic interference and altered sedimentation dynamics as discussed above.

Figure 2 (a) LML varve thickness time series plot and identified extreme events. In the plot, actual varve thicknesses (mm) are plotted by the lower black line. The thickened gray line shows a robust estimate of the time dependent background thickness based on median smoothing with a 17 year window. The upper black line represents the med +3.5 rstd threshold, and varves with thicknesses which fall above this TDV are considered extremes (total of 47 observed). Of the 47 identified extreme events, the 36 which contain a graded bed are marked by filled black circles and listed in the inset table, and the 11 which do not contain a graded bed are marked by open black circles. The dashed vertical line at 1630 indicates the prehistoric/historic boundary for the region.

“Significantly”
Besonen et al report that certain centuries had a “significantly” higher hurricane frequency:

Hurricane frequency, as recorded at LML, has not been constant over the last millennium (Figure 2b); the 12th–16th centuries had a significantly higher level of hurricane activity (up to 8 extreme events occurring per century) compared to the 11th and 17th–19th centuries when only 2–3 per century was the norm.

On the other hand, they note that a nearby study had also observed “significant” changes, but apparently in a different direction:

We note that conclusions about frequency changes reached from the LML record differ from those reached by studies based on lower resolution records from nearby areas. For example, a study from Long Island [Scileppi and Donnelly, 2007] concluded that activity had significantly increased over the last 300 years with reduced activity during the earlier part of the millennium.

Figure 2b, referred to in the above statement, is shown below:
Figure 2b. (b) Frequency of hurricane-related deposits in the LML record grouped by century. The darker central bars represent the number of extreme events identified using a TDV of med +3.5 rstd. The flanking light gray bars represent the number of identified extremes using TDVs of med +2.0 rstd. (left) and med +5.0 rstd. (right). Note that given our analysis range (1011–1870), the first and last columns do not span a full century.

A Question
Although the authors state that the 12th–16th centuries had a “significantly higher” level of hurricane activity, they do not describe how they carried out their significance test, nor do they provide the data by which one could conduct a significance test on one’s own. Perhaps a reader interested in Poisson calculations for hurricanes would be interested in writing to Bradley and Besonen to (1) inquire what significance test was used as a basis for this claim; and (2) obtain the underlying data used to make the claim, and then carry out his own significance test to see whether the variations actually show “significantly higher” levels or whether the data could be the result of a Poisson distribution. I have a pdf of the article.
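For what it’s worth, here is a sketch of the sort of check such a reader might run. The per-century counts below are illustrative only, read off the levels quoted above, not the (unavailable) underlying data:

```r
# Hypothetical extreme-event counts per century, 11th through 19th centuries
# (the first and last columns do not span full centuries)
counts <- c(2, 8, 7, 6, 5, 8, 3, 2, 2)

# Are the counts consistent with a single constant rate per century?
chisq.test(counts)   # chi-squared goodness-of-fit against equal expected counts

# Or compare the 12th-16th centuries to the rest as two Poisson rates
poisson.test(c(sum(counts[2:6]), sum(counts[c(1, 7:9)])), T = c(5, 4))
```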


Reference:

Besonen, M. R., R. S. Bradley, M. Mudelsee, M. B. Abbott, and P. Francus (2008), A 1,000-year, annually-resolved record of hurricane activity from Boston, Massachusetts, Geophys. Res. Lett., 35, L14705, doi:10.1029/2008GL033950. SI: ftp://ftp.agu.org/apend/gl/2008GL033950

CSIRO: A Limited Hangout??

CSIRO has done the right thing in respect to the drought data used in its recent report and an archive is now available. David Stockwell reports here.

Update: I’ve now done a quick look at their supposed data archive http://www.bom.gov.au/climate/droughtec/download.shtml and it is far from clear that this is anything like an adequate data archive. It may be more like the sort of limited hangout that we often see when climate scientists grudgingly release a little bit of data to comply with pressure, but without a commitment to an “open and transparent” process. I did not see any archive of the underlying data, merely the summaries. For example, the article estimates the percentage area affected by drought and gives the 5th-percentile series (which is fine as far as it goes and is part of a proper archive), but this is well short of an archive that enables one to replicate their results. If this is it, then it is the equivalent of archiving the MBH reconstruction without any of the underlying data, and we’ve seen that movie. Maybe David Stockwell will look into this and advise.

In order to build a true “consensus” to deal with important problems, it’s necessary for climate scientists to be thoroughly committed to an “open and transparent” process. This means more than IPCC authors taking in one another’s laundry. It means more than a bunch of IPCC scientists telling everyone else what to think – even if they’re right and perhaps especially if they’re right. It means that data and methods to support articles used for climate policy must be routinely available concurrent with the publication of the article. Not after the fact.

I think that maybe some progress is being made here, though it’s been slow. Whatever anyone may think of the role of blogs, they obviously are relevant in trying to get to an “open and transparent” process.

The funny thing is that I think that once authors get used to “open and transparent”, they’ll like it.

The same goes for the process of archiving source code, which CSIRO didn’t do. I can vouch for this on source code: it makes me feel comfortable knowing that the source code is archived and available. If someone finds a mistake, so be it: it’s out there and you deal with it. But you remove all temptation to be over-defensive, because you’ve got it off your chest. I sometimes archive code in blog posts, and I find this handy after the fact because it is easier to figure out what I did, and it’s easier for others to do the same thing. It’s something that I’m going to do even more consistently.

Ofcom: The IPCC Complaint

Ofcom’s disposition of the IPCC Complaint is here at page 43. There are many interesting aspects to this decision that are distinct from any of the others. Ofcom’s actual finding is extremely narrow. It rejected 2 of the 6 complaints. On 3 of the 6, it determined that the producers had provided notice to IPCC, but that the notice on Feb 27, 2007 did not leave IPCC with “reasonable time” to respond prior to the airing on March 8, 2007 (though Ofcom itself states that “three working days” is a “reasonable time” for the parties to file an appeal of the present decision). Ofcom also determined that the producers failed to give IPCC adequate notice that someone in the production would say that it was “politically driven”. Had the producers sent their email of Feb 27, 2007 on (say) Feb 20, 2007, including a mention in the email that one of the contributors stated that IPCC was “politically driven”, then the Swindle producers would appear to have been immune from the present findings. Little things do matter.

The two rejected claims are themselves rather interesting and make you scratch your head. As discussed below, Swindle contributors were said to have claimed that IPCC had predicted climate disaster and the northward migration of malaria as a result of global warming. IPCC denied ever making such claims and apparently felt that its reputation was sullied by being associated with them. These two matters were decided on other grounds, but many readers will be interested to read more about IPCC disassociating itself from claims that global warming would cause northward migration of malaria or predictions of climate disaster.

In addition, in its complaint, IPCC made grandiose claims about its “open and transparent process” and the role of review editors, describing the process as being in the public domain and by its nature designed to avoid “undue influence” by any reviewer. This will come as somewhat of a surprise to CA readers, who are familiar with the avoidance of IPCC procedures by Ammann and Briffa and the seemingly casual performance of review editor Mitchell, and who have been following the relentless stonewalling by IPCC and IPCC officials of requests for specific information pertaining to this allegedly “open and transparent” process.