Alberta #3

In February 2006, Luckman and Wilson archived their STD chronology for the Athabasca Glacier, Alberta site (STD fits each tree individually; RCS fits trees in groups). Rob Wilson wrote in, criticizing an earlier post for, among other things, not showing their STD version and for the way I implemented the RCS emulation of Esper et al [2002].
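The STD/RCS distinction lends itself to a short sketch. The helper below is hypothetical (Esper's actual implementation has never been published); it only illustrates the core RCS idea of fitting one regional curve of mean ring width by cambial age and indexing every ring against it, whereas STD would instead fit a separate growth curve to each tree.

```python
import numpy as np

def rcs_chronology(ring_widths, cambial_ages, calendar_years):
    """Toy RCS standardization: one regional curve for all trees.

    ring_widths: list of 1-D float arrays, one per tree
    cambial_ages: list of 1-D int arrays (ring age counted from pith)
    calendar_years: list of 1-D int arrays (dated year of each ring)
    """
    # One regional curve: mean ring width at each cambial age,
    # pooled over every tree at the site.
    max_age = max(int(a.max()) for a in cambial_ages)
    sums = np.zeros(max_age + 1)
    counts = np.zeros(max_age + 1)
    for rw, age in zip(ring_widths, cambial_ages):
        np.add.at(sums, age, rw)
        np.add.at(counts, age, 1)
    regional_curve = sums / np.maximum(counts, 1)

    # Index each ring against the regional curve at its cambial age,
    # then average the indices by calendar year to form the chronology.
    year_min = min(int(y.min()) for y in calendar_years)
    year_max = max(int(y.max()) for y in calendar_years)
    idx_sum = np.zeros(year_max - year_min + 1)
    idx_n = np.zeros(year_max - year_min + 1)
    for rw, age, yr in zip(ring_widths, cambial_ages, calendar_years):
        index = rw / regional_curve[age]
        np.add.at(idx_sum, yr - year_min, index)
        np.add.at(idx_n, yr - year_min, 1)
    return np.arange(year_min, year_max + 1), idx_sum / np.maximum(idx_n, 1)
```

Because a single curve is shared across trees, RCS can retain centennial-scale variance that STD's tree-by-tree fitting would remove along with the age trend, which is why the choice of method matters so much in these comparisons.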

Figure 1 below shows the "archived" Esper RCS ring width version ("archived" meaning the version that Science sent me in February 2006 after months of quasi-litigation) together with the LW05 STD ring width version (archived at WDCP in February 2006) and my emulation of the Esper RCS version using cana170w and cana171w with one regional curve. All curves have been smoothed and then the smoothed versions have been scaled – a common spaghetti graph methodology. There are some differences between the blue (Esper) and black (emulation) versions, but I don’t think that my emulation is "horrendous". (Esper et al 2002 does not contain any information on stratification of the site.) There are obviously significant differences between the Esper RCS version (blue) and the LW05 STD version (red).


Figure 1. Athabasca Glacier, Alberta (smoothed with 40-year gaussian filter, then scaled). Blue – Archived Esper RCS version; red – archived LW05 STD version; black – my emulation of Esper RCS version.
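For readers wanting to reproduce the smooth-then-scale presentation used in Figure 1, here is a minimal sketch. The sigma convention for the 40-year gaussian filter and the rescaling to a common mean and standard deviation are my assumptions; the post does not specify either.

```python
import numpy as np

def gaussian_smooth(x, wavelength=40.0):
    """Gaussian low-pass filter by direct convolution.

    Assumption: sigma is set so that ~6 sigma of kernel support spans
    one target wavelength; exact conventions vary between implementations.
    """
    sigma = wavelength / 6.0
    half = int(3 * sigma)
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()                      # unit-gain kernel
    return np.convolve(x, w, mode="same")   # note: edges are biased toward zero

def scale_to(x, target):
    """Rescale x to the mean and standard deviation of target,
    the usual way spaghetti-graph series are put on a common scale."""
    return (x - x.mean()) / x.std() * target.std() + target.mean()
```

With helpers like these, each chronology would be smoothed first and then rescaled against a common reference before plotting.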

Next is a figure showing a more detailed comparison of my emulation of the Esper RCS series, which was done without dividing trees into "linear" and "nonlinear". Rob has criticized my emulation as "horrendous", but why is it? I’ve attempted for months to obtain information on how Esper distinguished between linear and nonlinear trees; this should have been in the original SI. But even with an operational definition, it is surely relevant to examine the effect of distinguishing between "linear" and "nonlinear" trees. The principal effect, regardless of the motivation, is to enhance 20th century levels. Doubtless there is a reason for this, but it’s not been provided so far.


Figure 2. Athabasca Glacier, Alberta RCS RW Versions. Top – Esper version contained in Science email; bottom – emulation from cana170w, cana171w using bulk RCS fit. Correlation 0.80.

Rob said that "Significantly more low-frequency information was captured using the MXD data (See Appendix) but no significant gain was observed by using the RCS method on the RW data (analysis not shown)". Figure 3 shows a method that I like for checking how variance is distributed between low and high frequencies: the distribution of wavelet variance by scale (la8 wavelet used here), in this case for two different "official" chronology versions, Esper RCS and LW05 STD (so that any inadequacies of my emulations are not material). The graphs show very clearly that the high-frequency share of variance is much larger in the LW05 STD version than in the Esper RCS version. This is an objective method for quantifying statements about "gain in low-frequency information".


Figure 3. Wavelet Variance by Scale for RW Series, Athabasca Glacier, Alberta. Top – Archived LW05 STD chronology. Bottom – Esper RCS chronology.
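A minimal version of the wavelet-variance-by-scale calculation behind plots like Figure 3 can be sketched as follows. The post uses the la8 wavelet (via the MODWT, as in R's waveslim); to keep the sketch dependency-free I substitute the Haar filter, so the absolute numbers will differ from a la8 analysis, but the qualitative fine-versus-coarse comparison comes out the same.

```python
import numpy as np

def modwt_variance_share(x, level=5):
    """Share of variance carried by each wavelet scale, fine to coarse.

    Hand-rolled MODWT with the Haar filter (an assumption for brevity;
    the post uses la8). Circular boundary handling via np.roll.
    """
    x = np.asarray(x, float)
    approx = x - x.mean()
    g0, g1 = 0.5, 0.5      # MODWT Haar scaling (smoothing) filter
    h0, h1 = 0.5, -0.5     # MODWT Haar wavelet (detail) filter
    energies = []
    for j in range(level):
        step = 2 ** j                       # "a trous" upsampling of the filter
        shifted = np.roll(approx, -step)    # circular boundary
        detail = h0 * approx + h1 * shifted
        approx = g0 * approx + g1 * shifted
        energies.append(np.mean(detail ** 2))
    total = sum(energies)
    return [e / total for e in energies]
```

A noisy chronology puts most of its variance share at the finest scales, while a series dominated by low-frequency variability concentrates it at the coarse scales; plotting these shares as a barplot gives exactly the kind of comparison shown here.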

The only point here where I’m being critical of LW (as opposed to Esper et al) is their conclusion about the "gain" in low-frequency information; I don’t see that LW05 has provided any support for this claim. I haven’t seen wavelet variance used by dendro people to quantify such statements, and I think that it might be a pretty good way. I find these types of barplots far more informative than Fourier spectra on noisy series (though they reconcile to Fourier spectra).

The information shown here pertains to ring width site chronologies. The L97 and LW05 temperature reconstructions are primarily based on MXD chronologies; the ring width chronology is negatively weighted in the L97 temperature reconstruction.

Two Editorials

Some people, including some who are not particularly sympathetic to the views expressed here, suggest that the way I do things is ineffective and have a variety of suggestions on how I could get my views across better. Mostly they involve less blogging and more journal submissions. Maybe they’re right. However, I noticed this weekend that an op-ed that I wrote last May for the National Post in Toronto has been cited in two journal editorials of diverse origin – the Journal of Cave and Karst Studies and the Journal of the Royal Statistical Society. Continue reading

Nature and Britannica: Round 2

Nature has responded as follows to Encyclopedia Britannica’s accusation of "sloppiness, indifference to basic scholarly standards, and flagrant errors":

In our issue of 15 December 2005 we published a news article that compared the Internet offerings of Encyclopaedia Britannica and Wikipedia on scientific topics (“Internet encyclopaedias go head to head”, Nature 438 (7070) p900-901; http://dx.doi.org/10.1038/438900a). Encyclopaedia Britannica has now posted a lengthy response to this article on its website, accusing Nature of misrepresentation, sloppiness and indifference to scholarly standards, and calling on us to retract our article. We reject those accusations, and are confident our comparison was fair.

Continue reading

Alberta #2

There are 3 different versions of the Alberta site that have been applied in multiproxy reconstructions: 1) Luckman et al [1997], used in Jones et al [1998], Crowley and Lowery [2000] and Briffa [2000]; 2) Esper et al [2002]; 3) the version used in Osborn and Briffa [2006], presumably from Luckman and Wilson [2005] and presumably the one used in D’Arrigo et al [2006] (although details of the latter are lacking). Each version raises replication issues, with many possible permutations of data combinations and methodological variations.

A theme that’s emerging for me here is: on a macro-scale, the Hockey Team takes great comfort in what they regard as "similarity" in the hockey sticks in the spaghetti graphs. However, at the level of individual sites, it’s remarkable how little similarity there is between versions. I’ve commented on this for Polar Urals. I’ve noticed something similar for Greenland – remind me to post this up. Today I’ll nibble at the Alberta site, showing the 3 versions used in multiproxy studies and a trial emulation of the Esper et al. version. I’ll get to trial emulations of the Luckman versions on another occasion. Continue reading

The Alberta Site in Esper 2002

Here is some more analysis based on the Esper measurement data sent to me by Science on March 16, 2005 — this time on the “Athabasca” site, which refers to a location in the Rocky Mountains, Alberta near the Athabasca Glacier. This note is rather a status report, so that I keep track of what I know right now, as I’ll have to return to the matter.

There are a few interesting comments about medieval warmth which I’ve excerpted here (and which contrast somewhat with the site chronologies). There are also lots of dry details here, but I show one interesting graph at the end. I’ll provide further analysis, with somewhat more interesting graphs, in the next day or two. Continue reading

NCAR Competition Announcement

Turning up on Google today was an announcement on March 21, 2006 by the National Science Foundation giving Information on the Competition for the Management of the National Center for Atmospheric Research. Interested parties have to submit Capability Statements by May 1, 2006.

March 21, 2006

The Division of Atmospheric Sciences (ATM) of the Directorate for Geosciences (GEO) of the National Science Foundation (NSF) is preparing a competition that is expected to lead to the award of a single cooperative agreement for the future management and operation of the National Center for Atmospheric Research (NCAR). In accordance with the National Science Board’s resolution 97-224, NSF competes expiring awards unless it is judged to be in the interest of U.S. science and engineering not to do so. Sources that meet the eligibility requirements below are invited to submit Capability Statements in accordance with the stated criteria no later than May 1, 2006.

Important Links

* Dear Colleague Letter About the Competition
* Frequently Asked Questions (Coming Soon)

The Dear Colleague letter starts off:

Competition for the Management and Operation of the National Center for Atmospheric Research (NCAR)

Dear Colleague,

The Division of Atmospheric Sciences (ATM) of the Directorate for Geosciences (GEO) of the National Science Foundation (NSF) is preparing a competition that is expected to lead to the award of a single cooperative agreement for the future management and operation of the National Center for Atmospheric Research (NCAR). In accordance with the National Science Board’s resolution 97-224, NSF competes expiring awards unless it is judged to be in the interest of U.S. science and engineering not to do so. Sources that meet the eligibility requirements below are invited to submit Capability Statements in accordance with the stated criteria no later than May 1, 2006. ….

So any of you interested in competing for the right to manage NCAR, you’d better get right at it.

Inhofe, UCAR and NCAR

Senator Inhofe has sent some questions to UCAR, which have riled Climate Watch and others. Climate Watch headlined: Senator Inhofe Launches Inquisition Probing Climate Research Organization. Googling will turn up a few references. I’m not doing a detailed note on this, but am giving a few takes on it, since we’ve talked here about UCAR from time to time. Continue reading

Nature, Wikipedia and “The High Summer of Junk Science”

Nature recently carried out an experiment on its own initiative supposedly comparing the accuracy of Wikipedia and Encyclopedia Britannica, reported here in the Register. The study concluded that the Encyclopedia Britannica had quite a few errors, nearly as many as Wikipedia. Here’s what’s reported:

Nature magazine has some tough questions to answer after it let its Wikipedia fetish get the better of its responsibilities to reporting science. The Encyclopedia Britannica has published a devastating response to Nature’s December comparison of Wikipedia and Britannica, and accuses the journal of misrepresenting its own evidence.

Where the evidence didn’t fit, says Britannica, Nature’s news team just made it up. Britannica has called on the journal to repudiate the report, which was put together by its news team.

Independent experts were sent 50 unattributed articles from both Wikipedia and Britannica, and the journal claimed that Britannica turned up 123 "errors" to Wikipedia’s 162.

But Nature sent only misleading fragments of some Britannica articles to the reviewers, sent extracts of the children’s version and Britannica’s "book of the year" to others, and in one case, simply stitched together bits from different articles and inserted its own material, passing it off as a single Britannica entry.

Nice "Mash-Up" – but bad science.

"Almost everything about the journal’s investigation, from the criteria for identifying inaccuracies to the discrepancy between the article text and its headline, was wrong and misleading," says Britannica.

"Dozens of inaccuracies attributed to the Britannica were not inaccuracies at all, and a number of the articles Nature examined were not even in the Encyclopedia Britannica. The study was so poorly carried out and its findings so error-laden that it was completely without merit."

In one case, for example, Nature’s peer reviewer was sent only the 350-word introduction to a 6,000-word Britannica article on lipids – which was criticized for containing omissions.

A pattern also emerges which raises questions about the choice of the domain experts picked by Nature’s journalists.

Several got their facts wrong, and in many other cases, simply offered differences of opinion.

"Dozens of the so-called inaccuracies they attributed to us were nothing of the kind; they were the result of reviewers expressing opinions that differed from ours about what should be included in an encyclopedia article. In these cases Britannica’s coverage was actually sound."

The Encyclopedia Britannica stated [my emphasis]:

We discovered in Nature’s work a pattern of sloppiness, indifference to basic scholarly standards, and flagrant errors so numerous they completely invalidated the results. We contacted Nature, asking for the original data, calling their attention to several of their errors, and offering to meet with them to review our findings in full, but they declined.

Update (SM): Nature has responded, refusing to back down from its original article.


More Correspondence with Science

Update: Next instalment here

On March 16, Science sent me 10 (out of 14) measurement data sets used by Esper, along with one gridcell temperature series used by Osborn-Briffa; the request also prompted Briffa to archive annual data versions at WDCP in addition to the smoothed versions. The new information has been extremely helpful to me.

However, the information remains incomplete. I can’t imagine why Esper would send only 10 of 14 measurement data sets; you’d think that he’d send them all when he picked up the file. Osborn-Briffa used different versions in several cases, and these remain outstanding. Anyway, I’ve re-iterated my request. There are some puzzles in the data provided.

Continue reading

"But They are Very, Very Wrong"

A parody posted up by Spence_UK.