Juckes has much to say about several MM articles, none of it favorable and little of it accurate. Juckes, like the rest of the Team, seldom quotes our articles – instead, he typically paraphrases what we said, often creating a straw man, which he prefers to deal with. It’s a wearisome task disentangling the many mischaracterizations of our work.
I’ll discuss a few points. Although he spends a great deal of time discussing standardization prior to PC calculations, astonishingly he does not discuss or even cite the Huybers Comment, the Reply to Huybers, the NAS panel report, or the Wegman report.
In our published articles, we dwelled almost entirely on MBH98. This is not to say that there are not important things to say about MBH99, but we didn’t get to that in our published articles. We’ve had discussion on the blog of some puzzling features of MBH99, including the impenetrable calculation of confidence intervals in MBH99 – an important outstanding issue which Juckes (like Wahl and Ammann before him) simply avoided.
Our principal published reference to MBH99 was in MM05 (EE), where we pointed out that bristlecones were prominent in the MBH99 PC1 regardless of methodological artifice simply because of the dominance of bristlecones in long American tree ring chronologies. We observed this as forcefully as we could as follows:
Although considerable publicity has attached to our demonstration that the PC methods used in MBH98 nearly always produce hockey sticks, we are equally concerned about the validity of series so selected for over-weighting as temperature proxies. While our attention was drawn to bristlecone pines (and to Gaspé cedars) by methodological artifices in MBH98, ultimately, the more important issue is the validity of the proxies themselves. This applies particularly for the 1000–1399 extension of MBH98 contained in Mann et al. In this case, because of the reduction in the number of sites, the majority of sites in the AD1000 network end up being bristlecone pine sites, which dominate the PC1 in Mann et al. simply because of their longevity, not through a mathematical artifice (as in MBH98).
Given the pivotal dependence of MBH98 results on bristlecone pines and Gaspé cedars, one would have thought that there would be copious literature proving the validity of these indicators as temperature proxies. Instead the specialist literature only raises questions about each indicator which need to be resolved prior to using them as temperature proxies at all, let alone considering them as uniquely accurate stenographs of the world’s temperature history.
In contrast to MBH99 where the bristlecones dominated merely through longevity, the dominance of bristlecones in the MBH98 AD1400 PC1 was achieved through a mathematical artifice – the Mannian “principal components” method. We observed that the characteristic hockey stick shape of Graybill’s bristlecone and foxtail chronologies falls to the PC4 using covariance PCs and to the PC2 using correlation PCs – but in neither case was the characteristic (bristlecone) hockey stick the “dominant component of variance” as Mann had claimed.
In MM05 (EE), we also observed that the MBH reconstruction using correlation PCs for the North American network was about halfway between the results using covariance PCs and Mannian PCs. (In passing, despite this explicit reporting of results using correlation PCs, Juckes, following Wahl and Ammann, accused us of “omitting” the consideration of what happens when chronologies are divided by their standard deviation. This accusation is false, since using correlation PCs is precisely equivalent to computing PCs on chronologies divided by their standard deviation. I’ll return to this point in more detail on another occasion.)
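The equivalence is elementary linear algebra, and a few lines of numpy make it concrete. This is only an illustrative sketch on synthetic data (the ten random “chronologies” and their variances are my invention, not anything from MM05 or MBH98): the correlation matrix of raw series is identical to the covariance matrix of the same series divided by their standard deviations, so a PCA on one is a PCA on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
# ten hypothetical "chronologies" with very different variances
X = rng.normal(size=(400, 10)) * rng.uniform(0.5, 5.0, size=10)

std = X.std(axis=0, ddof=1)
corr_mat = np.corrcoef(X, rowvar=False)       # input matrix for "correlation PCs"
cov_scaled = np.cov(X / std, rowvar=False)    # covariance of chronologies divided by their sd

print(np.allclose(corr_mat, cov_scaled))      # True: identical matrices, hence identical PCs
```

Since the two decomposition inputs are the same matrix, accusing someone of “omitting” the divided-by-standard-deviation case while they report correlation-PC results is a distinction without a difference.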
For now let’s examine how Juckes goes about constructing one of his straw men. He states:
MM2005 argue that the way in which the stage (1) principal component analysis is carried out in MBH generates an artificial bias towards a high “hockey-stick index” and that the statistical significance of the MBH results may be lower than originally estimated.
As always, you have to watch the pea under the thimble with the Team. Yes, we said that the Mannian PC method was “biased”. But we did not say that MBH statistical significance was “lower than originally estimated”. We said that MBH results had no statistical significance. We stated categorically that the AD1400 step of the MBH98 reconstruction failed one of the statistical tests (verification r2) that Mann said had been used. Of course, Mann has subsequently said that the test was not used (that would be a “foolish and incorrect thing” to do), but readers of this blog know that he did calculate the statistic, obtained adverse results which were not reported, and that he’s dodged a bullet in not being held accountable.
The point here is that it’s not a matter of being “lower”; it’s a matter of there being no statistical significance. We stated that the seeming “significance” of the MBH RE statistic was due to inappropriate benchmarking of statistical significance in the context of biased methodology. If I were re-stating this argument today, I would place more attention on high RE statistics from the classical spurious regressions – Yule’s type case of alcoholism versus Church of England marriages has a very high RE statistic and a very high correlation statistic – one that is undoubtedly 99.98% significant in Juckes’ terminology.
But look carefully at what Juckes left out of his summary of MM2005 – bristlecones. The third leg of our abstract was that the biased methodology interacted with proxies whose validity had been questioned by the relevant specialists. Juckes leaves this leg of the argument out and you’ll see how this contributes to the sleight-of-hand below. Juckes mentions bristlecones later (see below) but avoids any sort of systematic discussion of an issue that is surely central in any supposed “evaluation” of millennial reconstructions.
As to our argument about the “bias” in MBH methodology, this was upheld by both the NAS panel and the Wegman panel. We used the term “bias” carefully – we did not say that the MBH method was the “only” way to get a HS or that it was the only potential form of “bias” (low-tech cherry-picking and data snooping is another source of bias). However, for a study that had been so widely cited for obtaining a HS-shaped result, it was remarkable that it used a methodology that was so seriously “biased”.
We connected the bias to statistical significance through simulations on red noise, originally in MM05 (GRL) and re-stated in Reply to Huybers, in which we argued that MBH failed to account for the bias of their methodology in calculating the statistical significance of their RE statistic. The NAS panel also endorsed our concerns about statistical significance of MBH, stating that our concerns about statistical significance were one of the factors in their withdrawing claims to statistical skill prior to 1600.
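The red-noise experiment can be sketched compactly. This is a simplified illustration in the spirit of the MM05 (GRL) simulations, not their actual code: the AR(1) coefficient, network size, series length, and 79-“year” calibration window are all illustrative choices of mine, and the “hockey-stick index” below is a simple distance between calibration-period mean and full-period mean in standard-deviation units. The point it demonstrates is the one at issue: centering on the calibration period rather than the full period systematically inflates the hockey-stick shape of the PC1, even when the inputs are trendless red noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_series, n_trials = 400, 50, 30
calib = slice(n_years - 79, n_years)   # last 79 "years" stand in for a calibration period

def ar1_network(phi, n, m):
    # m independent AR(1) ("red noise") series of length n -- trendless by construction
    eps = rng.normal(size=(n, m))
    x = np.zeros((n, m))
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

def pc1(data, short_center):
    # center on the calibration-period mean (short-centering) or the full-period mean
    mean = data[calib].mean(axis=0) if short_center else data.mean(axis=0)
    u, s, _ = np.linalg.svd(data - mean, full_matrices=False)
    return u[:, 0] * s[0]

def hs_index(series):
    # how far the calibration-period mean sits from the full-period mean, in sd units
    return abs(series[calib].mean() - series.mean()) / series.std()

short, full = np.empty(n_trials), np.empty(n_trials)
for k in range(n_trials):
    X = ar1_network(0.9, n_years, n_series)
    short[k] = hs_index(pc1(X, True))
    full[k] = hs_index(pc1(X, False))

print(short.mean(), full.mean())   # short-centered PC1s are markedly more hockey-stick shaped
```

A significance benchmark for the RE statistic derived from simulations that ignore this inflation will be far too lenient, which is the sense in which we said the original MBH benchmarking was inappropriate.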
However, Juckes completely fails to cite either the NAS Panel or the Wegman report on the issues of bias or statistical significance. Instead, he cites a study by Mann’s sometime coauthors, Wahl and Ammann as follows:
This [sic] issue has been investigated at length by Wahl and Ammann (2006) in the context of the MBH1998 reconstruction (back to AD 1400).
Now this assertion is also false. Wahl and Ammann 2006 is a long article, but it does not contain an “investigation at length” of the issues as presented here: our criticisms of bias in the PC methodology or inappropriate benchmarking of statistical significance. Wahl and Ammann (Climatic Change 2006) cited Ammann and Wahl (GRL, under review) for their results on bias and statistical significance. However, Ammann and Wahl (GRL, under review) was rejected a few days after their Climatic Change article was accepted (although it is still not in print).
If you download Wahl and Ammann 2006 and search on “review”, you will get 12 occurrences of the term “review”. If you then parse through each of the citations, you will readily observe that Wahl and Ammann (Clim Chg 2006) relies on the rejected Ammann and Wahl (GRL 2006) for its claims about bias and statistical significance. Wahl and Ammann 2006 merely re-states (and relies on) results from the rejected paper, but is not an “investigation at length” of these particular issues as claimed by Juckes.
Perhaps Juckes is merely a victim of previous academic check-kiting by Wahl and Ammann. The check-kiting was first raised as an issue by a peer reviewer in the first review of the Climatic Change article, since Wahl and Ammann’s submission to GRL had already been rejected in May 2005 (it was later re-submitted when a new GRL editor was appointed). Wahl and Ammann withheld the news of this rejection from the editor of Climatic Change and even cited the already rejected article in response to the reviewer. The reviewer advised the editor of Climatic Change that Wahl and Ammann should not use results from the rejected article. Instead of the Climatic Change editor following this logical advice, the reviewer was terminated.
In any event, Wahl and Ammann knew of the second rejection of their re-submitted GRL article in early March. Since the Climatic Change article had not been printed (it still hasn’t appeared in print), in my opinion, Wahl and Ammann had an obligation to amend their Climatic Change article to remove reliance on the rejected article. Of course, they did no such thing; the version presently online is completely unchanged and still cites the GRL article as “under review”. Juckes is now passing the bad check on to more innocent parties.
Juckes goes on to say:
The problem identified by MM2005 relates to the “standardisation” of the proxy time series prior to the EOF decomposition. Figure 3 assess the impact of different standardisations on the reconstruction back to AD 1000, using only the proxy data used in MBH1999 for their reconstruction from AD 1000….This figure shows that there is considerable sensitivity to the adjustment of the first proxy PC, but little sensitivity either to the way in which the principal components are calculated or to the omission of data filled by MBH1999. Thus, the concerns about the latter two points raised by MM2005 do not appear to be relevant here, though the sensitivity to adjustments of principal component may be a cause for concern.
His Figure 3 shows “Reconstructions back to 1000, calibrated on 1856 to 1980 northern hemisphere temperature, using the MBH1999 proxy data collection” with various permutations and combinations.
One straw man here is Juckes’ seeming concern about the impact of closing extrapolations as supposedly expressed in MM2005 [the “omission of data filled by MBH99”]. The issue of closing extrapolations is simply not an issue in MM2005; I’m 99.9% sure that it’s not even mentioned in either article. Yet no fewer than half of Juckes’ archived PCs pertain to this particular case, which was not at issue.
What’s the background to this? In MM03, we observed that many MBH98 series were extrapolated (“filled”) from end dates in the 1970s to 1980. Because the MBH98 proxy data as originally archived (a version now deleted) included erroneous collation of most of their PC series – which resulted in numerous PC series having identical values – this left a surprising number of 1980 values in the original data set either being erroneous or filled. It now looks like the data set as originally archived was a version developed by Rutherford for Rutherford et al 2005 (which contains a characteristic and amusing collation error as well) and that a different version was used in MBH98 (which materialized only in November 2003 after the publicity from MM03).
In his Nature correspondence and in a rejected submission to Climatic Change, Mann argued that the closing fills made little difference and it was not an issue that we pursued in the MM2005 articles. However, MM05 (EE) contained a lengthy discussion of the undisclosed and unique extrapolation of the Gaspé series, an extrapolation which appears to have had no function other than to get a HS-shaped series into the AD1400 network. (BTW see the discussion elsewhere on Jacoby’s refusal to release an update of the Gaspé cedar series which does not have a HS-shape.)
It was hard to see why Juckes spent so much time on the issue of closing fills, since it simply wasn’t an issue in play. However, Hockey Team curiosities are seldom without a purpose. In the AD1400 network, 14 out of 70 series contain closing fills. These are mostly not bristlecones. By reducing the network to 56 series, the bristlecone impact in the AD1400 network increases and this seemingly unrelated criterion becomes a handy way for the Team to “data snoop” – in this case, use a seemingly unrelated criterion as a way of enhancing bristlecone weighting.
It is also curious to observe such seeming concern over complying with MM03 issues. MM03 also sharply criticized the use of obsolete (and grey) versions of proxies. We illustrated the problem with the TTHH series. An updated version shows a sharp decline in the 1980s – virtually a type case of the divergence problem, as discussed by D’Arrigo in a subsequent paper.
However, there’s a much more important bait-and-switch, where you really have to watch the pea under the thimble. Let’s go back to the MM05 (EE) quote at the beginning. We said that different forms of standardization were not an issue in MBH99 – in MBH99, the issue boiled down to the validity of bristlecone ring width chronologies as a unique measurement system for world temperature, plain and simple. We explicitly said that mathematical artifice was at issue in MBH98, but was not a key issue in MBH99. (There are some interesting issues in connection with the “fixing” of the PC1 in MBH99 which bear careful consideration and upon which Jean S and I have been recently corresponding.)
So in connection with MBH99, the “problem” was not “standardization” methods, but the validity of bristlecones. Juckes once again has mis-characterized very explicit statements and then proceeded to a lengthy exposition of a straw man. I’ve tried to point out as forcefully as I can that there is an interaction between the flawed methodology and the flawed proxies (bristlecones) – we determined the connection by seeing what the flawed method did. But a responsible scientist, on becoming aware of the issue with the flawed proxies, cannot simply ignore the problem, as the Team seems hell-bent on doing.
I mentioned above that Juckes considered bristlecones in passing. Here’s what he says:
Briffa and Osborn (1999) and MM2005c suggest that rising CO2 levels may have contributed significantly to the 19th and 20th century increase in growth rate in some trees, particularly the bristlecone pines, but though CO2 fertilisation has been measured in saplings and strip-bark orange trees (which were well watered and fertilised) (Graybill and Idso, 1993, and references therein) efforts to reproduce the effect in controlled experiments with mature forest trees in natural conditions (Körner et al., 2005) have not produced positive results. MM2005c and Wahl and Ammann (2006) both find that excluding the north American bristlecone pine data from the proxy data base removes the skill from the 15th century reconstructions.
As usual, Juckes has not directly quoted us and has mischaracterized what we said. I agree that an MBH98-style reconstruction without bristlecones has no skill, but we deny that it is the exclusion of bristlecones that “removes” the skill. We deny that reconstructions with bristlecones have “skill”. Yes, we agree that reconstructions with bristlecones have a seemingly high RE statistic, but we argued that the seeming significance of the RE statistic was an illusion (“spurious significance”) and that the catastrophic verification r2 failure showed a lack of statistical skill even with bristlecones.
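How can a reconstruction fail verification r2 catastrophically while posting a respectable RE? A toy sketch makes the distinction clear. Nothing here is MBH data; the numbers are invented for illustration. The “reconstruction” below gets the verification-period level right but has interannual wiggles that are pure noise: RE, which rewards matching the change in mean relative to the calibration baseline, comes out high, while r2, which measures interannual covariation, is near zero.

```python
import numpy as np

rng = np.random.default_rng(3)
calib_mean = 0.0                            # calibration-period mean of the target
obs = 1.0 + rng.normal(0, 0.3, 60)          # verification temps, offset from calibration mean
est = 1.0 + rng.normal(0, 0.3, 60)          # "reconstruction": right level, unrelated wiggles

re = 1 - np.sum((obs - est) ** 2) / np.sum((obs - calib_mean) ** 2)
r2 = np.corrcoef(obs, est)[0, 1] ** 2
print(re, r2)                               # high RE, near-zero verification r2
```

This is why reporting RE alone is not enough: a series can track a level shift for entirely spurious reasons while carrying no year-to-year temperature signal, and only the verification r2 (or a properly benchmarked RE) exposes it.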
As to CO2 fertilization, we were very careful not to take a position on whether CO2 fertilization was or was not responsible for the 20th century growth pulse that Hughes and Funkhouser 2003 characterized as a “mystery”. If anything, our position would be that of Hughes and Funkhouser – the growth pulse is a “mystery”. We argued that, if bristlecone growth was to be used as a unique measure of world temperature, it was, to say the least, highly dissatisfying that MBH co-author Hughes, when talking to specialists in specialist journals, should describe 20th century bristlecone growth as a “mystery”, while at the same time, MBH98-99 results depended on bristlecones.
We noted that Graybill and Idso had posited CO2 fertilization as an explanation but did not rely on it. We also noted that other authors had identified potential nitrate fertilization; that 19th century sheep grazing had caused a pulse in growth in some American Southwest tree species; and that bristlecones competed with big sagebrush, suggesting at least the possibility of an important interaction between precipitation and temperature – an interaction strongly emphasized by Graumlich in connection with foxtails, although Graumlich’s arguments were not explicitly canvassed in MM05 (EE).
We’ve discussed bristlecone growth on the blog on many occasions and new information and thoughts continue to percolate. Recently Louis Scideri, a prominent author in the field, has visited with more interesting comments. As of today, I’m inclined to think that each of temperature, precipitation and fertilization has an impact, with important interaction effects (most marked on strip-bark, but probably present in all forms).
The time is long overdue for a fresh analysis of bristlecones and foxtails. Hughes apparently made a new collection of bristlecones at Sheep Mountain in 2002. It is very unfortunate that these results have neither been published nor archived. However, based on my knowledge of mineral exploration stocks, I feel confident that, had bristlecone ring widths at Sheep Mountain been off the charts due to warmth in the 1990s and 2000s, we’d have heard about it. This is a dog that is not barking.
As to Juckes, it amazes me that someone can purport to do an “evaluation” of millennial reconstructions, discuss our work at length, and then totally fail to analyse the impact of the presence/absence of bristlecones on the MBH reconstruction.
But hey, it’s the Team.