I’m going to jump ahead a little and report on Hughes, who spoke on Friday morning. My notes on Hughes are decent by my standards. I’ll come back and describe our presentation next and then get to Mann’s. Neither Hughes nor Mann attended Thursday’s session or reception. I missed meeting Mann on Friday, as I spent a few moments after the Friday morning session chatting with people, and when I looked up, Mann was gone. I introduced myself to Hughes after the Friday morning session. He was chatting with the BBC reporters about Liverpool soccer and a miraculous comeback by Liverpool in Istanbul last year (which I’d vaguely heard about). We chit-chatted pleasantly about soccer and administration and such.
In his morning presentation, Hughes said that their mission was to discover something about the climate system. He described the calculation of a global mean as a "somewhat tedious afterthought". He made a very interesting distinction between the Schweingruber and Fritts approaches to disentangling temperature signals from tree ring data.
His first slide asked how central reconstructions were to the debate. Hughes argued that understanding of climate depends on understanding mechanisms of the climate system and that reconstructions are useful if they help us understand the system. Most of the potential of reconstructions has yet to be realized. He mentioned an article by Crucifix et al. He showed a graph of forcings, noting that max and mean insolation have not had large changes in the last 2000 years, so we are looking for small changes. He noted that the instrumental record was incomplete, thus “natural archives” were necessary. (SM: I’m not sure that this metaphor of “natural archives” is the most apt possible metaphor for tree ring measurements.)
Hughes then asked why annual records were preferred. Hughes said that the reason for annual records was the need to maximize the number of degrees of freedom in the instrumental period (especially if seasonal or annual maps are made). He said that it doesn’t make sense to have Ouagadougou in 1837 when everything else is in 1836. (SM: a few editorial comments. The issue of degrees of freedom pointed out here by Hughes is completely ignored in Ammann and Wahl’s low-frequency “argument” against even consulting the r2 statistic. If you’re trying to do calibration-verification on the instrumental record with 50-year bins for low-frequency, then you only have about 2.5 samples and not enough degrees of freedom. Recall here that Hughes and Diaz 1994 was a very influential programmatic article, which argued for the primacy of annual records, an approach which is being eroded by the use of low-frequency proxies in, say, Moberg. I have some sympathy for Hughes’ point here, despite all the tree ring problems. As a second editorial point, it’s ironic that he should emphasize the need for synchronization given that Rutherford et al 2005, in which Hughes is a coauthor, contains an amusing collation error in which the instrumental data is incorrectly collated with the proxy data.)
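The arithmetic behind this degrees-of-freedom point can be made concrete. Here is a minimal sketch of my own (a hypothetical ~127-year instrumental overlap; not code from any of the studies discussed):

```python
# Illustrative sketch (my own, not from Hughes or Ammann and Wahl): how few
# effective samples survive when a verification statistic is computed on
# low-frequency (50-year binned) data rather than annual data.
years = list(range(1854, 1981))  # hypothetical ~127-year instrumental overlap
bin_width = 50

n_annual = len(years)            # annual samples available for verification
n_binned = n_annual / bin_width  # effective low-frequency samples

print(n_annual)  # 127 annual values
print(n_binned)  # only ~2.5 fifty-year bins: far too few degrees of freedom
```

With only two or three independent low-frequency values, any verification statistic computed on the bins is essentially meaningless, which is the substance of the objection above.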
Hughes argued for the primacy of map-making in climate studies, because spatiotemporal maps make more rigorous checking possible: the question of whether a result is physically reasonable can then be asked. One of the panel asked him how good the historical records were. His answer seemed to be couched in terms of pioneer accounts of the American West; he said that Chynoweth had discussed it; he also cited Fritts et al, J Interdisc History, 1980 and mentioned travel accounts in the American desert in the 19th century.
Hughes said that the calculation of a global or hemispheric mean was a “somewhat tedious afterthought” to these spatiotemporal maps. He cited Groveman 1979 and Bradley and Jones 1993 as originating the practice. He showed the Wiki spaghetti graph. He said that there was little overlap between the Briffa MXD series and MBH. He said that Oerlemans, Moberg and Pollack were not dominated by tree rings. (SM editorial comment: we had previously brought the truncation of the Briffa MXD series to the attention of the panel, but no one asked Hughes about this truncation when he brought up this series. Oerlemans and Pollack do not extend back to the MWP, nor does Briffa MXD, so his only example of non-overlap is Moberg. No one on the panel pointed this out to him.)
Hughes also asked why the series were so scattered in the spaghetti graph. He said it was partly due to different seasons and partly due to different regions, but still said that it was “remarkable” that they were “so consistent”. He showed a diagram from Esper where the reconstructions were re-scaled to a consistent scale, which visually showed more consistency than the standard spaghetti graph.
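Re-scaling of this kind is easy to illustrate. The following is my own minimal sketch of standardizing series over a common period (toy data; not Esper’s actual procedure):

```python
# Hypothetical illustration of re-scaling reconstructions to a common scale,
# in the spirit of the Esper diagram Hughes showed (not his actual method).
def rescale(series, ref):
    """Standardize `series` to zero mean, unit variance over indices `ref`."""
    ref_vals = [series[i] for i in ref]
    mean = sum(ref_vals) / len(ref_vals)
    var = sum((v - mean) ** 2 for v in ref_vals) / len(ref_vals)
    return [(v - mean) / var ** 0.5 for v in series]

# Two toy "reconstructions": same shape, different units and offset
a = [0.1, 0.3, 0.2, 0.5, 0.4]
b = [10.2, 10.6, 10.4, 11.0, 10.8]

overlap = range(len(a))
print(rescale(a, overlap))
print(rescale(b, overlap))  # essentially identical after rescaling
```

After standardization the two toy series coincide (up to rounding), showing how much of the visual scatter in a spaghetti graph can be an artifact of scale and offset rather than of shape.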
He then turned to Bradley, Hughes and Diaz, Climate in Medieval Times, which appeared as a Science Perspective and which he said was not peer reviewed. He said that the study showed a cluster of maxima in the 20th century, but not in the MWP, creating a balance of evidence against the existence of the MWP. He then cited Osborn and Briffa 2006, who follow a somewhat similar strategy to Bradley, Hughes and Diaz, i.e. a somewhat non-parametric analysis of the proxy series. He said that the results of Osborn and Briffa were robust to the removal of 1-3 records and thus “not exclusively driven by any particular records”. (Another SM editorial note: this point about O&B robustness to the removal of 1-3 records seems to be its main sales pitch. The article seems to be specifically designed to rebut our claims about non-robustness of MBH98 and other studies to a few proxies (bristlecones in the case of MBH, but the stereotypes include Yamal, Jacoby’s Sol Dav, Thompson’s Dunde, Briffa’s old Polar Urals and a couple of others). Osborn and Briffa have collected nearly every proxy that I’ve ever questioned into one collection; there are even two separate bristlecone/foxtail series, one using Mann’s PC method. So O&B “robustness” is not that of a random sample.)
Hughes said in support of the amplitude of MBH that Oerlemans showed an increase of 0.6-0.7 deg C over the last 150 years, and that Luterbacher benchmarks closely to MBH98 mutatis mutandis.
Hughes said that uncertainty increased prior to 1500 for several reasons. First, records are fewer and “thinner”. (Wallace asked here about bristlecone dominance of the longer records. Hughes said that they dominated US Southwest long networks, but there were other long records elsewhere in the world comprised of shorter records, mentioning South America and swamp cypress, a non-temperature record.) The second uncertainty is the potential for non-climatic changes. The third comes from time-dependent biases, e.g. age detrending.
Hughes said that there were two main ways of temperature reconstruction using tree rings: 1) the “Schweingruber” scheme: choose sites to be temperature sensitive. (The Schweingruber sites are the ones in the Briffa network that decline in the second half of the 20th century.) 2) The “Fritts” scheme: many records are put into the network and are “empirically disentangled” using methods like PCA. (Another SM editorial note: I think that this is a pretty interesting distinction. Intuitively the “Schweingruber scheme” makes a lot more sense to me. The “Divergence Problem” comes from applying the Schweingruber scheme. The “Fritts scheme”, as applied by Mann, assumes that you can skip worrying about whether proxies are temperature sensitive. I wonder whether Fritts would endorse Mann’s method. But again, even if he does, Fritts may be high authority on matters dendro, but he is hardly a figure whose statistical methods are applied outside dendro. BTW Fritts always used RE after looking at r2.)
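For readers unfamiliar with the verification statistics mentioned in passing here, this is a minimal sketch of textbook RE and r2 calculations (my own illustration with toy data; not code from Fritts or from any study discussed above):

```python
# Verification statistics for a reconstruction against withheld instrumental
# data: reduction of error (RE) and squared Pearson correlation (r2).
# Textbook definitions; the data below are hypothetical toy values.

def re_statistic(obs, est, calib_mean):
    """RE = 1 - SSE(estimate) / SSE(calibration-period mean as predictor)."""
    sse_est = sum((o - e) ** 2 for o, e in zip(obs, est))
    sse_ref = sum((o - calib_mean) ** 2 for o in obs)
    return 1 - sse_est / sse_ref

def r_squared(obs, est):
    """Squared Pearson correlation between observations and estimates."""
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    vo = sum((o - mo) ** 2 for o in obs)
    ve = sum((e - me) ** 2 for e in est)
    return cov ** 2 / (vo * ve)

# Toy verification-period data
obs = [0.1, 0.2, 0.15, 0.3, 0.25]
est = [0.12, 0.18, 0.2, 0.28, 0.22]
print(re_statistic(obs, est, calib_mean=0.0))
print(r_squared(obs, est))
```

RE rewards getting the verification-period mean right relative to the calibration mean, while r2 is insensitive to mean offsets, which is why the two statistics can tell quite different stories about the same reconstruction.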
Hughes asked: what is needed? More records; better records; better understanding of the records. He said that we can only get back to AD 1500 with spatiotemporal reconstruction and that there can be no substantial progress until the data improves. (SM: this is interesting. Hughes said that spatiotemporal maps are needed to check physical reasonableness, but he says here that this can’t go back before about 1500.)
Hughes pointed out that many records end in the mid-20th century and need to be recollected; there was also a pressing need to improve coverage at low latitudes and in Africa. There was a need to improve physical and biological understanding, so that models did not rely on correlation alone.
He brought up two possible explanations for the Divergence Problem (I don’t recall how this topic came up). He first referred to the explanation of Vaganov et al. that the divergence was due to growth delayed by increasing snowpack, and secondly to Briffa’s speculation that trees may be damaged by ozone.
Biondi asked whether such de-coupling had occurred in the past. Hughes: that’s why you have to be wary of statistics; that’s why you need multiproxy studies. If we could understand it, we could understand it better. Hughes was asked whether the 20th century disconnect means that a similar disconnect might have occurred in the past. Hughes said “That’s the third explanation that I wasn’t going to discuss”. He was asked whether it precluded the use of tree ring proxies to assess whether past climates were warmer than the mid-20th century. My notes are a little sparse at this point, unfortunately.
Cuffey asked about the potential impact of CO2 fertilization: isn’t there a danger of calibrating the fertilization effect if it is covarying with temperature in the 20th century? Hughes said that it was “extremely difficult” to demonstrate the effect of CO2 on interannual tree ring widths. He said that Vaganov et al [Nature] reported interannual differences much greater than fertilization. “Of course we need to understand it better”. He said that there were massive similarities, which have to be driven by something in common. (SM: but the bristlecone growth pulse is essentially distinct; that’s why you can still recognize the bristlecones in a PC4.)
Dickinson asked about training over ENSO patterns, saying that he got uncomfortable extrapolating. Hughes: “So do I.” That’s why Hughes asks what the other proxies are doing. Is it physically reasonable?
Hughes was asked: can you train centennially? He answered yes, referring to a study that he did in 1984 on tree ring MXD in Edinburgh calibrated on very long instrumental records from 1810-1970. He said that this study went up only to 1970. (SM: it’s an odd example, a study that’s over 20 years old, using data that ends over 35 years ago.)
Turekian asked about CO2 and N2 but my notes here are illegible. Christy asked about the spaghetti diagram and again my notes here are illegible.
Cuffey again asked: do we know the temperature 1000 years ago to within half a degree? Hughes: no. Very few SH records. What about averaged over 100 years? Hughes thought for a while, saying that it was “extremely difficult”, and I think that he said that the temperatures then were similar to the mid-20th century. I don’t have a clear note on his final answer.