Re-Visiting the "Yamal Substitution"

Reader Tom P observed:

If Steve really wants to invalidate the Yamal chronology, he would have to find another set of cores that also gave good correlation with the instrument record, but indicated a previous climate comparable or warmer than that seen today.

As bender observed, Tom P’s question here is a bit of a slow pitch, since the Polar Urals update (unreported but well known) is precisely such a series, and since the “Yamal Substitution” (where Briffa 2000 quietly replaced the Polar Urals site, with its very pronounced MWP, with the HS-shaped Yamal) has been a longstanding issue at Climate Audit.

Yamal and Polar Urals are both nearby treeline sites in northwest Siberia (Yamal 67°30′N, 70°30′E; Polar Urals 66°N, 65°E). Both have cores crossdated for at least the past millennium. RCS chronologies have been calculated for both sites by Team authors. One chronology (Yamal) is the belle of the ball. Its dance card is completely full: Briffa 2000; Mann and Jones 2003 (the source for the former UNEP graph); Moberg et al 2005; D’Arrigo et al 2006; Osborn and Briffa 2006; Hegerl et al 2007; Briffa et al 2008; Kaufman et al 2009; it also appears in the IPCC AR4 proxy spaghetti graph.

The other chronology (Polar Urals as updated) is a wallflower. It had one dance all evening (Esper et al 2002), but Esper also boogied with not just one, but two strip bark foxtails from California. Polar Urals was not illustrated in the IPCC AR4 proxy spaghetti graph; indeed, it has never been displayed in any article in the PeerReviewedLitchurchur. The only place that this chronology has ever been placed on display is here at Climate Audit.

The question today is – why is Yamal the belle of the ball and Polar Urals a wallflower? Is it because of Yamal’s “inner beauty” (temperature correlation, replication, rolling variance, that sort of thing) or because of its more obvious physical attributes exemplified in the diagram below? Today, we’ll compare the “inner beauty” of both debutantes, starting first with the graphic below, showing their “superficial” attributes.


Figure 1. RCS chronologies (minus 1) for Yamal (Briffa) and Polar Urals (Esper). Note the graph in Rob Wilson’s recent comment compares the RCS chronology for Yamal with the STD chronology for Polar Urals – and does not directly compare the two data sets using a consistent standardization methodology.

The two series are highly correlated (r=0.53) and similar in appearance, apart from a “dilation” in the modern portion of the Yamal series relative to the Polar Urals series. (This dilation is not unlike that of the Graybill bristlecone chronologies relative to the Ababneh chronologies, where there was also high correlation combined with modern dilation.) Obviously, Yamal has a huge hockey stick (the largest stick in the IPCC AR4 Box 6.4 diagram), while the Polar Urals MWP exceeds modern values.

I’ve observed on a number of occasions that the difference between Polar Urals and Yamal is, by itself, material to most of the non-bristlecone reconstructions that supposedly “support” the Hockey Stick. For example, in June 2006, I showed the direct impact of a simple sensitivity study using Polar Urals versus Yamal – an issue also recently discussed here.


Figure 2. Impact on Briffa 2000 Reconstruction of using Polar Urals (red) rather than Yamal (black).

The disproportionate impact of Polar Urals versus Yamal motivated many of my Review Comments on AR4 (as reviewed in a recent post here), but these Review Comments were all shunted aside by Briffa, who was acting as IPCC section author.

In February 2006, there was a series of posts at CA comparing the two series, which I broke off to prepare for the NAS presentations in March 2006. At the time, both Osborn and Briffa 2006 and D’Arrigo et al 2006 had been recently published and the Yamal Substitution was very much on my mind. As we’ve recently learned from the Phil Trans B archive in Sept 2009, the CRU data set had abysmally low replication in 1990 for RCS standardization, a point previously unknown both to me and to other specialists (e.g. the authors of D’Arrigo et al 2006).

Today’s analysis of the Yamal Substitution more or less picks up from where we left off in Feb 2006. While there is no formal discussion of the Yamal Substitution in the peerreviewedliterature, I can think of three potential arguments that might have been adduced to justify the Yamal Substitution in terms of “inner beauty”: temperature correlation, replication and rolling variance (the latter an argument invoked by Rob Wilson in discussion here.)

Relationship to Local Temperature
Both Jeff Id and I (and others) have discussed on many occasions the notable bias introduced by selecting proxies ex post from a similarly constructed population (e.g. larch chronologies). However, for present purposes, even if this point is set aside and we temporarily stipulate the validity of such a procedure, the temperature relationships do not permit a preferential selection of Yamal over Polar Urals.

The Polar Urals chronology has a statistically significant relationship to annual temperature of the corresponding HadCRU/CRUTEM gridcell, while Yamal does not (Polar Urals t-statistic – 3.37; Yamal 0.92). For reference the correlation of the Polar Urals chronology to annual temperature is 0.31 (Yamal: 0.14). Both chronologies have statistically significant relationships to June-July temperature, but the t-statistic for Polar Urals is a bit higher (Polar Urals t-statistic – 5.90; Yamal 4.29; correlations are Polar Urals 0.50; Yamal 0.55). Any practising statistician would take the position that the t-statistic, which takes into consideration the number of measurements, is the relevant measure of statistical significance, a point known since the early 20th century.
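The point about t-statistics versus raw correlations can be made concrete: for a correlation r over n observations, the t-statistic is t = r·sqrt(n−2)/sqrt(1−r²), so the record length enters directly. A minimal sketch (the r and n values below are illustrative only, not the actual chronology/gridcell overlaps):

```python
import math

def t_stat(r, n):
    # t-statistic for testing a Pearson correlation r over n observations
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Illustrative: a slightly lower correlation over a longer overlap
# can still carry the larger t-statistic.
print(round(t_stat(0.50, 120), 2))  # -> 6.27
print(round(t_stat(0.55, 50), 2))   # -> 4.56
```

This is why the comparison above is made on t-statistics rather than bare correlations: the 0.55-versus-0.50 ordering can reverse once the number of observations is taken into account.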

Thus, both chronologies have a “statistically significant” correlation to summer temperature while being inconsistent in their medieval-modern relationship. This is a point that we’ve discussed from time to time – mainly to illustrate the difficulty of establishing confidence intervals when confronted with such a problem. I made a similar point in my online review of Juckes et al, contesting their interpretation of “99.9% significant”. In my AR4 Review Comments, I pointed out this ambiguity specifically in the context of these two series as follows:

There is an updated version of the Polar Urals series, used in Esper et al 2002, which has elevated MWP values and which has better correlations to gridcell temperature than the Yamal series. since very different results are obtained from the Yamal and Polar Urals Updated, again the relationship of the Yamal series to local temperature is “ambiguous” [ a term used in the caption to the figure being commented on]

In his capacity of IPCC section author, Briffa simply brushed aside this and related comments without providing any sort of plausible answer as discussed in a prior thread on Yamal in IPCC AR4, while conceding that both “the Polar Urals and Yamal series do exhibit a significant relationship with local summer temperature.”

In any event, the relationships of the chronologies to gridcell temperature do not provide any statistical or scientific basis for preferentially selecting the Yamal chronology over the Polar Urals chronology for inclusion in a multiproxy reconstruction.

Replication
The D’Arrigo et al authors believed that Briffa’s Yamal chronology was more “highly replicated” than the Polar Urals chronology, a belief that they held even though they did not actually obtain the Yamal data set from Briffa. At the time, CA reader Willis Eschenbach asked the obvious question: how did they know that this was the “optimal data-set” if they didn’t have the data?

First, if you couldn’t get the raw data … couldn’t that be construed as a clue as to whether you should include the processed results of that mystery data in a scientific paper? It makes the study unreplicable … Second, why was the Yamal data-set “optimal”? You mention it is for “clear statistical reasons” … but since as you say, you could not get the raw data, how on earth did you obtain the clear statistics?

Pretty reasonable questions. The Phil Trans B archive thoroughly refuted the belief that the Yamal data set was more highly replicated than the Polar Urals data set. The graphic below shows the core counts since AD800 for the three Briffa et al 2008 data sets (Tornetrask-Finland, Avam-Taimyr and Yamal) plus Polar Urals. Obviously, the replication of the Yamal data set is far less than the replication of the other two Briffa et al 2008 data sets (both well over 100 cores in 1990), less than Polar Urals since approximately AD1200, and far below Polar Urals in the modern period (an abysmally low 10 cores in 1990 versus 57 cores for Polar Urals). The modern Yamal replication is far below Briffa’s own stated protocols for RCS chronologies (see here, for example). This low replication was unknown even to specialists until a couple of weeks ago.


Figure 3. Core Counts for the three Briffa et al 2008 data sets plus Polar Urals.

Obviously, contrary to the previous beliefs of the D’Arrigo et al authors, Briffa’s Yamal data set is not more highly replicated than Polar Urals. Had the D’Arrigo authors obtained the Yamal measurement data during the preparation of their article, I have no doubt that they would have discovered the low Yamal replication in 2005, prior to publication of D’Arrigo et al 2006. However, they didn’t, and the low replication remained unknown until Sept 2009.

Running Variance
Rob Wilson defended the Yamal Substitution at CA in Feb 2006 on the grounds that the variance of the Polar Urals RCS chronology was “not stable through time” and that use of this version would therefore be “wrong”, whereas Yamal “at least had a roughly stable variance through time”.

Rob assessed the supposed variance instability using a 101-year windowed variance – a screening method likewise not mentioned in D’Arrigo et al 2006 nor, to my knowledge, elsewhere in the peerreviewedliterature. An obvious question is: how does the stability of the Polar Urals windowed variance compare to the windowed variance of other RCS series in use? And does Yamal’s windowed variance show an “inner beauty” that is lacking in Polar Urals? The graphic below shows running 101-year standard deviations for 13 Esper RCS chronologies (including Polar Urals) plus Briffa’s Yamal.
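For readers who want to reproduce the screening statistic, a minimal sketch of a running 101-year standard deviation (the synthetic input here is a stand-in, not the actual chronology):

```python
import numpy as np

def windowed_sd(chron, window=101):
    # One standard deviation per complete window of consecutive years.
    chron = np.asarray(chron, dtype=float)
    return np.array([chron[i:i + window].std(ddof=1)
                     for i in range(len(chron) - window + 1)])

# Stand-in for an RCS chronology indexed AD 800-1946 (values around 1.0):
rng = np.random.default_rng(1)
chron = 1.0 + 0.25 * rng.standard_normal(1147)

sd = windowed_sd(chron)
print(len(sd))  # 1147 - 101 + 1 = 1047 window positions
```

Each value of `sd` is the standard deviation of one 101-year slice, so plotting it against the window's center year gives the rolling-variance curves shown in the figure below.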


Figure 4. Running 101-year windowed standard deviation, 850–1946, for 13 Esper RCS chronologies plus Briffa’s Yamal. Polar Urals in red; Briffa Yamal in black.

From 1100AD on, the Polar Urals chronology doesn’t seem particularly objectionable relative to the other Esper RCS chronologies. Its variance is elevated in the 15th century, but another Esper chronology has similar variance in the 12th century. Its variance is definitely elevated relative to other chronologies in the 11th century, a period in which there are only a few comparanda, most of which are in less severe conditions: the two strip bark foxtails and Tornetrask (Taimyr is presumably equally severe.)

Also shown in the above graphic is the corresponding rolling standard deviation for Briffa’s Yamal series. Whatever reservations Wilson may have regarding the Polar Urals RCS chronology would seem to apply even more strongly to the Yamal chronology. Using Wilson’s rolling variance test, the variance of the Yamal chronology has been as high as or higher than Polar Urals since AD1100 and has increased sharply in the 20th century, when other chronologies have had stable variances. I am totally unable to discern any visual metric by which one could conclude that Yamal had a “roughly stable” variance in any sense that Polar Urals did not have as well. (Rob Wilson’s own comparison (see here) used a different version (his own) of the Polar Urals RCS, in which the rolling variance of the MWP is more elevated than in the version shown here using Esper’s RCS. However, Rob has also recently observed that he will rely on third-party RCS chronologies and, in this case, Esper’s Polar Urals RCS would obviously qualify on that count.)

With respect to “rolling variance”, if anything, Yamal seems to have less “inner beauty” than Polar Urals.

Update: Kenneth Fritsch in #207 below observes (see his code):

The results of these calculations indicate that the magnitude of the sd follows that of the mean and not that of the tree ring counts. Based on that explanatory evidence, I do not see where Rob Wilson’s sd windows would account for much inner beauty for the Yamal series or, likely, for any other RCS series (Polar Urals).

Yamal Already a “Standard”?
Another possible argument was raised by Ben Hale, supposedly drawing on realclimate: that Yamal was already “standard” prior to Briffa. This is totally untrue – Polar Urals was the type site for this region prior to Briffa 2000.

Briffa et al (Nature 1995), a paper discussed on many occasions here, used the Polar Urals site (Schweingruber dataset russ021) to argue that the 11th century was cold and, in particular, that 1032 was the coldest year of the millennium. A few years later, more material from Polar Urals was crossdated (Schweingruber dataset russ176) and, when this crossdated material is combined with the previous material, a combined RCS ring width chronology yields an entirely different picture – a warm MWP. Such calculations were done both by Esper (in connection with Esper et al 2002) and for D’Arrigo et al 2006, but the resulting RCS chronology was never published in either case nor, as noted previously, placed in a digital archive in connection with either publication.

Instead of using and publishing the updated information from Polar Urals, the Yamal chronology was introduced in Briffa 2000 (url), a survey article on worldwide dendro activities, in which Briffa’s RCS Yamal chronology replaced Polar Urals in his Figure 1. Rudimentary information like core counts was not provided. Briffa placed digital versions of these chronologies, including Yamal, online at his own website (not the ITRDB). A composite of three Briffa chronologies (Yamal, Taimyr and Tornetrask) had been introduced in Osborn and Briffa (Science 1999), a letter of less than one page. Despite the lack of any technical presentation and the lack of any information on core counts, as noted elsewhere, this chronology was used in one multiproxy study after another and was even separately illustrated in the IPCC AR4 Box 6.4 spaghetti graph.

Authors frequently purport to excuse the re-use of stereotyped proxies on the grounds that there are few millennium-length chronologies, a point made on occasion by Briffa himself. Thus, an updated millennium-length Polar Urals chronology should have been a welcome addition to the literature. But it never happened. Briffa’s failure to publish the updated Polar Urals RCS reconstruction has itself added to the bias within the archived information. Subsequent multiproxy collectors could claim that they had examined the “available” data and used what was “available”. And because Briffa never published the updated Polar Urals series, it was never “available”.

The Original Question
At this point, in the absence of any other explanation holding up, perhaps even critics can look squarely at the possibility that Yamal was preferred over Polar Urals because of its obvious exterior attributes. After all, Rosanne D’Arrigo told an astonished NAS panel: “you need to pick cherries if you want to make cherry pie”. Is that what happened here?

I looked at all possible origins of “inner beauty” that might justify why Yamal’s dance card is so full. None hold up. Polar Urals’ temperature correlations are as good as or better than Yamal’s; Polar Urals is more “highly replicated” than Yamal since AD1100, with massively better replication in the 19th and 20th centuries; and throughout most of the millennium (since approximately AD1100), Yamal’s windowed variance is as high as or higher than Polar Urals’ and massively higher in the 20th century.

In summary, there is no compelling “inner beauty” that would require or even entitle an analyst to select Yamal over Polar Urals. Further, given the known sensitivity of important reconstructions to this decision, the choice should have been clearly articulated for third parties so that they could judge for themselves. Had this been done, IPCC reviewers would have been able to point to these caveats in their Review Comments; because it wasn’t done, IPCC Authors rejected valid Review Comments because, in effect, the IPCC Authors themselves had failed to disclose relevant information in their publications.

Proxy Inconsistency
Over and above the cherrypicking issue is the overriding issue of proxy inconsistency – a point made in our PNAS 2009 comment and again recently at Andy Revkin’s blog:

There are fundamental inconsistencies at the regional level as well, including key locations of California (bristlecones) and Siberia (Yamal), where other evidence is contradictory to Mann-Briffa approaches (e.g. Millar et al 2006 re California; Naurzbaev et al 2004 and Polar Urals re Siberia). These were noted in the N.A.S. panel report, but Briffa refused to include the references in I.P.C.C. AR4. Without such detailed regional reconciliations, it cannot be concluded that inconsistency is evidence of “regional” climate as opposed to inherent defects in the “proxies” themselves.

I repeat this point because, without a reconciliation of such inconsistencies, without an ability to reconcile all the loose ends in regional climate, how can anyone in the field expect to carry out multiproxy studies?

229 Comments

  1. Craig Loehle
    Posted Oct 19, 2009 at 2:07 PM | Permalink

    It’s not fair the way you point out all these facts and stuff. Sniff.
    I hope this clears things up for Nick Stokes and Tom P.

  2. Patrick M.
    Posted Oct 19, 2009 at 2:15 PM | Permalink

    Prediction:

    From 1100AD on, the Polar Urals chronology doesn’t seem particularly objectionable relative to the other Esper RCS chronologies. Its variance is elevated in the 15th century, but another Esper chronology has similar variance in the 12th century. Its variance is definitely elevated relative to other chronologies in the 11th century, a period in which there are only a few comparanda, most of which are in less severe conditions: the two strip bark foxtails and Tornetrask (Taimyr is presumably equally severe.)

    will become:

    From 1100AD on, the Polar Urals [snip] variance is elevated in the 15th century[snip]. Its variance is definitely elevated relative to other chronologies in the 11th century[snip].

  3. Gary
    Posted Oct 19, 2009 at 2:18 PM | Permalink

    In summary, there is no compelling “inner beauty” that would require or even entitle an analyst to select Yamal over Polar Urals. Further, given the known sensitivity of important reconstructions to this decision, the choice should have been clearly articulated for third parties so that they could judge for themselves. Had this been done, IPCC reviewers would have been able to point to these caveats in their Review Comments; because it wasn’t done, IPCC Authors rejected valid Review Comments because, in effect, the IPCC Authors themselves had failed to disclose relevant information in their publications.

    Dangerously close to ascribing motives here. ;-)

  4. Varco
    Posted Oct 19, 2009 at 2:22 PM | Permalink

    Great job as usual, Steve. Do you know how many ‘peer reviewed’ papers include use of Yamal? Could we estimate how much of the peer reviewed ‘body of consensus’ needs to be withdrawn and/or modified to maintain scientific credibility?

  5. JFD
    Posted Oct 19, 2009 at 2:23 PM | Permalink

    Steve, you are a really good fiddler, but Rome is burning. Cap and Trade is currently being debated while a new international agreement is being readied for Copenhagen in November. Your excellent work needs to get into play soonest, where decisions are being formulated.

    I am drafting a letter to my two Senators currently and have intended to put in a paragraph that one must be wary about the proxy work by Mann et al since you have falsified much of it. Both of them plus some of their buddies need to hear your story first hand. With your permission I will tell them they can contact you to set up a quick meeting.

  6. Fred Harwood
    Posted Oct 19, 2009 at 2:33 PM | Permalink

    Succinct, and fun.

  7. Rob R
    Posted Oct 19, 2009 at 2:58 PM | Permalink

    It’s much too late for IPCC AR4, but the “hockey team” will clearly find it harder to justify the same approach if there is a future AR5.

    This information might even make it difficult for the IPCC to use one of the tainted hard-core team as an overall chapter editor for the Paleoclimate material.

    In terms of climate policy discussions over the next couple of months there might not be enough time for these discoveries to make a real impact. As a whole the committed AGW lobby will probably justify continued reliance on AR4 as the best available summary of the “science”.

  8. Alan S. Blue
    Posted Oct 19, 2009 at 3:04 PM | Permalink

    Would it possibly be appropriate to merge the two reconstructions? It would be interesting to see how that affects both the pre-1100AD issue and the modern instrumental period.

  9. bender
    Posted Oct 19, 2009 at 3:05 PM | Permalink

    So much for the argument of Deep Climate that “17 is enough”. If 17 live trees in a sample is enough, isn’t more better?

  10. bender
    Posted Oct 19, 2009 at 3:09 PM | Permalink

    Rob Wilson defended the Yamal Substitution at CA in Feb 2006 on the grounds that the variance of the Polar Urals RCS chronology was “not stable through time” and that use of this version would therefore be “wrong”, whereas Yamal “at least had a roughly stable variance through time”.

    That is a smokescreen. When variance is proportional to the mean (both high during MWP), they are screening on mean (warm MWP) but pointing at the variance (variable MWP). The joke is that such variance inhomogeneity is easily stabilized by log transformation. But did they investigate stabilizing the variance? No. Why not? Because it was the mean that bothered them.
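bender's log-transform point can be illustrated with a toy simulation (entirely synthetic data with assumed parameters, not the actual chronologies): when the noise is multiplicative, so that the standard deviation scales with the mean, the rolling sd of the raw series tracks the mean level while the rolling sd of the logged series stays roughly flat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "chronology": the mean level doubles in the second half,
# with multiplicative (lognormal) noise, so sd is proportional to the mean.
level = np.concatenate([np.full(500, 1.0), np.full(500, 2.0)])
x = level * rng.lognormal(mean=0.0, sigma=0.2, size=1000)

def rolling_sd(y, window=101):
    # Standard deviation over each complete 101-point window.
    return np.array([y[i:i + window].std() for i in range(len(y) - window + 1)])

sd_raw = rolling_sd(x)          # roughly doubles with the mean level
sd_log = rolling_sd(np.log(x))  # roughly constant (~0.2) throughout
```

On this toy series the early-to-late ratio of the raw rolling sd is about 2, while for the logged series it is about 1: the apparent variance instability disappears under the transform, as bender suggests.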

    • Alexander Harvey
      Posted Oct 19, 2009 at 4:40 PM | Permalink

      Re: bender (#11),

      Might I not be right in thinking that one might expect the variance of the chronology to vary with the mean, or at least there is no reason to assume that it shouldn’t? After all, it is not temperature but possibly a non-linear function of temperature. Also, would not a likely reverse mapping be one that did produce a stable variance, whether that be log or other? Also, I am not quite sure how one might expect the chronology to correlate with local temperature. What mapping do they use before they enter these chronologies into multipoxy results? Or don’t they bother.

      Alex

      • bender
        Posted Oct 19, 2009 at 4:43 PM | Permalink

        Re: Alexander Harvey (#26),

        Might I not be right in thinking that one might expect the variance of the chronology to vary with the mean

        That’s what I suggested to romanm last week and he agreed instantly.

      • Posted Oct 19, 2009 at 5:11 PM | Permalink

        Re: Alexander Harvey (#26),

        multipoxy results

        you said it

        • Alexander Harvey
          Posted Oct 20, 2009 at 12:39 AM | Permalink

          Re: Lucy Skywalker (#31),

          Re: Alexander Harvey (#26),

          multipoxy results

          you said it

          I think it was an inevitable error. I wish I had made it on purpose.

  11. mpaul
    Posted Oct 19, 2009 at 3:14 PM | Permalink

    Any update on Briffa’s health? It would be helpful if he would weigh in on precisely why he chose Yamal over Polar Urals. Having him on the sidelines is unfortunate.

  12. bender
    Posted Oct 19, 2009 at 3:16 PM | Permalink

    The only question is, for those who take offense to the comparing of team love of Yamal to an addict’s love of crack, what would be a better analogy? If we’re talking about inner versus surface beauty, maybe a porn addiction would be a more appropriate analogy? Trying to be helpful here …

  13. Don Keiller
    Posted Oct 19, 2009 at 3:21 PM | Permalink

    Fine post, Steve. As a plant physiologist I’ve never been convinced by tree ring width/density as a temperature proxy. Too many confounding factors.

    Dated tree LINES, however, are another matter. I have chosen this thesis

    http://ethesis.helsinki.fi/julkaisut/mat/geolo/vk/kultti/holocene.pdf

    because it includes treeline changes at Yamal. The data presented argue that the Holocene Optimum (ca 8000–6000 years BP) was about 2.5C higher than present, and the Medieval Warm Period about 0.5C warmer. In short, Briffa and Kaufman are wrong – there is nothing “unprecedented” about modern day temperatures.

    • bender
      Posted Oct 19, 2009 at 3:33 PM | Permalink

      Re: Don Keiller (#14),
      Thanks for that. Do you have a reference handy for the mid-Holocene treeline in North America?

      • Steve McIntyre
        Posted Oct 19, 2009 at 3:39 PM | Permalink

        Re: bender (#16), bender, I did some posts in 2005 on treelines in North America. Lamarche studied treeline decline of the bristlecones. I’ve used pictures of medieval foxtails above present treeline on a couple of occasions.

      • bender
        Posted Oct 19, 2009 at 3:44 PM | Permalink

        Re: bender (#16),
        D.S. Kaufman et al. Quaternary Science Reviews 23 (2004) 529–560

        Abstract
        The spatio-temporal pattern of peak Holocene warmth (Holocene thermal maximum, HTM) is traced over 140 sites across the Western Hemisphere of the Arctic (0–180°W; north of 60°N). Paleoclimate inferences based on a wide variety of proxy indicators provide clear evidence for warmer-than-present conditions at 120 of these sites. At the 16 terrestrial sites where quantitative estimates have been obtained, local HTM temperatures (primarily summer estimates) were on average 1.6 +/- 0.8°C higher than present (approximate average of the 20th century), but the warming was time-transgressive across the western Arctic. As the precession-driven summer insolation anomaly peaked 12–10 ka (thousands of calendar years ago), warming was concentrated in northwest North America, while cool conditions lingered in the northeast. Alaska and northwest Canada experienced the HTM between ca 11 and 9 ka, about 4000 yr prior to the HTM in northeast Canada. The delayed warming in Quebec and Labrador was linked to the residual Laurentide Ice Sheet, which chilled the region through its impact on surface energy balance and ocean circulation. The lingering ice also attests to the inherent asymmetry of atmospheric and oceanic circulation that predisposes the region to glaciation and modulates the pattern of climatic change. The spatial asymmetry of warming during the HTM resembles the pattern of warming observed in the Arctic over the last several decades. Although the two warmings are described at different temporal scales, and the HTM was additionally affected by the residual Laurentide ice, the similarities suggest there might be a preferred mode of variability in the atmospheric circulation that generates a recurrent pattern of warming under positive radiative forcing. Unlike the HTM, however, future warming will not be counterbalanced by the cooling effect of a residual North American ice sheet.

    • henry
      Posted Oct 19, 2009 at 3:58 PM | Permalink

      Re: Don Keiller (#14),

      Quote from this paper:

      Several studies consider the 20th century as the warmest in the northern hemisphere during the last millennium (Mann et al., 1999; Crowley and Lowery, 2000) or the last two millennia (Briffa, 2000). However, this study suggests that climate during the Medieval Warm Period was even warmer than during the 20th century (Paper IV).

      And, naturally, includes these as reference papers:

      Briffa, K.R. 2000. Annual climate variability in the Holocene: interpreting the message of ancient trees. Quaternary Science Reviews 19: 87-105.

      Mann, M.E., Bradley, R.S. and Hughes, M.K. 1999. Northern hemisphere temperatures during the past millennium: inferences, uncertainties, and limitations. Geophysical Research Letters 26: 759-762.

  14. AnonyMoose
    Posted Oct 19, 2009 at 3:27 PM | Permalink

    If Steve really wants to invalidate the Yamal chronology, he would have to find another set of cores that also gave good correlation with the instrument record, but indicated a previous climate comparable or warmer than that seen today.

    Why would the characteristics of other cores determine whether Yamal is invalid? Yamal can be invalid based on its own characteristics, no matter what other records indicate. A fake Napoleon’s diary is a fake whether other records agree with it or not.

  15. Steve McIntyre
    Posted Oct 19, 2009 at 3:47 PM | Permalink

    I mentioned that the dilation of Yamal relative to Polar Urals reminded me a lot of the dilation of Graybill relative to Ababneh. Here’s a graphic from an Ababneh discussion here. I think that the similarities are pretty remarkable.

  16. John Hekman
    Posted Oct 19, 2009 at 3:57 PM | Permalink

    I nominate Steve for the Winston Churchill prize for bull-dogging this Yamal issue. All that I can think of when reading chapter after chapter of this tale is the old joke in which the defense lawyer says, my client couldn’t have committed the murder because he was out of town that day; and if he was in town, then he was not at the victim’s house; and if he was at the victim’s house, then he didn’t have a gun; and if he did have a gun, then he didn’t pull the trigger.

  17. bender
    Posted Oct 19, 2009 at 4:00 PM | Permalink

    snort

  18. bender
    Posted Oct 19, 2009 at 4:08 PM | Permalink

    Except in the OP I prefer black. It has inner beauty.
    snoooooort

  19. Tolz
    Posted Oct 19, 2009 at 4:15 PM | Permalink

    It seems that early on at the dance the Team authors found out that Yamal “went all the way”.

  20. Ryan O
    Posted Oct 19, 2009 at 4:23 PM | Permalink

    Tom . . . where are you, Tom?

    Crickets.

  21. Morgan
    Posted Oct 19, 2009 at 4:48 PM | Permalink

    Now wait a second. That windowed variability graph appears to show less variability, on average, in the recent past than in the more distant past. Certainly there is no great increase in variability (leaving aside Yamal). How does this square with a “divergence problem” that uniquely affects modern trees? Are all these series just not impacted by the problem?

    • bender
      Posted Oct 19, 2009 at 5:11 PM | Permalink

      Re: Morgan (#28),
      What? Yamal variability rises at the end because of the divergent uptick. It’s got more than twice the std of the others. You not snorting, err, looking at the right series? (Does high std imply the series is likely to get around?)

      • Craig Loehle
        Posted Oct 19, 2009 at 5:22 PM | Permalink

        Re: bender (#32), Enough of the sexist snorting–we are talking about “inner beauty” here. Tree rings with good character.

      • Morgan
        Posted Oct 20, 2009 at 8:07 AM | Permalink

        Re: bender (#32),

        Oh, I see… Yamal is the divergence problem!

  22. MikeN
    Posted Oct 19, 2009 at 5:11 PM | Permalink

    Max variance vs Min variance. Polar Urals is out.

    Steve: see comment below. Until the basis of windowed variance screening is described and established somewhere, why would you say something like this?

  23. Terry
    Posted Oct 19, 2009 at 5:20 PM | Permalink

    Snip away for ascribing motive if you want, but the evidence (to me) is becoming more and more damning that someone (or ones) – snip

  24. Steve McIntyre
    Posted Oct 19, 2009 at 5:25 PM | Permalink

    Readers, please keep in mind that windowed variance screening has never been reported in any article in the peerreviewedlitchurchur. Nor has this been applied, to my knowledge, to the large inventory of dendro series. What would be the impact of consistent application of this method? Dunno. Is there anything “wrong” with Polar Urals (or for that matter, Yamal) having a more variable windowed variance than a bristlecone or Tornetrask? Surely this would have to be presented and discussed before removing data.

    My point was a narrower one – I’m making no comment on whether the criterion makes any sense: only that application of this criterion does not automatically result in Yamal preference.

    • bender
      Posted Oct 19, 2009 at 5:44 PM | Permalink

      Re: Steve McIntyre (#35),
      Tangentially related. The paper linked to here in the original Esper thread discusses the trimming back of chronologies to retain only those series with high covariance. What is the windowed “EPS” of Yamal versus Polar Urals? EPS formula is given in linked paper.

      Steve: EPS is high in both versions in relevant periods. I’ve written up these functions into R but need to tidy them a little.
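For readers who don’t want to dig out the linked paper: the EPS statistic bender mentions is conventionally computed from the mean pairwise correlation among cores (the Wigley et al. 1984 formula). A minimal sketch on synthetic data — this is not Steve’s R code:

```python
import numpy as np

def eps(cores):
    """Expressed Population Signal (Wigley et al. 1984):
    EPS = n*rbar / (1 + (n-1)*rbar), where rbar is the mean pairwise
    correlation among the n core series."""
    cores = np.asarray(cores, dtype=float)
    n = cores.shape[0]
    corr = np.corrcoef(cores)
    rbar = (corr.sum() - n) / (n * (n - 1))  # mean off-diagonal correlation
    return n * rbar / (1 + (n - 1) * rbar)

# Five identical cores: rbar = 1, so EPS = 1 (a perfect common signal).
signal = np.sin(np.linspace(0, 10, 200))
print(round(eps(np.vstack([signal] * 5)), 3))  # 1.0
```

Adding independent noise to each core drives rbar, and hence EPS, below 1; the usual rule of thumb treats EPS above roughly 0.85 as an adequately replicated chronology.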

      • steven mosher
        Posted Oct 19, 2009 at 8:45 PM | Permalink

        Re: bender (#38), ya looking forward to seeing that EPS figure.

        It occurs to me that once steve has all these things turnkeyed we could just plow through all the ring data and feed some meat to the old meat grinder.

    • steven mosher
      Posted Oct 19, 2009 at 8:42 PM | Permalink

      Re: Steve McIntyre (#35), I think the best criteria to use are those established by Wilson in his divergence study. Tom P agreed to them. Basically: a correlation > .40, no autocorrelation, at least 10 cores to the present, some other criteria. Windowed variance is nowhere in sight.
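A toy rendering of that style of screen, using the thresholds mosher lists. The Durbin-Watson bounds and the linear-fit details here are illustrative assumptions, not Wilson’s actual procedure:

```python
import numpy as np

def passes_screen(chron, temp, n_recent_cores,
                  min_r=0.40, min_cores=10, dw_lo=1.0, dw_hi=3.0):
    """Toy screen: correlation with temperature above min_r, calibration
    residuals not grossly autocorrelated (Durbin-Watson near 2), and a
    minimum number of cores reaching the present."""
    chron, temp = np.asarray(chron, float), np.asarray(temp, float)
    r = np.corrcoef(chron, temp)[0, 1]
    slope, intercept = np.polyfit(temp, chron, 1)
    resid = chron - (slope * temp + intercept)
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
    return bool(r > min_r and dw_lo < dw < dw_hi and
                n_recent_cores >= min_cores)

rng = np.random.default_rng(0)
temp = np.linspace(0.0, 1.0, 50)
chron = temp + 0.05 * rng.standard_normal(50)
print(passes_screen(chron, temp, n_recent_cores=12))  # True for this seed
print(passes_screen(chron, temp, n_recent_cores=3))   # False: too few cores
```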

      • Steve McIntyre
        Posted Oct 19, 2009 at 8:49 PM | Permalink

        Re: steven mosher (#48), that Tom P agreed to a criterion is not relevant to its statistical validity. It may be amusing for debating purposes, but no more.

  25. jae
    Posted Oct 19, 2009 at 5:33 PM | Permalink

    If Steve really wants to invalidate the Yamal chronology, he would have to find another set of cores that also gave good correlation with the instrument record, but indicated a previous climate comparable or warmer than that seen today.

    ?? I don’t understand the logic here. How does the presence of another set of cores (like Polar Urals) have any effect on the “validation” of Yamal?

    • bender
      Posted Oct 19, 2009 at 5:45 PM | Permalink

      Re: jae (#37),
      You’re trying to understand Tom P’s “logic”? It’s the logic of addiction. Must Get More.

      • Dean McAskil
        Posted Oct 19, 2009 at 8:03 PM | Permalink

        Re: bender (#39),

        As a long time lurker and rare poster I am also a bit confused. How does the Polar Urals have any effect on validation of Yamal?

        We must be getting close to the Team pulling off latex masks to reveal they are all VP Gore and exclaiming “I would have gotten away with it if it wasn’t for those meddling kids!”

  26. Kazinski
    Posted Oct 19, 2009 at 5:53 PM | Permalink

    I think Lucia’s excellent post on inadvertent cherry picking pertains to entire sets as well as individual trees. If tree rings record climate then the more data points the merrier. Picking one tree over another based on the signal is picking cherries, picking one data set over another is picking cherry baskets.

  27. Les Johnson
    Posted Oct 19, 2009 at 7:18 PM | Permalink

    Over at WUWT, Bruce Banta gave this link:

    http://dsc.discovery.com/news/2008/06/11/tree-leaf-temperature.html

    It suggests that tree leaf temperature is constant in the canopy, at near 70 deg F.

    The study was across 39 species of trees and 50 deg of latitude (sub-tropical to boreal), and the temperature was a near-constant 70 deg F across species and latitude.

    From the article: …but could upend climate models that use tree rings to infer or predict past and present temperature changes.

    Just a guess, but if the tree canopy is 20-30 deg cooler than ambient, then deforestation in the tropics and high altitudes could impact global temperature.

  28. Les Johnson
    Posted Oct 19, 2009 at 7:24 PM | Permalink

    oops.

    Just a guess, but if the tree canopy is 20-30 deg cooler than ambient, then deforestation in the tropics and high altitudes could impact global temperature.

    should be:

    Just a guess, but if the tree canopy is 20-30 deg cooler than ambient in the tropics, then deforestation in the tropics could raise global temperatures.

    Conversely, reforestation in high altitudes/cold climates could also raise global temperature; the Swiss study mentioned saw 7-9 deg warmer temperatures in the canopy.

  29. Rattus Norvegicus
    Posted Oct 19, 2009 at 7:51 PM | Permalink

    Steve,

    The Yamal series, in the period in question ca. 1000 AD, has about 2x as many cores as Polar Urals. It also has significantly more cores from 1600 to 1800 where Yamal shows an elevated temperature in comparison with PU. The fact that Yamal has more data in the period of interest should give it the edge.

    • bender
      Posted Oct 19, 2009 at 8:32 PM | Permalink

      Re: Rattus Norvegicus (#41),
      Whose job is it to justify these choices? Steve’s? Of course …

    • Steve McIntyre
      Posted Oct 19, 2009 at 8:47 PM | Permalink

      Re: Rattus Norvegicus (#41),

      If Briffa or D’Arrigo wanted to present both series and argue that Yamal is “right” because it has more cores between 1600-1800, that’s their prerogative. This is obviously not a Generally Accepted Accounting Policy. And if people adopt this as a GAAP policy, then they will have to do so in other circumstances, where the results may not be what they want.

      Personally, I see no reason why more cores between 1600-1800 would take priority as a metric over cores for 1800-2000. I agree that the additional Yamal cores in the MWP counts in its favor, but its huge deficit in the 20th century counts heavily against it. Also counting against Yamal in the 20th century is the fact that the Russians selected long cores for corridor standardization – contrary to Briffa policies for RCS.

      The other problem – as I discuss over and over – is inconsistency. How can anyone use either of these chronologies as an indicator of medieval-modern differentials without reconciling their differences on some analytical basis?

      If one were forced at the point of a gun to use these things, one would be obliged to do one recon with Yamal and one with Polar Urals and report the difference in an open and transparent way. Yeah, yeah, you say you’re picking Yamal because of its inner beauty. Just like guys used to buy Penthouse for its literary articles.

    • TAG
      Posted Oct 19, 2009 at 9:38 PM | Permalink

      Re: Rattus Norvegicus (#41),

      The fact that Yamal has more data in the period of interest should give it the edge

      Doesn’t this seem a lot like arm waving? Criteria created on the fly with no theory demonstrating their validity. That seems like arm waving to me.

  30. bender
    Posted Oct 19, 2009 at 8:33 PM | Permalink

    Mann’s got his. Briffa has his. And round she goes …

  31. Rattus Norvegicus
    Posted Oct 19, 2009 at 9:13 PM | Permalink

    Steve, you are deliberately ducking the real issue here: the fact that Yamal has twice as many cores around 1000 AD, the period you seem to prefer, than Polar Urals (is that updated or not?) This is one of the periods that Rob noted as causing problems in his emails to you several years ago. And what the hell are you throwing GAAP in here for? That is just a red herring.

    Personally, I am not that interested in what happened in the 20th century; we have the instrumental record to attest to that. If both chronologies correlate well with grid cell or nearby station JJA temps then there really isn’t much to choose there. However for the MWP Yamal has substantially more data and should probably be preferred.

    As far as the point of a gun: I would prefer to use the chronology which shows less noise; that would be Yamal.

    • bender
      Posted Oct 19, 2009 at 9:21 PM | Permalink

      Re: Rattus Norvegicus (#52),
      YOU are avoiding the issue. It is called special pleading. Tell your friends at RC to look it up. Tell them google is their friend.

    • bender
      Posted Oct 19, 2009 at 9:22 PM | Permalink

      Re: Rattus Norvegicus (#52),

      I would prefer to use the chronology which shows less noise

      What if it shows less signal?

    • MrPete
      Posted Oct 19, 2009 at 9:24 PM | Permalink

      Re: Rattus Norvegicus (#52),
      Seems to me that:
      a) poor core counts in the calibration (thermometry) period means the CI on the entire sequence goes sky high.
      b) poor core counts in a study period means the CI on that portion of the history goes sky high.

      Seems (b) is much preferred over (a). But perhaps that’s just me.
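MrPete’s point follows from the textbook 1/√n scaling of the standard error (ignoring autocorrelation among cores, which would widen things further). A quick sketch:

```python
import math

def ci_halfwidth(core_std, n_cores, z=1.96):
    """Approximate 95% CI half-width on a mean ring-width index:
    z * s / sqrt(n). Illustrative only; real chronologies need an
    effective n adjusted for inter-core correlation."""
    return z * core_std / math.sqrt(n_cores)

# Ten times fewer cores -> CI wider by sqrt(10), roughly 3.16x.
few, many = ci_halfwidth(1.0, 5), ci_halfwidth(1.0, 50)
print(round(few / many, 2))  # 3.16
```

So thin replication in the calibration period, as in (a), inflates the uncertainty of the calibration itself and hence of the entire reconstruction, while thin replication in one study period, as in (b), inflates only that segment.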

    • Steve McIntyre
      Posted Oct 19, 2009 at 9:39 PM | Permalink

      Re: Rattus Norvegicus (#52),

      I am using GAAP as a metaphor because you can’t just decide on a policy for this case because it yields a result that you like and not use it in a similar case. Unfortunately opportunism is rife in this field.

      You say that you are not that interested in what happened in the 20th century. Unfortunately the 20th century versus 11th century comparison is one of the central questions in this field.

      If authors wish to argue that Yamal is superior to Polar Urals, then that opportunity is open to them. They can go for it. Unfortunately, no such argument has ever appeared in the peerreviewedliterature. Indeed, as noted above, the Polar Urals chronology has shamefully never been reported or published – already biasing the literature.

      Any arguments that you are presenting are not ones that have been considered in the peerreviewedliterature nor by the IPCC. They are no more than your opinion. If you can provide any citations from the peerreviewedliterature to support your opinions, they would be welcome.


      • bender
        Posted Oct 20, 2009 at 1:37 AM | Permalink

        Re: Steve McIntyre (#57),
        Rattus lets RC phrase the debate for him. That’s why he takes the burden of proof off the delinquent proponents of a paper and shifts it onto McIntyre. This is Tom P’s game as well. They get it from Hank Roberts and dhogaza who get it from Schmidt.

  32. jae
    Posted Oct 19, 2009 at 9:55 PM | Permalink

    Just like guys used to buy Penthouse for its literary articles.

    LOL. Maybe I’m still not getting it. Maybe it’s a different generation and I’m a lost soul, but I am really, seriously, honestly having a problem accepting that there would even be PhDs out there that are having a serious debate (or even a beer discussion) about EVEN SUGGESTING that it is EVER DEFENSIBLE to even ARGUE that it makes ANY sense to LOOK AT ALL YOUR SAMPLES AND SELECT THE ONES YOU LIKE, BEFORE THE STUDY COMMENCES!!!!!!!!!! ISN’T THAT WHAT THESE CLOWNS ARE DOING, OVER AND OVER AGAIN, RIGHT IN FRONT OF THE WHOLE SCIENTIFIC ESTABLISHMENT? Do I misunderstand this? Is there really an argument here?

    • Jeff Id
      Posted Oct 19, 2009 at 11:40 PM | Permalink

      Re: jae (#58),

      Yeah, that happens to me too when I can’t hold it in any longer.

      Yup, east is west, cold is hot, top is bottom, rich is poor. In opposite land everything goes and nobody knows.

    • Fred2
      Posted Oct 20, 2009 at 6:24 AM | Permalink

      Re: jae (#58),

      I think the root of the cherry-picking tree is the idea that some trees or forests are better thermometers, so find the ones that correlate to some recent (instrumental) temperature somewhere, and use those.

      This would require, though, that there be enough cores in the recent years(!). And extending this back to the fossil and subfossil (anyway, dead) trees is tricky: they find fossils where the ring patterns of the ending years match the starting rings of the living trees. Extending the statistical inference has got to be even trickier.

      Hey, are any of the dendroclimatology trees actual cherry trees?
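The ring-pattern matching Fred2 describes amounts to sliding the dead tree’s series along the living one and taking the offset with the best correlation. A toy crossdater (real crossdating detrends the series and applies significance tests):

```python
import numpy as np

def best_offset(living, dead, min_overlap=20):
    """Slide the `dead` (subfossil) ring series along the `living` one and
    return the offset with the highest Pearson correlation over the
    overlapping rings."""
    living, dead = np.asarray(living, float), np.asarray(dead, float)
    best_off, best_r = None, -2.0
    for off in range(-(len(dead) - min_overlap),
                     len(living) - min_overlap + 1):
        lo, hi = max(0, off), min(len(living), off + len(dead))
        if hi - lo < min_overlap:
            continue  # not enough overlapping rings at this offset
        r = np.corrcoef(living[lo:hi], dead[lo - off:hi - off])[0, 1]
        if r > best_r:
            best_off, best_r = off, r
    return best_off, best_r

# A dead tree whose last 30 rings overlap the living tree's first 30.
rng = np.random.default_rng(2)
signal = rng.standard_normal(150)
living, dead = signal[50:150], signal[0:80]
off, r = best_offset(living, dead)
print(off, round(r, 3))  # -50 1.0
```

Fred2’s caution stands: the statistical confidence of the match depends on the overlap length, which is exactly where sparse replication hurts.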

  33. Sam
    Posted Oct 19, 2009 at 10:05 PM | Permalink

    jae:

    That’s pretty much it in a nutshell.

    lol is right if the consequences weren’t so tragic.

  34. DJA
    Posted Oct 19, 2009 at 10:26 PM | Permalink

    jae,
    ” “The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.” Esper et al 2003 …”

    Says it all, doesn’t it? They do pick and choose which samples to use BEFORE the study commences. Only those samples which conform to a priori criteria are chosen.

    It’s called “Climate Science”

    You and I would call it something else.

    Perhaps they could be more profitably engaged in finding out why the rejected samples do not conform to their a priori criteria.

  35. Steve Geiger
    Posted Oct 19, 2009 at 10:27 PM | Permalink

    why, again, can’t both Yamal and Polar Urals be used? Is there something in the method that precludes using both sites due to proximity? If both were/are used, does it have a significant impact on the conclusion?

  36. MikeN
    Posted Oct 19, 2009 at 10:45 PM | Permalink

    Steve, I posted that because it’s the only apparent difference between Polar Urals and Yamal or the other proxies in your list. .4 difference, 4-1 ratio of max to min.

  37. Jimmy
    Posted Oct 19, 2009 at 11:00 PM | Permalink

    Is the data that you used to put together Figure 2 available anywhere?

  38. Bill Hunter
    Posted Oct 19, 2009 at 11:53 PM | Permalink

    Steve, excellent reply to Rattus Norvegicus.

    It’s clear we are talking standards and ethics here. Just one comment! GAAP is a good metaphor for standards in determining choices, but allows room for debate, as you have allowed for.

    But important debate cannot be allowed to be private, thus in steps GAAS (Generally Accepted Auditing Standards) which deals with consistency and adequate disclosure. GAAS requires adequate disclosure in making choices like you have outlined. Sometimes GAAP does not pin the ethics tail on the donkey and standards of GAAS apply:

    Standards of Reporting

    1. The auditor must state in the auditor’s report whether the financial statements are in accordance with generally accepted accounting principles (GAAP).
    2. The auditor must identify in the auditor’s report those circumstances in which such principles have not been consistently observed in the current period in relation to the preceding period.
    3. When the auditor determines that informative disclosures are not reasonably adequate, the auditor must so state in the auditor’s report.
    4. The auditor must either express an opinion regarding the financial statements, taken as a whole, or state that such an opinion cannot be expressed in the auditor’s report. When the auditor cannot express an overall opinion, the auditor should state the reasons therefor in the auditor’s report. . . .

  39. Jeff Id
    Posted Oct 20, 2009 at 12:38 AM | Permalink

    Just a quick comment on the variance plots. I don’t feel like putting the graphs of mean vs Briffa Yamal up again because those who can answer already know the graphs.

    We know RCS using an exponential curve is an arbitrary correction based on an assumption of exponential tree ring growth. It is possible that RCS-Exp could be correct in some cases; however, it is also known in the literature not to be correct in all cases. Care must be used. There is no proven evidence for its accuracy in any situation, and other methods are more common. We know also that the huge upspike in Yamal was explicitly created by the exponential fit – compare the mean to the Briffa version of Yamal.

    We also know in the post above there is an excellent correlation from nearby but entirely different tree ring sites yet again no huge upspike from RCS at the end. We also know that RCS using a Schweingruber substitution removes the upspike as do other more stable regularization methods – see the original Yamal from the same data.

    So now: since RCS needed more trees to work properly according to other dendro scientists, since it is actually an uncalibrated and arbitrary assumption, since other methods support just using the mean of tree rings while chopping off the early years, and since none of those methods will ever produce a hockey stick monster – CAN we NOW conclude that Briffa’s Yamal is nothing more than an artifact of RCS standardization on this particular set of trees? BTW, it’s a problem in RCS that Briffa has written extensively on. WUWT

    Stick a fork in it – it’s done. IMHO, Six different ways it’s done.
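For readers unfamiliar with the step Jeff is criticizing: RCS fits a single “regional curve” of ring width against cambial age and divides each ring width by it. The sketch below uses a negative exponential via a log-linear fit, one simple variant among several; it is not Briffa’s implementation:

```python
import numpy as np

def rcs_indices(rw):
    """rw: 2D array, rows = cores, columns = cambial age (ring 1, 2, ...).
    Fit one regional curve w(age) = a * exp(b * age) to the mean ring width
    at each age (log-linear least squares), then return each ring width
    divided by the curve's expected value at its age."""
    rw = np.asarray(rw, float)
    ages = np.arange(rw.shape[1])
    b, log_a = np.polyfit(ages, np.log(rw.mean(axis=0)), 1)
    expected = np.exp(log_a + b * ages)
    return rw / expected

# Cores with pure negative-exponential growth and no climate signal
# should standardize to indices of 1.0 everywhere.
ages = np.arange(60)
growth = 2.0 * np.exp(-0.03 * ages)
cores = np.vstack([growth, growth])
print(np.allclose(rcs_indices(cores), 1.0))  # True
```

The point of contention is exactly what this division does when the sample of trees does not actually follow the fitted curve: departures from the assumed exponential show up in the chronology as apparent climate signal.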

  40. Geronimo
    Posted Oct 20, 2009 at 1:54 AM | Permalink

    It seems to me that there are two issues at play here. First is the perfectly normal human reaction to a belief, which is to find supporting evidence for that belief. No conspiracy: it is normal for all of us to filter out the data which flies in the face of our beliefs and amplify the data that supports them. I realise we wouldn’t describe this as science in its purest form, but there are plenty of examples in the past where scientists have ignored data, or have accepted a scientific principle where the observations are right but the calculations don’t add up, and have been forced to add “unknowns” to make the calculations fit the observations (dark matter, anyone?). They aren’t always wrong and they aren’t always right; that’s science.

    The second issue at play is how one behaves when the spotlight is put on one’s work after it is published and challenged in open debate. If the work is supporting a political action likely to radically change the world we live in (politically), then it is reasonable to assume that one is hardly likely to be open to criticism, and definitely unwilling to engage in a discourse, because if your interlocutors are correct one has either made a huge error in the science, or (perhaps unintentionally) warped the data to get the desired results. In other words you have only two options to the outcome of any discourse: you are either a fool or a knave.

    Steve, I recognise that you are the object of tons of gratuitous advice on all manner of issues, but can I suggest that you publish these observations, preferably in Phil Trans B, if they’ll let you. It may be a way to get Briffa et al to come to the table to give their explanations, assuming that Prof Briffa has recovered from his recent illness, because it seems to me that faced with the choice of “fool or knave” I would myself keep a dignified silence in the hope that it would go away.

    • QBeamus
      Posted Oct 20, 2009 at 10:25 AM | Permalink

      Re: Geronimo (#70),

      First is the perfectly normal human reaction to a belief, and that is to find supporting evidence for that belief. No conspiracy, it is normal for all of us to filter out the data which flies in the face of our beliefs and amplify the data that supports them. I realise we wouldn’t describe this as science in its purest form,

      The problem is more difficult, because the “reality check” is a necessary part of good scientific method. Again, this is a lesson I was taught in undergraduate lab. When collecting data, look at your results, and think about whether they make sense. This is how experimental error gets identified and corrected.

      Confirmation bias is a more specific problem. While it’s ok to use a “reality check” filter, it’s not ok when the reality being checked is the very question being investigated.

      This danger is another reason why good scientific method requires that, when filtering data based on one of these reality checks, you don’t do so secretly. You document the bad data, and in the report explain your justification for filtering it, so that other, potentially more objective, scientists can second guess that decision.

      Which brings us back to the core failure of the “Team.” At every turn, they’ve done everything they can to insulate themselves from second-guessing. (While pretending otherwise, largely through the “peerreviewedliterature” red herring.) Had they done otherwise, none of the mistakes Steve has identified would have rendered their work useless, because in healthy science mistakes, too (when detected), advance our understanding. Conversely, even if it turned out, somehow, that Steve’s criticisms are all mistaken, the Team’s work will have failed to advance our understanding, precisely because it is unreasonable to rely upon what is, ultimately, an argument from authority. (We’re the ones published in the venues that matter.) Overcoming Steve’s criticisms wouldn’t prove that there aren’t other, more valid ones.

  41. Tom P
    Posted Oct 20, 2009 at 3:28 AM | Permalink

    Steve,

    This post makes a stronger case than your introduction of the combined Khadyta-Yamal chronology which had a poor correlation with instrumental temperature. In contrast the correlation/t-statistic of Yamal and Polar Urals chronologies with temperature during the important growth period would appear to be similarly high.

    As an aside, considering so much has been said about possible selection bias concerning Yamal, is there anything intrinsic in the two datasets (non-consecutive core numbers, documentation, etc.) which would make one more wary of one series than the other in this regard?

    On the more substantial point, the main discrepancy between the two does not seem to be in the 11th century – suitable smoothing would flatten the spikes in both datasets especially in the case of the higher variance Polar Urals series. Rather it is the marked modern divergence between the two series, especially after 1950, that seems to be the dominant feature looking at fig. 1. How does the t-statistic/correlation with instrument record compare for the last half of the previous century? Is the discrepancy just down to the simple observation that there has been more recent recorded warming in Yamal than in the Polar Urals?

    If both chronologies are valid, both should be included in any analysis – their agreement in the pre-instrument period is impressive and it is for this time that dendroclimatology can give the most useful information. It might be worth plotting a mean of the two chronologies to show what the combined contribution might be.

    • hmmm
      Posted Oct 20, 2009 at 11:49 AM | Permalink

      Re: Tom P (#72),

      Tom P
      I think you’d therefore have to acknowledge that uncertainty in temperature history in this region is at LEAST as great as the differences between these two reconstructions. One series shows a MWP without the modern period being anything to write home about, the other shows a distinct OMG the sky is falling hockey stick. The differences in relationships between modern and historical levels are huge. Saying you can just combine these series is like saying they must not have been great thermometers, IMO.

      I think Steve’s main points have nothing to do with selecting a particular accurate treemometer reconstruction, but rather pointing out the differences and what that means to reconstructions in the literature. The team should not have selected just one of these reconstructions without:

      a) acknowledging the existence of the other
      b) explaining reasons why one was picked over the other (or why only one has to be chosen in the first place)
      c) discussing and depicting realistic confidence interval calculations (note that combining these series doesn’t improve on this)
      d) providing raw data of the chosen series for review and replication
      and probably 5 other letters I can’t think of right now.

      Personally I am losing confidence in using trees as calibrated temperature data loggers. I searched a reputable industrial supplies catalog for treemometers and have not found them for sale. My local tree service offers to plant them or cut them down but not to provide temperature reconstructions from them.

  42. Posted Oct 20, 2009 at 4:25 AM | Permalink

    Is a “slow ball” anything like a “googly”?

  43. Beth Cooper
    Posted Oct 20, 2009 at 4:28 AM | Permalink

    Any dancer of discrimination would prefer to dance with Polar Urals, attracted by her clearly superior t-status!

  44. dearieme
    Posted Oct 20, 2009 at 4:45 AM | Permalink

    I feel so sorry for those poor souls whose undergraduate lab books were hurled back at them by a cruel marker, who opined that their incompetence at the bench, married to their weakness for fudging their measurements, and their hopeless ignorance of statistical analysis meant that they’d better give up all hopes of a scientific career. Had they but known it, they could have persevered until eventually flourishing as Climate Scientists. It’s a cruel world. But not a very hot one.

  45. Posted Oct 20, 2009 at 5:53 AM | Permalink

    First thing this biology major (turned NASA engineer) thought of when I looked at the first graph is that the two tree populations started differentiating back in 1600. It is just that the differentiation took on dramatic change in the last couple of years.

    This tells me (all things being equal) the populations reacted differently to the environment. Are these two sets of the same species and same mix of ages, etc? Because if not, then the answer could be increased CO2 (not temperature). One of the biggest problems I have with tree cores is that the increasing CO2 would naturally increase ring growth, mirroring a temperature increase.

    How does anyone know the rings are not reacting to CO2 (which is a major component in building cellulose)? How does anyone know which species of tree is geared to exploit CO2 the most?

    And to think some people claim they can look at tree rings and determine a global temperature to within a tenth of a degree for hundreds of years.

    This ain’t rocket science – would never qualify due to lack of rigor and proof. All wild speculation made to look precise by statistics.

  46. Rob Wilson
    Posted Oct 20, 2009 at 6:03 AM | Permalink

    Dear All,
    I think I need to clarify a few things as Steve has only given you a select amount of information. As you know, I often reply to Steve privately as it is too time consuming to place a post on this blog due to all the replies that are generated. I just do not have time to reply to all the questions.

    So….

    1. Firstly, the decision to use Yamal was both mine and Rosanne’s and we do not regret this decision.

    2. Please do not forget that DWJ2006 did in fact use BOTH the Polar Urals and Yamal data – the former for the STD version and the latter for the RCS version.

    3. The variance heteroscedasticity issue was not the only reason why I used the Yamal chronology compared to the RCS generated version of the Polar Urals. I also undertook local calibrations against June-July mean temperatures:

    1883-1970
    Polar Urals: r = 0.53 / DW = 1.62 / linear trend in residuals – r = -0.18 (ns)
    Yamal: r = 0.63 / DW = 1.75 / linear trend in residuals – r = -0.05 (ns)

    1883-1990
    Polar Urals: r = 0.53 / DW = 1.54 / linear trend in residuals – r = -0.16 (ns)
    Yamal: r = 0.61 / DW = 1.69 / linear trend in residuals – r = -0.08 (ns)

    So – Yamal was also simply a better estimate (albeit slight) of local summer temperatures. NB. These correlation results are a little different from Steve’s – differences of CRU2 vs CRU3 and period analysed???

    4. It might take me a couple of weeks as I am busy with teaching at the moment, but I will re-calculate the DWJ2006 RCS reconstruction using the POL RCS series instead of Yamal. As we weight the data equally between North America and Eurasia, I am pretty sure that the final outcome will not be that different. Certainly, the MWP values will be slightly higher but it will not change the conclusions of the paper – in which, I would like to remind everyone, we were quite explicit in saying that prior to ~1300 the reconstruction estimates should be interpreted cautiously.

    5. Hey – In fact – for the hell of it, I might swap our RW Tornetrask series with Hakan Grudd’s version. I can include new data from central Asia and Europe and North America. Will the final result change significantly? I very much doubt it. Please understand that palaeoclimatology (and science as a whole) is not static. We are continually updating and adding new data-sets. No one paper should ever be treated as the be-all and end-all. While Steve is mired in the past, we are trying to improve on the uncertainties – not “moving on” as I am sure you will bandy around, but simply trying to improve on what has been done before.

    6. Finally, devout followers of this blog seem to be obsessed with an elevated MWP. It is stated so very often that we are “cherry picking” purposely to deflate the MWP. This is simply not the case. In fact, the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

    Rob
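The r and DW figures in Rob’s point 3 are a Pearson correlation and a Durbin-Watson statistic on the calibration residuals. They can be reproduced in outline as follows (synthetic data, not the CRU series):

```python
import numpy as np

def calibration_stats(chron, temp):
    """Pearson r of chronology against temperature, plus the Durbin-Watson
    statistic of residuals from a linear calibration fit. DW near 2 means
    little lag-1 autocorrelation in the residuals."""
    chron, temp = np.asarray(chron, float), np.asarray(temp, float)
    r = np.corrcoef(chron, temp)[0, 1]
    slope, intercept = np.polyfit(temp, chron, 1)
    resid = chron - (slope * temp + intercept)
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
    return r, dw

# A zero-mean, trend-free +,-,-,+ residual pattern gives DW of exactly 2.
temp = np.arange(40.0)
chron = 0.5 * temp + 3.0 + 0.1 * np.tile([1.0, -1.0, -1.0, 1.0], 10)
r, dw = calibration_stats(chron, temp)
print(round(dw, 6))  # 2.0
```

Note this only reproduces the form of the statistics; Rob’s numbers also depend on which gridcell series and period were used, which is the question Steve raises below.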

    • Jason
      Posted Oct 20, 2009 at 6:32 AM | Permalink

      Re: Rob Wilson (#77),

      I think that the tone of the blog is a consequence of Steve’s experience dealing with the climate community. They have not been transparent and open in providing data. Often they appear to be less than honest.

      Your honesty is very much appreciated. Unfortunately, from time to time people misunderstand or misspeak, and this can give rise to the appearance of less than honest intentions where there are, in fact, none. For this reason I hope that you can help us reconcile the following statements:

      Recently you stated:

      Finally, I want to clarify that I never asked Keith Briffa for the raw Yamal data.

      But in February 2006 you wrote:

      I would have preferred to have processed the Yamal data myself, but like you, was not able to acquire the raw data.

      and

      Keith would not give me his Yamal raw data, but said that the Yamal series was a robust RCS chronology.

      I’m sure you can understand why some have thought these statements to be contradictory, and why your help understanding this matter would be appreciated.

    • Jean S
      Posted Oct 20, 2009 at 7:30 AM | Permalink

      Re: Rob Wilson (#77),
      thanks for the comment. A simple question:
      what was the reason for choosing either one of them (Polar Urals or Yamal)? It is clear that they cannot both be telling the same thing about present temperatures relative to past temperatures. As one of the main objectives of your paper was to study the MWP/present relation, and if the method used (correlation to local temperature) could not clearly decide which of them is more accurate, wouldn’t it be reasonable to drop them both?

    • Steve McIntyre
      Posted Oct 20, 2009 at 8:23 AM | Permalink

      Re: Rob Wilson (#78),

      Rob, thanks for commenting. Question: did you use the same gridcell for both comparisons? Polar Urals is in the Salehard gridcell which has a considerably longer temperature record than the Yamal gridcell (Mys Kammenyj). From the statistics, it looks like you’ve used the same gridcell for both comparisons.

      But aside from that, there’s a pretty fundamental (and, I think, interesting) statistical problem in your use of these slight differences in correlation to select one rather than the other – both relationships are “significant” – one that we’ve talked about here from time to time.

      Any differences that you’ve adduced are very slight or depend on methodologies that are nowhere described in the relevant articles or in the Peer Reviewed Literature. What if, say, application of these methods supported Ababneh rather than Graybill? Without the method, whatever it is, being clearly laid out, it would be impossible to apply your test elsewhere.

      My own sense is that the temperature correlation data is way insufficient to preclude one rather than the other – which is the point that I’ve been making. Not that Polar Urals is “right” and Yamal “wrong” – that’s a different and much more problematic issue.

      Also the inconsistency between proxies on a regional basis is something that should not be suppressed, but should be clearly addressed in the literature.

      I made all of these points as an AR4 Reviewer and Briffa rejected them.

      • bender
        Posted Oct 20, 2009 at 8:38 AM | Permalink

        Re: Steve McIntyre (#94),

        My own sense is that the temperature correlation data is way insufficient to preclude one rather than the other – which is the point that I’ve been making. Not that Polar Urals is “right” and Yamal “wrong” – that’s a different and much more problematic issue.

        I would like to see cross spectral coherency plots – not just gross correlations – for Yamal & instrumental temp versus Polar Urals & instrumental temp during the periods mentioned by Wilson. Although correlations may be roughly equal (and not necessarily so when proper gridcells are used), I want to know which series pair shows stronger lower-frequency coherence. Is it Yamal? I would reject Yamal on this basis (high probability of spurious correlation). [And no, I did not decide on my criterion after-the-fact. I have mentioned this as a litmus test in earlier commentary.] Side question: Have these correlation statistics been adjusted for autocorrelation during the blade uptick? They should be.
        .
        Tom P is a whiz at cross-spectral methods. Perhaps he could work this up?
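        [A coherence comparison of this sort can be sketched with scipy. The series below are synthetic stand-ins for illustration only – not the actual Yamal or Polar Urals chronologies or CRU gridcell temperatures:]

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 120  # roughly the length of the instrumental overlap, in years

# Synthetic stand-ins: a "temperature" series with some low-frequency
# structure, and a "chronology" that tracks it plus white noise.
temp = np.cumsum(rng.normal(size=n)) * 0.1 + rng.normal(size=n)
chron = 0.6 * temp + rng.normal(size=n)

# Magnitude-squared coherence as a function of frequency (cycles/year).
f, coh = signal.coherence(chron, temp, fs=1.0, nperseg=40)

low = coh[f < 0.1].mean()    # decadal-and-longer coherence
high = coh[f >= 0.1].mean()  # interannual coherence
```

        Running the same calculation on each chronology against its proper gridcell series, and comparing the low-frequency band, is the litmus test proposed above.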

      • Tom P
        Posted Oct 20, 2009 at 1:20 PM | Permalink

        Re: Steve McIntyre (#94),

        I’d be grateful if you could post a link to the code required to extract the relevant HadCRU/CRUTEM gridcell temperature series plus the calculation of the t statistic and correlation that you performed in the head post. As I said the difference in the instrumental period is large and bears some further examination.

        • steven mosher
          Posted Oct 21, 2009 at 1:28 AM | Permalink

          Re: Tom P (#117), don’t forget to cut Yamal off when it drops to 5 cores… like you agreed to in the other argument.

    • bender
      Posted Oct 20, 2009 at 8:29 AM | Permalink

      Re: Rob Wilson (#78),
      Thanks, as always, to Dr. Wilson, for his participation here. I would never accuse Wilson of “moving on”, and indeed I would welcome any Wilson-generated hockey stick as worthy of scrutiny. I am glad to hear the opinion that the field is moving forward as it attempts to cope with the uncertainty problem. [Dr. Wilson, if I see one more chronology published without error bars, I will scream.] The absence of error bars is possibly a holdover from the days of dendrochronology, when the goal was accurate, annually-resolved dating, and errors in the y-direction didn’t much matter. But that was then. This is now. This is the only science domain I can think of that routinely ignores chronology uncertainty as a matter of GAP. The failure to consider the role of these errors in climate reconstruction is a glaring omission, leading to a very misleading picture of the degree of consistency in tree ring based climate proxies. This unacceptably bad practice has harmed IPCC and the climate action movement.
      .
      The role that “special pleading” plays in dendroclimatology is unacceptable. Making up selection criteria after-the-fact is not credible science. It’s speculative hypothesis generation. It is time for dendroclimatology to grow up.

    • Craig Loehle
      Posted Oct 20, 2009 at 8:31 AM | Permalink

      Re: Rob Wilson (#78), Rob: while the thread here focuses on your paper, the issue is broader. It may well be that in your reconstruction the substitution of Yamal for PU has little effect because of equal weighting. However, it is the case, as SM has shown repeatedly, that many of the reconstructions use methods, such as principal components or weighted regression, that amplify the influence of a few data-sets such as Yamal to such an extent that if you remove Yamal, bristlecones, and a couple of others, the whole hockey stick business goes away. It is not an obsession with the MWP (frankly, many here doubt that you can accurately reconstruct the MWP with tree rings) but the sharp uptick in the late 20th century and the dodging of the divergence problem that is of concern.

    • Antonio San
      Posted Oct 20, 2009 at 10:50 AM | Permalink

      Re: Rob Wilson (#78),

      6. Finally, devote followers of this blog seem to be obsessed with an elevated MWP. It is stated so very often that we are “cherry picking” purposely to deflate the MWP. This is simply not the case. In fact, the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

      From the look of it, discourse IS taking place quite nicely. In my opinion, the general lack of civility in climate science discussion has more to do with the disproportionate amount of mainstream media attention to some papers, the preposterous distortions activist-reporters get away with, and some of the scientists who revel in their new-found fame and glory and who let – and sometimes lead – unscrupulous journalists and editors get away with it all. This situation creates a de facto imbalance that always leads to a radicalisation of tone in response to the hegemonic arrogance of the “official science”. Should the media coverage include caveats and present information instead of opinion, the discourse would gain in substance and form, as this blog demonstrates so well in #94. In the end, scientists such as yourself have been startled by the success of the popularisation of your discipline (at least some aspects of it) in political, economic and media circles. It has surely created pressure on you and your research, as other intelligent people with different backgrounds will scrutinize your work since policies affecting them are derived from it. Welcome to the 21st century.

    • Anonymous Lurker
      Posted Oct 20, 2009 at 12:23 PM | Permalink

      Re: Rob Wilson (#78),

      In fact, the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

      As a lurker on this and many other climate blogs, I have to say that the thing that most stands out about this one is the LACK of the kind of unnecessarily hostile tone you seem to be describing. Maybe it seems hostile and negative to you because your only point of reference is your private conversations within academia and the climate science community, in which there’s no hostility because you’re all playing for the same team. But on pretty much any other site in the climate blogosphere, enemies-of-the-blog are labelled either ignorant hillbilly fascists in hock to Big Oil, or radical socialists out to establish One World Government via climate legislation. Here at CA, however, the focus is on the science. Steve seems to use his comment-snipping powers almost solely to snip people on his own side of the debate who get overly enthusiastic in their criticisms… other blogs snip only their enemies’ comments.

      Seriously, if you think CA has an unnecessarily negative tone, you need to spend a few minutes with Google and see what else is out there for comparison.

    • John A
      Posted Oct 20, 2009 at 3:23 PM | Permalink

      Re: Rob Wilson (#78),

      1. Firstly, the decision to use Yamal was both mine and Rosanne’s and we do not regret this decision.

      2. Please do not forget that DWJ2006 did in fact use BOTH the Polar Urals and Yamal data – the former for the STD version and the latter for the RCS version.

      Are you going to issue a corrigendum for the fact that you published Yamal results with Polar Urals core counts?

      I think the point Steve has made is that Yamal fails the statistical control tests that even Briffa himself had said should apply for the RCS method to be valid.

      This makes …

      3. The variance heteroscedasticity issue was not the only reason why I used the Yamal chronology compared to the RCS generated version of the Polar Urals. I also undertook local calibrations against June-July mean temperatures:

      1883-1970
      Polar Urals: r = 0.53 / DW = 1.62 / linear trend in residuals – r = -0.18 (ns)
      Yamal: r = 0.63 / DW = 1.75 / linear trend in residuals – r = -0.05 (ns)

      1883-1990
      Polar Urals: r = 0.53 / DW = 1.54 / linear trend in residuals – r = -0.16 (ns)
      Yamal: r = 0.61 / DW = 1.69 / linear trend in residuals – r = -0.08 (ns)

      So – Yamal was also simply a better estimate (albeit slight) of local summer temperatures. NB. These correlation results are a little different from Steve’s – differences of CRU2 vs CRU3 and period analysed???

      …dangerous and irrelevant.

      Yamal is simply a bad sample with a demonstrably wrong method applied to its compilation. It makes no difference what statistical treatments are applied because they are biased by the sample used. The key variance is against local temperatures using a dangerously low sample of cores from Yamal – which is where the (inadvertent) cherry picking may have occurred. I understand Rosanne likes to pick cherries but that’s only valid in the kitchen and not in the laboratory.

      4. It might take me a couple of weeks as I am busy with teaching at the moment, but I will re-calculate the DWJ2006 RCS reconstruction using the POL RCS series instead of Yamal. As we weight the data equally between North America and Eurasia, I am pretty sure that the final outcome will not be that different. Certainly, the MWP values will be slightly higher but it will not change the conclusions of the paper – which – I would like to remind everyone – we were quite explicit in saying that prior to ~1300, the reconstruction estimates should be interpreted cautiously.

      5. Hey – In fact – for the hell of it, I might swap our RW Tornetrask series with Hakan Grudd’s version. I can include new data from central Asia and Europe and North America. Will the final result change significantly – I very much doubt it. Please understand that palaeoclimatology (and science as a whole) is not static. We are continually updating and adding new data-sets. No one paper should ever be treated as the be all and end all. While Steve is mired in the past, we are trying to improve on the uncertainties – not “moving on” as I am sure you will banter around, but simply trying to improve on what has been done before.

      Steve is mired in the past, as you say, because paleoclimatologists/dendrochronologists are using the same bad sampling, the same bad statistical methods, the same damn bad proxies over and over. And then repeating their mistaken beliefs to the next generation.

      Are you going to tell your students that the mistake of using a bad sample like Yamal means that the statistical metrics produced are meaningless?

      6. Finally, devote followers of this blog seem to be obsessed with an elevated MWP. It is stated so very often that we are “cherry picking” purposely to deflate the MWP. This is simply not the case. In fact, the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

      It’s undeniable that there are climate authors with an agenda to promote the Modern Warming Period as without equal, and it’s not exactly surprising that this has become a controversial issue. As recently as 2005, I noted that Michael Mann was denying that the MWP and LIA were anything other than North Atlantic regional phenomena – and no doubt he thought he was speaking for the “consensus”.

      snip – OT

      So the ultimate question that is unresolved while you are “trying to improve on the uncertainties” is: what exactly are you recording the variance of?

    • Ryan O
      Posted Oct 20, 2009 at 5:44 PM | Permalink

      Re: Rob Wilson (#78),
      .
      Thank you for dropping by (several times). I apologize for the bluntness of the following (especially since you seem to be one of the good guys), but since you dropped by instead of, say, Briffa, you get to read it. If you’re wondering why the tone is what it is . . . well, here’s my take.
      .
      I take issue with the following:
      .

      6. Finally, devote followers of this blog seem to be obsessed with an elevated MWP. It is stated so very often that we are “cherry picking” purposely to deflate the MWP. This is simply not the case. In fact, the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

      .
      No. Many of the regulars here are far less concerned about efforts to “deflate” the MWP than they are about the lack of any credible evidence that the proxies used to reconstruct the MWP are capable of doing so – at least not to within the discrimination claimed (like being able to tell whether current temperatures are “unprecedented”). This is a subtle, but important, difference. I personally really couldn’t care less why you chose Yamal over Polar Urals, and I also would not ascribe any ulterior motive to your choice.
      .
      What I do care about is the cavalier attitude the entire dendro community takes with respect to its work. You realize you have multiple collinear signals (temp and CO2) within each sample and no way to distinguish them, right? You do realize that even split cross validation tests take place during times when these two factors remain collinear, right? You do realize that ring growth is not linear in real life, right? Or that precipitation also affects growth? Or that calibrating based off trees that right now are temperature stressed – but then reconstructing with trees that cannot be shown to also be temperature stressed (and likely aren’t) invalidates the calibration, right? Or that if you have two sets of data – like, say, Polar Urals and Yamal – and they behave differently with respect to each other and local temperature – then that means something is frickin wrong with the theory that allows their use as temperature proxies, right?
      .
      If you wonder why there is so much emotion tied to these discussions . . . well, that’s why. The issue of whether Yamal or Polar Urals (or both . . . or neither) should have been used is just one among many. They are all important issues, since the output of this field has had – and continues to have – a dramatic political influence that will affect every one of us. The answers in both the peer reviewed literature and from dendros themselves (both here and elsewhere) have yet to resolve any of them. The behavior of many individuals in the dendro community (from the outside looking in) invite suspicion. My honest assessment of the community is that (generally speaking) it is deliberately trying to over-reach its ability to draw inferences about past climate by biased sampling and archiving, poor use of statistics, habitual refusal to disclose data and methods, and lack of rigorous procedures for selecting and processing data.
      .
      I imagine this sentiment is shared by many.
      .
      If the dendro community wants the tone to change, then it had better start doing rigorous science.

      • ianl8888
        Posted Oct 20, 2009 at 7:31 PM | Permalink

        Re: Ryan O (#127),

        ” … calibrating based off trees that right now are temperature stressed – but then reconstructing with trees that cannot be shown to also be temperature stressed (and likely aren’t) invalidates the calibration … ”

        Yes. I’ve asked that question three times now in the various threads. It’s not been acknowledged as a question, never mind any attempt at an answer. Very sad

        • Ryan O
          Posted Oct 20, 2009 at 8:20 PM | Permalink

          Re: ianl8888 (#130), That’s because there isn’t a quantifiable answer. Therefore, the question is avoided.

        • ianl8888
          Posted Oct 20, 2009 at 9:08 PM | Permalink

          Re: Ryan O (#134),

          Surprise me :)

        • Ryan O
          Posted Oct 20, 2009 at 10:44 PM | Permalink

          Re: ianl8888 (#136), I wish I could.

        • ianl8888
          Posted Oct 20, 2009 at 11:43 PM | Permalink

          Re: Ryan O (#137),

          Thanks, Ryan O … I was being sardonic.

          Long experience has shown me that if a direct question is evaded, ignored, ridiculed or straw-manned, then the question is accurate.

        • MikeN
          Posted Oct 21, 2009 at 12:33 AM | Permalink

          Re: ianl8888 (#130),
          why can’t it be calibrated? Reconstructions of tree line locations are done. If the historical tree line location is known, as well as calibrations to modern trees at the modern tree line, as well as modern trees not at the tree line, couldn’t you then go further back in time, and appropriately calibrate past trees not at the tree line, given the available info?

        • ianl8888
          Posted Oct 21, 2009 at 12:50 AM | Permalink

          Re: MikeN (#140),

          Straw man … right on cue

          The question is:

          if current trees are excluded because they don’t match the instrumented temperature record, how can such similarly affected paleo-specimens be identified and thus excluded from past reconstructions ?

          Your answer offhandedly assumes “appropriately calibrate past trees … available info”. Next, please

        • MikeN
          Posted Oct 21, 2009 at 7:59 AM | Permalink

          Re: ianl8888 (#141), ian you referred to trees that are temperature stressed, so I assume your question was about trees at the tree line now vs trees that were not at the treeline in the past. Now you’ve changed the question.

        • ianl8888
          Posted Oct 21, 2009 at 3:21 PM | Permalink

          Re: MikeN (#159),

          No, I didn’t – same question, I’ve asked it four times now

          Please stop obfuscating and answer it, if you may

        • bender
          Posted Oct 21, 2009 at 3:42 PM | Permalink

          Re: ianl8888 (#178),
          MikeN is answering your question – to the extent that changes in climatic treeline (his point) are the major driver of changes in stress (your question).

        • MikeN
          Posted Oct 22, 2009 at 8:07 AM | Permalink

          Re: ianl8888 (#179), Ian, then perhaps you are misusing the words temperature stressed. What do you mean by the question:
          ” … calibrating based off trees that right now are temperature stressed – but then reconstructing with trees that cannot be shown to also be temperature stressed (and likely aren’t) invalidates the calibration … “

        • bender
          Posted Oct 21, 2009 at 8:20 AM | Permalink

          Re: MikeN (#140),

          why can’t it be calibrated? Reconstructions of tree line locations are done. If the historical tree line location is known, as well as calibrations to modern trees at the modern tree line, as well as modern trees not at the tree line, couldn’t you then go further back in time, and appropriately calibrate past trees not at the tree line, given the available info?

          In theory the ever-shifting response can be calibrated. In practice it can’t. It is possible to reconstruct the treeline at isolated spot locations, but you cannot possibly get enough samples to reconstruct the whole line as it moves over time. Perhaps you could get enough spots to parameterize a simulated fit to the treeline, and then use THAT in the dynamic temperature-response calibration. To my knowledge this has never been done. But it’s a good idea for a PhD project. Do it for Yamal larch.

      • Geoff Sherrington
        Posted Oct 21, 2009 at 2:36 AM | Permalink

        Re: Ryan O (#127),

        I would add to your comments only that, AFAIK, this CA community will welcome viable proxy methods suitable for assisting a global policy formulation, if a policy is prudent.

        We just do not see evidence that dendroclimatology is a viable method based on results to date and on uncertainties that are so easy to envisage in Nature.

        It is becoming increasingly clear that stationarity and uniformitarianism cannot be assumed to hold true to the degree frequently assumed in literature written in the last 20 years – for all proxy methods – extending back even a few thousand years.

        If this turns out to be the case, then it will be disappointing. We all have a genuine, positive need to be able to reconstruct with confidence.

        Please do not confuse critical comments with negativism.

    • Layman Lurker
      Posted Oct 20, 2009 at 6:45 PM | Permalink

      Re: Rob Wilson (#78),

      the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude, and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.

      Dr. Wilson, thank you for your comments here. AFAIC, I am totally open to your contributions and would encourage more active participation from you. There will be rigorous discussions to be sure, but I think you will find that commenters here, by and large, are not blind to legitimate science.

    • Steve McIntyre
      Posted Oct 22, 2009 at 1:24 PM | Permalink

      Re: Rob Wilson (#80),

      I also undertook local calibrations against June-July mean temperatures:

      1883-1970
      Polar Urals: r = 0.53 / DW = 1.62 / linear trend in residuals – r = -0.18 (ns)
      Yamal: r = 0.63 / DW = 1.75 / linear trend in residuals – r = -0.05 (ns)

      1883-1990
      Polar Urals: r = 0.53 / DW = 1.54 / linear trend in residuals – r = -0.16 (ns)
      Yamal: r = 0.61 / DW = 1.69 / linear trend in residuals – r = -0.08 (ns)

      So – Yamal was also simply a better estimate (albeit slight) of local summer temperatures. NB. These correlation results are a little different from Steve’s – differences of CRU2 vs CRU3 and period analysed???

      A few points:
      1) I checked CRU2 vs CRU3 and there do not appear to be any material differences.
      2) While Rob did not provide exact details of his analysis or t-statistics, it appears that he didn’t compare Yamal results to the Yamal gridcell, but to the Polar Urals gridcell. Thus his Yamal results differ from mine, which were done on the gridcell that actually corresponds. The Polar Urals gridcell has a much longer record.
      3) I slightly changed my script to also produce Durbin-Watson stats and trend in residuals and I got totally different results than Rob even under his gridcell convention. And there is an interesting statistical issue here. With the Yamal crn against the Polar Urals gridcell, I got a DW of 0.865 – below 1.5 is a problem area for the DW statistic – while Rob reported a DW statistic of 1.69. Instead of regressing the tree ring CRN (effect) against temperature (cause), Rob regressed temperature (cause) against the chronology (effect). When I did an inverse regression, I got a DW of 1.61. So we have a situation where the DW statistic yields very different results depending on the direction of the regression.

      I might add that the calibration carried out by Rob Wilson here goes straight into Brown-Sundberg type issues in a simpler context than MBH98 or Mann 2008.
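      [The dependence of the Durbin-Watson statistic on regression direction is easy to demonstrate with a toy example. This sketch uses synthetic stand-in series, not the actual data or script discussed above:]

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: near 2 means little lag-1 autocorrelation."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def ols_residuals(y, x):
    """Residuals from a simple OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
n = 100
temp = rng.normal(size=n)        # stand-in gridcell temperature
ar_noise = np.zeros(n)           # strongly autocorrelated non-climatic noise
for t in range(1, n):
    ar_noise[t] = 0.9 * ar_noise[t - 1] + rng.normal()
chron = temp + ar_noise          # stand-in ring-width chronology

# Direct regression (chronology on temperature) leaves the autocorrelated
# noise sitting in the residuals; the inverse regression can mask it.
dw_direct = durbin_watson(ols_residuals(chron, temp))
dw_inverse = durbin_watson(ols_residuals(temp, chron))
```

      With autocorrelated noise in the chronology, the direct regression exposes it (low DW), while the inverse regression of the same pair of series can report much healthier-looking residuals – which is the asymmetry noted in point 3.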

      • bender
        Posted Oct 23, 2009 at 4:41 AM | Permalink

        Re: Steve McIntyre (#204),
        Calling Dr. Wilson …

        • Steve McIntyre
          Posted Oct 23, 2009 at 8:35 AM | Permalink

          Re: bender (#206),

          I emailed Rob separately notifying him of these results. He said that he “probably” did use the Polar Urals gridcell for Yamal as it had a longer record. He observed that, as long as the regression residuals were uncorrelated, he was “happy”; without addressing the opposing results from direct and inverse regression head-on, he commented as follows:

          As for regression direction – indeed – a very interesting methodological issue. The easiest way, mathematically, is tree (predictor) vs temp (predictand) which is pretty much the standard approach to most regression based temporal calibration. One can of course swap the process as you say (which results in quite different residual results). You then have to invert the equation to derive the reconstruction.

          Which is correct – there is no easy answer. They are all biased in their own way. Scaling is somewhere in the middle.

          Calibration is a topic that we’ve discussed here from time to time, more or less settling on the Brown and Sundberg approach as offering a principled approach towards multivariate calibration. I reverted to Rob, pointing out that this sort of issue is considered at length in the multivariate calibration statistics literature, and observing that Ross and I had also made this point in our short PNAS comment on Mann 2008 (with two highly relevant citations there, especially Brown and Sundberg 1987), and that we’ve discussed Brown’s articles at CA, noting that they were not easy, but very good.

          Rob replied a bit testily that, since I was “NOT part of the main stream”, I had no idea what is being discussed w.r.t. methodology in many, many meetings and workshops, and speculated that it would all start moving in a Bayesian probabilistic direction in the coming years.

        • bender
          Posted Oct 23, 2009 at 8:42 AM | Permalink

          Re: Steve McIntyre (#212),

          as long as the regression residuals were uncorrelated, he was “happy”

          My good eyeballs suggest that a windowed D-W would reveal this is not true in the modern divergence period. If I’m right, will his “happiness” change?

        • bender
          Posted Oct 23, 2009 at 8:46 AM | Permalink

          Re: Steve McIntyre (#212),

          Rob replied a bit testily that since I was “NOT part of the main stream”, I had no no idea what is being discussed w.r.t. methodology in many many meetings and workshops and speculated that it would all start moving into a Bayesian probabilistic direction in the coming years.

          Unfortunately, that won’t solve their addiction problem, which is the primary problem.
          .
          But seriously, “moving on” already, when they still haven’t bothered learning Ed Cook’s bootstrapping method? Good luck, dendros!

  47. Posted Oct 20, 2009 at 6:06 AM | Permalink

    What I would like to see is a measure of precision (the error bars). High correlation and replication do not add precision; they add confidence in the relationship between ring widths and temperature.

    But as we have seen, you dial a few data in here, a few data out there, and the entire historical record moves by degrees.

    When will someone do the math to show the compounding error bars? Ring widths, as noted here many times, are predicated on many factors beyond Temp. The more factors (local and global) the less impact Temp has on rings, the lower the signal (the higher the noise).

    You cannot gain precision by amplifying the signal – you amplify the errors as well.

    If (as I suspect) these proxies are only good to within a couple of degrees in modern times, and maybe up to 5 degrees in historic times, then we have the answer. These treemometers do not have the precision to measure temperature to a tenth of a degree regionally (therefore the global temp is only known to a degree).

  48. Posted Oct 20, 2009 at 6:09 AM | Permalink

    Mr. Wilson,

    You said you want to improve the uncertainties. How about proving the supposed accuracy or precision? What are the error bars (really, not statistically).

  49. Posted Oct 20, 2009 at 6:39 AM | Permalink

    Dear Dr Wilson

    I am sure I speak for many here in saying I really appreciate your presence and comments, even if I’m not sure I agree with them all. As an ex-“warmist”, I know very well how easy it is (a) to misread cruel intent when none is intended (b) to blame innocent science when the problem may lie elsewhere (c) to vent, when one discovers what look like “whoppers”.

    What we miss most is the open dialogue that might help overcome and prevent all this, and clear up misunderstandings that simply accumulate for want of a dialogue. I know this takes precious time for you, but it would prevent much future mischief.

    The concerns of most here (probably) are not in the past but in the future ie if we sign up to costly measures that will not help the climate because there is no climate problem.

    It was suggested (by bender?) that we might collect (and select) questions for you to make response easier. Would that help?

  50. RickA
    Posted Oct 20, 2009 at 6:40 AM | Permalink

    Re: Rob Wilson #77:

    Thank you for your information.

    I look forward to your new analyses.

  51. dearieme
    Posted Oct 20, 2009 at 7:01 AM | Permalink

    snip – while this may interest you, it has nothing to do with the thread.

  52. Dishman
    Posted Oct 20, 2009 at 7:07 AM | Permalink

    Rob Wilson wrote:

    We are continually updating and adding new data-sets. No one paper should ever be treated as the be all and end all. While Steve is mired in the past, we are trying to improve on the uncertainties – not “moving on” as I am sure you will banter around, but simply trying to improve on what has been done before.

    What that says to me is that the product lifecycle on papers essentially ends with publication. You (individually and collectively) seem uninterested in supporting papers once they have been published.

    That misses out on one of the biggest factors in quality processes – ongoing improvement.

    Rather than address the flaws in a paper, it’s easier to just make a whole new paper. Unfortunately, each new paper will contain its own new set of flaws. There is no feedback mechanism for converging on “quality”.

    Unfortunately, this appears to me to be just one more gaping hole in the overall approach to quality utilized by the “climate science” community.

    Please understand that as an engineer, I do not respect that.

  53. Erich
    Posted Oct 20, 2009 at 7:13 AM | Permalink

    Re: Rob Wilson #77:

    Your point 3:

    You present no convincing evidence for accepting one series over the other using these criteria. Since you quote ‘significance’, continuing down this road you should test for a significant difference in correlation coefficients between the Polar Urals and Yamal. To complicate matters you report ‘trend in residuals’, so it seems that you fitted simple regressions rather than correlations and your ‘r’ is the square root of the model R-squared – correct? In any event, this is all very naive given that these are time series data: the autocorrelation of the residuals makes your significance tests appear far too significant, because of the type I errors introduced by ignoring autocorrelation – if that is what you have done.
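    [A standard way to handle this – shrinking the effective sample size for lag-1 autocorrelation before the t-test – can be sketched as follows. This is an illustrative implementation of the general idea, with synthetic series; it is not the method used in DWJ2006:]

```python
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

def corr_test_adjusted(x, y):
    """Pearson r with a t-test whose degrees of freedom use an effective
    sample size reduced for lag-1 autocorrelation in both series."""
    n = len(x)
    r = float(np.corrcoef(x, y)[0, 1])
    phi = lag1_autocorr(x) * lag1_autocorr(y)
    n_eff = n * (1 - phi) / (1 + phi)
    n_eff = max(3.0, min(n_eff, float(n)))  # keep df sane
    t = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, float(p)

def ar1(n, phi, rng):
    """Generate an AR(1) series with coefficient phi."""
    out = np.zeros(n)
    for t in range(1, n):
        out[t] = phi * out[t - 1] + rng.normal()
    return out

# Two independent but autocorrelated series, 108 "years" (1883-1990)
rng = np.random.default_rng(2)
x, y = ar1(108, 0.8, rng), ar1(108, 0.8, rng)
r, n_eff, p = corr_test_adjusted(x, y)
```

    For persistent series, n_eff typically ends up a fraction of the nominal n = 108, so a correlation that looks “significant” at full sample size may not survive the adjustment.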

  54. LarryT
    Posted Oct 20, 2009 at 7:17 AM | Permalink

    There is a lawsuit going on that accuses oil/gas companies of causing global warming, increased sea height and hurricane intensity. This is a wonderful opportunity to obtain not only the original data, programs and methods but also any discarded data/results that went against the theory. If you have requested and been denied the above, volunteer as an expert witness and get the attorneys to do a legal discovery.

  55. Geo
    Posted Oct 20, 2009 at 8:00 AM | Permalink

    “Oooh, baby. .. look at the rack on that chronology!”

    “Yeah, and her daddy is loaded too.”

  56. bender
    Posted Oct 20, 2009 at 8:10 AM | Permalink

    Question: if high variance is a hallmark of “divergence”, is it appropriate to dispense with the chronology containing the high MWP variance/divergence? Maybe the reason divergence is “unique to the late 20th century” (Jim Bouldin, pers. comm.) is that nobody’s looked at the Polar Urals MWP?

  57. Kenneth Fritsch
    Posted Oct 20, 2009 at 8:15 AM | Permalink

    Steve M, thanks much for allowing me to see what I was missing after reading the two most recent Rob Wilson replies here at CA. I was scratching my head and attempting to understand the relevant points he was making and thinking, in the context of standard statistical analysis (from a layperson’s perspective), I just was not getting it.

    Inner beauty is a beautiful thing, once a sensitive soul points the way. It works its way into the mind and makes acceptance of all things not just a possibility but a probability. And as the negativity that Rob Wilson admonished begins to fade away, this former skeptic will start to be at peace with the world.

  58. AJ Abrams
    Posted Oct 20, 2009 at 8:24 AM | Permalink

    Mr. Wilson,

    I think your point number one says more than anything else

    “1. Firstly, the decision to use Yamal was both mine and Rosanne’s and we do not regret this decision.”

    If after finding out that the entire uptick at the end of Yamal was a result of low sample size, improper use of RCS, and trees that are behaving in a manner inconsistent with Briffa’s own methodology, you can still say that you have no regrets about using the series, then I suspect that there is no evidence that could be presented to change your dogma. snip

    Your other point that you compared the two series and then chose the one that best matches the current warming is also completely without merit. Why? The difference between series of your own calculations aren’t significant AND…and this is really important….the two series in reality show vastly different results for post 1900 warming. If you cannot scientifically explain why two series which have statistically identical significance (for the period you are looking at – summer temps) BUT are in reality vastly divergent then you must not use either of them. Sorry, that is science.

  59. Robinedwards
    Posted Oct 20, 2009 at 8:40 AM | Permalink

    This is a fascinating thread, especially as it is attracting the attention of a very highly respected climatologist/dendrologist. What is quite clear is that there is a deep controversy or disagreement about which proxy series is likely to represent most faithfully what actually happened in the past. We can, I presume, take it that the practicalities of measurement and recording technologies are unimpeachable. In other words, the TRW and MXD values that have been reported (or at least used!) are absolutely reliable relative to the changes that occur within a core over time.

    If this is so the thread is concentrating on differences in the indications of climate between either individual trees in a fairly small location and/or the differences between substantially different locations within the same geographic area.

    What puzzles me is that for a considerable time (the last hundred years or so?) there have been observations not only of tree cores (or varves) but also actual temperatures. We all know of the problems with temperature measurements, revealed for instance by the studies of hundreds of sites in the USA, and probably happening elsewhere too, but surely actual temperature measurements are a better index of temperatures than “second-hand” estimates, which is what at best all proxies are.

    So, what about checking on the existence or otherwise of fairly recent specific graph shapes by utilising temperatures that have been measured directly?

    Am I missing something fundamental here? Are there no suitable actual sites? What about Abisko, NW Sweden, 68.21N 18.49E, not so far away from Yamal and adjacent to Tornetrask, where climate records have been kept by the Scientific Research Station (The Royal Swedish Academy of Sciences) since 1913? I would guess that their observations are very sound indeed. Their data show that they have experienced four distinguishable regimes since then: a cool period up to about 1930, ending with an upward step to a fairly stable warmer period; a downward step in the early 1950s, restoring the original state and enduring until the late 1980s (1988?); and then a very sharp and pronounced step to a warmer regime. These hypotheses are based on year-average data. I guess I should look at summer temperatures too!

    Perhaps there are more high quality meteorological stations in this region. If so, are they not the places that should inform our conclusions about major climate shifts, such as hockey sticks?

    Does anyone have any suggestions?

    • Posted Oct 20, 2009 at 9:18 AM | Permalink

      Re: Robinedwards (#99), see here for thermometer records around Yamal. Discuss there not here!

    • hmmm
      Posted Oct 20, 2009 at 12:08 PM | Permalink

      Re: Robinedwards (#99),

      Robin,
      The thing is that the modern record doesn’t tell you anything about the last 1500 years. The time range is important because these pre-instrumental records are then turned around and used in fancy climate prediction models based on how temperatures are affected by events:

      http://noconsensus.wordpress.com/2009/10/04/how-important-is-yamal/

      and also because these reconstructions are used as propaganda that we are at unprecedented temperature levels, to sway public opinion and to direct political, ummm, direction.

    • Nathan Kurz
      Posted Oct 20, 2009 at 4:03 PM | Permalink

      Re: Robinedwards (#99),

      So, what about checking on the existence or otherwise of fairly recent specific graph shapes by utilising temperatures that have been measured directly?
      Am I missing something fundamental here?

      It’s possible I’m missing more than you are, but I think the point is that the instrumental temperature records are already being used to calibrate the dendro record. In the simplest form (the only form I think I understand) the instrumental record shows a significant but not extreme temperature increase, and Yamal shows an extreme increase in tree ring widths over the same time period.

      If you use this period as calibration, you conclude that each unit of ring-width change corresponds to only a small temperature change. This in turn causes you to conclude that the increase in modern temperatures is unprecedented, since no other period in the dendro record shows a period of such rapid growth. The hockey stick is created not by elevating the blade, but by flattening the shaft.

      Polar Urals, by contrast, has less increase in ring widths during the calibration period, and thus implies greater temperature sensitivity. This in turn implies that the historical dendro record contains greater temperature swings. Again, this isn’t because the blade is shorter (as this portion has been calibrated to the instrumental record), but because the shaft is less flat.
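      Nathan's scaling argument can be made concrete with invented numbers: the calibration slope (degrees per ring-width index unit) multiplies every pre-instrumental anomaly, so the series with the exaggerated modern rise reconstructs a cooler MWP. Nothing below uses real Yamal or Polar Urals data; both chronologies and the MWP anomaly are made up for illustration.

```python
import numpy as np

# modest instrumental warming over a 30-year calibration period (invented)
temps = np.linspace(0.0, 0.6, 30)

# two hypothetical chronologies over the same period
rings_yamal = 1.0 + 3.0 * temps   # extreme modern ring-width rise
rings_urals = 1.0 + 1.0 * temps   # modest modern rise

def calib_slope(rings, temps):
    """Least-squares slope of temperature regressed on ring-width index."""
    return float(np.polyfit(rings, temps, 1)[0])

# the same medieval ring-width anomaly then reconstructs very differently
mwp_anomaly = 0.8   # index units above the mean, invented
for name, rings in (("Yamal-like", rings_yamal), ("Urals-like", rings_urals)):
    s = calib_slope(rings, temps)
    print(f"{name}: {s:.2f} degC per index unit -> MWP ~ {s * mwp_anomaly:.2f} degC")
```

      The "Yamal-like" series gets a slope a third the size of the "Urals-like" one, so an identical medieval ring-width excursion maps to a much smaller reconstructed temperature: the flattened shaft.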

  60. AJ Abrams
    Posted Oct 20, 2009 at 8:42 AM | Permalink

    Mr. Wilson,

    In addition –

    First, thanks for posting here. I should have stated that to begin with.

    Second, I would reinforce bender’s continuing point that “special pleading” is a no-no in the sciences I studied.

  61. Craig Loehle
    Posted Oct 20, 2009 at 8:42 AM | Permalink

    A comment on the level of correlation taken as indicating a good series. In Rob’s comment, he shows r of .5 to .6 for the 2 sites for correlation to temperature. In many climate papers we have seen series used that correlate with r=.2 or so. In my work I view anything with r > .9 as “good”. An r of .5 is only an R^2 of .25 (not much explained). It is all very well to work with data with weak signals in an exploratory sense, but then extraordinary claims are made about the significance of the results that are unwarranted by the precision. Bender’s comment about error bars applies. This is a general comment, not about Rob specifically.
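    Craig's arithmetic can be spelled out in a couple of lines (a trivial sketch; none of these numbers come from Rob's analysis):

```python
# The share of variance explained is r squared, so a correlation that
# sounds respectable explains rather little of the variance.
pairs = {r: r ** 2 for r in (0.2, 0.5, 0.6, 0.9)}
for r, r2 in pairs.items():
    print(f"r = {r:.1f}  ->  R^2 = {r2:.2f}  ({r2:.0%} of variance explained)")
```

    An r of .2, common in the literature Craig mentions, leaves 96% of the variance unexplained.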

    • Craig Loehle
      Posted Oct 20, 2009 at 8:51 AM | Permalink

      Re: Craig Loehle (#101), BTW, all claims of unprecedented temperature rise completely ignore error bars on the reconstructions (which have of course been absent in many cases to start with), ignoring what is taught in the first month of intro stats.

    • bender
      Posted Oct 20, 2009 at 9:10 AM | Permalink

      Re: Craig Loehle (#101),

      It is all very well to work with data with weak signals in an exploratory sense, but then extraordinary claims are made about the significance of the results that are unwarranted by the precision.

      Here, here.

      • kim
        Posted Oct 20, 2009 at 8:18 PM | Permalink

        Re: bender (#103),

        Hear! Hear! It’s cognate with ‘hark’.

        Don’t it always seem to go,
        Ya’ don’t know what ya’ got til’ it’s gone.
        Take Yamal Grove, and put up a barking lot.
        ===============================

  62. TAC
    Posted Oct 20, 2009 at 10:12 AM | Permalink

    SteveM: Kudos for providing yet another lucid account of “HS antics”. With each revelation this story grows increasingly bizarre, and one wonders when the HS edifice will finally be torn down by its creators and replaced with something else — possibly just an empty lot. A small part of me hopes the end doesn’t come soon, however. CA, with its intellectual rigor and clever language, is great fun to read and I continually look forward to each new chapter. :-)

  63. Gary
    Posted Oct 20, 2009 at 10:52 AM | Permalink

    I’ve not encountered windowed variance before. Has it proven to be a useful measure, and what are its limitations? A Google search hasn’t helped me find the answers.
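    For what it’s worth, “windowed variance” here just means the variance of the chronology computed in a sliding window of fixed length. A minimal sketch, with an arbitrary window length and synthetic data standing in for a chronology:

```python
import numpy as np

def windowed_variance(x, window=51):
    """Variance of x in a centered sliding window of the given (odd) length."""
    half = window // 2
    out = np.full(len(x), np.nan)   # NaN where the window doesn't fit
    for i in range(half, len(x) - half):
        out[i] = np.var(x[i - half : i + half + 1], ddof=1)
    return out

rng = np.random.default_rng(1)
chronology = rng.standard_normal(500)   # stand-in for a tree-ring chronology
chronology[:100] *= 3.0                 # e.g. a poorly replicated early stretch

wv = windowed_variance(chronology)
print(f"early rolling variance ~ {np.nanmean(wv[:80]):.1f}, "
      f"later ~ {np.nanmean(wv[200:400]):.1f}")
```

    The noisy early stretch shows up as an elevated rolling variance, which is the sort of pattern the head post’s comparison of the two chronologies looks at.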

  64. Sean
    Posted Oct 20, 2009 at 12:22 PM | Permalink

    Rob (#77) says:

    “the fatal flaw in this blog and what keeps it from being a useful tool for the palaeoclimatic and other communities is its persistent and totally unnecessary negative tone and attitude and the assumption that our intention is faulty and biased, which keeps real discourse from taking place.”

    Rob, could you point us to some blogs that you would have us emulate? Maybe you could link a few threads that criticize Steve McIntyre in an entirely appropriate manner, i.e. no negative tone toward Steve, no questioning of Steve’s intentions, rather just a straightforward, factual analysis of the issues he raises.

  65. Geronimo
    Posted Oct 20, 2009 at 12:35 PM | Permalink

    Thanks for taking part, Dr. Wilson – your input is much appreciated. But this isn’t a Steve McIntyre-and-supporters v. climate-scientists contest. Whether you realise it or not, major policy decisions which will affect all of us for generations are being taken on the back of information provided by climate scientists to politicians, so I would assume that you, Rob, like anyone else outside the extreme factions in the debate, would want the information provided to the decision makers to be as accurate as possible.

    Yet here we have Steve McIntyre, for the second time by the way, having been denied the raw data behind published papers, able to cast strong doubts on their provenance within days of getting the data. Both papers had been through the peer review process used by climate scientists, and the Wegman report fully supported McIntyre/McKitrick on the first paper. The jury is still out on the second. But think carefully: these papers overturned previously unchallenged peer reviewed papers that supported the existence of a MWP, both were produced by open supporters of the AGW theory for the perceived rise in temperature of the late 20th century, and both principal authors refused to give out the raw data for review.

    This is more serious than McIntyre tweaking the noses of climate scientists and their methods: the papers you and your colleagues are publishing are being used to determine how we and our children will live our lives (whether you agree or not), and for my part I want them scrutinised to the nth degree before public acceptance. It’s not a competition between McIntyre and the climate community; it’s whether the climate community can, without criticism, change the life of everyone on the planet.

  66. Kevin
    Posted Oct 20, 2009 at 12:47 PM | Permalink

    An aside.

    I read this blog most days, but I worry about groupthink leaving me with blindspots, and the “echo chamber” effect. So I decided to try out the much maligned Real Climate site.

    It’s not bad for what it is trying to accomplish, but they do aggressively eliminate dissenting comments. Makes me appreciate what Steve does here.

  67. Kenneth Fritsch
    Posted Oct 20, 2009 at 12:56 PM | Permalink

    I have no reason to judge that Dr. Rob Wilson is not a very nice and well-intentioned scientist, but, all humor aside, can we agree that he does not understand the implications of the statistics involved when it comes to selection criteria and, further, that he is unaware of his lack of understanding?
    .

    In his apparent innocence of this situation, his reaction to criticism from CA as simply a “negative” attitude might be expected. Also, would not the dendros’ reaction be to circle the wagons in a tighter circle? I would think that any changes in the thinking on these topics will have to come from within the community or from some general critical judgments from other (and related) science communities.

  68. Posted Oct 20, 2009 at 1:00 PM | Permalink

    Re Steve’s headpost and Gary #108, Rob Wilson’s windowed variance strikes me as a reasonable, if ad hoc, summary statistic. It clearly shows that when sample sizes are low, as with Polar Urals before 1200 and 1400-1700, and Yamal post 1900 and 1500-1600, variances are correspondingly high, just as would be expected.

    So both series have periods when their sample size is too small, and periods when they might be OK. It seems to me that the obvious solution, since these sites are relatively near one another, is simply to merge them into one “supersite”, as suggested already by Alan S Blue #8 and Steve Geiger #61, but contra Jean S #89.

    Wilson himself (in #44 over on the NAS Panel and Polar Urals thread) has observed,

    As we have discussed through CA before, I was not happy with the resultant RCS chronology using the Polar Urals data. I know you do not agree with my decision here. Anyway, the Yamal series represented a RCS chronology from a nearby location. The figure below show the strong coherence between the Polar Urals STD and Yamal RCS series.

    So if they are close enough that one is just as good as the other (apart from intermittent small samples and the resulting high windowed variance), why not just merge them?

    If they are far enough apart that they might have different RCS curves, they could still be merged by first creating a separate RCS curve for each site, calculating core-wise residuals, and then merging the two sets of residuals. Polar Urals will then dominate when its sample size is bigger, and Yamal will dominate when its sample size is bigger, and everyone will be happy — or unhappy, as the chips may fall.
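    Hu's merging recipe (a separate RCS curve per site, core-wise residuals, then pooling by calendar year) can be sketched with synthetic data. The crude age-bin-mean "regional curve" below stands in for the spline fits dendros actually use, and all core counts, dates, and growth parameters are invented:

```python
import numpy as np

def rcs_residuals(cores):
    """cores: list of (first_year, widths). Returns (year, index) pairs,
    where index = ring width divided by the site's regional curve value
    at that cambial age (a crude age-bin mean, not a fitted spline)."""
    max_age = max(len(w) for _, w in cores)
    curve = np.array([np.mean([w[a] for _, w in cores if a < len(w)])
                      for a in range(max_age)])
    out = []
    for first_year, w in cores:
        for age, width in enumerate(w):
            out.append((first_year + age, width / curve[age]))
    return out

rng = np.random.default_rng(2)
def fake_site(n_cores, start):
    # each synthetic core: 120 rings, exponential age trend plus noise
    return [(start + 30 * i,
             np.exp(-0.01 * np.arange(120)) * (1 + 0.2 * rng.standard_normal(120)))
            for i in range(n_cores)]

site_a, site_b = fake_site(5, 900), fake_site(5, 950)
merged = rcs_residuals(site_a) + rcs_residuals(site_b)   # pooled residuals

# merged chronology: mean index per calendar year over both sites
years = sorted({y for y, _ in merged})
chron = {y: float(np.mean([v for yy, v in merged if yy == y])) for y in years}
print(len(years), min(years), max(years))
```

    In each calendar year the site with more cores contributes more residuals and so dominates the pooled mean, which is exactly the behaviour Hu describes.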

    • Kenneth Fritsch
      Posted Oct 20, 2009 at 1:27 PM | Permalink

      Re: Hu McCulloch (#116),

      So both series have periods when their sample size is too small, and periods when they might be OK. It seems to me that the obvious solution, since these sites are relatively near one another, is simply to merge them into one “supersite”, as suggested already by Alan S Blue #8 and Steve Geiger #61, but contra Jean S #89.

      So how is the proper sample size determined? I thought sample size was compensated for by the effect the size has on the calculated confidence intervals – until the size is too small to make meaningful calculations. What about the question of combining the series after the fact and not a priori? What if there are other series that could be combined into a super duper series? When do we say “enough series”, or “enough, I like the result”?

      Sorry but to this layperson none of this sounds very satisfying.

  69. MikeN
    Posted Oct 20, 2009 at 1:25 PM | Permalink

    >Polar Urals is in the Salehard gridcell which has a considerably longer temperature record than the Yamal gridcell (Mys Kammenyj).

    Now you tell me. Mys was next on my list of checks.

    • Posted Oct 20, 2009 at 2:09 PM | Permalink

      Re: MikeN (#118), Mike I think you’ve got the links back to front, Salehard borders on the Yamal peninsula (see map) and indeed, Salehard has a long thermometer record.

      • Espen
        Posted Oct 21, 2009 at 5:47 AM | Permalink

        Re: Lucy Skywalker (#120),

        Salehard/Salekhard has a long thermometer record indeed, but can it be trusted? It’s a big city (by arctic standards), and got town status in the thirties, according to Wikipedia. And it has an airport… Still it doesn’t show any dramatic increase, but rather the 1940s and recent camel bumps common to most arctic stations.

        I’m sorry if this is a FAQ, but exactly what recent temperature record was actually used to do the Yamal cherry-picking? I’m surprised that it’s possible to make a hockey stick out of arctic tree ring noise when arctic temperature records usually look more like a camelback than a hockey blade…

    • steven mosher
      Posted Oct 21, 2009 at 1:30 AM | Permalink

      Re: MikeN (#118), rather than work from HadCRU processed data, why not go to the source data? Nobody has any idea what the HadCRU temperature code looks like. Or use GISS.

  70. bender
    Posted Oct 20, 2009 at 3:01 PM | Permalink

    duke: I didn’t get to finish editing. I saw the error as the thing was heading uncontrollably off to cyberspace.
    .
    Tom P #72: why don’t you try the cross-spectral test that I mentioned in #98?

  71. Good Captain
    Posted Oct 20, 2009 at 3:58 PM | Permalink

    Given the importance replication plays in science, how appropriate would it be to openly take a second sample series of Yamal cores, with the sampling performed to a pre-agreed process with neutral observers? Is such an idea reasonable? Or is it ridiculous?

  72. Feedback
    Posted Oct 20, 2009 at 4:40 PM | Permalink

    (…) the assumption that our intention is faulty and biased(…)

    If you’re doing an “audit” obviously you have to check if the numbers and documentation are OK or not, you don’t have to assume anything – just see to that they hold up to the claims in the annual report (or prospectus, or press release, or peerreviewuedwhateverthatwas).

    Provided that the company gives access to the numbers and documentation, of course. But then again, if they don’t, they’ll be kicked out of the stock exchange… and out of business.

  73. Feedback
    Posted Oct 20, 2009 at 4:47 PM | Permalink

    PS. If anything is apt to create suspicion, or assumptions, or speculation, it’s the withholding of data and methods. Better transparency would definitely contribute to a better tone.

  74. MikeN
    Posted Oct 20, 2009 at 6:10 PM | Permalink

    Lucy, there is a rural station closer than Salekhard.

    • Posted Oct 21, 2009 at 11:53 AM | Permalink

      Re: MikeN (#128), you’re correct. But Mys Kammenyj only ran 1950-1994, and it’s more maritime than either the Yamal trees or Salehard. And…

      Re: Espen (#155), Salehard’s record is seriously different from the Yamal HS blade, see here, and if it had UHI that needed correcting, the difference would be worse. And…

      I’m going to try to do another visual comparison tonight, on all the nearby GISS records in all their editions, to give some idea of the variance in “local” thermometer records, as a first guide to the likely variance between Polar Urals and Yamal. And…
      I’ll try not to post any more thermometer stuff on this thread. :)

  75. Craig Loehle
    Posted Oct 20, 2009 at 7:37 PM | Permalink

    Rob Wilson: just to clarify–this is a tough crowd. If CA was a state, it would be “The Show Me State”. Handwaving here is like a red flag to a bull. The people here know more about “the divergence problem” than all but a few dendros, and more about some aspects of it (the statistical aspects) than any dendro.

    • jae
      Posted Oct 20, 2009 at 7:49 PM | Permalink

      Re: Craig Loehle (#131),

      LOL, CA IS a state. A broken state that will have to learn some new “paradigms.”

    • steven mosher
      Posted Oct 21, 2009 at 11:48 AM | Permalink

      Re: Craig Loehle (#131), Once upon a time, long ago, we had a thread devoted to Parker’s UHI study. He didn’t visit the site, but we did collect up a series of questions (suitably de-snarked) and submit them to him via a third party. That worked pretty well. No harm in trying that again.

  76. jae
    Posted Oct 20, 2009 at 8:30 PM | Permalink

    Rob Wilson, I applaud you. You have guts. Please keep up the dialogue.

  77. andy
    Posted Oct 21, 2009 at 1:03 AM | Permalink

    Is D’Arrigo et al 2006 available online? I couldn’t find a free version. The abstract states that 66 sites were used for the NH temperature reconstruction, so I assume it doesn’t really matter if one or two series are swapped for others with different current/medieval temperature ratios.

    Another issue is the divergence problem: the curves showed a good fit with NH temperatures during ~1880 – 1980, but in the latest decades the gap jumped to maybe 0.6 – 0.7 deg C. What if the NH temperature were derived from the corresponding 66 stations close to the proxy areas, instead of from the GISS index?

    [Jean S: Here.]

  78. Nathan
    Posted Oct 21, 2009 at 1:45 AM | Permalink

    RyanO

    “No. Many of the regulars here are far less concerned about efforts to “deflate” the MWP as they are about the lack of any credible evidence that the proxies used to reconstruct the MWP are capable of doing so – at least not to within the discrimination claimed (like being able to tell whether current temperatures are “unprecedented”). This is a subtle, but important, difference. I personally really couldn’t care less why you chose Yamal over Polar Urals, and I also would not ascribe any ulterior motive to your choice.”

    Have you done any back-of-the-envelope calculations on what a warmer MWP means? How does it affect the modelling etc? If the MWP was warmer, doesn’t that mean the climate system is less able to dampen perturbations?

    • ChrisZ
      Posted Oct 21, 2009 at 2:08 AM | Permalink

      Re: Nathan (#145),

      Have you done any back-of-the-envelope calculations on what a warmer MWP means? How does it affect the modelling etc? If the MWP was warmer, doesn’t that mean the climate system is less able to dampen perturbations?

      Without having to do any calculations, the very existence of a MWP (and a “Roman Optimum”) warmer than today would show that temperatures rising by another few degrees in future would be anything but catastrophic – after all, the ancestors not only of the people, but also the polar bears and penguins we see today lived during these periods, and there’s no good reason to assume they’d suffer any more fatally from future warmth than they did in the past. In historical records, unreliable as they may be, you don’t find many (if any) reports of fatal events connected with *warmth*, but usually with unseasonable *cold*.

    • Ryan O
      Posted Oct 21, 2009 at 5:35 AM | Permalink

      Re: Nathan (#145), No, I have not. Maybe someday.
      .
      Re: Geoff Sherrington (#149), Ditto that.

  79. Nathan
    Posted Oct 21, 2009 at 2:25 AM | Permalink

    ChrisZ

    Well, not quite: why would we expect the warming to stop at the same level as the MWP? And if the climate is less able to dampen perturbations, then we would expect the climate to change faster and further than currently predicted.

    It would also be interesting to see if the expected temp increase due to doubling CO2 would rise significantly if the MWP was about as warm (or warmer than today).

    • ChrisZ
      Posted Oct 21, 2009 at 5:16 AM | Permalink

      Re: Nathan (#147),

      why would we expect the warming to stop at the same level as the MWP

      Who said anyone expects that? Please do not put words in my mouth. I have no idea if or when current warming (assuming for the sake of argument that it is a fact) starts and stops. All I said was that previous warmer periods did – obviously – not cause any of the species we see today to go extinct, nor have I seen evidence that these past warm periods were barely bearable extreme situations. On the contrary, the slump down into the LIA was quite a bit more problematic in terms of crop failures etc., so IMHO there’s no well-founded reason to see temperatures exceeding the MWP – and we are still pretty far from there, as anyone currently attempting to grow grapes in Greenland will confirm – as necessarily dangerous.

    • Steve McIntyre
      Posted Oct 21, 2009 at 6:07 AM | Permalink

      Re: Nathan (#147),

      Nathan, I’ve said on many occasions in the past that, if you are right and the Stick being wrong means that climate is more sensitive, then we should know and govern ourselves accordingly. And we should not thank those people whose obstruction has delayed the identification of this error. And I should not be the only person asking this question.

      Please continue this discussion at Unthreaded as it has nothing to do with Yamal.

  80. Jonathan
    Posted Oct 21, 2009 at 2:29 AM | Permalink

    Nathan, all your comments are implicitly based on the assumption that we have a near-complete understanding of the major features of the climate system. The most natural interpretation of a MWP warmer than the CWP is that there are some significant features of the climate system which we haven’t yet identified.

  81. per
    Posted Oct 21, 2009 at 3:17 AM | Permalink

    “It is stated so very often that we are “cherry picking” purposely to deflate the MWP. This is simply not the case. “

    hmm. Let’s just look at this.
    Use of Yamal= statistically significant results= unprecedented warming= publication in Nature/ Science= fame= grants= personal wealth for scientist

    Non-use of Yamal= results weaker=precedents, i.e. not novel= no paper in science/nature= no fame/ grants/personal wealth

    Now it may be that climate scientists are the only scientists in the world to whom the prospect of fame/grants/personal wealth is a matter of supreme indifference. However, I doubt you can make that case, since there appear to be quite a few climate scientists who go to conferences all the time, are loaded with grant money, and coin in significant professorial salaries.

    And given that there are personal benefits in bigging up research results, don’t you think it appropriate that scientists should have to justify convenient choices which just happen to help the scientists themselves ?

    per

  82. Robinedwards
    Posted Oct 21, 2009 at 4:47 AM | Permalink

    Thanks, Nathan (#124), for your comments, which I understand. My suggestion is really aimed at accepting current instrumental observations from the same general area as reliable indicators of current levels and trends. The Abisko site, adjacent to Tornetrask, is not very far from Yamal, and the staff’s expertise and experience surely mean that their observations really do represent reality. Thus any available recent dendro data can be appraised using Abisko as a benchmark. Substantial divergence would weigh heavily against accepting tree-based data as a temperature indicator. What is highly desirable would be a recent/new collection of dendro data, preferably generated by a group or person of great experience (someone like Schweingruber, who has been extraordinarily diligent over many years?) for the region of interest – NW Siberia I suppose, with Yamal as the “bull” for obvious reasons – but more instrumental data from northern Swedish and Finnish sites would also be very valuable.

    What it is essential to ascertain is whether unimpeachable sources can identify current levels and, very importantly, trends. Steeply rising trends seem to be a crucial part of the postulates of those who believe in GW and AGW. Any substantial divergence between close-coupled instrumental and proxy data would raise warning flags that should be heeded.

  83. Nathan
    Posted Oct 21, 2009 at 6:14 AM | Permalink

    RyanO

    So, I assume that if the hockey-stick makers wanted to get rid of it, you think that would mean there is less impact from CO2? That seems like a good test for your theory, because if having a warmer MWP means that things will be worse with more CO2, it doesn’t make any sense that the hockey-stickers want to flatten it…

    I don’t know how to do it. Maybe you know someone who could do a back-of-the-envelope calculation?

  84. Nathan
    Posted Oct 21, 2009 at 6:15 AM | Permalink

    No worries Steve.

  85. Espen
    Posted Oct 21, 2009 at 8:30 AM | Permalink

    About tree lines: In this article (in Norwegian), it is discussed whether higher tree lines in southern Norway are a response to climate change, but they also point out that a reduction in domestic animals may be an important driver (I’ve seen parts of Norway’s mountains myself where goats and sheep used to graze but no longer do, and this has clearly made trees able to reclaim land close to the tree line). They further suggest that an increasing number of domestic reindeer may explain why the tree line in Finnmark (the northernmost part of mainland Norway) has decreased rather than increased (although they also mention that spring temperature has decreased in the north).

    Domestic reindeer are also common in Yamal and other parts of northern Siberia – this may complicate any attempt to use tree lines as a temperature proxy.

    • bender
      Posted Oct 21, 2009 at 8:53 AM | Permalink

      Re: Espen (#161),
      Sure, reconstructing tree lines is not easy. The existence of trees always lags behind the climatic envelope that defines the potential for trees. And latitudinal vs altitudinal treelines will behave differently as well. In the alpine scenario, when conditions are warm and dry and fires are frequent, the tree-climate lag is large. When it is warm and moist and fires are infrequent, the observed treeline starts to catch up to the theoretical treeline.
      .
      MikeN’s question is: supposing you could accurately reconstruct treelines, could you calibrate a dynamic temperature response – one that allows for shifting of the climatic envelope that defines treeline?

  86. Posted Oct 21, 2009 at 9:52 AM | Permalink

    Re Ken Fritsch #119,

    [HM] So both series have periods when their sample size is too small, and periods when they might be OK. It seems to me that the obvious solution, since these sites are relatively near one another, is simply to merge them into one “supersite”, as suggested already by Alan S Blue #8 and Steve Geiger #61, but contra Jean S #89.

    So how is the proper sample size determined? I thought sample size was compensated for by the effect the size has on the calculated confidence intervals – until the size is too small to make meaningful calculations. What about the question of combining the series after the fact and not a priori? What if there are other series that could be combined into a super duper series? When do we say “enough series”, or “enough, I like the result”?

    Obviously, the bigger the sample the better, but also the more costly, so a judgemental tradeoff has to be made between quality and cost. Steve’s point has been that the Yamal HS sample size does not meet standards imposed by dendros elsewhere, so that by their own standards it should not be used alone.

    Likewise, the high variance generated by the comparably small Polar Urals sample size in some of the earlier period caused Wilson to reject it alone. But then to be consistent, he should not have used Yamal alone either.

    My point was simply that if the two are appropriately merged into a supersite, the minimum combined sample size would be much higher than it is for either series by itself for most of the period, and would presumably meet the minimal core count standards imposed elsewhere. I’m hoping Wilson will comment here whether there would be some objection to combining them, after calculating separate RCS curves for each site if appropriate.

    Since the ring widths are highly non-Gaussian, in terms of both skewness and leptokurtosis, I have argued previously that it would be more meaningful to take the median across cores of the RCS residuals rather than their mean. Confidence intervals for the population median could then be computed from the order statistics using the binomial distribution, as I have indicated previously, with no parametric assumptions about the shape of the distribution required.
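Hu’s order-statistic construction can be sketched in Python (an illustration of the standard binomial method, not code from this thread; the function names are mine):

```python
import math

def binom_cdf(k, n, p=0.5):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def median_ci(x, conf=0.95):
    """Distribution-free confidence interval for the population median.

    Returns (lower, upper, exact_coverage), where the interval is the pair
    of order statistics (x_(j), x_(n+1-j)) with coverage
    1 - 2*P(Bin(n, 1/2) <= j - 1); no parametric assumptions are needed.
    """
    xs = sorted(x)
    n = len(xs)
    alpha = 1 - conf
    j = 1
    # push j inward while the interval still covers at least conf
    while binom_cdf(j, n) <= alpha / 2:
        j += 1
    lo, hi = xs[j - 1], xs[n - j]   # 0-based indexing of x_(j) and x_(n+1-j)
    coverage = 1 - 2 * binom_cdf(j - 1, n)
    return lo, hi, coverage
```

With n = 25 cores, for example, this picks the 8th and 18th order statistics, with exact coverage of about 95.7%.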

    Having a relatively constant sample size (relative to either series by itself) will yield relatively constant-width confidence intervals, making heteroskedasticity less of an issue when it comes to calibrating the series or any index it is included in to instrumental temperature and then using the series to reconstruct pre-instrumental temperatures. However, the presence of heteroskedasticity does not in itself prevent such a calibration and reconstruction, provided the formulas are adjusted too take it into account in standard ways. Since the combined core counts fall off before 900 AD, the appropriately computed CIs before then will simply become larger — perhaps by enough to make comparisons meaningless, but there is no harm in looking at them.

    Taking logs first may also give a more symmetric distribution, and therefore make the CI width more like an indicator of standard deviation, and the Gaussian distribution a better approximation to the true distribution for purposes of inference. Too small a sample will simply imply that no inferences can validly be made.

    I have no idea if other series should be combined with Yamal-Urals to make a “super duper series”. Perhaps Steve or Rob could comment on that.

    • Kenneth Fritsch
      Posted Oct 21, 2009 at 12:00 PM | Permalink

      Re: Hu McCulloch (#163),

      Steve’s point has been that the Yamal HS sample size does not meet standards imposed by dendros elsewhere, so that by their own standards it should not be used alone.

      Hu M, thanks for detailing your thinking on this issue as it does help me understand better the available methods for doing CIs. I do think that Steve M was using this dendro standard as a means of pointing to some inconsistencies in their thinking vis a vis Yamal and Polar Urals, but not indicating that it has any statistical foundations.

      So my modified question would be: why doesn’t someone do the CIs for both series separately (and together)?

      I have a few observations that continue to nag me about the comparison of the Yamal and Polar Urals series (and I do think something can be learned looking at the differences as well as similarities):

      1) The coherence of the Yamal and Polar Urals series appears visually to be reasonably good, and, since this is a measure that Rob Wilson frequently points to, it is something, I judge, that needs to be discussed in more detail.
      2) The coherence to me indicates that the trees in both series were reacting to essentially the same changes in climate or perhaps the same non climate changes. That would, however, I think, be expected since the sites for these series are in close proximity.
      3) The real question then (disregarding whether the trees are reacting to temperature for the moment) is not coherence but the relative tree response as measured for the MWP versus the ModWP (and other periods of amplitude differences).
      4) I think that Rob Wilson overrates the good coherence as being an indicator that the trees in both cases are responding to changes in temperature in similar fashion. What I see is that the trees indeed are reacting similarly to a common signal, but that even if that signal were primarily from temperature changes, the response amplitudes at different periods are different.
      5) Further, I think that the importance of the selection criteria in the instrumental period becomes a critical component of this time period response amplitude difference. Obviously, if I select only those trees or tree sites where the instrumental temperatures are reflected and discard all others, I will increase my odds that I obtain an unprecedented ModWP.
      6) Since a selection process can juice/influence the ModWP tree response, I would think that we need to compare time periods outside the ModWP in the Yamal and Polar Urals time series where the selection process should have little influence.
      7) Finally, in order to do that comparison I would think one would need to calculate the confidence intervals for the periods to be compared.

    • jeff id
      Posted Oct 21, 2009 at 12:19 PM | Permalink

      Re: Hu McCulloch (#163),

      I have no idea if other series should be combined with Yamal-Urals to make a “super duper series”. Perhaps Steve or Rob could comment on that.

      My opinion on this is that the combination of the data is no problem, but the Yamal series has been shown to be hypersensitive to RCS-style standardization. Something else needs to be done: judging by the mean of the data, the hockey stick years’ tree rings are not atypically wide compared to early years. In fact most of the ring widths are narrower, but the exp curve magnifies them several times.

      Rob Wilson,

      You are well respected here, but in my humble opinion you should be very concerned about RCS in Briffa’s Yamal. I’m not sure what kinds of problems it creates for a professional dendro to go against this part of the field, but Briffa’s Yamal series’ unprecedented temps are an OBVIOUS mathematical artifact – NOTHING more! This isn’t a close call with the potential for sweeping it under the rug!!!

      This curve cannot be supported by science, logic or reason. I have difficulty with the possibility that now that everyone knows what creates its shape, those in this science might attempt to claim otherwise. Whether it has a ‘material’ effect on one paper or another is moot. The curve is bad, and that means it’s bad so it shouldn’t be used b/c it’s bad.

      If more data and different methods aren’t used the tone will do nothing but get worse.

      • Kenneth Fritsch
        Posted Oct 21, 2009 at 2:50 PM | Permalink

        Re: jeff id (#170),

        I see sensitivity to tree ring age after applying the RCS algorithm to the Yamal series, and we all see the modern era change in tree rings in that series, but my question would be: What are the results, or would be, of similar observations with the Polar Urals series?

        • jeff id
          Posted Oct 21, 2009 at 4:05 PM | Permalink

          Re: Kenneth Fritsch (#176),

          I’ve never tried and don’t know where to find the data. I bet someone here does though ;)

          Re: NW (#177),

          I think there is a fundamental error in RCS; it’s subtle though. Nothing in Dendro limits tree growth to an exponential curve. It is possible that a U shape growth curve is standard. It’s been explained to me by several sources that it’s more common to use polynomials than exponential functions, which indicates very clearly that the science of average tree growth ain’t settled.

          I proposed a from-the-hip reconstruction method on a site Layman Lurker pointed me to, called delayed oscillator (DO). I found an interesting conversation there. The proprietor is a mainstream AGW scientist of some kind. I’ve been thinking of a method which chops the earliest 75 years off the individual trees and the earliest 150 years off the total chronology and averages the rest. The point would be to reduce ring width distortions created by tree age. DO mentioned that the proposed method has already been used in publication.

          Here’s the link to DO:

          http://delayedoscillator.wordpress.com/2009/10/17/yamal-iv-growth-curves-and-sample-size/#comments

          He’s the first non-skeptic scientist I’ve run across willing to have an open discussion, so the blog has good possibilities and if it continues may get a bookmark at tAV. My latest replies have been stuck in moderation for 3 days now, though, and there are no additional approved posts afterward.

          Anyway, if old trees have beaten back the light and nutrient competition through shading – on average – the average growth would increase somewhat from a minimum in later years. You get a slight U shape – I bet Dr. Loehle already knows the answer to this.

          If this were the case, the hockey bias from RCS exponential style could always be created by age with enough samples in the chronology. If another type of curve were used, RCS can be biased another way through excessive weighting of younger trees. Of course old trees are basically guaranteed to be more common in the most recent years of data because live trees are easy to locate compared to sub-fossil. If you put all that together, RCS exponential is an absolutely horrible method of standardization IMO.
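jeff id’s chop-and-average proposal above could be sketched like this (purely illustrative Python; the input layout, the function name, and treating the 75/150-year trims as parameters are my own assumptions):

```python
from collections import defaultdict

def truncated_chronology(trees, tree_trim=75, chron_trim=150):
    """Average ring widths by calendar year after dropping the first
    `tree_trim` rings of every tree (juvenile growth) and then the first
    `chron_trim` years of the combined chronology.

    `trees` is a list of (first_year, [ring widths in consecutive years]).
    Returns a list of (year, mean ring width) pairs.
    """
    by_year = defaultdict(list)
    for first_year, widths in trees:
        for k, w in enumerate(widths):
            if k >= tree_trim:                 # skip the tree's earliest rings
                by_year[first_year + k].append(w)
    years = sorted(by_year)
    if not years:
        return []
    start = years[0] + chron_trim              # drop the chronology's earliest years
    return [(y, sum(by_year[y]) / len(by_year[y])) for y in years if y >= start]
```

Whether a blunt truncation like this removes the age-related distortion better than fitting a regional curve is exactly the open question being debated here.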

        • bender
          Posted Oct 21, 2009 at 4:13 PM | Permalink

          Re: jeff id (#182),
          You are referring to old trees that attain canopy dominance and experience release from competition? That is why the dendros are supposed to be operating in unshaded open stands at treeline where populations are sparse. That is exactly why metadata matter – to prove that these conditions were met. A topic dodged by the gurus at RC. If the stands were EVER dense enough to be shaded at any time, then jolts of growth from competition release are possible.

        • jeff id
          Posted Oct 21, 2009 at 4:23 PM | Permalink

          Re: bender (#185), I see thanks.

          Also, I notice the data for the Urals can be obtained on the other thread. If there’s time, I’ll check it out.

        • Ryan O
          Posted Oct 21, 2009 at 5:12 PM | Permalink

          Re: bender (#185), In addition, this doesn’t address the issue of what happens when past samples cannot be shown to meet those criteria.

        • bender
          Posted Oct 21, 2009 at 6:23 PM | Permalink

          Re: Ryan O (#188),
          I know.
          Re: Kenneth Fritsch (#189),
          Parametric. Don’t tell the real statisticians, but I do not think the heteroskedasticity is a deal-breaker. Sometimes it’s nice to get a rough idea of the CI’s, even if they’re not completely robust. Just for starters. It’s better than nothing, that’s for sure.

        • Kenneth Fritsch
          Posted Oct 21, 2009 at 6:53 PM | Permalink

          Re: bender (#193),

          Parametric. Whatever. My layperson status sometimes shows – make that too often. That is the second correction in two days. Whatever.

        • bender
          Posted Oct 21, 2009 at 8:31 PM | Permalink

          Re: Kenneth Fritsch (#194),
          hunh? You asked me if I used Hu’s non-parametric method. I replied: no, I used a parametric approximation. Better?

        • Posted Oct 21, 2009 at 5:49 PM | Permalink

          Re: jeff id (#183),

          thanks for taking the time. That was helpful.

          Re: bender (#186),

          you too. You say here commenting on jeff id:

          You are referring to old trees that attain canopy dominance and experience release from competition? That is why the dendros are supposed to be operating in unshaded open stands at treeline where populations are sparse…If the stands were EVER dense enough to be shaded at any time, then jolts of growth from competition release are possible.

          This makes excellent theoretical sense to me. Bender, as you seem to know lots about what dendros are supposed to do, would you know of textbooks, canonical articles and/or surveys on “best practices”–what the ideal conditions are; when and why RCS (or other standardization techniques) are supposed to work in theory and in practice; and the empirical base of all that? I want to educate myself.

        • giano
          Posted Oct 21, 2009 at 8:20 PM | Permalink

          Re: NW (#191),
          There are many related papers at Jan Esper’s website http://www.wsl.ch/staff/jan.esper/publications.html

          Also at http://www.st-andrews.ac.uk/~rjsw/TRL/

          See also at http://www.ldeo.columbia.edu/res/fac/trl/index.html

          But probably the best dendro site is http://web.utk.edu/~grissino/

          Data here http://www.ncdc.noaa.gov/paleo/treering.html

          I would say that (and probably Steve Mc agrees with me here) despite some unfortunate examples, you cannot tell that dendrochronologists have not taken the time to make data, software and publications available for everybody to use!

          Steve: Yes, there is a lot of information available and the ITRDB at NCDC’s paleo data archive is a fine resource. I’ve praised the NCDC paleo archive on many occasions. That makes Briffa’s stonewalling all the more frustrating. It is also disquieting that the most frequently used tree ring records in multiproxy reconstructions (Yamal, Taimyr, the Tornetrask update…) were not archived at ITRDB. Esper hasn’t archived anything. D’Arrigo and Jacoby have horrendously incomplete archives. The non-HS Gaspe data wasn’t archived. Why haven’t dendrochronologists themselves objected to these failures? As to methodology, yes, ARSTAN and COFECHA software is available, but Briffa’s RCS isn’t. There are statistical decisions that are needed in any implementation that are not reported in any article.

        • bender
          Posted Oct 21, 2009 at 8:36 PM | Permalink

          Re: giano (#196),

          you cannot tell that dendrochronologists have not taken the time to make data, software and publications available for everybody to use

          That isn’t the issue. The issue is disclosure of SPECIFIC data and methods used in SPECIFIC papers that supply IPCC with their SPECIFIC iconography. There are identifiable repeat offenders, despite the honorable and sharing nature of the community at large. This is PRECISELY why Steve uses the “disparaging” phrase “The Team”, for which he is much criticized. He is indicating exactly what you say: that not all of them are bad.

      • Posted Oct 21, 2009 at 3:07 PM | Permalink

        Re: jeff id (#170),

        you noted that

        the Yamal series has been shown to be hypersensitive to RCS style standardization…the exp curve magnifies them several times.

        I’ve now read enough here to understand how RCS works and what this means. But if this is correct, is there something fundamentally fishy about RCS? Presumably, RCS was based on some kind of (hopefully careful) field studies and/or experiments, earlier in the dendro literature, and there are probably known caveats that arose then. Is there something good to read about all that? I have no trouble finding descriptions of the techniques online. But, perhaps there is a good recent critical survey somewhere? I have pretty good access to journals. Thanks for your pointers in advance.

  87. Posted Oct 21, 2009 at 11:23 AM | Permalink

    RE Steve, discussing Fig. [3] in headpost,

    Using Wilson’s rolling variance test, the variance of the Yamal chronology has been as high or higher than Polar Urals since AD1100 and has increased sharply in the 20th century when other chronologies have had stable variances. I am totally unable to discern any visual metric by which one could conclude that Yamal had a “roughly stable” variance in any sense that Polar Urals did not have as well.

    If two of these rolling variances are compared for the centers of two non-overlapping centuries, the ratio has a classic F distribution when the errors are independent with equal variances, provided the means are constant each century and the errors are Gaussian (big ifs, but all we have to go on). A high value of the F-stat (placing the higher variance in the numerator) is then the standard test for equality of variance.

    The two-tailed 5% critical value with sample size 101 in both numerator and denominator (and therefore 100 DOF in both numerator and denominator) is (in Matlab) finv(.975, 100,100) = 1.48, or 1.22 for the ratio of standard deviations. This critical value incorporates both the upper and lower tails, since finv(.025,100,100) = 1/finv(.975,100,100) = .674.
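For readers who want to check the finv values outside Matlab, a minimal sketch using scipy (my tool choice, not Hu’s):

```python
# Two-tailed 5% critical values for the variance ratio under F(100, 100).
from scipy.stats import f

upper = f.ppf(0.975, 100, 100)   # Matlab's finv(.975, 100, 100), about 1.48
lower = f.ppf(0.025, 100, 100)   # the reciprocal of `upper` when d1 == d2
sd_crit = upper ** 0.5           # about 1.22 for the ratio of standard deviations
```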

    Since we have about 12 non-overlapping centuries here, there are in fact 12*11/2 = 66 different pairs of non-overlapping centuries that can be made, and these pairs are not independent, so the critical value for the max centered variance over the min centered variance is considerably higher than 1.48 (or 1.22 for the sd), and messy to compute directly. The exact critical value could easily be determined by Monte Carlo means, however.

    From Steve’s graph the max sd ratio looks about like 3.0 for Polar Urals and 2.0 for Yamal, which is probably high enough to reject homoskedasticity for both series. Rob’s version in fig. 3 of the 2/22/06 post “Wilson on Yamal Substitution” (post #541) gives similar results. (Both graphs compute the “windowed variance” right up to the endpoint, which probably entails some cropping. To get the full 100 DOF, it is necessary to lop the first and last 50 years off each graph.)

    So it does look like raw Polar Urals are objectively worse by this measure, but that both are probably sufficiently heteroskedastic to reject constant variances, even if the multiple pair issue were taken into account.

    But since the core counts are now available for both series, thanks to Steve’s efforts, it would be more informative to look at the windowed variance of sqrt(n)*RCS, where n is the core count, rather than just RCS by itself, in order to see how much of this heteroskedasticity is just mechanically due to non-constant core counts. Since the variance of the mean (or median) is inversely proportional to the sample size, the rescaled series are much more likely to appear to have constant variance.
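Hu’s suggestion, removing the purely mechanical effect of changing replication by scaling the chronology residuals with the square root of the core count before taking the rolling variance, might look like this in outline (a numpy sketch with illustrative names, not code from the thread):

```python
import numpy as np

def scaled_rolling_var(chron, counts, window=101):
    """Rolling variance of sqrt(n_t) * (chron_t - window mean), so that the
    component of heteroskedasticity due only to core counts is removed."""
    chron = np.asarray(chron, dtype=float)
    counts = np.asarray(counts, dtype=float)
    half = window // 2
    out = np.full(chron.size, np.nan)       # undefined near the endpoints
    for t in range(half, chron.size - half):
        seg = chron[t - half: t + half + 1]
        n_seg = counts[t - half: t + half + 1]
        scaled = np.sqrt(n_seg) * (seg - seg.mean())
        out[t] = scaled.var(ddof=1)
    return out
```

With constant counts this just rescales the ordinary rolling variance; where counts vary, residual heteroskedasticity is what remains after the replication effect is taken out.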

    • Kenneth Fritsch
      Posted Oct 21, 2009 at 12:10 PM | Permalink

      Re: Hu McCulloch (#164),

      How is a selection criterion that was surely chosen after the fact, based on variation of the resulting tree ring response (or even variation of that variation over time), justified statistically or physically? Why would a lower SD be preferred, and could that lower SD be an artifact of the methods used to measure tree ring responses?

    • Dave Dardinger
      Posted Oct 21, 2009 at 12:32 PM | Permalink

      Re: Hu McCulloch (#164),

      To get the full 100 DOF, it is necessary to lop the first and last 50 years off each graph.

      But isn’t it precisely the last 50 years which is at issue regarding Yamal vs Polar Urals? I’m not sure just what this windowing does, but if it can’t deal with the living trees during the past 50 years, of what value is it? I don’t think there’s a problem with expecting both series to do a fair job of tracking each other during most of the last 1000+ years.

    • RomanM
      Posted Oct 21, 2009 at 12:39 PM | Permalink

      Re: Hu McCulloch (#164),

      Since we have about 12 non-overlapping centuries here, there are in fact 12*11/2 = 66 different pairs of non-overlapping centuries that can be made, and these pairs are not independent, so the critical value for the max centered variance over the min centered variance is considerably higher than 1.48 (or 1.22 for the sd), and messy to compute directly. The exact critical value could easily be determined by Monte Carlo means, however.

      Why can’t you use Bonferroni? Although it will be conservative, all you need to do to get critical values is to divide the significance level by the number of comparisons.
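RomanM’s suggestion amounts to dividing the two-tailed significance level by the number of comparisons before looking up the F critical value; a sketch (scipy is my tool choice, not his):

```python
# Bonferroni-adjusted critical value for 66 pairwise century comparisons.
from scipy.stats import f

n_pairs = 12 * 11 // 2                      # 66 pairs of non-overlapping centuries
alpha_adj = 0.05 / n_pairs                  # adjusted familywise significance level
crit = f.ppf(1 - alpha_adj / 2, 100, 100)   # two-tailed critical variance ratio
```

By construction this sits above the unadjusted 1.48, i.e. it errs on the conservative side, as noted.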

  88. Brian B
    Posted Oct 21, 2009 at 11:37 AM | Permalink

    OK. I have a hard time following the statistical stuff as it is, but Hu in #164 has set a new standard for over-my-headness. :)
    And please no one try and clarify it for my poor layman’s brain.
    I’d like it to stand, as is, in its perfect incomprehensibility.

  89. MikeN
    Posted Oct 21, 2009 at 12:23 PM | Permalink

    Lucy, try and do a correlation, and also with June-July-August temp

  90. Kenneth Fritsch
    Posted Oct 21, 2009 at 2:45 PM | Permalink

    Heteroscedasticity is important if we are calculating confidence intervals. Unfortunately I have not seen any attempts to do that with these series.

    Why not do an F test between the series Yamal and Polar Urals over various time periods? Would one want to combine two series that we thought were different (given the difference is not due to small sample size)?

    • bender
      Posted Oct 21, 2009 at 3:37 PM | Permalink

      Re: Kenneth Fritsch (#175),
      I did it 2-3 weeks ago and sent the graphic to Steve.

      • Kenneth Fritsch
        Posted Oct 21, 2009 at 5:43 PM | Permalink

        Re: bender (#180),

        Did you consider the effect of heteroscedasticity on the CIs or do a non parametric rendition as suggested by Hu M?

  91. PeterA
    Posted Oct 21, 2009 at 3:50 PM | Permalink

    snip
    However, for whatever reason my comment was removed from this thread.

    Steve – it was OT.

  92. jeff id
    Posted Oct 21, 2009 at 4:08 PM | Permalink

    Edit of last paragraph.

    If this were the case, the hockey bias from RCS exponential style could always be created by age with enough samples in the chronology. RCS can be biased another way as well, through excessive weighting of younger trees. Of course old trees are basically guaranteed to be more common in the most recent years of data because live trees are easy to locate compared to sub-fossil. If you put all that together, RCS exponential is an absolutely horrible method of standardization IMO.

  93. Posted Oct 21, 2009 at 4:10 PM | Permalink

    RE RomanM #173,

    Why can’t you use Bonferroni? Although it will be conservative, all you need to do to get critical values is to divide the significance level by the number of comparisons.

    With 66 comparisons on paper, only 6 of which are completely independent, Bonferroni might be excessively conservative.

    Monte Carlo is very easy, and gives a 5% critical value of 1.92 for the variance ratio or 1.39 for the s.d. ratio, with m = 12 centuries. As a check, I also tried m = 2 centuries to see if I got the standard F(100,100) critical value, and hit it on the head to 2 decimal places using 999 replications: 1.48 for the variance ratio and 1.22 for the s.d. ratio.

    Here’s my Matlab code. I assume R code would be very similar.

    % YamalFtest
    % Monte Carlo F critical value for Yamal/Urals heteroskedasticity problem.
    % Have m = 12 centuries, each with n = 100 years, all with equal variance.
    % Compute sample variance for each century v1 ... v12,
    % then look at F* = vmax/vmin. Replicate reps = 999 times,
    % sort, find .95 quantile = Fstar(950).
    % Take mean known to be 0, so DF of each variance = n.
    m = 2; % use m = 2 to check that program gives standard F(100,100) value
    m = 12;
    n = 100;
    reps = 999;
    randn('state', 123456789.)
    Fstar = NaN(reps,1);
    for rep = 1:reps
    x = randn(n,m);
    xx = x.*x;
    v = sum(xx)/n; % ~ 1 x m
    vmax = max(v');
    vmin = min(v');
    Fstar(rep) = vmax/vmin;
    end
    Fstar = sort(Fstar);
    p = (1:reps)'/(reps+1);
    plot(Fstar,p)
    Fstar(950) % 5% critical value for variance ratio
    % gives 1.9235 for m = 12, 1.4836 for m = 2
    sqrt(Fstar(950)) % 5% critical value for s.d. ratio
    % gives 1.3869 for m = 12, 1.2180 for m = 2
    % end of YamalFtest

    So the s.d. ratio of about 3.0 for Urals or even 2.0 for Yamal is highly indicative of non-constant variance. (I’m just eyeballing these off Steve’s graph, since I don’t have the numbers handy.)

    But we know the variances shouldn’t be constant, because the replication isn’t constant. What would be more relevant would be the outcome using nc(t)*var(RCS(t)-mean(RCS)), equivalently var(sqrt(nc(t))*(RCS(t)-mean(RCS))), where nc(t) is the number of cores at time t, and mean(RCS) is perhaps a 101-year centered moving average of RCS(t) to coincide with the 101-year moving variance estimate.
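For readers without Matlab, here is a rough numpy translation of the YamalFtest Monte Carlo above (the generator seed and replication count are my own arbitrary choices, so results wobble by a couple of hundredths):

```python
import numpy as np

def max_min_var_crit(m=12, n=100, reps=4000, q=0.95, seed=123456789):
    """Monte Carlo critical value for max/min of m sample variances, each
    computed from n Gaussian observations with known zero mean (DF = n)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((reps, n, m))
    v = (x * x).mean(axis=1)                # reps x m sample variances
    fstar = v.max(axis=1) / v.min(axis=1)   # max/min ratio per replication
    return float(np.quantile(fstar, q))
```

With m = 12 this lands near Hu’s 1.92 for the variance ratio (about 1.39 for the s.d. ratio), and m = 2 recovers the standard F(100,100) value near 1.48.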

    • Kenneth Fritsch
      Posted Oct 21, 2009 at 5:47 PM | Permalink

      Re: Hu McCulloch (#185),

      But we know the variances shouldn’t be constant, because the replication isn’t constant. What would be more relevant would be the outcome using nc(t)*var(RCS(t)-mean(RCS)), equivalently var(sqrt(nc(t))*(RCS(t)-mean(RCS))), where nc(t) is the number of cores at time t, and mean(RCS) is perhaps a 101-year centered moving average of RCS(t) to coincide with the 101-year moving variance estimate.

      I’d be curious whether the variance was a function of tree ring age. I might even do a simple-minded approach to test this conjecture.

  94. Posted Oct 21, 2009 at 5:56 PM | Permalink

    ps–

    I already downloaded this from DO using jeff id’s link…

    Briffa and Melvin. 2008. A Closer Look at Regional Curve Standardization of Tree-Ring Records: Justification of the Need, a Warning of Some Pitfalls, and Suggested Improvements in Its Application. In M. K. Hughes, H. F. Diaz, and T. W. Swetnam, editors. Dendroclimatology: Progress and Prospects. Springer Verlag.

    …so I suppose that and its biblio are a start. But it is nice to have a contrasting voice. I’m grateful for any suggestions.

  95. giano
    Posted Oct 21, 2009 at 7:56 PM | Permalink

    Hey Steve, as I pointed out to romanm in the other post, you should also submit an abstract about this thread to http://www.worlddendro2010.fi. Then you will have the chance to show and discuss this stuff with many new and old dendros (deadline for abstracts has been extended to 8 Nov 09).

  96. Brian B
    Posted Oct 21, 2009 at 9:09 PM | Permalink

    Re #197:

    Probably should let Kenneth answer this but in case he doesn’t see it, I took him to mean he thought you (bender) were correcting him; that when he said ‘non parametric’ you were telling him he should have said ‘parametric’.

    • Kenneth Fritsch
      Posted Oct 22, 2009 at 7:44 AM | Permalink

      Re: Brian B (#199),

      Thanks, Brian B, I was in error about being in error, but now I see the light and understand what Bender was doing. I agree that a first look at CIs would be informative, but that calculating them in a totally acceptable manner may be complicated.

  97. andy
    Posted Oct 22, 2009 at 12:09 AM | Permalink

    Sorry if D’Arrigo 2006 has been discussed already elsewhere, but as this has a dendro connection to Yamal, I’ll proceed.

    The D’Arrigo reconstruction consists of tree rings from the following areas: Alps, Scandinavia, 3*Siberia, Mongolia, 2*Alaska, 5*Canada, sites roughly between 45 and 70 deg N. These reconstructions are compared to an annual NH 20-90 temp series.

    I tried to compare the D’Arrigo temp reconstruction to a temperature reconstruction consisting of GISS data from: Roro & Haparanda for Scandinavia; Salehard, Verhojansk, Anadyr for Siberia; Minusinsk for Mongolia; Nome for Alaska; Fort Simpson and Fort St James for Canada; St Bernhard and Saentis for the Alps. Scandinavia and the Alps were each combined into one series. I tried to pick rural stations covering the 20th century; a couple of missing values I replaced for the JJA comparison, the missing years from the records I left out of the comparison, and finally the years 1910 – 1985 were included.

    Results: Correlation of the summer JJA temps with the D’Arrigo reconstructions:

    NHLandObs   STD recon   RCS recon
    0.1816      0.3790      0.2982

    And correlation with the annual temp records:

    NHLandObs   STD recon   RCS recon
    0.681       0.030       0.126

    So: I get correlations between measured summer temps and the temp reconstructions of about 0.3 – 0.4, but between measured annual temps and the reconstructions of just 0.03 – 0.12, while the paper states correlations as high as 0.55 with annual temps and the reconstructions. What the heck am I doing wrong? One good note is that I don’t have a similar divergence problem popping out, but the series stop at 1985, after which the divergence really starts to jump out.

    “Study” is in Excel, and of course I didn’t check how close the temp stations were to the actual tree ring collection areas, but I also remember from the dendro papers I have read that normally they compare and reconstruct just the summer temps, not the annuals.

  98. EJ
    Posted Oct 22, 2009 at 1:43 AM | Permalink

    Belle of the ball and the wallflower. Dance cards and such. Attributing inner beauty to the data.

    Have we not all read the end to this story?

    Steve, aside from the spotless analyses, I love your prose. I know no one else who can weave a childish tale and nutted stats into a cool narrative. I am forced to read through to the end. I don’t want to miss anything.

    If I may. If data and methods are not archived, then the study is not scientific. Period.

    Can we agree on the scientific method?

    If data and methods aren’t archived, is this scientific?

    I reiterate the same theme. If data and methods aren’t archived, this is not science, no?

    If Mann, Briffa, Jones, or anyone refuses to archive for science, then they do not practice science.

    Wallflowers (late bloomers) and Belles.

    The tragedy is that apparently anyone can proffer some data set. I do so here. I have an eyeball study that says that temperatures go up and down. I will archive my data and methods soon.

    I think my findings are robustly correlated with the data.

    Thus, I have proved that climate changes, so climate change is real.

    If I don’t say that climate change is real, I will soon be looked at kinda like the slut in the corner by the Belles who all believe they are smarter than I am.

    Bottom Line, no? If the data and methods aren’t archived, it aint SCIENCE.

    Any scientists who endorses this nonsense should be forever banished.

    That means that Mann, Briffa, Jones are NOT scientists.

    They don’t practice the scientific method.

    They may as well be throwing chicken bones, these IPCC Authors.

    If one wants to practice science, they must archive their data and methods.

    Dr. McIntyre,

    Yes or no?

    According to your definition of the scientific method, do Mann et al deserve status as scientists?

  99. Jimmy Haigh
    Posted Oct 22, 2009 at 1:50 PM | Permalink

    EJ:
    October 22nd, 2009 at 1:43 am

    It’s ‘science’ EJ. But not as we know it.

  100. Kenneth Fritsch
    Posted Oct 23, 2009 at 7:37 AM | Permalink

    In order to look in more detail at the inner beauty of the Yamal series in the form of the 101 year window for the standard deviation (sd) of the RCS chronology, I compared (by correlations) 101 year windows for sd, mean and tree ring counts. I did this not only with the whole Yamal series but for the series with tree rings older than 99 and 124 years. The R code, for the preliminary part of these calculations, is given in Posts #s 309 and 310 at:

    http://www.climateaudit.org/?p=7241#comments

    The windowing R code is presented below.

    The results of these calculations indicate that the magnitude of the sd follows that of the mean and not that of the tree ring counts. Based on that explanatory evidence, I do not see where Rob Wilson’s sd windows would account for much inner beauty for the Yamal series or, likely, for any other RCS series (Polar Urals).

    Correlations between 101 year SD, mean and count windows:

    for tree rings of all ages:

    cor(zSD,zmean)
    [1] 0.7130628

    cor(zSD,zCT)
    [1] -0.08589537

    for tree rings ages >99 years:

    cor(zSD,zmean)
    [1] 0.7386138

    cor(zSD,zCT)
    [1] 0.01040471

    for tree rings ages >124 years:

    cor(zSD,zmean)
    [1] 0.783661

    cor(zSD,zCT)
    [1] -0.05208227

    #Obtain sd and mean for the same 101 year window:

    Time=time(chron.yamal$series)
    Min=min(Time)-1
    Max=max(Time)
    n=Max-Min-101
    zSD=rep(0,n) # preallocate; note rep(n,0) returns an empty vector
    for(i in 1:n)
    zSD[i]=sd(window(chron.yamal$series,start=Min+i,end=Min+i+101))
    zmean=rep(0,n)
    for(i in 1:n)
    zmean[i]=mean(window(chron.yamal$series,start=Min+i,end=Min+i+101))

    #Obtain counts of tree rings for the same 101 year window:

    Count125=ts(data=Series_Count125[,2],start=min(Series_Count125[,1]),end=max(Series_Count125[,1]))
    Time=time(Count125)
    Min=min(Time)-1
    Max=max(Time)
    n=Max-Min-101
    zCT=rep(0,n) # preallocate
    for(i in 1:n)
    zCT[i]=mean(window(Count125,start=Min+i,end=Min+i+101))
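
    An illustrative aside, not part of Kenneth’s code or claims: one simple mechanism that would produce exactly this sd-follows-mean pattern is proportional noise. If a chronology behaves like a slowly varying level multiplied by multiplicative noise, the windowed standard deviation necessarily tracks the windowed mean. A sketch in Python, on purely synthetic data:

```python
import math
import random

random.seed(0)

# Toy series: a slowly varying "signal" level multiplied by
# proportional (lognormal) noise. Under multiplicative noise the
# spread within a window scales with the window mean.
n = 2000
signal = [1.0 + 0.5 * math.sin(2 * math.pi * t / 500) for t in range(n)]
series = [s * random.lognormvariate(0, 0.3) for s in signal]

def window_stats(xs, width=101):
    """Sliding-window mean and sd, one-year steps."""
    means, sds = [], []
    for i in range(len(xs) - width):
        w = xs[i:i + width]
        m = sum(w) / width
        means.append(m)
        sds.append(math.sqrt(sum((x - m) ** 2 for x in w) / (width - 1)))
    return means, sds

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

means, sds = window_stats(series)
r = corr(sds, means)  # strongly positive under proportional noise
```

    The correlation of windowed sd with windowed mean comes out strongly positive here even though tree counts play no role in the construction, consistent with the pattern reported above.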

    • bender
      Posted Oct 23, 2009 at 8:02 AM | Permalink

      Re: Kenneth Fritsch (#207),
      That’s what I said just from eyeballing. Good eye, no?

      • Kenneth Fritsch
        Posted Oct 23, 2009 at 8:19 AM | Permalink

        Re: bender (#208),

        Good eye is good. Calibrated eye is better. Calculation if done correctly is best. Calculation if done incorrectly is worst. I am waiting for the real statisticians to pass judgment.

        Actually I printed out (large) graphs for eyeballing myself before doing the calculations – but I have old eyeballs.

        • bender
          Posted Oct 23, 2009 at 8:28 AM | Permalink

          Re: Kenneth Fritsch (#209),
          Couldn’t agree more. Eyeballing is no substitute for computation and documentation. (OTOH good eyes are what lead toward productive analyses and away from unproductive analyses.)

    • Steve McIntyre
      Posted Oct 23, 2009 at 8:25 AM | Permalink

      Re: Kenneth Fritsch (#207),

      Kenneth, as so often, nicely spotted and nicely documented. I’ve added your observation into the head post.

  101. kim
    Posted Oct 23, 2009 at 9:15 AM | Permalink

    Rob, I collect ironies, and you’ve just presented me a gem.

    snip – editorializing about policy

  102. Christopher
    Posted Oct 23, 2009 at 10:40 AM | Permalink

    I think he (Rob Wilson in his email to SM) means stuff like this:

    Parameterization of a process-based tree-growth model: Comparison of optimization, MCMC and Particle Filtering algorithms

    http://portal.acm.org/citation.cfm?id=1379718

    What I find odd is that this builds on the current edifice…

  103. Kenneth Fritsch
    Posted Oct 23, 2009 at 1:57 PM | Permalink

    Since I did a correlation calculation for the SD versus mean relationship in my post above, I thought it prudent to do a regression and look at the autocorrelation of the residuals. I found a very high level of AR1 correlation, which from the pacf calculation appears to be confined to AR1.

    The AR1 of 0.995 was submitted to the D. Nychka adjustment to see what it would do to the 95% CIs of the regression trend slope. The results below show that the interval includes 0, which means we cannot state that it is statistically different from zero, i.e. no relationship. Not shown here, but the regression of SD versus Counts had nearly the same AR1 as the one for SD versus mean.

    I am not sure how one would proceed here to show the effects of mean or counts on the SD. What should have clued me in was that the windowing was going to introduce a large amount of autocorrelation. Instead of using a moving window with increments of 1 year, I need to use a window with no or much less overlap – which I will now do after I re-read exactly what Steve M and Rob Wilson did in moving the window.

    lmSDmean=lm(zSD~zmean)
    summary(lmSDmean)

    Call:
    lm(formula = zSD ~ zmean)

    Residuals:
          Min        1Q    Median        3Q       Max
    -0.096456 -0.037043 -0.001586  0.033084  0.121526

    Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
    (Intercept) 0.051914   0.007183   7.228 6.85e-13 ***
    zmean       0.339341   0.007288  46.563  < 2e-16 ***
    ---
    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.04311 on 2096 degrees of freedom
    Multiple R-squared: 0.5085, Adjusted R-squared: 0.5082
    F-statistic: 2168 on 1 and 2096 DF, p-value: < 2.2e-16

    acf(residuals(lmSDmean))$acf[2]
    [1] 0.9950212
    pacf(residuals(lmSDmean))$acf[2]
    [1] -0.2160758
    Regress SD versus mean: n = 2096, AR1 = 0.995, factor = 399, adjustment = 25.37

    Regression trend slope +/- adjusted standard error = 0.34 +/- 0.185. Adjusted 95% CI = -0.02 to 0.70
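
    For reference, one common form of this AR1 adjustment – the effective-sample-size correction often attributed to Nychka, and similar in spirit to that used in Santer et al (2008) – can be sketched in Python. Whether this is exactly the calculation used above is an assumption on my part, though it reproduces the factor of 399 and the adjustment of about 25.37:

```python
import math

def ar1_adjusted_ci(slope, se, n, r1, z=1.96):
    """AR1 (effective sample size) adjustment of a regression slope CI.

    n_eff = n * (1 - r1) / (1 + r1); the slope standard error is
    then inflated by sqrt((n - 2) / (n_eff - 2)).
    """
    factor = (1 + r1) / (1 - r1)      # ratio n / n_eff
    n_eff = n / factor
    adj = math.sqrt((n - 2) / (n_eff - 2))
    se_adj = se * adj
    return factor, adj, se_adj, (slope - z * se_adj, slope + z * se_adj)

# Numbers from the regression above: slope 0.339341, SE 0.007288,
# n = 2096, AR1 = 0.995.
factor, adj, se_adj, ci = ar1_adjusted_ci(0.339341, 0.007288, 2096, 0.995)
# The interval straddles zero, so the slope cannot be distinguished
# from zero once the autocorrelation is accounted for.
```

    Note that with r1 this close to 1, the effective sample size collapses from 2096 to about 5, which is why the confidence interval widens so dramatically.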

    • bender
      Posted Oct 23, 2009 at 3:58 PM | Permalink

      Re: Kenneth Fritsch (#219),
      I was going to mention this possibility. Always a problem with sliding windows. Split it up into the smallest number of non-overlapping windows that gives you a large number of windows and a large sample in each window. For example, take the square root of the series length and use that. No one can accuse you of cherry picking window frames with such an arbitrary a priori null.
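
      For illustration only (not bender’s code), the square-root rule can be sketched in Python on a synthetic series:

```python
import math

def nonoverlapping_windows(series, n_windows=None):
    """Split a series into non-overlapping windows.

    Default n_windows = round(sqrt(len(series))), so the number of
    windows and the sample in each window are both roughly sqrt(n).
    """
    n = len(series)
    if n_windows is None:
        n_windows = round(math.sqrt(n))
    size = n // n_windows  # any remainder at the end is dropped
    return [series[i * size:(i + 1) * size] for i in range(n_windows)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# A ~2000 year series splits into 45 windows of 44 years each,
# close to the ~45/45 split discussed in the thread.
series = [float(i % 7) for i in range(2000)]
windows = nonoverlapping_windows(series)
window_means = [mean(w) for w in windows]
window_sds = [sd(w) for w in windows]
```

      Because the window count is fixed a priori by the series length, there is no free parameter left to tune toward a desired result.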

      • Kenneth Fritsch
        Posted Oct 23, 2009 at 7:05 PM | Permalink

        Re: bender (#222),

        I just read your post about sizing the sample using the square root of the series length – which would be approximately 45 years. I used 50 years.

        Someone with good eyeballs and an ability to integrate information might be able to cherry pick a window length – but those conditions would eliminate me as a peeker/picker.

        • bender
          Posted Oct 23, 2009 at 10:51 PM | Permalink

          Re: Kenneth Fritsch (#224),
          I love the serendipity of independent crossposts arriving at a common answer.
          Bears all the way!

        • Kenneth Fritsch
          Posted Oct 24, 2009 at 10:22 AM | Permalink

          Re: bender (#225),

          I am going to attempt to keep my view of the Bears on topic here by making an analogy:

          Without Yamal (Cutler), an HS reconstruction (da Bears) can really suck.

  104. Kenneth Fritsch
    Posted Oct 23, 2009 at 6:54 PM | Permalink

    In order to avoid the autocorrelation problem that I described in my previous post in this thread, I did a window treatment for SD, mean and Counts where the window was of length 50 years and moved 50 years for each iteration, resulting in no overlap. I looked at the complete Yamal RCS series and at the same series restricted to tree rings older than 149 and 124 years. The R code that was unique to these new calculations is given below. I calculated correlation, regression and AR1 statistics for these series. For the regressions I looked at SD versus mean, SD versus Counts, SD versus mean and Counts, and finally SD versus mean, Counts and a mean/Counts interaction.

    The important results are tabulated below and show once again that the mean has a much greater effect than the counts on the SD, even when the counts are dramatically reduced by eliminating the younger tree rings. The overlapped windows were obviously adding to the correlation value for the complete Yamal series with all the tree rings. As more of the younger tree rings were eliminated in the 125 and 150 year series, it was possible to account for more of the variation in the SD by either the means alone or the means combined with Counts.

    It is interesting that the younger tree rings evidently retain more of their unexplained “inner beauty” than the older tree rings. Just the opposite of what I would have predicted for humans.

    Yamal tree rings older than 149 years:

    Correlation SD to mean = 0.71
    Correlation SD to Count = -0.20

    Regression SD to mean: trend slope = 0.36, t = 6.28, SE = 0.057, adj. R^2 = 0.50, AR1 = -0.005
    Regression SD to mean and Counts: trend slope mean = 0.37, trend slope Counts = -0.013, SE mean slope = 0.054, SE Counts slope = 0.006, t mean = 6.74, t Counts = -2.28, adj. R^2 = 0.55, AR1 = -0.084
    Regressions of SD on Counts, and of SD on mean, Counts and the mean/Counts interaction, did not show statistical significance for the variables added beyond the mean.

    Yamal tree rings older than 124 years:

    Correlation SD to mean = 0.51
    Correlation SD to Count = -0.23

    Regression SD to mean: trend slope = 0.31, t = 3.66, SE = 0.086, adj. R^2 = 0.24, AR1 = -0.095
    Regression SD to mean and Counts: trend slope mean = 0.35, trend slope Counts = -0.011, SE mean slope = 0.082, SE Counts slope = 0.004, t mean = 4.27, t Counts = -2.50, adj. R^2 = 0.33, AR1 = -0.144
    Regressions of SD on Counts, and of SD on mean, Counts and the mean/Counts interaction, did not show statistical significance for the variables added beyond the mean.

    Yamal, all tree rings:

    Correlation SD to mean = 0.47
    Correlation SD to Counts = -0.12

    Regression SD to mean: trend slope = 0.195, t = 3.39, SE = 0.057, adj. R^2 = 0.20, AR1 = -0.130
    Regressions of SD on Counts, of SD on mean and Counts, and of SD on mean, Counts and the mean/Counts interaction did not show statistical significance for the variables added beyond the mean.

    Time=time(chron.yamal$series)
    Min=min(Time)
    Max=max(Time)
    n=trunc((Max-Min)/50)
    zSD=rep(0,n) # preallocate; rep(n,0) returns an empty vector
    for(i in 1:n)
    zSD[i]=sd(window(chron.yamal$series,start=Min+i*50-50,end=Min+i*50-1)) # 50 year block, no overlap
    zmean=rep(0,n)
    for(i in 1:n)
    zmean[i]=mean(window(chron.yamal$series,start=Min+i*50-50,end=Min+i*50-1))

    Count125=ts(data=Series_Count125[,2],start=min(Series_Count125[,1]),end=max(Series_Count125[,1]))
    Time=time(Count125)
    Min=min(Time)
    Max=max(Time)
    n=trunc((Max-Min)/50)
    zCT=rep(0,n) # preallocate
    for(i in 1:n)
    zCT[i]=mean(window(Count125,start=Min+i*50-50,end=Min+i*50-1))

    lmSDmeanCT=lm(zSD~zmean+zCT)
    summary(lmSDmeanCT)
    acf(residuals(lmSDmeanCT))$acf[2]

    lmSDmean=lm(zSD~zmean)
    summary(lmSDmean)
    acf(residuals(lmSDmean))$acf[2]

    lmSDCT=lm(zSD~zCT)
    summary(lmSDCT)
    acf(residuals(lmSDCT))$acf[2]

    lmSDmeanCTinter=lm(zSD~zmean+zCT+zmean:zCT)
    summary(lmSDmeanCTinter)
    acf(residuals(lmSDmeanCTinter))$acf[2]

  105. Posted Oct 29, 2009 at 12:05 PM | Permalink

    Hello CA,

    There is developing interest here in the UK in this Yamal story – behind the scenes (documentary TV), as it has not featured in the press (to my knowledge and to my surprise). If there is a succinct summary which presents both sides of the argument, please let me know (my email can be found on the website of ethos-uk.com.)

    Further – I have received for book review in an ecological journal the volume ‘Natural Climate Variability and Global Warming’ – a compilation of papers edited by Richard Battarbee of University College London’s Environmental Change Research Centre. Among the papers is ‘Climate of the Past millennium: combining proxy data and model simulations’ by Hugues Goosse, Michael E Mann and Hans Renssen. This paper has no reference to any critique of the hockey-stick type modelling. I am aware that at least one McIntyre and McKitrick paper was published in GRL in 2005 – are there any more that I as a reviewer need to be aware of?

    I would also find it useful to have comments and appropriate references to evaluate the statements by Battarbee in the opening overview – in reference to Mann98, that the above paper provides a strong defence of the hockey stick (this is not explained to the reader and no references are given to any dispute) – and in relation to the claim that:

    ‘there are now several additional independent analyses covering the same period – all are in essential agreement in showing anomalously high temperatures over the past few decades’

    One of the most grievous errors in science is not to cite your critics, but as a reviewer, errors of omission are not easy to spot. It would help considerably if you could let me know of any other work, in addition to the GRL 2005 paper, that has not been cited and is relevant to the issue (in journals where appropriate peers regularly publish and review papers – I have some respect for the work of Energy & Environment in publishing papers that fall foul of peer-review prejudice elsewhere, but it has less value for my purposes than, for example, GRL).

    • bender
      Posted Oct 29, 2009 at 12:09 PM | Permalink

      Re: Peter Taylor (#227),
      Steve, I flag this one for your attention.
      Peter Taylor, please, read the blog. Thanks for your comment.

  106. curious
    Posted Oct 29, 2009 at 9:01 PM | Permalink

    Peter Taylor – maybe OT to your specific request but this list might be of interest – some GRL papers are listed along with EandE (and many others):

    http://www.populartechnology.net/2009/10/peer-reviewed-papers-supporting.html

    (h/t Smokey at Watts Up)

    Re: UK press – the Telegraph covered it in James Delingpole’s blog although there were some errors – see Steve’s comment 69 on this thread:

    http://www.climateaudit.org/?p=7244

    Telegraph story here:

    http://blogs.telegraph.co.uk/news/jamesdelingpole/100011716/how-the-global-warming-industry-is-based-on-one-massive-lie/

    I think The Spectator covered it too but can’t immediately find it – James Delingpole is also a contributor there.

  107. Posted Oct 19, 2009 at 4:53 PM | Permalink

    Re: Frank Lansner (#26), source?
    Re: Ryan O (#28), last time I saw Tom was over at WUWT trying to convert a new lot there. Or so it seemed. Tom:

    another set of cores that also gave good correlation with the instrument record

    Well, I dispute that the Twelvetrees even give “good correlation” for the twentieth century if you look at them individually and see the huge variety of patterns – even if their collective record does correlate better than the individual records. See my recent post at Jeff Id on this.

  108. steven mosher
    Posted Oct 19, 2009 at 8:37 PM | Permalink

    Re: Lucy Skywalker (#29), Tom is blog bouncing. Making claims and then bouncing to another place.

  109. Posted Oct 20, 2009 at 2:12 AM | Permalink

    Re: steven mosher (#47), Tom blog bouncing, yes. At WUWT on the 18th, Tom P (02:01:11) :

    stevemcintyre (16:02:45) :

    “I didn’t test how to do a proper benchmark – at this stage, one merely knows that Mann’s benchmark is fudged.”

    That’s what I call a statistical analysis! You promised back in January in your original posting you referenced to look further into “questionable Mannian benchmarks”. Are you going to do this?

  110. Steve McIntyre
    Posted Oct 23, 2009 at 11:03 AM | Permalink

    Re: Misreading the Point « Cruel Mistress (#216),

    This link goes to Ben Hale’s blog where he asks the question:

    Not sure where Steve McIntyre gets this from:

    Yamal Already a “Standard”? Another possible argument was raised by Ben Hale, supposedly drawing on realclimate: that Yamal was already “standard” prior to Briffa. This is totally untrue – Polar Urals was the type site for this region prior to Briffa 2000.

    I didn’t suggest anything of the sort.

    In his post, Hale clearly characterized realclimate as having said that Yamal was standard prior to Briffa as follows:

    Their [realclimate] reasons, so far as I can tell, are that the Yamal Climate Record was already standard by the time Briffa got to it…

  111. bender
    Posted Oct 23, 2009 at 3:54 PM | Permalink

    Re: Steve McIntyre (#218),
    Ben thinks he can maintain ‘neutrality’ while ignoring the details of the case. But there are so many details to keep track of, a guy is apt to forget his own words ;)

  112. bender
    Posted Oct 30, 2009 at 8:17 AM | Permalink

    Re: Kevin (#159),

    effort is directed at maximizing the hockey stick, or at minimizing historical variability

    Kevin, you saw Rob Wilson’s comment that Polar Urals was excessively variable during the MWP, and was screened out on that basis? Well, it was also excessively warm. You have seen the proofs that ring width mean and variance are correlated. Therefore both are true: a screening bias against one automatically biases against the other.

    Recall Briffa’s assertion that AD1032 – in the middle of the MWP – was the coldest year on record? Mean and variance go together.

  113. Michael Smith
    Posted Oct 31, 2009 at 10:48 AM | Permalink

    Re: bender (#338),

    Bender’s got a killer point in comment 338 — and that brings up another issue.

    Let’s recall Jim Bouldin’s explanation (given right here at CA) of how researchers ensure that the older trees they choose — those that pre-date the instrument record — are “temperature responders” like the modern trees they choose. Here is what Jim wrote:

    basically it would be follow this general idea:

    The ring width data from all the series, past and present, are lined up by cambial age chronology (cambial age = the relative age of each ring, from the pith or tree center). The average of the ring widths for each ring, over all the trees is then computed. This creates a “standard curve” that reflects primarily the size-dependent part of the growth response of the trees. This mean value series is then subtracted, ring by ring, from the actual ring widths of each tree, thus removing the diameter-related ring width component from each tree, since the goal is to isolate the environmental signal. The residuals from this detrending are then examined to see how “complacent” they are, meaning how much they vary from year to year. Those that vary the most strongly are the most sensitive to the environment, and whether they were responding to the same environmental factor is assessed by looking at the spatial similarity of the variation pattern across trees, across the area of interest.

    (From comment 160 here: http://www.climateaudit.org/?p=7278 Emphasis added)
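
    As an illustration only – my sketch of the procedure Bouldin describes, not code from any dendro package, and omitting his final spatial-similarity assessment across trees:

```python
from collections import defaultdict

def cambial_age_detrend(trees):
    """Cambial-age standardization as described by Bouldin.

    trees: list of ring-width series, each ordered from the pith
    outward, so list index = cambial age of the ring.
    Returns per-tree residuals and a simple sensitivity score
    (sd of residuals); low scores mark the most "complacent" trees.
    """
    # 1. Standard curve: mean ring width at each cambial age,
    #    averaged across all trees (size-dependent growth).
    by_age = defaultdict(list)
    for rings in trees:
        for age, width in enumerate(rings):
            by_age[age].append(width)
    curve = {age: sum(w) / len(w) for age, w in by_age.items()}

    # 2. Subtract the curve from each tree, ring by ring, to
    #    isolate the environmental signal.
    residuals = [[w - curve[age] for age, w in enumerate(rings)]
                 for rings in trees]

    # 3. Sensitivity: how strongly each tree's residuals vary.
    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return residuals, [sd(r) for r in residuals]

# Tiny example: two trees, two rings each.
residuals, sensitivity = cambial_age_detrend([[2.0, 1.0], [4.0, 5.0]])
```

    In a real application one would then check whether the high-sensitivity trees co-vary spatially before accepting them as responders to a common environmental factor.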

    So, on the one hand, we have Bouldin citing strong variability as a basis for including older trees and on the other we have Rob Wilson citing it as a basis for excluding them. This is not necessarily a contradiction inasmuch as Bouldin states that “the spatial similarity of the variation pattern across trees” must be assessed to determine whether or not the trees are responding to temperature or something else.

    So did Rob Wilson exclude Polar Urals purely on the basis of their variability — or did he make some sort of assessment like Bouldin describes and exclude them because they appear to be responding to something besides temperature?

    Would love to hear Rob’s comments on this.

4 Trackbacks

  1. [...] of hockey stick graph data to a larger data set in the same area I noticed this post up at Steve McIntyre’s Climate [...]

  2. By Misreading the Point « Cruel Mistress on Oct 23, 2009 at 9:19 AM

    [...] « Dangerous People Misreading the Point October 23, 2009 Not sure where Steve McIntyre gets this from: Yamal Already a “Standard”? Another possible argument was raised by [...]

  3. [...] it is quickly challenged. There are many good examples of his work to see on his blog and a recent analysis of his is representative. As an experiment in the sociology of open science and a window into what [...]

  4. By RCS – One Size Fits All « Climate Audit on May 8, 2012 at 3:27 PM

    [...] are estimated.  Although this has been pointed out by various blog commentators (see, e.g. Jeff Id, comment 67 from Re-Visiting the “Yamal Substitution” and his posts at the Air Vent), few attempts have been made to examine the resulting effects in a [...]
