Wouldn't it be nice…

if Team methodological descriptions were correct?

Mann says:

All ITRDB tree-ring proxy series were required to pass a series of minimum standards to be included in the network: (i) series must cover at least the interval 1750 to 1970, (ii) correlation between individual cores for a given site must be 0.50 for this period, (iii) there must be at least eight samples during the screened period 1800–1960 and for every year used.

I checked two series familiar to CA readers – Gaspe (cana036) and Sheep Mountain (ca534). Aside from any other issues with these series, the early portion of the Gaspe chronology has only a couple of samples, as does the post-1983 portion of the Sheep Mountain chronology. I checked these series in both the “original” and “infilled” versions. The “original” version transcribed the ITRDB crn series, including portions with fewer than eight samples. The “infilled” version was identical to the “original” version wherever the original had values. Thus, contrary to the statement in the SI, the “infilled” versions used data with as few as one sample.
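
For anyone who wants to replicate this sort of check, here is a minimal sketch of the stated screening rules (a Python sketch under stated assumptions: `years`, `depth` and `used_years` are hypothetical inputs already parsed from an ITRDB crn file; rule (ii) is omitted because the inter-core correlation needs the raw measurement data, not the site chronology):

```python
def passes_mann_screen(years, depth, used_years):
    """Apply Mann's stated SI criteria to a single chronology.

    years      : list of calendar years with chronology values
    depth      : dict mapping year -> sample depth (number of cores)
    used_years : years of this series actually used in the network
    """
    # (i) series must cover at least the interval 1750-1970
    covers = min(years) <= 1750 and max(years) >= 1970

    # (iii) at least eight samples throughout 1800-1960,
    #       and for every year actually used
    deep_1800_1960 = all(depth.get(y, 0) >= 8 for y in range(1800, 1961))
    deep_used = all(depth.get(y, 0) >= 8 for y in used_years)

    # (ii), the inter-core r >= 0.50 test, is omitted here: it requires
    # the raw ring-width measurements rather than the chronology.
    return covers and deep_1800_1960 and deep_used
```

On such a test, the early Gaspe years and the post-1983 Sheep Mountain years, with sample depths of one or two, fail criterion (iii) outright.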

Does it “matter”? Well, why say it if it’s untrue?


27 Comments

  1. bender
    Posted Sep 29, 2008 at 10:02 AM | Permalink | Reply

    Of course it matters. Misrepresenting methods for the purpose of overstating confidence in the data is bad science.

  2. Sam Urbinto
    Posted Sep 29, 2008 at 10:53 AM | Permalink | Reply

    It doesn’t matter so long as nobody much is paying attention, the methods are made confusing enough (buried in minutiae and/or obfuscated by verbiage), and they support the desired conclusion. Hey, as long as we get the correct answers, who cares about the methods?

    And of course, only the select few know the correct answers. Obviously, it helps if you know what they are beforehand, right?

  3. DaveM
    Posted Sep 29, 2008 at 11:09 AM | Permalink | Reply

    Ahhh. But it was a fine vintage sample.

    • bender
      Posted Sep 29, 2008 at 11:13 AM | Permalink | Reply

      Re: DaveM (#3),
      Unsupported assertion = antiscience.

      • ianl
        Posted Sep 29, 2008 at 5:32 PM | Permalink | Reply

        Re: bender (#4),

        The question is asked:

        “Does it “matter”? Well, why say it if it’s untrue?”

        The Mann position may come about through this combination:

        1) listing “excluded” data shows how rigorous the culling process is: only the most reliable for us;

        2) that such data were not actually excluded at all simply emphasises the constant theme of hard resistance to auditing. Mann et al. may never have expected auditing.

        That the auditing has actually happened truly entertains me.

        • bender
          Posted Sep 29, 2008 at 5:42 PM | Permalink

          Re: ianl (#10),
          Well, you should be exceptionally entertained by their certain knowledge that this audit must have been coming, and STILL they could not meet a minimum standard of quality. Now THAT’s bad.

        • PhilH
          Posted Sep 29, 2008 at 6:48 PM | Permalink

          Re: ianl (#10), Anyone who had read this site for more than a week knew dadgum well that Mann couldn’t do another proxy study without being audited by Steve. Mann would have had to be living on the moon for the last five years not to expect it. That’s why Bender is right about their failure to meet a minimum standard of quality. It’s crazy.

  4. Luis Dias
    Posted Sep 29, 2008 at 11:41 AM | Permalink | Reply

    Is Steve McIntyre playing Fake Steve Jobs or something?

  5. Posted Sep 29, 2008 at 12:44 PM | Permalink | Reply

    It doesn’t look to me like these passed the correlation sort in SD1.

    From your calculated-r vs. reported-r plots and my own back calculation, though, I don’t yet trust that his r values, or the final series used, are the actual ones.

    From my calculation, I rather suspect we’ll eventually find that the final graph generated by his algorithm differs from the presented graph.

  6. Urederra
    Posted Sep 29, 2008 at 2:12 PM | Permalink | Reply

    Is an r^2 of 0.5 a bit low?
    Is there any multivariate analysis that shows how well tree-ring width correlates with temperature, CO2 concentration, humidity, or something similar? My impression is that r^2 can be improved by adding more variables to the equation. In the case of tree-ring growth, biology says that growth depends on more than one variable. I don’t know about other proxies, but still, if r^2 is around 0.5, chances are that the variation depends on more than one variable.

    Sorry if I am wrong or too off topic.
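
    On the point about adding variables: in ordinary least squares, R^2 can only stay flat or rise as regressors are added, which is easy to see on synthetic data (a toy sketch; the variables are made up, not actual tree-ring covariates):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    temp = rng.normal(size=n)
    co2 = rng.normal(size=n)
    humidity = rng.normal(size=n)
    # synthetic "ring width": responds to all three drivers, plus noise
    ring = 0.5 * temp + 0.3 * co2 + 0.2 * humidity + rng.normal(size=n)

    def r_squared(y, predictors):
        # OLS with intercept; R^2 = 1 - SSE/SST
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    print(r_squared(ring, [temp]))                 # temperature alone
    print(r_squared(ring, [temp, co2]))            # higher
    print(r_squared(ring, [temp, co2, humidity]))  # higher still
    ```

    So a higher R^2 from a multivariate fit is not, by itself, evidence that the extra variables matter; the increase is guaranteed arithmetic.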

    • Kenneth Fritsch
      Posted Sep 29, 2008 at 2:55 PM | Permalink | Reply

      Re: Urederra (#7),

      I don’t know about other proxies, but still, if r2 is around 0.5, chances are that the variation depends on more than one variable.

      What I read is that the correlation criterion (r) is 0.5, not r^2. A correlation of 0.5 would yield an r^2 of 0.25, i.e. explaining as little as 25% of the variance.

      • Posted Sep 29, 2008 at 3:54 PM | Permalink | Reply

        Re: Kenneth Fritsch (#8),

        Oh, if only the r^2 were even as high as that.

      • Urederra
        Posted Sep 30, 2008 at 3:53 AM | Permalink | Reply

        Re: Kenneth Fritsch (#8),

        Thanks for the correction. That is even worse than I thought.

        • Kenneth Fritsch
          Posted Sep 30, 2008 at 10:28 AM | Permalink

          Re: Urederra (#18),

          Urederra, please look at the Steve M post here and reread what the introduction to this thread was actually quoting.

          Re: Steve McIntyre (#16),

          The criterion is a correlation (r) of at least 0.5 between TR core samples; it says nothing about the overall correlation to temperature.

          While I find using such a low correlation with temperature strange for a reconstruction, I find it even stranger that the reviewers of these papers never seem to object much to these methods.

        • bender
          Posted Sep 30, 2008 at 12:42 PM | Permalink

          Re: Kenneth Fritsch (#19),
          If your reconstruction includes a robust estimate of uncertainty, then you do not need to arbitrarily screen out proxies for which the calibration correlation is “low”. The “lowness” gets factored into the size of the confidence envelope: the lower the r, the wider the envelope.
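
          A toy illustration of that trade-off (a sketch assuming a simple linear calibration of temperature against one synthetic proxy; nothing here is Mann's actual method):

          ```python
          import numpy as np

          rng = np.random.default_rng(1)
          n = 150  # length of a notional calibration period

          for r in (0.9, 0.5, 0.13):
              temp = rng.normal(size=n)
              # construct a proxy whose true correlation with temp is r
              proxy = r * temp + np.sqrt(1 - r**2) * rng.normal(size=n)

              # calibrate: regress temp on the proxy
              slope, intercept = np.polyfit(proxy, temp, 1)
              resid = temp - (slope * proxy + intercept)
              se = resid.std(ddof=2)  # residual standard error

              # half-width of an approximate 95% envelope per reconstructed year
              print(f"r = {r:.2f}: +/- {1.96 * se:.2f} (temp in sd units)")
          ```

          At r = 0.13 the envelope is nearly as wide as the spread of temperature itself, i.e. the reconstruction adds almost no information beyond the climatological mean.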

        • Kenneth Fritsch
          Posted Sep 30, 2008 at 4:16 PM | Permalink

          Re: bender (#20),

          If your reconstruction includes a robust estimate of uncertainty, then you do not need to arbitrarily screen out proxies for which the calibration correlation is “low”. The “lowness” gets factored into the size of the confidence envelope: the lower the r, the wider the envelope.

          If you have a regression correlation that, when squared, indicates you are explaining maybe a percent or two of the variance, I would think that, regardless of the confidence envelope calculated, one might be very concerned about using that regression in a reconstruction going back in time, where all those factors unexplained (by r^2) could come into play.

        • bender
          Posted Sep 30, 2008 at 4:21 PM | Permalink

          Re: Kenneth Fritsch (#21),
          This unexplained variation, even if it is large, is not a concern if it is independent of the thing being proxied/reconstructed. That is to say, the size of the confidence envelope will be suitably inflated by the huge uncertainty on the calibration parameter. Further adjustment beyond this is not required.

        • Kenneth Fritsch
          Posted Sep 30, 2008 at 5:47 PM | Permalink

          Re: bender (#22),

          This unexplained variation, even if it is large, is not a concern if it is independent of the thing being proxied/reconstructed. That is to say, the size of the confidence envelope will be suitably inflated by the huge uncertainty on the calibration parameter. Further adjustment beyond this is not required.

          The unexplained variation is, of course, estimated in the calibration/validation process. How would a large unexplained variation, and the uncertainty associated with it, be transposed back in time for the reconstruction, where the factors behind the unexplained variation could conceivably obliterate the explained variance of the calibration/validation period?

          And, of course, if we do not turn a blind eye to the divergence/out-of-sample results following the calibration/validation period, we have some corroborating evidence to give cause for concern.

          I suppose that as long as no one bothers to discuss these results all in the same paragraph, we can avoid the concern.

        • bender
          Posted Sep 30, 2008 at 6:08 PM | Permalink

          Re: Kenneth Fritsch (#25),

          How would a large unexplained variation, and the uncertainty associated with it, be transposed back in time for the reconstruction, where the factors behind the unexplained variation could conceivably obliterate the explained variance of the calibration/validation period?

          Huh? Ken, the larger the unexplained variation, the smaller the magnitude of the calibration coefficient, the larger its standard error, and the lower its significance.

          But like I said, this IS assuming that the unexplained variation does NOT “obscure” the proxy response signal. Don’t forget that the signal can be very noisy yet still be estimated correctly.

          This also assumes, of course, that the series in question IS indeed a proxy, i.e. the calibration is not spurious. Maybe that’s your issue? With nothing but a low r2 you doubt the fundamental validity of the proxy?

        • Kenneth Fritsch
          Posted Sep 30, 2008 at 7:01 PM | Permalink

          Re: bender (#26),

          With nothing but a low r2 you doubt the fundamental validity of the proxy?

          After consulting Dr. Ben, I was led to believe that calculating r^2 would be ridiculous and that an RE statistic with a value of 0 or higher would be more appropriate — so forget everything I said, as I must have been temporarily blinded by reason.
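
          For readers unfamiliar with it, RE (reduction of error) compares the reconstruction's verification-period errors against a null prediction of the calibration-period mean; a minimal sketch (not Mann's code) is:

          ```python
          import numpy as np

          def reduction_of_error(obs, recon, calib_mean):
              """RE = 1 - SSE(reconstruction) / SSE(calibration-mean null)."""
              obs, recon = np.asarray(obs), np.asarray(recon)
              sse = np.sum((obs - recon) ** 2)
              sse_null = np.sum((obs - calib_mean) ** 2)
              return 1 - sse / sse_null

          # RE > 0 only means the reconstruction beats a constant guess;
          # it can be positive even when the verification r^2 is near zero.
          ```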

  7. Kohl Piersen
    Posted Sep 29, 2008 at 5:54 PM | Permalink | Reply

    As I said elsewhere – how does this bloke get away with it? When will the IPCC wake up to him?
    Ah! But he writes the IPCC book, doesn’t he?
    Sheeeeeet!

    • ianl
      Posted Sep 29, 2008 at 9:26 PM | Permalink | Reply

      Re: Kohl Piersen (#12),

      I agree – I am most amused.

      At the bottom, I suspect unbridled vanity (the true Achilles heel of Homo sapiens).

  8. Steve McIntyre
    Posted Sep 29, 2008 at 7:13 PM | Permalink | Reply

    I’ve been looking at lots of series now. Mann’s claim is untrue for series after series.

  9. Steve McIntyre
    Posted Sep 29, 2008 at 9:31 PM | Permalink | Reply

    #8. In terms of correlation to temperature, the average correlation of the North American tree rings is a bit under 0; the correlations appear to be drawn from a random distribution. Mann selects correlations > 0.13 as “significant”. It’s really hard for people not used to this to understand how strange this is.
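
    To see how strange, consider a toy experiment (a sketch; the 0.13 threshold is from above, everything else is synthetic): screen pure white-noise “proxies” against a target series and keep those with r > 0.13.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_years, n_proxies = 100, 1000  # notional calibration length, network size

    target = rng.normal(size=n_years)              # stand-in "temperature"
    noise = rng.normal(size=(n_proxies, n_years))  # pure noise, no signal

    # correlation of each noise series with the target
    r = np.array([np.corrcoef(p, target)[0, 1] for p in noise])

    print(f"mean r: {r.mean():.3f}")               # close to 0
    print(f"passing r > 0.13: {(r > 0.13).sum()} of {n_proxies}")
    ```

    For 100-year series the null standard deviation of r is about 1/sqrt(n) = 0.1, so roughly one pure-noise series in ten clears the 0.13 bar by chance alone.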

    • Wolfgang Flamme
      Posted Sep 30, 2008 at 2:32 AM | Permalink | Reply

      Re: Steve McIntyre (#16),

      Because of its established deficiencies as a diagnostic of reconstruction skill (), the squared correlation coefficient r2 was not used for skill evaluation.

      So Mann chooses infilled proxies that have ‘likely’ been influenced by temperature during the reference period … using a model concept whose basic quality metrics are known (to him) to have deficient skill. Everything looks like a hammer if you need to knock in some nails.

  10. Steve McIntyre
    Posted Sep 30, 2008 at 4:23 PM | Permalink | Reply

    “Robust estimate of uncertainty” is unfortunately a term often used in Team literature to describe intervals that are underestimated.
