The NAS Panel and Polar Urals

Now that we know the abysmally low replication of the modern portion of Briffa’s Yamal chronology (something previously unknown to specialists), I’ve been backtracking through some earlier documents to see how this may have impacted past studies.

We’ve talked previously about how Briffa refused to provide measurement data to D’Arrigo et al 2006, resulting in them using Briffa’s Yamal chronology more or less blind.

For some inexplicable reason, their article worsens the situation by, in effect, conflating the two sites, Yamal and Polar Urals: they used the Yamal chronology but, in the absence of Yamal core counts, used Polar Urals core counts! The NAS panel adopted this mishmash, also using the Yamal chronology together with Polar Urals core counts.

Here is an excerpt of NAS Panel Figure 4-2, taken from D’Arrigo et al 2006, showing 5 Eurasian RCS chronologies (North American RCS chronologies are shown in another portion of the original figure). Of the 5 chronologies in this figure, the only one which doesn’t have a “divergence problem” (i.e. a decline in the latter part of the 20th century) is the one in the third panel labeled “POL”: the top part of the third panel shows the chronology, while the bottom part shows core counts. Look closely at both.


Figure 1. Excerpt from NAS Panel Figure 4-2. Original Caption: Results for individual regional composite chronologies for the sites shown in Figure 4-1. The time series have been loosely grouped according to latitude bands and normalized to the common period. The bottom two panels in the right column show grouped replication plots for both North America and Eurasia. SOURCE: D’Arrigo et al. (2006).

The caption to the above figure refers to the “sites shown in Figure 4-1”, which is shown below. While Polar Urals and Yamal are close to one another, the site marked as “POL” is located at the Polar Urals site (66N, 65E) and not at Yamal (67N, 70.5E).


Figure 2. NAS Panel Figure 4-1. Original Caption: Location map of individual sites (red) and regional composites (yellow boxes) used to reconstruct Northern Hemisphere surface temperatures for the past millennium. SOURCE: D’Arrigo et al. (2006).

Let’s pause for a moment and scratch our heads. The core counts illustrated in the panel come from the Polar Urals site (russ021+russ196). The visual appearance matches the Polar Urals core count graphic; the statistics for total number of cores and mean segment length in D’Arrigo et al 2006 Table 1 match the Polar Urals data set. The location is clearly Polar Urals.

There’s just one catch: the chronology doesn’t come from Polar Urals. It comes from Yamal!

I don’t want people to think that I was previously under the impression that D’Arrigo et al used Polar Urals in their RCS reconstruction. I spotted the distinctive Yamal shape right away and confirmed that they did in fact use Yamal – and we debated at CA whether there were valid reasons for using Yamal rather than Polar Urals. We also knew at the time that D’Arrigo et al didn’t have Yamal core counts.

The question that I didn’t consider at the time – nor later when the NAS panel reported – was this: if D’Arrigo et al 2006 didn’t have core counts for their illustrated RCS chronology (Yamal), what was the basis for the core counts shown alongside it?

With the Phil Trans B archive now showing the actual Yamal core counts, we can see clearly that the panel is a total mishmash: they used the Yamal RCS chronology with the Polar Urals core count. Had they used the real Yamal core counts or the real Polar Urals chronology, this panel would have looked very different.

Just so there is no doubt: Rob Wilson was one of the authors of D’Arrigo et al 2006. While I disagree with him on statistical issues, his integrity is unimpeachable. The mishmash here is just the weird sort of cock-up that happens from time to time. In my opinion, the primary fault lies with Briffa for not providing the D’Arrigo authors with measurement data. Had Briffa done so, they would have identified the abysmal Yamal replication in 2005 and modified their article accordingly.

NOTE: Just in case anyone wants to double-check this claim, here is a script that retrieves the Yamal and Polar Urals crn and measurement data and plots the Yamal chronology and Urals core counts in a style similar to the figure here.

source("http://data.climateaudit.org/scripts/utilities.txt") #
source("http://data.climateaudit.org/scripts/tree/utilities.treering.txt")
source("http://data.climateaudit.org/scripts/yamal/collation.functions.txt")

#DATA
yamal.crn=get.crn("yamal")
urals.crn=get.crn("urals") ;tsp(urals.crn) # 778 1990
yamal=get.rwl("yamal_cru"); dim(yamal) # [1] 40892 4
urals=get.rwl("pol", verbose="esper");dim(urals) # [1] 25490 4
count.urals=countf(urals)

##############
## DARRIGO FIGURE
##################
#scale on 1686-1978 per D'Arrigo
x=yamal.crn;year=c(time(yamal.crn))
temp= year>= 1686 & year<= 1978 #legend of Darrigo
m0=mean(x[temp]); sd0=sd(x[temp])

nf=layout(array(1:2,dim=c(2,1)),heights=c(1.3,1.13))
y=x=window( (yamal.crn-m0)/sd0,start=950);year=c(time(y))
par(mar=c(0,3,2,1))
plot(year,x,xlim=c(750,2005),xaxs="i",type="n",ylim=c(-2,5.5),ylab="SD Units 1686-1978",xlab="",axes=FALSE)
axis(side=1,labels=FALSE);axis(side=2,las=2);box()
temp= x>0; x[!temp]=0
lines(year,x,type="h",col=2)
x=y
temp= x<0; x[!temp]=0
lines(year,x,type="h",col=4)
par(mar=c(3,3,0,1))
plot(950:1990,window(count.urals,start=950),type="h",col="grey80",xaxs="i",xlim=c(750,2005),ylim=c(0,65))
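
As a further check, here is a short extension of the script above (not part of the original figure code) that also plots the actual CRU Yamal core counts for comparison; it assumes that the countf helper from the collation script handles the Yamal measurement object the same way it handles the Urals one.

##################
## EXTENSION (hedged): actual Yamal core counts
##################
count.yamal=countf(yamal) #assumes countf() works on the Yamal rwl object as it does on urals
layout(1) #reset the two-panel layout used above
par(mar=c(3,3,2,1))
plot(c(time(count.yamal)),count.yamal,type="h",col="grey80",xaxs="i",xlim=c(750,2005),
  ylab="cores",xlab="",main="CRU Yamal core count")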

107 Comments

  1. Steve McIntyre
    Posted Oct 16, 2009 at 9:00 AM | Permalink

    For potential critics who may wish to criticize me for not identifying this problem sooner, there was sufficient information to identify this mishmash in 2006. I had information on Polar Urals core counts and could have matched that information to the reported information in D’Arrigo et al 2006 Table 1 from which the NAS panel drew. I also knew that the illustrated chronology was Yamal and I also knew that Briffa had refused to provide the D’Arrigo et al 2006 authors with measurement data and that Yamal core counts were unpublished. But there were lots of issues in play after the NAS and Wegman Reports and I didn’t focus on this mishmash at the time.

    I asked for measurement data from D’Arrigo et al (then unpublished) as an IPCC AR4 reviewer. This was one of two such requests (Hegerl/Crowley was the other) that resulted in Susan Solomon, Chair of WG1, threatening to expel me as a reviewer if I asked authors of unpublished reports for supporting data.

    • Heino
      Posted Oct 16, 2009 at 11:34 AM | Permalink

      Re: Steve McIntyre (#1),

      resulted in Susan Solomon, Chair of WG1, threatening to expel me as a reviewer if I asked authors of unpublished reports for supporting data

      Have you written about this before, or could you elaborate?

      • Steve McIntyre
        Posted Oct 16, 2009 at 11:59 AM | Permalink

        Re: Heino (#25),

        Full correspondence here http://www.climateaudit.org/?p=640

        • Dave Dardinger
          Posted Oct 16, 2009 at 12:18 PM | Permalink

          Re: Steve McIntyre (#27),

          Steve, I was going to reply to Heino if you hadn’t first, but I had found a reference to the statement by Susan Solomon in thread 1103 reply 133. The strange thing is that it’s dated 27 Jan 2007, while your link above is to a thread 670 dated in March 2007. How can a lower numbered thread be a couple of months later than the 1103 thread? Do you assign thread numbers when you start a posting and then, if it gets delayed maybe months, it will have a later date?

          Steve: I don’t recall the particulars here. It looks like I’d started an IPCC correspondence thread a few months earlier and not finished it. The number would be assigned when I started the thread, not when I finally published it. I don’t know how the posts are numbered. They used to get numbered in increments of 1, but now the numbers seem to increase in jumps.

        • Posted Oct 16, 2009 at 1:14 PM | Permalink

          Re: Steve McIntyre (#27), crikey, what a trail. Thank you for providing so many update notes and links. It might be good to have a whole post to draw all this stuff together, especially as it feels like some key issues emerge in your discussions with, and about, Dr Crowley.

  2. benpal
    Posted Oct 16, 2009 at 9:14 AM | Permalink

    How are you supposed to do a thorough and honest review if you are not allowed to look at the data?

  3. Carl G
    Posted Oct 16, 2009 at 9:16 AM | Permalink

    Steve, you’ve repeatedly stated that you’d be surprised if any climate scientists (which includes D’Arrigo et al.) would defend what Briffa did. Since none of them have defended it yet (and assuming that they never will defend it and agree with you), do you think any of them will re-run their reconstructions and finally see what you are talking about in regards to Yamal’s dominance?

    • Steve McIntyre
      Posted Oct 16, 2009 at 10:00 AM | Permalink

      Re: Carl G (#3),

      do you think any of them will re-run their reconstructions and finally see what you are talking about in regards to Yamal’s dominance?

      I’m sure that the D’Arrigo authors know the sensitivity to Polar Urals/Yamal. Their long network is very similar to Briffa 2000, where I’ve modeled the impact – see early June 2006 CA. (I can’t emulate the exact impact on their network because some D’Arrigo components are still missing e.g. the “publicly available” Gulf of Alaska series used by Kaufman).

      I doubt that they would simply have reported a reconstruction using Polar Urals rather than Yamal. It wouldn’t “make sense” to them and they would have re-tooled something else that did “make sense”.

  4. David
    Posted Oct 16, 2009 at 9:19 AM | Permalink

    Was this a mistake, that they accidentally included the Yamal data with the Polar Urals core counts, or was it done intentionally: “Oh, we don’t have the core data for Yamal, but let’s guess it’s similar to the Polar Urals and just use the Polar Urals core counts instead”?

    • bender
      Posted Oct 16, 2009 at 9:21 AM | Permalink

      Re: David (#4),
      It’s circumstantial evidence of a last-minute substitution. Sound familiar?

  5. bender
    Posted Oct 16, 2009 at 9:19 AM | Permalink

    Had they used the real Yamal core counts or the real Polar Urals chronology, this panel would have looked very different.

    And the confidence intervals (never, ever shown) would have been much, much wider during the modern instrumental period.

    • Steve McIntyre
      Posted Oct 16, 2009 at 9:45 AM | Permalink

      Re: bender (#5),

      bender, the Brown and Sundberg confidence intervals are probably infinite either way.

  6. MikeN
    Posted Oct 16, 2009 at 9:35 AM | Permalink

    I’d be surprised if they haven’t done so already.

  7. stan
    Posted Oct 16, 2009 at 10:09 AM | Permalink

    Once again, it would appear that the scientists publishing in peer-reviewed journals have been a bit sloppy. There seems to be a whole lot of that. Yet, we are constantly being told that peer review in publishing is the gold standard for science and is far superior to any scientific discussions which take place on mere blogs.

    I really hope that someone tries to criticize Steve for failing to expose the sloppiness of this particular study when it came out. What fun that will be.

    If peer review produces this kind of sloppiness, and peer review is supposed to produce science of the highest standard, what does that say for the state of science? And what does that say for the attitudes that scientists have about non-scientists? How stupid do they think we are?

  8. Kevin
    Posted Oct 16, 2009 at 10:25 AM | Permalink

    It makes me sad I will hear about this only on CA.
    Will any media outlet pick up this story?
    Where is Neil deGrasse Tyson with a camera crew?

  9. Alexander Harvey
    Posted Oct 16, 2009 at 10:39 AM | Permalink

    I cannot see how they could help but know that the core counts were “misleading”, but they may have had cause (I think this is covered on another thread) to think that using the POL counts was conservative, i.e. that the Yamal counts were greater. And who would blame them for underplaying the strength of the data. If they had left the counts off their POL graphic, I expect questions would have been asked during review, which they knew could put their paper on the spike until Briffa liberated the data – if he ever did, and if he did not, no one would be the wiser.

    Alex

    • bender
      Posted Oct 16, 2009 at 10:49 AM | Permalink

      Re: Alexander Harvey (#12),

      who would blame them for underplaying the strength of the data.

      No one would or should ever guess at sample sizes, even if they are low-balling. And I very much doubt they did that. It is surely an error.

      • Alexander Harvey
        Posted Oct 16, 2009 at 10:59 AM | Permalink

        Re: bender (#14),

        It is surely an error.

        I cannot possibly know, but as I said I cannot see it as an oversight. Could they have accidentally used the Yamal chronology? Maybe so, but it seems a bit far-fetched that none of the co-authors knew how the graphics were composed.

        Alex

        • bender
          Posted Oct 16, 2009 at 11:04 AM | Permalink

          Re: Alexander Harvey (#15),
          The error I would suggest – and the one I think most probable – is they intentionally used Yamal but forgot to fix the core counts plot and the map. Often the graphics are an afterthought in a publication.

        • Alexander Harvey
          Posted Oct 16, 2009 at 3:46 PM | Permalink

          Re: bender (#16),

          The error I would suggest – and the one I think most probable – is they intentionally used Yamal but forgot to fix the core counts plot and the map.

          Maybe, but as I understood it, they would have known that they could not get the core counts from Briffa so it was not something that could be fixed. Unfortunately I do not have a subscription so I cannot read the text, but I would be surprised if they reference Yamal in the text and POL in the graphics and still pass any form of review.

          As I say, I cannot read it, so I will not speculate any further.

          Alex

          Steve: http://www.st-andrews.ac.uk/~rjsw/all%20pdfs/DArrigoetal2006a.pdf

  10. EddieO
    Posted Oct 16, 2009 at 10:46 AM | Permalink

    Either the authors didn’t understand what they were doing or they were misinformed about the source of the core counts. If I were one of the co-authors, I would be very annoyed at the person who provided the data, but also at myself for not being more inquisitive at the time.

  11. Posted Oct 16, 2009 at 11:06 AM | Permalink

    This was one of two such requests (Hegerl/Crowley was the other) that resulted in Susan Solomon, Chair of WG1, threatening to expel me as a reviewer if I asked authors of unpublished reports for supporting data.

    What????

    Please tell me, Steve, is there a good reason she would have done this? I do want to believe good if it’s possible…

    • bender
      Posted Oct 16, 2009 at 11:08 AM | Permalink

      Re: Lucy Skywalker (#17),
      It is not within IPCC’s mandate to drill that deeply. That would probably be her best defense. i.e. If everyone drilled that deeply we would still be working on AR1.

      • Posted Oct 16, 2009 at 11:15 AM | Permalink

        Re: bender (#19), yes, understood, but in this case, so much subsequent material that was used by IPCC depends on unpublished stuff, does it not?

        • bender
          Posted Oct 16, 2009 at 11:17 AM | Permalink

          Re: Lucy Skywalker (#20),
          Define “much”. I don’t see a high percentage of unpublished papers being cited. The odd few. A bit of academic check-kiting here and there. Nothing unusual for the field.

        • TAG
          Posted Oct 16, 2009 at 11:32 AM | Permalink

          Re: bender (#22),

          Isn’t he referring to unpublished data, gray versions of data, unpublished algorithms and so on? None of this would have been peer reviewed by anybody, but it is kited in by being used in a published paper. The conclusions of these papers are being accepted on faith since there is no practical way to replicate them.

          Sokal published on a very similar topic to much acclaim and controversy.

        • bender
          Posted Oct 16, 2009 at 11:34 AM | Permalink

          Re: TAG (#24),
          Ah, fair point.

      • Geoff Sherrington
        Posted Oct 17, 2009 at 12:09 AM | Permalink

        Re: bender (#19),

        If everyone drilled that deeply we would still be working on AR1.

        Quite a few of my colleagues believe that AR1 should have been done properly before proceeding.

        A substantial amount of the work on blogs like this is correcting errors and methods and odd philosophies of science originating back then. We say that the deep drilling should have been done before the release of adventuresome reports such as the Summary for Policy Makers of 2007.

        Folks excluding bender, again, don’t throw all scientists into the barrel of bad apples. There are many scientists who distance themselves from climate science and, in the course of time, might emerge as honourable. Even some climate scientists might make the honour roll if they have dropped belief and gone for evidence, even re-written papers based on the better data now available.

  12. Craig Loehle
    Posted Oct 16, 2009 at 11:06 AM | Permalink

    In my opinion, the problem is not that they made an error, which is only human, but that the self-correcting aspect of science is failing here. Where are the authors who come out with corrections or addenda? In biomedicine you sure better do that or you can be in big trouble. In fact, I would wager that no climate paper has EVER been withdrawn as erroneous after mistakes were pointed out.

  13. bender
    Posted Oct 16, 2009 at 11:15 AM | Permalink

    the primary fault lies with Briffa for not providing the D’Arrigo authors with measurement data. Had Briffa done so, they would have identified the abysmal Yamal replication in 2005 and modified their article accordingly.

    This is your assumption. Jim Bouldin and Deep Climate and Tom P might suggest that Briffa’s sample was adequate.

  14. Posted Oct 16, 2009 at 11:20 AM | Permalink

    Sorry, I am confused. Why does D’Arrigo et al table 1 say that Briffa 2000 has 157 series from the Polar Urals, when it doesn’t? Were they just confused by Briffa’s unclearly labelled fig 1f?

    Does Rob Wilson need to issue an erratum?

  15. Phillip Bratby
    Posted Oct 16, 2009 at 12:52 PM | Permalink

    Isn’t Briffa the most likely person to notice that the D’Arrigo et al 2006 Table 1 and the NAS panel figure 4-2 were wrong? Should he not have reported this error? You certainly cannot be given any blame Steve. Briffa cannot claim he never saw any of this.

  16. Tolz
    Posted Oct 16, 2009 at 1:34 PM | Permalink

    Is there any possibility of getting Rob Wilson to lend a perspective on the matter?

  17. Robinson
    Posted Oct 16, 2009 at 4:00 PM | Permalink

    For some inexplicable reason, their article worsens the situation by, in effect, conflating the two sites, Yamal and Polar Urals: they used the Yamal chronology but, in the absence of Yamal core counts, used Polar Urals core counts!

    I’m no Mathematics Major, but even to me this sounds like an unbelievably retarded thing to do. Peer-reviewed indeed.

  18. Posted Oct 16, 2009 at 4:16 PM | Permalink

    Had they used the real Yamal core counts or the real Polar Urals chronology, this panel would have looked very different.

    Yeah, it would–if they could get the real Yamal core counts. But of course they couldn’t, so the figure should have had words like this in place of the utterly false (no matter how innocently so) representation of core counts: “Access denied, thus we don’t know.”

    Would that have prevented publication of their paper?

    • deadwood
      Posted Oct 16, 2009 at 6:46 PM | Permalink

      Re: Micajah (#36),

      Would that have prevented publication of their paper?

      That’s the question – isn’t it? Sadly, I don’t believe it would have.

  19. TerryBixler
    Posted Oct 16, 2009 at 5:17 PM | Permalink

    OT

  20. Charlie
    Posted Oct 16, 2009 at 6:57 PM | Permalink

    Just so there is no doubt: Rob Wilson was one of the authors of D’Arrigo et al 2006. While I disagree with him on statistical issues, his integrity is unimpeachable.

    Since his integrity is unimpeachable should we assume that Rob Wilson is working on an erratum or corrigendum or whatever the “oops, we goofed” letter is called in peer reviewed literature?

  21. AnonyMoose
    Posted Oct 16, 2009 at 8:25 PM | Permalink

    So they didn’t take the rotten apple out of the barrel and left it to the customers to pick out all the resulting rotten apples? Someone doesn’t know how customers react to such a shop.

  22. Coup d'etat
    Posted Oct 16, 2009 at 8:32 PM | Permalink

    Just who peer reviewed Briffa? Do they have some questions to answer for not discovering this themselves? They obviously were on the Briffa/AGW side, and since the reports didn’t disagree with their pet theory, they didn’t pay much attention – unlike the reverse case, where they look for minute flaws to fail a good report going against AGW.

  23. Coup d'etat
    Posted Oct 16, 2009 at 8:34 PM | Permalink

    All papers using Briffa’s Yamal data in part or in whole should be stricken from the record, so to speak!

  24. Posted Oct 17, 2009 at 1:24 AM | Permalink

    Folks excluding bender, again, don’t throw all scientists into the barrel of bad apples. There are many scientists who distance themselves from climate science

    It’s a natural first reaction to vent, on discovering that the incoherence of the science is “worse than we thought”. And it’s a natural second reaction to dig for motives. But Steve, in persistently yanking us back to facts and material evidence, and making us cut out piling on indignation, is surely giving space to enable all scientists to stop and think, come clean, and choose to help with mending the science. We can help or hinder that process. IMHO.

    • crosspatch
      Posted Oct 17, 2009 at 2:33 AM | Permalink

      Re: Lucy Skywalker (#42),

      Very well put. Yes, it is difficult not to get emotional when we feel we are being robbed in a very real sense of our hard earned. And when we see, uhm, mistakes such as these it is all the more infuriating. It is one thing to watch a scientific debate but this particular debate has real economic consequences for the average person. People are wanting to actually take a piece of our livelihood away from us because of these results and it is very frustrating when one sees how pitiful the foundation is on which that course of action stands.

      In these times, I can’t blame people for getting emotional. If results are going to be used to take people’s money from them, there is a certain implied responsibility to ensure that it is for good reason. Muckups like this are simply infuriating to those who grasp what is being said.

  25. Rob Wilson
    Posted Oct 17, 2009 at 2:52 AM | Permalink

    Dear Steve et al.
    As a response to your recent posts and also to your private e-mails, here’s my 10 cents towards the use of the Yamal and/or Polar Urals data.

    I will admit that the data description (Figure 2 and Table 1) in D’Arrigo et al. (2006) is somewhat misleading.

    For those who have not read the paper

    Click to access DArrigoetal2006a.pdf

    we derived two NH reconstructions: one using TR chronologies developed using traditional individual-series detrending methods (STD), and another that solely used TR chronologies that had been processed using the RCS method. The latter method is theoretically better at capturing low-frequency information. I think our paper clearly shows this.

    For all locations except the Polar Urals and the Alps, we used the same data to derive the STD and RCS flavours.

    As we have discussed through CA before, I was not happy with the resultant RCS chronology using the Polar Urals data. I know you do not agree with my decision here. Anyway, the Yamal series represented an RCS chronology from a nearby location. The figure below shows the strong coherence between the Polar Urals STD and Yamal RCS series.

    Unfortunately, in the paper we failed to clarify that different data were used for the STD and RCS versions for the location we labelled the Polar Urals. The replication data in Figure 2 are from the Polar Urals data.

    Please note that the situation exists for the Alps. The historical spruce data detailed in Wilson and Topham (2004) did not perform well using the RCS method, so instead we used a different species chronology (Nicolussi and Schiessling 2001) for the RCS version.

    So – to clarify – in Figure 2, the replication histograms are relevant for both the STD and RCS flavours for each site EXCEPT POL and ALPS, where the replication is only relevant for the STD versions.

    I will not be troubling the journal with a corrigendum as it does not change the results of the paper at all.

    Finally, I want to clarify that I never asked Keith Briffa for the raw Yamal data.
    The simple fact of the matter is, I have great respect for Keith and I saw no point at the time in asking for raw data when there was a published RCS chronology for that location.

    Rob

    Steve: Rob previously commented in Feb 2006 as follows:

    I would have preferred to have processed the Yamal data myself, but like you, was not able to acquire the raw data.

    and by email:

    Keith would not give me his Yamal raw data, but said that the Yamal series was a robust RCS chronology.

    I (and, in my opinion, quite reasonably) interpreted these statements as meaning that Rob had asked for the data.

    • steven mosher
      Posted Oct 17, 2009 at 4:52 AM | Permalink

      Re: Rob Wilson (#44), Knowing what you know now about the core counts, would you, just as a matter of course, ask for raw data in all cases? As a former engineer, even when I was supplied results that had been vetted by my customer (the Department of Defense), I would make routine requests for raw data. This was just standard practice. It mattered little that the people who provided me results were well known and meticulous fellows. It mattered little that the results were in fact validated and verified. I always requested the raw data. That way, I could avoid the mistakes that happen when you trust a human process. Since peer review is only necessary but not sufficient, wouldn’t it be prudent to request raw data in all cases?

    • John A
      Posted Oct 17, 2009 at 5:41 AM | Permalink

      Re: Rob Wilson (#44),

      Anyway, the Yamal series represented an RCS chronology from a nearby location. The figure below shows the strong coherence between the Polar Urals STD and Yamal RCS series.

      Yamal differs from the Polar Urals by more than 2 STDs in the critical 20th Century part, even as the number of cores drops to “just a few good men”, which is in turn dominated by just one tree. Is this what dendros call robust?

      I’ve no idea what you mean by “strong coherence” but to me, you’re seeing a classic coherence between autocorrelated series with the same mean – and it means nothing at all.

    • Jason
      Posted Oct 17, 2009 at 6:01 AM | Permalink

      Re: Rob Wilson (#44),

      Thank you very much for stopping by.

      Many on this site and elsewhere are very eager to find out the following:

      In your expert opinion as a dendrochronologist, are Steve’s criticisms of Briffa’s 2000 Yamal series and its use as a temperature proxy justified?

      Also, as someone experienced in the use of RCS, if two groups of trees (of the same species, from nearby locations, but collected via different methods) have different empirical relationships between tree age and ring size, is it dendrochronologically appropriate to combine them into one group and apply RCS to the entire group?

      I really appreciate you sharing your knowledge with us.

    • Dave Dardinger
      Posted Oct 17, 2009 at 8:51 AM | Permalink

      Re: Rob Wilson (#44),

      Steve M says:

      I (and, in my opinion, quite reasonably) interpreted that statement as meaning that Rob had asked for the data.

      I quite agree

      snip –
      Steve: let’s just leave it where it is. No need to comment further

      • bender
        Posted Oct 17, 2009 at 10:02 AM | Permalink

        Re: Dave Dardinger (#52),

        Re:

        Keith would not give me his Yamal raw data

        Could mean any number of things. I would not over-interpret these statements, rather take them at face value. Maybe Wilson only hinted that he was interested in the data and maybe Briffa just chuckled. No “asking”, no “refusal”. I see no reason to question Wilson’s candour. Steve made a reasonable assumption and maybe the reasonable assumption was wrong.
        .
        Rather than engage on such trivial matters, why not ask about something important, such as Wilson’s view on the use of 10 samples during a known period of strong divergence using a method (RCS) that can be subject to endpoint problems? Wilson is the world’s foremost expert on divergence and I would like to hear as much as he has to say on the topic. Causes, consequences, remedies, field studies and experiments to quantify the issue …
        .
        I thank him very much for dropping in. If we were to collate a list of questions (purely technical, purely science), I wonder if he would be willing to look them over and try answering? (I would be willing to let him cherry-pick from that list :))

        • Dave Dardinger
          Posted Oct 17, 2009 at 10:33 AM | Permalink

          Re: bender (#54),

          snip –

          Steve: let’s just leave it where it is.

    • Steve McIntyre
      Posted Oct 19, 2009 at 7:44 AM | Permalink

      Re: Rob Wilson (#44),

      Rob Wilson said above that a similar situation existed for the Alps.

      Please note that the situation exists for the Alps. The historical spruce data detailed in Wilson and Topham (2004) did not perform well using the RCS method, so instead we used a different species chronology (Nicolussi and Schiessling 2001) for the RCS version.

      But there’s an important difference. The situation in the Alps was reported in a footnote to Table 1, while the situation in the Polar Urals was not reported. The Alps note read as follows:

      Note that for the Alps, different data sets were utilized for the STDPT and RCS chronology versions. The STDPT is a highly replicated expanded data set of that detailed by Wilson and Topham [2004]. RCS was not possible, however, with these data due to significant differences in mean RW between the historical and living data sets which resulted in a highly biased RCS chronology. Instead, for the Alps RCS chronology, the Nicolussi and Schiessling [2001] long pine RCS chronology was used. No STD version of this chronology exists.

      The fact that this was important enough to report for the Alps means that it is even more important to report for Yamal. Long before the present discussion, the use of Yamal over Polar Urals was an issue in Review Comments for IPCC AR4.

      Rob says:

      I will not be troubling the journal with a corrigendum as it does not change the results of the paper at all.

      I do not believe that the journal would consider itself “troubled” if Rob and his coauthors corrected an erroneous description of a series. And, in fact, the results of the paper would change if the RCS series from the site said to have been used were actually used.

      • Mike B
        Posted Oct 19, 2009 at 7:57 AM | Permalink

        Re: Steve McIntyre (#67),

        It seems to me that it is Rob’s duty to report the error, and the journal’s decision whether or not to be “troubled” by issuing a corrigendum.

        Not that the outcome would be any different, but I expect better from one of the “good guys”.

      • bender
        Posted Oct 19, 2009 at 8:23 AM | Permalink

        Re: Steve McIntyre (#67),
        The way it works is he should report the error, state his case why he thinks it’s not significant, and then it is up to posterity to determine whether or not he is correct. But it all has to start with the corrigendum …

    • steven mosher
      Posted Nov 18, 2009 at 2:24 PM | Permalink

      Re: Rob Wilson (#44), bump

  26. AndyL
    Posted Oct 17, 2009 at 3:01 AM | Permalink

    Rob

    Thank you for your contribution to this discussion. While you are here, are you prepared to comment on Steve’s claim that no Dendro has defended Briffa’s use of 10 cores in his RCS reconstruction?

  27. John A
    Posted Oct 17, 2009 at 5:33 AM | Permalink

    Here’s the picture that results from Steve’s code:

    Watch for the line

    temp= year>= 1686 & year< = 1978 #legend of Darrigo

    because there’s supposed to be no space between < and = but WordPress puts one in automatically. Take it out by cutting and pasting the code into Notepad first and then removing the space before cutting/pasting into R

  28. ron from Texas
    Posted Oct 17, 2009 at 6:12 AM | Permalink

    “Please note that the situation exists for the Alps. The historical spruce data detailed in Wilson and Topham (2004) did not perform well using the RCS method, so instead we used a different species chronology (Nicolussi and Schiessling 2001) for the RCS version.”
    When you say that the data “did not perform well,” what is that supposed to mean? It didn’t make the graph you were expecting? So you use a method you would have used on another tree for this one because it, what? Produces the graph you were looking for? In general, is it not possible to actually just graph data as it actually appears? Especially as this data is a loose, sometimes ill-fitting proxy for temperature? I’m just not understanding at all this practice of not liking the appearance of data, so the data must be altered until it “looks right.” Looks right to fit what? Mangling data to fit a theory, as opposed to developing a theory that explains the data?

    • Ron Cram
      Posted Oct 17, 2009 at 8:14 AM | Permalink

      Re: ron from Texas (#50),

      You are posing the same questions I have. I just do not understand the special pleading for the Alps and Polar Urals. If there was a good scientific reason why it should be done differently in these locations, the paper should have spent extra time explaining the rationale. Methodology is extremely important for readers to understand. Instead of making the case, D’Arrigo et al fail to mention any change at all. This is a very misleading oversight.

  29. MikeN
    Posted Oct 17, 2009 at 8:36 AM | Permalink

    He may be referring to low EPS statistics for the RCS chronology. The Alps came in below the .85 threshold.

    Click to access 40770009.pdf

  30. Posted Oct 17, 2009 at 9:15 AM | Permalink

    Thanks Rob for stopping by and explaining.

    It seems possible that Rob simply forgot his efforts from three years ago. After all, there are a lot more important issues in a person’s day.

    On a side note, I’m a newborn critic of RCS standardization: as all can see from some of my posts, when it is applied differently the endpoints change substantially. This gives the impression that the whole series is decent, because the middle – as shown in Rob Wilson’s plot above – is quite similar and basically unaffected by changes to the method, but the endpoints experience heavy, unjustified corrections. I say unjustified because I’m an engineer and there is a substantial possibility of small deviations from this exponential curve creating a large non-temperature response. I could expand on this, but perhaps this isn’t the appropriate thread.
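
    For readers unfamiliar with the method under discussion, the generic RCS recipe – align the cores by cambial age, pool them into a single regional growth curve, divide each measurement by the curve value for its age, and average the resulting indices by calendar year – can be sketched in a few lines of R. This is only an illustration of the general idea (the data-frame columns are hypothetical), not the Briffa 2000 or D’Arrigo et al 2006 implementation, which fit or smooth the regional curve rather than using raw age means.

    #Toy RCS sketch - illustration of the generic method only, NOT the Briffa/D'Arrigo code
    #'rw' is assumed to be a data frame with columns: id (core), year (calendar year),
    #age (cambial age) and width (ring width)
    rcs_sketch=function(rw) {
      #1. regional curve: mean ring width at each cambial age, pooled over all cores
      curve=tapply(rw$width,rw$age,mean)
      #2. index each measurement by the curve value for its cambial age
      rw$index=rw$width/curve[as.character(rw$age)]
      #3. chronology: average the indices within each calendar year
      chron=tapply(rw$index,rw$year,mean)
      data.frame(year=as.numeric(names(chron)),rcs=as.numeric(chron))
    }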

  31. Posted Oct 17, 2009 at 10:47 AM | Permalink

    Re Rob Wilson, #44,

    Thanks for participating here and for your clarifying remarks. You note,

    I will admit that the data description (Figure 2 and Table 1) in D’Arrigo et al. (2006) is somewhat misleading.

    It seems to me, at least from the discussion here, that certifying your “POL” series (in fact Yamal) as representing over 50 cores in the crucial modern calibration period in Figure 2 of your paper is not “somewhat misleading”, but rather “highly misleading,” however inadvertent that may have been at the time.

    You do caution in your paper,

    We should note that replication for TORN, POL and TAY (Figure 2) did decline to eight or nine series for some select periods, but the EPS was never LT .70.

    However, the accompanying figure 2 (the NAS version of which is what John A is replicating in #47 above, if I understand correctly) clearly shows that these deficient periods occurred only in the 11th and 15-16th centuries, and therefore do not diminish the statistical significance of the dramatic upturn of your “POL” (Yamal) series in the last century.

    So had you known how few cores were represented in the 20th century in the Yamal series, would you have used it?

    Also, if the “coherence” in your comment is essentially a moving average correlation between the two series, there must be some mistake at the end, since the correlation obviously deteriorates dramatically after 1900.
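
    As a purely illustrative aside on that last point, a moving correlation between two chronologies can be computed in a few lines of R; the 31-year window below is arbitrary, and this is not the calculation behind the coherence figure in Rob Wilson’s comment.

    #Running correlation between two equal-length chronology vectors x and y
    #(31-year window chosen arbitrarily for illustration)
    run_cor=function(x,y,win=31) {
      half=(win-1)/2
      sapply(seq_along(x), function(i) {
        idx=max(1,i-half):min(length(x),i+half)
        if(length(idx)<win) NA else cor(x[idx],y[idx])
      })
    }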

    • bender
      Posted Oct 17, 2009 at 11:09 AM | Permalink

      Re: Hu McCulloch (#58),

      had you known how few cores were represented in the 20th century in the Yamal series, would you have used it?

      if the “coherence” in your comment is essentially a moving average correlation between the two series, there must be some mistake at the end, since the correlation obviously deteriorates dramatically after 1900.

      I echo these questions. I would rephrase the first, asking how you would have caveated the usage. Regarding the second, I would like to know how the calibration statistics change as the amount of data is gradually clipped back from 1990 to prior to the onset of divergence – which is ~1957 when you compare CRU Yamal to Schweingruber, perhaps earlier.
      .
      IMO the issue here is cherry-picking chronology substitutions such that the calibration statistics during the divergence period are much higher than prior to the divergence. Because this is what generates false hockey sticks.

    • steven mosher
      Posted Oct 17, 2009 at 10:16 PM | Permalink

      Re: Hu McCulloch (#56), Thanks Hu.

      I find it interesting that the EPS dropped to .70. In his other paper (one I cited on the millennium project) Rob and others (Esper, I believe) used a cut-off of .85 for a valid chronology. I haven’t checked whether Steve has looked at EPS before (Briffa & Jones 1990 – or Vose, I can’t recall), but it struck me as one of those novel statistics….

      So the approach goes something like this: they set .85 as a criterion, except when a series falls below it. So I’ve now seen .85 cited as a cut-off, then .8, and now .7.

      Steve: I wrote a script to calculate EPS a couple of years ago in connection with Jones et al 1998. It comes from Wigley 1984 (not an actual statistics text). I might have done a post on it, but it might be from pre-CA days.
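
      For reference, the EPS statistic under discussion is conventionally computed from the Wigley et al (1984) expression EPS = n*rbar/(n*rbar + (1 - rbar)), where n is the number of cores available and rbar is the mean inter-series correlation. Below is a minimal R sketch of that formula (a simplified single-rbar version, not a running-window calculation, and not the CA script referred to above).

      #Minimal EPS sketch after the Wigley et al (1984) formula; NOT the CA script mentioned above
      #'rw' is assumed to be a matrix of ring-width indices (rows = years, columns = cores), NA where absent
      eps_sketch=function(rw) {
        cors=cor(rw,use="pairwise.complete.obs")    #inter-series correlations
        rbar=mean(cors[upper.tri(cors)],na.rm=TRUE) #mean off-diagonal correlation
        n=rowSums(!is.na(rw))                       #cores available in each year
        n*rbar/(n*rbar+(1-rbar))                    #EPS per year
      }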

  32. Reed Coray
    Posted Oct 17, 2009 at 1:34 PM | Permalink

    RE: stan: October 16th, 2009 at 10:09 am, #10
    Yet, we are constantly being told that peer review in publishing is the gold standard for science and is far superior to any scientific discussions which take place on mere blogs.

    I’d say climatology peer review can be better characterized as the pyrite standard.

  33. chopbox
    Posted Oct 17, 2009 at 10:22 PM | Permalink

    Thank you, Rob Wilson, for taking the time to talk to us. My take on the audience here is that most of us really do want to know, and so appreciate your shedding some light on a topic on which you are an expert. As always, there are people here (as elsewhere) who are quick to rip: please don’t let them stop you from talking to the rest of us.

  34. MikeN
    Posted Oct 17, 2009 at 11:30 PM | Permalink

    My last comment didn’t get through. Reading the violin paper Rob referred to about the Alps, I think his invalid chronology comes from low EPS statistics.

  35. Posted Oct 18, 2009 at 12:02 AM | Permalink

    In table 1 of D’Arrigo et al 2006, there is a column showing the period of time for each chronology during which the number of radii exceeds 10. This seems to imply that having more than 10 ring-width series covering a period of time is preferable. Is there a generally accepted basis for the preference?

  36. Posted Oct 18, 2009 at 4:55 AM | Permalink

    It is now apparent from the lower panel of John A’s graph in #47 above (or panel 3 of NAS Figure 1 in the head post) that a large part of the reason for the high variance of the Polar Urals series before 1800 and particularly before 1100 is small sample size. Steve posted Rob Wilson’s graph of this series in his Feb. 22, 2006 post, “Wilson on Yamal Substitution”:

    But if small sample size and high variance in part of the reconstruction period is valid reason to reject the Polar Urals series, wouldn’t comparably small sample size and high variance in the even more crucial calibration period be an equally valid reason to reject the Yamal series? Perhaps Dr. Wilson can tell us why this was not done.

    Or, if the two sites are close enough to be similar, perhaps an even better alternative would have been simply to merge the two into a single series, resulting in something similar to the green line in Fig. 3 of Steve’s 9/27/09 post. Why was this not considered?

    Another potential cause of the high early volatility of this series may be that volatility increases with average ring width, so that taking logs would give a more homoskedastic series. Was this done already, or do you think this would help? (This issue was raised already, on Comment #528 of the “Gavin’s Guru” thread, but perhaps Dr. Wilson hasn’t had time to wade through all those comments! 😉 )

  37. EW
    Posted Oct 18, 2009 at 5:04 AM | Permalink

    What I would really want to see is a paper about the extended chronology and all archived data from Hantemirov’s lab.
    They are the ones who took the samples and processed them and therefore should know everything about them.
    According to HS02 and the Methods in Hantemirov’s PhD thesis abstract, there must be living, dead and sub-fossil data galore. And here we are discussing 10-12 of Briffa’s trees…
    BTW – Hantemirov compares their Polar Urals data with the Salehard meteo station (therefore giving it great importance in the global reconstructions) – but in the GHCN the data end soon after 2000…

  38. Kenneth Fritsch
    Posted Oct 19, 2009 at 9:50 AM | Permalink

    Question 1: Did we gain insights into the Rob Wilson “approaches” and defenses from the blogosphere or from the peer-reviewed literature?

    Question 2: Do you admonishers really think your suggestions, following some ideal peer review process, are being heard? Is not the process in reality more like this: find an error in the literature through diligent analyses, call the error to public attention through the blogosphere, be ignored by the authors or their spokespersons and use that lack of response in your overall judgment of them; or, alternatively, receive at least a partial reply, without responses to more specific questions, that again can be used as a basis of judgment on the authors and their science?

    The evidence provided by the responses to these analyses that I have witnessed would seem to support the proposition that publicizing the results will not lead directly to any significant and meaningful changes in the processes, including peer review, but, in my view and more importantly, will lead to valuable information for the thinking person to make judgments about the authors’ works and “approaches”.

    This real blog review process that I have described above might find general weaknesses in the peer review process and amongst publishing scientists, or it might find problems more uniquely associated with the science and authors in question. I would suggest that I distinctly get the feeling that some here are attempting to save/restore some ideal system that probably never existed. The blogosphere gives us simply another tool for analyzing and judging the peer review process, but not directly changing it.

    • bender
      Posted Oct 19, 2009 at 10:02 AM | Permalink

      Re: Kenneth Fritsch (#70),
      It’s a big ship, Ken. It’s going to take some work to deflect its course a little.

  39. bent-out-of-shape
    Posted Oct 19, 2009 at 10:03 AM | Permalink

    Yes, this is a drive-by. No need for anyone to respond. But it would be in poor form to delete (right?).

    This post makes me chuckle. A blog is demanding that someone print a correction to a footnote in a table…

    …ironical hypocrisy!

    • bender
      Posted Oct 19, 2009 at 10:12 AM | Permalink

      Re: bent-out-of-shape (#72),
      Of course your observation won’t be deleted!
      Are you interested in why the request is being made?
      There’s a back-story. If you choose to ignore it, that’s your choice.

      • bent-out-of-shape
        Posted Oct 19, 2009 at 11:27 AM | Permalink

        Re: bender (#74),

        I understand the motivation for the request. Just boiling the story down to the basics.

        What if they posted their correction to the (non-existent) footnote in a table in a blog? Would that be adequate? If not, then you must see the irony/hypocrisy – no???

        There’s something here about goose and gander. Or is it pots and kettles…

        • bender
          Posted Oct 19, 2009 at 11:35 AM | Permalink

          Re: bent-out-of-shape (#75),

          Just boiling the story down to the basics.

          I would say you’re focusing on a side-act, myself. There is no irony that I can see. The request is symbolic.
          .
          Had we known 10 years ago what Briffa’s sample size was we would not be debating this here today. We would not be awaiting Briffa’s defense. Similarly, had the Graybill bristlecones been outed 20 years ago – when they were known to be flawed – we might never have had the debate over MBH98 or Mann et al. (2008).
          .
          Those details matter when you have a Team on an unscrupulous hunt for “the right signal”.
          .
          Tell me: what do you make of the Esper quote?

        • bent-out-of-shape
          Posted Oct 19, 2009 at 11:41 AM | Permalink

          Re: bender (#76),

          The request is symbolic.

          Huh? I don’t get this statement. …could you explain it to me? I think it’s a communication breakdown. (no ref to the song)

        • bender
          Posted Oct 19, 2009 at 12:01 PM | Permalink

          Re: bent-out-of-shape (#79),
          I explain it in the very next paragraph. A corrigendum prevents the continual transmission of errors, across articles and through time. When you are talking about the sample size of the series that has the single biggest impact on global temperature reconstructions (CRU Yamal larch), we are not talking about something trivial. This datum needed to be reported 10 years ago.

        • compy
          Posted Oct 19, 2009 at 11:39 AM | Permalink

          Re: bent-out-of-shape (#75),

          What if they posted their correction to the (non-existent) footnote in a table in a blog? Would that be adequate? If not, then you must see the irony/hypocrisy – no???

          Right, no. I think you are inventing hypocrisy when none exists. (I personally have nothing against drive-bys. I just wish the drivers would make sense.)

        • bender
          Posted Oct 19, 2009 at 11:41 AM | Permalink

          Re: bent-out-of-shape (#75),

          What if they posted their correction to the (non-existent) footnote in a table in a blog?

          That would be a start, for sure. Irony gone.

        • Steve McIntyre
          Posted Oct 19, 2009 at 12:24 PM | Permalink

          Re: bent-out-of-shape (#75),

          Actually this Table and footnote have a considerable backstory. In Sept 2005, the review of the AR4 First Draft took place. D’Arrigo et al (subsequently 2006) was submitted and could be downloaded from the IPCC TSU (which I did on Sep 20, 2005 – the date of the saved version). I immediately noticed that their Table 1 said that they had 155 cores for Polar Urals, which was more than the 93 cores in russ021 used in Briffa et al 1995 (the coldest year of the millennium). I then asked them (Sep 20, 2005 – the same day):

          I’ve looked at your interesting D’Arrigo et al. 2005, submitted. I noticed that you used a Polar Urals data set of 155 cores. The Schweingruber larch data set russ021 archived at WDCP has 93 cores. What accounts for the difference? Who did you get the data from? Can you send me a copy of the measurement dataset that you used? Thanks, Steve
          PS In fact, your Figure 2 indicates that a surmise of mine about the older dataset may have been correct.

          I don’t see any reply in my email records, which are usually pretty complete. A few days later (Sep 26), I posted a note at CA here on the impact of combining russ176w and russ021w (a chronology that is very similar to the corresponding calculations done by Esper and Wilson).

          D’Arrigo et al had horrendous data archiving – the various chronologies weren’t archived, the sites used in regional composites weren’t identified and many sites remain unarchived to this day.

          I asked the IPCC Technical Services Unit to obtain the data for me in order to review the article (which was cited in the draft). I’ve recounted the subsequent events, which led to Susan Solomon threatening to expel me for asking for data from authors (data which in this case remains unavailable to this day.)

          So does this information matter? You bet it does. I noticed something immediately on seeing D’Arrigo et al (submitted). Polar Urals vs Yamal is not a minor issue; together with bristlecones, it props up the spaghetti graph. Readers are entitled to accurate disclosure.

  40. a reader
    Posted Oct 19, 2009 at 10:10 AM | Permalink

    MangoChutney

    This is off topic so delete if necessary.

    A.E. Douglass in 1929 was convinced that sunspot numbers had an effect on tree rings. He noticed regularly recurring 11-year cycles in the rings except for 75 years between 1650 and 1725. That those years were also sunspot-free was later confirmed by Dr. E. Walter Maunder. This is from his December 1929 article in National Geographic. I don’t know if this theory has ever been accepted, but apparently extraterrestrial influences have been studied for a long while.

  41. bender
    Posted Oct 19, 2009 at 12:09 PM | Permalink

    bent:
    The size of Briffa’s Yamal sample is not officially on the record. It needs to be on the record if IPCC is to make use of that fact.
    .
    People are imploring McIntyre to do his own reconstruction. Sure, that would be nice. But let’s start by making the inadequate size of the Briffa sample a matter of public record. Let’s debate the size of sample required to make the claims that Briffa does.

  42. Posted Oct 19, 2009 at 12:11 PM | Permalink

    In D’Arrigo et al 2006, the caption for figure 2 said: “Figure 2. Individual regional composite RCS chronologies and their replication.”

    Was the “Alps” sample distribution that purports to be the “replication” for the “RCS chronology” also the count for a data set which was not used for the RCS chronology? Or is it only “POL” that purports to show the replication for the data used in the RCS chronology but actually shows the replication for the STD chronology?

    If a significant difference between samples from living trees and others caused a bias in the RCS chronology for the Alps, so that a different data set was used, how was that bias manifested?

    Is there a similar difference in the Yamal data set which introduces a similar bias into the Yamal RCS chronology?

  43. bent-out-of-shape
    Posted Oct 19, 2009 at 1:36 PM | Permalink

    Re: #80 and 83:

    So is there enough data available for McIntyre to write a paper on this issue? He has shown a lot of graphs on this website.

    I’m not talking about a full-on reconstruction, as bender refers to in #81. (It would be interesting to see what his NH temp would look like, tho, since he knows all of the stats and proxies to do it.)

    Steve: see Andy Revkin’s recent post on this very question and the answer that I gave to him – one that none of the commenters actually discussed. Unfortunately the statistics that I know (Brown and Sundberg etc) indicate to me that the proxies are presently too inconsistent to produce a meaningful reconstruction. I keep an eye on new papers and new proxies to see if there are any new ones that might help.

    • bent-out-of-shape
      Posted Oct 19, 2009 at 4:17 PM | Permalink

      Re: bent-out-of-shape (#84),

      so from what I can gather from the post, the long and short of it is that: yes there is enough to write a paper. But you won’t write it b/c you would rather have instantaneous gratification of doing your work via a blog. Is that about right???

      • bender
        Posted Oct 19, 2009 at 4:38 PM | Permalink

        Re: bent-out-of-shape (#89),
        I’ll assume for the moment that you are reasonably intelligent. If you read the blockquote in #87 you will see that Steve does not think meaningful reconstruction is possible given the inconsistency of the proxies put forth to date. This blog documents why he has started to come to this conclusion. If this was YOUR hypothesis, what form would your paper take so that it was “constructive”, “moving forward”, “making a positive contribution to the science”, etc.? Remember: this is an audience that, like an infant, won’t take “no” for an answer.

        • bent-out-of-shape
          Posted Oct 19, 2009 at 5:23 PM | Permalink

          Re: bender (#90),

          I never said he should make a reconstruction. You did.

          McIntyre said that the “statistics that I know (Brown and Sundberg etc) indicate to me that the proxies are presently too inconsistent to produce a meaningful reconstruction.” Sounds like he could write a paper on that topic…???

          then goes on to say over at the Revkin blog that I was pointed to:

          “…I believe that there is a useful role for timely analysis of the type that I do at Climate Audit.”

          Seems like my comment in #89 was dead on.

        • bender
          Posted Oct 19, 2009 at 5:32 PM | Permalink

          Re: bent-out-of-shape (#94),
          I’ve been told to let you drive on by. Bye.

        • bent-out-of-shape
          Posted Oct 19, 2009 at 11:00 PM | Permalink

          Re: bender (#95),

          way to give someone who is a self-professed outsider a smidgen of almost-honest attention. I guess since I am not part of your team, I don’t deserve much more than that, eh? Thanks for the friendly attitude, Kenneth.

          …perhaps I hit a sore spot. Sorry about that. My observation in #94 still seems like it is close to the truth tho.

          #92. Prove it then… the world is waiting on you to do it.

        • bender
          Posted Oct 20, 2009 at 1:48 AM | Permalink

          Re: bent-out-of-shape (#96),
          You are asking what motivates McIntyre to do things the way he does. You expect me to answer that?
          1. He publishes papers. Your presumption otherwise is insulting.
          2. Would you like him to publish the number 10? That’s the number of samples present in the modern portion of Briffa’s Yamal chronology. Why didn’t Briffa publish that number, along with the paper? How do you publish a corrigendum for someone else’s paper? Why is this his burden, not Briffa’s?
          3. You presume the motivation is “instant gratification”. Does the publication of this number on a blog not serve a societal need for disclosure? Do you think, for example, D’Arrigo and Wilson might have wanted to know how highly replicated the chronology was when they wrote the 2006 paper? Or IPCC?

        • bent-out-of-shape
          Posted Oct 20, 2009 at 10:39 AM | Permalink

          Re: bender (#97), Re: bender (#99), Re: RomanM (#100),

          you are trying to put my back against the wall b/c I point out an inconsistency here. Not falling for it. I asked bender if it would be ok if the authors posted a comment in a blog about the error. He said it would be a start, see #78.

          Why just a start???

          Why is it that the scientists who are constantly ridiculed and put down (for example: “But hey, it’s climate science.”) have to print a correction in the peer-reviewed literature – which is also deemed to be worthless by many – when at the same time it’s ok for those here to post comments on a blog that MUST be addressed by the scientists? And if the scientists don’t immediately run here to “defend themselves”, it is a big conspiracy/they are hiding something, and their science is a joke?
          I understand that blogs are a new forum for smart, trained professionals to affect a field they were not originally trained in. I agree that it is a good opportunity to bring in fresh ideas.

          But something has to change from the current state. It is a strange inconsistency. That is what I was talking about originally.

        • RomanM
          Posted Oct 20, 2009 at 11:19 AM | Permalink

          Re: bent-out-of-shape (#101),

          What inconsistency?

          Why is it that the scientists … have to print a correction in the peer reviewed literature – which is also deemed to be worthless by many?

          Because that is where they made their original claims of scientific correctness and, if they are honest with themselves (assuming that they accept that a substantial error has been made), they should post such a correction in the same place. There is no such responsibility or moral imperative on others to do that for them. Given the “closed shop” mentality of many journals (never mind the actual monetary costs), it would be a rare day for most people to even try to publish scientific corrigenda, particularly after past experience.

          The authors are perfectly welcome to dispute and/or correct what is said at this blog if they so wish, but no one says that they are required to do so either. I am hoping that at least a few will re-evaluate their work in light of what they read and maybe give some credit for the few bon mots they may take away from here. 😉

          Their choice.

        • bent-out-of-shape
          Posted Oct 20, 2009 at 2:31 PM | Permalink

          Re: RomanM (#102), Re: Kenneth Fritsch (#104),

          Kenneth, I got my answer. No need for you to lend me your shoulder to lean on (when I’m not strong). …and it sounds like you are the one who is more “bent out of shape” than the one with that name. 😉

          Roman, thanks for the honesty. According to Roman, it seems like the feeling here is that it costs too much money to write a paper – plus being outsiders anything you write will be rejected by the peer-review process (as you will be viewed as haters). So you are resigned to blog posts… with the hope that you are listed in the acknowledgments of a paper for your scientific contribution. A scientific “shout-out”, if you will. Now I see the aim of this (and similar) blog(s) and will read them in a new light.

          Steve:
          the monetary cost of writing a paper is not an issue for me. You’re inventing things. If you wish to discuss Yamal, please do so. Editorially, I don’t wish this distraction to hijack the thread and have snipped/deleted irrelevant discussion.

        • MrPete
          Posted Oct 20, 2009 at 4:28 PM | Permalink

          Re: bent-out-of-shape (#105),
          I’ll be nice (?) enough to say it clearly for you, if for no other reason than other newbie readers might benefit.

          You said: “According to Roman, it seems like the feeling here is that it costs too much money to write a paper – plus being outsiders anything you write will be rejected by the peer-review process (as you will be viewed as haters).”

          You are simply wrong in your reading. The primary thing is that it takes an act of G-d for a non-author correction to get published. See other links and posts on How To Get A Comment Published in 1 2 3 Easy Steps, etc.

          A Google search of Corrigendum revealed an interesting editorial: Maintaining the integrity of the scientific record. This was published in the Journal of Experimental Biology (JEB) upon their first retraction of a published article. Pertinent to this topic: it is the original author who files a corrigendum, and the original author who pays the cost of that correction.

          RomanM was discussing Corrigenda, not original articles.

        • Dave Dardinger
          Posted Oct 20, 2009 at 11:43 AM | Permalink

          Re: bender (#97),

          Do you think, for example, D’Arrigo and Wilson might have wanted to know how highly replicated the chronology was when they wrote the 2006 paper?

          Apparently not, given Wilson’s most recent post here (on the revisiting the Yamal substitution thread).

        • Kenneth Fritsch
          Posted Oct 20, 2009 at 8:55 AM | Permalink

          Re: bent-out-of-shape (#96),

          Bent-Out-of-Shape:

          Get to the point. Do you really think that your tone is not a major giveaway as to your purposes here? If we were in a bar, I would avoid a person with that attitude at all costs by politely ignoring them.

          so from what I can gather from the post, the long and short of it is that: yes there is enough to write a paper. But you won’t write it b/c you would rather have instantaneous gratification of doing your work via a blog. Is that about right???

          Methinks you have bent out of shape what Steve M might have replied in order to make some vague and obtuse point here.

          Is your point that only a peer-reviewed paper has any merit in analyzing the relevant issues presented in a published paper? Should peer-reviewed papers do all our thinking for us on these issues, or can blog discussions play a (major) role in providing evidence for a thinking person to make judgments about the work? Do you think that writing a paper for publication and getting it published are simply things that will happen if a person wills them done, rather than something to which a person must diligently attend for an unknown and unlimited expenditure of time? In the meantime, does that person avoid discussing the pertinent issues?

          Is there something wrong with the more immediate time frame of a blog analysis? If such analyses (as they have for me) have brought a rather timely satisfaction, should that be regarded in the negative context of immediate gratification?

          I think I can recognize a troublemaker when he enters the bar, so please answer the questions or consider moving to the next bar (not that I have any bouncer privileges here or would ever consider preempting the prerogatives of the owner of this blog, nor even suggest what he should do).

        • bender
          Posted Oct 20, 2009 at 9:07 AM | Permalink

          Re: Kenneth Fritsch (#98),
          Couldn’t leave it alone, could you? 😉

        • RomanM
          Posted Oct 20, 2009 at 9:46 AM | Permalink

          Re: Kenneth Fritsch (#98),

          I think bent confuses “instant gratification” with how a bunch of people can discuss topics of mutual interest without actually sitting in the same room. If what is being discussed on the blog has no credibility, it should be easy for anyone with more than half a brain to point out the errors. Since I didn’t see him post any tangible comments in that direction, I would conclude that bender has it right.

      • TAG
        Posted Oct 19, 2009 at 4:58 PM | Permalink

        Re: bent-out-of-shape (#89),

        But you won’t write it b/c you would rather have instantaneous gratification of doing your work

        Long ago when people were trying to teach me mathematics, they emphasized the importance of an existence proof. Before one begins to explore the properties of a mathematical object, one is well advised to prove that it actually exists. The same thing could be said about all the effort that goes into the construction of these putative reconstructions.

        Can a valid one of these be constructed with current theories and data?

  44. Kenneth Fritsch
    Posted Oct 19, 2009 at 2:06 PM | Permalink

    Unfortunately the statistics that I know (Brown and Sundberg etc) indicate to me that the proxies are presently too inconsistent to produce a meaningful reconstruction. I keep an eye on new papers and new proxies to see if there are any new ones that might help.

    I think this comment gets to the heart of the misconception that the Revkins of the world and some climate scientists have about the intent of Steve M’s criticism and auditing of the climate reconstructions. If one judges the reconstructions to be not very certain or useful in general (due to ceiling-to-floor confidence limits), why are the other methods and statistical calculations used in constructing these reconstructions of such importance?

    Speaking only for myself, I see Steve M’s analyses/audits as aimed more at climate science in general and the methods used than an attempt to specifically deconstruct reconstructions and dwell on cherry picking proxies as their main weakness. I also do not judge that these other observers have a feel for the self-motivations of a Steve M.

    • bender
      Posted Oct 19, 2009 at 2:20 PM | Permalink

      Re: Kenneth Fritsch (#85),
      Steve says:

      Unfortunately the statistics that I know (Brown and Sundberg etc) indicate to me that the proxies are presently too inconsistent to produce a meaningful reconstruction.

      Well then THAT is the paper that audiences “want” to read. Build the argument. Show it.

  45. Steve McIntyre
    Posted Oct 19, 2009 at 2:09 PM | Permalink

    Speaking of footnotes – these are taken very seriously in financial statements. They are an integral part of the financial statements. See my comment http://www.climateaudit.org/?p=3951 on a Mann et al 2008 accounting policy written in the vortex of the financial meltdown. (I was feeling pretty blue at the time.)

  46. MikeN
    Posted Oct 19, 2009 at 4:09 PM | Permalink

    I’m confused by the lack of interest by Wilson in issuing a correction. We’ve seen other minor footnotes corrected recently. The Copenhagen Report had at least 2, with Rahmstorf’s 11-year smoothing changed to 15. This was a caption for a graph, with the graph not changing. Raupach et al issued a minor correction as well, in an ‘unpublished note.’

  47. Kenneth Fritsch
    Posted Oct 19, 2009 at 4:57 PM | Permalink

    Come on, guys and girls, bent-out-of-shape has indicated that he (a female would never choose that name) is a drive-by. Please let him drive by. My grandkids would have replied “whatever” and I am beginning to see why they do.

  48. Sarah Johns
    Posted Oct 23, 2009 at 9:08 AM | Permalink

    It is a really surprising thing if they haven’t done it so far.

One Trackback

  1. […] (minus 1) for Yamal (Briffa) and Polar Urals (Esper). Note the graph in Rob Wilson’s recent comment compares the RCS chronology for Yamal with the STD chronology for Polar Urals – and does not […]