Juckes and Reconstruction #9

Juckes has archived 64 reconstruction variations in the file mitrie_new_reconstructions_v01.csv (mirrored in mitrie_new_reconstructions_v01.nc). The 9th platter in the jukebox is mr_mbh_1000_cvm_nht_01.02.001_pc – which is some kind of composite-variance-match version of MBH (while he has 1 CVM version each for Esper, Hegerl, Jones and Moberg, he has no fewer than 7 different versions for MBH, and I’m having difficulty sorting out which is which). However, the plot of the 9th platter is intriguing, as shown below:


Plot of Juckes Reconstruction #9.

Here’s a plot of the smoothed version.

I guess if you stick this in a spaghetti graph and put a beard on the end of the series (i.e. overlay the instrumental record in a heavy line), you can make this look like a HS. But otherwise??

I re-iterate that this is not one of my sensitivity variations of a Team reconstruction, but one of Juckes’ own reconstructions.
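
For reference, here is a minimal sketch in R of how the archived series could be pulled and smoothed. It assumes the CSV has a year column plus one named column per reconstruction, and that a local copy of the file is available; the 21-year moving average is just one choice of smoother.

    # Read the archive and extract the 9th reconstruction by name.
    # Column layout (a "year" column plus named reconstruction columns) is assumed.
    recon <- read.csv("mitrie_new_reconstructions_v01.csv", check.names = FALSE)
    x <- recon[["mr_mbh_1000_cvm_nht_01.02.001_pc"]]
    plot(recon$year, x, type = "l", xlab = "Year", ylab = "Temperature anomaly")
    # Overlay a simple 21-year moving average as the "smoothed version"
    lines(recon$year, stats::filter(x, rep(1/21, 21)), lwd = 2)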

21 Comments

  1. Steve Sadlov
    Posted Nov 2, 2006 at 7:26 PM | Permalink

    Based on the way the squigglies behave in the world of signal integrity work, I would expect, at best (worst) a zero crossing right about now. A hockey stick would not, pardon the pun, compute!

  2. bender
    Posted Nov 2, 2006 at 7:33 PM | Permalink

    Ouch. Spiky.

  3. Michael Hansen
    Posted Nov 2, 2006 at 7:33 PM | Permalink

    Is there a short answer as to why most of these proxies stop around 1975, or something like that? Why is it even necessary to add instrumental records? If you’re able to analyse 1000 years of data in one sweep and put them in a paper, it should be a fairly simple task to add another 25 years.

    Surely I have missed something, but what?

  4. Jeff Weffer
    Posted Nov 2, 2006 at 8:08 PM | Permalink

    Whenever the raw data is plotted, it always looks to me like there is no signal, chaotic ups and downs only.

    Whenever you plot an average into that data, it always looks to me like there is no signal and it should just be a flat line.

    Whenever I see the raw data and a statistical analysis done on the raw data, I always ask how (Mann or some other researcher) came up with their chart and their conclusions.

    How do they get away with it, is the question. Why does the general public not know what is really going on, is the question. Why are we trying to reorganize our economy based on these flat lines, is the question.

    The answer has to be “Well these guys are good at getting their point of view out and the fact-based science is not.”

  5. Ed Snack
    Posted Nov 2, 2006 at 8:31 PM | Permalink

    Can someone explain a “composite-variance-match” for those of us who are sometimes statistically challenged?

    BTW, this looks a lot like the original MBH censored directory plots, does it not?

  6. bender
    Posted Nov 2, 2006 at 8:31 PM | Permalink

    Jeff, this recon may be a flatliner but others are not.

  7. Armand MacMurray
    Posted Nov 2, 2006 at 9:46 PM | Permalink

    Re:#3
    Michael, see the “Bring the proxies up to date!” link under favorite posts along the right.

  8. Nobody in particular
    Posted Nov 3, 2006 at 1:18 PM | Permalink

    Mann’s comment is priceless in that “Bring the proxies up to date!” thread:

    “this is a costly, and labor-intensive activity, often requiring expensive field campaigns that involve traveling with heavy equipment to difficult-to-reach locations”

    Well, someone before 1980 thought it was worth it to go through the intensive labor and expensive field campaigns to difficult-to-reach locations. I suppose we have simply become too lazy in the past 20 years? What about instrumentation before 1980? It seems to me that the proxies could be compared to the instrumental record for, say, 1970 to 1980 to see how well they match.

    My problem with all of these studies is the data is first represented in oranges, then apples, then grapes, then kiwi fruit. For crying out loud, can we at least get one that switches to instrumentation at the point where the instrumentation becomes available rather than one that switches only when proxies are no longer available? If the newest data is always going to be instrumentation, then I want to see that kind of data going back as far as possible so that any pattern seen in the recent past has some context and any trend can be meaningful. As it stands now, we are presented with hundreds of years of one kind of data and two decades of a different kind.

    How about one that switches to instrumentation readings in the 1950s or 1960s? Or possibly even earlier.

  9. John A
    Posted Nov 3, 2006 at 1:24 PM | Permalink

    I’m surprised that they haven’t come up with the ultimate reason for not updating the proxies in all these out-of-the-way places: it’s to save on the carbon emissions from flying to all of these lovely destinations, staying in hotels, taking the 4×4 up to these majestic valleys, etc.

    It all adds up.

  10. Brooks Hurd
    Posted Nov 4, 2006 at 11:29 AM | Permalink

    Re: 8,

    This would unfortunately require that Phil Jones release his data.

    I am afraid that there will be permanent settlements on Mars before we have access to Jones’ instrument data.

  11. Murray Duffin
    Posted Nov 4, 2006 at 1:02 PM | Permalink

    Has no one noticed that disastrous “blade” from 1600 to nearly 1700? Does anyone know where the CO2 came from? Or — was there something else happening?

  12. bender
    Posted Nov 4, 2006 at 5:18 PM | Permalink

    Re #11
    That steep slope in this recon is impressive, but it doesn’t show up in all the recons.

  13. Henry
    Posted Nov 4, 2006 at 5:26 PM | Permalink

    Re #5, Osborn et al 2004 has:

    To continue with the approach selected here requires a choice to be made between optimising the reconstruction at either local or regional-average scales (or some intermediate scale). The results reported in the remainder of this paper relax the requirement of optimising the fit for each grid box (which the local linear regression achieved) by instead scaling each grid-box MXD series so that its variance matches the variance of the observed temperature series for that grid box. A “variance-matching” approach has been used by other studies, typically at a hemispheric scale (e.g., Jones et al., 1998; Crowley and Lowery, 2000). Though the variance matching does not give the optimal fit at the grid-box scale (i.e., the root-mean-squared error is not minimised), it yields a spatially-resolved reconstruction that, when regionally-averaged, matches the regionally-averaged temperature much more closely.

    Essentially, regression usually has the effect that the model output for the dependent variable will have a reduced variance compared with the original data. Variance matching scales the model output to offset this. Whether it is valid or not is unclear, but it certainly worsens the explanatory power of the model.
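
    To make the variance-matching idea concrete, here is a minimal sketch in R (series names are hypothetical): the proxy composite is rescaled so that its mean and standard deviation over the calibration period match those of the observed temperature series, rather than being fitted by regression.

        # Composite-plus-variance-matching (CVM), sketched:
        #   proxy - proxy composite (numeric vector)
        #   obs   - observed temperature series on the same time axis
        #   calib - indices of the calibration period
        cvm <- function(proxy, obs, calib) {
          p <- proxy[calib]
          o <- obs[calib]
          (proxy - mean(p)) * sd(o) / sd(p) + mean(o)
        }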

  14. Henry
    Posted Nov 4, 2006 at 9:22 PM | Permalink

    re #13 – I mistook “Submit Comment” for “Preview”

  15. Steve McIntyre
    Posted Nov 8, 2006 at 9:20 PM | Permalink

    Juckes commented about this plot as follows:

    Secondly: I notice that on page 888 you are having trouble interpreting mr_mbh_1000_cvm_nht_01.02.001_pc: This figure could be interpreted as a coding error or as an illustration of the pitfalls of combining the use of proxy PCs and the composite approach. The problem is the arbitrary sign of the PCs. If this is not adjusted the composite is likely to be meaningless because of the arbitrariness, if it is adjusted estimates of significance can be compromised. Some such reconstructions (using adjusted PCs) are included in the discussion for comparison purposes, but for the main conclusions we avoid proxy PCs so as to avoid this problem. The curve you show on page 888 has an unadjusted PC, so it is basically meaningless.

    I think that I’ve diagnosed what’s going on. One technique that I use for detective work on reconstructions is simply regressing the reconstruction against candidate proxies and looking at the coefficients. If you do this for Juckes #7, you get an exact match. Here’s the barplot of coefficients for #9. Now there has to be some still-undisclosed re-weighting: the coefficients are all sort of similar, but for #7 they were exact. So it’s hard to say what Juckes actually did with his weighting here. Reconstruction #9 is constructed somehow from these coefficients, but there’s something still missing from Juckes’ methodological description. Maybe he’ll tell us. But that’s not the main issue. You’ll notice that all the regression coefficients are positive.

    Barplot of regression coefficients of Juckes reconstruction 9 ( mr_mbh_1000_cvm_nht_01.02.001_pc ) against Juckes archive of MBH99 proxies 20-32. The three left-most series are archived PC1s, with the negligibly weighted leftmost series the "unfixed" PC1. (The unfixed PC1 is used in Juckes #7.)
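
    As an illustration only (object names are hypothetical), the regression check described above amounts to something like this in R: fit the archived reconstruction on the candidate proxy matrix and inspect the implied weights.

        # recon   - archived reconstruction (numeric vector)
        # proxies - matrix or data frame of candidate proxies on the same years
        fit <- lm(recon ~ ., data = as.data.frame(proxies))
        summary(fit)$r.squared          # ~1 indicates an (almost) exact linear match
        barplot(coef(fit)[-1], las = 2) # implied weight on each proxy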

    Now here’s a plot of the PC1-fixed. The PC1-fixed is the biased Mannian PC1 slightly shaved. It has the HS-ness of the correlation PC1. There are multiple problems with Mann’s purported “fixing” of his PC1. (The only real solution to biased bristlecones is the one recommended by the NAS panel – avoid them in temperature reconstructions. Of course Juckes didn’t do this.) In the AD1000 network, bristlecones make up a disproportionate share of the proxies, so the vagaries of PC methodology don’t make a whole lot of difference (as we observed in MM05 EE). You still get a bristlecone shape under any method. The Mannian PC, of course, makes the HS more extreme; the "fixing" just takes it back to being close to an un-"fixed" correlation PC.

    But here’s what’s amusing. In Juckes’ reconstruction #9, he forgot to flip the PC1. This is presumably the “coding error” mentioned above. Juckes undoubtedly realized what happened as soon as I pointed it out. Of course, he put the blame on me, saying that I was "having trouble interpreting" the reconstruction – as though this were my fault. I think that I’m interpreting this just fine.

    Plot of Juckes PC1-"fixed" from archive.

    Now if the issue is a "coding error", then it’s not that I’m "having trouble interpreting" the graphic.

  16. bender
    Posted Nov 8, 2006 at 9:52 PM | Permalink

    Steve M,
    If you are right about what these guys are doing – cherry-picking data-snooped series to deliver a pre-determined result – then there are two alternatives in response. One approach, which I had advocated from the start, was using bootstrapping to build an honest confidence interval. This would be innovative. A different approach, much simpler, would be to cherry-pick your own data-snooped series to obtain your own pre-determined, opposite result. This would not be innovative and constructive, but it would certainly make the counterpoint. You would have to know the proxies pretty well to do this (i.e. not just the proxy data stream, but all the sub-data streams that go into each proxy), and you apparently know your proxies. Possibly as well or better than the best in the world.

    My question is: have you ever tried publishing your own cherry-picked, data-snooped, counterpoint recon? It would have an impact.
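
    For what it’s worth, here is a minimal sketch of the bootstrap idea in R (not anyone’s published method; the proxy matrix and simple averaging are assumed): resample the proxy set with replacement, recompute the composite each time, and take the pointwise quantiles as a confidence envelope.

        boot_envelope <- function(proxies, nboot = 1000, probs = c(0.025, 0.975)) {
          # proxies: matrix with rows = years, columns = proxy series
          sims <- replicate(nboot, {
            pick <- sample(ncol(proxies), replace = TRUE)
            rowMeans(proxies[, pick, drop = FALSE])
          })
          # pointwise quantiles across bootstrap composites, one row per year
          t(apply(sims, 1, quantile, probs = probs, na.rm = TRUE))
        }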

  17. James Lane
    Posted Nov 8, 2006 at 10:29 PM | Permalink

    Bender, Steve’s already done this (Making Apple Pie instead of Cherry Pie):

    http://www.climateaudit.org/?p=581

  18. Steve McIntyre
    Posted Nov 8, 2006 at 10:53 PM | Permalink

    I’m going to present a series of such things at the AGU session in December, showing the impact of slight variations.

    BTW I’ve got some good new stuff on Indigirka which I’ll post up tomorrow. Juckes said that he didn’t use it because it was “unpublished data”. Actually I just discovered that it has an alter ego which I’ve even referred to without realizing that they were the same.

  19. bender
    Posted Nov 9, 2006 at 1:38 AM | Permalink

    Re #17
    Thanks, I hadn’t seen that. (Still, the question was whether this was ever submitted for publication.) Presenting fruity flavors at AGU is a good thing. They publish their abstracts?

  20. Steve McIntyre
    Posted Nov 29, 2006 at 8:35 AM | Permalink

    #15. Juckes has contested this comment at page 926 in the following exchange.

    In #134, he said:

    Re 133: On this site I think “undisclosed” usually means McIntyre has misunderstood something. He certainly uses it frequently where there is no reason to think anyone is hiding anything.

    I replied:

    Martin, you’ve made another accusation. Let’s work through an example – which is probably the one that’s been the most prominent. I originally said that the MBH reconstruction had adverse verification r2 results for the AD1400 step and that Mann failed to disclose the adverse results. You’ve made the accusation that this is just my “misunderstanding”. What have I misunderstood?

    To which Juckes replied:

    Re 140: see for example comment 15 on page 888: “Now there has to be some still undisclosed re-weighting.”
    Also note that “You’ve made the accusation that this is just my “misunderstanding”” is clearly untrue.

    So let’s look at #15, which discusses Juckes reconstruction mr_mbh_1000_cvm_nht_01.02.001_pc. The suffix “pc” is explained in Juckes’ SI as using the unadjusted first proxy PC, as follows:

    The optional suffix is used to describe variants of the MBH proxy collection:
    pc: using the unadjusted first proxy PC.

    The reverse engineering clearly shows that Juckes used the adjusted first proxy PC in this particular proxy reconstruction.

    Juckes’ main article stated:

    for the proxy principal components in the MBH collection the sign is arbitrary: these series have, where necessary, had the sign reversed so that they have a positive correlation with the northern hemisphere temperature record

    Now in this particular example, it appears almost certain that the sign of the first proxy PC was not reversed so that it has a positive orientation to NH temperature. This example was re-visited with other examples in the post Replicating Juckes’ CVM.
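
    For readers following along, the sign convention quoted above comes down to something like the following sketch in R (names are hypothetical): a principal component’s sign is arbitrary, so it is flipped, where necessary, to correlate positively with the NH temperature series over the overlap period before it is composited.

        # pc      - proxy principal component (numeric vector)
        # nh_temp - NH instrumental series on the same time axis
        # overlap - indices of the common (calibration) period
        orient_pc <- function(pc, nh_temp, overlap) {
          if (cor(pc[overlap], nh_temp[overlap]) < 0) -pc else pc
        }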

    It appears to me that in the “pc” reconstruction 1) Juckes has used the adjusted proxy PC1 and 2) has not oriented the adjusted proxy PC1 to have a positive orientation with NH temperature. Juckes says that I’ve “misunderstood” what he did. OK Martin, what did I misunderstand? Will you categorically assert, for the record, that:

    1) this “pc” reconstruction used the “unadjusted” proxy PC1;
    2) this “pc” reconstruction used the “unadjusted” proxy PC1 with its orientation reversed so that it had a positive correlation with NH temperature?

    These are pretty simple questions.

  21. Steve Sadlov
    Posted Nov 29, 2006 at 11:31 AM | Permalink

    Bump.
