More Changes at the Mann 2008 SI

Mannian confidence intervals have always been a mystery. The MBH99 confidence interval methodology, in particular, remains intractable, having defeated all reverse engineering (and engineering) efforts by UC, Jean S and myself to date (though we haven’t picked up this file for a while).

I was very interested to see how Mann 2008 calculated confidence intervals. You can’t really tell from the running text or the SI. And while, to his credit, Mann archived a lot of source code – a fairly brave undertaking given that the code is a lot like GISTEMP on a scale of 1 to needles-in-your-eyeballs – he didn’t archive the source code for the confidence interval calculations. So we had another mystery.

I emailed Gerry North, said to have reviewed Mann 2008, seeking an explanation, but he told me to “move on”. Not very helpful. (But again in fairness to Gerry North, I’ll bet that our PNAS comment was sent to him for screening and he’s the sort of person who would be fair.)

In our comment (submitted on Dec 8, 2008), we commented on the calculation of confidence intervals, noting that, contrary to assurances in Mann et al 2008, the source code did not contain their calculation of their Figure 3 confidence intervals.

Jean S just noticed that, on Dec 15, 2008, Mann added what appears to be the relevant source code as http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/code/codeveri/calc_error.m. Although their cover webpage reports some other changes, this change was not reported there.

UC and Jean S have taken a quick look at this code and we should have something to report in the next week or two.

One thing that seems “very likely” to me: I’ll bet that, in their Reply to our Comment, Mann will say that we were “wrong” in our statement that the archived source code did not contain this calculation because if you go to their website, you can see that the calculation is there [disregarding the fact that it was added after the fact].

Note: Compare to http://www.climateaudit.org/?p=4449 – see UC comment below.

Update: Their reply, needless to say, did not acknowledge that they placed the code online AFTER our comment:

The method of uncertainty estimation (use of calibration/validation residuals) is conventional (3, 4) and was described explicitly in ref. 2 (also in ref. 5), and Matlab code is available at http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/code/codeveri/calc_error.m.

64 Comments

  1. Ross McKitrick
    Posted Jan 12, 2009 at 4:31 PM | Permalink

    No no, you’re being far too cynical Steve. I think the text will say something like, “We inadvertently failed to archive our code for computing confidence intervals and we thank M&M for drawing our attention to this matter, which we have rectified.” After all, isn’t that what most scientists would say?

  2. Jean S
    Posted Jan 12, 2009 at 4:42 PM | Permalink

    Well, I think (PLEASE correct me if I’m mistaken) the REs and CEs are calculated wrong, here’s the relevant code (verification REs):

    e_reconmean=e_recon-nanmean(e_recon(cals1:cale1));
    l_reconmean=l_recon-nanmean(l_recon(cals2:cale2));

    e_actmean=inst-mean(inst(cals1:cale1));
    l_actmean=inst-mean(inst(cals2:cale2));

    diffmean=e_actmean(vals1:vale1)-e_reconmean(vals1:vale1);
    meansqdiff=nansum((diffmean.^2));
    meansq=nansum((e_actmean(vals1:vale1)).^2);
    RE_1(n)=1.-(meansqdiff/meansq);

    diffmean=l_actmean(vals2:vale2)-l_reconmean(vals2:vale2);
    meansqdiff=nansum((diffmean.^2));
    meansq=nansum((l_actmean(vals2:vale2)).^2);
    RE_2(n)=1.-(meansqdiff/meansq);

    So the (calibration) means are subtracted from the instrumental series and the reconstruction (_actmean, _reconmean) before calculating the difference (diffmean). If I’m not mistaken, the formula for verification RE (given in the same form as in the NAS report) is 1 - MSE(y(verif), yhat(verif))/MSE(y(verif), y_mean(calib)), whereas Mann is calculating 1 - MSE(y(verif) - y_mean(calib), yhat(verif) - yhat_mean(calib))/MSE(y(verif), y_mean(calib)). Unless both the instrumental series and the reconstruction have the same mean over the calibration interval, these are not the same.
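    A quick Python paraphrase of the two formulas (a sketch, assuming I’ve read the Matlab right; the function names are mine, not from calc_error.m):

```python
import numpy as np

def re_standard(y_ver, yhat_ver, y_cal_mean):
    # Verification RE in the NAS-report form:
    # 1 - MSE(y, yhat) / MSE(y, calibration-period mean of y)
    return 1 - np.mean((y_ver - yhat_ver) ** 2) / np.mean((y_ver - y_cal_mean) ** 2)

def re_as_coded(y_ver, yhat_ver, y_cal_mean, yhat_cal_mean):
    # RE as apparently computed in calc_error.m: both series are first
    # re-centered on their own calibration means, then differenced
    diff = (y_ver - y_cal_mean) - (yhat_ver - yhat_cal_mean)
    return 1 - np.mean(diff ** 2) / np.mean((y_ver - y_cal_mean) ** 2)
```

    The two agree only when the instrumental and reconstruction calibration means coincide.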

  3. John A
    Posted Jan 12, 2009 at 6:31 PM | Permalink

    One of the continuing battles I’ve had since this blog began is holding down cynicism and assuming good faith on the part of the Hockey Team. And I’m not sure I’m winning any battle so far.

    I confess myself baffled that Gerry North should advise Steve to “move on” from an analysis of Mann 2008 – to what, exactly?

  4. Demesure
    Posted Jan 12, 2009 at 6:31 PM | Permalink

    I (randomly) bet that Jean S is right and the file date of calc_error.m will change again in Mann’s “archive” (if “archive” means something in Team’s world).

  5. henry
    Posted Jan 12, 2009 at 6:45 PM | Permalink

    Re: John A

    I confess myself baffled that Gerry North should advise Steve to “move on” from an analysis of Mann 2008 – to what, exactly?

    Why to Mann 09, of course.

  6. Craig Loehle
    Posted Jan 12, 2009 at 7:06 PM | Permalink

    It is time travel. It was there all along as of right now. Not only Dr. Who but Dr. WhoMe? who can time travel.

  7. John Norris
    Posted Jan 12, 2009 at 8:21 PM | Permalink

    The folder flags the last modification of

    calc_error.m

    as

    15-Dec-2008 15:56

    They would now have to roll back that date to cover their tracks if they were going to claim it was there from the get go. Although that would be really fun to see, I suspect they will just concede the shortfall in their archiving, and try and ‘move on’.

  8. Paul Penrose
    Posted Jan 12, 2009 at 8:33 PM | Permalink

    Maybe Mann invented a new, more robust way to calculate REs and CEs.

  9. jae
    Posted Jan 12, 2009 at 9:19 PM | Permalink

    I think the Mann is in a corner.

  10. Barclay E MacDonald
    Posted Jan 12, 2009 at 9:23 PM | Permalink

    It is all clearly explained by teleconnection through time and space.

    I vote for CA for best Science Blog. But Congrats Anthony Watts

  11. kazinski
    Posted Jan 12, 2009 at 9:58 PM | Permalink

    I am quite interested in seeing your comment on Mann 2008. Of course, I guessed way wrong on your cited references, but you evidently focused your comments on your area of expertise, statistics, rather than Mann’s bailiwick, paleoclimatology. Nice call. I’d think that confidence intervals should follow standard methodologies, and it would be very hard for Mann to justify a roll-your-own method; in fact, CIs should be about the easiest thing to reverse engineer, given the data.

    But harking back to the gadfly post, I do have to fault you on your snark at the end of the post. It is of course way too late for Mann to take any of your criticisms constructively, you have become an obstacle to go around for him rather than a constructive critic that could help him improve his work. It is obvious to me (at least) that the whole point of Mann 2008 is to patch the hole you knocked into Mann 1999. I just think it would be better if you used a stiletto instead of an axe. It would be preferable if you assumed good faith rather than obstructionism; just noting that the code appeared on 12/15 would have been better than also predicting that Mann would claim its presence all along. Then again, I’m not the one who has been writing all those polite emails and getting stiffed. I should also note that this is the kind of criticism I wouldn’t worry about posting here, but which would probably get me banned at RC or “Open Mind”.

    • jae
      Posted Jan 12, 2009 at 10:34 PM | Permalink

      Re: kazinski (#13),

      It is of course way too late for Mann to take any of your criticisms constructively, you have become an obstacle to go around for him rather than a constructive critic that could help him improve his work. It is obvious to me (at least) that the whole point of Mann 2008 is to patch the hole you knocked into Mann 1999. I just think it would be better if you used a stiletto instead of an axe.

      With all due respect, you need to read more of the history here before making that observation. Steve and his friends have gone WAY out of their way to be constructive and accommodating critics, including offers to co-author papers that discuss the problems. If you read a little more of the history here, you will probably actually get very angry (as I do) at the arrogance of the “Team” in response to Steve’s simple inquiries. Of course, you are probably correct that it is now “too late” to reconcile the differences…

      • kazinski
        Posted Jan 12, 2009 at 11:29 PM | Permalink

        Re: jae (#16),
        I’m fully aware of the history, and I do not think that Steve is anywhere close to being equally responsible for the antagonism. But I still think that observations and documentation of the obstructionism, without predictions of further bad faith would be more constructive.

        Re: NukemHill (#17),
        The only way to tell what was uploaded when is if the contents of the directory were cached at regular intervals (or after every update). The timestamps on the files for reasons both innocent and malicious cannot be relied on, unless of course they are using source control software with an audit trail to manage the directory.

  12. Steve McIntyre
    Posted Jan 12, 2009 at 10:25 PM | Permalink

    #13. It’s a bit snarky, but there’s quite a bit of history in that remark, which I’m not going to bother recounting. It’s not just me. Jason Smerdon published a criticism of Mann’s first RegEM; in reply, Mann said that their criticism was incorrect because they had changed their method and were now using RegEM II.

  13. Gerald Machnee
    Posted Jan 12, 2009 at 10:29 PM | Permalink

    There will be a reincarnation of the debate(r) from a previous thread about when what was posted. Expect Mr. (?) to say it was there all the time.

    Mann will say that we were “wrong” in our statement that the archived source code did not contain this calculation because if you go to their website, you can see that the calculation is there [disregarding the fact that it was added after the fact].

  14. Posted Jan 12, 2009 at 10:41 PM | Permalink

    Unless I’m missing something, the “last updated” date stamp merely means this was the last time the file “calc_error.m” was modified or touched. It does not mean this was the time it was initially uploaded. You want to identify the “date created” (or whatever it is called on their file system) date stamp. This should tell you when the file was first uploaded.

    • Dave Dardinger
      Posted Jan 12, 2009 at 11:14 PM | Permalink

      Re: NukemHill (#17),

      Well, can anyone find it in the way-back machine before the recent date stamp?

      • John M
        Posted Jan 13, 2009 at 5:45 PM | Permalink

        Re: Dave Dardinger (#19),

        This is what the Wayback Machine gives.

        Link

        It seems odd for a University site, but I suppose it could be more common than one (me fer instance) might think.

        Maybe there’s something about hosting a web page that makes it advantageous to block robots.

  15. Soronel Haetir
    Posted Jan 12, 2009 at 11:11 PM | Permalink

    In response to the argument that Mann will just claim that it was there all along, I would point at all your posts over the last months. What /possible/ point would there be in your harping on this point if Mann had already made the code available? Picking at the point of certain code not being available when it was would just be silly given all the other problems with the body of work.

  16. Alan S. Blue
    Posted Jan 12, 2009 at 11:21 PM | Permalink

    Soronel,

    Is it worthwhile to pick on missing code while it is missing?

  17. Steve McIntyre
    Posted Jan 12, 2009 at 11:26 PM | Permalink

    The unavailability of the confidence interval code is not just being “picky”. UC, Jean S and myself have spent lots of time trying to figure out the MBH99 CI calculations and have been unsuccessful. I’ve tried many avenues to obtain information on how they were done and never got anywhere. Mann didn’t even provide this code to the House Energy and Commerce Committee when they asked him for his code.

    I’m interested in solving crossword puzzles and forcing this calculation out into the open is something that pleases me greatly (even if it is readily remedied and therefore I “wasted” one sentence of the 250 words).

  18. Steve McIntyre
    Posted Jan 12, 2009 at 11:50 PM | Permalink

    Mann’s nomenclature in the new code may be quite perspicacious: two key variables are denoted “goop” and “ooga”.

    ooga=load(strcat('cps_results/',upper('NH'),'_ea100',expe,'_newgrid_',iseries,'_gbeachyear.mat'));

    I presume that the name “chaka” is reserved for the output. Ooga chaka, ooga chaka, ooga chaka.

    http://ca.youtube.com/watch?v=Gi2CfuqcUGE.

  19. Posted Jan 13, 2009 at 12:08 AM | Permalink

    For AD1000-AD1800 (not enough memory in my Matlab to go further; checked NH full CRU only) this code reproduces cps-validation.xls exactly. Except “adjusted to overfitting”* values for AD1000-AD1300.

    * what are these, how to overfit in the validation period ??

    • Jean S
      Posted Jan 13, 2009 at 2:49 AM | Permalink

      Re: UC (#24),

      * what are these, how to overfit in the validation period ??

      Unless both CE and (validation) RE improve in the current step, the values (and also the reconstruction) from the previous step are used! I guess mannthink goes something like this: if the validation “skills” do not improve, it’s due to an “overfit” in the calibration period(s)!

      Except “adjusted to overfitting”* values for AD1000-AD1300.

      Did you really expect to get ALL values correctly in a single shot? 😉

  20. bernie
    Posted Jan 13, 2009 at 6:27 AM | Permalink

    Steve:
    That is really witty. To be precise though, the lyrics to “Hooked on a feeling” are:

    Ooga chaka ooga ooga ooga chaka (Who would have guessed!?)

    See http://www.top40db.net/Lyrics/?SongID=98217

  21. Steve McIntyre
    Posted Jan 13, 2009 at 9:10 AM | Permalink

    #24, 25. I think that this was mentioned in our earlier pass through this material, though we didn’t have the code at the time. In the case that I used for benchmarking some calcs, the AD1000 network continued in use as none of the “proxies” starting after that period “improved” the validation scores.

    • Posted Jan 13, 2009 at 9:42 AM | Permalink

      Re: Steve McIntyre (#27),

      In the case that I used for benchmarking some calcs, the AD1000 network continued in use as none of the “proxies” starting after that period “improved” the validation scores.

      Ah, I see, adjRE is not a vector

      if (avgRE(n) >= adjRE & avgCE(n) >= adjCE)
      adjRE=avgRE(n);
      adjCE=avgCE(n);
      adjR2=avgR2(n);
      adjr2_1=r2_1(n);
      adjr2_2=r2_2(n);
      keepcent(n)=1;
      end
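      In Python terms, the retention rule in the Matlab above looks like this (a paraphrase, not Mann’s code; the initialization of adjRE/adjCE isn’t shown in the snippet, so the -inf starting values are my assumption):

```python
def adjust_for_overfitting(avg_re, avg_ce):
    # Paraphrase of the calc_error.m logic quoted above: the "adjusted"
    # scores only advance at step n when BOTH RE and CE are at least as
    # good as the best so far; otherwise the previous step's values
    # (and reconstruction) are carried forward.
    adj_re = adj_ce = float("-inf")  # assumed initialization
    kept = []
    for n, (re_n, ce_n) in enumerate(zip(avg_re, avg_ce)):
        if re_n >= adj_re and ce_n >= adj_ce:
            adj_re, adj_ce = re_n, ce_n
            kept.append(n)
    return adj_re, adj_ce, kept
```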

  22. Jeff Norman
    Posted Jan 13, 2009 at 12:35 PM | Permalink

    I can’t stop this feeling
    Deep inside of me
    Mann you just don’t realize
    What we’ve come to see
    When we line up
    All the data you set
    Your neat results
    We can rarely get

    I‘m hooked on a feeling
    I’m high on believing
    That your results are wrong

    Trends as smooth as graphite
    Made to your design
    Mann you keep be searchin for a different line

    etc. etc.

    Steve: And the Team says: Ooga chaka ooga ooga ooga chaka

  23. Steve McIntyre
    Posted Jan 13, 2009 at 12:55 PM | Permalink

    Trends as smooth as graphite

    Nice.

  24. Posted Jan 13, 2009 at 1:50 PM | Permalink

    In addition to fancy stats like CE and RE, the figures are also interesting,

    ( e_recon(vals1:vale1) vs. inst(vals1:vale1) )

  25. Jordan
    Posted Jan 13, 2009 at 2:02 PM | Permalink

    but he told me to “move on”.

    Perhaps the gentleman is trying to say that he wishes you would move on. That gives the aspirational HS a better chance of slipping its backside quietly into the throne of the mystical kingdom called “Settled Science”.

    If there are substantial points which remain unanswered or unjustified, there is an onus on scientists to ensure these are investigated. Consider the things we now take for granted because dedicated people would not give in when things got tricky.

    It may be any number of things, but when you hear “move on”, you are not listening to the voice of science.

    • kim
      Posted Jan 15, 2009 at 6:57 AM | Permalink

      Re: Jordan (#30),

      Speaking of images, Jordan’s picture of ‘settled’ science unsettles my Risus Monkey.
      ==================================================

  26. Mike B
    Posted Jan 13, 2009 at 2:54 PM | Permalink

    I emailed Gerry North, said to have reviewed Mann 2008, seeking an explanation, but he told me to “move on”. Not very helpful. (But again in fairness to Gerry North, I’ll bet that our PNAS comment was sent to him for screening and he’s the sort of person who would be fair.)

    Was North’s implication to “move on” as in “give up, you’ll never figure it out” or “move on” as in “further investigation will only cause the wagons to be circled tighter”?

  27. Hoi Polloi
    Posted Jan 13, 2009 at 4:23 PM | Permalink

    “Nothing to see here people, move on…”

  28. Steve McIntyre
    Posted Jan 13, 2009 at 4:43 PM | Permalink

    #31. UC, I don’t get the figure shown from plotting cols 1 and 2 from http://signals.auditblogs.com/files/2009/01/e_recon.txt (or cols 3 and 4). What are you plotting here?

    • Posted Jan 14, 2009 at 12:21 AM | Permalink

      Re: Steve McIntyre (#34),

      This is CRU, sorry,

      calc_error(‘NH’,”,’CRU’);

      and two first rows from here

      (format [e_recon(vals1:vale1) inst(vals1:vale1) l_recon(vals2:vale2) inst(vals2:vale2)])

      r2 of 0.55 seems to be very high for this, as visually it is difficult to see any correlation, but note that r is negative! Detrended r2 is 0.1. The r2 value matches exactly with the archived cps-validation.xls, NH full CRU page, AD1000.

      • Posted Jan 14, 2009 at 1:06 AM | Permalink

        Re: UC (#37),

        note that r is negative!

        And it gets better. Mann’s formula for uncertainties is

        uncert_e(n)=sd_1.*sqrt(1-kadjr2_1(n)); %%% 1-sigma uncertainties here!!!
        uncert_l(n)=sd_2.*sqrt(1-kadjr2_2(n)); %%% 1-sigma uncertainties here!!!
        uncert(n)= ( uncert_e(n) + uncert_l(n) ); %%%% 2-sigma uncertainties here!!!!

        where sd_1 and sd_2 are standard deviations of calibration instrumental. kadjr2_1 and kadjr2_2 are validation r2s . Once again we have a situation where uncertainties are upper-bounded. And negative correlation in the validation period is rewarded! LOL.
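        A Python paraphrase of that uncertainty formula (a sketch; the function name is mine) makes the sign-blindness concrete:

```python
import numpy as np

def mann_uncert_1sigma(sd_cal, r2_valid):
    # 1-sigma uncertainty as in the quoted calc_error.m lines:
    # calibration-period standard deviation of the instrumental series,
    # shrunk by the validation-period r2
    return sd_cal * np.sqrt(1.0 - r2_valid)

# Since r2 is the *squared* correlation, r = -0.74 (r2 = 0.55) shrinks
# the uncertainty exactly as much as r = +0.74 would.
```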

        • Jean S
          Posted Jan 14, 2009 at 1:39 AM | Permalink

          Re: UC (#38),
          Can someone explain this formula to me:

          uncert(n)= ( uncert_e(n) + uncert_l(n) ); %%%% 2-sigma uncertainties here!!!!

          That is, direct addition of two standard deviations.

  29. Dan White
    Posted Jan 13, 2009 at 11:26 PM | Permalink

    Re: Ooga chaka –

    This blog is in danger of being nominated Corniest Science Blog 2009!

    At least there were several shots of a glacier in the video, presumably retreating at an unprecedented rate.


    Steve:
    Hey, we work with the material that we’re given.

  30. Mark T
    Posted Jan 14, 2009 at 1:57 AM | Permalink

    Yeah, shouldn’t they sum geometrically, not linearly?

    Mark

  31. Posted Jan 14, 2009 at 2:11 AM | Permalink

    #39, #40

    He has two estimates of the same sigma, and then he takes an average, and multiplies by two to get “2-sigma”. Almost the same thing as combining those two series, and then taking sample standard deviation *2. But this way he gets rid of inconvenient biases more efficiently 😉

    • Jean S
      Posted Jan 14, 2009 at 11:09 AM | Permalink

      Re: UC (#41),

      two estimates of the same sigma

      Ah, now I get it. He is essentially combining the standard errors of the estimates of two different simple linear regressions (“early” and “late”) to represent the standard error of estimate of a third simple linear regression (“full”). Innovative new idea at least to me 😉 It would be interesting to hear what more experienced statistics oriented readers (RomanM?) think about it.

      • RomanM
        Posted Jan 14, 2009 at 1:22 PM | Permalink

        Re: Jean S (#44),

        He is essentially combining the standard errors of the estimates of two different simple linear regressions (“early” and “late”) to represent the standard error of estimate of a third simple linear regression (“full”). Innovative new idea at least to me

        I took a glance at calc_error.m to see what Prof. Mann did. Yes, I actually have seen someone estimate a “pooled” standard deviation by averaging the individual sd’s, but it was in some rather ineptly done homeworks in first year stat courses. 😉 If the two estimates are independent, the appropriate thing to do is to first average the variances (geometric, in Mark’s terminology) and then take the square root of the result. That may not be so simple in this case since from what I could gather he appears to calculate the original sd’s from two overlapping segments of (heavily smoothed!) instrumental data. Innovative certainly describes this result!

        • Mike B
          Posted Jan 14, 2009 at 2:45 PM | Permalink

          Re: RomanM (#49), (and others)

          I think we have a new phenomenon here that I will call “Mannian Pooling”. I don’t have a proof handy, but I think Mannian Pooling will always give you a smaller pooled standard error than correct pooling.

          And as Roman implies, don’t attempt Mannian Pooling in a stats 101 class, you’ll flunk.

        • RomanM
          Posted Jan 14, 2009 at 3:09 PM | Permalink

          Re: Mike B (#50),

          I don’t have a proof handy, but I think Mannian Pooling will always give you a smaller pooled standard error than correct pooling.

          Good guess, Mike! It follows from properties of convex functions although anyone with a little high school algebra (and a little math talent) can show it pretty easily.
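          A quick numerical check of the claim (function names hypothetical; "correct" pooling here assumes two independent estimates of the same sigma):

```python
import numpy as np

def pooled_sd_mannian(s1, s2):
    # "Mannian pooling": average the two standard deviations directly
    return 0.5 * (s1 + s2)

def pooled_sd_correct(s1, s2):
    # Conventional pooling: average the variances, then take the
    # square root of the result
    return np.sqrt(0.5 * (s1 ** 2 + s2 ** 2))

# By the AM-QM inequality (concavity of the square root), the Mannian
# version is never larger, with equality only when s1 == s2.
```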

  32. Posted Jan 14, 2009 at 5:00 AM | Permalink

    I’ve discussed the assumptions of ICE earlier, but it seems that calc_error.m goes even further. I will now try to find the implicit assumptions behind this new formula. I will not promise anything, though 😉

    Let’s start with random variables T (temperature) and P (proxy).

    Assumption (A) : variances and means of T and P , and cov(T,P) , exist and are known

    Assumption (B) : Proxy causes Temperature

    Assumption (C):

    C.1 Sample variance of calibration temperature equals \sigma _T ^2

    C.2 As a result of variance matching, validation sample r^2 equals \rho^2

    Now, under (B) it is reasonable to try to find \alpha and \mu such that

    E[T-(\alpha P + \mu)]^2 = minimum  ,

    so that we can make linear predictions using only observed P. You all know the solution,

    \alpha = \rho \frac{\sigma _T}{\sigma _P}

    where \rho is coefficient of correlation, known as a result of (A). It is easy to show that the residual variance is

    \sigma _\epsilon ^2 = E[T-(\alpha P + \mu)]^2 = \sigma _T ^2 (1-\rho^2) .

    Now, using assumptions (A),(B), and (C) we can use Mann’s formula,

    uncert_e(n)=sd_1.*sqrt(1-kadjr2_1(n)); %%% 1-sigma uncertainties here!!!

    If assumptions (A) and (C.1) hold, temperature process must be stationary. No AGW, that is!

    With assumption (B) we can write the model as

    T=\alpha P + \mu+\epsilon ,

    and the worst case is \alpha=0 . This, along with (A) justifies upper-bounded errors of Mann’s equation.
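    The residual-variance identity above is easy to verify numerically (a sketch; the simulated series and coefficients are of course made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
P = rng.normal(size=n)                        # "proxy"
T = 0.8 * P + rng.normal(scale=0.5, size=n)   # "temperature"

rho = np.corrcoef(T, P)[0, 1]
alpha = rho * T.std() / P.std()   # optimal linear coefficient
resid = T - alpha * P             # mu is ~0 here: both series are mean-zero

# The sample residual variance matches sigma_T^2 * (1 - rho^2); this is an
# algebraic identity when alpha is computed from the same sample moments.
```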

    • Craig Loehle
      Posted Jan 14, 2009 at 8:11 AM | Permalink

      Re: UC (#42), Assumption B (proxy causes temperature) is rather unphysical and affects the confidence intervals compared to proper calibration of proxy from temperature and then inverting it.

    • Jean S
      Posted Jan 14, 2009 at 11:12 AM | Permalink

      Re: UC (#42),

      Assumption (B) : Proxy causes Temperature

      I think this actually should be
      Assumption (B’) : Reconstruction causes temperature
      I think the dependent variable here is the temperature and the independent variable is the reconstruction.

      under (B) it is reasonable

      This is an additional assumption
      Assumption (D) : Linear relationship
      🙂

      BTW, for those readers who have hard time following what’s going on here, but who are eager to learn, there are some nice basic stats videos in YouTube under this account. Some (there are more) videos related to this topic:
      -(simple) linear regression:
      http://www.youtube.com/watch?v=ocGEhiLwDVc
      -Standard Error of the Estimate (SEE):
      http://www.youtube.com/watch?v=dJR1WqeBgCg
      -Recap of SEE, r, R2
      http://www.youtube.com/watch?v=unil0JmtU0g

      • Jean S
        Posted Jan 14, 2009 at 11:19 AM | Permalink

        Re: Jean S (#45),
        I think it is reasonable to say that mike forgot to do the very first thing advised in the first video 😉 Try it using the data given by UC in #37!

      • Posted Jan 14, 2009 at 12:42 PM | Permalink

        Re: Jean S (#45),

        Hmm, maybe (D) results from implicitly assumed Gaussian T,P 🙂 A good read for this topic is also

        Chapter 2.1 Estimation and model criticism

        of Brown’s book.

  33. Posted Jan 14, 2009 at 12:30 PM | Permalink

    #43

    Assumption (B) is not necessary if (A) holds (jointly distributed RVs). Then there’s no AGW. But if one wants to proceed without a prior distribution for T, then the causality has to be this way. No AGW, or proxy causes temperature. And even after that, (C) doesn’t hold. Checkmate, Dr. Mann 😉

  34. Steve McIntyre
    Posted Jan 14, 2009 at 12:34 PM | Permalink

    It’s pretty extraordinary after all this time to finally see one of Mann’s CI calculations in action. It’s pretty pathetic to think that this methodology is an “improvement” on the still unknown MBH99 methodology.

    I re-visited the text to see how they actually described this strange method. I couldn’t see anything relevant in the SI. The appendix to the main article says:

    Because of its established deficiencies as a diagnostic of reconstruction skill (32, 42), the squared correlation coefficient r2 was not used for skill evaluation.

    It’s therefore a bit odd to see verification r2 (smoothed like crazy) used to estimate CIs. There is the following comment in the text:

    Uncertainties were estimated from the residual decadal variance during the validation period based (32, 42) on the average validation period r2 (which in this context has the useful property that, unlike RE and CE, it is bounded by 0 and 1 and can therefore be used to define a ‘‘fraction’ of unresolved variance).

    It’s all very strange and will give considerable material over the next few weeks.

  35. Posted Jan 14, 2009 at 3:38 PM | Permalink

    RE UC #29, 36, 37, Jean S #45:

    I think it is reasonable to say that mike forgot to do the very first thing advised in the first video [smiley] Try it using the data given by UC in #37!

    The first thing recommended on the video is to look at a scatter plot of the data. Here’s a scatter of the data in UC’s #29, using the first two columns of his file in #36, which I gather are the verification reconstruction series derived from the early calibration, and the corresponding (late, ie 1950-95) instrumental series. To emphasize the serial correlation, I have connected the points:

    The R2 between these two series is indeed .5463, but as UC notes, the correlation, if any, is negative. In any event, the smoothing has induced so much serial correlation that the comparison can’t be useful.

    Since the CE and RE stats introduced by Fritts as an ad hoc way of checking sub-sample consistency have no standard distribution, it would make more sense to me to do this with a pair of standard Chow tests instead (introducing breaks at both the early and late breakpoints). This has a standard F distribution under the assumption that the relationship is the same in both sub-periods considered in each test, provided there is no serial correlation.
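    A minimal sketch of such a Chow test in Python (illustrative only; names and data are hypothetical, and the F distribution of the statistic assumes no serial correlation, as noted above):

```python
import numpy as np

def chow_test(x, y, break_idx):
    # Chow test for a structural break in a simple OLS regression of y
    # on x (intercept + slope, so k = 2). Under the null of no break and
    # no serial correlation, the statistic is F(k, n - 2k).
    def rss(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        r = ys - X @ beta
        return r @ r

    n, k = len(y), 2
    rss_pooled = rss(x, y)
    rss1 = rss(x[:break_idx], y[:break_idx])
    rss2 = rss(x[break_idx:], y[break_idx:])
    return ((rss_pooled - rss1 - rss2) / k) / ((rss1 + rss2) / (n - 2 * k))
```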

  36. Steve McIntyre
    Posted Jan 14, 2009 at 11:07 PM | Permalink

    Y’know how sometimes annoying songs get into your head – I’m having that trouble with Ooga Chaka (Hooked on a Feeling). I have this nightmarish image of Kyle from Tenacious D (separated at birth from…) doing the David Hasselhoff video of Ooga Chaka (Hooked on a Feeling), with the Team led by Bradley and Hughes in baby blue tuxedos doing Ooga Chaka.

  37. Jeff Alberts
    Posted Jan 14, 2009 at 11:25 PM | Permalink

    Re: Hasselhoff.

    That’s one of those WTF?? moments.

  38. Mark T
    Posted Jan 15, 2009 at 2:23 AM | Permalink

    Thanks Steve. That’s a visual attached to a song that I didn’t need in my head.

    Mark

  39. Mike B
    Posted Jan 15, 2009 at 2:43 PM | Permalink

    I realize this is totally OT, but here is a guaranteed reset button anytime you get an annoying song stuck in your head.

    Guaranteed. 🙂

    • Mark T.
      Posted Jan 15, 2009 at 4:16 PM | Permalink

      Re: Mike B (#57), Ohhh nooo! You sir are an evil, evil person.

      Mark

  40. Bob North
    Posted Jan 15, 2009 at 2:47 PM | Permalink

    Alright, now I have both “Hooked on a Feeling” and the Gilligan’s Island theme song stuck in my head. Thanks a lot guys!

  41. Craig Loehle
    Posted Jan 15, 2009 at 3:44 PM | Permalink

    At the University of Georgia, the old chapel built in the 1880s looks like it was designed by someone who had heard about Greek architecture but never actually seen any. These CI calcs remind me of that…

  43. Steve McIntyre
    Posted Mar 22, 2012 at 3:41 PM | Permalink

    Their reply, needless to say, did not acknowledge that they placed the code online AFTER our comment:

    The method of uncertainty estimation (use of calibration/validation residuals) is conventional (3, 4) and was described explicitly in ref. 2 (also in ref. 5), and Matlab code is available at http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/code/codeveri/calc_error.m.

    • Skiphil
      Posted Nov 26, 2012 at 1:39 PM | Permalink

      Now that Mann has provoked 23 dendros to publicly repudiate his work on another paper it may be a propitious time for others to make the case that Mann is not science God. All of his data and calculations need to be independently audited no matter how loudly he proclaims his infallibility.

      http://www.bishop-hill.net/blog/2012/11/26/lonely-old-mann.html

      • Posted Nov 26, 2012 at 3:24 PM | Permalink

        They have ganged up on Mann. One of those is Rosanne D’Arrigo of “cherry pie” fame. The point in dispute is rather obscure and minor, it seems to me, but obviously a big deal to those involved.