Mann et al 2008

Notice of a new reconstruction by Mann and the Team is in many press clippings today, citing a PNAS article that is not (as I write) online. Rather than clutter other threads, here’s a placeholder thread pending my own response which may take a few days.

Update Sep 2, 2008 2:15 pm: Article is now online here.

The lengthiest notice is here. In the press release, Mann states that tree ring data are inessential to the present reconstruction, unlike the situation 10 years ago:

“Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record,” said Michael Mann, associate professor of meteorology and geosciences and director of Penn State’s Earth System Science Center. “With the considerably expanded networks of data now available, we can indeed obtain a reliable long-term record without using tree rings.”

It’s interesting that Mann characterizes the past studies this way, since the previous studies had, at the time, made quite specific claims that the results were robust to the presence/absence of dendroclimatic indicators.

For example, Mann et al 2000 stated:

We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.

Or here, Mann states:

Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question. This is most probably a result of the combination of our unique reconstruction strategy with the careful selection of the natural archives according to clear a priori criteria.

Now CA readers know that the presence/absence of tree rings (actually, bristlecones) had a “significant effect” on the AD1400 network – something that Mann knew as well from the analysis in the amusingly titled CENSORED directory (now deleted from his website), but here Mann artfully illustrated the AD1760 network where the results were advantageous. (In a securities offering, disclosure of the adverse AD1400 results would be obligatory, but seemingly not in climate science where authors are apparently permitted to report only results that go their way.)

MBH98 stated:

the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.

So this isn’t the first time that Mann has claimed that his results are robust to the presence/absence of tree ring data. He is a serial utterer of this particular claim.

As to what’s in the network: a few predictions (and I’ve not seen the article or the data yet). Here’s an image from the Mongabay article:

1) the entire MBH98 network (415 series) will be in it. This will include the Graybill bristlecone series.
2) the Briffa MXD network (387 series) used in Rutherford et al (some series of which remain unarchived). These only go back to 1400 or later and don’t affect the MWP.
3) the Luterbacher gridded version in Europe
4) Lonnie Thompson’s tropical ice cores
5) miscellaneous series from Mann and Jones

We’ll find out in due course, but squinting at the map, it doesn’t look like there are a lot of MWP proxies or that the proxy versions are going to differ very much from the usual suspects. So we can expect to see:

6) Briffa’s Tornetrask version, NOT Grudd’s
7) Briffa’s Yamal version and NOT the Polar Urals update
8) Graybill’s bristlecone version and NOT Ababneh’s

It doesn’t look like there is anything very much from the ocean sediment world.

Overall, I think that the selections are going to prove pretty familiar and that the MWP proxies are going to be the same tired old ones that we’re used to.

135 Comments

  1. Ian J
    Posted Sep 2, 2008 at 10:30 AM | Permalink

    Looks like the BBC are lapping it up as per usual.

    Helpfully they’ve included their tired old “why climate sceptics are morons” link.

  2. Steve McIntyre
    Posted Sep 2, 2008 at 10:38 AM | Permalink

    Here are links to some previous unsuccessful correspondence with PNAS attempting to get Thompson’s data:

    http://www.climateaudit.org/?p=1477
    http://www.climateaudit.org/?p=1552
    http://www.climateaudit.org/?p=1833

    and an earlier unsuccessful attempt to get Cicerone to ask obstructionist scientists to supply data:
    http://www.climateaudit.org/?p=763
    http://www.climateaudit.org/?p=819

    and on Cicerone’s recasting of the NAS panel terms of reference:
    http://www.climateaudit.org/?p=561

  3. Steve McIntyre
    Posted Sep 2, 2008 at 10:40 AM | Permalink

    I tried to move some comments from other threads to this thread but the move seems merely to have left the posts in the ether. I’ll inquire as to what happened.

  4. Steve McIntyre
    Posted Sep 2, 2008 at 11:01 AM | Permalink

    Luboš has a preview http://motls.blogspot.com/2008/09/hockey-stick-is-revived-alive-and-well.html

    A comment here has a clip from the SI (haven’t seen the SI myself) http://arstechnica.com/journals/science.ars/2008/09/02/climate-hockey-stick-has-staying-power

    • Posted Sep 2, 2008 at 11:43 AM | Permalink

      Re: Steve McIntyre (#4)

      At first glance, both the red and blue time series seem to yield high Hurst coefficients, even excluding the most recent period of the instrumental record. But we must see the data to be sure.

    • fFreddy
      Posted Sep 2, 2008 at 11:55 AM | Permalink

      Re: Steve McIntyre (#4),

      A comment here has a clip from the SI (haven’t seen the SI myself)

      Steve, the arstechnica post to which you link has a link to the SI here.

  5. Posted Sep 2, 2008 at 11:28 AM | Permalink

    “Ten years ago, we could not simply eliminate all the tree-ring data … With the considerably expanded networks of data now available, we can indeed obtain a reliable long-term record without using tree rings.”

    But it seems he still loves them (photo in http://www.realclimate.org/index.php/archives/2004/12/michael-mann/) 🙂

  6. jnicklin
    Posted Sep 2, 2008 at 11:32 AM | Permalink

    Why do the dendro results match so closely at some intervals while they are markedly different at others? I’m assuming that the black line is instrumental as opposed to proxy, which begs the question: if the proxy reconstructions are so accurate, why do we need to splice in instrument readings? Or does it all fall apart at that point?

  7. Dave Clarke
    Posted Sep 2, 2008 at 11:51 AM | Permalink

    From the Mann release:

    “Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used.”

    This appears to be a new claim that tree rings are now “inessential” (to use Steve’s term) for reconstructions going back 1300 years. So I fail to see any contradiction with the previous statements cited, none of which made this claim as far as I can see.

  8. Steve McIntyre
    Posted Sep 2, 2008 at 12:15 PM | Permalink

    Article is now online here http://www.pnas.org/content/early/2008/09/02/0805721105.full.pdf+html

  9. Kenneth Fritsch
    Posted Sep 2, 2008 at 12:22 PM | Permalink

    The blue no-dendro reconstruction implies that we have exceeded the temperatures of 1400 years ago by a mere 0.2 degrees C, as indicated by the instrumental record, and only in the last years of the 1990s. Given the temporal resolution of the instrumental record versus the reconstruction, calling a couple of years warmer than any in the last 1600 seems rather dubious.

    I anxiously await the details of the explanation.

  10. mugwump
    Posted Sep 2, 2008 at 12:39 PM | Permalink

    FTA:

    The reconstructed amplitude of change over past centuries is greater than hitherto reported, with somewhat greater Medieval warmth in the Northern Hemisphere, albeit still not reaching recent levels.

    I wonder if the “somewhat greater Medieval warmth” than “hitherto reported” in this MBH08 paper lies outside the error bars in MBH98?

    They’re getting there. I predict the abstract from MBH16 will read:

    The reconstructed amplitude of change over past centuries is greater than hitherto reported, with somewhat greater Medieval warmth in the Northern Hemisphere, possibly reaching recent levels, particularly given the anomalous cooling witnessed in the first two decades of the 21st century.

  11. Nick Moon
    Posted Sep 2, 2008 at 12:52 PM | Permalink

    All the data and all the code appears to have been put on-line here:

    http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/

    Nice to see that Steve’s persistence in asking climate scientists to disclose their code and data seems finally to be getting through 🙂

  12. Posted Sep 2, 2008 at 1:07 PM | Permalink

    Do I see from the list of proxies that he’s used Chuine et al – the one where Doug Keenan showed that their reconstructed temperature was more than two degrees out from actual temps?

  13. Jean S
    Posted Sep 2, 2008 at 1:14 PM | Permalink

    CPS is the old variance-matching technique, this time done first locally with smoothed series and then again at the hemispheric level! Not only that, we have the flipping of series à la Juckes:
    %%% low pass filter to 0.05
    temp=x(kk:kkk,i+1)*sign(z(ib,i));
    [smoot,icb,ice,mse0]=lowpassmin(temp,0.05);
    sign(z(ib,i)) seems to be the sign of the correlation.
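
    For what it’s worth, here’s a rough R analogue of that flip-then-smooth step (my own sketch, not Mann’s code; the smoothing here is a crude moving average rather than his Butterworth filter):

    flip_and_smooth <- function(proxy, target, span = 21) {
      s <- sign(cor(proxy, target, use = "pairwise.complete.obs"))  # orientation taken from the correlation sign
      stats::filter(proxy * s, rep(1 / span, span), sides = 2)      # flip, then low-pass
    }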

  14. Steve McIntyre
    Posted Sep 2, 2008 at 1:42 PM | Permalink

    I started downloading data from http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/data/proxy/ and was interrupted and can no longer get access to the site.

    Can others get through?

  15. EW
    Posted Sep 2, 2008 at 1:47 PM | Permalink

    Maybe we can split the downloads? Should I try to get, say, the first 5?

  16. Steve McIntyre
    Posted Sep 2, 2008 at 1:48 PM | Permalink

    Yeah, the ass…es have blocked me. Could someone run this script and email me the result.

    url="http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/data/proxy/1209proxyname.txt"
    id=scan(url,what="")   # 1209 proxy ids
    mann=list()
    for (i in 1:1209) {
      loc=file.path("http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/data/proxy",paste(id[i],"ppd",sep="."))
      fred=read.table(loc,skip=3)
      names(fred)=c("year","proxy","count")
      mann[[i]]=fred
    }
    names(mann)=id
    save(mann,file="d:/temp/mann.tab")
    #list of 1209 series

  17. Steve McIntyre
    Posted Sep 2, 2008 at 1:50 PM | Permalink

    #17. Why don’t you run the script as is and see how far you get? If they block you, save what you’ve got and someone else can start in.

  18. Steve McIntyre
    Posted Sep 2, 2008 at 1:54 PM | Permalink

    Mann seems to have re-thought the optics of this. He’s let me back online and I’m downloading data as we speak.

  19. Posted Sep 2, 2008 at 1:54 PM | Permalink

    In the graph posted in comment 4, does the slope (in this case the severity) of the sharp drop in temps between roughly 1350 and 1450 correspond to the slope of the rise between 1850 and the present day?

  20. PHE
    Posted Sep 2, 2008 at 2:02 PM | Permalink

    In the paper, yet again (!) the instrumental data are presented as a bold red line that obliterates all of the critical results behind it. This means it is hard to judge just how reliable the proxies are during the last few decades – though from what you can make out, it doesn’t look too good.

    Mann et al make the following bold conclusion: “Our results extend previous conclusions that recent Northern Hemisphere surface temperature increases are likely anomalous in a long-term context.”

    As far as I can tell, the only result that presents the current temperature as anomalous is the instrumental data curve.

    The only clear achievement of this paper is that, assuming this is all they can come up with, it reinforces the case for scepticism.

  21. Not sure
    Posted Sep 2, 2008 at 2:12 PM | Permalink

    From 1209proxyname.txt: “chuine_2004_burgundyharvest”. Sure looks like the Chuine paper Keenan criticised.

  22. Stan Palmer
    Posted Sep 2, 2008 at 2:40 PM | Permalink

    The graph in comment 4 has a definite LIA and MWP. This is in contrast to previous statements that these did not exist.

    Secondly, Gore in “An Inconvenient Truth” showed the previous hockey stick on the same chart as the CO2 concentration curve. The claim was that temperature and CO2 concentration followed each other, and so AGW, QED. That claim is again in contrast with the new hockey stick of comment 4.

    • NJS
      Posted Sep 3, 2008 at 5:13 AM | Permalink

      Re: Stan Palmer (#24),

      Yes, that was my first thought too. The statistical science is lost on me, but it seems that “Hockey Stick II” shows a reasonable plot of temperatures (deigning to include the LIA & MWP), until it goes “off the scale” very recently. Could I suggest that the unusually warm year 1998 caused this? That it might be the case that if we had detailed historical climate records we would see a history of occasional outliers? Caused perhaps by a combination of solar activity and ocean currents?

  23. Hans Erren
    Posted Sep 2, 2008 at 2:51 PM | Permalink

    my comment on luterbacher
    http://home.casema.nl/errenwijlens/co2/errenvsluterbacher.htm

    my brief exchange with Lonnie Thompson
    http://home.casema.nl/errenwijlens/co2/quelccaya.htm#Conclusions

  24. Peter Ashwood-Smith
    Posted Sep 2, 2008 at 2:52 PM | Permalink

    Steve, been lurking for a few years, fascinating stuff.

    But .. how do you know that ‘they’ blocked you? My first assumption when I can’t get data from A to B is not a deliberate blockage but the normal daily goings on of IP.

    Hmm guess I’m auditing your blockage assertion 😉

    Steve: Only because of past history. Mann blocked access to his U of Virginia website; Rutherford blocked access to his website (with SI to Rutherford et al.); Hughes blocked access to U of Arizona website. So there’s a history. It’s something that’s both futile and embarrassing for the Team. A complaint to the U of Arizona vice president yielded only the reply that there was no breach of the terms of their research grants.

  25. PHE
    Posted Sep 2, 2008 at 3:13 PM | Permalink

    Some further observations on the instrumental record in the Mann et al paper:

    The results graphs (Fig 3) show both (i) CRU ‘NH Land’, identified as “CRU Instrumental Record” with bold red line at front, peaking at 0.9; and (ii) CRU ‘NH land + ocean’, identified as “HAD instrumental record” with bold, but pale grey line, behind the red, peaking at ‘only’ 0.58.

    Some questions:
    – why choose only NH (Northern Hemisphere) when many proxies from the SH are included (Fig 1)? Surely not because the NH T rise is greater??
    – why put emphasis on ‘Land record’ when at least some proxies are not on land (coral)? Surely not because land T rise is greater??

    The latest CRU global land+sea temperature value is about 0.4 (for 2007). This would put it on a par with the upper 95% of some proxies from the MWP. But that would be cherry-picking of course!

  26. Dave Clarke
    Posted Sep 2, 2008 at 3:24 PM | Permalink

    #18, #20:
    Steve, let me see if I have this straight. You claim that Mann (or some other “ass…e”) “blocked” you at around 1:42 (or a bit before), and then he let you “back online” at around 1:54.

    A more plausible explanation might involve such possible technical issues as a wonky connection, heavy demand on his data archive or ISP bandwidth rationing. You are downloading more than a thousand files after all.

    Steve: You could be right. I might be being a bit chippy because of prior history with these guys. But the total size of the downloaded data is only about 2.8 MB. I’ll post up my collation, which will make it easier for interested parties to examine the data.

  27. Not sure
    Posted Sep 2, 2008 at 3:40 PM | Permalink

    He thinks he’s been blocked because it has happened before:

    http://www.climateaudit.org/?p=167
    http://www.climateaudit.org/?p=1584
    http://www.climateaudit.org/?p=313

  28. Posted Sep 2, 2008 at 4:05 PM | Permalink

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record,” said Michael Mann, associate professor of meteorology and geosciences and director of Penn State’s Earth System Science Center. “With the considerably expanded networks of data now available, we can indeed obtain a reliable long-term record without using tree rings.”

    It’s nice to know that Mann hasn’t lost his touch for brazen lying. The claims in MBH98 about robustness to the absence of dendroclimatic proxies were simply false, weren’t they, Dr Mann?

  29. Posted Sep 2, 2008 at 4:09 PM | Permalink

    I’m going to add my own predictions:

    1. The R2 metric for key parts of the reconstruction will be near zero.
    2. The RE metric will be spuriously high.
    3. Mann will hide the R2 statistic reporting.
    4. The error analysis will not take into consideration any autocorrelation.
    5. Mann will assume that all proxies measure temperature, but some key ones are more sensitive (ie get weighted a lot higher) than others.

    Now I’ll read the paper on the way to work.

  30. Craig Loehle
    Posted Sep 2, 2008 at 4:58 PM | Permalink

    I had blithely assumed that the CRU plot in Fig. 3 of Mann was the standard 5 year smooth, but the caption says it is a 40 year low-pass filter. How do you get data all the way out to the end of your data set (2006) with a 40 year filter? Is it backward looking? Is there end-point pinning? Is there reflecting? Which low-pass filter is used? I find no answers to these questions in the paper or the SI. Does anyone else?

  31. Curt
    Posted Sep 2, 2008 at 5:03 PM | Permalink

    John A (#30):

    Regarding your points 1 & 3: the first thing that jumped out at me from a quick perusal of the paper was this:

    Because of its established deficiencies as a diagnostic of reconstruction skill (32,42), the squared correlation coefficient r2 was not used for skill evaluation.

  32. Steve McIntyre
    Posted Sep 2, 2008 at 5:40 PM | Permalink

    #32. Inconsistently, the SI states the following:

    Screening Procedure. To pass screening, a series was required to exhibit a statistically significant (P < 0.10) correlation with either one of the two closest instrumental surface temperature grid points over the calibration interval

    I guess in Mann-world correlation (r) can be used, but not r2. But hey, it’s climate science.
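
    For concreteness, a minimal R sketch of that screening rule as I read it (grid1 and grid2 stand for the two closest instrumental grid-point series; this is only an illustration, not Mann’s code):

    passes_screening <- function(proxy, grid1, grid2, alpha = 0.10) {
      p1 <- cor.test(proxy, grid1)$p.value
      p2 <- cor.test(proxy, grid2)$p.value
      min(p1, p2) < alpha   # keep the proxy if either local correlation is "significant"
    }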

  33. RomanM
    Posted Sep 2, 2008 at 5:45 PM | Permalink

    #31 Craig Loehle

    I have been looking at the Mann paper discussed in the UC on Mann Smoothing thread ( http://www.climateaudit.org/?p=3504 ) for the last several days. In that paper, he consistently uses “40 year lowpass” filters which, according to his supplementary materials, are calculated using the Matlab program lowpass.m at http://www.meteo.psu.edu/~mann/smoothing08/ .

    This program uses an order-10 Butterworth filter (f = 1/40) which is applied twice (by the Matlab function filtfilt), first in one direction, then in reverse. These programs require the Matlab “signal” add-on package. You are quite correct that there must be some sort of padding. In the program, each end is padded with 60 observations. The three methods of padding used in the paper are the mean value (of the entire temperature sequence, termed by Mann “minimum norm”), simple reflection (minimum slope) and reflected and flipped upside down (minimum roughness!??). My guess is that this is the low-pass filter used with one of these methods (although he also has a program called “lowpassmeanpad.m” which pads with the mean of the observations near the ends of the sequence).

    I don’t have this package, but I discovered that a similar library exists in R (also called “signal”, with butter and filtfilt functions) which supposedly does the same things. From that library, I was able to surmise that the Butterworth filter consists of a binomial moving average component followed by an autoregressive smoothing component. It seems to be the AR piece that does the “40 year” filtering. I couldn’t verify that the R version is identical to the Matlab one without having the Matlab stuff, but the claim in R is that it follows Matlab conventions.
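
    As a rough check, here is a minimal R sketch of that padded forward-backward filtering with the “signal” package (I’ve used a lower filter order than Mann’s 10 to avoid numerical trouble, so this is only an approximation of his settings):

    library(signal)
    lowpass40 <- function(x, npad = 60) {
      pad    <- rep(mean(x), npad)                     # "minimum norm" padding with the series mean
      padded <- c(pad, x, pad)
      bf     <- butter(4, 2 * (1 / 40), type = "low")  # 40-year cutoff, Nyquist-normalized
      sm     <- filtfilt(bf, padded)                   # forward pass, then reverse pass
      sm[(npad + 1):(npad + length(x))]                # drop the padding again
    }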

    Note: I tried using the fancy “reply and paste link” but my stuff didn’t look like it was supposed to so I just used the clumsy link insertions.

  34. Posted Sep 2, 2008 at 5:52 PM | Permalink

    I’ve now done my first scan of Mann’s paper, and all I can say is that it reads like a spotty undergrad trying to pull the wool over the learned professor’s eyes.

    It amazes me that garbage like this ever sees the light of day. Do the editors of the PNAS journal have any shame? Or Lonnie “no archive” Thompson?

    I’ll be publishing my own (non-statistical) evaluation of Mann’s invocation of magical artifacts into a paper ostensibly about scientific issues (unless McIntyre beats me to it).

  35. Kenneth Fritsch
    Posted Sep 2, 2008 at 6:55 PM | Permalink

    Steve M #33

    That point did not register with me when I first scanned the paper. My attention was drawn instead to the use of p less than 0.10 rather than what I would consider the customary p less than 0.05. The screening, in other words, only requires that the correlation be significant (p less than 0.10); it does not matter what the correlation value is, just that it is something other than zero.

    After my first scan of the article, and acknowledging my limitations in analyzing the methodologies used, I see many concessions from earlier work(s), of course without explicit admission. I think those concessions are adding up to more than enough to provide a lethal dose for the earlier reconstructions.

  36. Robert Wood
    Posted Sep 2, 2008 at 7:17 PM | Permalink

    He’s just trying to save his reputation here. He knows his goose is cooked. He’s saying: “Well, maybe there was a teensy-weeny MWP, but it doesn’t count, nyah, nyah!”

    Is he using the same AlGore-rythm-method that will show a hockey stick with red noise?

    Apologies for posting sarcasm on a truly sensible site; just that Mann is …

  37. Steve McIntyre
    Posted Sep 2, 2008 at 7:18 PM | Permalink

    #36. The problem with ex post use of correlation screening is that you can easily generate HS’s from red noise, as both David Stockwell and I observed a long time ago. Mann’s procedure here seems little different from Jacoby in 1989 picking the 10 most “temperature-sensitive” series from 36. IMO, they have to establish a class of proxy (e.g. treeline white spruce or whatever) and then use them all or use none of them. This simplest of statistical points seems impossible for the Team to understand.
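
    A toy R simulation of the point (my own sketch, not Stockwell’s or Mann’s code): screen red-noise “proxies” on calibration-period correlation, flip and average the survivors, and a hockey stick emerges from the screening alone.

    set.seed(123)
    nyr <- 600; ncal <- 100; nprox <- 1000
    target  <- seq(0, 1, length.out = ncal)                       # trending "instrumental" period
    proxies <- replicate(nprox, arima.sim(list(ar = 0.9), nyr))   # pure AR(1) red noise
    r    <- apply(proxies[(nyr - ncal + 1):nyr, ], 2, cor, y = target)
    pass <- abs(r) > 0.3                                          # ex post correlation screen
    recon <- rowMeans(proxies[, pass] * rep(sign(r[pass]), each = nyr))
    plot(recon, type = "l")   # flat "handle" with an upturned modern "blade"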

  38. MarcH
    Posted Sep 2, 2008 at 7:36 PM | Permalink

    I still fail to understand how use of selected proxy data over such a long time frame is capable of producing these results within the error margins described. The term Pie in the Sky comes to mind as I read over the arguments.

  39. Posted Sep 2, 2008 at 7:39 PM | Permalink

    #38. And the logical companion analysis is the conflict-generated CIs of Brown and Sundberg http://www.climateaudit.org/?p=3364 that Steve dug up. Welcome back Steve!

  40. Craig Loehle
    Posted Sep 2, 2008 at 7:39 PM | Permalink

    RomanM: thanks for describing Mann’s lowpass filter. IMHO, the last year that could be used for the CRU data in Fig. 3 would be 1986. To base a call of alarm about warming on padded data (or reflected, or whatever) is simply stunning.

  41. old construction worker
    Posted Sep 2, 2008 at 7:47 PM | Permalink

    Is this part of the PR Challenge?

  42. ep
    Posted Sep 2, 2008 at 7:49 PM | Permalink

    So to the layman: does this new paper really tell us anything new or is it just a rehash, with added spin for a press release?

  43. Steve McIntyre
    Posted Sep 2, 2008 at 7:59 PM | Permalink

    There’s new data in the network, which needs to be looked at. It will take a while to take a proper measure. Although their scripts are online, the methods used are very idiosyncratic. The references to the supposed validation of the methods are Mann’s own articles, which hardly counts as Draper and Smith. However, the data is the place to start and I’ll look first at new non-tree ring proxies that cover the MWP and see what they do.

    I’ve placed plots of many of them at http://data.climateaudit.org/data/images/mann.2008 and will incorporate them into a post.

    • Dave Dardinger
      Posted Sep 2, 2008 at 9:08 PM | Permalink

      Re: Steve McIntyre (#44),

      Are there any of the outstanding non-archived proxies included in the SI? I’m sure you’d be pleased with Mann’s shopping list if he included some of the ice core data, etc.

  44. MattN
    Posted Sep 2, 2008 at 8:09 PM | Permalink

    How many times does Mann get to cry “Wolf”????

    • Posted Sep 2, 2008 at 8:36 PM | Permalink

      Re: MattN (#45), Glancing at the proxies, I would like to see an ensemble spaghetti of the raw plots. No stats or adjustments, just a spaghetti of the plots; then get into the stats. That should be the best illustration of the BS that follows.

  45. Graeme Rodaughan
    Posted Sep 2, 2008 at 8:27 PM | Permalink

    As many times as people will continue to listen, and even afterwards, but hopefully then to the sound of chirping crickets…

  46. Posted Sep 2, 2008 at 8:57 PM | Permalink

    The Mann who cried wolf?

  47. Steve McIntyre
    Posted Sep 2, 2008 at 9:58 PM | Permalink

    #49. There are versions of (say) Thompson’s ice core data around, but not the sample data. Likewise, there are versions of Tornetrask around, but not the measurement data. Mann doesn’t have details at that level, but there is some data appearing digitally for the first time, so I’ve got things to look at before I comment much.

  48. Mark_T
    Posted Sep 2, 2008 at 10:00 PM | Permalink

    I can’t imagine that Mann would come up with another hockey stick of questionable methods after what was revealed about the first one. He must be sure that this one is right. If it’s not and this is shown to the public, I would imagine it would harm his career in some way. I just can’t see him making the same or similar mistake twice.

    • John A
      Posted Sep 3, 2008 at 12:37 AM | Permalink

      Re: Mark_T (#52),

      I can’t imagine that Mann would come up with another hockey stick of questionable methods after what was revealed about the first one. He must be sure that this one is right. If it’s not and this is shown to the public, I would imagine it would harm his career in some way. I just can’t see him making the same or similar mistake twice.

      You must be a newbie around these parts…

  49. Raven
    Posted Sep 2, 2008 at 10:39 PM | Permalink

    #51 Mark
    From BBC piece:

    The new paper adds to the evidence against that notion. One of the analytical methods used suggests that temperatures in the Mediaeval Warm Period could have been no higher than they were in about 1980; the other suggests they were no higher than those seen 100 years ago.

    I suspect Mann has a bait and switch planned. The method that produces results as high as 1980 (similar to Loehle) will likely stand up to scrutiny, but the method that says the MWP temperatures were the same as the temps in 1900 likely will not (if it could, why would he bother with two methods?). However, once this paper is out there, all the modellers and propaganda pieces created for the IPCC will use the latter method and claim it is robust because it appeared in the same paper as another method which was. If I am right, it won’t fool anyone who has looked into it, but it will fool the vast majority of people looking for any excuse to believe that there was no MWP.

  50. mugwump
    Posted Sep 2, 2008 at 11:02 PM | Permalink

    I’ve placed plots of many of them at http://data.climateaudit.org/data/images/mann.2008 and will incorporate them into a post.

    If you have them, would you be able to put plots of all of them up? I just looked at every one you posted so far, and I can only see an anomalous recent warming trend in Lake Korttajärvi, Finland, and on top of some glacier. If that’s all they got, they got nuthin’

    Steve:
    In due course. I’m looking at things. The Korttajarvi series are said by the original author to be disturbed in recent periods. Mann mentions this and then “proves” that the flawed data doesn’t affect the results. So why include it? Mann:

    we also examined whether or not potential problems noted for several records (see Dataset S1 for details) might compromise the reconstructions. These records include the four Tijander et al. (12) series used (see Fig. S9) for which the original authors note that human effects over the past few centuries unrelated to climate might impact records (the original paper states ‘‘Natural variability in the sediment record was disrupted by increased human impact in the catchment area at A.D. 1720.’ and later, ‘‘In the case of Lake Korttajarvi it is a demanding task to calibrate the physical varve data we have collected against meteorological data, because human impacts have distorted the natural signal to varying extents’). These issues are particularly significant because there are few proxy records, particularly in the temperature-screened dataset (see Fig. S9), available back through the 9th century. The Tijander et al. series constitute 4 of the 15 available Northern Hemisphere records before that point.

    These 4 series are worthless.

  51. PHE
    Posted Sep 2, 2008 at 11:07 PM | Permalink

    And another thing: in MZHBMRN08, Fig 3, it states it shows “estimated 95% confidence intervals”. But the present-day instrumental T is between 0.4 and 0.7 degC above this (and ‘only’ 0.2 above if you compare with the Global Land & Sea anomaly, not shown on the graph). So, what is the meaning of ‘95% confidence’ if it is so far out of reality; and what does this say about what instrumental data could have shown us for the MWP if thermometers existed then?

  52. PHE
    Posted Sep 2, 2008 at 11:14 PM | Permalink

    Re 52 (Mark_T).

    snip – no need to speculate on motives

  53. Posted Sep 2, 2008 at 11:46 PM | Permalink

    Steve, I was looking through some of the proxies and one really stands out as strange. What is going on with the Socotra Island chart? The temp changes are way off the charts toward the end of the twentieth century. Could that cause a warming bias toward the twentieth century in the final graph?

  54. K. Hamed
    Posted Sep 3, 2008 at 1:30 AM | Permalink

    RE: Steve McIntyre (#39)

    Steve McIntyre:
    September 2nd, 2008 at 7:18 pm
    #36. The problem with ex post use of correlation screening is that you can esily generate HS’s from red noise as both David Stockwell and I observed a long time ago.

    Indeed, simulations with FGN data reveal that the variance of the product-moment cross-correlation coefficient r is inflated (similar to that of trend). This results in larger Type I errors.

    Scaling Coefficient H

  55. anna v
    Posted Sep 3, 2008 at 1:37 AM | Permalink

    http://www.climateaudit.org/?p=3501#comment-294070

    I can’t imagine that Mann would come up with another hockey stick of questionable methods after what was revealed about the first one. He must be sure that this one is right. If it’s not and this is shown to the public, I would imagine it would harm his career in some way. I just can’t see him making the same or similar mistake twice.

    You have to remember that scientists also have egos, and some of them enormous egos. The emotional need to be right no matter what does not disappear when one is talking scientific matters, and often clouds scientific judgment. Assuming true scientists and intentions; if one goes into the gray region (low IQ, intent to deceive,…) things get worse.

    • IainM
      Posted Sep 3, 2008 at 4:32 AM | Permalink

      Re: anna v (#61),

      On the other hand he could just be preparing his entry for the AR5 logo competition.

  56. K. Hamed
    Posted Sep 3, 2008 at 1:52 AM | Permalink

    Continuation to #60

    For a nominal alpha of 0.10
    simulation results
    Scaling Coef. H / Variance inflation / true alpha

    0.94 / 3.27 / 0.23
    0.90 / 2.68 / 0.21
    0.85 / 2.17 / 0.20
    0.80 / 1.71 / 0.17

    I also got almost identical results with an exact version of Kendall’s tau under persistence that I am working on.
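
    A quick R check along the same lines, using AR(1) persistence as a crude stand-in for FGN/LTP (so the inflation is only indicative, not the tabulated FGN values):

    set.seed(42)
    reps <- 5000; n <- 100
    pvals <- replicate(reps, cor.test(as.numeric(arima.sim(list(ar = 0.8), n)),
                                      as.numeric(arima.sim(list(ar = 0.8), n)))$p.value)
    mean(pvals < 0.10)   # empirical Type I error rate, well above the nominal 0.10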

    With LTP taken into account some of the apparently significant cross-correlations may be only due to chance. This may be partially supported by the comment by jnicklin (#6)

    Why do the denro results match so closely at some intervals while they are markedly different at others?

    which was also addressed by Steve in an earlier post.

  57. Stefan
    Posted Sep 3, 2008 at 2:33 AM | Permalink

    Could someone please explain to a layman what the generally accepted justification is for splicing in the instrument record at the end?

    Not being a scientist, I just look at it and assume, “well, they’re different things, so of course they look different.” And yet scientists do this, so there must be a good justification for it??

    • JamesG
      Posted Sep 3, 2008 at 6:30 AM | Permalink

      Re: Stefan (#64),
      It’s not really a splice, it is just another overlay. It’s supposed to show that the instrument records match the proxies over the overlapping period, which is normal practice. But it seems to me that without the tree-ring data they don’t actually match too well – though that’s a bit difficult to see without a close-up. And if that’s the case then it raises more questions than it answers.

      Here is a good example of a weird splice; the Siple curve (which MBH98 was tuned to match):
      http://www.john-daly.com/zjiceco2.htm

  58. Paul
    Posted Sep 3, 2008 at 2:52 AM | Permalink

    At what point should “scientific” research of this nature be considered data mining?

    How many papers have now been written by Mann and close associates? Each one clearly trying to validate a deeply held prior.

    Surely any of these proxy reconstruction papers now deserve to be binned without consideration by journal editors.

  59. EW
    Posted Sep 3, 2008 at 3:44 AM | Permalink

    Another T-reconstruction from published data? Didn’t Craig Loehle get a letter that the journals aren’t interested in T-reconstructions from the old data anymore?

  60. Willis Eschenbach
    Posted Sep 3, 2008 at 3:57 AM | Permalink

    What stood out on the first quick look was the improbable nature of the error bars. In Fig. 1 we have EIV Land and EIV Land + Ocean. In the year 400, the uncertainty for both of these is about 0.15°C. A couple things about that.

    First, the current estimated 95% CI in the monthly HadCRUT GMST is 0.15° now. (That figure has always seemed low to me, but it’s Phil Jones’ claim, not mine). Back in the 1850s, it’s about twice that. Again, I doubt that accuracy, but let it roll.

    When fitting a proxy record to current data, we get the best fit we can find by some kind of linear transform of the data. But the inaccuracy of the instrumental data affects that final fit in two ways. 1) The error can never be less than the error at the early end of the instrumental record. 2) If there is a small error in the fit, a slight wobble, the proxy will be fit to the data at a different angle. The important thing to note is that the error increases the further you go back in time.

    So we have an error that at best is the error in the 1850 instrumental record (Phil says 0.3°) and increases as (1995-year)/(1995-1850). By the time we go back 600 years, we’re already at four times the initial error. That gives us 1.2°. And in the year 400, it is eleven times the error, 3.3°.
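
    Put as a quick R check (taking Phil Jones’ 0.3° as the 1850 instrumental error, per the argument above):

    err1850 <- 0.3
    infl    <- function(yr) (1995 - yr) / (1995 - 1850)
    err1850 * infl(1395)   # ~1.2 degrees, six centuries back
    err1850 * infl(400)    # ~3.3 degrees, in the year 400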

    A physical example might clarify this. Imagine a wooden floor with the instrumental record drawn on it, from 1850 to 1995. We drive a nail in a little bit at the high and low ends of the 1995 confidence interval. We do the same with the 1850 high and low 95% CIs. Let’s suppose we only have data for those two points, and we want to fit a straight line to the data. So we lay a long wooden dowel down, in between the two pairs of nails sticking out of the floor. That’s our straight line.

    But the skinny dowel isn’t held by the nails. It’s loose to rattle around, it will fit through the data at a variety of angles. Best guess is middle to middle, of course.

    And when you grab the long dowel a ways away from the nails, back say in the year 400, it can move quite a ways …

    (And yes, I understand that there are various other considerations that make the situation more complex, and the range not quite as wide. I am pointing to a principle here – when you fit the end of a long stick, which end can wobble more?)

    That’s what hit my eye first.

    The second thing was that the error bars for EIV Land and EIV Land + Ocean in the year 400 are the same … perhaps one of the more learned statisticians can explain that one. I was taught that with combined datasets the errors add orthogonally, but what do I know?

    w.

    PS – the situation is complicated by the overall upward trend of the instrumental record. This will bias the choice of proxies under his … mmm … let me call it a “curious” ex ante proxy selection policy of greater than 90% significance of correlation.

    In addition, we have the usual problem of sparse documentation. They identify the issue of autocorrelation, claim it makes hardly any difference, and leave it unclear whether any correction was applied or if they are just discussing it. If they are actually adjusting for it, they certainly don’t specify how.

    Finally, a breakdown of the kinds of proxies … unfortunately, some are left blank in the SI, so the story is not all told. Here are the numbers:

    Cave Stalagmite, 1
    composite, 3
    early summer temp, 1
    ice, 1
    lake-sediments, 1
    marine-sediments, 2
    oxygen isotopes, 1
    tree, 1
    tree-earlywood-width, 8
    tree-latewood-density, 111
    tree-latewood-width, 8
    tree-width, 904
    (blank), 167
    Grand Total, 1209

    At least 1,032 of 1,209 proxies are tree ring proxies …

    Steve:
    There are values for all the codes. So you can deduce the split from them.

    #2000 3000 3001 4000 4001 5000 5001 6000 6001 7000 7001 7500 8000 8001 9000
    # 71 4 3 9 10 3 16 7 7 13 2 105 19 13 927

    2000 – Luterbacher – 71
    3000 – TR recons – 4
    3001 ocean seds – 3: Chesapeake, Black Cariaco bulloides
    4000- lake seds – 9: incl 4 Korttajarvi versions, Tuborg, Pallcacocha, Donard, Soper, Mono
    4001 – lake seds – 10: with 4 Curtis series from Laguna, 2 Chichanacab
    5000- 3 documentary
    5001- China documentary – 16
    6000- speleo – 7
    6001 – speleothem – 7
    7000 -corals – 13
    7001 – corals – 2
    7500 – Osborn MXD – 105: what happened to 387??
    8000 – ice core – 19
    8001 – ice core – 13
    9000 – tree ring – 927

    Tree ring: 927+105+4= 1036;

    • bender
      Posted Sep 3, 2008 at 5:46 PM | Permalink

      Re: Willis Eschenbach (#67),

      What stood out on the first quick look was the improbable nature of the error bars.

      Bingo. I will read the paper tonight to see if the methods are adequately described and if they are in fact accurate. I’m skeptical. What I will be looking for in particular is precisely the same criticism I levelled at Loehle’s paper: how were dating errors in the x direction (dating through time) handled? When you cut out annually resolved tree ring proxies your temporal error starts to soar. Those error bars look highly improbable to me, are in need of audit.

  61. andy
    Posted Sep 3, 2008 at 4:53 AM | Permalink

    I went through a couple of Tiljander’s articles, and her doctoral thesis can be found online as well. However, in none of those articles does she try to reconstruct past temperatures more precisely than stating the colder and warmer periods. Somehow Mann is able to bypass such minor restrictions, like Tiljander’s statement about the non-validity of the data after 1720 due to human activities.

  62. Ceri
    Posted Sep 3, 2008 at 5:02 AM | Permalink

    Great News, the Hockey Stick is back!
    Having seen Mann’s latest graph I now realise that I have lived my life (44 years of it so far) during the most unprecedented climate change during the last 1400 years. This is comforting as not only have I managed to adjust to my warming world, but I have done so without even noticing it. My standard of living has also risen over the same period, as it has for everyone else on the planet (with the exception of those in war zones).
    The future looks bright!

  63. Nicholas
    Posted Sep 3, 2008 at 5:29 AM | Permalink

    The robots.txt file at the SI site says:

    User-agent: *
    Disallow: /

    In other words, no automatic downloading allowed.

    Now, the “proxy” directory alone contains over 1300 files. They expect us to download them one at a time? Is it just me or is that ludicrous?

    I’m willing to download some subset of them manually if necessary, and send it to anyone who needs to analyse it, to save them this pointless endeavour. Let me know.

    Steve: I’ve downloaded and collated them already into an R-table. I don’t agree that this robots.txt line would prohibit research downloading; we’ve been through that with Hansen obviously.

  64. Patrick M.
    Posted Sep 3, 2008 at 5:55 AM | Permalink

    Re Raven #53:

    I had the same exact thought. Although it’s possible, given Mann’s record, that none of it will stand up to scrutiny; we’ll have to wait and see.

  65. Steve McIntyre
    Posted Sep 3, 2008 at 6:04 AM | Permalink

    #72. Mann’s data is now available as an R-table at CA and all the series can be downloaded as follows:

    download.file("http://data.climateaudit.org/data/mann.2008/mann.tab","temp.dat",mode="wb")
    load("temp.dat"); length(mann) #[1] 1209
    #returns list of 1209 tables with columns headed year, proxy, count

    For analysis of large data sets, there’s no point dicking around with things like Excel. For readers who want to do so, you can download individual files from Mann’s website.

    I’ve also posted up Mann’s collation of proxy info, correcting one typo for latitude (shown as 37,5). This can be accessed as follows:

    download.file("http://data.climateaudit.org/data/mann.2008/details.tab","temp.dat",mode="wb")
    load("temp.dat"); nrow(details) #[1] 1209
    #returns table with 1209 rows
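
    A quick usage sketch once both files are loaded (the first series is used purely as an example; the names come from 1209proxyname.txt):

    str(mann[[1]])          # one proxy: columns year, proxy, count
    plot(mann[[1]]$year, mann[[1]]$proxy, type = "l", xlab = "year", ylab = "proxy")
    head(details)           # Mann's collation of proxy metadata, 1209 rows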

  66. Luis Dias
    Posted Sep 3, 2008 at 6:08 AM | Permalink

    #68

    Exactly my thoughts. The BBC media campaign has started:

    A new study by climate scientists behind the controversial 1998 “hockey stick” graph suggests their earlier analysis was broadly correct.

    So there you go. It’s out and proven. What really bothers me is the timing. A month ago, W&A archived their data and the data shows that Mann is bollocks, but it doesn’t matter anymore because Mann 1998 is superseded in 2008, and further talk about it is dwelling on the past. (We have moved on!)

    When the AR5 deadline comes, say in 2010, W&A2010 will come out as a rebuttal to an “MM2009” equivalent rebutting M2008. Its data won’t be archived.

    Just speculatin.

    • Posted Sep 3, 2008 at 5:08 PM | Permalink

      Re: Luis Dias (#75), and re #1:
      this sums up the battleground IMO. Richard Black (BBC reporter) put together the “top ten” skeptics’ arguments with rebuttals, “with advice from Fred Singer and Gavin Schmidt”

  67. Craig Loehle
    Posted Sep 3, 2008 at 6:22 AM | Permalink

    Willis: when looking at the error in Temp when there is an error in the estimate of slope, the error bars propagate with deviation from the mean TEMP of the calibration period, not with time. That is, if the mean temp is X during calibration, as you get far above or below X your error bars increase. Not time.

    • Willis Eschenbach
      Posted Sep 3, 2008 at 10:56 PM | Permalink

      Re: Craig Loehle (#76), Craig, always good to hear from you. You say:

      Willis: when looking at the error in Temp when there is an error in the estimate of slope, the error bars propagate with deviation from the mean TEMP of the calibration period, not with time. That is, if the mean temp is X during calibration, as you get far above or below X your error bars increase. Not time.

      You are correct that the error bars increase with distance from the mean. However, they also increase with temporal distance from the observation period.

      Think about my earlier physical example. If we have a long thin dowel pinned by four nails at one end, the dowel can move much further at the free end (before being stopped by the nails) than it can at the end with the nails. We know that the reconstruction is within observations at the modern end, so the error must be small. We have no such assurance at the early end.

      All the best,

      w.

  68. bernie
    Posted Sep 3, 2008 at 6:23 AM | Permalink

    Willis:
    Thank you, you have elegantly stated what has disturbed me all along about what is going on here – too much certainty where the error terms are too large in relationship to the effects – or false precision.

    P.S. Anyone who has tried their hand at significant DIY projects will immediately understand your physical analogy and will have the scrap wood or crooked shelves to prove it!!

  69. Timo Hämeranta
    Posted Sep 3, 2008 at 6:41 AM | Permalink

    About comparisons with earlier reconstructed temperatures the Idsos in their CO2 Science Magazine Editorial state today

    “…the need for vegetation-based climate reconstructions to incorporate the effects of changes in atmospheric CO2 concentration, especially when attempting to compare late 20th-century reconstructed temperatures with reconstructed temperatures of the Roman and Medieval Warm Periods and the Holocene Climatic Optimum. Until this common deficiency is corrected, truly valid comparisons between these earlier times and the present cannot be made, for without properly adjusting for the growth- and water use efficiency-enhancing effects of the historical increase in the air’s CO2 content, reconstructed 20th-century temperatures — which must be used in place of actual measured values when making comparisons with earlier reconstructed temperatures — will be artificially inflated.”

    Ref: Gonzales, Leila M., John W. Williams, Jed O. Kaplan, 2008. Variations in leaf area index in northern and eastern North America over the past 21,000 years: a data-model comparison. Quaternary Science Reviews Vol. 27, No 13-14, pp. 1453-1466, July 2008

    • Tony Edwards
      Posted Sep 3, 2008 at 12:25 PM | Permalink

      Re: Timo Hämeranta (#79),

      Speaking as a complete amateur, your comment raises a possibly significant question. As I understand it, you are suggesting that, since increased CO2 has an enhancing effect on vegetation, adjustments need to be made to 20th century measurements to account for this effect. But this assumes that all pre-20th century measurements were of vegetation that had grown in a constant CO2 concentration. Indeed this seems to be an underlying assumption, but whence do we get any assurance that CO2 levels have not fluctuated in previous centuries, much as they have in the last?
      If there is no confidence in a constant concentration for earlier centuries, then the vegetation proxies are necessarily suspect as well. Or is this a bridge too far?

  70. Posted Sep 3, 2008 at 7:25 AM | Permalink

    As I’ve said earlier here, to see what happens in Mann’s smoothing, one should extract the variable ‘padded’ from lowpass.m. Here’s the CRU series used in Fig. 3:

    • mugwump
      Posted Sep 3, 2008 at 12:12 PM | Permalink

      Re: UC (#80),

      who knew climate time series were everywhere reflection symmetric?

  71. Posted Sep 3, 2008 at 8:21 AM | Permalink

    This is just astonishing. Who do these clowns think they are kidding?
    They paste over the instrumental record in a thick red line to create a hockey stick.
    Even worse, they select (please correct me if I’m wrong) the land-only, NH only, CRU temperature data for that thick red line in fig 3, because that is the one that gives the biggest possible increase.

    And why is the ‘CRU’ graph in fig 3 so different from the one in fig 2? In fig 2 it goes through 0.0 in about 1977 and 0.4 in about 1995, so would reach 0.5 or 0.6 in 2000 (consider extending fig 2 another 5 years, roughly to the edge of the colour bar). But in fig 3 it goes through 0.8 in 2000! And it’s supposed to be the same data?

    Could someone who has the data plot a graph of just the proxies, without the pasted over instrumental record?

    (Struggling hard not to use any f words)

  72. Luis Dias
    Posted Sep 3, 2008 at 9:02 AM | Permalink

    #80

    UC, for the layman, what’s up with that “CRU” graph? It extends to 2050? Did Mann incorporate some kind of computer projection into his smoothed line? Is that what you are saying?

  73. Craig Loehle
    Posted Sep 3, 2008 at 10:01 AM | Permalink

    UC: thank you for extracting the “padding” from the smoother–this is a reflection of the existing data upwards, which is a hilarious assumption that is just glossed over as if it is a valid thing to do. Using this approach, I could smooth out my measurements of my child’s height over time with this filter and conclude for a child age 15 that they would continue to grow to be 9 feet tall by age 30 when in fact they have just stopped growing.

  74. Steve McIntyre
    Posted Sep 3, 2008 at 10:25 AM | Permalink

    #80. UC, this is pretty funny.

    We observed a while ago that Mann’s vertical reflection method yields identical or near-identical results to Emanuel’s (now disowned) bin-and-pin method. In Pielke and my submission on hurricanes, we illustrated (in passing as it was incidental to our submission) the effect of this method using later data, which provoked foaming at the mouth by GRL reviewers (probably Holland and perhaps Elsner), one of whom called the analysis “fraudulent”, the other saying that it was already well-known in the literature. However, if the method shows temperatures going up, then anything goes, I guess.

    • Dave Dardinger
      Posted Sep 3, 2008 at 10:46 AM | Permalink

      Re: Steve McIntyre (#84),

      I notice you haven’t used the cool new click-and-paste link feature in the blog. Hasn’t anyone told you about it, or does it not work when you’re administrator?

      In case you haven’t been told, and for newbies, you just click the words below the comment number and it pastes a copy of the link properly coded and including the name of the poster. This makes it easy for the reader to go read the message being referred to. You can also include more than one link in a comment, though earlier at least the preview was a bit quirky with two or more links and I don’t know if it’s been fixed yet or not.

  75. Craig Loehle
    Posted Sep 3, 2008 at 10:34 AM | Permalink

    Steve: When reviewers foam at the mouth, you can be sure you are on to something…

  76. Steve McIntyre
    Posted Sep 3, 2008 at 10:40 AM | Permalink

    Can someone render the Matlab tables at:
    http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/data/reconstructions/cps/
    into ASCII or R?

    UPDATE: Problem solved. The package R.matlab has a function readMat that reads Matlab files. R programmers are so clever.
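
    For reference, a minimal sketch of that route (the .mat file name below is only a placeholder – substitute one of the actual files in the cps directory):

    library(R.matlab)
    rec <- readMat("some_reconstruction.mat")   # placeholder name, not an actual file in the archive
    str(rec)                                    # readMat returns a named list of the Matlab variables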

  77. Tony Edwards
    Posted Sep 3, 2008 at 12:26 PM | Permalink

    I meant that I’m the complete amateur, not you, Timo.

    Steve, will there ever be an edit function for stupid mistakes?

  78. Steve McIntyre
    Posted Sep 3, 2008 at 2:01 PM | Permalink

    Jean S, I looked at the “proxy” information for Sheep Mountain ca534, which shows values to 1998 in the “raw” data. They show a count of 3 trees continuing to 1998, though the data ends in 1990. The count information is fabricated. He shows the count as being the continuing value of the last measurement, even though the values are 0.

  79. Sam Urbinto
    Posted Sep 3, 2008 at 3:31 PM | Permalink

    “re-thought the optics”

    lol

    So how’s it looking so far to the outside world??

  80. Dave E.
    Posted Sep 3, 2008 at 5:30 PM | Permalink

    I am a complete novice at this, so can someone please tell me if I’m wrong & where I’ve gone wrong?

    First, they start their measurements at 1900 (a minimum in the 60-to-70-year temperature cycle) and compare it to the end of the century (a maximum). This gives a HUGE century increase in global temperature.
    THEN they skew the figures in the USA & Europe pre-78 downwards.
    THEN they skew the Africa figures downwards pre-50.

    Is this anywhere near an accurate summary of how they’re trying to scare the crap out of Joe Public?

    Dave E.

    • Posted Sep 4, 2008 at 3:07 AM | Permalink

      Re: Dave E. (#94), Have you read, top left under Favorite Posts, McKitrick’s What is the Hockey Stick about? I’m a near-newcomer here who senses the integrity of these threads while not understanding much of the stats. If you’re completely new to the sceptics’ position, read my primer, which also has links to other primers and to the best places to move on to.

      • kim
        Posted Sep 4, 2008 at 9:36 AM | Permalink

        Re: Lucy Skywalker (#104), Thank you, Lucy, for the link to your primer. I think you have a very powerful statement, there. I particularly like your analogy, the precedence of the ‘Malleus Maleficarum’, and the lovely necklace of pearls at the end. How can your work get greater distribution?
        ============================

  81. Posted Sep 3, 2008 at 5:36 PM | Permalink

    The problem with the reflection and the pinning of the moving average to the end points was also dealt with in the Rahmstorf review http://landshape.org/enm/rahmstorf-7-finale/.

    I think that because of the pinning of the moving average to the end point, the standard error of the moving average expands to the standard deviation (i.e. the benefit of n data points in se = sd/sqrt(n) is lost). The expanded confidence intervals are not represented accurately.
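
    A tiny R illustration of the point (my numbers, not from the paper): the mean of n points has standard error about sd/sqrt(n), but a smooth pinned to the final observation inherits that single point’s full standard deviation.

    set.seed(1)
    n <- 30; reps <- 10000
    x <- matrix(rnorm(reps * n), reps, n)
    sd(rowMeans(x))   # ~ 1/sqrt(30) = 0.18, the usual se of an n-point average
    sd(x[, n])        # ~ 1, the spread retained if the smooth is pinned to the endpoint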

  82. Steve McIntyre
    Posted Sep 3, 2008 at 9:04 PM | Permalink

    I can’t access CRU right now. Can someone see if http://www.cru.uea.ac.uk/cru/data/ is online right now? I gather that Phil Trans Roy Soc is still trying to get data from Briffa but hasn’t got it yet.

  83. Raven
    Posted Sep 3, 2008 at 9:09 PM | Permalink

    http://www.cru.uea.ac.uk/cru/data/ is dead for me too.

  84. Steve McIntyre
    Posted Sep 3, 2008 at 9:18 PM | Permalink

    Mann uses three series attributed in the SI table to Suk et al 1987. They seem to be about Korea. I can’t locate anything on Google Scholar, and PNAS did not require Mann to provide an accurate reference.

  85. Posted Sep 3, 2008 at 9:42 PM | Permalink

    RE #97, it’s dead for me now as well (11:41 PM EDT), both with the hotlink and by typing in the URL.

  86. MarkR
    Posted Sep 3, 2008 at 10:19 PM | Permalink

    The whole CRU website is down, not just the data.

  87. Posted Sep 3, 2008 at 11:17 PM | Permalink

    Re: Luis 82

    The blue series is the input to the filtfilt function; red is the output up to 2006. lowpassmin.m selects ‘minimum roughness’ in this case, and thus the prediction for future values is obtained using a mirror image, reflected with respect to both x and y about the final value.

    Values near the endpoints thus include predicted values instead of observed values, and comparison to middle points is not fair. In addition, these padding methods are very sensitive to noise:

    http://www.climateaudit.org/?p=1681#comment-114704

    Steve,

    However, if the method shows temperatures going up, then anything goes, I guess.

    But if temps go down, the method is abandoned,

    http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/

    We have recently changed the way that the smoothed time series of data were calculated. Data for 2008 were being used in the smoothing process as if they represented an accurate esimate of the year as a whole. This is not the case and owing to the unusually cool global average temperature in January 2008, it looked as though smoothed global average temperatures had dropped markedly in recent years, which is misleading.

  88. Geoff Sherrington
    Posted Sep 4, 2008 at 3:34 AM | Permalink

    http://www.cru.uea.ac.uk/cru/data/

    This is online for me at 0430 Central time USA.

    Briffa has a page with
    osrreconout115.dat (50kb)
    osrreconout125.dat (80kb)
    Reconstructions of summer temperatures at each of 24 grid point locations in the western U.S. In 115 the data span 1850–1983; in 125 they span 1600–1983. A line precedes each series, containing the mean and standard deviation. The data are in normalised form and can be transformed into degree Celsius anomalies by multiplying each value by the standard deviation, adding the mean, subtracting 1000, then dividing by 10 (see the sketch after this list). This will give values relative to the 1951-70 base period.
    osr115.dat (57kb) – above data as °C anomalies in spreadsheet layout
    osr125.dat (93kb)

    wus1600.dat (24kb)
    Regional mean reconstructions. Five series each spanning 1600 – 1983 as °C anomalies.

    reconyears.dat (60kb)
    quat88ts.dat (59kb) – reconstituted time-series from above, in spreadsheet layout
    From a Quaternary Research paper. 25 individual grid point reconstructions as a year-by-year shape map. Reconstituted, each grid point series spans 1750 – 1975.
    centscanukrec.dat (5kb)
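
    A minimal R sketch of the transformation described for the osrreconout files above (the numbers in the example call are hypothetical; the mean and standard deviation come from the header line preceding each series):

    to_anomaly <- function(values, series_mean, series_sd) {
      # normalised value -> degC anomaly relative to 1951-70, per the file notes:
      # multiply by the sd, add the mean, subtract 1000, divide by 10
      (values * series_sd + series_mean - 1000) / 10
    }
    to_anomaly(c(-1.2, 0, 0.8), series_mean = 1002, series_sd = 3)   # hypothetical header values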

  89. Willis Eschenbach
    Posted Sep 4, 2008 at 4:47 AM | Permalink

    OK, I’m tired of guessing. I’ve downloaded the Mann 2008 data using Steve’s great R script above. I end up with an object called “mann”, with the class “list”.

    But all I can get from it is a list of names of the proxies. I’ve tried everything I know to get to the actual data under the names … no joy.

    HELP!

    Thanks,

    w.

    Steve: Willis, if you type, for example, mann[[1]], what do you get? There should be 1209 data sets, labelled mann[[1]] to mann[[1209]].

    • Willis Eschenbach
      Posted Sep 4, 2008 at 1:28 PM | Permalink

      Re: Willis Eschenbach (#106), Steve, that’s what I thought too, but I get:

      >mann[[1]]
      Error in row.names.data.frame(x) : negative length vectors are not allowed

      All the best,

      w.

  90. Stan Palmer
    Posted Sep 4, 2008 at 7:22 AM | Permalink

    I understand that Mann selects proxies based on a correlation with the modern instrumental temperatures. I have also seen comments here that the proxies appear to have no common signal beyond that. Now, as a layman’s suggestion, would it not be desirable to calculate a similarity statistic between the selected proxies for periods that both include and exclude the instrumental period? I recall such a statistic being described previously on this blog. If there is a common signal, would this statistic not indicate its presence?

    Even to a layman, it is clear that a procedure that looks for a rising signal at the end of a period is mining for hockey stick shapes.
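
    A minimal R sketch (synthetic data standing in for the actual proxy matrix) of one such similarity statistic: the mean pairwise correlation among the selected proxies, computed separately inside and outside the instrumental period.

    set.seed(2)
    years   <- 1400:1995
    proxies <- matrix(rnorm(length(years) * 20), ncol = 20)   # 20 hypothetical proxies
    instr   <- years >= 1850                                  # instrumental period
    mean_pairwise_cor <- function(m) {
      cc <- cor(m)
      mean(cc[lower.tri(cc)])
    }
    mean_pairwise_cor(proxies[instr, ])    # common signal during the calibration period?
    mean_pairwise_cor(proxies[!instr, ])   # common signal before it?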

  91. Steve McIntyre
    Posted Sep 4, 2008 at 9:31 AM | Permalink

    I received the following email from a reader:

    Perhaps someone has brought it to your attention already, but submissions to PNAS communicated by Fellows of the NAS do not undergo the same peer-review process as an “ordinary” manuscript (IIRC this is a Track III submission, in the language of the journal). Instead the authors of the paper get to choose who the reviewers will be – hardly likely in my opinion to be as rigorous a process as conventional peer-review.

    The acknowledgements state:

    We are indebted to G. North and G. Hegerl for their valuable insight, suggestions, and comments and to L. Thompson for presiding over the review process for this paper.

    So I guess that Mann chose Hegerl and North.

  92. Phil.
    Posted Sep 4, 2008 at 10:17 AM | Permalink

    Re #108
    Steve, I disagree with your anonymous emailer. The procedure followed by PNAS is the same as that formerly used by Proc. Roy. Soc.: the paper is submitted by a Fellow who is identified in the rubric. In that case the Fellow is personally associating himself with the paper and, in my experience, often subjects it to more detailed scrutiny than an anonymous reviewer would.

    The paper was ‘communicated by Lonnie G. Thompson, Ohio State University, Columbus, OH, June 26, 2008 (received for review November 20, 2007)’.

    The rules for such a submission are: “An Academy member may “communicate” for others up to two manuscripts per year that are within the member’s area of expertise. Before submission to PNAS, the member obtains reviews of the paper from at least two qualified referees, each from a different institution and not from the authors’ or member’s institutions. Referees should be asked to evaluate revised manuscripts to ensure that their concerns have been adequately addressed. The names and contact information, including e-mails, of referees who reviewed the paper, along with the reviews and the authors’ response, must be included. Reviews must be submitted on the PNAS review form, and the identity of the referees must not be revealed to the authors. The member must include a brief statement endorsing publication in PNAS along with all of the referee reports received. Members should follow National Science Foundation (NSF) guidelines to avoid conflict of interest between referees and authors (see Section iii). Members must verify that referees are free of conflicts of interest, or must disclose any conflicts and explain their choice of referees. These papers are published as “Communicated by” the responsible editor.”

    So Thompson will have chosen the reviewers, the author doesn’t get to choose the reviewers, in fact they’re not supposed to know who they are.

    • bender
      Posted Sep 4, 2008 at 10:42 AM | Permalink

      Re: Phil. (#110),

      So Thompson will have chosen the reviewers, the author doesn’t get to choose the reviewers, in fact they’re not supposed to know who they are.

      True. But with such a tight social network, the authors don’t need to hand pick the reviewers in order to get a favorable review. That was the entire point of the Wegman social network analysis: impartial review is difficult to ensure in this incestuous field. The only way to ensure it is impartial is to have the entire process open to outside scrutiny.

      Wegman’s analysis is now three years old. It would be interesting to re-do it and see to what extent the network has grown to envelop Thompson, Hegerl and North.

      • Phil.
        Posted Sep 4, 2008 at 10:52 AM | Permalink

        Re: bender (#111),

        But that’s different from the original remark made by the emailer and would be as true no matter how the reviewers are chosen. There’s good reason to suppose that the reviewers were not North and Hegerl.

        • bender
          Posted Sep 4, 2008 at 11:07 AM | Permalink

          Re: Phil. (#112), Again, you are correct – on both counts. Without knowing the reviewers it is impossible to know whether they might have been in a position of conflict. Wegman’s analysis suggests that this is a legitimate concern. As a policymaker I would have been a lot more comfortable with this paper had Steve M been asked to review it – not as a replacement for a 2nd, but as a 3rd. You know he’s going to review it anyways, so why not bring him in early on? Answer: pure ego.

  93. Posted Sep 4, 2008 at 11:04 AM | Permalink

    Note that PNAS permits comments and corrections to published papers in the form of short letters, but only within 3 months of publication. Any errors detected after 3 months must be discussed in other venues.

    Presumably an online SI can provide the details of the objection merely outlined in the abstract-length letter.

  94. Nathan Kurz
    Posted Sep 4, 2008 at 12:39 PM | Permalink

    re: Lucy Skywalker #104

    That is a very good introduction, Lucy! I’d recommend it to anyone looking for a well-researched and wide-reaching background on the state of climate science, as well as the context which surrounds this new paper by Mann.

    I posted it to Reddit hoping it might reach a wider audience: From Climate Change Activist to Global Warming Skeptic…. A vote up, if you are a Reddit user, might help it be more visible. 🙂

    ps. There seems to be some problem with the ‘reply link’ and the preview function. Removing the quotes from the initial href seems to work, but shouldn’t be necessary. The bug only seemed to appear once I added another link in the body.

    • M. Jeff
      Posted Sep 4, 2008 at 9:41 PM | Permalink

      Re: Nathan Kurz (#115),

      ps. There seems to be some problem with the ‘reply link’ and the preview function. Removing the quotes from the initial href seems to work, but shouldn’t be necessary. The bug only seemed to appear once I added another link in the body.

      On my IE7, clicking the reply and paste link twice results in added text being previewed, but then two links are present instead of one. A single click followed by removing the quotes works as you mention, but on my system the “bug” is not dependent on the presence of another link.

      • MrPete
        Posted Sep 4, 2008 at 10:07 PM | Permalink

        Re: M. Jeff (#123), if you remove the quotes, you break the actual link. I’d be cautious about that. Apologies for the hassles… the hope is this is helpful to most, even if not for all (yet).

  95. Alastair
    Posted Sep 4, 2008 at 1:19 PM | Permalink

    I am the originator of the email Steve quoted regarding peer review at PNAS. There is further comment on the nature of the PNAS submission system here:

    http://pipeline.corante.com/archives/2008/08/28/pnas_read_it_or_not.php

    If I was mistaken in believing that the author of the article, rather than the Communicator, picks the reviewers, then I apologize for the slip.

  96. Steve McIntyre
    Posted Sep 4, 2008 at 2:11 PM | Permalink

    Willis, try again, works fine for me.

    download.file("http://data.climateaudit.org/data/mann.2008/mann.tab", "temp.dat", mode = "wb")  # fetch the collated proxy data
    load("temp.dat"); length(mann)  # 1209
    mann[[1]][1,]
    # year proxy count
    #1 1422 675 1
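
    As a follow-up (assuming each list element is a data frame with the year/proxy/count columns shown above, which is what the printout suggests), one series can be pulled out and plotted like this:

    series_1 <- mann[[1]]
    str(series_1)                        # check the columns
    plot(series_1$year, series_1$proxy, type = "l",
         xlab = "year", ylab = "proxy value", main = names(mann)[1])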

  97. claytonb
    Posted Sep 4, 2008 at 2:47 PM | Permalink

    How up to date is this page?
    http://www.climateaudit.org/?page_id=354

  98. Willis Eschenbach
    Posted Sep 4, 2008 at 2:53 PM | Permalink

    Steve, many thanks. I did it again from the start and it worked like a champ … go figure.

    w.

  99. Bob KC
    Posted Sep 4, 2008 at 3:52 PM | Permalink

    Gerald North comments on this paper in a Houston Chronicle blog by John Nielsen-Gammon here. Some excerpts:

    The original MBH paper stirred a hornet’s nest of protest from skeptics…The NAS Committee Report and the testimony by its members argued that the MBH study had some flaws in its first implementations of the statistical technique, but these had very little influence on the conclusions.

    They also used more sophisticated statistical procedures throughout, benefiting from the research conducted by themselves and many other investigators over the last decade…it appears that the present warm up-swing in temperatures coinciding with the Industrial Revolution and accelerating over the last century is remarkably different from any in the last two thousand years.

    • Kenneth Fritsch
      Posted Sep 4, 2008 at 5:01 PM | Permalink

      Re: Bob KC (#120),

      Should we expect anything more critical or in-depth from someone who claims to merely wing these things around a conference table? I think not, so let’s stay tuned for what someone who will truly analyze these findings finds.

  100. Jean S
    Posted Sep 5, 2008 at 3:26 AM | Permalink

    Again, Mann is not using exactly Schneider’s RegEM. The only real difference in the main code
    http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/code/codeprepdata/infillproxydata/newregem.m
    I can spot is this line:
    peff_ave = peff_ave + peff*pm; % add up eff. number of variables
    In the original code
    http://www.gps.caltech.edu/~tapio/imputation/regem.m
    it is
    peff_ave = peff_ave + peff*pm/nmis; % add up eff. number of variables
    nmis seems to be the number of missing values. Can someone find an explanation of this modification or explain its effect? It seems odd to make only one modification to someone’s working code if there is no real effect at all. The SI refers to Mann et al (2007), but I couldn’t find an explanation there either; it seems they were already using that modification there.
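
    A toy R transcription (my own numbers, not the actual Matlab loop) of how the two accumulations differ: with pm the number of missing values in each record and nmis their total, Schneider’s line produces a weighted average of peff, while Mann’s line produces the corresponding weighted sum, i.e. the same quantity multiplied by nmis.

    pm   <- c(5, 2, 3)        # missing values per record (hypothetical)
    peff <- c(10, 12, 8)      # effective number of variables per record (hypothetical)
    nmis <- sum(pm)           # total number of missing values

    sum(peff * pm / nmis)     # Schneider's regem.m accumulation: 9.8
    sum(peff * pm)            # Mann's newregem.m accumulation: 98, i.e. nmis times larger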

    • K. Hamed
      Posted Sep 5, 2008 at 3:54 AM | Permalink

      Re: Jean S (#125),

      The problem seems to be the numbers in the href string! Can this be fixed?

      • K. Hamed
        Posted Sep 5, 2008 at 3:56 AM | Permalink

        Re: K. Hamed (#126),

        Apparently, it affects the preview only.

        \alpha_i^2 ….Just checking

        • K. Hamed
          Posted Sep 5, 2008 at 3:57 AM | Permalink

          Re: K. Hamed (#127),

          Sorry for taking space, but the tex preview is not working for me either. It posts correctly though.

  101. Luis Dias
    Posted Sep 5, 2008 at 4:42 AM | Permalink

    As expected, Real Climate has posted about this already. From what I see in the comments, skepticism is being heavily censored.

  102. MC
    Posted Sep 5, 2008 at 6:31 AM | Permalink

    I read this paper over lunch, got as far as the Results section, and stopped. I didn’t want to read any more, for two main reasons:

    First, after describing the CPS and EIV methods, in the paragraph before the Data section, they write:

    For the EIV approach, which does not require that proxy data represent local temperature variations

    Then it goes on to describe the various schemes used. However, this statement is a massive assumption or, more to the point, just plain wrong, unless it has been shown that individually defining proxy relationships and combining them gives a result similar to the non-local EIV approach. Even as I write this sentence, I have to shake my head: you must calibrate proxies to the local environment, otherwise they aren’t usable and the relationship isn’t repeatable.

    Secondly, when they describe how they calibrated (regressed) the various proxies:

    Where the sign of the correlation could a priori be specified (positive for tree ring data, ice core oxygen isotopes, lake sediments, and historical documents, and negative for coral oxygen-isotope records), a one-sided significance criterion was used.

    I assume here that they are taking a proxy and forcing a relationship on it. What if a coral oxygen-isotope record has a positive correlation? How is this dealt with? It appears simply to be rejected (a sketch of this one-sided screening follows below). My issues with this statement are 1) NO REFERENCES for available, defined proxy relationships and 2) NO REPEATABILITY of relationships to calibrate with or relate to any previous data set.
    This paper would be substantially more believable if they had gone through an exercise in defining all the proxies used (are they solely variable with temperature?) and produced an empirical data set with errors. They could then have compared a PCA approach and seen how it related to a CPS (without weighting) approach.
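
    A minimal R sketch of the screening step as I read that sentence (the significance level, data, and a priori signs here are placeholders, not the paper’s actual code): a proxy passes only if its correlation with the local instrumental record is significant in the pre-specified direction.

    screen_proxy <- function(proxy, temp, expected_sign = +1, alpha = 0.10) {
      alt <- if (expected_sign > 0) "greater" else "less"
      ct  <- cor.test(proxy, temp, alternative = alt)
      ct$p.value < alpha
    }
    # A coral oxygen-isotope series with a positive correlation would simply fail the one-sided test:
    set.seed(3)
    temp  <- rnorm(100)
    coral <- temp + rnorm(100)                       # positively correlated: the "wrong" sign for coral d18O
    screen_proxy(coral, temp, expected_sign = -1)    # FALSE: rejected rather than investigated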

    Lastly, and as an additional point, they don’t use r2, having shown to their own satisfaction (those pesky Wahl and Ammann papers) that it is deficient for evaluating reconstruction skill. Yet we all know that this is wrong, as r2 has been adequate every other time it is used in other fields (a conventional computation of the verification statistics at issue is sketched below).
    I am ashamed as a physicist that a ‘scientist’ is applauded for making such basic errors in method.
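
    For reference, here is a generic R computation (not the paper’s code) of the two verification statistics usually at issue in these discussions: r2 over the verification period, and the reduction of error (RE) relative to the calibration-period mean.

    verification_stats <- function(obs, recon, calib_mean) {
      r2 <- cor(obs, recon)^2                                     # squared correlation over the verification period
      re <- 1 - sum((obs - recon)^2) / sum((obs - calib_mean)^2)  # RE: skill relative to the calibration mean
      c(r2 = r2, RE = re)
    }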

    • bender
      Posted Sep 5, 2008 at 6:38 AM | Permalink

      Re: MC (#130),
      As an experienced reviewer, I’d say that arrogant little explanatory phrase (“which does not require that proxy data represent local temperature variations”) looks to have been inserted after the fact in response to a reviewer’s complaint.

    • Posted Sep 5, 2008 at 10:43 AM | Permalink

      Re: MC (#130),

      see also

      Mann’s Climate Over the Past Two Millennia ( Annu. Rev. Earth Planet. Sci. 2007. 35:111–36 )

      CFR methods do not require that a proxy indicator used in the reconstruction exhibit any local correlation with the climate field of interest

  103. Kenneth Fritsch
    Posted Sep 5, 2008 at 11:19 AM | Permalink

    Bender says:

    As an experienced reviewer, I’d say that arrogant little explanatory phrase (“which does not require that proxy data represent local temperature variations”) looks to have been inserted after the fact in response to a reviewer’s complaint.

    I saw several passages in this paper where I thought I detected editing in the same tone as in your example. I am thinking, though, that it could have been in response to a coauthor as well as a reviewer. The explanatory phrases, to use your terminology, seemed very Mannish to me, although I am not all that familiar with the coauthors.

    In general I see a number of significant concessions in this paper that appear either hidden or qualified with the “explanatory phrase”. It is almost as if the coauthors and/or reviewers were indicating to Mann that he had to rein it in a bit and offered their comments, to which Mann could respond with an added explanatory phrase. Alternatively, I could be suffering from an overactive imagination. Anyway, I want to excerpt some of these statements from this paper in a future post.

  104. Kenneth Fritsch
    Posted Sep 5, 2008 at 5:21 PM | Permalink

    The Mann et al. paper that Steve M is in the process of critiquing seems to have a number of problems not unlike those of the original and its progeny. My rereading of this paper leaves me with the impression that, given all the warts, there are some rather major concessions made in it, albeit in Mannian fashion. Below I list some of those that, in this layperson’s view, are significant.

    1. From page 13257, we have: “Given the uncertainties, the SH and global reconstructions are compatible with the possibility of warmth similar to the most recent decade during brief intervals of the past 1,500 years”.

    That comment to me, in terms of any expectations from a Mann-authored paper on this topic, is a concession of the magnitude of point, game, set and match – for his opponents.

    2. From page 13254, we have: “However, in the case of the early calibration/late validation CPS reconstruction with the full screened network (Fig. 2A), we observed evidence for a systematic bias in the underestimation of recent warming. This bias increases for earlier centuries where the reconstruction is based on increasingly sparse networks of proxy data. In this case, the observed warming rises above the error bounds of the estimates during the 1980s decade, consistent with the known ‘‘divergence problem’ (e.g., ref. 37), wherein the temperature sensitivity of some temperature-sensitive tree-ring data appears to have declined in the most recent decades.”

    Which in my mind is another way of saying that the authors concede that historical global temperatures might have been as warm as the current ones, and that if the instrumental data were not placed so prominently on the graphs, one would see that, over the entire proxy period taken by itself, the reconstructed global (and probably NH) temperatures of the past would have exceeded those of the present.

    3. From page 13254, we have: “Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records.”

    Which in my mind is the same as saying that not only is the tree ring data suspect out-of-sample, so are the non-dendro reconstructions. Therefore we can say that not only tree ring reconstructions but also those using other proxies will essentially show that, over the entire period and without interjecting the instrumental data, we have reason to believe that the temperatures of recent decades have not exceeded those of the past 1,600 years.

    4. From page 13255, we have: “Peak Medieval warmth (from roughly A.D. 950-1100) is more pronounced in the EIV reconstructions (particularly for the landonly reconstruction) than in the CPS reconstructions (Fig. 3). The EIV land-only reconstruction, in fact, indicates markedly more sustained periods of warmer NH land temperatures from A.D. 700 to the mid-fifteenth century than previous published reconstructions. Peak multidecadal warmth centered at A.D. 960 (representing average conditions over A.D. 940–980) in this case corresponds approximately to 1980 levels (representing average conditions over 1960–2000). However, as noted earlier, the most recent decadal warmth exceeds the peak reconstructed decadal warmth, taking into account the uncertainties in the reconstructions.”

    That excerpt concedes, in my view, that the original Mann reconstruction and its progeny were too hasty in doing away with the MWP, while at the same time indicating that we should not expect the authors to note the 1980s divergence in the same breath as this statement, or to clearly show the reconstruction carried from the past into the present. That is left as a Mann-imposed exercise for the readers, as are the findings in this paper suggesting that the authors judge the EIV version, with its higher past temperatures, to be more reliable than the CPS version.

    The foregoing may not be the interpretation that a Gerry North would give this paper, but I think mine is a reasonable one.

  105. sd
    Posted Dec 14, 2009 at 11:16 AM | Permalink

    “…in the case of the early calibration/late validation CPS reconstruction with the full screened network (Fig. 2A), we observed evidence for a systematic bias in the underestimation of recent warming. … Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records.”

    Or to read it another way, why aren’t proxies able to reproduce recent warming?