The Hockey Stick and the Milankovitch Theory

The 20th century warming counters a millennial-scale cooling trend which is consistent with long-term astronomical forcing.


According to the UMass researchers, the 1,000-year reconstruction reveals that temperatures dropped an average of 0.02 degrees Celsius per century prior to the 20th century. This trend is consistent with the “astronomical theory” of climate change, which considers the effects of long-term changes in the nature of the Earth’s orbit relative to the sun, which influence the distribution of solar energy at the Earth’s surface over many millennia.

“If temperatures change slowly, society and the environment have time to adjust,” said Mann. “The slow, moderate, long-term cooling trend that we found makes the abrupt warming of the late 20th century even more dramatic. The cooling trend of over 900 years was dramatically reversed in less than a century. The abruptness of the recent warming is key, and it is a potential cause for concern.”

MBH99 Press Release

The long-term [northern] hemispheric trend is best described as a modest and irregular cooling from AD 1000 to around 1850 to 1900, followed by an abrupt 20th century warming.

IPCC TAR WG I: The Scientific Basis

The above figure has four familiar-looking graphs. One of them is the original Hockey Stick, and three are “fake”. Can you tell which one is the real one?

Although most of the original Hockey Stick methods have been uncovered, a few oddities remain. Apart from the confidence interval calculation, there has been another mystery relating to MBH99. This is remarkable, as the rather short MBH99 paper seems, on the surface, to be a simple extension of MBH98: a step (AD 1000-1399) is added to the existing MBH98 NH temperature reconstruction using the same methodology. However, a wealth of material in the four-page paper is devoted to “correcting” the (Mannian) North American tree-ring series PC #1. How exactly, or even why, this was done has been somewhat of a mystery. Two years ago Steve wrote notes about the issue (here, here, and here); it is worth reviewing those before continuing with this post.

The problem with the methods described by Steve was that they could not actually have been used. The reason is that it is easy to see from the published data that the actual “correction”, or “fix”, applied was piecewise linear. There is simply no way such a function could be obtained from the original data with any type of smoothing operation.

For the calculations Steve was using his private copy of Mann’s later destroyed UVA ftp archive, infamously known for the CENSORED directories. For the rest of us, the data archived there has been unreachable, until now. The FOIA documents contain an MBH data directory structure obtained by Tim Osborn sometime back in 2003. It can be argued that the UVA ftp site was originally specially prepared by Scott Rutherford for Osborn, but that is another story. In any case, the files in Osborn’s archive seem to correspond to those originally located on the UVA ftp site. The files in the directory TREE/COMPARE relate to the PC1 “fixing”.

While I was checking the files, I noticed a FORTRAN code, “residualdetrend.f”, which I had not seen discussed anywhere. At the beginning of the file there is a comment:

c      regress out co2-correlated trend (r=0.9 w/ co2)
c      after 1800 from pc1 of ITRDB data

Wow! Exactly the same comment is found in “co2detrend.f” discussed by Steve here. Further down, we find

c      linear segments describing approximate residuals
c      relative to fit withrespect to secular trend

Indeed, there it was: code removing a piecewise linear segment from the PC1. Furthermore, I found that the segment matched nicely with the “secular trend in residuals” graph in MBH99 Figure 1(b). Mystery solved. Well, kind of.
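For the record, subtracting a piecewise linear “fix” of this kind is a one-line operation; the sketch below shows the general shape of it. The breakpoint years and offsets are made-up placeholders for illustration, not the values in “residualdetrend.f”:

```python
import numpy as np

def piecewise_fix(years, break_years, break_vals):
    """Piecewise linear 'fix': linear interpolation between breakpoint
    (year, offset) pairs, zero outside the covered range."""
    return np.interp(years, break_years, break_vals, left=0.0, right=0.0)

# Hypothetical breakpoints: a fix that is zero before 1800 and grows afterwards
years = np.arange(1000, 1981)
rng = np.random.default_rng(0)
pc1 = rng.normal(size=years.size)             # stand-in for the PC1 series
fix = piecewise_fix(years, [1800, 1900, 1980], [0.0, 0.15, 0.35])
pc1_fixed = pc1 - fix                          # subtract the segments, as the code does
```

No smoothing of the original series can produce a function with sharp kinks like this, which is how the piecewise linear nature of the published “fix” was detectable in the first place.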

Now the question was: what the heck, then, is “co2detrend.f”?! I noticed that both codes output to a file named “pc01-fixed.dat”. The FOIA files include such a file, and its content matches the output of “residualdetrend.f”. So IMO it can safely be assumed that Mann tried another CO2 “adjustment”, but for some reason ended up with the one described in “residualdetrend.f” (why approximate the “secular trend” is another new Mannian mystery).

After establishing this, I had another surprise. I noticed that there is also a file, “pc1-fixed-old.dat”, which I presumed to be the output of “co2detrend.f”. Well, it turned out that the “fix” contained in the file was neither of the methods described so far. Thus Mann had at least three methods for “adjusting” his PC1! Here is a plot of the different “fixes” (to be subtracted from the original PC1) uncovered so far.

A natural question now is: why is the fix that was used “better” than the ones disregarded? Maybe the “skill” measures used by Mann contain the answer. MBH99:

The calibration and verification resolved variance (39% and 34% respectively) are consistent with each other, but lower than for reconstructions back to AD 1400 (42% and 51% respectively – see MBH98).

I (like Steve and UC) have been able to emulate the main MBH procedure for a while. In particular, my emulation of the AD 1000 step is exact. So I ran the algorithm, but replaced the “fixed” PC1 with each of the other two “fixed” PCs. For the “co2detrend.f” fix, the calibration and verification REs are 0.37 and -0.09, respectively. So even by Mann’s standards (negative RE), that “fixed” PC had to be disregarded. For the “old” fix, the RE scores were 0.37 and 0.20, so I guess they are not “consistent with each other”, and maybe this was the reason for trying yet another fix. However, the real surprise came when I tried the algorithm with the original Mannian PC1, i.e., without any “fixing”. The RE scores are 0.38 and 0.33, so based on these “skill metrics” there is no reason to “fix” the PC in the first place!
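For readers unfamiliar with the RE (reduction of error) statistic: in its standard form it compares the reconstruction’s squared error against that of simply predicting the calibration-period mean. A minimal sketch (my paraphrase of the standard definition, not Mann’s code):

```python
import numpy as np

def reduction_of_error(obs, rec, cal_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-period mean).
    RE = 1 is a perfect fit; RE < 0 means the reconstruction does worse
    than simply predicting the calibration mean."""
    sse_rec = np.sum((obs - rec) ** 2)
    sse_mean = np.sum((obs - cal_mean) ** 2)
    return 1.0 - sse_rec / sse_mean

# Toy check: a reconstruction equal to the observations gives RE = 1
obs = np.array([0.1, -0.2, 0.3, 0.0])
assert reduction_of_error(obs, obs, obs.mean()) == 1.0
```

Computed separately over the calibration and verification periods, this gives the pairs of scores quoted above.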

It gets more interesting: MBH99 has a linear trend (1000-1900, as in the IPCC figure) of -0.020°C/century, but without the PC1 “adjustment” the cooling trend is reduced to less than -0.005°C/century! MBH99:

The substantial secular spectral peak is highly significant relative to red noise, associated with a long-term cooling trend in the NH series prior to industrialization (δT = -0.02°C/century). This cooling is possibly related to astronomical forcing, which is thought to have driven long-term temperatures downward since the mid-Holocene at a rate within the range of -0.01 to -0.04°C/century [see Berger, 1988].
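The quoted trend figures are ordinary least-squares slopes over the AD 1000-1900 interval, converted to °C/century. A quick sketch of how such a number is computed (the series here is synthetic, constructed to have an exact -0.02°C/century trend):

```python
import numpy as np

def trend_per_century(years, temps):
    """OLS slope of temperature on year, expressed in degC per century."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return slope_per_year * 100.0

# Toy series cooling at exactly -0.02 degC/century over AD 1000-1900
years = np.arange(1000, 1901)
temps = -0.0002 * (years - 1000)
print(round(trend_per_century(years, temps), 3))  # → -0.02
```

The difference between -0.020 and -0.005°C/century is thus entirely a property of the low-frequency shape of the series, which is exactly what the PC1 “fix” alters.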

Finally, the answer to the question posed at the beginning: the original Hockey Stick is Exhibit B. Exhibit C is obtained using the “old” AD 1000 NOAMER PC1 “fix”, keeping everything else the same in the Mannomatic. Exhibit D corresponds to the “co2detrend.f” fix, and Exhibit A is obtained using the original Mannian PC1 (no fixing). (Click below to see an animated GIF of the different versions.)


  1. pete m
    Posted Feb 3, 2010 at 6:28 AM | Permalink | Reply

    That’s awesome detective work Jean.

    Now can someone kindly put this all in English?

  2. Posted Feb 3, 2010 at 7:54 AM | Permalink | Reply

    My MBH99 code is here (see the update; one change was needed due to a Climate Audit update)


    I do not believe that global mean annual temperatures have simply cooled progressively over thousands of years as Mike appears to and I contend that there is strong evidence for major changes in climate over the Holocene (not Milankovich) that require explanation and that could represent part of the current or future background variability of our climate.

    Update 31 Mar 2011: code is now here:

  3. Mark Cooper
    Posted Feb 3, 2010 at 8:38 AM | Permalink | Reply


    A bit off topic, but I have asked this question several times in CA comments without an answer (probably lost in the general commentary). Seeing as you actually work with the data, can you please tell me why the error margin improves massively around AD 1600 on all the Mann et al. plots? If the reason is a reduction in the quantity of source data, such as fewer tree-ring proxies or fewer thermometers, then there should also be an offset in the main trend line; at least that’s what happens with my own data (unrelated to climate). Any explanation from anyone would be much appreciated.

    • Jean S
      Posted Feb 3, 2010 at 8:46 AM | Permalink | Reply

      Re: Mark Cooper (Feb 3 08:38),
      Mann calculates “error margins” from the calibration error. The number of proxies increases considerably after around AD 1600, and hence later steps “fit” better and have narrower “confidence intervals”. I do not understand what you mean by the offset thing.

      • Skip Smith
        Posted Feb 3, 2010 at 4:14 PM | Permalink | Reply

        I think Mark is arguing that adding in new data would likely shift the mean of the series up or down, as well as affect the confidence intervals.

        • Kenneth Fritsch
          Posted Feb 3, 2010 at 4:36 PM | Permalink

          I think Mark is not seeing the sequence correctly here. The trend has been determined, and now the confidence interval of the trend line is required. Enter Mann: you obtain those from the variability (standard deviation of the trend) in the calibration period. The width of the error bars (CIs) will then change around the trend line as the number of samples changes with time.

    • Posted Feb 3, 2010 at 4:40 PM | Permalink | Reply

      You can actually replace all proxies with some random process (red or white noise, for example), and you’ll get error margins similar to those in MBH98. But then your verification RE will very likely be negative. To obtain a positive verification RE, it is sufficient to include PC1 for each step and replace all other proxies with, say, AR(1) p=0.9 noise. You’ll need some trial & error to get all REs positive, but trial & error is what Mann seems to do all the time, so it is ok.

      • Posted Feb 4, 2010 at 9:14 AM | Permalink | Reply

        Actually, the original PC1 is not needed for positive verification REs. One can just apply partially centered PCA to trendless red noise [1] and take the first ‘noise PC’. It won’t take many simulations to obtain a hockey stick that passes the positive RE requirement.

        [1] McIntyre, S., and R. McKitrick (2005), Hockey sticks, principal components, and spurious significance, Geophys. Res. Lett., 32, L03710, doi:10.1029/2004GL021750.
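        The short-centering effect described here (per McIntyre & McKitrick 2005) can be sketched as follows: each red-noise series is centered on its last few decades only, instead of its full length, before the PCA, which preferentially loads late-period excursions onto PC1. This is a schematic of the centering step with made-up dimensions, not the MBH code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series, cal = 581, 50, 79  # e.g. AD 1400-1980, 79-year calibration window

# Trendless AR(1) "red noise" proxies with phi = 0.9
X = np.zeros((n_years, n_series))
for t in range(1, n_years):
    X[t] = 0.9 * X[t - 1] + rng.normal(size=n_series)

# Mannian short centering: subtract the mean of the LAST cal years only,
# instead of the full-record mean
X_short = X - X[-cal:].mean(axis=0)

# PC1 as the leading left singular vector of the short-centered matrix
pc1 = np.linalg.svd(X_short, full_matrices=False)[0][:, 0]
```

        Repeating this over many noise realizations and selecting by RE is the simulation exercise being described.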

    • Posted Feb 3, 2010 at 4:49 PM | Permalink | Reply

      If the reason is a reduction in quantity of source data, such as fewer tree ring proxies, or fewer thermometers etc, then there should also be an offset in the main trend-line- at least that’s what happens with my own data (unrelated to climate)

      Do you mean that the variability about the mean value should increase as the number of proxies decreases? They use variance matching to get rid of such annoyances. That method is AFAIK not well known in the calibration literature.
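      A minimal sketch of variance matching, assuming the simple form of rescaling the reconstruction to the mean and variance of the instrumental target over the calibration period (my paraphrase, not the MBH implementation):

```python
import numpy as np

def variance_match(rec, target, cal):
    """Rescale rec so that, over the calibration indices cal, it has the
    same mean and standard deviation as target."""
    r, t = rec[cal], target[cal]
    return (rec - r.mean()) * (t.std() / r.std()) + t.mean()

# Toy example: after matching, calibration-period mean and std agree exactly
rng = np.random.default_rng(2)
rec = rng.normal(size=200)
target = 0.5 * rng.normal(size=200) + 0.1
cal = slice(150, 200)
matched = variance_match(rec, target, cal)
```

      The side effect is that any change in variability caused by a changing proxy count is rescaled away rather than reflected in the reconstruction.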

  4. Steve McIntyre
    Posted Feb 3, 2010 at 8:44 AM | Permalink | Reply

    Jean S, this is great analysis. As always, every bizarre Mannian adjustment has a reason.

    • Jean S
      Posted Feb 3, 2010 at 8:55 AM | Permalink | Reply

      Re: Steve McIntyre (Feb 3 08:44),
      Thanks. Yes, all the Mannian adjustments are rather simplistic (once you figure them out), but they all seem to have a reason. Figuring them out is the problematic part. On the other hand, some things are truly weird. For instance, why did he decide to “approximate” data he had readily at hand? I could get practically no difference by using the true “secular trend residual” fix.

  5. Jimchip
    Posted Feb 3, 2010 at 8:55 AM | Permalink | Reply

    Thank you, Jean. And, I have to say, very well written. I’ll also say I understood almost everything but I could not have ever done it.

    Technically, those animated gifs and blinkys and… are very neat tools.

  6. HectorMaletta
    Posted Feb 3, 2010 at 9:06 AM | Permalink | Reply

    Given the number and bizarre nature of the many weird fixes and counter-fixes in the process of data-torturing leading to the final hockey stick, perhaps “mannian” should be replaced by “manniac”.

  7. Craig Loehle
    Posted Feb 3, 2010 at 9:07 AM | Permalink | Reply

    Kind of squishy, isn’t it, when one can try all sorts of models and only report the “best” one (where “best” is never described and looks subjective)?

    • HectorMaletta
      Posted Feb 3, 2010 at 9:24 AM | Permalink | Reply

      It is a combination of (1) one of the most extreme forms of ‘publication bias’, a widespread vice of contemporary science whereby only positive results get published, and (2) a new manifestation of the old trick of ‘torturing the data till they confess’. In this particular case, motivated also by an advocacy drive mixed with ruthless competition for research funds and insufficiently impartial peer review.

    • Experimentalphysicist
      Posted Feb 3, 2010 at 9:32 AM | Permalink | Reply

      John P. Ioannidis has a little gem of a paper hidden in the medical literature:
      “Why most published research findings are false”,PLoS Med. 2005;2:e124

      It is a highly readable account of some of the fallacies of statistical reasoning. A few highlights relevant for the present case:

      “Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results.”

      “Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias”

      “the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.”

    • Jean S
      Posted Feb 3, 2010 at 9:43 AM | Permalink | Reply

      Re: Craig Loehle (Feb 3 09:07),
      I think the most amazing thing is that these same people have been loudly telling how “robust” their results are!

  8. Bernie
    Posted Feb 3, 2010 at 9:54 AM | Permalink | Reply

    Jean S:
    This is a very elegant piece of detective work. I am still puzzled as to why the fixes. You frame the question but do not seem to offer an explanation. The version chosen – Exhibit B – has little going for it except its alignment with the presumed long term cooling trend.

    • Jean S
      Posted Feb 3, 2010 at 10:11 AM | Permalink | Reply

      Re: Bernie (Feb 3 09:54),
      Well, without a fix (Exhibit A) the linear trend is -0.0047°C/century, and there is not enough cooling to be associated with the Astronomical Theory of Climate Change. See also whether you can find any support for the following statement (from the MBH99 press release) in Exhibit A (my bold):

      The latest reconstruction supports earlier theories that temperatures in medieval times were relatively warm, but “even the warmer intervals in the reconstruction pale in comparison with mid-to-late 20th-century temperatures,” said Hughes.

      • Bernie
        Posted Feb 3, 2010 at 11:01 AM | Permalink | Reply

        It is hard to believe that other scientists are letting them get away with this.

        Plus, how is any CO2 adjustment legitimate given that the CO2 signal is presumably part of another PC? Is there logic that bizarre?

        • Bernie
          Posted Feb 3, 2010 at 11:02 AM | Permalink

          Sorry “their logic”

        • cheesegraterco2
          Posted Feb 3, 2010 at 3:56 PM | Permalink

          Regarding the smearing in of historic CO2 data in thousand year hockey sticks, I wrote previously:

          “Just because it sounds stupid doesn’t mean it’s not true.”

          I have several comments on the thread about the Antarctic CO2 concentration (not isotope) data from the 1988, 1996 and 1998 Etheridge papers. These all present multi-century hockey sticks because the blades are anthropogenic signals. Mann et al., I strongly suspect, are smearing this data into their reconstructions, thereby necessarily reproducing hockey sticks. They might even have got the idea for adding an “instrumental” series on graph tails from Ether. 1988.

          I know, it seems too crazy, even criminal. You cannot believe it.

          But they do. I think I have an advantage: I have engaged the believers as a student, not an equal. I wanted the ABC’s, and it became clear to me there is a strong element of blind faith involved, and that they believe the 1000-2000 year CO2 concentration record and temperature are one and the same. At Copenhagen the posters said “You Control the Climate”. Not affect, but control. The anthrowarmists believe it.

  9. dearieme
    Posted Feb 3, 2010 at 10:18 AM | Permalink | Reply

    The characteristic of the work of the Team gangsters that I find surreal is that so many of them fanny about doing something that bears a passing resemblance to science, without, apparently, having a clue as to what doing real science is like.

  10. Posted Feb 3, 2010 at 10:47 AM | Permalink | Reply

    Thanks, Jean — I’ll have to study your results more closely.

    My take on this back on Steve’s 11/13/07 post “The MBH99 ‘CO2 Adjustment’” was that MBH99 had simply hand-fudged the portion of the curve after 1700 to make it show more cooling prior to the 20th c. They did this by splicing in the low frequency of a series that had the “right” shape in place of the low frequency of their actual series. This was obfuscated by calling it a CO2 adjustment, but in fact the numerical values of their CO2 series were never used for anything, nor was the substitute series numerically calibrated to anything. See

    My bottom line was that while the adjustment was entirely bogus, it was not actually hidden if their text was carefully parsed. It was, however, obfuscated by unnecessarily complicating its explanation.

  11. EdeF
    Posted Feb 3, 2010 at 11:22 AM | Permalink | Reply

    I cheated, I have a copy of Bishop Hill’s “The Hockey Stick Illusion” right in front of me. Figure B is on the cover.

    • Jean S
      Posted Feb 3, 2010 at 11:37 AM | Permalink | Reply

      Re: EdeF (Feb 3 11:22),
      I got my copy on Monday, and I think it is an excellent book. I highly recommend it. However, I think BH should redraw the cover for the next edition, the shaded “hockey stick” is drawn in the wrong orientation in the light of this post ;) It should be more like this:

  12. R.S.Brown
    Posted Feb 3, 2010 at 11:38 AM | Permalink | Reply

    Indeed. Mann, et al. 1999, managed to infect/influence some of the basic research on
    “solar forcing”:

    So NCDC & NASA have allowed these Mannian statistics in all over the place.

  13. tommoriarty
    Posted Feb 3, 2010 at 11:53 AM | Permalink | Reply

    Please take a moment to look at my comments concerning the Luterbacher “proxies” used by Mann in his 2008 version of the hockey stick. I call these the “amazing multiplying proxies” because Mann uses 71 separate Luterbacher proxies, but the data for all of them prior to about 1750 come from the same 10 or so “documentary information” sources.

    Comments and criticisms are appreciated.

    You can see them here.

    Best Regards,
    Tom Moriarty

  14. John From MN
    Posted Feb 3, 2010 at 12:02 PM | Permalink | Reply

    Is there any other reason to have those RED bars at the end of the graph other than as a scare tactic? It would be nice if the darn RED bars did not cover up the actual data. Tricks of smoke and mirrors…. Can’t they just present data like scientists instead of scare mongers? They take us all for fools…. I learned a new term today that describes many of the players in the AGW arena: the Dunning–Kruger effect. Here is the explanation of this term of endearment :~) ……….how apropos…… John…

  15. Jan H
    Posted Feb 3, 2010 at 12:21 PM | Permalink | Reply


    and the comments. (For instance, E#21 written by Lau )

    If global temperature is going up, some device must show that fact. Or?

  16. Posted Feb 3, 2010 at 3:16 PM | Permalink | Reply

    The RE scores are 0.38 and 0.33, so based on these “skill metrics” there is no reason to “fix” the PC in the first place!

    ..or you can ‘fix’ in the other direction and make PC1 even more HS-like (by adding a positive linear trend):

    positive REs, so I guess this is ok as well.

  17. Tom C
    Posted Feb 3, 2010 at 3:20 PM | Permalink | Reply

    Whatever the fix, it’s sobering to keep in mind that we are talking about a handful of tree rings. No way they can be tortured enough to yield such precise conclusions.

  18. Posted Feb 3, 2010 at 3:20 PM | Permalink | Reply

    Figure missing,

  19. Tom C
    Posted Feb 3, 2010 at 3:45 PM | Permalink | Reply

    One has to wonder what it is like for Mann, Briffa to log on to CA and see their mis-deeds (which they had assumed were safely tucked away out of sight in their minds and in servers somewhere) explained in public with such clinical precision.

    • Craig Loehle
      Posted Feb 3, 2010 at 6:07 PM | Permalink | Reply

      And when do you think that has ever happened?

      • Tom C
        Posted Feb 3, 2010 at 7:21 PM | Permalink | Reply

        Sorry – Don’t get your drift.

  20. Kenneth Fritsch
    Posted Feb 3, 2010 at 5:00 PM | Permalink | Reply

    I think that the effort and detective instincts of Jean S, Steve M and UC are sometimes under appreciated by those of us who do not understand why these errors were not found by scientists more professionally involved with the subject matter.

    The analytical work is not easy and the output has to be matched by supposing what was being attempted initially. The opaque language of the original authors does not make the job easier. I also think that the analyst needs motivation that perhaps the professionally involved scientist does not possess.

    Anyway, I guess Jean S has extracted, and provided for public view, a sensitivity test that Mann et al. had (unknowingly?) provided in the haze. Now that the sensitivity and robustness become apparent, I would guess the next step would be for Mann et al. to post hoc provide the a priori reasons for the selection they made. It is at this step that serious scientists are allowed to chuckle.

  21. Bob McDonald
    Posted Feb 3, 2010 at 5:35 PM | Permalink | Reply


    Is there any significant meaning to the intersection of the lines on your PC1 fixes graph? They seem to converge ~1925.

    It just seems odd that the 3 different calculations would arrive at the same temp in 1925 while having virtually nothing else in common.

    • Jean S
      Posted Feb 4, 2010 at 4:20 AM | Permalink | Reply

      Re: Bob McDonald (Feb 3 17:35),
      Nothing I’m aware of.

      I did the graph by standardizing all series (the three “fixed” PCs plus the “rescaled” original PC) to zero mean and unit variance in the “pre-fixing era” (1000-1599), where they are exactly the same. Then I subtracted each of the fixed PCs from the original.
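      The standardization described above can be sketched as follows, with a synthetic series and a made-up “fix” standing in for the real PCs:

```python
import numpy as np

def standardize_on_window(series, years, start=1000, end=1599):
    """Zero mean, unit variance over the 'pre-fixing era' window."""
    w = (years >= start) & (years <= end)
    return (series - series[w].mean()) / series[w].std()

years = np.arange(1000, 1981)
pc_orig = np.sin(years / 50.0)                                # stand-in for the original PC1
pc_fixed = pc_orig - 0.001 * np.clip(years - 1600, 0, None)   # toy 'fix' active after 1600

z_orig = standardize_on_window(pc_orig, years)
z_fixed = standardize_on_window(pc_fixed, years)
fix_curve = z_orig - z_fixed  # what the plotted "fixes" show
```

      Because the series coincide over 1000-1599, the difference curves are identically zero there and show only the post-1600 “fixing”.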

  22. Craig Loehle
    Posted Feb 3, 2010 at 6:02 PM | Permalink | Reply

    Does it seem bizarre to anyone else to adjust a PC1 result like this?

  23. Geoff Sherrington
    Posted Feb 4, 2010 at 12:59 AM | Permalink | Reply

    Maybe email 963233839.txt of 10 July 2000 is relevant.

    Goodness, the climate science community learned a lot about ignoring objections between 2000 and the closing date for submissions to AR4 in 2007.

  24. Steve McIntyre
    Posted May 8, 2010 at 9:07 AM | Permalink | Reply

    108. 0926026654.txt
    From: Phil Jones
    Subject: Straight to the Point
    Date: Thu, 06 May 1999 17:37:34 +0100
    Cc: k.briffa,t.osborn,mhughes,rbradley

    Keith didn’t mention in his Science piece but both of us
    think that you’re on very dodgy ground with this long-term
    decline in temperatures on the 1000 year timescale. What
    the real world has done over the last 6000 years and what
    it ought to have done given our understandding of Milankovic
    forcing are two very different things. I don’t think the
    world was much warmer 6000 years ago – in a global sense
    compared to the average of the last 1000 years, but this is
    my opinion and I may change it given more evidence.

    • Hu McCulloch
      Posted May 9, 2010 at 10:52 AM | Permalink | Reply

      Maybe Jones was persuaded after he read about it in the authoritative TAR? ;-)

    • Skiphil
      Posted Dec 4, 2012 at 2:48 PM | Permalink | Reply

      May I suggest a read (or review) of some of the fascinating detective work of Steve, Jean S, UC, et al through the years?

      I am reading a couple of past threads per day just to try to fill in my own mental world with all that has transpired here over the past decade….. Remarkable!

  25. Hu McCulloch
    Posted Apr 3, 2011 at 8:00 AM | Permalink | Reply

    Jean has suggested, over on the new Briffa Bodge post at , that this trick be called the “Milankovitch Bodge”: .

    However, I think it does Milankovitch an injustice to associate him with this procedure.

    How about the “Mannkovitch CO2 Bodge” instead?

    The essay Jean recommends, “A good trick to hide a decline”, by Andrew Montford at , provides a very readable explanation of all this. Perhaps it is included in his book.

    One important thing that I got from Montford’s piece was that the Mannkovitch CO2 Bodge was only applied to the AD 1000-1400 portion of the HS, where it has the effect of raising the early portion of the shaft. If you look closely at the blue annual readings, this discontinuity is actually visible in the altered graph at 1400.

    As Montford dryly points out, it is “counterintuitive” that an alteration that supposedly adjusts for post-1900 CO2 fertilization has its entire effect pre-1400!

    Like they say, It’s Even Worse Than We Thought! ;-)

    • Skiphil
      Posted Mar 9, 2013 at 7:39 PM | Permalink | Reply

      Have any climate auditors looked at the latest greatest hockey stick paper, Marcott et al (2013)? It’s getting lots of the usual hyperventilating PR, with Mannian Hockey Team quotations. Since it claims 11,300 years of data, yet a sharp hockey stick blade in recent decades, it is suggesting comparative discussions with both Mannian studies and also Milankovitch estimates.

      comment at WUWT: something very odd about data for Marcott et al (2013)

      • Steve McIntyre
        Posted Mar 10, 2013 at 1:03 AM | Permalink | Reply

        Looking at it.

        • bernie1815
          Posted Mar 10, 2013 at 12:54 PM | Permalink

          Excellent, although if I knew you were going to be scrutinizing a stat laden paper of mine I would begin to perspire. If it depended on stat stuff from Mann, I would perspire greatly.

        • Brandon Shollenberger
          Posted Mar 10, 2013 at 9:40 PM | Permalink

          Steve, I assume you already noticed this, but I just finished reading the paper and found an amusing point. In Figure 1 (E and F), five reconstructions are compared to their new one. Two are from Mann 2008, and a third is from Wahl and Amman 2007. Of course, W&A 2007 is really just MBH with a couple minor alterations made with the (false) claim they fixed the problems of MBH. In other words, both of Mann’s reconstructions are present.

          In fact, one of the remaining two reconstructions (Huange04) only goes back to AD 1600, and it looks nothing like their new reconstruction. That means the only millennial reconstruction they compare their work to, other than Mann’s, is Moberg 2005, and it is outside their confidence intervals for more than half the period it covers.

          Their reconstruction looks good if you compare it to Mann’s work, but it (apparently) looks bad if you compare it to any other work. That shouldn’t reassure anyone.

        • Brandon Shollenberger
          Posted Mar 11, 2013 at 5:20 AM | Permalink

          And now, a more useful comment. I’ve always thought the most important step in creating any reconstruction is to look at the data. Before implementing any statistical methods, just look at it and see what you can see. In that vein, I created images showing all 73 series used in this paper. The x-axis is held constant for each series so they’re comparable, but the y-axis is different per series.

          Given the data they used, I have trouble seeing how they confirmed Mann’s hockey stick. Most of their series don’t seem to resemble their results.

          Steve: others are noticing the same phenomenon. If none of the datasets have the Marcott stick, how does it emerge in the aggregate? Dunno.

        • Brandon Shollenberger
          Posted Mar 12, 2013 at 12:57 AM | Permalink

          I haven’t even been able to figure out how Marcott et al manage to get their 20 year samples. Getting that resolution from series with a much coarser resolution requires some sort of infilling. I can’t find anything in the paper or SI which discusses that step. It’s hugely important, but unless I’m missing something, it’s simply overlooked. My suspicion is it has something to do with the anomalous results.

          One possibility I’ve been considering is there are a number of series that extend to 1940-1950 but don’t reach 1960. If those series were cooler than the ones extending to 1960, that might explain the jump. When the series with a cooler end get dropped, the results get warmer. That could create a jump in temperatures like what we see.

          That assumes the data doesn’t agree with itself, but it is at least an explanation. I don’t have any other at the moment.

        • thisisnotgoodtogo
          Posted Mar 12, 2013 at 1:47 AM | Permalink

          Rud has a post over at Judith’s

          He mentions
          “The misinformation highway took the paper’s figure S3 (below) as a spaghetti chart hockey stick of the proxy temperatures. It is not. It shows 1000 Monte Carlo simulations of the 73 data sets, perturbed by inserting random temperature and age calibration errors to establish the blue statistical band in Figure 1B. S3 doesn’t say the last century’s temperature has risen above the Holocene peak. It only says uncertainty about the combined recent paleotemperature has risen. Which must be true if the median resolution is 120 years.”

        • bernie1815
          Posted Mar 12, 2013 at 9:13 AM | Permalink

          Have you been in contact with Marcott et al to ask for more details on their analysis? Given some of the points raised by you and others, it might be helpful to at least show that you gave them an opportunity to clarify their analytic choices and decisions. It sounds like there is a list of questions that you could pose. (Cross posted at Climate, etc)

          Steve: I sent an email to Marcott yesterday asking a couple of points. No answer yet.

        • pottereaton
          Posted Mar 12, 2013 at 12:04 PM | Permalink

          Steve: appears from your comments here and at ClimateEtc, that you are running into writer’s block on the subject of Marcott.

          You wrote: “It looks like a real dog’s breakfast. I’ve been working on a long post at CA, but keep encountering new problems and am finding it hard to finish a post. Or even begin one.”

          It sounds like it has so much wrong with it that it’s hard to know where to start or finish. That it is a compendium of all that is wrong with certain climate science papers in the past and that it builds on those tricks, devices, errors, miscalculations and inappropriate statistical techniques for which those papers are known.

          If the above is true, and I don’t know that it is, maybe that might be a way to approach it. Just a general survey of all the mistaken techniques re-deployed by Marcott that originate in Climate Science’s dubious past.

          The alternative is several shorter posts on specific problems. A “one at a time” approach.

          Hope this helps. If not, please delete.

      • thisisnotgoodtogo
        Posted Mar 10, 2013 at 8:17 AM | Permalink | Reply

        To compare notes between the abstract and the paper:

        “…Temperatures have risen steadily since then, leaving us now with a global temperature higher than those during 90% of the entire Holocene.”

        “…Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.”

        “…Intergovernmental Panel on Climate Change model projections for 2100 exceed the full distribution of Holocene temperature under all plausible greenhouse gas emission scenarios.

        “…Our results indicate that global mean temperature for the decade 2000–2009 (34) has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.). These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies”

      • Jeff Norman
        Posted Mar 10, 2013 at 10:51 AM | Permalink | Reply

        Anthony’s site seems to be crashing my browser this morning.

        Several people have pointed out inconsistencies in the Marcott paper including their use of error margins and the bizarrely unprecedented hockey stick appearing in the century smoothed data.

        Given the 2013 publishing date I am also curious to see if they truncated the temperature data (as per Mann) to hide a lack of incline: say a 1C increase averaged over 100 yrs versus 115 yrs.

6 Trackbacks

  1. [...] The Hockey Stick and the Milankovitch Theory « Climate Audit [...]

  2. [...] climategate, More on the hockey stick, Manns misconduct – [...]

  3. [...] The Hockey Stick and the Milankovitch Theory « Climate Audit [...]

  4. [...] but was rather overlooked by the sceptic community in all the excitement over the emails. “The Hockey Stick and the Milankovitch Cycle” uses some of the Climategate files to solve one of the remaining mysteries of the Hockey [...]

  5. By Moderate Low Weight « Climate Audit on Dec 4, 2011 at 3:35 PM

    [...] (see comments) we began to understand the effect of this ridiculous “CO2-adjustment” (Mannkovitch Bodge) in MBH99: it adjusted the verification RE statistic and affected the 1000-1850 linear trend [...]

  6. [...] Jacoby and D’Arrigo (Clim Chg 1989), a study of northern North American tree rings, was extremely influential in expanding the application of tree rings to temperature reconstructions (as opposed to precipitation.) (See CA tag Jacoby for prior posts that have been tagged.) The Jacoby-d’Arrigo reconstruction was used in Jones et al 1998 and its components (especially Gaspe) were used in MBH98. It is used to “bodge” of Mann PC1 in MBH99; Mann’s “Milankowitch” argument rests almost entirely on this bodge – ably deconstructed by Jean S here. [...]
