The Marcott-Shakun Dating Service

Marcott, Shakun, Clark and Mix did not use the published dates for ocean cores, instead substituting their own dates. The validity of Marcott-Shakun re-dating will be discussed below, but first, to show that the re-dating “matters” (TM-climate science), here is a graph showing reconstructions using alkenones (31 of 73 proxies) in Marcott style, comparing the results with published dates (red) to results with Marcott-Shakun dates (black). As you see, there is a persistent decline in the alkenone reconstruction in the 20th century using published dates, but a 20th century increase using Marcott-Shakun dates. (It is taking all my will power not to make an obvious comment at this point.)
Figure 1. Reconstructions from alkenone proxies in Marcott style. Red: using published dates; black: using Marcott-Shakun dates.

Marcott et al archived an alkenone reconstruction. There are discrepancies between the above emulation and the archived reconstruction, a topic that I’ll return to on another occasion. (I’ve tried diligently to reconcile them, but have thus far been unable to, perhaps due to some misunderstanding on my part of Marcott methodology, some inconsistency between data as used and data as archived, or something else.) However, I do not believe that this matters for the purposes of using my emulation methodology to illustrate the effect of Marcott-Shakun re-dating.
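
For those who want to experiment, the emulation logic is roughly as follows. This is a simplified Python sketch of a Marcott-style stack, not Marcott’s code (or my actual script), and the data layout is hypothetical:

    import numpy as np

    def marcott_style_stack(proxies, step=20, t_min=-50, t_max=11300, ref=(4500, 5500)):
        """proxies: list of (ages_BP, temps) array pairs, one per core, ages increasing.
        Interpolate each core onto a common 20-year grid, convert to anomalies
        on the 4500-5500 BP reference window, then average across cores."""
        grid = np.arange(t_min, t_max + step, step)
        rows = []
        for ages, temps in proxies:
            # no extrapolation beyond each core's own coverage
            series = np.interp(grid, ages, temps, left=np.nan, right=np.nan)
            in_ref = (grid >= ref[0]) & (grid <= ref[1])
            rows.append(series - np.nanmean(series[in_ref]))
        return grid, np.nanmean(np.vstack(rows), axis=0)

Running the same function once with the published age column and once with the Marcott-Shakun age column for each core is what generates the red and black curves of Figure 1 in my emulation.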

Alkenone Core Re-dating

The table below summarizes Marcott-Shakun redating for all alkenone cores with either published end-date or Marcott end-date less than 50 BP (AD1900). I’ve also shown the closing temperature of each series (“close”) after the two Marcott re-centering steps (as I understand them).
Table 1. Marcott-Shakun re-dating of alkenone cores with published or Marcott end-dates less than 50 BP (AD1900).

The final date of the Marcott reconstruction is AD1940 (10 BP). Only three cores contributed to the final value of the reconstruction with published dates (“pubend” less than 10): the MD01-2421 splice, OCE326-GGC30 and M35004-4. Two of these cores have very negative values. Marcott et al re-dated both of these cores so that neither contributed to the closing period: the MD01-2421 splice was moved to a fraction of a year prior to 1940, barely missing eligibility; OCE326-GGC30 was re-dated 191 years earlier – into the 18th century.

Re-populating the closing date are 5 cores with published coretops earlier (older) than 10 BP (AD1940), in some cases much earlier. The coretop of MD95-2043, for example, was published as 10th century, but was re-dated by Marcott over 1000 years later to “0 BP”. MD95-2011 and MD95-2015 were re-dated by 510 and 690 years respectively. All five re-dated cores contributing to the AD1940 reconstruction had positive values.
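
The mechanics of the population change can be expressed as a simple filter on the two date columns (a sketch; the field names are mine, not the archive’s):

    def closing_cores(cores, cutoff=10):
        """cores: records each carrying a published core-top age ('pubend') and a
        Marcott re-dated age ('marcott_end'), both in years BP. A core contributes
        to the AD1940 (10 BP) step only if its end date is less than the cutoff."""
        pub = [c["name"] for c in cores if c["pubend"] < cutoff]
        redated = [c["name"] for c in cores if c["marcott_end"] < cutoff]
        return pub, redated

With published dates the first list contains the three cores named above; with Marcott-Shakun dates the membership changes almost completely.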

In a follow-up post, I’ll examine the validity of Marcott-Shakun redating. If the relevant specialists had been aware of or consulted on the Marcott-Shakun redating, I’m sure that they would have contested it.

Jean S had observed that the Marcott thesis had already described a re-dating of the cores using CALIB 6.0.1 as follows:

All radiocarbon based ages were recalibrated with CALIB 6.0.1 using INTCAL09 and its protocol (Reimer, 2009) for the site-specific locations and materials. Marine reservoir ages were taken from the originally published manuscripts.

The SI to Marcott et al made an essentially identical statement (pdf, page 8):

The majority of our age-control points are based on radiocarbon dates. In order to compare the records appropriately, we recalibrated all radiocarbon dates with Calib 6.0.1 using INTCAL09 and its protocol (1) for the site-specific locations and materials. Any reservoir ages used in the ocean datasets followed the original authors’ suggested values, and were held constant unless otherwise stated in the original publication.

However, the re-dating described above is SUBSEQUENT to the Marcott thesis. (I’ve confirmed this by examining plots of individual proxies on pages 200-201 of the thesis. End dates illustrated in the thesis correspond more or less to published end dates and do not reflect the wholesale redating of the Science article.)

I was unable to locate any reference to the wholesale re-dating in the text of Marcott et al 2013. The closest thing to a mention is the following statement in the SI:

Core tops are assumed to be 1950 AD unless otherwise indicated in original publication.

However, something more than this is going on. In some cases, Marcott et al have re-dated core tops indicated as 0 BP in the original publication. (Perhaps with justification, but this is not reported.) In other cases, core tops have been assigned to 0 BP even though different dates have been reported in the original publication. In another important case (of YAD061 significance as I will later discuss), Marcott et al ignored a major dating caveat of the original publication.

Examination of the re-dating of individual cores will give an interesting perspective on the cores themselves – an issue that, in my opinion, ought to have been addressed in technical terms by the authors. More on this in a forthcoming post.

The moral of today’s post for ocean cores: are you an ocean core that is tired of your current date? Does your current date make you feel too old? Or does it make you feel too young? Try the Marcott-Shakun dating service. Ashley Madison for ocean cores. Confidentiality is guaranteed.

229 Comments

  1. Craig Loehle
    Posted Mar 16, 2013 at 1:36 PM | Permalink

    What’s a 1000 years? Just a number…nothing to see here…move along…

    • Jeff Norman
      Posted Mar 16, 2013 at 1:48 PM | Permalink

      1008 years. Wow.

      • Jean S
        Posted Mar 16, 2013 at 3:17 PM | Permalink

        Re: Jeff Norman (Mar 16 13:48),
        yes, it is so amazing I could not believe I was reading the spreadsheet right, and just had to ask Steve. I wonder how Isabel Cacho from University of Barcelona likes the re-dating… the original paper is here and data here.

        Core MD 95-2043 has an accurate chronostratigraphy based on 18 ¹⁴C AMS ages for the last 20 kyr (Table 1). AMS measurements were determined in the University of Utrecht with a precision ranging from +/-37 to +/-120 years. The older section has been dated by correlation of the alkenone SST profile with the δ¹⁸O record of the Greenland ice core GISP2 (see more details in the work by Cacho et al. [1999a]). The correlation coefficient between MD 95-2043 SST and GISP2 δ¹⁸O over the ¹⁴C dated interval is extremely high (R=0.92).

        • Paul Fischbeck
          Posted Mar 16, 2013 at 3:21 PM | Permalink

          How many of the proxies that did not end in the last 100 years were “updated”? Are they selective in which ones get updated? It would be telling if selective updating was occurring and proxies that ended early were not touched.

        • Steve McIntyre
          Posted Mar 16, 2013 at 3:28 PM | Permalink

          There’s an interesting discussion of MD95-2011 at CA in 2007, in which Richard Telford also participated:
          https://climateaudit.org/2007/11/28/loehle-proxy-md95-2011/. Richard pointed out that some dates in the official archive were incorrect.

  2. Robert
    Posted Mar 16, 2013 at 1:45 PM | Permalink

    Since you don’t want to make the obvious comment, allow me.
    This sure looks like someone wanted to hide the decline 🙂 .

    • Manniac
      Posted Mar 16, 2013 at 2:53 PM | Permalink

      Reality has a well known ‘upside-down’ bias.

      Apologies to Stephen Colbert.

  3. Robert
    Posted Mar 16, 2013 at 1:50 PM | Permalink

    A bit OT and I also asked this question on another thread and I’d be grateful for your opinion Steve.

    I don’t understand how the uncertainty can be temperature-independent, i.e. that proxy precision and accuracy is as good 1000 years back as 200 years back. The proxies respond to things other than temperature, and there surely must be variations in this. Averaging proxies seems an optimistic way to get to the “true” value.

    Steve: It’s hard enough to try to figure out the calculations. I haven’t looked at their uncertainties yet. But your intuition seems right to me. Their first centering was on BP4500-5500. If they centered on the modern reference period (their ultimate interest), the spread of values in the Holocene would increase considerably. That may have something to do with it. Topic for another day.

    • Jeff Norman
      Posted Mar 16, 2013 at 2:24 PM | Permalink

      I get the impression from a previous response to a similar query that the presented “uncertainty” is not a function of the uncertainty of the temperature measuring abilities of the individual proxies but simply of the number of proxies available at that moment in time.

      I don’t think that any reconstruction that I have ever seen actually presented a combined uncertainty of the individual proxy’s ability to represent actual temperatures at a local, regional or global level.

      For that matter I don’t think I have ever seen modern global temperature trends presented with adequate uncertainties.

      • Robert
        Posted Mar 16, 2013 at 2:55 PM | Permalink

        Jeff
        If this is the case then this is worrying. An error band based on an incomplete uncertainty evaluation gives a very misleading impression. One should then make this clear. For example, they could label the error band “errors only due to factors X,Y” (which is what I do in this situation) or at the very least shout very clearly throughout the paper, and in press interviews, that (possibly) incorrect assumptions are made about other sources of errors being negligible.

        I realise that this means the significance of their work is reduced but this is science.

      • Geoff Sherrington
        Posted Mar 16, 2013 at 5:49 PM | Permalink

        Both Pat Frank and I have posted here over the years about the lack of formality used to estimate uncertainty in climate work, by comparison with that used in related work. For example, from analytical chemistry I’ve referenced the historically important paper

        Click to access MR%20Analysis%20Eval%20Morrison%201971.pdf

        This gives some consequences of optimistic estimates of capability.
        It seems there is a need for a text book on how to estimate uncertainty in climate time series. Is there one already?

        • Jeff Norman
          Posted Mar 16, 2013 at 6:42 PM | Permalink

          My background in this regard is measuring temperatures (pressures, flows, concentrations, etc.) using calibrated instrumentation as defined by the ASME, the ISO and the U.S. EPA for contract and regulatory acceptance tests.

          The steam temperature mattered. The gas temperature into and out of the emission control device mattered. The uncertainties mattered.

    • Lance Wallace
      Posted Mar 16, 2013 at 3:06 PM | Permalink

      As mentioned earlier, in the Ph.D. thesis, the temperature uncertainties for the individual proxies were simply kept constant throughout the entire up-to-22,000-year period. Entire batches of proxies were assigned a single constant temperature uncertainty (chronomids and pollen were both assigned uncertainties of 1.7 degrees C). Here is the relevant section from the thesis:

      4.5.1 Temperature and Chronologic Uncertainties
      In order to incorporate the full range of error associated with both the proxy calibrated temperatures and the age control points used to construct the time series, we implemented a Monte Carlo based approach. 10,000 Monte Carlo simulations were performed for each of the datasets that incorporated both the temperature calibration and chronologic uncertainties (Appendix C). The temperature records used in this study were derived from multiple proxy-based methods, including UK′37, TEX86, Mg/Ca, chronomids, pollen, ice cores, and biomass assemblages (e.g., foraminifera, diatoms, radiolaria). The uncertainty associated with each of the proxies was randomly varied following a normal distribution and errors were assumed to not correlate through time in order to maximize the temperature uncertainties. All UK′37 based alkenone records were converted to temperature following the global core top calibration of Müller et al. (1998). TEX86, Mg/Ca, and all biomass assemblage records were converted to temperature following the original publication from where the data were obtained (Apendix C). Chronomid based temperatures errors (±1.7°C) were derived from the average root mean squared error (RMSE) of several studies. Pollen temperature errors followed the RMSE of Seppä et al. (2005) (±1.7°C). Ice core based temperature error was conservatively assumed to be ±30%.
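
      As far as I can tell, the temperature side of this Monte Carlo amounts to something like the following sketch (my reading of the thesis text above, not the authors’ code):

        import numpy as np

        rng = np.random.default_rng(0)

        def perturb_temperatures(temps, sigma, n_sims=10000):
            """temps: one proxy's temperature series (NumPy array); sigma: the
            constant 1-sigma error assigned to its proxy type (e.g. 1.7 C for
            chronomids and pollen). Noise is drawn independently at every time
            step ("assumed to not correlate through time")."""
            return temps + rng.normal(0.0, sigma, size=(n_sims, temps.size))

      If this reading is right, a single constant sigma per proxy type would make the temperature-calibration part of the uncertainty the same 10,000 years ago as 200 years ago.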

      • Robert
        Posted Mar 16, 2013 at 3:18 PM | Permalink

        Thanks Lance.

      • Posted Mar 17, 2013 at 4:05 AM | Permalink

        Chronomid? What’s that – do they mean chironomid (a kind of fly)?

  4. polski
    Posted Mar 16, 2013 at 1:52 PM | Permalink

    Is redating of proxies common and accepted? If redated, are the creators of the proxies not asked for their opinion as to why it is needed?
    If it is an acceptable procedure, I would like to be redated from 57 yo with thin hair to 27 and golden locks!

    • Posted Mar 17, 2013 at 2:48 AM | Permalink

      Re: polski (Mar 16 13:52), It seems to me that this very first post neatly nailed all the issues in a couple of lines. It would seem that they decided on a change of dating procedure based, at best, on hand-waving, before the community had established a procedure of sufficient soundness to justify an informal explanation for altering the ages. Such optimistic alteration of the reported ages has no place within the spirit of the dating service, especially in circumstances where accurate reporting of age would be a fundamental indicator of value.

      In other words — if one is to use the Ashley Madison “dating service”, and a reputation for premature gesticulation and alteration of reported age precedes you, it may well hamper future efforts to partake of the services of the “dating service” to achieve one’s stated goals. All this assumes that I have correctly grasped the essential issues, of course.

  5. seanbrady
    Posted Mar 16, 2013 at 1:53 PM | Permalink

    This reminds me of a friend’s relative who was a refugee from Cuba in the 70s. Because she arrived with no papers and was close to 30, she simply added a couple years to her original birthdate (which coincidentally was fairly close to 1940 by the way).

    Everything was fine until decades later, when she had to work two more years before she could start collecting Social Security!

  6. Todd Martin
    Posted Mar 16, 2013 at 1:55 PM | Permalink

    There is nothing quite so entertaining as when the team or, in this case, wannabe junior members, toss a slow pitch over the plate and Steve, in full flight with the bit between his teeth, knocks it out of the park. In this case, Marcott et al appear to have provided the gift that keeps on giving.

    There is also nothing quite so nauseating as the fact that Steve must provide pro bono a service that one would have thought the vaunted peer review gates of journals such as Science should have executed as a matter of course. Sad.

    Keep at it Steve.

    • kim
      Posted Mar 16, 2013 at 3:28 PM | Permalink

      Oh, excellent, Todd: ‘Full flight, bit between the teeth, knocking parts out of the stuffing’. I wanna T-Shirt.
      ==============

      • learDog
        Posted Mar 16, 2013 at 4:04 PM | Permalink

        ….Josh…..?

        ;-D

    • vivendi
      Posted Mar 16, 2013 at 5:29 PM | Permalink

      Marcott was one of the authors, so his co-authors should have caught the obvious errors (or can we say manipulations). Didn’t they? Then there were the peer-reviewers; they should have questioned some of these errors or inconsistencies.
      But then there is also the “Team” (Mann, Schmidt, Tamino et al). Aren’t they interested in these studies? Aren’t they able to cast a skeptical eye on such a study, or do they accept just about anything that fits their beliefs?

      Finally, there is a skeptic non-climatologist, not paid for his work, who has to do the quality control. Imagine if Steve didn’t catch these errors: we (the whole world) would be damned to accept whatever they come up with in their studies. A frightening thought.
      Thanks Steve!

      • Ian
        Posted Mar 16, 2013 at 6:04 PM | Permalink

        All four authors of the article were also involved with Marcott’s PhD thesis. Oddly (at least to me), the dissertation supervisor, Peter Clark, is credited as a co-author of the relevant chapter of the thesis:

        “Chapter 4 – P.U. Clark co-wrote the manuscript. J.D. Shakun helped conceive the project and assisted with data analysis. A.C. Mix helped develop the statistical
        methods.”

        Of course, the thesis didn’t show the mid-late 20th C uptick in temperature.

        • seanbrady
          Posted Mar 18, 2013 at 3:29 PM | Permalink

          So here’s my prediction on the authors’ response to this whole brouhaha:

          “In response to numerous questions that have been raised regarding the statistical methods used in Marcott et. al, 2013, the paper has been revised so that the following sentence:

          ‘Chapter 4 – P.U. Clark co-wrote the manuscript. J.D. Shakun helped conceive the project and assisted with data analysis. A.C. Mix helped develop the statistical methods.’

          now reads:

          ‘Chapter 4 – P.U. Clark co-wrote the manuscript. J.D. Shakun helped conceive the project and assisted with data analysis. A.C. Mixup helped develop the statistical methods.'”

    • Geoff Sherrington
      Posted Mar 16, 2013 at 5:53 PM | Permalink

      Steve does it for the learning. As a squash player, he can keep redating his birth to play in competitions with age groups, able to adjust his chance of winning. (Not that he has to).

  7. HaroldW
    Posted Mar 16, 2013 at 2:03 PM | Permalink

    As neither reconstruction in your diagram is a reasonable depiction of history over the last couple of hundred years, is not the correct conclusion that the number of proxies (or the method itself) is insufficient over that interval? [Equivalently, the error bars are huge.]

  8. Posted Mar 16, 2013 at 2:04 PM | Permalink

    Ha I just re-dated myself from 53 to 26 ;>0 same DAMM thing

  9. Posted Mar 16, 2013 at 2:06 PM | Permalink

    How could the authors of the paper assume their methods would not be audited?

    • JPS
      Posted Mar 16, 2013 at 3:03 PM | Permalink

      one word answer: hubris

    • Bob Koss
      Posted Mar 16, 2013 at 3:18 PM | Permalink

      The goal was to get something scary published before the AR5 deadline. It simply didn’t matter whether it was correct. As long as it is not refuted prior to the deadline, it will be used.

      • ianl8888
        Posted Mar 16, 2013 at 4:36 PM | Permalink

        Yes

      • Posted Mar 16, 2013 at 4:37 PM | Permalink

        Bob Koss Mar16 3.18pm

        Oh I do hope so.

        • Andy Wilkins
          Posted Mar 16, 2013 at 6:15 PM | Permalink

          I hope so too.
          It would be great to see the contents of yet another IPCC novel ripped up in front of The Team’s eyes!

    • geronimo
      Posted Mar 17, 2013 at 12:31 AM | Permalink

      Two things Bob. First, the objective is to get this paper into AR5; whether Mann et al have the pull to do that given the current furore in the blogosphere I don’t know, as it depends on the integrity of the various lead authors. Most of them seem to be men/women on a mission, but you never know: there might be a flickering of scientific integrity in the embers of what were once scientists. The public are unaware of the deception because the MSM has shown the hockeystick and certainly won’t report the demolition on this and other blogs, so we are depending on the integrity of scientists who’ve shown little in the past.

      The second thing, as I’ve observed before, is the slipshod methodology of climate science in general. Richard Betts from the Met Office told me his mission was to get respect for scientists. In any other field there would be a hue and cry from the other scientists if such an obviously flawed paper came out. In climate science we get en masse silence, or enthusiastic support.

      I read somewhere that only 1 in 70 papers submitted to Science gets published. It makes you wonder what sort of rubbish is put forward in papers if this one came first out of seventy. Or you could look at my first point again.

      • Skiphil
        Posted Mar 17, 2013 at 12:45 AM | Permalink

        re: AR5, one of the Lead Authors for the Paleo chapter is a co-author with Shakun and Marcott and Clark on Shakun et al. (2013): Bette Otto-Bliesner of NCAR. So there may be some interesting discussions on what gets into that chapter….

        re: hype and specialist reactions, consider how fast a substantial number of microbiologists came out against the “arsenic life” paper (also published in “Science”) in Dec. 2010. Within a week or less the paper and authors were already being fairly widely repudiated, with quite a few prominent microbiologists speaking out. What a contrast to how things go with climate science and group solidarity….

        • Skiphil
          Posted Mar 17, 2013 at 12:48 AM | Permalink

          sorry that is Shakun et al. (2012) not 2013, of course, typo

  10. Barclay E MacDonald
    Posted Mar 16, 2013 at 2:07 PM | Permalink

    Assuming I am not the only one reading this, and assuming the above will be communicated to the authors, I would hope they will respond specifically to the questions being raised here at CA, and to how, if at all, my understanding of their work should be modified if my sole source of information regarding their work were the recent March 9 Atlantic article “We’re Screwed” by Tim McDonnell.

    • Posted Mar 16, 2013 at 2:19 PM | Permalink

      “We’re Screwed” could prove a very apposite headline before this is done but the referent of the “We” may have changed.

  11. Posted Mar 16, 2013 at 2:09 PM | Permalink

    The age-depth model for MD952011 is not closely crossdated with tephra. It contains a single tephra, the Vedde ash, in the Younger Dryas. No Holocene tephras have been identified. Nor is this core near Iceland – it is much closer to Norway.

    Cores HM107-04 and HM107-05 are offshore Iceland, and dated with many tephras.

    Steve: thanks for the comment. That will teach me not to preview a post that I haven’t written yet. I think that I was thinking of MD99-2275.

    Richard, can you comment on the main issue though – the legitimacy of re-dating the coretop?

    • bernie1815
      Posted Mar 16, 2013 at 2:24 PM | Permalink

      Richard:
      Can you spell out the implications of the points you raised as to the potential appropriateness of the redating? Sorry if this should be obvious.

    • Steve McIntyre
      Posted Mar 16, 2013 at 2:35 PM | Permalink

      In deference to Richard Telford’s correction and to avoid confusion for further readers, I have edited the main post by removing the following sentence:

      As a preview, I’ll note that MD95-2011 is a core that I’ve studied. It is a high-resolution core offshore Iceland, that has been carefully studied by competent specialists and closely crossdated by tephra. While the dating of some core tops may be open to question, this is not one of them.

      As Richard observed MD95-2011 is a core offshore Norway. I was thinking of offshore Iceland cores. I’ll ensure that this point is properly addressed in the forthcoming post. I’ve documented the change here in comments and corrected the article itself to reflect the review comments.

      I will discuss MD95-2011 in a forthcoming post since I do not believe that relevant specialists would support Marcott-Shakun redating. Another M-S core, MD95-2015, which is offshore Iceland, was also one of the re-dated cores. But I’ll have to double check whether it had tephra crossdating.

      Update: Note my prior discussion of this core https://climateaudit.org/2007/11/28/loehle-proxy-md95-2011/ in connection with Loehle where it was also used.

      • Skiphil
        Posted Mar 16, 2013 at 2:46 PM | Permalink

        Steve, I think the sentence following also needs to be removed, pending your new post: “While the dating of some core tops may be open to question, this is not one of them” (because the “this” referent is to the core MD95-2011 in the sentence removed).

        Steve: fixed. I amended my comment to reflect this as well.

      • Steve McIntyre
        Posted Mar 16, 2013 at 3:17 PM | Permalink

        It’s nice to see that my misdescription of the location of MD95-2011 was so quickly spotted. Curiously, a similar mis-statement was made in Marchal et al 2002, one of Marcott’s references:

        It is noteworthy that apparent cooling is observed in so different oceanographic environments, including the Barents slope (core M23258), off Norway (MD952011), southwest of Iceland (MD952011), the Gulf of Cadiz (M39008), the Alboran Sea (MD952043), and the Tyrrhenian Sea (BS7938 and BS7933).

        • Geoff Sherrington
          Posted Mar 16, 2013 at 8:33 PM | Permalink

          On the subject of tephra, do we have a resident volcanologist who would opine on the area of influence and detectability of tephra at various distances from its deposition on land and sea; and on its fingerprint, if any, when there might be overlap from two sources of similar age. I can see its use as a marker, but I can see some generalist limitations. Is it typically separated and dated by radioactivity? There would be some problems with this approach also. It keeps coming back, in this discussion, to errors on the time (X) axis, which need to be corrected before the Y axis can be used for anything useful. Australia’s geochemists typically do not have much exposure to geologically recent volcanism, so I apologise for asking instead of reporting.

        • tty
          Posted Mar 17, 2013 at 7:25 AM | Permalink

          “Is it typically separated and dated by radioactivity?”

          In the Iceland area this is hardly necessary in recent centuries since we have excellent historical data on eruptions back to the Middle Ages and good geochemical data on the relevant volcanic systems.

        • Reference
          Posted Mar 17, 2013 at 9:25 AM | Permalink

          Sorry tty. Nice try, but anecdotal reports of historic events can’t be used as data in climate science /sarc

        • William Larson
          Posted Mar 17, 2013 at 6:27 PM | Permalink

          Confusion: “… off Norway (MD952011), southwest of Iceland (MD952011)…” It’s the same core number in each case. Is this the “misstatement” you are referring to? If this is a cut-and-paste from Marchal 2002, then it appears to be a typo of some sort instead of a misstatement.

          Steve: of course it’s a typo. I was feeling a little annoyed that Richard Telford (whose comments I welcome) tweaked me on mislocating MD95-2011 (which we had discussed in its right geography on an earlier occasion) without also calling out Marcott et al on the substantive issue of unjustified re-dating. I’m less annoyed today: it’s what happens.

  12. John B
    Posted Mar 16, 2013 at 2:12 PM | Permalink

    I noticed a comment on one of the earlier Marcott posts saying that journals should use Steve to review the statistical element of climate papers. I completely disagree. Far better for every prominent climate scientist who thinks they can get away with shoddy statistical practices to be exposed to ridicule in public. If I were a prominent, or even not so prominent, climate scientist tempted to offer dubious conclusions based on dubious or downright shoddy practices, I would be more concerned about Mr McIntyre’s eyes alighting on my published paper, and thereby risking public ridicule, than about any review prior to publication.

    One well aimed public exposure from the big dog will have far more of a chilling effect on anyone tempted to risk their reputation than a hundred private reviews. Every climate scientist should now realise they have to up their game. Reputations are at stake.

    • learDog
      Posted Mar 16, 2013 at 4:10 PM | Permalink

      Would be true if notoriety were considered a bad thing in Climate Science. With The Team – it seems to garner awards…

    • NZ Willy
      Posted Mar 16, 2013 at 5:03 PM | Permalink

      Agreeing, but if Steve performs such a service he should be funded. If only the climateers’ claims about industry funding of skepticism were correct, sigh…

  13. Posted Mar 16, 2013 at 2:14 PM | Permalink

    Sorry – maybe a dumb question with simple answer … is there a valid reason for redating these cores, and if so an accepted process to do so? I guess I just don’t get why someone else, or some other process, would be applied … why wouldn’t the original authors want to get the dating correct to start with – seems that’s the whole point of their work?

  14. Don Keiller
    Posted Mar 16, 2013 at 2:19 PM | Permalink

    Let’s get this straight.

    What they have done is redated two proxies which ended with negative values in the 20th century and replaced them with 3 others, with positive values, from 500 to 1000 years earlier?

    I’m really struggling to get my head round this, but as it is climate “Science” there must be some perfectly reasonable explanation?

    Maybe Nick Stokes can help, along with his magic shovel?

    • DBD
      Posted Mar 16, 2013 at 2:36 PM | Permalink

      Any shovel is groaning over this one:) Dr. Mann may regret his enthusiasm

    • mrsean2k
      Posted Mar 16, 2013 at 2:50 PM | Permalink

      It’s obvious; anthropogenic warming stimulated tachyon bursts originating from the 20th C. These were retrospectively responsible for interfering with historical alkenone generation in the proxies in question.

    • Paul Matthews
      Posted Mar 16, 2013 at 5:20 PM | Permalink

      Don, even Nick Stokes thinks it’s wrong.

      • Brandon Shollenberger
        Posted Mar 16, 2013 at 5:40 PM | Permalink

        Paul Matthews, I found that post amusing as Nick Stokes said:

        Again I’m not doing the re-dating etc that they do.

        While using the spreadsheet provided by Marcott et al, which provides their re-dated series.

        • Posted Mar 16, 2013 at 10:58 PM | Permalink

          Brandon,
          Well, as I said, that was my first option. I’ve since run using the Marcott dates, and while overall there is not much change, there is now a big and recent spike. I’ve updated here. I’ll post further on this.

        • Brandon Shollenberger
          Posted Mar 17, 2013 at 12:17 AM | Permalink

          Nick Stokes, if you weren’t using the re-dated data from Marcott et al, what data are you saying you used? Marcott et al only provided their re-dated data. You’d have to go to a different source to get a different version.

          Which we can tell you didn’t do since you (commendably) provided your code. Your code shows you used the Marcott et al data. That means you used their re-dated data as that’s the only data they provided. How do you figure you could have used anything else?

          Steve: Marcott has two date columns: one showing published dates and one showing their dates.

        • Brandon Shollenberger
          Posted Mar 17, 2013 at 1:29 AM | Permalink

          Nevermind the previous comment. I made a stupid mistake because I’ve been busy today (it’s a Saturday night), and I completely overlooked something I had been aware of previously.

        • Posted Mar 17, 2013 at 1:30 AM | Permalink

          “The complaints are mainly about some recent spikes. My main criticism of the paper so far is that they do plot data in recent times with few proxies to back it. It shows a “hockey stick” which naturally causes excitement. I think they shouldn’t do this – the effect is fragile, and unnecessary. […] Indeed the spikes shown are not in accord with the thermometer record, and I doubt if anyone thinks they are real.” – Nick Stokes

          Without the spike at the end, no one would have heard of Marcott et al., and it likely would never have been published in Science, nor so close to the IPCC deadline.

      • Posted Mar 16, 2013 at 8:43 PM | Permalink

        Paul,
        I don’t think it is necessarily wrong to re-date. I was just doing a no-frills emulation, and used the published dates to avoid getting my head around the changes.

        But the dates aren’t original observations by the authors. They work them out mostly by known formulae, given depths, proxy measurements and carbon isotopes etc, which they report. There’s no reason in principle why someone shouldn’t use a different formula. As Richard Telford says elsewhere – these get updated, and there’s also a good case in this type of analysis for using a consistent calibration.

        Steve: there is a good case for consistent calibration. But that’s not what Marcott and Shakun did. Consistent use of CALIB 6.0.1 is fine; Marcott already did that in his thesis. The coretop redating is entirely different. I wish that you would pause every so often from being Racehorse Haynes.
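
        For readers unfamiliar with the mechanics: an age model is, in essence, interpolation of calendar age against depth between dated control points, along these lines (a generic sketch, not the Marcott procedure):

          import numpy as np

          def age_model(sample_depths, control_depths, control_ages):
              """Assign a calibrated age to every sampled depth by linear
              interpolation between dated levels (depths in cm, ages in
              calibrated years BP, both increasing). Changing the calibration,
              or the assumed core-top age, changes control_ages, and every
              sample above the next dated level shifts with it."""
              return np.interp(sample_depths, control_depths, control_ages)

        This is why a re-dated coretop moves the entire top section of a core, not just the top sample.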

      • pottereaton
        Posted Mar 16, 2013 at 10:07 PM | Permalink

        From an article on Racehorse Haynes:

        . . . Mr. Haynes also represented Morganna Rose Roberts, baseball’s “kissing bandit,” charged with trespassing at the Astrodome. Mr. Haynes’ defense? His client, who has a 60-inch bust, had been the victim of Newton’s law, pulled to the field by gravity.

        • Posted Mar 17, 2013 at 1:28 AM | Permalink

          Well, according to Wiki:
          ‘her lawyer used what he called the “gravity defense” to explain her unauthorized presence on the field, arguing: “This woman with a 112-pound body and 15-pound chest leaned over the rail to see a foul ball. Gravity took its toll, she fell out on the field, and the rest is history.” The judge laughed and dismissed the case.’

          I would never stoop to such levity, even to win a case.

        • curious
          Posted Mar 17, 2013 at 6:34 PM | Permalink

          Sounds a lot more feasible than a lot of the material examined here.

  15. Jeff Norman
    Posted Mar 16, 2013 at 2:33 PM | Permalink

    I wonder how the author(s) of the original proxy series feel about having their results changed?

  16. Posted Mar 16, 2013 at 2:50 PM | Permalink

    Well, I got the A. Madison sign off even if no one else did. Precious.

    • John R T
      Posted Mar 16, 2013 at 3:11 PM | Permalink

      I am in Costa Rica, but missed the 2011 Super Bowl commercials.

      Tell me more, please. How did Ms Madison manage this ‘product placement?’

      • Skiphil
        Posted Mar 16, 2013 at 3:31 PM | Permalink

        Google is your friend (I’d never heard of Ashley Madison before I searched just now, I guess I really live in a cave).

        • Theo Goodwin
          Posted Mar 16, 2013 at 5:19 PM | Permalink

          Me too. But I am very pleased with a “no television” life and highly recommend it to others. I will never look up Ashley.

  17. GrantB
    Posted Mar 16, 2013 at 2:52 PM | Permalink

    Expect a pleasant communication thanking you for your efforts but pointing out that one of the authors independently discovered this trifling issue some minutes earlier. A minor rewrite will be undertaken.

    • Robert
      Posted Mar 16, 2013 at 3:05 PM | Permalink

      But has there been a “mistake” spotted, i.e. something which would necessitate a retraction or correction?

      I’ve certainly seen evidence of weak science exposed here which demands (good) answers. If any of my papers contained these flaws I’d crawl into bed for a week and then consider a rewrite/correction. However, no serious “mistake” seems to have been spotted, as in the Gergis et al. paper where the work was simply non-reproducible according to their stated methodology**. However, the standards in this field are low and I doubt many in the community really care about (or indeed understand the importance of) the issues raised here.

      ** Here they changed their minds and decided what they did wasn’t a mistake, rather it was what they should have done in the first place :). Fortunately the journal was tough in this case.

      • Posted Mar 16, 2013 at 3:55 PM | Permalink

        There is the “mistake” that the error band does not include the uncertainty that must exist when proxy series can be time-shifted within the dating “uncertainty.”

        The thesis shows an error band getting wider closer to today. Yet even here, Steve’s “Original Dating” red curve is well outside the wider thesis band.

        Mistake? Many mistakes were made between computer and publication.

        • Brandon Shollenberger
          Posted Mar 16, 2013 at 4:49 PM | Permalink

          Huh? Their uncertainty bands are very dependent upon the uncertainty introduced by being able to time-shift their series.

        • Posted Mar 16, 2013 at 5:00 PM | Permalink

          Why then does the “Original Dating” red curve in Fig. 1 above appear to be such an outlier in the later years?

        • tty
          Posted Mar 17, 2013 at 7:49 AM | Permalink

          Actually the dating uncertainty of a calibrated radiocarbon date is invariably larger, and usually much larger, than that of the original uncalibrated date, where the uncertainty is only a matter of measurement error.

          There are lots of “plateaus” where samples of different dates contain the same amount of radiocarbon. For example, according to INTCAL09 a sample dating 120 BP (1830) could also be from 1710, 1890 or 1910.
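
          The multiple solutions are easy to see with a toy calibration curve (made-up numbers for illustration, not INTCAL09):

            import numpy as np

            # toy calendar-age -> radiocarbon-age curve; the wiggle makes it
            # non-monotonic, which is what creates the "plateaus"
            cal_age = np.arange(0, 400)
            c14_age = cal_age + 30 * np.sin(cal_age / 25.0)

            def calendar_solutions(measured, error):
                """All calendar ages whose curve value matches the measured
                radiocarbon age within +/- error; on a plateau this returns
                disjoint clusters of possible years."""
                return cal_age[np.abs(c14_age - measured) <= error]

            print(calendar_solutions(78, 2))  # two separate ranges of years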

      • laterite
        Posted Mar 16, 2013 at 4:48 PM | Permalink

        At the moment the retraction notice would read something like this.

        “After receiving a communication from S. McIntyre we compared our reconstruction with one using the dating of the cores in the original articles and found prominent differences in the recent period. Due to this inconsistency, we wish to retract the Article, even though the reported reconstruction is consistent with other published reconstructions. We apologize to the readers for any adverse consequences that may have resulted from the paper’s publication.”

    • JEM
      Posted Mar 16, 2013 at 3:07 PM | Permalink

      …taking at least six months, following which the paper that results will resemble the original only in title, but which will nevertheless replace without notice the original in IPCC work.

  18. Posted Mar 16, 2013 at 3:07 PM | Permalink

    I am just starting to read Thinking, Fast and Slow by D. Kahneman. I feel like I have been at the water cooler while Steve has been sharing his thinking via his (and the commenters’) interplay of System 1 and System 2 thinking. It will be interesting to see how the water cooler approach to building knowledge (the “ability to identify and understand errors of judgment and choice, in others and eventually in ourselves” (pg 4)) compares to the peer reviewed method.

  19. mrsean2k
    Posted Mar 16, 2013 at 3:14 PM | Permalink

    ISTR that Mann sought to defend the use of inverted Tiljander on the basis that PCA is sign invariant, in defiance of any rational physical interpretation of the data.

    Perhaps there is some equivalent undisclosed algorithmic sausage-machine in play here, “automatically” shifting the series for “best” fit to the “known” result.

    That sort of thing seems to be unquestioningly regarded as a get-out-of-jail free card.

  20. JEM
    Posted Mar 16, 2013 at 3:15 PM | Permalink

    Borrowing, if I might, the caption from Steve’s first graph, I’ve now got a mental image of a guy clicking around a spreadsheet fiddling a chart to the tune of ‘boom da boom boom da boom redate Marcott style’.

  21. pottereaton
    Posted Mar 16, 2013 at 3:41 PM | Permalink

    Steve, I’m glad you put the names of Clark and Mix in the mix, so to speak. Along with Bard, they should have been the experienced hands on the good ship Marcott et al. Either they were on board and complicit or on shore leave as their post-doctoral grads Marcott and Shakun drifted a long way offshore in a leaky vessel.

    Peter Clark and Alan Mix have been writing papers together since the 90s and even wrote a comment challenging a Forum piece in Eos written by Fred Singer back in 1997.

    • Skiphil
      Posted Mar 16, 2013 at 3:48 PM | Permalink

      … and Clark is a Coordinating Lead Author for the IPCC’s AR5, WG1, the chapter on sea levels

      • pottereaton
        Posted Mar 16, 2013 at 3:57 PM | Permalink

        Beat me to it, Skiphil. That puts this particular paper into an interesting context.

  22. Posted Mar 16, 2013 at 3:44 PM | Permalink

    Given the changes in results as a function of “dating uncertainty” (to put it most charitably), how can ANY of the authors and advisors OK the published error band? In particular the width of the error band as the proxy population tails off.

    This is beyond error. Beyond blunder. At best, it is willful blindness to the introduction of a source of noise and uncertainty not accounted for in the estimate of significance. The authors could not be unaware that different shifts change the results, especially at the end points. They could have produced Figure 1 as part of the paper, but chose instead to leave it as an exercise for the reader.

    I’ve been trying to understand the Monte Carlo aspect of the analysis. Was it to randomly search for the best time offsets of the proxies to reduce the statistical error between proxies?

    Shawn: Regarding the NH reconstructions, using the same reasoning as above, we do not think this increase in temperature in our Monte-Carlo analysis of the paleo proxies between 1920 − 1940 is robust given the resolution and number of datasets…

    Interesting specificity: “1920-1940”. Silence about later than 1940.

    • Jean S
      Posted Mar 16, 2013 at 3:49 PM | Permalink

      Re: Stephen Rasey (Mar 16 15:44),
      well, the reconstruction ended in 1940…

      • Posted Mar 16, 2013 at 4:06 PM | Permalink

        Then I am confused by Steve’s table showing the drop in sampling coverage at 1960, 1980 and 2000.

        Is it possible that the original data ended in 1940, but some proxies were time-shifted into the 1980s?

        • Jean S
          Posted Mar 16, 2013 at 4:35 PM | Permalink

          Re: Stephen Rasey (Mar 16 16:06),
          Steve is using the original dates as bolded in the beginning of the post, but more importantly you are confusing the dates of proxies and the dates of the reconstruction(s). There are no values after 1940 in any of the Marcott et al reconstructions, and Steve asked him about the increase between the last two values (1920, 1940; actually corresponding to 20-year intervals) of his NHX reconstruction. So it is no wonder he did not talk about the “robustness” of the reconstruction after 1940, since there is no reconstruction to talk about.

  23. Jean S
    Posted Mar 16, 2013 at 3:45 PM | Permalink

    On the other hand, I think we all need to acknowledge that the Marcott-Shakun service offers a pretty easy (and cheap!) solution to Steve’s eight year old suggestion! 😉

    • Skiphil
      Posted Mar 16, 2013 at 4:21 PM | Permalink

      Jean S, Steve, and all, you may get some chuckles (or groans) out of this interview with Marcott’s co-author Jeremy Shakun last year:

      Marcott co-author Shakun on their approach to the Shakun et al. (2012) study

      [emphasis added]

      “It was really simple science,” he said. “We said, we’ve got 80 records from around the world, let’s just slap them together, average them into a reconstruction of global temperature.”

  24. Posted Mar 16, 2013 at 3:49 PM | Permalink

    From the supplemental information, bottom of page 1:

    “…This study includes 73 records derived from multiple paleoclimate archives and temperature proxies (Fig. S1; Table S1): alkenone (n=31), planktonic foraminifera Mg/Ca (n=19), TEX86 (n=4), fossil chironomid transfer function (n=4), fossil pollen modern analog technique (MAT) (n=4), ice-core stable isotopes (n=5), other microfossil assemblages (MAT and Transfer Function) (n=5), and Methylation index of Branched Tetraethers (MBT) (n=1). Age control is derived primarily from ¹⁴C dating of organic material; other established methods including tephrochronology or annual layer counting were used where applicable…”

    From page 7:

    “…To account for age uncertainty, our Monte Carlo procedure perturbed the age-control points within their uncertainties. The uncertainty between the age-control points was modeled as a random walk (76), with a “jitter” value of 150 (77). Chronologic uncertainty was modeled as a first-order autoregressive process with a coefficient of 0.999. For the layer-counted ice-core records, we applied a ±2% uncertainty for the Antarctic sites and a ±1% uncertainty for the Greenland site (1σ)…”

    From page 8:

    “…3. Monte-Carlo-Based Procedure
    We used a Monte-Carlo-based procedure to construct 1000 realizations of our global temperature stack. This procedure was done in several steps:
    1) We perturbed the proxy temperatures for each of the 73 datasets 1000 times (see Section 2) (Fig. S2a).
    2) We then perturbed the age models for each of the 73 records (see Section 2), also 1000 times (Fig. S2a)…”

    I thought perturbing data was to protect confidentiality while data mining:

    “…Abstract: Data perturbation is a data security technique that adds ‘noise’ to databases to allow individual record confidentiality.
    This technique allows users to ascertain key summary information about the data while preventing a security breach…”

    Apparently all 73 datasets have been perturbed via the ‘Monte Carlo technique’ a thousand times.

    From page 10:

    “…4. Construction of Stacks
    We constructed the temperature stack using several different weighting schemes to test the sensitivity of the temperature reconstruction to spatial biases in the dataset. These include an arithmetic mean of the datasets (Standard method), both an area-weighted 5°x5° and 30°x30° lat-lon gridded average, a 10° latitudinal area-weighted mean, and a calculation of 1000 jackknifed stacks that randomly exclude 30% and 50% of the records in each realization (Fig. S4 and S8). We also used a data infilling method based on a regularized expectation maximization algorithm (RegEM; default settings) (78). The uncertainty envelope we report for RegEM combines the Monte Carlo simulation uncertainty with that provided by the RegEM code (78)…”

    From page 20 (Fig S12 description):

    “…Fig. S12: Temperature reconstructions using multiple time-steps. (a) Global temperature envelope (1-σ) (light blue fill) and mean of the standard temperature anomaly using a 20 year interpolated time-step (blue line), 100 year time-step (pink line), and 200 year time-step (green line). Mann et al.’s (2) global temperature CRU-EIV composite (darkest gray) is also plotted. Uncertainty bars in upper left corner reflect the average Monte Carlo based 1σ uncertainty for each reconstruction, and were not overlain on line for clarity. (b) same as (a) for the last 11,300 years. Temperature anomaly is from the 1961-1990 yr B.P. average after mean shifting to Mann et al. (2)…”

    From page 21:

    “…We next used the NCDC land-ocean data set, which spans a greater period of time than the NCEP-NCAR reanalysis. Comparison of the global temperature history for the last 130 years to the temperature history derived from the 73 locations of our data sites shows agreement within 0.1°C (Fig. S15). Finally, we used the modeled surface-air temperature from ECBilt-CLIO (81) in the same way as the NCDC land-ocean data set, and again find agreement within 0.1°C or less between our distribution and the global average from the model (Fig S16). These findings provide confidence that our dataset provides a reasonable approximation of global average temperature. Our results are also consistent with the work of Jones et al. (85) who demonstrated that the effective number of independent samples is reduced with timescale…”

    Data from the proxies is perturbed, whatever they mean by that.
    Data is infilled.
    The modern temperature dataset is also given the full Monte simulation.
    Mann et al is used. Strictly for comparison?
    Jones et al is used. Strictly for consistency?
    The modern temperature data set is compared to the 73 locations?
    It sure looks like the modern data is processed via Mannian methods and then included with the end of the 73 proxies (as Steve shows above).

    I keep wondering if the missing data points on the proxies are infilled against the modern temperature data.

    In the manufacturing world I was led to believe that when combining resolutions or error rates, the proper method was to multiply them, not to add and then divide out desired resolution scales after linking datasets.

    It also looks like the age of the data records is adjusted/manipulated throughout their perturbations.

    • Posted Mar 16, 2013 at 4:45 PM | Permalink

      “Modeled as a random walk”
      A random walk implies a sequence of random steps, where the result of step 2 is dependent upon the result of step 1. In how many dimensions?

      Take a “jitter” of 150 (years, for each proxy, I assume).

      If I am to take seriously the concept of random walk, then some series can be shifted by much more than 150 years as the prior step randomly walks away from the zero point.

      Does he mean “random walk” this way: for each proxy(i) and time(j), where j = 0 at present and increases into history, the random walk is to use TimeAdj(n+1) = TimeAdj(n) + TimeInterval + RandValue? So the proxy series can stretch and shrink in time through history as well as bulk shift?
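
      In code, the reading I am proposing would be something like this (my guess at the mechanics, emphatically not their script):

        import numpy as np

        rng = np.random.default_rng(0)

        def perturb_ages(ages, sigmas, jitter=150):
            """One realization for one proxy: random steps accumulate
            (TimeAdj(n+1) = TimeAdj(n) + step), so the series can stretch and
            shrink through history as well as bulk-shift. ages and sigmas are
            NumPy arrays of control-point ages and dating uncertainties (years);
            scaling each step by sigma is only my guess at how the "jitter"
            of 150 enters."""
            steps = rng.normal(0.0, sigmas * (jitter / 150.0))
            drift = np.cumsum(steps)                    # the random walk
            return np.maximum.accumulate(ages + drift)  # keep ages in order

      If that is the right reading, the question stands: how far can the walk wander by the bottom of a core?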

      1000 jackknifed stacks that randomly exclude 30% and 50% of the records in each realization

      In other words, use Monte Carlo to shift, shrink, stretch and exclude proxies optimizing on… WHAT? How many degrees of freedom are in this analysis? The number of possible combinations probably exceeds the number of atoms in the Sun (2E+57).

      • Brandon Shollenberger
        Posted Mar 16, 2013 at 5:01 PM | Permalink

        Stephen Rasey:

        Take a “jitter” of 150 (years, for each proxy, I assume)

        If I am to take seriously the concept of random walk, then some series can be shifted by much more than 150 years as the prior step randomly walks away from the zero point.

        That is not what the jitter value means. If it were, the dating uncertainty of each series would be completely irrelevant. In reality, the jitter value of a random walk should be scale insensitive, and it is effectively multiplied by the dating uncertainty of each series. That means each series will be shifting by different amounts depending on the dating uncertainty for that series.

        In other words, use Monte Carlo to shift, shrink, stretch and exclude proxies optimizing on… WHAT? How many degrees of freedom are in this analysis? The number of possible combinations probably exceeds the number of atoms in the Sun (2E+57)

        Why would you say the process excludes proxies “optimizing on… WHAT?”? What part of “randomly” excluding series makes you think they are optimizing on anything? Moreover, why would you talk about this immediately following a discussion of the authors’ main Monte Carlo method when this is just a single implementation of it that has no bearing on the others?

        • Posted Mar 16, 2013 at 6:03 PM | Permalink

          I say “optimizing” because of the regularized expectation maximization algorithm.

          I also say optimizing because the number of degrees of freedom is large, and without some sort of optimized search, 1000 purely random samples of the domain seems woefully inadequate.

          Just take the combinations of proxies in or out: 2^73 ≈ 10^22 combinations.

          Add on the constraint that we’ll take 100% of the combinations where 50% or more are included (>10^21), none of the combinations where 30% are excluded (about 10^18), and an uncertain but variable fraction of those where 30-50% are excluded. It is still >10^21. Will 10^3 adequately sample this space? I think not. For every one of the 1000 trial samples, there are 10^18 unsampled combinations.
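
          The exact counts are easy to check (a quick sketch):

            from math import comb

            n = 73
            total = 2 ** n                        # every in/out combination
            half_or_more = sum(comb(n, k) for k in range(37, n + 1))
            excl_30pct = comb(n, round(0.3 * n))  # exactly 22 of 73 excluded

            print(f"{total:.3e}")         # ~9.4e21
            print(f"{half_or_more:.3e}")  # exactly 2^72 by symmetry, ~4.7e21
            print(f"{excl_30pct:.3e}")

          Either way, the space dwarfs 1000 samples.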

        • Brandon Shollenberger
          Posted Mar 18, 2013 at 5:58 AM | Permalink

          Your explanation makes less than no sense. The algorithm whose name you highlight isn’t a part of the methodology we’re discussing. It’s a methodology the authors used to infill data in one set of test runs. That has nothing to do with the Monte Carlo process. In fact, it has nothing to do with the jackknifed runs you refer to next.

          You do the math for the number of combinations possible if series were dropped out altogether. Again, that is nothing like what the authors did in their Monte Carlo process. You are referring to a method used in modifying data for one set of test runs, a different set than those you were just talking about. Moreover, your math is done for a dramatically different set of conditions than that used in the paper, and that greatly increases the numbers you came up with.

          Both reasons you’ve given are unconnected to the Monte Carlo process. They are even unconnected to each other as both are connected to different test runs. I have no idea how you think they explain anything.

      • Paul Hanlon
        Posted Mar 16, 2013 at 7:46 PM | Permalink

        Okay, forgive a statistical noob.

        The occasions I have seen Monte Carlo simulations applied were in the stock and futures markets. After playing around with them for a while, I found them to be not worth the effort in terms of their predictive value.

        As I understand it, in this case they have been used to infill data. The problem I have with this is that in a truly random system (which paleoclimatology is not), you will get periods when things go one way for a while. Toss a thousand coins and you will get “ten heads” sets. Now suppose that one sees one of these sets and then “redates” it so that it becomes significant: what have you got?

        Absolutely nothing. Any findings are based on a totally artificial dataset. It doesn’t matter how many times you redo it.

        Paleoclimatology is not “random”. There is a reason why something has happened. Monte Carlo simulations have uses when one is doing what-if scenarios, but using them to derive significant insights is no better than reading tea leaves. I stand to be corrected.

    • jim2
      Posted Mar 17, 2013 at 9:59 AM | Permalink

      “Data perturbation is a data security technique that adds ‘noise’ to databases to allow individual record confidentiality. This technique allows users to ascertain key summary information about the data while preventing a security breach…””

      Does anyone see why individual proxy points would need to be “anonymized?”

      What else would a Monte Carlo treatment of the data bring to the table?

      I don’t understand, which is why I’m asking.

  25. Jeff Condon
    Posted Mar 16, 2013 at 4:11 PM | Permalink

    “All five re-dated cores contributing to the AD1940 reconstruction had positive values.”

    Can you clarify what you meant by this?

    • NZ Willy
      Posted Mar 16, 2013 at 5:26 PM | Permalink

      positive temperature anomalies compared with their 4500-5500 BP baseline.

      • Jeff Condon
        Posted Mar 17, 2013 at 8:42 AM | Permalink

        That is what I think he means but without looking at the data myself, I’m not sure if he means to say that there is a significant effect from this. My guess is that it was just an observation and Steve isn’t sure yet.

    • Manfred
      Posted Mar 16, 2013 at 5:47 PM | Permalink

      That gives you an upward tick if the previous timestep contained additional negative value proxies which then dropped out.

  26. Robert
    Posted Mar 16, 2013 at 4:13 PM | Permalink

    For my own interest I wrote the following list of concerns/issues that Steve and co have raised. Pls let me know if this is complete/correct.

    (1) The uptick seems to be an artefact of the algorithm. That said, the authors state “clearly” that the 1890-onwards part is not “robust”.
    Why bother showing it at all then?
    (2) The uptick anyway starts earlier in the extratropical reconstructions, so the “1890-onwards” argument in (1) is dubious.
    (3) The re-dating has a large effect on the temps corresponding to the past few hundred years, providing an up-tick rather than a down-tick. Is it standard practice to re-date? Is the re-dating performed here sensible?

    I add my pet worry:
    (4) Why is the uncertainty for temps of 10000 years ago the same as for 200 years ago? This is counter-intuitive in view of the variability of the non-temperature quantities affecting proxies.

    • Posted Mar 16, 2013 at 4:49 PM | Permalink

      Re (4). Is that where the “random walk” part comes in?

    • HaroldW
      Posted Mar 18, 2013 at 7:17 AM | Permalink

      “Why is the uncertainty for temps of 10000 years ago the same as for 200 years ago? This is counter-intuitive… ”
      Interestingly, in Marcott’s thesis, the uncertainty bands increased for the latest ~500 years, but this is not the case in the published paper.

      The authors evaluate the range of outcomes which are achieved by Monte Carlo dithering of the chronologic and temperature calibration parameters, and from randomized subsets of the proxies. However, even if the calibrations were ideal, and the proxies provided a fully accurate local temperature, the limited number of samples must place a lower bound on the accuracy of a reconstruction of global temperatures. This lower bound increases as the number (or geographical diversity) of proxies decreases. Failure to include this sampling uncertainty leads to overly tight confidence intervals.

      Gergis et al. — at least in its first incarnation — also did not quantify uncertainty due to sampling.
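
      A minimal R sketch of that sampling floor (synthetic numbers, not Marcott’s proxies):

      # Even perfect local temperatures from a 73-site network leave an
      # irreducible spread in the estimated global mean.
      set.seed(2)
      local.anom <- rnorm(73, mean = 0, sd = 1)  # idealized local anomalies
      boot.means <- replicate(5000, mean(sample(local.anom, replace = TRUE)))
      sd(boot.means)  # about 1/sqrt(73) = 0.12, with zero calibration error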

  27. Posted Mar 16, 2013 at 4:27 PM | Permalink

    This should be in the news. If they want scary headlines, it’s right there. I can’t think of anything more scary than this on-going very deliberate deception.

    Yes, I know, no one has to explain to me why it won’t happen, the MSM are puppets, I know they’ve been bought. I just keep thinking that somewhere there’s a paper, a reporter or an editor who is not quite so pink as the others and who will finally see just how hot this story is.

    We are in the dying days (months? years?) of the biggest and longest-lasting scam in the history of our world and every single one of the newspapers in existence today will look back in a few years and wonder why the heck they didn’t run with the story first.

    Well done, by the way, for uncovering this, Steve, we would be lost without you.

  28. Green Sand
    Posted Mar 16, 2013 at 4:27 PM | Permalink

    During my first day at work, a long, long time ago, I was introduced to the “Sawdust Platting Dept”. Never thought I would ever come across it again.

  29. Ivan Jankovic
    Posted Mar 16, 2013 at 4:29 PM | Permalink

    If the reconstruction ends in 1940, how then can the IPCC use it to bolster their political claims (since we know that even according to the IPCC most of the warming before 1950 was natural in origin)? Hence, if Marcott et al are right, that would only prove that the natural forcing in the early 19th century was extremely strong.

    • RomanM
      Posted Mar 16, 2013 at 4:43 PM | Permalink

      But the temperature has increased even more since then …

  30. Ivan Jankovic
    Posted Mar 16, 2013 at 4:29 PM | Permalink

    I meant early 20th century.

    • Posted Mar 16, 2013 at 4:49 PM | Permalink

      A shift of only 100 years? That’s barely worth mentioning by Marcottian standards.

  31. Posted Mar 16, 2013 at 4:56 PM | Permalink

    “I was unable to locate any reference to the wholesale re-dating in the text of Marcott et al 2013.”
    Actually it is rather obvious from the data selection criteria (and elsewhere in the Supplementary Materials) that Marcott et al recreate the age-depth models. They write:

    “6. All datasets included the original sampling depth and proxy measurement for complete error analysis and for consistent calibration of age models(Calib 6.0.1 using INTCAL09 (1)).”

    There would be little (probably no) utility in calibrating the radiocarbon dates with INTCAL09 (or MARINE09 for the marine dates) if the age-depth models were not updated to the re-calibrated dates. This is a necessary, or at least highly desirable, step in this type of analysis. The older publications in Marcott et al would have used INTCAL98 or an earlier calibration curve, or perhaps not calibrated the dates at all, and really do need moving onto a modern calibration curve. There is also the possibility that the original authors had calibrated their radiocarbon dates using an incorrect protocol – I know of some papers published when calibration was a fairly new step that report strange methodologies.

    Treating the coretop as 0BP (1950 CE) is commonly done and is reasonable in the absence of other information. However, I would not have recommended this assumption for MD95-2011.
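
    For what it’s worth, the mechanics are easy to sketch in R (the two-column curve below is a toy stand-in for a calibration curve such as INTCAL09; this illustrates the intercept idea only and is not Calib 6.0.1):

    # Crude "intercept method": find the calendar age whose curve 14C
    # age best matches the measured 14C age.
    cal.curve <- data.frame(calBP = seq(0, 5000, by = 5),
                            c14BP = 0.97 * seq(0, 5000, by = 5) + 40)  # toy curve
    recalibrate <- function(c14.age, curve) {
      curve$calBP[which.min(abs(curve$c14BP - c14.age))]
    }
    recalibrate(2000, cal.curve)  # ~2020 cal BP on this toy curve; a
                                  # different curve version shifts the answer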

    Steve: the re-dating between the thesis and the Science article is NOT “obvious”. The thesis has the same language about CALIB 6.0.1 but doesn’t play around with coretops.

    • Jean S
      Posted Mar 16, 2013 at 5:15 PM | Permalink

      Re: richard telford (Mar 16 16:56),

      I would not have recommended this assumption for MD95-2011.

      Is there a series in Steve’s table for which you would have recommended the assumption?

      • Posted Mar 16, 2013 at 5:19 PM | Permalink

        Of these cores, I am only familiar with MD95-2011.

        • Steve McIntyre
          Posted Mar 16, 2013 at 5:22 PM | Permalink

          Richard, about 6 years ago, you commented that dating of JM97-948/2A was erroneous in one of the Pangaea archives. Has this been corrected? See https://climateaudit.org/2007/11/28/loehle-proxy-md95-2011/

        • Posted Mar 16, 2013 at 5:52 PM | Permalink

          The re-dating should be obvious to anybody who has worked on proxy chronologies – if the dates are recalibrated, the chronology needs to be recreated. Perhaps Marcott et al could have stated this more explicitly.

          I’ve just looked again at the JM96-948/2A data on Pangaea.de.
          doi:10.1594/PANGAEA.510801 has a reasonable age-depth model (-0.045 to 0.522 ka BP).
          doi:10.1594/PANGAEA.510799 has an odd age-depth model (-0.049 to -0.02 ka BP); not sure what has happened here. When I next meet the author, I will ask her.

          Steve: Richard, you commented on this error six years ago and it still isn’t fixed. Why don’t you email her while you remember?

    • bernie1815
      Posted Mar 16, 2013 at 5:16 PM | Permalink

      Richard:
      Many thanks for the response. Does the magnitude of the redating surprise you? Are Marcott et al the first ones to redate these proxies? Wouldn’t each redating exercise need to be documented and validated?

    • NZ Willy
      Posted Mar 16, 2013 at 5:34 PM | Permalink

      I think the C14 dating is used only to locate the 4500-5500 BP baseline for each series. The idea is: get the proxy depths of the baseline from C14, then use those depths to get the baseline temperatures from the main series data — the baseline is then used to calculate the temperature anomalies for the whole series. Apologies if this is obvious or wrong.
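
      In R terms, something like this toy sketch (invented series; the column names are mine):

      # anomaly = proxy temperature minus the series' own 4500-5500 BP mean
      proxy <- data.frame(ageBP = seq(0, 10000, by = 100),
                          temp  = 28 + rnorm(101, sd = 0.3))
      base <- mean(proxy$temp[proxy$ageBP >= 4500 & proxy$ageBP <= 5500])
      proxy$anom <- proxy$temp - base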

    • Kenneth Fritsch
      Posted Mar 16, 2013 at 7:59 PM | Permalink

      snip – fair question but OT. I’ll deal with it separately

  32. Andy
    Posted Mar 16, 2013 at 4:56 PM | Permalink

    So if I understand: they have taken proxies which show the opposite of what some unnamed climate scientists want (that is, an upside-down hockey stick), redated them to remove them from the time period we are all worried about, leaving hockey-stick-like proxies in place, and voilà – a hockey stick and worldwide fame is the result?

    • mpaul
      Posted Mar 17, 2013 at 10:03 AM | Permalink

      I’ve been in a foul mood on this topic and entirely too snarky. Steve has batted down several of my comments as a result. So let me try to articulate what’s got me in such a snit.

      I can fully understand the need to re-date based on new or updated calibration data. But how does one choose? The era of big data has created a situation where scientists have a nearly infinite smorgasbord of data to choose from. I’d love it if someone would actually examine the combinatorics here. I would wager that there are billions (or more) of potential re-dating combinations that could be justified by information available somewhere in the big data soup. And Steve has shown that the reconstruction is hyper-sensitive to dating. Then there’s the problem of information asymmetry — it’s unlikely that any particular investigator is aware of all of the information available. So how do you avoid cherry picking? How do you avoid talking yourself into a rationale for re-dating in such a way as to produce a tidy result? There’s just too much opportunity for confirmation bias.

      It’s a bit analogous (although not perfectly so) to the Drake Equation fallacy. Basically you can get any answer you want because the entire equation is based on a combination of assumed values. Re-dating turns a multi-proxy reconstruction into a mulligan stew of assumed values.

      Methodologically, I think this argues that you need a separation of duties. The investigator producing the multi-proxy reconstruction should not be permitted to re-date individual proxies.

      • Steve McIntyre
        Posted Mar 17, 2013 at 10:57 AM | Permalink

        I’ve been deleting a number of “piling on” comments. Precisely what constitutes “piling on” is a bit arbitrary. But I dislike comments that editorialize against climate scientists or journals or otherwise moralize. The facts are eloquent enough.

      • bernie1815
        Posted Mar 17, 2013 at 11:09 AM | Permalink

        mpaul:
        Your comment that “The investigator producing the multi-proxy reconstruction should not be permitted to re-date individual proxies” makes sense to me though I think it would be OK if they first wrote up their redating, received feedback and needed adjustments and then separately wrote up the aggregation.

  33. Pav Penna
    Posted Mar 16, 2013 at 4:57 PM | Permalink

    The detailed mathematical discussion here is above my pay grade. However, I can read English well enough to understand the implications and Steve McIntyre has earned my trust on these issues.

    The next few days are going to be fascinating. What will the authors do? What will Science do? What about Mann and Revkin? The reviewers?

    How do you back down from the obvious bubbly joy and public high-fivin’ with which they greeted this pig of a paper?

    Perhaps their best approach would be to just emulate the Church Lady on Saturday Night Live – “Oh, that’s different! Never mind.”

    • John B
      Posted Mar 16, 2013 at 8:58 PM | Permalink

      You can put lipstick on a pig but it’s still a pig.

  34. Posted Mar 16, 2013 at 4:59 PM | Permalink

    If you were perturbed via the ‘Monte Carlo technique’ a thousand times, you might withhold giving the correct answer too ….

    Sorry – couldn’t resist 😉

    • Theo Goodwin
      Posted Mar 16, 2013 at 7:32 PM | Permalink

      That was Emily, not the Church Lady. I wonder if Romm and others will begin wondering if they have been set up?

  35. TerryS
    Posted Mar 16, 2013 at 5:05 PM | Permalink

    So they have taken a core that showed it was anomalously warm during the MWP and re-dated it to show it is anomalously warm today.

  36. Posted Mar 16, 2013 at 5:21 PM | Permalink

    The moral of today’s post for ocean cores. Are you an ocean core that is tired of your current date? Does your current date make you feel too old? Or does it make you feel too young? Try the Marcott-Shakun dating service. Ashley Madison for ocean cores. Confidentiality is guaranteed.

    Thanks, Steve … with your choice of topic and this closing para [not to mention the graph], you’ve given this statistically-challenged person a grand chuckle for the day!

    With so many contortions dedicated to preserving the icons of “the cause”, surely there’s enough material that you’ve unearthed for Cirque du Soleil to consider a new production! Or perhaps a spin-off “Cirque du Science”!

    I do hope that Josh has read this post; it cries out for one of his inimitable captures 😉

  37. joshv
    Posted Mar 16, 2013 at 5:39 PM | Permalink

    Steve, where did the re-dating table come from? Is it inferred from your replication or is it contained in Marcott’s various publications?

    • NZ Willy
      Posted Mar 16, 2013 at 6:11 PM | Permalink

      They are in the main data under the column heading “Marine09 age”.

  38. jorgekafkazar
    Posted Mar 16, 2013 at 5:52 PM | Permalink

    This “Monte Carlo procedure” has more the aroma of Las Vegas.

  39. Salamano
    Posted Mar 16, 2013 at 5:57 PM | Permalink

    From Revkin’s blog (if it hasn’t already been posted somewhere here yet)…

    http://dotearth.blogs.nytimes.com/2013/03/07/scientists-find-an-abrupt-warm-jog-after-a-very-long-cooling/#more-48664

    Steve McIntyre at Climate Audit has been dissecting the Marcott et al. paper and corresponding with lead author Shaun Marcott, raising constructive and important questions.

    As a result, I sent a note to Marcott and his co-authors asking for some elaboration on points Marcott made in the exchanges with McIntyre. Peter Clark of Oregon State replied (copying all) on Friday, saying they’re preparing a general list of points about their study:

    After further discussion, we’ve decided that the best tack to take now is to prepare a FAQ document that will explain, in some detail but at a level that should be understandable by most, how we derived our conclusions. Once we complete this, we will let you know where it can be accessed, and you (and others) can refer to this in any further discussion. We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.

    ———————-

    Sometimes these “this is all I’m gonna say about it” FAQs may end up moving the pea or not addressing the actual issues, but it’s nice to see that they’re willing to do this. Also nice to see Steve hitting this hard in the past few days – the FAQ is most likely being drawn up now, and will probably only address whatever has been generated up through this weekend or so.

    • AntonyIndia
      Posted Mar 16, 2013 at 10:04 PM | Permalink

      In Andy Revkin’s first video, Jeremy Shakun is full of the 4-6C temperature rise that he is sure will come. His confidence in showing (only) a present short-term, very sharp uptick in a low-frequency graph seems to stem from those predictions. It has to happen according to theory, so there it is.

    • David L. Hagen
      Posted Mar 16, 2013 at 11:34 PM | Permalink

      At Revkin’s blog, Robert Rohde observes:

      The 20th century may have had uniquely rapid warming, but we would need higher resolution data to draw that conclusion with any certainty. Similarly, one should be careful in comparing recent decades to early parts of their reconstruction, as one can easily fall into the trap of comparing a single year or decade to what is essentially an average of centuries.

    • DaveA
      Posted Mar 17, 2013 at 5:09 AM | Permalink

      Frequently Asked Questions the reviewers didn’t ask? lol

  40. Posted Mar 16, 2013 at 5:58 PM | Permalink

    The Marcott version has bumps in all the right places. It might be interesting to find out how far their dating changes would need to be pushed around to make a bigger MWP and smaller C20th WP.

    Then ask why one set of arbitrary dating would be preferred over the other by ‘the team’.

  41. Posted Mar 16, 2013 at 6:02 PM | Permalink

    Reblogged this on Climate Ponderings.

  42. Paul Matthews
    Posted Mar 16, 2013 at 6:11 PM | Permalink

    I have submitted a brief comment to the journal (this morning, before the two latest CA posts appeared).

    • David L. Hagen
      Posted Mar 16, 2013 at 7:58 PM | Permalink

      A succinct public referee’s statement! Well put.

    • Robert
      Posted Mar 16, 2013 at 11:43 PM | Permalink

      Paul
      They argue that the time interval of the uptick is unreliable and that they made this clear. Their response to your point would be “big deal whether it’s an uptick or downtick here, we’ve said in the paper not to trust it.”

      Sure, the “uptick” region of invalidity for this set of measurements is not quite just 1850-onwards (looking at the hemispheric results). However, this could be argued to be not terribly serious in view of the very long period over which they make measurements.

      I think their stats are, in general, sloppy in this paper and the PR surrounding this work absurd. However, I don’t think your argument, or indeed the other criticisms made here, is clearly retraction-worthy.

  43. Posted Mar 16, 2013 at 6:21 PM | Permalink

    I suspect we are dealing with groupthink here. Marcott’s thesis describes honest and respectable work. The 2013 Science paper, however, gives the impression that (among other things) carbon dating has been subtly manicured to support a predefined narrative. This may be an entirely false impression. However, I have other concerns with the linear interpolation of each proxy to a 20-year time resolution, and with the method used to calculate regional anomalies.

    1. Was interpolation done before deriving anomalies or after? This will make a difference.

    2. Were anomalies calculated for each location using the measured data or only after interpolation?

    3. Were the anomalies calculated prior to geographical averaging, or were they calculated afterwards?

    4. Were anomalies transformed to the 1961-1990 baseline individually, or just the regional averages? Is the transform a simple linear offset?

    • Robert
      Posted Mar 16, 2013 at 11:59 PM | Permalink

      Clive

      Groupthink is also likely happening here. Following such gems as the classic Mann hockey stick, upside-down proxies and Gergis et al., there is an expectation here (IMO) that this paper is fundamentally baloney and that each of Steve’s points is a nail in the paper’s coffin.

      I’ve seen lots of interesting points raised by Steve and others which demand serious answers. However, to abuse yet further the term, I’ve seen nothing yet which indicates that the paper’s major results are not robust and ought to be withdrawn. Even the redating question, perhaps the most serious of the issues, can likely be met with a decent explanation (Richard Telford). If the authors state that 1850-beyond is unreliable they can just about argue that nobody should worry whether an uptick or downtick occurs.

      The PR etc surrounding this paper is another matter
      entirely. I recently spent a few hours with a journalist who was writing an article on one of my papers. I avoided overhype and encouraged him to do the same.

      Were I one of the authors, I would have jumped into this discussion, given the track record of this blog. The tone of comments the authors would have received would have been critical but not vicious. Science should work this way. FAQs and response-posts on other blogs don’t move science forward.

      • Steve McIntyre
        Posted Mar 17, 2013 at 12:36 AM | Permalink

        Richard Telford’s comments are always interesting, but in this case, I’m 99.99% sure that he’s holding ranks while privately gnashing his teeth. There’s no possible way that he would endorse Marcott et al’s wholesale redating of specialist cores.

        Using CALIB 6.0.1 consistently is one thing, but Marcott et al have done something much different. Richard’s very mild disapproval of the redating of MD95-2011 should speak volumes. But it’s foolish to expect more than that in public discussion. Privately, I’m sure that he would ream Marcott et al out if he had a chance.

        Imagine if Craig Loehle or I had produced a reconstruction making a similar re-dating of cores dated by specialists. We’d have had our heads handed to us by the specialist community. Imagine what Gavin Schmidt would have written if Loehle had redated a core by 1000 years. He’d have run Loehle out of town.

        • sue
          Posted Mar 17, 2013 at 12:53 AM | Permalink

          I found his latest post at his website interesting: http://quantpalaeo.wordpress.com/2013/03/16/collaboration-networks-in-bio/ And he provides his own R code! And on the day that he responds here for the first time since 2008 (?). Of course, he is only doing an analysis of his own university departments, but I assume others could use this for other analyses…

        • Robert
          Posted Mar 17, 2013 at 1:25 AM | Permalink

          Assuming your hypothesis is correct, I still don’t see how this has any material impact on the study. It certainly would be poor practice (I doubt such work would have been shown in the paper had it led to downticks) but the major conclusions of the work would be largely unchanged.

        • Robert
          Posted Mar 17, 2013 at 2:13 AM | Permalink

          The big issue for me is the size of the uncertainties. It is counter-intuitive that a quantity from 10000 years ago can be measured to the same accuracy and precision as one from 200 years ago. Having looked at the thesis, it’s very difficult to work out exactly what they did here. I will take a little convincing that their uncertainty evaluation and averaging procedure somehow takes into account all of the (non-temperature) factors which influence a proxy and which aren’t terribly well constrained as one goes back to the very distant past of 10000 years ago. Their time-independent uncertainties represent an extraordinary claim, and such claims require extraordinary evidence.

          This is unfortunate since it is the uncertainty which leads them to draw their conclusions about recent temperatures being higher x% of the past y years.

        • Carrick
          Posted Mar 17, 2013 at 2:40 AM | Permalink

          Robert:

          Assuming your hypothesis is correct, I still don’t see how this has any material impact on the study. It certainly would be poor practice (I doubt such work would have been shown in the paper had it led to downticks) but the major conclusions of the work would be largely unchanged.

          Um, what “major conclusions” are you referring to that are left unchanged?

          As far as I can tell, this paper made its way into Science not on other merits, but because of what now appears to be an erroneous last data point.

          The reconstructions that don’t use the Marcott-Shakun date realignment don’t show this uptick at the end point… that is making this paper (and sadly the field of paleoclimate by extension, given how many have climbed on board and endorsed this paper) look really bad.

        • Robert
          Posted Mar 17, 2013 at 2:51 AM | Permalink

          Carrick

          Instrumental data is used to make the case for the fast rise in recent temperatures.
          Marcott’s results are primarily historical measurements stretching back 10000 years or so. Indeed, in Marcott’s own words, their measurements corresponding to the most recent times aren’t robust and are not to be taken seriously. They’ve written themselves a get-out clause.

          Marcott’s conclusions are unchanged whether or not measurements relating to the most recent 150 years are kept in the paper.

        • Mooloo
          Posted Mar 17, 2013 at 4:14 AM | Permalink

          but the major conclusions of the work would be largely unchanged.

          Without the uptick, would the major conclusion not be that we are slowly cooling? That’s what their graph shows. Even if there is a recent (post-1950) temperature spike, we are well within the normal bounds for recent millennia. In fact, the CO2 is likely cancelling the slow drift into an ice age that the data shows.

          I doubt strongly that Marcott, Shakun, Clark and Mix really want to send a message that we don’t need to worry about warming.

          Without the uptick this paper is a disaster for the calamitous warming cause.

        • Robert
          Posted Mar 17, 2013 at 4:25 AM | Permalink

          Mooloo

          The uptick is an irrelevance. It is a useful picture for those putting forward a certain point of view but has no scientific merit. Even the authors admit this in the paper. If it was a downtick (and it was shown) it would not be a disaster for the CAGW hypothesis, since it is an artefact of an algorithm and not a temperature measurement.

          The paper’s significance is due to the temperature measurements stretching back 10000 years.

        • kim
          Posted Mar 17, 2013 at 11:22 AM | Permalink

          As Mooloo notes, there is a grand irony for the discourse and a yawning dilemma for the alarmist narrative. I was amazed to find this sentence in CNN’s first reporting of Doctor Marcott’s article: ‘If not for man-made influences, the Earth would be in a very cold phase right now and getting even colder’.
          ======================

        • Carrick
          Posted Mar 17, 2013 at 12:39 PM | Permalink

          Robert:

          Instrumental data is used to make the case for the fast rise in recent temperatures.

          Yes, but given the low-frequency nature of Marcott’s reconstruction, you can’t use the lack of data to argue that there weren’t correspondingly fast increases or decreases prior to the temperature record, and the proxy based record tells you nothing about high frequency variability.

          This is a case of “absence of evidence is not evidence of absence.”

          But I was looking for what “new” conclusions there were to this paper?

          It’s plausible, even likely, that if you added high-frequency noise to the Holocene Climate Optimum period, you’d still have periods where historic temperatures were warmer than current. So that’s certainly not one.

          I agree with you regarding the problems at the end points of the series, but as you know Marcott doesn’t (or maybe now “didn’t”) share that lack of enthusiasm when he said “We’ve never seen something this rapid. Even in the ice age the global temperature never changed this quickly.”

          That certainly is an unfortunate thing to claim. “We’ve never seen … ” may be true, but it doesn’t immediately follow that the temperature “never changed this quickly.”

          As you probably are aware the attribution studies suggest that the global warming from 1910-1950 was primarily natural in origin and has a slope that is statistically indistinguishable from that of 1970-2000, so even within the existing temperature record, we have an example where it did “change this quickly.”

      • MrPete
        Posted Mar 17, 2013 at 1:19 AM | Permalink

        Re: Robert (Mar 16 23:59),
        The challenge is not just “PR.” Can you find any MSM discussion of the paper that provides an appropriate headline, let alone content? Revkin’s is the most even-handed content I’ve seen so far, and his headline is 100% incorrect.

        In fact, is there any non-skeptic discussion of the paper that is appropriately grounded? I’m truly astounded this paper has received any attention at all, let alone been published. Talk about GIGO.

        • Posted Mar 17, 2013 at 4:57 AM | Permalink

          “Science’s Mission: Science seeks to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.”

          I would like to see this paper framed in these terms…

        • Skiphil
          Posted Mar 18, 2013 at 3:44 PM | Permalink

          May I pose two questions?

          1) Can Marcott et al. represent 1820-1920 accurately, when most of the globe was supposedly emerging from the downtick of the LIA?

          2) Do the alkenones really have the temperature sensitivity and reliability to provide the resolution needed?

  44. Posted Mar 16, 2013 at 6:46 PM | Permalink

    Stunning…
    BTW do these authors engage in time-warping or merely time-shifting?

    In a related question, do they constrain themselves to the Lorentz Gauge?

    Just wow…
    RR

  45. Gary Hladik
    Posted Mar 16, 2013 at 7:15 PM | Permalink

    “It is taking all my will power not to make an obvious comment at this point.”

    I think the phrase you’re looking for is “not robust”. 🙂

  46. Carrick
    Posted Mar 16, 2013 at 7:19 PM | Permalink

    For people that have missed it, Nick Stokes has had a go at recreating the series.

    He’s got R code, but unfortunately it uses xlsReadWrite, which I don’t think is available on a Mac.

    • Posted Mar 16, 2013 at 8:17 PM | Permalink

      Carrick,
      Sorry about that xls problem. I’ve modified the code so that it immediately writes the data that it reads to an R binary (prox.sav), which I’ve put into the zip file. There’s a flag to load the file instead of reading the xls.

      • Brandon Shollenberger
        Posted Mar 16, 2013 at 8:37 PM | Permalink

        Nice timing Nick Stokes. I was just about to post something similar as I made a workaround for my own use, and I thought it might be worth posting with your code.

        Carrick, if that doesn’t work, let me know. I had to make some workarounds for two different systems, and I think I figured out all the kinks. If not, I can provide the data in a simple CSV file R will have no trouble reading.

      • Nathan Kurz
        Posted Mar 17, 2013 at 1:33 AM | Permalink

        Just wanted to say a quick thank you to Nick for your continued presence here. Providing code is great! It must take a thick skin, but right or wrong (on either side) it’s a better site for your skepticism toward the skepticism.

        • sue
          Posted Mar 17, 2013 at 2:03 AM | Permalink

          +1 I appreciate Nick’s contribution also…

  47. Mindert Eiting
    Posted Mar 16, 2013 at 7:25 PM | Permalink

    Both proxy values (temperatures) and their dates constitute the data. The dates were changed, which means that the data of the thesis and the paper are different. The only valid reason to change the dates is that they were seriously wrong in the thesis. Is this stated and motivated by the paper’s authors?

    • Kenneth Fritsch
      Posted Mar 16, 2013 at 7:34 PM | Permalink

      “The only valid reason to change the dates is that they were seriously wrong in the thesis. Is this stated and motivated by the paper’s authors?”

      I believe that SteveM wrote this thread with the intent of showing that the changed dates in the paper are different from what the original proxy source papers showed, and that the thesis was more in line with the original proxy sources. SteveM also notes that the authors of the Marcott paper had little or nothing in the way of explanation of why and by what rationale the dates were changed.

      I am quite sure that the authors are very much aware of this discussion at CA and they could come by and explain this all in a flash.

    • Theo Goodwin
      Posted Mar 16, 2013 at 7:44 PM | Permalink

      Thanks for bringing up data – as in “facts.” Work with proxies is so far from fact that the crucial issues in the Marcott controversy do not touch upon fact at all. Even the critics, first rate critics such as McIntyre and Telford, agree that the dates can be changed and legitimately. Clearly, then, the entire discussion is over what is “proper” in the relevant statistical methodology. Whenever proxies for temperature are the topic, scientists believe that they are quite justified in failing to tie their inferences to any factual ground at all. In my humble opinion, the lack of empirical science in the study of proxies is exactly why Warmists love them.

      • Mindert Eiting
        Posted Mar 17, 2013 at 9:34 AM | Permalink

        OK, with legitimate change of the dates, the question is whether re-dating was independent of proxy shape. We have a population of 73 proxies. In order to get an up-tick in the twentieth century, we need proxies with an up-tick in their tails. We take a sample of n proxies to be re-dated. Assuming random selection, the hypergeometric distribution gives the probability of getting (at most/at least) k proxies with an up-tick in their tails. Or, to get the required up-tick, proxies with an up-tick in the tail should be re-dated upwards, and proxies with a down-tick in the tail should be re-dated downwards. Same procedure next.
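
        In R the calculation is one line; all counts below are hypothetical placeholders, since we do not yet know how many proxies have up-tick tails or were re-dated:

        m <- 30          # proxies with an up-tick tail (hypothetical)
        n.redated <- 9   # proxies that were re-dated (hypothetical)
        k <- 8           # up-tick proxies observed among the re-dated ones
        1 - phyper(k - 1, m, 73 - m, n.redated)  # P(at least k by chance)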

  48. Rud Istvan
    Posted Mar 16, 2013 at 7:50 PM | Permalink

    Steve, bravo.
    There is a classic smoking-gun ‘proof’ of this time-shifting trick in the Science paper as published. Compare figure 1G (proxy ‘survival’) to figure 4.3C in the thesis. Basically the same graphic, except the Science version shows how many proxies (at least 10) were pulled forward to at least 1850, while at least 9 were provably pulled back from 1950.

    I sensed passing the puck to you from Climate Etc would result in a hockey goal. Had no idea how solid the goal would be. A high-sticking penalty would seem to be in order for Marcott et al.
    Highest regards
    Rud

  49. Skiphil
    Posted Mar 16, 2013 at 8:55 PM | Permalink

    Peter Clark of the author team told Revkin (on Friday) that there is an FAQ document in preparation, to respond to issues being raised. This was before Saturday’s developments, so it will be interesting to see whether they will still try to deal with everything via FAQ, or whether they will have to take new action about the paper itself:

    Peter Clark to Andy Revkin

    [REVKIN]
    As a result, I sent a note to Marcott and his co-authors asking for some elaboration on points Marcott made in the exchanges with McIntyre. Peter Clark of Oregon State replied (copying all) on Friday, saying they’re preparing a general list of points about their study:

    [PETER CLARK of Oregon State]
    After further discussion, we’ve decided that the best tack to take now is to prepare a FAQ document that will explain, in some detail but at a level that should be understandable by most, how we derived our conclusions. Once we complete this, we will let you know where it can be accessed, and you (and others) can refer to this in any further discussion. We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.

    • pottereaton
      Posted Mar 16, 2013 at 10:49 PM | Permalink

      This really is beginning to track like Gergis et al. Remember when Karoly took over for Gergis and became the voice of their writing group?

      From a distance and regardless of what is going on behind the scenes, if there are professional reputations on the line here, it’s likely they are Clark’s and Mix’s reputations.

      I look at Shakun in the interview with Revkin waxing enthusiastically about the dawn of the Anthropocene and the impression I get is that he and Marcott are victims of indoctrination. Years and years of it.

      • pottereaton
        Posted Mar 16, 2013 at 10:55 PM | Permalink

        My post was a reply to Skiphil at 8:55pm.

      • Robert
        Posted Mar 17, 2013 at 6:39 AM | Permalink

        I think the similarities with Gergis et al. are fairly superficial. So far nothing has been shown (IMO) which would necessitate a correction/retraction.

  50. Geoff Sherrington
    Posted Mar 16, 2013 at 9:33 PM | Permalink

    There is some circularity in the alkenone method. (An alkenone is a hardy class of ketone). In summary, it relies upon the ratio of lengths of alkenone molecules. The ratio is calibrated against other indices such as the surface sea temperature derived from oxygen isotopes in foraminifera and similar. The oxygen isotopes are related in turn by quite loose equations that more or less say that evaporation sites leave heavier isotopes behind and vice versa for precipitation. However, the distance relation between evaporation and precipitation sites and the likelihood of intermediate mixing and contamination and repeated processes is a great unknown. The mere fact that some people make equations for calibrating temperature with oxygen isotope ratios does not eradicate the possibility of wide variation. This has been known since the 1980s.

    Click to access CHAP_39.PDF

    Quote: To determine environmental conditions during sapropel formation
    requires reconstruction of hydrographic parameters: ambient
    surface-water paleotemperature and paleosalinity before, during, and
    after sapropel formation. As demonstrated by Rostek et al. (1993),
    sea-surface temperature (SST) and paleosalinity can be estimated by
    using the combined planktonic foraminiferal oxygen isotope and alkenone
    Uk’ 37 signals. Reliable application of this strategy, however, requires
    knowledge about growth season and depth habitats for both
    planktonic foraminifers used for isotope analysis and of phytoplankton
    species used for alkenone measurements. End quote.

    Are we on a bridge too far? Yes?

    • k scott denison
      Posted Mar 16, 2013 at 9:47 PM | Permalink

      But, but, but, but, I am POSITIVE the way I reconstruct them must be correct!!!! /sarc

      • k scott denison
        Posted Mar 16, 2013 at 10:26 PM | Permalink

        Sorry, should read: “but I’m positive the way I reconstructed them THIS TIME must be correct”

    • Theo Goodwin
      Posted Mar 16, 2013 at 11:07 PM | Permalink

      Thanks so much for this post. The information warms the heart of this lover of empirical science. I guess none of this information will make it into the analysis of Marcott’s paper.

      I wish that McIntyre had a twin who would attend to the empirical matters as well as McIntyre attends to the statistical matters. Of course that twin would not have so much to write about as the Warmists carefully avoid all matters empirical.

    • Pat Frank
      Posted Mar 16, 2013 at 11:33 PM | Permalink

      I assessed the accuracy of the dO-18 proxy here.

      Under the best laboratory controls, the method is not more accurate than (+/-)0.6 C. For paleo-reconstructions, the calibration curves for G. bulloides and O. universa carbonate vary across 4 C.

      There’s no way any of them can reconstruct paleo-temperatures with anything near the resolution needed for comparison with the modern trend.

      • Theo Goodwin
        Posted Mar 16, 2013 at 11:50 PM | Permalink

        Thanks. I suspected as much.

    • Posted Mar 17, 2013 at 2:52 AM | Permalink

      Geoff – you have misinterpreted Doose et al.

      Alkenones are either calibrated against a set of core-top samples or against laboratory cultures grown at different temperatures. Doose et al. use the Prahl and Wakeham (1987) equation – this is explicit in their methods section.

      Alkenones are not calibrated against d18O.

      The quote you give is emphasising the importance of using multiple proxies that are sensitive to different aspects to best understand the oceanography of a site.

      • tty
        Posted Mar 17, 2013 at 8:18 AM | Permalink

        Calibrating alkenone against coretop samples assumes that the coretop reflects current temperatures (which is quite reasonable if sedimentation rate is not too low and there isn’t any loss of loose sediment when recovering the core).

        However it would seem that this assumption also eliminates any need for “recalibrating” the coretop.

      • Pat Frank
        Posted Mar 17, 2013 at 12:46 PM | Permalink

        Prahl and Wakeham (1987) reported an accuracy of (+/-)0.5 C for their calibration equation, for E. huxleyi algal cultures grown under laboratory conditions. This lower limit of uncertainty should have been propagated into the Marcott construction. Wild-type conditions will produce a larger scatter of points and lower accuracy.

        Further, though, Prahl and Wakeham also noted an inconsistency between their calibration, and prior work. They go on to say that, “The cause for this apparent discrepancy is uncertain and warrants further laboratory investigation. Systematic study of other strains of E. huxleyi will reveal the extent to which [derived temperature depends on the specific algal clone used for study]”

        So, the (+/-)0.5 C is really an upper limit of accuracy, because unknown organismal influences other than temperature may affect the alkenone distribution.

        I haven’t evaluated the field generally, but if Prahl and Wakeham are typical, alkenone proxy temperatures, like d-O18 temperatures, do not have the resolution needed to evaluate the modern trend.

      • Geoff Sherrington
        Posted Mar 18, 2013 at 12:04 PM | Permalink

        Richard Telford, Thank you, I’ll dig deeper. In either the early data or the late data there has to be a way to calibrate against temperature. I was of the opinion, which might be wrong, that oxygen isotopes were used in the early data.
        Life here is quite complicated by illness just now, so if you could do a couple of one-liners on whether or not oxygen isotopes enter the calibration of alkenone methods at any stage, I will be grateful. It’s poor of me if I have posted before being more thorough.

        Pat Frank, thank you for the comments. Do you know of a text that is used to teach how to measure, formalise, calculate and express both precision and accuracy in the current context? There is an Australian blog that would benefit from exposing some university people to it. It’s run by a number of Universities plus bodies like CSIRO and BoM, but they will not allow me to contribute lead articles because they say I have no current affiliation with a University. I’m challenging the use of public funds to censor writers, but I think they have used some smarts to work around this. The blog is “The Conversation”.

  51. mt
    Posted Mar 16, 2013 at 9:55 PM | Permalink

    There’s another interesting dating issue. Shakun12 is co-authored by Marcott and uses a number of the same proxies. Redating was also done in that paper; this time both Marine04 and Marine09 were compared, with 04 being chosen (discussion in the supplemental). The spreadsheet for Shakun12 lists both Marine04 and 09 ages, and the Marine09 ages differ from Marcott13 for the same proxies, usually in the earlier years. As an example, the ages for the first proxy in Marcott, GeoB5844-2:

    Publ.    Shakun  Marcott
    813.2    1134    841
    1110.4   1361    1143
    1407.6   1587    1446
    1704.8   1815    1748
    2002     2046    2049
    2354.4   2392    2393
    2706.8   2736    2736

    and for SO136-GC11:
    Publ.  Shakun  Marcott
    50     509     118.6200339122
    290    721     364.4322840683
    540    933     611.9149649022
    790    1144    860.2330132163
    1050   1356    1109.154025937
    1320   1568    1358.5927153155
    1600   1779    1608.5344708245
    1900   1991    1859.0253369293
    2210   2202    2110.2064526283

    Same authors, same data, same Calib 6.0.1 with INTCAL09, different ages.

  52. Rick
    Posted Mar 16, 2013 at 10:14 PM | Permalink

    “We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.”
    Or “We are attempting to determine where shite will land after its contact with fan.”

  53. MrPete
    Posted Mar 17, 2013 at 1:12 AM | Permalink

    It’s always been painful to visit natural-variability denier sites such as Joe Romm’s site. But on a whim I followed a link to his discussion of the Marcott paper.

    What fun! Take a look at what happens when an over-the-top alarmist extrapolates from already-ridiculous analysis.

    From uptick to rocket ship...

    • Posted Mar 17, 2013 at 2:56 AM | Permalink

      Is that red for dangerous or red for faced?

    • Espen
      Posted Mar 17, 2013 at 5:35 AM | Permalink

      The Daily Kos reposted that chart as “the scythe”, and Mann then enthusiastically reposted it on his Facebook page.

      • Posted Mar 17, 2013 at 6:07 AM | Permalink

        I didn’t think things could get better but they just did. Thanks.

      • Robert
        Posted Mar 17, 2013 at 7:24 AM | Permalink

        Oh dear. Did Prof Mann point out that:
        (a) the blade is nonsense (as admitted by the authors themselves)?
        (b) a growth of temperatures such as we saw in the last ~100 years would not be visible owing to proxy resolution, and that therefore it tells us little as to whether or not we’re experiencing unprecedented conditions?
        (c) the temperature errors in this study are 50% of his? Perhaps he could even explain why. This is baffling me (a poor physicist).

      • Espen
        Posted Mar 18, 2013 at 8:26 AM | Permalink

        Lurking at Mann’s facebook page is fun in a disturbing way these days. Yesterday he posted another frankenchart – a version of “the scythe” created by Bart Verheggen: http://klimaatverandering.files.wordpress.com/2013/03/shakun_marcott_hadcrut4_a1b.png – and today he posted a link to a Marcott-praising piece by “bad astronomer” Phil Plait centering on the “faster than ever before” nonsense.

        • Posted Mar 18, 2013 at 10:42 AM | Permalink

          Hatred makes a man mad. And I can’t interpret Mann’s attitude to McIntyre any other way.

    • Jeff Condon
      Posted Mar 17, 2013 at 8:39 AM | Permalink

      Holy! I thought you were joking!

  54. TerryS
    Posted Mar 17, 2013 at 3:39 AM | Permalink

    Here is another paper by Shakun that seems to use similar methods and proxies. Here is an extract:

    Age control. All radiocarbon dates were recalibrated using Calib 6.0.1 with the IntCal04 calibration and the reservoir corrections suggested in the original publications.

    The data is available here

  55. Lance Wallace
    Posted Mar 17, 2013 at 3:41 AM | Permalink

    Using just the anomaly from the means of each proxy, one can compare the increase over the century from 1850 to 1950 using either the published or the Marcott (Marine09) ages for the NHX. The slope is about zero for the published ages, and almost a degree per century for the Marcott ages.

    published age:
    http://tinypic.com/r/29gflw4/6

    Marcott age:
    http://tinypic.com/r/2r3lg5f/6

    The Excel file is available on Dropbox:

    https://dl.dropbox.com/u/75831381/Marcott%20temps%20including%20METADATA.xlsx
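
    For anyone wanting to reproduce the slopes, the calculation is just an OLS fit over the window (the anomaly values below are made-up stand-ins for the NHX stack; the real numbers are in the spreadsheet above):

    stack <- data.frame(yearAD = seq(1850, 1950, by = 20),
                        anom   = c(-0.30, -0.25, -0.10, 0.15, 0.35, 0.45))
    fit <- lm(anom ~ yearAD, data = stack)
    unname(coef(fit)["yearAD"]) * 100  # about 0.8 deg/century for these toy values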

  56. Otter
    Posted Mar 17, 2013 at 3:51 AM | Permalink

    One ‘trafamdore’ over at WUWT claims your points are easily refuted, Steve. Yet I see not one single posting from him here. He must be Really good – he doesn’t have to say Anything to refute you!

    snip

    • Robert
      Posted Mar 17, 2013 at 12:18 PM | Permalink

      Otter

      Trafamdore is right IMO about the significance of the matters under discussion here.

      Wrong about the hockey stick though.

  57. TerryS
    Posted Mar 17, 2013 at 4:30 AM | Permalink

    No links, because my comments with links seem to disappear, so I’ll try adding them in a reply.

    An article was published in Nature in April 2012 called “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation”. The authors were Shakun, Clark, Marcott, Mix and others.

    This used Calib 6.0.1 with IntCal04 to redate the proxies, but the supplementary information also contains re-dating information with IntCal09.

    I’ve only looked at MD95-2043, but the first 4 dates look to have been stretched. The table below shows the depth, the original dating, the Shakun dating from the Nature article and the Marcott dating from the Science article:

    Depth  Original  Nature  Science
    0      1008      NaN     0
    2      1082      1222    222.54
    4      1156      1275    442.74
    10     1379      1435    1100.68
    14     1527      1536    1535.39

    The rest all appear to agree within a year or two.

    • TerryS
      Posted Mar 17, 2013 at 4:32 AM | Permalink

      Paper here: http://sciences.blogs.liberation.fr/files/shakun12naturesi.pdf

      Supplementary info here:
      http://www.nature.com/nature/journal/v484/n7392/extref/nature10915-s2.xls

    • TerryS
      Posted Mar 17, 2013 at 10:03 AM | Permalink

      Here is a plot of the differences between Shakun 2012 and Marcott 2013 for MD95-2043.
      Both papers calculated the dates using Calib 6.0.1 with IntCal09, so there should be little to no difference.

      Shakun 2012 did not provide a data point for 0 cm, so I’ve used the original value of 1008 BP, but I think it should really be around 1100-1200 BP.

      The X axis is the depth in centimetres and the Y axis is the difference in years between the two papers.

      • TerryS
        Posted Mar 17, 2013 at 10:29 AM | Permalink

        • Ben
          Posted Mar 17, 2013 at 10:30 PM | Permalink

          That is an interesting visual. Thanks TerryS

  58. Greg
    Posted Mar 17, 2013 at 4:57 AM | Permalink

    Outstanding work as ever Steve. Once again your tenacity pays off.

    Good to see you extending attribution to all authors. No reason why Marcott should get all the “credit” for this work. Indeed, as mt points out above, the re-dating trick was also used in Shakun et al 2012, an attempt to reverse CO2-temperature causality. That paper’s somewhat longer author list includes the same authors as this paper.

    I don’t wish to detract from the rigour and seriousness of your analysis but I’m again tempted to note the appropriateness of some of the authors’ names: Shakun-Mix re-dating.

  59. Robert
    Posted Mar 17, 2013 at 5:00 AM | Permalink

    Looking at the main plot, something troubles me. The main plot is here, btw.

    Why are Marcott’s errors half the size of Mann’s? This strikes me as being not a little daft. Mann wasn’t exactly noted for his conservative treatment of systematic uncertainties. What is the trick (used here not as a snarky term) that Marcott et al. used such that they are able to reconstruct the temperature so much better than a similar analysis? Or have they just neglected a lot of errors? Are the errors being compared like for like, e.g. 1-sigma to 1-sigma, etc.?

    Also, is there a systematic uncertainty in this work at all? A systematic error is defined here as the uncertainty that would be dominant should nature have granted the authors enough samples such that the random error is negligible. Is the systematic error something sensible, or does it turn out to be absurd, e.g. 0.1 degrees uncertainty for 10000 years ago, which would imply that the methods and assumptions are incredibly (and implausibly) reliable?

    I know I bang on about the error, but the importance of getting the error right is the first thing I teach my undergraduate students.

    • Pat Frank
      Posted Mar 17, 2013 at 12:59 PM | Permalink

      You’re exactly right, Robert. I’ve now evaluated climate model temperature projections, the surface air temperature record, and proxy temperature reconstructions.

      In each and every case, the scientists involved have ignored systematic error. These are theory-bias, sensor measurement error, and laboratory error plus biological disequilibrium, respectively.

      When the respective error is propagated into the results, not one of the methods retains the resolution necessary to make any sense of recent climate.

  60. Posted Mar 17, 2013 at 5:44 AM | Permalink

    In my primitive Marcott emulation, I had reached a stage where switching from published dates to Marcott dates introduced a very large spike. I have found a reason. It’s a rather trivial one. The sheet of proxy 65 has some junk many lines down from the data block. My R program read these as data, with spiky effects which I explain in the post. It’s due to the linear interpolation. With this fixed, a modest spike remains. I think I have found some reasons for this, but am still checking.

    A more careful (robust?) program would have avoided this problem, but it’s just possible that it has affected others.
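
    One defensive fix, as a sketch (assuming each sheet’s data block is two numeric columns; this is an illustration, not the code as written):

    # Keep only the contiguous numeric block at the top of the sheet,
    # ignoring any junk further down.
    read.block <- function(df) {
      v <- suppressWarnings(as.numeric(as.character(df[[1]])))
      first.bad <- which(!is.finite(v))[1]
      if (is.na(first.bad)) df else df[seq_len(first.bad - 1), ]
    }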

    Steve: I had noticed the junk sections of several proxies down the page and had excluded that data in my calculations.

    • RomanM
      Posted Mar 17, 2013 at 9:29 AM | Permalink

      While we are fixing obvious errors in the data provided by Marcott et al, we should remove the two zeroes from proxy 62, Stott’s MD98-2176. These values create a change of about 0.4 degrees in the reconstruction at around 9000 BC.

      All of the other temperatures in the proxy set are of the order of 28 to 30 C. The proxy plot in the thesis does not display such precipitous declines in temperature.
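
      Something along these lines (toy values below; the real series is in the archive):

      # Drop temperatures that are physically implausible for this core.
      md98.2176 <- data.frame(ageBP = c(10880, 10900, 10920, 10940),
                              temp  = c(29.1, 0, 0, 28.8))  # toy stand-in
      md98.2176 <- subset(md98.2176, temp > 20)  # the two zeroes drop out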

    • Lance Wallace
      Posted Mar 17, 2013 at 10:11 AM | Permalink

      I also noticed the “junk” data in several proxies and did not include it in my Excel file.

    • NZ Willy
      Posted Mar 17, 2013 at 2:10 PM | Permalink

      Don’t forget the junk data. It may be needed — what are the chances it was included in Marcott’s processing?

  61. Peter Miller
    Posted Mar 17, 2013 at 6:50 AM | Permalink

    Being cynical about almost anything to do with official climate science, I would suggest something along the following lines may have happened:

    Imagine a small room somewhere in the academic world, probably early last year, where Dr Marcott is sitting in front of a small panel of ‘eminent’ climate scientists.

    The conversation eventually comes around to the subject of economic reality in the world of climate science.

    “Well, Dr Marcott, your PhD thesis is excellent work, but if you want to have a career in climate science, you must realise you have to come to the right conclusions, something which was clearly not achieved in your thesis.”

    “Oh,” replies Dr Marcott, contemplating the bleak prospect of having to find a real job in the real world, “I understand, but what do I have to do?”

    “Well, we suggest you publish a new research paper along the lines of your thesis, but this time coming to the right conclusions. So no big deal, just an update of some of the graphics, casting a fresh eye over the data, clean up some of the wording, nothing serious. And to make things easier for you, we can suggest some co-authors – people who are known to be sound on the subject of climate science – they will provide you with the statistical methodology and whatever other ‘proof’ you need to reach the right conclusions. These people are masters in the interpretation of raw data and can be relied on to provide you with what you need to become one of us. In addition, in order to demonstrate our sincerity in this, we shall arrange publication of this paper in a prestigious science journal – and don’t worry about having any difficulties in the peer review process, they will all be supporters of the Cause.”

    “The Cause?” mutters a baffled Marcott.

    “The Cause of all us climate scientists.”

    “OK, but how can I justify coming to a totally different conclusion?”

    “Trust us, nobody will ever know. And in the unlikely event anyone does, the re-interpretations of the data sets used in your new paper will be so complicated no one, not even Steve McIntyre, will ever be able to figure them out. In an absolute worst case scenario, you can talk about always being concerned about the robustness of data, so if there is something wrong it’s not your fault, it’s the fault of the data.”

    It is not certain at this point whether or not Dr Marcott fell to his bended knees, realising his financial future was suddenly now secure as he clasped his hands together and gratefully bleated: “Thank you, oh thank you so very, very much.”

    One ’eminent’ climate scientist rises to his feet, extending his hand: “And as we are now all agreed, I warmly welcome you as the newest member of The Team.”

  62. Owen Hughes
    Posted Mar 17, 2013 at 8:44 AM | Permalink

    Steve McIntyre: thank you for all the hard, timely, clearly-explained work you’ve done here and, of course, on the original Mann “hockey stick”. I have learned a great deal and I admire the civility and patient tone that, somehow, you are able to maintain in the face of such ineptitude and outright imposition. As for your wit: the “Dating Service” is hilarious. I would suggest that somebody might also find a way to work up a good skit, along the lines in Rocky Horror Picture Show, with all of the paper authors singing and dancing to the “Time Warp.”

  63. Pamela Gray
    Posted Mar 17, 2013 at 9:16 AM | Permalink

    Now fellas. You must have heard of the spin ratio statistic. The believability of the statistical analysis is a function of the word count of the preceding paragraph explaining all the adjustments made to the data prior to tests of significance.

  64. Craig Loehle
    Posted Mar 17, 2013 at 10:02 AM | Permalink

    Several people have noted that known perturbations of climate over the past 10000 yrs are missing from the final recon. Throwing a bunch of bad data together would do that, as Willis suggested, but the Monte Carlo perturbation of dates (and maybe the redating) would also effectively do a time-smoothing (a move to lower frequency), thus eliminating details. This would flatten out the curve and further enhance the perception that recent instrumental warming is even more alarming.
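
    A minimal sketch of the date-perturbation effect (a toy example of my own, not Marcott’s code): jitter the age model of a known sinusoidal signal many times, re-interpolate each realization onto the nominal time grid, and average. The averaging wipes out most of the variance at periods comparable to the dating error.

      import numpy as np

      # Toy example, not Marcott's code: a 1000-year cycle sampled every 20 years.
      t = np.arange(0, 10000, 20.0)           # nominal ages (years BP)
      signal = np.sin(2 * np.pi * t / 1000)   # "true" climate signal

      rng = np.random.default_rng(0)
      sigma_date = 300.0                      # assumed 1-sigma dating error (years)
      n_real = 1000

      stack = np.zeros_like(t)
      for _ in range(n_real):
          jitter = rng.normal(0, sigma_date, size=t.size)
          order = np.argsort(t + jitter)      # keep perturbed ages monotonic for interp
          stack += np.interp(t, (t + jitter)[order], signal[order])
      stack /= n_real

      print("variance before:", round(float(signal.var()), 3))  # ~0.5
      print("variance after: ", round(float(stack.var()), 3))   # a small fraction of that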

    • Manfred
      Posted Mar 17, 2013 at 4:28 PM | Permalink

      Yes, such reconstructions are always flattened due to

      1. low frequency data
      2. errors in dating
      3. non-temperature influences

      How to reduce at least the unavoidable errors in dating?
      -> Perhaps with perturbation under the constraint of maximizing variability?

      How to reduce errors due to non-temperature influences?
      -> Focus on high-quality data (speleothems?) and perhaps well-correlated additional proxies?

      An alternative way may be to estimate the influence of 1/2/3 on variability and increase the temperature range accordingly.

      Otherwise an attachment of an instrumental record at one end of a flattened reconstruction is totally misleading.

  65. Kenneth Fritsch
    Posted Mar 17, 2013 at 12:40 PM | Permalink

    My post may be considered off topic for the subject at hand, but I cannot keep myself from giving my perspective on the Marcott paper based on my general view of reconstructions and the analyses of those reconstructions.

    Firstly, I think that when SteveM does these analyses of papers it is a real learning experience for people on all sides of these issues. In this case I hope it draws meaningful replies from the authors and, regardless of the outcome, we will all have learned. I also think that the term “audit” does not do these analyses justice, as the audits I have been connected with were not that imaginative in finding problems but were more or less scripted, and thus limited by that script. Finding corrective action might have required more imagination, but sometimes that action was merely tuned to the scripted audit.

    The problem I do have with these analyses that focus in detail on a given aspect of a reconstruction is that sometimes that approach, as necessary as it is, gives the impression that the remainder of the rationale and methods used in the reconstruction have been accepted. In turn this allows piecemeal replies implying that, given the existence of only a single problem, the conclusions continue to hold with perhaps a little less certainty.

    I have finished reading through the Marcott paper and SI for the first time, and from that reading I get the impression that this reconstruction was meant primarily to address temperature trends on a centennial basis, as the following excerpt explains:

    “Power spectra of the resulting synthetic proxy stacks are red, as expected, indicating that signal amplitude reduction increases with frequency. Dividing the input white noise power spectrum by the output synthetic proxy stack spectrum yields a gain function that shows the fraction of variance preserved by frequency (Fig. S17a). The gain function is near 1 above ~2000-year periods, suggesting that multi-millennial variability in the Holocene stack may be almost fully recorded. Below ~300-year periods, in contrast, the gain is near-zero, implying proxy record uncertainties completely remove centennial variability in the stack. Between these two periods, the gain function exhibits a steady ramp and crosses 0.5 at a period of ~1000 years.”
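
    To make the quoted diagnostic concrete, here is a minimal sketch (my own construction, not the authors’ code) of estimating such a gain function: feed white noise through a smoothing step that stands in for the perturbation-and-stacking procedure, and take the ratio of output to input power by frequency.

      import numpy as np

      # Sketch of a gain-function diagnostic (my construction, not the authors' code).
      # A ~500-year moving average stands in for the perturbation-and-stacking pipeline.
      rng = np.random.default_rng(1)
      dt = 20.0                                # years per step
      n = 512
      window = int(500 / dt)
      kernel = np.ones(window) / window

      p_in = np.zeros(n // 2 + 1)
      p_out = np.zeros(n // 2 + 1)
      for _ in range(200):                     # average spectra over many realizations
          white = rng.normal(size=n)
          stacked = np.convolve(white, kernel, mode="same")
          p_in += np.abs(np.fft.rfft(white)) ** 2
          p_out += np.abs(np.fft.rfft(stacked)) ** 2

      gain = p_out / p_in                      # fraction of input variance preserved
      freqs = np.fft.rfftfreq(n, d=dt)         # cycles per year
      for period in (5000, 1000, 200):         # near 1 at long periods, near 0 at short
          i = np.argmin(np.abs(freqs - 1.0 / period))
          print(f"~{period}-yr period: gain = {gain[i]:.2f}")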

    Taken together with the table in the SI showing the temporal proxy resolution – none finer than 20 years, with an eyeball average around 100 years – I do not see how advocates, including scientist/advocates, can really say anything about the comparison to the 40 years of the modern warming period. While the authors have not tacked the instrumental record onto the end of the reconstruction, as a number of reconstruction authors have done in the past (with the implicit meaning that that record and the reconstruction proxy responses can be taken as equally valid), they have evidently somehow been able to show a spike upward at the end, even while showing very large uncertainty intervals and telling us in the SI, at least implicitly, that that spike is not to be taken seriously. Note that the uncertainty limits shaded in blue in Marcott are for +/- 1 sigma, not +/- 2 sigma, and with that last upward spike rather difficult to read from the graph, it is probably better estimated by looking at the -1 sigma.

    Finally, a good hard look at the individual proxies shows, as is so often the case with these published reconstructions, little coherence between the proxy responses over time, even allowing that the proxies are located around the globe and that trends can differ by location. Unfortunately, averaging the responses together in the hope that the noise in those responses cancels out does not address the issue: if the proxy responses are influenced only a small amount by temperature and a large amount by other effects, there is no reason to expect those other effects to be sufficiently similar in kind and magnitude to cancel out and leave a meaningful temperature response. This area of investigation has, in my view, been totally neglected by the climate scientists, and it would not be surprising to hear, as I paraphrase from a second-hand comment by a Marcott author: “We take 80 proxies and average them together.”
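
    On that last point, a toy illustration (again my own construction, with hypothetical signals): averaging many proxies shrinks the independent noise by roughly 1/sqrt(N), but any non-temperature influence the proxies share passes straight through the average.

      import numpy as np

      # Toy illustration: averaging cancels independent noise but not a shared
      # non-temperature contamination (hypothetical signals, not real proxy data).
      rng = np.random.default_rng(7)
      n_proxies, n_times = 80, 500
      temperature = np.sin(np.linspace(0, 4 * np.pi, n_times))  # "true" signal
      drift = 0.5 * np.linspace(0, 1, n_times)                  # shared contamination
      noise = rng.normal(0, 1.0, size=(n_proxies, n_times))     # independent per proxy

      stack = (temperature + drift + noise).mean(axis=0)

      # Independent noise shrinks to ~1/sqrt(80) = 0.11; the shared drift survives.
      print("rms error vs temperature:        ",
            round(float(np.sqrt(np.mean((stack - temperature) ** 2))), 3))
      print("rms error vs temperature + drift:",
            round(float(np.sqrt(np.mean((stack - temperature - drift) ** 2))), 3))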

  66. OldWeirdHarold
    Posted Mar 17, 2013 at 1:50 PM | Permalink

    Maybe I’m missing something obvious, but the alkenones come from marine algae, and are thus proxies for sea surface temperature rather than air temperature, if I understand this correctly. Doesn’t this make a 1.9C rise in 20 years all the more … not robust?

  67. Mike in TO
    Posted Mar 17, 2013 at 1:51 PM | Permalink

    Dumb question from a lurker – the extreme drop-off shown using the original dates also looks somewhat “alarming” to the casual observer, as if something went haywire in the modern era. Does it imply anything of significance?

    • superbowlpatriot
      Posted Mar 18, 2013 at 9:33 AM | Permalink

      It could suggest that the proxies in question have been picking up some recent (anthropogenic?) changes unrelated to temperature. This might not be the case, but it seems more likely than the possibility that these proxies have recorded a rapid temperature change.

    • superbowlpatriot
      Posted Mar 18, 2013 at 9:50 AM | Permalink

      Actually it may be an artifact of the dating and the gaps in the different series. I’d have to take a closer look but this question might have been answered.

    • Posted Mar 18, 2013 at 10:45 AM | Permalink

      The answer has to be no significance, for reasons given by Paul Dennis on Bishop Hill earlier today.

  68. Sven
    Posted Mar 17, 2013 at 2:20 PM | Permalink

    I’m sure that The Team will now gang up on the editor of “Science”, as we saw done to another editor in the Climategate e-mails – and, as Gavin has so eloquently explained, it was just because they don’t like bad science to be published. On second thought … I’m not that sure…

  69. Willis Eschenbach
    Posted Mar 17, 2013 at 4:28 PM | Permalink

    Here’s an overall look at the dating situation. First the whole dataset, then the last 1800 years, then the last 500 years.

    Lots of rearrangement going on …

    w.

    • Bob Koss
      Posted Mar 17, 2013 at 4:52 PM | Permalink

      🙂

    • Peter Miller
      Posted Mar 17, 2013 at 5:38 PM | Permalink

      Even by Climate Science’s typical standards of data manipulation to ‘prove’ a theory, this is more than a bit rich.

      I find it incredible that this abuse of accepted scientific methodology is so blatant in so much of today’s Climate Science.

      Over the past 500 years, do any of the datasets have the correct chronology in Marcott’s paper?

    • Posted Mar 17, 2013 at 5:56 PM | Permalink

      Dear Willis;
      Am I correct that your graphs show that they are not only linearly offsetting time within a given series, but also warping it nonlinearly?

      Did they really allow the computer to do this under the aegis of Monte Carlo randomization?
      Are these guys desperate or what?
      Yikes!
      RR

    • mrsean2k
      Posted Mar 17, 2013 at 6:26 PM | Permalink

      Great visualisation, thanks.

  70. Uniblogger
    Posted Mar 18, 2013 at 7:26 AM | Permalink

    “If we torture the data long enough, it will confess.”

    ~Ronald Coase, Nobel Prize for Economic Sciences, 1991

  71. A C Osborn
    Posted Mar 18, 2013 at 10:18 AM | Permalink

    Nick Stokes Posted Mar 17, 2013 at 3:07 PM “To show the best history of temperature we can get.”

    In your honest opinion does this study even come close to that aim?

  72. Lars P.
    Posted Mar 18, 2013 at 3:10 PM | Permalink

    Just a stupid question. Has anybody checked, or has the possibility to check, whether this new, inventive proxy re-dating process has ever been used in other work? I remember there was a Shakun et al not so long ago… Maybe there is a consistency somewhere after all. Just asking…

  73. Ryan
    Posted Mar 19, 2013 at 12:58 PM | Permalink

    For what it’s worth, Monte Carlo analysis is commonly used in electronic engineering. The idea is that if you have a circuit with multiple components, each of which has a tolerance associated with it, you want to know that the output of the circuit will likely remain within required limits no matter what combination of component tolerances you have. A “worst-case” analysis can do the same thing, but sometimes it is difficult to work out exactly how a given component will affect the output without doing a lot of math for that specific component. What you do in electronics is build a software simulation of the circuit that allows the tolerances to be considered; the software then does a Monte Carlo analysis – basically it tries the components at different values within their tolerance limits and plots the output. Usually you end up with a whole series of curves showing how the output changes with component values, and hopefully this tells you whether the circuit will stay within its required limits.

    In this case the scientists want to present a single curve showing temperature over time, and this curve is a function of multiple input proxies. Unfortunately there is some tolerance on the dating of those proxies, so it seems they used a Monte Carlo analysis to get an idea of how taking that tolerance into account would impact the end curve.
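
    As a concrete illustration of the engineering usage (hypothetical component values, not from any real design), here is a minimal Monte Carlo tolerance run on a simple voltage divider:

      import numpy as np

      # Monte Carlo tolerance analysis of a voltage divider (hypothetical values).
      # Vout = Vin * R2 / (R1 + R2); both resistors nominally 10k with 5% tolerance.
      rng = np.random.default_rng(42)
      n_trials = 100_000
      vin = 5.0
      r1 = 10e3 * (1 + rng.uniform(-0.05, 0.05, n_trials))
      r2 = 10e3 * (1 + rng.uniform(-0.05, 0.05, n_trials))
      vout = vin * r2 / (r1 + r2)

      print(f"nominal: {vin / 2:.3f} V")
      print(f"min/max: {vout.min():.3f} / {vout.max():.3f} V")
      # Fraction of builds inside a required window, e.g. 2.5 V +/- 3%:
      in_spec = np.mean((vout >= 2.5 * 0.97) & (vout <= 2.5 * 1.03))
      print(f"within spec: {in_spec:.4f}")

    The analogy to the proxies: draw each age within its stated uncertainty, recompute the curve, and look at the spread of outcomes.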

  74. Albrecht Glatzle
    Posted Mar 25, 2013 at 2:03 PM | Permalink

    Dear Steve, I do not doubt that you have once more found irritating data manipulations. However, even as somebody fairly familiar with climate matters, I am unable to follow your essay. Could you present your findings in a more didactic manner? What are alkenones? What do they serve as a proxy for? Where and when were the cores drilled? Where was the original dating published, and by whom? What implications does the re-dating have? Furthermore, I do not understand the table you show, nor the explanations you give.

    I’d very much like to be able to follow your analysis so that I can reproduce the essentials in any dispute on climate change issues….

  75. Tomcat
    Posted Apr 20, 2013 at 3:37 AM | Permalink

    He was really “shakun the dates”.
    – a term for this new, cutting-edge technique in Climate ‘Science’.

36 Trackbacks

  1. […] The Marcott-Shakun Dating Service […]

  2. […] […]

  3. By Jolly hockey sticks … | bobmcgee on Mar 16, 2013 at 9:20 PM

    […] Which makes them useful proxies for temperature. But as well as reflecting temperature a useful proxy must also be accurately dated. If you were to choose your proxies carefully and fiddle with their dates you could get any result you wanted … even a hockey stick. […]

  4. […] Steven McIntyre has now done a full audit of their data, and found it ridden with substantial  fallacies. He mocks the way the authors have changed the dates of observations in order to achieve a ‘hockey stick’ result: https://climateaudit.org/2013/03/16/the-marcott-shakun-dating-service/ […]

  5. […] McIntyre’s latest post is a breathtaking indictment of the paper: […]

  6. […] McIntyre’s latest post is a breathtaking indictment of the paper: […]

  7. By The Climate Change Debate Thread - Page 2250 on Mar 17, 2013 at 3:15 AM

    […] […]

  8. By Een nieuwe valse hockeystick on Mar 17, 2013 at 5:45 AM

    […] a mortal sin, a sort of “hide the decline”, has been discovered: shuffling data. Hockey-stick demolisher Steven McIntyre has discovered that, in order to get that strong spike at the end, they have been shifting the data around. When you […]

  9. […] The Marcott-Shakun Dating Service « Climate Audit […]

  10. […] those who have been following the saga of the Marcott Hockey Stick MkII, Steve McIntyre has followed up his series of posts, which have already raised serious questions about the […]

  11. […] Mac has a very interesting post up.  If you haven’t read it yet, one should do so.  It’s enlightening and […]

  12. By Of Mice and Men - Page 4 - Router Forums on Mar 17, 2013 at 2:39 PM

    […] […]

  13. By hockey break … | pindanpost on Mar 17, 2013 at 7:49 PM

    […] The Marcott-Shakun Dating Service « Climate Audit […]

  14. […] post on his research is here. This chart shows how critical Marcott’s re-dating was to his conclusion that temperatures spiked […]

  15. […] The Marcott-Shakun Dating Service (climateaudit.org) […]

  16. […] post on his research is here. This chart shows how critical Marcott’s re-dating was to his conclusion that temperatures spiked […]

  17. […] Steve McIntyre: […]

  18. […] the meantime, Steve McIntyre has been conducting the due diligence that obviously was not done by Revkin (nor, evidently, by those who “peer-reviewed” the […]

  19. […] The Marcott-Shakun Dating Service Marcott, Shakun, Clark and Mix did not use the published dates for ocean cores, instead substituting their own dates. The validity of Marcott-Shakun re-dating will be discussed below, but first, to show that the re-dating “matters” (TM-climate science), here is a graph showing reconstructions using alkenones (31 of 73 proxies) in Marcott style, comparing the results with published dates (red) to results with Marcott-Shakun dates (black). […]

  20. By Att skarva med proxivärden | The Climate Scam on Mar 19, 2013 at 12:00 AM

    […] (link) here on TCS. The article has also been criticized in other blog posts (link1, link2, link3). Today’s post is only a selection of the criticism already put forward in those blog posts. […]

    […] Steve McIntyre, the nightmare of Mann and company and now of Marcott and company as well, produced the graph at the top of this post by plotting the series used in this paper with both the published dating and the revised dating. You will find it below, while the full explanation is at the link indicated. […]

  22. […] the last decade. This year, he takes aim at the latest nonsense, from Marcott et al. On his blog (Climate Audit), he explains how the timing of  the data was manipulated – in one case, a dataset was […]

  23. […] the last decade. This year, he takes aim at the latest nonsense, from Marcott et al. On his blog (Climate Audit), he explains how the timing of  the data was manipulated – in one case, a dataset was […]

  24. […] on the proxy samples were changed for some strange reason. McIntyre’s post on his research is here. This chart shows how critical Marcott’s re-dating was to his conclusion that temperatures spiked […]

  25. By Dagbladet skremmer | Klimatilsynet on Mar 19, 2013 at 1:46 PM

    […] more here: https://climateaudit.org/2013/03/16/the-marcott-shakun-dating-service/ […]

  26. […] it. The day after publishing Marcott’s nonresponse, Steve published his re-dating comment, with McIntyre 2 worth more than a thousand words. Black is Science with Marcott’s re-dating. Red is Marcott’s […]

  27. By Worse Than Macroeconometrics | askblog on Mar 20, 2013 at 6:17 PM

    […] Steve McIntyre seems to pour cold water on a study allegedly demonstrating global warming. […]

  28. By AGW: The Hockey Stick, Broken Again on Mar 20, 2013 at 7:06 PM

    […] […]

  29. […] https://climateaudit.org/2013/03/16/the-marcott-shakun-dating-service/ […]

  30. […] 2 shows the anomaly data using the modified carbon dating (re-dating). This has been identified by Steve McIntyre and others as the main cause of the up-tick. However I think this is only part of the […]

  31. By 1.10 Orwell vs. Huxley | Radish on Mar 26, 2013 at 2:06 AM

    […] Compare the tone, in a post chosen almost at random, of Romm’s archenemy, Steve McIntyre of Climate Audit […]

  32. […] The Marcott-Shakun Dating Service (climateaudit.org) […]

  33. […] much hilarious material on offer. This cartoon was inspired by Steve McIntyre’s posts on the Marcottian redating of cores. How do they get away with this […]

  34. […] mathematician who maintains what would also be considered a “climate denier” blog Climate Audit, McKitrick pointed out what they believe are flaws in the Marcott et al. […]

    […] How Marcottian Upticks Arise, on 16/03/2013 (“Comment les lames marcottiennes se produisent”); The Marcott-Shakun Dating Service, on 16/03/2013 (“Le service de redatation Marcott-Shakun”); Hiding the Decline: MD01-2421, on […]

  36. […] The Marcott-Shakun Dating Service […]