Marcott Monte Carlo

So far, the discussion of the Marcott et al paper has focused on the manipulation of core dates and their effect on the uptick at the recent end of the reconstruction. Apologists such as “Racehorse” Nick have been treating the earlier portion as a given. The reconstruction shows mean global temperature staying pretty much constant, varying from one twenty-year period to the next by a maximum of 0.02 degrees, for almost 10,000 years before starting to oscillate a bit in the 6th century and then with greater amplitude beginning about 500 years ago. The standard errors of this reconstruction range from a minimum of 0.09 C (can this set of proxies realistically tell us the mean temperature more than 5 millennia back to within 0.18 degrees with 95% confidence?) to a maximum of 0.28 C. So how can they achieve such precision?

Figure: twenty-year reconstruction changes.

The Marcott reconstruction uses a methodology generally known as Monte Carlo. In this application, they supposedly account for the uncertainty of the proxies by perturbing both the temperature indicated by each proxy value and the published time of observation. For each proxy sequence, the perturbed values are then made into a continuous series by “connecting the dots” with straight lines (you don’t suppose that this might smooth each series considerably?) and the results are recorded for the proxy at 20-year intervals. In this way, they create 1000 gridded replications of each of their 73 original proxies. This is followed by recalculating all of the “temperatures” as anomalies over a specific 1000-year period (where do you think you might see a standard error of 0.09?). Each of the 1000 sets of 73 gridded anomalies is then averaged to form 1000 individual “reconstructions”. The latter can be combined in various ways, and from this set the uncertainty estimates are also calculated.
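
To make the recipe concrete, here is a minimal sketch (in Python) of how I understand one such realization to be built. The data structure, the perturbation details and the reference window shown here are placeholders of mine, not Marcott code.

```python
import numpy as np

def one_realization(proxies, grid, ref_window, rng):
    """One Monte Carlo 'reconstruction' as I understand the recipe: perturb each
    proxy's ages and temperatures, connect the dots with straight lines onto a
    common 20-year grid, convert to anomalies over a reference window, and
    average across proxies.  The perturbation details are placeholders."""
    gridded = []
    for p in proxies:
        ages = p["age"] + rng.normal(0.0, p["age_sd"], size=len(p["age"]))
        temps = p["temp"] + rng.normal(0.0, p["temp_sd"], size=len(p["temp"]))
        order = np.argsort(ages)
        # linear interpolation onto the 20-year grid (no extrapolation past the ends)
        series = np.interp(grid, ages[order], temps[order], left=np.nan, right=np.nan)
        # anomalies relative to the chosen 1000-year window
        in_ref = (grid >= ref_window[0]) & (grid <= ref_window[1])
        gridded.append(series - np.nanmean(series[in_ref]))
    # simple average across proxies (the published stacks also weight by area)
    return np.nanmean(np.vstack(gridded), axis=0)

rng = np.random.default_rng(1)
grid = np.arange(0, 11300, 20)   # years BP in 20-year steps
# proxies = [...]                # 73 records, each a dict of age/temp arrays
# stacks = [one_realization(proxies, grid, (4500, 5500), rng) for _ in range(1000)]
```

The point to notice is that everything downstream (the 1000 “reconstructions” and the error bars) inherits whatever spread the perturbation step puts in, which is where the rest of this post is headed.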

The issue I would like to look at is how the temperature randomization is carried out for certain classes of proxies. From the Supplementary Information:

Uncertainty

We consider two sources of uncertainty in the paleoclimate data: proxy-to-temperature calibration (which is generally larger than proxy analytical reproducibility) and age uncertainty. We combined both types of uncertainty while generating 1000 Monte Carlo realizations of each record.

Proxy temperature calibrations were varied in normal distributions defined by their 1σ uncertainty. Added noise was not autocorrelated either temporally or spatially.

a. Mg/Ca from Planktonic Foraminifera – The form of the Mg/Ca-based temperature proxy is either exponential or linear:
Mg/Ca = (B±b)*exp((A±a)*T)
Mg/Ca =(B±b)*T – (A±a)
where T=temperature.
For each Mg/Ca record we applied the calibration that was used by the original authors. The uncertainty was added to the “A” and “B” coefficients (1σ “a” and “b”) following a random draw from a normal distribution.

b. UK’37 from Alkenones – We applied the calibration of Müller et al. (3) and its uncertainties of slope and intercept.
UK’37 = T*(0.033 ± 0.0001) + (0.044 ± 0.016)

These two proxy types (19 Mg/Ca and 31 UK’37 records) account for 68% of the 73 proxies used by Marcott et al. Any missteps in how they are processed would have a very substantial effect on the calculated reconstructions and error bounds. Both use the same type of temperature randomization, so we will examine only the alkenone series in detail.

The methodology for converting proxy values to temperature comes from a (paywalled) paper: P. J. Müller, G. Kirst, G. Ruhland, I. von Storch, A. Rosell-Melé, Calibration of the alkenone paleotemperature index UK’37 based on core-tops from the eastern South Atlantic and the global ocean (60N-60S), Geochimica et Cosmochimica Acta 62, 1757 (1998). Some information on alkenones can be found here.

Müller et al use simple regression to derive a single linear function for “predicting” proxy values from the sea surface temperature:

UK’37 = (0.044 ± 0.016) + (0.033 ± 0.001)* Temp

The first number in each pair of parentheses is the coefficient value, the second is the standard error of that coefficient. You may notice that the standard error for the slope of the line in the Marcott SI is in error (presumably typographical) by a factor of 10. These standard errors have been calculated from the Müller proxy fitting process and are independent of the Alkenone proxies used by Marcott (except possibly by accident if some of the same proxies have also been used by Marcott). The relatively low standard errors (particularly of the slope) are due to the large number of proxies used in deriving the equation.

According to the printed description in the SI, the equation is applied as follows to create a perturbed temperature value:

UK’37 = (0.044 + A) + (0.033 + B)* Pert(Temp)

[Update: It has been pointed out by faustusnotes at Tamino’s Open Mind that certain values that I had mistakenly interpreted as standard errors were instead 95% confidence limits. The changes in the calculations below reflect the fact that the correct standard deviations are approximately half of those amounts: 0.008 and 0.0005.]

where A and B are random normal variates generated from independent normal distributions with standard deviations of 0.008 and 0.0005, respectively (0.016 and 0.001 before the correction noted in the update above).

Inverting the equation to solve for the perturbed temperature gives

Pert(Temp) = (UK’37 – 0.044)/(0.033 + B) – A / (0.033 + B)

If we ignore the effect of B (which in most cases would have a magnitude no greater than .003), we see that the end result is to shift the previously calculated temperature by a randomly generated normal variate with mean 0 and standard deviation equal to 0.008/0.033 = 0.24 (0.016/0.033 = 0.48 with the uncorrected values). In more than 99% of cases this shift will be less than 3 SDs, or about 0.72 degrees (1.5 degrees with the uncorrected values).
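
As a quick numerical check of that arithmetic (a sketch that follows the text in ignoring the small B term; the 0.008 and 0.033 values are those quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.008, 1_000_000)   # intercept perturbation, corrected 1-sigma
shift = -A / 0.033                      # resulting temperature shift, B ignored

print(np.std(shift))                    # ~0.24 deg C, i.e. 0.008/0.033
print(np.mean(np.abs(shift) < 0.72))    # ~0.997: over 99% of shifts under 0.72 deg
```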

So what can be wrong with this? Well, suppose that Müller had used an even larger set of proxies for determining the calibration equation, so large that both of the coefficient standard errors became negligible. In that case, this procedure would produce an amount of temperature shift that would be virtually zero for every proxy value in every Alkenone sequence. If there was no time perturbation, we would end up with 1000 almost identical replications of each of the Alkenone time series. The error bar contribution from the Alkenones would spuriously shrink towards zero as well.

What Marcott et al do not seem to realize is that their perturbation methodology leaves out the most important uncertainty element in the entire process. The regression equation is not an exact predictor of the proxy value; it merely represents the mean value of all proxies at a given temperature. Even if the coefficients were known exactly, the variation of an individual proxy around that mean would still produce uncertainty in its use. The randomization equation that they should be starting with is somewhat different:

UK’37 = (0.044 + A) + (0.033 + B)* Pert(Temp) + E

where E is a random variable, independent of A and B, with standard deviation equal to 0.050, the residual standard deviation of the predicted proxy obtained from the Müller regression.

The perturbed temperature now becomes

Pert(Temp) = (UK’37 – 0.044)/(0.033 + B) – (A + E) / (0.033 + B)

and again ignoring the effect of B, the new result is equivalent to shifting the temperature by a single randomly generated normal variate with mean 0 and standard deviation given by

SD = sqrt( (0.008/0.033)^2 + (0.050/0.033)^2 ) = 1.53
(with the uncorrected coefficient uncertainty: SD = sqrt( (0.016/0.033)^2 + (0.050/0.033)^2 ) = 1.59)

The variability of the perturbation is now 6.24 times as large as that calculated when only the uncertainties in the equation coefficients are taken into account (about three times with the uncorrected values). Because of this, the error bars would increase substantially as well. The same problem occurs for the Mg/Ca proxies, although the magnitude of the increase in variability would be different. In my opinion, this is a possible problem that needs to be addressed by the authors of the paper.
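
Spelling out the comparison (the 0.050 residual standard deviation is the Müller value cited above; everything else comes from the equations in this post):

```python
import math

slope = 0.033
sd_intercept = 0.008   # corrected 1-sigma on the Mueller intercept
sd_residual = 0.050    # residual SD of an individual UK'37 value about the regression line

sd_coeff_only = sd_intercept / slope                               # ~0.24 deg C
sd_with_residual = math.hypot(sd_intercept, sd_residual) / slope   # ~1.53 deg C

print(sd_coeff_only, sd_with_residual, sd_with_residual / sd_coeff_only)
# roughly 0.24, 1.53 and a ratio of about six
```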

The regression plot and the residual plot from Müller give an interesting view of what the relationship looks like.

I would also like someone to tell me if the description for ice cores means what I think it means:

f. Ice core – We conservatively assumed an uncertainty of ±30% of the temperature anomaly (1σ).

If so, …


Tom Curtis Writes

While CA readers may disagree with Tom Curtis, we’ve also noticed that he is straightforward. Recently, in comments responding to my recent post on misrepresentations by Lewandowsky and Cook, Curtis agreed that “Lewandowsky’s new addition to his paper is silly beyond belief”, but argued that “the FOI data does not show Cook to have lied about what he found. He was incorrect in his claims about where the survey was posted; but that is likely to be the result of faulty memory.”

Showing both integrity and personal courage, Curtis has sent me the email published below (also giving me permission to publish the excerpt shown.) While Curtis agreed that Cook’s statement to Chambers could not possibly be true, Curtis re-iterates his belief that Cook is honest, though he is obviously troubled by the incident. Curtis also reports that, as early as last September, he emailed both Lewandowsky (cc Oberauer) and Cook informing them that no link to the Lewandowsky survey had been posted at the SKS blog, only a tweet – a warning inexplicably ignored by Lewandowsky and Oberauer in their revisions to Lewandowsky et al (Psych Science). Continue reading

April Fools’ Day for Marcott et al

Q. Why did realclimate publish the Marcott FAQ on Easter Sunday?
A. Because if they’d waited until Monday, everyone would have thought it was an April Fools’ joke. Continue reading

The Marcott Filibuster

Marcott et al have posted their long-promised FAQ at realclimate here. Without providing any links to or citation of Climate Audit, they now concede:

20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.

Otherwise, their response is pretty much a filibuster, running out the clock on questions that have not actually been asked and that are certainly not at issue for critics. For the questions and issues that I’ve actually raised, for the most part they merely reiterate what they had already said. Nothing worth waiting for. Continue reading

Aging as a State of Mind

Bobbie Hasselbring, editor of Real Food Traveller, has an article on “Aging as a State of Mind”. Her article concludes as follows:

Katherine McIntyre is 89 years old. She’s the oldest person ever to have zip lined at High Life Adventures. She’s a working travel journalist. And she kept me from even wondering whether I’m afraid of heights. And, you know what? I’m not afraid. Because Katherine isn’t afraid and she’s one of my heroes.

Mine as well.

Lewandowsky Doubles Down

Last fall, Geoff Chambers and Barry Woods established beyond a shadow of a doubt that no blog post linking to the Lewandowsky survey had ever been published at the Skeptical Science (SKS) blog. Chambers reasonably suggested at the time that the authors correct the claim in the article to reflect the lack of any link at the SKS blog. I reviewed the then available information on this incident in September 2012 here.

Since then, information obtained through FOI by Simon Turnill has shown that responses by both Lewandowsky and Cook to questions from Chambers and Woods were untrue. Actually, “untrue” does not really do justice to the measure of untruthfulness, as the FOI correspondence shows that the untruthful answers were given deliberately and intentionally. Chambers, in a post entitled Lewandowsky the Liar, minced no words in calling Lewandowsky “a liar, a fool, a charlatan and a fraud.”

Even though the untruthfulness of Lewandowsky and Cook’s stories had been clearly demonstrated by Geoff Chambers in a series of blog articles (e.g. here), in the published version of the Hoax paper, instead of correcting prior untrue claims about SKS, Lewandowsky doubled down, repeating and substantially amplifying the untrue claim. Continue reading

Bent Their Core Tops In

In today’s post, I’m going to show Marcott-Shakun redating in several relevant cases. The problem, as I’ve said on numerous occasions, has nothing to do with the very slight recalibration of radiocarbon dates from CALIB 6.0.1 (essentially negligible in the modern period in discussion here), but with Marcott-Shakun core top redating. Continue reading

Hiding the Decline: MD01-2421

As noted in my previous post, Marcott, Shakun, Clark and Mix disappeared two alkenone cores from the 1940 population, both of which were highly negative. In addition, they made some surprising additions to the 1940 population, including three cores whose coretops were dated by competent specialists 500-1000 years earlier.

While the article says that ages were recalibrated with CALIB6.0.1, the difference between CALIB6.0.1 and previous radiocarbon calibrations is not material to the coretop dating issues being discussed here. Further, Marcott’s thesis used CALIB6.0.1, but had very different coretop dates. Marcott et al stated in their SI that “Core tops are assumed to be 1950 AD unless otherwise indicated in original publication”. This is not the procedure that I’ve observed in the data. Precisely what they’ve done is still unclear, but it is something different.

In today’s post, I’ll examine their proxy #23, an alkenone series of Isono et al 2009. This series is a composite of a piston core (MD01-2421), a gravity core (KR02-06 St. A GC) and a box/multiple core (KR02-06 St A MC1), all taken at the same location. Piston cores are used for deep time, but lose the top portion of the core. Coretops of piston cores can be hundreds or even a few thousand years old. Box cores are shallow cores and the presently preferred technique for recovering up-to-date results.

There are vanishingly few alkenone series where a high-resolution box core accompanies the Holocene data. Indeed, within the entire Marcott corpus of ocean cores, the MD01-2421/KR02-06 splice is unique in being dated nearly to the present. Its published end date was -41 BP (1991 AD). Convincing support for modern dating of the top part of the box core is the presence of a bomb spike:

A sample from 3 cm depth in the MC core showed a bomb spike. The high sedimentation rate (average 31 cm/ka) over the last 7000 years permits analysis at multidecade resolution with an average sample spacing of ~32 years.

Despite this evidence for modern sediments, Marcott et al blanked out the top three measurements as shown below:

Table 1. Excerpt from the Marcott et al spreadsheet for the MD01-2421 splice (proxy #23), with the three most recent values blanked out.

By blanking out the three most recent values of their proxy #23, the most recent remaining dated value became 10.93 BP (1939.07 AD). As a result, the MD01-2421+KR02-06 alkenone series was excluded from the 1940 population. I am unable to locate any documented methodology that would lead to the blanking out of the last three values of this dataset, nor am I presently aware of any rational basis for excluding the three most recent values.

Since this series was strongly negative in the 20th century, its removal (together with the related removal of OCE326-GGC30 and the importation of medieval data) led to the closing uptick.

BTW, in the original publication, Isono et al 2009 reported a decrease in SST from the Holocene to modern times that is much larger than the Marcott NHX estimate of less than 1 deg C:

the SST decreased by ~5 °C to the present (16.7 °C), with high-frequency variations of ~1 °C amplitude (Fig. 2).

A plot of this series is shown below, with the “present” value reported by Isono et al shown as a red dot.

Figure: the MD01-2421/KR02-06 splice, with the “present” value reported by Isono et al shown as a red dot.

The Marcott-Shakun Dating Service

Marcott, Shakun, Clark and Mix did not use the published dates for ocean cores, instead substituting their own dates. The validity of Marcott-Shakun re-dating will be discussed below, but first, to show that the re-dating “matters” (TM-climate science), here is a graph showing reconstructions using alkenones (31 of 73 proxies) in Marcott style, comparing the results with published dates (red) to results with Marcott-Shakun dates (black). As you see, there is a persistent decline in the alkenone reconstruction in the 20th century using published dates, but a 20th century increase using Marcott-Shakun dates. (It is taking all my will power not to make an obvious comment at this point.)
Figure 1. Reconstructions from alkenone proxies in Marcott style. Red: using published dates; black: using Marcott-Shakun dates.

Marcott et al archived an alkenone reconstruction. There are discrepancies between the above emulation and the archived reconstruction, a topic that I’ll return to on another occasion. (I’ve tried diligently to reconcile them, but am thus far unable; perhaps it is due to some misunderstanding on my part of the Marcott methodology, some inconsistency between the data as used and as archived, or something else.) However, I do not believe that this matters for the purposes of using my emulation methodology to illustrate the effect of Marcott-Shakun re-dating.

Alkenone Core Re-dating

The table below summarizes Marcott-Shakun redating for all alkenone cores with either the published end-date or the Marcott end-date less than 50 BP (AD 1900). I’ve also shown the closing temperature of each series (“close”) after the two Marcott re-centering steps (as I understand them).
Table: Marcott-Shakun redating of alkenone cores with published or Marcott end-dates less than 50 BP (AD 1900), with closing temperatures.

The final date of the Marcott reconstruction is AD 1940 (10 BP). Only three cores contributed to the final value of the reconstruction with published dates (“pubend” less than 10): the MD01-2421 splice, OCE326-GGC30 and M35004-4. Two of these cores have very negative values. Marcott et al re-dated both of these cores so that neither contributed to the closing period: the MD01-2421 splice to a fraction of a year prior to 1940, barely missing eligibility; OCE326-GGC30 to 191 years earlier, into the 18th century.

Re-populating the closing date are 5 cores with published coretops earlier than AD 1940 (10 BP), in some cases much earlier. The coretop of MD95-2043, for example, was published as 10th century, but was re-dated by Marcott over 1000 years later to “0 BP”. MD95-2011 and MD95-2015 were redated by 510 and 690 years respectively. All five re-dated cores contributing to the AD 1940 reconstruction had positive values.
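
To see mechanically why the re-dating matters for the closing value, here is a toy version of the eligibility question. The rule assumed here (a core contributes to a 20-year step only if its dated values actually reach that step, since linear interpolation cannot extrapolate past the youngest dated point) is my inference from the data, and only the MD01-2421 end dates are exact values quoted in this post; the others are rough illustrations.

```python
FINAL_STEP_BP = 10   # the AD 1940 step

# (core, published end date in yr BP, Marcott end date in yr BP); only the
# MD01-2421 entries are exact values quoted in this post, the rest are rough
cores = [
    ("MD01-2421 splice", -41, 10.93),   # published to 1991 AD; redated just past 1940
    ("OCE326-GGC30",       0, 191),     # redated ~191 years earlier, into the 18th century
    ("MD95-2043",       1000, 0),       # published as ~10th century; redated to 0 BP
]

def contributes(end_bp, step_bp=FINAL_STEP_BP):
    # a core can supply a value at a step only if its youngest date reaches it
    return end_bp <= step_bp

for name, pub_end, marcott_end in cores:
    print(f"{name:18s} with published dates: {contributes(pub_end)!s:5s} "
          f"with Marcott dates: {contributes(marcott_end)}")
```

Applied to the full table, a rule like this is enough to swap the negative cores out of the 1940 population and the positive ones in, which is the pattern described above.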

In a follow-up post, I’ll examine the validity of Marcott-Shakun redating. If the relevant specialists had been aware of or consulted on the Marcott-Shakun redating, I’m sure that they would have contested it.

Jean S had observed that the Marcott thesis had already described a re-dating of the cores using CALIB 6.0.1 as follows:

All radiocarbon based ages were recalibrated with CALIB 6.0.1 using INTCAL09 and its protocol (Reimer, 2009) for the site-specific locations and materials. Marine reservoir ages were taken from the originally published manuscripts.

The SI to Marcott et al made an essentially identical statement (pdf, 8):

The majority of our age-control points are based on radiocarbon dates. In order to compare the records appropriately, we recalibrated all radiocarbon dates with Calib 6.0.1 using INTCAL09 and its protocol (1) for the site-specific locations and materials. Any reservoir ages used in the ocean datasets followed the original authors’ suggested values, and were held constant unless otherwise stated in the original publication.

However, the re-dating described above is SUBSEQUENT to the Marcott thesis. (I’ve confirmed this by examining plots of individual proxies on pages 200-201 of the thesis. End dates illustrated in the thesis correspond more or less to published end dates and do not reflect the wholesale redating of the Science article.)

I was unable to locate any reference to the wholesale re-dating in the text of Marcott et al 2013. The closest thing to a mention is the following statement in the SI:

Core tops are assumed to be 1950 AD unless otherwise indicated in original publication.

However, something more than this is going on. In some cases, Marcott et al have re-dated core tops indicated as 0 BP in the original publication. (Perhaps with justification, but this is not reported.) In other cases, core tops have been assigned to 0 BP even though different dates have been reported in the original publication. In another important case (of YAD061 significance as I will later discuss), Marcott et al ignored a major dating caveat of the original publication.

Examination of the re-dating of individual cores will give an interesting perspective on the cores themselves – an issue that, in my opinion, ought to have been addressed in technical terms by the authors. More on this in a forthcoming post.

The moral of today’s post for ocean cores. Are you an ocean core that is tired of your current date? Does your current date make you feel too old? Or does it make you feel too young? Try the Marcott-Shakun dating service. Ashley Madison for ocean cores. Confidentiality is guaranteed.

How Marcottian Upticks Arise

I’m working towards a post on the effect of Marcott re-dating, but first I want to document some points on the methodology of Marcott et al 2013 and to dispose of some speculation about the Marcott upticks, which do not arise from any of the mechanisms that have been suggested.

In the graphic below, I’ve plotted Marcott’s NHX reconstruction against an emulation (weighting by latitude and gridcell as described in script) using proxies with published dates rather than Marcott dates. (I am using this version because it illustrates the uptick using Marcott methodology. Marcott re-dating is an important issue that I will return to.) The uptick in the emulation occurs in 2000 rather than 1940; the slight offset makes it discernible for sharp eyes below.

Figure 1. Marcott NHX reconstruction (red) versus emulation with non-redated proxies (yellow). The dotted lines at the left show the Younger Dryas; Marcott began their reported results shortly after the rapid emergence from the Younger Dryas, which is not shown in the graphic.
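
For readers wondering what “weighting by latitude and gridcell” amounts to, here is a generic sketch of that kind of area weighting (proxies averaged within gridcells, occupied cells weighted by the cosine of latitude). It is not the actual script, and the 5-degree cell size is a placeholder.

```python
import numpy as np
from collections import defaultdict

def area_weighted_mean(values, lats, lons, cell_deg=5.0):
    """Average proxy values within each lat/lon gridcell, then weight each
    occupied cell by cos(latitude).  A generic sketch of 'weighting by
    latitude and gridcell', applied at each 20-year step of a realization."""
    cells = defaultdict(list)
    for v, lat, lon in zip(values, lats, lons):
        if np.isnan(v):
            continue
        key = (int(np.floor(lat / cell_deg)), int(np.floor(lon / cell_deg)))
        cells[key].append((v, lat))
    if not cells:
        return np.nan
    cell_means, weights = [], []
    for members in cells.values():
        vs, cell_lats = zip(*members)
        cell_means.append(np.mean(vs))
        weights.append(np.cos(np.radians(np.mean(cell_lats))))
    return np.average(cell_means, weights=weights)
```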

I have consistently discouraged speculation that the Marcott uptick arose from splicing Mannian data or temperature data. I trust that the above demonstration showing a Marcottian uptick merely using proxy data will put an end to such speculation.

The other “explanation” is that the uptick results from high-frequency swings in individual proxies. Marcott’s email to me encouraged such speculation. However, this is NOT what causes the uptick. Below I show the series that contribute to the NHX weighted average before and after the uptick. The proxy values shown below have been re-centered to reflect Marcott recentering: (1) by -0.66 deg C to reflect the re-centering from mid-Holocene to 500-1450AD; (2) by -0.08 deg C to match the observed mean of Marcottian reconstructions in 500-1450 AD.

Readers will observe that there are 6 contributing series in the second-last step, of which 5 are negative, some strongly. Their weighted average is negative (not quite as negative as the penultimate Marcott value in 1920, but you see the effect). Only one series is present in the final step, one that, after the two rescaling steps, is slightly positive. Thus, the uptick. None of the contributing series have sharp high-frequency swings: their changes are negligible. Ironically, the one continuing series (Lake 850) actually goes down a little in the period of the uptick.

Excerpt: NHX contributing series (non-redated) before and after the uptick.

Marcottian upticks arise because of proxy inconsistency: one (or two) proxies have different signs or magnitudes than the larger population, but continue one step longer. This is also the reason why the effect is mitigated in the infilled variation. In principle, downticks can also occur, a matter that will be covered in my next post, which will probably be on the relationship between Marcottian re-dating and upticks.
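
The mechanism is easy to reproduce with made-up numbers: if one member of an otherwise flat panel runs one step longer than the rest, the simple mean jumps at the final step. A toy illustration (the values are invented, not Marcott data):

```python
import numpy as np

steps = 5
panel = np.full((6, steps), np.nan)
panel[:5, :steps - 1] = -0.3   # five negative series end at the second-last step
panel[5, :] = 0.1              # one slightly positive series runs to the end

print(np.nanmean(panel, axis=0))
# [-0.233 -0.233 -0.233 -0.233  0.1]  -> an uptick from composition alone
```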

I have been unable to replicate some of the recent features of the Marcott zonal reconstructions. I think that there may be some differences in some series between the data as archived and as used in their reported calculations, though it may be a difference in methodology. More on this later.