Mannian CPS

As a technique, CPS is little more than averaging and shouldn’t take more than a few lines of code. You standardize the network – one line; take an average – one more line; and then re-scale to match the mean and standard deviation of your target – one more line. (I’m not commenting here on the statistical merits of CPS – merely on what implementing it in a modern programming language entails.) However, Mannian CPS takes pages of code, and, needless to say, everywhere there are snares for the unwary. I guess Mann learned his programming from Hansen and passed the style along to his students.
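
To illustrate, here is a minimal sketch of garden-variety CPS in R – my own illustration, not Mann’s code – where proxies is a matrix of proxy series, cal indexes the calibration rows and target is the instrumental series (all hypothetical names):

std = scale(proxies, center=colMeans(proxies[cal,]), scale=apply(proxies[cal,],2,sd))  # standardize over the calibration period
composite = rowMeans(std, na.rm=TRUE)                                                  # average
recon = (composite - mean(composite[cal]))*sd(target)/sd(composite[cal]) + mean(target)  # re-scale to the target mean and sd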

We had previously spent some time starting down this analysis before we were so rudely interrupted by Santer et al 2008. So let’s pick up where we left off.

We had just finished smoothing the proxies. This had its own set of complications as Mann used a butterworth filter on uncentered data. This led to peculiar end point problems requiring long end point padding to control the effect of the butterworth filter. Emulating this in R was complicated by the odd problem that R filtfilt operated differently than Matlab filtfilt. Roman M identified and resolved this by writing a pretty function filtfiltx, which emulated Matlab filtfilt, enabling an emulation of Mann’s smoothing function in R.
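
For readers following along, here is a rough lowpass-with-padding sketch in R – a generic stand-in for the Mannian smoothing step, not Roman M’s filtfiltx or Mann’s lowpass routine; the filter order, padding length and reflection padding are my own choices, and it assumes the signal package:

library(signal)
lowpass.smooth = function(x, cutfreq=0.1, order=6, npad=100) {
  bf = butter(order, 2*cutfreq, type="low")  # butter() takes the cutoff as a fraction of Nyquist (0.5 cycles/yr for annual data)
  pad = c(rev(x[1:npad]), x, rev(x[(length(x)-npad+1):length(x)]))  # reflect the ends to tame end effects
  sm = filtfilt(bf, pad)                     # zero-phase forward-backward filtering
  sm[(npad+1):(npad+length(x))]              # trim the padding back off
}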

Aside from the fact that I’m used to R, going through the exercise of translating Matlab into R is helpful in walking through each step, always bearing in mind UC’s “Implicit Assumption Advisory.”

On to gridboxcps.m.

After smoothing the proxies using Mannian smoothing, they smooth the “instrumental” data – now the “instrumental” data appears to be related to CRU data, but has been “infilled” back to 1850 in many gridcells that don’t actually have 19th century data. Mannian correlations are often between infilled proxies and infilled data. The instrumental data is smoothed using the same smoothing function as the proxies. The comments in the program say that the data is smoothed with a “lowpass filter with a cutoff frequency of 0.05 — 20 years”, but the parameter cutfreq is set at 0.1 (10 years). It’s a bit of a guess as to which was actually used for the archived runs, so I worked this through using cutfreq=0.1, presuming that the comment is incorrect, as opposed to the wrong version being archived (but either is possible).

Averaging into Gridcells
The next step is to “average all proxies … into 5×5 grid boxes”. This is something that should be pretty routine, but hey… If I were doing the exercise from scratch, I would do a one-time assignment of a gridbox number (I use the “Jones” sequence N to S, Dateline to Dateline) in my proxy “info” database and use this information whenever required. This is easy enough to do.

Mann does something a little different. Instead of assigning each proxy to a gridcell, he makes an empty dummy array of 1732 gridcells and looks for proxies contained within each gridcell, a procedure repeated over and over, a piece of pointless inefficiency in rococo GISTEMP style. However, in addition to the inefficiency, each search uses a “less than or equal to” test on the gridcell boundaries. In the AD800 network that I’ve been looking at, Mann has (incorrectly) used coordinates of 12.5N 50E for the Socotra Island speleothem. On its face, the logic of the program includes this proxy in two gridcells: 12.5N 47.5E and 12.5N 52.5E. This problem was noted a couple of weeks ago by Jean S and UC, who observed that the regts series in two gridcells were identical in an early network (Socotra Island).
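
A toy illustration (mine, not Mann’s Matlab) of why a “less than or equal to” boundary test places a proxy sitting exactly on a gridcell edge into two cells:

in.cell = function(lat, long, clat, clong) abs(lat-clat) <= 2.5 & abs(long-clong) <= 2.5  # 5x5 cell centred at (clat, clong)
in.cell(12.5, 50, 12.5, 47.5)  # TRUE - Socotra at 12.5N 50E falls in the 12.5N 47.5E cell
in.cell(12.5, 50, 12.5, 52.5)  # TRUE - and also in the 12.5N 52.5E cell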

Given that a relatively small number of proxies are exactly on gridcell boundaries, my procedure would be to re-examine the source documents for a more precise location to take the lat-long off dead center. In the case of Socotra Island, the matter is easily resolved as the original reference places the site at 12.5N 54E, removing the problem in this instance.

Because the Mann algorithm permits proxies to be located in more than one gridcell, the emulation of his method (using a concise style) is complicated somewhat. Instead of having one gridbox associated with each proxy, you need a list of 1-4 gridcells associated with each proxy. I did this in what I thought was a fairly pretty way. I have a function jones that locates the Jones gridcell for a given lat-long. I created a new function outerjones that yielded 1-4 gridcells in the Mann style and made a list for all 1209 proxies for repetitive use. The outerjones function perturbed each lat-long location plus-minus 0.0001 deg and found the unique gridcells (4 on the exact corner). This eliminates one repetitive step.

outerjones = function(x) unique(c(outer(x[1] + c(-.0001,.0001), x[2] + c(-.0001,.0001), jones)))
outerlist = apply(details[, c("lat", "long")], 1, outerjones)
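
The jones function itself isn’t shown above; a minimal sketch of the numbering (my own reconstruction for illustration) is below – it numbers the 5×5 cells N to S in rows of 72 starting at the Dateline, and it reproduces the adjacent 1126/1127 cells that turn up for Socotra Island:

jones = function(lat, long) {
  row = floor((90 - lat)/5)    # 0 for the 85-90N band, 35 for the 85-90S band
  col = floor((long + 180)/5)  # 0 for 180-175W, 71 for 175E-180
  row*72 + col + 1             # gridcell number, 1..2592
}
jones(12.5, 47.5)  # 1126
jones(12.5, 52.5)  # 1127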

To simplify the programming, for a given step, I then made a subset of the proxies that (1) started in time and (2) passed the Mannian correlation step, with (3) the collection truncated to start at the beginning of the Mannian step. For the AD800 step, this network goes from 800-1996 and has 18 series.

tempi = !is.na(proxys[period_MBH[k] - tsp(proxys)[1] + 1, ]) & passing; sum(tempi)  # proxies alive at the step start that passed screening
temp = (time(proxy) >= period_MBH[k])                                               # rows from the step start year on
working = proxyscaled[temp, tempi]; dim(working)  # 1197 18
dimnames(working)[[2]]

We then determine the number of gridcells attached to these proxies using the list that we created once (rather than cycling through 1732 gridcells over and over.) In this case, there are 13 such gridcells (including two cells for Socotra Island.) The list of gridcells can be obtained in one line as follows:

idsort= sort(unique(unlist(outerlist[tempi]))) ;K=length(idsort);K #13
# 94 245 328 402 661 780 922 955 1126 1127 1462 1626 1938

Next we set up an array, one column for each of the above gridcells. We name each column by its Jones gridcell number (which is not the same as the Mann gridcell number – Mann goes from S to N). I use Jones numbers because I’m used to them.

grids=array(NA,dim=c(nrow(working),K) )
dimnames(grids)[[2]]=idsort

We then average the standardized proxies within each gridcell, again emulating the Mannian calculation in a more intelligible style. One of the few annoying features of R is that some important functions (apply) only work on matrices; the average function shown here will take row-wise means of a matrix but won’t fail when handed a single vector.

average = function(X) if (is.null(ncol(X))) X else apply(X, 1, mean, na.rm = TRUE)  # row-wise mean of a matrix; a lone vector is returned unchanged

For each column in the gridcell, the attached proxies are picked out from our standard list and averaged (they have already been standardized in Mann style). Using techniques like lists and lapply enables one to accomplish pages of programming in one line and, more importantly, to avoid the endless subscripting that characterizes Mann-Hansen programming.

for (i in 1:K) {
  tempr = !is.na(unlist(lapply(outerlist[tempi], function(A) match(idsort[i], A))))  # this picks out the relevant series in gridcell i
  grids[, i] = average(working[, tempr])
}
grids = ts(grids, start = period_MBH[k])

But aside from style, one feels that clearer programming would have identified the apparent defect by which one proxy is assigned to multiple gridcells.

Calibrate Proxies Against Instrumental Data
In a typical CPS calculation, the proxies are assumed, rightly or wrongly, to be temperature proxies and calibration is done after they are averaged. Mannian CPS calibrates on more than one occasion and again it appears that programming issues lead to some unexpected results.

This step is done in the gridboxcps.m program after the comment % calibrate proxies against instrument data. The calibration is a usual Team method: re-scale and re-center the proxy to the mean and standard deviation of the “target” instrumental data. As the program is written, this requires that target instrumental data exist for the gridbox (or gridboxes) to which the proxy is attached. What happens if there is no target instrumental data in the required box? As written, the program omits the affected proxy. In the AD800 network, two series are affected: Agassiz melt and Central Greenland GISP. As far as I can tell, they do not enter into subsequent CPS calculations. So at the end of these two operations, Mannian CPS has double-counted Socotra Island and removed Agassiz and Central Greenland, leaving us with 11 gridcells. [see below as this doesn’t seem to be right for the AD1000 network examined by UC. UC’s compilation below doesn’t contain Tornetrask and Tiljander, but does contain GISP and Agassiz – I’ll check with him on this, as this seems odd.]

Update – I’ve now compared UC’s graphic here against my emulation. I can identify all 10 series in his graphic, which confirms the double-counting of the Socotra series, but UC’s chart here includes the Agassiz and GISP series (and excludes a number of series that my emulation would have included) – so I’m missing something here. The caption below this figure identifies the series.

Left column, top to bottom: 1 – Socotra Island speleothem (C13-O18 combination); 2 – Punta Laguna sediment combination; 3 – Graybill foxtail (Campito Mt ca533); 4 – Scotland speleothem (Baker); 5 – Graybill bristlecone (nv512). Right column, top to bottom: 1 – duplicate Socotra speleothem; 2 – inverted Dongge Cave speleothem; 3 – Shihua Cave speleothem; 4 – GISP O18; 5 – Agassiz melt.

Here’s how I emulated this calculation (the first line reconciles the Jones-sequence gridcell numbering with the Mann-sequence gridcell numbering):

test = match(idsort, info$jones); temp = !is.na(test); sum(temp); info[temp, ]  # match Jones gridcell numbers to the instrumental info table
mean.obs = info$mean[test]; sd.obs = info$sd[test]                              # target mean and sd for each gridcell
mean.proxy = apply(grids[(1850:1995) - tsp(grids)[[1]] + 1, ], 2, mean)         # proxy mean over the calibration period
sd.proxy = apply(grids[(1850:1995) - tsp(grids)[[1]] + 1, ], 2, sd)             # proxy sd over the calibration period

regts = array(NA, dim = c(nrow(grids), ncol(grids)))
dimnames(regts)[[2]] = dimnames(grids)[[2]]
for (i in (1:ncol(grids))[!is.na(mean.obs)]) {  # skip gridcells with no instrumental target
  regts[, i] = (grids[, i] - mean.proxy[i])*sd.obs[i]/sd.proxy[i] + mean.obs[i]
}
regts = ts(regts[, !is.na(mean.obs)], start = tsp(grids)[1]); dim(regts)

Re-Gridding
The Mann SI states:

In regions within North America and Eurasia with especially dense spatial coverage, grid boxes were first averaged over 15° resolution boxes before inclusion in the spatial average to diminish potential bias due to variations in spatial sampling density (see Fig. 1B).

In the computer code, this is done as follows. All of the gridded series located south of 30N are retained as is. The gridded series north of 30N are re-averaged into 15×15 degree boxes. In the AD800 network, this results in the Tornetrask and (upside-down) Tiljander sediments being averaged. We now have 10 gridboxes, plots shown below (with names of contributing series attached):

[Plots of the 10 gridbox series, with the names of the contributing proxies attached]
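
Here, roughly, is how the 15° re-gridding step can be emulated – a sketch under my own assumptions about the gridcell numbering rather than a transcription of gridboxcps.m; jones.latlong simply inverts the jones sketch above:

jones.latlong = function(id) {  # cell centre for a Jones gridcell number
  row = (id - 1) %/% 72; col = (id - 1) %% 72
  c(lat = 90 - 5*row - 2.5, long = -180 + 5*col + 2.5)
}
regrid15 = function(regts) {
  ll = t(sapply(as.numeric(dimnames(regts)[[2]]), jones.latlong))
  box = ifelse(ll[, "lat"] > 30, paste(floor(ll[, "lat"]/15), floor(ll[, "long"]/15)),
               dimnames(regts)[[2]])  # north of 30N: pool into 15x15 boxes; south of 30N: keep the 5x5 cell
  out = sapply(unique(box), function(b) average(regts[, box == b]))
  ts(out, start = tsp(regts)[1])
}
grids15 = regrid15(regts)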

As in other studies, for analysis of the period back to AD800, the number of contributing proxies is very small. Claiming the use of 1209 proxies is mostly rhetorical. Many of these proxies are already familiar to us.

Tornetrask, Graybill bristlecones, Quelccaya and Cook’s Tasmania reconstruction are staples of all the reconstructions. The Graybill bristlecone chronology is one of only a couple of strong HS-contributors. The other main HS contributor is the Tornetrask-Tiljander combination. We’ve already discussed the Tiljander series – Tiljander reported that the modern portion of this series has been disturbed; Mann inverted the authors’ interpretation and shows the series upside down relative to it. (Obviously the authors don’t think that the data went hugely down in the 20th century; that’s why they said the modern portion shouldn’t be used.) But you can see that Mann’s error in the Tiljander series is not simply incidental.

Most of the other series are speleothems, a class of proxy relatively undiscussed at CA. Shihua Cave was used in Moberg; the South African speleothem was used in Loehle, with this use being criticized by Gavin Schmidt. (Loehle used the grey-scale series, Mann used C13 and O18 – some of Schmidt’s disputes with Loehle would seem to apply to Mann as well.) Dongge Cave and Socotra Island are both speleothems as well. I plan to do specific posts on these proxies.

NH and SH Averages
Finally, Mann calculates NH and SH averages, re-scaling the averages to the target CRU (or HadCRU) hemispheric temperature. As far as I can tell from the code, each reconstruction against a different “target” is a linear transformation of the others, with only the mean and standard deviation of the recon varying somewhat. In each case, the reconstruction is a weighted average of the contributing proxies, with the Mannian weights varying a little due to things like re-gridding, intermediate rescaling etc.
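
As a sketch of that last step (again my own shorthand, not a transcription of the Matlab), average the re-gridded series and re-scale the average to the hemispheric target over the calibration window; cru.nh here is a hypothetical ts holding the CRU NH instrumental series:

nh.recon = ts(average(grids15), start = tsp(grids15)[1])
cal = window(nh.recon, 1850, 1995); obs = window(cru.nh, 1850, 1995)
nh.cps = (nh.recon - mean(cal))*sd(obs)/sd(cal) + mean(obs)  # linear in nh.recon, so different targets yield linear transformations of one another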

Though Mann has provided a commendable amount of information on the reconstruction, I have so far been unable to locate anything other than spliced versions of output (as with MBH98). Below is a comparison of my emulation (spliced) to the spliced NH reconstruction (CRU target). It isn’t quite the same – even when you’ve been spotted the data and code, it’s never easy repeating a Mann calculation – but it’s pretty close: certainly sufficiently close to feel justified in doing sensitivity analyses, like the impact of using the Tiljander data the right way up or the Ababneh bristlecones, which I’ll do another day.

26 Comments

  1. anonymous
    Posted Nov 1, 2008 at 1:14 PM | Permalink

    Even before doing the reconstruction with Ababneh bristlecones or correct Tiljander data, you don’t particularly get the feeling looking at the reconstruction that the world is in a period of unprecedented millennial warming. Looks like it could just be random fluctuations [other than the 800 spike down] – entirely underwhelming.

  2. anonymous
    Posted Nov 1, 2008 at 1:15 PM | Permalink

    And I should add it doesn’t look much like a hockey stick any more.

  3. Posted Nov 1, 2008 at 3:28 PM | Permalink

    The emulation doesn’t really match at year 1200 or so. They go in opposite directions for some reason.

  4. Alan S. Blue
    Posted Nov 1, 2008 at 7:40 PM | Permalink

    “and removed Agassiz and Central Greenland”

    I’d expect a proxy from Central Greenland to have a strong MWP and LIA, does it?

    This would be a “flattening the shaft” issue. Actually more irritating than the “find a blade” issues.

  5. Steve McIntyre
    Posted Nov 1, 2008 at 9:09 PM | Permalink

    Nope, the GISP series is pretty flat and the Agassiz series is one of 2-3 series that are important to the Moberg stick. So it looks like programming misadventure rather than intent.

  6. Peter D. Tillman
    Posted Nov 2, 2008 at 11:33 AM | Permalink

    Dumb question — what does CPS stand for?

    Thanks, Pete Tillman

    • Posted Nov 2, 2008 at 12:26 PM | Permalink

      Re: Peter D. Tillman (#6),

      ‘Composite plus scale’, a calibration method that has no theoretical justification, but results will look ‘good’ even if SNR is zero.

  7. Posted Nov 2, 2008 at 1:41 PM | Permalink

    See http://signals.auditblogs.com/files/2008/11/calibration.jpg ( CVM = variance matching = CPS, ICE and CCE explained in http://signals.auditblogs.com/2007/07/05/multivariate-calibration/ )

    If there’s lots of noise, CCE result will have large variance, ICE small variance (uses a priori information, weights towards calibration mean ), CPS variance always matches, regardless of noise. Nice invention.
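
    A toy illustration of the variance-matching point (my own sketch, not UC’s): with pure white-noise “proxies”, i.e. an SNR of zero, the CPS composite’s calibration-period mean and sd still match the target by construction:

    set.seed(1)
    target = ts(rnorm(146), start = 1850)          # hypothetical instrumental target, 1850-1995
    proxies = matrix(rnorm(1000*20), 1000, 20)     # 20 pure-noise "proxies", AD997-1996
    comp = ts(rowMeans(scale(proxies)), start = 997)
    cal = window(comp, 1850, 1995)
    recon = (comp - mean(cal))*sd(target)/sd(cal) + mean(target)
    sd(window(recon, 1850, 1995)); sd(target)      # identical, regardless of SNR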

  8. Steve McIntyre
    Posted Nov 2, 2008 at 2:41 PM | Permalink

    I’m very puzzled as to how the calibration of the Agassiz and GRIP series is done without corresponding gridcells. Something seems to be available or else UC wouldn’t have got these series into his results.

    It looks like maybe the instrumental series cps/infilled2poles_1850 has infilled values for these gridcells. But the archived instfilled data only has 1732 gridcells and isn’t “infilled to poles” – I’m wondering whether Mann used a different instrumental version than the one that’s archived. I don’t see any code so far that yields infilled2poles, but it’s read into gridboxcps.m.

  9. Posted Nov 2, 2008 at 2:46 PM | Permalink

    Lurkers, please use the Tip Jar – Steve needs a standalone Matlab license!

    • conard
      Posted Nov 2, 2008 at 9:09 PM | Permalink

      Re: UC (#10),

      Have you tried Octave? I use it (and the underlying GSL) regularly with emacs. However, I do not often have to run complicated matlab projects from matlab users, nor have I spent much time looking in detail at the matlab files under consideration at this site, so I cannot speak for its applicability to what Steve does here. It may be worth a try.

  10. Posted Nov 2, 2008 at 3:30 PM | Permalink

    Again good stuff.

    I would love to get my hands on the averaged weighting and CPS scaling vectors to do a correction of the temp scale. Perhaps you could point me to where you archived the code and I can get what I need.

    BTW: After a ton of hours on this paper, I have concluded that 100% of the uptrend and long term shape in this CPS curve is created by mathematical artifact. The effect is so blatant, I find it implausible that Mann’s group …………[self snip]

  11. stuart harmon
    Posted Nov 2, 2008 at 7:32 PM | Permalink

    Dear Mr McIntyre
    You said you did not want to get involved in the debate about climate change. So sorry for this post. I tried to post on the realclimate web site, the bastion of our survival on this planet.

    My posts were not shown and I blame you because all I said was the hockey stick graph was shown to be at best a mistake and the rest you can guess.

    I also suggested they looked at Robert Carter’s presentation on YouTube.

    Every time I now try to post I am blocked out.

    Is this now the norm?

    Take care

    • jae
      Posted Nov 2, 2008 at 8:15 PM | Permalink

      Re: stuart harmon (#12),

      LOL. Yes, this is the norm. “Robert Carter” is probably treated as spam there. You must be very careful and devious to get any adverse ideas posted on SurrealClimate.

  12. Jim Steele
    Posted Nov 2, 2008 at 10:09 PM | Permalink

    My question is not directly related but this seems to be as good a place to jump in. And I only have a superficial understanding of statistics and I am new to this site.

    My question is: are the last 100 years of temperature data used to calculate global averages treated differently when trends for the past 1200 years are being determined? I wonder about this because warming in the Arctic is much greater, double the rate, than warming at lower latitudes. I am not sure how the Arctic temperatures bias the global averages. But since we only have about 100 years of actual measurements in the Arctic, I would suspect the Arctic data would skew trends that lack Arctic data. Also, are there good proxies for the Arctic?

    • Dave Dardinger
      Posted Nov 2, 2008 at 10:26 PM | Permalink

      Re: Jim Steele (#15),

      I’d think that ice cores from Greenland would be possible sources of temperature proxies. However, there are possible complications, such as the source of the precipitation that falls there. Shifting wind patterns could change the SST of an ocean source and make it look like temperatures were falling or rising when they were actually constant; and vice versa.

  13. braddles
    Posted Nov 3, 2008 at 4:44 AM | Permalink

    For lesser readers like me who feel a bit stupid because they don’t know what “CPS” is: it stands for “composite-plus-scale”. The fact that it has been reduced to its initials here does not mean that it is in common usage.

    In fact, Googling “composite plus scale” finds no more than about 40 hits on the entire Web, virtually all of them related to Mann and temperature reconstructions. Wikipedia has a long list of things called “CPS” but this is not one of them. There doesn’t seem to be anything standard about it.

  14. Jim Melton
    Posted Nov 3, 2008 at 7:44 AM | Permalink

    Thanks again for your efforts Steve and I do hope that you dig up something interesting in your further analysis, but for me the graphs of the proxies are enough:

    Even where you can find one proxy that seems to show a sensitivity similar to another at one time period, somewhere else in time there are massive divergences. What does averaging (or any other inappropriate stat) tell you about the collected data?

    1) They are not the same
    2) They are not measurements or approximations to any single or related group of constants that is understood
    3) er..

    This is not apples v pears or even cherries v cherry pie; it’s more like Barbara Cartland’s novels v Magna Carta.

    They share some vagaries of common language and nothing else.

    Astrology for the 21st Century

  15. Steve McIntyre
    Posted Nov 3, 2008 at 7:48 AM | Permalink

    #17. I try to write clearly, but I’m not trying to make each blog post a self-standing essay in which everything is defined. It would be nice to do so, but I don’t have time to do this and keep doing new work.

    CPS is a technique that’s been discussed dozens of times here and if you do a site google, you get dozens of references on this site alone. Having said that, it should have been in the list of acronyms, which I haven’t been keeping up to date – my apologies – and I’ve now added it.

    PS someone posted up a comment suggesting a link to a better list of acronyms. If they’ll refresh this, I’ll attend to it.

  16. Carl Gullans
    Posted Nov 3, 2008 at 8:10 AM | Permalink

    #19: Here is an extensive list of acronyms that some climate audit readers have been compiling:

    -http://climateaudit101.wikispot.org/Glossary_of_Acronyms

  17. Dave Dardinger
    Posted Nov 6, 2008 at 10:43 AM | Permalink

    Steve,

    “Back to business.”

    You go first. I’ve noticed a paucity of messages lately but we got 200+ messages on a one day thread. Could you come up with a moderately provocative climate thread which would get those of us trying to detox from the US election back on track? Maybe something like how long it will take / has taken for the current economic downturn to affect the slope of the CO2 emissions / CO2 atmospheric concentration curves? In fact I wonder if ocean SSTs have been declining (due to PDO or whatever) and if that will make it difficult to observe any economic impact since colder SSTs should result in increased uptake of atmospheric CO2.

  18. mpaul
    Posted Nov 6, 2008 at 7:19 PM | Permalink

    A bit OT. Actually, kinda far OT.

    http://www.wtop.com/?nid=220&sid=1512340

    I can’t seem to find the original paper referenced. But this article implies that this is a non-European proxy showing a medieval warm period.

  19. Steve McIntyre
    Posted Nov 6, 2008 at 8:56 PM | Permalink

    Actually, it turns out that this is very much on topic. In the AD800 and AD1000 Mann 2008 segments, speleothems are very important – something that’s been mentioned before in connection with Socotra and Dongge. I’ve been spending a lot of time in the past 2 weeks looking at speleothem papers, especially in south Asia. So this new proxy is relevant.

  20. Willis Eschenbach
    Posted Nov 6, 2008 at 11:44 PM | Permalink

    mpaul, thanks for the link to the chinese proxy. I was curious how a rainfall proxy could also be a temperature proxy … they say:

    The WX42B δ18O record presents an apparent negative correlation to the local precipitation (r=-0.64) and a strong positive correlation with local temperature after the 1960’s (r=0.8).

    In other words, if you throw away the first decade of temperature data (1950-1960) there is a good correlation with temperature (0.80). However, this struck me as a strange claim … why not include all the data? I digitized their post 1950 data, and I agree with their post-1960 figure of 0.8.

    What they neglect to mention is that if you throw away the last decade of data (1990-2000) the correlation with temperature drops to a miserable (and no longer statistically significant) 0.32 … I weep for the death of science.

    Finally, the monsoons are driven by the summer temperature difference between the ocean and the land. Accordingly, there should be more rain when there are hotter summers … but their data indicates the opposite. What’s up with that?

    w.

  21. JamesG
    Posted Nov 7, 2008 at 2:13 AM | Permalink

    In any event Mann already has a couple of Chinese proxies with an MWP, and also MWP’s from Central and South America. What he didn’t seem to have was any MWP for the Arctic or Northern Europe. Of course if he had inverted the Finnish lake data then it’d be different. But this was the big oddity for me in the proxies since Mann was previously very vocal in asserting that it was only a “European” warm period yet none of his data seems to support that point of view.