A Georgia Tech EAS8001 Assignment

Here’s an assignment for JEG’s hockey stick class. Try to replicate the proxy-gridcell temperature correlations reported in the Mann et al 2007 SI.

The correlations between the 112 proxies (presumably the AD1820 network) and gridcell temperature are shown in column 5 of the spreadsheet here. Data for the 112 proxies in the AD1820 proxy network is here. I used the HadCRU2 gridcell version for the calculations shown here, but experiments using HadCRU3 or vintage versions would be welcome.
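Anyone attempting the assignment needs, at minimum, a routine to correlate a proxy series against its assigned gridcell temperature series over their common period. Here is a minimal sketch in Python; the data is made up, standing in for a proxy and a HadCRU gridcell series, and the missing-value handling is my own assumption about how one would deal with gaps:

```python
import numpy as np

def pearson_r(proxy, temp):
    """Pearson correlation over the years where both series have data."""
    proxy, temp = np.asarray(proxy, float), np.asarray(temp, float)
    ok = ~(np.isnan(proxy) | np.isnan(temp))   # drop missing years
    return np.corrcoef(proxy[ok], temp[ok])[0, 1]

# toy example: a proxy that loosely tracks a gridcell temperature series
rng = np.random.default_rng(0)
temp = rng.normal(size=150)
proxy = 0.5 * temp + rng.normal(size=150)
r = pearson_r(proxy, temp)
```

Run over all 112 proxies, this yields the equivalent of column 5 of the spreadsheet, which can then be compared bar-by-bar to the reported values.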

First, here is a barplot showing the reported correlations, color coded by proxy type, with distinct colors for instrumental temperature, instrumental precipitation, tree ring chronologies, tree ring temperature reconstructions, coral dO18, ice core dO18 and melt pct, and miscellaneous (coral indices that aren’t dO18, and ice accumulation). Mann reports an average correlation of 0.26, an average which is obviously helped along by the very high correlation of instrumental temperature to instrumental temperature.

correl5.gif
Note: Yellow in this graphic and those below denotes tree ring temperature recons.

Now here’s the same barplot using the reported locations in the table and the proxy data from http://holocene.meteo.psu.edu/shared/research/MANNETAL98/PROXY/data1820.dat. The correlations of instrumental temperature to instrumental temperature match nicely, but otherwise many of the correlations do not match. One obvious difference is that all the reported correlations are positive, while the actual correlations are a mixed bag. In some cases, there is a plausible physical basis for reversing the orientation of the proxy – thus, I’d have no trouble if (say) coral dO18 proxies were inverted (if that’s what specialists do), provided that all of them were inverted. Similarly with, say, ice core accumulation. What I don’t buy is opportunism: scientists should at least be able to specify the orientation of a supposed temperature proxy in advance, rather than deciding afterwards.

correl6.gif

PC series present a bit of a conundrum as there is no intrinsic orientation. For tree ring PCs, the PC1 can be interpreted as a weighted average – often looking something like an average – and if the eigenvector values are negative, one can reasonably prefer the positive orientation. However, past that, lower order PCs are orthogonal to the PC1 and may be best interpreted as contrasts – in which case, it’s hard to pick a gridcell to assign the PC to or think up a reason why the contrast should have a correlation to that particular gridcell temperature. If the PCs have undergone varimax rotation, then there may be a cluster of sites that are heavily weighted e.g. the bristlecones tend to maintain a distinct identity even in lower order PCs. But the allocation is something that you have to work at.
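The sign-convention step for a PC1 at least can be made mechanical. A small sketch of the convention described above – flip the PC if its eigenvector weights come out predominantly negative; the flipped pair spans the same subspace, so nothing substantive changes (the function and toy sizes are mine, for illustration only):

```python
import numpy as np

def orient_pc(scores, loadings):
    """Put a principal component in the 'positive' orientation:
    if the eigenvector weights sum negative, negate both the scores
    and the loadings (an equivalent representation of the same PC)."""
    if np.sum(loadings) < 0:
        return -scores, -loadings
    return scores, loadings

# toy example: a PC1 that came out of the SVD with negative weights
loadings = np.array([-0.5, -0.6, -0.4, -0.48])   # site weights
scores = np.array([1.0, -2.0, 0.5])              # PC time series
scores2, loadings2 = orient_pc(scores, loadings)
```

No such convention rescues the lower-order PCs, of course – a contrast has no preferred sign either way.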

Now I don’t think that you can arbitrarily decide the orientation of a series after the fact, but let’s say that that’s what Mann did (and he obviously did). The difference between the reported correlation and the calculated absolute-value correlation is shown below – the differences being material.

correl7.gif

After I did the above calculations, I noticed something quite weird about the Mann et al 2007 SI. The longitudes are inconsistently reported. For series 80-83 and 96-112, west is positive, while for the other series, west is negative. So I placed the longitudes on a consistent basis (also in the process changing the longitudes of series 84-92 from a 0 to 360 basis to a -180 to 180 basis). I then re-ran the two calculations above. There is a separate list of lat-longs here which appears to be consistent. Again one finds a mixture of positive and negative orientations.
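Putting the longitudes on a consistent basis is a one-liner. A sketch handling both the 0 to 360 convention and a table where west was (inconsistently) recorded as positive – the function name and flag are mine:

```python
def to_east_positive(lon, west_is_positive=False):
    """Place a longitude on a consistent east-positive, -180..180 basis.
    Handles a 0..360 convention and tables where west was recorded
    as positive."""
    if west_is_positive:
        lon = -lon                         # flip the west-positive convention
    return ((lon + 180.0) % 360.0) - 180.0  # wrap 0..360 into -180..180

# e.g. a 0..360 longitude and a west-positive longitude, both for 72.5W
a = to_east_positive(287.5)                        # -> -72.5
b = to_east_positive(72.5, west_is_positive=True)  # -> -72.5
```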

correl8.gif

Then here again is the difference between reported and calculated correlation (this time correcting the inconsistent longitudes in Mann’s table):
correl9.gif

That’s my try at replicating Mann’s correlation table. The homework assignment for EAS8001: can you achieve a better replication of Mann’s correlation table – perhaps using a different temperature series? Perhaps a different proxy version? Use your imaginations.

Garbage Characters in Old Posts

As a result of the migration between servers, specialized characters in old posts are now being rendered as garbage. The characters affected are mainly “, -, ‘, …, š, ü, … Each of these renders into a long garbage string. To fix the problem, one needs to do a bulk find-and-replace over the entire database. I don’t know how to do it. There’s probably some information in the WordPress documentation, but I don’t have the time to figure it out and it’s probably not very efficient for me to do so anyway. If someone more knowledgeable can volunteer a method, I’ll see if it passes muster with Pete Holzmann and/or Anthony Watts and see if either of them can fix things without screwing up something else.

Mann et al 2007 Precipitation Teleconnections

Judith Curry suggested that we talk a little about Mann et al 2007, available here. I noted its publication last summer when Jean S and UC made some remarks about it. It has an extensive SI with code here and, in fairness to Mann, there is a serious attempt at documenting his work in the SI. At the time, I noted that Mann et al 2007 continued the use of his incorrectly calculated principal components – something that seems absurdly stubborn in view of the clear statements by both the NAS panel and Wegman that the calculation was erroneous. It also reflects poorly on the JGR referees that they allowed this incorrect method to remain in use, and on the broader community, including JEG and Judith Curry, that no one has seen fit to hold any of them accountable for it.

Today, I noticed something very interesting in his SI that goes back to MM 2003 and demonstrates the fanatical resistance to admitting even the slightest error. In MM03 here, we observed that there was an exact match between the precipitation series in Mann’s gridcell 42.5N 72.5W (in New England) and the GHCN historical series for Paris, France. I summarized the match at the time with the phrase:

The rain in Maine falls mainly in the Seine.

Here’s an excerpt from MM03 (a paper with which Mann is familiar):

mann074.gif

Subsequent to MM03, we observed that the location of all but one of the precipitation series in MBH98 was incorrect. We reported the problem to Nature. In Mann’s Corrigendum, he merely corrected the incorrect attribution to Bradley and Jones 1993, saying that the data came from “NOAA” but offering no hints as to its provenance within that organization and not admitting that the locations were incorrect. In the 2004 Corrigendum SI, he left all the incorrect geographical locations uncorrected.

In the new SI, there is a table comparing each proxy to the corresponding gridcell and I thought that it would be interesting to see what happened with all these incorrectly located precipitation series – including whether Mann et al 2007 had stubbornly perpetuated the erroneous locations. The table is here.

Sure enough, there’s not been a single change in any of the incorrect locations. The rain in Maine still falls mainly in the Seine. How bizarre. Mann then proceeded to calculate correlations between each precipitation series and an incorrect gridcell location – often wildly incorrect. Thus the Paris precipitation series was compared to New England gridcell information, Toulouse precipitation to a South Carolina gridcell and a precipitation series near Philadelphia to the information from the gridcell containing Bombay! A severe test of teleconnection.

I did a quick re-analysis in which I calculated correlations of the 11 MBH98 precipitation series to the 72 GHCN (ndp041 vintage) precipitation series that started before 1830. For each MBH series, I identified the GHCN series with the highest correlation and then made a new table showing the lat and long of the identified series and the MBH series. About half the series have correlations of 1 and are firmly identified. Other series have correlations of around 0.8-0.9, and the exact provenance of those series remains unknown, although the locations are undoubtedly near the ones shown here. To my surprise, I actually figured out how Mann screwed up his precipitation coordinates. Take a look below and see if you can figure out the pattern.
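The matching exercise itself is simple: for each MBH series, scan the candidate GHCN series and keep the one with the highest correlation. A sketch with synthetic series (an exact copy stands in for the correlation-1 identifications; the function name is mine):

```python
import numpy as np

def best_match(target, candidates):
    """Return the index and correlation of the candidate series most
    correlated with the target - the procedure used here to match
    MBH precipitation series to GHCN stations."""
    rs = [np.corrcoef(target, c)[0, 1] for c in candidates]
    i = int(np.argmax(rs))
    return i, rs[i]

rng = np.random.default_rng(1)
candidates = rng.normal(size=(5, 100))   # stand-ins for GHCN series
target = candidates[3].copy()            # an exact copy: correlation 1
i, r = best_match(target, candidates)
```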

| Station Id | Correlation | GHCN Site | GHCN Lat | GHCN Long | MBH Grid Lat | MBH Grid Long |
|------------|-------------|-----------|----------|-----------|--------------|---------------|
| 4327900 | 1 | MADRAS/MINAMBAKKAM | 13 | 80.18 | 12.5 | 82.5 |
| 7240811 | 0.913 | WEST CHESTER 1W | 39.97 | -75.63 | 17.5 | 72.5 |
| 0763000 | 1 | TOULOUSE/BLAGNAC | 43.63 | 1.37 | 37.5 | -77.5 |
| 0765000 | 1 | MARSEILLE/MARIGNANE | 43.45 | 5.23 | 42.5 | 2.5 |
| 7250706 | 0.821 | NEW BEDFORD | 41.63 | -70.93 | 42.5 | 7.5 |
| 0715000 | 1 | PARIS/LE BOURGET | 48.97 | 2.45 | 42.5 | -72.5 |
| 1123100 | 0.836 | KLAGENFURT-FLUGHAFEN | 46.65 | 14.33 | 47.5 | 2.5 |
| 1151800 | 1 | PRAHA/RUZYNE | 50.1 | 14.28 | 47.5 | 12.5 |
| 0365701 | 0.838 | OXFORD | 51.7 | -1.2 | 52.5 | 12.5 |
| 0310202 | 1 | EALLABUS | 55.6 | -6.2 | 52.5 | -2.5 |
| 0316001 | 1 | EDINB.OBS./BLACKFORD | 55.9 | -3.2 | 57.5 | -7.5 |

The first row is located OK, but the MBH lat-longs for the 2nd row shouldn’t be there: all the coordinates in rows 3-11 should be moved up one row, to rows 2-10. I presume that he intended to have a Bombay precipitation series in the data set, but forgot to actually include it, so a series is shown in that location when none actually comes from there. While a Bombay precipitation series might provide useful information about the monsoon, one feels less confident that precipitation data from near Philadelphia, interpreted as Bombay precipitation, will provide a lot of useful information on the monsoon – but I guess we should never underestimate the power of teleconnection.
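The off-by-one claim is easy to check numerically. Using the lat-longs from the table above, and assuming 5x5 degree gridcells centred at 2.5 plus multiples of 5 (with all longitudes placed on an east-positive basis), shifting the MBH grid coordinates up one row puts each of rows 2-10 in exactly the gridcell containing the corresponding GHCN station:

```python
import math

# (GHCN lat, GHCN long, MBH grid lat, MBH grid long) from the table above
rows = [
    (13.00,  80.18, 12.5,  82.5),   # MADRAS
    (39.97, -75.63, 17.5,  72.5),   # WEST CHESTER
    (43.63,   1.37, 37.5, -77.5),   # TOULOUSE
    (43.45,   5.23, 42.5,   2.5),   # MARSEILLE
    (41.63, -70.93, 42.5,   7.5),   # NEW BEDFORD
    (48.97,   2.45, 42.5, -72.5),   # PARIS
    (46.65,  14.33, 47.5,   2.5),   # KLAGENFURT
    (50.10,  14.28, 47.5,  12.5),   # PRAHA
    (51.70,  -1.20, 52.5,  12.5),   # OXFORD
    (55.60,  -6.20, 52.5,  -2.5),   # EALLABUS
    (55.90,  -3.20, 57.5,  -7.5),   # EDINBURGH
]

def gridcell(x):
    """Centre of the 5-degree gridcell containing coordinate x."""
    return math.floor(x / 5.0) * 5.0 + 2.5

# shift the MBH grid coordinates up one row (rows 3-11 -> rows 2-10)
shifted = [rows[i + 1][2:] for i in range(len(rows) - 1)]
# after the shift, rows 2-10 land in the gridcell of their true location
matches = [shifted[i] == (gridcell(rows[i][0]), gridcell(rows[i][1]))
           for i in range(1, len(rows) - 1)]
```

Every one of the nine shifted rows matches, and the first (Madras) row already matches unshifted, confirming the collation-error diagnosis.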

This is the third time that I’ve seen a goofy collation error in MBH materials. In the original proxy data set that Rutherford directed me to, all the PC series were off by one year. This was the famous data set that Mann subsequently said was the “wrong” data set and deleted, reproaching me for relying on the data set at his website to which Scott Rutherford had directed me – a data set that I’d even asked Mann to confirm was the correct one. (A new archive suddenly materialized in Nov 2003.)

Then in Rutherford et al 2005, there’s a similar collation error, which we discussed at the time, in which, as I recall, the temperature series are all incorrectly displaced one year relative to the proxy collation.

When I first observed that the rain in Maine falls mainly in the Seine, I hadn’t considered the possibility of teleconnection. I just thought it was an error (and I still think this). JEG describes Mann et al 2007 as a “sophisticated” analysis. But because Mann has not corrected the geographical locations of his precipitation series, all the calculations showing correlations to the gridcell location for these series are incorrect – despite this matter being brought to his attention long ago.

All in all, I feel that it detracts somewhat from the explanatory value of Mannian teleconnection theory if data sets can be located on incorrect continents without any effect on the results.

MBH98-Style Pseudo-Confidence Intervals for Loehle

The position at ClimateAudit is that the error bars in MBH98 are incorrectly calculated and “pseudo science”, and that no one knows how the error bars in MBH99 were calculated (not just me, Jean S and UC, but also von Storch) – these error bars are also “pseudo science”. Notwithstanding this view, UC has posted up MBH98-style error bars for the Loehle reconstruction, as shown here:

L07cis

UC comments:

We have a trade-off here: no error bars or wrong error bars. Not a good situation – no indication of accuracy or a false indication of accuracy (which leads to discussion about reliability, or consistency in some fields). Here are the MBH98-style CIs for Loehle’s reconstruction that Jean sent me:

Jean S had previously written:

I personally do not believe the Mannian “error analysis” is worth anything. However, since JEG seems to be insisting on those, I calculated the Mannian “CI”s for the Loehle reconstruction the following way:

1) I took the HadCRU global instrumental series and 30-year running mean filtered it (in order for the target series to match the Loehle reconstruction)
2) standardized both series to the mean of 1864-1980
3) calculated the RMSE (over the 1864-1980 overlap) between the series (which gives the Mannian CI sigma).

I sent my files to UC for double checking, but here are the preliminary results:
RMSE=0.067, so that gives Mannian CIs as 2*sigma=0.13! BTW, R2=0.73 and the series are “remarkably similar” using the Mannian terminology.

There’s some “skill” for you, JEG. Have fun!
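Jean S’s three steps are simple enough to sketch. The series below are synthetic stand-ins (I am not reproducing the HadCRU data or the Loehle reconstruction here), but the recipe – running mean, centre on the calibration-period mean, RMSE, double it – is as he described:

```python
import numpy as np

def runmean(x, n):
    """n-year running mean (shortens the series by n-1 points)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def mannian_ci(recon, target, calib):
    """Steps 2-3: centre both series on their calibration-period means,
    then report 2x the RMSE of their difference over that period."""
    recon = recon - recon[calib].mean()
    target = target - target[calib].mean()
    rmse = np.sqrt(np.mean((recon[calib] - target[calib]) ** 2))
    return 2.0 * rmse

rng = np.random.default_rng(2)
raw = rng.normal(size=150)                  # stand-in annual instrumental series
target = runmean(raw, 30)                   # step 1: 30-year running mean
recon = target + 0.05 * rng.normal(size=target.size)  # stand-in reconstruction
calib = slice(0, target.size)               # the 1864-1980 overlap, notionally
ci = mannian_ci(recon, target, calib)
```

The closer the smoothed reconstruction hugs the smoothed target over the calibration period, the tighter the resulting “CI” – which is precisely why such intervals say nothing about out-of-sample accuracy.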

I suspect that I speak for Jean S and UC (both statistics professionals in respected universities) as well as myself when I say that the apparent inability of climate scientists to recognize and reject the pseudo-science of MBH error bars – worse, their embracing of these calculations as an advance in their science – does not increase our confidence in their judgment when we are asked to accept their judgment in other areas that we have not studied.

The Loehle Network plus Moberg Trees

Loehle’s introduction emphasized the absence of tree ring chronologies as being an important aspect of his network. I think that he’s placed too much emphasis on this issue as I’ll show below. I previously noted the substantial overlap between the Loehle network and the Moberg low-frequency network.

I thought that it would be an interesting exercise to consider Loehle’s network as a variation of the Moberg network as follows:

  • expand and amend Moberg’s low-frequency network (11 series) to the larger Loehle network (18 series);
  • keep the Moberg tree ring network;
  • use my emulation of Moberg’s wavelet methodology using the discrete wavelet transform (not exactly the same as Moberg’s method, for which there is no source code, making exact emulation very difficult).

The results are shown in the Figure below. Obviously this method maintains the general “topography” of the Loehle network as to the medieval-modern relationship, while using a Team method (or at least a plausible emulation of the Moberg method).
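For readers wanting to experiment, the building block of any DWT-based emulation is the wavelet split itself. As an illustration only – this is a generic one-level Haar transform, not Moberg’s unpublished method nor my actual emulation code – a series can be separated into smooth and detail components and exactly reconstructed:

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete Haar wavelet transform: split a
    series (even length) into smooth and detail halves."""
    x = np.asarray(x, float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (smooth)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (detail)
    return s, d

def haar_idwt(s, d):
    """Invert one level: interleave the reconstructed pairs."""
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

series = np.arange(16, dtype=float)
s, d = haar_dwt(series)
roundtrip = haar_idwt(s, d)
```

Applying the split recursively to the smooth half gives the multi-resolution decomposition; a Moberg-style recipe then takes the low-frequency levels from one network and the high-frequency levels from another.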

moberg23.gif

The difference between Moberg’s results (in which the modern warm period was a knife-edge warmer than the medieval) and these results rests entirely with proxy selection. The 11 series in the Moberg low-frequency network are increased to 18, primarily through the addition of ocean SST reconstructions (two Stott recons in the Pacific Warm Pool, Kim in the north Pacific, Calvo in the Norwegian Sea, de Menocal offshore Mauritania and Farmer in the Benguela upwelling), while excluding the uncalibrated Arabian Sea G Bulloides series – a proxy much criticized at CA. The other additions are the Mangini speleothem and the Holmgren speleothem (the Lauritzen speleothem in Norway was excluded for reasons that I’m not clear about, but its re-insertion would not change things much), plus the Ge phenology reconstruction and the Viau pollen reconstruction.

So the underlying issue accounting for the difference is not the inclusion or exclusion of tree ring series (in a Moberg-style reconstruction) but mainly the inclusion of several new ocean SST and speleothem temperature reconstructions, combined with the removal of uncalibrated proxies.

Something New in the Loehle Network

In the construction of his network, Loehle has done something very simple and very sensible that, amazingly, has never been done in a complete network in any previous temperature reconstruction. It is something that neither bender nor JEG noticed; in fact, even Loehle himself didn’t notice. Try to guess before you look at the answer. Continue reading

Loehle Proxies #2

Craig Loehle has sent digital versions of the 18 proxies used in Loehle 2007, which are now in the directory http://data.climateaudit.org/data/loehle listed in the following order (slightly different from the order in the previous note, shown here with url information). I’ve been able to successfully crosscheck many of these series against their original provenance. I’ve still got some questions that I’m seeking to resolve. Loehle has been very co-operative in establishing a proper data set with accurate and reproducible data citations. This compares favorably with (say) Hegerl et al 2006, where after nearly 2 years of effort and over 20 emails, I am still unable to establish the provenance of several series, or Esper et al 2002, where it required the intervention of the journal to obtain data (and to this date, one series is still missing).

While it would have been better to have had this all available at the start, at this point the only thing that is “unscientific” about the data situation for Loehle 2007 is the relative promptness of the data availability and the co-operativeness and commitment of the author to ensuring an adequate archive – as compared, of course, to a realclimate scientist.

#1) GRIP borehole temperature (Dahl-Jensen et al., 1998); see Moberg Nature site supplementary material
[SM- digitization at CA/data/moberg/djgrip.txt]
#2) Conroy Lake pollen (Gajewski, 1988); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/pollen/recons/liadata.txt
#3) Chesapeake Bay Mg/Ca (Cronin et al., 2003); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/cronin2003/
#4) Sargasso Sea 18O (Keigwin, 1996); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/keigwin1996/
#5) Caribbean Sea 18O (Nyberg et al., 2002); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/nyberg2002/ converted to temperature in the Moberg Nature site supplementary material
#6) Lake Tsuolbmajavri diatoms (Korhola et al., 2000); see Moberg Nature site supplementary material
[SM- Hans Erren digitization at CA/data/moberg/Korhola_fig4_points.txt]
#7) Shihua Cave layer thickness (Tan et al., 2003); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/speleothem/china/shihua_tan2003.txt (use col 7, temp)
#8) China composite (Yang et al., 2002), which does use tree ring width for two of the eight series averaged to get the composite, or 1.4% of the total data input to the mean computed below; ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/yang2002/china_temp.txt
#9) Spannagel Cave (Central Alps) stalagmite oxygen isotope data (Mangini et al., 2005); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/speleothem/europe/austria/spannagel2005.txt
#10) SST variations (warm season) off West Africa (deMenocal et al., 2000); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/demenocal2000/
#11) speleothem data from a South African cave (Holmgren et al., 1999); from author – email sent for archive link
#12) SST reconstruction in the Norwegian Sea (Calvo et al., 2002);
[SM- http://doi.pangaea.de/10.1594/PANGAEA.438810?format=html matches exactly]
#13-14) SST from two cores in the western tropical Pacific (Stott et al., 2004); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/stott2004/
#15) mean temperature for North America based on pollen profiles (Viau et al., 2006); http://www.lpc.uottawa.ca/data/reconstructions/index.html
#16) a phenology-based reconstruction from China (Ge et al., 2003); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/historical/china/china_winter_temp.txt
#17) SST from the southeast Atlantic (Farmer et al., 2005); ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/farmer2005/
#18) annual mean SST for northern Pacific site SSDP-102 (Latitude 34.9530, Longitude 128.8810) from Kim et al. (2004); http://doi.pangaea.de/10.1594/PANGAEA.438838

Here is a plot of the above data sets:

loehle7.gif

Craig Loehle sent me the following plot showing all the proxies together:
loehle8.gif

He also sent in 3 diagrams showing that the impact of different smoothing intervals is negligible. The 3 figures below show 10-year, 20-year and 30-year running averages, and the difference is obviously not substantial. I think that this would apply to gaussian smoothing or other forms of smoothing as well. While smoothing is an issue that always seems to provoke lots of opinions here, it’s very low on my list of issues.
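For anyone wanting to reproduce the comparison, a centred running average is a one-line convolution. A sketch generating the 10-, 20- and 30-year versions of a made-up series (red-noise stand-in for a proxy composite):

```python
import numpy as np

def runmean(x, n):
    """Centred n-year running average via convolution (drops n-1 points)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=400)) / 10   # red-noise "proxy composite"
smooth10 = runmean(series, 10)
smooth20 = runmean(series, 20)
smooth30 = runmean(series, 30)
```

Plotting the three smoothed versions side by side reproduces the kind of comparison shown in the figures below.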

loehle9.gif
10-year smooth

loehle10.gif
20-year smooth

loehle11.gif
30-year smooth

Craig Loehle Reconstruction #2

Continuation of Craig Loehle Reconstruction

Emile-Geay and Verification r2 Statistics

Julien Emile-Geay from Judith Curry’s university – who, together with Kim Cobb, is teaching a course on the Hockey Stick – has joined our debate with a forceful criticism of Craig Loehle’s recent paper. While Emile-Geay seems to be a lively young man with some very cordial comments about CA here, and his comments are very welcome, his initial skate around the hockey rink seems surprisingly unreflective about the defects of the canonical Hockey Team studies. Whatever validity his points have against Loehle, all too often they apply even more forcefully against Team articles, none of which are criticized. A little more attention, shall we say, to the “beam in his own eye” – or at least in the eyes of his teammates – would be welcome.

Emile-Geay asks of Loehle:

Where are the CE, RE, and most importantly R-squared statistics that are so dear to ClimateAuditers? How are we supposed to guess whether the reconstruction has any skill?

I agree with this 100%. These statistics are part of the game and should be provided. While I think that these statistics have to be very carefully assessed, and that the risk of spurious RE statistics is not understood by climate scientists at all, I agree that readers are entitled to such information about any proposed reconstruction presented as a positive alternative.
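For concreteness, the three verification statistics at issue – RE, CE and r2 – differ mainly in their benchmarks: RE benchmarks squared error against the calibration-period mean, CE against the verification-period mean, and r2 is simply the squared correlation. A sketch with synthetic data (the calibration mean of 0.3 is an arbitrary stand-in):

```python
import numpy as np

def verification_stats(obs, est, calib_mean):
    """RE, CE and verification r2 over a verification period.
    RE benchmarks against the calibration-period mean, CE against
    the verification-period mean; r2 is the squared correlation."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    sse = np.sum((obs - est) ** 2)
    re = 1.0 - sse / np.sum((obs - calib_mean) ** 2)
    ce = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, est)[0, 1] ** 2
    return re, ce, r2

# toy verification: an estimate that tracks the observations imperfectly
rng = np.random.default_rng(4)
obs = rng.normal(size=50)
est = obs + 0.5 * rng.normal(size=50)
re, ce, r2 = verification_stats(obs, est, calib_mean=0.3)
```

Note that RE is always at least as large as CE by construction (the calibration mean is a worse-or-equal predictor of the verification period than its own mean), which is one reason a reconstruction can pass an RE benchmark while failing CE and r2.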

But the more interesting issue in this demand is surely not the performance of the Loehle reconstruction, but the dissonance between Emile-Geay’s demand for a verification r2 statistic from Loehle and the past contortions by Mann (and Ammann) in trying to cover up the MBH verification r2 failure.

As we speak in November 2007, Mann has never reported the verification r2 (or CE) statistics for any MBH98-99 steps prior to the AD1820 splice. You can confirm this by examining the original MBH98 SI, where the RE is reported but not the verification r2. On the other hand, MBH98 Figure 3 shows a map reporting the AD1820 verification r2 results – a step for which results were favorable, unlike the earlier AD1400 and AD1000 steps. MBH98 also explicitly says that verification r2 results were considered, and IPCC TAR states that the Mann reconstruction had “skill” in verification statistics without limiting this to the RE statistic.

Mann has never actually reported the reconstructions for the individual steps, including the AD1400 or AD1000 steps, forcing any interested readers to run the gauntlet of trying to replicate his study from scratch in order to obtain the elementary statistics said here by Emile-Geay to be a necessity for any study (a demand with which I agree). I have been trying since 2003, without any success, to obtain Mann’s actual results (or equivalently the residual series) for the AD1400 step in order to carry out the verification tests demanded here by Emile-Geay. In 2003, Mann refused. I asked the National Science Foundation and they refused. I filed a Materials Complaint with Nature and they refused, saying that Mann was not required to produce the results of his “experiments”; it was up to him.

In 2004, I acted as a reviewer for a submission by Mann to Climatic Change, supposedly rebutting our EE 2003 article. In my capacity as a reviewer, I again asked for this information and again Mann refused. Ultimately Mann withdrew the article rather than providing the data (although the rejected article is check-kited in Jones and Mann 2004).

In MM (2005 GRL and 2005 EE), we observed that the verification r2 for the AD1400 step under consideration there was ~0. In MM (2005 EE), we observed that it seemed inconceivable that Mann had not calculated the verification r2 statistic (as indeed he had, as evidenced by his source code).

This issue has prompted much subsequent controversy. The House Energy and Commerce Committee asked Mann, Bradley and Hughes whether they had calculated a verification r2 statistic and asked what it was. Even in response to a congressional inquiry, Mann refused to provide the verification r2 statistic. He did provide code, and the code shows for certain that he calculated a verification r2 statistic in the same source code step as the RE statistic – as observed in summer 2005 at CA.

The National Academy of Sciences specifically referred to the verification r2 issue when they wrote their complaint letter to the House Energy and Commerce Committee. In our presentation to the NAS panel, we summarized the verification r2 issue as it then stood. In the NAS panel presentations, a NAS panelist asked Mann whether he had calculated a verification r2 statistic for the AD1400 step and what it was. Mann famously denied calculating it, saying that that would be a “foolish and incorrect thing” to do – notwithstanding the fact that his own source code (produced for the House Energy and Commerce Committee) and the MBH98 figure for the AD1820 step showed that he had calculated the statistic: he merely didn’t report it for the adverse steps.

As an anonymous reviewer of Wahl and Ammann, I asked the authors to report the verification r2 statistic for their calculations (fully knowing that the results were zero). They refused, while, at the same time, they issued a press release stating that all our claims were “unfounded”. I’ve reported previously on my proposal to Ammann in San Francisco in Dec 2005, proposing a joint paper itemizing points that we agreed on and points that we disagreed on, knowing that our codes fully reconciled (while neither fully reconciled to Mann’s); he said that this would interfere with his career advancement. I also urged Ammann to report the verification r2 results, telling him that he seemed like a nice young man but that I would not stand idly by if he failed to report the adverse results; he still said that he would not report the verification r2 statistics. I filed an academic misconduct complaint and, in late February 2006, the adverse verification r2 results were disclosed in an Appendix to the revised Wahl and Ammann, fully confirming our previous results (although Wahl and Ammann did not credit us with priority or acknowledge that they had confirmed our earlier findings).

The Wahl and Ammann preprint came online a couple of days after the NAS panel hearings. In a supplementary letter, we alerted the NAS panel that the revised Wahl and Ammann had confirmed the adverse verification r2 scores, and the NAS panel noted these failed verification r2 results in their report.

While I fully agree that Loehle should have reported the verification r2 statistics for his reconstruction (and I would be surprised if they were any better than the results for MBH or other Team studies), it is extremely hypocritical (and all too characteristic of Team climate science) for Emile-Geay to criticize Loehle for this omission given the history of obstruction on this matter by Mann and Ammann. If Mann wouldn’t provide this information to the NAS panel even when asked directly, shouldn’t that refusal (and related ones) have occasioned Emile-Geay’s disapproval long before his opprobrium against Loehle’s omission of this statistic (an omission which should be corrected)?

BTW, I’ve calculated the Loehle performance statistic relied on exclusively in Juckes et al 2007 – the 1856-1980 r. Loehle’s recon has an 1856-1980 r of 0.594, which matches or exceeds the 1856-1980 r reported by Juckes for several CVM variations (MBH 0.535, Esper 0.599, Jones et al 1998 0.367). It is my view (expressed in my review of Juckes et al) that his claim that these correlations were 99.99% significant was absurd, because trivial variations with different medieval-modern relations also had 99.99% significant 1856-1980 correlations (now including the Loehle reconstruction). Juckes was not even required to respond to that criticism (in breach of Climate of the Past policies), but the issue is surely back on the table.

Loehle and Moberg

Julien Emile-Geay has made many forceful criticisms of the Loehle reconstruction. For example, he says:

Relationship of each proxy to *local* temperature is not even discussed. We are just shoved a list of references (hey Craig, have you heard of tables? They are a great means that scientists use to convey information clearly). How is it that tree-rings are seen as the penultimate antichrist but that, to take but one example, d18O of speleothem calcite (e.g. Holmgren et al., 1999) is a flawless paleothermometer? Shouldn’t one discuss one by one, and with great care, the pros and cons of each proxy as a temperature recorder? d18O in forams is a notoriously flawed temperature proxy, is being replaced by Mg/Ca where possible, and one can only surmise why the Keigwin [1996] series is here. Is it just a form of screening for proxies with a low “hockey-stick index” (in McIntyre parlance)?

Emile-Geay notably did not compare Loehle’s proxy selection with those in the canonical studies. Here I’d like to observe the overlap between Moberg’s low-frequency network and Loehle’s network. Because of this overlap, such criticisms apply a fortiori to Moberg et al, about which Emile-Geay has, to my knowledge, been entirely silent.

Moberg used 11 proxies in his low-frequency network (retaining tree rings only for high-frequency information). The 11 low-frequency proxies are listed here. Here is a plot of the 11 Moberg series.

moberg4.gif

Attentive readers will see that no fewer than 8 of the 11 series are used in the Loehle network. The only exceptions are the Arabian Sea G Bulloides series (discussed a lot here), which is uncalibrated to local temperature (actually having an inverse relation!), the Agassiz melt percentage, also uncalibrated to local temperature, and the Lauritzen speleothem, digital information on which was not made available at the time.

Through a Materials Complaint to Nature, I am now in possession of a digital version of the Lauritzen data (without restriction) and have posted it up at www.climateaudit.org/data/moberg/

17 of the 18 series used by Loehle were calibrated to temperature by the original authors. There was only one exception – the Holmgren speleothem – and this one exception was seized on by Emile-Geay and criticized. I think that there are definite advantages for a Loehle-type reconstruction in maintaining a policy of using temperature reconstructions rather than uncalibrated proxies. Even single-proxy temperature reconstructions have their own problems, but at least the attempt at constructing a temperature reconstruction by the original authors avoids some defects arising in uncalibrated proxies. I’ve objected to the use of uncalibrated series in the past (e.g. the Moberg G Bulloides series) and believe that the same objection applies against the use of the Holmgren speleothem. Obviously there are occasions in which one has to broaden the inclusion criterion, but that doesn’t apply here.

If Emile-Geay believes that multiproxy authors should not use uncalibrated proxy series such as the Holmgren speleothem to which he objected, then he must surely also criticize the use of the even more objectionable Arabian Sea G Bulloides series by Moberg and by Juckes. If he thinks that the consideration of the Loehle series has been inadequate, then he must also conclude that the consideration by Moberg was inadequate. However, to my knowledge, he has thus far been silent in his criticisms of Moberg.

And of course there’s an even more powerful example of the use of a proxy uncalibrated to local temperature: Mann’s PC1. This was never calibrated against local temperature. If it were, the implied MBH98-99 calibration would entail an implausible ice age in California – a point that I’ve often made. The MBH PC1 is able to have such a large impact on reconstructions precisely because it is uncalibrated and has such negative values in its early stages.

So I welcome Emile-Geay’s suggestion that authors should examine “one by one, and with great care, the pros and cons of each proxy as a temperature recorder”. Indeed, much of the volume of ClimateAudit has been devoted precisely to such examination – an examination which has been notoriously absent in the canonical multiproxy studies. I hope that Emile-Geay is consistent in his policies and will be as unsparing in his criticism of Moberg et al, Mann and Jones and MBH, and perhaps even come to appreciate the texture of the post history at Climate Audit, in which we have tried to carefully examine the pros and cons of each temperature proxy – not a small job.