How To Publish A Scientific Comment in 123 Easy Steps

is an engaging account by Rick Trebino of Georgia Tech of his experience in trying to publish a scientific comment in a field far less controversial than climate science. Readers can doubtless think of analogous experiences in climate science. (h/t Chas)

Trebino was stonewalled by the authors when he sought data and methods – again, readers can doubtless think of analogous situations.

Trebino closes by reflecting on how matters might be improved – his first recommendation is compulsory archiving of data and parameters as a condition of journal publication, with a sanction of misconduct for authors refusing to do so after publication.

Loso: Varve Thickness and Nearest Inlet

One excellent feature of the Alaskan varvochronologists is that (unlike, say, Bradley and his coterie) some of them show and archive their work. The Kaufman student MSc theses are good at this. So too is Michael Loso’s work on Iceberg Lake. Thus while one can raise an eyebrow at (and criticize) their statistical peregrinations, they at least provide enough material that one can analyse the data (Bradley’s 1996 C2 data remaining under security.) In an earlier post, I discussed Loso (2006). Today, I’ve got a few comments on Loso (2009), which I did not discuss in my earlier post. Loso (J Paleolim 2009) revisits Iceberg Lake varves with some interesting comments (without, however, satisfactorily resolving the fundamental inhomogeneity problems.) Like many paleoclimate articles, though the author is not a statistician, the topic is primarily statistical. And like most paleoclimate articles (also mainly statistical), there is no evidence that any reviewers were statisticians.

Varves: To Log or Not to Log

The majority of Kaufman’s varvochronology proxies are various functions of varve thickness – which, if anything, seem more problematic than sediment BSi.

While Kaufman’s offering memorandum to NSF promised consistency, the handling of varve thicknesses in the various selections seems to be anything but. Kaufman et al 2009 gives no hint of the varied functional forms used to create varve thickness proxies. While none is quite as exotic as the Hallet Lake cube root of the product of BSi and flux_all, there is an intriguing variety of forms.

1. The Blue Lake temperature reconstruction is a linear function of logged varve thickness (unadjusted for density). “Turbidites” are a problem in these records and they state (Bird et al 2009):

Twenty-one anomalously thick layers, ranging from 1.1 to 18.14 cm thick were also identified in the cores (Fig. 4). Based on sedimentological characteristics (i.e. presence of graded bedding) the layers were identified as turbidites and excluded from the thickness record and varve chronology.

There isn’t any mention of tephra, though tephra are an important feature of some lake sediments. Cross-dating is via Cs137, Pb210 and C14 (per NSF abstract.)

4. Loso’s Iceberg Lake varves are linear in varve thickness (no logging), also unadjusted for density. They removed 82 varve measurements; they combined the remaining varve measurements. (As noted in the Loso post, there is considerable evidence of a 1957 inhomogeneity not adjusted for.) Loso:

Scattered among the other well-dated sections are isolated strata that record episodic density flows (turbidites), resuspension of lacustrine sediment by seismic shaking and/or shoreline-lowering events, and dumping of ice-rafted debris. The case for excluding such deposits from climatologically-oriented varve records has been made elsewhere (Hardy et al., 1996), and we accordingly removed measurements of 82 individual laminae from these other sections. Those removed (mostly turbidites) include many of the thickest laminae, but sediment structure (not thickness) was in all cases the defining criterion for exclusion from the master chronology.

Again, no mention of tephra. No mention of Cs137, Pb210 or C14 cross-dating.

Note: Cascade Lake, Alaska, another Kaufman student site, was not used. Kathan stated:

The cores are composed of rhythmically laminated mud, with an average lamination thickness of 0.4 cm; these laminations do not represent annual sedimentation (e.g. varves).

Kathan observed the presence of tephra and used a couple of known tephra for her age-depth relationship.

6. Lake C2, Ellesmere Island. This is an old 1996 data set (not collected in the ARCUS2K program) for which there is no archive (Bradley of MBH was a co-author; Bradley assured the House Energy and Commerce Committee that he had archived all his data, but I guess that he forgot about Lake C2.) [Update: Sept 23, 3.20 pm: Scott Lamoureux of Queen’s University in Kingston, Ontario has just emailed me the C2 annual data, noting that it had originally been archived with NCDC and been inadvertently removed at some point, a situation he is now redressing. I’ve posted the data up at CA/data/kaufman/C2_annual.dat.] They made a varvochronology explicitly borrowing from tree ring methods by first “standardizing” the data against their “expected” values, which, in this case, appear to be obtained by dividing the varve width by the linear trend. Bradley observes that one trend goes up and one goes down – an inconsistency that didn’t seem to bother anyone at the time, nor seemingly Kaufman in its present iteration. (Without original data, it is of course impossible to determine exactly what they did.) No mention of Cs137, Pb210 or C14 cross-dating. (Lamoureux and Bradley 1996; Hardy et al 1996)

A major turbidite was dated to 1957 (the same year as a major turbidite in Iceberg Lake above, presumably a coincidence.) Lamoureux and Bradley 1996 mention both a filtered and unfiltered version – in the filtered version, identified turbidites are replaced by the series mean. This data set is pretty much inaccessible to any analysis.

8. Lower Murray Lake, Ellesmere. This is a new record from Bradley’s program. The varve chronology is archived, but not the varve measurements. The caption to Besonen et al 2008 Figure 7 states: “13 turbidites were replaced with the period average thickness” in one version of the top panel. It looks like the version after adjustment is what is archived. This article also posits that a “clearly erosive” 1990 turbidite event cleanly erased the previous 20 years from the lake:

the prominent turbidite event recorded in the 1990 varve was clearly erosive. Thus, assuming the 137Cs peak in sample 3 represents ~1963, if we take the centre of the 0.5 cm zone as 1963, it suggests that the varves from ~1970 to 1989 inclusive were eroded by the 1990 turbidite.

As far as I can tell, no other erosive turbidite events are accounted for in any other Kaufman series – surprising that one only occurs at this site. In this case, density values are calculated yielding a mass_flux estimate. Kaufman uses the log(mass_flux) in his composite.

C14 cross-dating was carried out on 2 samples (due to unavailability.) Dates were older/much older than the presumed varve date. Limited Cs137 and Pb210 dating was carried out due to “cost constraints” (notwithstanding the commitment to NSF to do this work):

Owing to cost constraints, only the top 2.5cm (five samples) of sediment were analysed.

The Cs137 peak was unexpectedly high in the sample, leading to the hypothesis of an erosive 1990 turbidite (as opposed to core loss at the top, which seems at least possible but was not discussed – the team being content with the erosive turbidite theory.)

9. Big Round Lake, Baffin Island. This varve thickness record goes back only to 980AD, prior to which the sediment lacked a varve structure. Sand thicknesses (if present) are reported for each varve layer and the residual is used by Kaufman. A linear function is used (Thomas and Briner 2009). Cross-dating here was done by 239+240Pu; no 137Cs or 14C measurements are reported (and none were carried out for some reason, a point confirmed by the authors.)

10. Donard Lake, Baffin Island. This is the last North American varve thickness series in Kaufman 2009. This is an older series (2001) predating the ARCUS2K program. Moore et al 2001 does not discuss the handling of turbidites. No radiogenic cross-dating is reported.

Non-Normality
The need for some sort of handling of non-normality is shown by the following diagram, a histogram of the Loso varves (truncated at 60 mm for visibility here, though there are a number of thicker varves) together with the best fit under several different distributions. As you can see, the distribution is far from normal. The log transformation, which is semi-popular among varvochronologists, doesn’t really do the trick either. An inverse gamma distribution fits better than log-normal (I looked at this because the fat-tailed Levy distribution is a special case of an inverse gamma distribution.)
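The fit comparison can be sketched with scipy. The data below are a synthetic stand-in for the Loso measurements (the actual data aren’t reproduced here), so only the method, not the numbers, carries over:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-in for the varve thicknesses (mm): heavily
# right-skewed positive data, as in the histogram discussed above.
thickness = stats.invgamma.rvs(a=1.2, scale=8.0, size=2000, random_state=rng)

# Fit candidate distributions by maximum likelihood (location fixed at 0).
ln_params = stats.lognorm.fit(thickness, floc=0)
ig_params = stats.invgamma.fit(thickness, floc=0)

# Compare goodness of fit via total log-likelihood (higher is better).
ll_lognorm = np.sum(stats.lognorm.logpdf(thickness, *ln_params))
ll_invgamma = np.sum(stats.invgamma.logpdf(thickness, *ig_params))
print(f"log-normal log-likelihood: {ll_lognorm:.1f}")
print(f"inv-gamma  log-likelihood: {ll_invgamma:.1f}")
```

On real data, one would compare AIC or a tail-focused diagnostic rather than raw likelihood alone, but the scaffolding is the same.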

Taking a simple average of these sorts of weird distributions (as is done in most of these recons) doesn’t seem like a very diligent way of handling the non-normality. While the functional form of the Hallet Lake BSi flux series is not very attractive, one can sense the underlying motive for trying to do something to coerce the data into a comparison with temperature.

I presume that varve distributions for the other sites (with far more incomplete archives than Loso) have similarly problematic distributions and this is one of the reasons for the quirky handling. All in all, we see at least 4 distinct methods (before turbidite adjustment): linear; log of varve thickness; log of varve mass flux; detrended linear. I suspect that there are further differentiae beneath the surface.

As to the question – to log or not to log – my guess (as of today) is that, if these series are to be used, it would make more sense to do a sort of non-parametric mapping of the distribution (handling nugget effects severely) onto a normal distribution – prior to doing any correlations or reconstructions. Of course, there’s another possibility – these things aren’t temperature proxies and shouldn’t be used in major reconstructions – a possibility that we shall examine more as we get a better understanding of varvochronology.
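A minimal sketch of such a non-parametric mapping is a rank-based inverse normal transform; the tie-handling here is one (assumed) way to treat nugget effects, since a run of identical values collapses to a single normal score:

```python
import numpy as np
from scipy import stats

def rank_to_normal(x):
    """Map a sample onto a standard normal via its ranks (a Blom-style
    rank-based inverse normal transform).  Ties -- e.g. a 'nugget' of
    repeated minimum thicknesses -- share an average rank, so identical
    values map to one normal score instead of spreading across a tail."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x, method="average")
    # Blom plotting positions keep the extremes away from +/- infinity.
    p = (ranks - 0.375) / (len(x) + 0.25)
    return stats.norm.ppf(p)

# Skewed hypothetical varve thicknesses.
thickness = np.exp(np.random.default_rng(1).normal(1.0, 1.2, size=500))
z = rank_to_normal(thickness)
print(round(z.mean(), 3), round(z.std(), 3))  # close to 0 and 1
```

Any correlation or reconstruction step would then operate on `z` rather than the raw thicknesses.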

Overpeck’s Enigmatic Tuq(?)

In fairness to Kaufman, he and his students actually did useful field work. Elsewhere I’ve noted that their MSc theses contain many helpful details on the Alaskan lakes. The situation is entirely different with the Greenland lakes, where Jonathan Overpeck was the point man for the ARCUS2K lake project collection.

Kaufman et al 2009 contains one sediment record from Greenland – SFL4-1 organic material (OM%), published (and archived) in 1999 by Willemse (pdf, data). The most recent sediment in this series is dated to 1938; it is ascribed approximately decadal resolution.

Together with the three truncated Finnish proxies (see upside-down Mann problem), the lack of closing data for these records resulted in short-segmenting of the latter portion of the standardization period (something that Hu has expressed concern about.) It’s hard to see that the upside-down truncated Finnish proxies add useful information to this recon; the old Greenland OM% series is the only other series with a short modern period.

The purpose of Kaufman’s program was to triple the number of new proxies – so why didn’t they get any new data for Greenland? Why did he use the tired old SFL4-1 series? I can’t imagine that Kaufman was thrilled to use this thing.

It wasn’t for lack of funding of the Greenland data. Overpeck got about 50% of the amount that Kaufman got. (Ammann actually got more than Kaufman – I wonder what Kaufman got out of Ammann: again, I can’t imagine that he was thrilled to have to recycle the 5600-3600 BP GCM run, after Ammann and associates got $261,000 to do model runs for him.)

Unlike Kaufman’s grant sheet where several publications are listed, Overpeck’s grant sheet does not show any publications (Overpeck is listed as a Kaufman 2009 coauthor and I suppose that will ultimately accrue on this grant sheet as well as the others.)

In the original listing of 30 NSF sites, the Greenland sites that seem to be Overpeck’s responsibility are Sved Lake and Tuqtotog Lake.

In the first PI meeting in 2006, the notes report:

– Sven: Chironomid-based temperatures completed at millennial scale; pollen analysis underway
– Tugtutog: Microliminated; 14C chronology completed; chironomid analyses pending

In the 2nd PI meeting (which, for the predominantly American participation, was conveniently in Iceland), Donna Francis reported:

Greenland: Sved Lake (ongoing midge work; awaiting uppermost seds for last 600 yrs), Qipisarqo? (Holocene profiles generated; last 2kyr may be Peck’s project), Tug?

The enigmatic Tug? subsequently remained a question-mark, as indeed do Sved Lake and Qipisarqo(?).

At the Dec 2007 PI meeting, it is tersely reported:

Greenland (Anderson)
-6 records from west Greenland
-He wonders what the controls are on the proxies.
-Argues that the records presented are oversimplified.

Doubtless CA readers are also wondering “what the controls are on the proxies”. The attendance includes two Andersons (J and L). Neither Overpeck nor Francis were there.

None of the PI meeting minutes make any reference to the inclusion of the (ancient) SFL4-1 OM% series as a Greenland sediment representative. According to the minutes, Overpeck’s data was nearly ready, but it has since fallen off the radar. We have previously seen unfortunate precedents for the non-archiving of data in climate science, e.g. Jacoby:

If we get a good climatic story from a chronology, we write a paper using it. That is our funded mission. It does not make sense to expend efforts on marginal or poor data and it is a waste of funding agency and taxpayer dollars. The rejected data are set aside and not archived.

While one always hopes that things improve, the failure to provide any accounting for the missing Overpeck data is not very encouraging. And it’s not as though the PIs were unprepared for scrutiny. After all, the Iceland meeting minutes say:

We need to be very careful that all data included in the synthesis are publicly available, and preferably peer-reviewed. We will be SCRUTINIZED. Ideally as many *published* records as possible. We may be a lightning rod – and therefore need to be extremely careful to document our decisions and be ready to publicly defend them.

Overpeck seems to have been paid in full for his data. Time for Overpeck to archive it. Perhaps Kaufman will endorse this request.

Kaufman’s "Classical" Log Regression

I observed yesterday that I had been unable to replicate the archived version of Kaufman’s Hallet Lake series – something that I thought was due to a change in the archived version (since the NCDC archive noted that a new version had been archived in Nov 2008.) This turns out not to be what happened.

Kaufman archived BSi (%) at NCDC and I innocently assumed that this was what was used in Kaufman et al 2009. It appears that, instead of using the archived BSi%, Kaufman used (a version of) the temperature reconstruction from BSi flux developed using a “classical logarithmic” regression by one of Kaufman’s students.

I thought that CA readers would be intrigued by the “classical logarithmic” regression. The thesis of Kaufman’s student says:

Quantitative summer temperature reconstruction: BSi flux increases exponentially with temperature over the calibration period (figure 19). A classical logarithmic regression was used to develop a transfer function.

The “classical” logarithmic function is then shown in full in the thesis (pdf, page 34).

I don’t feel quite so bad for not being able to figure out this “classic” functional form from the information in Kaufman et al 2009 and NCDC.

I tested the above formula against the archived temperatures in McKay A-6 and replicated these temperatures almost exactly. The reconstruction uses BSi flux, not BSi %, BSi flux being (conventionally) defined in the McKay thesis as follows:

BSi concentrations (%) were converted to flux (mg cm-2 yr-1) by multiplying %BSi by total flux (the product of bulk density and sedimentation rate).

The use of 1.000626 as a base in the logarithm seems a little exotic and it’s unclear why this unusual nomenclature was used. The form itself can be somewhat simplified (with one less free parameter) to the following (still not the most elegant functional form in the world) – the estimation of the parameters remains unclear.

Temperature = 12.79*( 0.765718 +log(BSi_flux))^(1/3)-1.082

In other words, temperature is said to be an affine function of the cube root of the (shifted) log of BSi flux – BSi flux itself being the product of flux_all and BSi%.
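As a check on the arithmetic, the simplified form can be coded directly. I assume the log is natural: a base-1.000626 logarithm is just a rescaled natural log, which is presumably how the extra free parameter folds away.

```python
import numpy as np

def mckay_temperature(bsi_flux):
    """Simplified Hallet Lake transfer function as given above:
    T = 12.79 * (0.765718 + ln(BSi_flux))**(1/3) - 1.082
    (log assumed natural; np.cbrt handles a negative argument
    when the flux is very small)."""
    return 12.79 * np.cbrt(0.765718 + np.log(bsi_flux)) - 1.082

# The 'exotic' base is a rescaled natural log:
# log_1.000626(x) = ln(x) / ln(1.000626) ~= 1598 * ln(x),
# so the base can be absorbed into the other fitted constants.
scale = 1.0 / np.log(1.000626)
print(f"log base 1.000626 = {scale:.1f} * ln")

# BSi flux (mg cm^-2 yr^-1) = %BSi * bulk density * sedimentation rate.
for flux in (0.5, 1.0, 2.0, 5.0):
    print(f"flux {flux:4.1f} -> T = {mckay_temperature(flux):5.2f} C")
```

At a flux of 1.0 the formula gives roughly 10.6 °C, consistent with the average unfiltered temperature quoted in the thesis excerpt below.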

They go on to say:

To minimize the effects of point-to-point variability, and to emphasize longer-term changes in temperature, a 50-year Gaussian-weighted low-pass filter was applied to the high-resolution record of BSi flux before using the transfer function to reconstruct summer temperatures for the past 2 kyr. The BSi flux-inferred summer temperatures for the past 2 kyr range from 9-14°C; the average of the unfiltered data for the past 2 kyr is 10.6°C (10.7°C for the filtered data), nearly 2°C cooler than the modern average temperature (12.4°C) (figure 21); here defined as 1976-2005, the period of continuous measurement during the current AL regime. Comparisons of the observed and predicted values, along with the residuals show that the transfer function tends to slightly underestimate the highest and lowest temperatures (figure 19).

Earlier today, I thought that I’d managed to replicate the Kaufman 2009 version from the temperature recon in the McKay thesis, but as of now, I’m still stumped as to the provenance of the Kaufman version. Here’s what I get, kaufmanizing the McKay temperature reconstruction:

The NCDC version is the same as the thesis version, but says that a new version was provided in Nov 2008. Perhaps Kaufman used an old version of this series. Another Team mystery.

BSi flux and BSi-flux reconstructed temperature are shown against depth in the McKay Thesis Appendix A-6, with temperatures expurgated below 192 cm. (These would be high temperatures in the early Holocene according to the McKay formula.) The BSi% measurements can be matched to the BSi% measurements at NCDC for all but three years (three high years expurgated from the NCDC record.) The NCDC record is indexed by year rather than by depth. The two records can be spliced to yield an age-depth relationship that (annoyingly) is not otherwise archived.
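The splice can be sketched as a match on the shared BSi% values. The column names and numbers below are hypothetical, and a real implementation would need a rounding tolerance and handling for the three expurgated years:

```python
import pandas as pd

# Hypothetical fragments of the two records: the thesis appendix is
# indexed by depth, the NCDC file by year, with BSi% common to both.
thesis = pd.DataFrame({
    "depth_cm": [10.0, 10.5, 11.0, 11.5, 12.0],
    "bsi_pct":  [2.31, 2.47, 2.18, 2.55, 2.40],
})
ncdc = pd.DataFrame({
    "year":    [1990, 1980, 1970, 1960, 1950],
    "bsi_pct": [2.31, 2.47, 2.18, 2.55, 2.40],
})

# Match rows on the shared BSi% values to recover an age-depth table.
age_depth = thesis.merge(ncdc, on="bsi_pct", how="inner")[["depth_cm", "year"]]
print(age_depth)
```

The resulting table is exactly the age-depth relationship that, annoyingly, is not otherwise archived.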

 

Reference: McKay thesis, 2007 pdf

Five Alaskan BSi Series

At the second meeting of Kaufman’s PIs, one of the scientists plaintively asked:

But shouldn’t we aim to do a synthesis that is only lake seds (at least as first step)?

This logical building block was pushed aside (thereby allowing Briffa’s Yamal series to be recycled for the nth time) on the following grounds:

some modeling experiments will require tests vs paleodata from other parts of the world (not just our lake sites)

As we observed to (bender’s) surprise the other day (see comments in thread …), Kaufman et al 2009 did not actually contain model results for the past 2000 years although that was supposedly part of the program – Kaufman Figure 4 uses results from a CSM model for the period 5600-3600BP. Nothing is mentioned in Kaufman et al 2009 about Ammann and Schneider’s runs (though they were discussed at PI meetings).

Although the archiving record of Kaufman’s program at NOAA is very incomplete, there have been some useful additions to the Alaskan BSi record from Kaufman’s MSc students that we’ve been discussing over the last few days. Yesterday, I mentioned Daigle’s Goat Lake BSi series, which showed a remarkable contrast between LIA sediments and MWP sediments. An excerpt is shown below.

Should any MSc students read this thread, I thought that Daigle’s graphic was considerably more effective than the corresponding graphics in theses by Kathan and McKay, as it nicely integrated the geological classification of the sediments with the quantitative information from BSi % and Organic Material %. Low values of BSi% and OM% are associated with a type of gray mud, while high values of BSi% and OM% are associated with “gyttja”.

Only McKay’s Hallet Lake series was used in Kaufman et al 2009. Here is the Hallet Lake BSi series (rendered into Kaufman decadal averages and SD units) – showing both the Kaufman archived version and my replication. The NCDC archive observes that a new version was filed in Nov 2008 and possibly Kaufman used an older version of the Hallet series. (I’ve exactly replicated about 10 Kaufman series from original data and am 100% confident that I’ve got his decadal averaging and re-scaling method. In my emulation, I’ve included 3 BSi upspikes reported in the MSc data, but expunged in the NCDC version.)
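For what it’s worth, here is a sketch of the decadal averaging and re-scaling as I understand it – my reading of the method, not Kaufman’s actual code, and whether 1800 is inclusive in the base period is an assumption:

```python
import numpy as np
import pandas as pd

def kaufmanize(series, base_period=(980, 1800)):
    """Average an annual series into decadal bins, then rescale to zero
    mean and unit standard deviation over the 980-1800 AD base period.
    A sketch of the processing as described in the post."""
    decade = (series.index // 10) * 10
    decadal = series.groupby(decade).mean()
    in_base = (decadal.index >= base_period[0]) & (decadal.index < base_period[1])
    base_vals = decadal[in_base]
    return (decadal - base_vals.mean()) / base_vals.std()

# Hypothetical annual BSi% series, AD 1-2000.
years = np.arange(1, 2001)
rng = np.random.default_rng(2)
annual = pd.Series(rng.normal(5.0, 1.0, size=years.size), index=years)

sd_units = kaufmanize(annual)
base = sd_units[(sd_units.index >= 980) & (sd_units.index < 1800)]
print(len(base), "base decades")  # zero mean, unit SD over the base period
```

With the method pinned down this way, any residual mismatch with the archive has to come from the input version, not the processing.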

In either version, Hallet Lake BSi has a 20th century uptick, a depressed medieval warm period (always attractive to the Team) and an elevated early first millennium.

Figure 2 – Versions of Hallet Lake BSi in SD Units – Kaufman archive and emulated from MSc data.

In order to orient readers to a broader range of BSi values than the Hallet Lake site used in Kaufman, the graphic below compares BSi % over the Holocene for 5 Alaskan sites, arranged from west to east. Information for three of these sites (Goat, Cascade, Hallet) is from the MSc theses of Kaufman’s students (I manually typed in the Goat Lake values as these were annoyingly in a photo format.) Two sites are from other groups and archived at NCDC. Only the Hallet BSi value is used in Kaufman 2009.

The contrasts in Daigle’s Goat Lake series are obviously much sharper than the corresponding series from Kathan (Cascade Lake) and McKay (Hallet Lake) (or the other two sites.) This seems like an extremely interesting and useful result – precisely the sort of thing that one would like to have seen discussed in a “synthesis that is only lake seds (at least as first step)”. All of the following series show BSi (plotted as % rather than SD units). Take a look and I’ll comment further below.


Figure 3. Five Alaskan BSi % series.

Obviously the BSi contrast at Goat Lake is 1-2 orders of magnitude greater than at the other sites. In most forms of analysis, it’s easier to work from strong contrasts than from weak contrasts and I don’t think that there’s any exception here.

The big contrast at Goat Lake is between the very low BSi% in the grey (predominantly inorganic) mud of the Little Ice Age and the LGM, and the elevated BSi% associated with gyttja in the Holocene Optimum, the MWP and modern warm period sediments. Daigle observed that the LIA glacier advance at Goat Lake was unprecedented since the LGM (I didn’t check whether he used the u-word, but he made the point.) BSi% at Goat Lake in the MWP is very high (as you can see) though not quite at the overall maximum; the anti-MWP animus in the “community” is so great that despite the elevated BSi values compared to the LIA and modern period, Daigle is obliged to observe the following:

Warmer temperatures during the MWP are not recognized in the Goat Lake productivity signal, in which OM remains constant and BSi decreases from 31% at 1000 AD to 26% at 1200 AD (Figure 22).

In comparison with the striking contrast at Goat Lake, there is negligible contrast in the Hallet Lake BSi record employed by Kaufman. Re-examining the McKay thesis, one finds that the Hallet Lake sediments are, unsurprisingly, reported to be grey mud (not gyttja) – the color of the lower mud (with higher BSi values) is a “dark grey” while the more recent portion is a “light” grey. At nearby Greyling Lake (the original NSF30 target), a gyttja zone is reported, but Kaufman did not take any BSi measurements and focused on nearby Hallet Lake.

As a third party looking at this proxy for the first time (albeit one coming with experience in geological literature and proxy literature), it’s hard for me to avoid the view that the minor Hallet Lake BSi fluctuations are extremely unlikely to be usable as any sort of useful climate record (likewise, the Cascade Lake BSi record.) It seemed to me that Goat Lake represented a much better building block, but that’s just a first impression.

I echo the frustration of the scientist who asked:

But shouldn’t we aim to do a synthesis that is only lake seds (at least as first step)?

I have less than zero interest in yet another CPS-style reconstruction blending Yamal with a bunch of other data with no common “signal” which is what we got. But it would have been very interesting to see a “synthesis” of lake sediments in which the work was carried out with consistent methods, thereby permitting an overall assessment of the value of the proxy. If this be “vicious commentary”, so be it.

Kaufman's BSi Selection

Below is a plot comparing sediment BSi (biological silica) to depth (cm) from two of Kaufman’s lakes (done by different students). I’ve shown it by depth (rather than ascribed age) since the dating of these sediment series is not without some hairiness. I’ve shown equal lengths for each lake, both covering at least 800AD-present on their assigned dates. The dotted red line shows where 1963 is assigned in each study (1963 is a key date in radiogenic sediment testing.)

Kaufman et al 2009 used only one of these series (and I’m sure that the reasons are impeccable.) Similarly only one of the two series is archived in the NCDC paleo archive (I’ve extracted the other data from an online M.Sc. thesis). Again I’m sure that the reasons for only placing one of the two series in the NCDC paleo archive are impeccable.

The question for CA readers: which of the two series is used in Kaufman et al 2009? No prize for guessing the correct answer.

BTW I wish that site reports for climate proxies were as well presented as the MSc theses of Kaufman’s students – the attention to detail in these theses makes them infinitely better resources than journal publications by their professors.

UPDATE: Here’s another Alaskan BSi series from a Kaufman student – a site also not used in Kaufman et al 2009 and not archived. In this case, the thesis has a photo image of the data (which, as a result, cannot be readily extracted as it could be from the other two theses):

 

Reference: McKay thesis 

Is Kaufman 'Robust'?

A common meme in Team-world these days is that any issues or errors are minor and that none of them “matter”. As we peel back the layers of Kaufman et al, this is the first line of Team defence.

The rhetorical impact of Team reconstructions largely derives from the modern-medieval differential: is it in the red or is it in the black?

Thus when one sees study after study which has modern-medieval differentials that are always just slightly in the black, any prudent analyst would arch his/her eyebrow slightly and examine any accounting policies that may have contributed to getting the result in the black. (I use the term “accounting” intentionally, since the term implies that there be policies for the inclusion/exclusion of particular data sets and their truncation.) And let there be no doubt: when one is dealing with CPS reconstructions with very small data sets (20 or so), it is quite possible to affect the differential through a small subset of the data.
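The back-of-envelope arithmetic is worth stating: in a CPS average of 23 normalized proxies, a 2 SD change in a single series moves the composite by 2/23 ≈ 0.09 SD units – easily enough to flip a modern-medieval differential that sits near zero. A stylized one-liner:

```python
import numpy as np

# Stylized roster of 23 normalized proxies whose modern-medieval
# differential is exactly zero; one accounting decision moves one
# series by 2 SD units.
n = 23
proxies_modern = np.zeros(n)
proxies_modern[0] = 2.0

shift = proxies_modern.mean()  # shift in the composite differential
print(f"composite shift from one series: {shift:.3f} SD units")
```

The same arithmetic applies to truncations and version choices, which is why each accounting policy needs to be examined individually.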

Sometimes the accounting exceptions look innocent enough e.g. MBH’s unique extension of Gaspe tree ring series back to 1400 from 1404. However, such variations in accounting policy invariably seem to enhance HS-ness in the composite and each such variation needs to be examined.

Obviously there are a few methods that I’ve learned to look for: does the study use Graybill bristlecones? Does it use Yamal? Does it use upside-down Tiljander? Are there any truncations or extensions of the series? Is the most modern version of the series used?

I noticed almost instantaneously that Kaufman used Yamal and upside-down Tiljander (though he mitigated the impact of upside-down Tiljander by truncating it at 1800). He used two other Finnish series, both of which are, as far as I can tell right now, used in an orientation upside-down to that proposed by the original authors. Kaufman truncated the Blue Lake varve series because of supposed non-temperature inhomogeneity in the early portion of the series, but didn’t truncate the later portion of the Loso Iceberg Lake varve series where there was a definite inhomogeneity. Kaufman appears to have used an old version of the Hallet Lake series (which was replaced over a year ago in Nov 2008 at NCDC – otherwise, the inconsistencies between the Kaufman version and the NCDC version are inexplicable.) In addition to Yamal, Kaufman used two other Briffa versions, while not using seemingly plausible tree ring series at Tornetrask (Grudd) and Indigirka, Yakutia.

I’ve done a quick sensitivity analysis in which I’ve done a CPS average (980-1800 base) with the following variations:

1. The current version of Hallet Lake (proxy #2) and the non-truncated version of Blue Lake (proxy #1). (I haven’t checked whether this “matters”, but there didn’t seem to be any overwhelming reason to use an obsolete version of Hallet Lake or a truncated version of Blue Lake.)
2. The three Briffa series (Yamal, Tornetrask-Finland, Taymyr-Avam) are replaced by Polar Urals (Esper version), Tornetrask (Grudd version) and Indigirka (Moberg version). (I think that this is the sensitivity that carries the water here and my guess is that most of the difference arises from the Briffa data. I’ve provided materials that make this easy for anyone interested to check.)
3. The three Finnish proxies are used in the orientation of the original authors i.e. flipped from the Kaufman version.

Here’s the result. Obviously there is a lot in common in the general appearance of the two composites – the difference between the two is that there is nothing “unprecedented” about the 20th century in the latter case.


Top – CPS average of 23 Kaufman proxies; bottom – variation as described above.

One can calculate the relative contribution of each accounting decision to the change in appearance. The largest contribution comes from Yamal versus Polar Urals. Each accounting decision has some impact on the modern-medieval differential. I’ve uploaded data sets and scripts so that interested readers can experiment for themselves and will attach an explanation script in the first comment.

I do not claim that the bottom graph is more reasonable or less unreasonable than the first graph. My point here – as on many other occasions – is that just calling something a “proxy” doesn’t mean that it is a “proxy”. The sensitivity of the modern-medieval differential to different roster selections means that the data is not consistent enough to yield a “robust” result.

Invalid Calibration in Kaufman 2009

Darrell S. Kaufman, David P. Schneider, Nicholas P. McKay, Caspar M. Ammann, Raymond S. Bradley, Keith R. Briffa, Gifford H. Miller, Bette L. Otto-Bliesner, Jonathan T. Overpeck, and Bo M. Vinther (Science 9/4/2009) propose a reconstruction of Arctic summer land temperatures for the last 2000 years, using 23 diverse proxies. Decadal averages of each proxy are normalized to zero mean and unit variance relative to the period 980-1800 AD, when all 23 proxies are available. These are then averaged, as available, to form a 2000-year composite. This composite is converted into temperature anomalies by comparison to the CRUTEM3 summer (JJA) temperature for land north of 60°N latitude, for 1860-2000 AD.

Unfortunately, the paper’s calibration of the proxy composite is defective. The 23 proxies used include lake varve thicknesses, varve densities, varve and sediment organic material (OM), sediment biosilica, ice core 18O depletions, and tree ring widths, and are all from different locations. There is no a priori reason to expect these very diverse proxies to all have the same behavior with respect to temperature, even when normalized. Because not all the proxies are available in each decade, the composite does not have constant composition. As its composition changes, it essentially becomes a different index, which must be calibrated separately to temperature. The authors fail to do this, and hence the reconstruction is invalid.
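The point can be illustrated with a stylized simulation – synthetic proxies with assumed response slopes, not the actual Kaufman data. When only the strong responders are available (as in the early part of a record), the composite’s relationship to temperature differs from the full-roster relationship, so a single calibration cannot serve both:

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.normal(0.0, 1.0, size=200)  # stylized decadal temperatures

# Two proxy families with different (normalized) temperature responses.
weak   = [0.3 * t + rng.normal(0, 0.5, t.size) for _ in range(10)]
strong = [1.0 * t + rng.normal(0, 0.5, t.size) for _ in range(10)]

full = np.mean(weak + strong, axis=0)  # all 20 proxies available
partial = np.mean(strong, axis=0)      # early period: only 10 available

slope = lambda y: np.polyfit(t, y, 1)[0]
print(f"slope, full roster:    {slope(full):.2f}")
print(f"slope, partial roster: {slope(partial):.2f}")
```

The full-roster composite responds to temperature at roughly the average of the two slopes, the partial-roster composite at the strong slope alone; calibrating the former and applying it to the latter mis-scales the reconstruction.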

Continue reading

WGIII and those unarchived comments and RE reports

At the beginning of September, I was copied on an email discussion reporting the online publication in Climatic Change of a paper by Warwick McKibbin, David Pearce, and Alison Stegman. What was notable about this paper was that it was submitted in September 2005. What was perhaps unremarkable for readers of this site was that it supported criticisms by David Henderson and Ian Castles of IPCC SRES scenarios for their choice of market exchange rates rather than PPP-adjusted GDPs. The paper suggested a serious overestimation of emissions to 2100 from this error alone.

Also unremarkable to CA readers was the fact that a paper on some of the same issues by two members of the IPCC SRES team ("PPP versus MER: Searching for Answers in a Multi-dimensional Debate", authored by Detlef van Vuuren and Knut Alfsen, both lead authors of Chapter 3 of the AR4 WGIII report) was received by Climatic Change on 15 November 2005 (over two months AFTER the MPS paper); accepted on 14 December 2005, less than a month later; published online on 4 May 2006; and published in the journal itself in the March 2006 issue. Clearly, it was suggested, Climatic Change can move with despatch when it wants to.

The email exchanges reminded me that I had reported on this site that IPCC Working Group III, which deals with these scenarios, had published the Expert Reviewers' Comments, and I had perhaps over-optimistically suggested that, if asked, the TSU in the Netherlands would release the Review Editors' reports. In fact they did not. Nor did they ever publish the Lead Authors' responses to the Expert Reviewers' Comments, as WGI eventually did with a little persuasion, and as (credit where credit is due) the Met Office, responsible for WGII, did before being asked.

A week ago I asked the Netherlands Environmental Assessment Agency where I could find the "open archive" required by Appendix A to the Principles Governing IPCC Work, as it was felt there were some questions that still needed answers and the Dutch site had disappeared. I also asked for access to the various WGIII documents under the European Directive that incorporated the Aarhus Convention into European law. The answer from Dr Meyer was that all the files had been transferred to the Potsdam Institute for Climate Impact Research (PIK), to whom he copied my requests.

Dr Matschoss, at PIK, responded today that he would raise the question at the meeting of the IPCC Bureau on 17/18 September and get back to me in 1-2 weeks. Remember that at this point I had only asked for the AR4 documents which, according to the 'Principles', should have been in an "open archive". I believe, however, that this is an important issue on which the IPCC can no longer prevaricate, and I replied with the open letter I reproduce below.

Dear Dr Matschoss,

Participation, Openness and Transparency

I agree that the meeting of the IPCC Bureau tomorrow and Friday is the right forum to discuss the points I raised with Dr Meyer, and I hope that you might present this open letter, which outlines what many observers and commentators on the IPCC process believe should happen in the fifth assessment.

From its inception the IPCC has required the Working Groups to undertake their assessments on an open and transparent basis. This is contained in the second Principle Governing IPCC Work, which has been successively reviewed and reconfirmed. It is an overarching principle, and the fact that the detailed procedures in Appendix A prescribe only a few detailed requirements for documents to be archived does not limit the generality of the second Principle.

In calling for the first Freedom of Information Conference, UN Resolution 59 of 1946 began:

“Freedom of information is a fundamental human right and is the touchstone of all the freedoms to which the United Nations is consecrated”

Since the first IPCC Assessment Report there have been dramatic increases in communication technology, which in turn have led to an ease with which information can be disseminated and shared inexpensively. At the same time, the Internet has more recently given voice to the latent public interest in areas such as climate change, and I do not believe the IPCC's approach to disclosure, as evidenced by the Working Groups in AR4, conforms to the requirements of openness and transparency as originally understood by the IPCC's founders, let alone in our modern world.

I have detailed many examples from WGI in a published paper and in a letter, sent last year to Dr Christ, which was neither answered nor acknowledged, and in which I asked for various information including the unpublished WGIII Expert Reviewers' Comments and the Review Editors' reports.

PIK appear to have a well-constructed TSU web site, but it requires a user name and password. The working group and TSU clearly should not have to allow an unlimited number of unofficial, even if expert, members of the public to comment upon the drafts and other documents. However, to be open and transparent as the Principles require, I would ask you to provide a "guest" login to allow the public to follow the assessment.

Clearly, for the IPCC to be open and transparent, timetables, instructions, intermediate drafts, Reviewers' Comments and Lead Authors' responses should be open to public inspection at the same time that the many hundreds of worldwide IPCC participants have access to them. Internet discussion groups and the media generally provide a good forum for the public to discuss the forthcoming assessment, and through the media and their political representatives the public can, if necessary, make their views felt and participate in the assessment. None of this need impede the work of the TSU. In the same way, the Lead Authors' meetings and Working Group plenary sessions should, in this 21st century, be webcast if you are to persuade the public that you are genuinely open and transparent.

Finally, I would ask that the Bureau require the Review Editors' reports to be published when they are received, and to meet the reasonable expectation of the public that they be, as Appendix A requires, a "written report". They should be written reports, as several AR4 WGII reports were, rather than the bland "sign-offs" of WGI (which were at least published) or those of WGIII, which, along with the Expert Reviewers' Comments, were not.

While I accept that the foregoing is a departure from previous Working Group practices and may be unwelcome, I would point out that it is no more than PIK should find itself obliged to require of the TSU under the Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters ("the Aarhus Convention").

Yours sincerely

David Holland

I will report back on this matter. I hope that the IPCC Bureau will recognise not only the IPCC Principles but also the European law under which WGIII must operate. If so, I hope that they will also accept that, as a sister UN body to the UNECE, the IPCC should fully adopt the provisions of the Aarhus Convention and require WGI and WGII also to provide unrestricted access to the AR5 working documents.