One reader asked whether my RCS results held up using “standard” software. There is no “standard” software for RCS; it is different from ARSTAN. Further, despite the use of Briffa’s RCS chronologies in many multiproxy studies, until the present data sets were archived there was, to my knowledge, no public example where both a measurement data set and the corresponding RCS chronology were available. The following post is technical.
Having said that, as I’ve observed before, the method as described is actually simpler than “conventional” standardization: on its face, a single negative exponential aging curve is fitted to the entire population; each observed ring width is divided by the corresponding fitted value; and the chronology value in a given year is the average of the resulting indices. This can be implemented in R with the nls function in a few lines of code. (My RCS.chronology function on file provides some variations developed in experimentation that I probably won’t carry in my code on an ongoing basis.) The formula is the usual negative exponential: RW(age) = A + B * exp(-C * age).
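For concreteness, here is a minimal sketch of the calculation in R. This is my reading of the method as described above, not Briffa’s code and not my full RCS.chronology function; the data frame layout and the nls starting values are assumptions:

# Minimal RCS sketch; assumes a data frame 'tree' with columns
#   year - calendar year of each ring
#   age  - ring age (years from pith)
#   rw   - measured ring width
rcs.simple <- function(tree) {
  # fit ONE negative exponential, RW = A + B*exp(-C*age), to the pooled population
  fit <- nls(rw ~ A + B * exp(-C * age), data = tree,
    start = list(A = min(tree$rw), B = diff(range(tree$rw)), C = 0.01))
  # divide each ring width by the fitted value at its ring age
  tree$index <- tree$rw / predict(fit, newdata = tree)
  # chronology: the average index in each calendar year
  tapply(tree$index, tree$year, mean, na.rm = TRUE)
}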
I told the reader that I’d gotten very close results, but the sudden availability of a data set for benchmarking is an opportunity that I could hardly pass up. So I’ve benchmarked my algorithm against the three Briffa data sets, with some interesting results.
The first graphic compares my RCS emulation, calculated from the archived TornFin data set, against the archived Briffa TornFin chronology. The top panel shows the archived chronology; the second panel my emulation in R; the third panel the difference. After the first couple of hundred years, the calculations are indistinguishable: the differences are negligible, less than 0.01 and often less than 0.001. This shows convincingly that, in this network, a single curve is fitted to the entire population; otherwise, the differences would be greater. Differences do arise in the first part of the series, and we know that the underlying data set covers a longer period. It seems pretty clear to me that the chronology in the first few hundred years uses cores that are not included in the measurement archive, which, to that extent, is incomplete (but is complete for this calculation after the teething period). By 0 AD, the start of the Briffa analysis period, the two versions are in sync.
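The comparison itself is trivial once the two chronologies are aligned on a common set of years; a sketch, assuming archived and emulated are vectors named by year:

# three stacked panels: archived chronology, emulation, difference
years <- as.numeric(names(archived))   # common year index
par(mfrow = c(3, 1))
plot(years, archived, type = "l", main = "Archived chronology")
plot(years, emulated, type = "l", main = "Emulation")
plot(years, emulated - archived, type = "l", main = "Difference")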
The next figure shows the same thing for Avam-Taimyr. Here the emulation is very close, but not exact: there is a little low-frequency variability in the difference that does not occur in the Tornetrask case. Given the virtually exact replication at Tornetrask, this one ought to replicate exactly as well. Maybe there are a few cores missing here and there. Dunno. The discrepancy at the start of the series is smaller than at Tornetrask, suggesting that there is less contribution from early cores not included in the measurement archive.
Next is the same graphic for Yamal. As in the previous two cases, the emulation is very good throughout most of the series, though not exact as at TornFin. Like TornFin, there is a discrepancy at the beginning, presumably due to missing cores used in the early portion of the chronology; this discrepancy has no noticeable effect after 0 AD, the start of the period under study. However, there is also a discrepancy at the end of the period, with my emulation actually being a little more HS-ish than the archived version.
This looks to me like a discrepancy of 1-3 cores at the end between the archived measurement data and the measurement data set used in the calculation of the archived CRU chronology. Does this matter? Yes and no. It’s something that one has to watch out for in sensitivity studies.
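One quick check along those lines is to jackknife the most recent cores one at a time and re-run the calculation, watching the modern end of the chronology. A sketch, re-using the rcs.simple function above (the id column identifying cores is again an assumption):

# drop each core present in the last 50 years in turn and recompute
recent <- unique(tree$id[tree$year >= max(tree$year) - 50])
for (core in recent) {
  chron <- rcs.simple(tree[tree$id != core, ])
  print(tail(chron, 10))   # inspect the modern end of the jackknifed chronology
}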