Millennial Quebec Tree Rings

In today’s post, I’m going to discuss an important new 1000-year chronology from northern treeline spruce in Quebec (Gennaretti et al 2014, PNAS here).  The chronology is interesting on multiple counts.  First, it is the first Quebec northern treeline chronology to include the medieval warm period.  Second, it provides a long overdue crosscheck against the Jacoby-D’Arrigo chronologies (including Gaspe) that have been embedded in a number of canonical reconstructions – and its results are very different.  Third, the Quebec (and Labrador) northern treeline is the treeline closest to the Baffin Island ice core and varve thickness series.  I’ve observed on several occasions that the interpretation of the Baffin Island varve thickness series (Big Round Lake) is presently inconsistent with the interpretation of the similar Hvitarvatn series in Iceland and that, in my opinion, there are serious questions about whether PAGES2K has oriented this series correctly.

Continue reading

Decomposing Paico

In today’s post, Jean S and I are going to show that the paico reconstruction, as implemented in the present algorithm, is very closely approximated by a weighted average of the proxies, in which the weights are proportional to the number of measurements.  Paico is a methodology introduced in Hanhijarvi et al 2013 (pdf here) and applied in PAGES2K (2013). It was discussed in several previous CA posts.

We are able to show this because we noticed that the contribution of each proxy to the final reconstruction can be closely estimated as half the difference between the reconstruction and a reconstruction in which that series is flipped over, one series at a time. This sounds trivial, but it isn’t: the decomposition has some surprising properties. The method would not work for algorithms that ignore knowledge of the orientation of the proxy, i.e. ones where it supposedly doesn’t “matter” whether the proxy is used upside down or not. In particular, the standard deviations of the contributions from each proxy vary by an order of magnitude, but in a way that has an interesting explanation. We presume that this decomposition technique is very familiar in other fields. The following post is the result of this joint work. Continue reading
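The flip-and-difference idea can be sketched on a toy linear reconstruction. Real paico is not a simple weighted average; the sketch below (all proxies, weights, and counts are hypothetical) only shows why the decomposition is exact for any reconstruction that is linear in the proxies:

```python
import numpy as np

rng = np.random.default_rng(0)
proxies = rng.normal(size=(5, 100))            # 5 hypothetical proxies, 100 years
counts = np.array([20., 50., 10., 80., 40.])   # hypothetical measurement counts
w = counts / counts.sum()                      # weights proportional to counts

def recon(p):
    """Toy 'reconstruction': a weighted average of the proxies."""
    return w @ p

base = recon(proxies)

i = 2                                          # flip proxy i and re-run
flipped = proxies.copy()
flipped[i] *= -1
contrib = 0.5 * (base - recon(flipped))        # half the difference

# For a linear weighted average, this exactly recovers proxy i's contribution,
# since flipping proxy i changes the reconstruction by -2 * w[i] * proxies[i].
```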

New Article on Igaliku

Shortly after the publication of PAGES2K, I pointed out that the Igaliku lake sediment proxy had been contaminated by modern agricultural runoff. The post attracted many comments.

Nick Stokes vigorously opposed the surmise that the Igaliku series had been contaminated by modern agriculture and/or that such contamination should have been taken into account by Kaufman and associates. Stokes:

I see earlier demands that selection criteria be declared for proxies. Kaufman has done that, and appears to have stuck with them. But when a spike appears, suddenly the CA throng has a thousand a posteriori reasons why Kaufman is a reprobate for not throwing it out.

or

I see no reason to disagree with the original authors, Massa et al in saying that “pollen accumulation appears to document climatic changes of the last millennia nonetheless”. The Betula/Salix counts are not contaminated.

Subsequent to my CA post, the Igaliku specialists have published a new article entitled “Lake Sediments as an Archive of Land use and Environmental Change in the Eastern Settlement, Southwestern Greenland” (abstract here) which unambiguously connected soil erosion to agriculture, not just in the modern period but in the medieval period as well, observing that modern mechanization in the 1980s had resulted in erosion at “five times” the pre-anthropogenic rate.

Palaeoenvironmental studies from continental and marine sedimentary archives have been conducted over the last four decades in the archaeologically rich Norse Eastern Settlement in Greenland. Those investigations, briefly reviewed in this paper, have improved our knowledge of the history of the Norse colonization and its associated environmental changes. Although deep lakes are numerous, their deposits have been little used in the Norse context. Lakes that meet specific lake-catchment criteria, as outlined in this paper, can sequester optimal palaeoenvironmental records, which can be highly sensitive to both climate and/or human forcing. Here we present a first synthesis of results from a well-dated 2000-year lake-sediment record from Lake Igaliku, located in the center of the Eastern Settlement and close to the Norse site Garðar. A continuous, high-resolution sedimentary record from the deepest part of the lake provides an assessment of farming-related anthropogenic change in the landscape, as well as a quantitative comparison of the environmental impact of medieval colonization (AD 985—ca. AD 1450) with that of recent sheep farming (AD 1920—present). Pollen and non-pollen palynomorphs (NPPs) indicate similar magnitudes of land clearance marked mainly by a loss of tree-birch pollen, a rise in weed taxa, as well as an increase in coprophilous fungi linked to the introduction of grazing livestock. During the two phases of agriculture, soil erosion estimated by geochemical proxies and sediment-accumulation rate exceeds the natural or background erosion rate. Between AD 1010 to AD 1180, grazing activities accelerated soil erosion up to ≈8 mm century-1, twice the natural background rate. A decrease in the rate of erosion is recorded from ca. AD 1230, indicating a progressive decline of agro-pastoral activities well before the end of the Norse occupation of the Eastern Settlement. 
This decline could be related to possible climate instabilities and may also be indirect evidence for the shift towards a more marine-based diet shown by archaeological studies. Mechanization of agriculture in the 1980s caused unprecedented soil erosion up to ≈21 mm century-1, five times the pre-anthropogenic levels. Over the same period, diatom assemblages show that the lake has become steadily more mesotrophic, contrary to the near-stable trophic conditions of the preceding millennia. These results reinforce the potential of lake-sediment studies paired with archaeological investigations to understand the relationship between climate, environment, and human societies.

I recently noticed that my criticism had been more or less conceded in McKay and Kaufman 2014, which purported to accommodate the contamination (or overprinting, as suggested by Mosher) by deleting the last two points. I was critical of their correction, arguing that it still leaves a heavily contaminated reading in 1970. (The next reading is dated circa 1910 – the series is of very low resolution, actually below the resolution standard of the study.)

It’s hard to tell whether this was intentional or not. I can see one way that they might have left in this value by accident. Had they deleted two points from the PAGES2K-2013 version, that would also have deleted the contaminated 1970 point. But the PAGES2K-2013 version had already omitted one point from the underlying NOAA version. The new McKay and Kaufman version deleted two points from the NOAA version, and thus only one point from the PAGES2K-2013 version, still leaving the contaminated 1970 reading.
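The bookkeeping is easier to see in a toy example. The year labels below are hypothetical (only the relative counting matters), and I’ve assumed for illustration that the point omitted in 2013 was the most recent one:

```python
# Hypothetical year labels for illustration only.
noaa = [1850, 1910, 1970, 1985, 2000]   # underlying NOAA archive
pages2013 = noaa[:-1]                   # PAGES2K-2013: already omitted one point
mk2014 = noaa[:-2]                      # McKay & Kaufman 2014: deleted two from NOAA

# Deleting two points from the NOAA version removes only one point from the
# 2013 version, leaving the contaminated 1970 reading in place ...
still_contaminated = 1970 in mk2014
# ... whereas deleting two points from the 2013 version would have removed it.
would_have_fixed = 1970 not in pages2013[:-2]
```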

Or, if pressed, perhaps they would argue that the most recent article only expressly referred to mechanization “in the 1980s”. However, this hardly precludes the possibility that the elevated erosion observed in the sample dated circa 1970 could similarly be attributed to mechanization earlier than the 1980s (farm mechanization obviously occurred throughout the world long before the 1980s) or to dating error.

The series should never have been used in a temperature reconstruction.

Note: Jean S and I have been doing some interesting analysis of paico and it is my present view that Igaliku does not have a large impact on the paico reconstruction, but does have a large impact on the “basic composite” reconstruction, one of the PAGES2K alternatives.

PAGES2K vs the Hanhijarvi Reconstruction

The PAGES2K (2013) Arctic reconstruction of Kaufman et al has attracted considerable attention as a non-Mannian hockey stick. However, it’s been fraught with problems since day one, including a major re-statement of results in August 2014 (McKay and Kaufman, 2014 pdf), in which Kaufman conceded (without direct acknowledgement) Climate Audit criticism that their results had been impacted by the use of contaminated data and upside-down data.  But there’s a lot more.

In March 2013, almost exactly contemporaneous with PAGES2K, Hanhijarvi et al (pdf here), the originators of the paico method, published their own Arctic reconstruction, which has undeservedly received almost no publicity.  (In this post, I will use “PAGES2K” to refer to the PAGES2K Arctic reconstruction; the full PAGES2K study includes other areas, including Gergis’ Australian reconstruction.)  Unlike PAGES2K, its medieval reconstruction has higher values than its modern reconstruction.  Because its methodology matches the PAGES2K methodology, the difference necessarily arises from the proxies, not from the method.

Nor is the issue merely one of “regional” coverage: although Hanhijarvi et al’s Arctic reconstruction is based on North Atlantic proxies, it would be puzzling even as a “regional” result.  These proxies form a very large subset of the PAGES2K Arctic data (27 of 59 series, with no other data used).  With such a large subset, one can only obtain the PAGES2K Arctic results if there is a superstick in the rest of the data (the non-H13 proxies).  Taken at face value as a regional result, specialists would have to explain the physics of a medieval warm period in the North Atlantic concurrent with extreme cold in the rest of the Arctic.

But before attempting such a complicated solution, it is important to note that Kaufman’s proxies are fraught with defects. Kaufman has already acknowledged that one of his supersticks (Igaliku) was contaminated by modern agriculture, and that another non-H13 series (Hvitarvatn) was used upside down. Several series, thought to be temperature proxies as recently as 2013, were removed in August as no longer “temperature proxies”.  For inexplicable reasons, Kaufman failed to remove all the contamination from the Igaliku series, and his inversion of the Hvitarvatn series points to major inconsistencies with other series.  Further, although Kaufman has acknowledged multiple errors in the PAGES2K Arctic reconstruction, he has not issued a corrigendum, thereby permitting the erroneous series to continue in circulation, while, oddly, thus far not providing a digital version of the amended reconstruction.

Continue reading

PAGES2K: More Upside Down?

Does it matter whether proxies are used upside-down or not?

Maybe not in Mann-world (where, in response to our criticism at PNAS, Mann claimed that it was impossible for him to use series upside down).  But, unlike Mann, Darrell Kaufman acknowledges responsibility for ensuring that proxies are used right side up. Unfortunately, he and the PAGES2K authors don’t seem to be very diligent in doing so.

Shortly after the release of PAGES2K, I observed that they used both Hvitarvatn and Quelccaya upside down (the latter on Neukom’s watch). I also observed that correct orientation of Hvitarvatn ought to have a knock-on impact on Big Round Lake, which matched Hvitarvatn about as closely as two distinct proxies could be expected to. Kaufman has already corrected the upside-down Hvitarvatn; Big Round Lake should be in play as well. This inconsistency is something that ought to have been “assessed” in an assessment report, but wasn’t.

In a previous post earlier today, I questioned whether PAGES2K ought to have inverted the orientation of the Okshola speleothem O18 series since the Holocene trend (as inverted) is now opposite to the Holocene trend of the high-quality Renland ice core O18 series. The Okshola series is the only speleothem O18 series in the PAGES2K network: on other occasions, I’ve questioned the appropriateness of using “singleton” proxies in an assessment report. The fact that serious questions can arise over even the orientation of a series is eloquent support for this policy.

In the present post, I’m going to look at another singleton O18 series in PAGES2K – the single ocean sediment O18 series in the network (P1003), where once again, I seriously question whether PAGES2K have used the series in the correct orientation.

In the diagram below, I’ve shown O18 values (inverted) of sediments from an Arctic ocean core, showing the contrast between the LGM and the Holocene Optimum: this is a loud contrast which ought to show which way is up. While ice core O18 series have more negative values in glacial periods, the opposite happens with ocean sediment O18: O18 values in Arctic ocean sediments became less positive from the LGM to the Holocene (from ~4 to ~3 ‰). This is true over dozens of cores. The reason is logical enough: the continental glaciers of the ice ages lock up ice with depleted O18 values, leaving the oceans correspondingly enriched in O18.
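The orientation check being described can be sketched in a few lines. The numbers below are invented purely to mimic the stated ~4‰ glacial versus ~3‰ Holocene contrast:

```python
# Hypothetical d18O values (permil) for an Arctic ocean sediment core.
lgm = [4.1, 4.0, 4.2, 3.9]        # Last Glacial Maximum: heavier (higher) d18O
holocene = [3.0, 3.1, 2.9, 3.2]   # Holocene Optimum: lighter (lower) d18O

mean = lambda v: sum(v) / len(v)

# In this archive, warm periods have LOWER d18O, so a temperature
# interpretation must flip the sign of the raw series.
sign = -1 if mean(lgm) > mean(holocene) else +1
oriented = [sign * v for v in holocene]
```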

P1003_long

Figure 1. Top panel – O18 for PS1243-1 (from pangaea.de).  Bottom panel – long version P1003 from Sundqvist et al 2014 archive. (I haven’t seen a technical publication.)

In the next figure, I’ve shown the two-millennium section of P1003 used by PAGES2K in two mirror orientations. In the top panel, I’ve shown the series in PAGES2K uninverted orientation, while in the bottom panel, I’ve shown the series in the inverted orientation that is consistent with the observed relationship between values in the LGM and Holocene Optimum.

P1003_modern

Figure 2.  P1003 O18 series (PAGES2K) version. top – in PAGES2K orientation; bottom – inverse orientation to match LGM-Holocene Optimum orientation.

Had the series been oriented so that elevated O18 values correspond to glacial periods, it would also have resulted in a Little Ice Age colder than both the medieval warm period and the modern warm period – a phenomenon not disputed even by the Team for the Arctic – and a somewhat declining trend through the two most recent millennia, reducing the inconsistency of this proxy with other series.  As a clincher, Kristensen et al 2004 (Paleoceanography), a technical publication of P1-003MC, used the orientation shown in the bottom panel – opposite to PAGES2K – as shown in the excerpt below (the scale is different, but if you look closely, you can see the match):

P1003_kristensen-2004


Okshola: which way is up?

The recent revisions to PAGES2K included a dramatic flipping of the Hvitarvatn varve series to the opposite of the orientation used in the 2013 version relied on in IPCC AR5. Together with other changes (such as a partial – but still incomplete – removal of contaminated sediments from the Igaliku series), this unwound most of the previous difference between the medieval and modern periods.  While Kaufman and coauthors are to be commended for actually fixing errors, contamination still remains in the Igaliku series. In addition, the revised Hvitarvatn orientation is now inconsistent with the Big Round Lake (Baffin Island) varve series, which is now almost a mirror image.

The revised Arctic2K removed three series that are no longer viewed as temperature proxies. Each of these deserves to be examined as to whether the removal is simply post hoc.  One of them was an O18 series from Kepler Lake, Alaska; I’ll discuss it in a separate post. Obviously, O18 is a workhorse proxy and it is disquieting that an O18 series can be removed post hoc with no more explanation than the following footnote:

Omitted (not temperature sensitive). 

Removing some O18 series, while keeping other series of the same proxy class that go the “right way”, obviously introduces potential bias, since there is a real possibility that some of the “right way” examples are overshooting. Continue reading

Revisions to Pages2K Arctic

Kaufman and the PAGES2K Arctic2K group recently published a series of major corrections to their database, some of which directly respond to Climate Audit criticism. The resulting reconstruction has been substantially revised, with markedly increased medieval warmth. His correction of the contaminated Igaliku series is unfortunately incomplete, and other defects remain. Continue reading

Sliming by Stokes

Stokes’ most recent post, entitled “What Steve McIntyre Won’t Show You Now”, contains a series of lies and fantasies, falsely claiming that I’ve been withholding MM05-EE analyses from readers in my recent fisking of ClimateBaller doctrines, falsely claiming that I’ve “said very little about this recon [MM05-EE] since it was published” and speculating that I’ve been concealing these results because they were “inconvenient”.

It’s hard to keep up with ClimateBaller fantasies and demoralizing to respond to such dreck. Continue reading

ClimateBallers and the MM05 Simulations

ClimateBallers are extremely suspicious of the MM05 simulation methodology, to say the least.  A recurrent contention is that we should have removed the climate “signal” from the NOAMER tree ring network before calculating parameters for our red noise simulations, though it is not clear how you would do this when you not only don’t know the true “signal”, but its estimation is the purpose of the study.

In the actual NOAMER network, because of the dramatic inconsistency between the 20 stripbark chronologies and the 50 other chronologies, it is impossible to obtain a network of residuals that are low-order red noise anyway – a fundamental problem that specialists ignore. Because ClimateBallers are concerned that our simulations might have secretly embedded HS shapes into our simulated networks, I’ve done fresh calculations demonstrating that the networks really do contain “trendless red noise” as advertised.  Finally, if ClimateBallers continue to seek a “talking point” that “McIntyre goofed” because of the MM05 estimation of red noise parameters from tree ring networks, an objective discussed at the ATTP blog, they should, in fairness, first direct their disapproval at Mann himself, whose “Preisendorfer” calculations, published at Realclimate in early December 2004, also estimated red noise parameters from tree ring networks – though ClimateBallers have thus far only objected to the methodology when I used it. Continue reading

t-Statistics and the “Hockey Stick Index”

In MM05,  we quantified the “hockeystick-ness” of a series as the difference between the 1902-1980 mean (the “short centering” period of Mannian principal components) and the overall mean (1400-1980), divided by the standard deviation – a measure that we termed its “Hockey Stick Index (HSI)”.  The histograms of its distribution for 10,000 simulated networks (shown in MM05 Figure 2) were the primary diagnostic in MM05 for the bias in Mannian principal components.  In our opinion, these histograms established the defectiveness of Mannian principal components beyond any cavil and our attention therefore turned to its impact, where we observed that Mannian principal components misled Mann into thinking that the Graybill stripbark chronologies were the “dominant pattern of variance”, when they were actually a quirky and controversial set of proxies.
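The definition can be restated as a few lines of code. This is a minimal sketch of the HSI as defined above; the toy series is mine, purely to exercise the definition, and is not an MM05 network:

```python
import numpy as np

def hockey_stick_index(x, years, blade=(1902, 1980)):
    """MM05 HSI: (blade-period mean minus overall mean) / overall std deviation."""
    x = np.asarray(x, dtype=float)
    in_blade = (years >= blade[0]) & (years <= blade[1])
    return (x[in_blade].mean() - x.mean()) / x.std(ddof=1)

years = np.arange(1400, 1981)                     # 581 annual values
series = np.concatenate([np.zeros(502),           # flat "shaft", 1400-1901
                         np.linspace(0, 2, 79)])  # rising "blade", 1902-1980
hsi = hockey_stick_index(series, years)           # pronounced HS shape -> HSI > 2
```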

Nick Stokes recently challenged this measure as merely an “MM05 creation” as follows:

The HS index isn’t a natural law. It’s a M&M creation, and if I did re-orient, it would then fall to me to explain the index and what I was doing.

While we would be more than happy to be credited for the simple concept of dividing a difference of means by a standard deviation, such techniques have been used for many years in the calculation of t-statistics, for example the t-statistic for the difference of means.  As soon as I wrote down this rebuttal, I realized that there was a blindingly obvious re-statement of what we were measuring through the MM05 “Hockey Stick Index”: the t-statistic for the difference in mean between the blade and the shaft.  It turned out that there is a monotonic relationship between the Hockey Stick Index and the t-statistic, and that the MM05 histogram results could be re-stated in terms of the t-statistic for the difference in means.

In particular, we could show that Mannian principal components produced series which had a “statistically significant” difference between the blade (1902-1980) and the shaft (1400-1901) “nearly always” (97% in 10% tails and 85% in 5% tails).  Perhaps I ought to have thought of this interpretation earlier, but, in my defence, many experienced and competent people have examined this material without thinking of the point either. So the time spent on ClimateBallers has not been totally wasted.


t-Statistic for the Difference of Means 

The t-statistic for the difference in means between the blade (1902-1980) and the shaft (1400-1901) is likewise calculated as the difference in means divided by a standard error; a common formula computes this from the variances of the two subperiods, weighted by the degrees of freedom of each subperiod.  An expression tailored to the specific case (an annual series from 1400 to 1980) is shown below:

se = sqrt( (78 * sd(window(x, start=1902))^2 + 501 * sd(window(x, end=1901))^2) / (581 - 2) )   # df: 78 = 79 blade years - 1; 501 = 502 shaft years - 1

For the purposes of today’s analysis, I haven’t allowed for autocorrelation in the calculation of the t-statistic (allowing for autocorrelation will reduce the effective degrees of freedom and accentuate results, rather than mitigate them.)
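For readers who want to experiment, the expression translates directly into Python. This is a sketch mirroring the printed R formula (including its use of the pooled standard deviation as denominator); the toy series and seed are hypothetical:

```python
import numpy as np

def t_diff_means(x, years):
    """Difference of the 1902-1980 and 1400-1901 means, divided by the pooled
    standard deviation, mirroring the R expression in the text."""
    blade = x[years >= 1902]
    shaft = x[years <= 1901]
    n1, n2 = len(blade), len(shaft)   # 79 and 502 for a 1400-1980 annual series
    se = np.sqrt(((n1 - 1) * blade.std(ddof=1) ** 2 +
                  (n2 - 1) * shaft.std(ddof=1) ** 2) / (n1 + n2 - 2))
    return (blade.mean() - shaft.mean()) / se

years = np.arange(1400, 1981)
rng = np.random.default_rng(0)
x = rng.normal(size=years.size)       # white noise "proxy"
x[years >= 1902] += 1.0               # toy blade: shift the 1902-1980 mean up
t = t_diff_means(x, years)            # roughly 1, since the shift is ~1 pooled sd
```

Note that the statistic is invariant to adding a constant to the whole series, since neither the difference of means nor the subperiod standard deviations change.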

Figure 1 below shows t-statistic histograms corresponding to the MM05 Figure 2 HSI histograms, but in a somewhat modified graphical style: I’ve overlaid the two histograms, showing centered PC1s in light grey and Mannian PC1s in medium grey. (Note that I’ve provided a larger version for easier reading – interested readers can click on the figure to embiggen.)  The histograms are from a 1000-member subset of the MM05 networks and are accordingly a little more ragged.  I’ve also plotted a curve showing the t-distribution for df=180, which was calculated from one of the realizations. This curve is very insensitive to changes in degrees of freedom in this range, so I haven’t experimented further.

The separation of the distributions for Mannian and centered PC1s is equivalent to the separation shown in the MM05 Figure 2 histograms, but re-statement using t-statistics permits more precise conclusions.

tstat_histogram

Figure 1. Histograms of t-statistic for difference of 1902-1980 mean and 1400-1901 means showing centered PC1s (light grey) and Mannian PC1s (medium grey). The curve is a t-distribution (df=180).   The red lines at +- 1.65 and +-1.96 correspond to 90% and 95% two-sided t-tests. 

The distribution of the simulated t-statistic for centered PC1s is similar to a high-df t-distribution, though it appears to be somewhat overweighted near zero and underweighted in the tails: there are approximately half the values in the 5% and 10% tails that one would expect from the t-distribution.  At present, I haven’t thought through the potential implications.

The distribution of the simulated t-statistic for Mannian PC1s bears no relationship to the expected t-distribution.  Values are concentrated in the tails: 85% of t-statistics for Mannian PC1s are in the 5% tails (nearly 97% in the 10% tails).  This is what was shown in MM05, and it’s hard to understand why ClimateBallers contest it.

What This Means

The result is that Mannian PC1s “nearly always” (97% in 10% tails and 85% in 5% tails) produce series which have a “statistically significant” difference between the blade (1902-1980) and the shaft (1400-1901).   If you are trying to do a meaningful analysis of whether there actually is a statistically meaningful difference between the 20th century and prior periods, it is impossible to contemplate a worse method and you have to go about it a different way.   Fabrications by ClimateBallers, such as false claims that MM05 Figure 2 histograms were calculated from only 100 cherrypicked series, do not change this fact.

The comparison of the Mannian PC histogram to a conventional t-distribution curve also reinforces the degree to which the Mannian PCs are in the extreme tails of the t-distribution.  As noted above (and see Appendix), the t-statistic is monotonically related to the HSI: rather than discussing the median HSI of 1.62, we can observe that the median t-statistic for Mannian PC1s is 2.44, a value at the 99.2 percentile of the t-distribution.  Even median Mannian PC1s are far into the right tail.  The top-percentile Mannian PC1s illustrated in Wegman’s Figure 4.4 correspond to a t-statistic of approximately 3.49, at the 99.97 percentile of the t-distribution.  While there is some difference in visual HS-ness, contrary to Stokes, both median and top-percentile Mannian PC1s have a very strong HS appearance.

Stokes is presently attempting to argue that the bias of a Mannian PC1 is mitigated in the representation of the network by accommodation in lower-order PCs. However, Stokes has a poor grasp of the method as a whole and almost zero grasp of the properties of the proxies.  When the biased PC method is combined with regression against 20th century trends, the spurious Mannian PC1s will be highly weighted.  In our 2005 simulations of RE statistics (MM05-GRL, amended in our Reply to Huybers, which contained new material), we showed that Mannian PC1s combined with networks of white noise yielded RE distributions completely different from those used in MBH98 and WA benchmarking.  (WA acknowledged the problem, but shut their eyes.)

Nor, as I’ve repeatedly stated, did we argue that the MBH hockeystick arose from red noise: we observed that the powerful HS-data mining algorithm (Mannian principal components) placed the Graybill stripbark chronologies into the PC1 and misled Mann into thinking that they were the “dominant pattern of variance”.  If they are not the “dominant pattern of variance” and merely a problematic lower order PC, then the premise of MBH98 no longer holds.


Appendix

Figure 2 below plots the t-statistic for the difference between the means of the blade (1902-1980) and the shaft (1400-1901) against the HSI as defined in MM05-GRL.  There is a monotonic, non-linear relationship between HSI and t-statistic, with the value of the t-statistic closely approximated by a simple quadratic expression in HSI.  The diagonal lines show where the two values are equal.  The HSI and t-statistic are approximately equal for HSI with absolute values less than ~0.7.  Values in this range are very common for centered PC1s but non-existent for Mannian PC1s, a point made in MM05.

The vertical red lines show HSI values of 1 and 1.5 (both signs); the horizontal dotted lines show t-values of 1.65 and 1.96, both common benchmarks in statistical testing (the 95% one-sided, and the 95% two-sided / 97.5% one-sided, benchmarks respectively).  HSI values exceeding 1.5 have t-values well in excess of 2.
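The monotonic relationship is easy to check by simulation. The sketch below is my own illustration (white noise series and the pooled-sd t-statistic described earlier, not MM05 code): for fixed subperiod lengths, sorting series by HSI also sorts them by t-statistic.

```python
import numpy as np

years = np.arange(1400, 1981)
blade = years >= 1902

def hsi(x):
    """MM05 Hockey Stick Index: (blade mean - overall mean) / overall sd."""
    return (x[blade].mean() - x.mean()) / x.std(ddof=1)

def tstat(x):
    """Difference of blade and shaft means over the pooled standard deviation."""
    b, s = x[blade], x[~blade]
    n1, n2 = b.size, s.size
    pooled = np.sqrt(((n1 - 1) * b.std(ddof=1) ** 2 +
                      (n2 - 1) * s.std(ddof=1) ** 2) / (n1 + n2 - 2))
    return (b.mean() - s.mean()) / pooled

rng = np.random.default_rng(0)
pairs = sorted((hsi(x), tstat(x)) for x in rng.normal(size=(200, years.size)))
t_by_hsi = [t for _, t in pairs]
# Sorting by HSI leaves the t-statistics sorted too: the relation is monotonic.
monotone = all(a <= b for a, b in zip(t_by_hsi, t_by_hsi[1:]))
```

The monotonicity follows from the variance decomposition: the overall variance equals the pooled within-period variance plus a between-period term proportional to the squared difference of means, so HSI is a strictly increasing function of the t-statistic.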


tstat_vs_HSI

Figure 2.  Plot of the t-statistic for the difference in means of the blade (1902-1980) and the shaft (1400-1901) against the HSI as defined in MM05-GRL, for centered PC1s (left) and Mannian PC1s (right), showing a monotonic, non-linear relationship.  The two curves have exactly the same trajectory when overplotted, though HSI values for centered PCs typically have absolute value less than about 0.7, whereas values for Mannian PCs are bounded away from zero.

