PAGES2K vs the Hanhijarvi Reconstruction

The PAGES2K (2013) Arctic reconstruction of Kaufman et al has attracted considerable attention as a non-Mannian hockey stick. However, it has been fraught with problems since day one, including a major re-statement of results in August 2014 (McKay and Kaufman, 2014 pdf), in which Kaufman conceded (without directly acknowledging it) the Climate Audit criticism that their results had been impacted by the use of contaminated data and upside-down data. But there's a lot more.

In March 2013, almost exactly contemporaneous with PAGES2K, Hanhijarvi et al (pdf here), the originators of the paico method, published their own Arctic reconstruction, which has undeservedly received almost no publicity. (In this post, I will use "PAGES2K" to refer to the PAGES2K Arctic reconstruction; the full PAGES2K study includes other areas, including Gergis' Australian reconstruction.) Unlike PAGES2K, its medieval reconstruction has higher values than its modern reconstruction. Because its methodology matches the PAGES2K methodology, the difference necessarily arises from the proxies, not from the method.

Nor is the issue merely one of "regional" coverage: although Hanhijarvi et al's Arctic reconstruction is based on North Atlantic proxies, it would be puzzling even as a "regional" result. These proxies form a very large subset of the PAGES2K Arctic data (27 of 59 series; Hanhijarvi et al used no other data). With such a large subset, one can only obtain the PAGES2K Arctic results if there is a superstick in the rest of the data (the non-H13 proxies). And as a regional result, taken at face value, it would require specialists to explain the physics of a medieval warm period in the North Atlantic concurrent with extreme cold in the rest of the Arctic.

But before attempting such a complicated solution, it is important to note that Kaufman's proxies are fraught with defects. Kaufman has already acknowledged that one of his supersticks (Igaliku) was contaminated by modern agriculture and that another non-H13 series (Hvitarvatn) was used upside down. Several series, thought to be temperature proxies as recently as 2013, were removed in August as no longer being "temperature proxies". For inexplicable reasons, Kaufman failed to remove all the contamination from the Igaliku series, and his inversion of the Hvitarvatn series points to major inconsistencies with other series. Further, although Kaufman has acknowledged multiple errors in the PAGES2K Arctic reconstruction, he has not issued a corrigendum, thereby permitting the erroneous series to remain in circulation, while, oddly, thus far not providing a digital version of the amended reconstruction.


PAGES2K: More Upside Down?

Does it matter whether proxies are used upside-down or not?

Maybe not in Mann-world (where, in response to our criticism at PNAS, Mann claimed that it was impossible for him to use series upside-down).  But, unlike Mann, Darrell Kaufman acknowledges responsibility for using proxies upside-up. Unfortunately, he and the PAGES2K authors don’t seem to be very diligent in ensuring that they do so.

Shortly after the release of PAGES2K, I observed that they used both Hvitarvatn and Quelccaya upside-down (the latter on Neukom's watch). I also observed that correct orientation of Hvitarvatn ought to have a knock-on impact on Big Round Lake, which matched Hvitarvatn about as closely as two distinct proxies could be expected to. Thus far, Kaufman has corrected only the upside-down Hvitarvatn; Big Round Lake should be in play as well. This inconsistency is something that ought to have been "assessed" in an assessment report, but wasn't.

In a previous post earlier today, I questioned whether PAGES2K ought to have inverted the orientation of the Okshola speleothem O18 series since the Holocene trend (as inverted) is now opposite to the Holocene trend of the high-quality Renland ice core O18 series. The Okshola series is the only speleothem O18 series in the PAGES2K network: on other occasions, I’ve questioned the appropriateness of using “singleton” proxies in an assessment report. The fact that serious questions can arise over even the orientation of a series is eloquent support for this policy.

In the present post, I’m going to look at another singleton O18 series in PAGES2K – the single ocean sediment O18 series in the network (P1003), where once again, I seriously question whether PAGES2K have used the series in the correct orientation.

In the diagram below, I've shown O18 values (inverted) of sediments from an Arctic ocean core, showing the contrast between the LGM and the Holocene Optimum: this is a loud contrast which ought to show which way is up. While ice core O18 series have more negative values in glacial periods, the opposite happens with ocean sediment O18: O18 values in Arctic ocean sediments declined from the LGM (~4‰) to the Holocene (~3‰). This is true over dozens of cores. The reason is logical enough: the continental glaciers of ice ages lock up ice with depleted O18 values, and this leaves the oceans less depleted in O18.


Figure 1. Top panel – O18 for core PS1243-1. Bottom panel – long version of P1003 from the Sundqvist et al 2014 archive. (I haven't seen a technical publication.)

In the next figure, I've shown the two-millennium section of P1003 used by PAGES2K in two mirror orientations. In the top panel, I've shown the series in the PAGES2K (uninverted) orientation, while in the bottom panel, I've shown it in the inverted orientation that is consistent with the observed relationship between LGM and Holocene Optimum values.


Figure 2. P1003 O18 series (PAGES2K version). Top – PAGES2K orientation; bottom – inverse orientation, matching the LGM–Holocene Optimum relationship.

Had the series been oriented so that elevated O18 values correspond to glacial (cold) periods, it would also have shown a Little Ice Age colder than both the medieval warm period and the modern warm period – a phenomenon that is not disputed even by the Team for the Arctic – and a somewhat declining trend through the two most recent millennia, reducing the inconsistency of this proxy with other series. As a clincher, Kristensen et al 2004 (Paleoceanography), a technical publication on core P1-003MC, used the orientation shown in the bottom panel – opposite to PAGES2K – as shown in the excerpt below (the scale is different, but if you look closely, you can see the match):




Okshola: which way is up?

The recent revisions to PAGES2K included a dramatic flipping of the Hvitarvatn varve series to the opposite of the orientation used in the 2013 version relied on in IPCC AR5. Together with other changes (such as a partial – but still incomplete – removal of contaminated sediments from the Igaliku series), this unwound most of the previous difference between medieval and modern periods. While Kaufman and coauthors are to be commended for actually fixing errors, contamination still remains in the Igaliku series. In addition, the revised Hvitarvatn orientation is now inconsistent with the Big Round Lake (Baffin Island) varve series, which is now almost its mirror image.

The revised Arctic2K removed three series no longer viewed as temperature proxies. Each of these deserves scrutiny as to whether the removal is simply post hoc. One was an O18 series from Kepler Lake, Alaska, which I'll discuss in a separate post. O18 is a workhorse proxy, and it is disquieting that an O18 series can be removed post hoc with nothing more than the following footnote:

Omitted (not temperature sensitive). 

Removing some O18 series, while keeping other series of the same proxy class that go the "right way", obviously introduces potential bias, since there is an obvious possibility that some of the "right way" examples are overshooting.

Revisions to Pages2K Arctic

Kaufman and the PAGES2K Arctic2K group recently published a series of major corrections to their database, some of which directly respond to Climate Audit criticism. The resulting reconstruction has been substantially revised, with markedly increased medieval warmth. His correction of the contaminated Igaliku series is unfortunately incomplete, and other defects remain.

Sliming by Stokes

Stokes' most recent post, entitled "What Steve McIntyre Won't Show You Now", contains a series of lies and fantasies: falsely claiming that I've been withholding MM05-EE analyses from readers in my recent fisking of ClimateBaller doctrines, falsely claiming that I've "said very little about this recon [MM05-EE] since it was published", and speculating that I've been concealing these results because they were "inconvenient".

It's hard to keep up with ClimateBaller fantasies and demoralizing to respond to such dreck.

ClimateBallers and the MM05 Simulations

ClimateBallers are extremely suspicious of the MM05 simulation methodology, to say the least. A recurrent contention is that we should have removed the climate "signal" from the NOAMER tree ring network before calculating parameters for our red noise simulations, though it is not clear how one would do this when the true "signal" is not only unknown, but its estimation is the very purpose of the study.

In the actual NOAMER network, because of the dramatic inconsistency between the 20 stripbark chronologies and the 50 other chronologies, it is impossible to obtain a network of residuals that are low-order red noise anyway – a fundamental problem that specialists ignore. Because ClimateBallers are concerned that our simulations might have secretly embedded HS shapes into the simulated networks, I've done fresh calculations demonstrating that the networks really do contain "trendless red noise" as advertised. Finally, if ClimateBallers continue to seek a "talking point" that "McIntyre goofed" because MM05 estimated red noise parameters from tree ring networks – an objective discussed at the ATTP blog – they should, in fairness, first direct their disapproval at Mann himself, whose "Preisendorfer" calculations, published at Realclimate in early December 2004, also estimated red noise parameters from tree ring networks. ClimateBallers have thus far objected to the methodology only when I used it.
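For concreteness, here is a minimal sketch of that simulation logic in Python. This is not the MM05 code (the MM05 scripts are in R and model the full persistence structure of each chronology, not just a lag-one coefficient); the function names are mine, and an AR(1) model stands in for the more elaborate red-noise model:

```python
import numpy as np

def ar1_coefficient(x):
    """Lag-one autocorrelation: the persistence parameter of a fitted AR(1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def simulate_red_noise(phi, n, sigma=1.0, seed=None):
    """Trendless AR(1) red noise: x[t] = phi*x[t-1] + e[t], zero-mean innovations."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Estimate persistence from a (synthetic) "chronology", then build a
# 70-series network of trendless red noise with the same persistence.
chronology = simulate_red_noise(0.5, 581, seed=1)
phi_hat = ar1_coefficient(chronology)
network = np.column_stack([simulate_red_noise(phi_hat, 581, seed=k) for k in range(70)])
```

The point of contention is only the first step – estimating the noise parameters from the observed chronologies – which, as noted above, is exactly what Mann's own "Preisendorfer" calculations did.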

t-Statistics and the “Hockey Stick Index”

In MM05, we quantified the "hockeystick-ness" of a series as the difference between the 1902-1980 mean (the "short centering" period of Mannian principal components) and the overall mean (1400-1980), divided by the standard deviation – a measure that we termed its "Hockey Stick Index (HSI)". The histograms of its distribution for 10,000 simulated networks (shown in MM05 Figure 2) were the primary diagnostic in MM05 for the bias in Mannian principal components. In our opinion, these histograms established the defectiveness of Mannian principal components beyond any cavil, and our attention therefore turned to their impact: we observed that Mannian principal components misled Mann into thinking that the Graybill stripbark chronologies were the "dominant pattern of variance", when they were actually a quirky and controversial set of proxies.
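In code, the HSI is a one-liner. A Python sketch (MM05's own scripts are in R; the function name is mine), for an annual series running 1400-1980:

```python
import numpy as np

def hockey_stick_index(x, start_year=1400, blade_start=1902):
    """HSI: (mean of the 1902-1980 'blade' minus the overall 1400-1980 mean),
    divided by the overall standard deviation."""
    x = np.asarray(x, dtype=float)
    blade = x[blade_start - start_year:]          # the 1902-1980 values
    return (blade.mean() - x.mean()) / x.std()

# A flat 'shaft' with an elevated 'blade' yields a large positive HSI.
series = np.concatenate([np.zeros(502), np.ones(79)])   # 1400-1901, 1902-1980
print(round(hockey_stick_index(series), 2))             # → 2.52
```

A sign convention worth noting: a series bending down in the 20th century simply gets a negative HSI of the same magnitude, which is why orientation matters in the panelplot disputes discussed below.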

Nick Stokes recently challenged this measure as merely an “MM05 creation” as follows:

The HS index isn’t a natural law. It’s a M&M creation, and if I did re-orient, it would then fall to me to explain the index and what I was doing.

While we would be more than happy to be credited for the simple concept of dividing a difference of means by a standard deviation, such techniques have been standard in statistics for many years – for example, in the t-statistic for a difference of means. As soon as I wrote down this rebuttal, I realized that there was a blindingly obvious re-statement of what we were measuring through the MM05 "Hockey Stick Index": the t-statistic for the difference in means between the blade and the shaft. It turned out that there is a monotonic relationship between the Hockey Stick Index and the t-statistic, and that the MM05 histogram results could be re-stated in terms of the t-statistic for the difference in means.

In particular, we could show that Mannian principal components "nearly always" (97% in the 10% tails and 85% in the 5% tails) produced series with a "statistically significant" difference between the blade (1902-1980) and the shaft (1400-1901). Perhaps I ought to have thought of this interpretation earlier, but, in my defence, many experienced and competent people have examined this material without thinking of the point either. So the time spent on ClimateBallers has not been totally wasted.


t-Statistic for the Difference of Means 

The t-statistic for the difference in means between the blade (1902-1980) and the shaft (1400-1901) is likewise calculated as the difference in means divided by a standard error; a common formula pools the variances of the two subperiods, weighted by their degrees of freedom. An expression tailored to the specific case is shown below:

# pooled over the blade (79 values, 78 df) and the shaft (502 values, 501 df)
se <- sqrt( (78*sd(window(x, start=1902))^2 + 501*sd(window(x, end=1901))^2 ) / (581-2) )

For the purposes of today’s analysis, I haven’t allowed for autocorrelation in the calculation of the t-statistic (allowing for autocorrelation will reduce the effective degrees of freedom and accentuate results, rather than mitigate them.)
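The same calculation translated into Python (the snippet above is R, where `sd` is the sample standard deviation, hence `ddof=1` below; the function name is mine):

```python
import numpy as np

def blade_shaft_t(x):
    """t-statistic for the 1902-1980 'blade' vs the 1400-1901 'shaft' mean,
    using the pooled standard error from the R expression above."""
    x = np.asarray(x, dtype=float)
    shaft, blade = x[:502], x[502:]               # 1400-1901 and 1902-1980
    se = np.sqrt((78 * blade.std(ddof=1)**2 + 501 * shaft.std(ddof=1)**2) / (581 - 2))
    return float((blade.mean() - shaft.mean()) / se)

# Pure white noise: the blade/shaft difference should be small.
rng = np.random.default_rng(0)
t = blade_shaft_t(rng.normal(size=581))
```

For a white-noise series the statistic stays near zero; a step up in the blade pushes it far into the right tail, which is the behavior the histograms below quantify.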

Figure 1 below shows t-statistic histograms corresponding to the MM05 Figure 2 HSI histograms, but in a somewhat modified graphical style: I've overlaid the two histograms, showing centered PC1s in light grey and Mannian PC1s in medium grey. (Note that I've provided a larger version for easier reading – interested readers can click on the figure to embiggen.) The histograms are from a 1000-member subset of the MM05 networks and are therefore a little more ragged. I've also plotted a curve showing the t-distribution for df=180, which was calculated from one of the realizations. This curve is very insensitive to changes in degrees of freedom in this range, so I haven't experimented further.

The separation of the distributions for Mannian and centered PC1s is equivalent to the separation shown in MM05 Figure 2, but re-statement using t-statistics permits more precise conclusions.


Figure 1. Histograms of the t-statistic for the difference between the 1902-1980 and 1400-1901 means, showing centered PC1s (light grey) and Mannian PC1s (medium grey). The curve is a t-distribution (df=180). The red lines at ±1.65 and ±1.96 correspond to 90% and 95% two-sided t-tests.

The distribution of the simulated t-statistic for centered PC1s is similar to a high-df t-distribution, though it appears to be somewhat overweighted near zero and underweighted in the tails: there are approximately half the values in the 5% and 10% tails that one would expect from the t-distribution. At present, I haven't thought through the potential implications.

The distribution of the simulated t-statistic for Mannian PC1s bears no relationship to the expected t-distribution. Values are concentrated in the tails: 85% of t-statistics for Mannian PC1s are in the 5% tails (nearly 97% in the 10% tails). This is what was shown in MM05, and it's hard to understand why ClimateBallers contest it.

What This Means

The result is that Mannian PC1s "nearly always" (97% in the 10% tails and 85% in the 5% tails) produce series which have a "statistically significant" difference between the blade (1902-1980) and the shaft (1400-1901). If you are trying to do a meaningful analysis of whether there actually is a statistically meaningful difference between the 20th century and prior periods, it is impossible to contemplate a worse method; you have to go about it a different way. Fabrications by ClimateBallers, such as the false claim that the MM05 Figure 2 histograms were calculated from only 100 cherrypicked series, do not change this fact.

The comparison of the Mannian PC histogram to a conventional t-distribution curve also reinforces the degree to which the Mannian PCs are in the extreme tails of the t-distribution. As noted above (and see Appendix), the t-statistic is monotonically related to the HSI: rather than discussing the median HSI of 1.62, we can observe that the median t-statistic for Mannian PC1s is 2.44, a value at the 99.2 percentile of the t-distribution. Even median Mannian PC1s are far into the right tail. The top-percentile Mannian PC1s illustrated in Wegman's Figure 4.4 correspond to a t-statistic of approximately 3.49, at the 99.97 percentile. While there is some difference in visual HS-ness, contrary to Stokes, both median and top-percentile Mannian PC1s have a very strong HS appearance.

Stokes is presently attempting to argue that the bias of a Mannian PC1 is mitigated, in the representation of the network as a whole, by accommodation in lower-order PCs. However, Stokes has a poor grasp of the method as a whole and almost zero grasp of the properties of the proxies. When the biased PC method is combined with regression against 20th century trends, the spurious Mannian PC1s will be highly weighted. In our 2005 simulations of RE statistics (MM05-GRL, amended in our Reply to Huybers, which contained new material), we showed that Mannian PC1s combined with networks of white noise yielded RE distributions completely different from those used in MBH98 and WA benchmarking. (WA acknowledged the problem, but shut their eyes.)

Nor, as I’ve repeatedly stated, did we argue that the MBH hockeystick arose from red noise: we observed that the powerful HS-data mining algorithm (Mannian principal components) placed the Graybill stripbark chronologies into the PC1 and misled Mann into thinking that they were the “dominant pattern of variance”.  If they are not the “dominant pattern of variance” and merely a problematic lower order PC, then the premise of MBH98 no longer holds.



Figure 2 below plots the t-statistic for the difference between the means of the blade (1902-1980) and the shaft (1400-1901) against the HSI as defined in MM05-GRL. There is a monotonic, non-linear relationship, with the value of the t-statistic closely approximated by a simple quadratic expression in HSI. The diagonal lines show where the two values are equal: the HSI and t-statistic are approximately equal for HSI absolute values less than ~0.7. Values in this range are very common for centered PC1s but non-existent for Mannian PC1s, a point made in MM05.

The vertical red lines show HSI values of 1 and 1.5 (both signs); the horizontal dotted lines show t-values of 1.65 and 1.96, both common benchmarks in statistical testing (1.65 is the 95th percentile one-sided; 1.96 is the 95% two-sided, i.e. 97.5th percentile one-sided, benchmark). HSI values exceeding 1.5 have t-values well in excess of 2.



Figure 2. Plot of the t-statistic for the difference in means of the blade (1902-1980) and the shaft (1400-1901) against the HSI as defined in MM05-GRL, for centered PC1s (left) and Mannian PC1s (right), showing a monotonic, non-linear relationship. The two curves have exactly the same trajectories when overplotted, though HSI absolute values for the centered PCs are typically less than about 0.7, whereas values for Mannian PCs are bounded away from zero.
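The near-functional relationship between HSI and the t-statistic is easy to check numerically. A self-contained Python sketch (white noise stands in for the simulated PC1s; both statistics follow the definitions used in these posts, and the helper name is mine):

```python
import numpy as np

def hsi_and_t(x):
    """Hockey Stick Index and blade/shaft t-statistic for one 1400-1980 series."""
    shaft, blade = x[:502], x[502:]               # 1400-1901 and 1902-1980
    hsi = (blade.mean() - x.mean()) / x.std()
    se = np.sqrt((78 * blade.std(ddof=1)**2 + 501 * shaft.std(ddof=1)**2) / 579)
    return hsi, (blade.mean() - shaft.mean()) / se

rng = np.random.default_rng(0)
pairs = np.array([hsi_and_t(rng.normal(size=581)) for _ in range(1000)])
hsi, t = pairs[:, 0], pairs[:, 1]
correlation = np.corrcoef(hsi, t)[0, 1]           # very close to 1: t tracks HSI
```

The reason the relationship bends upward is that a large blade/shaft difference inflates the overall standard deviation (the HSI denominator) relative to the pooled within-period standard deviation (the t denominator), so t grows faster than linearly in HSI.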


What Nick Stokes Wouldn’t Show You

In MM05, we quantified the hockeystick-ness of simulated PC1s as the difference between the 1902-1980 mean (the “short centering” period of Mannian principal components) and the overall mean (1400-1980), divided by the standard deviation – a measure that we termed its “Hockey Stick Index (HSI)”.  In MM05 Figure 2, we showed histograms of the HSI distributions of Mannian and centered PC1s from 10,000 simulated networks.

Nick Stokes contested this measure as merely a "M&M creation". While we would be more than happy to be credited for the concept of dividing a difference of means by a standard deviation, such techniques have been used in statistics since the earliest days – for example, in the t-statistic for the difference in means between the blade (1902-1980) and the shaft (1400-1901), which has a similar formal structure but calculates the standard error in the denominator as a weighted average of the standard deviations of the blade and shaft. In a follow-up post, I'll re-state the results of MM05 Figure 2 in terms of t-statistics: the results are interesting.

Some ClimateBallers, including commenters at Stokes’ blog, are now making the fabricated claim that MM05 results were not based on the 10,000 simulations reported in Figure 2, but on a cherry-picked subset of the top percentile. Stokes knows that this is untrue, as he has replicated MM05 simulations from the script that we placed online and knows that Figure 2 is based on all the simulations; however, Stokes has not contradicted such claims by the more outlandish ClimateBallers.

In addition, although the MM05 Figure 2 histograms directly quantified HSI distributions for centered and Mannian PC1s, Stokes falsely claimed that the MM05 analysis was merely "qualitative, mostly". In fact, it is Stokes' own analysis that is "qualitative, mostly": his "analytic" technique consists of nothing more than visual characterization of 12-pane panelplots of HS-shaped PCs (sometimes consistently oriented, sometimes not) as having a "very strong" or "much less" HS appearance. (Figure 4.4 of the Wegman Report is a 12-pane panelplot of high-HSI PC1s, but none of the figures in our MM05 articles were panelplots of the type criticized by Stokes, though Stokes implies otherwise. Our analysis was based on the quantitative analysis of 10,000 simulations summarized in the histograms of Figure 2.)

To make matters worse, while Stokes has conceded that PC series have no inherent orientation, he has visually characterized panelplots constructed with different orientation protocols. His panelplot of 12 top-percentile centered PC1s shows all series upward-pointing, and he characterizes it as having a "very strong" HS appearance; his panelplot of 12 randomly selected Mannian PC1s mixes up-pointing and down-pointing series, and he characterizes it as having a "much less" HS appearance.

Over the past two years, Stokes has been challenged by Brandon Shollenberger in multiple venues to show a panelplot of randomly selected Mannian PC1s in up-pointing orientation (as done by the NAS panel and even MBH99) to demonstrate that his attribution is due to random selection (as Stokes claims), rather than to inconsistent orientation. Stokes has stubbornly refused to do so. For example, in a discussion in early 2013 at Judy Curry's, Stokes refused as follows:

No, you’ve criticized me for presenting randomly generated PC1 shapes as they are, rather than reorienting them to match Wegman’s illegitimate selection. But the question is, why should I reorient them in that artificial way. Wegman was pulling out all stops to give the impression that the HS shape that he contrived in the PC1 shapes could be identified with the HS in the MBH recon.

Stokes added:

I see no reason why I should butcher the actual PC1 calcs to perpetuate this subterfuge.

When Brandon pointed out that Mann himself re-oriented (“flipped”) the MBH99 PC1, Stokes simply shut his eyes and denied that Mann had “flipped” the PC1 (though the proof is unambiguous.)

In today's post, I'll show the panelplot that Nick Stokes has refused to show. I had intended to also carry out a comparison to Wegman Figure 4.4 and the panelplots in Stokes' original blogpost, but our grandchildren are coming over and I'll have to do that another day.

Mike’s NYT trick

I’m not sure McIntyre knows what ‘splicing’ is.  To me it means cutting and joining two ends together.  All Mann did was plot instrumental temperatures on the same axes, but he showed the whole record.

Dana Nuccitelli

There still seems to be a lot of confusion among Mann’s few remaining supporters as to why Phil Jones credited the “trick of adding in the real temps” to Mann’s Nature article (MBH98). Today I will review that topic.

Let’s first see what the Great Master himself says about the issue in his book of Fairy Tales:

In reality, neither “trick” nor “hide the decline” was referring to recent warming, but rather the far more mundane issue of how to compare proxy and instrumental temperature records. Jones was using the word trick in the same sense — to mean a clever approach — that I did in describing how in high school I figured out how to teach a computer to play tic-tac-toe or in college how to solve a model for high temperature superconductivity. He was referring, specifically, to an entirely legitimate plotting device for comparing two datasets on a single graph, as in our 1998 Nature article (MBH98) — hence “Mike’s Nature trick.”

With that explanation in hand, you don't need to be Mosher to ask the right question: why on Earth would Jones even mention that "trick" when he didn't use it in the WMO cover graph? He didn't compare reconstructions to the instrumental record, as there was no instrumental record plotted in the first place!

Let's now see what Jones possibly knew about the trick in MBH98 at the time of the email. The best known (at least to CA readers) example of the trick's usage in MBH98 is obviously the smoothed reconstruction of Figure 5b. This has been covered here so many times (for the exact parameters, see here) that I'll just show the "before" and "after" pictures, as they seem popular. The MBH98 (Nature) plot (Figure 5b) is in B/W and very fuzzy; that's why I've plotted the smoothed curve in red, but otherwise I've tried to replicate the original figure as closely as possible. Here's the relevant part without and with the trick:

Here's also the same for MBH99:

The MBH98 plot is so blurry that the usage of the trick is actually very hard to spot. It is therefore valid to question whether Jones actually noticed it. In fact, given his track record of technical sophistication, I believe he did not (at least not from MBH98). However, he didn't need to notice it, as there are other, more observable cases where the trick was used.

As originally observed by Steve years ago, Mann also extended the proxy record with the instrumental series in the MBH98 Figure 7 (top panel):


That is even clearly stated in the caption:

‘NH’, reconstructed NH temperature series from 1610–1980, updated with instrumental data from 1981–95.

The splicing can be further confirmed from the corresponding data file. There is a slight difference in the plot between the proxy and the instrumental parts (solid vs. dotted), but it is important to notice that the instrumental and proxy records do not overlap. Instead, the proxy record is clearly extended ("updated") with the instrumental data.

Even more important is the use of this trick in the attribution correlations (plotted in the bottom panel). Mann used the extended series in his attribution analysis, which in essence is just a set of windowed correlations between the extended record and various "forcing" time series. In other words, the last 15 points in the correlation plot (bottom panel) depend not only on the values of the (uncertain) proxy series but also on the (more certain) instrumental series. So one really shouldn't compare the last 15 points to earlier values, as it is an apples-to-oranges comparison. In particular, the observation in the paper that

The partial correlation with CO2 indeed dominates over that of solar irradiance for the most recent 200-year interval, as increases in temperature and CO2 simultaneously accelerate through to the end of 1995, while solar irradiance levels off after the mid-twentieth century.

seems to be somewhat dependent on the trick. However, there are other, more serious problems with the MBH98 attribution analysis, which is likely why we didn't delve into this more at the time.
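To make the mechanics concrete, here is a small Python sketch of a splice-then-correlate attribution on synthetic data. The series, window length, and noise levels are all invented for illustration; only the structure – append instrumental values after the proxy record ends, then compute moving-window correlations against a forcing series – mirrors what is described above:

```python
import numpy as np

def windowed_correlation(x, y, window=200):
    """Correlation of x and y over a trailing window ending at each index."""
    out = np.full(len(x), np.nan)
    for i in range(window - 1, len(x)):
        xw, yw = x[i - window + 1:i + 1], y[i - window + 1:i + 1]
        out[i] = np.corrcoef(xw, yw)[0, 1]
    return out

rng = np.random.default_rng(0)
forcing = np.linspace(0.0, 1.0, 386)                          # e.g. CO2, 1610-1995
proxy = 0.3 * forcing[:371] + rng.normal(0, 0.3, 371)         # reconstruction to 1980
instrumental = 0.3 * forcing[371:] + rng.normal(0, 0.05, 15)  # 1981-95, less noisy
spliced = np.concatenate([proxy, instrumental])               # the "updated" series
r = windowed_correlation(spliced, forcing)
```

Because the last 15 values of `spliced` are far less noisy than the rest, every correlation window that overlaps them is computed from mixed apples-and-oranges data – which is exactly the objection to comparing the final points of the correlation plot with earlier ones.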

Jones didn't even have to notice this correlation use of the trick in order to have grounds for attributing the trick to Mike's Nature article. Namely, MBH98 may have been rather groundbreaking in that it already had an extensive Press Release along with press photos (and an FAQ!). One of the photos (Figure 2) has the MBH98 reconstruction plotted. Unfortunately, it seems that the picture is not archived anywhere, and we have only a broken Postscript file available. Luckily the file opens just enough to confirm what is said in the figure caption.


Original caption: Northern hemisphere mean annual temperature reconstruction in °C (thin black line) with 95% confidence bounds for the reconstruction shown by the light blue shading. The thick black line is a 50 year lowpass filter (filtering out all frequencies less than 50 years) of the reconstructed data. The zero (dashed) line corresponds to the 1902-1980 calibration mean, and raw data from 1981 to 1997 is shown in red.

Here’s my replication:


So Mann had plotted the reconstruction from 1400 to 1980 and again extended it (in a different color) with the instrumental series for 1981-1997. In other words, as in Figure 7 but unlike in the later plots, he did not plot the 1902-1980 part of the instrumental record alongside, i.e., there is no overlap between the reconstruction and the instrumental series (and hence they cannot be compared).

Additionally, there is one even more blatant use of the trick, somewhat comparable to what Jones did (and Mann approved) in the WMO graph. Namely, five days after the publication of MBH98, the New York Times published an article (by William K. Stevens) titled "New Evidence Finds This Is Warmest Century in 600 Years" featuring the results. The article carried this picture:


Original Caption: ”Warmer Weather, Recently, in Northern Hemisphere” Researchers used thermometer readings and proxy data like tree rings, ice core samples and coral records to trace climate patterns over the past 600 years. The annual variations for those years from the average temperature for the years from 1902 until 1980 for the Northern Hemisphere are shown in the graph. For example, the graph shows that in 1400, the mean temperature was about 0.5 degree Fahrenheit cooler than the average annual temperature from 1902-1980. ANNUAL TEMPERATURE VARIATIONS — FROM RECONSTRUCTED PROXY DATA — THERMOMETER MEASUREMENTS (Source: Dr. Michael E. Mann, Dr. Raymond S. Bradley and Dr. Malcom K. Hughes)

The plotted series is an incredible splicing of the MBH98 reconstruction (1400-1901) with Mann's instrumental series (1902-1997)! In other words, the 1902-1980 part of the actual reconstruction (and the uncertainty intervals) is nowhere to be seen, having been replaced by the instrumental record. The splicing, together with the fact that the anomalies are given in Fahrenheit, indicates that whoever produced the graph had access to the actual data (not available in the extensive press kit). It would be interesting if the NYT journalists – some of them surely reading this – would dig up their archives for the full story of how the graph was produced and whether there were any protests from the authors about this grotesque splicing. Here's again my replication of the figure without and with the splicing:


Jones had certainly seen the figure, as he is quoted in the article.

Other experts pointed to other caveats. One, Dr. Philip Jones of the University of East Anglia in England, questioned whether it was valid simply to extend the proxy record by adding the last 150 years of thermometer measurements to it. He said that would be a bit like juxtaposing apples and oranges.

I don't blame him for mistaking a 96-year splice for a 150-year splice, but I wonder what – or who – made him do a complete U-turn on the validity of the splicing in a year and a half. Finally, it is always good to keep in mind the words of the Great Master a few years later.

[Response: No researchers in this field have ever, to our knowledge, "grafted the thermometer record onto" any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum. Most proxy reconstructions end somewhere around 1980, for the reasons discussed above. Often, as in the comparisons we show on this site, the instrumental record (which extends to present) is shown along with the reconstructions, and clearly distinguished from them (e.g. highlighted in red as here). Most studies seek to "validate" a reconstruction by showing that it independently reproduces instrumental estimates (e.g. early temperature data available during the 18th and 19th century) that were not used to 'calibrate' the proxy data. When this is done, it is indeed possible to quantitatively compare the instrumental record of the past few decades with earlier estimates from the proxy reconstruction, within the context of the estimated uncertainties in the reconstructed values (again see the comparisons here, with the instrumental record clearly distinguished in red, the proxy reconstructions indicated by e.g. blue or green, and the uncertainties indicated by shading). -mike]

Here’s the turn-key Octave code for reproducing the figures in this post.

The CEI and NR Reply Briefs

Online here: CEI, NR.

