The 20th century warming counters a millennial-scale cooling trend which is consistent with long-term astronomical forcing.
MBH99
According to the UMass researchers, the 1,000-year reconstruction reveals that temperatures dropped an average of 0.02 degrees Celsius per century prior to the 20th century. This trend is consistent with the “astronomical theory” of climate change, which considers the effects of long-term changes in the nature of the Earth’s orbit relative to the sun, which influence the distribution of solar energy at the Earth’s surface over many millennia.
“If temperatures change slowly, society and the environment have time to adjust,” said Mann. “The slow, moderate, long-term cooling trend that we found makes the abrupt warming of the late 20th century even more dramatic. The cooling trend of over 900 years was dramatically reversed in less than a century. The abruptness of the recent warming is key, and it is a potential cause for concern.”
The long-term [northern] hemispheric trend is best described as a modest and irregular cooling from AD 1000 to around 1850 to 1900, followed by an abrupt 20th century warming.
The figure above shows four familiar-looking graphs. One of them is the original Hockey Stick, and three are “fake”. Can you tell which one is the real one?
Although most of the original Hockey Stick methods have been uncovered, a few oddities remain. Apart from the confidence interval calculation, there has been another mystery relating to MBH99. This is remarkable, as the rather short MBH99 paper seems on the surface to be a simple extension of MBH98: a step (1000-1399) is added to the existing MBH98 NH temperature reconstruction using the same methodology. However, a wealth of material in the four-page paper is devoted to “correcting” the (Mannian) North American tree-ring series PC #1. How exactly, or even why, this was done has been something of a mystery. Two years ago Steve wrote notes about the issue (here, here, and here). It is worth reviewing those before reading the rest of this post.
The problem with the methods described by Steve was that they could not actually have been used. The reason is that it is easy to see from the published data that the actual “correction”, or “fix”, applied was piecewise linear. There is simply no way such a function could be obtained from the original data by any kind of smoothing operation.
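As a quick illustration (a minimal Python sketch with invented breakpoint years and values, not Mann’s data or code): a piecewise linear series betrays itself because its second differences vanish everywhere except at the segment breakpoints, something no smoothing of real proxy data would produce.

import numpy as np

years = np.arange(1000, 1981)                                   # hypothetical annual grid
# hypothetical piecewise-linear "correction" with kinks at 1800 and 1900
correction = np.interp(years, [1000, 1800, 1900, 1980], [0.0, 0.0, -0.15, -0.35])

d2 = np.diff(correction, n=2)                                   # second differences
print(years[1:-1][np.abs(d2) > 1e-12])                          # -> [1800 1900], the kink years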
For the calculations Steve was using his private copy of Mann’s later-destroyed UVA ftp archive, infamous for its CENSORED directories. For the rest of us, the data archived there has been unreachable until now. The FOIA documents contain an MBH data directory structure obtained by Tim Osborn sometime back in 2003. It can be argued that the UVA ftp site was originally specially prepared by Scott Rutherford for Osborn, but that is another story. Anyhow, the files in Osborn’s archive seem to correspond to those originally located on the UVA ftp site. The files in the directory TREE/COMPARE relate to the PC1 “fixing”.
While I was checking the files, I noticed a FORTRAN code, “residualdetrend.f”, which I had not seen discussed anywhere. At the beginning of the file there is a comment:
c
c regress out co2-correlated trend (r=0.9 w/ co2)
c after 1800 from pc1 of ITRDB data
c
Wow! Exactly the same comment is found in “co2detrend.f” discussed by Steve here. Further down, we find
c
c linear segments describing approximate residuals
c relative to fit withrespect to secular trend
c
Indeed, there it was: code removing a piecewise linear segment from the PC1, and I further found that the segment matched nicely with the “secular trend in residuals” graph in MBH99 Figure 1(b). Mystery solved, well, kind of.
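In modern terms, the operation “residualdetrend.f” appears to perform amounts to the following Python sketch; the knot years and amplitudes are placeholders for illustration only, not the values used in MBH99.

import numpy as np

def apply_piecewise_fix(years, pc1, knot_years, knot_values):
    # build the piecewise-linear segment from its knots and subtract it from the PC
    fix = np.interp(years, knot_years, knot_values)
    return pc1 - fix

# e.g. pc1_fixed = apply_piecewise_fix(years, pc1, [1000, 1800, 1900, 1980], [0, 0, 0.2, 0.5])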
Now the question was: what the heck, then, is “co2detrend.f”?! I noticed that both of these codes output to a file with the name “pc01-fixed.dat”. The FOIA files include such a file, and its content matches the output of “residualdetrend.f”. So IMO it can safely be assumed that Mann tried another CO2 “adjustment”, but for some reason ended up with the one described in “residualdetrend.f” (why he chose to approximate the “secular trend” is another new Mannian mystery).
After establishing this, I had another surprise. I noticed that there is also a file “pc1-fixed-old.dat”, which I presumed to be the output of “co2detrend.f”. Well, it turned out that the “fix” contained in that file matched neither of the methods described so far. Thus Mann had at least three methods for “adjusting” his PC1! Here is a plot of the different “fixes” (to be subtracted from the original PC1) uncovered so far.
A natural question now is why the fix that was used is “better” than the ones disregarded. Maybe the “skill” measures used by Mann contain the answer. MBH99:
The calibration and verification resolved variance (39% and 34% respectively) are consistent with each other, but lower than for reconstructions back to AD 1400 (42% and 51% respectively – see MBH98).
I (like Steve and UC) have been able to emulate the main MBH procedure for a while now. In particular, my emulation of the AD1000 step is exact. So I ran the algorithm, but replaced the “fixed” PC1 with each of the other two “fixed” PCs. For the “co2detrend.f” fix, the calibration and verification REs are 0.37 and -0.09, respectively. So even by Mann’s standards (negative RE) that “fixed” PC had to be disregarded. For the “old fix” the RE scores were 0.37 and 0.20, so I guess they are not “consistent with each other”, and maybe this was the reason for trying yet another fix. However, the real surprise came when I tried the algorithm with the original Mannian PC1, i.e. without any “fixing”. The RE scores are 0.38 and 0.33, so based on these “skill metrics” there is no reason to “fix” the PC in the first place!
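For readers unfamiliar with the “skill” scores quoted here, this is a minimal Python sketch of the RE (reduction of error) statistic under its usual definition; the array names are hypothetical, and this is not my MBH emulation code itself.

import numpy as np

def reduction_of_error(obs, est, calib_mean):
    # RE = 1 - sum((obs - est)^2) / sum((obs - calibration-period mean)^2)
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - calib_mean) ** 2)

# calibration RE uses the calibration interval, verification RE the withheld interval,
# but both are referenced to the calibration-period mean of the observations:
# re_cal = reduction_of_error(obs_cal, est_cal, obs_cal.mean())
# re_ver = reduction_of_error(obs_ver, est_ver, obs_cal.mean())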
It gets more interesting: MBH99 has a linear trend (1000-1900, as in the IPCC figure) of -0.020°C/century, but without the PC1 “adjustment” the cooling trend shrinks to under 0.005°C/century in magnitude! MBH99:
The substantial secular spectral peak is highly significant relative to red noise, associated with a long-term cooling trend in the NH series prior to industrialization (δT = -0.02°C/century). This cooling is possibly related to astronomical forcing, which is thought to have driven long-term temperatures downward since the mid-Holocene at a rate within the range of -0.01 to -0.04°C/century [see Berger, 1988].
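For reference, the trend figures being compared here are just least-squares slopes over AD 1000-1900 expressed per century. A minimal Python sketch (with hypothetical 'years' and 'recon' arrays, not the MBH99 data itself):

import numpy as np

def trend_per_century(years, recon, start=1000, end=1900):
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], recon[mask], 1)[0]
    return 100.0 * slope_per_year        # degrees C per century

# trend_per_century(years, recon) gives about -0.020 for the published MBH99 series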
Finally, the answer to the question posed at the beginning. The original Hockey Stick is Exhibit B. Exhibit C is obtained by using the “old” AD1000 NOAMER PC1 “fix” and keeping everything else the same in the Mannomatic. Exhibit D corresponds to the “co2detrend.f” fix, and Exhibit A is obtained using the original Mannian PC1 (no fixing). (Click below to see an animated GIF of the different versions.)
56 Comments
That’s awesome detective work Jean.
Now can someone kindly put this all in English?
My mbh99 code is here, http://signals.auditblogs.com/2008/07/01/hockeystick-for-matlab/ (see the update, one change needed due to a climateaudit update)
Hmm, http://www.eastangliaemails.com/emails.php?eid=138&filename=938031546.txt
Update 31 Mar 2011: code now in here: https://climateaudit.org/2008/07/01/code-the-hockey-stick/
Jean,
a bit off topic, but I asked this question several times in CA comments without an answer (probably lost in the general commentary). Seeing as you actually work with the data, please can you tell me why the error margin improves so massively around 1600 AD on all the Mann et al. plots? If the reason is a reduction in the quantity of source data, such as fewer tree-ring proxies or fewer thermometers etc., then there should also be an offset in the main trend line; at least that’s what happens with my own data (unrelated to climate). Any explanation from anyone would be much appreciated.
Re: Mark Cooper (Feb 3 08:38),
Mann calculates “error margins” from the calibration error. Now, the number of proxies increases considerably after around 1600 AD, and hence later steps “fit” better and have narrower “confidence intervals”. I do not understand what you mean by the offset thing.
I think Mark is arguing that adding in new data would likely shift the mean of the series up or down, as well as affect the confidence intervals.
I think Mark is not seeing the sequence correctly here. The trend has been determined, and now the confidence interval of the trend line is required. Enter Mann: you obtain those from the variability (the standard deviation about the trend) over the calibration period. The width of the error bars (CIs) then changes around the trend line as the number of samples changes with time.
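To make the point concrete, here is a rough Python sketch of that idea (not Mann’s actual code): the band half-width for each reconstruction step comes from the spread of the calibration-period residuals, so steps fitted with more proxies get narrower bars.

import numpy as np

def step_ci_halfwidth(obs_cal, est_cal, z=2.0):
    # approximate CI half-width from calibration residuals (roughly 2 sigma)
    resid = np.asarray(obs_cal, float) - np.asarray(est_cal, float)
    return z * resid.std(ddof=1)

# an early step fitted with only a handful of proxies typically has larger residuals,
# and hence a wider band, than a later step fitted with many proxies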
You can actually replace all proxies with some random process (red or white noise, for example), and you’ll get error margins similar to those in MBH98. But then your verification RE will very likely be negative. To obtain a positive verification RE, it is sufficient to include PC1 for each step and then replace all other proxies with, say, AR(1) p=0.9 noise. You’ll need some trial & error to get all REs positive, but trial & error is what Mann seems to do all the time, so it is ok.
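For concreteness, a sketch of generating the kind of AR(1) “red noise” pseudo-proxies mentioned (lag-one coefficient 0.9); purely illustrative, not the code used for any published experiment.

import numpy as np

def ar1_noise(n, rho=0.9, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n)
    eps = rng.standard_normal(n) * np.sqrt(1.0 - rho ** 2)   # keeps roughly unit variance
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

fake_proxies = np.column_stack([ar1_noise(981) for _ in range(20)])   # 20 series, AD 1000-1980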
Actually, the original PC1 is not needed for positive verification REs. One can just apply partially centered PCA to trendless red noise [1] and take the first “noise PC”. It won’t take many simulations to obtain a hockey stick that passes the positive-RE requirement.
[1] McIntyre, S., and R. McKitrick (2005), Hockey sticks, principal components, and spurious significance, Geophys. Res. Lett., 32, L03710, doi:10.1029/2004GL021750.
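A bare-bones illustration of the short-segment (“partially centered”) PCA described in [1]: each noise series is centered on its 1902-1980 mean rather than its full-length mean before the leading principal component is taken. This is a simplified sketch on simulated red noise, not the full MBH98 algorithm.

import numpy as np

rng = np.random.default_rng(1)

def ar1(n, rho=0.9):
    x = np.zeros(n)
    e = rng.standard_normal(n) * np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

def decentered_pc1(series, years, calib=(1902, 1980)):
    mask = (years >= calib[0]) & (years <= calib[1])
    X = series - series[mask].mean(axis=0)        # center on the calibration segment only
    u, s, _ = np.linalg.svd(X, full_matrices=False)
    return u[:, 0] * s[0]

years = np.arange(1000, 1981)
noise = np.column_stack([ar1(years.size) for _ in range(70)])   # 70 trendless red-noise series
pc1_noise = decentered_pc1(noise, years)                        # tends to show a 20th-century "blade"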
Do you mean that the variability about the mean value should increase as the number of proxies decreases? They use variance matching to get rid of that annoyance, a method which AFAIK is not well known in the calibration literature.
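A minimal sketch of variance matching in the sense described: the reconstruction is rescaled so that its mean and standard deviation over the calibration interval match the instrumental target, which masks any loss of variance when the proxy network thins. Array names here are hypothetical.

import numpy as np

def variance_match(recon, recon_cal, target_cal):
    # rescale so that, over the calibration interval, recon has the target's std and mean
    scale = np.std(target_cal, ddof=1) / np.std(recon_cal, ddof=1)
    offset = np.mean(target_cal) - scale * np.mean(recon_cal)
    return scale * np.asarray(recon, float) + offset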
Jean S, this is great analysis. As always, every bizarre Mannian adjustment has a reason.
Re: Steve McIntyre (Feb 3 08:44),
Thanks. Yes, all the Mannian adjustments are rather simplistic (once you figure them out), but they all seem to have a reason. Figuring them out is the problematic part. On the other hand, some things are truly weird. For instance, why did he decide to “approximate” data he had readily at hand? I could get practically no difference by using the true “secular trend residual” fix.
Thank you, Jean. And, I have to say, very well written. I’ll also say I understood almost everything but I could not have ever done it.
Technically, those animated gifs and blinkys and… are very neat tools.
Given the number and bizarre nature of the many weird fixes and counter-fixes in the process of data-torturing leading to the final hockey stick, perhaps “mannian” should be replaced by “manniac”.
Kind of squishy, isn’t it, when one can try all sorts of models and only report the “best” one (where “best” is never described and looks subjective)?
It is a combination of (1) one of the most extreme forms of ‘publication bias’, a widespread vice of contemporary science whereby only positive results get published, and (2) a new manifestation of the old trick of ‘torturing the data till they confess’. In this particular case, motivated also by an advocacy drive mixed with ruthless competition for research funds and insufficiently impartial peer review.
John P. Ioannidis has a little gem of a paper hidden in the medical literature:
“Why most published research findings are false”, PLoS Med. 2005;2:e124, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
It is a highly readable account of some of the fallacies of statistical reasoning. A few highlights relevant for the present case:
“Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results.”
“Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias”
“the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.”
Re: Craig Loehle (Feb 3 09:07),
I think the most amazing thing is that these same people have been loudly telling us how “robust” their results are!
Jean S:
This is a very elegant piece of detective work. I am still puzzled as to why the fixes were made. You frame the question but do not seem to offer an explanation. The version chosen – Exhibit B – has little going for it except its alignment with the presumed long-term cooling trend.
Re: Bernie (Feb 3 09:54),
well, without a fix (Exhibit A) the linear trend is -0.0047°C/century, and there is not enough cooling to be associated with the Astronomical Theory of Climate Change. See also whether you can find any support for this statement (MBH99 press release) in Exhibit A (my bold)
It is hard to believe that other scientists are letting them get away with this.
Plus, how is any CO2 adjustment legitimate given that the CO2 signal is presumably part of another PC? Is there logic that bizarre?
Sorry “their logic”
Regarding the smearing in of historic CO2 data in thousand year hockey sticks, I wrote previously:
“Just because it sounds stupid doesn’t mean it’s not true.”
I have several comments on the thread about the Antarctic CO2 concentration (not isotope) data from the 1988, 1996 and 1998 Etheridge papers. These all present multi-century hockey sticks because the blades are anthropogenic signals. Mann et al., I strongly suspect, are smearing these data into their reconstructions, thereby necessarily reproducing hockey sticks. They might even have got the idea for adding an “instrumental” series on graph tails from Etheridge 1988.
I know, it seems too crazy, even criminal. You cannot believe it.
But they do. I think I have an advantage: I have engaged the believers as a student, not an equal. I wanted the ABCs, and it became clear to me there is a strong element of blind faith involved, and that they believe the 1000-2000 year CO2 concentration record and temperature are one and the same. At Copenhagen the posters said “You Control the Climate”. Not affect, but control. The anthrowarmists believe it.
https://climateaudit.org/2009/12/10/calibrating-dr-thompsons-z-mometer/
The characteristic of the work of the Team gangsters that I find surreal is that so many of them fanny about doing something that bears a passing resemblance to science, without, apparently, having a clue as to what doing real science is like.
Thanks, Jean — I’ll have to study your results more closely.
My take on this back on Steve’s 11/13/07 post “The MBH99 ‘CO2 Adjustment'” was that MBH99 had simply hand-fudged the portion of the curve after 1700 to make it show more cooling prior to the 20th c. They did this by splicing in the low frequency of a series that had the “right” shape in place of the low frequency of their actual series. This was obfuscated by calling it a CO2 adjustment, but in fact the numerical values of their CO2 series were never used for anything, nor was the substitute series numerically calibrated to anything. See https://climateaudit.org/2007/11/13/the-mbh99-co2-adjustment-part-1/#comment-116917.
My bottom line was that while the adjustment was entirely bogus, it was not actually hidden if their text was carefully parsed. It was, however, obfuscated by unnecessarily complicating its explanation.
I cheated, I have a copy of Bishop Hill’s “The Hockey Stick Illusion” right in front of me. Figure B is on the cover.
Re: EdeF (Feb 3 11:22),

I got my copy on Monday, and I think it is an excellent book. I highly recommend it. However, I think BH should redraw the cover for the next edition; the shaded “hockey stick” is drawn in the wrong orientation in the light of this post 😉 It should be more like this:
Indeed. Mann, et al. 1999, managed to infect/influence some of the basic research on “solar forcing”:
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/climate_forcing/solar_variability/bard_irradiance.txt
So NCDC & NASA have allowed these Mannian statistics in all over the place.
Please take a moment to look at my comments concerning the Luterbacher “proxies” used by Mann in his 2008 version of the hockey stick. I call these proxies the “amazing multiplying proxies” because Mann uses 71 separate Luterbacher proxies, but the data for all of them prior to about 1750 come from the same 10 or so “documentary information” sources.
Comments and criticisms are appreciated.
You can see them here.
Best Regards,
Tom Moriarty
Is there any other reason to have those RED bars at the end of the graph other than as a scare tactic? It would be nice if the darn RED bars did not cover up the actual data. Tricks of smoke and mirrors… Can’t they just present data like scientists instead of scaremongers? They take us all for fools… I learned a new term today that describes many of the players in the AGW arena: the Dunning–Kruger effect. Here is the explanation of this term of endearment :~) http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect … how apropos… John…
Check out http://www.bestinclass.dk/index.php/2010/01/global-warming/
and the comments (for instance, E#21 written by Lau).
If global temperature is going up, some device must show that to be the fact. Or?
..or you can “fix” in the other direction and make PC1 even more HS-like (added positive linear trend):
positive REs, so I guess this is ok as well.
Whatever the fix, it’s sobering to keep in mind that we are talking about a handful of tree rings. No way they can be tortured enough to yield such precise conclusions.
Figure missing,
One has to wonder what it is like for Mann, Briffa et al. to log on to CA and see their misdeeds (which they had assumed were safely tucked away out of sight in their minds and on servers somewhere) explained in public with such clinical precision.
And when do you think that has ever happened?
Sorry – Don’t get your drift.
I think that the effort and detective instincts of Jean S, Steve M and UC are sometimes underappreciated by those of us who do not understand why these errors were not found by scientists more professionally involved with the subject matter.
The analytical work is not easy, and the output has to be matched by supposing what was being attempted initially. The opaque language of the original authors does not make the job easier. I also think that the analyst needs motivation that perhaps the professionally involved scientist does not possess.
Anyway, I guess Jean S has extracted, and provided for public view, a sensitivity test that Mann et al. had (unknowingly?) provided in the haze. Now that the sensitivity and robustness become apparent, I would guess the next step would be for Mann et al. to provide, post hoc, the a priori reasons for the selection they made. It is at this step that serious scientists are allowed to chuckle.
Jean,
Is there any significant meaning to the intersection of the lines on your PC1 fixes graph? They seem to converge ~1925.
It just seems odd that the 3 different calculations would arrive at the same temp in 1925 while having virtually nothing else in common.
Re: Bob McDonald (Feb 3 17:35),
Nothing I’m aware of.
I did the graph by standardizing all series (the three “fixed” PCs plus the “rescaled” original PC) to zero mean and unit variance over the “pre-fixing era” (1000-1599), where they are exactly the same. Then I subtracted each of the fixed PCs from the original.
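In Python terms, the plotting procedure just described is roughly the following (variable names are hypothetical):

import numpy as np

def standardize_prefix_era(series, years, era=(1000, 1599)):
    # zero mean, unit variance over the "pre-fixing era", where all versions agree
    mask = (years >= era[0]) & (years <= era[1])
    return (series - series[mask].mean()) / series[mask].std(ddof=1)

# std_orig = standardize_prefix_era(pc_original, years)
# fixes = {name: std_orig - standardize_prefix_era(pc, years) for name, pc in fixed_pcs.items()}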
Does it seem bizarre to anyone else to adjust a PC1 result like this?
Maybe email 963233839.txt of 10 July 2000 is relevant.
Goodness, the climate science community learned a lot about ignoring objections between 2000 and the closing date for submissions to AR4 in 2007.
108. 0926026654.txt
From: Phil Jones
To: mann@snow.geo.umass.edu
Subject: Straight to the Point
Date: Thu, 06 May 1999 17:37:34 +0100
Cc: k.briffa,t.osborn,mhughes,rbradley
Keith didn’t mention in his Science piece but both of us
think that you’re on very dodgy ground with this long-term
decline in temperatures on the 1000 year timescale. What
the real world has done over the last 6000 years and what
it ought to have done given our understandding of Milankovic
forcing are two very different things. I don’t think the
world was much warmer 6000 years ago – in a global sense
compared to the average of the last 1000 years, but this is
my opinion and I may change it given more evidence.
Maybe Jones was persuaded after he read about it in the authoritative TAR? 😉
May I suggest a read (or review) of some of the fascinating detective work of Steve, Jean S, UC, et al through the years?
I am reading a couple of past threads per day just to try to fill in my own mental world with all that has transpired here over the past decade….. Remarkable!
Jean has suggested, over on the new Briffa Bodge post at https://climateaudit.org/2011/03/30/muir-russell-and-the-briffa-bodge/#comment-259525, that this trick be called the “Milankovitch Bodge”.
However, I think it does Milankovitch an injustice to associate him with this procedure.
How about the “Mannkovitch CO2 Bodge” instead?
The essay Jean recommends, “A good trick to hide a decline”, by Andrew Montford at http://bishophill.squarespace.com/blog/2010/4/26/a-good-trick-to-create-a-decline.html, provides a very readable explanation of all this. Perhaps this is included in his book.
One important thing that I got from Montford’s piece was that the Mannkovitch CO2 Bodge was only applied to the AD 1000-1400 portion of the HS, where it has the effect of raising the early portion of the shaft. If you look closely at the blue annual readings, this discontinuity is actually visible in the altered graph at 1400.
As Montford dryly points out, it is “counterintuitive” that an alteration that supposedly adjusts for post-1900 CO2 fertilization has its entire effect pre-1400!
Like they say, It’s Even Worse Than We Thought! 😉
Have any climate auditors looked at the latest greatest hockey stick paper, Marcott et al (2013)? It’s getting lots of the usual hyperventilating PR, with Mannian Hockey Team quotations. Since it claims 11,300 years of data, yet a sharp hockey stick blade in recent decades, it is suggesting comparative discussions with both Mannian studies and also Milankovitch estimates.
comment at WUWT: something very odd about data for Marcott et al (2013)
Looking at it.
Excellent, although if I knew you were going to be scrutinizing a stat laden paper of mine I would begin to perspire. If it depended on stat stuff from Mann, I would perspire greatly.
Steve, I assume you already noticed this, but I just finished reading the paper and found an amusing point. In Figure 1 (E and F), five reconstructions are compared to their new one. Two are from Mann 2008, and a third is from Wahl and Ammann 2007. Of course, W&A 2007 is really just MBH with a couple of minor alterations made with the (false) claim that they fixed the problems of MBH. In other words, both of Mann’s reconstructions are present.
In fact, one of the remaining two reconstructions (Huang04) only goes back to 1600 AD, and it looks nothing like their new reconstruction. That means the only millennial reconstruction they compare their work to, other than Mann’s, is Moberg 2005, and it is outside their confidence intervals for more than half the period it covers.
Their reconstruction looks good if you compare it to Mann’s work, but it (apparently) looks bad if you compare it to any other work. That shouldn’t reassure anyone.
And now, a more useful comment. I’ve always thought the most important step in creating any reconstruction is to look at the data. Before implementing any statistical methods, just look at it and see what you can see. In that vein, I created images showing all 73 series used in this paper. The x-axis is held constant for each series so they’re comparable, but the y-axis is different per series.
Given the data they used, I have trouble seeing how they confirmed Mann’s hockey stick. Most of their series don’t seem to resemble their results.
Steve: others are noticing the same phenomenon. If none of the datasets have the Marcott stick, how does it emerge in the aggregate? Dunno.
I haven’t even been able to figure out how Marcott et al manage to get their 20 year samples. Getting that resolution from series with a much coarser resolution requires some sort of infilling. I can’t find anything in the paper or SI which discusses that step. It’s hugely important, but unless I’m missing something, it’s simply overlooked. My suspicion is it has something to do with the anomalous results.
One possibility I’ve been considering is there are a number of series that extend to 1940-1950 but don’t reach 1960. If those series were cooler than the ones extending to 1960, that might explain the jump. When the series with a cooler end get dropped, the results get warmer. That could create a jump in temperatures like what we see.
That assumes the data doesn’t agree with itself, but it is at least an explanation. I don’t have any other at the moment.
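On the resolution question raised above: one plausible way to put an irregular, coarse proxy record onto a 20-year grid is simple linear interpolation, sketched below. Since the paper and SI do not say what was actually done, this is only a guess, not Marcott et al.’s method.

import numpy as np

def to_20yr_grid(ages, values, grid_start=0, grid_end=11300):
    # linear interpolation of an irregularly sampled record onto a 20-year grid
    grid = np.arange(grid_start, grid_end + 1, 20)
    order = np.argsort(ages)
    return grid, np.interp(grid, np.asarray(ages, float)[order], np.asarray(values, float)[order])

# a record sampled every ~120 years then gets (weakly constrained) values every 20 years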
Rud has a post over at Judith’s
He mentions
“The misinformation highway took the paper’s figure S3 (below) as a spaghetti chart hockey stick of the proxy temperatures. It is not. It shows 1000 Monte Carlo simulations of the 73 data sets, perturbed by inserting random temperature and age calibration errors to establish the blue statistical band in Figure 1B. S3 doesn’t say the last century’s temperature has risen above the Holocene peak. It only says uncertainty about the combined recent paleotemperature has risen. Which must be true if the median resolution is 120 years.”
Steve:
Have you been in contact with Marcott et al to ask for more details on their analysis? Given some of the points raised by you and others, it might be helpful to at least show that you gave them an opportunity to clarify their analytic choices and decisions. It sounds like there is a list of questions that you could pose. (Cross posted at Climate, etc)
Steve: I sent an email to Marcott yesterday asking a couple of points. No answer yet.
Steve: it appears from your comments here and at Climate Etc that you are running into writer’s block on the subject of Marcott.
You wrote: “It looks like a real dog’s breakfast. I’ve been working on a long post at CA, but keep encountering new problems and am finding it hard to finish a post. Or even begin one.”
It sounds like it has so much wrong with it that it’s hard to know where to start or finish. That it is a compendium of all that is wrong with certain climate science papers in the past and that it builds on those tricks, devices, errors, miscalculations and inappropriate statistical techniques for which those papers are known.
If the above is true, and I don’t know that it is, maybe that might be a way to approach it. Just a general survey of all the mistaken techniques re-deployed by Marcott that originate in Climate Science’s dubious past.
The alternative is several shorter posts on specific problems. A “one at a time” approach.
Hope this helps. If not, please delete.
To compare notes between the abstract and the paper:
“…Temperatures have risen steadily since then, leaving us now with a global temperature higher than those during 90% of the entire Holocene.”
“…Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.”
“…Intergovernmental Panel on Climate Change model projections for 2100 exceed the full distribution of Holocene temperature under all plausible greenhouse gas emission scenarios.
“…Our results indicate that global mean temperature for the decade 2000–2009 (34) has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.). These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies”
Anthony’s site seems to be crashing my browser this morning.
Several people have pointed out inconsistencies in the Marcott paper, including their use of error margins and the bizarre, unprecedented hockey stick appearing in the century-smoothed data.
Given the 2013 publishing date I am also curious to see if they truncated the temperature data (as per Mann) to hide a lack of incline: say a 1C increase averaged over 100 yrs versus 115 yrs.
6 Trackbacks
[…] The Hockey Stick and the Milankovitch Theory « Climate Audit […]
[…] climategate, More on the hockey stick, Manns misconduct – […]
[…] The Hockey Stick and the Milankovitch Theory « Climate Audit […]
[…] but was rather overlooked by the sceptic community in all the excitement over the emails. “The Hockey Stick and the Milankovitch Cycle” uses some of the Climategate files to solve one of the remaining mysteries of the Hockey […]
[…] (see comments) we began to understand the effect of this ridiculous “CO2-adjustment” (Mannkovitch Bodge) in MBH99: it adjusted the verification RE statistic and affected the 1000-1850 linear trend […]
[…] Jacoby and D’Arrigo (Clim Chg 1989), a study of northern North American tree rings, was extremely influential in expanding the application of tree rings to temperature reconstructions (as opposed to precipitation.) (See CA tag Jacoby for prior posts that have been tagged.) The Jacoby-d’Arrigo reconstruction was used in Jones et al 1998 and its components (especially Gaspe) were used in MBH98. It is used to “bodge” of Mann PC1 in MBH99; Mann’s “Milankowitch” argument rests almost entirely on this bodge – ably deconstructed by Jean S here. […]