Here’s a quick summary of the overlap of proxies in three widely publicized “independent” 2006 studies. The numbers of proxies are all small (Juckes – 18; Osborn – 14; Hegerl – 12). All three use multiple bristlecone/foxtail chronologies: Juckes 4; Osborn 2; Hegerl 2. All three use Fisher’s Greenland dO18, Tornetrask (Juckes twice, Hegerl mis-identifying it), Taimyr, the Yang composite and Yamal. Several series are used in two of the three studies: Chesapeake Mg/Ca; Alberta (Jasper) tree rings; Jacoby Mongolia tree rings. There are very few “singletons” – Osborn 3, Hegerl 3 and Juckes 6 – although the Juckes singletons were used in Moberg 2005 or MBH98.
| Juckes et al 2006 | Osborn and Briffa 2006 | Hegerl et al 2006 |
|---|---|---|
| Boreal Plateau foxtails; Upper Wright foxtails; Methuselah Walk bristlecone; Indian Garden bristlecone | MBH PC1; average of Boreal Plateau and Upper Wright foxtails | w. U.S. Hughes (MBH PC1); w. U.S. composite (average of Boreal Plateau and Upper Wright foxtails) |
| W Greenland (Fisher) | Greenland dO18 (Fisher) | W Greenland (Fisher) |
| Tornetrask (Briffa 2000 version); Tornetrask (Briffa 1992 version) | Tornetrask | N Norway (Tornetrask – sic) |
| Taimyr | Taimyr | Taimyr |
| Yang composite | Yang composite | E China (Yang composite) |
| Yamal; Polar Urals (Briffa 1995) | Yamal; Mangazeja | Composite of Yamal, Mangazeja and Polar Urals (W Siberia) |
| Chesapeake Mg-Ca | Chesapeake Mg-Ca | |
| Mongolia | Mongolia (different version) | |
| Alberta: Jasper/Icefields | Alberta (Jasper) | |
| **Singletons** | | |
| Arabian Sea G. Bulloides | Netherlands documentary (van Engeln) | European historical (Luterbacher) |
| Col du Zad, Morocco | Quebec (Bonif) | Zhaschiviesk-Ayandina River composite |
| GRIP borehole | Tirol tree rings | Mackenzie tree rings |
| Quelccaya Summit accumulation; Quelccaya Summit dO18 | | |
| Shihua Cave, China | | |
36 Comments
Wouldn’t this be suitable for a letter or note to a journal such as Science or Nature?
So what’s wrong? It’s just Teamwork!
Maybe I’m a bit slow, but aren’t they all slightly similar?
nitpicking:
The author’s name is A. van Engelen.
A reconstruction made up of previously used proxies is hardly new. Since journals often decline to publish articles because the information has been published elsewhere, why are all these mostly repetitive studies getting into journals?
Brooks:
It is the techniques used to dig out those elusive temperature signals that keep changing. But the end result is always the same. . . .
The techniques used to dig them out don’t even change. All of them are, at heart, linear regressions in one fancy disguise or another.
w.
Another independent study may be published soon: 40% Juckes, 30% Osborn, 30% Hegerl, by Juckerlborn, Mann, Indiana Jones, et al. Wow, this one is even closer to the consensus view than all previous studies! Be afraid. Be very afraid.
Do I have that right, Steve M: there are no data points for the 3rd largest political entity on earth = Canada? Hardly global.
RE: #9 – Plus, Canada has the lowest-latitude examples of tundra and taiga, as well as the second lowest-latitude example of a significant marine feature with sea ice that freezes across annually.
#9 To be fair, Osborn and Briffa appear to have a Quebec point, and it’s a singleton.
#9 Oops! O & B and Hegerl also use a common site from Alberta.
That having been said, my comments stand. They have all avoided the Canadian Arctic. I wonder why?
The Mackenzie River from the Hegerl study is in the Northwest Territories of Canada. It is north of 60.
Three this time? Dangerous formation. Beware of three bearing hockey sticks.
Click here for example
RE: #14 – OK, I concede that. So, there is one Canadian Arctic site.
Re #9 I was referring to the matter at hand, which is the Juckes reconstruction. This is the one that appears to contain no Canadian data.
Re #15
Is that Jim Hansen in the middle?
#18
No, but interesting connection. Maybe he’s the smart brother who got some schoolin’.
I’d like to repeat my suggestion that these tables would be very suitable as the basis for a note or letter to a journal such as Nature or Science.
This is perhaps marginally on-topic because it is a new independent study that proves the hockey stick graph, a really strong one:
http://www.sitemeter.com/?a=stats&s=s24lumidek&r=35
If you wonder what the cause was, it’s because instapundit.com mentioned my article arguing that 2006 will probably be the coolest year in the last five years:
http://motls.blogspot.com/2006/12/2006-probably-coldest-year-in-last.html
re: #21
Can’t be Lubos. To be a true HS you would have had to have about 7000 hits on the 12th. The trick, remember, is to keep the MWP just marginally lower than the modern period.
Dear Dave, as you can see, my data shows that the catastrophic warming is worse than anyone so far – including Al Gore and Jim Hansen – has thought or could even imagine. When even skeptical blog counters lead to the same chilling conclusion, it’s time for an action. 😉
After reading so much (and so long) about the lack of independent data in followup studies, I have to ask a “dumb” question: how much “independent” data exists and/or is available? We non-experts might assume that the proxy modelers have exhausted the data. It would be nice to know whether they are using 10% or 90% of the available data (ice cores, tree rings, etc.).
Mark H.
Re #24
There’s plenty of data, but the Hockey Team are using selection processes to find the ones that best fit their verification period (the instrumental record) which, through the vagaries of statistics, produce hockey stick results when they are combined.
Thus, they use the same limited set of proxies over and over. It’s a delusion that real statisticians would have avoided, but none of them are real statisticians.
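The selection effect described in #25 is easy to demonstrate on synthetic data. Below is a minimal sketch, not anyone’s actual reconstruction method: every “proxy” is a pure random walk with no climate signal, yet screening on correlation with a rising “instrumental” series in the calibration period yields a composite that trends upward there. All names and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 1000, 200

# Synthetic "proxies": pure random walks containing no temperature signal.
proxies = rng.normal(size=(n_years, n_proxies)).cumsum(axis=0)

# A rising "instrumental" record over the final 100 years (calibration period).
calib = np.arange(100) / 100.0

# Screen: keep only proxies that happen to correlate with the instrumental rise.
r = np.array([np.corrcoef(p[-100:], calib)[0, 1] for p in proxies.T])
selected = proxies[:, r > 0.5]

# Averaging the screened proxies produces a composite that bends upward in the
# calibration period, even though no series contains any signal at all.
composite = selected.mean(axis=1)
composite -= composite[:900].mean()  # anchor on the pre-calibration mean

print(selected.shape[1], "of", n_proxies, "random walks pass screening")
print("calibration-period correlation of composite with the instrumental "
      "series: %.2f" % np.corrcoef(composite[-100:], calib)[0, 1])
```

The idiosyncratic wiggles of the screened walks average out while their shared calibration-period rise reinforces, which is exactly the “vagaries of statistics” point above.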
RE: #25 – or, viewed in a more sinister manner, they are socio-political activists who have infiltrated science and who abuse statistics as a tool to accomplish their real quasi Jacobin goals.
See comments on the Malcolm Hughes podcast in “Day Two at AGU” for more thoughts on “independence”.
Re #26,
I like my answer because it’s at least falsifiable. I think that intent in a lot of these cases is impossible to prove or disprove, where simple self-delusion is the most parsimonious conclusion from the data.
RE: 28
I agree. Having hung around my share of academics, self-delusion is almost as well practiced as it is in Jonestown tent revivals. That said, I was mainly interested in the data landscape. If, for example, only 20% of available ice cores are carefully picked, the whole enterprise seems absurd. However, if one randomly samples ice cores from a population, then that is a different matter.
Thanks for the suggestions…I will read.
Speaking of things “hard to prove”, not to mention opportunities for self-delusion … suppose a GHG emissions control policy were implemented, with the goal of reducing global temperatures. How would you test the policy’s effectiveness, given that you have a treatment scenario and no “control” scenario? Can we agree that the GCMs are sufficiently uncertain that a definitive test is not possible?
What, then, would be the ultimate fate of such a program? And of the tone of debate surrounding the fate of the program?
If you thought climsci has become politicized, you ain’t seen nuthin’ yet. If emissions control is a “moral question”, then what Al Gore is proposing is that we treat the climate debate much as we treat the debate over abortion or capital punishment. Doesn’t that sound fun.
RE: #30 – Just watch us Californians. We’ve gone diving in head first.
#30 — “Can we agree that the GCMs are sufficiently uncertain that a definitive test is not possible?”
I’ve been doing a little modeling of my own, and have found that the state-of-the-art GCMs produce no more than linear responses to pure GHG warming, plus wiggles. They appear to operate as though there were no climate drivers other than atmospheric gasses.
Re: #32
Pat, you mean they respond linearly to increased ghgs and not logarithmically?
#33 John, the gases increase non-linearly and so the forcing increases non-linearly. But the temperature trend has a linear dependence on the forcings. Over short periods of time, the non-linear temperature increase is almost linear because the curvature of the trend is not great.
When I projected the temperature based on summing the forcings of CO2, methane and nitrous oxide, the trend line went right through the predictions of all the best GCMs, when calculated under the same boundary assumption of 1% CO2 increase per year. To make that comparison, I had digitized the plotted outputs of 10 of the 15 GCM projections given in the LLNL intercomparison project. I have that comparison on my computer at work. I have to run up there this evening anyway. While there, I’ll send you a gif file of it.
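The point in #34 can be checked directly: under compound concentration growth, the common simplified CO2 forcing expression F = 5.35 ln(C/C0) W/m² is exactly linear in time, since ln(g^t) = t ln(g). A minimal sketch follows; the 5.35 coefficient is the standard simplified expression, while the sensitivity parameter `lam` is purely illustrative and not from any of the models discussed above.

```python
import math

F0 = 5.35     # W/m^2, simplified CO2 forcing coefficient
lam = 0.8     # K per (W/m^2): an illustrative sensitivity, not a modeled value
growth = 1.01 # 1% CO2 increase per year, the boundary assumption cited above

# Forcing after t years of compound growth:
#   F(t) = F0 * ln(growth**t) = F0 * t * ln(growth)  -- exactly linear in t.
def forcing(t):
    return F0 * math.log(growth ** t)

# The yearly increments are constant: exponential concentration growth gives a
# linear-in-time forcing, hence a linear temperature trend if dT = lam * F.
increments = [forcing(t + 1) - forcing(t) for t in range(70)]
print("yearly forcing increment: %.4f W/m^2" % increments[0])
print("warming at ~2xCO2 (70 yr): %.2f K" % (lam * forcing(70)))
```

So a straight trend line through GCM output under the 1%/yr scenario is what the simplified forcing arithmetic itself predicts; any curvature would have to come from the models’ other physics.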
Thinking in terms of practical politics, the emissions of GHG would be what the regulating bodies would attempt to control, while temperature/climate effects (and even GHG levels in the atmosphere) would become secondary and tertiary until such time as the restrictions on emissions became an adverse issue with the voting constituents. Once the pain was felt, the voting public would demand to know the relationship between the controls and GHG levels, and in turn between the GHG levels and climate/temperatures. That would be the time that skeptics’ arguments would/could come to the fore. All these reactions would be subject to some modification based on what the “natural” trends were in the climate and, particularly, in the frequency of extreme climate events. Sheer momentum of the regulation and inflexibility of government programs, in general, would tend to keep otherwise failing policy in place, while the amount of pain felt by the public would have the reverse effect.
I would judge that without convincing evidence of, at least, the semi-quantitative effects of GHG on climate, the outcomes of policy would be much the same regardless of the real GHG effects on climate. Other developments, such as higher fossil fuel prices bringing market forces to bear on the use of alternative energy sources, could override the regulation considerations.
unclearly.