We keep hearing the incantation from the Team that all the reconstructions on the Jesuit Index show a modern period warmer than the medieval period. I reported that I recently obtained a digital version of Grudd’s revised Tornetrask reconstruction and I’ve been anxious to test its impact on the Jones et al 1998 reconstruction (together with the impact of the Polar Urals update). I’d experimented a little with this previously using my own unwinding of Briffa’s “adjusting” of the Tornetrask series, but this analysis is obviously much stronger using Grudd’s version.
In doing so, I took the opportunity to re-visit and tidy my emulation of the Jones 1998 reconstruction methodology, which includes an implementation of the “variance adjustment” procedure of Briffa and Osborn (Dendrochronologia 1999), the eminent statistical successors of Sir Ronald Fisher (J Royal Statistical Society). Previously, I’d been able to directionally replicate these results – most of the series overlapped with MBH and I used the MBH versions in my emulation where available. However the replication was not nearly as precise as I wanted; I asked Jones for a copy of the data as used (which was never archived) in order to try to reconcile results. Jones (“we have 25 years invested in this”) refused.
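For readers unfamiliar with this sort of procedure, the general idea of a variance adjustment can be sketched as follows. This is a generic illustration of stabilizing the variance of a mean chronology for changing sample size, not a line-by-line reproduction of the Briffa and Osborn method; the function name and the treatment of rbar (the assumed mean inter-series correlation) are my own choices.

```python
import numpy as np

def variance_adjust(series_matrix, rbar):
    """Stabilize the variance of a mean chronology for changing sample size.

    series_matrix: 2D array (years x cores), NaN where a core is absent.
    rbar: assumed mean inter-series correlation between cores.
    Returns the adjusted mean series (as deviations from its own mean).
    """
    # number of cores contributing in each year
    n = np.sum(~np.isnan(series_matrix), axis=1).astype(float)
    mean_series = np.nanmean(series_matrix, axis=1)
    # the variance of a mean of n equally correlated series scales as
    # (1 + (n-1)*rbar) / n; multiplying deviations by the inverse square
    # root removes the dependence of variance on changing n
    scale = np.sqrt(n / (1.0 + (n - 1.0) * rbar))
    return (mean_series - np.nanmean(mean_series)) * scale
```

The practical point is that early portions of a chronology, built from few cores, would otherwise show inflated variance relative to the well-replicated modern portion.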
I caught a little break when Juckes et al was published. Although Jones had refused to provide me with the data as used in Jones et al 1998, it turned out that he was willing to provide the data as used to other Team members (though not to potential critics.) When Juckes archived his data, I noticed that the version of the Greenland dO18 data was different than the MBH version and so I was also interested in seeing the impact of changing these versions on my replication.
If you’ll bear with me, I want to document a couple of these points, before showing the impact of the new series versions. The black series in the top panel shows the difference between the archived reconstruction and my emulation. As you see, the early portion is pretty much bang on up to rounding, while the later portion isn’t bad, but the change in variance indicates that I’ve probably introduced the wrong version of one of the series. Looking closely, the change in amplitude occurs around 1659, when the Central England series (MBH annual version) was introduced.
Figure 1. Jones 1998 reconstructions, with Crete dO18 version and MBH C England annual.
As some of you may recall, the Central England version was an issue in MM03 and the MBH Corrigendum, as it turned out that, instead of using annual data as they had said in their original SI, they used a summer version starting in 1730, previously used in Bradley and Jones 1993 (though the truncation was also unreported there.) I substituted the MBH truncated JJA version and re-ran with the results shown below, this time matching before 1659 and after 1750. My guess is that there is another version of this data around somewhere, probably a summer version which coincides with the MBH version after 1730, and starts in 1659, suggesting that the explanation of the inconsistency in the Corrigendum was itself incorrect. The reconciliation is now pretty good in any event.
Figure 2. Jones 1998 reconstruction, using C England version starting in 1730.
I then made two updates to this data set: 1) replacing the Briffa version of Tornetrask, where, as discussed elsewhere, Briffa bodily adjusted the 20th century results to match his expectations; 2) using the updated Polar Urals chronology from Esper et al 2002. Using the same methodology, this yields the following result. (I have not substituted the Yamal series for Polar Urals, as Briffa did, as I am unaware of any report showing any defects in the Polar Urals update. While the Team didn’t like the results, in my opinion, that is insufficient reason to withhold the results.)
Figure 3. Jones 1998-style reconstruction, using updated Tornetrask and Polar Urals versions.
58 Comments
Now if this passes statistical muster (as I am sure it should), it would be well worthy of publication. Looks more like a topographic profile of a valley than a hockey stick. Very interesting.
Bob,
Why was it hotter 1000 years ago than today? That is the question, but it is much too sensitive to bring up.
I fear some miss the point. This is a demonstration of the sensitivity of temp reconstructions to proxy selection. It doesn’t mean that Steve’s alternative is any “better”. The issue of the sensitivity of temp reconstructions to proxy selection is one well worthy of publication, as it would (hopefully) force the paleoclimate community to justify their choice of proxies.
James, after reading all (or most) paleo material available on this site, it seems to me that the point is more basic than proxy justification: it is that in most cases paleoclimate data is at best regional, and more likely local. The extrapolation to anything on a larger scale is no more scientific than palm reading.
I am new around here. Could anybody tell me how this last reconstruction compares with other reconstructions that use ice cores or ocean sediments as proxies?
What does the “deg C (1960-90)” on the Y-axis mean? I guess it is degrees Celsius, but what does the time interval (1960-90) stand for?
Peter, yes to what you say, but the point nevertheless is about proxy selection.
Urederra, Steve is not proposing this as a new reconstruction (see above).
Another point that I want readers to reflect about in these sorts of graphs is what exactly it means – if anything – for a reconstruction to be 99.9% (or 99.9999% “significant”) in the non-statistical usage of PR challengers Ammann and Wahl. The original recon and the sensitivity recon are indistinguishable in the calibration-verification period for statistical purposes – with noticeably different trajectories only before 1650 or so. How can one trajectory be 99.9% significant and not the other? How can both be 99.9% “significant” and yield markedly different results in their early periods – a question that I’ve raised on other occasions, notably in connection with the online review of Juckes, where the point was ignored.
But it’s an important point. The only answer as far as I’m concerned is that neither reconstruction is 99.9% “significant”. In fact, that’s not the sort of phrase that really has any statistical meaning.
If you re-approached the matter from a likelihood perspective, which seems long overdue, the likelihood statistics of each reconstruction would be virtually identical. However there are a couple of things counting against the original reconstruction. Briffa “adjusted” the Tornetrask series in an ad hoc and unacceptable way; and the early portion of Briffa’s Polar Urals series does not meet QC standards common in the trade. So the recons may be undecidable in their likelihood stats, but the “sensitivity” version has not been manipulated.
As James Lane observes, for the 8000th time, I did not propose this or similar graphics as an “alternative” view of the world. I’m merely observing that the medieval-modern differential in these recons can change under minor and plausible variations in proxy versions.
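The point about calibration-period “significance” can be illustrated with a toy simulation (entirely synthetic numbers, not actual proxy data): two “reconstructions” that track the instrumental target equally well in the calibration period can nonetheless carry very different medieval levels.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1000, 1981)
calib = years >= 1881                                  # 100-year calibration window

# synthetic "instrumental" target: a warming trend plus measurement noise
target = 0.01 * (years[calib] - 1881) + rng.normal(0, 0.1, calib.sum())

def make_recon(medieval_level):
    """A toy reconstruction: tracks the target in calibration, differs before 1650."""
    r = rng.normal(0, 0.1, years.size)                 # proxy noise everywhere
    r[calib] += target                                 # faithful in the calibration window
    r[years < 1650] += medieval_level                  # only the medieval level differs
    return r

a = make_recon(0.0)                                    # "cold" Middle Ages
b = make_recon(0.6)                                    # "warm" Middle Ages
r_a = np.corrcoef(a[calib], target)[0, 1]
r_b = np.corrcoef(b[calib], target)[0, 1]
```

Both calibration correlations come out well above 0.9 and within a whisker of each other, yet the two series differ by roughly 0.6 deg C before 1650: identical “significance”, opposite medieval stories.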
If I’m correct, this further discounts the idea of teleconnections in the series?
So, we’ve gone from a Hockey Stick to a Half Pipe. Totally rad, dude!
Of the many quite serious statistical points Steve has made, this one is so simple as to defy belief that people don’t get it: simple changes in which proxies are used can give completely different results with the same verification period fit. There has been no response to this point in the literature, and authors continue to pick proxies with no justification at all. This is not a trivial point at all and the dendros really need to take notice of this.
This point appears in referenceable literature together with the absurd non-reply by Juckes et al.
McIntyre, S (2006). Clim. Past Discuss., 2, S708–S712, 2006 stated:
“Reconstructions that are slightly varied from the Juckes reconstruction (but with different medieval-modern relationships) are also “99.98% significant” by the criterion of Juckes et al. Obviously the two different reconstructions cannot both be “99.98% significant” – evidence that neither reconstruction is “99.98% significant”.”
Juckes, M. et al (2007). Clim. Past Discuss., 2, S918–S921, 2007 replied, in a way typical of the Team, simply re-iterating their point in a louder voice, without responding to the question:
“Para 8: The significance given is, as stated, the significance of the correlation between the composite and the instrumental temperature in the calibration period.”
We previously discussed the abject editorial failure of Climate of the Past to deal with the open review process. Contrary to journal policy, Editor Goosse, a Team co-author, disregarded the open review comments and provided a closed review process for Juckes et al., something that both Willis and I were annoyed about at the time and remain annoyed about. The issue highlighted here was an important one, it was raised in discussion and completely disregarded.
Is this evidence that climate reconstructions are extremely sensitive to proxy selection?
Or is this evidence that the Team has deliberately cherry-picked and altered data in an effort to exaggerate the magnitude of the recent warming relative to the more distant past?
Steve has been very careful to avoid charging the Team with deliberate scientific misconduct. I think we’ve reached the point where such a conclusion is unavoidable.
The vast majority of the scientific community believes that existing peer review processes are sufficient to prevent this sort of misconduct nine out of ten times.
What Steve has detailed on this site represents a complete break down of the peer review process when applied to analysis that furthers the consensus view.
Call me naive, but I think that his results can be distilled into a short (~5 page) position paper (circulated, not published) that would create genuine outrage in much of the scientific community.
Steve: You really have to be careful about going a bridge too far. Too often, people get excited and over-state things and give an easy retort. There are real issues and concerns, but be careful not to go from something that is carefully expressed and valid to something that is excited and invalid.
5 Urederra “deg C (1960-90)”
Yes, Celsius. AFAIK, 60-90 is the calibration period, specifically the CRU base period that the modern anomaly is based upon.
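To make the axis label concrete: the values are anomalies, i.e. departures from the series mean over a fixed base period. A minimal sketch (the 1961-90 window below is the usual CRU convention and is my assumption; the exact window on this particular axis may differ):

```python
import numpy as np

def to_anomalies(values, years, base=(1961, 1990)):
    """Express a series as departures (anomalies) from its mean over a base period."""
    values = np.asarray(values, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    return values - values[in_base].mean()
```

A flat series becomes all zeros, and raw temperatures become deviations from their base-period mean, which is why modern values cluster near zero on such axes.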
12,
Given the stakes, I think that 9 of 10 would still be irresponsible.
Of course, given the testimony we saw before Congress, there is no sound reason for believing that peer review will catch 9 of 10. Or even 5 of 10.
Steve: You have to be careful in what you call misconduct. Not every point of difference is an issue of misconduct.
Jason
Only nine times out of ten? If it’s insufficient one time out of ten, things are very bad!
I think self-policing prevents misconduct at least nine times out of ten. Beyond that, I’m not sure how much protection peer review provides. Peer review is much better for reducing the amount of absolute trivia, reducing the amount of unreadable stuff, and catching some obvious mistakes. Even at that, it only reduces the amount of dreck that gets published.
Peer review is probably better at catching mistakes than scientific misconduct. If someone were to set out to intentionally cheat or distort, the peer review process is not really designed to catch that. If a cancer researcher paints spots on mice using a felt tip pen, peer review of individual papers is unlikely to spot that. How can someone reviewing a paper determine data are faked? Or even fudged or finagled? Unless you try to repeat the full analysis, you can’t. No one is going to stop to replicate research merely to review a paper.
Actual fraud tends to be discovered after the paper passes peer review and gets published, when others find it impossible to replicate.
However, failure to replicate doesn’t necessarily suggest fraud. It could just be a mistake.
Fraud is also sometimes discovered when lab technicians “snitch”, or when interested readers notice something odd in the paper itself. (These readers may be competitors, or not infrequently, graduate students!) When competitors or grad students detect outright fraud, we end up witnessing some drama.
It would be very hard to make a process that catches 99% of deliberate misconduct. It’s just too easy to get away with.
But I think this could reasonably be rewritten as follows:
“The vast majority of the scientific community believes that existing peer review and community processes effectively prevent major replicated scientific conclusions from being the result of professional misconduct.”
We aren’t dealing with a single paper. We are dealing with a whole bunch of papers that repeatedly reach a conclusion of great significance; a conclusion which cannot be replicated without committing or reusing the results of scientific misconduct.
For most academics, the prospect of this sort of collaborative misconduct is shocking and unthinkable. If compelling evidence that such misconduct occurred is presented alongside overwhelming evidence that data supporting the misconduct has been covered up, I think it would inspire action.
On the other hand, I doubt that any number of accusations of flawed calculations will result in change. Errors are a natural part of the scientific process.
But I don’t think we got to where we are today because of natural errors. We arrived here because the community wanted evidence supporting the need for immediate action curbing greenhouse gas emissions, and a few mutually supportive researchers decided to manufacture some.
In my opinion, the essence of the problem is not that some people were willing to step over the line. I think this is inevitable. The essence of the problem is that the scientific community lacked the processes necessary to detect and prevent this.
Steve: As I’ve said many times, I think that far too much weight is given to journal peer review as the end of due diligence. It’s a cursory and limited form of due diligence. Recently, for example, Ammann and Wahl didn’t even include their SI; no reviewers noticed; Ammann refused to provide it when requested and I noticed yesterday that it’s online a year late. My recommendation is not that peer review should be more exhaustive, but that data and code be archived contemporary with publication so that the process of verification by third parties is more efficient. Right now, authors can make it impossible to replicate work or check statistics – they say, go to the Himalayas and take your own ice core. Sure. Clone your own sheep.
Steve,
I understand your reluctance to use the word misconduct.
I agree that in presenting the issue misconduct may be the wrong word to use.
But I think that misconduct is what we are actually looking at.
My main argument (not limited to just your latest post, but focusing on your analysis of the team’s research as a whole) is twofold:
1. The evidence that you have collected, if effectively summarized and presented, is enough to make a very strong case that the team has engaged in deliberate misconduct (even if that word is never used).
2. If instead of making the case for scientific misconduct, the team’s actions are presented as a series of unfortunate errors, nobody is going to feel compelled to act.
This information combined with Koutsoyiannis et al 2008 throws quite a shadow on modern climate science.
Steve,
One of the most important questions from a public policy standpoint in the whole climate change debate has been (and still is):
“Are global average temperatures today warmer than they were in the MWP, or indeed at any time in the past 1000 years?”
Your work has shown convincingly that a group of scientists could (and did) manufacture an affirmative answer to that question by the use of a few carefully selected proxy series, along with at least one significant ad hoc adjustment.
Your efforts, and those of many others who regularly contribute to CA, have demonstrated that The Team has not proven that their methods are robust or that their conclusions are appropriate. Nor has Briffa offered an adequate explanation for his ad hoc adjustment.
Perhaps this is as far as you are willing to go in response to the policy question posed above. But the debate will carry on nonetheless. The Team was not at all hesitant to broadcast their conclusions to the media, politicians, educators, and casual observers. As a result, almost everyone in the general public now believes the “hottest in the past 1000 years” claim.
What do you believe people should be told now when they ask that question?
Steve: It still could be true. I haven’t shown or even tried to show that it isn’t. I’m merely saying that the tree ring and other proxy data doesn’t support grandiose claims. That also doesn’t prove that CO2 is not a problem. To the extent that I can, I’ve tried to encourage policy advocates to focus on their key arguments and show, without arm waving, how doubled CO2 leads to 3 deg C – and I do not share the view of many readers that such exposition is impossible. I think that advocates have been a bit lazy in relying on easy images rather than working through the sort of exposition that I think would be more helpful.
1) Is this sensitivity to proxy selection only a problem for tree ring proxies, or is it common in other proxies?
2) re #12
“You really have to be careful about going a bridge too far”
If I said dendrochronology appears to offer no value for global temp reconstructions, presumably that would be going a bridge too far. So I won’t say that.
Try this on for size: the hypothesis that dendrochronology is useful for global temperature reconstruction can neither be accepted nor rejected at present.
Doesn’t it become misconduct when one willfully ignores the identified problems and knowingly perpetuates bad data?
It seems to this non-expert that tree ring morphology can be influenced by species, temperature, rainfall, CO2 level, atmospheric density (altitude), tree age, soil composition, sunlight/shading (e.g. forest density, which side of the mountain, pervasive overcast), and other factors unknown to a non-expert like me. And, not necessarily in that order.
I seriously question the utility of tree ring data for temperature reconstructions. Steve’s outstanding work reinforces that “question”.
At the very least, dendrochronologists using ring morphology to determine ancient temperatures have the burden of proof that such is fundamentally good science — a burden which is yet to be met so far as I can tell.
Steve you’ve been told so many times that this is so basic and accepted by climate scientists that you’re wasting their time. Everyone in the profession knows 2xCO2 leads to 2-5.7 deg C warming, we don’t have time to point you to all the literature that’s proven it from first principles and experimentation. Jeez, it’s as fundamental as aspirin on the tooth takes away toothache, and toast always falls butter side down. If I can be bothered I’ll try and dig out my Dummies guide to AGW I’m sure it’s all in there. Move on, get yourself on the climate change bandwagon before it’s too late and the money train stops.
I would certainly not call any of this scientific misconduct; it would have to be science in the first place. This is propaganda, hyperbole, advocacy or some other thing (or blend of such things) instead. Regardless of how much Shinola is put on certain substances, it’s still not a pair of shoes.
Substitute error bars for Shinola and you’re on to something.
In grad school the prof in the next office had a poster that said “If you can’t dazzle them with brilliance, baffle them with bull***t” The problem is that all too often reviewers allow themselves to be baffled by complexity of method–if it is complex and scientifical sounding, it must be state of the art. They don’t consider that maybe even the authors don’t know what it means.
When you combine what Thomas Kuhn would call a “pre-paradigm science” ( http://facweb.bcc.ctc.edu/wpayne/thomas_kuhn.htm) , Climatology, with a few of Eric Hoffer’s “True Believers” (http://en.wikipedia.org/wiki/The_True_Believer) the potential for misconduct becomes overwhelming. Mix together a little bit of: “Man is bad”, “Man is using up all the world’s resources” and “Man is trashing the planet” with a little “CO2 is a greenhouse gas” and pretty soon all the data point to catastrophic climate change. Of course they first tried “a new ice age”, but that didn’t fly, so now we have “global warming”, and interestingly, all with some of the same data, just massaged a little differently.
The problem with “true believers” is that they don’t realize when they have left the world of science and the scientific method and entered the netherworld of pseudo-science, all for the sake of a cause in which they truly believe. A little pre-selection of data isn’t bad, nor is it misconduct, nor is finding the right statistical method that proves your theory. To paraphrase one Nobel Laureate, “there is nothing wrong with using a little exaggeration to make your point.”
All of a sudden, you now have a new paradigm that most of us would like to believe in, at least the part about man destroying the environment. We would also like to believe that those proposing the paradigm are competent professionals, so why question them or their methods?
When this marriage of Kuhn and Hoffer arises, neither competent journal editors nor the peer review process will have any effect. The archiving of all data and processes may eventually lead some curious grad student to attempt to replicate the results, but I wouldn’t count on it. We also can’t depend forever on people like Steve McIntyre, Ross McKitrick, Anthony Watts, etc. to decide, on their own time, to get involved. There has to be, in place, a requirement that all science on which public policy decisions are to be based must be thoroughly audited before consideration, and must successfully pass that audit.
Joe Crawford
PS – Thank you Steve, Ross, Anthony, and the many on this blog for the many hours of hard work spent in auditing the misguided effort of a few “True Believers” to justify AGW whether that paradigm is true or false.
The question of misconduct or malfeasance is tricky because you are accusing someone of willfully doing something they know to be wrong. For sure, that can be a dangerous place legally. I think a much better and safer term to use in a forum such as this would be misfeasance, where an action is acknowledged to have been taken but just not done fully or as competently as it could have been. Nonfeasance is simply not doing an activity agreed upon previously, but misfeasance admits the action was attempted yet failed to meet a specified standard. I think with all we have seen of the IPCC and its contributors and reviewers, accusing them at the very least of some level of misfeasance is pretty much accurate.
#29 mbabbitt — “We have to get rid of the Medieval Warm Period”. …speaks to intent.
24 Ian
That is nonsense. From first principles, one gets only about 1.2C. If it is so clear, (a) why is the range of uncertainty so large (2-5.7C), and (b) why doesn’t such a ‘proof’ appear in a review article (or textbook) which you could point to, as would be true in any branch of Physics I am familiar with?
You are arm-waving, without any supporting documentation. That is not science.
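For what it’s worth, the “about 1.2C from first principles” figure is standard back-of-envelope arithmetic. The constants below are conventional round numbers of my own choosing, not values taken from this thread:

```python
import math

F2x = 5.35 * math.log(2)      # forcing from doubled CO2, ~3.7 W/m^2 (Myhre et al. fit)
T_s = 288.0                   # global mean surface temperature, K (round number)
OLR = 240.0                   # outgoing longwave radiation, W/m^2 (round number)

# blackbody scaling dT/T = dF/(4F) gives the zero-feedback temperature response
dT = T_s * F2x / (4 * OLR)
```

This lands near 1.1 C. Everything between that zero-feedback figure and the 2-5.7 C range quoted upthread is carried by assumed feedbacks, which is precisely where the documentation Derek asks for would have to do its work.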
#31
Watch out for irony in #24
#11 — Steve M. wrote, “This point appears in referenceable literature together with the absurd non-reply by Juckes et al.
“McIntyre, S (2006). Clim. Past Discuss., 2, S708–S712, 2006 stated:
“”Reconstructions that are slightly varied from the Juckes reconstruction (but with different medieval-modern relationships) are also “99.98% significant” by the criterion of Juckes et al. Obviously the two different reconstructions cannot both be “99.98% significant” – evidence that neither reconstruction is “99.98% significant”. See http://www.climateaudit.org/?p=903”
“Juckes, M. et al (2007). Clim. Past Discuss., 2, S918–S921, 2007 replied, in a way typical of the Team, simply re-iterating their point in a louder voice, without responding to the question:
“Para 8: The significance given is, as stated, the significance of the correlation between the composite and the instrumental temperature in the calibration period.“”
Juckes’ reply is a tacit admission that the statistical significance of his reconstruction does not extend past the validation period. That is, he is repudiating the significance of the entire tail of his reconstruction from 1880 out into the Medieval past. Any editor with scientific sense would have realized that Juckes was thereby jettisoning the entire thesis of his paper. It should have been rejected forthwith. And that doesn’t even touch the general lack of physical meaning.
One might say, Steve, that your review got Goossed
Ian was being sarcy about doubled CO2, toast, and band wagon etc.
I Googled “We have to get rid of the Medieval Warm Period”, and it took me to some interesting sites including the discussion page for the Medieval Warm Period Wikipedia entry.
They show this graph.
The reconstructions used, in order from oldest to most recent publication are:
(dark blue 1000-1991): P.D. Jones, K.R. Briffa, T.P. Barnett, and S.F.B. Tett (1998), The Holocene, 8: 455-471.
(blue 1000-1980): M.E. Mann, R.S. Bradley, and M.K. Hughes (1999), Geophysical Research Letters, 26(6): 759-762.
(light blue 1000-1965): Crowley and Lowery (2000), Ambio, 29: 51-54. Modified as published in Crowley (2000), Science, 289: 270-277.
(lightest blue 1402-1960): K.R. Briffa, T.J. Osborn, F.H. Schweingruber, I.C. Harris, P.D. Jones, S.G. Shiyatov, and E.A. Vaganov (2001), J. Geophys. Res., 106: 2929-2941.
(light green 831-1992): J. Esper, E.R. Cook, and F.H. Schweingruber (2002), Science, 295(5563): 2250-2253.
(yellow 200-1980): M.E. Mann and P.D. Jones (2003), Geophysical Research Letters, 30(15): 1820. DOI:10.1029/2003GL017814.
(orange 200-1995): P.D. Jones and M.E. Mann (2004), Reviews of Geophysics, 42: RG2002. DOI:10.1029/2003RG000143.
(red-orange 1500-1980): S. Huang (2004), Geophys. Res. Lett., 31: L13205. DOI:10.1029/2004GL019781.
(red 1-1979): A. Moberg, D.M. Sonechkin, K. Holmgren, N.M. Datsenko and W. Karlén (2005), Nature, 433: 613-617. DOI:10.1038/nature03265.
(dark red 1600-1990): J.H. Oerlemans (2005), Science, 308: 675-677. DOI:10.1126/science.1107046.
After some reading, it occurred to me that this may be an opportunity for a few CA contributors to contribute to the Wiki. Or at a minimum, perhaps Steve would like to offer a reconstruction to add to the spaghetti chart.
http://en.wikipedia.org/wiki/Medieval_Warm_Period
In Pat’s defense.
The trouble is that what seems to be obvious sarcasm to most of us is very much like what warmers actually say. So someone like Pat who has seen a lot of this line of defense has probably had his irony detector blunted.
Wikipedia is patrolled by members of the Team who have been granted administrative privileges. Any information which directly contradicts their published research will be removed. Any information which casts doubt on their certainty will be minimized and then gradually removed over time.
32 34
I guess my sarcasm detector was on the blink today. I thought he was one of Phil’s pals……
36
Thanks, Dave, you got it dead right. I don’t usually fall for sarcasm, this one was too subtle for me.
William Connolley and Kim Dabelstein Petersen, among others, use their privileges at Wikipedia to ruthlessly and often viciously suppress any dissenting contributions to Wikipedia on this subject. Their misconduct has effectively censored scientific free speech and discussion of the subject on Wikipedia.
See:
Wikipropaganda: Spinning green. By Lawrence Solomon; National Review Online; July 8, 2008 6:00 AM.
Is Wikipedia Promoting Global Warming Hysteria? By Noel Sheppard; July 9, 2008 – 11:29 ET
http://newsbusters.org/blogs/noel-sheppard/2008/07/09/wikipedia-promoting-global-warming-hysteria
User talk:William M. Connolley, From Wikipedia, the free encyclopedia
http://wikipediareview.com/index.php?showtopic=19501&mode=linear
As can be seen, you can try to introduce scientific evidence and balance into the Wikipedia articles, but don’t be surprised if it proves to be a waste of time and effort due to blatant political and ideological censorship.
Petty, but sarcasm and irony are different.
None of the scientific methods referred to here to study the climate are “pre-paradigm” in Kuhn’s sense, at least as I read Kuhn (it’s been forty years). Surely that the climate fluctuates is not a Copernican revolution or on the order of discovering germs and the germ theory for explaining disease. Germs were an unknown, while all the relevant variables for climate explanations have been “known” for years. The problem, from what I gather in following the discussions here, is not the variables per se but the inability to replicate or falsify any studies because of the denied access to the variables or their proxies used in the studies. That’s a political problem rather than a problem of science, strictly speaking.
snip
38 Pat, I’m really really sorry. I have absolute respect for you; sometimes you reach the point where you begin to wonder if you’re being really dense. Phil et al have told us so many times the science is settled on CO2 and that he literally had the manuscripts on his desk which prove it. Steve called him out on this months ago, and repeated the call several times in another thread, and whilst Phil is still actively posting, to this topic he seems to have turned a blind eye. I can’t imagine how the Team, or the IPCC, keep a straight face when they meet; it’s so absurd that we can be this far down the track without someone, anyone, being embarrassed to the point where they criticise themselves. It’s so frustrating, [snip], somebody must care – present company excepted of course. Steve’s attitude is a model of patience; he’s said many times he’s equivocal on whether AGW or AGC is happening, but jeez, provide some solid engineering studies before you construct the house of cards. If the AGW proponents are proven right, then from what I can see of the paper trail, it’ll be more the luck of the dice than valid science. Sorry again, moving on as someone once said 😦
Steve: I’ve never suggested that nothing be done pending some “solid engineering studies”. If you are advised by reputable authorities that there is a problem, sometimes it’s prudent to make decisions without “solid engineering studies”. I’ve said over and over that, if I had a big policy job, I would rely on advice from properly instituted authorities. However, I see no harm and much potential good in approaching the problem as engineers would.
Re #43
That was answered back in June.
Steve: The “answer” was that Phil could not provide a reference deriving 3 deg C from doubled CO2, but a reference to another matter – square root or logarithmic dependence, which Phil said was all that he had undertaken to provide. He argued that I was asking more from him than he had agreed to provide. Be that as it may, the key point is that Phil did not provide the long sought exposition, but a reference to something that, for me, wasn’t at issue and where I was well aware of the references.
re #35, 40
to be fair to Wikipedia, there’s not a complete shut-out on ‘dissenting’ global warming views (yet) :-
(or maybe Connolley just hasn’t got round to editing that page yet….)
“there’s not a complete shut-out on ‘dissenting’ global warming views (yet),” but not for want of trying. Even that article has been nominated for deletion twice in the past two years, and you’ll find Connolley’s remarks supporting deletion in one of the discussions.
TerryB: Note the subtle title to the page: “List of scientists opposing the mainstream scientific assessment of global warming.”
James,
it is, as you say, “a demonstration of the sensitivity of temp reconstructions to proxy selection.”
But it goes far beyond that. As far as I can see, Steve M has tried to replace bad science with good science. He says, for example, “Briffa bodily adjusted the 20th century results to match his expectations”. If that is true, and I’ve no reason to doubt it, then this is an example of very bad science that should be consigned to the rubbish bin.
The message of this piece is very clear: when you replace bad science with good science then the Medieval Warm Period comes roaring back. This happens in countless studies, for example when a small selection of questionable tree ring proxies are replaced with a wide range of other proxies.
[snip -policy]
Chris
Steve: I don’t argue that one squiggle is “right” and one squiggle is “wrong”. I’ve not argued that the proxies prove the existence of an MWP. I’m merely arguing that these studies do not prove what they claim. Also, as always, remember that there are other lines of evidence and concern regarding CO2 besides proxies. As an AR4 reviewer, I suggested that the entire paleoclimate chapter be deleted if it wasn’t relevant to the “key” arguments.
Chris, in a word “no”, that’s not the message of this piece.
Had this in my mind all week.
Para 8: The significance given is, as stated, the significance of the correlation between the composite and the instrumental temperature in the calibration period.
So actually they are admitting that Steve’s version is as “significant” as theirs because he got the same result.
I wrote to Greg Wiles as follows:
James – and Steve,
perhaps I should have written it slightly differently. When I wrote “The message of this piece is very clear….” I was not really referring to Steve’s message specifically (he has made his own message very clear!) I was really referring to the conclusion that many people might draw from this piece. Steve’s aim was not to ‘prove’ a strong MWP – but nevertheless his findings do support it.
Steve,
keep up the good work! I’m still trying to digest your piece on Ammann and Wahl….
Chris
Steve: re D’Arrigo et al 2006 citing Wiles et al 2005 (in prep) –
It is odd, because I have never been able to cite a paper that was in prep or submitted. Never. The reviewers and editors at the journals I publish in insist on at least “In Press”, and even that doesn’t make them happy.
#53. Craig, you need to join the Team; they’re much more free-wheeling about this.
Actually, I wonder whether Wiles et al 2005 is even “Wiles et al 2005, in prep” as opposed to “Wiles et al 2005 (gleam in eye)” or “Wiles et al 200x (some day when we get around to it).”
Obviously, I’ve commented disapprovingly of this on other occasions. The beauty of the system for the Team is that if they can get a comment in print citing something in prep or under review as authority, then, even if the cited article is rejected, they can use the approved article as authority. I encountered this with a comment in Jones and Mann 2004 on MM, which cited Mann et al, under review (which was rejected). Jones and Mann 2004 had no other authority for the comment, but, having passed the “rigorous” peer review, it could now be cited, so that it was irrelevant whether they actually proved the point.
No thanks, I’m not a Team player. And for some reason I have an aversion to Groupthink.
I just reviewed a paper (on a PCA-based method, no less) in which the author failed to provide an appropriate reference (he mentioned the other author’s name, but not the actual citation). For this reason alone (though there were a few other boo-boos) I listed my recommendation as “minor revision required.” Proper referencing is important. Referencing something that has not yet been accepted by anyone is unacceptable.
Btw, the author was using a rather novel implementation of PCA that I found quite interesting. It was, however, applied to an uncontroversial topic, one that would be testable if anyone chose to do so.
Mark
Mark T.
How about referencing unpublished data? I see that all the time. Ramanathan most recently.
Well, there isn’t a whole lot of “data” per se that can be referenced in the arenas that I would be reviewing anyway, but this author had lots of referenced data in his paper. The author used large tables from readily available sources.
Generally speaking, the engineering world that I play in thrives on simulated data since a solid physical link to the real world exists (Maxwell, Shannon, Nyquist, etc.). Rarely is data even an issue regarding the validity of a paper. When data are unpublished, such as radar return data, or communication link data, people aren’t publishing papers based on it, either.
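As an illustration of that “solid physical link”: simulated data in signal processing can be checked exactly against theory. A minimal sketch (all parameters invented) that generates a sampled tone and confirms the DFT peak lands where the sampling theorem says it must:

```python
import cmath
import math

# Invented parameters: a 50 Hz tone sampled at 1000 Hz, comfortably
# above the Nyquist rate of 2 * 50 = 100 Hz.
fs = 1000.0
f0 = 50.0
n = 256
x = [math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]

def dft_peak_bin(signal):
    """Return the non-negative-frequency DFT bin with the largest magnitude."""
    m = len(signal)
    def mag(b):
        return abs(sum(signal[k] * cmath.exp(-2j * math.pi * b * k / m)
                       for k in range(m)))
    return max(range(m // 2 + 1), key=mag)

peak = dft_peak_bin(x)
# The estimate is within one bin width (fs / n) of the true frequency.
est_freq = peak * fs / n
```

Anyone can rerun this and verify the result against first principles, which is why unpublished measured data is rarely the crux of such papers.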
If someone pushed a paper through my hands that said “we calculated a bunch of stuff on our proprietary data and trust us, it worked,” I’d surely reject it, btw.
Mark