The revised PAGES2K Arctic reconstruction used 56 proxies (down three from the original 59). Although McKay and Kaufman 2014 didn’t mention the elephant-in-the-room changes in their reconstruction (as discussed at CA here and here), they reported with some satisfaction that “decadal-scale variability in the revised [PAGES2K] reconstruction is quite similar to that determined by Kaufman et al. (2009)”, presumably thinking that this replication in the larger dataset was evidence of robustness of at least this property of the data. However, while the decadal-scale similarity is real enough, it is more of a tautology than evidence of robustness, as 16 of the 18 most heavily weighted PAGES2K proxies come from the Kaufman et al 2009 network (the 22 Kaufman 2009 proxies being assigned over 80% of the total weight and the other 34 proxies under 20%).
The Decadal-Scale Similarity
McKay and Kaufman illustrated the decadal-scale similarity between the Kaufman et al 2009 reconstruction and the PAGES2K Arctic (revised) reconstruction in their Figure 2d (shown below). A similar point could have been made about the PAGES2k-2013 version as well. The decadal-scale similarity is real enough.
Figure 1. McKay and Kaufman 2014 Figure 2d showing – on an inconsistent scale – revised PAGES Arctic 2K (red) and Kaufman 2009 (black).
As noted in our opening discussions, the scales of the Kaufman et al 2009 and PAGES2K reconstructions are not the same in the above diagram. Figure 2 below compares the Kaufman and PAGES2K Arctic (revised) reconstructions on a consistent scale, overplotting PAGES2K data onto an image used by the New York Times to illustrate the Kaufman reconstruction. From this perspective, the difference in scale is most manifest as much cooler temperatures in the Little Ice Age, especially in the early 19th century.
Figure 2. Comparison of revised PAGES Arctic 2K (blue) and Kaufman 2009 (black), overplotting onto New York Times figure.
Barplot of Weights
The reason for the similarity is not robustness of the results to the new data, but the weights assigned to the proxies by the paico algorithm as implemented by PAGES2K (presumably unintentionally). In Figure 3 below, I show a barplot of weights for the PAGES2K proxies, calculated by Jean S using the paico decomposition method that we recently used in connection with the Hanhijärvi data.
In the previous post on “Paico Decomposition”, Jean S and I showed that the effective weight of each proxy in a paico reconstruction could be estimated from a dataset in which each column was the difference between the base (uncalibrated) reconstruction and a reconstruction in which the corresponding proxy was inverted. The sum of these columns closely approximated the base reconstruction. Thus, the standard deviation of each column measured the effective weight of each proxy.
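For readers who want to experiment, the decomposition can be sketched in a few lines. The following is a minimal Python sketch (not the script used for the actual calculation), with recon() standing in as a hypothetical paico-style reconstruction routine:

import numpy as np

def effective_weights(proxies, recon):
    """Estimate effective proxy weights by the inversion method described above.

    proxies : (n_years, n_proxies) array of proxy series
    recon   : callable mapping a proxy matrix to an (uncalibrated) reconstruction
    """
    base = recon(proxies)
    n_proxies = proxies.shape[1]
    diffs = np.empty((len(base), n_proxies))
    for j in range(n_proxies):
        flipped = proxies.copy()
        flipped[:, j] = -flipped[:, j]        # invert (sign-flip) proxy j only
        diffs[:, j] = base - recon(flipped)   # column j: effect of inverting proxy j
    # as described above, the sum of these columns closely approximates the base reconstruction
    weights = diffs.std(axis=0)               # std dev of each column = effective weight
    return weights / np.sqrt(np.sum(weights ** 2))   # standardize so sum of squares = 1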
The barplot in Figure 3 below shows the effective weight of each proxy calculated using this methodology (with the weights standardized so that the sum of squares is equal to 1). Proxies previously used in Kaufman et al 2009 are shown in red and other proxies in black.
One easily observes that nearly all of the most heavily weighted proxies had been previously used in Kaufman et al 2009 (16 of top 18; 18 of top 21), with over 80% of the total weight assigned to Kaufman proxies and under 20% to the 34 “new” proxies. The average weight of a Kaufman-2009 proxy is nearly 5 times greater than the average weight assigned to other proxies.
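The shares quoted above follow directly from the weight vector. A minimal sketch (the weights vector and the is_kaufman indicator marking the 22 Kaufman et al 2009 proxies are hypothetical placeholders):

import numpy as np

def weight_summary(weights, is_kaufman):
    """Split total weight between Kaufman et al 2009 proxies and the 'new' proxies."""
    w = np.asarray(weights, dtype=float)
    is_kaufman = np.asarray(is_kaufman, dtype=bool)
    kaufman_share = w[is_kaufman].sum() / w.sum()             # reported as over 80%
    new_share = 1.0 - kaufman_share                           # reported as under 20%
    avg_ratio = w[is_kaufman].mean() / w[~is_kaufman].mean()  # average Kaufman weight vs average 'new' weight
    return kaufman_share, new_share, avg_ratio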
Figure 3. Barplot of estimated effective weight of each proxy in PAGES2K 2014 (Arctic). Red – also in Kaufman et al 2009; black – not in Kaufman et al 2009. Interestingly, the top-weighted proxy is the Hvitarvatn proxy, the orientation of which was inverted between PAGES2K-2013 and PAGES2K-2014. Its high weighting undoubtedly explains the majority of the large change between the two reconstructions. Also note that three Briffa series are in the top 10 (including the 2008 Yamal superstick in the top four).
Discussion
Because of the heavy weighting of Kaufman et al 2009 proxies, the McKay and Kaufman conclusion that the “decadal-scale variability in the revised [PAGES2K] reconstruction is quite similar to that determined by Kaufman et al.” is, as advertised above, more of a tautology than evidence of robustness of the result in the additional data.
At the end of the day, any proxy reconstruction either is a linear combination of the underlying proxies or can be closely approximated by such a linear combination. Over the years, I’ve consistently urged that the effective weights be shown for novel methods. Had this been done, I doubt that the above weights would have been the result, since it’s hard to believe that the Arctic2K authors intentionally adopted the above weights. Jean S has done some experiments, and there are definitely alternative weighting schemes that can result from slightly varied implementations of paico.
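The linear-combination point can be checked empirically for any reconstruction with an archived proxy matrix: regress the reconstruction on the centered proxies and see how much of its variance the fitted combination recovers. A minimal sketch, with hypothetical inputs:

import numpy as np

def approximate_as_linear_combination(proxies, reconstruction):
    """Fit a reconstruction as a linear combination of its proxies.

    proxies        : (n_years, n_proxies) array, aligned with the reconstruction
    reconstruction : (n_years,) series to be approximated
    Returns the least-squares coefficients (one implied weight per proxy) and
    the fraction of the reconstruction's variance that the fit explains.
    """
    X = proxies - proxies.mean(axis=0)
    y = reconstruction - reconstruction.mean()
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coefs
    r_squared = 1.0 - residual.var() / y.var()
    return coefs, r_squared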
As CA readers are aware, I remain dubious that material benefits arise from putting relatively simple datasets into increasingly complicated and poorly understood multivariate methods. I remain of the opinion that there are better opportunities for improving the analysis by first comparing like proxies across regions and unlike proxies within a region, before venturing into the assimilation of unlike proxies from different regions. But this recommendation has been mostly rejected by specialists in the field, who remain committed to dumping data into black boxes, yet get huffy when the resulting defects are criticized.
Finally, nearly all of the difference between PAGES2K-2013 and the revised result arises from a single proxy (Hvitarvatn, used upside down in the earlier version). Some readers have expressed surprise at the idea that specialists could use proxies upside down, observing that their interpretation as temperature proxies must be very tenuous if even specialists didn’t know which way was up, particularly in a multi-author Nature article subsequently relied upon by IPCC. I agree with this and have written numerous articles critical of varvology, whose proxies have become widely used in post-AR4 multiproxy studies. I think that there may well be usable information in this data, but as long as thick varves are interpreted by some specialists as evidence of cold and by other specialists as evidence of warmth, the first order of business for assessment is to reconcile varve thickness data before dumping the data into a multiproxy composite, rather than after.
118 Comments
Steve,
You say: “its hard to believe that the arctic 2K authors intentionally adopted the above weights”. Why and based on what evidence? Others might suspect you are being too kind.
By adopting those weights the team stays on message, more or less, and this is a positive outcome for them. But surely they must have investigated the relative weights? How is it that work like this can be published without that degree of cross-checking/validation? Have they not even examined what the black box actually does to the data that is thrown in? Somebody on the team must surely have a clear understanding of the effects of the box, otherwise how can they even use it? I see several end-member possibilities. The first is an overwhelming lack of scientific competence, the second is deliberate manipulation by members of the team, the third is a political directive from outside the team.
How is it that the authors expect to maintain genuine credibility?
When you say that Jean S has examined other weightings I become curious as to what the results of that analysis actually are.
Great work on your part by the way.
They are obviously following the advice given in Bolt and Wilson (1962):
“The trick, William Potter, is not minding that it hurts.”
“ … observing that their interpretation as temperature proxies must be very tenuous if even specialists didn’t know which way was up.”
Another milestone in factual drollery, here at CA. And yet the charade parade marches on.
OMG, such spooky and scary stories about the contributions of such “objective, transparent, inclusive talent” [aka “Climate hypochondriacs” h/t Eduardo Zorita] on which the future of our planet depends – or so we’ve been told … and on such a night!
Steve, how could you?!
Word missing from this sentence under the heading Discussion
“. . . .more of a tautology rather [than] evidence of robustness of the result in the additional data.”
All the engineers I know, including myself, are extreme skeptics. This is a good example why. What a pile of rubbish when they can arbitrarily use data upside down, or the right way up! Incredible that so-called scientists accept this.
Re: John Francis (Oct 31 19:51),
These scientists might have called less attention to themselves had they properly acknowledged their errors.
They might also have partially saved face in making their acknowledgements by using a different kind of phrasing in characterizing the nature of their mistakes, one more positive in its expression as might be perceived by potential critics of their work.
For example, as opposed to saying that, “The data was used upside-down,” they might have said, “The data was used with an inverse right-side up interpretation.”
Should I comment any more on variance amplification algorithms or the authors’ intent thereof?
Thanks to Jean and SMc for their relentless efforts.
Actually, I think a description of Paico would be kind of a fun post. I haven’t taken the time to read the articles, but the decomposition sounds entertaining at this point.
Can someone explain to a simple physical chemist what the physical rationale is for using an upside-down signal from a proxy?
Steve has a post on this, about the Southern Hemisphere reconstruction in AR5. You have a proxy in the Northern Hemisphere. So if you are trying to reconstruct temperatures in the Southern Hemisphere, shouldn’t it make sense to use a Northern Hemisphere proxy upside-down?
I think there are a few reasons why proxies may be used with the opposite polarity compared to the original author’s interpretation:
1) It goes down at the end and instrumental series go up at the end, so it has a negative correlation and thus is automatically flipped by the algorithm to give a better overall correlation (which of course is one of the biggest problems with MBH-style algorithms: they are biased to produce “hockey sticks” for this reason – see the sketch after this list).
2) Experts may interpret two different proxies of the same type as having different polarities (eg, one varve series may be interpreted as thick=cold while another may be interpreted as thick=warm) and they are just lumped together and assumed to operate in the same way, mainly due to carelessness/ignorance.
3) Simple carelessness – not checking whether higher values correspond to warmer or cooler temperatures and just assuming that higher means warmer. Really the study authors should be checking each series before feeding it into the “magic” algorithm but experience tells us they often don’t.
4) The result of the algorithm is more along the lines of “what is expected” with the series flipped, so it’s flipped to give a “better” result.
Of course, none of these are GOOD reasons to flip a series. But these may be the justifications or reasons why they are being used “upside-down”.
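To make reason 1 concrete, here is a rough Python sketch (not the code of any particular study) of the automatic sign selection that a screening/compositing step can perform – any proxy that correlates negatively with the instrumental record over the calibration window is flipped wholesale, which is exactly how noise with a downward modern trend gets turned into an upward contribution:

import numpy as np

def orient_to_instrumental(proxy, instrumental, calib_mask):
    """Flip a proxy so that it correlates positively with the instrumental series.

    proxy, instrumental : 1-D arrays on the same time axis
    calib_mask          : boolean mask selecting the calibration period
    A negative calibration-period correlation flips the entire series,
    i.e. this is the step that can silently use a proxy 'upside down'.
    """
    r = np.corrcoef(proxy[calib_mask], instrumental[calib_mask])[0, 1]
    sign = -1.0 if r < 0 else 1.0
    return sign * proxy, sign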
P.S. When is the movie coming out that’s based on the title of this post? I assume Matt Damon will be starring in the action-packed climate thriller.
+1 on the movie. I gather The Kaufman Legacy is still being made.
I was thinking about a Big Bang Theory episode.
In one of the Climategate emails from Sept 2009, Kaufman discussed how he came to use data upside down on an earlier occasion (upside down Tiljander) as being “confused by the fact that the 20th century shows very high density values and I inadvertently equated that directly with temperature”.
One of the contributing factors to Kaufman’s error was Mann’s prior failure to issue a corrigendum in respect to Mann et al 2008, where the identical error was criticized. (Actually, worse, Mann denied that there was an error.)
The failure to issue corrigendums also impacted Tingley and Huybers 2013, which similarly used upside down data, relying on the earlier error of Mann et al 2008.
Steve: Your mastery of the subject matter, acuity and recall continues to be amazing. Mann’s failure to post corrigenda is potentially a very powerful point in any assessment of Mann’s professional integrity. Hopefully it leads to an additional line of questions when Mann gets deposed by Steyn’s lawyers.
Since this is density data, should he have flipped the series upside down, or taken the reciprocals?
If it goes down at the end, while the instrumental series goes up at the end, then either a) It is actually negatively correlated with temperature, b) It is totally uncorrelated with temperature, c) It used to be correlated with temperature but no longer is.
This isn’t an issue of “Oops, I used it upside down!” If something in fact dropped when previous physical arguments imply that it should have gone up, we at the very least have something that needs to be explained. We, most likely, don’t know exactly what is going on. We definitely do not have a scenario where that should be the most heavily weighted proxy.
It was explained, contamination in the last portion, choice c.
So this “upside down” business really occurred because Mann & co “calibrated” the proxy using data from the present, contaminated period, and therefore their formula gave that it was negatively correlated with temperature?
Keep up the good work Steve, Jean S and others. I just wish I had your education, maths, stats skills etc
BTW how has the Antarctic fared in the latest revisions to the Pages 2 K study?
Before, in 2013, they claimed that the Antarctic was warmer than today from 149 AD to 1250 AD (thus covering a warmer Medieval WP for the SH?).
Is that still the case?
Steve: thus far, only the Arctic2k has been revised. Antarctica shows more of a long-term decline through the Holocene, with neither the medieval nor modern warm periods being particularly noticeable in the isotope data. Antarctic first millennium isotopes were warmer than last millennium and this impacts the overall average.
When this was reviewed by The Prussian at his blog, he noticed that there were no hockey sticks anywhere but the Arctic. Australasia is based on a paper that has been retracted. It appears there are no regional hockey sticks but they still have an overall global hockey stick.
Are varves any more reliable as proxies than bristlecone pines? For every sample/series that has been chosen, how many have been discarded? And for what reasons are some selected (and/or heavily weighted) and not others?
Would there be any point in showing diagrams of the weightings of the proxies in the two studies side by side? This could be coupled with another diagram to show the difference of the weightings between the two studies to the same scale to show how similar the result would be expected to be.
Steve: as I recall, Kaufman et al 2009 used even weights. So just imagine the red series shown equally.
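For anyone who wants to draw the side-by-side figure suggested above, a minimal matplotlib sketch (weight values and labels are hypothetical placeholders); both sets are normalized to sum to 1 so the bars are on a comparable scale:

import numpy as np
import matplotlib.pyplot as plt

def plot_weight_comparison(pages_weights, is_kaufman, labels):
    """Side-by-side bars: PAGES2K effective weights vs even Kaufman 2009 weights."""
    w = np.asarray(pages_weights, dtype=float)
    is_kaufman = np.asarray(is_kaufman, dtype=bool)
    pages_norm = w / w.sum()                                  # normalize to sum to 1
    even = np.where(is_kaufman, 1.0 / is_kaufman.sum(), 0.0)  # even weights over the Kaufman proxies
    x = np.arange(len(w))
    plt.bar(x - 0.2, pages_norm, width=0.4, label="PAGES2K (paico) effective weight")
    plt.bar(x + 0.2, even, width=0.4, label="Kaufman 2009 (even weights)")
    plt.xticks(x, labels, rotation=90, fontsize=6)
    plt.legend()
    plt.tight_layout()
    plt.show()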
The titles of these post are sounding more and more like episodes of The Big Bang Theory.
It’s The Kaufman Paradox that’s key to understanding the whole series, I’m told.
Steve thanks for your reply about Antarctica.
BTW is it a fact that the Australasia data is based on the retracted Gergis, Karoly study?
de Freitas et al have a new 100-yr peer-reviewed surface temperature history out for New Zealand. The trend is only about 1/3 of that given previously in documents produced by NIWA. This impacts the way one calibrates proxies to temperature.
I suspect the BOM in Australia will eventually be required to revise its long-term surface temp trend for Australia as well.
There is a knock-on effect for published proxy studies like Gergis et al, which will quickly become redundant, or doubly redundant. This flows on into the global “averaging” of proxy series.
It’s a bad paper methodologically. It will make no difference going forward. It’s simply another example of people publishing bad papers in out-of-topic journals to avoid rigorous peer review. A recent example being Loehle (2014), which is equally flawed.
Steve: in the context of the present post, do you regard identification of PAGES2K’s use of upside-down data as a responsibility of peer reviewers (implying a failure of peer review), or do such errors happen anyway and are the responsibility of the wider community (though, to my knowledge, no one other than me took issue with it)? Or the contaminated data in Mann’s nodendro reconstruction? Did it receive “rigorous peer review”? From my perspective, I think that peer reviewers should spend more time ensuring that data is available and methods are transparent and less time worrying about whether the results accord with their own results.
Robert:
Can you point me to a critique of this paper that demonstrates its errors and limitations? All that I have seen to date on some NZ sites are essentially ad hominem attacks on the authors and appeals to authority. It is quite possible that the paper is grievously flawed, but simply saying so, hardly makes it so.
On in-topic “rigorous peer review,” I’m presently trying to publish a paper that develops a method of propagating error through GCM air temperature projections.
The “rigorous peer review” I’ve experienced in main-line climate journals is that climate modelers know nothing of physical error analysis. They literally don’t understand a physical error statistic, don’t understand a physical confidence interval, and have apparently never before encountered error propagation. Their “rigorous peer reviews” are riddled with freshman mistakes.
This epic has been on-going since June 2013.
With this sort of experience recorded (pdf) by virtually everyone publishing critical evaluation papers — including people of high professional stature like Richard Lindzen — it’s no mystery why such papers end up being published in “out-of-topic” journals. The on-topic journals are guarded by incompetents.
Pat, I always wish you’d tell us what you really think 🙂
Well I’m sure that’s a fairly sweeping generalization. Somehow I think that the likes of Isaac Held, Richard Peltier etc… are fairly competent at error propagation 😉
As I mentioned up above – try Climate of the Past – its open review process ensures the public can see the reviews.
Robert says:
Not necessarily. Juckes et al 2007 received conventional secret review by Climate of the Past. I and several other CA readers submitted open review comments on Juckes et al 2007, a topic of which I had detailed knowledge. Juckes was permitted by the editor to ignore our comments and the review was taken behind closed doors.
Having said that, I do support what Climate of the Past tries to do and reviewed Burger’s submission at the request of Valerie Masson-Delmotte.
pat many people have tried to explain your mistakes to you, skeptics, lukewarmers, and AGW types.
perhaps you should be less certain that you are correct.
Robert Way, I know my experience. Not one of the climate modelers who has encountered the analysis has understood it. Likewise, not one physical scientist to whom I’ve explained it has failed to immediately grasp it. It’s as though climate modelers are completely untrained in physical error analysis.
You’re welcome to look yourself. Here (2.9 MB pdf) is the poster I presented at the 2013 AGU Fall meeting in San Francisco. Not one climate modeler who stopped by understood the error analysis. Likewise, not one of my reviewers identifiable as a climate modeler.
Steve Mosher, you have yet to demonstrate you have even read anything whatever of the papers I’ve published, much less understood anything of them. And yet you repeatedly presume to declaim upon my mistakes. And then you leave when I require specifics of you. What “mistakes,” Steve?
Once the error propagation paper is published, I intend to go back to the surface air temperature record. Much of that analysis is finished. I’ve got it by the short hairs, Steve. One true classic of naivete will make those folks want to hide. If you’re in Erice next March, you’ll hear it mentioned.
Pat and Steve Mosher,
the issue of error propagation in GCMs, important as it doubtless is, is not one that I’ve covered or am familiar with and has been coatracked into this thread. I try to avoid CA getting into topics that I cannot vouch for, as, in the past, my critics have been very quick to try to tar me with every view and topic expressed here so I try to focus on topics where I have first hand knowledge. I’d prefer that it be discussed at another venue. Sorry about that.
Pat Frank,
Maybe lack of knowledge of classical error treatment is generational.
Maybe you and I acquired training because of the sub-sets of science where we worked.
Whatever the cause, I heartily endorse your comment and have made similar myself.
Researchers who have not done so should browse the pertinent sections of, for example, http://www.bipm.org
There are international conventions for error analysis.
It’s a bad paper methodologically.
Details?
It will make no difference going forward.
Sounds club-ish. That or you’ve added yourself to those who can see/predict the future. The prediction record in your field isn’t all that great btw.
I haven’t looked at the paper yet, so it may very well be flawed, but if you’re going to make that claim, it’s generally good practice to include some of the reasons…
The paper is, as robert indicates, flawed. Folks working in the field will give it no notice. It won’t change anything.
There is no doubt in any field that there are times when the peer-review process fails to recognize substantial (and important) issues in submissions. The difference is that the examples pointed to above are following a recent trend of individuals publishing in out-of-topic journals to circumvent the review process and avoid those with expertise on the subject matters in question. I tend to find it ironic that certain individuals are so confident in their results and then yell out gatekeeping when they get rejected from climate journals and yet they never submit to Climate of the Past with its open review process.
As for the responsibilities of reviewers identifying errors – well its a tough question. The onus is on the reviewers to identify glaring issues in data and methods (and analysis) but alternatively should reviewers be responsible for quality-checking every dataset used as input? If I publish a paper on regional climate trends should I expect the reviewer to go check every individual station used as input to check for inhomogeneities?
My view is that the onus should not be on the reviewer for that type of work. A reviewer is there to ensure that a contribution is seemingly sound science and that it is properly translated – not to “audit” a study. The broader community does have a responsibility however to self-police and identify issues when they exist.
I think there are ways to improve as a community in terms of self-policing which includes limiting the influence of internal politics on the review process. That being said I think that generally there has been progress made in terms of transparency and I expect that will continue with time. I think that today’s generation of young graduate students and PhDs are publishing in a world which requires much more transparency than perhaps 15 years ago.
Where I do agree with you is that data quality is an issue – and I would sooner like to see more of an effort to develop better paleo-chronologies than developing quasi-black box algorithms for processing old chronologies. As has been noted here beforehand – if you have a strong signal in the proxy then you should be able to recover it with fairly simple methods.
I see the main task of peer reviews to be reducing the noise, i.e. making it less likely that a large number of fundamentally erroneous or totally irrelevant papers gets published. Another important task is improving the quality of papers that get published by pointing out issues that make the papers more difficult to understand. Requiring that additional information is included, when what’s presented leaves serious gaps, is part of the second task.
It’s not uncommon that people believe or insist that every paper should be verified as correct before it gets published. That is, however, not reality, and it would be counterproductive, delaying publication of valuable new results. The real and most important verification by peers takes place later. It’s done by all the other scientists who study the paper. If the findings of the paper appear important, it will be scrutinized by many. Some of them are also likely to use the paper as a starting point for further research. Such further research will ultimately provide the most stringent tests of the results of the paper. Peer-reviewed papers published in high impact factor journals are more reliable than other papers, because they get much more widely and rapidly scrutinized by other scientists of the field. The verdict of the community may, however, remain hidden from outsiders, because refutations and errata are not always published, even when errors are significant. Those with good contacts to other scientists of the field are most likely to learn about such cases.
A similar development from early attempts to more sophisticated research seems to have happened in paleoclimatology as well, but as discussed by Steve and Robert, the expansion in the volume of input data and improvements in the quality of the input data have been rather slow. Therefore the situation still seems to be that methods are chosen based on their apparent power rather than on a good understanding of their limitations and reliability. With large amounts of high quality data scientists would automatically choose their methods differently.
>The difference is that the examples pointed to above are following a recent trend of individuals publishing in out-of-topic journals to circumvent the review process and avoid those with expertise on the subject matters in question.
Isn’t that what happened with Mann in PNAS? Or had he not yet earned fast-track privileges there?
Robert,
I suspect that we’re in agreement on many points. However, out of all the issues to worry about, I think that out-of-topic journals are very small beer, and if the climate community believes that a lousy paper is worth criticizing, they are free to do so. My own experience with peer review from the climate community is that reviewers are too often concerned with gatekeeping. I can vouch for that from my personal experience. For example, Ross and I tried to publish a comment on Santer et al 2008, pointing out that they had used data ending in 1999 and that their results did not hold up with up-to-date data. A simple enough comment, but flatly and firmly rejected, with exchanges behind the scenes among Climategaters. By keeping our comment out of the literature, Santer et al was unrebutted for a long time – and the failure of anyone to publish a rebuttal was specifically claimed in the EPA RTP documents as support for their false claims. Eventually, Ross re-did the results with a varied econometric method – the variation, in my opinion, being irrelevant to the point – and managed to get the article published in a different journal, but the initial peer reviewers had been doing nothing more than gatekeeping.
O’Donnell et al 2010, which has also been cited by IPCC and others, was perniciously reviewed by Eric Steig, who, despite his conflict of interest, did not identify himself as a reviewer, delayed and tried to prevent its publication and, in his capacity as an anonymous reviewer, required changes that I would not have accepted if we had known that we were dealing with a party with a known adverse interest.
Pielke Jr and I submitted a short article some years ago on hurricane distributions and, more because of me than him, received unbelievably angry reviews – which were laughably inconsistent. One reviewer said that our results were already well-known and established in the literature and recommended rejection. The other reviewer said that the results were not merely wrong, but fraudulent and recommended rejection. The editor said that there was a consensus in favor of rejection.
So yes, going to out-of-topic journals may yield an occasional paper that may not stand the test of time. But so what. Out of all the things that the climate community should concern itself with, trying to “plug the leak” at such journals is surely not one of them.
RE: Robert Way Posted Nov 2, 2014 at 2:49 PM
“There is no doubt in any field that there are times when the peer-review process fails to recognize substantial (and important) issues in submissions.”
And yet you (and the Team) have not recognized &/or criticized what has been posted here – and often – where errors have not just been pointed out but written about with calculated, graphed and, well, audited submissions. From data known to be used upside down (but it doesn’t matter) to data known to be contaminated (but it doesn’t matter), these are still being used and cited and are still passing peer review? When obviously, as most &/or all of these issues are years old, every peer reviewer must be aware of them. Puh-leese. This is nothing more than willful.
RE: Pekka Pirilä
“With large amounts of high quality data scientists would automatically choose their methods differently.”
Onnea
(For the non-Finns: Good luck with that)
“If I publish a paper on regional climate trends should I expect the reviewer to go check every individual station used as input to check for inhomogeneities?
My view is that the onus should not be on the reviewer for that type of work. A reviewer is there to ensure that a contribution is seemingly sound science and that it is properly translated – not to “audit” a study. The broader community does have a responsibility however to self-police and identify issues when they exist.”
And yet the peer review process is presented as the “gold standard” to the general public. Perhaps climate scientists should correct that misconception?
The gold standard for things that matter (Engineering) is auditing and close scrutiny. The general public expects no less of climate scientists than it does of its Engineers; in fact we expect more given the huge stakes.
Steve Mc,
[RW] I’m only new to the academic scene – and I certainly am not sure I will remain there – too stressful on the home life – but I can say that I’ve had multiple papers that had very long, tough review processes (and multiple rejections) but I don’t have the same opinion as you with respect to both the prevalence and nature of ‘gatekeeping’. I think there are probably some individuals out there who probably partake in that sort of thing but the same could be said in many fields, and it is certainly only a minority. There’s a certain sense of naivety in expecting that scientists in any field are going to be immune to the very same qualities people in other professions display. As such, I really very strongly disagree with any perspective that instances of gatekeeping are caused by ‘inconvenient results’ – on the contrary I believe it’s largely driven by personal differences and egos. If someone doesn’t like you or your actions then they’re probably going to give you a harder review. Hate to say it but its human nature. This is why there are advantages to both open review processes and double blind reviews – it can limit those issues. But expecting perfection from the academic review process will always lead to disappointment. The lucky thing is that if you’re right and can prove it then your work will be published eventually.
[RW] In my experiences I have had some incredibly harsh reviews. I’ve also had papers rejected (wrongly if you ask me) multiple times from journals. At the end of the day I can either choose to have sour grapes or just keep on working away and when I get the chance to respond to a tough reviewer I do so in a frank, convincing manner but still remaining respectful. So much in life is about how you say things – not what exactly is said. The academic lifestyle is not for the faint of heart and bruised egos are something that has to be accepted. Our coverage bias paper for another example received a very tough review process with two rounds of review that lasted ~7 months – yet this was scolded by the contrarian community as being an alarmist paper that received pal reviews…
[RW] All the above comments being said, these experiences are the exceptions not the rule. Most researchers I have known have been thoughtful, nuanced and keenly skeptical of extreme results. Reviews have always been rigorous and tough with it being rare for there to not be major revisions suggested. The real pity is that there is so much focus on the few high profile climate scientists while little attention is paid to the majority working and publishing in the field. It is too bad that you’ve had some bad experiences with the review process at some journals. I do think that for instance the O’Donnell et al paper was an interesting contribution that deserved to be published but it’s like anything – sometimes things are more difficult than they need to be.
[RW] As for the out-of-topic journal discussion – we will agree to disagree. I’ve seen way too many crappy papers this year in out-of-topic journals getting bandied about as “proof” of the great ‘AGW fraud’…
The de Freitas et al 2014 paper showed an alternative derivation of the NZ seven station temperature record.
I’ve been working on Australian data for some years. Reasoning suggests we do a similar exercise to deF14. One might expect broadly similar temperature trends each side of the Tasman.
People like Dr David Stockwell are preparing more evidence of questionable assumptions in the official Australian record named Acorn-sat.
So the NZ paper is having an effect already. The material consideration is the strength of the science, not who published the paper or from which club one fires a social comment.
Robert Way writes “My view is that the onus should not be on the reviewer for that type of work. A reviewer is there to ensure that a contribution is seemingly sound science and that it is properly translated – not to “audit” a study.”
There is certainly room for criticism of a paper where a new method results in a result that differs from the mainstream, but without the detailed analysis it’s little more than an arm wave.
“All the above comments being said, these experiences are the exceptions not the rule.”
Robert Way, these comments are avoiding the point. You are discussing the issue that reviewers are human beings, sometimes with large personality flaws. Steve McIntyre is discussing something entirely different: that an important faction of paleoclimate scientists considers him to be an enemy. They are fighting a war against him and some others, and as McIntyre claims and Climategate shows, peer review was one of their weapons.
I don’t see how you can claim that you don’t know this, as you discussed it on the private SkS forum. You were talking to a group of people who think they’re in a war, and you were helping them discuss tactics and strategy. Sounded like there were two groups there: the ones (you) who respected McIntyre as a competent enemy, and the ones who just thought he was an enemy.
Given that spurious inversion of series is a very common error in statistical reconstructions shouldn’t that be one of the first things that reviewers look for?
Agreed, Steve, M. I’ll reply to Geoff by email.
Nice post! The information and analysis that you and Jean S provide continues to be quite clear. Even to a “member of the general public” such as myself, at least on a conceptual level, what you are presenting is quite understandable, and if what you are presenting is wrong, it should not be difficult for the authors and their supporters to point out the errors. The silence continues to be deafening. Thank you.
I am no expert in these fields, but I am highly skeptical that a varve could be a reliable predictor of anything specific such as temperature. There are just too many potential biological and environmental variables potentially and unpredictably affecting sediments. So it is interesting to see that an expert such as yourself is critical of their current use in proxy studies, confirming my armchair suspicions.
Nick Stokes will be along soon to claim it’s all Marie Antoinette’s fault and you should not eat sausages on a Thursday.
I think the graph of the weights should be required on the first page of every one of these proxy reconstructions publications. It is really the DNA of the reconstruction.
It would be even better if each proxy was issued a global ID# and the graph listed ALL the proxies at the bottom of the graph; then you would clearly see the weights AND see what proxies were omitted completely. At the time of issuing the global ID, the proxy could be evaluated by a committee for proper orientation and that could be set without regard to what it was being combined with or compared with.
Just my $0.02
“what *it* was being…”
Robert Way,
If the de Freitas et al paper is flawed and if you consider that it should not have been published, then perhaps you should look at the truly awful dog’s breakfast, produced by NIWA, which purports to be the official temperature record for NZ.
The de Freitas et al paper is a vastly improved effort over the NIWA version.
The de Freitas et al paper will be very helpful going forward. It will force NIWA, who are the official gate keepers for the orthodoxy in New Zealand, to man-up and properly explain their own bloated excuse for a land surface temperature trend.
Sampling is one of the greatest challenges in commercial and sociological research studies. Ensuring you are talking to the right people, reading the right material, etc., is pretty much 90% of the battle.
Perhaps some of the lessons learned in those subsectors are relevant to some of the discussions here.
The idea of random sampling has been more or less abandoned for research purposes. Lack of true accessibility to a random sample was the primary reason. Fortunately, researchers experimented often enough to devise ways of conducting credible research without a random sample, mostly through the use of screening, or qualifying, questions and the development of sample frames (or quota groups) to ensure representativeness rather than randomness.
I don’t know how many paleoclimatic data series exist, nor how many of them are claimed to have a strong temperature signal. I don’t really care. In order to study a data series of this type you would ideally try to identify a random sample of data series that covered all or a significant portion of the time period being investigated.
That does not appear to be what happened. Proxies were chosen for a pre-determined degree of fitness – matching trends with real temperature trends over a calibrating period.
This does not have to be a fatal flaw in the study. Thousands of accurate research projects are conducted successfully each year with similar constraints. They accurately predict elections, success of movies and television shows, the geographic range and rate of spread of illnesses, etc., etc. You can do excellent research under challenging conditions.
If the data series were screened for fitness, it would be okay – if there was a sample frame constructed beforehand to ensure representativeness of the data included for analysis. I don’t know if this was done. And perhaps to the horror of some, I would even hold that you could create such a sample frame post hoc, if adequate care was taken.
To me, the more serious problems with these experiments (for that’s what I consider them, experiments in analysis), are first, that the period of calibration was a time of significant change in the data of interest, and second, that the calibration period is also one of the primary subjects of such analyses. That’s a really bad combination. Too much can go wrong, and apparently did.
It would have been far better to use as a calibration period some time before the current warming period, IMO.
But I do think it would be possible to assemble a data set that would be fit for purpose and I hope all our criticism of efforts to date does not chill further research efforts.
Barn E Rubble is not a common name for a Suomalainen!
Anteeksi Steve! (Means ‘sorry’ Steve)
Finnish speakers seem to be over-represented on your fine blog!
The statistical methods I use to deal with this are too complex to spell out in detail but, as Steve has pointed out elsewhere, they come down to a weighting 🙂
“I probably got confused by the fact that the 20th century shows very high density values and I inadvertently equated that directly with temperature. This is new territory for me, but not acknowledging an error might come back to bite us”
Now there’s a challenge for Nick Stokes:
To demonstrate that even an “expert’s” own admission that he was wrong is wrong.
Nick’s never been keen to discuss the details of Climategate emails. Private messages unethically obtained is one way to justify the lacuna. They also make it far harder for Racehorse to win the case. I’ve never known which it is.
“Nick’s never been keen to discuss the details of Climategate emails.”
In fact, I have often found it necessary to point out to someone who had discovered a secret email admission that what he was waving was just what was in due course published. And so it is here. Kaufman published a corrigendum, in line with the email discussion.
But what people here never seem to deal with is that Mann too acknowledged the issues with Tiljander, calling it a potentially problematic series, and publishing a calculation which omitted it. It’s all in the SI (p 2) of the 2008 PNAS paper:
“Potential data quality problems. In addition to checking whether or not potential problems specific to tree-ring data have any significant impact on our reconstructions in earlier centuries (see Fig. S7), we also examined whether or not potential problems noted for several records (see Dataset S1 for details) might compromise the reconstructions. These records include the four Tiljander et al. (12) series used (see Fig. S9) for which the original authors note that human effects over the past few centuries unrelated to climate might impact records (the original paper states ‘‘Natural variability in the sediment record was disrupted by increased human impact in the catchment area at A.D. 1720.’’ and later, ‘‘In the case of Lake Korttajarvi it is a demanding task to calibrate the physical varve data we have collected against meteorological data, because human impacts have distorted the natural signal to varying extents’’). These issues are particularly significant because there are few proxy records, particularly in the temperature-screened dataset (see Fig. S9), available back through the 9th century. The Tijander et al. series constitute 4 of the 15 available Northern Hemisphere records before that point.
In addition there are three other records in our database with potential data quality problems, as noted in the database notes: Benson et al. (13) (Mono Lake): ‘‘Data after 1940 no good— water exported to CA;’’ Isdale (14) (fluorescence): ‘‘anthropogenic influence after 1870;’’ and McCulloch (15) (Ba/Ca): ‘‘anthropogenic influence after 1870’’.
We therefore performed additional analyses as in Fig. S7, but instead compared the reconstructions both with and without the above seven potentially problematic series, as shown in Fig. S8.”
Nick: this is very old territory. I am completely familiar with Mann’s SI, as well as his denial of the error in his reply to our Comment at PNAS and his refusal to issue a corrigendum. Mann’s sensitivity analysis conspicuously omitted the dependence of his vaunted nodendro reconstruction on contaminated data. Mann falsely denied that it was a problem. Mann failed to publish a corrigendum acknowledging the error at PNAS. Mann’s failure to publish a corrigendum (instead placing the acknowledgement of the problem deep in the SI to a different paper) tricked EPA into thinking that the nodendro reconstruction was valid despite the contamination. Mann’s pal, Gavin Schmidt, acted as peer reviewer for EPA, accepting Mann’s contaminated nodendro reconstruction in the RTP documents. The acquiescence of the larger climate community in this sort of nonsense diminishes their standing, as the defence of it diminishes yours.
Nick
You miss the point of why Mann chose to include a series he knew was “potentially problematic” – though “contaminated” would have been a better word.
Clearly he included it for a reason, and that reason is obvious. It was included so that he could do his ‘skip-one’ test leaving out Tiljander and still get a hockey stick from bristlecones. He could then do a skip-one test leaving out bristlecones and get a hockey stick from Tiljander.
It is “tricks” like this that lead sceptics to have such a poor opinion of Mann
Nick:
Are you saying that Mann’s HS holds if he dropped both Bristlecones and Tiljander? Are you saying that Mann did not know Tiljander was problematic before he included it in his pool of proxies?
Steve: this is another threadjack by Nick Stokes. Mann’s misuse of contaminated data has been extensively discussed. Nick is in full-out denial on the topic.
Perhaps McIntyre could answer the question from bernie1815: Does Mann’s HS hold if he dropped both Bristlecones and Tiljander.
Nope. McIntyre will not do that. We all know why.
ehak:
I am sure Steve could cite more posts. It took me 10 seconds to find this post, I am sure there are more.
https://climateaudit.org/2011/07/06/dirty-laundry-ii-contaminated-sediments/
I think you owe Steve an apology or at least a corrigendum.
Nick:
Boring, your commentary is getting more and more pathetic every day. Steve dealt with the issue first time less than a day after Mann 08 went online:
bernie1815:
Your link shows that the HS holds without 7 proxies. Not McIntyre’s work though. He needs help from realclimate.
And of course: that is not without only Tiljander and bristlecones.
Try again.
ehak: You’re accepting that Mann should never have included Tiljander and bristlecones in his reconstructions then? Like bernie I’m confident Steve can give copious details of why your phrase ‘the HS holds’ should soon be adorning the dustbin of history as well as the routines of stand-up comedians. But we remain open-minded as always 🙂
> this is another threadjack by Nick Stokes
I was under the impression that Richard Drake channeled Nick with his “never been keen to discuss the details of Climategate emails”.
Using someone as a whipping boy and then blaming him for defending himself might very well be suboptimal. How shameful this is for the auditors to decide.
Steve: Nick’s “response” was a wildly untrue statement about the analysis at CA of Mann’s use of contaminated data. Making wildly untrue allegations is hardly “defending himself”; it was another fabricated accusation against others. Plus Mann’s use of contaminated Tiljander data had little to do with the Climategate emails – Kaufman considered the issue in emails, but offhand I don’t recall Mann discussing this topic in them. So Nick’s absurd accusation was coatracking as well.
My comment was on topic if little polyp’s was. I stand by my observation that Nick isn’t keen on the subject. Unfortunately the example he then gave was poor and this prompted Steve’s comment. But note that nothing Nick, ehak or you have written has been removed. It’s up to the disinterested reader to decide. Perhaps you’d like to pass this strength of CA on to other blogs you admire so that they can emulate it?
willard king climateballer
“Using someone as a whipping boy and then blaming him for defending himself might very well be suboptimal. How shameful this is for the auditors to decide.”
1. whipping boy.. hmm victim card much
2. blaming him for defending himself? sorry not seeing either. Not seeing anyone blame him and not seeing any kind of defense from him.
3. and now the Bully willard moralizes about who should be shameful.
climateball homerun
more of a strikeout or feeble popup.
Steve:
Er, make that “Unfortunately the non-example he then gave was very poor” in my earlier post.
Mosh:
It should really be on Fox Sports by now.
Steve,
“Nick’s “response” was a wildly untrue statement about the analysis at CA”
I said that Mann published, as part of his paper, cautions about the use of Tiljander, and quoted them. That is entirely factual. I then offered the opinion that CA has been unwilling to deal with the existence of that caution. Unsurprisingly, you disagree. That doesn’t make my opinion “wildly untrue”.
Failure to deal with it is shown here.
“One of the contributing factors to Kaufman’s error was Mann’s prior failure to issue a corrigendum in respect to Mann et al 2008, where the identical error was criticized. (Actually, worse, Mann denied that there was an error.)
The failure to issue corrigendums also impacted Tingley and Huybers 2013, which similarly used upside down data, relying on the earlier error of Mann et al 2008.”
The fact that Mann et al 2008 warned, very explicitly, of the “potentially problematic” Tiljander data goes unmentioned, but is surely relevant.
Steve: your assertion that “people here”, including me, did not “deal” with Mann’s reference to Tiljander in his SI, was wildly untrue. Mann’s original SI has been carefully analysed on multiple occasions. In the SI, Mann failed to disclose the impact of contaminated data on the heavily promoted “nodendro” reconstruction. If, at the time, Mann knew that the nodendro reconstruction did not “validate” according to his (not necessarily valid) methods without contaminated data, his failure to disclose this in the original SI was dishonest. If Mann did not know this, then when Mann became aware of it, he should have issued a corrigendum, if not a retraction. However, he didn’t do so. And, yes, like others, I later learned that he acknowledged the failure to validate in the SI to a different paper, but that is not an acceptable alternative to a corrigendum. Indeed, even EPA’s RTP documents, for which Gavin Schmidt acted as reviewer, were either deceived by Mann’s failure to issue a corrigendum or intentionally ignored it. It is astonishing that you defend such conduct.
for willard I had to shorten right field.
Nick’s never been keen to discuss the details of Climategate emails, but he has found it necessary.
It was the commentary, not the play, I was enjoying 🙂
“But note that nothing Nick, ehak or you have written has been removed.”
No. But my response still sits in moderation.
climateballer says ‘but moderation’
the good old Mann08 SI.. As a further safeguard against potentially nonrobust results, a minimum of seven predictors in a given hemisphere was required in implementing the EIV procedure, because seven is prime.
>I said that Mann published, as part of his paper, cautions about the use of Tiljander, and quoted them. That is entirely factual. I then offered the opinion that CA has been unwilling to deal with the existence of that caution. Unsurprisingly, you disagree. That doesn’t make my opinion “wildly untrue”.
This is like Martin Vermeer’s attempted defense of Mann’s reply to Steve’s comment in PNAS. “Allegation of upside-down use is bizarre. Regression algorithms are blind to the sign of the indicator.”
It was pointed out that Tiljander was used upside down, and in fact was not fed through a correlation screening in CPS; thus Mann was responding with an irrelevant point. Vermeer’s reply was that Mann is right because regression algorithms are blind to the sign of the indicator.
> Making wildly untrue allegations is hardly “defending himself”; it was another fabricated accusation against others.
Nick’s response to the gratuitous “more Omertà, Nick?” [1] starts with this:
The “yes, but Tiljander” is tangential to this claim, and turns Nick’s response into an excuse to peddle the Tiljander Affair into the discussion. To whatever claims like “untrue allegations” or “fabricated accusation” are supposed to refer, unless they address Nick’s point underlined above, they amount to just another round of ClimateBall ™.
If we really want to discuss Nick’s “Omertà”, I do hope we can agree that as a matter of “decorum” it is legitimate not to say what one thinks [3], and I suggest we start by returning to an argument he put forward a few years ago:
There was no answer to that argument back then, except perhaps mpaul’s agreement.
[1] neverendingaudit.tumblr.com/post/8195679480
[2] neverendingaudit.tumblr.com/post/1702909842
[3] neverendingaudit.tumblr.com/post/16927748749
[4] neverendingaudit.tumblr.com/post/9587811876
Steve: the fact that Kaufman published a corrigendum in respect to Kaufman et al 2009 is unresponsive to Mann not publishing a corrigendum. Indeed, it supports the position that errors should be acknowledged in corrigenda and points to Kaufman’s failure to thus far publish a corrigendum for PAGES2K.
Because Nick and Willard are saving the planet, they cannot (nay, *should* not) be bothered with little things like facts, logic, and their own prior statements which happen to be contrary to their current statements.
RE:“Nick’s “response”
RE: Steve: :”. . . It is astonishing that you defend such conduct.”
Well I’m thinking that leaves me among the few (& bored) who aren’t astonished that nick (small ‘n’ op) would defend such conduct.
But then again that’s me . . . and I suppose, that’s nick . . .
“would defend such conduct”
Out of curiosity, what conduct am I supposed to be defending?
I simply quoted what Mann said in the SI, said that it was relevant to what is said here, and people didn’t seem to want to acknowledge it. What conduct?
Steve: your assertion that “people here”, including me, did not “deal” with Mann’s reference to Tiljander in his SI, was wildly untrue. Mann’s original SI has been carefully analysed on multiple occasions. In the SI, Mann failed to disclose the impact of contaminated data on the heavily promoted “nodendro” reconstruction. If, at the time, Mann knew that the nodendro reconstruction did not “validate” according to his (not necessarily valid) methods without contaminated data, his failure to disclose this in the original SI was dishonest. If Mann did not know this, then when Mann became aware of it, he should have issued a corrigendum, if not a retraction. However, he didn’t do so. And, yes, like others, I later learned that he acknowledged the failure to validate in the SI to a different paper, but that is not an acceptable alternative to a corrigendum. Indeed, even EPA’s RTP documents, for which Gavin Schmidt acted as reviewer, were either deceived by Mann’s failure to issue a corrigendum or intentionally ignored it. It is astonishing that you defend such conduct.
> [T]he fact that Kaufman published a corrigendum […] is unresponsive to Mann not publishing a corrigendum.
This claim is unresponsive to Nick recalling that fact to respond to yet another “More Omertà?” and his point that snooping through emails might reveal what was in due course published.
This claim is also unresponsive to the task of reconciling “More Omertà?” and “I don’t always say what I think”, two ClimateBall ™ moves that might be tough to reconcile.
Hold the front page – climate scientist says something in public consistent with his views in private.
RE: Robert Way -“I think there are probably some individuals out there who probably partake in that sort of thing but the same could be said in many fields, and it is certainly only a minority.”
It would only be minor in total. Mann’s Hockey Team was in the majors, not the minors. Their faulty papers were used to influence IPCC policy and still are. As well, they fail to acknowledge their errors, but use other faulty papers to reinforce them. As Wegman noted, the peer reviews were done by a social club. There is lots of documentation of this in Steve’s posts.
It would appear that the two papers use almost exactly the same proxies, with the exception of a humongous weight being assigned to a proxy in which specialists disagree so fundamentally on what it represents as to not even agree on positive vs. negative changes.
Robert Way,
“As for the responsibilities of reviewers identifying errors – well its a tough question. The onus is on the reviewers to identify glaring issues in data and methods (and analysis) but alternatively should reviewers be responsible for quality-checking every dataset used as input? If I publish a paper on regional climate trends should I expect the reviewer to go check every individual station used as input to check for inhomogeneities?”
In my field of work a competent reviewer would be expected to spot-check sufficient data to establish that any errors that might be present in the unchecked data would not suffice to affect the conclusions drawn.
Where more than one dataset is present, that does require testing each of the datasets. If any outliers show up in the data, these would be examined and, if there were no obvious reason for their existence, there would be debate as to whether the outliers should be retained or excluded.
To take one example (sorry, three if one is being pedantic), those who peer reviewed Briffa’s papers in 2000, 2006 and 2008 do not appear to have looked into the possibility that a single outlier might have affected a whole series. Perhaps, if climate scientists looked at data to see what conclusions might be drawn from it rather than whether it supported a pre-existing hypothesis, there might be fewer sceptics.
On the Briffa issue, the question for me is also how come Briffa and his co-authors did not spot the outliers. Most of us who work with large data sets check to ensure that results are not being distorted by such outliers. Your credibility and integrity as a researcher depend upon it and, as Feynman so pithily noted, it is far too easy to fool yourself.
I find it impossible to believe that at no time during the preparation of these papers and the underlying ‘research’ did any of the people involved even eye-ball the individual datasets. Not to do so would represent incompetence of epic proportions. So I think the question should be: why did they choose to include outliers without due discussion of the impact of doing so in the papers?
As Willis might say, the ol’ “Mark-1 Eyeball” is the preliminary tool, the statistical tests follow…
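To make the point concrete, here is a minimal sketch of the sort of preliminary screen being described, using hypothetical values and a simple median-absolute-deviation rule (not anyone’s actual data or procedure):

```python
# Minimal sketch of a routine outlier screen for a single proxy series.
# The series values and the threshold are hypothetical illustrations only.
import statistics

def flag_outliers(values, threshold=5.0):
    """Flag points lying more than `threshold` scaled median absolute
    deviations (MAD) from the series median - a simple robust screen."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # degenerate case: more than half the values are identical
    # 1.4826 rescales the MAD so it is comparable to a standard deviation
    # for roughly normal data.
    return [(i, v) for i, v in enumerate(values)
            if abs(v - med) / (1.4826 * mad) > threshold]

# A made-up ring-width-style series with one gross outlier at index 5.
series = [1.02, 0.98, 1.05, 0.97, 1.01, 8.50, 1.00, 0.99]
print(flag_outliers(series))  # -> [(5, 8.5)]
```

A screen of this kind is cheap to run before a series enters any reconstruction; whether a flagged point is retained or excluded is then a separate debate, but at least it starts from the fact that the point was noticed.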
ehak:
Did it never occur to you that your rudeness and aggression (which seem to typify much of the warmists’ argument/discussion SOP) mean that people simply don’t like you and hence you get written off upfront, regardless of whether or not your contributions are worthwhile?
I agree with that. This is definitely the wrong blog for clueless-sounding, hostile, accusing comments. It is pretty obvious to anyone who spent any time here that Steve and the regular commenters know their stuff very well. Unsupported throwaway comments that they don’t are just going to get, well, thrown away.
Miker613 – It is pretty obvious to anyone who spent any time here that Steve and the regular commenters know their stuff very well.
I will add that, while my knowledge of the subject matter is at best a layman’s level, the level of discussion and the honesty and integrity of Jean S and Steve Mc are light years ahead of most other sites. I would love to find an AGW site that addresses the science so that I can better comprehend the opposing information, though the only sites I can find on the AGW side are advocacy sites such as SKS, RC and NOAA, which masquerade as science sites.
At the risk of being threadjacked by Willard, his first link went to the following comment where I responded to Nick as follows:
CA readers will recall that I had asked Science to require Osborn and Briffa to provide the measurement data used for their chronologies for Taimyr, Tornetrask and Yamal. Osborn flatly lied to Science by saying that he was not in possession of the data – a claim that was contradicted when the data turned up in the Climategate documents.
Stokes’ unresponsive response had been that the sharing of other people’s data was an outstanding academic issue. Be that as it may, that’s a different issue than Osborn lying to Science.
Willard seems to be untroubled by CRU academics lying to journals, only by their being called out on it.
The very essence of hypocrisy … some people don’t mind being hypocritical, they just mind it being pointed out
> At the risk of being threadjacked […]
Is this the doctrine of preemptive strike applied to ClimateBall ™?
***
> Stokes’ unresponsive response had been that the sharing of other people’s data was an outstanding academic issue.
Compare and contrast with what Nick said:
https://climateaudit.org/2011/06/27/ico-orders-uea-to-produce-crutem-station-data/#comment-299438
Paraphrases like “outstanding academic issue” may not be the best way to “get your fact straight,” perhaps the most important ClimateBall ™ move of the auditing sciences.
Steve: CRU’s lying about nonexistent terms in their “confidentiality agreements” was an issue pre-Climategate and did not require Nick to read emails to understand. On earlier occasions, they had sent station data versions to the US Department of Energy who had placed it online, hardly consistent with their claims to have had confidentiality agreements. Nick ignored this when confronted as follows: “The issue with Jones is not just sending it to Webster. I also asked about Jones sending the data to the US Department of Energy and publishing it on his website in 1996. Are those “grey” areas? Nick, you never answer a straight question. It makes discussions with you very annoying. Either (1) Jones’ sending the data to the US Department of Energy was a breach of confidentiality and/or (2) Jones publishing the data online in 1996 was a breach of confidentiality or there were no binding confidentiality agreements. Doesn’t mean that Jones has to be “thrown overboard”. However, unless the conduct is confronted, little changes, as appears to be the case here.”
CRU lying to Sciencemag about not being in possession of tree ring measurement data was a separate lying incident. Again, I take it that you are offended only by CRU being challenged on their lies and not by the lies themselves.
> CRU lying to Sciencemag […]
Something more specific than “CRU” might help clarify the accusation. In any case, our Voice of God seems responsive to the comment it was supposed to reply to, an unresponsiveness which might be tough to reconcile with the “more Omertà” doctrine. There was another interesting comment in that vintage 2011 thread:
https://climateaudit.org/2011/06/27/ico-orders-uea-to-produce-crutem-station-data/#comment-299454
“Out of curiosity,” has this hope been fulfilled yet?
Steve: There have been many other people examining temperature records and I’ve spent my time on other matters, as you know. I ask readers not to threadjack and would appreciate it if you observed this blog policy. For the matter at hand, do you agree that Kaufman should issue a corrigendum – a topic connected to this post?
> seems responsive
Seems unresponsive, that is.
Never would have guessed.
If you’re really puzzled, Willard (you’re not, nice try ClimateBaller) the answer is Dr. Phil Jones. And to the extent that you’re pleading generalities, you may want to take it up with the review board…
In general, “[t]he Review found an ethos of minimal compliance (and at times non-compliance) by the CRU with both the letter and the spirit of the FoIA and EIR. We believe that this must change”. The Review also made it clear that CRU did not receive enough support from UEA management, and made recommendations to the university on how it should handle future information requests. It also recommended to the ICO that it engage more with universities and clarify how FoI law applies to research.
> [T]he answer is Dr. Phil Jones.
Thank you for the clarification, TerryMN, and for the mind probing, always a pleasant ClimateBall move. But since you indirectly ask, it was not clear to me if Phil was the only person who was accused of lying.
***
> For the matter at hand […]
The immediate matter at hand was the Omertà doctrine, which has been transmogrified into “CRU lying,” a ClimateBall deflection the Auditor himself used.
The title of the post refers to a tautology, if we’d like to pinpoint a “matter at hand” instead of playing ClimateBall.
Highlight on ‘thus far’ with regard to publishing a corrigendum.
“I don’t think snooping through people’s private emails is a dignified activity.”
Many years ago all my emails were made available to a competitor during the discovery phase of a lawsuit. They were work emails and not private, although many were to and from my wife and kids. That was unpleasant; however, the money involved was a piddling few billion compared to the astronomical sums involved in CAGW. So I think we can all tolerate a bit of undignified activity, including Steve making occasional well-deserved snide remarks that, frankly, many of us are thinking.
I am flabbergasted that people who want to make major changes to the finances of the world do not understand the degree of scrutiny and attack they will properly be subjected to.
Ah but people dealing with a piddling few billion know that the moral high ground isn’t with anyone. With the trillions of climate policy come the anointed – at least, those who think they are and take enormous offence if we doubt it.
“I am flabbergasted that people who want to make major changes to the finances of the world do not understand the degree of scrutiny and attack they will properly be subjected to.”
think of it this way. as a graduate student you have a choice: extend the science; challenge the science. in both cases you are challenged back and questioned by your peer group.
you get your PhD and write papers. You are challenged by your peer group according to the norms of that group. you might be subject to gossip in the faculty lounge, but your career advancement depends on what peers think about your science.
You do some government grants. The funding agency looks at your work. You promised to do X, did you do X?
Then you get invited by politicians to help them in their field. you know nothing (first hand) about wrestling with pigs in mud.
You’ve just entered thunderdome and you don’t even know the rules.
Climateball in the Thunderdome. No wonder the rules change when the consequences of losing are that terminal.
at least now they have media training for scientists.
recall during the early CG emails that the head of PR was Mann.
Recall the mail from briffa to jones ca 2005, wherein briffa forwarded a bunch of newsclippings about mann’s stonewalling.
Briffa’s observation? skeptics are getting traction with this.
Jones’ reaction? that day he emails warwick hughes “why should I give you my data”.
dumb and dumber.
As I look back I’m thinking: really? do we really want to take our PR direction from Mann? do we really want to hang the case for AGW in some loose way on the hockey stick? really? well, that glove don’t fit, so some think we must acquit.
And there you have it. All the evidence from physics, good and solid, that GHGs warm the planet is forgotten because some dummy decided that the HS was a key bit of evidence.
I don’t think it is forgotten Steve. But the inept PR on something peripheral has naturally led some to think there’s nothing there at all. We reap what we sow.
That’s perhaps the reason Gavin is moving away from – almost dismissing – paleo as key evidence.
New PR game.
But without paleo the case for “unprecedented” goes out the window, and modern records aren’t long enough to establish the limits of natural variation.
lose/lose
So it’s gotta be just a slime attack.
OT… I was half expecting that Steve would have commented on Peter Rose’s post over at Judith’s:
Cognitive bias – how petroleum scientists deal with it
http://judithcurry.com/2014/11/03/cognitive-bias-how-petroleum-scientists-deal-with-it/
It seems to have relevance to the governance of scientific investigation and evidence-based decision making?
AJ – Posted Nov 6, 2014 at 11:24 PM: “Cognitive bias – how petroleum scientists deal with it. http://judithcurry.com/2014/11/03/cognitive-bias-how-petroleum-scientists-deal-with-it/”
The same is true in my field (federal and state taxation). If there is no financial risk for being wrong, the incentive to be correct diminishes.