On June 10, a few days after the Gergis-Karoly-Neukom error had been identified, I speculated that they would try to re-submit the same results, glossing over the fact that they had changed the methodology from that described in the accepted article. My cynical prediction was that a community unoffended by Gleick or upside-down Mann would not cavil at such conduct.
The emails http://www.climateaudit.info/correspondence/foi/gergis/Part%202a%20Journal%20Correspondence.pdf show that Karoly and Gergis did precisely as predicted, but Journal of Climate editors Chiang and Broccoli didn’t bite. Perhaps most surprising was that Karoly’s initial reaction was agreement with the Climate Audit criticism of ex post correlation screening. However, once Karoly realized that the reconstruction fell apart under the methodology of the accepted article, he was very quick to propose that they abandon the stated methodology and gloss over the changes. In today’s post, I’ll walk through the chronology.
Karoly’s first technical response (June 7 Melbourne) to Neukom’s confession was a surprisingly strong endorsement of criticism of non-detrended correlation, going as far as to even agree with me by name:
Thanks for the info on the correlations for the SR reconstructions during the 1911-90 period for detrended and full data. I think that it is much better to use the detrended data for the selection of proxies, as you can then say that you have identified the proxies that are responding to the temperature variations on interannual time scales, ie temp-sensitive proxies, without any influence from the trend over the 20th century. This is very important to be able to rebut the criticism that you only selected proxies that show a large increase over the 20th century ie a hockey stick.
The same argument applies for the Australasian proxy selection. If the selection is done on the proxies without detrending ie the full proxy records over the 20th century, then records with strong trends will be selected and that will effectively force a hockey stick result. Then Stephen McIntyre’s criticism is valid. I think that it is really important to use detrended proxy data for the selection, and then choose proxies that exceed a threshold for correlations over the calibration period for either interannual variability or decadal variability for detrended data. I would be happy for the proxy selection to be based on decadal correlations, rather than interannual correlations, but it needs to be with detrended data, in my opinion. The criticism that the selection process forces a hockey stick result will be valid if the trend is not excluded in the proxy selection step.
Neukom replied immediately (8:55 June 7 Melbourne) that he agreed, but warned that peril lay that way, since they had very few proxies that met even such a minimal standard:
I agree, but we don’t have enough strong proxy data with significant correlations after detrending to get a reasonable reconstruction….
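The screening effect Karoly describes, in which selecting proxies on non-detrended correlations preferentially picks series whose random trends match the target trend, is easy to demonstrate with synthetic data. The sketch below uses hypothetical sizes, noise levels and thresholds (not the actual Gergis network) and screens pure-noise "proxies" against a trending target both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 80, 500  # hypothetical sizes, not the Gergis network

# Target: a 20th-century-style warming trend plus interannual noise
years = np.arange(n_years)
target = 0.01 * years + rng.normal(0, 0.3, n_years)

# Pure-noise "proxies": each gets a random drift, none is temperature-sensitive
slopes = rng.normal(0, 0.01, n_proxies)
proxies = slopes[:, None] * years + rng.normal(0, 0.3, (n_proxies, n_years))

def detrend(x):
    """Remove the least-squares linear trend from a 1-D series."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

thresh = 0.22  # roughly the 5% two-sided level for 80 independent points

full_pass = [i for i in range(n_proxies)
             if abs(corr(proxies[i], target)) > thresh]
det_pass = [i for i in range(n_proxies)
            if abs(corr(detrend(proxies[i]), detrend(target))) > thresh]

print(f"selected on full data:      {len(full_pass)}")
print(f"selected after detrending:  {len(det_pass)}")
```

With non-detrended screening, a large fraction of the noise series pass simply because their random drifts align with the target trend; averaging the survivors reproduces the calibration-period trend by construction, which is the "forced hockey stick" that Karoly conceded. Detrended screening passes only the expected ~5% false positives.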
Meanwhile, Gergis and Karoly were drafting a notice to the Journal of Climate, with the notice being sent the next day (June 8), describing the error as an “unfortunate data processing error”, explaining that they had used detrending in a related SH paper, but had inadvertently failed to do so in the AUS paper:
When we went to recheck this on Tuesday [June 5], we discovered that the records used in the final analysis were not detrended for proxy selection, making this statement incorrect.
The detrending of proxy records had been done in another paper on Southern Hemisphere temperature variations that we had been writing simultaneously. So we wrongly assumed the same thing had been done in the Australasian paper. [REDACTED] … this was not picked up until now.
Although it was an unfortunate data processing error, it does have implications for the results of the paper. We wish to alert you to this issue before the paper goes into final production.
They asked that the paper be removed from the online section and asked how to proceed:
Please let us know how you’d like us to proceed, be it through a revised or new submission.
The following day (June 9 Melbourne), Journal of Climate editor Chiang gave Gergis some bad news: he had decided to rescind acceptance of the paper and asked Gergis to withdraw the paper, inviting her to re-submit fresh following a re-do:
After consulting with the Chief Editor, I have decided to rescind acceptance of the paper- you’ll receive an official email from J Climate to this effect as soon as we figure out how it should be properly done. I believe the EOR has already been taken down.
Also, since it appears that you will have to redo the entire analysis (and which may result in different conclusions), I will also be requesting that you withdraw the paper from consideration. Again, you’ll hear officially from J Climate in due course. I invite you to resubmit once the necessary analyses and changes to the manuscript have been made.
I hope this will be acceptable to you. I regret the situation, but thank you for bringing it to my prompt attention.
On June 11, Gergis forwarded this to Karoly, Phipps and Gallant without comment. Despite this seemingly categorical email from Journal of Climate, Karoly, Gergis and the University of Melbourne publicly maintained that the article was merely “on hold” or “under revision”.
Later in the evening of June 11, Karoly reviewed potential options for his coauthors, ranging from more or less ignoring results under the method of the accepted article (his option 1), to re-submitting results using the methodology set out in the accepted paper (option 3), to showing both (option 2). By this point, Karoly had moved away from re-submitting under the method of the accepted article, favoring either option 1 or option 2. Karoly forwarded to his coauthors an email from Michael Mann in which Mann accused me of “dishonesty”, adding his own commentary:
Following some email discussions with Mike Mann and helpful discussions with you both last week, there appear to be several different approaches that we can take with revising the Australasian temp recon paper. I am going to go through some of them briefly, and then raise some suggestions for further data analysis that might be needed.
1. Amend the manuscript so that it states the actual way that the proxy selection was done, based on correls that included trends and were significant at the 5% level. The calibration was also done using the full data variations, including trends, over the calibration period. As Mike Mann says below and in the attached papers, this is a common approach. Don’t seriously address the proxy selection for detrended data
2. Revise the manuscript to present results for reconstructions based on both proxy selections for full correls and proxy selections for detrended correls. Expand the paper to show both sets of results and explain why the full correls are better.
3. Redo the analysis for proxy selection based on what the manuscript says, proxy selection based on detrended correls, which gives only about 9 selected proxies and only one prior to 1400. No reliable reconstruction prior to 1400.
4. Redo the analysis based on proxy correlations with local/regional temps at interannual and decadal timescales, not the Australasian area average; select proxies that have strong local temperature signals, then average the proxies to get the area average temperature. This approach is like what Raphi is doing for the SH paper, I think.
My preference is now for 1. or 2. above, and not for 3. Now for some technical questions.
1. Raphi, did you estimate the significance level of the correlations between the target and the individual proxies allowing for the autocorrelation in the proxies and the reduced degrees of freedom? Some of the comments on the CA web site suggest that they can only get sig correlations for the 27 proxies if you assume 70 degrees of freedom, effectively ignoring autocorrelation. Do you have different values for the sig correlations for each proxy, because the autocorrelation is different for each proxy?
2. In a table like the one you provided last week, can you give for each proxy record, for the 1920-1990 period, the correlation, no. of degrees of freedom and sig level for the full data, detrended data and low pass filtered data. This will help us with proxy selection.
3. It is not surprising that there are many fewer significant correlations for the interannual variations and some are even of the opposite sign for the full correlations. The spatial pattern for the temp response to ENSO, which is the main contributor to Aust temp variations at interannual time scales, is not uniform over Australasia, being quite different in NZ or Law Dome than Australia. Ailie or Raphi, can you do a map using the modern temp data for the correlations of interannual variations of gridded temp data with the target, area average Australasian temps? Then redo the map for the full data, including the trend. My guess is that the correlns will be much larger scale for the full data. This will help to explain some of the proxy selection issues for interannual variations.
Neukom responded immediately, generally agreeing, mentioning that he’d had similar advice from David Frank, a frequent Esper co-author.
On June 12, co-author Phipps opposed re-doing the paper using the stated methodology (option 3) on the grounds that this would not yield a “viable” reconstruction:
Based on the various emails circulated over the past few days, it appears that we will not have a viable millennial-scale reconstruction if we pursue the detrended approach. I therefore feel that we should use the raw data to validate the proxies…
My preference is therefore for David’s Option 2, with Option 1 as my second choice. I dislike Option 3 as it will not leave us with a viable reconstruction. I also dislike Option 4 as it strikes me as essentially starting again from scratch – which seems unnecessary given how far this work has already progressed, and also seems out of proportion to what is only a matter of fixing a technical issue.
Despite the very discouraging email from Journal of Climate editor Chiang on June 9 rescinding acceptance and requesting withdrawal, Karoly told Retraction Watch on June 13 that the paper was merely “on hold”:
The paper has been put on hold, while an issue with the data processing and methods that we have identified is checked. The paper has not been withdrawn nor has it been retracted.
Karoly also told The Australian that their plan was to re-submit the paper using the intended method:
A fresh analysis of the data will be done, using the intended method, and the effect on the study conclusions is uncertain.
In the same article, Karoly told the reporter that “the Gergis team had not seen these [Climate Audit] posts before June 5” – a claim refuted by the many references to Climate Audit and the name-and-shame post in the emails among the Karoly-Gergis coauthors.
Despite this public posture, the Karoly coauthors were concurrently trying to get the Journal of Climate to either forget or minimize use of the methodology of the accepted article. On June 14 (14:55), Gergis received a reminder email from Hayley Charney, Chiang’s editorial assistant at Journal of Climate, re-transmitting Chiang’s rejection email of June 9, to which Gergis had not responded.
However, instead of withdrawing the paper as Chiang had requested, Gergis argued (June 14) that the error didn’t matter (TM-climate science): that it was nothing more than an error in the words describing the proxy selection method, not a flaw in the analysis. (Here, Gergis took a swipe at “amateur climate skeptic bloggers” – though one of those “amateur skeptic bloggers” had been a coauthor of a recent Journal of Climate article edited by Editor-in-Chief Broccoli.) Gergis requested that they be permitted to submit a “revision” rather than being required to withdraw and re-submit, and that the revised article be allowed to more or less disregard the methodology of the accepted article, consigning discussion of results under that methodology to the Supplementary Information:
Just to clarify, there was an error in the words describing the proxy selection method and not flaws in the entire analysis as suggested by amateur climate skeptic bloggers.
Over recent days we have been in discussion with colleagues here in Australia and internationally about the use of detrended or non detrended data for proxy selection as both methods are published in the literature.
People have argued that detrending proxy records when reconstructing temperature is in fact undesirable (see two papers attached, provided courtesy of Professor Michael Mann).
While anthropogenic trends may inflate correlation coefficients, this can be dealt with by allowing for autocorrelation when assessing significance. If any linear trends ARE removed when validating individual proxies, then the validation exercise will essentially only confirm the ability of the proxies to reconstruct interannual variations. However, in an exercise of this nature we are also intrinsically interested in reconstructing longer-term trends. It therefore appears to be preferable to retain trends in the data, so that we are also assessing the ability of the proxies to reconstruct this information.
Both approaches have been widely used in the past, and both are supported in the literature. Thus we believe that either approach is entirely justifiable. In terms of revisions to our paper, we plan to compare the influence of using detrended and non detrended proxy selection in a supplementary section but it is very unlikely to result in a rewrite of the paper. Instead, there will be correction of the correct method used in the paper and reference to additional supplementary material where appropriate.
Given this paper was originally submitted for review on 3 November 2011 and was extensively reviewed by three expert assessors, my strong preference would be for permission to submit a revision of the original manuscript rather than an entirely new submission. That said, we will of course follow your advice on how best to proceed.
Chief Editor Broccoli, who had been copied on the correspondence, sharply challenged (June 15) the inconsistency between Gergis’ original information that they had inadvertently failed to implement the stated methodology (a “data processing error”) and their present position that they had carried out the analysis as intended but had merely misdescribed the methodology:
Your latest email to John characterizes the error in your manuscript as one of wording. But this differs from the characterization you made in the email you sent reporting the error. In that email (dated June 7) you described it as “an unfortunate data processing error,” suggesting that you had intended to detrend the data. That would mean that the issue was not with the wording but rather with the execution of the intended methodology.
Would you please explain why your two emails give different impressions of the nature of the error?
Chiang promptly (June 15) added his own commentary, very sensibly observing that they had presumed from the original notice that the authors were going to re-do the analysis to “conform to the description” in the paper and that he had asked them to withdraw the paper because the journal could not assume that the results would remain unchanged:
Both Tony and I read your initial email (dated June 8 for me, I’m in Taipei) to mean that you had intended to detrend during the predictor selection, but that subsequently you had discovered that you had not. Given that you had further stated that “Although it was an unfortunate data processing error, it does have implications for the results of the paper,” we had further taken this to mean that you were going to redo the analysis to conform to the description of the proxy selection in the paper.
Assuming this to be true, my reasoning was that since you are likely to use a different subset of proxies in the recalculation, it allows for the possibility of a significantly different result and conclusion. It was on this basis that I requested that you
withdraw the paper (and not because of flaws in the analysis method). I understand that the results may well remain essentially the same after the redo, but this is not something that I can assume to be true.
I hope this clarifies my decision. I’ll wait for your response to Tony’s query before I get back to you on your June 14 email?
Gergis took about 10 days to respond and was unable to give a coherent answer to Broccoli’s sensible question. She argued (June 25) that their original notification (that it was a “data processing error”) was yet another mistake; once again asked that they be able to consign any discussion of results using the methodology of the accepted article to Supplementary Information; and asked that all this be characterized as a mere “revision” (not even a “major revision”, a term of art in Journal of Climate review processes that was used to obstruct O’Donnell et al 2010.)
Sorry for the delay in responding to your emails as I have been on leave over the past week but am now back in regular email contact.
Just to clarify our position:
The message sent on 8 June was a quick response when we realised there was an inconsistency between the proxy selection method described in the paper and actually used. The email was sent in haste as we wanted to alert you to the issue immediately given the paper was being prepared for typesetting.
Now that we have had more time to extensively liaise with colleagues and review the existing research literature on the topic, there are reasons why detrending prior to proxy selection may not be appropriate. The differences between the two methods will be described in the supplementary material, as outlined in my email dated 14 June.
As such, the changes in the manuscript are likely to be small, with details of the alternative proxy selection method outlined in the supplementary material. The careful checking and analysis will take a little time but we expect to submit the revised manuscript for consideration by the journal again before the end of July. Like any other revised paper, we would expect it to be sent for peer review again.
As I mentioned previously, given this paper was originally submitted for review on 3 November 2011 and was extensively reviewed by three expert assessors, our team’s strong preference would be for permission to submit a revision of the original manuscript rather than an entirely new submission. That said, we will of course accept your decision on how best to proceed.
However, Chiang didn’t cave, though he did give Gergis and coauthors a larger window than they probably deserved. Instead of formally requesting immediate withdrawal (as he had signaled on June 9), Chiang granted Gergis four weeks to submit a revision (a schedule that would accommodate the IPCC deadline), but stated that this was a “hard deadline” and that failure to meet it would mean “rejection”. Further, Chiang firmly rejected Gergis’ suggestion that the body of the article be allowed to ignore results using the methodology of the accepted article, telling Gergis that the sensitivity of the reconstruction to detrended/non-detrended correlations was an issue that should be “addressed” and that this would be a “good opportunity to demonstrate the robustness of your conclusions”. The latter comment was polite, but very pointed; its import would have been unmistakable to Karoly and Gergis.
I’ve discussed your case again with Tony, and have come to a decision regarding the handling of your manuscript.
I will allow the modifications to your manuscript to be accepted as a revision, to be submitted on or before July 27, 2012 (EST) – so a month from today. Upon receipt, the manuscript will be sent out for re-evaluation.
Please note that this is a hard deadline, in order to keep the revision schedule within reasonable limits. If the revision is not submitted by July 27, the paper will be rejected.
In the revision, I strongly recommend that the issue regarding the sensitivity of the climate reconstruction to the choice of proxy selection method (detrend or no detrend) be addressed. My understanding that this is what you plan to do, and this is a good opportunity to demonstrate the robustness of your conclusions.
Gergis’ response conceded the point (at least temporarily; we don’t know the subsequent history):
Our team would be very pleased to submit a revised manuscript on or before the 27 July 2012 for reconsideration by the reviewers. As you have recommended below, we will extensively address proxy selection based on detrended and non detrended data and the influence on the resultant reconstructions.
Gergis and Karoly apparently did not meet the July 27 hard deadline. This was noted in a CA post of August 2, which drew attention to an update on a University of Melbourne webpage. The University webpage continued to say that the article had been “accepted” and that a “revised” version would “likely” be submitted before the end of September. (This language seems at odds with Chiang’s email saying that the article would be rejected if the hard deadline was not met, though it is possible that the arrangements were varied in an email subsequent to the tranche presently available.)
The Journal of Climate website was changed to say that the article had been “withdrawn” by the original authors, again contradicting the University of Melbourne webpage.
Meanwhile, the submission to Science by the PAGES 2K Consortium (of which Gergis was a member), which the IPCC Second Order Draft used as a replacement citation for the Gergis reconstruction, cited the Gergis et al article as “under revision”, a status that seems inapplicable once Chiang’s hard deadline had passed.
The article has now apparently been re-submitted to the Journal of Climate. One wonders precisely how Gergis et al will go about “demonstrating the robustness of [their] conclusions” as editor Chiang had asked them to do.