PAGES2K, Gergis and Made-for-IPCC Journal Articles

March 15, 2013 was the IPCC deadline for use in AR5 and, predictably, a wave of articles was accepted. The IPCC Paleo chapter wanted a graphic on regional reconstructions and the PAGES2K group has obligingly provided the raw materials for this graphic, which will be published by Nature on April 21. Thanks to an obliging mole, I have information on the proxies used in the PAGES2K reconstructions and will report today on the Gergis reconstruction, of interest to CA readers, which lives on as a zombie, walking among us as the living dead.

The PAGES2K article has its own interesting backstory. The made-for-IPCC article was submitted to Science last July on deadline eve, thereby permitting its use in the Second Draft, where it sourced a major regional paleo reconstruction graphic. The PAGES2K submission used (in a check-kited version) the Gergis reconstruction, which it cited as being “under revision” though, at the time, it had been disappeared.

The PAGES2K submission to Science appears to have been rejected, as it never appeared in Science and the corresponding article is scheduled for publication by Nature. One presumes that IPCC would have been annoyed by Science’s failure to publish the article and that there must have been considerable pressure on Nature to accept it. Nature appears to have accepted the PAGES2K article only on IPCC deadline eve.

The new PAGES2K article contains reconstructions for all continents and an extremely long list of proxies, some of which have been discussed before, but some only now making their first digital appearance. Each regional reconstruction is a major undertaking, deserving of separate peer review. It seems impossible that these various regional reconstructions could each have been thoroughly reviewed in the form re-submitted to Nature. Indeed, given that the PAGES2K coauthor list was very large, one also wonders where they located reviewers that were unconflicted with any of the authors.

Of particular interest to CA readers is the zombie version of the Gergis reconstruction. Previous CA articles are tagged gergis.

CA readers will recall that Gergis et al 2012 had stated that they had used detrended correlations to screen proxies – a technique that seemingly avoided the pitfalls of correlation screening. Jean S pointed out that Gergis et al had not used the stated technique and that the majority of their proxies did not pass a detrended correlation test – see CA discussion here (building on an earlier thread) reporting that only 6 of 27 proxies passed the stated significance test.
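The distinction between the two screening methods is worth a toy illustration. The sketch below is my own (not Gergis et al’s code, and all names in it are hypothetical): a synthetic “proxy” that is nothing but trend plus independent noise passes a screen based on full (non-detrended) correlation against a warming target, yet fails once both series are detrended.

```python
import numpy as np

def screening_correlations(proxy, target):
    """Return the (full, detrended) calibration-period correlations
    between a proxy series and an instrumental target."""
    t = np.arange(len(target))

    def detrend(x):
        # remove the least-squares linear trend
        return x - np.polyval(np.polyfit(t, x, 1), t)

    full = np.corrcoef(proxy, target)[0, 1]
    detrended = np.corrcoef(detrend(proxy), detrend(target))[0, 1]
    return full, detrended

# A "proxy" that is only trend plus noise, unrelated to the target's
# year-to-year variability: it sails through a non-detrended screen
# but its detrended correlation is indistinguishable from zero.
rng = np.random.default_rng(0)
n = 80  # e.g. a 1911-1990 calibration period
target = 0.02 * np.arange(n) + rng.normal(0.0, 0.3, n)
trend_only_proxy = 0.02 * np.arange(n) + rng.normal(0.0, 0.3, n)
full, detrended = screening_correlations(trend_only_proxy, target)
```

Screening on the full correlation therefore preferentially admits series with 20th-century trends, which is exactly the “forced hockey stick” concern discussed below.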

Senior author David Karoly asked coauthor Neukom to report on correlations and, after receiving Neukom’s report, wrote his coauthors conceding the validity of the criticism:

Thanks for the info on the correlations for the SR reconstructions during the 1911-90 period for detrended and full data. I think that it is much better to use the detrended data for the selection of proxies, as you can then say that you have identified the proxies that are responding to the temperature variations on interannual time scales, ie temp-sensitive proxies, without any influence from the trend over the 20th century. This is very important to be able to rebut the criticism is that you only selected proxies that show a large increase over the 20th century ie a hockey stick .

The same argument applies for the Australasian proxy selection. If the selection is done on the proxies without detrending ie the full proxy records over the 20th century, then records with strong trends will be selected and that will effectively force a hockey stick result. Then Stephen Mcintyre criticism is valid. I think that it is really important to use detrended proxy data for the selection, and then choose proxies that exceed a threshold for correlations over the calibration period for either interannual variability or decadal variability for detrended data. I would be happy for the proxy selection to be based on decadal correlations, rather than interannual correlations, but it needs to be with detrended data, in my opinion. The criticism that the selection process forces a hockey stick result will be valid if the trend is not excluded in the proxy selection step.

Unfortunately, as coauthor Neukom immediately recognized, there was a big problem:

we don’t have enough strong proxy data with significant correlations after detrending to get a reasonable reconstruction.

Mann and Schmidt immediately contacted Gergis and Karoly advising them to tough it out as Mann had done with his incorrect use of the contaminated portion of the Tiljander data, where Mann’s refusal to concede the error had actually increased his esteem within the climate community. Nonetheless, Gergis and Karoly notified Journal of Climate of the problem. Despite Karoly’s concerns about substantive problems, Gergis hoped to persuade Journal of Climate that the error was only in their description of methodology and to paper over the mistake. However editor Chiang’s immediate reaction was otherwise, advising Gergis:

it appears that you will have to redo the entire analysis (and which may result in different conclusions), I will also be requesting that you withdraw the paper from consideration.

Upon receiving advice from Mann, Gergis tried to persuade Journal of Climate that the error was not one of methodology, but one of language only. But Chief Editor Broccoli was not persuaded, responding:

In that email (dated June 7) you described it as “an unfortunate data processing error,” suggesting that you had intended to detrend the data. That would mean that the issue was not with the wording but rather with the execution of the intended methodology.

Editor Chiang added:

Given that you had further stated that “Although it was an unfortunate data processing error, it does have implications for the results of the paper,” we had further took this to mean that you were going to redo the analysis to conform to the description of the proxy selection in the paper.

After further lobbying from Gergis, Chiang reluctantly permitted Gergis to re-submit as a “revision” by the end of July, but insisted that they show the results of both methods, describing this as an “opportunity” to show the robustness of their work:

In the revision, I strongly recommend that the issue regarding the sensitivity of the climate reconstruction to the choice of proxy selection method (detrend or no detrend) be addressed. My understanding that this is what you plan to do, and this is a good opportunity to demonstrate the robustness of your conclusions.

Gergis didn’t meet the July 31 deadline and Journal of Climate reported that the paper had been “withdrawn” by the authors.

The article was apparently resubmitted to Journal of Climate by the end of September, where, according to Gergis’ current webpage, it remains “under review”.

Nonetheless, the Gergis reconstruction has already been incorporated into the PAGES2K made-for-IPCC composite. CA readers will recall the Mole Incident in 2009. Once again, I am in possession of the proxy list used in the zombie reconstruction and can report that it has had only negligible changes.

On the left is a list of the 27 proxies used in the disappeared Gergis version, highlighting the proxies re-used in the zombie version. On the right is the list of proxies in the new version, highlighting the additions. 21 of 27 proxies are re-used: six proxies have been excluded, while seven have been added. Remarkably, Gergis has kept the numbering as close as possible to the original list, so that the first 20 re-used proxies appear in the same order as in the original table.

[Figure: gergis2012-table1 (cropped)]

[Figure: gergis2013 (cropped)]

The medieval portion of their reconstruction only has two proxies – as observed at CA very early here, where it was also pointed out that these two proxies did not constitute “new” information, as claimed in an IPCC draft, since they had not only been available for AR4 but illustrated in it.

Excluded from the original list are a tree ring series (Takapari), two ice core series (both from Vostok) and three coral series (Bali, Maiana and Fiji 1F O18), replaced by a speleothem (Avaiki), three tree ring series (Baw Baw, Celery Top West, Moa Park), two coral luminescence series (Great Barrier Reef, Havannah) and a coral O18 series (Savusavu). None of the excluded or included series is particularly long.

Obviously Gergis et al have not “redone the analysis to conform to the description of the proxy selection in the paper” as they continue to use many of the proxies that failed the original significance test – see the graphic below from last June.

Nature reviewers obviously didn’t have the concerns about robustness that were expressed by Journal of Climate editors last summer, as the new article doesn’t demonstrate any such “robustness”. It will be interesting to see whether Journal of Climate editors will themselves adhere to the scruples that they showed last summer and require Gergis et al to demonstrate the robustness of their reconstruction.

50 Comments

  1. clivere
    Posted Apr 19, 2013 at 1:26 PM | Permalink

    “where Mann’s refusal to concede the error had actually increased his esteem within the climate community”

    To me this reads like an assertion with no justification to back it up. Perhaps you can justify it, but does that help the overall article, which does not need that kind of diversion? In my opinion, better left out!

    • MarkB
      Posted Apr 19, 2013 at 2:32 PM | Permalink

      Given the fact that Mann has been given awards since the event, justification is hardly an issue. snip

    • pax
      Posted Apr 19, 2013 at 3:18 PM | Permalink

      I must admit that this assertion also puzzled me. I suspect that Mann is secretly regarded as a d***head among many of his peers, but I don’t have Steve’s information.

    • JunkPsychology
      Posted Apr 19, 2013 at 3:41 PM | Permalink

      Refusing to retract or even acknowledge an obviously improper use of data would get you kicked out of any scientific community worthy of the name.

      Mann is still there. Enough said, as far as the “climate community” goes.

    • mecheng1
      Posted Apr 20, 2013 at 8:28 AM | Permalink

      Steve’s statement is well founded. Mann has received several awards since he was outed on his use of the contaminated and upside down Tiljander data. In addition he has been promoted to Distinguished Professor of Meteorology at Penn State.

      http://ploneprod.met.psu.edu/people/mem45

      • clivere
        Posted Apr 20, 2013 at 9:06 AM | Permalink

        You think the awards were given to him because he did not acknowledge the misuse of the contaminated data from the Mia Tiljander paper?

        I personally think those awards would have been given for other reasons entirely.

        I also think that a large number of academics are unaware of the significance of the Tiljander error and I would cite the perpetuation of the error in the Tingley/Huybers paper as evidence of that ignorance.

        I do not think that Mike Mann’s esteem has increased as a result of the misuse of the data. There is even evidence that the fan club in the Tree Hut community is embarrassed by the error.

        • Steve McIntyre
          Posted Apr 20, 2013 at 10:03 AM | Permalink

          Re: clivere (Apr 20 09:06),
          Mann et al 2008 was Mann’s most substantive work since MBH98. Its supposed ability to “get” a Stick without dendro was widely promoted at the time and subsequently. For example, in realclimate’s response to Yamal and, more importantly, in the EPA response to the divergence problem, where the EPA argued that the divergence problem didn’t matter because of Mann’s no-dendro reconstruction.

          Would Mann have received equivalent honors if he had retracted the Mann et al 2008 no-dendro reconstruction? What would they have been for?

          His refusal to concede the error was not itemized in the award citation, but the non-retracted work surely was.

        • clivere
          Posted Apr 20, 2013 at 10:40 AM | Permalink

          Steve – ok – a tenuous substantiation going down the cause and effect route.

          Any further discussion will turn very hypothetical and inconclusive about whether he would have still received the awards or not if the error had been admitted. I share your view that the Tiljander error is a bad one.

          It does not change my opinion that you have made a weak assertion that is off topic to the article and would have been better left out. It is better to focus on the strong arguments rather than pad them out with weak ones.

          Steve: point taken.

        • Pat Frank
          Posted Apr 20, 2013 at 1:25 PM | Permalink

          clivere: “those awards would have been given for other reasons entirely.”

          Reasons such as what, for example?

          We know from the work of Steve (and Ross) that Mann’s MBH98/99 constituted a knowing fabrication. That is proved by the contents of Mann’s “CENSORED” directory. Such conduct has its own honors.

          His specious assignment of physical meaning to principal components has misled an entire generation of graduate students. His mathematics are hardly more than a vehicle for trickery. He personally has brought great harm to climate science by aggressive and deliberate pollution with the politics of recrimination. This he has done in no small part as a smoke screen to defend his fabrications from exposure.

          What has he done worthy of award?

  2. R
    Posted Apr 19, 2013 at 3:25 PM | Permalink

    Watch what you imply there Steve Mc

    “Indeed, given that the PAGES2K coauthor list was very large, one also wonders where they located reviewers that were unconflicted with any of the authors.”

    I know for a fact that this was an extremely tough review process. One reviewer had in excess of 50 pages of review comments.

    • bernie1815
      Posted Apr 19, 2013 at 3:30 PM | Permalink

      50 pages of review comments and the article still got accepted?

      Steve: O’Donnell et al review correspondence was an enormous file as well.

    • zootcadillac
      Posted Apr 19, 2013 at 3:40 PM | Permalink

      and this is relevant to the fact that there could quite feasibly be a conflict how?

      50 pages of comments just means that someone had a lot to say (and quite likely did not pal review it) but it does not say that there could be no connection with the reviewer or an author.

      I think it’s a reasonable question to ponder. Might be a problem less of implication and more of inference.

    • Steve McIntyre
      Posted Apr 19, 2013 at 3:42 PM | Permalink

      Re: R (Apr 19 15:25), by and large (and my experience on this point is limited), I think that reviewers too often try to impose their point of view, as opposed to ensuring that the authors have properly documented their results. “Tough” reviews are not always useful reviews.

      Steig’s review of O’Donnell et al 2010 was “tough” but unhelpful and, in my opinion, the first draft was, in some ways, better than the final article, with changes made in response to Steig deteriorating the article. I realize that we’re not going to solve peer review in a few sentences.

      do you have an opinion on who the 50-page reviewer was? It sounds like something that Mann would do.

      • NZ Willy
        Posted Apr 19, 2013 at 9:43 PM | Permalink

        I am guessing that R himself is that reviewer.

        • R
          Posted Apr 20, 2013 at 3:23 PM | Permalink

          My only experience is having interacted with some people involved with Pages group who expressed to me that the process was difficult. As for the identity of the tough reviewers – I cannot speak to that but the paleo community is not particularly large and this paper has like 60 authors 😉

        • Skiphil
          Posted Apr 20, 2013 at 3:43 PM | Permalink

          Neither the number of pages of review comments, nor the perceived toughness of the review(s), means anything without a rigorous assessment of the papers.

          For a rough comparison, some students can perceive as grueling an examination or grading process which may seem normal or even easy to more proficient students.

          IF the field is weak on any of the methodologies, etc. then even a cursory but critical review could seem harsh. It all depends upon detailed assessments of the standards, not some generic assertion that the review was tough (or easy, etc.).

        • NZ Willy
          Posted Apr 20, 2013 at 5:48 PM | Permalink

          Especially if the review was tough in the other direction, that is, the reviewer urging a result more in conformity to “The Team”, as appeared to happen with Marcott et al.

    • Skiphil
      Posted Apr 20, 2013 at 6:59 AM | Permalink

      Re: R (Apr 19 15:25),

      IF an “extremely tough review process” should pass along a seriously flawed paper, THEN that has interesting implications for the prevailing standards in the field.

  3. Steven Mosher
    Posted Apr 19, 2013 at 5:01 PM | Permalink

    CA readers will recall the Mole Incident in 2009.

    nice

    • pouncer
      Posted Apr 19, 2013 at 8:53 PM | Permalink

      I’m missing the joke here. I thought that “mole” was an open FTP server. Or is the IPCC team also leaving confidential stuff lying around in public venues?

  4. Manniac
    Posted Apr 19, 2013 at 5:22 PM | Permalink

    “According to Dr. Nigel V.H. Oldham, professor emeritus at Oxford University’s Centre for Metascience, this violent data dance is what makes climate researchers unique among breeds of scientists.”

    [iowahawk.typepad.com]

    • Steven Mosher
      Posted Apr 20, 2013 at 9:32 PM | Permalink

      best not to ask

  5. Geoff Sherrington
    Posted Apr 19, 2013 at 8:45 PM | Permalink

    The Gergis matter has been somewhat protected. Here are 2 emails:
    From: sherro1@optusnet.com.au
    Sent: Saturday, June 02, 2012 9:51 PM
    To: enquiries@climatechange.gov.au
    Subject: Disclosure of information by persons under contract
    Might you please describe the requirements for authors of scientific papers to make available some or all of the raw data behind a publication when the publication is funded in part or in full by the Department of Climate Change, and/or under Contract to it. If the information exists in an Act, might you please disclose it and the relevant section. It there are guidelines from the Department of Climate Change, might they please be emailed to me or referred to in a form that has reasonable access properties. If the Department of Climate Change is involved with publications that have no guidelines for data availability and archiving, might you please make this clear to me. In the event that there is a complexity caused by dates of commencement, amendment or cessation of Acts, Regulations, Contracts or Guidelines pertinent to the activities of the Department of Climate Change, Australia, you might use the specific example of Dr Joelle Gergis et al, Melbourne University. The information below is from the public source
    http://www.findanexpert.unimelb.edu.au/researcher/person203094.html
    Contracts
    Title Role Funding Source Award Date ESTIMATING NATURAL CLIMATE VARIABILITY IN THE AUSTRALASIAN REGION OVER THE PAST 2,000 YEARS: DATA SYNTHESIS FOR THE IPCC 5TH ASSESSMENT REPORT Chief Investigator DEPT OF CLIMATE CHANGE 01/01/2011
    This is believed to have led in part or in full to a publication – J. Gergis, R. Neukom, S.J. Phipps, A.J.E. Gallant, and D.J. Karoly, “Evidence of unusual late 20th century warming from an Australasian temperature reconstruction spanning the last millennium”, Journal of Climate, 2012.
    Geoffrey H Sherrington
    ………………………………
    Re: Due Diligence – Grants from Department of Climate Change
    Dear Mr Sherrington,
    The Department of Climate Change and Energy Efficiency provided funds to the University of Melbourne in January 2011 for scientific research into the provision of extended estimates of regional scale climate variables (temperature and rainfall) to reduce uncertainties about climate change and its potential impacts in the Australasian region over the past 500-2,000 years. Under this research project, the manuscript ‘Evidence of unusual late 20th century warming from an Australasian temperature reconstruction spanning the last millennium’ was submitted to the Journal of Climate earlier this year.
    The Department is aware that the publication of this manuscript was put on hold by the project leaders after the identification of an issue in the processing of the data used in the study. The testing of scientific studies through independent analysis of data and methods strengthens scientific conclusions and is a normal part of the scientific process. The researchers are currently reviewing the data and results and will resubmit the paper to the journal for peer review.
    The paper synthesises palaeo data from a number of sources. The majority of this data is publically available through the NOAA World Data Centre for Palaeoclimatology at http://www.ncdc.noaa.gov/paleo/. The review article Neukom and Gergis, ‘Southern Hemisphere high-resolution palaeoclimate records of the last 2000 years’, The Holocene May 2012 22: 501-524 provides a comprehensive list of references which contain data that were used in the synthesis paper ‘Evidence of unusual late 20th century warming’. Any data which is not available at the NOAA world data centre can be requested from the authors listed in the review article.
    Thank you for your enquiry.
    Regards
    Climate Change Science Team
    9 August 2012
    ……………………………..
    Despite several requests for a human name, the Climate Change Science Team has refused to identify itself. One wonders if an author of the subject paper is on it.

    • Betapug
      Posted Apr 21, 2013 at 1:26 PM | Permalink

      “….provided funds (for..”DATA SYNTHESIS FOR THE IPCC 5TH ASSESSMENT REPORT”) to the University of Melbourne…to reduce uncertainties about climate change …over the past 500-2,000 years”

      “500-2000 years”??..or so, or thereabouts, or approximately, or whatever suits you?

      “Money for nothing and your (perks) for free.” to coin a phrase.

  6. PMT
    Posted Apr 19, 2013 at 10:01 PM | Permalink

    Steve, this BBC article may be off-topic, but it does resonate with a lot of your work and the essence of this blog.
    Opening quote:-
    “This week, economists have been astonished to find that a famous academic paper often used to make the case for austerity cuts contains major errors. Another surprise is that the mistakes, by two eminent Harvard professors, were spotted by a student.”

    http://www.bbc.co.uk/news/magazine-22223190

    Paul

    • Wally
      Posted Apr 20, 2013 at 2:45 AM | Permalink

      Comes as no surprise.

      When I was an undergrad I found errors in the course work that the prof was teaching. When I asked him about it face to face and explained why I thought there was a mistake, he looked and said “oh yes, that’s not right”. He didn’t issue a correction to the class, though.

      For that episode, he went down from “suspect” to “rock bottom tosspot” in my estimation. My first experience of academic egos and loss of face.

  7. Hoi polloi
    Posted Apr 20, 2013 at 2:20 AM | Permalink

    This and many other scientific gaffes lately in medical, social and climate science (some of them very prominent in my own country) have made me, as a layperson, very suspicious of every “ground breaking”, “unprecedented”, “revolutionary” piece of research that has seen the light in recent years. I’ve lost my trust in science.

    Is that why they say: “where are the contemporary geniuses”?

    Indeed, where are the groundbreaking, revolutionary Einsteins or Feynmans of today?

    “Geniuses are born, they never been taught”.

  8. Stephen Richards
    Posted Apr 20, 2013 at 3:18 AM | Permalink

    Steve McIntyre

    Posted Apr 19, 2013 at 3:42 PM | Permalink

    In my fairly extensive experience, reviewers ALWAYS try to impose their own views / opinions.

    • Craig Loehle
      Posted Apr 20, 2013 at 4:59 PM | Permalink

      I find several types of reviewers:
      1) helpful, with good suggestions
      2) helpful who find a problem with my analysis
      3) pedantic
      4) have no idea about the methods/topic and make nonsense comments/requests

      of course type 4 is the most annoying.

  9. Geoff Sherrington
    Posted Apr 20, 2013 at 5:11 AM | Permalink

    In general reading today I found this:
    https://theconversation.com/uk-researcher-sentenced-to-three-months-jail-for-faking-data-13619

  10. Speed
    Posted Apr 20, 2013 at 7:39 AM | Permalink

    Carl Bialik (The Numbers Guy) at the Wall Street Journal has two pieces about the Reinhart-Rogoff paper.
    “It is rare for peer reviewers to seek original data sets of authors or to check their coding,” Dube said. “It would be very time-consuming to do so.”

    Dube offered another idea: “Perhaps there could be an audit system whereby authors would be required to submit their data and coding to reviewers with a small probability [of audit]. The threat of such an audit might provide better incentives for record keeping.”
    http://blogs.wsj.com/numbersguy/reinhart-rogoff-the-math-errors-that-slip-through-the-crack-1232/

    And it isn’t even clear just how uncertain many of the data are. Economists today give a margin of error for a GDP estimate. But no such margin accompanies older figures. For example, the high-debt, negative-growth year of 1951 in New Zealand was key to the original paper’s finding. But the 7.6% contraction in GDP was based on academic estimates; the country didn’t have official GDP estimates until four years later, said a spokeswoman for government data keeper Statistics New Zealand.

    Without margins of error for individual readings, there was no way for the Harvard researchers to compute a margin of error for their overall findings. It is possible that if they did, it would dwarf the difference between growth in high- and low-debt countries.
    http://online.wsj.com/article/SB10001424127887323809304578432790180712674.html

    These sorts of problems are well known to students of climate research.

    • pdtillman
      Posted Apr 20, 2013 at 5:47 PM | Permalink

      Re: Speed (Apr 20 07:39),

      Re Bialik on the Reinhart-Rogoff paper:

      “The 2010 paper appeared in a special issue of the American Economic Review, devoted to papers presented at the annual meeting of the American Economic Association. Those papers aren’t peer-reviewed. “The editorial staff of the AER has nothing to do with them,” Pinelopi Goldberg, editor of the AER and an economist at Yale University, wrote in an email. If errors are caught after publication of these papers, the journal doesn’t print corrections — unlike with articles in its regular issues.”

      The dismal science and climate science — gold-dust twins!

  11. Jean S
    Posted Apr 20, 2013 at 11:54 AM | Permalink

    It was much harder to find the “mole” than the error in original Gergis et al .. took me about an hour 🙂

    It seems pretty clear that the non-detrended correlation is used as the new Gergis et al. screening. But what is the justification? Steve, can you see any logic in those left out compared to the new ones? Change of geographic area?

    Minor correction: the new series 22 (Savusavu) should be also highlighted on the right.

    Steve: No idea. They might have restricted the geographical area. If they tweaked their inclusion protocol, it might have changed Law Dome – with an observable MWP. So they might have tweaked their geography as well. If they did, I’m sure that there’s a “good” reason as well. Or maybe there’s only a good reason. We’ll see.

    • sue
      Posted Apr 20, 2013 at 12:46 PM | Permalink

      Re minor correction: and 27 Laing should be highlighted on the left and not highlighted on the right.

    • Steve McIntyre
      Posted Apr 20, 2013 at 1:47 PM | Permalink

      Jean S says:

      It was much harder to find the “mole” than the error in original Gergis et al .. took me about an hour 🙂

      an entire hour to find the mole. You’re losing your touch 🙂

    • Posted Apr 30, 2013 at 4:21 AM | Permalink

      Is cherry-picking a method to produce a result, as opposed to cherry-picking data to produce a result, (cherry-picking)^2?

  12. Jean S
    Posted Apr 20, 2013 at 1:54 PM | Permalink

    Hah! The “mole” just gave me some auxiliary information showing that Steig et al was the instrumental target in the PAGES2k Antarctica reconstruction! Moreover, I was handed information indicating that they managed to create a reconstruction with negative average (of two) verification RE for every single year! That has to set some type of a new record.

    • Jean S
      Posted Apr 20, 2013 at 2:51 PM | Permalink

      Re: Jean S (Apr 20 13:54),
      Oh no, the mole just corrected me. It’s not just a reconstruction, all three Antarctica reconstructions (full, west, east) have negative average verification REs for every single year!

      Inquiring minds want to know also which one they are going to show in their paper to be released next week. Or is it after all a weighted average of the east and west reconstructions? 😉
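      For reference, the reduction-of-error (RE) statistic measures whether a reconstruction beats the naive benchmark of simply predicting the calibration-period mean over the verification period. A generic sketch of the standard definition (my illustration, not the PAGES2K code):

```python
import numpy as np

def reduction_of_error(obs, recon, calibration_mean):
    """Classic verification RE: 1 - SSE(reconstruction) / SSE(benchmark),
    where the benchmark simply predicts the calibration-period mean.
    RE <= 0 means the reconstruction has no skill over that benchmark."""
    sse_recon = np.sum((obs - recon) ** 2)
    sse_bench = np.sum((obs - calibration_mean) ** 2)
    return 1.0 - sse_recon / sse_bench

# A perfect reconstruction scores 1; the benchmark itself scores 0;
# anything worse than the benchmark goes negative.
obs = np.array([0.1, -0.2, 0.3, 0.0])
assert reduction_of_error(obs, obs, obs.mean()) == 1.0
assert abs(reduction_of_error(obs, np.full(4, obs.mean()), obs.mean())) < 1e-12
```

      So a reconstruction with negative RE in every single year performs worse than a flat line at the calibration mean throughout.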

    • Pat Frank
      Posted Apr 20, 2013 at 2:57 PM | Permalink

      So, Jean S, you’re saying that PAGES2K set as their target the Steig PCA that O’Donnell, et al., showed unambiguously to be wrong? 🙂

      • Jean S
        Posted Apr 20, 2013 at 3:16 PM | Permalink

        Re: Pat Frank (Apr 20 14:57),
        what else could Steig’s “Ant_instrum_target” be?

        # Steig target comes as Antarctica;WAIS+Pen;EAIS;WAIS;Pen
        target<-read.ts("Ant_instrum_target.csv",sep=";",header=T)
        target.fullant<-target[,1]
        target.ea<-target[,3]
        target.wa<-target[,2]
        
    • Jean S
      Posted Apr 20, 2013 at 3:52 PM | Permalink

      Re: Jean S (Apr 20 13:54),

      Oops:

      # Make the proxy composite ouf of the scaled proxies weighting them with their individual cor with the target
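      For readers wondering what that comment describes: each standardized proxy is weighted by its (non-detrended) correlation with the instrumental target before averaging. A hypothetical sketch of such a step (my own illustration, not the leaked script):

```python
import numpy as np

def correlation_weighted_composite(proxies, target):
    """Average standardized proxies, weighting each one by its
    full (non-detrended) correlation with the instrumental target,
    as the leaked comment appears to describe. Illustrative only."""
    weights = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
    scaled = [(p - p.mean()) / p.std() for p in proxies]
    composite = sum(w * s for w, s in zip(weights, scaled)) / weights.sum()
    return composite, weights

rng = np.random.default_rng(1)
target = rng.normal(size=50)
proxies = [target + 0.1 * rng.normal(size=50),  # strongly correlated proxy
           0.5 * target + rng.normal(size=50)]  # weakly correlated proxy
composite, weights = correlation_weighted_composite(proxies, target)
```

      Proxies weighted this way on undetrended data inherit any 20th-century trend in the target, which is precisely the circularity Karoly’s email warned about.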
      
  13. Rud Istvan
    Posted Apr 20, 2013 at 3:03 PM | Permalink

    It is far too early to tell, but if in fact the new proxy set again violates its stated selection criteria, then it is incumbent upon those here with the most compelling data to demand a correction or retraction from the editors of Nature, especially in light of the previous situation, of which they, or at least the paper’s reviewers, should have been aware.
    Blogging is no longer enough. The leading journals have to be held accountable to minimum standards of scientific conduct. If the climate science community won’t do it, then the general readership will.

    Science has acknowledged in writing the seriousness of my formal written complaint to the editors demanding correction or retraction of Marcott, based on the contradictions between the FAQ, published figure 1a, the first sentence of the abstract, and the core-top redatings evident in the SM, which expressly violate the SM statement on procedure. There was even a redated core top that had a post-top age control from an accompanying box core that was not spliced into the proxy. And it was published, archived, and even illustrated in the thesis.
    We have zero tolerance for apparent academic misconduct, and for the editors of journals who won’t police the scene. Perhaps they are unaware. It is in your power to take that excuse away if necessary.

    • Diogenes
      Posted Apr 20, 2013 at 5:13 PM | Permalink

      I think the really sad thing is that this does not surprise anyone anymore….

  14. John A
    Posted Apr 20, 2013 at 4:11 PM | Permalink

    Pat Frank:

    We know from the work of Steve (and Ross) that Mann’s MBH98/99 constituted a knowing fabrication. That is proved by the contents of Mann’s “CENSORED” directory. Such conduct has its own honors.

    His specious assignment of physical meaning to principal components has misled an entire generation of graduate students. His mathematics are hardly more than a vehicle for trickery. He personally has brought great harm to climate science by aggressive and deliberate pollution with the politics of recrimination. This he has done in no small part as a smoke screen to defend his fabrications from exposure.

    After Mann cut off communications with Steve Mc, once Scott Rutherford had screwed the pooch by giving Steve the entire contents of the ftp site (CENSORED directories intact), Mann immediately fabricated lies against Steve, despite the fact that up to that point Steve hadn’t done anything with the data. All of this was revealed in the Climategate emails written at the time.

    Mann appears to have gotten away with making statements in his paper that he clearly already knew were false when he made them.

    That, to my mind as well as many others, is extremely serious scientific misconduct. Scientists have been fired for less.

    • Pat Frank
      Posted Apr 20, 2013 at 10:21 PM | Permalink

      Completely agree, John.

  15. Beta Blocker
    Posted Apr 21, 2013 at 8:07 AM | Permalink

    In 1966, there was Stan Freberg’s advertisement which asserted, “Nine out of ten doctors recommend Chun King chow mein.”

    In 2013, there is evidently a powerful need for an IPCC advertisement which asserts, “Nine out of ten climate scientists recommend the hockey stick.”

  16. talldave2
    Posted Apr 21, 2013 at 8:49 PM | Permalink

    50 pages of review comments and the article still got accepted?
    Steve: the O’Donnell et al. review correspondence was an enormous file as well.

    “Bury them in paperwork” seems to be the new climate mafia modus operandi. Makes sense, they’re far better funded and staffed. Skeptics will be at a considerable disadvantage.

    Before Marcott, I used to wonder if people like Hansen understood there are not actually billions of dollars flowing to the likes of Watts and McIntyre. Nowadays I don’t wonder so much anymore.

  17. Skiphil
    Posted Jul 21, 2013 at 10:20 PM | Permalink

    Tonight, as of July 21, 2013, the revised Gergis et al., resubmitted to the Journal of Climate in Sept. 2012, is still listed as “in review” on the personal website of Joelle Gergis, and also on the Climate History website affiliated with the University of Melbourne (Gergis has direct involvement with this website, from what I’ve seen in past press releases):

    http://climatehistory.com.au/?s=gergis+karoly+phipps

    Gergis, J., Neukom, R., Phipps, S.J., Gallant, A., Karoly, D.J. and PAGES Aus2K project members. Evidence of unusual late 20th century warming from an Australasian temperature reconstruction spanning the last millennium. Journal of Climate (resubmitted September 2012, in review).

8 Trackbacks

  1. […] Read it all here: PAGES2K, Gergis and Made-for-IPCC Journal Articles […]

  2. […] at Climate Audit, Steve has been looking at the resurfacing of some Hockey Stick science papers, and the fun […]

  3. […] https://climateaudit.org/2013/04/19/pages2k-gergis-and-made-for-ipcc-journal-articles/ […]

  4. […] unravelled almost as fast as they come off the Intergovernmental Panel on Climate Change (IPCC)'s just-in-time […]

  5. […] Yet slabs of it seem now to have been included in the IPCC’s upcoming review of global warming. […]

  6. […] ClimateAudit has been reporting on it for a number of days. See Steve McIntyre’s posts here, here and especially here. WattsUpWithThat has discussed the paper here and […]

  7. […] ClimateAudit has been reporting on it for a number of days. See Steve McIntyre’s posts here, here and especially here. WattsUpWithThat has discussed the paper here and […]

  8. […] I cannot imagine that Betts was oblivious to the history of documented problems that preceded the publication of this just-in-time paper (with no less than 77 co-authors). But he has indicated to me in the past that his preferred mode of reading blogs is to “skim”. Consequently, he may well have missed the full context of Steve McIntyre’s observation: […]
