Arctic Lake Sediments: Reply to JEG

Julien Emile-Geay (JEG) submitted a lengthy comment concluding with the tasteless observation that “Steve’s mental health issues are beyond PAGES’s scope. Perhaps the CA tip jar pay for some therapy?”  – the sort of insult that is far too characteristic of activist climate science.  JEG seems to have been in such a hurry to make this insult that he didn’t bother getting his facts right.

Inventory

In the article, I inventoried the Arctic lake sediment series introduced in four major multiproxy studies: Mann et al 2008, Kaufman et al 2009, PAGES 2013 and PAGES 2017, observing that a total of 32 different series had been introduced, with the split shown in the first line of the table from the article (replicated below). In each case, the series had been declared “temperature sensitive”, but 16 were declared in a subsequent study to be not temperature sensitive after all. In the table, I listed withdrawals by row, showing (inter alia) that three had been withdrawn in P14 (McKay and Kaufman 2014), four in PAGES 2017 (which also reinstated two proxies used in earlier studies) and three in Werner et al 2017 (CP17). In my comments on Werner et al 2017, I distinguished the three series that were discarded from series not used in that study because they were not annual (of which there were nine).

[Table: Arctic Lake Proxy Inventory]

Here’s JEG’s comment on this table:

Responding to the post, not the innumerable comments (many of which are OT).

It is incorrect to claim that PAGES2k discarded 50% of the lake sediment records.

PAGES 2013, v1.0 had 23 arctic lake records
PAGES 2013, v1.1., rejected 3 (see https://www.nature.com/ngeo/journal/v8/n12/full/ngeo2566.html)
PAGES 2017, v2.0, we rejected another 4 and added 3, for reasons explained in Table S2.

Werner et al CPD 2017 is a climate field reconstruction based on a slightly earlier version of this dataset.
They excluded non-annually resolved records for reasons made clear in the manuscript – there is nothing “strange” about that – unless you want to misconstrue it. The entire point of a compilation like PAGES is that it is relatively permissive, so users who are more stringent can raise the bar and use only a subset of records for their own purposes.

So, out of the original 23, 7 (30.43%) were rejected because of more stringent inclusion criteria, with 3 additions. Anyone is welcome to see what impact this made to an Arctic composite or reconstruction using a method that meets CA standard.

None of his comments rebuts or contradicts anything in my post.  JEG says that 3 proxies were discarded in v1.1 – precisely as shown in the third row of the table and discussed in the article. JEG says that 4 proxies were discarded in PAGES 2017 – precisely as shown in the sixth row of the table.

Of Werner et al 2017, he says that they “excluded non-annually resolved records for reasons made clear in the manuscript – there is nothing “strange” about that – unless you want to misconstrue it.” I didn’t “misconstrue” it. While I noted that “in their reconstruction, they elected not to use 9 series on the grounds that they lacked annual resolution”, I excluded those nine from the above table. In addition to these nine, Werner et al 2017 discarded three annual series (Hvitarvatn, Blue Lake, Lehmilampi) as defective. JEG says that Werner et al used a “slightly earlier” version of the PAGES 2017 dataset. Be that as it may, Werner et al 2017 did in fact discard these three series, as shown in the table, on the grounds stated in my post (a “very nonlinear response, short overlap with instrumental, unclear interpretation”, the “exact interpretation unclear from original article” and “annual and centennial signal inconsistent”).

As a housekeeping point, I counted 22 Arctic sediment series in PAGES 2013 (not 23 as stated by JEG). I also counted a total of four additions to PAGES 2017 (two new and two re-instatements as shown in the table above), rather than the “three” additions claimed by JEG.

Most fundamentally, the denominator of my comparison was the inventory of series introduced in the four listed papers, not the inventory in PAGES 2013, which already represented a partial cull of Kaufman et al 2009 and Mann et al 2008. I do not understand why JEG misrepresented this simple point.

Finally, JEG says that the discarding was due to “more stringent inclusion criteria”. Three things:

1) The inclusion criteria in later studies are not necessarily “more stringent” – PAGES 2013 included some short series excluded from Kaufman et al 2009 (which required 1000 years), and PAGES 2017 included some even shorter series. Inclusion of short series that do not go back to the medieval period, or even to AD1500, is less stringent, not more stringent.

2) The stated reasons for exclusion of series in later studies are typically ones that indicate non-compliance with criteria set out in the earlier study. That is, if a later study correctly determines that the interpretation of a record is “unclear”, its use in the earlier study was an error according to that study’s own criteria, not the result of “more stringent” criteria.

3) To keep things in clear perspective, greater stringency is not an antidote to problems arising from ex post screening (see also selection on the dependent variable) and is therefore irrelevant to the main issue. Jeff Id did some good posts on this. Contrary to JEG, I do not advocate “greater stringency” in ex post screening as proper technique. On the contrary, I object to ex post screening (selection on the dependent variable) itself, as illustrated below.
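
To see why, consider a minimal sketch in Python (synthetic data throughout; the series count, correlation threshold and calibration window are illustrative, not PAGES parameters). Screening pure noise series on their correlation with a trending “instrumental” target manufactures a hockey stick: a composite that rises during the calibration period and is flat everywhere else. Raising the screening threshold does not remove the artifact.

    import numpy as np

    rng = np.random.default_rng(0)
    n_series, n_years, n_cal = 1000, 500, 100            # noise "proxies", record length, calibration window
    proxies = rng.standard_normal((n_series, n_years))   # pure noise: no climate signal at all
    target = np.linspace(0.0, 1.0, n_cal) + 0.3 * rng.standard_normal(n_cal)  # trending "instrumental" record

    # ex post screening: keep only the series that correlate with the target
    # over the calibration period (the final n_cal "years")
    cal = proxies[:, -n_cal:]
    r = np.array([np.corrcoef(row, target)[0, 1] for row in cal])
    kept = proxies[r > 0.2]                              # the "temperature sensitive" survivors

    composite = kept.mean(axis=0)
    print(f"kept {len(kept)} of {n_series} noise series")
    print("composite, last 50 yrs (the blade): ", composite[-50:].mean().round(2))
    print("composite, first 400 yrs (the shaft):", composite[:400].mean().round(2))

The survivors are, by construction, the noise series that happen to echo the calibration trend, so the composite inherits the trend there and nowhere else.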

Corrigendum

In my article, I said that “McKay and Kaufman (2014) conceded the [Hvitarvatn] error and issued an amended version of their Arctic reconstruction, but, like Mann, refused to issue a corrigendum to the original article.”

To this, JEG responded:

Finally, it is entirely incorrect to claim that PAGES 2k did not issue a corrigendum to identify the errors in v1.0 that were corrected in v1.1. They did so here (https://www.nature.com/ngeo/journal/v8/n12/full/ngeo2566.html), where Steve McIntyre was acknowledged about as clearly as could have been done: “The authors thank D. Divine, S. McIntyre and K. Seftigen, who helped improve the Arctic temperature reconstruction by finding errors in the data set.”

I published my criticism of upside-down Hvitarvatn in April 2013, a few weeks after publication of PAGES 2013. (Varves, particularly Hvitarvatn, had been a prior interest at CA). McKay and Kaufman 2014, published 18 months later (Oct 2014), acknowledged this and other errors, but failed to acknowledge Climate Audit on this and other points. On October 7, 2014, I wrote Nature pointing out that McKay and Kaufman 2014 primarily addressed errors in PAGES 2013 (as opposed to being “original”) and suggested to them that such a “backdoor corrigendum” was no substitute for an on-the-record corrigendum attached to the original article. (In making this point, I was thinking about Mann’s sly walking-back of untrue statements in Mann et al 2008 deep in the SI to a different paper, while not issuing a corrigendum in the original paper.) Nature said that they would look into it.  I also objected to the appropriation of criticisms made at Climate Audit without acknowledgement.  I heard nothing further from them.

In November 2015, over a year later, PAGES 2013 belatedly issued a corrigendum as I had requested in October 2014, including a brief acknowledgement.  I was unaware of this until JEG brought it to my attention in his comment.  Nature had not informed me that they had agreed with my suggestion and none of the authors had had the courtesy to mention the acknowledgement. Needless to say, I’ve not waited 18 months to issue a correction and have done so right away.

Strange Accusations

JEG concluded his comment with a strange peroration accusing me of “continuing to whine about the lack of acknowledgement”, which he called a “delirium of persecution” and a “mental health issue”, suggesting “therapy”:

Continuing to whine about the lack of acknowledgement is beginning to sound like a delirium of persecution. We can certainly fix issues in the database, but Steve’s mental health issues are beyond PAGES’s scope. Perhaps the CA tip jar pay for some therapy?

Where did this come from?

I’ve objected from time to time to incidents in which climate scientists have appropriated commentary from Climate Audit without proper acknowledgement – in each case with cause. I made no such complaint in the article criticized by JEG. Nowhere in the post is there any complaint about “lack of acknowledgement”, let alone anything that constitutes “continuing to whine about the lack of acknowledgement”.

The post factually and drily comments on the inventory of Arctic lake sediment proxies, correctly observing the very high “casualty rate” for supposed proxies:

This is a very high casualty rate given original assurances on the supposed carefulness of the original study. The casualty rate tended to be particularly high for series which had a high medieval or early portion (e.g. Haukadalsvatn, Blue Lake).

One should be able to make such comments without publicly-funded academics accusing one of having “mental health issues”, a “delirium of persecution” or requiring “therapy”.

PS. Following the finals of the US National Squash Doubles (Over 65s) in March, I severely exacerbated a chronic leg injury and am receiving therapy for it. Yes, some aches and pains come with growing older, just not the ones fabricated by JEG.


55 Comments

  1. Don Keiller
    Posted Jul 29, 2017 at 11:50 AM | Permalink

    “Misrepresentation” is what climate “scientists” do.
    Along with ad hominems and dodgy statistics.
    JEG is the latest in a long line, started by Hansen and “perfected” by Mann.

  2. Joe Public
    Posted Jul 29, 2017 at 12:22 PM | Permalink

    “I’ve objected from time to time about incidents in which climate scientists have appropriated commentary from Climate Audit with proper acknowledgement ….”

    ” …. without proper acknowledgement”?

    Steve: thanks, fixed

    • Posted Jul 29, 2017 at 1:23 PM | Permalink

      What’s your question, Joe? Your comment is totally unclear.

      w.

      • Joe Public
        Posted Jul 29, 2017 at 2:06 PM | Permalink

        Should it be ‘with’ or ‘without’ ….. proper acknowledgement?

  3. Anonymous
    Posted Jul 29, 2017 at 12:52 PM | Permalink

    Link to the reworked 11AUG2010 post? The comments section there is locked, and the original content was deleted with a promise to rework the argument.

    I would be fine with just restoring the old content and opening discussion. There is no reason to close off viewing and discussion of a mistake of yours while waiting for the rework (whether or not you ever do the rework).

    Steve: comments are closed on ALL old threads as a means of spam control, not that thread in particular. Spam filters are insufficient to prevent weeds everywhere. Most blogs follow a similar policy. I try to be careful and avoid mistakes, but do not claim infallibility. I’m sure that there’s another mistake somewhere in the thousands of posts and comments, if you wish to editorialize on mistakes.

    • davideisenstadt
      Posted Jul 31, 2017 at 6:51 AM | Permalink

      “Critics know the cost of everything and the value of nothing.”

  4. Eric Barnes
    Posted Jul 29, 2017 at 1:00 PM | Permalink

    “Steve’s mental health issues are beyond PAGES’s scope”

    I have to agree with JEG. I would like to further clarify that it’s also obvious that PAGES’s scope is strictly related to future funding and promoting “The Cause”.

    I’m sure JEG would defend recent Arctic Lake Proxy inclusion/exclusion if it was possible. But it’s much easier to lob Ad Hom’s after all the checks have been cashed and soldier on to the hard work of rationalizing the next paycheck.

  5. Danley Wolfe
    Posted Jul 29, 2017 at 1:27 PM | Permalink

    Steve, you are justified in reacting to JEG’s venomous personal comments, but the best thing to do is to correct the record, as you have done … but also to bring attention to the immature (putting it mildly) behavior he is publicly exhibiting.

    Thomas Hobbes said (as a law of nature) that a person ultimately acts only in their own self-interest. So the climate change discussion is a debate of value systems, not science. The science may eventually be ‘proven’ or ‘settled’, but it is a wicked problem … and declarative statements such as … that the future of the planet is at stake … and “millennial climate models predict that …” are not helpful and are propaganda at best. These may be hypotheses but are not testable (or falsifiable), and recent patterns do not adequately account for other variables, notably natural causes.

    The Nobel Committee awarded the prize to Al Gore and the IPCC in order to support / reward their own value systems; notably it was not a Nobel prize in science. Al Gore’s release of Inconvenient … Part Two is MOTS (more of the same). I have posted the following before which I feel reflects the general tenor of the discussion on the climate change debate and in this case JEG’s comments.

    Oxford historian Norman Davies outlined Five Basic Rules of Propaganda in “Europe: A History” (Oxford University Press, 1996, pp. 500–501):

    “Simplification: reducing all data to a simple confrontation between ‘Good and Bad’, ‘Friend and Foe’.

    Disfiguration: discrediting the opposition by crude smears and parodies.

    Transfusion: manipulating the consensus values of the target audience for one’s own ends.

    Unanimity: presenting one’s viewpoint as if it is the unanimous opinion of all right-thinking people; including drawing doubting individuals into agreement by the appeal of star-performers, social pressure and by ‘psychological contagion’ [a.k.a. psy-ops].

    Orchestration: endlessly repeating the same message, in different forms, variations and combinations.”

    • bernie1815
      Posted Jul 29, 2017 at 2:14 PM | Permalink

      Some have argued that Alinsky’s Rules were ideological and of the left in particular. I disagree. I agree with what I believe you are saying: his “Rules” are for whoever is protesting the activities of a powerful establishment, i.e., radicals or dissenters.

  6. Pathway
    Posted Jul 29, 2017 at 6:21 PM | Permalink

    http://www.ratemyprofessors.com/ShowRatings.jsp?tid=1715162

    This might shine the light on JEG’s attack on Steve.

    • Posted Jul 31, 2017 at 6:05 AM | Permalink

      For instance from one student on JEG: “Mostly focuses on political side to climate change, which is interesting, but he makes it unbearable.”

  7. Anonymous
    Posted Jul 29, 2017 at 8:29 PM | Permalink

    A. Can you just answer with a link to the reworked post (or, if it was never done, just say so, straight)? I won’t even grind you, not that this promise should be needed to get an answer. Here is what you wrote on 13AUG2010:

    “UPDATE Aug 13 – there was a mistake in the collation of trends that I used in this post -which was (confusingly) supposed to help clarify things rather than confuse things. I used trends calculated up to 2099 instead of 2009, which ended up making the within-group standard deviations too narrow. I’ll rework this and re-post. However, I’m going to Italy next Tuesday and am working on a presentation so the re-post will have to wait. This has also been a distraction from the MMH issues so let’s focus on them.”

    A Mixed Effects Perspective on MMH10

    B. You closed comments on that thread on 13AUG2010, two days after it was opened. I remember it. Other threads from AUG2010 have comments through at least 2013. This was the very last post before commenting was locked:

    “Steve McIntyre Posted Aug 13, 2010 at 10:15 AM | Permalink

    there was a mistake in the collation of trends that I used in this post -which was (confusingly) supposed to help clarify things rather than confuse things. I used trends calculated up to 2099 instead of 2009, which ended up making the within-group standard deviations too narrow. I’ll rework this and re-post. However, I’m going to Italy next Tuesday and am working on a presentation so the re-post will have to wait. This has also been a distraction from the MMH issues so let’s focus on them.

    So let’s stick to MMH10. Sorry about that.”

    C. I’m fine with the mistake. I don’t like the defensiveness of erasing the flawed post, having the last word on the mistake [do we know everything about the nature of the mistake, for one], and then blocking commenting. Very wrong to do this, especially after you defended it at first in comments. Even now, you are defensive versus stoical.

    • Steve McIntyre
      Posted Jul 29, 2017 at 9:59 PM | Permalink

      It looks like I didn’t return to the topic. I only have so much time and energy and it looks like this was a topic that I forgot about at the time. Sorry about that.

      Despite your complaint, I notice that I included a script in the comments which shows what had been done in the post. In particular, it showed that I had used boxplots to distinguish within-model variance from between-model variance.

      I subsequently used boxplots for this purpose in the later posts:

      Two Minutes to Midnight

      Update of Model-Observation Comparisons


      In these later posts, I didn’t link back to the 2010 post which I had forgotten about.

      I don’t guarantee that these later posts represent exactly the same approach to the problem as the 2010 post, but hope that you find them of interest. I’ve done other posts comparing models to observations from time to time.

      I drafted a further and more detailed comparison of models to observations earlier this year, but didn’t finish it as I’ve been very short of energy.

      • Anonymous
        Posted Jul 29, 2017 at 10:39 PM | Permalink

        Completely understand not getting to something. No foul on that. Happens.

        Still didn’t like the closed comments section and erasing the post. If you still have the old content, will you please post it back up. You can put some black text at front, but you should leave the content there for people to see.

        Also, we should have been able to discuss it. (For one thing, maybe there was more than one mistake; maybe we learn something.) But if it is past spam control timing (now) then this is just unfixable.

      • Anonymous
        Posted Jul 29, 2017 at 10:42 PM | Permalink

        “I don’t guarantee that these later posts represent exactly the same approach to the problem as the 2010 post, but hope that you find them of interest. I’ve done other posts comparing models to observations from time to time.”

        I’d rather pick one question to death at a time. It is hard to get anywhere if the focus shifts before we resolve a question.

        • eloris
          Posted Jul 31, 2017 at 1:05 PM | Permalink

          What does some mistake made almost 7 years ago have to do with this post?

      • talldave2
        Posted Aug 29, 2017 at 1:54 PM | Permalink

        Thanks Steve, that 2016 comparison is in fact pretty interesting — I like it a bit better than Pielke’s approach with the model runs since 1989.

        It’s still sort of baffling that there aren’t a ton of these comparisons being done by IPCC, GISS, NOAA, etc, given that these models were already used as multi-trillion-dollar global policy inputs. I guess sciency predictions remain much more popular than the empirical scientific question of whether they were actually accurate.

        • talldave2
          Posted Aug 30, 2017 at 8:42 AM | Permalink

          Sorry, I had forgotten the RICO thing happened at the same time.

  8. bernie1815
    Posted Jul 29, 2017 at 10:27 PM | Permalink

    Steve:
    I am not sure if I did something that put my comments on Alinsky (and deleted your useful responses) in moderation for the first time ever, but I understand if you want to avoid any side-tracks on the thread.

  9. Posted Jul 30, 2017 at 9:37 PM | Permalink

    I also made this comment on the previous post, which uses the same table:

    Your table “Arctic Lake Proxy Inventory” has an error on the bottom row (CP17). It should read across:

    2 5 7 (not 4) 2 16

    • Posted Aug 2, 2017 at 9:31 AM | Permalink

      Let me try this a different way:

      On the P13 column, 8-1 should equal 7. You have 8-1=4.

      Steve: thanks, fixed.

  10. RCB
    Posted Jul 31, 2017 at 10:15 AM | Permalink

    You are always a gentleman, Mr. McIntyre! Thank-you for your continued efforts to carefully and critically evaluate climate related publications. I have been following your weblog for about 10 years, and I recommend it to all who wish to talk about climate issues. Wishing you a long, healthy, and happy life!

  11. JEM
    Posted Jul 31, 2017 at 10:31 AM | Permalink

    We appreciate the fact that your squash career’s mishaps have put you back in front of the keyboard.

  12. Andy
    Posted Jul 31, 2017 at 3:20 PM | Permalink

    Hi Steve,

    I will pass on the technical aspects of the post, but the response from Julien Emile-Geay is characteristic of someone who is out of their depth. I suspect you are close to finding a really big problem with the paper and he is panicking.

    Keep going, hope the injury gets better soon.

  13. kenfritsch
    Posted Jul 31, 2017 at 6:35 PM | Permalink

    SteveM, I think if I were you I would not bother to respond to someone like JEG. The tone of his posts is anything but that of a scientific temperament, and it should be obvious to those reading here who understand the problem of post facto selection of temperature proxies that JEG does not get it and probably never will. He apparently wants to turn the argument to the fractional amount of post facto selection and whether you got the exact fraction correct – and that is not the point; it appears from JEG using this approach that he is totally lost.

    • HAS
      Posted Jul 31, 2017 at 8:45 PM | Permalink

      The curious thing is that he probably does “get it” if his past work is anything to go by. In “Estimating Central Equatorial Pacific SST Variability over the Past Millennium. Part I: Methodology and Validation” http://shadow.eas.gatech.edu/~kcobb/pubs/emilegeay12a.pdf Section 4 deals with validation by way of cross validation and begins:

      “A common way to assess the quality of statistical predictions is cross-validation (e.g., Hastie et al. 2008). In our context, it consists of estimating the instrumental target (here the historical Niño-3.4 index) using proxy data, when some of the instrumental data is withheld from the calibration period to compute out-of-sample prediction error (Wilks 1995).”

      This suggests he understands the general concept. But the cited Hastie et al 2008 (“The Elements of Statistical Learning Data Mining, Inference, and Prediction”) makes it all pretty plain in its section on cross validation, and highlights the problems with what it terms a typical strategy under the heading “7.10.2 The Wrong and Right Way to Do Cross-validation”:

      “1. Screen the predictors: find a subset of “good” predictors that show fairly strong (univariate) correlation with the class labels
      2. Using just this subset of predictors, build a multivariate classifier.
      3. Use cross-validation to estimate the unknown tuning parameters and to estimate the prediction error of the final model.”

      Using an example, Hastie et al show how this delivers an error rate much lower than the true rate, going on to say:

      “What has happened? The problem is that the predictors have an unfair advantage, as they were chosen in step (1) on the basis of all of the samples. Leaving samples out after the variables have been selected does not correctly mimic the application of the classifier to a completely independent test set, since these predictors “have already seen” the left out samples.”
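
      To make the distinction concrete, here is a minimal sketch of their example in Python (1-NN on pure-noise predictors; the sample sizes follow their Section 7.10.2, everything else is illustrative):

          import numpy as np
          from sklearn.feature_selection import SelectKBest, f_classif
          from sklearn.model_selection import cross_val_score
          from sklearn.neighbors import KNeighborsClassifier
          from sklearn.pipeline import make_pipeline

          rng = np.random.default_rng(0)
          X = rng.standard_normal((50, 5000))   # 50 samples, 5000 pure-noise predictors
          y = rng.integers(0, 2, 50)            # random labels: true accuracy is 50%

          # wrong way (step 1 outside CV): screen predictors on ALL samples, then cross-validate
          X_screened = SelectKBest(f_classif, k=100).fit_transform(X, y)
          wrong = cross_val_score(KNeighborsClassifier(1), X_screened, y, cv=5).mean()

          # right way: the screening step is repeated inside each fold, on training data only
          pipe = make_pipeline(SelectKBest(f_classif, k=100), KNeighborsClassifier(1))
          right = cross_val_score(pipe, X, y, cv=5).mean()

          print(f"apparent accuracy, wrong way: {wrong:.2f}")   # typically far above chance
          print(f"apparent accuracy, right way: {right:.2f}")   # near the true 50% rate

      The wrong way reports skill on data that contain none – precisely the “unfair advantage” described in the quote.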

      • kenfritsch
        Posted Aug 1, 2017 at 9:13 AM | Permalink

        I skimmed through that paper and I do not believe it relates well to temperature reconstructions. The authors were looking at a temperature index.

        If you want to get around all the verbiage in these papers, the issue of cross validation comes down to using a temperature data set that one can be reasonably confident is accurate over a rather extended time period for the locale of interest. That instrumental record will have a wider error range than one averaged over many locales and can often have more missing data points. That data set becomes the standard in validating the temperature response of the proxy of interest.

        There is a dilemma in doing cross validation of temperature-to-proxy relationships: a high frequency correlation (annual) does not always translate to a low frequency one (meaningful trends, which are critical in these studies). Individual data points cannot be selected randomly if the interest is in low frequency trends. Also precluding random selection is the problem of autocorrelation in the temperature and proxy series and, in the case of the proxy, a real possibility of long term persistence. In that case, extended and contiguous parts of the series must be used in order to obtain a reasonable number of comparisons. Those series parts then must overlap and in turn become dependent. It is also a judgment call (and thus a source of model-fitting bias) how long the series parts should be and how far to jump ahead in the starting data point for the next part (a sketch of such a blocking scheme follows below).

        Non-overlapping extended parts of instrumental temperature series that represent reasonably accurate results in most locales probably involve drawing from 100 or fewer years. With large autocorrelations and/or long term persistence, a non-deterministic trend in a proxy series can occur over long periods of time, and a false low frequency correlation between temperature and proxy can be obtained by chance.
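
        One way to respect that autocorrelation is to hold out contiguous blocks, with a guard gap, rather than random points. A minimal Python sketch (synthetic AR(1) data; the block length and gap are exactly the judgment calls mentioned above):

            import numpy as np

            def block_cv_splits(n, block_len, gap=0):
                # each test set is a contiguous block; `gap` points adjacent to it are
                # dropped from training to limit leakage from autocorrelation
                for start in range(0, n - block_len + 1, block_len):
                    test = np.arange(start, start + block_len)
                    lo, hi = max(0, start - gap), min(n, start + block_len + gap)
                    train = np.concatenate([np.arange(0, lo), np.arange(hi, n)])
                    yield train, test

            rng = np.random.default_rng(0)
            n, phi = 150, 0.7
            temp = np.zeros(n)                            # AR(1) "temperature"
            for t in range(1, n):
                temp[t] = phi * temp[t - 1] + rng.standard_normal()
            proxy = 0.5 * temp + rng.standard_normal(n)   # proxy = attenuated signal + noise

            errs = []
            for train, test in block_cv_splits(n, block_len=30, gap=5):
                b = np.polyfit(proxy[train], temp[train], 1)   # simple linear calibration
                pred = np.polyval(b, proxy[test])
                errs.append(np.sqrt(np.mean((pred - temp[test]) ** 2)))
            print("out-of-block RMSE per fold:", np.round(errs, 2))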

        A major issue is what initial filter was used in the post facto selection of the proxies that are finally put to a cross validation test. It is here that the authors of these papers might want to talk about using an a priori selection process based on some reasonable physical criteria and then using all the selected proxies in a cross validation test. When these issues are not covered in detail in temperature reconstruction papers, I would say that the authors do not get it. With JEG it is a matter of: who do I believe, his references or my lying eyes?

        • HAS
          Posted Aug 1, 2017 at 3:14 PM | Permalink

          Yes, the particular application is different, but the principle at stake is the same – using the prior information collected on a relationship as the basis for selection, and then claiming the relationship for the subset.

          As I said, it’s curious that having thought about the principle in the earlier work, it didn’t get extended to the later work.

          I’d note in passing that while a physical relationship is desirable, this doesn’t preclude investigative work that lacks this. But any credibility will come from performance out of sample.

        • kenfritsch
          Posted Aug 2, 2017 at 7:53 AM | Permalink

          I’d note in passing that while a physical relationship is desirable, this doesn’t preclude investigative work that lacks this. But any credibility will come from performance out of sample.

          The only true and incorruptible out-of-sample test would be an extended period into the future for the series being tested. Peeking at the series to be tested in a cross validation test precludes calling that out-of-sample testing. Certainly, truncating diverging temperature proxy series and adding back some infilled data is a strong indication that Mann (2008) did not get the post facto problem, and is in fact one of the worst examples of not getting it.

          Investigative work in the hard sciences, which can be tested with a controlled experiment, is very different from that in softer sciences like climate science, where a controlled test is not available. I sometimes think that this is where even hard-science scientists become confused when applying testing in a soft-science environment.

        • davideisenstadt
          Posted Aug 2, 2017 at 9:27 AM | Permalink

          Ken:
          your comment is thoughtful and measured.

          I believe the problem stems from the frustration created by working with substandard data that refuses to confess; it makes the community forget the most basic tenets of statistical analysis…

          These guys are desperate to find the signal they believe exists.

          So:
          Individual trees that don’t behave correctly are disposed of during calibration (as if the very act of calibrating, that is, throwing out the data that don’t agree with one’s hypothesis, isn’t ex post screening);

          others’ research (also the product of ex post selection) is combined, with no corrections made for meta-analysis (Bonferroni corrections, for example);

          stubborn parts of data sets, ones that don’t conform, are discarded, and then the expurgated sets are recombined over and over again.

          A total disregard for any type of rigorous analysis of error, and a failure to communicate the uncertainty associated with proxies, is simply par for their courses.

          It’s disgusting really.

        • HAS
          Posted Aug 10, 2017 at 4:08 PM | Permalink

          I just noted a paywalled article “The application of machine learning for evaluating anthropogenic versus natural climate change” Abbot & Marohasy https://doi.org/10.1016/j.grj.2017.08.001 that could be an interesting and more robust approach to out of sample analysis of these kinds of proxies. Abstract:

          “Time-series profiles derived from temperature proxies such as tree rings can provide information about past climate. Signal analysis was undertaken of six such datasets, and the resulting component sine waves used as input to an artificial neural network (ANN), a form of machine learning. By optimizing spectral features of the component sine waves, such as periodicity, amplitude and phase, the original temperature profiles were approximately simulated for the late Holocene period to 1830 AD. The ANN models were then used to generate projections of temperatures through the 20th century. The largest deviation between the ANN projections and measured temperatures for six geographically distinct regions was approximately 0.2°C, and from this an Equilibrium Climate Sensitivity (ECS) of approximately 0.6°C was estimated. This is considerably less than estimates from the General Circulation Models (GCMs) used by the Intergovernmental Panel on Climate Change (IPCC), and similar to estimates from spectroscopic methods.”
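
          Setting aside the ANN, the underlying idea – fit the leading sinusoids in-sample, then score the extrapolation on a held-out modern period – can be sketched as follows (a toy Python illustration on synthetic data, not the paper’s actual method):

              import numpy as np

              rng = np.random.default_rng(0)
              n, n_train = 300, 250                  # toy annual series; hold out the final 50 "years"
              t = np.arange(n)
              series = (0.4 * np.sin(2 * np.pi * t / 60) + 0.3 * np.sin(2 * np.pi * t / 22 + 1.0)
                        + 0.3 * rng.standard_normal(n))   # synthetic pseudo-proxy with two periodicities
              train, test = series[:n_train], series[n_train:]

              # identify the K largest non-DC Fourier components of the training span
              K = 3
              amps = np.fft.rfft(train)
              freqs = np.fft.rfftfreq(n_train)
              top = np.argsort(np.abs(amps[1:]))[::-1][:K] + 1

              def sinusoid_model(tt):
                  # rebuild the series from the K retained sinusoids at any time tt
                  out = np.zeros(len(tt))
                  for k in top:
                      out += (2.0 / n_train) * np.abs(amps[k]) * np.cos(2 * np.pi * freqs[k] * tt + np.angle(amps[k]))
                  return out

              pred = sinusoid_model(np.arange(n_train, n))
              print("held-out RMSE, sinusoid extrapolation:", round(float(np.sqrt(np.mean((pred - test) ** 2))), 2))
              print("held-out RMSE, constant (train mean):  ", round(float(np.sqrt(np.mean((test - train.mean()) ** 2))), 2))

          Whether the extrapolation beats even that trivial baseline out of sample is the operative test of “robustness” here.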

    • Posted Aug 1, 2017 at 6:00 AM | Permalink

      There must be room in the mainstream print literature for a paper by Steve (possibly with a co-author to dot i’s and cross t’s) on the problems arising from data mining and ex-post screening in centennial to millennial proxy-based paleoclimate studies.

      • pauldd
        Posted Aug 6, 2017 at 7:35 AM | Permalink

        The problem of ex post screening is so obvious that no statistical journal would have any interest in publishing it, and it is so devastating to the paleo reconstructions that no climate journal would publish it.

  14. AntonyIndia
    Posted Aug 1, 2017 at 12:28 AM | Permalink

    publicly-funded academics – like Emile-Geay:

    Funded Research, Contracts and Grants Awarded:
    Last Millennium Reanalysis Project (NOAA (Dept of Commerce)), Greg Hakim (UW), Julien Emile-Geay (USC), David Noone (U. Colorado), Eric Steig (UW),
    $1,488,473, 08/2014-
    GeoChronR – open-source tools for the analysis, visualization and integration of time-uncertain geos (National Science Foundation), N. McKay (Northern Arizona University), Julien Emile-Geay, Ken Collier,
    $566,000, 07/01/2014-
    LinkedEarth: Crowdsourcing Data Curation & Standards Development in Paleoclimatology (National Science Foundation), J. Emile-Geay, Y. Gil (ISI), Nick McKay
    $798,000, 09/2015-08/2017
    https://dornsife.usc.edu/cf/faculty-and-staff/faculty.cfm?pid=1023062
    Plus
    $684,779.00 from NSF award 1541029
    $196,654.00 from NSF award 1347213

    Couldn’t search NOAA as their grantsonline search was down. There could be other sources too.

    • Steve McIntyre
      Posted Aug 1, 2017 at 8:22 AM | Permalink

      pretty generous grants.

      • mpainter
        Posted Aug 1, 2017 at 8:45 AM | Permalink

        I wonder if the spigot has been shut off yet.
        Imagine the peevishness of those who have enjoyed such largesse and who now face famine.

        • TAG
          Posted Aug 1, 2017 at 8:58 AM | Permalink

          The presence of exceptional research universities is commonly identified as a prime reason for the USA’s leadership in technology. NSF grants fund the research performed at these universities. Should that spigot be shut off?

        • mpainter
          Posted Aug 1, 2017 at 9:35 AM | Permalink

          Well, TAG, the NSF fertilizer grows weeds and tares as well as crops. In some fields the weeds and tares crowd the crop. Something needs to be done about NSF standards, which presently do not even require adherence to acceptable statistical practices. The poor quality of much NSF funded work can be attributed to NSF laxity in not setting proper standards to be complied with by NSF recipients. I regard the NSF as something of a scandal. Recall Jagadish Shukla (whatever became of that?) and Dave Verardo.

          There is much room for improvements at the NSF, and I hope the present administration sees its way toward this.

        • Posted Aug 1, 2017 at 9:37 AM | Permalink

          TAG, The problem here is that as you know, the soft money culture at Universities has resulted in a loss of quality and objectivity with reward systems that encourage hyping results and biased research.

          http://www.nature.com/news/beware-the-creeping-cracks-of-bias-1.10600

          A recent editorial in the Lancet says that the case against science is clear: perhaps half the results are wrong. I have a bookmark folder with perhaps 60 recent articles documenting the threat posed to science. There is a lot of denial out there by people running the system, or by the scientists who have been most successful at the money grubbing. In my field, the decline is quite remarkable. Simply put, many results are cherry picked to make the authors look good or sell their research. Negative results are often simply filed in the desk drawer.

        • Posted Aug 1, 2017 at 9:47 AM | Permalink

          Just picked a couple more good references. You can find hundreds quite easily.

          http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

          http://www.economist.com/news/science-and-technology/21707513-poor-scientific-methods-may-be-hereditary-incentive-malus

          Denial is easy for those enjoying success in the system.

      • talldave2
        Posted Aug 30, 2017 at 8:35 AM | Permalink

        Apparently since the story about Shukla’s Gold broke, he has… been given an award and recommended skeptics be prosecuted under RICO.

        https://en.wikipedia.org/wiki/Jagadish_Shukla

        This is very worrying. I’m not sure the space-time continuum can absorb these levels of irony without collapsing into an infinitesimal singularity inside which particles of snark overcome the Pauli exclusion principle to form a degenerate sardonic mass from which no logic can escape. Pray for us all.

    • Steven Mosher
      Posted Aug 2, 2017 at 9:10 AM | Permalink

      cool work

      https://nickmckay.github.io/GeoChronR/

  15. Phil Howerton
    Posted Aug 2, 2017 at 10:23 AM | Permalink

    My God! That is almost 3.8 million dollars since 2014. What the hell are these people doing with all that money? It is not as if they were going out traveling the world cutting trees or digging up lake sediments. They are working with previously collected data, compiled by other individuals, which enables them to sit in their offices working with computers and sending emails to their co-authors. It is my impression that Steve read the paper and found the errors almost immediately. No 3.8 million there!

    This is a disgrace. I wonder if the NSF has any requirements for final accountings from people like these, showing where the money went and why. I doubt it. I would also like to know if these people are entitled, directly or indirectly, to put any of this money in their own pockets. FOI anyone?

    PhilH

    • James Nickell
      Posted Aug 3, 2017 at 11:12 AM | Permalink

      Not so much oversight, it would seem. An example follows, which I believe represents general practice, not the exception. NSF grants and other funding can be very lucrative and a nice way to double up income and spread around the wealth with little consequence. JEG’s snarky replies are likely a response to what he perceives as an attack on his funding, more so than on his work product.

      Shukla’s Gold

    • Duster
      Posted Aug 8, 2017 at 6:16 PM | Permalink

      The expressed purpose seems to be to create an archive of “paleoclimatological” data in a common format, unifying data that frequently does not make it into the SI or some other accessible archive. They have written software for R, Python and Matlab (usable in Octave perhaps?) for employing the data. That’s all useful work, or could be. The data itself seems a little obscurely stored, though, or possibly you need to join to access it.

  16. nvw
    Posted Aug 2, 2017 at 1:31 PM | Permalink

    GeoChronR is described as “Quantifying age uncertainties is a critical component of paleoscience (paleoclimatology, paleoecology, paleontology). GeoChronR is an integrated framework that allows scientists to generate state-of-the-art age models for their records, create time-uncertain ensembles of their data, analyze those ensembles with a number of commonly-used techniques, and visualize their results in an intuitive way.”

    Question: will they submit Shaun Marcott’s assignment of ages to ocean cores for GeoChronR analysis?

    • Steve McIntyre
      Posted Aug 3, 2017 at 6:52 PM | Permalink

      good point

  17. Phil Howerton
    Posted Aug 2, 2017 at 5:01 PM | Permalink

    I don’t know who in the government has supervisory authority over NSF.

    Does anyone know if Trump has appointed a science advisor? I am pretty sure John Holdren is not still lurking the back halls of the White House any longer, with his playbook of imminent disasters. I think Judith would be a great choice. I am sure Mann would be fine with that.

    Phil

  18. BallBounces
    Posted Aug 4, 2017 at 5:43 PM | Permalink

    “Following the finals of the US National Squash Doubles (Over 65s) in March, I severely exacerbated a chronic leg injury and am receiving therapy for it.”

    Next time, don’t try to play the Squash Doubles alone — get a partner 😉

  19. Anonymous
    Posted Aug 5, 2017 at 1:01 PM | Permalink

    1. Please restore the deleted post content with your failed experiment from AUG2010. I don’t mind the mistake. I do mind censorship by deleting your mistake, locking comments, and reneging on a promise to repost an alternate version. There was nothing stopping you from posting a big notice that you realized your mistake, leaving the content up and discussion OPEN, and then, if you decided you had time/interest, doing an update in a separate post. The whole behavior is eerily similar to the warmers that you criticize. At the end of the day, if you are going to capriciously delete or edit posts, this just shows why your blog is inferior to archived literature. At least with the literature, there is a permanent record.

    2. Was there ever a post here discussing your corrigendum to the Atmos Sci Lett (MMH) paper? I tried searching and looking in the corresponding month but didn’t find it. (It may be there, no worries, just hard to find.) I would like to see the post discussion where people can analyze what changed and whether ALL the conclusions were unaffected; implications for method versus data; etc. This is actually a benefit of blog discussion over a lit-only approach.

  20. AntonyIndia
    Posted Aug 6, 2017 at 9:24 AM | Permalink

    Darrell Kaufman was the MSc advisor for Nick McKay. They frequently publish together, and now also with Julien Emile-Geay. Is the older Kaufman to blame for transmitting a bad attitude towards Steve McIntyre, or are the younger horses on the loose?

  21. Posted Aug 21, 2017 at 12:02 AM | Permalink

    Judith Curry is putting out a call to climate science lay experts for help critiquing the 669-page “Climate Science Special Report” that, as you might have heard, was “leaked” to the press. It is now under Presidential review as well as public comment.

    Judith:

    A red team effort is needed, with people selected outside of the USGCRP establishment. It seems possible that the Trump Administration will soon convene an adversarial review of the Climate Science Special Report. To give any formal Red Team a leg up on their work, I’m proposing that the CE Denizens conduct a crowdsourced review of the Report. I’m hoping that the collection of comments posted here might receive more attention than public comments submitted to the USGCRP.

    So it’s pay-up time for the Denizens. It’s time to put some effort into critiquing this report. I’m asking for your help in identifying false, misleading, incomplete and/or overconfident statements in the draft. https://judithcurry.com/2017/08/20/reviewing-the-climate-science-special-report/

  22. Dodgy Geezer
    Posted Oct 27, 2017 at 9:42 AM | Permalink

    If I had received such a comment from an academic in the course of a polite discussion, a transcript would be winging its way to his Dean or Chancellor…