Marvel et al.: GISS did omit land use forcing

A guest article by Nic Lewis

I reported in a previous post, here, a number of serious problems that I had identified in Marvel et al. (2015): Implications for climate sensitivity from the response to individual forcings. This Nature Climate Change paper concluded, based purely on simulations by the GISS-E2-R climate model, that estimates of the transient climate response (TCR) and equilibrium climate sensitivity (ECS) based on observations over the historical period (~1850 to recent times) were biased low.

I followed up my first article with an update that concentrated on land use change (LU) forcing. Inter alia, I presented regression results that strongly suggested the Historical simulation forcing (iRF) time series used in Marvel et al. omitted LU forcing. Gavin Schmidt of GISS responded on RealClimate, writing:

“Lewis in subsequent comments has claimed without evidence that land use was not properly included in our historical runs…. These are simply post hoc justifications for not wanting to accept the results.”

In fact, not only had I presented strong evidence that the Historical iRF values omitted LU forcing, but I had concluded:

“I really don’t know what the explanation is for the apparently missing Land use forcing. Hopefully GISS, who alone have all the necessary information, may be able to provide enlightenment.”

When I responded to the RealClimate article, here, I presented, inter alia, further evidence that LU forcing hadn’t been included in the computed value of the total forcing applied in the Historical simulation: there was virtually no trace of LU forcing in the spatial pattern for Historical forcing. I wasn’t suggesting that LU forcing had been omitted from the forcings applied during the Historical simulations, but rather that it had not been included when measuring them.

Yesterday, a climate scientist friend drew my attention to a correction notice published by Nature Climate Change, reading as follows:

Corrected online 10 March 2016

In the version of this Letter originally published online, there was an error in the definition of F2×CO2 in equation (2). The historical instantaneous radiative forcing time series was also updated to reflect land use change, which was inadvertently excluded from the forcing originally calculated from ref. 22. This has resulted in minor changes to data in Figs 1 and 2, as well as in the corresponding main text and Supplementary Information. In addition, the end of the paragraph beginning ‘Scaling ΔF for each of the single-forcing runs…’ should have read ‘…the CO2-only runs’ (not ‘GHG-only runs’). The conclusions of the Letter are not affected by these changes. All errors have been corrected in all versions of the Letter. The authors thank Nic Lewis for his careful reading of the original manuscript that resulted in the identification of these errors.

So, as well as the previously flagged acceptance that the F2×CO2 value of 4.1 W/m2 was wrong in the original paper, the authors now accept that LU forcing had indeed been omitted from the Historical forcing values. It had been omitted from the forcing originally calculated in ref. 22 (Miller et al. 2014); Figures 2 and 4 of that paper likewise omit LU forcing.

It is decent of the authors to acknowledge me. However, I am mystified by their claim that “The conclusions of the Letter are not affected by these changes.” Their revised primary (iRF) estimate of historical transient efficacy is, per Table S1, 1.0 (0.995 at the centre of the symmetrical uncertainty range). This means that, in the model, the global mean surface temperature (GMST) response to the aggregate forcing applied over the historical period (actually, 1906–2005) was identical to the response to the same forcing from CO2 only. That implies the model’s TCR can be accurately estimated by comparing the GMST response and forcing over that period. But in their conclusions they contradict this, stating that:

“GISS ModelE2 is more sensitive to CO2 alone than it is to the sum of the forcings that were important over the past century.”

and they go on to claim that:

“Climate sensitivities estimated from recent observations will therefore be biased low in comparison with CO2-only simulations owing to an accident of history: when the efficacies of the forcings in the recent historical record are properly taken into account, estimates of TCR and ECS must be revised upwards.”

I would not have left these claims unaltered if it had been my paper – not that I would ever have made the second claim in any event, since it assumes the real world behaves in the same manner as GISS-E2-R.

I also find the way that the error in the F2×CO2 value has been dealt with in the corrected paper to be unsatisfactory. The previous, incorrect, value of 4.1 W/m2 has simply been deleted, without the correct values (4.5 W/m2 for iRF and 4.35 W/m2 for ERF) being given, either there or elsewhere in the paper or in the Supplementary Information. All the efficacy, TCR and ECS estimates given in the paper scale with the relevant F2×CO2 value, so it is important. In my view it would be appropriate to provide the F2×CO2 values somewhere in the paper or the SI.
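To illustrate the scaling point, here is a minimal sketch in Python (my own illustration, not code or data from the paper; the ΔT and ΔF values are hypothetical) of an energy-budget-style TCR estimate, which moves linearly with whichever F2×CO2 value is adopted:

# Minimal sketch: an energy-budget-style TCR estimate scales linearly with F2xCO2,
# so every derived TCR/ECS/efficacy value shifts if the wrong F2xCO2 is used.
def tcr_estimate(delta_t, delta_f, f2xco2):
    # warming per doubling-of-CO2 forcing, in K
    return f2xco2 * delta_t / delta_f

delta_t = 0.8   # K,    hypothetical GMST change over the analysis period
delta_f = 1.9   # W/m2, hypothetical change in total forcing over the same period

for f2x in (4.1, 4.5, 4.35):   # original (incorrect) value, corrected iRF, corrected ERF
    print(f"F2xCO2 = {f2x:4.2f} W/m2 -> TCR estimate = {tcr_estimate(delta_t, delta_f, f2x):.2f} K")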

Finally, I note that none of the other serious problems with the paper that I identified have been corrected. These include:

  • the use of values for aerosol and ozone forcings, which are sensitive to the climate state, that were calculated in an unrealistic climate state;
  • use of ocean heat uptake – which amounts to only ~86% of total heat uptake – as a measure of total heat uptake, even though the observational studies that Marvel et al. critique used estimates that included non-ocean heat uptake (see the sketch after this list);
  • downward bias in the equilibrium efficacy estimates, caused by not comparing the GMST response to the forcing concerned with the response to CO2 forcing derived in the same way;
  • various results whose values disagree significantly with the data used and/or the stated bases of calculation. For instance, all the uncertainty ranges appear to be out by a factor of two or more. (I can’t make sense of Gavin Schmidt’s justification for the way they have been calculated.)
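On the heat-uptake point in the second bullet, the sketch below (again my own illustration in Python, with hypothetical numbers rather than values from Marvel et al. or the observational studies) shows the standard energy-budget ECS relation and how the choice between ocean-only and total heat uptake feeds through:

# Minimal sketch of the standard energy-budget ECS relation, ECS = F2xCO2 * dT / (dF - dQ),
# where dQ is the change in the planet's heat uptake rate. Whether dQ is taken as total
# heat uptake or as ocean-only uptake (~86% of the total, per the bullet above) changes
# the estimate. All numbers are hypothetical.
F2XCO2 = 4.5                           # W/m2, corrected iRF value quoted in this post
delta_t = 0.8                          # K,    hypothetical GMST change
delta_f = 1.9                          # W/m2, hypothetical forcing change
delta_q_total = 0.50                   # W/m2, hypothetical total heat uptake change
delta_q_ocean = 0.86 * delta_q_total   # ocean-only share, per the ~86% figure

def ecs_estimate(dt, df, dq, f2x=F2XCO2):
    return f2x * dt / (df - dq)

print("ECS using total heat uptake :", round(ecs_estimate(delta_t, delta_f, delta_q_total), 2), "K")
print("ECS using ocean-only uptake :", round(ecs_estimate(delta_t, delta_f, delta_q_ocean), 2), "K")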

Update 13 March 2016

Steve M asked about the changes in the corrected paper/SI from the original. Apart from those mentioned in the 10 March 2016 correction notice, I have identified the following other changes:

1) The caption to Figure 1 of the paper has been changed by inserting “(defined with respect to 1850-1859)” after “a Non-overlapping ensemble average decadal mean changes in temperature and instantaneous radiative forcing for GISS-E2-R single-forcing ensembles (filled circles)” in the second line. As worded this definition applies to temperature as well as forcing, since the filled circles relate to both variables. But I think it is intended to apply only to the forcing change, not to the temperature change. This was triggered, I imagine, by the following exchange at RealClimate on 11 January 2016:

Nic Lewis says:

Chris Colose, you asked “why do the volcanic-only forcings (red dots) hover around a positive value in the first graph?”. The explanation I give in my technical analysis of Marvel et al. at Climate Audit is that in Figure 1 the iRF for volcanoes appears to have been shifted by ~+0.29 W/m2 from its data values, available at http://data.giss.nasa.gov/modelforce/Fi_Miller_et_al14.txt. Why not check it out and see whether my analysis is confused, as has been suggested here?

[Response: You are confused because you are using a single year baseline, when the data are being processed in decadal means. Thus the 19th C baseline is 1850-1859, not 1850. We could have been clearer in the paper that this was the case, but the jumping to conclusions you are doing does not seem justified. – gavin]

There was no mention anywhere in the original paper or SI of the 1850-1859 mean being used as a baseline for any variable, and it appears to have been used as the baseline solely for iRF forcing.
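As a minimal sketch of the baseline point (a made-up series stands in for the actual iRF data at http://data.giss.nasa.gov/modelforce/Fi_Miller_et_al14.txt), switching from a single-year 1850 baseline to an 1850-1859 decadal-mean baseline simply shifts the whole anomaly series by a constant:

import numpy as np

# Anomalies relative to the single year 1850 versus the 1850-1859 decadal mean differ
# only by a constant offset; a shift of that kind is what was at issue in the exchange above.
years = np.arange(1850, 2013)
forcing = 0.01 * (years - 1850) + 0.1 * np.sin(0.5 * (years - 1850))   # stand-in series, W/m2

anom_vs_1850   = forcing - forcing[years == 1850][0]                            # single-year baseline
anom_vs_decade = forcing - forcing[(years >= 1850) & (years <= 1859)].mean()    # decadal-mean baseline

offset = anom_vs_1850 - anom_vs_decade
print("offset is constant:", np.allclose(offset, offset[0]), "| value (W/m2):", round(float(offset[0]), 3))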

2) The words “for the iRF case” have also been added two lines further down, at the end of the caption for Figure 1.a.

3) The second sentence of the paragraph starting “Assuming that all forcings have the same transient efficacy as greenhouse gases” now reads: “Scaling each forcing by our estimates of transient efficacy determined from iRF we obtain a best estimate for TCR of 1.7 C (Fig. 2a) (1.6 C if efficacies are determined from ERF).” The words in brackets are new, and previously the figure before “(Fig. 2a)” was 1.7 C.

4) The ECS figure in the third sentence of the paragraph starting “We apply the same reasoning to estimates of ECS.” has been changed from 2.9 C to 2.6 C.

5) Various values in graphs have been changed.

6) All the efficacy values in Table S1 in the Supplementary Information have been changed. I show below the update history for this table, along with my provisional calculations of what all the values should be.
[Image: TableS1_versions – update history of Table S1, with my provisional calculations of the efficacy values]

My calculations are based on the method I have deduced Marvel et al. used to produce the iRF efficacy estimates, not on the basis stated by Marvel et al., which is significantly different in two respects. Marvel et al. state:

“In the iRF case, where annual forcing time series are available, TCR and ECS are calculated by regressing ensemble-average decadal mean forcing or forcing minus ocean heat content change rate against ensemble-average temperature change.”

and say that transient and equilibrium efficacies are defined as the ratio of the calculated TCR or ECS to published GISS-E2-R TCR and ECS values. But I can only match the Table S1 mean iRF efficacy values by regressing the other way around, that is, regressing temperature change against decadal mean forcing or forcing minus ocean heat content change rate. Moreover, to match the ECS efficacy values (for which it makes a difference), I have to regress on a run-by-run basis and then average the regression slopes, rather than regressing on ensemble-average values, as the paper states was done.
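The sketch below (in Python, with synthetic data; it reflects my reading of the wording above rather than Marvel et al.’s actual code) sets out the two regression conventions and the run-by-run variant side by side:

import numpy as np

rng = np.random.default_rng(1)
n_runs, n_decades = 5, 10
true_slope = 0.45                                    # K per W/m2, illustrative only
forcing = np.linspace(0.0, 2.5, n_decades) + rng.normal(0, 0.05, (n_runs, n_decades))
temp = true_slope * forcing + rng.normal(0, 0.10, (n_runs, n_decades))

def slope(x, y):
    # least-squares slope of y regressed against x
    return np.polyfit(x, y, 1)[0]

# (a) As stated in the paper: regress ensemble-average forcing against temperature (W/m2 per K).
stated = slope(temp.mean(axis=0), forcing.mean(axis=0))

# (b) The other way around, which is what I find matches Table S1: regress ensemble-average
#     temperature against forcing (K per W/m2).
reversed_direction = slope(forcing.mean(axis=0), temp.mean(axis=0))

# (c) Run-by-run regression of temperature on forcing, then averaging the slopes, which is
#     what I find is needed to match the ECS efficacy values.
run_by_run = np.mean([slope(forcing[i], temp[i]) for i in range(n_runs)])

print(stated, reversed_direction, run_by_run)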

For all iRF forcings other than LU, my mean estimates agree with those in the 10 March 2016 version of Table S1, within rounding errors. I am currently unsure why the iRF LU efficacies differ between the version of Table S1 dated 18 January 2016 – which was available at the GISS website until recently and which matched my estimate of the mean – and the 10 March 2016 version. Nor have I yet worked out why most of the ERF mean estimates and uncertainty ranges differ between my calculations and the 10 March 2016 version of Table S1.

As already noted, my iRF efficacy uncertainty ranges are much narrower than Marvel et al.’s. The ratio of their ranges to mine appears to be [double] the square root of [one-fifth] the number of simulation runs from which the range was derived. That is [, by a factor of 1.118 (1.225 for historical forcing), not] consistent with Gavin Schmidt’s comment at RealClimate:

“The uncertainties in the Table S1 are the 90% spread in the ensemble, not the standard error of the mean.”

Since the values given are stated to be “mean and 95% confidence intervals”, I cannot see any justification for the efficacy uncertainty ranges actually being 95% confidence intervals for a single run, centered on the mean efficacy calculated over all runs.
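For concreteness, here is a minimal sketch (hypothetical efficacy values, not Marvel et al.’s data) of the distinction between a 95% confidence interval for the ensemble-mean efficacy and a 95% interval for a single run; for five runs the two differ by a factor of sqrt(5), about 2.24:

import numpy as np

effs = np.array([0.92, 1.05, 0.98, 1.10, 0.95])   # hypothetical per-run efficacies, n = 5
n = effs.size
sd = effs.std(ddof=1)                              # sample standard deviation across runs

half_width_mean   = 1.96 * sd / np.sqrt(n)         # ~95% CI half-width for the ensemble mean
half_width_single = 1.96 * sd                      # ~95% interval half-width for a single run

print(f"mean efficacy        : {effs.mean():.3f}")
print(f"95% CI for the mean  : +/- {half_width_mean:.3f}")
print(f"95% single-run range : +/- {half_width_single:.3f}  (wider by sqrt(n) = {np.sqrt(n):.2f}x)")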

Note: the wording in square brackets in the paragraph beginning “As already noted” was inserted 14 March 2016 AM; I wrote the Update late at night and overlooked that Gavin Schmidt’s explanation would only account for the discrepancy if the divisor for the standard error of the mean of n runs were sqrt(n-1), rather than the correct value of sqrt(n).

130 Comments

  1. Posted Mar 11, 2016 at 8:44 PM | Permalink

    No small victory getting an acknowledgement from the tribe. Congratulations on that and well done for being persistent.

  2. Timo Soren
    Posted Mar 11, 2016 at 9:20 PM | Permalink

    Keep at it Nic! We appreciate it.

  3. Posted Mar 11, 2016 at 11:46 PM | Permalink

    Congratulations on a partial Victory. I can hear the sounds of them eating humble pie.

  4. bernie1815
    Posted Mar 12, 2016 at 7:42 AM | Permalink

    Nic: Nicely done. So Marvel et al’s claim is that this error was inadvertent? Does this strike you as likely? It strikes me as a very convenient inadvertency. Might there be an assumption that LU forcing is small enough to ignore? Might it be the result of a form of confirmation bias?
    Is this an example of the same mind set that kept problematic proxies in the analysis after they had been shown to lead to biased results?

    • MikeN
      Posted Mar 13, 2016 at 10:55 AM | Permalink

      I think inadvertent is likely. Now if it had been an error going in the other direction, they might have been more diligent and noticed the error.

  5. mpainter
    Posted Mar 12, 2016 at 10:29 AM | Permalink

    Nix, no personal note from any of the authors acknowledging your help in this?

    • mpainter
      Posted Mar 12, 2016 at 10:30 AM | Permalink

      _Nic_

  6. stevefitzpatrick
    Posted Mar 12, 2016 at 11:03 AM | Permalink

    Nic,
    They showed a bit more class than usual by not claiming they had already identified the errors before you pointed them out.

    That doesn’t absolve them of culpability in insisting on inclusion of an obviously bizarre land use run and not revising the paper’s conclusion, which should now be something like ‘the model shows little net impact of efficacies for non-CO2 forcings’. Seems to me another case of confirmation bias leading to a paper that was not critically examined by either authors or reviewers. The authors should have remembered that the easiest person to fool with a biased analysis is yourself.

    • Michael Jankowski
      Posted Mar 12, 2016 at 12:07 PM | Permalink

      Eh…why did they not thank Nic Lewis simply for identifying the errors, rather than for his “careful reading that resulted in the identification of these errors”?

      • Posted Mar 12, 2016 at 12:46 PM | Permalink

        Yes, I had noticed that wording.

        • M Seward
          Posted Mar 13, 2016 at 9:23 PM | Permalink

          I think ‘weasel wording’ about sums it up.

        • Posted Mar 14, 2016 at 5:36 AM | Permalink

          Firstly, they did thank you. This has been an example of blog science having an impact, which has been acknowledged. Just because you didn’t get it all your way does not make the other party wrong and you right. Stamping your feet and complaining again and again and again is not how science is done. You could try to take a positive, rather than a negative, out of this. It is ultimately a positive and I think it is both good that you’ve delved into this and good that the authors have responded.

          Secondly, you’re promoting this on a site that has – in the past – implied that one of the authors might not be honest. What do you expect? Do you think you can get away with robust (for which you should read insulting) posts on such a site and then expect others to thank you profusely? I’m not saying you should avoid your normal style, but at least recognise that by engaging as you do you’re unlikely to get effusive thanks from those you’ve chosen to insult.

        • kim
          Posted Mar 14, 2016 at 6:46 AM | Permalink

          Observations offend, Ken. You might as well get used to it; that’s their nature.
          ==============

        • Posted Mar 14, 2016 at 7:11 AM | Permalink

          ATTP, you have no valid grounds to imply that I was complaining. I agree that it is very positive that the authors have responded on the two issues involved in the correction. I don’t expect to get effusive thanks in a case like this – few people like to have to issue corrections in relation to errors found in their papers.

          I invite others to compare the wording of my articles and comments about Marvel et al. 2015 at Climate Audit with those by Gavin Schmidt about my appraisal of Marvel et al. and to decide for themselves which most deserve the term “insulting”.

          I’ll leave Steve M to respond, should he wish to do so, to your complaint about a post he wrote over a decade ago.

        • Posted Mar 14, 2016 at 7:30 AM | Permalink

          Nic,
          I was responding to your “yes I had noticed that wording” which I took to imply that you agreed with the earlier commenter. If you’re not complaining and see this as a positive, then I take back my criticism. As I said, I think this is a positive, both what you’ve done and what the authors of the paper have done.

          Oh, and I wasn’t complaining about Steve’s older post, I was simply highlighting something that you might wish to ponder. Steve can write whatever post he likes. Others can also choose how to thank those who may, or may not, have highlighted errors with some of their work.

        • miker613
          Posted Mar 14, 2016 at 8:33 AM | Permalink

          “These are simply post hoc justifications for not wanting to accept the results.” ATTP, I really don’t see anything Lewis said that approached the insult in these words of Gavin Schmidt. Be sensitive in both directions.
          In any event, the importance of noting Nic Lewis’s contribution to their correction (at least for me) is not that they’re being collegial. It’s that they are acknowledging that Nic Lewis is a real contributor to their section of climate science. They are still saying that “The conclusions of the Letter are not affected by these changes,” and that doesn’t seem right to me – I’ll let Lewis and others sort that one out. If it does turn out that the conclusions are strongly affected, then they still tried to play Climateball by pretending that he isn’t an _important_ contributor, and that sometimes he’s completely right and their work was what was wrong.
          Mann and co never got to that point with McIntyre, even as we spectators watched over the course of a decade as Mann’s group grudgingly conceded one point after another till they were all gone – though never before they claimed that they had moved on to new, better work, and without ever acknowledging McIntyre’s work by name or link. Their cheerleaders are still saying that McIntyre was debunked, blissfully unaware that it went the other way round.
          I recall that you, ATTP, told me a little while ago that there was no obligation for Marvel et al to respond to a non-peer-reviewed comment like Lewis’ – in a blog! – and I answered that this is a much better and faster way for science to progress and truth to be established. You said that the statistics discussion didn’t seem to be going anywhere, and (by implication) nothing was being gained. How happy I am that Schmidt and company are more interested in checking their work. Their cheerleaders will just have to change their cheers.

        • mpainter
          Posted Mar 14, 2016 at 8:54 AM | Permalink

          I predict that Ken Rice will wind up regretting his harangue. Perhaps he will even apologize for its intemperance.

        • Posted Mar 14, 2016 at 10:32 AM | Permalink

          Desperate stuff from Ken Rice in trying the guilt-by-association tack on Nic based on these words of Steve back in October 2005:

          So I ask the question again: is Gavin Schmidt honest about welcoming “serious discussions and rebuttals” at realclimate? About being in favor of openness in climate science? Or even about my being allowed to post at realclimate?

          ATTP’s argument here implies not just a thin skin for Schmidt, versatile international man of mystery, now popped up as Marvel co-author, but a remarkably long-lasting one. And hasn’t the moderation at realclimate frequently raised the questions asked by Steve over ten years ago?

          I see this as of one piece with ATTP’s commentary on the original Marvel thread by Nic in January:

          Let’s be clear, critiquing other studies is an entirely reasonable thing to do. … [But] Auditing isn’t really part of the standard scientific method.

          We had a good laugh then at ‘critique-good, audit-bad’ – an attitude surely more than a little influenced by the name of this blog. Now Nic is also held to bear responsibility for all words written here for twelve years, yanked out of context, and their detrimental effect on the fragile egos of those with whom the host has had cause to disagree. Time to get a grip.

        • stevefitzpatrick
          Posted Mar 14, 2016 at 3:51 PM | Permalink

          Ken Rice,
          “complaining again and again and again is not how science is done”

          Please do point out Nic’s endless complaining that you refer to. I haven’t seen any evidence of that. What I have seen is Gavin accuse Nic of refusing to accept the results of Marvel et al because of personal bias. His words were: “These are simply post hoc justifications for not wanting to accept the results.” Gavin was just as wrong about that as he was about the missing LU forcing, the incorrect instantaneous CO2 forcing, and the inclusion of a bizarre ‘model-off-the-rails’ LU run in the analysis.

          Seems to me Marvel et al’s errors got past review and into a published paper precisely because of what Gavin accused Nic of: the authors and reviewers suffer clear confirmation bias…. they ‘liked’ the results and so didn’t critically examine the paper. They want empirical estimates of relatively low sensitivity to be wrong, and think (with no credible rationale) that the response of a GCM to different applied forcings is somehow a refutation of empirical estimates. I think a more plausible interpretation is that the model is a poor representation of reality.

        • RickA
          Posted Mar 14, 2016 at 4:05 PM | Permalink

          RC bans comments.

          But so does ATTP.

          I have been banned (via Willard).

          It is their blog – so they can do what they want – but it is a pretty cheap way to try to win an argument.

          Just don’t let the other side speak.

        • stevefitzpatrick
          Posted Mar 14, 2016 at 6:30 PM | Permalink

          RickA,
          Silencing your opponents has a long and sorry history among totalitarians of all stripes.

      • stevefitzpatrick
        Posted Mar 12, 2016 at 4:44 PM | Permalink

        The words they may have wanted to use were ‘nitpicking reading’. They refuse to budge on the errors weakening the paper’s original conclusion, which I find very odd, since the paper clearly is weakened when the errors are corrected.

        • Michael Jankowski
          Posted Mar 12, 2016 at 5:23 PM | Permalink

          Yes it would appear they maintain that their results were robust to careful/nitpicking reading.

      • MikeN
        Posted Mar 13, 2016 at 10:57 AM | Permalink

        Are they trying to emphasize it is not a simple error?

      • stevefitzpatrick
        Posted Mar 14, 2016 at 3:17 PM | Permalink

        Ummm… maybe Gavin just thinks the passive voice is always better than the active voice; too many years writing in the passive voice in journals I guess.

  7. Matt Skaggs
    Posted Mar 12, 2016 at 11:17 AM | Permalink

    “These are simply post hoc justifications for not wanting to accept the results.”

    All these years of writing out careful rebuttals to criticism of my work…this is what I really wanted to say!

  8. kenfritsch
    Posted Mar 12, 2016 at 11:41 AM | Permalink

    I hope Nic (and PaulK) keeps plugging away until all the issues he has posed have been addressed. Schmidt’s reaction to the Land Use omission in the correction is very different from his original reaction, which seemed so certain that there was not a problem.

    I think I understand that the uncertainty in the means of the multiple model runs would be properly handled by using the standard error of the mean, which requires dividing the standard deviation by the square root of 5 in this case. Schmidt merely shrugs this off by saying he reported the standard deviation and not the standard error.

    Schmidt has had a problem with this previously, in a discussion of the results of a paper by Santer on the differences between model and observed trends in the lower troposphere.

    • Michael Jankowski
      Posted Mar 12, 2016 at 12:12 PM | Permalink

      So does the International Man of Mystery now “accept the results” that Nic was right? Or is he spinning like a Mann-o-matic?

    • Jeff Alberts
      Posted Mar 12, 2016 at 12:12 PM | Permalink

      I’m sure Gavin is, as we type, furiously typing out a voluminous apology.

    • mpainter
      Posted Mar 12, 2016 at 12:39 PM | Permalink

      “Schmidt’s reaction to the Land Use omission in the correction is very different from his original reaction, which seemed so certain that there was not a problem.”

      ###
      But Schmidt made no specific claims that Nic Lewis was incorrect:

      “Lewis in subsequent comments has claimed without evidence that land use was not properly included in our historical runs…. These are simply post hoc justifications for not wanting to accept the results.”

      That Schmidt is a curious fellow.

      • stevefitzpatrick
        Posted Mar 12, 2016 at 5:10 PM | Permalink

        Curious is not the adjective I would choose; maybe ‘strident’ or ‘arrogant’ would be more accurate.

        • stan
          Posted Mar 12, 2016 at 9:06 PM | Permalink

          Overly kind.

  9. Posted Mar 12, 2016 at 12:45 PM | Permalink

    Bernie1815: Thanks. Yes, I think that the omission of Land use forcing from the measure of Historical forcing was completely inadvertent. But it seemed pretty obvious to me that LU forcing might well have been omitted, and both straightforward regression analysis and comparison of the spatial patterns pointed to its omission being a near certainty. I would say that its likely omission could have been spotted, and confirmed, by the lead author, but I don’t think Kate Marvel can really be blamed for not doing so. There might be an element of confirmation bias there, or perhaps a less questioning and cross-checking approach than my own.

    I find the use of the wrong F2xCO2 values more difficult to account for. And I am a bit surprised that it wasn’t picked up in peer review. If I had been a peer reviewer, I would certainly have questioned whether the F2xCO2 value used the correct measure of forcing.

    • Michael Jankowski
      Posted Mar 12, 2016 at 12:59 PM | Permalink

      “… And I am a bit surprised that it wasn’t picked up on peer review…”

      As long as you keep it to “a bit.”

      “…I had been a peer reviewer, I would certainly have questioned whether the F2xCO2 value used the correct measure of forcing…”

      If you had been the author, I think it would have been questioned.

      Am I too cynical?

    • bernie1815
      Posted Mar 12, 2016 at 1:44 PM | Permalink

      Nic: Thanks for the full response. Somebody just posted this. I assume the nifty CG would be somewhat different now? Note this graphic includes what passes for a LU forcing.
      http://www.bloomberg.com/graphics/2015-whats-warming-the-world/

      • Follow the Money
        Posted Mar 12, 2016 at 4:58 PM | Permalink

        Somebody just posted this.

        Where…here? Did they point out the “forcings” are not calculated from data, but alleged GISS modeling? It’s explained in the text below the graph.

        • bernie1815
          Posted Mar 12, 2016 at 6:00 PM | Permalink

          It came up on my Facebook page from a friend with the following as an explanation:

          Retweeted Gavin Schmidt (@ClimateOfGavin):
          “What’s warming the World” ‪#‎dataviz‬ from @DrKateMarvel, me, @eroston and @BlackiLi wins a major award! ‪#‎malofiej24‬

          I was surprised when it was dated June, 2015. I assumed that the graphics won an award and Gavin tweeted out the announcement without any caveats.

          Please don’t shoot the messenger.

        • Follow the Money
          Posted Mar 13, 2016 at 12:03 AM | Permalink

          Shoot, I wouldn’t do that.

          Plus I’ve looked on twitter…it is surprising how many people think “data” is being depicted.

          Or not–that’s a target of graphic chicanery. It works.

          This graph(s) is about the second time I’ve seen climate scientologists dare to track a CO2 record and a temps record as optically congruent. Although I also suspect atmospheric CO2 concentration data were undisclosed partners in creating early hockey sticks.

        • Follow the Money
          Posted Mar 13, 2016 at 5:34 PM | Permalink

          b,

          eyeballing the graphs again, GISS model forcing runs against a GISS temperature data set, I gain a feeling that the “adjusting” of GISS temperature records over the past recent years is not necessarily just warming the present and cooling the past by whimsy. They could be using their GHG model runs to calibrate or massage the temperature sets. The linear matching is just too close, i.e., they’re doing it for the optics, not the science. Of course, the relationship is allegedly logarithmic, but that is never explained.

  10. Posted Mar 12, 2016 at 2:13 PM | Permalink

    It’s a pleasant surprise to see the authors both acknowledge this error and Nic Lewis for finding it. Maybe I’ve grown too cynical, but such a trivially decent thing is genuinely surprising to me at this point.

    I am curious what Gavin Schmidt will start saying now. Or if he’ll just stop talking altogether to try to avoid the issue. He’s certainly done that before when he realized he screwed up in what he defended.

    • MikeN
      Posted Mar 13, 2016 at 11:03 AM | Permalink

      I was arguing with one blogger about the errors in Kaufmann, and they were completely not understanding the upside-down Mann, complicated by the fact that I had randomly picked the anomalous cold portion of the warm period to demo.
      When she finally got it, she said I should contact the author, which I responded was pointless. Before clicking to post the comment, I tried it. I was very surprised to see Kaufmann respond that they were submitting a correction.

  11. Michael Jankowski
    Posted Mar 12, 2016 at 5:28 PM | Permalink

    If such an error were made by Steve, Christy, Spencer, etc, we’d never hear the end of it. If such a person maintained there was no error but were eventually made to concede, we’d hear about it hundredfold.

    Instead we get the usual blip on the radar with an “it doesn’t affect our conclusions” and are just supposed to move on.

    • kenfritsch
      Posted Mar 12, 2016 at 5:46 PM | Permalink

      I think the reason that some like Schmidt can say that it does not affect their conclusions is because their conclusions were set in stone before the paper was written – long before it was written. As to the future, it may well be a matter of what you believe: those long-set conclusions or your lying eyes.

      • stevefitzpatrick
        Posted Mar 12, 2016 at 6:22 PM | Permalink

        Kenneth,
        I really think it is all about reducing the credibility of any empirical estimate which yields other than high sensitivity. The policies desired by green advocates become much less likely to be adopted if lower sensitivity is credible (and modeled sensitivity less so). It seems to me just politics masquerading as science.

        • kenfritsch
          Posted Mar 13, 2016 at 10:42 AM | Permalink

          SteveF, while my comment was a lame attempt at being cute and I agree with your specifics in this case, my comment was aimed at the more general case where the climate scientist knows the “correct” conclusion and now only has to find evidence to confirm it. Once the investigation provides that “correct” conclusion the investigators have a tendency to stop looking. That means sensitivity testing is not done and apparently in this case and others that might be recalled the results were not checked properly for errors. That was left to Nic. With hard set conclusions going into the study it is obviously difficult for the authors to admit that the errors will change that conclusion. A tactic often used is to reply to one error at a time and avoid confronting a multitude of errors as that allows the authors to reply that that single error does not materially change the conclusions whereas the multitude of errors would. That is why I hope Nic and PaulK can maintain the pressure on the Marvel authors to reply to all the issues they have raised.

      • Posted Mar 12, 2016 at 10:47 PM | Permalink

        Too harsh.
        Clearly, it doesn’t affect the conclusions of the paper simply because the conclusions were reached without knowledge of the error. Were the authors to draw conclusions with knowledge of the error, they may well be different to those of the published paper. But they haven’t published such conclusions, and perhaps not made them – preferring, perhaps, to rely on their previously published conclusions instead.
        Equally clear is that since the conclusions support the establishment position, no change – or acknowledgement of a requirement for change – of the conclusions is considered appropriate.

        What, me? Cynical?

  12. Posted Mar 12, 2016 at 6:57 PM | Permalink

    Nic,

    It clearly is a positive development to have your careful work acknowledged by the authors: however, although they did make a few corrections, they did not admit to what to me is the most important point – your paper’s estimate was not biased low. In fact, using their corrected forcings and assuming HadCRUT4 temperature changes, together with observationally based estimates of heat flux to the oceans and elsewhere, I would estimate ECS values of 0.8 to 1.3 C and TCRs of 0.7 to 1.1 C – even lower than those in your paper. I’ll email you a spreadsheet showing how I get to those numbers. If you agree that’s the case it might be useful to gently point that out, even though I suspect getting either the authors or the Editors to agree in public might be tough sledding.

    • Posted Mar 12, 2016 at 9:32 PM | Permalink

      Again, with emphasis

      they did not admit to what to me is the most important point – your paper’s estimate was not biased low.

    • Posted Mar 13, 2016 at 12:54 PM | Permalink

      Keith,
      Thanks. I’ll respond properly when I have had a chance to study your spreadsheet – I have been out today.

  13. Posted Mar 12, 2016 at 8:23 PM | Permalink

    The RC crowd has made quite an extraordinary hobby of refutable papers. Engineers that make things which don’t work don’t get paid.

    • stan
      Posted Mar 12, 2016 at 9:10 PM | Permalink

      Science has no accountability because academia has no accountability. The absence of accountability explains much of what goes on today on campus.

      • mpainter
        Posted Mar 13, 2016 at 9:44 AM | Permalink

        One logical place for accountability is the NSF. Public funds are accountable, minutely so. Standards are now called for. I suggest that one such standard might read:

        >If a study uses statistical techniques, these should be according to the best practices as determined by that field.<

        Or some such language defining standards. Also, there should be sanctions, such as a refund by authors/institutions who fail to meet standards. This will put the institution in the position of judging the work of its inmates, instead of merely cranking out press releases. All for the improvement of science and academia, to the enormous benefit of the public.

        Dear Congressman, no accountability for the use of NSF funds? WHAT GIVES? Time for this abuse to end.

    • stevefitzpatrick
      Posted Mar 12, 2016 at 9:21 PM | Permalink

      They get paid for a little while, but not long.

    • Michael Jankowski
      Posted Mar 12, 2016 at 9:40 PM | Permalink

      Engineers risk losing their license and essentially their career.

      • Carbon Bigfoot
        Posted Mar 13, 2016 at 2:12 PM | Permalink

        Not all engineers–only Professional Engineers face that scrutiny. In the US we have what is called the Industrial Exemption. Engineers that work for Industry, Utilities and Government are not required to be licensed. As a ChE I am an exception being licensed as I chose consulting across all markets including State & Government.
        The push for all Engineers to be licensed by NSPE and State Societies has met with mixed resistance. The National Society IMHO does a good job of representing Civil, Structural, Mechanical and Electrical Fields, as they are involved in Infrastructure & Building Construction. Unfortunately they don’t understand Hydrocarbon and Chemical Processes and their continuing professional competency programs don’t include our needs. AIChE would make a better Licensing and Oversight Agency. States would never agree as they would lose their power and fees.

        • Michael Jankowski
          Posted Mar 14, 2016 at 7:33 PM | Permalink

          Ok, so people termed “engineers” who aren’t licensed don’t risk losing a license. If I say that “drivers risk having their license suspended if they get a DUI,” are you going to chime-in and tell me that not everyone who drives a vehicle is licensed and that some vehicles don’t need a license to operate at all?

          I’ve worked in government and for a utility…and used my license. It depends on the nature of the work.

    • MikeN
      Posted Mar 13, 2016 at 11:04 AM | Permalink

      And then declare how the skeptics can’t even get radians and degrees straight or don’t understand Fortran.

  14. Steve McIntyre
    Posted Mar 12, 2016 at 10:55 PM | Permalink

    Nic, have you collated before and after changes in the paper?

    • Posted Mar 13, 2016 at 12:55 PM | Permalink

      Steve, yes, I have. I’ll post an update giving them.

  15. TimTheToolMan
    Posted Mar 13, 2016 at 5:26 AM | Permalink

    Gavin : “These are simply post hoc justifications for not wanting to accept the results.”

    The irony. It burns.

  16. miker613
    Posted Mar 13, 2016 at 9:27 AM | Permalink

    To me it is very encouraging that they acknowledged Nic Lewis by name.

    • mpainter
      Posted Mar 13, 2016 at 10:35 AM | Permalink

      It was either that or plagiarism. They were impaled on the horns of a dilemma, were they not? Hardly encouraging that they should choose the least disagreeable.

      Their third alternative was to ignore the pestiferous Nic Lewis completely. But such a course would have to be acceptable to all the co-authors. I guess one might feel encouraged that they corrected their study.

  17. Gerald Machnee
    Posted Mar 13, 2016 at 10:14 AM | Permalink

    Are they still saying, “we made an error but our results are still correct”?

    • Not Sure
      Posted Mar 13, 2016 at 1:19 PM | Permalink

      Sure sounds that way to me. By the Hockey Team playbook so far. First say that the error “doesn’t matter”. Next, I’m expecting they’ll say it’s time to “move on”.

      • Michael Jankowski
        Posted Mar 13, 2016 at 5:16 PM | Permalink

        We’ll likely see an “independent” paper that “validates” Marvel et al (2015) even if it doesn’t.

        • stevefitzpatrick
          Posted Mar 13, 2016 at 5:59 PM | Permalink

          Of course. One weak paper with multiple errors and based only on one climate model will not do….. a multimodel study is needed now to confirm that empirical estimates are always biased low. Of course, nobody is going to point out that the ‘bias’ is against a bunch of worthless rubbish climate models.

  18. Posted Mar 13, 2016 at 7:34 PM | Permalink

    I have added an Update dealing with the changes made in the corrected paper and SI, and giving my own calculations of the forcing efficacies on the basis I deduce was used by Marvel et al.

  19. miker613
    Posted Mar 14, 2016 at 8:05 AM | Permalink

    ‘The ECS figure in the third sentence of the paragraph starting “We apply the same reasoning to estimates of ECS.” has been changed from 2.9 C to 2.6 C.’ A number of the conclusions seem to have been very significantly affected by the changes, notwithstanding their claim.

  20. Ross McKitrick
    Posted Mar 14, 2016 at 4:44 PM | Permalink

    I am struck by the difference between their (large) single-forcing uncertainties and that for the aggregate historical run, which is very small. Of the 10 single-forcing uncertainties for ERF in their Dec 14 table, 5 differ significantly (at 10%) from 1. But for the same entries in the March 10 table, only 2 do. And in 6 cases the efficacy has a negative lower bound, which implies that even the sign of the effect is uncertain. Yet the ‘historical’ row, which I take to be the aggregate, has very tiny uncertainty bounds. Somehow their uncertainties cancel each other out rather than interacting multiplicatively.

    • kenfritsch
      Posted Mar 14, 2016 at 8:01 PM | Permalink

      When I made my calculations for CIs I found the CI range was related to the size of the temperature change. AA and GHG produce relatively large changes. It has to do with the noise level versus temperature change.

  21. Posted Mar 15, 2016 at 3:15 AM | Permalink

    The attached plot shows the three GISS-E2-R runs in CMIP5, averaged over 30S-30N, compared to HadISST v1.1.

    They have a big problem with over-estimation of sensitivity to volcanic forcing. This is what I have been pointing out for years.

    Any possible correlation for Krakatoa and Mt. Pinatubo is down in the noise of inter-annual variability.

    If they are not even getting a vague similarity to observations, they need to start seriously stirring their fudge.

    Suggesting the models are under-estimating sensitivity is nothing short of laughable, however they massage the data to arrive at that conclusion.

    • Posted Mar 15, 2016 at 3:37 AM | Permalink

      They are injecting a spurious (highly exaggerated) cooling due to volcanic aerosols that counters a spuriously high sensitivity to GHG.

      Stratospheric ozone in models is erroneously being driven by CFC emissions rather than by the ozone-destroying sulphuric acid aerosols from stratospheric volcanic eruptions, and thus is also providing a spurious anthropogenic post-2000 forcing.

      The preconceptions about the human origins of ANY change that occurs in climate have contaminated the whole process and continue to drive everything they produce.

      • kim
        Posted Mar 15, 2016 at 3:40 AM | Permalink

        There is a lot of interannual variability, but in all five illustrated eruptions you can see the sharp down slope followed by a more gradual rise, also all nearly of the same slope, to a higher point. I’m not capable of making much of my reading of the graph, but it suggests that the cooling and warming effects of vulcanism are not thoroughly understood.
        =================

        • Posted Mar 15, 2016 at 3:55 AM | Permalink

          Huh? In the models yes, not in the orange line, which is derived from observations.

          There is NO downward trend after Krakatoa nor Mt P., the two largest events. There was a downward trend before Mt Agung which STOPPED after the eruption. The El Chichon dip happens when the model shows recovery, so any correlation is spuriously seen by those squinting with one eye closed.

          There is a warming spike about 5-7y after most eruptions. But that is another story. Part of the ozone mis-attribution.

        • kim
          Posted Mar 15, 2016 at 4:17 AM | Permalink

          Thanks for the correction. Lesson: Read more carefully, comment less casually.
          ================

        • Steven Mosher
          Posted Mar 15, 2016 at 2:31 PM | Permalink

          Dont be fooled kim.

          read the chart carefully and you will see the trick.

        • kch
          Posted Mar 15, 2016 at 6:49 PM | Permalink

          Stephen Mosher-

          I sometimes wish that you would be less cryptic in your commentary, if only to assist those of us who mostly lurk about trying to follow the discussions. I think I see what you’re getting at, but…

          So, in looking at the chart, I note that the orange line (HadISST) is based on sea-surface temperature observations, while the three other lines (various GISS-E2-R runs) are land-and-sea model outputs. Is this what you are talking about?

          If it is, I think I see your point – I certainly wouldn’t expect SSTs to respond to volcanic eruptions in the same fashion as atmospheric temperatures would (at least not in the short term).

          If it isn’t, what are you talking about?

        • Steven Mosher
          Posted Mar 15, 2016 at 11:34 PM | Permalink

          kch.

          read the chart more carefully. you are headed in the right direction.
          of course you could read the literature on the spatial aspects of the climate response to volcanoes.

        • kch
          Posted Mar 16, 2016 at 7:10 AM | Permalink

          Steven Mosher-

          “of course you could read the literature on the spatial aspects of the climate response to volcanoes.”

          Yes I could – if I had any freakin’ clue of where to start, or the time to try understanding a whole new area of this benighted science. Still, if you give me a pointer, I’ll try. I do enjoy learning new stuff…

          [As an aside, this is an example of what bothers me about your frequent crypticism – you obviously think climategrog has put up something misleading, but won’t explicitly say what it is. Without that, climategrog can’t defend himself or even really discuss what he’s done, or why. I greatly respect your knowledge and understanding, and so will assume there is some good reason for your position, but without some more exposition on this the ignorati such as myself are left with the impression that it is a he said/she said situation.]

        • kim
          Posted Mar 16, 2016 at 9:09 AM | Permalink

          Gotcha, we don’t understand the responses to volcanoes.
          =======

        • Steven Mosher
          Posted Mar 16, 2016 at 3:02 PM | Permalink

          “Yes I could – if I had any freakin’ clue of where to start, or the time to try understanding a whole new area of this benighted science. Still, if you give me a pointer, I’ll try. I do enjoy learning new stuff…”
          ###########################
          Weird. A cryptic chart gets posted with zero accountability. yet you refer to the science as “benighted”. You have zero reason to believe the chart, yet you question my questioning of the chart.
          Bottom line. if you see any chart on the internet that does not explain EXACTLY how it was produced, you have no rational obligation to believe it or defend it. Charts dont make cases.
          They are not evidence. they are nothing. Second. Google is your friend. But you dont even need google to see the problem. The chart itself gives you a clue and you were well on the way to finding all the issues

          [As an aside, this is an example of what bothers me about your frequent crypticism – you obviously think climategrog has put up something misleading, but won’t explicitly say what it is. Without that, climategrog can’t defend himself or even really discuss what he’s done, or why. I greatly respect your knowledge and understanding, and so will assume there is some good reason for your position, but without some more exposition on this the ignorati such as myself are left with the impression that it is a he said/she said situation.]

          1. Without that, climategrog can’t defend himself or even really discuss what he’s done, or why.
          read what you wrote again. A chart is posted. You are saying that unless I detail what
          is wrong with it that the author cannot defend himself or EXPLAIN or discuss what he has done?
          my silence has never been so powerful….

          2. Weird that you should see it as a “he said’ she said. As if merely posting a chart made a fact. On berkeley earth Ive posted over 40,000 graphics. maybe a handful have been discussed.
          That makes 40000+ cases where i said something and no skeptic explained why they reject that
          specific chart. of course, since they havent attacked every chart and explained in detail
          what problem they have, I cannot defend myself.. See how funny your logic is?

          Look at the chart again. Dont just read the words on the chart, but write them down.
          As you write them down.. ask yourself.. why this? why this choice? was there another choice?
          what happens if I choose differently?

          you were already on the right path when you asked ‘Why SST?”
          there is more here. Look at the area selected and ask why?

          next look at the sources for CMIP

          you see three runs selected.. looks official and all doesnt it? except

          https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere#ocean

          See the problem?

          It’s pretty simple. Analysts make choices. Sometimes they say nothing about their choices
          and you can pretty much throw their stuff away.
          Sometimes they tell you some of the choices they made

          1. I choose to look at SST.
          2. I choose to look at 30-30
          3. I choose to look at 3 runs of one GCM
          4. I choose to look at this subset of volcanos
          5. i choose to look at this metric
          6. I choose to look using this method of looking

          you think those choices are random? or don’t matter? think again.

        • mpainter
          Posted Mar 16, 2016 at 4:11 PM | Permalink

          Mosher: “Weird. A cryptic chart gets posted with zero accountability.”
          ###
          Weird indeed. Look closer. The data sources are given on the chart.

        • kch
          Posted Mar 16, 2016 at 7:29 PM | Permalink

          Steven Mosher –

          Thank you for the extended reply. The unpacking helps. My apologies if I came across as offensive – that was not in any way my intent.

          “you were well on the way to finding all the issues”

          Well, I’d thought I might have some of it, but wasn’t really sure how much. Still not sure, actually, though it’s starting to look like not just apples vs oranges, but cherry flavoured apples vs oranges. Probably wrong on that, though – I really don’t know enough about the various models to be certain.

          As for your points 1 and 2, frustration at not being certain of the answer led me to express myself badly and with a whole lot less logic than I could have wished for. I guess what I would have been looking for would have been something more direct on your part leading to a discussion. Your silence wouldn’t make him right, but your naked contradiction (if that’s what it was) doesn’t make him wrong. But definitely no obligation on anyone’s part to participate, just a personal wish to see a point further explained by those who know more than me. Lord knows I won’t be challenging climategrog, though now I do have some basis for asking him for clarification – no more – on a couple of points.

          [Side note: I always prefer to read two knowledgeable people on any topic rather than fall afoul of what I call “Mosher’s Law” (I first saw it laid out by you years ago): ‘in any blog debate, the weakest commenter will be the one attacked, and his defeat will be held up to demonstrate the total victory of the other side.’ I fully understand the limits of my knowledge, and hate to become the strawman on that basis. Can’t learn much that way…]

          Of course, this is all pretty OT, and so due to be snipped, but thanks again for taking the time to answer.

        • Posted Mar 17, 2016 at 12:42 AM | Permalink

          Steven Mosher makes a rather remarkable argument:

          1. Without that, climategrog can’t defend himself or even really discuss what he’s done, or why.
          read what you wrote again. A chart is posted. You are saying that unless I detail what
          is wrong with it that the author cannot defend himself or EXPLAIN or discuss what he has done?
          my silence has never been so powerful….

          To be clear, the first sentence here is a quotation which, for unknown reasons, Mosher doesn’t bother to indicate is a quotation. The second sentence is what’s fascinating. Mosher claimed a graphic was flawed/wrong/whatever. A person responded by pointing out that by making that accusation without providing any explanation or justification, Mosher made it impossible for the graphic’s creator to defend his work.

          Mosher’s response is to mockingly dismiss that idea by suggesting a person could defend or explain their work in general. That might be true, but… it doesn’t address what was said. When a person makes a vague accusation, there’s no way to know what one needs to respond to. A person could write 500 words explaining their graphic only to find nothing they said deals with the unspecified accusation.

          Vague accusations are a blight on discussions. If you want to say something is wrong, explain how it is wrong. If you want to just point out something is unexplained, that’s okay too. But the idea that making vague statements to refer to unspecified issues is okay because Mosher’s mocking remarks justify them is… silly.

          And while I don’t think it should be necessary, I’ll point out I’m not pointing this out because of any dislike for Mosher himself. My feelings to him aside, I’ve seen this same sort of thing in criticisms of what’s been posted here time and time again. People like Michael Mann and Gavin Schmidt have almost always benefited by vague and unspecified accusations. Generally what happens is Steve McIntyre has to spend far more time than the people making the accusations unpacking what the issues are and what the accusations might be claiming. It’s bad for discussions, and it is really quite rude.

        • Sven
          Posted Mar 17, 2016 at 7:27 AM | Permalink

          Just a few years ago Mosher used to be a welcome addition to every discussion so it’s really sad to see the change (I don’t know, since joining the Berkeley group?). His cryptic arrogance has really become a nuisance. Sad.

    • Posted Mar 15, 2016 at 3:48 AM | Permalink

      The over-sensitivity to both warming and cooling perturbations works while both are present. In the absence of major volcanic activity since Mt P, the errors become abundantly clear.

      There are probably dozens of ways to engineer a similar upward drift over a very limited period such as the 1960-1998 tuning of the models.

      The lack of the ability to match either preceding or later changes even on the inter-decadal scale shows their model has NO predictive skill whatsoever.

      But maybe : “These are simply post hoc justifications for not wanting to accept the results.”

    • mpainter
      Posted Mar 15, 2016 at 5:56 AM | Permalink

      Thanks for this. It illustrates graphically the faults of the modelers like Gavin Schmidt who are mathematicians with little grasp of science and have no concept of the need to reconcile their work with actual observations. In fact, your charts show zero correspondence between observations and model products, excepting only that which may be attributable to random match. Some day these anti-science tinker toys will be perceived as deliberate deception, but no, it is mostly incompetence, as Nic has shown in his parsing.

      • Michael Jankowski
        Posted Mar 17, 2016 at 6:54 PM | Permalink

        To be fair, Gavin’s PhD is in applied mathematics. He’s not a “pure” mathematician.

        But what’s really sad is that some of his issues here are mistakes someone with his mathematical background should not make.

    • kch
      Posted Mar 16, 2016 at 7:50 PM | Permalink

      Climategrog –

      A couple of questions:

      1) Why GISS-E2-R historical, as opposed to other possibilities? Would it have made a difference to your chart?

      2) It appears to me that the GISS-E2-R are atmospheric/ocean model runs, while the HadISST is purely SST. If so, there would appear to me to be little value in comparing the two when looking for aerosol sensitivity to volcanic eruptions. Is this wrong? How?

      [And please, I’m not challenging you or anyone else here, I’m just trying to get a better handle on this in the light of the back-and-forth I’ve had with Steven Mosher.]

      • kim
        Posted Mar 17, 2016 at 8:13 AM | Permalink

        It seems to me that eruptions of such different character probably shouldn’t be modeled alike.
        ==============

  22. Posted Mar 15, 2016 at 5:11 AM | Permalink

    I think it is a shame that the authors did not also acknowledge E J Thribb (17) of 4 Kensington Mews for his careful reading of the paper. I know that he read it assiduously. E J Thribb is 37.

    On the other hand, I would like to thank Nic Lewis for bringing a number of errors to the attention of the authors. In particular, I think that the identification of the missing LU forcing by reverse engineering is in keeping with the best tradition of this site. Well done.

    I am genuinely very surprised that the authors did not opt for full voluntary retraction. I suspect that their decision to persevere with this paper was a serious mistake which will come back to haunt them.

    By my count, the “corrected” paper still has 4 gross conceptual errors and 2 patently invalid assumptions, and introduces a monumentally absurd definition of the newly minted Equilibrium Efficacy. The latest results for transient efficacies also fail simple internal consistency checks, highlighting continuing methodological errors.

    The paper now resembles a war-torn landscape, and the collateral casualty list is growing.

    First and foremost, the credibility of the GISS-E2-R Nint results (the model itself) has become vanishingly small. Quite apart from the wider publicity given to the heat transport problem in the Russell ocean model (which affects all of the published GISS-E2-R results), and the famous rogue LU run in which a negative forcing yields an overall positive net flux response (which, according to Gavin, is not rogue at all and not to be excluded), the WMGHG results, and particularly the relationship between Fi and ERF values, now seem positively bizarre.

    The initial picture presented by Marvel et al (founded on an incorrect Fi value for CO2) was that the WMGHG results were basically in line with the CO2-only response, and that the historical run was a low outlier – giving rise to the paper’s argument that observation-derived TCR values would be biased low because of this “accident of history”. The new results show that the historical run aligns perfectly with the CO2-only case, and that the WMGHG response is a high outlier, with an unbelievably high efficacy despite the (Fi) forcing having already been increased by some 15% relative to the original GISS-E model.

    Moreover, in the CO2-only case, the ERF value is lower than the Fi value (as expected) but, somewhat counter-intuitively, higher than the Fa value. This suggests an unusual, positively signed feedback under fixed SST, but may still be plausible. However, in the WMGHG case, we now find that the ERF value is significantly higher than even the Fi value. This is just physically implausible.

    The Miller 2014 paper needs extensive revision.

    The Marvel 2015 paper needs to be retracted.

    I note with amusement that also caught in the friendly fire is Professor Marotzke’s assertion that Forster’s AF values can be considered extrinsic, since the AF values can be independently verified by fixed-SST experiments.

    Last but not least is the credibility and competence of the GISS team. Faced with a major accounting discrepancy in the summed forcings, one would have imagined that the first step would be to check the integrity of the inputs. Instead, the GISS team (there were 40 authors on Miller et al 2014) concluded that there must be a complex non-linear interaction of the forcing drivers. The house was dark, but no one thought to check whether the light was switched on. I am forced to conclude that a couple of hours of Nic’s time, looking at the house through a telescope, is rather more valuable than the collective wisdom of the tens of professional electricians inside the house, who have had several years to find out why the lights were out.
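
    For readers trying to follow the efficacy arithmetic in the comment above, here is a minimal sketch of the standard transient-efficacy definition, in generic notation rather than Marvel et al.’s exact formulation. The efficacy of a forcing agent x compares the GMST response per unit forcing for x with that for CO2 alone:

        E_x \;=\; \frac{\Delta T_x / \Delta F_x}{\Delta T_{\mathrm{CO_2}} / \Delta F_{\mathrm{CO_2}}}

    Which forcing measure is used for ΔF (iRF/Fi, Fa or ERF) changes the denominator, so an omitted component in the Historical iRF series, or an overstated F2×CO2, feeds directly through to the published efficacy values.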

    • Hoi Polloi
      Posted Mar 15, 2016 at 6:33 AM | Permalink

      I think we have here a marvel-lous example of Groupthink…

      https://en.m.wikipedia.org/wiki/Groupthink

    • Posted Mar 15, 2016 at 7:01 AM | Permalink

      The problem with E J Thribb is the sarcasm with which he presents his findings, in poetic form. (Hmm, another clue on the troubled identity of kim?)

      • kim
        Posted Mar 15, 2016 at 11:55 AM | Permalink

        Heh, I must draw my troubled bow dozens of times before finding a shaft so meant for the bullseye as P’s above.
        ==================

    • stevefitzpatrick
      Posted Mar 15, 2016 at 8:21 AM | Permalink

      Paul,
      Your incredulity stems from the notion that this is a paper that tries to advance understanding. It doesn’t. Low empirical values for TCR and ECS can’t be allowed a shred of credibility, so you have to publish whatever confused nonsense you can to reduce that credibility. The desired result of this paper is a reference in a prestigious journal, and so long as the paper is not withdrawn, that objective has been achieved. Marvel et al will be forever used by green advocates to claim that empirical estimates are biased low and must therefore be ignored when choosing public energy policies. The fact that Marvel has multiple substantive errors and draws erroneous conclusions is quite beside the point. What politician is going to claim that a Nature journal article is full of errors, or that the Russell ocean model has serious problems which lead to nutty GCM behavior?

      The pattern of attacking every empirical estimate which yields low sensitivity by showing that GCMs give high sensitivity has now been repeated so often that you can count on it continuing….. no matter how much GCMs diverge from reality.

  23. mpainter
    Posted Mar 15, 2016 at 8:53 AM | Permalink

    Miker613 says “I will recall that you, ATTP, told me a little while ago that there was no obligation for Marvel et al to respond to a non-peer-reviewed comment like Lewis’ – in a blog! – and I answered that this is a much better and faster way for science to progress and truth to be established. ”

    ####

    Well put. Nic’s postings and the response of Marvel et al illustrate how blog science has superseded peer review as the most efficacious method of examining the merits of a scientific study, at least in the field of climate science.
    Those, like Ken Rice, who deprecate “blog science” are merely taking rear-guard potshots.

    • Posted Mar 15, 2016 at 11:31 AM | Permalink

      I was thinking of commenting on this quite profound (and encouraging) point. Before I could, Miller et al came good with an immaculately worded corrigendum – this one leaving no doubt both as to the originator of the insight and their gratitude to him. Despite obvious resistance in the climate scene, ‘blog science’ is being taken very seriously indeed.

      • stevefitzpatrick
        Posted Mar 15, 2016 at 3:18 PM | Permalink

        Richard,
        If you go back to the exchanges with Ryan O’Donnell, Jeff ‘Id’ and others at RealClimate after Steig et al’s Nature cover article (and its claim of nearly uniform Antarctic warming), you will see the same kinds of dismissive, hostile comments from the RealClimate crew (at least in the comments they didn’t later ‘disappear’ to make themselves look better) as Nic and PaulK got in this case. Of course, Ryan and the others were absolutely correct: Steig et al had serious methodology problems which gave an incorrect (much too uniform) warming over the entire continent, essentially smearing peninsula warming over the whole continent. A recent publication shows that rising atmospheric CO2 above high-altitude East Antarctica is expected, on average, to cool the surface rather than warm it, which is consistent with the surface thermometer observations and exactly opposite to the Steig et al result. What led Steig et al astray was the desire to ‘explain’ discrepancies between projected GHG-driven warming in Antarctica and measured warming; confirmation bias did the rest. Too much happiness with the ‘desired’ result is always a danger.

        Still, it seems very hard for some to learn from experience.

        • Posted Mar 15, 2016 at 4:21 PM | Permalink

          Steve, I expect Nic will remember the paper by O’Donnell, Lewis, McIntyre and Condon, published in the Journal of Climate in December 2010, and its aftermath! I’m inclined to follow his gracious lead in his responses in this thread, especially after Miller et al’s excellent correction.

    • Hoi Polloi
      Posted Mar 15, 2016 at 1:37 PM | Permalink

      I wonder how “And There Was Ken” is feeling now Gavin Schmidt has acknowledged Nic Lewis’ findings. Do I see some egg residue on his face?

  24. Posted Mar 15, 2016 at 10:28 AM | Permalink

    I see from a response by Gavin Schmidt at RC that a corrigendum to Miller et al 2014: CMIP5 Historical Simulations (1850-2012) with GISS ModelE2, from which Marvel et al 2015 took their iRF forcing time series, has also been published. It starts off:

    “In the calculation of instantaneous radiative forcing, land use was omitted from the `all together’ case that combined the separate forcings.”

    and kindly acknowledges my role in this connection:

    “The authors thank Nic Lewis for bringing this error to our attention.”

    The fast response of the GISS authors of both papers in investigating this matter, submitting corrections and updating the data on the GISS website, is to be commended, as is the provision of extensive forcing and other data on the GISS website.

    • mpainter
      Posted Mar 15, 2016 at 11:07 AM | Permalink

      We are witness to a watershed event in climate science, as Gavin Schmidt openly embraces the “skeptical” approach to science in Nic’s link. Mark this comment by Gavin for future reference.
      It is Nic’s work that occasioned this comment by Schmidt. Congratulations, Nic. :-)

    • Posted Mar 15, 2016 at 11:25 AM | Permalink

      The Miller et al corrigendum is at page 38 of this pdf. Great work Nic.

      • Follow the Money
        Posted Mar 15, 2016 at 3:56 PM | Permalink

        The new Figure 4c map is different in small parts…and in one very large one.

        I thought previously that the LU was dropped at the last moment before the 2014 publication because someone looked at the 4c map, and/or the underlying data, saw what was happening over Malta and thought, “This cannot be true!” So someone quietly let it slip away…

        Looking at the original fig. 4c, the most positive LU forcing in the world is centered on Malta. Malta? The only other positive forcing colored is around New England, attributed in the original text to reforestation of former farmland. There is no explanation in the text for the Maltese Forcing.

        The Maltese Yellow has dimmed and shifted to the Tunisian-Algerian border area in the new fig. 4c, but Nebraska and South Dakota have chilled out compared to the former version. The original article attributes this glob of color to “contraction of the boreal forests in the western plains of Canada.” But the glob corresponds with the corn and wheat belt of the US and Canada, formerly natural grassland now replanted with grasses more nutritive for humans.

        Another thing: if deforestation increases albedo, why is so much depicted for little Central America, and barely any at all for Brazil? And why none at all for Africa?

        I question the reliability of the underlying LU data for cataloguing whatever it is purporting to catalogue.

        • mpainter
          Posted Mar 16, 2016 at 7:19 AM | Permalink

          “I question the reliability of the underlying LU data for cataloguing whatever it is purporting to catalogue.”

          Yes, we all question that. The modeler does not.

          Another question: is the modeler, by temperament and training, capable of questioning? This is not an ad hominem; rather, it goes to the heart of formulating public policy. Incompetence is the rule with climate modelers, a judgement based on results.

        • Follow the Money
          Posted Mar 17, 2016 at 2:13 PM | Permalink

          “Rather it goes to the heart of formulating public policy.”

          The heart of public policy in the climate matter is the monetization of human behavior. If the cause is not anthropogenic, it is worthless. Any related modeling will only be funded if its machinations can be used to produce a graph or other pictorial product which supports public argument that human activity is doing something injurious.

          I suspect 1/3 know this, 1/3 are just plodding along, and 1/3 are true believers who cannot follow the money.

    • stevefitzpatrick
      Posted Mar 15, 2016 at 1:42 PM | Permalink

      From Gavin’s response:

      “It’s useful to think of this as an example of Bayesian priors in action – given that 99% of the criticisms we hear about climate science are bogus or based on deep confusions about what modeling is for, scepticism is an appropriate first response, but because we are actually scientists, not shills, we are happy to correct real errors – sometimes they will matter, and sometimes they won’t.”

      I had to smile at that. Maybe Gavin should adjust his prior. The problems with GCMs (not the least of which is the ensemble’s clear divergence from reality) are pretty obvious to lots of people (scientists, not shills) who don’t happen to share Gavin’s priors about climate science and GCMs. The shills/scientists question will be answered by history, not via today’s blog sniping. But if I worked in climate science, and if I cared at all about history’s verdict on that question, I would be a lot more careful than Gavin about dismissing people who are skeptical of the diagnosed sensitivity from the GCM ensemble.

      • Posted Mar 15, 2016 at 1:55 PM | Permalink

        given that 99% of the criticisms we hear about climate science are bogus or based on deep confusions about what modeling is for

        There are some criticisms of little merit. But you hear what you want to hear.

        • mpainter
          Posted Mar 15, 2016 at 2:33 PM | Permalink

          Gavin indulges in hyperbole, obviously, so who takes him seriously? Now, if he declared that he ignored 99% of the criticism, that would be believed by all.

        • j ferguson
          Posted Mar 15, 2016 at 4:17 PM | Permalink

          I wonder what the percentage would be if duplicate criticisms were removed. In other words, could 99% of the discrete concerns really be bogus? Seems unlikely.

        • Posted Mar 15, 2016 at 4:28 PM | Permalink

          Even if they were, are 99% of Nic Lewis’s criticisms bogus or based on deep confusions? That’s the only stat that’s relevant. The pea moved to give maximum offence. But the one vindicated in the corrigendum only speaks positively. Respect.

        • Michael Jankowski
          Posted Mar 15, 2016 at 6:28 PM | Permalink

          “…given that 99% of the criticisms we hear about climate science are bogus or based on deep confusions about what modeling is for…”

          Who has ANY confusion on “what modeling is for?”

        • stevefitzpatrick
          Posted Mar 15, 2016 at 6:31 PM | Permalink

          “are 99% of Nic Lewis’s criticisms bogus or based on deep confusions?”

          Short answer: no.

          Long answer: Gavin’s 99% bogus claim is risible. There are lots of serious questions about GCMs……. oh say, like that they disagree with each other so much that many (most?) have to be grossly wrong.

        • mpainter
          Posted Mar 15, 2016 at 6:43 PM | Permalink

          The models are used to inform policy. Gavin should not be so dismissive of criticism of climate models. He is a NASA employee and should show more responsiveness. He is deserving of censure.

        • stevefitzpatrick
          Posted Mar 15, 2016 at 7:18 PM | Permalink

          Gavin is an employee of the voters, not just of NASA, even as he is altogether too dismissive of those voters. I think he should keep that in mind. I very much doubt he will.

        • Posted Mar 15, 2016 at 10:52 PM | Permalink

          “Gavin is an employee of the voters, not just of NASA….”

          He’s an employee of the people of The United States, even non-voters and those who take more from the govt. than they contribute.

        • Steven Mosher
          Posted Mar 15, 2016 at 11:24 PM | Permalink

          Gavin has the wrong prior.

          1. Nic didn’t criticize ‘climate science’. He made an observation and asked a question about a forcing.
          2. Nic has a history of finding and correcting errors in published science.

          But here is the interesting question: if Gavin’s prior was 99% before checking out Nic’s criticism, then one wants to ask him what his update is. And what about the other errors?

          Pretty soon, if Gavin keeps updating his prior about Nic, he might have to conclude that everything Nic says about climate science is likely true.
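
          Mosher’s Bayesian point can be made concrete with a toy update. The numbers below are purely illustrative assumptions, not anything Gavin Schmidt or anyone else has actually computed:

            # Toy Bayesian update of P(criticism is bogus); illustrative numbers only.
            p_bogus = 0.99                   # Gavin's stated prior for an arbitrary criticism
            p_confirmed_if_bogus = 0.01      # assumed: bogus criticisms rarely force a correction
            p_confirmed_if_valid = 0.90      # assumed: valid criticisms usually do

            # Each confirmed criticism (e.g. the F2xCO2 error, the omitted LU forcing)
            # updates the prior via Bayes' rule.
            for i in range(2):
                num = p_confirmed_if_bogus * p_bogus
                den = num + p_confirmed_if_valid * (1.0 - p_bogus)
                p_bogus = num / den
                print(f"after confirmed criticism {i + 1}: P(bogus) = {p_bogus:.3f}")

          Under these assumed likelihoods, one confirmed criticism already drags the 99% prior down to roughly even odds, and a second pushes it close to 1%, which is roughly Mosher’s point.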

        • Posted Mar 16, 2016 at 1:11 AM | Permalink

          Mosh:

          Nic didn’t criticize ‘climate science’. He made an observation and asked a question about a forcing.

          Yes, it was inappropriate for me to adopt Gavin’s wording. And Nic has made observations and asked questions with remarkable courtesy throughout (contra ATTP), given that the whole point of Marvel et al was to assert, without foundation, that his own work on sensitivity gave results that were biased low.

          Pretty soon, if Gavin keeps updating his prior about Nic, he might have to conclude that everything Nic says about climate science is likely true.

          97% would probably do.

      • Greg Goodman
        Posted Mar 15, 2016 at 10:28 PM | Permalink

        It’s a good question: what is modelling for?

        Tuning models principally to reproduce a short 30y segment of uncertain climate data and then extrapolating an exponential forcing 100y outside the data is not scientific.

        That, BTW, is the draconian RCP2.6 “scenario”!

        So what is modelling for? Forcing a political agenda on a pretence of science? Perhaps he who is not a shill can explain what his non-scientific use of modelling is for.

        • JamesG
          Posted Mar 17, 2016 at 6:38 AM | Permalink

          The models put a mathematical cloak around the pessimistic guesswork.

        • David A
          Posted May 21, 2016 at 1:37 PM | Permalink

          “So what is modeling for”?
          ============================

          It is clear that the vast majority of CAGW alarmist “harm” papers (increased extreme weather, hurricanes, tornadoes, sea level rise, frogs getting bigger, frogs getting smaller, penguins declining, etc.) are predicated on the model mean and on high TCR and ECS.

          Thus they cling to the models more than to the observations.

          What would happen to university CAGW funding if CO2 was determined to be net beneficial for at least the next 100 years?

  25. Posted Mar 15, 2016 at 11:28 PM | Permalink

    Nazarenko 2014, one of the refs of Marvel et al:

    Abstract:
    We examine the anthropogenically forced climate response for the 21st century representative concentration pathway (RCP) emission scenarios and their extensions for the period 2101–2500. The experiments were performed with ModelE2, a new version of the NASA Goddard Institute for Space Studies (GISS) ….

    Wow! It gets better. Now they want to train a model on 30y and extrapolate 500y outside the data.

    Remind me how well this model fits the existing data?

    • Posted Mar 15, 2016 at 11:38 PM | Permalink

      Nazarenko 2014,

      The credibility of the future simulations depends on how the climate model reproduces the preindustrial, historical, and current conditions, processes, and sensitivities.

      OK, so that’s settled then. Cred <= 0

      • Posted Mar 16, 2016 at 1:49 AM | Permalink

        As David Rose reported in the Mail on release of the AR5 Summary for Policymakers in September 2013:

        Richard Lindzen … said the IPCC had ‘truly sunk to a level of hilarious incoherence. They are proclaiming increased confidence in their models as the discrepancies between their models and observations increase.’

        There are many ways to try to show the discrepancies, but Steve McIntyre’s in January is one of the clearest. As far as I know there’s been no scholarly attention given – or at least published – to this piece of ‘blog science’. As yet.

  26. JamesG
    Posted Mar 16, 2016 at 3:48 AM | Permalink

    I see Schmidt persists in illegitimately using frequentist statistics on a handful of pre-selected model runs.

  27. Posted Mar 16, 2016 at 4:10 AM | Permalink

    Hey Gavin, this little guy appreciates your modicum of candor

  28. Posted Mar 19, 2016 at 10:05 AM | Permalink

    It’s a pity, though, that Nic’s calibration curves aren’t really inverted before constructing his “Jeffreys prior”.

    It makes his JP a bit “off”.
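
    For context, and without presuming to know exactly which inversion the commenter has in mind: a Jeffreys prior for a parameter θ is built from the Fisher information of the likelihood,

        \pi_J(\theta) \;\propto\; \sqrt{\det I(\theta)}, \qquad
        I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2 \ln L(y \mid \theta)}{\partial \theta^2}\right],

    and it transforms under a reparameterization \phi = g(\theta) via the Jacobian,

        \pi_J(\phi) \;=\; \pi_J\!\left(g^{-1}(\phi)\right)\,\left|\frac{d g^{-1}(\phi)}{d\phi}\right|.

    Whether a calibration relationship is used in its forward (parameter → observable) or inverted form therefore changes the Jacobian entering the construction; the comment does not spell out the specific defect being alleged.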

  29. Posted Mar 29, 2016 at 11:41 PM | Permalink

    Reblogged this on I Didn't Ask To Be a Blog.

8 Trackbacks

  1. […] Nic Lewis at ClimateAudit:  Marvel et al. issue a correction to their paper [link] […]

  2. […] https://climateaudit.org/2016/03/11/marvel-et-al-giss-did-omit-land-use-forcing/ […]

  3. […] at Climate Audit, Nic Lewis has outlined the latest developments in the saga of the Marvel et al paper, which claimed to have demonstrated […]

  4. […] https://climateaudit.org/2016/03/11/marvel-et-al-giss-did-omit-land-use-forcing/ […]

  5. […] Learning with every mistake? Reminder of one encouraging moment last month reported at Climate Audit: […]

  6. […] We keep hearing from alarmists on here and elsewhere that ‘uncertainty’ in estimates of climate sensitivity means that we cannot disregard the high end estimates generated from the GCMs, meaning, effectively, that current urgent CO2 emissions reductions are justified. This is despite the fact that empirically derived observationally based estimates are generally lower than those estimates emergent from the GCMs. Climate scientists have attempted to justify the higher estimates and downplay the lower estimates, most notably a recent attempt from Marvel, Schmidt et al – which fell flat on its face here and here. […]

  7. […] [16] https://climateaudit.org/2016/03/11/marvel-et-al-giss-did-omit-land-use-forcing/ […]

  8. […] [xvi] https://climateaudit.org/2016/03/11/marvel-et-al-giss-did-omit-land-use-forcing/ […]