## IOP: expecting consistency between models and observations is an “error”

The publisher of Environmental Research Letters today took the bizarre position that expecting consistency between models and observations is an “error”.

The publisher stated that the rejected Bengtsson manuscript (which, as I understand it, had discussed the important problem of the discrepancy between models and observations) had “contained errors”.

But what were the supposed “errors”? Bengtsson’s “error” appears to be the idea that models should be consistent with observations, an idea that the reviewer disputed.

The reviewer stated that IPCC ranges in AR4 and AR5 are “not directly comparable to observation based intervals”:

One cannot and should not simply interpret the IPCCs ranges for AR4 or 5 as confidence intervals or pdfs and hence they are not directly comparable to observation based intervals (as e.g. in Otto et al).

Later he re-iterated that “no consistency was to be expected in the first place”:

I have rated the potential impact in the field as high, but I have to emphasise that this would be a strongly negative impact, as it does not clarify anything but puts up the (false) claim of some big inconsistency, where no consistency was to be expected in the first place.

The reviewer summarized his concern in terms of media issues:

Summarising, the simplistic comparison of ranges from AR4, AR5, and Otto et al, combined with the statement they are inconsistent is less then helpful, actually it is harmful as it opens the door for oversimplified claims of “errors” and worse from the climate sceptics media side

Thus, the “error” (according to the publisher) seems to be nothing more than Bengtsson’s expectation that models be consistent with observations. Surely, even in climate science, this expectation cannot be seriously described as an “error”.

This is not to say that specific comparisons cannot be flawed: around the edges, one may take issue with whether the geographical coverage of the models corresponds to the geographical coverage of the observations (pace Cowtan-Way). But such craftsmanship issues hardly impugn the purpose of comparing models to observations – an exercise that was widely valued by climate scientists prior to the hiatus. It is ludicrous to suggest that the expectation of consistency is an “error” because “no consistency was to be expected in the first place”.
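For concreteness, the kind of model-observation consistency check at issue can be sketched in a few lines. The numbers below are made up for illustration only; they are not the actual CMIP5 or observational values discussed in any of the papers mentioned in this post:

```python
# Hypothetical consistency check between a model ensemble and an observed trend.
# All trend values are illustrative, NOT the actual CMIP5/observational numbers.

def ensemble_percentile(model_trends, observed):
    """Fraction of model trends at or below the observed trend."""
    below = sum(1 for t in model_trends if t <= observed)
    return below / len(model_trends)

model_trends = [0.21, 0.24, 0.19, 0.28, 0.23, 0.26, 0.22, 0.25, 0.20, 0.27]  # K/decade
observed = 0.11  # K/decade, hypothetical

p = ensemble_percentile(model_trends, observed)
# Two-sided check: flag inconsistency if the observation falls outside
# the central 90% of the ensemble (percentile below 0.05 or above 0.95).
inconsistent = p < 0.05 or p > 0.95
print(f"percentile = {p:.2f}, inconsistent = {inconsistent}")
```

Whether such a check is statistically well-posed (ensemble spread vs. a true confidence interval) is exactly what the reviewer disputes, but the exercise itself is the ordinary one of asking whether the data could plausibly have come from the models.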

The publisher also said that the article “did not provide a significant advancement in the field”. However, most academic articles do not constitute “significant advancements” in their field and they still get published.

However, when AR5 was released, I noted that there was negligible literature available to AR5 on the discrepancy between models and observations, leaving IPCC in a very awkward position when it came to assessment. I noted that journal gatekeeping had contributed to this dilemma – both Lucia and coauthors and Ross and I had had submissions about model-observation discrepancies rejected. The Bengtsson rejection seems entirely in keeping with these earlier rejections.

Given the failure of the publisher to show any “error” other than the expectation that models be consistent with observations, I think that readers are entirely justified in concluding that the article was rejected not because it “contained errors”, but for the reason stated in the reviewers’ summary: because it was perceived to be “harmful… and worse from the climate sceptics’ media side”.

1. Posted May 16, 2014 at 11:27 AM | Permalink

Hide the decline writ large. This time it’s flat thermometer data that refuses to conform to GCMs and, instead of truncating the graph, all the papers that dare to mention the divergence are cut off.

2. Posted May 16, 2014 at 11:27 AM | Permalink

“Media side” – the mask slips a little further.

Now papers are being judged for “spin” rather than science. Pressuring Bengtsson is ample proof that the warmsters have lost the thread.

It will be interesting if, within the profession, climate scientists rally round Bengtsson. The usual suspects are being snippy but that need not stop some of the eminent neutrals from supporting him.

• Steve Ta
Posted May 16, 2014 at 11:56 AM | Permalink

The silence of the poor lambs of climatology has been deafening so far. There’s time yet for at least one of the clan to show that it isn’t really a cult – or is this just wishful thinking?

3. motvikten
Posted May 16, 2014 at 11:35 AM | Permalink

It would be informative to learn why Bengtsson has changed his view on climate change since 2006.
I have posted the link, and the quotation, in other places but I get no answer. It is a dramatic change!

http://www.issibern.ch/~bengtsson/pdf/global_energy_problem.pdf

In 7:

“This combined with the need to raise energy production is expected to increase the concentration of carbon dioxide to approach a value twice that of the pre-industrial time towards the middle of the century. Such a high value is likely to give rise to irreversible changes in the climate of the Earth.

It seems that two major actions are needed and should be implemented with highest priority. These are carbon dioxide sequestration and increased investment in nuclear power, preferably using fast breeder reactors”

• Joshua
Posted May 16, 2014 at 11:48 AM | Permalink

==> “It seems that two major actions are needed and should be implemented with highest priority. These are carbon dioxide sequestration and increased investment in nuclear power, preferably using fast breeder reactors””

So, Bengtsson is one of those “climate scientist activists” who advocates specific policies?

Imagine the hundreds of comments we’re about to read from “skeptics” denouncing Bengtsson for his activism. “Skeptics” just hate “climate scientist activists”.

The onslaught will be starting in 3….2….1…

• Posted May 16, 2014 at 12:04 PM | Permalink

It would be informative to learn why Bengtsson has changed his view on climate change from 2006.

There’s a fairly simple hypothesis that would explain the change. In 2006 Bengtsson still had a career he wanted to protect. Eight years later he feels it’s more important that the science he’s given his life to be saved from disaster, through taking account of the adverse observations that have now persisted for twice as long as was true then.

• Joshua
Posted May 16, 2014 at 12:16 PM | Permalink

So, concern about activism depends on which policies are being advocated? It isn’t the activism itself that is the problem?

Is that the standard that you think is generally applied by “skeptics?” In an explicit manner?

I mean I do think that is the standard that is applied by “skeptics” generally – but in my experience they claim otherwise – they claim that it is “activism” itself that is undermining scientific practice, and that “activism” has no part in science.

When I point out to “skeptics” that there is selectivity in their “concern” about activism, I generally get denial and I get called an assortment of names.

So let me ask you, do you support Bengtsson’s activist stance w/r/t climate change policies?

==> “It seems that two major actions are needed and should be implemented with highest priority. These are carbon dioxide sequestration and increased investment in nuclear power, preferably using fast breeder reactors”

If so – do you support “activism” among climate scientists generally, or do you only determine your support/criticism for “activism” among climate scientists contingent upon whether they agree with your interpretation of the science?

• motvikten
Posted May 16, 2014 at 12:32 PM | Permalink

That is one explanation.

Another is that he wants to stop renewable energy from getting subsidies. According to Bengtsson (in Swedish media) there is no immediate problem with climate change.

He is still very pro-nuclear in the Swedish energy debate, and those in favor of nuclear justify it by its low carbon emissions.

Bengtsson is said to be responsible for the Swedish Royal Academy of Sciences’ view on climate change, and they say the same as IPCC.

I can understand if you find it crazy, but then you do not understand Sweden!

• pottereaton
Posted May 16, 2014 at 12:38 PM | Permalink

Another explanation is that the data since 2006 does not necessarily confirm that steady increases in CO2 will cause runaway temperatures and he adjusted his views.

In 2006 he saw some cause for alarm. The data since then has given him pause.

My guess is it has nothing to do with activism or ulterior motives.

• pottereaton
Posted May 16, 2014 at 12:41 PM | Permalink

Upon re-reading I see that Richard Drake had said essentially the same thing.

• Joshua
Posted May 16, 2014 at 12:43 PM | Permalink

Apologies for misquoting Roy. Here it is:

–> “Politicians can fix this . . . by telling the funding agencies that some percentage (say, 20%) of their climate research funding must go toward studying the 800 lb gorilla in the room: Natural sources of climate change.”

• Skiphil
Posted May 16, 2014 at 3:10 PM | Permalink

Joshua,

Glad to see that you remain obsessively concerned with any possible inconsistencies of views among various kinds of “skeptics”…..

I am far more interested in seeing some consistency and non-hypocrisy among mainstream “consensus” advocates about scientists who marry advocacy and science to a far far greater extent than could ever be said of Bengsston. For one example:

http://judithcurry.com/2014/05/16/reflections-on-bengtsson-and-the-gwpf/#comment-556587

• bmcburney
Posted May 16, 2014 at 5:18 PM | Permalink

Joshua,

On this evidence, Bengtsson was an “activist” in 2006. However, at least some of his opinions have evidently changed since then.

To support your charge of inconsistency, don’t you need to show us that, although some of his other convictions have changed, Bengtsson remains an activist? In the absence of some examples of “skeptic activism” on Bengtsson’s part, we don’t have much to be inconsistent about.

Also, “skeptic activism” is an interesting concept. If you find some, let us know. I can’t promise to be outraged but I would be willing to consider it.

• Paul
Posted May 16, 2014 at 6:17 PM | Permalink

When he was an activist based on the settled science, in ’06, maybe. But now that he is realizing it’s not settled, and he’s leaving behind his former friends who are hockey-sticking him, well, we’re a forgiving bunch. Besides, it will leave us more time to address purposefully obtuse goofs like you.

• Posted May 16, 2014 at 11:59 PM | Permalink

Many of us who do not agree with the consensus have no problems with advocacy or even activism. James Hansen comes to mind. However, advocacy and activism in public fora are not to be equated with acting behind the scenes to scuttle careers, prevent publication and generally act like the people Joshua admires.

• Posted May 17, 2014 at 12:37 AM | Permalink

Exactement.

• Nicholas
Posted May 17, 2014 at 3:30 AM | Permalink

I don’t like activism from scientists of any bent. What’s your point?

• Posted May 17, 2014 at 5:04 AM | Permalink

I can’t speak for Tom Fuller but here are some nice sentiments from yesterday, as Bishop Hill puts it:

It is regrettable that perceived political stances on the climate issue are apparently so affecting academic activity. The Grantham Institute at Imperial has always opposed such behaviour, believing that scientific progress requires an open society. We try to engage with a wide range of figures, some with radically different views on climate change.

The key words for me are ‘scientific progress requires an open society’. As long as we know the activist stance of a scientist we can allow for that in judging his or her public statements. In climate it’s the deadly work that has been done in the dark – a small slice of which was exposed by the Climategate whistleblower and one example of which has now been made known through the courage of Lennart Bengtsson – that would soon destroy science. The open society is Karl Popper’s phrase. Other philosophers of science who went through the 30s and 40s, like Michael Polanyi, emphasized the deep impact such closed behaviour has on the scientific endeavour. It’s good to see someone from the Grantham Institute taking this view of this incident.

• JEM
Posted May 16, 2014 at 12:07 PM | Permalink

It is entirely possible that he still personally holds such opinions. It does not appear that he’s been out marching arm-in-arm with a bunch of politicians and activists promoting his opinions as The Only Right Way, so that’s perfectly reasonable.

It is also possible that a little bit of intellectual curiosity, a little bit of common sense, and some experience in the field led him to a little discomfort over the behavior of those whose work he may previously have trusted, so he decided to do some poking around himself and he found himself bludgeoned for his efforts.

The very public beating he took from the climate ‘science’ industry may have forced him to change how he presents his work, but I hardly think it’s done much for his personal opinion of those who did the pummeling, or their work.

• J Lindstrom
Posted May 18, 2014 at 5:36 AM | Permalink

I think your comment is probably closest to the truth. Bengtsson suddenly appeared at a Swedish skeptic blog, “Klimatupplysningen”, a couple of years ago, commenting, correcting and taking full part in the discussion. You could tell that he held the usual views on skeptics in the beginning. Gradually, I believe, both his views on skeptics and on the science have changed. He has mentioned several times that he changes his opinion as the science changes. A true scientist, in other words. Bengtsson embraces the AGW hypothesis and has still not backed down from it, but he thinks that other things are more important for society. I would guess his views are close to those of Judith Curry (on the science).

Bengtsson’s stance is based on pure science, and there is no strong nuclear lobby in Sweden. Swedish ASEA’s nuclear program was dismantled in the 80s and a ban on any development of Swedish reactors was enforced by law! Sweden was then taken over by green Talibans. (Usually the same people who actually believed that the FNL in South Vietnam was a people’s uprising against the imperialistic US. It was not; it was regular forces from the north acting as guerrillas.) Unfortunately, the educated middle class fell into this brain-washed category, and now even the traditionally right-wing parties embrace the green agenda. Reality has prevented a total industrial collapse, because the winters are long and cold and we need our electricity regardless of ideology. The reactors still stand, and to get around the silly ban on building more reactors, the reactors themselves have been modified over time, by the equivalent of at least 3–4 more reactors of the original type. “Pragmatic”, the Swedes say. Hypocrisy, others say. In this political environment Bengtsson tries to bring some common sense. Almost anything and everything in Sweden is justified by “saving the climate”, which in the end, of course, will be disastrous for industry and wealth.
You should also be aware that Swedish bloggers critical of Bengtsson usually emanate from, or are associated with, the Swedish blog “Uppsalainitiativet”. UI was created to be a Swedish SkS-blog to combat “Klimatupplysningen”. However, there are no climate scientists on the blog, and some of the articles are copy/paste from SkS but usually give the impression that the authors don’t have a clue what they are writing about. That can also be seen from the sometimes hilarious comment section. One of the most active persons is a statistician (Olle Häggström). Unfortunately, OH shows none of the qualities of the owner of Climate Audit; OH engages only in personal attacks in which he fabricates the usual motives for being a skeptic. Nobody really listens to him outside UI, though. I hope this gives a better picture of the “climate” in Sweden, Bengtsson’s home country.

• H.Fretheim
Posted May 21, 2014 at 10:50 PM | Permalink

In 2006 it was still possible to treat the “pause” in warming as just an anomaly. By 2012 it was obvious – as the so-called pause went into its 15th year – that the world had stopped warming for a period almost as long as the 20-year warming period that had been the only real observed basis for suggesting that the models might have any validity at all. We are now at over 17 and a half years – despite a 25% increase in the amount of CO2 added by mankind to the atmosphere. Clearly the models do not reflect reality.

John Maynard Keynes, the great (and definitely not right wing) economist once famously replied to essentially the same charge by saying “When the facts change, I change my opinion. What do you do, Sir?” That is the fundamental hallmark of a properly scientific attitude, rather than a dogmatic one. Professor Bengtsson has simply demonstrated a proper scientific attitude. This is how science is supposed to work.

4. AJ
Posted May 16, 2014 at 11:39 AM | Permalink

So is Environmental Research Letters indirectly stating that models are not science? Roger Pielke Sr. views models as hypotheses, which I naively assumed would be falsifiable by comparing them to observations. I’m thinking that if they can’t be compared, then they can’t be falsified, so in turn they can’t be science.

Am I missing something?

• Chuck L
Posted May 16, 2014 at 11:52 AM | Permalink

AJ nails it. This is conceptually equivalent to blaming heat/cold, floods/droughts, more/less hurricanes, and more/less tornadoes on climate change. This is also equivalent to “Heads I win, tails you lose.” What it is not equivalent to, is science.

• JEM
Posted May 16, 2014 at 11:58 AM | Permalink

Yes. You’re missing the research grants you’d otherwise have if you hadn’t voiced such an opinion.

5. snarkmania
Posted May 16, 2014 at 11:43 AM | Permalink

This is a widespread and unacceptable practice. Locally here in NM I’m struggling to get colleagues to draw a line in the sand regarding this growing practice of having uncalibrated models trump actual data. This practice is incompatible with best hydrologic practices, which of course directly relate to climate. Recently a well-known climate scientist, Dr. David Gutzler, in authoring a NM State-funded report, applied some creative logic with regard to streamflows in New Mexico that weren’t complying with the IPCC’s conclusions. Note that I inserted the portion in parentheses for proper context:

“How should we assess future projections in streamflow, considering that the (IPCC-based) simulated streamflow over the past 60 years is changing with the wrong sign relative to observations? There are two distinct possible interpretations. First, the simulated future projections could be considered unreliable, given that the models generate the wrong answer (decreasing flow) for retrospective streamflow trend over the half-century period of time when we know the right answer (increasing flow). Alternatively, the simulated future trends could represent a reasonable long-term answer, keeping in mind that the long-term trend is subject to considerable uncertainty on decadal time scales.”

Dr. Gutzler went on to promote the IPCC line, thereby rendering a potentially useful report totally useless (if not harmful) to water planners in our area. At least he acknowledged the divergence of the model from reality.

6. Dave L.
Posted May 16, 2014 at 11:48 AM | Permalink

Quoting from the Referee Report:

“The IPCC estimates of different quantities are not based on single data sources, nor on a fixed set of models, but by construction are expert based assessments based on a multitude of sources.”

Perhaps I am reading too much into ‘expert based assessments based on a multitude of sources’, but is this not a roundabout way of admitting that rather than using real observational data, we manufacture our own data via simulations, models, adjustments, etc.?

• Third Party
Posted May 16, 2014 at 12:09 PM | Permalink

SWAG

7. Posted May 16, 2014 at 11:58 AM | Permalink

I have no idea if Bengtsson et al. is a good paper, not having seen it. But the topic itself is an important one, and notwithstanding those attempts at gatekeeping mentioned above, there’s no stopping the flow at this point because the model/observational discrepancies are so large and growing. A few recent examples in print include:

– Fyfe, J.C., N.P. Gillett and F.W. Zwiers, 2013: Overestimated global warming over the past 20 years. Nature Climate Change, 3, 767-769, doi:10.1038/nclimate1972
– Swanson, K.L., 2013: Emerging selection bias in large-scale climate change simulations. Geophysical Research Letters, 40, DOI: 10.1002/grl.50562.
– McKitrick, Ross R. and Lise Tole (2012) Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods. Climate Dynamics DOI 10.1007/s00382-012-1418-9.
– Fildes, Robert and Nikolaos Kourentzes (2011) “Validation and Forecasting Accuracy in Models of Climate Change International Journal of Forecasting 27 968-995.
– Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis & N. Mamassis (2010). “A comparison of local and aggregated climate model outputs with observed data.” Hydrological Sciences Journal, 55(7) 2010.
– McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290

And I know of another one nearly accepted that continues the theme. It may be that Bengtsson et al. had some flaws, though I agree that the reviewer didn’t point to any. Instead the reviewer tries to argue that models and observations are not meant to be compared, and the editor swallowed this nonsensical argument, no doubt happy for a straw to clutch at.

But nobody should be surprised that ERL has the slant that it does: this is a journal with Peter Gleick, Stefan Rahmstorf and Myles Allen on its editorial board:
http://iopscience.iop.org/1748-9326/page/Editorial%20Board

You can’t advertise a hard-line editorial stance any better than that. Well, maybe they could: they list as their #1 Highlight publication of 2013… Cook, Nuccitelli et al.
http://iopscience.iop.org/1748-9326/page/Highlights-of-2013

Strangely, they really seem to be objecting that the Times had the nerve to run the story after all the work that’s been done to convince the press about the supposed dangers of “false balance”:

With current debate around the dangers of providing a false sense of ‘balance’ on a topic as societally important as climate change, we’re quite astonished that The Times has taken the decision to put such a non-story on its front page.

Evidently they too subscribe to that editorial position: don’t print anything that might give the impression there’s actually a range of scientific views out there.

• Spence_UK
Posted May 16, 2014 at 1:05 PM | Permalink

It is worth noting the sequence of events surrounding the fifth paper on Ross’ list here. Not only were there problems getting it published (I believe the paper, or its predecessor, was rejected at least once by another journal); when Professor Kundzewicz bravely published the paper in HSJ, he felt the need to write an editorial explaining that these were legitimate concerns from hydrologists and not “climate change critics”.

Despite Dr Kundzewicz’s best efforts, David Huard wrote a response describing the paper as a “black eye” for the journal. Note that once again the upset climate scientists, rather than engaging in scientific disagreement, chose to put the damage to the journal’s reputation in the very title of their paper – almost like an organised criminal turning up at a shop and saying “nice place you have here, shame if anything happened to it”.

Luckily as a response to the original article, Dr. Koutsoyiannis had a right to reply and gave Dr. Huard’s paper the response it deserved (links below).

Links to the original paper and Dr Kundzewicz’s editorial are here: https://itia.ntua.gr/en/docinfo/978/

To Dr Huard’s article and Dr Koutsoyiannis response: https://itia.ntua.gr/en/docinfo/1140/

All of this shows the strange response the climate science community has to papers comparing models to observations.

• AJ
Posted May 16, 2014 at 1:21 PM | Permalink

Following Ross’s link to the 2013 highlights, I see this paper:

The upper end of climate model temperature projections is inconsistent with past warming
Peter Stott, Peter Good, Gareth Jones, Nathan Gillett and Ed Hawkins
2013 Environ. Res. Lett. 8 014024

http://iopscience.iop.org/1748-9326/8/1/014024/article

Abstract
… Here, for the first time, we combine multiple climate models into a single synthesized estimate of future warming rates consistent with past temperature changes. We show that the observed evolution of near-surface temperatures appears to indicate lower ranges (5–95%) for warming (0.35–0.82 K and 0.45–0.93 K by the 2020s (2020–9) relative to 1986–2005 under the RCP4.5 and 8.5 scenarios respectively) than the equivalent ranges projected by the CMIP5 climate models (0.48–1.00 K and 0.51–1.16 K respectively). Our results indicate that for each RCP the upper end of the range of CMIP5 climate model projections is inconsistent with past warming.
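The inconsistency claim in this abstract comes down to simple interval arithmetic. A trivial sketch, using only the 5–95% ranges quoted above (the numbers are taken directly from the abstract; the code itself is just an illustration):

```python
# 5-95% warming ranges (K) for the 2020s, as quoted in the Stott et al. abstract:
# observation-constrained ("obs") vs CMIP5 model projections ("cmip5").
ranges = {
    "RCP4.5": {"obs": (0.35, 0.82), "cmip5": (0.48, 1.00)},
    "RCP8.5": {"obs": (0.45, 0.93), "cmip5": (0.51, 1.16)},
}

for scenario, r in ranges.items():
    obs_lo, obs_hi = r["obs"]
    mod_lo, mod_hi = r["cmip5"]
    # How far the model upper bound overshoots the observation-constrained one.
    upper_excess = mod_hi - obs_hi
    print(f"{scenario}: model upper bound exceeds obs-constrained bound by {upper_excess:.2f} K")
```

The overshoot at the upper end is what the paper's title calls "inconsistent with past warming"; the lower portions of the ranges overlap.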

• Posted May 16, 2014 at 1:51 PM | Permalink

Ed Hawkins (we have met, nice guy) is at Reading University – I’ll look forward to his thoughts on how the Prof has been treated by US scientists.

• RJC
Posted May 16, 2014 at 2:12 PM | Permalink

• AJ
Posted May 16, 2014 at 7:10 PM | Permalink

That’s certainly one way of looking at it. Another way is that a validation paper is OK one year and not the next. Perhaps it depends on the fancy of different reviewers? Arbitrary, n’est-ce pas?

• Skiphil
Posted May 16, 2014 at 7:38 PM | Permalink

from the Abstract it sounds like the analysis is only limiting the “upper end” of CMIP5 projections

that would not undermine the “consensus” view so long as projections below that “upper end” are not affected

sure, the most fervent Alarmists might not like to see the highest projections removed, but it need not affect the “consensus” view

• John Ritson
Posted May 16, 2014 at 8:43 PM | Permalink

That is a very pertinent find. Also from the abstract of Stott et al. comes this “it is important to determine whether model projections are consistent with temperature changes already observed”. Which seems to be in the proper spirit of inquiry.

So ERL was in favour of comparing models versus outcomes in 2013 but now they are against it. This will take some explaining.

• Skiphil
Posted May 19, 2014 at 11:14 PM | Permalink

If I may make a couple of broad observations about how many scientific journals, including ERL, seem to operate:

There seems to be (often) far too great an emphasis upon innovation and originality in determining what gets published.

While there are evident appeals to original and/or innovative papers which (may seem to) help to advance a field,

the very first requirement for any journal and any work of science should be ACCURACY and PRECISION. Congruence with known EMPIRICAL data should come before all attempts at innovation and originality.

Thus, a paper comparing empirical observations with models and/or hypotheses and/or theory should, in general, be regarded as a potentially valuable contribution. Whether a paper finds a good fit or a bad fit between model/theory and data (or anything debatable in between), this kind of comparison needs to be regarded as valuable and worthy of the space in any journal claiming to be scientific.

Similarly, any paper or COMMENT(s) providing criticism and corrections for a paper already published should be considered MORE important not less, if the corrections or updates are accurate.

How can editors and reviewers be brought to see that maintaining an accurate scientific record is the FIRST responsibility of any journal?

…. and if a journal has already published a certain paper then they have the highest responsibility to bring into the published record any corrections, controversies, or updates about said paper.

Steve: I agree 1000%. Journals are reluctant to publish comments and, in some cases (GRL), have gotten worse. For policy purposes, it’s much better to have “engineering quality” information – but this does not fit into “innovation” criteria.

8. Posted May 16, 2014 at 12:01 PM | Permalink

How many published papers should now be rejected, as their use of models is deemed incorrect?

Furthermore, how on Earth shall one interpret models for policymaking purposes if it’s so incredibly difficult to go from model to observation?

9. Craig Loehle
Posted May 16, 2014 at 12:03 PM | Permalink

This brings to mind the long debate about the tropical mid-troposphere hot spot, where defenders of the faith objected strenuously to comparing the projections to the data (directing ad homs against Douglass, for example), and the name-calling, until about 6 months ago, whenever anyone pointed to the “pause”.
The models are clearly held in higher regard than any mere “data”.

10. Posted May 16, 2014 at 12:05 PM | Permalink

The Editorial Director said the paper “contained errors” but as you say the published review does not justify this statement.
The paper will have had two or three reviewers, and it’s possible that other reviewers may have said there were errors. IOP say they hope to publish the other reviews, but they would have to get agreement from the reviewers.

I like the last bit of the review, which turned out to be an accurate prediction –
“I have rated the potential impact in the field as high, but I have to emphasise that this would be a strongly negative impact”.

• Bob K.
Posted May 16, 2014 at 12:57 PM | Permalink

Yes, those terrible errors. I am grateful that when I read a paper from a scientific journal, 100 percent of the errors have been filtered out 100 percent of the time through the magical process of peer review.

You have to hand it to the Editorial Director for knowing which side of her political bread is buttered.

• RJC
Posted May 16, 2014 at 2:10 PM | Permalink

“The paper does not make any significant attempt at explaining or understanding the differences, it rather puts out a very simplistic negative message giving at least the implicit impression of “errors” being made within and between these assessments, e.g. by emphasising the overlap of authors on two of the three studies.

What a paper with this message should have done instead is recognising and explaining a series of “reasons” and “causes” for the differences.”

From the report directly. Makes perfect sense to me, why publish something that doesn’t attempt to improve understanding in any way? If you think there’s a problem with the measurements, tell us what they are, don’t just say “there’s a problem but I don’t care what it is”

“The differences in the forcing estimates used e.g. between Otto et al 2013 and AR5 are not some “unexplainable change of mind of the same group of authors” but are following different two different logics, and also two different (if only slightly) methods of compiling aggregate uncertainties relative to the reference period” – the reviewers were able to debunk the paper completely, so why should they be bullied into publishing it?

• Steve McIntyre
Posted May 16, 2014 at 2:47 PM | Permalink

RJC says:

What a paper with this message should have done instead is recognising and explaining a series of “reasons” and “causes” for the differences.”

It would be nice if it provided such an explanation. However, pointing out discrepancies is important and relevant, especially if the articles are being relied upon for policy.

Several years ago, Ross and I submitted an article on the widely cited Santer et al 2008, pointing out that they had used obsolete data and that key results did not hold up using up-to-date data. The article wasn’t “original” and didn’t “explain” the discrepancy, but the discrepancy deserved to be pointed out in its own right. The invalid results of Santer et al continued to be cited in assessment studies. In the EPA Denial of Reconsideration Petitions, they stated that critics had had ample time to publish a rebuttal to Santer et al but had failed to do so. Subsequently, Ross managed to lead-author McKitrick et al 2010 on the same material and, by using an “innovative” statistical methodology, miraculously managed to run the gauntlet of adverse reviewers. If someone is contesting an article that is being relied upon for policy purposes, the academic criterion of “originality” ought to be irrelevant.

But our first article ought to have been accepted.

• Posted May 16, 2014 at 6:43 PM | Permalink

If an article is to discuss problems with existing models, I don’t see why explaining why should be a requirement. It is nice to know everything, but that is aside from the point of identifying a fundamental problem. In the case of climate models, they contain numerous subtle differences and the causes cannot be reasonably dissected in a single paper. Gavin Schmidt’s latest boondoggle of a paper proved that out rather nicely.

The Santer case is a great example where nothing novel needed presenting as the math was fine. The correction simply replicated the originals and extended the process using all of the data. Unfortunately it reversed the even then silly conclusion that models matched observation. There was no reasonable point on which the rebuttal should have been rejected.

• pottereaton
Posted May 16, 2014 at 8:55 PM | Permalink

Contemporaneous post by Steve on Santer et al 2008. The discussion afterward is also worth a read.

https://climateaudit.org/2008/10/16/santer-et-al-2008/

Then a follow-up “worse than we thought” post a year later:

https://climateaudit.org/2009/05/28/santer-et-al-2008-worse-than-we-thought/

• AntonyIndia
Posted May 16, 2014 at 10:25 PM | Permalink

I’m trying to get Gavin Schmidt’s view on this matter at RC; if it survives the moderation it would be after this one: http://www.realclimate.org/index.php/archives/2014/05/unforced-variations-may-2014/comment-page-4/#comment-532773

• AntonyIndia
Posted May 24, 2014 at 12:55 AM | Permalink

Gavin Schmidt did reply eventually: “The CMIP5 model histogram is not a probability density function, even though it is often casually used as such (even here). So the reviewer is technically correct that a pdf created from a simpler method will not match the MME spread even if one of the models was perfect, so in that sense, a ‘nice fit’ is not to be expected. However, we do expect (ideally) the real world to look like it could be drawn from the same distribution as the models we have and if it doesn’t, there is an interesting discrepancy to deal with. The solutions to that discrepancy can be complex – associated with the comparison itself (is it apples to apples), biases in the observations (Arctic coverage for instance) or issues with the model forcings or physics. See my paper in Nat. Geo. for more discussion of that. – gavin” http://www.realclimate.org/index.php/archives/2014/05/unforced-variations-may-2014/comment-page-4/#comment-532908
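Gavin’s “could the real world be drawn from the same distribution as the models?” test can be sketched as a simple percentile check. This is a minimal illustration with made-up trend numbers, not actual CMIP5 output, and the band limits are arbitrary choices:

```python
# Sketch of "could the observation have been drawn from the model
# ensemble?" -- illustrative numbers only, not real CMIP5 output.

def ensemble_percentile(ensemble, observed):
    """Fraction of ensemble members at or below the observed value."""
    return sum(1 for m in ensemble if m <= observed) / len(ensemble)

def outside_spread(ensemble, observed, lo=0.05, hi=0.95):
    """Crude consistency check: does the observation fall outside the
    central (lo, hi) band of the ensemble histogram?  Note the ensemble
    histogram is not a true pdf (the reviewer's point), so this is a
    heuristic screen, not a formal significance test."""
    p = ensemble_percentile(ensemble, observed)
    return p < lo or p > hi

# Hypothetical warming-trend values (degC/decade) for 20 model runs:
runs = [0.18, 0.21, 0.25, 0.22, 0.19, 0.27, 0.24, 0.20, 0.23, 0.26,
        0.17, 0.22, 0.25, 0.21, 0.28, 0.19, 0.24, 0.23, 0.20, 0.26]
obs = 0.11  # hypothetical observed trend over the same period

print(ensemble_percentile(runs, obs))  # 0.0 -- below every run
print(outside_spread(runs, obs))       # True -- an apparent discrepancy
```

A failed check like this does not say *why* the discrepancy exists (comparison method, observational bias, forcings, physics), only that one is worth investigating, which is exactly the distinction Gavin draws.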

11. Joe
Posted May 16, 2014 at 12:15 PM | Permalink

Did you forget this part:

A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.

What good is a paper that points out inconsistencies that were already known to begin with?

12. Bruce Cunningham
Posted May 16, 2014 at 12:30 PM | Permalink

I think this shows that many alarmists have viewed models as the final word on what actually will occur in the future, and that they cannot accept that this might not be true. The fact that observations have never agreed with model predictions has been pooh-poohed away as not important for so long and so often, that they have come to believe that it is a scientifically valid way of looking at it. Ross McKitrick observed this in a comment previously on Climate Audit. I even commented on it myself at the time. It has actually almost been official policy, as is stated in the comment referenced.

13. Joshua
Posted May 16, 2014 at 12:39 PM | Permalink

Steve –

One would think that in the name of “full cost auditing” you would link to the IOP’s statement on the matter – rather than, as you appear to have done in the original post, selectively quoting from just one review?

http://ioppublishing.org/newsDetails/statement-from-iop-publishing-on-story-in-the-times

Maybe I missed it, where you provided the link?

One might think that you’re playing a tribal game here, Steve, rather than really engaging in a thorough audit of the matter. Wouldn’t a thorough audit require that you present opposing perspectives thoroughly, and in good faith, before you write anything at all? Do you think that by writing a post where you selectively quote from the one review, you might be seen as “cleansing”?

Oh, and w/r/t cleansing – how do you feel about the term “denier”? Do you think that terminology that might be read as a Holocaust reference has a proper place in the discussion?

Steve: I linked to the statement in the very first sentence. You are free to apologize for overlooking this. I also provided extensive quotations from the statement itself, a courtesy almost never extended to me by critics.

• TerryMN
Posted May 16, 2014 at 12:49 PM | Permalink

You did miss it. It was buried in the first sentence, a full 8 words into the post, though.

• Joshua
Posted May 16, 2014 at 1:54 PM | Permalink

So he did link it. My bad. So then what explains this statement from Steve?:

–> “Thus, the “error” (according to the publisher) seems to be nothing more than Bengtsson’s expectation that models be consistent with observations. Surely, even in climate science, this expectation cannot be seriously described as an “error”. ”

According to the publisher, the “error” seems to be nothing more than Bengtsson’s expectation? How could he have read this:

–> “In the same way that one cannot expect a nice fit between observational studies and the CMIP5 models.

A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.” < —

And reached such a conclusion about what it "seems" to the publisher?

'Cause I figured his link must have gone to something other than the statement they actually made.

• TerryMN
Posted May 16, 2014 at 2:57 PM | Permalink

Yes, you completely missed it. And then, with a cursory “my bad” you go on to make other accusations.

One might think that you’re playing a tribal game here, Joshua, more interested in point scoring than in asking genuine questions or engaging in any real discussion. Especially combined with your other comments on Judy Curry’s blog disparaging “Steve Mac.” Don’t be surprised if others don’t regard your posts as anything other than partisan trolling.

• Steven Mosher
Posted May 16, 2014 at 3:54 PM | Permalink

Joshua.

How many times do I have to tell you to read harder.

Now, remember when you chastised Peter Lang for his mistake?

remember that his apology was not enough?

So, Joshua, please explain how you could make this mistake.

• Joshua
Posted May 16, 2014 at 4:40 PM | Permalink

–> “Yes, you completely missed it. And then, with a cursory “my bad” you go on to make other accusations.”

What would suffice as penance, Terry? Self-flagellation?

Is there any way that I can atone enough that we can discuss Steve’s misrepresentation? Or maybe you can explain how he legitimately goes from this:

–> “A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.”

To this:

–> “Thus, the “error” (according to the publisher) seems to be nothing more than Bengtsson’s expectation that models be consistent with observations.”

• Posted May 17, 2014 at 1:23 AM | Permalink

Re: Joshua (May 16 13:54), Joshua:

I thought the situation was “impossible to parody” — yet here we are with you making almost a Shakespearean effort of a parody within a parody. You made my day.

• David Jay
Posted May 17, 2014 at 7:46 PM | Permalink

I’m in for Joshua’s self-flagellation

• pottereaton
Posted May 16, 2014 at 12:51 PM | Permalink

Oh, and w/r/t cleansing – how do you feel about the term “denier”? Do you think that terminology that might be read as a Holocaust reference has a proper place in the discussion?

When Steve starts calling the “consensus” scientists responsible for this debacle “ethnic cleansers” in his posts you will have a valid analogy. Until then, you don’t. “Denier” has a far more specific connotation. If you had asked me before this post what “cleanser” referred to, I would have said a product used to clean sinks. Or a skincare product.

• Joshua
Posted May 16, 2014 at 5:09 PM | Permalink

Steve –

–> “I linked to the statement in the very first sentence. You are free to apologize for overlooking this. I also provided extensive quotations from the statement itself, a courtesy almost never extended to me by critics.”

I apologize.

–> “A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.”

And conclude this:

–> “Thus, the “error” (according to the publisher) seems to be nothing more than Bengtsson’s expectation that models be consistent with observations.”

What they are describing as an error is a failure to write a “careful, constructive, and comprehensive analysis of…how they come to be different…” not the discrepancy itself.

Disagree or not as to whether Bengtsson wrote a “careful, constructive, and comprehensive analysis” of the differences, that’s certainly scientific. But misrepresenting what they said in the way that you did is not scientific. Audit thyself.

• Nullius in Verba
Posted May 16, 2014 at 7:00 PM | Permalink

“What they are describing as an error is a failure to write a “careful, constructive, and comprehensive analysis of…how they come to be different…” not the discrepancy itself.”

That’s not a description of an error. So if you’re right, and that’s what they meant, that’s even worse.

However, I suspect you’re misreading again. At the least, we shouldn’t assume the journal was making such a basic error of logic without something a bit less ambiguous. It wouldn’t be fair to blame them for a silly error it’s not clear they actually made.

• Joshua
Posted May 16, 2014 at 11:15 PM | Permalink

==> “That’s not a description of an error. ”

The error was in failing to do due scientific diligence. If you want to hang your hat on an argument that failure to conduct due scientific diligence is not an error when you’re writing a scientific analysis, have at it. More power to you.

• Posted May 16, 2014 at 11:41 PM | Permalink

The error was in failing to do due scientific diligence.

Oh come on. To explain the difference with the GCMs would require complete understanding of the code in those baroque contraptions and how every part behaves under unknown starting conditions. What you see is not lack of scientific diligence, it’s lack of time before the heat death of the Sun.

• Nullius in Verba
Posted May 17, 2014 at 4:00 AM | Permalink

“The error was in failing to do due scientific diligence.”

That’s rot. Due scientific diligence is making sure that what you do say is correct and complete enough not to mislead, to enable others to know exactly what you did to get your result so they can replicate and criticise it.

He needs (for example) to explain where he got the data from, what he did to it, and how he interpreted it, to get the results he claims indicate a discrepancy, and he needs to explain why the numbers he shows mean there is an actual discrepancy. There’s no requirement to explain why it happens, although that would certainly be a useful contribution if you can do it, and likely the next step in the investigation.

It’s basic scientific method. You propose a theory (the models). You compare the predictions of the theory to observation. If the observations don’t match, the theory is wrong. That’s an important scientific step forward all on its own.

The next step is probably one best done by the modellers, though, as the only ones with a hope of parsing through the code to say what went wrong.

The reviewer makes a number of criticisms, some of which describe the different and better paper that he would have preferred to see (but which are not ‘errors’ as such), and some of which could be interpreted as talking about errors in the paper – not properly explaining why the numbers shown constitute an actual discrepancy. These are:

1. That Otto et al used adjusted forcings to match observations that the IPCC down-weighted as unreliable;

2. that the observations only estimate sensitivity for a subset of the globe (and maybe if you restricted the models in the same way the result might be closer);

3. that the Otto paper uses the Kappa model, which is unphysical and inaccurate (a claim we now know from Nic Lewis is false – Otto et al. didn’t use it for precisely that reason, and it’s not relevant to this application anyway); and

4. that they compared the ranges of model runs to pdfs, which the reviewer claims is not directly comparable but doesn’t explain why.

Of these, 1. is dependent on the IPCC’s criticisms of the observations, 2. is a testable hypothesis which Bengtsson could easily add, 3. as noted above is actually incorrect, and 4. we cannot easily judge without seeing the paper, but certainly if the models are to be any use as predictions then the values generated need to be samples from the same distribution, and therefore comparable. You do have to perform some extra steps to show the difference is ‘significant’, and this could have been done wrongly, but if so then the reviewer doesn’t say in what way.

1. and 4. may fairly be described as expecting consistency between observations and models being interpreted as an error. 2. I’d regard as a point that *does* need to be addressed, although I’d be surprised if it made a difference. 3. was an error on the reviewer’s part, which Bengtsson ought to have been able to reply to.

Steve covered 1, 2, and 4 in his article. You can, if you like, make a fuss about him missing 3., but given that the reviewer was totally wrong about it, it probably won’t get you anywhere. However, I do think it’s good that there are people here to be sceptical about the sceptics. Every scientist needs differently-motivated people checking their work.

• Steven Mosher
Posted May 17, 2014 at 12:08 AM | Permalink

“–> “A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.”

Basically the reviewer is asking for something that NO STUDY HAS EVER DONE, that is, explain “how they come to be different”.

There are studies of single models that hint at it, but nobody has explained this for a study of “models”. Why? Because nobody understands enough models in detail to explain why they differ. The models are roughly 1 million LOC.

• Posted May 17, 2014 at 1:00 AM | Permalink

I second Mosh’s request. If I were a policymaker with a passing interest in science, I’d call the nearest modeler and ask: if Bengtsson is wrong, then who’s right, and how can I get from models to observations? Because without such a procedure there’s nothing models can tell us about future observations, ie, future events.

I’m all for caution and taking care. However, if nobody knows how to take enough care, they should say so.

• Skiphil
Posted May 17, 2014 at 1:28 AM | Permalink

Thanks, Steve Mosher, this comment addresses what I’ve been wondering about: possible double standards and biased standards in the ERL (and other) reviews. It is easy in any field to set an unreasonably high bar and then say “this paper didn’t meet it”

When reviewers already approve of the “message” of a paper do they require it to surpass such a high bar??

• Posted May 17, 2014 at 5:48 AM | Permalink

Mosh, I see a parallel with the reviewer’s “how they come to be different” request and hundreds of little critics that come to these pages.

“Why don’t you do your own temperature index?”

“Why don’t you do your own paleo reconstruction?”

“Why don’t you write a better climate model?”

It seems to come down to, you can’t tell us we’re wrong unless you can do it better. They don’t seem to realize that pointing out error and quantifying it is a contribution.

There seems to be a continuing lack of appreciation for diligence, examination, and falsification, as well as a knee-jerk reaction similar to, but not exactly the same as:

If you are not a [woman, black, gay, lesbian, etc.] what right do you have to comment on [blank, blank, blank, etc.]?

Perhaps climate scientists can be added to the list of protected classes we are no longer allowed to discriminate against (offend).

• Joshua
Posted May 17, 2014 at 7:38 AM | Permalink

–> “Due scientific diligence is making sure that what you do say is correct and complete enough not to mislead, to enable others to know exactly what you did to get your result so they can replicate and criticise it.”

That sounds very similar to what the reviewer said – that the author in question did not make sure that his work was correct and complete enough to avoid misleading. Did you read the much discussed section where s/he discussed exactly that?

Whether you agree with the reviewer’s assessment is one thing. I am certainly in no position to weigh in. But again, if you want to hang your hat on the position that failure to do a “careful, constructive, and comprehensive analysis” of the topic under study is not an error in scientific practice, then more power to you. In fact, I doubt that you would lower your standards like that in your own work. What’s interesting is why you would advocate doing so for others.

• Nullius in Verba
Posted May 17, 2014 at 10:06 AM | Permalink

Yes I did read it, and it’s discussing something different to what I meant. Bengtsson claims that models are inconsistent with observations, and he only needs to be complete and clear enough for somebody to understand exactly what he means by that (i.e. inconsistent in what respect?) and to check his working. There is no requirement for him to explain in detail *why* the models are inconsistent with observation. Saying “the model is inconsistent with observation and we don’t know why” is perfectly acceptable, scientifically. (And yes, I’ve done exactly that many times myself.)

What the reviewer is complaining about is that simply pointing out the inconsistency between model and observation gives the implicit impression of “errors” being made within and between the studies. Well, yes. “Errors” are certainly an obvious possible explanation, and one that has to be taken seriously. There’s nothing at all wrong with that – if there were, it would be far more difficult to detect and report errors in science, which would defeat the entire object. But given the reviewer used the words “implicit impression”, it seems Bengtsson *didn’t* say it was due to errors specifically, properly leaving it to the scientific community to determine what the cause *actually* was. Again, it’s hard to be sure without seeing the paper, but this seems like unexceptionable practice.

It would seem, though, that the reviewer doesn’t *want* for the possibility of the models being in error to be raised, and therefore expects that any apparent inconsistency has to be explained so as to make clear this is not the case. (Or, to be generous to him, that the proof that there are errors has to be bulletproof.) But from a scientific point of view, there is absolutely nothing wrong with raising the possibility, and so objecting to that is indicative of an unscientific bias.

The problem with this is that knowing that peer reviewers are doing this degrades the credibility of the entire research record. If people cannot or will not publish inconsistencies between theories and observations, we cannot be sure of any published theory that there are no observations inconsistent with it known. Since that is our one and only basis for scientific credibility, it sabotages the entire scientific enterprise at its root. The problem extends beyond this particular paper or this particular issue, but applies to every journal, every scientist, every institution, and every subject where this behaviour is found acceptable.

If I was ever to let it be known I thought it acceptable to withhold adverse results from publication because it might ‘give the implicit impression that there was an error’, my personal scientific reputation would be trashed, and deservedly so. Nobody’s perfect, and errors happen. We all have our biases, too. But that’s exactly why we have to be so careful about such principles.

• Joshua
Posted May 17, 2014 at 7:45 AM | Permalink

–> “Basically the reviewer is asking for something that NO STUDY HAS EVER DONE, that is explain “how they come to be different””

I love the “basically.” Pretty much a license to turn his/her words into whatever you want, eh?

Well, I would say that the reviewer is “basically” saying that the journal’s standards should rest upon work that is “correct and complete enough to avoid misleading.” I would say that the author is saying that the journal’s standards should rest upon work that has the attributes of being “careful, constructive, and comprehensive.”

Those qualifications do not mean that the work has to be dispositive in explaining the discrepancy. An analysis can be “careful, constructive, and comprehensive” in explaining why an answer cannot yet be resolved, or in describing why past explanations are wrong or not sufficiently comprehensive.

You’re using the standards of a “skeptic”, of the sort who, when a scientist says that the risk of something happening falls within a range of probabilities, responds with – “But you could be wrong.”

• Steven Mosher
Posted May 17, 2014 at 2:46 PM | Permalink

No work is correct and complete enough to avoid misleading.

Let’s take the recent two papers on Antarctic “collapse”.

Can you imagine a reviewer saying, “Oh, don’t print this, some alarmist will misrepresent it”?

14. JunkPsychology
Posted May 16, 2014 at 12:39 PM | Permalink

Well, well. It isn’t often that one sees the entire text of a journal referee’s report reproduced in a public statement from the journal.

I guess getting front-page coverage in the Times of London will have some unusual effects.

The questions anyone might still have had about the actual mission of Environmental Research Letters have been answered.

15. brent
Posted May 16, 2014 at 12:43 PM | Permalink

Comment on the Nature Weblog By Kevin Trenberth Entitled Predictions of climate
This is remarkable since the following statements are made
1. IN FACT THERE ARE NO PREDICTIONS BY IPCC AT ALL. AND THERE NEVER HAVE BEEN.
2. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate.
3. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.
http://tinyurl.com/ycunacr

Real Climate’s Agreement That The IPCC Multi-Decadal Projections Are Actually Sensitivity Model Runs
Now, from an unlikely source (Real Climate) have come the statements
“A scenario only illustrates the climatic effect of the specified forcing – this is why it is called a scenario, not a forecast. To be sure, the first IPCC report did talk about “prediction” – in many respects the first report was not nearly as sophisticated as the more recent ones, including in its terminology. “
“One should not mix up a scenario with a forecast – I cannot easily compare a scenario for the effects of greenhouse gases alone with observed data, because I cannot easily isolate the effect of the greenhouse gases in these data, given that other forcings are also at play in the real world.”
Real Climate states that the scenarios can
“….. become obsolete, and….. cannot be verified or falsified by observed data, because the observed data have become dominated by other effects not included in the scenario.”
This is the definition of a sensitivity experiment! In other words, policymakers are being given global and regional multi-decadal model results by the IPCC which are not predictions but sensitivity model runs since a variety of important first order climate forcings and feedbacks are not included in the models!
http://tinyurl.com/yzpjg3y

• Political Junkie
Posted May 18, 2014 at 12:48 PM | Permalink

A couple of obvious but perhaps not trivial observations:

Why do academic ‘scenarios’ that are not ‘predictions’ appear in IPCC report ‘Summaries for Policymakers’?

Have policymakers treated the scenarios as predictions and reacted accordingly with modifications to energy policy, tax regimes, etc?

If so, what specifically are climate scientists doing to correct the policy makers’ ‘error?’

16. JunkPsychology
Posted May 16, 2014 at 12:43 PM | Permalink

The link to the IOP statement is in the first sentence of Steve’s post. Try the word “took,” helpfully highlighted in a different color.

Conditions aren’t looking favorable for those out trolling.

17. Brian R
Posted May 16, 2014 at 12:46 PM | Permalink

I think their idea is flawed, but why not throw it back in their faces? If climate model outputs are “not directly comparable to observation based intervals”, then they can’t be used to predict a dire future. Any attempt to use model outputs to claim that sea levels will rise, or that floods or droughts will increase, or any of the other cockamamie things, can be summarily dismissed.

• Jimmy Haigh
Posted May 16, 2014 at 11:13 PM | Permalink

Further: If climate models outputs are “not directly comparable to observation based intervals”, why do they then bother “hindcasting” them to make them fit what actually happened? And then why are they always crowing about how good the models are?

18. Tom O
Posted May 16, 2014 at 12:56 PM | Permalink

So he changed his opinion. Hmmm, is that any different than falling out of love and getting a divorce? Or possibly, do you suppose that he might have LEARNED things since 2006 that have changed his mind? WHY, by the way, aren’t ALL people allowed to change their minds? Don’t you? But in reality, 8 years of being shown a different point of view, and recognizing what is best in it, may require a change of mind. Why do you see a problem with changing a position? Are your thoughts REALLY carved in stone?

• motvikten
Posted May 16, 2014 at 2:30 PM | Permalink

Of course he is allowed to change his mind, but I have not seen him explain why.

The problem is that he is a prof. em. in meteorology and a member of an energy committee, and still in 2014 very active in the Swedish energy debate, arguing against wind power and suggesting new nuclear power plants, with low carbon used as the motivation.

• pottereaton
Posted May 16, 2014 at 3:11 PM | Permalink

Have you checked his peer-reviewed papers to see if they contain explanations as to why he changed his mind? That would be the first place I would look.

It might be more subtle than a change of mind. He might have simply adjusted his position in the face of divergent data. The issue really does not break down easily into “for” and “against” arguments.

In my view, the power of the CAGW argument peaked some time around 2006. He might have been caught up in that wave, so to speak.

19. Posted May 16, 2014 at 1:16 PM | Permalink

Well, I guess if you live on the right side of the scientific “street” …

You can get anything you want
At Alice’s (climate change) restaurant!

20. Posted May 16, 2014 at 1:18 PM | Permalink

I understood this to mean that the observation-based estimates are described with well-defined confidence intervals (ie probabilities), but model-based are not (eg spread/range with no probabilities assigned). So one has to take care in comparing them. No?

I have to say I don’t have Otto et al etc in front of me though.
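Tamsin’s distinction can be made concrete with a toy example (hypothetical numbers, not any published estimate): an observation-based result is typically reported as, say, a 5–95% confidence interval, while a model ensemble is often summarized only by its min–max spread, which carries no probability weight and widens as runs are added. A minimal sketch:

```python
import random

random.seed(1)

# Hypothetical sensitivity estimates (degC) from 30 model runs --
# illustrative stand-ins, not output from any real ensemble.
ensemble = [random.gauss(3.0, 0.5) for _ in range(30)]

# Min-max spread: just the extremes; no probabilities attached,
# and it can only grow as more runs are added.
spread = (min(ensemble), max(ensemble))

# A central 5-95% band from the *same* numbers, treating them as a
# sample (crude index-based percentiles, fine for a sketch):
s = sorted(ensemble)
ci = (s[int(0.05 * len(s))], s[int(0.95 * len(s)) - 1])

print(spread)  # wider band: the extremes
print(ci)      # narrower band: central 90% of the same values
```

The two intervals summarize the same thirty numbers yet answer different questions, which is why naively overlaying a model spread on an observational confidence interval requires care, whatever one concludes about whether such a comparison is legitimate.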

• Green Sand
Posted May 16, 2014 at 4:19 PM | Permalink

Tamsin

Observations are just observations, they don’t suggest or project estimations, only homo sapiens do that. But observations, and observations alone, can and do confirm or refute estimations, no matter upon which said estimations are based.

There is only one actual; any homo sapiens interpretation of it can only be a very low secondary, which sadly of late has been promoted well above its station.

• j ferguson
Posted May 16, 2014 at 8:59 PM | Permalink

Tamsin,
I thought that was the objection. Maybe someone can show how the difference in the method of assigning comfort with observations and comfort with projections is insufficient to invalidate an attempt at comparison of the two. That seems to be the point the referee was driving at. Maybe it is spurious.

• David L. Hagen
Posted May 16, 2014 at 9:30 PM | Permalink

Tamsin
See: Otto et al Energy budget constraints on climate response; Nic Lewis’s post; & 40 citing articles; Michaels post on Loehle, Spencer & Braswell
How are we to weigh the IOP reviewer’s comments against the probability of 34 years of GCM projections versus the global temperature evidence? cf. the Spencer/Christy evidence.

• Steve McIntyre
Posted May 16, 2014 at 10:10 PM | Permalink

Tamsin writes:

I understood this to mean that the observation-based estimates are described with well-defined confidence intervals (ie probabilities), but model-based are not (eg spread/range with no probabilities assigned). So one has to take care in comparing them. No?

That isn’t what the reviewer said. If he wanted to say that the authors ought to have “taken care”, then that’s what he could have said. But instead, he said that “no consistency was to be expected” and other similar claims. If you parsed the actual statements, your exegesis would have a better chance of being convincing.

• Posted May 16, 2014 at 11:44 PM | Permalink

“no consistency was to be expected”
There is a basis for that. Otto et al said:

“Our results match those of other observation-based studies and suggest that the TCRs of some of the models in the CMIP5 ensemble10 with the strongest climate response to increases in atmospheric CO2 levels may be inconsistent with recent observations — even though their ECS values are consistent and they agree well with the observed climatology. Most of the climate models of the CMIP5 ensemble are, however, consistent with the observations used here in terms of both ECS and TCR. We note, too, that caution is required in interpreting any short period, especially a recent one for which details of forcing and energy storage inventories are still relatively unsettled: both could make significant changes to the energy budget. The estimates of the effective radiative forcing by aerosols in particular vary strongly between model-based studies and satellite data. The satellite data are still subject to biases and provide only relatively weak constraints (see Supplementary Section S2 for a sensitivity study).”

• Posted May 17, 2014 at 12:47 AM | Permalink

Nick – you cannot possibly be suggesting all Bengtsson had to do is add words of caution and the reviewer would’ve been satisfied? Or are you?

• Posted May 17, 2014 at 1:24 AM | Permalink

omnologos,
“words of caution”, no. The reviewer is saying that Otto et al had said that their result should not be expected to match GCM results in the short term, and explained why. So a paper saying that there is a mismatch is redundant.

• Posted May 17, 2014 at 3:39 AM | Permalink

Nick
As I recall, the statement in Otto et al
“caution is required in interpreting any short period, especially a recent one for which details of forcing and energy storage inventories are still relatively unsettled: both could make significant changes to the energy budget”
referred to its energy budget ECS and TCR estimates based on 2000-09 forcing and energy storage etc. data. It was not to do with the period over which comparisons with models were made.

• Posted May 17, 2014 at 4:20 AM | Permalink

Nick,
The reviewer says
“The comparison between observation based estimates of ECS and TCR (which would have been far more interesting and less impacted by the large uncertainty about the heat content change relative to the 19th century) and model based estimates is comparing apples and pears”
That does seem to be related to the Otto caution.

• Posted May 17, 2014 at 5:14 AM | Permalink

Otto asks for caution in comparisons. The reviewer rejected the very notion of doing any comparison. Apples don’t turn into pears in the long run.

• Posted May 17, 2014 at 10:43 AM | Permalink

NickS,
The Otto et al best estimates of ECS and TCR based on the long 1970-2009 period were almost identical to those based on the 2000-09 period that the caution in Otto et al referred to. Although the uncertainty ranges were wider, I don’t see at all that this means there is an apples and pears comparison between Otto et al and model estimates, certainly for TCR. For ECS there is the difference between effective and equilibrium climate sensitivity to consider, which is non-negligible (but not huge) in about half the CMIP5 models.
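For readers following the ECS/TCR exchange above, the energy-budget estimates under discussion take the standard Otto et al form ECS ≈ F₂ₓ·ΔT/(ΔF − ΔQ) and TCR ≈ F₂ₓ·ΔT/ΔF. A minimal sketch follows; the numbers are illustrative placeholders, not the paper’s actual data:

```python
# Energy-budget climate sensitivity estimates in the style of Otto et al (2013).
# All numbers below are illustrative placeholders, not the paper's data.

F_2X = 3.44  # W/m^2, assumed forcing from a doubling of CO2

def energy_budget_tcr(delta_T, delta_F):
    """Transient climate response: warming per doubling, ignoring ocean heat uptake."""
    return F_2X * delta_T / delta_F

def energy_budget_ecs(delta_T, delta_F, delta_Q):
    """Effective climate sensitivity: subtract the heat-uptake rate from the forcing."""
    return F_2X * delta_T / (delta_F - delta_Q)

# Illustrative period-minus-baseline changes:
dT = 0.75   # K, temperature change
dF = 1.95   # W/m^2, forcing change
dQ = 0.65   # W/m^2, change in the Earth's heat-uptake rate

print(round(energy_budget_tcr(dT, dF), 2))       # ~1.32 K per doubling
print(round(energy_budget_ecs(dT, dF, dQ), 2))   # ~1.98 K per doubling
```

The point Nic Lewis makes above is visible in the formulas: TCR depends only on ΔT and ΔF, so the heat-content uncertainty that the Otto et al caution refers to enters only through ΔQ in the ECS estimate.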

• Joshua
Posted May 17, 2014 at 8:37 AM | Permalink

Well, this is fascinating:

Here’s what Steve says:

–> “That isn’t what the reviewer said. ***If he wanted to say that the authors ought to have “taken care”, then that’s what he could have said.***”

Here’s what the author says:

—> “A careful, constructive, and comprehensive analysis of what these ranges mean, and how they come to be different, and what underlying problems these comparisons bring would indeed be a valuable contribution to the debate.”

It will be interesting to see whether this comment gets through moderation. What say you, Steve?

• Joshua
Posted May 17, 2014 at 8:39 AM | Permalink

A previous comment was put into moderation – so I guess that means there was something specific in that comment that flagged some automatic filter?

Steve: yes. I moderate after the fact. some words trigger moderation. Unlike some blogs, I give critics very wide licence and even permit them latitude on blog rules, and moderate “supporters”. I snip or delete a lot of comments for piling on or for being too angry or complaining.

• Steven Mosher
Posted May 17, 2014 at 2:47 PM | Permalink

arrg Joshua your conspiratorial ideation is showing

• TerryMN
Posted May 17, 2014 at 3:18 PM | Permalink

Joshua, are you trying to set a record for how many times one person can be completely and provably wrong in just a weekend?

21. Posted May 16, 2014 at 1:49 PM | Permalink

“A position impossible to parody!”

Have you not said that from time to time? Perhaps it’s a fitting evaluation of the editorial position.

22. Craig Loehle
Posted May 16, 2014 at 2:09 PM | Permalink

Granting that it might be DIFFICULT or COMPLICATED to compare model vs data (either runs or sensitivity or whatever), the warmists don’t like ANY comparisons, and every attempt to do so (see Ross’ list above) is met with resistance and absurd arguments, like comparing the range of models vs the data (for the hot spot – Gavin’s argument, I believe).

• MikeN
Posted May 16, 2014 at 2:28 PM | Permalink

I think for the hot spot, they argue that the observations are wrong. Indeed, this has become a recurring argument from SS. When models do not match observations, the models could be wrong, the observations could be wrong, or both.

23. Steve McIntyre
Posted May 16, 2014 at 2:49 PM | Permalink

In accordance with an unevenly enforced blog policy, I’ve deleted some comments for venting and/or generalized complaining. Too many such comments makes threads unreadable for outside readers – actually, even for me.

In particular, I would prefer that readers not use this post to make generalized complaints about models. The nuance of this post is IOP’s idea that it was an “error” to compare models and observations, not that “models” are misguided – an argument that I find mostly uninteresting.

• Keith Sketchley
Posted May 16, 2014 at 6:27 PM | Permalink

Thankyou Steve.

Indeed, blogs get hard to read even with the sub-thread capability you have.

• AJ
Posted May 16, 2014 at 8:52 PM | Permalink

Earlier, I saw my “unscientific” comment temporarily placed into moderation. It didn’t bug me none too much as I said to myself “Self… It’s Steve’s blog and he can do whatever he pleases. He must be thinking it’s distracting from the main point…” I was surprised to see my zombie comment reanimated.

24. EdeF
Posted May 16, 2014 at 2:56 PM | Permalink

The models are coded with high sensitivity to increased CO2 levels. As GHGs increase over time, which they have been doing for awhile, the models predict higher overall temperatures. The models have diverged from measured temperature data over the last 20 years. How then can the models be trusted to model temperature increases over the next 100 years – not the absolute temperature, but the broad trends in temperature – if there has been so much divergence between models throughout the computer age?

25. Barclay E MacDonald
Posted May 16, 2014 at 3:01 PM | Permalink

Brent 12:43 pm
Thanks for the summary.

Just my general thought, because I am not buying it so far(not criticizing Brent):

So why isn’t it fair to ask why none of the “scenarios”, including an “average” of them, compares well to reality? And then conclude the “scenarios” are not useful to the real world, and that is a bad thing!

26. pauldd
Posted May 16, 2014 at 4:28 PM | Permalink

I am not a scientist and I follow the climate debates as a layman so perhaps I am missing something here.

My understanding of the key attribution findings of the IPCC is that they were derived by comparing model outputs over the historical period with and without observed increases in greenhouse gas emissions. The difference between the with and without model runs was then used to show the effects of the greenhouse gas emissions.

Now I am reading that the climate scientists acknowledge that the models don’t capture natural variability and that one should not expect them to accurately model historical temperature variations.

If this is so, upon what are the IPCC’s attribution findings based? Help me if I am missing something, but this is an honest question from my layman’s perspective.
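The with/without comparison pauldd describes can be sketched, in heavily simplified form, as differencing ensemble means. All numbers and series here are invented purely for illustration:

```python
# Heavily simplified sketch of attribution by differencing model ensembles:
# runs driven with all forcings vs. runs with natural forcings only.
# All numbers are invented for illustration.

def ensemble_mean(runs):
    """Average a list of equally long temperature-anomaly series, element-wise."""
    n = len(runs)
    return [sum(vals) / n for vals in zip(*runs)]

# Three runs each (anomalies in K over some notional historical period):
all_forcings = [[0.1, 0.3, 0.6], [0.0, 0.4, 0.7], [0.2, 0.2, 0.8]]
natural_only = [[0.1, 0.1, 0.0], [0.0, 0.2, 0.1], [0.2, 0.0, -0.1]]

# The element-wise difference of the two ensemble means is the warming
# attributed to the anthropogenic forcings in this scheme.
attributed = [a - n for a, n in zip(ensemble_mean(all_forcings),
                                    ensemble_mean(natural_only))]
print([round(x, 2) for x in attributed])
```

Note that this scheme only attributes what the models themselves produce; pauldd’s question – how the attribution can hold if the models don’t capture natural variability – is about the `natural_only` leg of exactly this kind of calculation.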

27. Posted May 16, 2014 at 4:31 PM | Permalink

It’s pretty hard to evaluate a review without a copy of the paper. Has anyone thought of asking Prof Bengtsson for a copy?

One might ask too whether he submitted the paper somewhere else.

28. Sven
Posted May 16, 2014 at 4:32 PM | Permalink

Are the models not “improved” and calibrated with observations?

29. Posted May 16, 2014 at 4:48 PM | Permalink

If the models are not expected to be consistent with observations, why should they be relied on for policy purposes?

30. PhilH
Posted May 16, 2014 at 5:07 PM | Permalink

The Bishop points out that the reviewer made a rather major error in her review.

31. Carrick
Posted May 16, 2014 at 5:11 PM | Permalink

I haven’t seen the paper, but I read the review that was posted. So my comments are based on those of the reviewer:

It appears the problem is that estimates of ECS from energy balance models, such as the one apparently used here, wouldn’t be expected to be consistent with a full GCM, even if one were to just analyze the output of the GCM itself!

Moreover, this is widely known already; it’s known in the specific examples discussed in the paper, and the paper doesn’t improve our understanding of why the differences are present (it’s already understood) or how to improve the agreement.

As to “actually it is harmful as it opens the door for oversimplified claims of “errors” and worse from the climate sceptics media side”, well saying wrong things that people are going to exploit for political gain that harms your agenda is a legitimate concern.

It’s an admission that the journal obviously has a political agenda, but that’s well known too.

• Posted May 16, 2014 at 5:20 PM | Permalink

“It’s an admission that the journal obviously has a political agenda, but that’s well known too.”
It’s the reviewer speaking, not the journal.

• Green Sand
Posted May 16, 2014 at 5:30 PM | Permalink

Nick

To the best of my knowledge, and you are welcome to correct me, reviewers speak but journals/editors decide. So either the journal agrees with the reviewer or the tail is wagging the dog? Which is it?

• Carrick
Posted May 16, 2014 at 5:34 PM | Permalink

I don’t think they would have published the paper even had the reviewer not made those frank, truthful, but “unfortunate” comments. But I’m sure framing it in terms of the politics didn’t “help the paper’s chances”.

Typically, when I send in reviews, I’m given an opportunity to make private comments to the editor (that is there is usually a text box I can fill in for comments not shared with the authors). I’m surprised this sort of “dicey comment” wasn’t sent in that place instead of being shared with the authors.

• Carrick
Posted May 16, 2014 at 5:30 PM | Permalink

True enough. But I think it’s still a fair statement here.

• Green Sand
Posted May 16, 2014 at 5:53 PM | Permalink

True enough. But I think it’s still a fair statement here.

Que? It may be late and I might be beyond my best, but I have no idea what the above means or refers to.

• Carrick
Posted May 16, 2014 at 5:57 PM | Permalink

To Nick’s comment:

“It’s an admission that the journal obviously has a political agenda, but that’s well known too.”
It’s the reviewer speaking, not the journal.

(replied in the wrong place)

However, since it’s a statement by the reviewer, perhaps “recognition” is a better word choice, as in:

“It’s a recognition [on the part of the reviewer] that the journal obviously has a political agenda…”

• Skiphil
Posted May 16, 2014 at 6:48 PM | Permalink

Reviewer yes, speaking for and to a journal that includes Peter Gleick, Stefan Rahmstorf, and Myles Allen as 3 of 8 members of its Executive Board. Hardly the most unbiased non-political journal around:

http://iopscience.iop.org/1748-9326/page/Editorial%20Board

• Steven Mosher
Posted May 17, 2014 at 12:10 AM | Permalink

well, when a reviewer of one paper I was involved with made political comments about “skeptics” the editor removed him.

• Skiphil
Posted May 16, 2014 at 6:50 PM | Permalink

Latest at Bishop Hill finds what appears to be a very significant error in the review, aside from its evident political bias (worry at the end about unwelcome public impact of Bengtsson’s paper):

http://bishophill.squarespace.com/blog/2014/5/16/that-error.html

• knr
Posted May 17, 2014 at 1:20 AM | Permalink

‘well saying wrong things that people are going to exploit for political gain that harms your agenda is a legitimate concern.’

If you’re doing politics or religion, fine, but it’s supposed to be science they’re doing here. So are they living up to a good scientific standard, or are they worried about how a new viewpoint impacts the marketing ploy they’re using to push their ‘product’?
Because it looks like the latter, as they seem unable to find where the ‘science’, not the message, is wrong.

32. greymouser70
Posted May 16, 2014 at 5:18 PM | Permalink

Hilary: Thanks for the trip down memory lane. And yes, the situation is somewhat similar to Arlo’s tale.

33. Carrick
Posted May 16, 2014 at 5:59 PM | Permalink

Models can be falsified; the issue here is that disagreement between worse models and better models shouldn’t be seen as a problem with the agreement between model and observations.

34. Posted May 16, 2014 at 7:35 PM | Permalink

Ok, I just finished reading the review and it is 100% as bad as Steve points out here. Really shockingly bad even considering the highly politicized state of climate science; this puppy has set the bar for nonsense really high. Asking for a review of all of the causes of the problems is ridiculous, and a completely different exercise than showing that there IS a problem. And to couch it in terms of damage to be done by “skeptics” – “opens the door for oversimplified claims of “errors” and worse from the climate sceptics media side.”

Wow!

It is not some oil-funded skeptic’s fault that observations are not warming as models predicted.

• S. Geiger
Posted May 16, 2014 at 10:54 PM | Permalink

I had the same thought. Apparently now the models present the null hypothesis, and it’s not enough to simply show that in some cases the models are (increasingly) diverging from reality. I can’t even imagine how one would go about, in the same submitted paper, establishing exactly where and why the models are incorrect. That, of course, is the million dollar question in which (hopefully) the entire climate modeling community is currently engaged.

35. Posted May 16, 2014 at 8:03 PM | Permalink

I’m afraid that to do so would be “harmful as it opens the door for oversimplified claims of ‘errors’ and worse from the climate sceptics media side.” (Quote from referee in IOP’s statement.)

Maybe Michael Mann will sue the climate for its failure to conform to the models.

• talldave2
Posted May 19, 2014 at 12:01 AM | Permalink

36. Sven
Posted May 17, 2014 at 2:27 AM | Permalink

The reviewer’s view that only an explanation of the differences would have scientific value demonstrates very well what has been happening so far with the consensus scientists’ work on the pause. After “Houston, we have a problem”, paper after paper is coming out “explaining away” the pause. It’s the oceans, it’s the Chinese aerosols, no it’s… They just expect everybody to be engaged in the same exercise: to do everything possible to prove that Houston does not have a problem.

37. Henry
Posted May 17, 2014 at 7:04 AM | Permalink

I was taught to believe George E P Box’s maxim “All models are wrong but some are useful” or in its longer form “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” Empirical Model Building and Response Surfaces (1987)

From this I would agree with the reviewer’s suggestion that consistency should not be expected, but disagree with the suggestion that it is an error to compare model outputs and ranges with observation-based intervals. Without a comparison, how is it possible to identify how useful a model is?
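The comparison Henry defends presupposes some consistency test. A minimal sketch of the kind of interval-overlap check at issue in the thread, with ranges invented purely for illustration:

```python
# Minimal sketch of the disputed comparison: do a model-based range and an
# observation-based range for a quantity (e.g. TCR) overlap at all?
# The ranges below are invented for illustration.

def intervals_overlap(a, b):
    """True if closed intervals a=(lo, hi) and b=(lo, hi) share any point."""
    return a[0] <= b[1] and b[0] <= a[1]

model_range = (1.5, 2.8)   # hypothetical model-derived TCR range, K
obs_range = (0.9, 2.0)     # hypothetical observation-based TCR range, K

print(intervals_overlap(model_range, obs_range))  # True: ranges overlap
print(intervals_overlap((2.2, 2.8), obs_range))   # False: disjoint ranges
```

Whether such a check is even legitimate, given the reviewer’s claim that the IPCC ranges are not confidence intervals or pdfs, is exactly the point of contention in the post above.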

38. David Longinotti
Posted May 17, 2014 at 7:06 AM | Permalink

So we’re asked by the journal to believe that the real reason that the paper was rejected was that Bengtsson’s reasoning was scientifically baseless, a case of comparing ‘apples and oranges’. But suppose Bengtsson had concluded from the same sort of comparison that the measurements were likely in error and that the modeled sensitivity should be trusted instead. Does anyone believe that the paper would still have been rejected due to faulty methodology?

39. Craig Loehle
Posted May 17, 2014 at 7:11 AM | Permalink

Here is how it is supposed to work. A paper came out on trends in amphibian populations in the US. They found 2x the % declining as past studies. It seemed suspicious to me. I was able to get data from authors and accessed an additional year of data and used a different method. I got a considerable difference in results and sent in a comment. The reviewers said that this type of replication was simply essential in science. They were happy. Nothing about feeding skeptics. It is now accepted.
In contrast, in 2 seminars I’ve given on climate I’ve been yelled at by a prof during questions. I debated Michael Schlesinger (climate modeler), who presented a very general talk and basically demanded that I and the audience simply bow to his omniscience and perfection – never responded to a single point I made and insulted me personally. My papers on climate change have gotten some of the most angry and strange comments of my entire 145-publication career. Yes, a small sample, but add it to the others.

• Posted May 17, 2014 at 8:46 AM | Permalink

No wonder it’s a small sample!

40. j ferguson
Posted May 17, 2014 at 7:20 AM | Permalink

If the modelers prefer that their models not be compared to observations, maybe we can address the thing which most concerns a lot of us (well, me at least).

Mitigation.

The modelers seek for us to mitigate in the “real world” effects which show in their models but not so much in recent “real world” observations. Some of us don’t like the mitigation methods they suggest for reasons of cost and sometimes fear of ill and unintended consequences and maybe some of us don’t like the intended consequences either.

But that would be for mitigation efforts in the “real world”. Maybe they should confine their mitigation efforts to the virtual reality of their modeling world and leave the rest of us alone.

The costs of all of this could then be supported in Quatloos.

And the rest of us could get back to the Spring planting.

41. Craig Loehle
Posted May 17, 2014 at 9:52 AM | Permalink

Let us assume we can save the reviewer’s point by positing that the models are only generally correct for very long times and large scales, like 100 years and global. (still an iffy proposition). The big problem here is that 1) advocates were happy to point to the 1980-2000 rise as proof of the models and 2) thousands of papers have come out applying the models to short time scales and small areas to do impact studies. Cake, eating, also.

• MikeN
Posted May 18, 2014 at 11:57 AM | Permalink

Seems to me that not all of the models ‘failed’. The spaghetti graph shows a few that are close. Perhaps the models that are closest should be evaluated as to their long term predictions compared to the others. Tamino once said that models exhibit acceleration. I tried to quantify it at the time, but couldn’t find any model runs with high sensitivity. I think I’ll check again.

42. TAG
Posted May 17, 2014 at 10:35 AM | Permalink

The current (May-June 2014) issue of American Scientist has an article on the use of simulations in scientific research. It identifies the need for them for large systems in which experiments are not feasible, using the examples of astrophysics and climate science. It identifies the difficulties of simulating such systems that exist at multiple scales, and the practical requirement of linking multiple simulations together to bridge these scales. It also identifies issues of simulating phenomena for which the science is not well understood, using the example of cloud formation, which it describes as an unresolved puzzle across several scientific disciplines.

Working in engineering, I know that models have to be both verified and validated: verified in the sense that the model works as intended, without bugs; validated in the sense that the output is useful, which, in part, means that its predictions are accurate.

The article gives a reference to a published paper in climate science that appears to address the validation issue; it is:

Held, I.M., 2005. “The gap between simulation and understanding in climate modeling”, Bulletin of the American Meteorological Society 86:1609-1614.

Perhaps this article provides insight into the temporal and geographical scales at which the climate models provide outputs that can be usefully compared to observations.

43. pottereaton
Posted May 17, 2014 at 2:07 PM | Permalink

Steyn on the Bengtsson Affair

Wow.

44. John Ritson
Posted May 17, 2014 at 6:04 PM | Permalink

I found this August 2008 quote from Tim Flannery, Australia’s recent climate commissioner:
” Scientific studies confirm that our planet is warming at a rate consistent with the worst case scenario developed by the Intergovernmental Panel on Climate Change in 2001, meaning that we must make substantial inroads on our emissions in the next 20 years if we hope to avoid irreversible damage to Earth’s climate system. ”

So the expectation of consistency between scenarios and measurement was being taken so seriously back then that it was affecting the recommendations for national policy.

45. Steve McIntyre
Posted May 19, 2014 at 9:28 AM | Permalink

Paul Matthews has a Bengtsson time line http://ipccreport.wordpress.com/2014/05/19/the-lennart-bengtsson-story/. He links to and discusses a second review now online at IOP.

46. Posted May 19, 2014 at 2:07 PM | Permalink

I see that IOP has not published the second review of the Bengtsson paper. It’s devastating. There is no way an editor could have accepted it, based on those reviews.

But I guess no one is interested now.

• j ferguson
Posted May 19, 2014 at 3:37 PM | Permalink

now published, Nick?

• Posted May 19, 2014 at 6:49 PM | Permalink

Indeed, thanks

• TAG
Posted May 19, 2014 at 4:04 PM | Permalink

Devastating second review

Paul Matthews describes it thusly:

IOP publishes the second review of Bengtsson et al’s paper. The reviewer says that using TCS is “wrong” and ECS would be “right”, which is odd since many climate scientists (eg Myles Allen) are now saying TCS is what matters. There is also a claim that log-log plots should be non-dimensional, which is not true.

• Posted May 19, 2014 at 6:48 PM | Permalink

“Paul Matthews describes it thusly”
Well, he’s wrong, and the reviewer is right. He says that ECS should be used, because the author wants to talk about “committed warming”, and that would be correct. And the reviewer was quite right about log-log plots. It’s not impossible to plot dimensional quantities, but each will have an arbitrary offset depending on units. The reviewer gave a very helpful and constructive way of doing it right.

• Nullius in Verba
Posted May 19, 2014 at 9:32 PM | Permalink

Maybe. That would depend on the purpose of talking about the “committed warming”. You can use it for comparative purposes as a parameter, in which case you need to use a common standard. Or it might be discussed for policy purposes, in which case the amount of short-term (e.g. this century) warming committed to may be of greater practical interest. Ideally it ought to be given a distinctive name as a new concept, though.

Any chart of a dimensional quantity will have an arbitrary numerical dependence on units – with a linear scale the scale is ‘arbitrary’, while for a log plot it is the offset. The choice of unit fixes the scale/offset. Converting to a dimensionless quantity by dividing by some fixed quantity is exactly equivalent to choosing that fixed quantity as the unit. Thus the power level in decibel-Watts (dBW) is ten times the logarithm of the power in Watts, which is numerically equal to ten times the logarithm of the dimensionless quantity consisting of the power divided by one Watt.

It’s no more wrong or arbitrary to plot the log of a dimensional quantity than it is to plot its value directly. In both cases, the numerical value plotted depends on the choice of unit, and changes if the unit changes. In both cases one is effectively dividing the quantity by the unit to get a dimensionless numerical quantity to plot (and then always multiplying the result by a length to create a physical chart).

The reviewer’s approach might indeed arguably be better, and easier to understand, but that doesn’t make the original approach ‘wrong’.

• Carrick
Posted May 25, 2014 at 1:30 PM | Permalink

Late to the rodeo here:

Nick states:

And the reviewer was quite right about log-log plots.

taking the log of a dimension yields nonsense

Probably he meant dimensional quantity.

However the log of a dimensional quantity is well defined because:

$\log(x/y) = \log(x) - \log(y)$.

So for example:

$\log(10\,\hbox{km} / 1\,\hbox{km}) = \log(10\,\hbox{km}) - \log(1\,\hbox{km}) = \log(10\,\hbox{km}) - 0$.

Taking the log of a dimensional quantity is exactly equivalent to first dividing by the unit dimension of that quantity, then computing the log of the resulting dimensionless quantity.

The only way taking the log of a quantity is disallowed is if we decree it can’t be done. Otherwise the results are sensible and easily interpreted.
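Carrick’s and Nullius in Verba’s point can be checked numerically: changing the unit of a quantity multiplies every value by a constant, which only shifts its logarithm by a constant offset, so slopes on a log-log plot are unit-independent. A minimal sketch:

```python
import math

# Changing units (km -> m) multiplies every value by a constant factor, so each
# log10 value is shifted by the same constant offset; the log-log slope is
# therefore independent of the choice of unit.

x_km = [1.0, 10.0, 100.0]
x_m = [v * 1000.0 for v in x_km]  # the same lengths expressed in metres

logs_km = [math.log10(v) for v in x_km]
logs_m = [math.log10(v) for v in x_m]

offsets = [a - b for a, b in zip(logs_m, logs_km)]
print(offsets)  # every value shifted by the same constant, log10(1000) = 3

# Slope over the full range (per decade of x, in either unit):
slope_km = (logs_km[-1] - logs_km[0]) / 2
slope_m = (logs_m[-1] - logs_m[0]) / 2
print(math.isclose(slope_km, slope_m))  # True
```

This is exactly the decibel convention mentioned above: taking the log of a dimensional quantity implicitly divides by the chosen unit first, and a different unit merely relabels the axis.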

47. Gerald Browning
Posted May 19, 2014 at 3:59 PM | Permalink

I added a comment that Steve seems to have removed. My comment showed that climate models are nowhere close to reality after a day and a half of time, for a number of reasons, and I could have added many more. Thus they cannot be used to determine climate sensitivity or anything else about the real atmosphere. I am beginning to think Steve is either unwilling to look up a few mathematical facts or is becoming more narrow minded. I don’t expect this comment to survive.

Jerry

48. Gerald Browning
Posted May 30, 2014 at 2:35 PM | Permalink

http://www2.ucar.edu/for-staff/daily/calendar/2014-04-29/could-we-have-predicted-early-2000s-global-warming-hiatus-1990s

49. Gerald Browning
Posted May 30, 2014 at 2:37 PM | Permalink

COULD WE HAVE PREDICTED THE EARLY-2000’S GLOBAL WARMING HIATUS IN THE 1990’S?
Submitted by potemkin on Thu, 03/27/2014 – 12:55pm
There is a perception that no climate model has simulated the recent plateau of global warming that started around 2000. This warming slow-down is commonly referred to as the “early-2000s hiatus”. Though the multi-model ensemble average of uninitialized climate models indeed shows early-21st century warming greater than what has been observed, there are a number of individual ensemble members from several models that did actually simulate the early-2000s hiatus. Those simulations were successful because the internally generated naturally occurring climate variability associated with the negative phase of the Interdecadal Pacific Oscillation (IPO) happened, by chance, to coincide with the observed negative phase of the IPO that contributed to the early-2000s hiatus. However, picking out those skillful ensemble members in advance would not have been possible in the 1990s prior to the hiatus, and thus would have had no predictive value. If the recently developed methodology of initialized decadal climate prediction could have been applied in the 1990s, both the negative phase of the IPO in the early 2000s as well as the hiatus could have been predicted. The processes associated with this skillful prediction include more heat being mixed into the subsurface ocean as indicated by positive (downward) net surface heat flux over the global ocean, and stronger Pacific trade winds that intensify the Pacific Ocean subtropical cells and mix more heat into the subsurface ocean.

50. Gerald Browning
Posted May 30, 2014 at 2:45 PM | Permalink

If we just knew which parameterizations to tweak, we could have gotten it right!

51. Gerald Browning
Posted May 30, 2014 at 2:50 PM | Permalink

From Abstract of Meehl paper.

[1] Case studies involving notable past decadal climate variability are analyzed for the mid-1970s climate shift, when the tropical Pacific warmed over a decade and globally averaged temperature rapidly increased, and the early 2000s hiatus when the tropical Pacific cooled over a decade and global temperatures warmed little. Ten year hindcasts following the CMIP5 decadal climate prediction experiment design are analyzed for those two periods using two different initialization techniques in a global coupled climate model, the CCSM4. There is additional skill in the initialized hindcasts for surface temperature patterns over the Pacific region for those two case studies over and above that in free-running historical simulations with the same model. A 30 year hindcast also shows added skill over the Pacific compared to the historical simulations. A 30 year prediction from the initialized model simulations shows less global warming for the 2016–2035 period than the free-running model projection for that same time period.

52. Gerald Browning
Posted May 30, 2014 at 2:57 PM | Permalink

When they get the hindcast down to one and a half days, they might have some semblance to the real atmosphere, and then we can check exactly what games they have played. Or you can read Sylvie Gravel’s manuscript.

53. Gerald Browning
Posted May 30, 2014 at 3:11 PM | Permalink

There is only one way to correctly initialize observational data to obtain the slowly evolving in time solution (Browning and Kreiss 2002). Once the observed vertical component of vorticity and the total cooling/heating term is given at a particular instant in time, the remaining variables [pressure, entropy (potential temperature) and horizontal divergence] must be determined by known diagnostic elliptic equations. The cute trick here is that the heating is all from parameterizations and those can be tweaked to provide the desired solution. Clearly this is easier to do if you already know that you need a hiatus.🙂

54. Gerald Browning
Posted May 30, 2014 at 3:26 PM | Permalink

At a Meehl seminar at NCAR, I asked him a number of specific details about the model he was using. He had no clue and asked that we take the conversation off line in order to hide his lack of understanding of the differential equations and the poor accuracy of the numerical approximations.

• kim
Posted May 31, 2014 at 8:57 AM | Permalink

Sorceror’s Apprentices.
==============

55. Posted May 31, 2014 at 10:42 AM | Permalink

Reblogged this on contrary2belief and commented:
In climate science, one shouldn’t expect observations to correspond with what’s predicted by models.
Remind me again why the output of models is then used to determine policy?

56. Posted Jun 11, 2014 at 11:29 AM | Permalink

The censorship is becoming more blatant. We’re definitely past the end of the beginning now.

Posted Sep 27, 2014 at 10:17 AM | Permalink

Physicists occasionally, and intentionally, use “bad” models with little connection to reality.

For instance when new experimental research like higher temperature superconductivity appears, they may say “the model based on this existing theory gives the following results”

But those results are clearly identified as nonrealistic and are not used for, e.g., building factories making such superconductors.

*****
The problem with the climate models at hand is maybe not so much that they are expected to be nonrealistic, as the reviewer said.

But that, if and while they are expected to be nonrealistic, they are not identified as such. And, worse, that they are used as a base for economic policy. As if they WERE realistic.

1. […] Steve McIntyre highlighted a response from the Institute of Physics (publishers of Environmental Research Letters) to a UK Times article reporting the suppression of a global warming paper.  A paper which again attempted to document the less than supportive evidence observed temperatures provide for climate models.  The paper was written by a well known climate scientist who chose the unfortunate path of publishing TRUTH rather than Real Climate dogma necessary for success in today’s Climate Science™ field. […]

2. […] Steve McIntyre on the bizarre excuse given for rejecting the paper: […]

3. By How the GW myth is perpetuated - Page 47 on May 17, 2014 at 4:41 PM

[…] consensus’ data for a scientific rebuttal | Watts Up With That? Climategate n+1 No, no, no. You have it all wrong. The climate models*, upon which much has been spent, were never meant to be accurate.I have rated […]

4. By Maggie's Farm on May 18, 2014 at 9:32 AM

I’m taking bets.

I’ll bet anybody $1000 that Miami will not be underwater in ten years. Any takers? Here’s the scare headline: Miami Will Likely Be Underwater Before Congress Acts on Climate Change. Oh no – I’m scared. Not Miami! Meanwhile, backtracking climate guru…

5. By The Lennart Bengtsson story | The IPCC Report on May 19, 2014 at 8:09 AM

[…] picked up by The Guardian, but neither her statement nor the review report justifies this claim. Steve McIntyre highlights this inconsistency, while Jeff Id ridicules the reviewer’s suggestion that climate […]

6. By Steyn: The Descent of Mann | askmarion on May 20, 2014 at 7:02 PM

[…] As Steve McIntyre concludes his analysis: […]

7. By Friday morning round-up and Open Thread on May 23, 2014 at 2:49 PM

[…] magical moment when you figure out that “climate science” is a faith based institution: when a leading light in the field insists that it’s an error for people to expect any […]

8. […] climatologists for daring to advise a skeptic group & insist that models be based on observations. Even warmists are shocked. But dead silence about him here, in our close-the-ranks Swedish […]