Two Minutes to Midnight

There is much in the news about how IPCC will handle the growing discrepancy between models and observations – long an issue at skeptic blogs. According to BBC News, a Dutch participant says that “governments are demanding a clear explanation” of the discrepancy. On the other hand, Der Spiegel reports:

German ministries insist that it is important not to detract from the effectiveness of climate change warnings by discussing the past 15 years’ lack of global warming. Doing so, they say, would result in a loss of the support necessary for pursuing rigorous climate policies.

According to Der Spiegel (h/t Judy Curry), Joachim Marotzke has promised that the IPCC will “address this subject head-on”. Troublingly, Marotzke felt it necessary to add that “climate researchers have an obligation not to environmental policy but to the truth”.

Unfortunately, as Judy Curry recently observed, it is now two minutes to midnight in the IPCC timetable. It is now far too late to attempt to craft an assessment of a complicated issue.

Efforts to craft an assessment on the run are further complicated by past failures and neglect, both by IPCC and by the wider climate science community. In the two draft reports sent out for external scientific review, IPCC mostly evaded the problem; such perfunctory assessment of the developing discrepancy between models and observations as it did offer included major errors and misrepresentations, all tending in the direction of minimizing the issue.

IPCC has a further dilemma in coopering up an assessment on the run. Although the topic is obviously an important one, it received negligible coverage in academic literature, especially prior to the IPCC publication cutoff date, and the few relevant peer-reviewed articles (e.g. Easterling and Wehner 2009; Knight et al 2009) are unconvincing.

The IPCC assessment has also been compromised by gatekeeping by fellow-traveler journal editors, who have routinely rejected skeptic articles on the discrepancy between models and observations, or articles pointing out the weaknesses of the papers now relied upon by IPCC. Despite exposure of these practices in Climategate, little has changed. Had the skeptic articles been published (as they ought to have been), the resulting debate would have been more robust and IPCC would have had more to draw on in its present assessment dilemma. As it is, IPCC is surely in a well-earned quandary.

Interested readers should also consult Lucia’s recent post, which comments on leaked IPCC draft material. Lucia’s diagnosis of IPCC’s quandary is very similar to mine. She also uses boxplots.

IPCC Statements

First, I’ll briefly review how IPCC’s position on the discrepancy has developed.

The First Order Draft stated (chapter 1):

The [temperature] observations through 2010 fall within the upper range of the TAR projections (IPCC, 2001) and roughly in the middle of the AR4 model results.

This assertion was flat-out untrue. Their Figure 1.4 (see below), which purported to support this claim, was not derived from peer reviewed literature and was botched. They misplaced observations relative to AR4 model projections (presumably due to an error in transposing reference periods).

figure 1.4 fod models vs observations annotated
Figure 1. IPCC AR5 First Draft Figure 1.4. The brown wedge purports to show AR4 projections. HadCRUT4 values have been overplotted in yellow (and, without amendment, correspond to the black squares plotted by IPCC) and appear to support IPCC’s summary. However, IPCC mislocated the AR4 and other projections. The red arrows show the actual AR4 envelope for 2005, 2010 and 2015 (digitized from the original AR4 diagram). Observations are outside the properly plotted envelope.
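The suspected mechanics of such an error are easy to reproduce. The sketch below (Python, entirely synthetic data – not the actual HadCRUT4 or AR4 series) shows that anomalies computed against two different reference periods differ by a constant vertical offset, so plotting observations on one baseline against projections on another displaces the whole series:

```python
import numpy as np

# Illustrative sketch with synthetic data (not the actual HadCRUT4 record):
# anomalies computed against different reference periods differ by a
# constant vertical offset.
rng = np.random.default_rng(0)
years = np.arange(1961, 2011)
temps = 0.015 * (years - 1961) + rng.normal(0, 0.1, years.size)

def anomalies(values, yrs, ref_start, ref_end):
    """Re-express a series as anomalies from its mean over a reference period."""
    ref = values[(yrs >= ref_start) & (yrs <= ref_end)].mean()
    return values - ref

a_6190 = anomalies(temps, years, 1961, 1990)  # baselined to 1961-1990
a_8099 = anomalies(temps, years, 1980, 1999)  # baselined to 1980-1999

# The two versions have identical shape but are displaced by a constant:
offset = a_8099 - a_6190
print(offset.max() - offset.min())  # 0 (up to rounding): a pure vertical shift
```

If one series were rebaselined and the other not, every point would be displaced by this constant, which is consistent with the wholesale misplacement visible in Figure 1.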

While the First Draft was a “draft”, the error nonetheless passed IPCC’s own internal review process. The error also went in a “favorable” direction. In the Second Draft, IPCC (chapter 1) re-iterated the assertion that observations were “in the middle” of projections:

the globally-averaged surface temperatures are well within the uncertainty range of all previous IPCC projections, and generally are in the middle of the scenario ranges.

However, their revised Figure 1.4 directly contradicted their claim. Observations since 2007, including the most recent ones, were now outside the AR4 envelope, as shown below.

figure 1.4 models vs observations annotated
Figure 2. IPCC AR5 Second Draft Figure 1.4 with annotations: red squares are 2012 and 2013 (to date) HadCRUT4. The orange wedge illustrates combined AR4 A1B-A1T projections. The yellow arrows show verified confidence intervals in 2005, 2010 and 2015 digitized from the original AR4 diagram (Figure 10.26) for A1B. Observed values have been outside the AR4 envelope for all but one year since publication of AR4. IPCC authors added a grey envelope around the AR4 envelope, presumably to give rhetorical support for their false claim about models and observations; however, this envelope did not occur in AR4 or any peer reviewed literature.

In a recent article in National Post, Ross McKitrick pointed out the inconsistency between IPCC’s language and its graphic, acidly observing:

The IPCC must take everybody for fools. Its own graph shows that observed temperatures are not within the uncertainty range of projections; they have fallen below the bottom of the entire span.

Reiner Grundmann at Klimazweibel also recently drew attention to the discrepancy in this graphic (citing McKitrick).

SPM Draft, June 2013
The Summary for Policy Makers attached to the Second Draft avoided any discussion of the discrepancy between models and observations.

Presumably responding to demands that the discrepancy be addressed, the Government Draft in June 2013 added a lengthy section (Box 9.2) purporting to address the discrepancy between models and observations and the Summary for Policy Makers included two somewhat inconsistent discussions of this issue in connection with both chapter 9 (Evaluation of Climate Models) and chapter 10 (Detection and Attribution).

The chapter 10 summary attributed the discrepancy in “roughly equal measure” to internal variability and a reduced trend in radiative forcing due to recent volcanic activity and downward solar phase:

The observed reduction in warming trend over the period 1998–2012 as compared to the period 1951–2012, is due in roughly equal measure to a cooling contribution from internal variability and a reduced trend in radiative forcing (medium confidence). The reduced trend in radiative forcing is primarily due to volcanic eruptions and the downward phase of the current solar cycle. However, there is low confidence in quantifying the role of changes in radiative forcing in causing this reduced warming trend. {Box 9.2; 10.3.1; Box 10.2}

The chapter 9 summary also conceded the discrepancy, but attributed it “to a substantial degree” to natural variability, with “possible” contributions from forcing – mentioning aerosols as well as solar and volcanics – and, “in some models”, to too strong a response to greenhouse forcing:

Models do not generally reproduce the observed reduction in surface warming trend over the last 10-15 years. There is medium confidence that this difference between models and observations is to a substantial degree caused by unpredictable climate variability, with possible contributions from inadequacies in the solar, volcanic, and aerosol forcings used by the models and, in some models, from too strong a response to increasing greenhouse-gas forcing. {9.4.1, 10.3.1, 11.3.2; Box 9.2} [SPM – evaluation]

The IPCC Second Draft had cited four articles supposedly supporting the consistency of models and observations, three of which were also cited in the Government Draft (Mitchell et al 2012b GRL does not exist at GRL nor can an article by its title be located):

it is found that global temperature trends since 1998 are consistent with internal variability overlying the forced trends seen in climate model projections (Easterling and Wehner, 2009; Mitchell et al., 2012b); see also Figure 1.1, where differences between the observed and multimodel response of comparable duration occurred earlier. Liebmann et al. (2010) conclude that observed HadCRUT3 global mean temperature trends of 2-10 years ending in 2009 are not unusual in the context of the record since 1850. After removal of ENSO influence, Knight et al. (2009) concluded that observed global mean temperature changes over a range of periods to 2008 are within the 90% range of simulated temperature changes in HadCM3.

Both Easterling and Wehner 2009 and Knight et al 2009 had been severely criticized by Lucia in blog posts (see here, here, here, here and here). Lucia was sufficiently annoyed by the defects in Easterling and Wehner 2009 that she submitted a comment to GRL. Though her comment was accurate on all points, it was bench rejected by GRL (see retrospective here). Subsequently, Lucia was co-author of another submission on the discrepancy between models and observations (a group that ecumenically included both Pat Michaels and James Annan), but this too was rejected (see discussion at Judy Curry’s here).

The criticisms in both the Liljegren comment and the Michaels et al submission were valid at the time and remain valid today. Many of their criticisms surfaced recently in Fyfe et al 2013, though this did not rebut Easterling and Wehner 2009 or Knight et al 2009 as directly. Fyfe et al 2013 was not published until after the IPCC deadline and, thus, Easterling and Wehner 2009 and Knight et al 2009 remained unrebutted in academic journals and were essentially all that was in the cupboard for the IPCC assessment.

Ross and I had experienced something similar in our comment on Santer et al 2008, which was likewise rejected by the original journal (International Journal of Climatology.) A couple of years later, Ross managed to get much of this material into print as McKitrick et al 2010. However, in the meantime, Santer et al 2008 continued to be cited in assessment reports. As an ironic footnote to our earlier controversy, AR5 now cites McKitrick et al 2010 and concedes that the discrepancy between models and observations in the tropical troposphere is unresolved.

The Problem Re-stated
IPCC’s attempt in the Government Draft to frame the discrepancy between models and observations as due to “natural variability” is ultimately a statistical problem – never a strong point of IPCC authors. Further, as noted above, the statistical analysis in the Government Draft purporting to support “natural variability” is not drawn from previously published literature, but was developed within the chapter (despite frequent protestations that IPCC does not itself do research).

IPCC conceded in the Government Draft that there has been a 15-year “hiatus” (their term) in temperature increase, but asserted that “individual decades” of hiatus are also “exhibited” in climate models, during which time the “energy budget is balanced” by energy uptake in the deep ocean:

However, climate models exhibit individual decades of GMST trend hiatus even during a prolonged phase of energy uptake of the climate system (e. g., Figure 9.8, (Easterling and Wehner, 2009; Knight et al., 2009)), in which case the energy budget would be balanced by increasing subsurface-ocean heat uptake (Meehl et al., 2011; Guemas et al., 2013; Meehl et al., 2013a).

However, pointing to the deep ocean doesn’t actually resolve the discrepancy between models and observations, since, as Hans von Storch recently observed, climate models did not include this effect.

Among other things, there is evidence that the oceans have absorbed more heat than we initially calculated. Temperatures at depths greater than 700 meters (2,300 feet) appear to have increased more than ever before. The only unfortunate thing is that our simulations failed to predict this effect.

IPCC also asserted that similar hiatuses are “common” in the instrumental record:

15-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series (see [Figure 9.8] and also Section 2.4.3, Figure 2.20; Easterling and Wehner, 2009, Liebmann et al., 2010).

As shown below, there is indeed a lengthy “hiatus” in the 20th century record, stretching almost 40 years from the 1940s until 1980. However, IPCC is surely being a bit sly in saying that 15-year-long hiatus periods were “common” in the 20th century. It is far more reasonable to say that there was a steady temperature increase from the 19th century to the 1940s, followed by a 30-40 year hiatus, then a 30-year period of increase to the end of the century.

instrumental vs model
Figure 3. HadCRUT4 GLB (black) versus CMIP5 ensemble average (red). Note the lengthy hiatus from the 1940s to 1980. Also note the divergence between models and observations in the 21st century.

The simplest inspection of the above graphic also shows important differences between the present hiatus and the long hiatus between the 1940s and 1980. In the long earlier hiatus, the models ran cooler than observations, whereas the opposite is the case right now. The model ensemble has been running hot for about 14 years and counting. Despite assertions by climate scientists of the supposed statistical insignificance of the divergence, in fact, it is, so to speak, unprecedented: there is no corresponding period in which models ran hot for such an extended period. In most statistical circumstances, residuals that consistently run in one direction at the end of a sample give grounds for statistical concern and not reassurance.
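The statistical point can be illustrated with a toy simulation (Python; white-noise residuals are an assumption for illustration only – real model-minus-observation residuals are autocorrelated, which raises this probability somewhat but does not rescue it):

```python
import numpy as np

# Toy illustration of the point about trailing residuals: if model-minus-
# observation residuals were independent with no systematic bias, how often
# would the last k of them all share the same sign?
rng = np.random.default_rng(1)
k = 14            # length of the trailing one-signed run
n_trials = 100_000

resid = rng.normal(size=(n_trials, k))
same_sign = np.all(resid > 0, axis=1) | np.all(resid < 0, axis=1)
p_hat = same_sign.mean()

print(p_hat)  # close to the exact value 2 * 0.5**14, about 1.2e-4
```

Under independence the chance that the last k residuals all share a sign is 2 × (1/2)^k – about one in eight thousand for k = 14 – which is why a trailing one-signed run is grounds for concern rather than reassurance.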

The suddenly-fashionable attribution of the present hiatus to unmodeled energy accumulation in the deep ocean also invites questions about the earlier hiatus, which the climate “community” conventionally attributes to aerosols. There is no independent record of historical aerosol levels, which (e.g. the prominent GISS series by Hansen’s group) have primarily been developed by climate modelers seeking to explain the long hiatus. Skeptics have long argued that aerosol histories have been used as a sort of deus ex machina to paper over excessively sensitive climate models.

Once again, IPCC invoked both “volcanic” and “aerosol” forcing as possible contributors for the present hiatus, but one feels that these efforts were somewhat half-hearted, though they did make their way to the SPM. The failure of IPCC scientists to draw attention in real time to the supposedly responsible volcanic events inevitably compromises any attempts to do so after the fact.

Thus, the sudden interest in positing energy accumulation in the deep ocean.

However, if the present hiatus is attributed to an unmodeled accumulation of energy in the deep ocean, how do we know that something similar didn’t happen during the long earlier hiatus? Could some portion of the earlier hiatus be due to deep ocean accumulation as opposed to aerosols? It’s a big door that’s being opened.

Opening the door also opens up questions about the potential length of the present hiatus. If unmodeled deep ocean processes are involved, how can we say with any certainty that the present hiatus won’t extend for 30-40 years?

Boxplots
In the (new) Box 9.2 of the Government Draft, IPCC conceded that recent 15-year observations run below models, but argued that 15-year trends ending with the big 1998 El Nino undershoot models.

an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006-2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realisations show a GMST trend over 1998-2012 that is higher than the entire HadCRUT4 trend ensemble …
During the 15-year period beginning in 1998, the ensemble of HadCRUT4 GMST trends lies below almost all model-simulated trends whereas during the 15-year period ending in 1998, it lies above 93 out of 114 modelled trends.

They then assert that models and observations cohere over the 62-year period from 1951-2012, concluding that there is therefore “very high confidence” in the models and that the 15-year discrepancies are mere fluctuations with the 1998 El Nino skewing recent comparisons:

Over the 62-year period 1951–2012, observed and CMIP5 ensemble-mean trend agree to within 0.02 ºC per decade (Box 9.2 Figure 1c; CMIP5 ensemble-mean trend 0.13°C per decade). There is hence very high confidence that the CMIP5 models show long-term GMST trends consistent with observations, despite the disagreement over the most recent 15-year period. Due to internal climate variability, in any given 15-year period the observed GMST trend sometimes lies near one end of a model ensemble, an effect that is pronounced in Box 9.2, Figure 1a,b since GMST was influenced by a very strong El Niño event in 1998.

None of the above analysis by IPCC appears in peer reviewed literature. It is ad hoc analysis that can and should be parsed. Demonstrating that 15-year trend comparisons can yield inconsistent results does not remotely settle the statistical question of models running too hot that is evident in the opening graph. Indeed, it is little more than a debating trick.

The following graph compares models to observations over the period 1979-2013, long enough to place the 1998 El Nino in the middle, but excluding the earlier hiatus of the 1950s and 1960s. 1979 is also when the satellite record commences. The figure is a standard box-and-whiskers diagram of a type routinely used in statistics (rather than some ad hoc method). I’ve shown models with multiple runs as separate boxes and grouped models with singleton runs together. On the right in orange, I’ve done a separate box-and-whisker plot for all models. (Lucia has recently done plots in a similar style: her results look similar, but I haven’t parsed them yet as I’ve been working on this post.)

The figure shows that nearly every run of every model ran too hot over the 1979-2013 period, with many models running substantially too hot. The discrepancy can be seen with box-and-whiskers of the ensemble, but it pervades all models.

boxplot_GLB_tas_1979-2013
Figure 4. Boxplot of GLB temperature (tas) trends (1979-2013) from 109 CMIP5 RCP4.5 model runs versus HadCRUT4.

The boxplot shows fundamental discrepancies that pervade all models. Nor do these inconsistencies have anything to do with 15-year trends or the 1998 El Nino. IPCC’s entire discussion of 15-year trends is completely worthless.
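For readers who want to experiment, here is a minimal sketch of the calculation behind a figure of this kind (Python, with synthetic model runs and synthetic “observations” standing in for the actual CMIP5 archive and HadCRUT4): fit an OLS trend to each run over 1979–2013 and compare the distribution of model trends to the observed trend.

```python
import numpy as np

# Hedged sketch with synthetic data (not the actual CMIP5 runs or HadCRUT4):
# OLS trend per model run over 1979-2013, compared to an "observed" trend.
rng = np.random.default_rng(2)
years = np.arange(1979, 2014)
n_runs = 109

# Synthetic ensemble warming at ~0.25 C/decade; synthetic observations
# warming at ~0.16 C/decade, both with year-to-year noise.
runs = 0.025 * (years - 1979)[None, :] + rng.normal(0, 0.08, (n_runs, years.size))
obs = 0.016 * (years - 1979) + rng.normal(0, 0.08, years.size)

def decadal_trend(series):
    """OLS slope, expressed in degrees per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

model_trends = np.array([decadal_trend(r) for r in runs])
obs_trend = decadal_trend(obs)
frac_hotter = (model_trends > obs_trend).mean()
print(frac_hotter)  # nearly all the synthetic runs trend hotter than "observations"
```

A call to matplotlib’s boxplot on model_trends, with a horizontal line at obs_trend, reproduces the style of Figure 4; with real runs, grouping trends by model gives the per-model boxes.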

Hiatuses in a Warming World
One final figure demonstrating the problem.

As noted above, IPCC (and others) have observed that hiatuses occur from time to time in climate models but didn’t disclose the scarcity of hiatuses of the length of the present negative trend (13 years from 2001 to 2013, a period that does not include the 1998 El Nino).

To assess this (varying a form of analysis that Lucia has used), I calculated all 13-year trends for all 109 CMIP5 RCP4.5 models presently at KNMI for the warming period 2005-2050, yielding a population of 3379 trends (109 models * 31 starting years). Only 0.5% of the population were negative (19 of 3379) and only 0.3% (10 of 3379) were lower than the slightly negative observed trend.

boxplot 13-year CMIP trends

So while it is true that 13-year hiatuses occur from time to time in CMIP5 models of a future warming world, they are statistically rather scarce. Given this scarcity, no one can attribute the present hiatus “with medium confidence” to “natural variability”; whatever the ultimate explanation of the hiatus, IPCC’s attribution is merely wishful thinking.
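The census is straightforward to replicate in outline (Python; synthetic warming runs standing in for the KNMI CMIP5 RCP4.5 archive, so the exact counts will differ from the 19-of-3379 figure in the text):

```python
import numpy as np

# Sketch of the 13-year trend census with synthetic data: compute every
# consecutive 13-year OLS trend in every run over a warming period and
# count how many are negative.
rng = np.random.default_rng(3)
n_runs, n_years, window = 109, 46, 13          # 2005-2050 inclusive
years = np.arange(2005, 2005 + n_years)

# Synthetic runs warming at ~0.2 C/decade with year-to-year noise.
runs = 0.02 * (years - 2005)[None, :] + rng.normal(0, 0.12, (n_runs, n_years))

def window_trends(series):
    """OLS slopes over every consecutive 13-year window of one run."""
    x = np.arange(window)
    return [np.polyfit(x, series[s:s + window], 1)[0]
            for s in range(n_years - window + 1)]

trends = np.concatenate([window_trends(r) for r in runs])
frac_negative = (trends < 0).mean()
print(trends.size, frac_negative)  # negative 13-year trends are scarce
```

Even in these made-up warming runs, only a small percentage of 13-year windows show a negative trend, which is the shape of the argument above.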

Tropical Troposphere
While recent discussion of the discrepancy between models and observations has focused on global surface temperature, the discrepancy between models and observations was first raised in connection with the tropical troposphere, where the discrepancy is even stronger.

In this earlier controversy as well, IPCC and other assessments (e.g. the US CCSP) placed far too much credence in pettifogging arguments by Santer and coauthors that the discrepancy was not “statistically significant”, arguments that were untrue at the time, but which have gone even further offside with the passage of time.

The IPCC has now unequivocally conceded the discrepancy, even citing McKitrick et al 2010 (though not without taking an unwarranted sideswipe at us). In the Second Draft, the IPCC said that explanation of the discrepancy was “elusive”. The new draft refrains from the word “elusive”, but concedes the over-estimate, noting that much of the over-estimate arises from an over-estimate of tropical ocean SST propagated upwards.

In summary, most, though not all, CMIP3 and CMIP5 models overestimate the observed warming trend in
the tropical troposphere during the satellite period 1979–2012. Roughly one-half to two-thirds of this difference from the observed trend is due to an overestimate of the SST trend, which is propagated upward because models attempt to maintain static stability.

The inconsistency between models and observations for tropical SST is even stronger than for global temperature – casting further doubt on IPCC’s attribution of the global inconsistency to “natural variability”. In addition, even IPCC’s seemingly broad concession somewhat understates the problem, as all (not “most”) CMIP5 RCP8.5 runs and models run too hot, as shown in the following boxplot:

boxplot_TRP_tlt_1979-2013
Figure 5. Boxplot of TRP TLT trends 1979-2013 for CMIP5 RCP8.5 models. Here I’ve used John Christy’s collation of CMIP5 runs. Christy collated RCP8.5 because that was used in Santer et al 2013. The historic portion of RCP8.5 and RCP4.5 is very similar. It is possible that a couple of RCP4.5 runs will yield lower trends, but the overwhelming point will remain.

Although IPCC largely conceded the discrepancy, they couldn’t help taking a thoroughly unwarranted sideswipe at McKitrick et al 2010, stating:

The very high significance levels of model–observation discrepancies in LT and MT trends that were obtained in some studies (e.g., Douglass et al., 2008; McKitrick et al., 2010) thus arose to a substantial degree from using the standard error of the model ensemble mean as a measure of uncertainty, instead of the ensemble standard deviation or some other appropriate measure for uncertainty arising from internal climate variability.

The very high levels of significance observed in McKitrick et al 2010 occurred because there were very high levels of significance, not because of the use of “inappropriate” statistics. Indeed, as observed in our rejected submission, had Santer et al 2008 used up-to-date data, their own method would have demonstrated the “very high significance levels” that IPCC objects to here.

Conclusion
No credence should be given to IPCC’s last-minute attribution of the discrepancy to “natural variability”. IPCC’s ad hoc analysis purporting to support this claim does not stand up to the light of day.

Gavin Schmidt excused IPCC’s failure to squarely address the discrepancy between models and observations, saying that it was “just ridiculous” to expect IPCC to be “up to date”:

The idea that IPCC needs to be up to date on what was written last week is just ridiculous.

But the problem did not arise “last week”. While the issue has only recently become acute, it has become acute because of accumulating failure during the AR5 assessment process, including errors and misrepresentations by IPCC in the assessments sent out for external review; the almost total failure of the academic climate community to address the discrepancy; and gatekeeping by fellow-traveling journal editors that suppressed criticism of the defects in the limited academic literature on the topic.

Whatever the ultimate scientific explanation for the pause and its implications for the apparent discrepancy between models and observations, policy-makers must feel very let down by the failure of IPCC and its contributing academic community to adequately address an issue that is critical to them and to the public.

That academics (e.g. Fyfe et al here; von Storch here) have finally begun to touch on the problem, but only after the IPCC deadline, must surely add to their frustration. Von Storch neatly summarized the problem and calmly (as he does well) set it out as an important topic of ongoing research, but any investor in the climate research process must surely wonder why this wasn’t brought up six years ago in the scoping of the AR5 report.

One cannot help but wonder whether WG1 Chair Thomas Stocker might not have served the policy community better by spending more time ensuring that the discrepancy between models and observations was properly addressed in the IPCC draft reports, perhaps even highlighting research problems while there was time in the process, than figuring out how IPCC could evade FOI requests.

152 Comments

  1. Posted Sep 24, 2013 at 2:24 PM | Permalink

    Regarding Figure 1.4, I hadn’t looked at it in the First Draft, but I don’t think the error was due to transposing the reference periods. It looks to me like they mistakenly constrained the models to start in line with the 1990 value of the smoothed series rather than the actual series, and the smoothed series value lies below the 1990 observation. Either way, I conjecture that whoever fixed that error was different from the person who wrote the description in the text, which is why the text wasn’t updated. Also, when the graph was fixed, whoever drew it probably saw how badly it looked for them and tried to dilute the effect by adding the gray shading, in the hopes that careless readers will assume that’s part of the model spread. The gray shading is another example of ad hoc invention-on-the-fly in IPCC reports. The description in the text is incomprehensible and it doesn’t appear in any other graphs of model forecasts, where narrowness of the spread is seen as adding credibility.

    • Posted Sep 24, 2013 at 2:38 PM | Permalink

      Steve that is an unnecessary snip and if I so misunderstood your intention in the article that you feel you have to censor them then I suggest you remove the entire comments as I am clearly not qualified to say it is a great article.

Steve: I think that you’re being too chippy here. I prefer that comments stay focused on the topic of the thread. I snipped for editorial reasons, not because anything was being “censored”. Reasonable people can disagree on such things.

      • Posted Sep 25, 2013 at 8:51 AM | Permalink

        In response to your snip and a few other helpful comments, I have written this up into an article:

        “Climate: UNKNOWNS are greater than KNOWNS”

        http://scottishsceptic.wordpress.com/2013/09/25/climate-unknowns-are-greater-than-knowns/

        The conclusion of which is:

        “Because the statistics, the logic and plain common sense tells us that man-made warming is smaller than “natural variation” or the “unknowns” that caused the climate models to fail, we can confidently say:

        It is unlikely that human influence on climate caused more than half of the increase in global average surface temperature from 1951-2010.

        No rational person could support the previous IPCC’s report where they said it was “very likely” that the majority of warming is man-made but to increase this confidence in the face of the failed predictions and the statistical tests that say otherwise is delusional.”

  2. Posted Sep 24, 2013 at 2:31 PM | Permalink

    In the SOD they also took a gratuitous swipe at our 2010 paper by saying we only found what we did because we averaged across all the models. My review comment in response was:

    [Page 27] This statement seems to insinuate that the studies finding a statistically significant discrepancy between models and observations in the tropical troposphere did so by averaging across the observational series. But McKitrick et al. (2010; 2011-ref. in row 26) reported results for individual series as well as for multi-series averages. The discrepancy between models and observations is statistically significant either way: in the case of series averages and for every individual series as well. If you are going to mention the distinction then you need to mention these findings as well.

    So now they add, grudgingly, an admission that individual models are also out of line:

    The very high significance levels of model–observation discrepancies in LT and MT trends that were obtained in some studies (e.g., Douglass et al., 2008; McKitrick et al., 2010) thus arose to a substantial degree from using the standard error of the model ensemble mean as a measure of uncertainty, instead of the ensemble standard deviation or some other appropriate measure for uncertainty arising from internal climate variability…Nevertheless, almost all model ensemble members show a warming trend in both LT and MT larger than observational estimates (McKitrick et al., 2010; Po-Chedley and Fu, 2012;Santer et al., 2013).

  3. kim
    Posted Sep 24, 2013 at 2:43 PM | Permalink

    Coopered but leaking,
    Green and rotten barrel staves.
    The Armada flees.
    ============

  4. Posted Sep 24, 2013 at 2:46 PM | Permalink

    In the SOD they tried to denigrate comparisons of the model means to observations by saying:

    Likewise, to properly represent internal variability, the full model ensemble spread must be used in a comparison against the observations, as is well known from ensemble weather forecasting (e.g. Raftery et al. 2005).

    My review comment was:

    [Page 27] The idea that observational trends should be compared to the extrema of model trends, rather than to the confidence interval around the mean of model trends, is statistically and methodologically incoherent. It is noteworthy that you have no supporting citations for this position. It amounts to a recommendation to engage in cherry-picking, and it is contradicted by your own methodologies elsewhere. You have already stated (p. 9_8) that a single model run can follow any one of many pathways. So to characterize the behaviour of a model you usually use ensemble means, presumably to draw out the underlying common aspects of the model runs in a forcing scenario. And you claim that the models are based on fundamental physical laws, implying that there is an underlying core theory common to the models. Presumably that core theory is revealed in the average behaviour across ensemble members. Yet here you say something different: that the proper way to compare models and observations is not to use the means but to use the endpoints of the full spread of the model runs. This is a bit too convenient: you can make that spread as wide as you like simply by adding more and more runs. Given enough runs, even from biased and incorrect models, eventually one will have a trend that coincides with the observed data. This proves nothing in a set-up where you can generate infinitely many model runs. It is not evidence in support of the “fundamental laws of nature” upon which the models are based, nor does it validate their parameterization and tuning, nor does it support anything to do with the models as a genre. All it says is that if you roll the dice often enough, eventually you get snake eyes. To say something about models as a group you have to test their average/common trend against the observational counterpart, which is precisely what McKitrick et al. (2010) does. Your argument, in effect, tries to have things both ways. 
You want to claim very high confidence in “the models” as a unified methodological entity or genre, but then you propose testing them as independent, atomistic single runs. Leaving aside the problem of cherry-picking, even if you got a perfect match between the observed trend and that from model run #1008, it tells you nothing whatsoever about the model as a scientific tool, because you are not testing the model, you are just testing a list of numbers that came out of it. If you want to draw a conclusion about the model, you have to treat it as a data generating process and test it as such, which means taking account of the distribution of what it produces, i.e. the moments. The paper you cite as supporting evidence for your method does not address the issues under discussion. In fact the logic of Bayesian Model Averaging goes completely counter to what you are proposing to do, since it is used to neutralize cherry-picking (or “model selection”) bias in situations where researchers can pick from an extremely large number of models.

    So they changed the statement to

    Likewise, to properly represent internal climate variability, the full model ensemble spread must be used in a
    comparison against the observations (e.g., Box 9.2; Section 11.2.3.2; Raftery et al. (2005); Wilks (2006);
    Jolliffe and Stephenson (2011)).

    i.e. they just added a few other citations, which probably don’t support their point either. And they still go on to claim “increased confidence in the models” while applying a testing rule that cannot, by definition, support the models.
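    Ross’s widening-spread point can be made concrete with a toy simulation (a sketch with invented numbers: the trend values, run spread, and run counts below are hypothetical illustrations, not taken from CMIP5). Draw runs from a deliberately biased model and watch the min-to-max envelope eventually cover the observed trend, while a test against the ensemble mean still rejects it.

```python
import numpy as np

rng = np.random.default_rng(0)

obs_trend  = 0.08   # hypothetical observed trend (deg C/decade)
model_mean = 0.20   # hypothetical biased model-mean trend
run_sd     = 0.06   # spread of individual runs around the model mean

def spread_vs_mean_test(n_runs):
    """Compare two acceptance rules applied to the same biased ensemble."""
    runs = rng.normal(model_mean, run_sd, n_runs)
    # Rule 1 (full-spread rule): obs "passes" if inside the min-max envelope
    in_envelope = runs.min() <= obs_trend <= runs.max()
    # Rule 2: obs tested against the 95% CI of the ensemble-mean trend
    ci_halfwidth = 1.96 * runs.std(ddof=1) / np.sqrt(n_runs)
    in_mean_ci = abs(obs_trend - runs.mean()) <= ci_halfwidth
    return bool(in_envelope), bool(in_mean_ci)

for n in (5, 100, 100_000):
    print(n, spread_vs_mean_test(n))
```

    With enough runs the envelope rule passes the biased model essentially always, while the confidence interval around the ensemble mean tightens as 1/√n and rejects it, which is the sense in which, as Ross says, the envelope comparison proves nothing.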

    • johanna
      Posted Sep 24, 2013 at 6:21 PM | Permalink

      Thanks for this explanation, Ross. It sounds like they are using the modelling equivalent of data dredging. Since we still regularly see research based on data dredging published in journals and publicised in the media, I guess it is not entirely surprising. But kudos for calling them on it.

      • johanna
        Posted Sep 25, 2013 at 12:18 PM | Permalink

        Thinking further, it might be worth you getting in touch with Steve Milloy, who has forgotten more about data dredging than anyone else knows.

        He has posted your stuff over the years and obviously respects it.

        Data dredging and what you have described are much of a muchness, and equally to be deplored.

    • johnfpittman
      Posted Sep 26, 2013 at 7:05 AM | Permalink

      Ross, doesn’t the claim:

      Likewise, to properly represent internal variability, the full model ensemble spread must be used in a comparison against the observations, as is well known from ensemble weather forecasting (e.g. Raftery et al. 2005).

      invalidate the reasoning that they used to claim there was not a need to answer Kreiss and Browning’s criticisms?

    • Kenneth Fritsch
      Posted Sep 26, 2013 at 11:40 AM | Permalink

      “Likewise, to properly represent internal variability, the full model ensemble spread must be used in a comparison against the observations, as is well known from ensemble weather forecasting (e.g. Raftery et al. 2005).”

      If the realizations from multiple runs of the same model are different only by the small variations in initial conditions used then one model realization is as likely as another, i.e. there is no central tendency of the mean of the model runs. Whether that translates to the trends arising from those realizations is another matter and one I have been attempting to resolve in my own mind. If the trends did follow one on one to the realizations, the mean of trends would not have a central tendency and one trend calculated would be as likely as another from another run of the same model. Under those assumptions a range of trends would be properly compared to the observed trends and the observed trend would be simply one of many possible given the small differences in initial conditions required to change the trend.

      Supposedly the chaotic nature of climate should affect trends over a decade or two, but I have found a large range of trends within the CMIP5 historical model runs in the only 2 models run with as many as 10 realizations over the period 1964-2005.

      Following this approach to its conclusion would perhaps show that the observed temperature trends were within the (lower) range of the trends, over extended periods of time, as determined from the models, but that the model range was so wide that its usefulness in making policy and predictions would be greatly diminished. Since the source of this uncertainty would be the chaotic nature of climate there would be no fixing it (reducing the uncertainty limits) by producing better models.
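      Kenneth’s point about the run-to-run spread of trends can be mimicked with a toy model (all parameters here are invented for illustration, not fitted to CMIP5): give every pseudo-run the same forced trend plus independent AR(1) “internal variability”, and look at how widely the realized OLS trends scatter.

```python
import numpy as np

rng = np.random.default_rng(42)

n_years, n_runs = 42, 200     # a 1964-2005-length window, many pseudo-runs
forced_trend = 0.015          # shared "forced" trend, deg C/yr (assumed)
phi, sigma = 0.6, 0.1         # AR(1) persistence and innovation sd (assumed)

t = np.arange(n_years)
trends = np.empty(n_runs)
for i in range(n_runs):
    noise = np.empty(n_years)
    noise[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))  # stationary start
    for k in range(1, n_years):
        noise[k] = phi * noise[k - 1] + rng.normal(0.0, sigma)
    trends[i] = np.polyfit(t, forced_trend * t + noise, 1)[0]  # OLS trend

print(trends.mean(), trends.std(), trends.max() - trends.min())
```

      In this toy setup the realized trends scatter widely from run to run, yet still cluster around the shared forced value, so the ensemble-mean trend remains a meaningful summary even when any single run is as likely as any other.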

      • Kenneth Fritsch
        Posted Sep 29, 2013 at 6:13 PM | Permalink

        I think I can show good evidence that the distribution of trends from 10 runs each of 2 models from the CMIP5 Historical model runs for the period 1964-2005 comes from a normal distribution. That would be in line with what I think Ross McKitrick and Lucia have assumed. I summarize my analysis at Lucia’s Blackboard.

        http://rankexploits.com/musings/2013/leaked-chapter-9-ar5-musings/#comment-119852

  5. jstorrshall
    Posted Sep 24, 2013 at 2:50 PM | Permalink

    “(e.g. Fyfe et al here; von Storch here)”

    links missing.

  6. Posted Sep 24, 2013 at 2:51 PM | Permalink

    My SOD review response to their comment about the methodological error of using the model mean was

    [Page 27] This sentence makes it sound like the model-observation discrepancies were only found due to an improper uncertainty metric. No, the uncertainty metric was correct. The discrepancies were found because the models over-predict warming in the tropical troposphere, and robust trend estimators indicate that the difference is statistically highly significant, so that the models on average predict a trend that is significantly higher than any individual observational series or all observational series averaged together. You cite no published papers in support of your claim that a better method would use “the standard deviation or some other appropriate measure of ensemble spread.” In fact you can’t even say what alternative measure you would prefer! Much less do you cite a paper that argues for it and uses it. So you simply are not in a position to ignore or set aside the findings in the published literature. As stated above, if you are going to appeal to the spread of individual model runs then you have to abandon any claim to have validated the models as a genre, or as a collective embodiment of the core theory of how the climate works. To the extent you want readers to think of climate models as a single collective genre, as represented by a core set of processes “based on fundamental laws of nature” it makes sense to talk about the behaviour of the average of model runs. And, as you point out, on that basis the model-observation discrepancies are very highly significant.

    Since they didn’t change their wording at all, I expect that when they release the author responses to expert review comments, the reply to this one will turn out to be something like “Rejected”.
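    The “robust trend estimators” mentioned in the review comment can be sketched with Theil–Sen slopes on synthetic series (the trend values and noise levels here are invented, chosen only so that the model-mean series warms much faster than the noisy observed one; this is not the actual calculation from the review):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.arange(34, dtype=float)                    # a 34-year window
obs   = 0.010 * t + rng.normal(0, 0.10, t.size)   # noisy "observed" series
model = 0.027 * t + rng.normal(0, 0.04, t.size)   # smoother "model-mean" series

s_obs, _, lo_obs, hi_obs = stats.theilslopes(obs, t)    # robust slope, 95% CI
s_mod, _, lo_mod, hi_mod = stats.theilslopes(model, t)

print(f"obs slope   {s_obs:.4f}  CI [{lo_obs:.4f}, {hi_obs:.4f}]")
print(f"model slope {s_mod:.4f}  CI [{lo_mod:.4f}, {hi_mod:.4f}]")
print("intervals disjoint:", hi_obs < lo_mod)
```

    Disjoint confidence intervals mean the model-mean trend is significantly above the observed one under any reasonable metric, which is the substance of the comment: the discrepancy does not hinge on the choice of uncertainty measure.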

  7. Posted Sep 24, 2013 at 3:30 PM | Permalink

    The suddenly-fashionable attribution of the present hiatus to unmodeled energy accumulation in the deep ocean also invites questions about the earlier hiatus, which the climate “community” conventionally attributes to aerosols.

    For me it invites questions about their basic understanding of thermodynamics. How could heat possibly accumulate in the deep oceans by flowing against a temperature gradient? The number of states in which heat flows upwards exceeds those in which it flows downwards by a factor >10**(10**100). It is unimaginably improbable for heat to flow downwards.

    Quindi è ridicolo! (So it’s ridiculous!)

    • kim
      Posted Sep 24, 2013 at 3:47 PM | Permalink

      It’s an occult explanation, and the last best hope of the alarmists.
      =====================

    • Craig Loehle
      Posted Sep 24, 2013 at 4:14 PM | Permalink

      Not meaning to give them an out, but if normally the sinking polar water is x degrees and now it is x+1 degrees (but still colder than the water it is sinking into) then heat will be transported to depth. Also, there are ocean vortices that mix surface water to depth and these could transport heat. The complaint that this is impossible is not true, but it is too convenient since no one has good data to prove or refute it. It is more plausible that the models are running hot, that the aerosol data is botched, that the TOA data has issues etc etc because we are trying to estimate highly spatially variable values for all global quantities with very few measurements.

      • DocMartyn
        Posted Sep 24, 2013 at 4:33 PM | Permalink

        Craig, the 300 to 700m layer is warming more rapidly than the 0-300m layer according to Trenberth.

        my blow up of the end of the plot.

        • Craig Loehle
          Posted Sep 24, 2013 at 5:39 PM | Permalink

          note that the 300 to 700m layer is 30% bigger than the 0-300 m layer and the plot is in joules, not deg C. More mixing would accomplish this graph. BUT as Willis has pointed out these are very very small actual changes in temperature with unknown uncertainties (but no doubt larger than they admit).

        • DocMartyn
          Posted Sep 24, 2013 at 8:09 PM | Permalink

          Craig, I do not think that they used an algorithm that accounted for coastal sloping as they were monitoring deep ocean.
          I may be wrong, but that was my reading.

      • Posted Sep 24, 2013 at 4:50 PM | Permalink

        Craig, you are describing the meridional overturning circulation, which as far as I know is mostly driven by water of diluted salinity sinking at the north pole after mixing with fresh melt water. For the Pacific this circulation rises up at the equator, but in the Atlantic it can pass through to Antarctica.

        However, I don’t see this as being so different from convection in the atmosphere. With constant solar radiation the lapse rate ensures that temperature always falls with height.

      • ianl8888
        Posted Sep 24, 2013 at 5:59 PM | Permalink

        Then why have the Argo buoys not recorded this sinking hotter water mass?

        This is a serious question – the claim of (x+1) degrees of some huge, heated water mass sinking to the depths without being noticed is absolutely screwy to me

        • tty
          Posted Sep 25, 2013 at 6:17 PM | Permalink

          It is actually possible though unlikely. The Antarctic Bottom Water is mostly created beneath the ice in The Weddell sea. If this water becomes slightly warmer and a lot saltier it could still sink and displace slightly colder and much less salty water. Conveniently this would largely occur under the permanent sea ice in the Weddell sea, whither the Argo Buoys goeth not.

      • None
        Posted Sep 25, 2013 at 4:30 PM | Permalink

        “Not meaning to give them an out, but if normally the sinking polar water is x degrees and now it is x+1 degrees”

        Yes, whereas previously the ice melted to water at precisely 0 degrees, becoming denser and sinking in the surrounding water, it now melts at 1 degree instead.

        This damn AGW, now it’s increasing the melting point of water. Whatever next ?

        [Quite the opposite effect may be the case: as the area of ice decreases (as it has been doing since the last ice age), the quantity of ice melted by the ocean each year likely decreases; this would reduce a cooling effect on the ocean (entirely independent of AGW, obviously), possibly causing lower-level temperature rises]

        • None
          Posted Sep 25, 2013 at 4:31 PM | Permalink

          Er, melting point of ice, not water :-)

        • twr57
          Posted Sep 26, 2013 at 2:52 PM | Permalink

          But if it’s just melted, why would it sink? Maximum density of water is at +4C (which is one reason why ice forms on the surface). Also (I suppose) melted ice would contain less salt, and so be less dense, than the open sea. Transport of heat to the deep ocean in this way seems highly implausible.

      • beng
        Posted Sep 27, 2013 at 7:46 AM | Permalink

        Craig Loehle says:

        Not meaning to give them an out, but if normally the sinking polar water is x degrees and now it is x+1 degrees (but still colder than the water it is sinking into) then heat will be transported to depth.

        Craig, the x+1 water can’t sink unless the surface is warmer & less dense than it was previously. That should be detectable by SST measurements.

        • Craig Loehle
          Posted Sep 27, 2013 at 8:57 AM | Permalink

          There is unfortunately no data on sinking water and ocean circulation that enables this question to be answered. I was not arguing that they are right that the heat is going into the ocean (I think not), just that it is plausible enough to give them a much too convenient excuse. The heat rise in the ocean itself shows no uptick since 1997, which could account for a pause anyway.

    • Posted Sep 24, 2013 at 6:04 PM | Permalink

      Quindi è ridicolo!

      Or, in the official internal jargon of the climate “community”, as Steve nicely puts it, a travesty.

      Steve’s point is as always on the money. It’s two minutes to midnight and increasingly thin rabbits are having to be pulled out of the IPCC hat as governments begin to ask the right questions about the models and the pause. Even the UK state broadcaster felt the need to put out a video with Andrew Montford in it yesterday, very hopefully entitled UN ‘certain’ of climate change despite warming pause.

      On the right of that video is a sister report with quite a nice layman’s explanation of the Argo project. For the BBC’s David Shukman the favoured explanation is of more heat somehow being in the ocean. But I’m sure Steve’s right at this juncture not to dive into the thermodynamics but to reiterate with von Storch that the models didn’t predict that. Problem not solved.

      Let’s focus on the great inadequacy of the IPCC process and of the journals on which it relies, given that this discrepancy has been creeping up on us since before TAR. AR5 can’t look to peer-reviewed literature in order to answer questions from governments, because the relevant contributions were censored out by the journals. As Steve says, a well-earned quandary for this much-praised child of UNEP and UNFCCC.

  8. Energetic
    Posted Sep 24, 2013 at 3:37 PM | Permalink

    I think you got the name wrong; it is Jochem Marotzke:

    http://www.mpimet.mpg.de/en/staff/jochem-marotzke.html

  9. Craig Loehle
    Posted Sep 24, 2013 at 4:00 PM | Permalink

    Gatekeeping is also illustrated by reviewer responses to my paper, submitted in 2008:
    Loehle, C. 2009. Trend Analysis of Satellite Global Temperature Data. Energy & Environment 20: 1087-1098
    At the time, there were no published analyses of trends of different lengths of years for the UAH or RSS data, which is what I did. The reviewer complaints (can’t remember which journals) were that this was not ground-breaking basic science. But the world certainly did need to know what the trends over time looked like, and needs it even more now. It is like saying determining the unemployment rate or GNP or cancer mortality rate is not “basic science” and therefore not interesting–certain empirical quantities are needed and useful even if computing them is simple.
    The ludicrous idea of comparing the trends to the full model spread is a transparent attempt to make the models pass. The models are supposedly robust and dead-on, but we should compare only the outliers to data? Please.

  10. David Brewer
    Posted Sep 24, 2013 at 4:09 PM | Permalink

    “So while it is true that 13-year hiatuses occur from time to time in CMIP5 models of a future warming world, they are statistically rather scarce. Given this scarceness, no one can “with medium confidence” attribute the present hiatus to “natural variability” and, whatever the ultimate explanation of the hiatus, IPCC’s attribution “with medium confidence” to “natural variability” is merely wishful thinking.”

    Oh but you don’t get it Steve. Medium confidence just means this. They have no idea what caused the hiatus, but they have no explanation available other than natural variability. Therefore their confidence in this explanation is medium, i.e. half-way between zero and certainty.

    • Posted Sep 24, 2013 at 5:06 PM | Permalink

      Must say – that made me chuckle… very aptly described.

    • kneel
      Posted Sep 24, 2013 at 10:07 PM | Permalink

      ” Therefore their confidence in this explanation is medium, i.e. half-way between zero and certainty.”

      Oh – I thought they got their confidence from someone in the afterlife, hence “medium” confidence :-0

  11. DocMartyn
    Posted Sep 24, 2013 at 4:24 PM | Permalink

    Is there any way one can use the AR4 A1B figure with the HadCRUT3 and HadCRUT4 temperature series?
    I don’t doubt you, but find it hard to believe that the IPCC would use an unrepresentative figure that could be so easily spotted by investigators such as yourself.
    Lord Lawson is calling for a Public Inquiry and I am sure that a pair of graphics, with overlays, would be much more persuasive than your added double-headed arrows.

    • Posted Sep 24, 2013 at 5:23 PM | Permalink

      Do you mean this, from yesterday?

      GWPF chairman Lord Lawson is calling for an independent panel of climate scientists and statisticians to review the UKCP09 predictions.

      That as I understand it is a separate matter uncovered by Nic Lewis and written up by Andrew Montford for the GWPF. It’s not to do with the IPCC and AR5 but the UK Met Office forecasts for the UK government. It does however involve a GCM running hot. Hard though that may be to believe.

      • DocMartyn
        Posted Sep 24, 2013 at 8:11 PM | Permalink

        He has been angling for a Public Inquiry since 2009, he could get one too.

        • Posted Sep 25, 2013 at 12:58 AM | Permalink

          Sorry to be picky, but a reference to substantiate that would help us to be precise.

          Lawson hasn’t been asking for the same thing since 2009. He argued, like Steve, for a public inquiry into Climategate in 2009, which happened to coincide with the formation of the GWPF, or, failing that, as much openness as possible in what was done. Instead we had the false promise of Russell and Oxburgh leading, among other things, to the Commons select committee on science and technology dialing down its own inquiry. Lawson was surely proved right but he and the rest of us were immediately told to ‘move on’.

          What he’s asking for now is different and concerns something different. He should in my view be listened to all the more carefully because of what happened last time around.

  12. Follow the Money
    Posted Sep 24, 2013 at 5:20 PM | Permalink

    I myself am looking forward to discovering discrepancies between the assessments and their complementary executive summaries. History suggests there will be plenty. The previous complaints of the IPCC assessment “scientists” about the professionally-misleading summaries were like listening to bicycle thieves complaining about bank robbers.

  13. Posted Sep 24, 2013 at 5:28 PM | Permalink

    Thanks, Steve.

    The IPCC has certainly painted itself into a corner. Not too surprising.

  14. Matt Skaggs
    Posted Sep 24, 2013 at 5:50 PM | Permalink

    “Roughly one-half to two-thirds of this difference from the observed trend is due to an overestimate of the SST trend, which is propagated upward because models attempt to maintain static stability.”

    With no reference, this reads like a “just so” story.

    • Craig Loehle
      Posted Sep 24, 2013 at 7:01 PM | Permalink

      Not only that, but if the true SST trends are less than the data they use, doesn’t that mean warming is even further below the models? By trying to save the atmospheric part of the models, they throw the SST trend under the bus. When you are a drowning man, you mistake flailing for swimming.

    • Salamano
      Posted Sep 25, 2013 at 9:26 AM | Permalink

      Tamino has a post up about detecting significance in deviations from trends…

      http://tamino.wordpress.com/2013/09/21/double-standard/

      In particular, a response to a commenter on the best way to go about the statistical math:

      http://tamino.wordpress.com/2013/09/21/double-standard/#comment-83213

      I’m wondering what all the stat-folks think about it. I’m not a stat genius; it seemed reasonable to me.

      Steve: my post was about the discrepancy between models and observations, not the “statistical significance” of trends, and, in particular, assessing whether IPCC has supported its assertions. In some datasets, observed trends are about half the model trend. On such occasions, I’ve observed that if one wants to say that there is a statistically significant difference between models and observations, then one is equally obliged to say that there is a statistically significant trend. And vice versa. As to your question about my opinion of Tamino’s rant: it does not appear to be plagiarized.
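      Steve’s symmetry point reduces to simple arithmetic; a minimal sketch with invented numbers (observed trend at half the model trend, one shared standard error):

```python
obs_trend   = 0.10  # hypothetical observed trend
model_trend = 0.20  # hypothetical model trend, twice the observed
se          = 0.04  # hypothetical standard error of the observed trend

t_trend       = (obs_trend - 0.0) / se          # is the trend nonzero?
t_discrepancy = (model_trend - obs_trend) / se  # do models and obs differ?
print(t_trend, t_discrepancy)  # both 2.5: the two tests stand or fall together
```

      When the observed trend sits halfway between zero and the model trend, the two t-statistics are identical, so declaring the model-observation difference significant while calling the trend itself insignificant (or vice versa) is inconsistent.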

      • Posted Sep 25, 2013 at 12:42 PM | Permalink

        Salamano,
        The person who started giving Scott Supak flak on Twitter was me: I told him he needs to specify the method of establishing statistical significance; otherwise, there is no bet. Tamsin Edwards agreed. (Not surprising she agrees, as this is an obvious point, but possibly surprising she paid enough attention to the exchange to pipe in.)

        For history: it appears Pat Michaels said something during an interview, using a common idiom like “I would not be surprised if X” or “I bet X will happen” or something like that. Then Supak aggressively pursued turning this into a real-money bet with the claim that forcing a bet somehow “proves” Michaels believes what he says. But Supak does not seem to know anything about how one figures out 95% confidence intervals. And now he seems to be trying to learn which method of computing them would most favor him.

        On the one hand, that makes sense in terms of wanting to win the money. But it’s inappropriate if the purpose of the bet was to prove Michaels meant what he said. If being willing to place real cash money on one’s words were proof one believed what one said, then the bet still has to be structured to match what one said. And in the time since the quote was made, while Supak has been negotiating the bet, Michaels (whose ‘belief’ is evidently being ‘tested’ by Supak) posted uncertainty intervals computed using a method that is nothing like any Tamino discusses. It is more similar to the ones Steve is showing in this post, or the sorts in Fyfe or Easterling and Wehner.

        Also: for what it’s worth, if we used the method Tamino describes to test whether models agree with observations: they don’t. Not by a long shot. Not. At. All. Haven’t for a long time.

  15. R
    Posted Sep 24, 2013 at 7:55 PM | Permalink

    Well there are certain elements of this post that are going to become outdated with some convincing papers already on the way.

    Steve: even if you’re right, how would that change my size-up of IPCC’s present position? Even if there are some brilliant solutions in the works, one could reasonably argue that a more honest acknowledgement of the problem by IPCC might have led to earlier publication of these papers. The papers will be too late for the IPCC cutoff and won’t help AR5.

    • Skiphil
      Posted Sep 24, 2013 at 9:35 PM | Permalink

      oh my, what an effective rebuttal!! Hand-wave toward a vague assertion about unspecified “convincing papers” not named and not yet published. Oh yes, a helpful way to settle any contentious issue….

      • R
        Posted Sep 25, 2013 at 12:03 AM | Permalink

        Not everything is a rebuttal. The need to force an argument for no apparent reason is odd.

        • Posted Sep 25, 2013 at 1:05 AM | Permalink

          You haven’t answered Steve’s question. Please don’t feel I’m trying to force an argument in pointing this out.

        • MrPete
          Posted Sep 25, 2013 at 6:57 AM | Permalink

          Re: R (Sep 25 00:03),
          We’ve yet to have a balanced set of reports.
          Your faith is quite impressive, in believing that (the “better” data to be published at an unknown future date?) will represent the first honest analysis, after twenty years of biased analysis and process.

          In addition to the extremely low probability that models have been matching data, what Steve has demonstrated is the absolute certainty that the IPCC and mainstream climate science process to date has produced biased results.

          This is not a matter of a minor tweak or a bit more data. The overall scientific analysis process for this corner of science has clearly been gamed. It’s no surprise. You can still google to read IPPR’s early government-funded Warm Words PR strategy. It’s been followed closely by many.

          What will it take for a new generation of model results to be squarely centered on the new data that emerges? What will it take for a new generation of uncertainty bands to honestly encompass reality?

          Sure looks like major reforms would be a minimum. But that’s a policy discussion for another site.

    • HAS
      Posted Sep 25, 2013 at 3:25 AM | Permalink

      The denouement no doubt turns on the point that they convince us of.

      So far all I’ve been convinced of is that being knowingly obscure wastes bandwidth.

    • Carrick
      Posted Sep 25, 2013 at 12:09 PM | Permalink

      R:

      Well there are certain elements of this post that are going to become outdated with some convincing papers already on the way

      These papers have yet to see any public scrutiny. It’s worth reviewing this if their validity is accepted by the community, but a lot of papers that look good coming out of the gate end up getting run over by the bus once they make it to print.

      • Craig Loehle
        Posted Sep 25, 2013 at 12:22 PM | Permalink

        With all the throwing under the bus going on (including self-inflicted) there will be a need for lots of buses.

      • R
        Posted Sep 25, 2013 at 7:57 PM | Permalink

        The point I was trying to make is that there are certain elements of this discussion and that of the IPCC that are in essence arguments over things that are imaginary. This isn’t to say that there aren’t elements of this discussion which are interesting – just that both the IPCC and Steve are wrong in a couple places in this post.

        Steve: I try to be accurate and, if I’ve made errors, I’d like to correct them. Nor do I see how new papers would modify points in this post. Reviewing my post:
        – my introduction describes recent statements and seems accurate to me;
        – my account of IPCC First and Second Draft seems accurate to me and, in any event, unaffected by new papers;
        – likewise my comments on the SPM and gatekeeping of skeptic submissions on the discrepancy;
        – my observations about 20th century history also seem accurate to me and not vulnerable to new papers;
        – I asked questions about the long past hiatus and deep ocean during that period. New perspectives could arise on this, but data is likely to be an issue. I simply asked questions here and don’t see any statements that could easily become “wrong”.
        – the boxplots and commentary on the discrepancy between models and observations simply summarize data and won’t change.
        – my conclusion was editorial in nature, but again I don’t see how new papers would affect the editorial points.

        I don’t see the errors that you allude to and would appreciate a little more detail on where you believe the errors to occur.

        • Craig Loehle
          Posted Sep 25, 2013 at 9:33 PM | Permalink

          You are free to use more words if you actually have something to say.

        • HAS
          Posted Sep 26, 2013 at 2:40 AM | Permalink

          I remain convinced.

    • MikeN
      Posted Sep 26, 2013 at 10:21 AM | Permalink

      I think RealClimate’s posts on the hockey stick still have reference to papers coming soon. Stay Tuned!

  16. rogerknights
    Posted Sep 24, 2013 at 8:18 PM | Permalink

    Typo?: ” . . . but any investor in the climate research process must surely wonder why this wasn’t brought up six years ago in the scoping of the AR5 report.”

    Should that be “investigator”?

    Steve: no, I used “investor” intentionally. Governments are large investors in climate research. While they are obviously a different sort of investor than business investors, they must surely be annoyed at the arrogance of the climate community’s failure to deal with the discrepancy.

    • Posted Sep 24, 2013 at 10:21 PM | Permalink

      Stakeholder might be more appropriate.

      Nice work, as usual. I’d like to ask you to publish a post that is prescriptive in nature, showing how the IPCC can work its way out of the hole they have dug for themselves. They can blandly ignore the recent work that raises objections to what they write in AR5. That has been their modus operandi for a decade. But, as the IAC tried to do with their governance issues, if someone shows them a way to get out of their dilemma, it at least removes one excuse from their repertoire and just maybe would motivate some of them to explore alternatives to stonewalling.

      • Matt Skaggs
        Posted Sep 25, 2013 at 9:23 AM | Permalink

        “…[show] how the IPCC can work its way out of the hole they have dug for themselves.”

        If you pull that thread a bit too vigorously, the entire sweater is likely to come apart. What the IPCC calls “attribution” is known as “root cause analysis” in the engineering world. There are very rigorous and well-established methods to perform that type of analysis, all of them carefully structured to ensure objectivity. The IPCC completely ignored this body of knowledge and went for a sort of meta-analysis based upon how the scientists wanted to present their information. So you get a little attribution here, a little there, you dance around the effects of changes in cloud cover, and the whole thing never coheres. Meanwhile, most of the scientists are actually just doing what they want to do with little real focus on the key predictions made by AGW. When I asked Stefan Rahmsdorf at RC whether his work on Antarctic warming could be construed as support for (the key prediction of) polar amplification, his response was that it was more aligned with changes in wind patterns. So, er, how did that make the cover of one of the world’s most prestigious science journals? In the approach to the model/observation discrepancy, we see the same sort of willful blindness: a refusal to acknowledge that the evidence does not support the model outputs, to the extent that the models provide zero evidence to support AGW.

        • Posted Sep 25, 2013 at 2:24 PM | Permalink

          “If you pull that thread a bit too vigorously, the entire sweater is likely to come apart.” Weezer fan, by any chance?

        • Matt Skaggs
          Posted Sep 26, 2013 at 9:44 AM | Permalink

          Erratum: My comment at 9:23 on September 25 should have referred to Dr. Eric Steig, not Dr. Stefan Rahmsdorf.

          Who is Weezer? I collect useful metaphors the way a crow collects shiny coins.

          Here is a thought experiment: consider AGW as a deviation from expected behavior in a climate control system. Let’s construct a fault tree to determine root cause (“attribution”). The first fork in the tree is “natural” or “anthropogenic?” Under “anthropogenic,” the fault tree is actually quite sparse. There is just the effect of CO2 on radiative absorption, the effect of land use changes on albedo, and perhaps waste heat itself if you value completeness. Note that this engineering perspective places the work of Dr. Roger Pielke Sr. on land use changes back where it belongs, of equal stature to the radiative hypothesis work (because we don’t really know, do we?). In comparison, the “natural” side of the fault tree is quite rich. Here we find Milankovich, Svensmark, waveform analysis, the conundrum of clouds, and all sorts of fascinating ideas about plausible causes of natural climate change. Now take one of the IPCC reports – any will do – and try to map the scope of effort on various topics against the fault tree. It will take you about five minutes to realize that the IPCC approach actually has very little to do with attribution.

        • Posted Sep 26, 2013 at 7:30 PM | Permalink

          Weezer = nerdy alt-rock group whose first hit was The Sweater Song. Much-repeated line in the chorus: “if you want to destroy my sweater, hold this thread as I walk away. Watch me unravel, I’ll soon be naked. Lying on the floor, I’ve come undone.”

    • PhilH
      Posted Sep 25, 2013 at 9:47 AM | Permalink

      The government “investors” have so much skin in the IPCC’s games that they cannot, at least publicly, express any annoyance at the “climate community’s” failure to deal with not only this discrepancy but all the other rat holes the IPCC has dug for themselves and their investors over the years. To think otherwise is, in my opinion, rather naïve.

      • seanbrady
        Posted Sep 25, 2013 at 3:04 PM | Permalink

        “Investor” is exactly the right word.

        The governments are investing actual money (our money) in climate research in order to obtain an outcome. What outcome do they seek? You may believe their claim that they just want the “truth” about carbon dioxide and global warming, or you may believe they are just buying research which is designed to convincingly support their existing climate agenda.

        Since the model/observed divergence is both a scientific problem and a PR problem, the failure of the climate research industry to adequately address it reduces the value of the research governments are funding.

        That must surely annoy the governments who write the checks and make their plans based on the expectation of getting quality and/or convincing research.

        PhilH, Steve is not being naive and he never said that governments are going to announce their annoyance publicly. But Joachim Marotzke’s promise that the IPCC will “address this subject head-on” indicates that they are probably doing so privately.

  17. Posted Sep 24, 2013 at 8:43 PM | Permalink

    Mr. McIntyre,

    As usual a brilliant piece of detective work on your part. Thank you.

    You wrote:

    Whatever the ultimate scientific explanation for the pause and its implications for the apparent discrepancy between models and observations, policy-makers must be feeling very letdown by the failure of IPCC and its contributing academic community to adequately address an issue that is critical to them and to the public.

    At risk of being found guilty of editorializing:

    German ministries insist that it is important not to detract from the effectiveness of climate change warnings by discussing the past 15 years’ lack of global warming. Doing so, they say, would result in a loss of the support necessary for pursuing rigorous climate policies.

    What incredible double-speak.

    It appears to me that certain policy makers are getting exactly what they want. What some policy makers currently in power seem to want from the IPCC process is scientific cover for whatever they are actually up to. I read the quote from Der Spiegel as evidence that the German ministers [and whoever else people can decide for themselves] made up their minds long ago on the subject of what they want to do and are simply not going to listen to any contrary ‘scientific advice’.

    As far as I can tell, from following you and other reputable thinkers, and as you continue to demonstrate with this article, the fact of the matter is that with the IPCC we are faced with as naked an attempt to “manufacture consent” of the populace as anything Noam Chomsky ever dreamed up.

    I am not friendly to conspiracy theory; this isn’t conspiratorial thinking, this is just realpolitiks as usual.

    I wish it wasn’t so.

    Is it time now that we can talk politely about defunding and disbanding the IPCC? It is doing no one, but certain ministers currently in positions of power, any earthly good.

    I’ll leave everyone with this thought: trying to keep your eye on the pea does no good at all when the game is a swindle. You have to keep your eyes slightly off the ball to catch it being stolen from the cup by the magician.

    W^3

    • ianl8888
      Posted Sep 25, 2013 at 1:48 AM | Permalink

      this is just realpolitiks as usual

      Agreed :)

      • Posted Sep 26, 2013 at 3:54 AM | Permalink

        German ministries insist that it is important not to detract from the effectiveness of climate change warnings by discussing the past 15 years’ lack of global warming. Doing so, they say, would result in a loss of the support necessary for pursuing rigorous climate policies.

        In other words…

        [Plaintiff] “Your honour, I’d like to strike the last 15 years of evidence from the record.”

        [Judge] “And why is that?”

        [Plaintiff] “BECAUSE IT’S DEVASTATING TO MY CASE!”

    • Punksta
      Posted Sep 26, 2013 at 11:44 AM | Permalink

      it is important not to detract from the effectiveness of climate change warnings by discussing the past 15 years’ lack of global warming.

      Sounds like the old Don’t confuse me with the facts.
      Dilbert today is on the case

      http://dilbert.com/strips/comic/2013-09-26/

      Let’s make their comment more honest:
      we don’t want anyone detracting from the effectiveness of global warming alarmism, by discussing the past 15 years’ lack of global warming.

      • Posted Sep 26, 2013 at 12:44 PM | Permalink

        Funny, I read what the ministers said as saying, ‘what we are doing is so important, it doesn’t matter what the facts are.’ Maybe I am getting paranoid, but it seems to me they were more concerned about political support for their policies than whether they were actually necessary.

        Ok, so now what?

        I think the situation with ‘climate science’ in general, as much of a muddle as it may be, is not a swindle in quite the way the IPCC process is.

        There are no doubt many good scientists doing good work who are involved in the IPCC process – maybe most. One good way to keep a process like this going is to have the people involved defending their own work and reputations. The process itself, though, is constructed as a swindle. In a closed-rule institution it doesn’t take many corrupt controllers to steer the ship. All that may be necessary is a corrupted set of constituting articles that are then applied in a self-preserving fashion.

        With the help of people much smarter than I, like Steve M., Lucia, and others, the IPCC has been exposed to my satisfaction as essentially a swindle. I don’t believe this is merely the result of systemic incompetence, though I do think incompetence has been engineered into the system. Does anyone here think an adverse result of the type being discussed in this post could have survived to publication? Anyone??

        Somehow, and I don’t know how, these types of ‘mistakes’ are making it through to publication with incredible regularity, resistance to correcting them is enormous, and science shown to be junk continues to be cited in subsequent reports.

        At this point I almost don’t care why or how the process got this way or continues to survive. I think it is time to start directing the appropriate energies towards either the deconstituting of the IPCC, or assisting it in whatever way is possible into a slide into complete irrelevance.

        Simply exposing their “errors” does not seem to be having the necessary effect – the IPCC’s results are too necessary for those that require them to willingly let them go.

        I’m not sure what to do, other than call a swindle a swindle.

        W^3

  18. Alex Heyworth
    Posted Sep 24, 2013 at 10:02 PM | Permalink

    Attributing the discrepancy to “natural variability” and leaving it at that is the biggest cop out of all time. The first thing the models should be aiming for is to accurately reproduce natural variability. Until they can do that, it is pointless trying to do any analysis of attribution.

  19. tomdesabla
    Posted Sep 24, 2013 at 10:05 PM | Permalink

    Steve,

    you left out this juicy end to the Spiegel quote:

    “Climate policy needs the element of fear,” Ott openly admits. “Otherwise, no politician would take on this topic.”

    • seanbrady
      Posted Sep 25, 2013 at 3:06 PM | Permalink

      nice!

    • André van Delft
      Posted Sep 30, 2013 at 3:46 PM | Permalink

      Quran (8:12) – “I will cast terror into the hearts of those who disbelieve.”
      Same mentality.

  20. Posted Sep 24, 2013 at 10:18 PM | Permalink

    My graphs will be a bit different because I do something a bit different. I compute the variances using all non-overlapping periods I can fit into the ‘history+projections’. So our numbers will never match precisely. But they don’t differ much because the general result is “robust” to choice of methodology.
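    One way to read that description (an illustration only, not the commenter’s actual code): chop the series into all non-overlapping windows of a chosen length, fit an OLS trend in each, and take the variance across windows. Since the windows don’t overlap, the trend estimates are nearly independent:

```python
def ols_slope(y):
    """Least-squares trend of y against its index (one unit per time step)."""
    n = len(y)
    x_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((i - x_mean) * (v - y_mean) for i, v in enumerate(y))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

def trend_variance(series, window):
    """Variance of trends over all non-overlapping windows of a given length."""
    trends = [ols_slope(series[i:i + window])
              for i in range(0, len(series) - window + 1, window)]
    mean = sum(trends) / len(trends)
    return sum((t - mean) ** 2 for t in trends) / (len(trends) - 1)

# Placeholder data: a 135-step series gives nine non-overlapping 15-step windows.
series = [0.01 * i for i in range(135)]
spread = trend_variance(series, 15)
```

    Exactly which windows fit depends on where the series starts and ends, which is one reason two people applying the same idea will get slightly different numbers.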

  21. Steven Mosher
    Posted Sep 24, 2013 at 10:25 PM | Permalink

    http://static.berkeleyearth.org/graphics/gcm-acceleration.pdf

    • Steve Reynolds
      Posted Sep 24, 2013 at 10:44 PM | Permalink

      So not only do the models run hotter than observations, most of them are accelerating their predictions.

    • HaroldW
      Posted Sep 25, 2013 at 11:19 AM | Permalink

      Does anyone else see Orion in that graph, along the y=x line?

    • Willis Eschenbach
      Posted Sep 25, 2013 at 4:24 PM | Permalink

      Mosh, your comments are often too cryptic for their own good. In this one you’ve pointed to a graph with 84% (26/31) of the dots representing individual GCMs above the line. It says:

      Those that plot above the line are predicting that the future warming will be larger than one might predict based in the 20th century response characteristics. As can be seen, about 60% of the GCMs show some degree of acceleration in their response.

      Umm … no, 84% of them show “some degree of acceleration in their response”. What is it with the Berkeley Earth guys, why the need to exaggerate? In any case, this means that most GCMs do not show a constant climate sensitivity. Instead, their climate sensitivity increases over time. Except for five of them, whose climate sensitivity decreases over time … go figure.

      Now that’s both a bizarre and a very interesting finding … but what are you saying about that? You’re a sharp guy … what was your reason for pointing this out? Me, when I see that, I think “TinkerToy models, anything’s possible, we’re seeing another reason that if you believe them you’re a fool” …

      But what do you think?

      Thanks,

      w.

      • MikeN
        Posted Sep 25, 2013 at 5:52 PM | Permalink

        I tried to evaluate if there is a difference in short-term behaviour among high-sensitivity and low-sensitivity climate models. However, I was unable to find Romulan climate model runs at Climate Explorer when I checked a few years ago. Have these been updated?

      • Steven Mosher
        Posted Sep 26, 2013 at 3:41 PM | Permalink

        You’ll have to wait for the paper. I could have been a bit more descriptive in the caption about how the 60% is determined. If you merely count the dots above the line, you’ll be forgetting something.

        Anyway, it’s a work in progress and I don’t mind sharing some hints with folks along the way. So, think of it as a hint.

        • Skiphil
          Posted Sep 26, 2013 at 4:28 PM | Permalink

          OT, but an interesting episode in the drive for more open access science:

          http://www.michaeleisen.org/blog/?p=1430

          a series of papers dealing with data from NASA’s Mars Curiosity mission were paywalled at Science mag., even though US law requires all such data to be publicly available. The science blogger (and prominent biologist) at the link above decided to “liberate” them….

        • Willis Eschenbach
          Posted Sep 26, 2013 at 5:03 PM | Permalink

          snip

          w.

        • James Smyth
          Posted Sep 30, 2013 at 2:00 AM | Permalink

          “as can be seen”

      • Steven Mosher
        Posted Sep 27, 2013 at 12:41 PM | Permalink

        snip

        Steve Mc: I would prefer that both of you minimize foodfight aspects of this. I’ve also snipped the post to which you’re responding.

  22. Brian H
    Posted Sep 25, 2013 at 5:14 AM | Permalink

    Edit query: “up-to-data” = up to date?

  23. Jake Haye
    Posted Sep 25, 2013 at 5:23 AM | Permalink

    If the global warming signal can be masked for long periods by energy flows into the deep oceans, presumably a spurious or exaggerated warming signal can be created by flows in the opposite direction.

  24. michael hart
    Posted Sep 25, 2013 at 8:48 AM | Permalink

    By definition AR5 should, in part, be about what has happened since AR4.
    So, for the model runs that are claimed to produce a “hiatus” of a decade (or more): did they predict the current continuing hiatus, using only the data available at the time?

    Or, based on more recent data, are they merely predicting an occasional hiatus at some indeterminate points in the future?

    Also, Lucia has discussed the claims about volcanoes, but what climatologically significant volcanic anomalies have occurred since 1991? I wouldn’t expect these to be predicted, but I was also given to believe there hadn’t been any.

  25. Craig Loehle
    Posted Sep 25, 2013 at 9:18 AM | Permalink

    They are throwing themselves under the bus with the “natural variability” excuse. If the cooling since 1998 is due to natural variability, how do we know that the warming from 1980 to 1998 (barely a longer time period) was NOT due to natural variability? Of course I believe natural variability IS the answer, but it does the opposite of saving their bacon.

  26. Posted Sep 25, 2013 at 9:58 AM | Permalink

    Their famous attribution graph depends on the assumption that their models accurately simulate natural variability with so much precision that they can draw tiny little blue uncertainty bands around the simulations that don’t overlap with the GHG forcing simulations.

    Now they admit there are sources of natural variability that they have no clue about, the uncertainty of which is wide enough to account for extended non-warming even in the presence of rising GHGs. This sounds like a fundamental contradiction of the basis of their attribution argument.

    I think the IPCC’s problem is they are trying to improvise wording that will sound plausible over the next few days, but what they need is wording that will sound reasonable over the next few years. When people look at the SPM in 2015 or 2016, it won’t matter if it was acceptable to the German delegation in fall 2013 if it turns out to be preposterous compared to reality.

    • Posted Sep 25, 2013 at 10:13 AM | Permalink

      Plagiarised within ten minutes in a tweet response I was already doing to a programmer friend who’s become a behavioural economist of late. With attribution of course. Thanks Ross.

    • David Brewer
      Posted Sep 25, 2013 at 2:05 PM | Permalink

      Just check the corresponding attribution graph in AR5, which is worse still.

      They put in lines for the model mean, not just uncertainty bands, and there are many more graphics, covering the oceans as well as land, and even the extent of polar sea ice.

      Overall the model means stick like glue to the observations, giving an impression of near-perfect understanding not just of global but of regional climate change over the last century.

      The models – or at least their means – are now precisely tuned to the observations. Just one problem, they are still obviously crap since as soon as they go from hindcast to forecast, they fail.

  27. Craig Loehle
    Posted Sep 25, 2013 at 10:17 AM | Permalink

    Here is their difficulty:
    If they invoke the deep ocean uptake of heat (without proof), this looks like handwaving but also means that the time to equilibrium could be very long, such that the next 100 years is governed by the transient sensitivity, not the equilibrium value. Estimates of transient sensitivity are low enough that there would be little reason for alarm. But rapid ocean turnover/uptake of heat contradicts what the models assume, so then their models are wrong.
    If they invoke natural variability for the pause, everyone is already aware that many skeptics claim that part of the warming from 1980 to 2000 was due to natural variability and thus the models are tuned too hot.
    If they invoke the lower bound of model runs, you find that those particular models run colder into the future than the others.
    If they invoke just the idea that temperature data is consistent with the lower-bound claim that “more than half” of the warming is human-caused, this “half” is only half of the warming the models are forecasting. This is consistent with the new sensitivity studies but NOT with alarming warming.
    If they invoke “unknown” or more solar etc, this undermines the certainty they ascribe to their model inputs and the “settledness” of their science.

    • Posted Sep 25, 2013 at 11:51 AM | Permalink

      ” If they invoke the deep ocean uptake of heat (without proof) . . . ”

      Looks like that’s the plan:

      http://www.realclimate.org/index.php/archives/2013/09/what-ocean-heating-reveals-about-global-warming/

      Steve: I already quoted a direct statement from chapter 9.

      • Beta Blocker
        Posted Sep 25, 2013 at 3:55 PM | Permalink

        Re: Dan Hughes (Sep 25 11:51),

        Concerning Stefan’s post on RC, over on Lucia’s forum, this question was asked by “Coyote”:

        Coyote: ….. the real issue is if greenhouse warming is “hiding” in the ocean. So isn’t the question something more like what surface air temperature increase would we have seen if all this heat had not been buried in the oceans?

        To which I responded:

        That is certainly the heart of the matter. But Stefan’s explanation for The Pause should also raise other kinds of additional questions, such as:

        1) Is Stefan’s postulated heat sequestration process unique to this most recent episode of global warming; i.e., is it unprecedented in the history of global warming episodes?

        2) Will the sequestered excess heat remain in the oceans, or will it be restored to the atmospheric heat budget at some future point in time? (And if so, when?)

        3) Will the rate of heat sequestration in the oceans increase as GHG forcings increase, possibly maintaining global mean atmospheric temperature at or near its current level?

        …. or ….

        4) Will some future change in atmospheric-ocean dynamics act to moderate the process by which the excess heat is being sequestered in the oceans, thus allowing atmospheric temperatures to begin rising once again?

        It is reasonable to predict that over the next several months, the wording of the AR5 draft will be adjusted to whatever extent is necessary to closely align the wording of the final AR5 report with the ocean sequestration explanation for The Pause.

        But will the questions such an explanation automatically raise be addressed in any kind of scientifically credible way?

        • thisisnotgoodtogo
          Posted Sep 26, 2013 at 5:35 AM | Permalink

          pausibly deniable

  28. pottereaton
    Posted Sep 25, 2013 at 10:59 AM | Permalink

    When they invoke “natural variability,” they of course mean within the warming trend which they have already proved (to their satisfaction) to be occurring in previous IPCC reports.

    But natural variability can also occur over centuries and millennia. It apparently has not occurred to them that if you invoke “natural variability,” you could just as easily say the climate has been warming up (literally and figuratively :-}) after a century-long cooling trend. They are expropriating the use of the term “natural variability” – which is widely used by skeptics – in a limited way in order to serve their purposes.

    The “hiatus” at 15 years is now nearly as long as the 18 years of warming that triggered global-warming alarmism. It seems to me they will be in trouble if the hiatus exceeds in length that of the warming trend, because as Steve points out, there was 40 years of cooling before that warming.

  29. Tony Mach
    Posted Sep 25, 2013 at 11:50 AM | Permalink

    It is Klimaziebel, *not* Klimazweibel.

    BTW: It has *nothing* to do with “zwei” (German for “two”), but instead is the German word for “Onion”: Zwiebel. Pronounced it sounds something like saying in English: “Ts-we bell.” So translated it is Climate-Onion (Klima-Zwiebel).

    • Tony Mach
      Posted Sep 25, 2013 at 11:58 AM | Permalink

      Sorry, meant to write: Klimazwiebel.

      (I can’t even spell …)

    • Tony Mach
      Posted Sep 25, 2013 at 12:06 PM | Permalink

      As many people whose native language is English have a problem with Klimazwiebel, maybe this helps?

      Do you know Zwieback? The first part (“Zwie-“) is pronounced the same as the first part of Zwiebel. So think of ZWIEback, if you want to write KlimaZWIEbel.

      (To confuse matters, unlike “Zwiebel” the “Zwie-” part of Zwieback *is* actually related to “Zwei”, as in “Twice-Backed”. But enough of such strange languages.)

    • John Archer
      Posted Sep 27, 2013 at 10:17 PM | Permalink

      Climate-Onion? Interesting. Is that the kind they consume at Penn State?

      Mann ist was Mann ißt. (“Mann is what Mann eats.”)

      But I know it’s Scheiße in his case. He probably just has the odd klimate zwiebel or zwei to mask the smell.

      They don’t work though. Phorrr!

    • Jeff Alberts
      Posted Sep 28, 2013 at 2:27 PM | Permalink

      When I learned German in High School in the US in the 70s (from a native German speaker) the “w” was always pronounced as a “v” is pronounced in english. Hence “zwei” would be pronounced “tsvy”. I don’t remember specifically encountering the word “zwiebel” but I would think it would be pronounced “tsveebel”.

      I tried Google Translate, but the audio seems pretty hoffic.

      • Jeff Alberts
        Posted Sep 28, 2013 at 2:28 PM | Permalink

        hoffic=horrific

  30. dan
    Posted Sep 25, 2013 at 1:36 PM | Permalink

    How do I get rid of the Follow Climate Audit popup?
    Cannot read the blog now unless in full screen.

  31. mattblack313
    Posted Sep 25, 2013 at 2:27 PM | Permalink

    Useful analysis and very clear dataviz of some relevant data at the Financial Times Blog:

    http://blogs.ft.com/ftdata/2013/09/23/how-the-ipccs-projections-match-up/?

    These are worth looking at if only because the visualization is very clean and the data is hard to challenge.

    Steve: it’s not as clean as you think. They’ve included “commit” as one of the projections.

  32. S. Geiger
    Posted Sep 25, 2013 at 3:06 PM | Permalink

    Very general (and basic) question. If somehow the extra energy had been going into the deep ocean would this mechanism of ‘warming’ (whatever that is proposed to be?) still require the expected increased rate of tropical troposphere temp increases?

    • Craig Loehle
      Posted Sep 25, 2013 at 3:43 PM | Permalink

      The tropical troposphere hot spot responds to tropical SST and thus the missing hot spot is a separate problem.

  33. Willis Eschenbach
    Posted Sep 25, 2013 at 4:44 PM | Permalink

    Here’s one of the many problems I have with their figures for the change in ocean heat content. The link DocMartyn gave above says that the increase in ocean heat content is about 2E+17 megajoules. The mass of the atmosphere is about 5.3E+15 tonnes. That gives us 38 MJ/tonne. Specific heat of the atmosphere is about one megajoule per tonne per °C.

    That means that we must be incredibly lucky, because if that amount of heat had gone into the atmosphere, it would have been enough to raise the atmospheric temperature by 38°C …

    So … where is that heat coming from? Did the downwelling radiation suddenly take a big jump? And did the oceanic absorption also suddenly take a perfectly corresponding jump, of just the right amount to avert Termageddon?

    You can see the problem …
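    For anyone who wants to redo the arithmetic, here it is in a few lines (figures as quoted above):

```python
# Back-of-the-envelope check, using the figures quoted in the comment.
ohc_increase_mj = 2e17         # stated rise in ocean heat content, megajoules
atmos_mass_tonnes = 5.3e15     # approximate mass of the atmosphere, tonnes
cp_air = 1.0                   # approx. specific heat of air, MJ per tonne per deg C

# Hypothetical warming had the same heat gone into the atmosphere instead:
delta_t = ohc_increase_mj / (atmos_mass_tonnes * cp_air)   # roughly 38 deg C
```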

    w.

  34. pottereaton
    Posted Sep 25, 2013 at 5:00 PM | Permalink

    Donna LaFramboise has an excellent column on this subject in today’s Wall Street Journal. I think it’s paywalled, but I’m going to link it in case it’s not and for those who subscribe:

    http://online.wsj.com/article/SB10001424127887323981304579079030750537994.html?mod=WSJ_Opinion_LEADTop

  35. RoyFOMR
    Posted Sep 25, 2013 at 7:20 PM | Permalink

    Willis, your back-of-the-envelope calculation, using the standard Q = mcΔT relation rearranged to indicate a 38K rise in GAT, is based on old physics.
    Clearly you haven’t received the memo re the new, improved physics that kicked in betwixt AR4 and AR5.
    I’m sure that Kevin will be willing to fill you in.

  36. Craig Loehle
    Posted Sep 25, 2013 at 7:43 PM | Permalink

    I tested if the lower ocean could warm faster than the upper using a two-box DEQ model. Allowing heat input from above and a 2 deg C colder bottom layer than top layer and a small amount of mixing, the mixing can carry the heat that the top is acquiring from above and send it below such that the lower layer warms faster than the upper. This is because the unit of water mixed from above carries more joules to a relatively colder region. It all depends on the balance of mixing which no one has a clue about. The idea that the ocean is nicely stratified like a layer cake and stable though is certainly false. Hurricanes for example stir the ocean quite a bit. Spencer has done some work on this. The problem is that the assumptions the climate modelers use for the ocean do not factor in this rapid of a transfer of heat, nor do the sensitivity estimates.
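    A minimal sketch of that kind of two-box setup (hypothetical parameters, not the actual model): constant heating into the upper layer, and linear mixing with a lower layer that starts 2 deg C colder. With vigorous enough mixing, the lower box warms faster than the upper one, exactly as described:

```python
def two_box(F=2.0, k=1.0, C_up=1.0, C_low=1.0, dt=0.01, steps=1000):
    """Euler-integrate a two-layer ocean: constant heating F into the upper
    box, and linear mixing k*(T_up - T_low) carrying heat into the lower box."""
    T_up, T_low = 2.0, 0.0                # lower layer starts 2 deg C colder
    for _ in range(steps):
        mix = k * (T_up - T_low)          # heat flux carried down by mixing
        T_up += dt * (F - mix) / C_up
        T_low += dt * mix / C_low
    return T_up, T_low

strong = two_box(k=1.0)    # vigorous mixing: lower box warms more than upper
weak = two_box(k=0.05)     # weak mixing: the warming stays in the upper box
```

    Whether the lower box outpaces the upper one depends entirely on the mixing coefficient, which is the point: the balance of mixing is the great unknown.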

    • DocMartyn
      Posted Sep 26, 2013 at 6:44 AM | Permalink

      Craig, in your box model, did you model using four seasonal temperature boxes? I only ask as the gradient of temperature outside of the equatorial zone, undergoes remarkable changes in profile as one moves North or South, in the swings from summer to winter and back again.

      • Craig Loehle
        Posted Sep 26, 2013 at 9:05 AM | Permalink

        No, it was just warming at the top and 2 layers. By the way, if you have a warmer top and colder bottom and increase mixing you could have the top layer getting colder while the bottom gets warmer until they are the same temperature. People are thinking of these layers as too solid and unmixing.

  37. Ed Barbar
    Posted Sep 25, 2013 at 11:28 PM | Permalink

    “Opening the door also opens up questions about the potential length of the present hiatus. If unmodeled deep ocean processes are involved, how can we say with any certainty that the present hiatus won’t extend for 30-40 years?”

    Or assuming climate models are correct about the delta in forcings, how long can the oceans buffer heat and push out equilibrium? Maybe someone knows offhand. I don’t.

  38. tadchem
    Posted Sep 26, 2013 at 6:26 AM | Permalink

    ‘Marotzke felt it necessary to add that “climate researchers have an obligation not to environmental policy but to the truth”.’
    With this statement Marotzke implicitly concedes that “environmental policy” and “the truth” are distinct and not necessarily compatible.
    A ray of light emerges from the IPCC.

  39. Posted Sep 26, 2013 at 6:48 AM | Permalink

    An analysis of the residuals between the models and the data would probably show a skewed distribution that’s most likely centered above zero due to the “warm bias” built into the models. What’s more, I wager that the temperature is heading downward and that will make it really difficult for the IPCC to continue to ignore the lack of fit their models have with the data.
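    To make that suggestion concrete, a sketch with made-up numbers standing in for the model and observed series (real inputs would be, say, an ensemble-mean projection and an observational temperature index):

```python
def residual_stats(model, obs):
    """Mean and (moment) skewness of model-minus-observation residuals."""
    r = [m - o for m, o in zip(model, obs)]
    n = len(r)
    mean = sum(r) / n
    sd = (sum((x - mean) ** 2 for x in r) / n) ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in r) / n
    return mean, skew

# Made-up example: a model that runs warm by about 0.1 deg C on average.
obs = [0.01 * i for i in range(30)]
model = [t + 0.1 + (0.05 if i % 5 == 0 else -0.0125) for i, t in enumerate(obs)]
mean, skew = residual_stats(model, obs)
```

    A positive mean residual is the “warm bias”; the skewness statistic summarizes any asymmetry in how the models miss.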

  40. Chris Law
    Posted Sep 26, 2013 at 8:26 AM | Permalink

    So if I accept that the missing energy is going into OHC to explain the hiatus, as postulated by Climate Science (TM), then they also need to explain when and why this started occurring 15 years ago (or more, allowing for lag). Or rather, why it has become – evidently – dominant since then.

    Or did I miss this explanation?

    • Posted Sep 26, 2013 at 8:41 AM | Permalink

      There’s a continuous flow of heat into and out of the deeper layers. The hypothesis about the “missing heat” going into the depths involves changes to these flows, on average, during the last 15 years relative to before that. Of course, these heat flows are almost certainly as variable as, say, land temperatures (within appropriate limits), so different averaging periods will produce different appearances.

      And, of course, we don’t have data for before 3-4 decades ago, so there’s no inconvenient history of changing heat flows to mess up the hypothesis.

  41. Alan Watt, Climate Denialist Level 7
    Posted Sep 26, 2013 at 9:25 AM | Permalink

    Steve: In your penultimate paragraph there are what I believe you intended to be links to papers by Fyfe and von Storch, but all I see in both cases is the word “here” with no associated link. Thanks.

  42. Posted Sep 26, 2013 at 10:56 AM | Permalink

    Given the state of AR5 and a bit of Bayes, what’re the odds on an AR6 ever appearing?

    http://thepointman.wordpress.com/2013/09/20/armageddon-report-no-5/

    Pointman

    • Punksta
      Posted Sep 26, 2013 at 12:28 PM | Permalink

      The Gatekeeping Problem.

      On the assumption that the existing journals are resolutely unreformable, would it not make sense to create others?

  43. Roy Spencer
    Posted Sep 26, 2013 at 2:37 PM | Permalink

    excellent summary. Something that is not emphasized enough: The current hiatus in warming cannot be compared to previous similar periods for the simple reason that the recent strength of radiative forcing has (theoretically) been at a *maximum*. The last 15 years is when you LEAST expect warming to stop.

    • Beta Blocker
      Posted Sep 26, 2013 at 7:15 PM | Permalink

      Re: Roy Spencer (Sep 26 14:37),

      Roy Spencer: excellent summary. Something that is not emphasized enough: The current hiatus in warming cannot be compared to previous similar periods for the simple reason that the recent strength of radiative forcing has (theoretically) been at a *maximum*. The last 15 years is when you LEAST expect warming to stop.

      My own guess, based on the pattern of the historical Central England Temperature Record (CET) between 1659 and 2007, is that The Pause is merely a pause, and that at some point in the future, the 1975-1998 warming trend is likely to resume — a decade from now, two decades from now, whenever it happens. Repeating a post I put up at The Blackboard:

      Beta blocker writes: …. “If they are having this kind of struggle in finding a credible way of explaining The Pause in the summary document, what kind of struggle will they be having in explaining their latest predictive plots — whatever those will eventually look like?”

      TimTheToolMan responds: …. “My guess is that AR5 model tuning will sacrifice some historical fit to get a better recent fit by playing with ocean heat uptake. Then we’ll start to see papers that rewrite the OHC history to match. /cynacism”

      Regarding TimTheToolMan’s prediction as to how the IPCC and the climate science community will deal with the issue: the more one thinks about the IPCC’s dilemma, the more one should believe that this is just what they will do. They will tune AR5’s modeling in ways which get a better recent fit, sacrificing some historical fit to do so, and then they will produce a series of new papers which rewrite the history of ocean heat content to match.

      This containment strategy will be pursued under the banner of “moving the science forward.”

      Once the final AR5 report is published and the public response to its conclusions begins to emerge, let’s periodically revisit TimTheToolMan’s prediction to see how the IPCC’s containment strategy is developing.

      Now, if a warming trend of some kind resumes before AR6, the IPCC and the climate science community will consider themselves off the hook, even if the warming trend predicted by AR4 models, shifted to the right in time, remains below the lower boundary of the AR4 model ensemble.

      But what if The Pause continues into AR6 and beyond, i.e. a flat or declining atmospheric global mean temperature in the face of ever-rising GHG emissions?

      If that’s what happens, let’s all come back in another seven years to see just how convoluted the IPCC’s AR6 explanations become over that seven-year period relative to the previous AR4 and AR5 reports.

      In any case, the IPCC and the climate science community will hang tough regardless of where global mean temperature goes in the next two to three decades. It is all but written in their job descriptions that they have to do this.

  44. pottereaton
    Posted Sep 26, 2013 at 2:56 PM | Permalink

    On the subject of the IPCC:
    James Delingpole, always entertaining

  45. Posted Sep 26, 2013 at 3:02 PM | Permalink

    Roy’s point is spot on. The issue isn’t the “pause” per se, but the gap in comparison to the model projections for conditions over the past 15 years. Figure 9.8A in the SOD shows that the current gap is unprecedented. Up to 1998 the model average and the observational average regularly cross back and forth. They stop crossing in 1998, and diverge thereafter.

  46. James Smyth
    Posted Sep 26, 2013 at 5:15 PM | Permalink

    Can someone point me to the best explanation of how to correctly determine the relative offsets of the models-vs-temperature graphs?
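    One convention sometimes used for such comparisons (a sketch of a common approach, not necessarily the definitive answer being asked for here): convert both series to anomalies relative to the same reference period, so the vertical offset between the curves is fixed by that shared baseline rather than chosen by eye. The series and the 1979-1990 reference period below are made up for illustration.

```python
import numpy as np

def to_anomaly(years, values, ref_start, ref_end):
    """Subtract the mean over the inclusive reference period [ref_start, ref_end]."""
    years = np.asarray(years)
    values = np.asarray(values, dtype=float)
    mask = (years >= ref_start) & (years <= ref_end)
    return values - values[mask].mean()

years = np.arange(1979, 2014)
obs = 0.015 * (years - 1979)           # toy "observed" series
model = 0.025 * (years - 1979) + 0.3   # toy "model" series with an arbitrary offset

obs_a = to_anomaly(years, obs, 1979, 1990)
model_a = to_anomaly(years, model, 1979, 1990)
# Both series now average zero over 1979-1990; any remaining divergence
# reflects differing trends rather than the choice of vertical offset.
```

    Note that the comparison can look quite different depending on which reference period is chosen, which is part of why the offset question matters.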

  47. michael hammer
    Posted Sep 26, 2013 at 9:41 PM | Permalink

    Heat could well be flowing into the oceans, but it cannot possibly flow into the atmosphere today, then suddenly switch to flowing into the oceans tomorrow, and then maybe flow back into the atmosphere the day after. Thermal systems simply do not work that way. Heat would flow into all sinks (atmosphere, oceans, land and everything else) all the time.

    Another point: CAGW advocates continuously claim very long time constants in the climate system, with the impact of rising CO2 lasting centuries (despite the fact that the direct change in forcing due to a change in CO2 levels is instantaneous). This long time constant is mandatory to their claims, since if the time constant were short we would have already seen essentially all the impact from half a doubling of CO2, which, using their numbers, accounts for more than half of the total 0.7C of warming they claim. On that basis a full doubling would yield somewhere between 0.7C and 1.4C, well below the 2C threshold for catastrophe. Yet now they claim the pause is due to a “natural” change in forcing, suggesting the climate system responds almost instantaneously to changes in forcing. So which is it? Does a change in forcing elicit an instantaneous response, or is there a long time constant? If the latter, does that mean the current claimed “natural” change in forcing is only the start of a very long time-constant response? Given the severity of the initial response, it suggests we could be in for centuries of substantial cooling.
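    The short-vs-long time-constant dichotomy above can be made concrete with a one-box response model, T(t) = dT_eq * (1 - exp(-t/tau)): the fraction of the equilibrium response realized t years after a step change in forcing. The parameter values below are hypothetical, chosen only to contrast the two regimes the commenter describes.

```python
import math

def realized_fraction(t_years, tau_years):
    """Fraction of the equilibrium response realized t years after a step forcing,
    for a single-time-constant (one-box) system: 1 - exp(-t / tau)."""
    return 1.0 - math.exp(-t_years / tau_years)

# After ~50 years of elevated forcing (illustrative values only):
short = realized_fraction(50, 5)    # short time constant: response essentially complete
long_ = realized_fraction(50, 200)  # long time constant: most response still in the pipeline
print(f"tau=5y: {short:.0%} realized; tau=200y: {long_:.0%} realized")
```

    Under a short time constant nearly all the warming from forcing to date would already be realized, while under a long one most of it would still be pending, which is exactly the tension the comment identifies.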

  48. William Larson
    Posted Sep 26, 2013 at 9:51 PM | Permalink

    From the caption to Figure 2 in SM’s headpost: “IPCC authors added a grey envelope around the AR4 envelope, presumably to give rhetorical support for their false claim about models and observations; however, this envelope did not occur in AR4 or any peer reviewed literature.” (!!) Even though I read little or nothing about this in the comments here, I imagine that others as well as myself are appalled and stunned (but not surprised?) by this bit of witchcraft from the IPCC. Talk about disingenuous! In a way, this added non-literature grey envelope is another example of “hide the decline”. I suppose that you more-veteran observers of all these shenanigans just yawn and say “So what’s new” when these things are pointed out, but little ol’ naive me is still totally floored by this behavior by the IPCC.

    • RomanM
      Posted Sep 27, 2013 at 7:27 AM | Permalink

      Through the magic of Photoshop, we can still see what Fig 1-4 would look like without the superfluous gray patch:


      :)

      • Posted Sep 27, 2013 at 9:41 AM | Permalink

        I heartily recommend this be promoted to an ‘update’ appendage on the main post. That gray thing was working to obscure the problem being discussed.
        Nice job by Roman, using the statistician’s friend: Photoshop…
        Nice to see some data points for 2012 and 2013…
        RR

      • AntonyIndia
        Posted Sep 27, 2013 at 9:13 PM | Permalink

        Pardon my ignorance, but do you or anybody else understand why IPCC observations made only 10 years ago need “big” error bars?

    • Craig Loehle
      Posted Sep 27, 2013 at 8:44 AM | Permalink

      They constantly talk about how good the models are, how you can count on the predictions (ahem, “scenarios”). But if you add the gray zone around it to say it is consistent with the data, it is nearly floor to ceiling and says you don’t know diddly (in the South we would say you don’t know sh*t from shinola). That is what I’m calling throwing yourself under the bus.

  49. bmcburney
    Posted Sep 27, 2013 at 11:49 AM | Permalink

    But the problem not arise “last week”.

    Word missing?

  50. Ben
    Posted Sep 27, 2013 at 11:59 AM | Permalink

    I don’t know if anyone noticed, but there is no more “best estimate” for climate sensitivity (because of… no consensus! It’s written right there in the SPM).

    And the right question would be: has there ever been one?

  51. See - owe to Rich
    Posted Sep 28, 2013 at 6:38 AM | Permalink

    On a point of information, can someone tell me where the figures that go with the SPM can be found, e.g. Fig 1-4 which RomanM photoshopped above? All I see in the SPM is “insert figure xxx here”.

    Thanks,
    Rich.

    • Bob Koss
      Posted Sep 28, 2013 at 7:25 AM | Permalink

      See – owe to Rich,

      You’ll find them all at the end of the report.

      • See - owe to Rich
        Posted Sep 28, 2013 at 11:10 AM | Permalink

        Doh! ;~p

  52. Posted Sep 29, 2013 at 4:42 AM | Permalink

    In political terms, AR5 was actually the incoherent and rambling suicide note of the IPCC.

    http://thepointman.wordpress.com/2013/09/29/in-the-aftermath-of-ar5/

    Pointman

    • Follow the Money
      Posted Sep 29, 2013 at 5:53 PM | Permalink

      You may be able to add “wandering” and “internally inconsistent” to your description of AR5. I can almost guarantee it. All that has been released so far (which your article mentions) is the “summary for policy makers,” which I have called the “executive summary” above. The underlying document, i.e. the one being summarized, is named “The Physical Science Basis.” If consistent with IPCC traditions, the AR5 “summary” will contain a wealth of misrepresentations and exaggerations about the underlying “science” document, the latter itself flawed, just not as awful as the “summary.” The IPCC website says “The Physical Science Basis” will be released tomorrow, Sept 30. The report will be of interest not only for its own contents, but for how the “summary” misstates or strategically ignores them. The delay of the release, a few days after the summary’s, is almost a guarantee that the PR ringmasters of the IPCC do not want the summary and the underlying document compared.

  53. Posted Sep 29, 2013 at 6:01 PM | Permalink

    By intentionally not taking the pause into consideration, the IPCC is proving that they have no shame at all.

  54. Skiphil
    Posted Sep 29, 2013 at 8:33 PM | Permalink

    Michael Mann has offered his own spin on the recent IPCC and PAGES 2K work, in a vituperative op-ed:

    Michael Mann: Climate-Change Deniers Must Stop Distorting the Evidence (Op-Ed)

    “The stronger conclusions in the new IPCC report result from the fact that there is now a veritable hockey league of reconstructions that not only confirm, but extend, the original Hockey Stick conclusions. This recent RealClimate piece summarizes some of the relevant recent work in this area, including a study published by the international PAGES 2k team in the journal Nature Geoscience just months ago. This team of 78 regional experts from more than 60 institutions representing 24 countries, working with the most extensive paleoclimate data set yet, produced the most comprehensive Northern Hemisphere temperature reconstruction to date. One would be hard-pressed, however, to distinguish their new series from the decade-and-a-half-old Hockey Stick reconstruction of Mann, Bradley and Hughes.”

    • MrPete
      Posted Sep 30, 2013 at 6:36 AM | Permalink

      Re: Skiphil (Sep 29 20:33),
      So, the guys who would never put thermometry together with reconstructions have done it yet again… along with all the rest of their shenanigans.

      It must be getting hard to remain so vociferously in denial when the data puts the lie to one’s strongly held beliefs.

      I keep wondering what it would take for them to get excited about the challenge of discovering that a completely different model is needed, one that appreciates earth’s resilience and nature’s variability.

  55. Beta Blocker
    Posted Sep 30, 2013 at 9:31 AM | Permalink

    Lucia, SteveF, thanks for your responses.

    Would it be fair to say that The Blackboard exists not to find definitive answers to the basic scientific questions surrounding climate processes and climate change, but to provide a forum where those with an interest in the nitty-gritty details can get together and offer up their own fairly detailed technical and scientific insights into those issues, as the issues are developing?

    In other words, The Blackboard is not acting as Alternative IPCC which is attempting to champion some cohesive set of alternative viewpoints concerning the scientific issues surrounding climate change.

    Rather, one viewpoint is as good as another here on this forum as long as a contributor offers a properly supported analysis which does not violate the laws of physics and which does not veer off into the weeds by pushing outlandish notions, such as the claim that there is no true greenhouse effect operative in the atmosphere.

    • TimTheToolMan
      Posted Oct 2, 2013 at 11:50 PM | Permalink

      I think the crowd at The Blackboard simply enjoys playing whack-a-mole with poor scientific papers and public statements, and sees post-modern climate scientists as so many giraffes parading at the windfarm.

  56. seanbrady
    Posted Sep 30, 2013 at 10:17 AM | Permalink

    “I calculated all 13-year trends for all 109 CMIP5 RCP4.5 models presently at KNMI for the warming period 2005-2050, yielding a population of 3379 trends (109 models * 31 starting years). Only 0.5% of the population were negative (19 of 3379) and only 0.3% (10 of 3379) were lower than the slightly negative observed trend.”

    Very helpful calculation! It tackles “head on” (to quote Marotzke) the “we expected that” tack that Climate Science is taking:

    “The slow rate of warming of the recent past is consistent with the kind of variability that some of us predicted nearly a decade ago.”

    http://www.nytimes.com/2013/09/26/opinion/a-pause-not-an-end-to-warming.html?_r=0
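    The bookkeeping behind the quoted calculation (all 13-year trends over all starting years, across the model population, then the fraction that come out negative) can be sketched as below. This is an illustration, not a reproduction: the 109 series here are synthetic stand-ins for the KNMI CMIP5 RCP4.5 runs, and a naive count over 2005-2050 gives 34 starting years rather than the 31 used in the quoted tally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the ensemble: 109 runs of annual global mean
# temperature anomaly, 2005-2050, each warming ~0.02 C/yr plus noise.
n_models = 109
years = np.arange(2005, 2051)
runs = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, (n_models, years.size))

window = 13                               # trend length in years
starts = years.size - window + 1          # number of possible starting years

def linear_trend(y):
    """Least-squares slope of y against consecutive years."""
    x = np.arange(y.size)
    return np.polyfit(x, y, 1)[0]

trends = np.array([
    linear_trend(run[s:s + window])
    for run in runs
    for s in range(starts)
])

frac_negative = (trends < 0).mean()
print(f"{trends.size} trends, {100 * frac_negative:.1f}% negative")
```

    The point of the census is the same as in the quoted comment: if only a fraction of a percent of modeled 13-year trends are as low as the observed one, “we expected that” is a hard claim to sustain.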

  57. Steve McIntyre
    Posted Sep 30, 2013 at 12:11 PM | Permalink

    The chapter 9 version released by the IPCC today is the June 7, 2013 draft sent to governments. It bears the identical header and footer and pagination.

    • Bob Koss
      Posted Sep 30, 2013 at 7:03 PM | Permalink

      I wouldn’t be surprised if all chapters date to June 7. They issued a PDF of corrections dated Sept. 27 which will cause changes to various chapters to make the science match the SPM. They claim the changes will be minor. Take that with a grain of salt, I expect.

      The top link on this page contains corrections to be made.

      http://www.climatechange2013.org/report/review-drafts/

      • Bob Koss
        Posted Sep 30, 2013 at 7:26 PM | Permalink

        These are the chapters where changes will be made.
        TS, 1, 2, 3, 4, 5, 6 … 11, 12, 13, 14

  58. Robert
    Posted Sep 30, 2013 at 10:09 PM | Permalink

    Francis Zwiers, vice chair of IPCC WG1, spoke on a local CBC radio show today about the latest report and was asked about “the pause” (at 37:44). His explanation is interesting. Apparently he thinks that the models being wrong is the least plausible explanation.

    [audio src="http://podcast.cbc.ca/mp3/podcasts/bcalmanac_20130930_57108.mp3" /]

  59. Posted Nov 9, 2013 at 4:36 AM | Permalink

    Totally off topic, but grab a copy of this…

    The Age of Global Warming, by Rupert Darwall – ISBN 9780704372993, Quartet Books

    It’s a fantastic walk through how we got to where we are.

  60. Chip Knappenberger
    Posted Aug 15, 2014 at 12:01 PM | Permalink

    We take a further look at the IPCC’s analysis of models vs. observations here:

    http://www.cato.org/blog/clear-example-ipcc-ideology-trumping-fact

    -Chip

20 Trackbacks

  1. By Stillborn | Jay Currie on Sep 24, 2013 at 4:46 PM

    […] But the problem not arise “last week”. While the issue has only recently become acute, it has become acute because of accumulating failure during the AR5 assessment process, including errors and misrepresentations by IPCC in the assessments sent out for external review; the almost total failure of the academic climate community to address the discrepancy; gatekeeping by fellow-traveling journal editors that suppressed criticism of the defects in the limited academic literature on the topic. climate audit […]

  2. […] a must-read post today, Steve McIntyre demolishes the credibility of the IPCC as a scientific organization, demonstrating […]

  3. By Media and blog coverage | The IPCC Report on Sep 25, 2013 at 7:06 AM

    […] Steve McIntyre has come out of apparent semi-retirement to comment on how the IPCC addresses the disparity between models and observations. […]

  4. […] More here: http://climateaudit.org/2013/09/24/two-minutes-to-midnight/ […]

  5. […] fors meer opwarmen dan het werkelijke klimaat. De afgelopen weken hebben zowel Lucia Liljegren als Stephen McIntyre daar uitgebreid over […]

  6. […] On the eve of the official release of the latest report, Steve McIntyre has some thoughts. […]

  7. […] period of 34 years. This was very well explained in Steve McIntyre’s latest blog article two minutes to midnight where he showed that over the period 1979-2013 models on average warm up 50% faster than the real […]

  8. By Uppyn's Blog on Sep 28, 2013 at 12:50 AM

    […] statistician do such work (e.g. Lucia Liljegren or Steve McIntyre) and what they find is remarkable: the average of all those climate model runs is not consistent […]

  9. […] http://climateaudit.org/2013/09/24/two-minutes-to-midnight/#more-18392 […]

  10. By Marotzke’s Broken Promise « Climate Audit on Sep 30, 2013 at 8:07 AM

    […] « Two Minutes to Midnight […]

  11. […] Here is Figure 1.4 of the Second Order Draft, showing post-AR4 observations outside the envelope of projections from the earlier IPCC assessment reports (see previous discussion here). […]

  12. […] Here is Figure 1.4 of the Second Order Draft, showing post-AR4 observations outside the envelope of projections from the earlier IPCC assessment reports (see previous discussion here). […]

  13. […] Here is Figure 1.4 of the Second Order Draft, showing post-AR4 observations outside the envelope of projections from the earlier IPCC assessment reports (see previous discussion here). […]

  14. […] binnen de modelrange van verschillende IPCC-rapporten valt. Ik baseer me hier grotendeels op de analyse (en hier) van Steve McIntyre op Climate Audit. In de first draft zat er een fout in grafiek 1.4 […]

  15. […] McIntyre pointed out some time ago, here, that almost all the global climate models around which much of the IPCC’s AR5 WGI report was […]

  16. […] all model climates warmed much faster than the real climate over the last 35 years. Source: http://climateaudit.org/2013/09/24/two-minutes-to-midnight/. Models with multiple runs have separate boxplots; models with single runs are grouped together in […]

  17. […] […]

  18. […] simulated by those models to be, on average, in line with reality. But as Steve McIntyre showed, here, that is far from being the case. On average, CMIP5 models overestimate the warming trend between […]

  19. […] when AR5 was released, I noted that there was negligible literature available to AR5 about the discrepancy between models and […]

  20. […] when AR5 was released, I noted that there was negligible literature available to AR5 about the discrepancy between models and […]
