Climate Audit

Reconciling Model-Observation Reconciliations

Two very different representations of consistency between models and observations are popularly circulated. On the one hand, John Christy and Roy Spencer have frequently shown a graphic which purports to show a marked discrepancy between models and observations in the tropical mid-troposphere, while, on the other hand, Zeke Hausfather, among others, has shown graphics which purport to show no discrepancy whatever between models and observations.  I’ve commented on this topic on a number of occasions over the years, including two posts discussing AR5 graphics (here, here), with an updated comparison in 2016 (here) and in 2017 (tweet).

There are several moving parts in such comparisons: troposphere or surface, tropical or global. The choice of reference period affects the rhetorical impression of time series plots.  Boxplot comparisons of trends avoid this problem. I’ve presented such boxplots in the past and have updated them for today’s post.
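To see why trend comparisons sidestep the reference-period problem, note that an OLS slope is unaffected by adding a constant offset to the whole series, whereas a time series plot shifts up or down with the baseline choice. A minimal sketch, assuming monthly anomaly series held as numpy arrays (the function name and synthetic data are mine, not from the post):

```python
import numpy as np

def decadal_trend(anomalies, start_year=1979):
    """OLS trend of a monthly anomaly series, in deg C per decade.

    The slope is invariant to the anomaly reference period, which is
    why boxplots of trends avoid the baseline-choice problem that
    afflicts time series overlays.
    """
    years = start_year + np.arange(len(anomalies)) / 12.0
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# Synthetic illustration: re-baselining (subtracting a constant,
# here the mean over an arbitrary 30-year window) leaves the
# fitted trend unchanged.
rng = np.random.default_rng(0)
series = 0.013 * np.arange(480) / 12.0 + rng.normal(0.0, 0.1, 480)
t1 = decadal_trend(series)
t2 = decadal_trend(series - series[:360].mean())
assert abs(t1 - t2) < 1e-6
```

With per-run trends computed this way for each model-run combination, a boxplot of the resulting numbers against the observed trends makes the comparison independent of any baseline.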

I’ll also comment on another issue. Cowtan and Way argued several years ago that much of the apparent discrepancy in trends at surface arose because the most common temperature series (HadCRUT4, GISS, etc.) splice air temperature over land with sea surface temperatures. This is a problem only because there is a divergence within CMIP5 models between trends for air temperature (TAS) over ocean and sea surface temperature (TOS). They proposed that the relevant comparandum for HadCRUT4 ought to be a splice as well: TOS over ocean areas and TAS over land.  When this was done, the discrepancy between HadCRUT4 and CMIP5 models was apparently resolved.

While their comparison was well worth doing, there was an equally logical approach which they either didn’t consider or didn’t report: splicing observations rather than models. There is an independent and long-standing dataset for night marine air temperatures (ICOADS). Combining this data with surface air temperature over land would avoid the problem identified by Cowtan and Way. Further, NMAT data is relied upon to correct/adjust inhomogeneities in SST series arising from changes in observation techniques, e.g. Karl et al 2015:

[The] previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures.

Thus, there seem to be multiple reasons to look just as closely at a comparison resulting from this approach as at one from splicing model data, as proposed by Cowtan and Way.  I’ll show the resulting comparisons without prejudging.

Troposphere

Spencer and Christy’s comparisons are for satellite data (lower troposphere). They typically show the tropical troposphere, for which the discrepancy is somewhat larger than for the global troposphere (shown below). The median value from models is 0.28 deg C/decade, slightly more than double the observed trends in UAH (0.13 deg C/decade) or RSS version 3.3 (0.14 deg C/decade). RSS recently adjusted their methodology, resulting in a 37% increase in trend (now 0.19 deg C/decade).   The UAH and RSS 3.3 trends are below all but one model-run combination. Even the adjusted RSS4 trend is less than all but two (of 102) model-run combinations.

The obvious visual differences in this diagram illustrate the statistically significant difference between models and observations.  Many climate scientists, e.g. Gavin Schmidt, are deniers of mainstream statistics and argue that there is no statistically significant difference between models and observations. (See CA discussion here.)
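One simple way to put a number on the visual discrepancy is the rank of an observed trend within the ensemble of model-run trends (e.g., UAH sits below all but one of the 102 runs). A hedged sketch, using synthetic ensemble numbers loosely based on the figures quoted above rather than the actual CMIP5 runs:

```python
import numpy as np

def ensemble_rank(obs_trend, model_trends):
    """Fraction of model-run trends at or below the observed trend.

    A value near 0 means the observation sits in the extreme low
    tail of the model ensemble.
    """
    model_trends = np.asarray(model_trends, dtype=float)
    return float(np.mean(model_trends <= obs_trend))

# Illustrative only: 102 synthetic model-run trends with a median
# near 0.28 deg C/decade (not the real CMIP5 spread), compared to
# the UAH trend of 0.13 deg C/decade.
rng = np.random.default_rng(1)
models = rng.normal(0.28, 0.05, 102)
rank = ensemble_rank(0.13, models)  # typically at or near 0.0
```

An observed trend below essentially all ensemble members is what the boxplots display graphically.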

CMIP5 and HadCRUT4

IPCC AR5 compared CMIP5 projections of air temperature (TAS) to HadCRUT4 and corresponding surface temperature indices (all obtained as a weighted average of air temperatures over land and SST over ocean). In this case, the discrepancy is not as marked, but still significant. The median model trend was 0.241 deg C/decade (less than for the troposphere), while the HadCRUT4 trend was 0.181 deg C/decade (Berkeley 0.163).  Berkeley was lower than all but six runs, HadCRUT4 lower than all but ten. Both were outside the range of the major models. As noted above, the basis of this comparison was criticized by Cowtan and Way, reiterated by Hausfather.

Cowtan and Way Variation

As noted above, Cowtan and Way (followed by Hausfather) combined CMIP5 models for TAS over land and TOS over ocean, for their comparison to HadCRUT4 and similar temperature data. This had the effect of lowering the median model trend to 0.189 deg C/decade (from 0.241 deg C/decade), indicating a reconciliation with observations (0.181 deg C/decade for HadCRUT4) for surface temperatures (though not for tropospheric temperatures, which they didn’t discuss).
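The mechanics of the Cowtan-Way blend are straightforward: in each grid cell, weight model TAS by the land fraction and model TOS by the ocean fraction. A minimal sketch under simplifying assumptions (2-D fields on a common grid, no sea-ice handling, function and variable names mine):

```python
import numpy as np

def blend_tas_tos(tas, tos, land_frac):
    """Blend model fields Cowtan-and-Way style: air temperature
    (TAS) over land, sea surface temperature (TOS) over ocean,
    weighted by the land fraction of each grid cell.

    tas, tos  : 2-D (lat x lon) temperature fields
    land_frac : fraction of each cell that is land, in [0, 1]
    """
    return land_frac * tas + (1.0 - land_frac) * tos

# Toy 2x2 grid: one pure-land cell, one pure-ocean cell, two mixed.
tas = np.array([[1.0, 1.0], [1.0, 1.0]])
tos = np.array([[0.5, 0.5], [0.5, 0.5]])
frac = np.array([[1.0, 0.0], [0.5, 0.25]])
blended = blend_tas_tos(tas, tos, frac)
# Pure-land cell returns TAS; pure-ocean cell returns TOS.
assert blended[0, 0] == 1.0 and blended[0, 1] == 0.5
```

Because model TOS trends run lower than model TAS trends over ocean, this blend mechanically lowers the ensemble median, which is the whole effect reported above.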

ICOADS NMAT and “MATCRU”

The ICOADS air temperature series is closely related to the SST series. There is certainly no discrepancy on its face which disqualifies one versus the other as a valid index. There are major and obvious differences in trends between the ocean series and the land series. The difference is larger than in models, but models do project an increasing difference over the next century.

One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series.  As an experiment, I constructed “MATCRU” as a weighted average (by area) of ICOADS and CRUTEM.  Rather than the consistency reported by Cowtan-Way and Hausfather, this showed a dramatic inconsistency – not unlike the inconsistency in tropospheric series prior to the recent bodge of RSS data.
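The “MATCRU” construction is a simple area-weighted combination. A hedged sketch: the post does not state the exact weights, so the 0.71/0.29 ocean/land split below is only the approximate global area fraction, and the trend numbers are illustrative:

```python
import numpy as np

# Hypothetical "MATCRU": area-weighted average of a marine air
# temperature series (ICOADS NMAT) and a land air temperature
# series (CRUTEM). The 0.71 ocean fraction is the approximate
# global ocean area share, assumed here, not taken from the post.
OCEAN_FRAC = 0.71

def matcru(nmat, crutem, ocean_frac=OCEAN_FRAC):
    nmat = np.asarray(nmat, dtype=float)
    crutem = np.asarray(crutem, dtype=float)
    return ocean_frac * nmat + (1.0 - ocean_frac) * crutem

# Toy check with made-up trends: if the marine series warms more
# slowly than the land series, the blend sits between the two,
# pulled toward the ocean value by the larger ocean weight.
blend_trend = matcru(0.10, 0.20)  # deg C/decade, illustrative
assert 0.10 < blend_trend < 0.20
```

Since the weighting is linear, the trend of the blend is the same weighted average of the component trends, so a slow-warming NMAT series drags the combined index well below a land-SST splice.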

Conclusion

What does this all mean? Are models consistent with observations or not?  Up to the recent very large El Nino, it seemed that even climate scientists were on the verge of conceding that models were running too hot, but the El Nino has given them a reprieve. After the very large 1998 El Nino, there were about 15 years of apparent “pause”. Will there be a similar pattern after the very large 2017 El Nino?

When one looks closely at the patterns as patterns, rather than to prove an argument, there are interesting inconsistencies between models and observations that do not necessarily show that the models are WRONG!!!, but neither are they very satisfying in proving that the models are RIGHT!!!!

From a policy perspective, I’m not convinced that any of these issues – though much beloved by climate warriors and climate skeptics – matter much to policy.  Whenever I hear that 2016 (or 2017) is the warmest year EVER, I can’t help but recall that human civilization is flourishing as never before. So we’ve taken these “blows” and not only survived, but prospered. Even the occasional weather disaster has not changed this trajectory.