Nick Brown Smelled BS

http://narrative.ly/pieces-of-mind/nick-brown-smelled-bull/ h/t Mosher.


Fixing the Facts 2

AR5 Second Order Draft (SOD) Figures 1.4 and 1.5 showed the discrepancy between observations and projections from previous assessment reports. SOD Figure 1.5 (see below as annotated) directly showed the discrepancy for AR4 without additional clutter from earlier assessment reports. Even though AR4 was the most recent and most relevant assessment report, SOD Figure 1.5 was simply deleted from the report.

Nor can it be contended that IPCC erroneously located the projections in SOD Figure 1.5, as SKS claimed here in respect to SOD Figure 1.4. The uncertainty envelope shown in SOD Figure 1.5 was cited to AR4 Figure 10.26. As a cross-check, I digitized the relevant uncertainty envelopes from AR4 Figure 10.26 (which I’ll show later in this post) and plotted them in the figure below (A1B: red + signs; A1T: orange). They match almost exactly. Richard Betts acknowledged the match here.

[Figure: AR5 SOD Figure 1.5, annotated]
Figure 1. AR5 SOD Figure 1.5 with annotations showing HadCRUT4 (yellow) and uncertainty ranges from AR4 Figure 10.26 in 5-year increments (red + signs).

AR5 Figure 1.4
Having deleted the informative (perhaps too informative) SOD Figure 1.5, IPCC’s only comparison between AR4 projections and actuals is in the revised Figure 1.4, a figure that seems more designed to obscure than illuminate.

In the annotated version shown below, I’ve plotted the AR4 Figure 10.26 A1B uncertainty range in yellow. Unfortunately, Figure 1.4 no longer shows an uncertainty envelope for AR4 projections. Here one has to watch the pea carefully. Uncertainty envelopes are shown for the three early assessments, but not for AR4, though it is the most recent. All that is shown for AR4 are 2035 uncertainty ranges for three AR4 scenarios (including A1B) in the right margin, plus a spaghetti of individual runs (a spaghetti that does not correspond to any actual AR4 graphic). From the right-margin A1B uncertainty range, the missing envelope can be more or less interpolated, as I have done here with the red wedge: I matched the 2035 uncertainty to the right margin and interpolated back to 2000 based on the shape of the other envelopes. The re-stated envelope is about twice as wide as the actual AR4 Figure 10.26 uncertainty envelope that had been used in SOD Figure 1.5. Even with this much-expanded envelope, HadCRUT4 observations sit at its very edge – and well outside the actual AR4 Figure 10.26 envelope.
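For concreteness, the wedge interpolation can be sketched in a few lines. The anchor values below are hypothetical placeholders rather than numbers digitized from the figure, and straight lines stand in for the curvature of the other envelopes:

```python
import numpy as np

# Hypothetical anchors (deg C anomaly): the actual numbers would be read off
# AR5 Figure 1.4 and its right-margin A1B uncertainty range.
year0, year1 = 2000, 2035
origin = 0.2                  # where the wedge converges in 2000
lo_2035, hi_2035 = 0.6, 1.6   # right-margin A1B range at 2035

years = np.arange(year0, year1 + 1)
frac = (years - year0) / (year1 - year0)

# Interpolate the missing envelope back from the 2035 range to the 2000
# origin, producing a wedge like the red one in the annotated figure.
lower = origin + frac * (lo_2035 - origin)
upper = origin + frac * (hi_2035 - origin)
```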

[Figure: AR5 Figure 1.4, annotated]
Figure 2. AR5 Figure 1.4 with annotations. The yellow wedge shows the uncertainty range from AR4 Figure 10.26 (A1B). The red wedge interpolates the implied uncertainty range based on the right margin A1B uncertainty range.

AR4 Figure 10.26

Richard Betts recognized that there was no location error in connection with AR4 projections, but argued (see here) that the comparison in AR5 Figure 1.4 was “scientifically better” than the comparison in the SOD figure which, as Betts acknowledged, was “based on” an actual AR4 graphic (AR4 Figure 10.26).

However, if one is comparing AR4 projections to observations, IPCC is obliged to compare against actual AR4 graphics. Figure 10.26 was properly recognized in the SOD as the relevant comparison. It was also cited in contemporary (early 2008) discussion of AR4 projections by Pielke Jr (e.g. here) and Lucia (here and here).

AR4 Figure 10.26 was a panel diagram, the bottom row of which (see below) showed projections for GLB temperatures under six emissions scenarios, including A1B, with the editorial AR4 comment that the models “compare favourably” with observations.
[Figure: bottom row of AR4 Figure 10.26]
Figure 3. Bottom row of AR4 Figure 10.26 showing uncertainty ranges for GLB temperature for six SRES scenarios. Original caption (edited to show the information for the bottom row): Fossil CO2, CH4 and SO2 emissions for six illustrative SRES non-mitigation emission scenarios … and global mean temperature projections based on an SCM tuned to 19 AOGCMs. The dark shaded areas in the bottom temperature panel represent the mean ±1 standard deviation for the 19 model tunings. The lighter shaded areas depict the change in this uncertainty range, if carbon cycle feedbacks are assumed to be lower or higher than in the medium setting… Global mean temperature results from the SCM for anthropogenic and natural forcing compare favourably with 20th-century observations (black line) as shown in the lower left panel (Folland et al., 2001; Jones et al., 2001; Jones and Moberg, 2003).

The next graphic shows the effect of AR5's “re-stated” uncertainty range on Figure 10.26. The right-margin uncertainty ranges of Figure 1.4 have been inset at the correct location, with the ends of the range corresponding to the 2035 values of the red (re-stated) envelope. Despite the doubling of uncertainty in the AR5 restatement, recent HadCRUT4 values are at the very edge of the expanded envelope.

[Figure: detail of AR4 Figure 10.26 with AR5 legend]
Figure 4. Detail of AR4 Figure 10.26 with annotation. HadCRUT4 is overplotted: yellow to 2005, orange thereafter. Original reference period is 1981-2000; the 1961-90 reference period used in AR5 is shown on the right axis.

Conclusion
In the final draft document sent to external reviewers, SOD Figure 1.5 directly compared projections from AR4 Figure 10.26 to observations, a comparison which showed that recent observations were running below the uncertainty envelope. The reference period for the AR4 uncertainty envelope was well-specified (1981-2000) and IPCC correctly transposed the envelope to the 1961-1990 reference period used in SOD Figure 1.5.
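The transposition itself is elementary accounting: compute the mean of the series over the new reference period and subtract it. A minimal sketch, assuming the observational series spans both base periods (synthetic input, not actual HadCRUT4 values; the function name is my own):

```python
import numpy as np

def rebaseline(anoms, years, new_ref=(1961, 1990)):
    """Shift an anomaly series so that its mean over new_ref is zero.

    For a series quoted on a 1981-2000 base, the subtracted constant is
    also the offset needed to transpose any projection envelope quoted
    on the same old base onto the new reference period.
    """
    years = np.asarray(years)
    anoms = np.asarray(anoms, dtype=float)
    in_ref = (years >= new_ref[0]) & (years <= new_ref[1])
    return anoms - anoms[in_ref].mean()
```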

IPCC defenders have purported to justify changes to the location of uncertainty envelopes from the three early assessment reports on the basis that IPCC had erroneously located them in SOD Figure 1.4. Thousands of institutions around the world routinely compare projections to actuals without making mistakes about what their past projections were. Such comparisons are simple accounting, rather than cutting-edge science. It is disquieting that such errors persisted into the third iteration of the documents and the final version sent to external reviewers.

But, be that as it may, there was no reference period error concerning AR4 projections or in SOD Figure 1.5. So reference period error is not a reason for the deletion of this figure.

Richard Betts did not dispute the accuracy of the comparison in SOD Figure 1.5, but argued that the new Figure 1.4 was “scientifically better”. But how can the comparison be “scientifically better” when uncertainty envelopes are shown for the three early assessment reports, but not for AR4? Nor can a comparison between observations and AR4 projections be made “scientifically better” – let alone valid in accounting terms – by replacing actual AR4 documents and graphics with a spaghetti graph that did not appear in AR4.

Nor is the new graphic based on any article in peer reviewed literature.

Nor did any external reviewers of the SOD suggest removal of Figure 1.5, though some (e.g. Ross McKitrick) pointed out the inconsistency between the soothing text and the discrepancy shown in the figures.

Nor, in the absence of error, is there any justification for such wholesale changes and deletions after the third and final iteration had been sent to external reviewers.

In the past, IPCC authors famously deleted data to “hide the decline” in Briffa’s temperature reconstruction in order to avoid “giving fodder to skeptics”. Without this past history, IPCC might be entitled to a little more latitude. However, neither IPCC nor its supporting institutions renounced such conduct or undertook to avoid similar incidents in the future. Thus, IPCC is vulnerable to concerns that its deletion of SOD Figure 1.5 was primarily motivated by the desire to avoid “giving fodder to skeptics”.

Perhaps there’s a valid reason, but it hasn’t been presented yet.

IPCC: Fixing the Facts

Figure 1.4 of the Second Order Draft clearly showed the discrepancy between models and observations, though IPCC’s covering text reported otherwise. I discussed this in a post leading up to the IPCC Report, citing Ross McKitrick’s article in the National Post and Reiner Grundmann’s post at Klimazwiebel. Needless to say, this diagram did not survive. Instead, IPCC replaced the damning (but accurate) diagram with a new diagram in which the inconsistency has been disappeared.

Continue reading

Marotzke’s Broken Promise

A few days ago, Jochem Marotzke, an IPCC Coordinating Lead Author and, according to Der Spiegel, “president of the German Climate Consortium and Germany’s top scientific representative in Stockholm”, was praised (e.g. Judy Curry here) for his promise that the IPCC would address the global warming hiatus “head on” despite pressures from green factions in government ministries and for his declaration that “climate researchers have an obligation not to environmental policy but to the truth”.

However, it turned out that Marotzke’s promise was merely another trick. Worse, it turns out that Marotzke already knew that the report would not properly deal with the hiatus – which, in a revealing interview, Marotzke blamed on an “oversight” (h/t to Judy Curry here). Worse still, it turns out that IPCC authors were themselves complicit during the plenary session in causing information about the discrepancy between models and observations to be withheld from the SPM, as shown by the thus-far undiscussed minutes of the IPCC plenary session. Continue reading

Two Minutes to Midnight

There is much in the news about how IPCC will handle the growing discrepancy between models and observations – long an issue at skeptic blogs. According to BBC News, a Dutch participant says that “governments are demanding a clear explanation” of the discrepancy. On the other hand, Der Spiegel reports:

German ministries insist that it is important not to detract from the effectiveness of climate change warnings by discussing the past 15 years’ lack of global warming. Doing so, they say, would result in a loss of the support necessary for pursuing rigorous climate policies.

According to Der Spiegel (h/t Judy Curry), Jochem Marotzke has promised that the IPCC will “address this subject head-on”. Troublingly, Marotzke felt it necessary to add that “climate researchers have an obligation not to environmental policy but to the truth”.

Unfortunately, as Judy Curry recently observed, it is now two minutes to midnight in the IPCC timetable. It is now far too late to attempt to craft an assessment of a complicated issue.

Efforts to craft an assessment on the run are further complicated by past failures and neglect, both by IPCC and by the wider climate science community. In the two Draft Reports sent to external scientific review, IPCC mostly evaded the problem; such perfunctory assessment of the developing discrepancy between models and observations as it did offer included major errors and misrepresentations, all tending in the direction of minimizing the issue.

IPCC has a further dilemma in coopering up an assessment on the run. Although the topic is obviously an important one, it received negligible coverage in academic literature, especially prior to the IPCC publication cutoff date, and the few relevant peer-reviewed articles (e.g. Easterling and Wehner 2009; Knight et al 2009) are unconvincing.

The IPCC assessment has also been compromised by gatekeeping by fellow-traveler journal editors, who have routinely rejected skeptic articles on the discrepancy between models and observations or articles pointing out the weaknesses of the papers now relied upon by IPCC. Despite exposure of these practices in Climategate, little has changed. Had the skeptic articles been published (as they ought to have been), the resulting debate would have been more robust and IPCC would have had more to draw on in its present assessment dilemma. As it is, IPCC is surely in a well-earned quandary.

Interested readers should also consult Lucia’s recent post which also comments on leaked IPCC draft material. Lucia’s diagnosis of IPCC’s quandary is very similar to mine. She also uses boxplots.

Continue reading

IPCC and the end of summer

Though I haven’t posted for a while, I’ve done quite a bit of work on climate recently, though it hasn’t been the sort of work that lends itself readily to blog posts.

I made a presentation at a workshop session in Erice in the third week of August, which, at Chris Essex’s request, was entitled “Year in Review”, focusing on developments in proxy reconstructions. Ross McKitrick made a presentation with an identical title, covering other topics. I spent much of my time on the section on proxy reconstructions in the forthcoming IPCC report, as presaged in the Second Draft, a document which I’ve had for some time but which I hadn’t yet parsed. In carrying out my own internal review, I re-examined the voluminous literature on individual proxies: ice cores, speleothems, ocean sediments, as well as tree rings and “multi-proxy” reconstructions.

The review reminded me of conversations that I had with two prominent though then relatively early/mid-career climate scientists shortly before the announcement of AR4 in January 2007 (mentioned in passing here). The occasion was the AGU conference in December 2006, where there had been a Union session on the NAS panel report on reconstructions, which was then very fresh. Both scientists said that they were convinced by our criticism of the data and methods used by Mann and similar articles and agreed that there would be little progress in the field without the development of new and better data. The more optimistic of the two estimated that this would take 10 years; the more pessimistic estimated 20 years. Although both scientists were tenured, neither was willing to be identified publicly and both required that I keep their identities confidential.

It is now seven years later – 70% of the way through the 10 year process contemplated by the more “optimistic” of the two scientists. This ought to be enough time to see first fruits of any improvement in the proxies themselves. My re-examination of literature on proxies was done with this in mind.

As CA readers are aware, I have been very critical of the repeated use in IPCC studies of proxies with known attributes (e.g. Graybill’s bristlecone chronologies and Briffa’s Yamal). Such “data snooping” – a term used in wider statistical literature – poisons standard statistical tests. In my opinion, real progress in the field will only come when performance and consistency can be demonstrated with out-of-sample proxies of the same “type”.

Conversely, I see little purpose whatever in the application of increasingly complicated and increasingly poorly understood multivariate methods to snooped datasets (e.g. ones with Graybill bristlecones and/or Briffa Yamal). Nor to datasets with gross contamination, such as the Tiljander or Igaliku lake sediments. Nor do I see much chance of progress when specialists are unable to specify the orientation of a proxy ex ante. Or when they use multivariate methods that permit ex post flipping or screening.
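The statistical point is easy to demonstrate with a toy simulation of my own (not code from any study mentioned here): screen pure red-noise “proxies” by ex post correlation with a trending calibration “target”, flip signs as needed, and average the survivors. The result trends upward in the calibration period by construction, despite containing no signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_years = 500, 1000
calib = slice(n_years - 100, n_years)    # the "20th century" calibration window

# Pure red-noise pseudo-proxies: random walks containing no climate signal.
proxies = np.cumsum(rng.normal(size=(n_proxies, n_years)), axis=1)
target = np.linspace(0.0, 1.0, 100)      # trending calibration "temperature"

# Ex post screening: retain proxies correlated with the target, flipping
# signs so that every survivor correlates positively.
r = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies])
keep = np.abs(r) > 0.5
recon = (np.sign(r[keep])[:, None] * proxies[keep]).mean(axis=0)

# 'recon' trends upward in the calibration window by construction, even
# though the underlying data contain no temperature information at all.
```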

Over the past seven years, one sees both tendencies at work.

The production of temperature reconstructions using increasingly complicated multivariate methods on snooped datasets has continued unabated. Mann et al 2008 is the most prominent such article, but there have been others applying very complicated methods to what ought to be a simple problem.

New multiproxy reconstructions cited by IPCC (e.g. PAGES2K) have also increasingly incorporated lake sediment data, especially since Kaufman et al 2009. Lake sediments have much higher accumulation rates than ocean sediments and in principle ought to yield higher resolution. However, ocean specialists have focused on a few proxies (O18, Mg-Ca, alkenones) and thereby developed populations that increasingly permit analysis for consistency. In contrast, lake sediment specialists have reported a bewildering variety of proxies (varve thickness, grain size, chironomids, X-ray fluorescence, greyscale, pigments, pollen flux, … as well as occasional O18, Mg-Ca and alkenones) without seemingly making any effort at consistency. Lake sediments are also vulnerable to human disturbance, e.g. agricultural activity at Korttajarvi (Tiljander) and Igaliku, rendering modern contamination a very real problem. Specialists have rushed to incorporate this still inchoate data into multiproxy reconstructions (Mann et al 2008; Kaufman et al 2009; PAGES2K; Tingley and Huybers 2013) without adequate precautions to ensure that the data is actually a proxy for temperature.

On the other hand, there has also been steady though less publicized progress by specialists on other fronts.

Antarctic ice cores are an “old” proxy, and, in deeper time, O18 data from Antarctic and Greenland ice cores have been perhaps the most important proxy. However, prior to 2013, no annually-resolved Antarctic core with data to the medieval period had been archived. (IPCC AR4 authors refused to show a relatively high (4-year) resolution Law Dome isotope series.) In 2013, three series became available: Law Dome, Steig’s new data from West Antarctic and, miracle of miracles, even an Ellen Mosley-Thompson series from the 1980s – to my knowledge, the first known sighting of Ellen Mosley-Thompson isotope data. (Despite the importance of Antarctica in the Southern Hemisphere, Mann et al 2008 did not use any Antarctic isotope data before the 13th century, instead reconstructing SH temperature with NH data such as bristlecones and contaminated Finnish sediments, as discussed below.) Unfortunately, no Greenland isotope data more recent than 1995 has been archived – the dead hand of Ellen Mosley-Thompson chilling progress in this area.

On a positive note, I am particularly struck by the progress in high-resolution O18 speleothem data, especially in China. This development has passed mostly under the radar, but offers several lines of real progress.

First, some speleothems directly connect the past two millennia to the Holocene. Placement of one- to two-millennium reconstructions in a Holocene context seems to me to be an important, somewhat behind-the-scenes trend in the field. Esper et al 2012, an under-discussed article, has an extremely interesting discussion in respect to tree rings – a topic that I’ve meant to discuss for ages and promise to get to. There has been interesting progress on treelines, including at (of all places) Yamal, again a topic that I’ve meant to discuss and promise to get to.

Second, speleothems provide information on the tropics and subtropics, areas that were abysmally covered by AR4 reconstructions. The interesting population of Chinese speleothems provides important insight into the East Asian monsoon, an insight that is much enhanced by a perspective extending back through the Holocene to the LGM – a perspective that offers the opportunity to orient the data on a more rational basis than 20th century correlation.

Third, some speleothems are located relatively close to Lonnie Thompson’s tropical ice core data and permit a much clearer perspective on this data – particularly when placed in a Holocene context.

There has also been considerable progress in high-resolution ocean sediment series. Ocean specialists have tended to focus on “deep time”, but in the past decade, “high resolution” data has become more available. Somewhat counterintuitively, the recovery of recent sediments is a serious problem with many ocean cores: uppermost sediments are poorly recovered by piston corers and thus data on the past one-to-two millennia is surprisingly spotty and 20th century data is relatively uncommon.

So there was both an opportunity and a need for an insightful assessment of work in the field since AR4 by the IPCC AR5 authors. In a recent post, Judy Curry mentioned that a young scientist of her acquaintance had complained of the tension in AR5 between incoming scientists with a fresh perspective and holdovers who were primarily concerned with vindicating/protecting AR4.

The Lead Author roster for the paleoclimate chapter gave grounds for both optimism and pessimism as to whether the section on recent paleoclimate would be insightful.

Giving some grounds for optimism was that Valerie Masson-Delmotte had succeeded Jonathan Overpeck as Coordinating Lead Author. Although committed to IPCC conclusions, Masson-Delmotte is more ecumenical in perspective. Plus, in her own right, she had written competent, interesting and detailed assessments and reconciliations of Antarctic ice cores – precisely the sort of thing that needs to be done with lake sediments and tree rings. For example, Masson et al 2001 canvassed all Holocene Antarctic ice cores, carefully analysing inconsistencies among the records. Some coastal ice cores showed strong recent increases in d18O that were inconsistent with declines at inland sites. Rather than relying on Mannian statistics to sort out inconsistent data, she concluded that the coastal ice cores had been affected by ice flow – with the lower part of the core coming originally from higher elevations. A careful assessment of the proxy literature in accordance with Masson-Delmotte’s own style and standards would have been a welcome change from, for example, Mann’s paean to the Hockey Stick (and Briffa’s obsequious defence).

However, the Lead Author of the section on recent paleoclimatology was Tim Osborn of CRU, a prominent Climategate correspondent. It is impossible to imagine an author more closely allied to Mann, Jones and Briffa and less qualified to provide an independent assessment of their work. In the wake of Climategate, Hans von Storch (among others) had urged CRU authors to step down from IPCC assessment roles, but instead a CRU author was placed in charge of the section that had occasioned the primary past controversy.

In addition, although Briffa, Mann and Jones have attracted most of the attention, Osborn had a personal role in some of the most notorious Climategate incidents and has undeservedly flown beneath the radar. It was Osborn who physically deleted the post-1960 data that hid the decline in the IPCC 2001 report. Osborn was also co-author of the first article that originally deleted the declining data (with the result that the inconsistency between the Briffa and Mann reconstructions was disguised). It was Osborn whom Mann asked not to reveal his “dirty laundry” to the wider community. In a lesser-known AR4 hide-the-decline incident, Osborn suggested that AR4 authors withhold any visual display of the declining Law Dome (Antarctica) isotope series, even though this left the AR4 illustration with only two long Southern Hemisphere proxies (both tree ring series used by Gergis). Osborn was also one of the participants in the delete-all-emails incident. Remarkably, despite his central role in these incidents, Osborn is not listed as having been interviewed by any of the “inquiries”.

Osborn’s publication record consists almost entirely of joint articles with Briffa – articles which have been rightly reviled on skeptic blogs both for internal inconsistency and for assertions that can only reasonably be characterized as cargo cult “science”. I and others have focused on Briffa, but Osborn was also responsible.

Worse, given Osborn’s unrepented complicity in Hide the Decline and similar incidents, no reader can reasonably presume that Osborn has disclosed results adverse to the “message” or that the form of presentation has not been tailored to the advantage of the “message”. As too often, one has to watch the pea.

On balance, the IPCC section on recent paleoclimate mostly lives up to what one would expect from Osborn. Its focus is mainly on multiproxy reconstructions, rather than the data within the reconstructions. Unfortunately, many of these “new” reconstructions use snooped datasets and it takes time to determine what, if anything, is “new” in the reconstruction (or whether it depends on Graybill’s bristlecones or other “old” data). In fairness, AR5 conceded that NH reconstructions prior to AD1200 were heavily dependent on limited data (carefully avoiding the term “bristlecone”) lowering their confidence in reconstructions prior to AD1200, while still displaying them. Remarkably, their NH spaghetti graph ecumenically includes the Loehle and McCulloch 2008 reconstruction. (I haven’t endorsed this reconstruction as being “right” but see no reason why it is unworthy of being in a spaghetti graph.)

Their handling of the Southern Hemisphere is very curious – a topic that I’ll address in detail. However, as a quick preview, readers will recall that Briffa once famously wrote:

I am sick to death of Mann stating his reconstruction represents the tropical area just because it contains a few (poorly temperature representative ) tropical series. He is just as capable of regressing these data again any other “target” series, such as the increasing trend of self-opinionated verbage he has produced over the last few years

Despite presumed awareness of this absurd aspect of Mann reconstructions, IPCC’s Southern Hemisphere diagram relies on Mann et al 2008 reconstructions, which suffer from and exacerbate the defect observed years earlier by Briffa: Mann et al 2008 used bristlecones and contaminated Tiljander sediments not only for its NH reconstruction, but for its Southern Hemisphere reconstruction as well. While IPCC did not place great confidence in Mann’s SH reconstruction, it is not clear that it contains any usable scientific information on SH temperature history.

Their section on the tropics is also curious: in the Zero Order Draft, they had observed that there was “nothing unusual” about recent drought or floods in a 1000-year context. At the time, I thought that there was zero chance that this (true) observation would survive the editorial slant. It’s interesting to see how this has evolved. An important new emphasis for recent paleoclimate in AR5 is the set of model-proxy comparisons arising out of the PMIP3 Last Millennium project. I haven’t discussed this enterprise in the past, but it is an important and interesting new direction.

In all, there’s lots of fresh material which lends itself to many individual posts. I’ve got much material in inventory and I’ll see what overall conclusions result from the exercise of writing them up.

However, to the extent that I might have hoped that IPCC AR5 would shed fresh insight into the development of recent paleoclimatology, it is mostly very disappointing. Too much rationalization of questionable multiproxy studies and not nearly enough assessment of actual progress on the data front. In a phrase: too much Osborn and not enough Masson-Delmotte.

Lewandowsky’s Backdating

In today’s post, I want to discuss Lewandowsky’s backdating of the blogpost in which he purported to “out” four skeptics, a claim that he reiterated and embellished in a subsequent academic article, Lewandowsky et al (Fury). In response to a recent FOI request by Simon Turnill, the University of Western Australia stated that, based on their examination of records at Lewandowsky’s blog, it had been published on Sep 10 11:50:00 Australian Western Time:

[Image: UWA FOI response showing the blog post timestamp]

However, in my opinion, there is overwhelming evidence that the blogpost was not published until September 11, 2012 between 4:00 and 4:30 am Australian Western Time (6 – 6:30 Australian Eastern), about 15 hours later. Between these times, the three then unidentified skeptics had been identified at both Climate Audit (here) and updates at Jo Nova (here), with these identifications even being reported by Barry Woods on a thread at Lewandowsky’s STW blog.

However, because of the date shown on Lewandowsky’s Australian blog, Lewandowsky appears to have the priority that he claimed both in the blogpost and the academic article. In today’s post, I’ll summarize the evidence for backdating, new information on which has arisen both through recent FOI and analysis by Simon Turnill. Continue reading

Guy Callendar vs the GCMs

As many readers have already surmised, the “GCM-Q” model that visually out-performed the Met Office CMIP5 contribution (HadGEM2) originated with Guy Callendar and, in particular, Callendar 1938 (QJRMS). My attention was drawn to Callendar 1938 by occasional CA reader Phil Jones (see here, and the covering blog post by co-author Ed Hawkins here). See the postscript for some comments on these articles.

Callendar 1938 proposed (his Figure 2) a logarithmic relationship between CO2 levels and global temperature (expressed as an anomaly from the then-present mean temperature). In my teaser post, I used Callendar’s formula (with no modification whatever) together with RCP4.5 total forcing and compared the result to the UK Met Office’s CMIP5 contribution (HadGEM2), also using RCP4.5 forcing.

In today’s post, I’ll describe Callendar’s formula in more detail. I’ll also present skill scores for global temperature (calculated in a conventional way) for all 12 CMIP5 RCP4.5 models for 1940-2013, relative to simple application of the Callendar formula. Remarkably, none of the 12 GCMs outperforms Callendar, and 10 of 12 do much worse.
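For readers wanting to experiment, here is a minimal sketch of both ingredients: a logarithmic Callendar-style anomaly, using the 1.65 deg C/doubling sensitivity attributed to GCM-Q later on this page and the RCP 278 ppm baseline as reference, and a conventional MSE-based skill score. The function names and the choice of skill metric are my assumptions, not taken from the post:

```python
import numpy as np

def callendar_anomaly(co2_ppm, sensitivity=1.65, co2_ref=278.0):
    """Logarithmic CO2-temperature relationship in the spirit of
    Callendar (1938) Figure 2: 'sensitivity' deg C per doubling."""
    return sensitivity * np.log(np.asarray(co2_ppm) / co2_ref) / np.log(2.0)

def skill_score(model, obs, reference):
    """Conventional MSE skill score: 1 - MSE(model)/MSE(reference).
    Negative values mean the model does worse than the reference
    (here, the Callendar baseline) over the comparison period."""
    obs = np.asarray(obs)
    mse = lambda x: np.mean((np.asarray(x) - obs) ** 2)
    return 1.0 - mse(model) / mse(reference)
```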

I’m not arguing that this proves that Callendar’s parameterization is engraved in stone; Callendar himself would have been the first to say so. It is no doubt rather fortuitous that the parameters of Callendar’s Figure 2 outperform so many GCMs. The RCP4.5 forcing used in my previous post included an aerosol history, the provenance of which I have not parsed. I’ve done a similar reconstruction using RCP4.5 GHG only with a re-estimate of the Callendar parameters, which I will show below. Continue reading

Results from a Low-Sensitivity Model

Anti-lukewarmers/anti-skeptics have a longstanding challenge to lukewarmers and skeptics to demonstrate that low-sensitivity models can account for 20th century temperature history as well as high-sensitivity models. (Though it seems to me that, examined closely, the supposed hindcast excellence of high-sensitivity models is salesmanship, rather than performance.)

Unfortunately, it’s an enormous undertaking to build a low-sensitivity model from scratch and the challenge has essentially remained unanswered.

Recently a CA reader, who has chosen not to identify himself at CA, drew my attention to an older generation low-sensitivity (1.65 deg C/doubling) model. I thought that it would be interesting to run this model using observed GHG levels to compare its success in replicating 20th century temperature history. The author of this low-sensitivity model (denoted GCM-Q in the graphic below) is known to other members of the “climate community”, but, for personal reasons, has not participated in recent controversy over climate sensitivity. For the same personal reasons, I do not, at present, have permission to identify him, though I do not anticipate him objecting to my presenting today’s results on an anonymous basis.

In addition to the interest of a low-sensitivity model, there’s also an intrinsic interest in running an older model to see how it does, given observed GHGs. Indeed, it is a common complaint on skeptic blogs that we never get to see the performance of older models on actual GHGs, since the reported models are being constantly rewritten and re-tuned. That complaint cannot be made against today’s post.

The lower sensitivity of GCM-Q arises primarily because it has negligible net feedback from the water cycle (clouds plus water vapour). It also has no allowance for aerosols. [Update, July 22: Aerosols did impact the calculation shown here, since the RCP4.5 column used is CO2-equivalent, which aggregates aerosols even though they are not listed as a separate column.]

In the graphic below, I’ve compared the 20th century performance of the high-sensitivity HadGEM2 RCP45, the UK Met Office contribution to CMIP5 (red), and the low-sensitivity “GCM-Q” (green) against observations (HadCRUT4, black). In my opinion, the common practice of centering observations and models on very recent periods (1971-2000, or even 1986-2005 as in Smith et al 2007; 2013) is very pernicious for the proper adjudication of recent performance. Accordingly, I’ve centered on 1921-1940 in the graphic below.

On this centering, HadGEM2 has a lengthy “cold” excursion in the 1960s and too rapid recent warming, strongly suggesting that aerosol impact is overestimated and that this overestimate has disguised the effect of too high sensitivity.

[Figure: model comparison, HadCRUT4 vs HadGEM2 and GCM-Q]
Figure 1. Black – HadCRUT4, plus (dotted) decadal HadGEM3 to 2017. Red – HadGEM2 CMIP5 RCP45 average. Green – GCM-Q average. All centered on 1921-1940. 25-point Gaussian smooth.
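For anyone reproducing the figure, a sketch of the 25-point Gaussian smooth follows; the caption does not state the kernel’s standard deviation, so the default below is an assumption:

```python
import numpy as np

def gaussian_smooth(x, n_points=25, sigma=None):
    """Centered Gaussian smooth; sigma defaults to n_points/6 (assumed,
    not specified in the caption). End effects are left untreated."""
    if sigma is None:
        sigma = n_points / 6.0
    k = np.arange(n_points) - (n_points - 1) / 2.0
    w = np.exp(-0.5 * (k / sigma) ** 2)
    w /= w.sum()
    return np.convolve(np.asarray(x, dtype=float), w, mode="same")
```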

Although the close relationship between GCM-Q and observations in the above graphic suggests that there has been tuning to recent results, I have verified that this is not the case and can reassure readers on this point. (I hope to be able to provide a thorough demonstration in a follow-up post.)

I hope to provide further details on the model in the future. In the meantime, I think that even this preview shows that it is possible to provide a low-sensitivity account of 20th century temperature history. Indeed, it seems to me that one could argue that GCM-Q actually outperformed HadGEM2 in this respect.

Update July 22: Forcing for GCM-Q was from RCP4.5 (zip here). The above diagram used CO2EQ (column 2), defined as:

CO2 equivalence concentrations using CO2 radiative forcing relationship Q = 3.71/ln(2)*ln(C/278), aggregating all anthropogenic forcings, including greenhouse gases listed below (i.e. columns 3,4,5 and 8-35), and aerosols, trop. ozone etc. (not listed below).

Column 4 is CO2 only. The CO2EQ and CO2-only (column 4) forcings are compared in the diagram below in the same style.

[Figure: model comparison, CO2EQ vs CO2-only forcing]
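The quoted relationship converts directly between concentration and forcing. A minimal sketch implementing the formula exactly as given (and its inverse); the function names are mine:

```python
import numpy as np

def forcing_from_co2eq(c_ppm):
    """Total forcing (W/m2) from CO2-equivalent concentration, per the
    RCP4.5 relationship Q = 3.71/ln(2) * ln(C/278) quoted above."""
    return 3.71 / np.log(2.0) * np.log(np.asarray(c_ppm) / 278.0)

def co2eq_from_forcing(q):
    """Inverse: CO2-equivalent concentration implied by total forcing Q."""
    return 278.0 * np.exp(np.asarray(q) * np.log(2.0) / 3.71)
```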

Met Office Hindcast

In a recent post, I noted the discrepancy between the UK Met Office contribution to IPCC AR5 and observations (as many others have observed), a discrepancy that is also evident in the “initialized” decadal forecast using the most recent model (HadGEM3). I thought that it would be interesting to examine the HadGEM2 hindcast to see whether there are other periods in which there might have been similar discrepancies. (Reader Kenneth Fritsch has mentioned that he’s been doing similar exercises.)

In the figure below, I’ve compared HadCRUT4 (anomaly basis 1961-1990) to the Met Office CMIP5 contribution (red), converted to 1961-90 anomaly.

Continue reading
