In my first writeup, I observed that Exponent’s Logo transients appeared to be bodged too high, even given their unwarranted and adverse use of a 67 deg F initialization (Exponent’s “temperature trick”). In today’s post, I’ve taken a closer look at the seemingly questionable calculation of the transients at 67 deg F, showing that the Patriot transients make sense only if the transients purporting to show Logo Gauge initialization were not actually initialized at 12.5 psi using the Logo Gauge (as stated, and as was the purpose of the diagram). My reverse engineering shows that the Patriot dry transient in Figure 27 only makes sense if the Logo Gauge read 12.81 psi at initialization, or if the Master Gauge (rather than the stated Logo Gauge) was erroneously used for initialization. If I’m correct, this is a very significant error – a botch, rather than a bodge – for which one would expect a prompt corrigendum, if not retraction, of the corresponding calculations. In a postscript to today’s post, I’ve attached a note on conversion from the Logo and Non-Logo Gauge scales to a correctly calibrated Master Gauge scale.
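The sensitivity of the transients to the initialization temperature can be sketched with the Ideal Gas Law at constant volume. This is a minimal illustration of the arithmetic, not Exponent’s calculation: the 48 deg F game temperature and the standard 14.7 psi atmospheric pressure are my assumed values.

```python
# Ideal Gas Law at constant volume: (P1 + Patm)/T1 = (P2 + Patm)/T2,
# with gauge pressures in psi and absolute temperatures in deg Rankine.
P_ATM = 14.7  # assumed standard atmospheric pressure, psi


def gauge_pressure_at(t_game_f, p_init=12.5, t_init_f=67.0):
    """Gauge pressure (psi) at game temperature t_game_f (deg F) for a ball
    set to p_init psi at room temperature t_init_f (deg F)."""
    t_init_r = t_init_f + 459.67   # deg F -> deg Rankine
    t_game_r = t_game_f + 459.67
    return (p_init + P_ATM) * t_game_r / t_init_r - P_ATM


# Exponent's 67 deg F initialization versus a 71 deg F initialization,
# both evaluated at an assumed 48 deg F game temperature:
p67 = gauge_pressure_at(48.0, t_init_f=67.0)   # ~11.52 psi
p71 = gauge_pressure_at(48.0, t_init_f=71.0)   # ~11.32 psi
```

A 4 deg F difference in assumed initialization temperature shifts the predicted transient by about 0.2 psi, which is why the choice of 67 versus 71 deg F matters so much to the comparison.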
One of the ironies of the NFL’s conduct in this affair is that it can be established that NFL officials (under the supervision of NFL Executive Vice President Troy Vincent) over-inflated Patriot balls at half-time, the only proven tampering with Patriot balls. Brady and the Patriots were unaffected by the overinflation by NFL officials, as they destroyed the Colts in the second half.
Exponent must have noticed the over-inflation by officials, as it is implied by the post-game measurements, but failed to report or comment on it. Their avoidance becomes all the more conspicuous because many of the texts at issue in the Wells Report pertain to an earlier incident in which NFL officials had over-inflated Patriot balls, much to Brady’s frustration and annoyance at the time.
By converting football pressures to ball temperatures under the Ideal Gas Law, it is possible to conveniently show Colt and Patriot information – transients, simulations and observations – on a common scale. I’ve done this in the diagram shown below, and, in my opinion, it neatly summarizes the actual information. Commentary follows the figure.
Figure 1. Transients as digitized from Figures 25 and 27, converted to temperature transients using the Ideal Gas Law. Red – Patriot; blue – Colt; thick – dry; thin – wet; solid – Logo; dashed – Non-Logo. Simulations shown as open circles: large – Logo, 67 deg F initialization; small – Non-Logo, 71 deg F initialization. Observed averages: solid circle – Non-Logo; + – Logo.
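The conversion used for the figure can be sketched by inverting the constant-volume Ideal Gas Law relation: given a measured gauge pressure, solve for the ball temperature implied by a 12.5 psi, 67 deg F initialization. The 14.7 psi atmospheric pressure and the function name are my assumptions for this sketch.

```python
P_ATM = 14.7  # assumed standard atmospheric pressure, psi


def implied_temp_f(p_gauge, p_init=12.5, t_init_f=67.0):
    """Ball temperature (deg F) implied by a measured gauge pressure (psi),
    assuming constant-volume initialization at p_init psi and t_init_f deg F:
    T2 = T1 * (P2 + Patm) / (P1 + Patm), temperatures in deg Rankine."""
    t_init_r = t_init_f + 459.67  # deg F -> deg Rankine
    return t_init_r * (p_gauge + P_ATM) / (p_init + P_ATM) - 459.67
```

On this scale, every pressure measurement maps to an equivalent ball temperature, which is what allows Colt and Patriot transients, simulations and observations to be shown on one common axis.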
Readers in the U.S. are doubtless aware of the “Deflategate” scandal, in which the NFL alleged that Tom Brady, the greatest quarterback of his generation, had conspired with an equipment manager and a locker room attendant to release a microscopic amount of pressure from footballs in the AFC championship game. The NFL seemed to be taken completely by surprise by the Ideal Gas Law and by the fact that outside temperatures below the calibration temperature would produce much larger pressure drops without any tampering.
The findings depend on the interpretation of statistical data by decision-makers – a topic that interests me. I found the technical report by Exponent, Wells’ technical consultants, to be very unsatisfactory on numerous counts:
- although they were reported by Wells to have considered “all permutations”, they hadn’t. On important occasions, they omitted highly plausible possibilities that indicated no tampering and, on other occasions, they only considered assumptions that were most adverse to the Patriots;
- on key occasions, it seemed to me that Exponent failed to properly characterize exculpatory results.
At the end of my analysis, I concluded that their key technical findings were simply incorrect and wrote up my analysis, now online here.
I watched both the AFC championship and the final. I have no fan commitment to the Patriots. As someone who’s played sports all his life and whose play has always been rushed, I am amazed at how time seems to stand still for great athletes such as Brady.
Last year, a paper of mine (Lewis 2014) showed that the approach used in Frame et al (2005), which argued for using a uniform prior when estimating equilibrium (strictly, effective) climate sensitivity (ECS), in fact leads to a unique, objective Bayesian estimate for ECS upon undertaking a simple transformation (change) of variables. The resulting estimate was lower, and far better constrained at the upper end, than the one obtained using a uniform prior in ECS itself, as Frame et al (2005) recommended. The only uniform priors involved were those used to estimate posterior probability density functions (PDFs) for observational variables with Gaussian (normally distributed) data uncertainties, where they are totally noninformative and their use is uncontroversial. I wrote an article about Lewis (2014) at the time, and a version of the paper is available here.
A new paper of mine, which uses an essentially identical method to Lewis (2014) but with updated, higher-quality data, has now been published by Climate Dynamics, here. A copy of the accepted version is available on my web page, here.
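The effect of the change of variables can be illustrated numerically: a prior that is uniform in the climate feedback parameter λ (with ECS = F2x/λ) induces a prior in ECS proportional to 1/ECS², which is far from uniform. The ranges and the F2x = 3.7 W/m² value below are illustrative round numbers of mine, not figures from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)
F2X = 3.7  # illustrative forcing for doubled CO2, W/m^2

# Sample a prior that is uniform in the feedback parameter lambda...
lam = rng.uniform(0.5, 3.0, 200_000)
# ...and transform each sample to ECS = F2X / lambda.
ecs = F2X / lam

# The induced density in ECS is proportional to 1/ECS^2: a prior that is
# uniform in one variable is highly informative in the other, suppressing
# the upper tail of ECS relative to a uniform-in-ECS prior.
frac_low = np.mean(ecs < 2.0)                    # empirical mass below ECS = 2
frac_low_analytic = (3.0 - F2X / 2.0) / (3.0 - 0.5)  # = P(lambda > F2X/2)
```

Because a monotone transformation preserves probability mass, the same likelihood yields different posterior conclusions under “uniform in ECS” versus “uniform in a transformed variable” priors; resolving which parameterization is objectively noninformative is the issue the papers address.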
A Scientific American article concerning Bjorn Stevens’ recent paper “Rethinking the lower bound on aerosol radiative forcing” has led to some confusion. The article states, referring to a blog post of mine at Climate Audit: “The misinterpretation of Stevens’ paper began with Nic Lewis, an independent climate scientist.” My blog post showed how the climate sensitivity estimates given in Lewis and Curry (2014) (LC14) would change if the aerosol forcing estimate from Stevens’ recent paper were used instead of the estimate given in the IPCC 5th Assessment Working Group 1 report (AR5 WG1). To clarify, Bjorn Stevens has never suggested that my blog post misinterpreted or misrepresented his paper.
The article also states, paraphrasing rather than quoting, “Lewis had used an extremely rudimentary, some would even say flawed, climate model to derive his estimates, Stevens said.” LC14 used a simple energy budget climate model, described in AR5 WG1, to estimate equilibrium climate sensitivity (ECS) from estimates of climate system changes over the last 150 years or so. An essentially identical method was used to estimate ECS in Otto et al (2013), a paper of which Bjorn Stevens was an author, along with thirteen other AR5 WG1 lead authors (and myself). Energy budget models actually estimate an approximation to ECS, effective climate sensitivity, not ECS itself, which some people may regard as a flaw. AR5 WG1 states that “In some climate models ECS tends to be higher than the effective climate sensitivity”; this is certainly true. Since the climate system takes many centuries to equilibrate, it is not known whether or not this is the case in the real climate system. LC14 discussed the issues involved in some detail, and my Climate Audit blog post referred to estimating “equilibrium/effective climate sensitivity”.
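The energy budget relation underlying LC14 and Otto et al (2013) is straightforward: effective climate sensitivity ≈ F2x·ΔT/(ΔF − ΔQ), where ΔT is the change in global mean temperature, ΔF the change in forcing and ΔQ the change in the Earth’s heat uptake rate between the base and final periods. A minimal sketch follows; the numerical values are illustrative round numbers of mine, not those of either paper.

```python
def energy_budget_ecs(dT, dF, dQ, F2x=3.71):
    """Effective climate sensitivity (K) from the standard energy budget
    relation: ECS_eff = F2x * dT / (dF - dQ)."""
    return F2x * dT / (dF - dQ)


# Illustrative changes between a late-19th-century base period and a
# recent final period (round numbers, for demonstration only):
ecs = energy_budget_ecs(dT=0.8, dF=2.0, dQ=0.4)  # about 1.9 K
```

The simplicity is the point: the estimate uses only observable global changes, with no GCM-derived feedbacks, which is why the same relation could be applied unchanged in Otto et al (2013) and LC14.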
I sent Bjorn Stevens a copy of the above wording and he has responded, saying the following:
“because I have reservations about estimates of ocean heat uptake used in the ‘energy-balance approaches’, and because of a number of issues (which you allude to) regarding differences between effective climate sensitivity estimates from the historical record and ECS, I am not ready to draw the inference from my study that ECS is low. That said, I do think what you write in the two paragraphs above is a fair characterization of the situation and of your important contributions to the scientific debate. The Ringberg meeting also made me confident that the open issues are ones we can resolve in the next few years.
Feel free to quote me on this.
Best wishes, Bjorn”
Update 26 April 2015
Gayathri Vaidyanathan tells me that the article has been changed at ClimateWire. Certainly, the title has been changed, and I presume the text has been amended per the version she sent me, which no longer suggests misinterpretation. But Scientific American is still showing the original version, so the situation is not very satisfactory.
Update 28 April 2015
The text of the article has now been changed at Scientific American, although the title is unaltered. The sentence referring to misinterpretation now reads “Stevens’ paper was analyzed by Nic Lewis, an independent climate scientist.*” At the foot of the article is the note:
“Correction: A previous version of this story did not accurately reflect Lewis’ work. Lewis used Stevens’ study in an analysis that was used by some media outlets to throw doubt on global warming.”
A guest post by Nicholas Lewis
In Part 1 I introduced the talk I gave at Ringberg 2015, explained why it focussed on estimation based on warming over the instrumental period, and covered problems relating to aerosol forcing and bias caused by the influence of the AMO. In Part 2 I dealt with pitfalls in Bayesian probabilistic estimation and summarized the state of observational, instrumental-period-warming-based climate sensitivity estimation. In this third and final part I discuss arguments that estimates from that approach are biased low, and that GCM simulations imply ECS is higher, partly because in GCMs effective climate sensitivity increases over time. I’ve incorporated one new slide here to help explain this issue.
A guest post by Nicholas Lewis
In Part 1 I introduced the talk I gave at Ringberg 2015, explained why it focussed on estimation based on warming over the instrumental period, and covered problems relating to aerosol forcing and bias caused by the influence of the AMO. I now move on to problems arising when Bayesian probabilistic approaches are used, and then summarize, as I see it, the state of observational, instrumental-period-warming-based climate sensitivity estimation. I explained in Part 1 why other approaches to estimating ECS appear to be less reliable.
A guest post by Nicholas Lewis
As many readers will be aware, I attended the WCRP Grand Challenge Workshop: Earth’s Climate Sensitivities at Schloss Ringberg in late March. Ringberg 2015 was a very interesting event, attended by many of the best known scientists involved in this field and in areas of research closely related to it – such as the behaviour of clouds, aerosols and heat in the ocean. Many talks were given at Ringberg 2015; presentation slides are available here. It is often difficult to follow presentations just from the slides, so I thought it was worth posting an annotated version of the slides relating to my own talk, “Pitfalls in climate sensitivity estimation”. To make it more digestible and focus discussion, I am splitting my presentation into three parts. I’ve omitted the title slide and reinstated some slides that I cut out of my talk due to the 15 minute time constraint.
In this part I will cover the first bullet point and one of the major problems that cause bias in climate sensitivity estimates. In the second part I will deal with one or two other major problems and summarize the current position regarding observationally-based climate sensitivity estimation. In the final part I will deal with the third bullet point.
In a nutshell, I will argue that:
- Climate sensitivity is most reliably estimated from observed warming over the last ~150 years
- Most of the sensitivity estimates cited in the latest IPCC report had identifiable, severe problems
- Estimates from observational studies that are little affected by such problems indicate that climate sensitivity is substantially lower than in most global climate models
- Claims that the differences are due to substantial downwards bias in estimates from these observational studies have little support in observations.
Rahmstorf et al 2015 Figure 5 shows a coral d15N series from offshore Nova Scotia (see left panel below). The corresponding plot from the source is shown on the right. Original captions for both follow. There’s enough information in the figures and captions to figure out Rahmstorf’s next trick. See if you can work it out before looking at my explanation below the fold.
Figure 1. Left – Rahmstorf et al Figure 5. Original caption: Figure 5 A compilation of different indicators for Atlantic ocean circulation. The blue curve shows our temperature-based AMOC index, also shown in Fig. 3b. The dark red curve shows the same index based on NASA GISS temperature data (ref. 48) (scale on left). The green curve with uncertainty range shows coral proxy data (ref. 25) (scale on right). The data are decadally smoothed. Orange dots show the analyses of data from hydrographic sections across the Atlantic at 25 N, where a 1 K change in the AMOC index corresponds to a 2.3 Sv change in AMOC transport, as in Fig. 2, based on the model simulation. Other estimates from oceanographic data similarly suggest relatively strong AMOC in the 1950s and 1960s, weak AMOC in the 1970s and 1980s, and stronger AMOC again in the 1990s (refs 41, 51). Right – Sherwood et al 2011 Figure 3 excerpt. Original caption: time series … annual mean bulk d15N from six colonies of the deep-sea gorgonian P. resedaeformis. Shaded areas represent 95% confidence intervals around annual means. Dashed lines indicate long-term trends, where significant. Note the cold periods (blue bars) of the 1930s/1940s and 1960s and the sustained warm period (red bar) since 1970. Bulk d15N is most strongly correlated with NAO at a lag of 4 years (r = -0.19) and with temperature at a lag of 3 years (r = -0.27, p < 0.05). … Squares in bulk d15N plot show values of the eight individual samples used for d15N-AA analysis.