EPA: the Endangerment Finding was not a “highly influential scientific assessment”

The recent report of the EPA Office of Inspector General (OIG) contains a remarkable dispute between the OIG on the one hand and the EPA and the Office of Management and Budget (OMB) on the other as to whether the Technical Support Document (TSD) for the Endangerment Finding was a “highly influential scientific assessment”, a defined category under OMB peer review policy. It would doubtless seem self-evident to most readers that, if any scientific assessment were to meet the criteria for being “highly influential”, the TSD for the Endangerment Finding would meet them.

But readers should never under-estimate the capacity for institutional mendacity. The EPA and OMB have both vigorously argued that the TSD was NOT a “highly influential scientific assessment”. The OIG report includes a fascinating series of appendices in which EPA and OMB gradually articulate this seemingly improbable doctrine – a doctrine rejected by the OIG.

The dispute arises because the EPA peer review procedures did not meet U.S. standards for a “highly influential scientific assessment” – as clearly stated in the recent OIG report. Thus EPA and OMB have resorted to the improbable argument that the Endangerment Finding was not a “highly influential scientific assessment”.

Monthly Centering and Climate Sensitivity

In our recent discussion of Dessler v Spencer, UC raised monthly centering as an issue in respect to the regressions of TOA flux against temperature. Monthly centering is standard practice in this branch of climate science (e.g. Forster and Gregory 2006, Dessler 2010), where it is done without any commentary or justification. But such centering is not something that is lightly done in time series statistics. (Statisticians try to delay or avoid this sort of operation as much as possible.)

When you think about it, it’s not at all obvious that the data should be centered on each month. I agree with the direction that UC is pointing to – a proper statistical analysis should show the data and results without monthly centering, to verify either that the operation of monthly centering doesn’t affect results or that its impact on results has a physical explanation (as opposed to being an artifact of the monthly centering operation).
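For concreteness, “monthly centering” here means subtracting each calendar month’s own mean from the series, i.e. converting to monthly anomalies. A minimal sketch of the operation (assuming the anom() function used in the script at the end of this post does something along these lines):

# minimal sketch of monthly centering for a monthly ts object x
anom_sketch = function(x) {
  m = cycle(x)                                   # calendar month index 1..12
  x - ave(as.numeric(x), m, FUN = function(z) mean(z, na.rm = TRUE))
}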

In order to carry out the exercise, I’ve used AMSU data because it is expressed in absolute temperatures. I’ve experimented with AMSU data at several levels, but will first show the results from channel 4 (600 mb) because they seem quite striking to me and because troposphere temperatures seem like a sensible index of temperature for comparing to TOA flux (since much TOA flux originates from the atmosphere rather than the surface.)

In the graphic below, the left panel plots the CERES TOA Net flux (EBAF monthly version) against monthly AMSU channel 4 temperatures. (Monthly averages are my calculation.) The right panel shows the same data plotted as monthly anomalies. (HadCRU, used in some of the regression studies, uses monthly anomalies.) The red dotted line shows the regression line of flux~temperature, while the black dotted line shows a reference line with a slope of 3.3 wm-2/K. Take a look – more comments below.


Figure 1. CERES TOA Net Upward Flux (EBAF) vs AMSU Channel 4 (600 mb) Temperature. Left – Absolute; right – monthly anomaly.

The differences between regressions before and after monthly centering are dramatic, to say the least.

Consider the absolute values (left panel) first. Unlike the Mannian r2 of 0.018 of Dessler 2010, the relationship between TOA flux and 600 mb temperature is very strong (r2 of 0.79). TOA flux is net downward when global 600 mb temperature is at its minimum (Jan – northern winter/southern summer) and net upward in northern summer (July), when global 600 mb temperature is at its maximum.

The slope of the regression line is 7.7 wm-2/K (slopes greater than 3.3 wm-2/K are said to indicate negative feedback). There is an interesting figure-eight shape as a secondary but significant feature. This residual has four zeros during the year – which suggests to me that it is related to the tropics (where incoming solar radiation has a 6-month cycle peaking at the equinoxes, with the spring equinox stronger than the fall equinox).

In “ordinary” statistics, statisticians try to fit things with as few parameters as possible. In this case, a linear regression gives an excellent fit and, with a little experimenting, a linear regression plus a cyclical term with a 6-month period would give an even better fit. There doesn’t seem to be any statistical “need” to apply monthly centering in order to get a useful statistical model.
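The harmonic exercise is easy to set up. Here is a hedged sketch (re-using the collations archived at the end of this post; the column name "600" and the sign convention follow the script below) comparing the plain linear fit with a fit that adds a single semi-annual sine/cosine pair:

# hedged sketch: linear fit plus a 6-month harmonic, using the archived collations
download.file("http://www.climateaudit.info/data/satellite/amsu_monthly.tab", "temp", mode="wb"); load("temp")
download.file("http://www.climateaudit.info/data/ceres/ebaf.tab", "temp", mode="wb"); load("temp")
X = ts.union(ceres=-ebaf[,"net_all"], amsu=amsu_monthly[,"600"])   # sign reversed as in the script below
tt = as.numeric(time(X)); X = data.frame(X)
fm0 = lm(ceres ~ amsu, X)                                          # plain linear fit
fm6 = lm(ceres ~ amsu + cos(4*pi*tt) + sin(4*pi*tt), X)            # add a semi-annual harmonic (period 0.5 yr)
c(plain=summary(fm0)$adj.r.squared, harmonic=summary(fm6)$adj.r.squared)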

Now let’s look at the regression after monthly centering – shown on the same scale. Visually, it appears that the operation of monthly centering has damaged the statistical relationship. The r2 has decreased to 0.41 – still much higher than the r2 of Dessler 2010. (The relationship between TOA flux and 600 mb temperatures appears to be stronger than the corresponding relationship with surface temperatures, especially HadCRU.)

Interestingly, the slope of the regression line is now 2.6 wm-2/K i.e. showing positive feedback.

I’ve done experiments comparing AMSU 600 mb to AMSU SST and both to HadCRU. The results are interesting and will be covered on another occasion.

In the meantime, the marked difference between regression results before and after monthly centering surely warrants reflection.

I try to avoid speculating on physics since I’ve not parsed the relevant original materials but, suspending this policy momentarily, I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperatures. Yes, the relationship between absolute quantities still leaves residuals with a seasonal cycle, but it would be much preferable in statistical terms (and presumably physical terms) to explain the seasonal cycle in residuals in some physical way, rather than by monthly centering both quantities (24 parameters!).
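As a cross-check on the monthly-centering operation: centering both series by calendar month gives (essentially) the same slope as a single regression on the absolute values with explicit month dummies (the Frisch-Waugh result), provided the anomaly base period matches the regression sample. A hedged check, using the data frame A constructed in the script at the end of this post:

# hedged check: monthly centering vs fitting month means explicitly
fm_anom  = lm(ceresn ~ amsun, A)          # slope after monthly centering
fm_month = lm(ceres ~ amsu + month, A)    # month means fitted as explicit dummies
c(coef(fm_anom)["amsun"], coef(fm_month)["amsu"])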

If there is a “good” reason for monthly centering, the reasons should be stated explicitly and justified in the academic articles (Forster and Gregory 2006, Dessler 2010) rather than being merely assumed – as appears to have happened here. Perhaps there is a “good” reason and we’ll all learn something.

In the meantime, I think that we can reasonably add monthly centering to the list of questions surrounding the validity of statistical analyses purporting to show positive feedbacks from the relationship of TOA flux to temperatures. (Other issues include the replacement of CERES clear sky with ERA clear sky and the effect of leads/lags on Dessler-style regressions.)

I suspect that it may be more important than the other two issues. We’ll see.

PS – there are many interesting aspects to the annual story in the figure shown above. The maximum annual inbound flux is in the northern winter (Jan) and the minimum is in the northern summer – the difference is over 20 wm-2, large enough to be interesting. The annual cycle of outbound flux and GLB temperature reaches its maximum in the opposite season to the one that one would expect from the annual cycle of inbound flux. I presume that this is because of the greater proportion of land in the NH, as Troy observed. In effect, energy accumulates in the SH summer and dissipates in the NH summer. An interesting asymmetry.

Note: AMSU daily information is at http://discover.itsc.uah.edu/amsutemps/. I’ve uploaded scripts that scrape the daily information from this site for all levels and collate into monthly averages. See script below. My collation of monthly data is also uploaded.

# scrape daily AMSU data (all levels) and collate into monthly averages
source("http://www.climateaudit.info/scripts/satellite/amsu.txt")
amsu = makef()
amsum = make_monthly(amsu)

# or load the pre-collated monthly data archived at CA
download.file("http://www.climateaudit.info/data/satellite/amsu_monthly.tab", "temp", mode="wb")
load("temp")
amsum = amsu_monthly

The CERES data used here is the EBAF version, downloaded from http://ceres-tool.larc.nasa.gov/ord-tool/jsp/EBAFSelection.jsp as an ncdf file with the TOA parameters selected. Collated into time series and placed at CA:

download.file("http://www.climateaudit.info/data/ceres/ebaf.tab", "temp", mode="wb")
load("temp"); tsp(ebaf)

The graphic is produced by:

# assemble absolute series and monthly anomalies; sign of net flux reversed so that positive = upward
A = ts.union( ceres=-ebaf[,"net_all"], amsu=amsum[,"600"], ceresn=anom(-ebaf[,"net_all"]), amsun=anom(amsum[,"600"]) )
month = factor( round(1/24 + time(A)%%1, 2) )
A = data.frame(A)
A$month = month
nx = data.frame( sapply(A[,1:2], function(x) tapply(x, month, mean, na.rm=TRUE) ) )   # monthly means of the absolute series

# if (tag) png(file="d:/climate/images/2011/spencer/ceres_v_amsu600.png", h=480, w=600)

layout(array(1:2, dim=c(1,2)))

# left panel: absolute values
plot(ceres~amsu, A, xlab="AMSU 600 mb deg C", ylab="Flux Out wm-2", ylim=c(-10,10), xlim=c(251.5,255), xaxs="i", yaxs="i")
lines(A$amsu, A$ceres)
points(nx$amsu, nx$ceres, pch=19, col=2)
title("Before Monthly Normal")
for(i in 1:11) arrows( x0=nx$amsu[i], y0=nx$ceres[i], x1=nx$amsu[i+1], y1=nx$ceres[i+1], lwd=2, length=.1, col=2)
i=12; arrows( x0=nx$amsu[i], y0=nx$ceres[i], x1=nx$amsu[1], y1=nx$ceres[1], lwd=2, length=.1, col=2)
text( nx$amsu[1], nx$ceres[1], font=2, col=2, "Jan", pos=2)
text( nx$amsu[7], nx$ceres[7], font=2, col=2, "Jul", pos=4)
abline(h=0, lty=3)
fm = lm(ceres~amsu, A); summary(fm)
round(fm$coef, 3)
a = c(250,255); b = fm$coef
lines(a, fm$coef[1] + a*fm$coef[2], col=2, lty=3, lwd=2)   # fitted regression line
abline( 3.3*b[1]/b[2], 3.3, lty=3)                         # reference line with slope 3.3 through the fitted line's zero crossing
text(251.5, 9, paste("Slope:", round(fm$coef[2],2)), pos=4, col=2, font=2)

# right panel: monthly anomalies
plot(ceresn~amsun, A, xlab="AMSU 600 mb deg C", ylab="Flux Out wm-2", ylim=c(-10,10), xlim=c(-1.75,1.75), xaxs="i", yaxs="i")
lines(A$amsun, A$ceresn)
title("After Monthly Normal")
abline(h=0, lty=3)
fmn = lm(ceresn~amsun, A); summary(fmn)
round(fmn$coef, 3) # 2.607
a = c(-2,2); b = fmn$coef
lines(a, b[1] + a*b[2], col=2, lty=3, lwd=2)               # fitted regression line
abline(0, 3.3, lty=2)                                      # reference line with slope 3.3
text(-1.5, 9, paste("Slope:", round(fmn$coef[2],2)), pos=4, col=2, font=2)

Lindzen Choi 2011

Scripts and data for Lindzen and Choi 2011 are now online at CA here together with the original article.

It will take me a little while to get to this. The scripts are in IDL. Translations to R welcomed.

The Dessler (2011) Regression

Dessler (2011) reported the following:

A related point made by both LC11 and SB11 is that regressions of TOA flux or its components vs. ΔTs will not yield an accurate estimate of the climate sensitivity λ or the cloud feedback. This conclusion, however, relies on their particular values for σ(ΔFocean) and σ(ΔRcloud). Using a more realistic value of σ(ΔFocean)/σ(ΔRcloud) = 20, regression of TOA flux vs. ΔTs yields a slope that is within 0.4% of λ, a result confirmed in Fig. 2b of Spencer and Braswell [2008]. This also applies to the individual components of the TOA flux, meaning that regression of ΔRcloud vs. ΔTs yields an accurate estimate of the magnitude of the cloud feedback, thereby confirming the results of D10.

Although these findings have been widely praised by the “community” and already cited by Trenberth et al 2011, exactly what has been shown in this paragraph is far from clear on its face – ironic, given Trenberth’s fulminations against SB11 that “the description of their method was incomplete, making it impossible to fully reproduce their analysis. Such reproducibility and openness should be a benchmark of any serious study.”

I asked Dessler for the source code supporting this paragraph, which he kindly provided, but it would be better to have Supplementary Information that makes such manual requests unnecessary.
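In the meantime, to give readers a concrete handle on the sort of claim at issue, here is a toy sketch of my own (emphatically not Dessler’s code; the parameter values are illustrative assumptions) of a Spencer and Braswell (2008)-style one-box model, in which the regression slope of TOA flux against temperature approaches the feedback parameter λ only as the non-radiative (“ocean”) forcing comes to dominate the radiative (“cloud”) noise:

# toy sketch (my construction, not from any of the papers): one-box model
#   C dT/dt = S(t) + N(t) - lambda*T
# S = non-radiative ("ocean") forcing, N = radiative ("cloud") noise;
# the measured net upward TOA flux anomaly is lambda*T - N
simslope = function(ratio, nyears=500, lambda=3.3, C=6.6, dt=1/12) {
  n = nyears/dt
  S = rnorm(n, sd=ratio)        # sd(S)/sd(N) = ratio
  N = rnorm(n, sd=1)
  temp = numeric(n)
  for (i in 2:n) temp[i] = temp[i-1] + dt/C * (S[i] + N[i] - lambda*temp[i-1])
  flux = lambda*temp - N
  coef(lm(flux ~ temp))[2]      # regression slope; compare to lambda
}
set.seed(1)
sapply(c(1, 5, 20), simslope)   # slope approaches lambda=3.3 only as the ratio grows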

After parsing the code, I’ve come to the conclusion that almost nothing in the above paragraph makes any sense. I’ve set out my understanding below and, if I’ve misunderstood anything, I’ll amend accordingly.

Brian Hoskins and the Times Atlas

Brian Hoskins was one of the first people that Fiona Fox went to for a testimonial to the supposed rigor of the execrable Oxburgh inquiry. Hoskins, presently Bob Ward’s supervisor at the Grantham Institute, shamelessly called the Oxburgh inquiry “thorough and fair”. Although no one has yet pointed this out (partly because of efforts to erase all copies), Hoskins turned up in a similar role in the promotional trailer for the Times Atlas, where he is described as endorsing it as a “useful tool against climate change skeptics.”

Troy: Dessler(2010) “artifact of combining two flux calculations”

Troy_CA has another excellent contribution to the continuing analysis of Dessler 2010 and Dessler 2011 (h/t Mosher for alerting me).

CA readers are aware that the sign of the regression coefficient from Dessler 2010 is reversed when CERES clear sky is used in combination with CERES all sky, instead of replacing CERES clear sky with ERA clear sky. Dessler purported to justify the substitution on the basis of a suggested bias in the CERES clear-sky, referring to Sohn and Bennartz 2008.

In my opinion, this passing reference hardly justifies a failure to disclose the adverse results using CERES clear sky. The adverse results should have been disclosed and discussed (just as Spencer and Braswell 2011 should have shown all relevant models in their justly criticized figure).

Nick Stokes, rather predictably, swooned over Dessler’s supposed wisdom in replacing CERES clear sky with ERA clear sky. Some quotes:

What the reanalysis can then do is make the moisture correction so the humidity is representative of the whole atmosphere, not just the clear bits. I don’t know for sure that they do this, but I would expect so, since Dessler says that are using water vapor distributions. Then the reanalysis has a great advantage.

Instead of simply accepting this sort of arm-waving as proof, Troy_CA has carried out an insightful analysis, with some important conclusions that totally refute Nick’s swoon and, in the process, directly question the replacement of CERES clear sky with ERA clear sky and thus the conclusions of the original article:

the “dry-sky bias correction”, if it exists in ERA, accounts for very little of the difference we see between ERA_CRF and CERES_CRF

The bulk of these CERES_CRF vs. ERA_CRF differences come from this different value for the effective surface albedo. Note that this has nothing to do with a “dry-sky” longwave water vapor bias.

to me there seems to be little ambiguity that the magnitude of the positive feedback in Dessler10 is more of an artifact of combining two flux calculations that aren’t on the same page, rather than some bias correction in ERA-interim.

Following practices of critical climate blogs (I prefer “critical” to “skeptical”), Troy has commendably archived source code.

PS. I’ve obtained some source code from Dessler on some of the calculations in Dessler 2011 and will be posting on that.

PPS. Note that criticizing the analysis of Dessler (2010) does not imply that the conclusions of Spencer and Braswell are “right” (or that they are “wrong”).

The Times Atlas and “Y2K”

In the last couple of days, there has been much to-do in glacier world about an error in the Times Atlas on Greenland glaciers. See, for example, here, here and here.

Unlike the authors of 1000-year temperature reconstructions, glaciologists seem to be concerned about things like using data upside down. As of today, the Times Atlas has issued a somewhat contradictory press release, resiling from its earlier press release, but standing by the atlas.

Appeal of UEA’s Yamal FOI Refusal

Fred Pearce reported in The Climate Files (page 54):

When I phoned Jones on the day the emails were published online and asked him what he thought was behind it, he said: “It’s about Yamal, I think”. The word turns up in 100 separate emails, more than “hockey stick” or any other totem of the climate wars. The emails began with it back in 1996 and they ended with it.

See here for a recent technical discussion of Yamal data and http://www.climateaudit.org/tag/yamal for tagged articles.

An April 2006 Climategate email (684. 1146252894.txt) referred to a regional chronology covering both Yamal and Polar Urals as follows:

we have three “groups” of trees:

“SCAND” (which includes the Tornetrask and Finland multi-millennial chronologies, but also some shorter chronologies from the same region). These trees fall mainly within the 3 boxes centred at: 17.5E, 67.5N; 22.5E, 67.5N; 27.5E, 67.5N

“URALS” (which includes the Yamal and Polar Urals long chronologies, plus other shorter ones). These fall mainly within these 3 boxes: 52.5E [SM: presumably 62.5E], 67.5N; 62.5E, 62.5N (note this is the only one not at 67.5N); 67.5E, 67.5N

“TAIMYR” (which includes the Taimyr long chronology, plus other shorter ones). These fall mainly within these 4 boxes: 87.5E, 67.5N; 102.5E, 67.5N;112.5E, 67.5N; 122.5E, 67.5N

A March 2007 email (780. 1172776463.txt) appears to indicate that the 2006 Chronology had elevated values around AD1000, as the 2007 email refers to an earlier version of the chronology with a “higher peak near 1000 AD”:

Here is the old version for you to compare with… the only noticeable difference is for the URALS/YAMAL region, which previously had a higher peak near 1000 AD.

Although I specifically drew the attention of the Muir Russell panel to the 2006 email as being very important in connection with their mandate to examine evidence “of the manipulation or suppression of data which is at odds with acceptable scientific practice and may therefore call into question any of the research outcomes”, the Muir Russell panel failed to cross-examine CRU on inconsistencies between their evidence to the panel and the contemporary emails in the Climategate dossier.

Earlier this year, I submitted an EIR request for the 2006 regional chronology and the associated list of sites. Both requests were refused. I then submitted an internal appeal to UEA, again refused.

While climate scientists frequently say that they are not in it for the money, UEA’s refusal was based not on the public interest, but on their claim that they would be financially damaged by disclosure of the 2006 chronology, saying that as “copyright holder”, they had “an expectation of making financial gain from” publication of the 2006 chronology and that disclosure would cause them
“financial harm via adverse impact upon reputation, ability to attract research funding, and funding arising from the citation of the publications within the REF process by which universities in the United Kingdom receive funding”.

I have now submitted an appeal to the Information Commissioner. See here for the appeal and here for the prior correspondence. The appeal includes a review of events, commenting unfavorably on a number of untruthful assertions made by CRU and UEA along the way as excuses for not providing data or not complying with FOI/EIR legislation.

Obviously, I think that their arguments are unconvincing. But aside from that, we often hear from the climate community that they are not in it for the money. Any climate scientist who has ever made such a statement should condemn UEA’s placement of the university’s supposed “financial gain” above the public interest. Unfortunately, the climate “community” have, as usual, stood by mutely. Will Michael Tobis or Andrew Dessler speak out against UEA’s refusal? Or will they maintain the silence of the lambs that we have observed in the past?

More Hypocrisy from the Team

Bishop Hill draws attention to the publication of Trenberth’s comment on Spencer and Braswell 2011 in Remote Sensing. Unlike Trenberth’s presentation to the American Meteorological Society earlier this year (see here, here and here), Trenberth et al 2011 was not plagiarized.

The review process for Trenberth was, shall we say, totally different than the review process for O’Donnell et al 2010 or the comment by Ross and me on Santer et al 2008. The Trenberth article was accepted on the day that it was submitted:

Received: 8 September 2011 / Accepted: 8 September 2011 / Published: 16 September 2011

CA readers are well aware of long-term obstruction by the Team not simply regarding details of methodology, but even data. Trenberth objects to incompleteness of methodological description in Spencer and Braswell 2011 as follows:

Moreover, the description of their method was incomplete, making it impossible to fully reproduce their analysis. Such reproducibility and openness should be a benchmark of any serious study.

Obviously these are principles that have been advocated at Climate Audit for years. I’ve urged the archiving of both data and code for articles at the time of publication to avoid such problems. However, these suggestions have, all too often, been resolutely opposed by the Team. Even supporting data, all too often, remains unavailable. I haven’t had time to fully parse Spencer and Braswell as to reproducibility, but note that Spencer promptly provided supporting data to me when requested (as did Dessler). In my opinion, Spencer and Braswell should have archived data as used and source code concurrent with publication, as I’ve urged others to do. However, their failure to do so is hardly unique within the field. That Trenberth was able to carry out a sensitivity study as quickly as he did suggests to me that their methodology was substantially reproducible but, as I noted above, I haven’t parsed the article.

Trenberth observes that “minor changes” in assumptions yielded “major changes” in results, concluding that the claims in Lindzen and Choi 2009 were not robust:

The work of Trenberth et al. [13], for instance, demonstrated a basic lack of robustness in the LC09 method that fundamentally undermined their results. Minor changes in that study’s subjective assumptions yielded major changes in its main conclusions.

I am not in a position to comment on the truth or falsity of Trenberth’s claims as applied to Lindzen and Choi 2009. However, this sort of argument has been a staple of our criticisms of paleo reconstructions, both at Climate Audit and in our published comments. Instead of commending us for such observations in respect to MBH, Trenberth publicly disparaged Ross and me personally for daring to criticize Mann et al. I agree with the principle that Trenberth enunciated here, but not with Trenberth’s hypocritical application of the principle.

Trenberth criticizes Spencer and Braswell for inadequate statistical analysis:

For instance, SB11 [8] fail to provide any meaningful error analysis in their recent paper and fail to explore even rudimentary questions regarding the robustness of their derived ENSO-regression in the context of natural variability.

To a considerable degree, Spencer and Braswell 2011 was a commentary on Dessler 2010. Neither article carried out satisfactory statistical analysis. Dessler 2010 reported a regression with an adjusted r2 of ~0.01 and purported to assert “confidence intervals”. UC carried out the “rudimentary” statistical operation of calculating the slope using the y-variable as regressand for consistency, obtaining different results. Results using CERES clear sky were opposite to results using ERA clear sky. Whatever the merits of CERES versus ERA, this is the sort of sensitivity that should have been reported. This is not to say that the statistical analysis of Spencer and Braswell 2011 was superior to that of Dessler 2010. It wasn’t. Neither article met the criteria enunciated by Trenberth.
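To illustrate the general point with synthetic data (an illustration of my own, not UC’s calculation and not the CERES/ERA series): when the correlation is weak, the slope from the usual regression and the slope implied by reversing the regressand diverge by a factor of roughly 1/r2.

# synthetic illustration: with weak correlation, the choice of regressand matters
set.seed(2)
x = rnorm(120); y = 0.5*x + rnorm(120, sd=3)     # r2 deliberately small
b_yx = coef(lm(y ~ x))[2]                        # slope with y as regressand
b_xy = 1/coef(lm(x ~ y))[2]                      # slope implied with x as regressand
c(b_yx, b_xy, ratio=as.numeric(b_yx/b_xy))       # the ratio equals the squared correlation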

If Trenberth really wants to get into the question of failures to explore “rudimentary questions” of robustness, I invite him to examine the infamous CENSORED directory of MBH98 or to search for the verification r2 results of early steps of MBH98.

Trenberth observes that “correlation does not mean causation” – a principle that is important at Climate Audit:

Moreover, correlation does not mean causation. This is brought out by Dessler [10] who quantifies the magnitude and role of clouds and shows that cloud effects are small even if highly correlated.

Unfortunately, this principle is applied opportunistically in paleoclimate. Team methodology, for example, makes no attempt to verify that 6-sigma bulges in strip-bark bristlecone pines are due to temperature (as opposed to a mechanical effect of the strip barking itself). Team methodology accepts Yamal as a temperature proxy without explaining the decline in ring widths in the majority of nearby sites.

Trenberth wildly overstates Dessler 2011 as well by saying that it “quantifies the magnitude and role of clouds and shows that cloud effects are small”. “Quantifying the magnitude and role of clouds” is an enormous undertaking and would take hundreds of pages of analysis. Dessler 2011 is a short little article addressing a narrow issue. It did not pretend to “quantify the magnitude and role of clouds” nor did it do so.

Clouds were the major source of uncertainty in climate models in Charney 1979 and remained so in IPCC AR4 (2007). If Dessler 2011 did in fact show that “cloud effects are small”, this would be an epochal achievement in climate science. Given that a preprint of Dessler 2011 only became available on Sept 2, 2011, there has been little opportunity to analyse its results so far. Whether Dessler 2011 really proves that “cloud effects are small” remains to be seen. If, like Dessler 2010, it makes such assertions based on an r2 of ~0.01, I think people could reasonably disagree on whether such far-reaching claims had been firmly established.

Some Simple Questions

UC (whose comments should always be read attentively) wondered the other day about the effect of taking monthly normals – a routine preliminary step in much climate analysis. In the case of Dessler v Spencer, both parties to the litigation work with data after taking monthly normals (“post-normal” statistics, so to speak 🙂 ).

I’ve spent a couple of days looking at CERES data without taking normals and present a couple of questions for readers to think about. (I know the answers.) I’d like readers to answer according to their general knowledge and not by looking up or researching.

The first question: the “solar constant” (as measured by CERES) is approximately 340.5 wm-2 (1362/4). What is the variation over the year in global-average incoming solar as measured by CERES?

I’ve temporarily shut off comments on this thread so that you have an opportunity to make your answer without being affected by the answers from other readers. Update: Open now.

Answer: the difference between the annual max and annual min is about 23 wm-2, or about six times the 3.8 wm-2 anticipated from doubled CO2, as readers quickly observed. Some people instinctively think about solar variability, but it’s the eccentricity of the Earth’s orbit that matters.
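For those checking the arithmetic, the eccentricity effect can be estimated on the back of an envelope (a rough sketch assuming eccentricity e ≈ 0.0167 and mean insolation of 340.5 wm-2): incoming flux scales as 1/r2, and the Earth-Sun distance ranges from (1-e) to (1+e) AU.

# rough check of the eccentricity effect on global-mean insolation
e = 0.0167; S0 = 340.5
round(S0 * ((1-e)^-2 - (1+e)^-2), 1)   # ~23 wm-2, perihelion (early Jan) minus aphelion (early Jul)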

Question 2: What is the approximate annual variability in global tropospheric temperature (pick an AMSU lower troposphere level)? Does the annual temperature maximum coincide with the annual forcing maximum? If not, what is the phase difference?