Hansen Simplified

I think that I can give a very simple explanation of just how bizarre the climate reconstruction by Jim Hansen is. The graph below shows the actual Mg/Ca values for ODP806B, which is used in his reconstruction. The orange point is a modern sample in 27.2 deg C water (a value of 4.5, higher than any value in the core). Climatological temperatures in the Pacific Warm Pool from Levitus 1994 were 29.2 deg C, which, using typical transfer functions, would yield a Mg/Ca value of 5.3, much higher than anything in the core. The Mg/Ca values in the core are so low because Mg dissolves preferentially relative to Ca between the surface and the depth of the core (2500 m).


Figure 1. Actual Mg/Ca values for ODP806B (G. ruber), together with the modern observation (orange) cited in Lea et al 2000 and an estimate for modern Warm Pool temperatures.
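The "typical transfer function" arithmetic can be sketched as follows. The exponential calibration constants used here (0.38 and 0.09, in the style of Anand et al. 2003) are my assumption, not necessarily the exact values Lea et al. used, but they reproduce the numbers quoted above:

```python
import math

# Exponential Mg/Ca calibration: Mg/Ca = a * exp(b * T)
# The constants a = 0.38, b = 0.09 are an assumption (Anand et al. 2003 style);
# published calibrations for G. ruber vary somewhat.
def mg_ca_from_sst(t_c, a=0.38, b=0.09):
    return a * math.exp(b * t_c)

def sst_from_mg_ca(ratio, a=0.38, b=0.09):
    # Inverse of the calibration: T = ln(ratio / a) / b
    return math.log(ratio / a) / b

# Warm Pool climatology of 29.2 deg C implies a Mg/Ca value of about 5.3,
# well above anything measured in the core
print(round(mg_ca_from_sst(29.2), 2))  # -> 5.26
```

Note that none of this accounts for dissolution at depth, which is precisely the problem with taking the core values at face value.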

So how do you splice this series of Mg/Ca values to modern instrumental temperatures? You have to estimate how much the series should be moved up and how much it should be dilated, by estimating how much Mg/Ca has dissolved. How did Hansen deal with this? In effect, he simply slid the graph so that the last point on the graph (dated to 4320 BP) lined up with the red dot.
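In code terms, the "slide" amounts to nothing more than adding a constant offset so that the endpoint matches the target. The numbers below are hypothetical, purely to illustrate the operation:

```python
import numpy as np

# Hypothetical Mg/Ca-derived temperatures, oldest to youngest; the last
# value is dated ~4320 BP, not modern
proxy_temps = np.array([27.1, 27.6, 28.0, 27.8, 28.3])

modern_target = 29.2  # instrumental-era value the endpoint is forced to match

# The "splice": slide the whole series so its last point equals the target.
# This assumes, without evidence, that the 4320 BP value equals the modern one.
offset = modern_target - proxy_temps[-1]
spliced = proxy_temps + offset

print(np.isclose(spliced[-1], modern_target))  # the endpoint matches by construction
```

The alignment is guaranteed by the procedure itself; it carries no information about whether the Holocene Optimum was in fact cooler than the present.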

How do we know that the last dot (from the Holocene Optimum) wasn’t warmer than the red dot? Hansen doesn’t say, but it is my understanding that John Hodgman helped with the calculations.

The Resident Expert

Don’t miss this.

Mears and Wentz: Polar Amplification Calamity

New Mears and Wentz data plotted up here shows the full impact of the looming polar amplification calamity in Antarctica. Data was downloaded from their FTP site here.

Christy on Source Code

John Christy writes in reply to my email, occasioned by Connolley’s remark that Wentz and Mears had been forced to reverse engineer his code:

Steve:

We gave RSS the part of the code that was still a source of confusion (a correction for diurnal drift for the LT product). In addition, we provided intermediate adjustment datafiles for both MT and LT – going far beyond the “final product” that Connolley seems to think was all we provided. We did this as early as 2003.

It is true that RSS has not audited our complete code (really codes), but they were essentially able to reproduce the intermediate and final results for the various adjustments based on descriptions in our papers and in dozens of emails with more detailed information. At a conference in Asheville, NC (Oct. 2003), Dr. Mears presented a talk entitled “Understanding the difference between the UAH and RSS retrievals of satellite-based tropospheric temperature estimate” and stated he was satisfied as to having understood the main reasons for the differences between our two datasets. This was for the MT product. In that presentation he showed data from some of the intermediate files we had sent. The subsequent issue with the LT product was dealt with by indeed sharing the part of the code that created the unresolved problem.

We are working on version 6 of the datasets (not much change in trends, but significant change in technique). The paper will describe each step so that it should be easily reproducible for anyone with interest. You guys have made it clear this is important.

John C.

Two comments. First, it sounds like Christy has made, and is continuing to make, a diligent effort to provide support and documentation for his analyses, as compared to the obfuscation of the Hockey Team, e.g. Michael “I will not be intimidated into disclosing my code”, “I did not calculate the verification r2 statistic – that would be a foolish and incorrect thing to do” Mann. Second, if Connolley and others are concerned about aspects of Christy’s code that have not been examined, then you’d have thought that any one of the NRC panels that have investigated surface-troposphere discrepancies would have dealt with the matter. But perhaps these panels, like the North panel, didn’t do any research and just “winged” it.

Massachusetts v EPA at the Supreme Court

Jim Erlandson writes in with the following reference. If you follow through to the link at Northwestern, you will see the cases and the judgements. Interesting issues abound. Take a look at the Amicus Brief by various scientists, including the omnipresent J.M. Wallace of our NAS Panel.

From today’s Wall Street Journal Law Blog:

Greenhouse Gases! – Massachusetts v. EPA, Oral Argument: 11/29/06
Some people call it the marquee case of the upcoming term, others refer to it as the most politically charged. With apologies to Paul Simon, the Law Blog calls it “Al Gore’s Shot at Redemption.”

At issue: The regulation of greenhouse gases. Twelve states sued the Bush Administration alleging that the Environmental Protection Agency is shirking its responsibility to regulate auto emissions, which, they say, contribute to global warming. The EPA says it doesn’t have the authority to regulate auto emissions, and, even if it did, there must be firmer scientific evidence that greenhouse gases cause global warming. The plaintiffs are joined by a host of environmental groups, while the auto and petroleum industries have aligned with the White House. The case turns on an interpretation of the Clean Air Act, which orders the EPA to regulate car-engine emissions that “in [its] judgment cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare.”

The article links to a Medill Journalism (Northwestern University) overview piece which gives a good rundown of the groups involved and which side they’re supporting. It ends with the following quote (of special interest to readers of this blog) from Mary Nichols, a UCLA environmental law professor:

“The scientific case on the harm caused as a result of a failure to curb emissions is so overwhelming that it’s reasonable to think that the Court will send the case back to the EPA.”

Hansen and Bracket Fatigue

Lots of interesting things to find when you turn over the rocks of Hansen et al 2006. These are comments on work in progress, but, to say the least, there appear to be some curious decisions and methodologies.

Willis on Santer et al 2006

The new Santer et al. paper, Forced and unforced ocean temperature changes in Atlantic and Pacific tropical cyclogenesis regions, purports to show that sea surface temperature (SST) changes in the Pacific Cyclogenesis Region (PCR) and the Atlantic Cyclogenesis Region (ACR) are caused by anthropogenic global warming (AGW). They claim to do this by showing that models can’t reproduce the warming unless they include AGW forcings. In no particular order, here are some of the problems with that analysis.

1) The models are "tuned" to reproduce the historical climate. By tuned, I mean that they have a variety of parameters that can be adjusted to vary the output until it matches the historical trend. Once you have done that tuning, however, it proves nothing to show that you cannot reproduce the trend when you remove some of the forcings. If you have a model with certain forcings, and you have tuned the model to recreate a trend, of course it cannot reproduce the trend when you remove some of the forcings … but that only tells us something about the model. It shows nothing about the real world. This problem, in itself, is enough to disqualify the entire study.
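The circularity can be illustrated with a toy model (everything below is synthetic; it stands in for a GCM only in the loosest sense). Tune a "model" with a handful of forcings to a target series, remove one forcing, re-tune, and the fit necessarily degrades:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(100, dtype=float)

# Synthetic "observed" series: a trend plus noise (stand-in for historical SST)
obs = 0.01 * t + rng.normal(0.0, 0.1, t.size)

# Toy "model": a linear combination of forcings, tuned by least squares.
# Column 0 is a trend-like forcing (the stand-in for AGW forcing).
forcings_full = np.column_stack([t, np.ones_like(t), np.sin(2 * np.pi * t / 11)])
coef_full, *_ = np.linalg.lstsq(forcings_full, obs, rcond=None)
rmse_full = np.sqrt(np.mean((obs - forcings_full @ coef_full) ** 2))

# Remove the trend-like forcing and re-tune with what remains
forcings_reduced = forcings_full[:, 1:]
coef_red, *_ = np.linalg.lstsq(forcings_reduced, obs, rcond=None)
rmse_reduced = np.sqrt(np.mean((obs - forcings_reduced @ coef_red) ** 2))

# The reduced model fits worse -- but that outcome is guaranteed by the
# tuning, and says nothing about real-world attribution
print(rmse_reduced > rmse_full)  # True by construction
```

The point of the sketch is that the degraded fit is a property of the fitting procedure, not evidence about the removed forcing.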

2) The second problem is that the models do a very poor job of reproducing anything but the trends. Not that they’re all that hot at reproducing the trends, but what about things like the mean (average) and the standard deviation? If they can’t reproduce those, then why should we believe their trend figures? After all, the raw data, and its associated statistics, are what the trend is built on.

Fortunately, they have reported the mean and standard deviation data. Unfortunately, they have not put 95% confidence intervals or trend lines on the data … so I have remedied that oversight. Here are their results:

(Original Caption) Fig. 4. Comparison of basic statistical properties of simulated and observed SSTs in the ACR and PCR. Results are for climatological annual means (A), temporal standard deviations of unfiltered (B) and filtered (C) anomaly data, and least-squares linear trends over 1900–1999 (D). For each statistic, ACR and PCR results are displayed in the form of scatter plots. Model results are individual 20CEN realizations and are partitioned into V and No-V models (colored circles and triangles, respectively). Observations are from ERSST and HadISST. All calculations involve monthly mean, spatially averaged anomaly data for the period January 1900 through December 1999. For anomaly definition and sources of data, refer to Fig. 1. The dashed horizontal and vertical lines in A–C are at the locations of the ERSST and HadISST values, and they facilitate visual comparison of the modeled and observed results. The black crosses centered on the observed trends in D are the 2 sigma trend confidence intervals, adjusted for temporal autocorrelation effects (see Supporting Text). The dashed lines in D denote the upper and lower limits of these confidence intervals.

I only show Figs. 4A and 4B. The left box is Fig. 4A, and the right box is Fig. 4B.

I have added the red squares around the HadISST mean and standard deviation, along with the trend lines and expected trend lines. Regarding Fig. 4A, which shows the mean temperatures of the models and observations, the majority of the models show cooler SSTs than the observations. Out of the 59 model runs shown, only three of them are warmer in both regions. Two of them are over two degrees colder in both regions, which in the tropical ocean is a huge temperature difference. Only one of the 59 model runs is within the 95% confidence interval of the mean.

Next, look at the trend lines in 4A. In the real world, when the Atlantic warms up by one degree, the Pacific only warms by about a third of a degree. Even if the mean temperatures are incorrect, we would expect the models to reproduce this behaviour. The trend line of the models does not show this relationship.

The standard deviations (Fig. 4B) are even worse. There are no model results anywhere close to the observations. The majority of the models tend to overestimate the variability in the Pacific and underestimate the variability in the Atlantic. This is probably because the variability is inherently larger in the Atlantic (standard deviation 0.35°C) and lower in the Pacific (standard deviation 0.24°C). However, this difference is not captured by the models. The trend line (thick black line) shows that on average the model Pacific variability is 90% of the Atlantic variability, when it should be only about 60%. The light dotted line shows where we would expect the model results to be clustered if they captured this difference in variability. Only a few of the models are close to this line.

3) All of this raises the question of whether we can use standard statistical procedures on this data. All of the data is strongly autocorrelated (Pacific lag(1) autocorrelation = 0.80, Atlantic = 0.89). In their caption to Fig. 4 they say that they adjust for autocorrelation in the trend sigma. Unfortunately, they have not done the same for the standard deviations shown in Fig. 4B.
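The standard adjustment for AR(1) autocorrelation replaces the sample size n with an effective sample size, which in turn inflates the standard errors. A sketch using the lag(1) values quoted above (the 1200-month count assumes the full January 1900 to December 1999 monthly record):

```python
import numpy as np

def effective_n(n, r1):
    # Effective sample size for an AR(1) process with lag-1 autocorrelation r1:
    # n_eff = n * (1 - r1) / (1 + r1)
    return n * (1 - r1) / (1 + r1)

n_months = 1200  # Jan 1900 - Dec 1999, monthly

for region, r1 in [("Pacific", 0.80), ("Atlantic", 0.89)]:
    n_eff = effective_n(n_months, r1)
    # Factor by which naive (i.i.d.) standard errors understate the truth
    inflation = np.sqrt(n_months / n_eff)
    print(region, round(n_eff), round(inflation, 1))
```

With these autocorrelations, the 1200 monthly values behave like roughly 133 (Pacific) and 70 (Atlantic) independent observations, so naive standard errors are too small by factors of about 3 and 4 respectively.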

In addition to being autocorrelated, the Pacific data is strongly non-normal (Jarque-Bera test, p = 2.7 × 10^-9). Here is the histogram of the Pacific data.

As you can see, the data is quite skewed and peaked. Thus, even when we adjust for autocorrelation, it is unclear how much we can trust the standard statistical methods with this data.

4) There are likely more problems with this paper … but this is just a first analysis.

My conclusion? These models are not ready for prime time. They are unable to reproduce the means, the standard deviations, or the relationship between the two ocean regions. I do not think that we can conclude anything from this study, other than that the models need lots of work.

w.

The Hansen Splice

Mg/Ca proxies measure the temperature of calcification of G. ruber, which is not necessarily the same as surface temperature. Dahl et al state

G. ruber is present year–round at Site 723B, but experiences blooms during both monsoon seasons and calcifies above 80 m water depth (14–16)…. In accordance with the seasonality of G. ruber in the western Arabian Sea, Mg/Ca–derived SST from modern RC2730 sediments is 25 deg C, approximately 1 deg C cooler than the annual average.

Compare this to Hansen’s splice which purported to equate instrumental SST with calcification temperature over the mixed layer, stating:

Accepting paleo and modern temperatures at face value implies a WEP 1870 SST in the middle of its Holocene range. Shifting the scale to align the 1870 SST with the lowest Holocene value raises the paleo curve by ~0.5°C

If one applied a 1 deg C adjustment to allow for the difference between temperature of calcification and surface SST as indicated by Dahl et al, then Hansen’s Figure 4 has no force whatever. Modern warming would then, at most, be reaching Holocene Optimum levels (and there’s other hair on the calculation). So what’s the justification for Hansen’s splice? What due diligence did Cicerone perform on Hansen’s splice as part of his review?
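The sensitivity of the conclusion to that 1 deg C offset can be shown with back-of-envelope arithmetic. The temperatures below are hypothetical placeholders, not values from Hansen's paper:

```python
# All numbers are hypothetical, for illustration only
holocene_optimum = 29.0      # Mg/Ca-derived calcification temperature, deg C
modern_sst = 29.2            # instrumental surface SST, deg C
calcification_offset = 1.0   # Dahl et al.: G. ruber calcifies ~1 deg C cooler than SST

# Naive splice: compare instrumental SST directly against the proxy curve
naive_excess = modern_sst - holocene_optimum

# Adjusted splice: first express modern SST as an equivalent calcification temperature
adjusted_excess = (modern_sst - calcification_offset) - holocene_optimum

# With the adjustment, modern warmth no longer exceeds the Holocene Optimum
print(naive_excess > 0, adjusted_excess < 0)
```

In other words, an offset of the size Dahl et al report is larger than the margin on which the "warmest in a million years" claim rests.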

Update: As pointed out in a comment below, this particular comparison between the calcification temperature of G. ruber and SST was made in the Arabian Sea and may not apply to the Western Equatorial Pacific. I’m obviously not an authority on G. ruber calcification temperatures relative to SST, but I’ve made an attempt to see what information exists on the topic, as it seems pretty germane to the splicing, and the Arabian Sea comment is the only information that I’ve located to date. I would presume that G. ruber calcifies throughout the mixed layer in the WEP as well, and that the average temperature of the WEP mixed layer (or of G. ruber calcification) is lower than the surface temperature as estimated by GISS SST. In any event, this is the crux of the issue, and it’s ridiculous that the matter is not addressed in a PNAS article.

Reference: Kristina A. Dahl, Athanasios Koutavas, and Delia W. Oppo, Coherent ENSO and Indian monsoon behavior since the last glacial period. http://005c496.netsolhost.com/as_eep_paper_v2.pdf

Climate Models – the Next Generation

Weaver says the next generation of his climate model will address the influence of climate on human evolution.

I guess we can expect headlines saying that scientists at the University of Victoria have shown that global warming, if left unchecked, will lead to the development of a third eye by the year 2075.

Warmest in a Millll-yun #2

I’ve had a chance to examine Hansen’s argument a little more closely. Structurally it’s a typical splicing argument that we’re familiar with – proxies up to a certain point and then instrumental temperature. In the case of MBH, they use proxies up to 1980 and then compare that to instrumental records. We’re all familiar with that sort of argument. Hansen takes splicing to an entirely new level. He takes a proxy record whose most recent reading is approximately 4320 BP (and there’s hair on that age estimate) and compares that to instrumental records in the 20th century, using the 1870-1900 period (for which he has values for neither series) as a benchmark. I guess you have to be a fellow of the National Academy of Sciences to be able to do this. It sounds impossible so let’s go through this step by step.
