Don't Feed the Bears

One of my brothers forwarded this to me with the caption: “Isn’t it comforting to know that when you are about to become a bear’s breakfast, your buddy is standing there taking photos?”

Submitted Article on Tropical Troposphere Trends

Yesterday Ross and I submitted an article to IJC with the following abstract:

A debate exists over whether tropical tropospheric temperature trends in climate models are inconsistent with observations (Karl et al. 2006; IPCC 2007; Douglass et al. 2007; Santer et al. 2008). Most recently, Santer et al. (2008, herein S08) asserted that the Douglass et al. statistical methodology was flawed and that a correct methodology showed there is no statistically significant difference between the model ensemble mean trend and either RSS or UAH satellite observations. However, this result was based on data ending in 1999. Using data up to the end of 2007 (as available to S08) or to the end of 2008, and applying exactly the same methodology as S08, results in a statistically significant difference between the ensemble mean trend and UAH observations, and a difference approaching statistical significance for the RSS T2 data. The claim by S08 to have achieved a “partial resolution” of the discrepancy between observations and the model ensemble mean trend is unwarranted.

Attached to the article as Supplementary Information was code (of a style familiar to CA readers) which, when pasted into R, will go and collect all the relevant data online and produce all the statistics and figures in the article. In the event that Santer et al wish to dispute or reconcile any of our findings, we have tried to make it easy for them to show how and where we are wrong, rather than to set up pointless roadblocks to such diagnoses.
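For readers who want the flavor of the calculation without the full SI, here is a minimal sketch (not the submitted code) of an S08-style test of the model ensemble mean trend against an observed trend, with the observational standard error adjusted for lag-1 autocorrelation. The model trend and its standard error are placeholders to be supplied from the ensemble, and the function name is mine:

# Minimal sketch of an S08-style trend comparison (not the actual SI code).
# obs: monthly temperature anomaly vector; b_model, se_model: ensemble mean
# trend and its standard error (placeholders, to be computed from the models).
trend_test <- function(obs, b_model, se_model) {
  t <- seq_along(obs)
  fit <- lm(obs ~ t)
  b_obs <- as.numeric(coef(fit)[2])
  r1 <- acf(resid(fit), lag.max = 1, plot = FALSE)$acf[2]  # lag-1 autocorrelation
  neff <- length(obs) * (1 - r1) / (1 + r1)                # effective sample size
  se_obs <- summary(fit)$coef[2, 2] * sqrt((length(obs) - 2) / (neff - 2))
  d <- (b_model - b_obs) / sqrt(se_model^2 + se_obs^2)     # normalized trend difference
  c(d = d, p = 2 * pnorm(-abs(d)))                         # two-sided p-value
}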

We consider only the comparison between the model ensemble mean trend and observations (the Santer H2 hypothesis). In our discussion, we note that we requested the collated monthly data used by Santer to develop his H1 hypothesis and that this request was refused; the correspondence is attached as Supplementary Information. Had the H1 data been available while the file was open, we would have analyzed them; they weren’t, so we didn’t. The results for the H2 hypothesis are interesting in themselves.

We noted that an FOI request to NOAA had been unsuccessful, that the publisher of the journal lacked policies to require the production of data, and that an FOI request to the DOE was pending. We urged the journal to adopt modern data policies. With all the problems facing the new US administration, the fact that they actually turned their minds to issuing an executive order on FOI on their first day in office suggests to me that DOE will produce the requested data. A couple of readers have taken the initiative of writing to DOE as well, expressing their displeasure with Santer’s actions, and they think that the data might become available relatively promptly. Personally, I can’t imagine any sensible bureaucrat touching Santer’s little campaign with a bargepole. I’ve long believed that sunshine would cure this sort of stonewalling and obstruction, and I hope that it does.

Update (Jan 27): Events are moving right along as I discovered when I started going through today’s email. In last week’s snail mail, I received a letter dated Dec 10 from some arm of the U.S. nuclear administration (to which Santer’s Lawrence Livermore belongs) acknowledging my FOI request of Nov 14 to the DOE [from memory, I’ll tidy the dates as I don’t have the snail response on hand], saying that it had been in their queue of requests, which are considered in the order in which they are received. The snail seemed especially slow on this occasion. So I wasn’t holding my breath.

Amazingly, in today’s email is a letter from a CA reader saying that the Santer data has just been put online http://www-pcmdi.llnl.gov/projects/msu/index.php (I haven’t looked yet, but will). He sent an inquiry to them on Dec 29, 2008; the parties responsible wrote to him saying that they would look into the matter. They also emailed him immediately upon the data becoming available.

Surprisingly (or not), the same people didn’t notify me concurrently with the CA reader even though my request was almost 6 weeks prior.

A New Metric for Amplification

ABSTRACT: A new method is proposed for exploring the amplification of the atmosphere with respect to the surface. The method, which I call “temporal evolution”, is shown to reveal the change in amplification with time. In addition, the method shows which of the atmospheric datasets are similar and which are dissimilar. The method is used to highlight the differences between the HadAT2 balloon, UAH MSU satellite, RSS MSU satellite, and CGCM3.1 model datasets.

“Amplification” is the term used for the general observation that the atmospheric temperatures tend to vary more than the surface temperature. If surface and atmospheric temperatures varied by exactly the same amount, the amplification would be 1.0. If the atmosphere varies more than the surface, the amplification will be greater than one, and vice versa.
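As a concrete illustration, one plausible operationalization (not necessarily the exact definition used in any of the papers) computes amplification over a given span as the ratio of the tropospheric trend to the surface trend:

# One plausible operationalization of amplification over a single window:
# the ratio of the tropospheric trend to the surface trend across the same
# months ("tropo" and "surf" are monthly anomaly vectors of equal length).
amplification <- function(tropo, surf) {
  t <- seq_along(surf)
  as.numeric(coef(lm(tropo ~ t))[2] / coef(lm(surf ~ t))[2])
}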

Recently there has been much discussion of the Douglass et al. and the Santer et al. papers on tropical tropospheric amplification. The issue involved is posed by Santer et al. in their abstract, viz:

The month-to-month variability of tropical temperatures is larger in the troposphere than at the Earth’s surface. This amplification behavior is similar in a range of observations and climate model simulations, and is consistent with basic theory. On multi-decadal timescales, tropospheric amplification of surface warming is a robust feature of model simulations, but occurs in only one observational dataset [the RSS dataset]. Other observations show weak or even negative amplification. These results suggest that either different physical mechanisms control amplification processes on monthly and decadal timescales, and models fail to capture such behavior, or (more plausibly) that residual errors in several observational datasets used here affect their representation of long-term trends.

I asked a number of people who were promoting some version of the Santer et al. claim that “the amplification behaviour is similar in a range of observations and climate model simulations” exactly which studies had shown these results. I never received an answer, so I decided to look into it myself.

To investigate whether the tropical amplification is “robust” at various timescales, I calculated the tropical and global amplification at all time scales between one month and 340 months for a variety of datasets. I used both the UAH and the RSS versions of the satellite record. The results are shown in Figure 1 below. To create the graphs, for every time interval (e.g. 5 months) I calculated the amplification of all contiguous 5-month periods in the entire dataset. I took the average of the results for each time interval, and calculated the 95% confidence interval (CI). Details of the method are given in Appendices 2 and 3.
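A sketch of that window sweep, reusing the amplification() function above, might look like the following; note that the naive confidence interval here ignores the autocorrelation induced by overlapping windows, so treat it as indicative only:

# Sketch of the window sweep: for each window length n (3 months up to
# n_max), compute the amplification of every contiguous n-month span,
# then average and form a rough 95% CI. Assumes equal-length monthly
# vectors "surf" and "tropo".
sweep_amplification <- function(surf, tropo, n_max = 340) {
  sapply(3:n_max, function(n) {
    starts <- 1:(length(surf) - n + 1)
    amps <- sapply(starts, function(i)
      amplification(tropo[i:(i + n - 1)], surf[i:(i + n - 1)]))
    c(n = n, mean = mean(amps), ci95 = 1.96 * sd(amps) / sqrt(length(amps)))
  })
}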

I plotted the results as a curve which shows the average amplification for the various time periods.

Figure 1. Change of amplification with time periods. T2 and TMT are middle troposphere measurements. T2LT and TLT are lower troposphere. Typical 95% CIs are shown on two of the curves. Starting date is January 1979. Shortest period shown is three months. Effective weighted altitudes are about 4 km (~600 hPa) for the lower altitude measurements, UAH T2LT and RSS TLT. They are about 6 km (~500 hPa) for the higher measurements, UAH T2 and RSS TMT.

I love surprises, and climate science holds many … despite the oft-repeated claims that the “science is settled”. And there are several surprises in these results, which is great.

1. In both the global and tropical cases, the higher-altitude data show less amplification than the lower-altitude data. This is the opposite of the expected result. In the UAH data, T2LT, the lower layer, has more amplification than T2, the higher layer. The same is true for the RSS data regarding TLT and TMT. Decreasing amplification with altitude seems a bit odd …

2. In both the global and tropical cases, amplification starts small. Then it rises to about double its starting value over about ten years. It then gradually decays over the rest of the record. The RSS and the UAH datasets differ mainly in the rate of this decay.

3. The 1998 El Nino is visible in every record at about 240 months from the starting date (January 1979).

In an effort to get a better handle on the issues, I examined the HadAT2 balloon record. Here, finally, I see crystal-clear evidence of tropical tropospheric amplification.

Steig’s Silence

Once upon a time, in the mists of time (Feb 2008), long before climate scientists had “moved on”, realclimate featured a post entitled Antarctica is Cold? Yeah, We Knew That, in which Spencer Weart, as noted by Pielke Jr, observed:

. . . we often hear people remarking that parts of Antarctica are getting colder, and indeed the ice pack in the Southern Ocean around Antarctica has actually been getting bigger. Doesn’t this contradict the calculations that greenhouse gases are warming the globe? Not at all, because a cold Antarctica is just what calculations predict… and have predicted for the past quarter century. . .

Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.

At AGU in December 2008, Eric Steig gave a preview of his January 2009 article. An RC commenter here reported on this preview as follows:

From http://blogs.nature.com/climatefeedback/2008/12/agu_2008_evidence_that_antarct.html: New research presented at the AGU today suggests that the entire Antarctic continent may have warmed significantly over the past 50 years. The study, led by Eric Steig of the University of Washington in Seattle and soon to be published in Nature, calls into question existing lines of evidence that show the region has mostly cooled over the past half-century.

To which RC coauthor Steig replied (and comments were promptly shut off):

[The claim that our result “calls into question existing lines of evidence that show the region has mostly cooled over the past half-century” is wrong though. Wait until the paper is published and I’ll say more. –eric]

Upon recent publication of Steig et al 2009, coauthor Mann stated (also noted by Pielke Jr):

“Contrarians have sometime grabbed on to this idea that the entire continent of Antarctica is cooling, so how could we be talking about global warming,” said study co-author Michael Mann, director of the Earth System Science Center at Penn State University. “Now we can say: no, it’s not true … It is not bucking the trend.”

Now reasonable people might well interpret that sort of statement as “calling into question existing lines of evidence that show the region has mostly cooled over the past half-century”. Clearly oracles or perhaps goose entrails are required for exegesis of these seemingly contradictory Delphic utterances. Pielke Jr has a little fun with the Team on this, observing:

So a warming Antarctica and a cooling Antarctica are both “consistent with” model projections of global warming.

This elicited a reply from Steig (who seems like a pleasant fellow who’s fallen in with a rough crowd over at RC):

I have to admit I cringed when guest writer Weart wrote the article on RealClimate, which I didn’t get a chance to read first. I’m not sure what models he was talking about that said Antarctica should be cooling. A review of the literature would show you (see e.g. Shindell and Schmidt in GRL) that models have been predicting warming.

Fair enough. But this raises the usual problem of the silence of the lambs.

If Steig cringed when he read the Weart article, surely he had an obligation to correct the record at RC. But if you now turn to the thread in question and search ‘Steig’, there is nothing until the final comment (mentioned above). At no point did Steig record his disagreement with the contents of the RC post. Nor did he record his disagreement when RC coauthors piled on to commenters who questioned the premises of the Weart post.

Or maybe Steig did write in expressing his disagreement and, like other critics, was censored by Gavin Schmidt. 🙂

Update (Jan 24 4.54 pm): While I was writing this post, Steig added another comment at Pielke Jr’s:

When I said that “I cringed” I don’t mean that I thought there was anything wrong with Spencer’s article. I meant that I thought he wasn’t clear enough that he was referring to the models show a slower warming in Antarctica than e.g. in the Arctic, which was and remains the correct assessment of what the model show. And I suspected that his article would be used in exactly the way Roger Piekle Jr. has used it; to give the impression that scientists are being careless and inconsistent. But as I said above, this is a red herring.

As for why I didn’t make this point at the time, I have a day job. I can’t spend all my time worrying about how blogs on RealClimate may get mis-used and misrepresented by others.

Roger replies in a new post here.

Steig observes that he has a “day job” and can’t worry about how RC gets “misrepresented by others”. But here he is, instantly contesting Roger’s amusement at the RC tangle, not just once but twice. He also had time to make an inline comment closing the past RC thread. So he has time to contest Roger’s supposed miscues, but not to say something at RC about an article that made him “cringe”. Too bad.

Antarctic RegEM


For discussion of the new study by Steig (Mann) et al 2009.

Data:
Data sets used in this study include the READER and AWS data from the British Antarctic Survey. SI Tables 1 and 2 provide listings. They leave out station identifications (a practice that “peer” reviewers, even at Nature, should condemn).

Matches to information at the BAS are complicated by Mannian spelling errors and pointless scribal variations (things like D_10 vs D10, which can be matched manually, but why are the variations there in the first place??)

Anyway, I’ve collated the station information and organized the station and AWS records into time series (downloaded today), archiving these versions at CA for reader reference. You can download them as follows:

download.file("http://data.climateaudit.org/data/antarctic_mann/Info.tab", "temp.dat", mode="wb"); load("temp.dat")
download.file("http://data.climateaudit.org/data/antarctic_mann/Data.tab", "temp.dat", mode="wb"); load("temp.dat")

If, for some absurd reason, you want to analyze them in Fortran or machine language or Excel, you can easily write these things out into ASCII files using R, and I’d urge you to learn enough R to do that.
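For instance, a minimal sketch along the following lines will dump a loaded object to tab-delimited ASCII; the object name “Data” is an assumption on my part, so check ls() after the load() to see what the file actually contains:

# Minimal sketch: write a loaded object out to tab-delimited ASCII.
# The object name "Data" is an assumption; ls() shows what load() created.
download.file("http://data.climateaudit.org/data/antarctic_mann/Data.tab",
              "temp.dat", mode="wb")
load("temp.dat")
ls()  # confirm the object name(s)
write.table(as.data.frame(Data), file="antarctic_data.txt",
            sep="\t", quote=FALSE)  # readable by Excel, Fortran, etc.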

There are references to thermal IR satellite data associated with Comiso. There is a citation to snail literature, but no digital citation. I’ve been unable to locate a monthly digital version of this data – maybe some readers can locate it.

I haven’t been able to locate any gridded output data from the Mannian RegEM analysis. For the PNAS article, Mann et al made a pretty decent effort to archive code and intermediates, but, for the present study, it’s back to the bad old days. No code, no intermediates, not even any gridded output that I can locate.

[Update Jan 23 – Steig says that data will be online next week.]

Station Counts
Here’s an interesting little plot from the collation of surface and AWS data. For some reason, there seems to be a sharp decline in AWS counts at the start of 2003 – from 35 at the end of 2002 to 9 at the start of 2003. It seems implausible that this is real, though I am not familiar with the data and perhaps it is. Maybe it’s an Antarctic version of GHCN not collecting non-MCDW station data after 1990?
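For what it’s worth, a hedged sketch of the count underlying the plot, assuming the AWS series load as columns of a monthly time-series matrix (the object name “aws” here is hypothetical):

# Hypothetical sketch of the AWS station-count calculation: count the
# non-missing columns in each month of a monthly ts matrix "aws".
count <- ts(rowSums(!is.na(aws)), start = tsp(aws)[1], frequency = 12)
plot(count, ylab = "AWS stations reporting", xlab = "")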

Refs:
Nature 457, 459-462 (22 January 2009) | doi:10.1038/nature07669; Received 14 January 2008; Accepted 1 December 2008.

Eric J. Steig, David P. Schneider, Scott D. Rutherford, Michael E. Mann, Josefino C. Comiso & Drew T. Shindell

Abstract:
http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html

Full text (pdf): http://thingsbreak.files.wordpress.com/2009/01/steigetalnature09.pdf

SI (pdf): nature07669-s1.pdf

Methods:
http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html#online-methods


More on Voodoo Correlations

Mann said:

Although 484 (~40%) pass the temperature screening process over the full (1850–1995) calibration interval, one would expect that no more than ~150 (13%) of the proxy series would pass the screening procedure described above by chance alone.

Reader DC said:

Of the 484 proxies passing the 1850-1995 significance test, 342 also passed both sub-period tests (with 341 having r values with matching sign). 111 passed only one of the sub-period tests, and 31 failed both sub-periods.

Let’s think about this a little in terms of statistics. If a “proxy” is a proxy, then it is a proxy regardless of the subperiod. It is not enough to have a “significant” relationship in the 1850-1995 period; it should also have a “significant” relationship in the 1850-1949 and 1896-1995 periods (Mann’s late-miss and early-miss periods).

DC remarked above, in effect, that nearly 30% of the 484 “passing” proxies failed this elementary precaution. I checked this calculation and can confirm it. This can be done as follows.

download.file("http://data.climateaudit.org/data/mann.2008/Basics.tab", "temp.dat", mode="wb"); load("temp.dat")
details <- Basics$details; passing <- Basics$passing
temp <- (passing$whole & passing$latem & passing$earlym); sum(temp)
# 342

342 out of 1209 is only 28% (as opposed to Mann’s stated 13% by chance). As observed in September, Mann’s chance benchmark is wrong because his pick-two daily keno method inflates the odds. [As a reader noted, Mann’s 13% is based on the 1850-1995 period, and the yield for passing 1850-1995, 1850-1949 and 1896-1995 would necessarily be lower. This goes the other way from pick-two daily keno. Autocorrelation is a third benchmarking issue, and it doesn’t look to me like Mann’s benchmarks adequately allow for observed autocorrelation.]

I don’t want readers to place any weight on any benchmarks right now other than indicatively, as today I want to look at a different issue: how the different proxy classes stand up to this undemanding test. Which proxy classes (ice cores, dendros, speleos, whatever) outperform random picking?

The “best” performers are the Luterbacher series – series which have no business whatever being in a “proxy” data set. 71 out of 71 Luterbacher series pass the above test. This is not much of an accomplishment, since Luterbacher uses instrumental data in his “proxies”. That instrumental data has a high correlation with instrumental data means precisely nothing. You’d think that someone in the climate science “community” would object to this, but seemingly not. The inclusion of these series obviously inflates the count. Without these absurd inclusions, we have 24% of the proxies passing elementary screening ((342-71)/(1209-71)).
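The arithmetic, for anyone checking along:

# Yield with the 71 Luterbacher series removed from both counts
(342 - 71)/(1209 - 71)  # [1] 0.2381 -> ~24%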

“Low-frequency” proxies make up 51 of the 1209 series. Of these 51 series, only 8 pass the above elementary screening (15.7%). One of these series (Socotra O18, which is non-incidental in M08 reconstructions, BTW) fails an additional undemanding test: that “significance” have a consistent sign. This leaves 7/51 (13.7%) as “significant”.

annual <- Basics$criteria$annual
c(sum(!annual), sum(temp & !annual))  # 51 8

Code 9000 dendro proxies make up 927 of the 1209 M08 proxies. Only 143 pass the above simple test (15.4%).

dendro <- (Basics$details$code == 9000)
c(sum(dendro), sum(temp & dendro))  # [1] 927 143

On the other hand, Briffa MXD proxies (code 7500) have a totally different response: 93 out of 105 (89%) pass M08 screening. This is such a phenomenal difference from run-of-the-mill dendro proxies that one’s eyebrows arch a little. Now these aren’t ordinary Briffa MXD proxies. These series were produced in part by Rutherford (Mann) et al 2005 performing RegEM on Briffa MXD data; then M08 truncated the Rutherford Mann MXD versions in 1960 because of the “divergence” problem and replaced actual data from 1960 to 1990 by infilled data, all prior to calculating the above correlation. I haven’t parsed every little turn of Mannian adjustments, but you will understand if I view the statistical performance of this data for now as a little suspect. None of this data is earlier than AD1400 in any event.
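Following the same pattern as the counts above, the MXD class can be tabulated as follows; whether the 93/105 figure reflects the triple-period test or the full-period test alone is not spelled out, so the use of “temp” below as the criterion is an assumption:

# Sketch for the Briffa MXD class (code 7500); using "temp" (the
# triple-period test) as the passing criterion is an assumption.
mxd <- (Basics$details$code == 7500)
c(sum(mxd), sum(temp & mxd))  # expect 105 and 93 per the text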

I’ll look at the other classes of data (only 55 series left) tomorrow.

realclimate and Disinformation on UHI

In a recent CNN interview discussed at RC here, Joe D’Aleo said:

Those global data sets are contaminated by the fact that two-thirds of the globe’s stations dropped out in 1990. Most of them rural and they performed no urban adjustment. And, Lou, you know, and the people in your studio know that if they live in the suburbs of New York City, it’s a lot colder in rural areas than in the city. Now we have more urban effect in those numbers reflecting — that show up in that enhanced or exaggerated warming in the global data set.

Gavin Schmidt excoriated this claim as follows:

D’Aleo is misdirecting through his teeth here. … he also knows that urban heat island effects are corrected for in the surface records, and he also knows that this doesn’t effect ocean temperatures, and that the station dropping out doesn’t affect the trends at all (you can do the same analysis with only stations that remained and it makes no difference). Pure disinformation.

Later in the comments (#167), an RC reader inquired about UHI adjustments, noting the lack of discussion of this point as follows:

#167/ In all of the above posts there is no mention of the urban heat island effect, nor of the effect of rural station drop out nor of the effect the GISS data manipulation has on surface temperature. Why is that?

To which Gavin replied:

[Response: Because each of these ‘issues’ are non-issues, simply brought up to make people like you think there is something wrong. The UHI effect is real enough, but it is corrected for – and in any case cannot effect ocean temperatures, retreating glaciers or phenological changes (all of which confirm significant warming). The station drop out ‘effect’ is just fake, and if you don’t like GISS, then use another analysis – it doesn’t matter. – gavin]

Neither CRU nor NOAA has archived any source code for their calculations, so it is impossible to know for sure exactly what they do. However, I am unaware of any published documents by either of these agencies indicating that they “correct” their temperature index for UHI effect (as Gavin claims here), and so I’m puzzled as to how Gavin expects D’Aleo to “know” that they carry out such corrections. As to GISS adjustments, as we’ve discussed here in the past (and I’ll review briefly), outside the US they have the odd situation where “negative UHI adjustments” are as common as “positive UHI adjustments”, raising serious questions about whether the method accomplishes anything at all, as opposed to simply being a Marvelous Toy.

Jones et al 2009: Studies Not "Independent"

One of the ongoing Team mantras has been that the Mann hockey stick has been supported by a “dozen independent studies”. Obviously, I’ve disputed the claim that these studies are “independent” in any non-cargo-cult use of the term. A new article by Jones and multiple coauthors (Holocene 2009) comments on this issue.

Rain in Maine Falls Mainly in the Seine

A blog article here reviews the “standing joke” of Mann’s stubbornness in refusing to correct the wrong locations of MBH98 in the recent Mann et al 2007 network, where, as I observed, the prior errors are perpetuated without apology, even though the incorrectness of the locations has long been known to Mann. In a routine google, I noticed that there are even “rain in Maine” T-shirts, though the vendors have, for some reason, used a Parisian scene.

NASA GISS Withdraws Access Blocking

Earlier today, I reported that I had been unable to access the NASA GISS server using R, even to get a tiny data set. (I haven’t been working on NASA GISS data and haven’t downloaded anything much from them for months.) The following simple script failed for me (and for Roman in New Brunswick, Canada), but, strangely enough, not for other R users. Very odd.

url <- "http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt"  # monthly global land-ocean index
working <- readLines(url)

Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") : cannot open: HTTP status was '403 Forbidden'

Annoyed by this, I sent the following email to NASA GISS employee Gavin Schmidt, who occasionally acts as a spokesman on GISTEMP matters, sending a copy to an eminent climate scientist who doesn’t necessarily agree with me on many things, but who disdains the undignified behavior that is all too prevalent in the field.

Dear Dr Schmidt,

My IP address has been blocked by NASA GISS from downloading GISS data using a computer script. This is undignified and petty behavior and I request that you take immediate steps to remove the block.

Yours truly, Stephen McIntyre

I received the following answer from Robert Schmunk of NASA GISS (who had been involved in a prior blocking of my access discussed here). I copy this email on the basis that it is official correspondence from a federal employee and not “personal” communication:

Stephen,

Please do not write to Dr. Schmidt on issues related to GISS website management as he has essentially nothing to do with that topic.

Although a few IP numbers have been barred from accessing the GISS website(s) in the last couple weeks, none of them should specifically be a machine that you might be using. (I say that based on the assumption that the blocked IP addresses are not, as best I can tell, Canadian.) However, I can only confirm this if you will inform me what IP number you might be using, or if you might be assigned dynamic IP numbers, then the domain name of your ISP.

I did configure one of our webservers yesterday to bar access by the user agent “R”, as we have recently had problems with two locations attempting massive data scrapes and who identified themselves with that user agent. As someone at one of those locations has since contacted me and discussed the matter, I have now lifted that block.

If you were using software which identified itself to our servers as user agent “R”, then you can try accessing them again now and see if you are able to get through. If not, then as I indicated above, please let me know what IP and/or ISP numbers you may be coming from.

rbs

The script works again.

While I’ve “scraped” their site in the past (because they refused to provide organized data), which led to the identification of their “Y2K” problem, I had done no such scrape in months.

Schmunk’s explanation doesn’t make sense as it stands. He says that he blocked access from R, but then why were some people able to access the site using R? He must have done something else as well.
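One possibility, offered here only as a hedged sketch: base R announces itself with the user agent string from getOption(“HTTPUserAgent”), which varies by version and build, so a block keyed on one literal agent string would catch some installations and not others. Setting the option explicitly would sidestep such a block (the agent value below is arbitrary):

# Sketch: base R sends the user agent from getOption("HTTPUserAgent"),
# which varies by build/version; setting it explicitly sidesteps a block
# keyed on the default string (the agent value below is arbitrary).
getOption("HTTPUserAgent")  # what this installation announces itself as
options(HTTPUserAgent = "Mozilla/5.0 (compatible; R-script)")
url <- "http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt"
working <- readLines(url)  # retry with the new agent string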