Antarctic RegEM


For discussion of the new study by Steig (Mann) et al 2009.

Data:
Data sets used in this study include the READER and AWS data from the British Antarctic Survey. SI Tables 1 and 2 provide listings. They leave out station identifications (a practice that “peer” reviewers, even at Nature, should condemn).

Matches to information at the BAS are complicated by Mannian spelling errors and pointless scribal variations (things like D_10 vs D10, which can be matched manually, but why are the variations there in the first place??)

Anyway, I’ve collated the station information and assembled the station and AWS records into organized time series (downloaded today) and archived these versions at CA for reader reference. You can download them as follows:

download.file("http://data.climateaudit.org/data/antarctic_mann/Info.tab", "temp.dat", mode = "wb"); load("temp.dat")
download.file("http://data.climateaudit.org/data/antarctic_mann/Data.tab", "temp.dat", mode = "wb"); load("temp.dat")

If for some absurd reason, you want to analyze them in Fortran or machine language or Excel, you can easily write these things out into ASCII files using R and I’d urge you to learn enough R to do that.
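
For example, assuming the load() calls above create objects named Info and Data (check the actual names with ls() after loading; treat this as a sketch, not gospel):

write.table(Info, "Info.txt", sep = "\t", quote = FALSE, row.names = FALSE)  # station info as tab-separated ASCII
write.table(Data, "Data.txt", sep = "\t", quote = FALSE, row.names = FALSE)  # time series as tab-separated ASCII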

There are references to thermal IR satellite data associated with Comiso. There is a citation to the snail (print) literature, but no digital citation. I’ve been unable to locate a monthly digital version of this data – maybe some readers can locate it.

I haven’t been able to locate any gridded output data from the Mannian RegEM analysis. For the PNAS article, Mann et al made a pretty decent effort to archive code and intermediates, but, for the present study, it’s back to the bad old days. No code, no intermediates, not even any gridded output that I can locate.

[Update Jan 23 – Steig says that data will be online next week.]

Station Counts
Here’s an interesting little plot from the collation of surface and AWS data. For some reason, there seems to be a sharp decline in AWS counts at the start of 2003 – from 35 at the end of 2002 to 9 at the start of 2003. It seems implausible that this is real, though I am not familiar with the data and perhaps it is. Maybe it’s an Antarctic version of GHCN not collecting non-MCDW station data after 1990?
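
If you want to check the count yourself, something along these lines should work, assuming the Data collation is a multivariate monthly time series with one column per station and NA for missing months (adjust to the actual structure):

count <- ts(rowSums(!is.na(Data)), start = start(Data), frequency = 12)  # stations reporting per month
plot(count, ylab = "stations reporting", xlab = "")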

Refs:
Nature 457, 459-462 (22 January 2009) | doi:10.1038/nature07669; Received 14 January 2008; Accepted 1 December 2008.

Eric J. Steig1, David P. Schneider2, Scott D. Rutherford3, Michael E. Mann4, Josefino C. Comiso5 & Drew T. Shindell6

Abstract:
http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html

Full text (pdf): http://thingsbreak.files.wordpress.com/2009/01/steigetalnature09.pdf

SI:
http://www.nature.com/nature/journal/v457/n7228/extref/nature07669-s1.pdf

Methods:
http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html#online-methods


174 Comments

  1. Steve McIntyre
    Posted Jan 21, 2009 at 8:02 PM | Permalink

    The “move” comments operation in the present WordPress configuration seems to delete, rather than move comments. I’ll experiment with one, but if it doesn’t work, I’ll leave comments where they are. Feel free to repost on this thread.

  2. Joel
    Posted Jan 21, 2009 at 8:37 PM | Permalink

    So Eric and his team discovered a calculation that allowed them to in-fill a factual data gap and they found a warming trend. How unlikely is that?

  3. Mark T
    Posted Jan 21, 2009 at 9:03 PM | Permalink

    Well, they also used satellite data, but last time I checked, the satellite data also indicated cooling for the last 25 years or so that it is available. We shall see. Any time Michael Mann is on the record talking about a “new statistical method,” warning bells go off in my head.

    Mark

  4. Posted Jan 21, 2009 at 9:29 PM | Permalink

    The following are the station history comments for Butler Island AWS WMO ID 89266 from the University of Wisconsin Antarctic Automated Weather Stations project. I assume this is GISS station ID Butler Island 700892660009. The temperature graph from GISS is available through http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=700892660009&data_set=0&num_neighbors=1

    The supplementary data of Steig et al 2009 (http://www.nature.com/nature/journal/v457/n7228/extref/nature07669-s1.pdf) indicate a warming trend of 0.45 C/decade for Butler Island – the highest of any site reported. I thought it would be interesting to see what the station history revealed (from http://amrc.ssec.wisc.edu/aws/butlerismain.html). Note that Butler Island sits on the east side of the Antarctic Peninsula. Google Earth shows a featureless rounded island surrounded by sea ice.

    Station history
    1983: This unit was deployed by the BAS but upon installation at Butler Island the unit did not operate. The unit was removed leaving the tower and other equipment in place on Butler Island and will be returned to Wisconsin for repairs.
    1986: AWS 8902 was tested and found to be functioning well. This unit was deployed 1 March 1986. The old station was located and found to be almost totally buried. The solar panel, aerovanes, and top tower section were returned to Rothera.
    1986-87: Deployed 01 Mar 86. On 01 Oct 86 wind direction failed for unknown reasons. On 19 Jul 87 station stopped transmitting for unknown reasons.
    1990: Wind speed and direction were intermittent after 2 May.
    1991: Pressure gauge ceased functioning 8 Dec.
    1992: Performance: 100%
    1993: Performance: Station not functioning after 3 November.
    1994: Performance: Station off until 15 February, and again 18 March-5 April. Wind system intermittent July, October-December.
    1997: Performance: Aerovane replaced 11 February. Pressure had to be corrected due to a failure of the precision time-based correction to the system clock. Aerovane “frozen” most of the time in May and August through November.
    1998: Performance: Aerovane not functioning from 10 September to 27 October. Pressure continues to need correction due to the failure of the precision time-based correction to the system clock.
    1999: Performance: Aerovane intermittently “frozen” in July. Pressure continues to need correction due to the failure of the precision time-based correction to the system clock.
    2000: Performance: Aerovane not functioning from mid-June through December. Pressure continues to need correction due to the failure of the precision time-based correction to the system clock.
    2001: The following work was done: “Moved solar panel and electronic boxes up so all above 120cm allowing accumulation for the next year.” (http://amrc.ssec.wisc.edu/aws/butlerismain.html)
    2003: Visited on 22/12/03. The mast was not raised but the old solar panel and charging box were removed. The new solar panel was mounted on the mast. The new battery box was placed at the bottom of the mast in a hole that just buried it on the western side of the mast. A flag was placed on top of the box so it could easily be located. The wind vane was replaced with a repaired one. New cables were connected and the AWS started up without any trouble.

    The GISS graph shows a break between 2003 and two new data points for 2007, 2008(?). There is data available for station 892660 for the intervening period (e.g. http://www.tutiempo.net/en/Climate/BUTLER_ISLAND/01-2005/892660.htm) but for some reason it is not shown on GISS.

    Given the station history I am surprised that Steig et al 2009 manage to define a trend at all, let alone a rising one of 0.45 degrees C/decade. If it were me I would have left this station out of the analysis altogether, as it appears far too unreliable. I wonder how many other stations are similarly affected? Did reviewers bother to examine station records at all?

  5. MartinGAtkins
    Posted Jan 21, 2009 at 11:05 PM | Permalink

    The “findings” have conveniently been announced just before the maximum melt at the Antarctic. The British Broadcasting Corporation has already run a story on it here.

    The BBC World News announced that the Antarctic “Is melting much faster than we thought”. They didn’t have time to delve into the archives and show ice shearing off an ice sheet but I’m sure they will. Science by press release. As far as I know no details of the way the study was conducted have been released and probably won’t be until the Arctic ice starts to reform. By then the public will only remember the words “The Antarctic ice is melting fast”. The politicians will of course treat the announcement as fact even when satellite data shows cooling and surface stations don’t cover anything like a broad area of the continent.

    By their own words “trends across the bulk of the continent have been much harder to discern, mainly because data from land stations is scarce.”

    snip

  6. Leon Palmer
    Posted Jan 21, 2009 at 11:16 PM | Permalink

    realclimate had a similar story on the Arctic

    http://www.realclimate.org/index.php/archives/2008/11/mind-the-gap/

    They extrapolate hotspots where there are no data using a “model” fit to where there is data. I believe this is called extrapolating beyond the end of the curve. Sounds like they’re using the same concept in the Antarctic now!

    • Raven
      Posted Jan 21, 2009 at 11:58 PM | Permalink

      Re: Leon Palmer (#7)
      Why do we need data to validate the models when we can use models to create the data used to validate the models….

  7. Robert
    Posted Jan 22, 2009 at 12:45 AM | Permalink

    “Some regions of Antarctica … have warmed … but others … have recorded a cooling trend. That ran counter to the forecasts of computer climate models, and global warming skeptics have pointed to Antarctica in questioning the reliability of the models. In the new study, scientists took into account satellite measurements to interpolate temperatures in the vast areas between the sparse weather stations.”

    Which one of these definitions of interpolate works best?

    in⋅ter⋅po⋅late  [in-tur-puh-leyt]
    1. to introduce (something additional or extraneous) between other things or parts; interject; interpose; intercalate.
    2. Mathematics. to insert, estimate, or find an intermediate term in (a sequence).
    3. to alter (a text) by the insertion of new matter, esp. deceptively or without authorization.
    4. to insert (new or spurious matter) in this manner.

    I like 3.

  8. hswiseman
    Posted Jan 22, 2009 at 12:51 AM | Permalink

    Antarctica has confounded the AGW theory until now, particularly as the well-mixed gas, low-water-vapor environment (much easier to warm dry air than moist air) and complete absence of UHI offer a perfect warming proving ground. So everyone must be breathing easier now, especially with the official nod from Dr. Mann.

    I guess they must have worked out a “completely buried in the snow and ice and sending random data” adjustment for Butler Island. More than a slight whiff of desperation here.

  9. Peter
    Posted Jan 22, 2009 at 2:34 AM | Permalink

    It seems to me that Real Climate went to some lengths to explain that the Antarctic was cold, and that they knew and expected it, and how it fit the AGW theory. Perhaps I could suggest they rename the post.

    Old name “The Antarctic is cold? Yeah, we knew that”

    New Name “The Antarctic is warming, Yeah we knew that too”

  10. Posted Jan 22, 2009 at 3:09 AM | Permalink

    Steve Mc:

    I’ve just installed the update to the “Move Comments” plugin and it now works correctly.

    Steve: Thanks!!

  11. Chris H
    Posted Jan 22, 2009 at 3:10 AM | Permalink

    @Peter
    Exactly my thoughts 🙂 although that seems to be Real Climate’s general modus operandi.

  12. Posted Jan 22, 2009 at 3:28 AM | Permalink

    0.45 degrees per decade? The temperature anomaly map for 1957-2008 indicates 0.25 degrees for the past 50 years. Not something I find particularly significant. Am I missing something?

  13. Peter
    Posted Jan 22, 2009 at 3:44 AM | Permalink

    I am noticing a correlation of my own. It seems that if observational data contradicts a given theory, a certain team member is brought in to “Mann-handle” the data until it falls into line. In Canada, we call this guy on a hockey team a role-player.

  14. braddles
    Posted Jan 22, 2009 at 4:00 AM | Permalink

    The reported warming in West Antarctica is still only 0.1 C per decade. It’s worth noting that West Antarctica is less than half the size of East Antarctica, and if the East is not warming then the whole must be warming very slowly indeed. So even with this heroic “interpolation”, the temperature trend in Antarctica is still a poor match for the models, which predict Antarctica will warm earlier and faster than the globe as a whole.

  15. Geoff Sherrington
    Posted Jan 22, 2009 at 4:13 AM | Permalink

    As already posted, Macquarie Island, lat 54 36 south, not quite the Antarctic, has shown no cooling or warming within error since 1968 (start of my data), though the max temp for the past 3 years ending Dec 08 is below the line.

    Logically, if you have evidence that a place is warming, you have to show why nearby places are cooling or staying even.

    Notice all the press releases about Macquarie Island?

  16. tty
    Posted Jan 22, 2009 at 5:01 AM | Permalink

    There is an interesting comment on how the satellite data were adjusted in the supplementary information:

    “Accuracy in the retrieval of ice sheet surface temperatures from satellite infrared data depends on successful cloud masking, which is challenging because of the low contrast in the albedo and emissivity of clouds and the surface. In Comiso (ref. 8), cloud masking was done by a combination of channel differencing and daily differencing, based on the change in observed radiances due to the movement of clouds. Here, we use an additional masking technique in which daily data that differ from the climatological mean by more than some threshold value are assumed to be cloud contaminated and are removed. We used a threshold value of 10°C, which produces the best validation statistics in the reconstruction procedure.”

    In other words they removed the lowest temperatures from the satellite data until they got a fit with the RegEM results. I also wonder how they derived the “climatological mean” since the problem is that there isn’t any climatological data for most of Antarctica.
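
    In code terms, the extra mask described in the SI amounts to something like the sketch below; tir and clim are hypothetical names for the daily satellite field and its climatological mean, since no code has been archived:

    threshold <- 10                        # deg C, the value said to give the best validation statistics
    cloudy <- abs(tir - clim) > threshold  # daily departures beyond the threshold
    tir[cloudy] <- NA                      # treated as cloud-contaminated and removed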

  17. tty
    Posted Jan 22, 2009 at 5:15 AM | Permalink

    Here is another little gem:

    “Independent data provide additional evidence that warming has been significant in West Antarctica. At Siple Station (76 S, 84 W) and Byrd Station (80 S, 120 W), short intervals of data from AWSs were spliced with 37-GHz (microwave) satellite observations, which are not affected by clouds, to obtain continuous records from 1979 to 1997 (ref. 13). The results show mean trends of 1.16 ± 0.8 °C per decade and 0.45 ± 1.3 °C per decade at Siple and Byrd, respectively (13). Our reconstruction yields 0.29 ± 0.26 °C per decade and 0.36 ± 0.37 °C per decade over the same interval. In our full 50-year reconstruction, the trends are significant, although smaller, at both Byrd (0.23 ± 0.09 °C per decade) and Siple (0.18 ± 0.06 °C per decade).”

    Notice that the uncertainties of the reconstructions are much smaller than in the actual data. Now, that’s what I call real modelling.

    • Ron Cram
      Posted Jan 24, 2009 at 9:13 AM | Permalink

      Re: tty (#19),

      Great post! That is uproariously funny!

  18. Posted Jan 22, 2009 at 6:07 AM | Permalink

    Not all of the Southern Hemisphere high latitudes are warming as we know. The ERSST.v3 and ERSST.v3b Southern Ocean SST anomalies have shown cooling since the 1990s–since the 1880s for that matter. (In the now obsolete ERSST.v3 data, the cooling started in the mid-1980s).

    Source:
    ftp://eclipse.ncdc.noaa.gov/pub/ersstv3b/pdo
    Specifically:
    ftp://eclipse.ncdc.noaa.gov/pub/ersstv3b/pdo/aravg.mon.ocean.90S.60S.asc
    The main page for the ERSST.v3b datasets:
    http://www.ncdc.noaa.gov/oa/climate/research/sst/ersstv3.php

  19. Gina Becker
    Posted Jan 22, 2009 at 6:12 AM | Permalink

    How does one reconcile, I wonder, this warming trend with the record high sea ice levels of 2007 in Antarctica?

  20. Brian Rookard
    Posted Jan 22, 2009 at 7:17 AM | Permalink

    In the RC blog comments on this story, a fellow by the name of Chip points out here that the trends are sensitive to the start date (pointing to articles in the literature). It appears that if the series is started prior to 1965, you will find warming, after that, cooling. The start date for Steig’s study – 1957. Does this make a difference?

  21. jae
    Posted Jan 22, 2009 at 7:52 AM | Permalink

    See Roger Pielke Sr’s comments, also.

  22. tallbloke
    Posted Jan 22, 2009 at 7:52 AM | Permalink

    This from George E. Smith on WUWT:

    Well I read that Paper by Professor Eric Steig of WU. Strangely, although I am a paid up member of AAAS, I was not able to log in and download that “embargoed” paper, so I had to get it from somebody with a top secret clearance.

    So I already e-mailed Prof Steig; and first I asked him, given that the West Antarctic is warming at over 0.1 deg C per decade, when does he predict it will reach the melting point and start the global flooding by raising the sea.

    He replied that he doesn’t make such predictions; but that it would be “many many decades before melting started.” My guess was 3000 years.

    So then I asked him how deep down in the ice the satellite measurements observe the temperature, and how deep in the ice his 0.1 deg C per decade propagates. He replied that the satellites only measure the very surface temperature; that ice is a very good insulator so the rise doesn’t go very deep. He said that the major heat source of that 6000 feet of ice is warmth from the earth underneath.

    In other words, a storm in a teacup. The Prof and his team used 25 years of satellite data, which can roughly cover the whole of Antarctica, and they used ground-based but coastal weather station sites dating from the IGY in 1957/58 to calibrate the satellite data; they then extrapolated the coastal measured data over the whole continent.
    East Antarctica is still cooling; so no problem there, but the West is warming more than the East is cooling, so net warm.

    Please note that cooling is bounded by 0K or -273.15 C, while warming has no known upper limit.

    Also note that EM radiation cooling from Antarctica goes as T^4, so a net increase overall, means that Antarctica increases its pitiful contribution to the cooling of planet earth.

    So let’s hear it for a warming Antarctica.

    By the Way Prof Steig was very polite, and forthright and sounds like an OK chap to me.

    But it still sounds to me like a report that somebody found that a sheet of toilet tissue now absorbs water faster and will sink a little sooner.

    Key point from the study’s author is that the warming is due to heat from the interior of the Earth – Man Made Global Warming not involved.

    George

  23. jae
    Posted Jan 22, 2009 at 8:53 AM | Permalink

    Trenberth doesn’t seem too enamored of the study. link.

  24. jae
    Posted Jan 22, 2009 at 8:55 AM | Permalink

    Wrong link, see here.

  25. George M
    Posted Jan 22, 2009 at 9:26 AM | Permalink

    Anyone working on getting a copy of the “statistical analysis” for examination by Steve?

  26. Erik Ramberg
    Posted Jan 22, 2009 at 10:06 AM | Permalink

    Keep it real, guys. I count 5 posts where accusations of scientific fraud are leveled at this study. There are plenty of avenues for legitimate criticism. Don’t go off into the wilderness.

    Steve: Blog policies prohibit attributions of motive or use of the word “fraud” or its cognates. I’d appreciate it if you’d assist in the administration of this policy by identifying the posts that breach this policy so that I can snip the offending portions.

  27. Johne M.
    Posted Jan 22, 2009 at 10:07 AM | Permalink

    I love (and expected) how they blow off the sea-ice trends as simply being due to natural forces. If a region gets colder, snowier, more icy etc., it’s natural (winds, La Niña etc.). If the opposite happens, it’s all on us…

  28. Gaudenz Mischol
    Posted Jan 22, 2009 at 10:27 AM | Permalink

    Any surprise when you see the author: Eric Steig. One of the team-members of realclimate together with Mann. So the result was highly predictable :-)))

  29. Peter Thompson
    Posted Jan 22, 2009 at 10:45 AM | Permalink

    “Modified principal component analysis technique” and RegEM. Deja vu all over again.

  30. Steve McIntyre
    Posted Jan 22, 2009 at 10:46 AM | Permalink

    I’ve posted up collations of the AWS and surface station data. See http://data.climateaudit.org/data/antarctic_mann and the addition to the head post.

  31. jae
    Posted Jan 22, 2009 at 10:53 AM | Permalink

    I have no idea how solid the study is, but the press release from Univ. Wash. rings an alarm bell:

    “The researchers devised a statistical technique that uses data from satellites and from Antarctic weather stations to make a new estimate of temperature trends.” (My bold; see WUWT)

  32. Steve McIntyre
    Posted Jan 22, 2009 at 10:59 AM | Permalink

    #31,32. In fairness, this is a far more sensible application of infilling methods than in paleoclimate. Here everything is at least a temperature of some kind. RegEM and PCA have a far better chance of yielding a sensible result in this sort of application than using Graybill bristlecone ring widths.

    Data in the Antarctic is sparse. It’s odd that it wouldn’t have warmed up with the rest of the world. Readers shouldn’t drop standards of data rigor merely because they like Antarctic data that seems to go down. It’s quite reasonable to cross-examine the Antarctic data to see if a different interpretation of it is possible.

  33. Peter Thompson
    Posted Jan 22, 2009 at 11:09 AM | Permalink

    Steve, is everything really a temperature? I’m not so sure. To me, this would be akin to infilling GISTemp gaps with UAH or RSS data. In order to do this, doesn’t one have to calibrate the satellite to the instrumental (thermometer) record? If this process could be done once, with one set of station data, and shown to give the same match for the other 40-50 stations, then fine. I’m guessing no such thing occurred, necessitating some novel processing of the data, so a specialist was summoned. This must introduce error, and make claims measured in tenths of degrees questionable.

    • Phil.
      Posted Jan 22, 2009 at 11:38 AM | Permalink

      Re: Peter Thompson (#36),

      The satellite measurements are not MSU measurements, which don’t work over the Antarctic (see RSS); they are surface temperature measurements by IR, as opposed to weather station data which are measured above the surface: “the infrared data are strictly a measure of clear-sky temperature (ref. 8) and because surface temperature differs from air temperature 2–3 m above the surface, as measured at occupied stations or at AWSs.”

  34. Steve McIntyre
    Posted Jan 22, 2009 at 11:10 AM | Permalink

    Can anyone figure out what grid was used in this article? I’ve read the text and SI and didn’t notice anything, but I might have missed it.

  35. Steve McIntyre
    Posted Jan 22, 2009 at 11:11 AM | Permalink

    #36. Sure. And I’m not endorsing this thing – I don’t know what they did exactly. All I’m saying is that going from satellite to nearly co-located surface temperatures is a lot less hairy than going from bristlecone pine widths to world temperature.

  36. Vernon
    Posted Jan 22, 2009 at 11:38 AM | Permalink

    Is my reading of the article at RC correct? I asked over there and am waiting for an answer but wanted to get some input from the other side of the argument.

    Namely:

    -from the 1940’s to present there is a cooling trend;
    -and from the 1970’s to present there is a cooling trend;
    -but from the 1960’s to present there is a warming trend.

    Therefore, we can conclude that there was cooling between the 1940’s and the 1970’s, since those periods are warmer than the 1960’s starting point of the trend. How is this different from the ‘deniers’ picking the beginning point to draw a specific conclusion, since starting points both before and after the one this study used show a cooling trend?

  37. Posted Jan 22, 2009 at 12:26 PM | Permalink

    I’m a beginner at looking at this data so was wondering if someone could tell me what the r, RE, CE and trend columns stand for in the tables? Thanks.
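
    For anyone else wondering: these are the standard reconstruction verification statistics. r is the ordinary correlation between reconstruction and observations; RE and CE compare the squared reconstruction error against what you’d get from simply predicting the calibration-period mean and the verification-period mean, respectively. A sketch of the usual definitions in R (obs and est are hypothetical verification-period observed and estimated series; cal.mean and ver.mean the calibration- and verification-period means of the observations):

    r  <- cor(obs, est)
    RE <- 1 - sum((obs - est)^2) / sum((obs - cal.mean)^2)
    CE <- 1 - sum((obs - est)^2) / sum((obs - ver.mean)^2)

    Positive RE/CE means the reconstruction beats that naive mean forecast; CE is the harder test.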

  38. Paul Penrose
    Posted Jan 22, 2009 at 12:35 PM | Permalink

    So, once again here we have non-statisticians inventing novel statistical techniques to model historical temperature trends and then reporting the results in tenths of a degree per decade without any confidence intervals. Color me unimpressed.

  39. Horace
    Posted Jan 22, 2009 at 12:48 PM | Permalink

    “Starting today,” Mr. Obama said, “every agency and department should know that this administration stands on the side not of those who seek to withhold information, but those who seek to make it known.”

    Wonder if this applies to data and methods as well . . .

  40. mugwump
    Posted Jan 22, 2009 at 12:51 PM | Permalink

    Anyone know where I can get the article? I object to paying $32 to read the results of a study paid for with my taxes…

  41. DG
    Posted Jan 22, 2009 at 12:55 PM | Permalink

    How will modelers reconcile this new discovery when, according to Spencer Weart, the climate models correctly predicted that Antarctica should cool? No problem, time to move on…

    Antarctica is Cold? Yeah, We Knew That
    http://www.realclimate.org/index.php/archives/2008/02/antarctica-is-cold/

    Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.

    • mugwump
      Posted Jan 22, 2009 at 1:09 PM | Permalink

      Re: DG (#45), I just made the same point (with the same quote) over at realclimate. No doubt it will be moderated into oblivion.

  42. AJ Abrams
    Posted Jan 22, 2009 at 1:10 PM | Permalink

    I formally request Steve to look into the statistical method used here. This is an audit site, and here we have a great example of something that can be tested to see if it’s relevant. I think a good starting point would be the use of component vectors, and finding out, yet again, if the level of uncertainty is greater than the claimed trend (if it is, then the whole thing is toilet paper and yet another exercise in what-ifs).

    I’m pretty sure this is fluff – and once it’s proven fluff, I think it should be examined from a “why” standpoint.

    I think I have a handle on the why. The wording used in the summary is rather telling when combined with Hansen’s comments about the paper.

    In summary, I think this is why they did this and where the language is going to go: they aren’t saying this ridiculously small warming trend that ended some time ago is caused by global warming. In the summary it’s clear that this omission is deliberate. It’s my opinion that they did this because they understood that making the attribution would contradict previous declarations that cooling is predicted and expected. The whole paper was meant as a media piece. They seemed to understand that the media would link it to global warming and see it as a debunking of the skeptics’ major claim without actually looking at the work.

    Pure PR magic. They get to debunk a claim by skeptics while at the same time not contradicting themselves. There is no way this is a winnable situation for critics of AGW. If we show that it’s wrong, then we are back as we were before, but the mainstream media won’t report that. They gain 1,000,000 AGW robots saying that the cooling Antarctica was debunked… while specifically never saying it.

  43. mugwump
    Posted Jan 22, 2009 at 1:35 PM | Permalink

    As I understand it, they’ve got significant warming on the coast of the Antarctic Peninsula, mainly because there’s less sea ice around than there used to be. The coast is also where they have an instrumental record. They’ve somehow managed to extrapolate that to the rest of the peninsula and the rest of Antarctica. Given Mann’s involvement, and the lack of any rigorous statistician’s involvement, my BS O’ Meter is reading off the scale. But until I get the paper, probably best not to jump to any conclusions.

    • Phil.
      Posted Jan 22, 2009 at 4:10 PM | Permalink

      Re: mugwump (#48),

      mugwump:
      January 22nd, 2009 at 1:35 pm
      As I understand it, they’ve got significant warming on the coast of the Antarctic Peninsula, mainly because there’s less sea ice around than there used to be. The coast is also where they have an instrumental record. They’ve somehow managed to extrapolate that to the rest of the peninsula and the rest of Antarctica.

      Perhaps you should wait to read the paper before making such comments because what you describe is exactly what they didn’t do!

      Given Mann’s involvement, and the lack of any rigorous statistician’s involvement, my BS O’ Meter is reading off the scale. But until I get the paper, probably best not to jump to any conclusions.

      Indeed and get your meter recalibrated.

  44. Leon Palmer
    Posted Jan 22, 2009 at 1:58 PM | Permalink

    I looked at the images for the Arctic extrapolation to fill in missing data at

    http://www.realclimate.org/index.php/archives/2008/11/mind-the-gap/

    in the two figures linked there, twotemps.pdf and rc-gap.pdf.

    In the first (twotemps.pdf), note the green data north of Alaska is replaced by hot in the NCEP reanalysis. And it’s claimed that this is a good fit!

    In the second (rc-gap.pdf), note how sparse the HadCRU data is, but still NCEP produces a massive hotspot over the North Pole.

    Extrapolation is a wonderful thing (if it’s cold to my left, and I’m warm, it must be hot to my right!). I suspect the same thing happening in this antarctic analysis. However the claim is that the extrapolation is done with a physics based model so it must be right.

    What should have been done is to assume a temperature for the North Pole and see how well the NCEP model fits (e.g., assume a cold, then normal, then warm North Pole); if all 3 fit the real data equally well, then there is no predictive capability in the analysis and it’s bogus.

  45. Posted Jan 22, 2009 at 2:07 PM | Permalink

    But until I get the paper, probably best not to jump to any conclusions.

    But jumping to conclusions is exactly what the authors want. They give a glimpse of their findings to the media so that they’ll jump to their AGW conclusions, and since the paper hasn’t been released there are no facts to prevent them from making those leaps.

    • Jeff Alberts
      Posted Jan 22, 2009 at 2:39 PM | Permalink

      Re: Eric J D (#50),

      It’s like those “studies” you hear on the news all the time: “Chocolate is good for you!” or “Alcohol is healthy!” The studies usually say no such thing.

  46. Posted Jan 22, 2009 at 2:14 PM | Permalink

    I posted the following comment on Real Climate, Professor Steig was kind enough to answer! It seems that AWS data are largely irrelevant to the results!

    Comment: I’d be interested in your comments on the quality of the AWS records. Take for example the station at Butler Island, which you report has a warming trend of 0.45 degrees C per decade. GISS data shows net cooling over the record period (WTF!). But given the paucity of data points any trend appears optimistic. If you examine the station history through the University of Wisconsin website there appears to be a long record of malfunctions and changes in site, including changes in elevation of over 100m. The station appears to have been buried by snow a number of times. Station Elaine on the Ross Ice Shelf is similarly affected.
    How do the problems with the AWS network affect your results? How much time was spent vetting the reliability of the AWS records? Did you account for this in your analysis? Given their unreliability I am wondering why you bothered using AWS at all.

    [Response: The AWS records are useful because they provide a totally independent estimate of the spatial covariance pattern in the temperature field (which we primarily get from satellites). I agree with you that in general the data quality at many of the AWS sites is suspect. However, the trends in our results (when we use the AWS) don’t depend significantly on trends in the AWS data (in fact, the result changes little if you detrend all the AWS data before doing the analysis). –eric]

  47. Gary
    Posted Jan 22, 2009 at 2:32 PM | Permalink

    I am not clear as to which satellite method they used for determining surface temperature. But you may be interested to see NASA’s response to my question about the ability to determine surface temperatures by satellite.

    Your Science Question:
    ———————————————————————-

    1) How are “surface temperatures” determined for forests and other areas where the land is covered?
    2) What is the accuracy of land surface measurements (+/- degrees C)
    3) What is the effect of cloud cover on accuracy?
    Thank you, Gary

    Our Response:
    ———————————————————————-

    Thank you for your interest in the AIRS products.

    1) Surface Skin Temperature is the specific AIRS product. It is
    determined by the combined retrieval algorithm which determines the
    cloud-cleared radiance (brightness temperature) and the surface
    emissivity. Dividing the first by the second yields the physical skin
    temperature, which may be ground (if bare surface), ocean skin
    temperature (not to be confused with bulk temperature), or forest
    canopy skin temperature.

    2) Land surface temperature is problematical, since the emissivity of
    bare earth will vary greatly over the 50 km diameter spot in which our
    retrieval is made. Our estimated uncertainty at present is 2->3 K.

    3) We have found no correlation with fraction of cloud cover, beyond
    our retrieval yield dropping when it reaches about 80%. Low stratus
    clouds are problematical, as we cannot discriminate between a field
    covered 100% by low stratus and a clear field. The temperature of the
    cloud tops of low stratus is close to that which would be encountered
    on the surface.

    Please check the documentation describing AIRS products at
    http://disc.gsfc.nasa.gov/AIRS/documentation.shtml

    • Geoff Sherrington
      Posted Jan 23, 2009 at 5:20 PM | Permalink

      Re: Gary (#52),

      It being traditional to measure daily maximum and minimum temperatures, one can imagine maximums are possible when this method signals that there is a window in the cloud and the land surface can be seen. However, assuming cloudiness causes cooling (and I’m not sure about this in the Antarctic), the minimum temperature would not be easily recordable at all, because the surface would be under cloud.

      Also, the mere fact of being able to see the surface does not necessarily say much about the surface temperature, because winds coming from cloud-covered areas nearby may dominate the local temperature.

      Is this reasonably correct?

  48. Cbone
    Posted Jan 22, 2009 at 2:42 PM | Permalink

    So here is Mann’s response to the question about how AGW theory can predict both warming and cooling in Antarctica. He does admit to cherry picking, in a roundabout way when he mentions that the trend depends on “the time interval and season one looks at.” Either way, it is a very long non answer to a fairly simple question.

    [Response:Why do the critics think that everything is so simple and binary, for example that we can lump all anthropogenic forcings into a simple “AGW” forcing. Guess what, its not that simple. There are multiple anthropogenic forcings that have quite different impacts (e.g. anthropogenic greenhouse gas increases, aerosols, land-use changes and, yes, stratospheric ozone depletion). Anyone who follows the science is of course aware of this. The temperature trends in Antarctica depend on the time interval and season one looks at, because certain forcings, such as ozone depletion, are particularly important over restricted past time intervals and during particular seasons. The interval over which we expect cooling of the interior is when ozone depletion was accelerating (1960s through late 20th century) and this is precisely when we reproduce the cooling trend both in the reconstruction (primarily during the Austral fall season) and the model simulation experiments discussed in the paper. Over the longer-term, and in the annual mean, greenhouse warming wins out over the more temporary and seasonally-specific impacts of ozone depletion in our simulations, and apparently in the real world. Do you really think that all of the authors and reviewers would have overlooked a basic internal contradiction of logic of the sort you imply, if it actually existed? This is all discussed in detail in the paper. Why not go to your local library and read it and perhaps learn something? -mike]

  49. Pearland Aggie
    Posted Jan 22, 2009 at 3:04 PM | Permalink

    Most of the perceived/measured/calculated temperature deviations are against a baseline period where the zero line is established. Quite often one can find what baseline data period was used, but I don’t know that I’ve ever seen a good rationale as to why that period was chosen. It seems to me that the apparent size of a deviation has a lot to do with which baseline period is chosen. Does anyone have a link to a good source that explains how the baseline period is chosen?

    • mugwump
      Posted Jan 22, 2009 at 3:14 PM | Permalink

      Re: Pearland Aggie (#56), at least insofar as trend calculation is concerned, the baseline is irrelevant (derivative of a constant is zero).
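
      (A quick demonstration in R with a made-up series: subtracting any constant baseline leaves the fitted slope untouched.)

      set.seed(1)
      x <- 1:100; y <- 0.02 * x + rnorm(100)
      coef(lm(y ~ x))[2]              # slope of the raw series
      coef(lm((y - mean(y)) ~ x))[2]  # identical slope after re-baselining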

  50. Pearland Aggie
    Posted Jan 22, 2009 at 3:22 PM | Permalink

    57. I was referring to the deviation from “normal”, where “normal” is typically defined as a baseline or average of a series of numbers over a time period. I wasn’t necessarily talking about the trend calculation.

    • Dave Dardinger
      Posted Jan 22, 2009 at 3:38 PM | Permalink

      Re: Pearland Aggie (#58),

      The couple of things I’ve seen here concerning the baseline are that it’s taken as the mean of some reference (perhaps arbitrary) time-period. It’s also been taken to be a 30-year period so that it represents climate rather than weather. Since the first multiproxy reconstructions were done in the mid-1990s, this would make them use a period starting in the mid-1960s. But all this is only of historical interest since the absolute value of the deviation is of no value or significance. Relative values or the trend are all that matter.

      • Dave Dardinger
        Posted Jan 22, 2009 at 3:47 PM | Permalink

        Re: Dave Dardinger (#59),

        BTW, Pearland and other newcomers, you can just click the “reply and paste link” below the message number you’re responding to and it will give you the start of the message in one easy click. If you want to reference another message in the thread later on, you can go to this message and do the same. If you want to reference messages in another thread you could open a second window, find the message in the other thread, click the desired link and then cut and paste it into the thread you’re composing your message in. Here’s an example from another thread:

        Re: Dave Dardinger (#5),

        I hope it will work as advertised

        • Pearland Aggie
          Posted Jan 22, 2009 at 3:49 PM | Permalink

          Re: Dave Dardinger (#64), Thanks for the help! Sorry for the newb mistake! 🙂

        • Dave Dardinger
          Posted Jan 22, 2009 at 3:59 PM | Permalink

          Re: Dave Dardinger (#64),

          No, I guess you have to cut and paste the actual URL from the message in another thread. I seem to remember there being a workaround though.

  51. mugwump
    Posted Jan 22, 2009 at 3:42 PM | Permalink

    I don’t think the baseline is intended to represent “normality”.

  52. Pearland Aggie
    Posted Jan 22, 2009 at 3:44 PM | Permalink

    59. I understand what you’re saying, but it seems to me that anomalies are calculated in reference to a specific baseline, and if the baseline, for some reason, does not adequately represent the mean of the data, then the anomalies can appear more or less severe than they really are. It seems to me that the choice of a baseline period is critical in how the data is presented….that’s all I was wondering about. Thanks for all the responses!

  53. Pearland Aggie
    Posted Jan 22, 2009 at 3:45 PM | Permalink

    60. Maybe “normal” was the wrong word to use, but the baseline does represent zero anomaly. Maybe I should have said “zero”.

  54. Peter D. Tillman
    Posted Jan 22, 2009 at 3:46 PM | Permalink

    Here’s the link to the abstract (which I didn’t see upthread):

    Nature 457, 459-462 (22 January 2009) | doi:10.1038/nature07669; Received 14 January 2008; Accepted 1 December 2008.

    Eric J. Steig1, David P. Schneider2, Scott D. Rutherford3, Michael E. Mann4, Josefino C. Comiso5 & Drew T. Shindell6
    http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html

    Full text for subscribers only. Anyone located a free copy online?

    Cheers, Pete Tillman

    • Pearland Aggie
      Posted Jan 22, 2009 at 3:57 PM | Permalink

      Re: Peter D. Tillman (#63),

      Here is the link for the Supplemental Information:

      Here is the link to the Methods:

  55. Pearland Aggie
    Posted Jan 22, 2009 at 3:58 PM | Permalink

    Another newb mistake…sorry.

    http://www.nature.com/nature/journal/v457/n7228/extref/nature07669-s1.pdf

    http://www.nature.com/nature/journal/v457/n7228/full/nature07669.html#online-methods

  56. Steve McIntyre
    Posted Jan 22, 2009 at 4:05 PM | Permalink

    Has anyone been able to locate a digital version of the satellite thermal IR data used for statistical analysis?

  57. Steve H.
    Posted Jan 22, 2009 at 4:34 PM | Permalink

    Anyone know anything about this

    http://blog.lib.umn.edu/scholcom/accessdenied/2007_11.html

    Why would a U.S. Senator want to deny public access to publicly funded research?
    Most research funded by large federal agencies like the NIH is currently published in very expensive commercial journals. Reed Elsevier, the world’s largest commercial science publisher, has for years been proud to earn profits in the 30-40 percent range for their investors – a profit funded by extremely inflated pricing practices, with libraries sometimes paying tens of thousands of dollars per year for one journal title. Open access advocates maintain that this research is funded by taxpayers and the results should be freely available in open access venues like PubMed.

    Recently, the U.S. Senate passed a 2008 Department of Health and Human Services appropriations bill including a provision mandating that NIH research, which is funded by taxpayers, be made freely available to those taxpayers within 12 months of publication. Senator James M. Inhofe (R-OK) attempted to insert an amendment to delete this provision. Why would Senator Inhofe wish to deny public access to scientific research funded by taxpayer dollars?

    It could be related to the fact that Reed Elsevier is one of the top donors to Senator Inhofe (according to data from the Center for Responsive Politics). It could also be related to the Senator’s strident stance that global climate change is a hoax perpetrated by the media and the scientific establishment. After all, if the public does not have access to the enormous amount of scientific evidence demonstrating the reality of climate change, they will more readily accept his claim that global warming is “the most media-hyped environmental issue of all time”, perpetuated by “climate alarmists”.

    For more information about the 2008 Department of Health and Human Services appropriations bill (subsequently vetoed by President Bush), read Peter Suber’s Nov. 1 issue of OA News.


    Steve: This blog is 1000% in favor of archiving of data. Perhaps the commenter would write to Lonnie Thompson and ask him to archive the Dunde, Guliya, Dasuopu etc. samples taken as long ago as the 1980s and 1990s.

    Perhaps the commenter will write to Mann et al and ask them to archive the results from the study that is the topic of the present post. Perhaps he will join with me in my FOI requests to Santer.

    I have left this comment up despite its adverse tone and its violation of blog policies on politics, but I do not wish readers to respond or debate this issue here. Please discuss politics elsewhere.

  58. Clark
    Posted Jan 22, 2009 at 5:30 PM | Permalink

    #71 That’s old and out of date. Beginning in 2006, the NIH required that all publications deriving from their support be deposited in PubMed Central.

    For-profit journals (which are the vast majority in most disciplines) are not nefarious actors. They have a huge infrastructure to take in, referee, copy-edit and figure-format thousands upon thousands of manuscripts a year. In years past, publication costs were ultimately borne by library and individual subscriptions. The collapse of hard-copy subscriptions, and the forced free availability of papers, means the journals will almost certainly move to “page charges” for researchers submitting papers (many do this already). It’s not the end of the world, but it does mean that if Steve McIntyre wants to publish an audit of the latest Mannomatic, he’s got to come up with $3000 or so for the privilege.

    • mugwump
      Posted Jan 22, 2009 at 6:25 PM | Permalink

      Re: Clark (#72),

      For-profit journals (which are the vast majority in most disciplines) are not nefarious actors.

      I disagree. Most of the work at most journals is unpaid volunteer work. Certainly in physics, maths, computer science, engineering and statistics, the authors submit fully formatted manuscripts in LaTeX, so the journal does no copy-editing. Unpaid (academic) sub-editors and editors manage the allocation of reviewers, collecting responses, mediating with the authors etc. More unpaid academics do the reviewing. In many cases the journal does little more than print the hardcopy, for which they charge university libraries exorbitant subscription fees.

      I know. I used to edit a journal and sat on the library subcommittee for my department. Admittedly, a few journals like Nature obviously have much bigger marketing budgets, but I’d be very surprised if the majority of the real work is not carried out by volunteers.

      In the age of the internet print journals are well past their use-by date. They only persist because they are great franchises for their publisher-owners.

  59. anonymous
    Posted Jan 22, 2009 at 5:41 PM | Permalink

    And of course the 30-40% profit rate for investors is also ridiculous. Between 2003 and 2007 the yield was between 2.5% and 2.8%, with pre-tax profit being around 15% of turnover (which is also roughly similar as a proportion of market cap). Steady, but hardly eye-popping.

  60. John
    Posted Jan 22, 2009 at 6:08 PM | Permalink

    Steve,

    One thing that would actually be useful is a short description of Mr. Mann’s main pet theories, followed by a very brief description of every single voodoo and hocus-pocus and data-massage trick he perpetrated. I agree it might be a long laundry list, but I think it would be a great thing to have. In this, I include some of his less scientific statements made while wearing the hat of scientist, such as expressing political leanings when called as an expert witness in court.

    Each entry should refer to a footnote containing, possibly, a quote of the pertinent text and, mandatorily, a literature or event reference substantiating the claim.

    That would be a very usable and useful document to have.

  61. Vernon
    Posted Jan 22, 2009 at 7:08 PM | Permalink

    Just wondering, having read what one of the authors of the study posted on RC: why is no one else pointing out that the study spreads 11 years of warming over the entire 50-year period even though it is well documented that 39 years were cooling? Per RC, 11 years is weather and 30 years is climate. Why does no one else call them on this?

    • mugwump
      Posted Jan 22, 2009 at 8:57 PM | Permalink

      Re: Vernon (#76), you can try calling them on it. You’ll most likely be moderated into oblivion. If not, you’ll just be accused of being a troll/septic/denialist/etc.

  62. Brian Rookard
    Posted Jan 22, 2009 at 9:40 PM | Permalink

    A copy of the Nature article can be found here.

  63. davidc
    Posted Jan 22, 2009 at 10:49 PM | Permalink

    From the article: “We assess reconstruction skill using reduction-of-error (RE) and coefficient-of-efficiency (CE) scores as well as conventional correlation (r) scores.” Still not quite there. The conventional correlation measure is r^2, which gives the proportion of total variance explained by the correlation. In Figure 1 they give r=0.46, so r^2 ≈ 0.21 and about 21% of total variance is explained in the reconstruction.
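
    (A quick check in R that the variance share is the squared correlation: for a simple regression, R-squared equals r^2 exactly. Made-up series for illustration.)

    set.seed(2)
    obs <- rnorm(50); recon <- 0.5 * obs + rnorm(50)
    cor(obs, recon)^2                    # squared correlation
    summary(lm(obs ~ recon))$r.squared   # the same number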

    • Craig Loehle
      Posted Jan 23, 2009 at 7:58 AM | Permalink

      Re: davidc (#79), Gee, I always learned that if your R^2 was less than 0.5 (or even higher) you conclude that you don’t know much (cue song “don’t know much about history…”)

  64. davidc
    Posted Jan 22, 2009 at 11:10 PM | Permalink

    In Fig 2b the data before 1970 are mostly below the regression line. This is before the satellite data were available, so how was the method described applied over this time period? If these data are removed it seems that the (already weak) positive correlation would go.

  65. VG
    Posted Jan 22, 2009 at 11:23 PM | Permalink

    Re: SH recently invented warming: You may want to record this figure for posterity… in case it’s changed soon.
    http://www.nsidc.org/data/seaice_index/s_plot.html
    Also this one, which has been changed twice already (downwards of course).

    Unfortunately the SH ice just keeps coming back…

  66. CJA
    Posted Jan 22, 2009 at 11:27 PM | Permalink

    Anyone interested in seeing Mike Mann’s censorship skills at their subtlest, check comments 78-79 of their current discussion. Mike’s response to post 78 originally ended with “I am sure . . . University has a science library” (as he clearly obtained the name of the university from my ISP).

    10-15 minutes later he removed this (before I decided to point out it was wrong of him to give my personal information away for the sake of a petty jab). In my second comment (79), Mike edited my response to “Thank You [edit]…” The edited line was, as written by me: “Thank you for removing the unnecessary reference to my current location from your response (where for the record I am currently visiting); I’m sure their science library is on par with the one at PSU.”

    I was admittedly as much on his side as against, and (not being a climate scientist) I was interested in learning their statistical method. But he’s a little bit touchy, isn’t he?

    • James Lane
      Posted Jan 22, 2009 at 11:59 PM | Permalink

      Re: CJA (#82),

      In Australia, we call this “playing the man, not the ball” (obvious pun avoided).

  67. Steve McIntyre
    Posted Jan 22, 2009 at 11:46 PM | Permalink

    #82. Did you include a university identification on your sign-on? Or did he root into the server logs for IP? He’s done that before and been called out on it. It’s interesting what Mann regards as “statistical” literature. Tikhonov regularization and ridge regression are isomorphic in a sense, but the motivations and handling seem different to me.
    There’s obviously an overlap between statistics and applied math and lots of algebra is common, but I’d be surprised if Roman or UC or Jean S regarded Tikhonov, Golub,… as the most relevant “statistical” references.

    • Mark T
      Posted Jan 23, 2009 at 12:00 AM | Permalink

      Re: Steve McIntyre (#83),

      There’s obviously an overlap between statistics and applied math and lots of algebra is common, but I’d be surprised if Roman or UC or Jean S regarded Tikhonov, Golub,… as the most relevant “statistical” references.

      Tikhonov is taught to signal processing folks. In fact, I’m pretty sure the regularization method mentioned in Haykin’s “Adaptive Filter Theory” text is Tikhonov’s. I see Golub’s name pop up, too. I don’t know if it would be relevant to anything you’re talking about here however.

      Mark

      Steve: I’ve got a lot of Golub, Fierro papers and they’re interesting, and the algebra is connected to principal components and other multivariate techniques. But the approach is different than (say) Jolliffe or Brown/Sundberg. I doubt that either Golub or Fierro would think of themselves as “statistics” guys.

      • Mark T
        Posted Jan 23, 2009 at 1:11 AM | Permalink

        Re: Mark T (#86),

        I doubt that either Golub or Fierro would think of themselves as “statistics” guys.

        No, actually, I believe Golub’s SVD work was really the extent of it (maybe some related work), and that was more of an algorithm for doing the decomposition efficiently. That’s why signal processing folks get exposed to Golub. Not many people in here care about systolic arrays, either, but they are an incredibly efficient means for doing a lot of PCA-like work (Gram-Schmidt, for example). And that, btw, is how I got interested in component analysis – I started with the Gram-Schmidt Cascaded Canceler by Karl Gerlach for use in a radar application. The rest is, as they say, history. There’s probably a “Connections” episode in there somewhere.

        Mark

  68. Mark T
    Posted Jan 22, 2009 at 11:51 PM | Permalink

    He has done this in the past. Someone posted from a Shell Oil account once. Mike did the honorable thing and chose to use that as a means to discredit the poster (you work at Shell Oil, you are obviously part of the plan). His ethical standards are, well, not very high.

    Mark

  69. Steve McIntyre
    Posted Jan 23, 2009 at 12:10 AM | Permalink

    Point made about RC. No more piling on please.

  70. Mark T
    Posted Jan 23, 2009 at 1:12 AM | Permalink

    Uh, not that I’m attributing the systolic array to Golub, just that often you start with a cool algebraic implementation that leads you into some cool statistical implementation.

    Mark

  71. Sekerob
    Posted Jan 23, 2009 at 3:12 AM | Permalink

    For all the hubba hubba, the SIA of the Antarctic is virtually back on the 1979-2000 mean, so who reads the record ice of 2007 as significant? It was fripping predicted that we’d see contradictory ice growth, just as increased snow over the Antarctic has mitigated SLR… but now the time has approached when that effect is being overwhelmed by GW too.

    Cryosphere Today data in km²:
    Jan 16 2008: 3,491,000
    1979-2000 mean: 3,304,000
    Difference: 187,000

    As for the present ice concentration imagery, it looks horrible. Watch that melt. Watch Wilkins.

  72. Calum
    Posted Jan 23, 2009 at 5:33 AM | Permalink

    This study undermines the projections of GCMs.

    The accepted AGW contention/projection is that the long-term warming trend in the Antarctic, due to AGW, would be much smaller than the mean global warming trend (until a tipping point was reached sometime in the future, many decades, at which point the warming trend in the Antarctic would then accelerate leading to a catastrophic rise in sea-level). This contention/projection allowed for natural variability to play a significant part in explaining the obeserved cooling in Antarctic, now and in the recent past. Hence in Feb 2008 Real Climate stated “The Antarctic is cold? Yeah, we knew that”

    The results of this new study, a process of deduction and the application of a specific statistical method, undermines the projections of the accepted AGW modelling process. This study claims that the warming trend in the Antarctic ‘matches’ the mean global warming trend over the same period. It is worth repeating, “This study claims that the warming trend in the Antarctic ‘matches’ the mean global warming trend over the same period.” This represents a real and significant quandry for proponents of AGW. Do they reject their own models and accept the results of this study? Confusingly, some AGW proponents have decided to accept both , hence in Jan 2009 we have Real Climate effectively stating, “The Antarctic is warming, Yeah we knew that too.”

    It has to be noted that not all proponents of the AGW hypothesis are happy with this study. Indeed, a few have expressed their skepticism publicly. Its results do not sit well with their currently held views.

  73. CJA
    Posted Jan 23, 2009 at 7:32 AM | Permalink

    Steve (83) – He definitely had to root into the server log, as there was no way to identify the university via my name or email. (I assumed that the IP just popped up while he was reviewing my comment as part of the comment-review protocol). Thanks for the comment on the sources he led me to.

    Steve: On my editor screen, IP addresses are shown, so, now that I think of it, it wouldn’t require looking at a server log, only an IP lookup. You’d think that he had better things to do. Also, blog software usually says that the IP information is only for control purposes and not for the prurient interest of the blog operator.

  74. Steve McIntyre
    Posted Jan 23, 2009 at 8:22 AM | Permalink

    #88. I took SVD in linear algebra purely as a math thing, without any connection to stats, only to eigenvalues and peculiar matrix operations. Golub looks like that to me. I still have my linear algebra book from 1963, all bristling with complicated concepts.

    • Mark T.
      Posted Jan 23, 2009 at 9:57 AM | Permalink

      Re: Steve McIntyre (#97),

      I took SVD in linear algebra purely as a math thing, without any connection to stats, only to eigenvalues and peculiar matrix operations. Golub looks like that to me. I still have my linear algebra book from 1963, all bristling with complicated concepts.

      Me too. However, I brilliantly went through all of my degrees without taking linear algebra (it was my last class, actually), so I had learned SVD looooong before I got that treatment. SVD is the method mentioned in Hyvarinen’s “Independent Component Analysis” text for doing PCA, btw.

      Mark
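
      A minimal base-R sketch of the SVD/PCA connection being discussed (not from any of the papers here, just the linear algebra): the principal component scores of a centered data matrix are its left singular vectors scaled by the singular values.

      set.seed(1)
      X = scale(matrix(rnorm(200), 50, 4), center = TRUE, scale = FALSE)
      s = svd(X)                  # X = U D V'
      pcs = s$u %*% diag(s$d)     # PC scores via SVD: X V = U D
      # agrees with prcomp() up to the usual per-column sign ambiguity
      all.equal(abs(pcs), abs(prcomp(X)$x), check.attributes = FALSE)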

      • Jeff Alberts
        Posted Jan 23, 2009 at 12:57 PM | Permalink

        Re: Mark T. (#100),

        Re: Steve McIntyre (#97),


        You guys just made my brain melt…AGAIN

  75. mugwump
    Posted Jan 23, 2009 at 9:43 AM | Permalink

    Reading through now. This seems very marginal.

    detrending of the TIR data before reconstruction demonstrates that the results do not depend strongly on trends in said data (Supplementary Information).

    [p. 460, paragraph 2]

    Next paragraph:

    In the reconstruction based on detrended TIR data, warming in West Antarctica remains significant at greater than 99% confidence, and the continent-wide mean trend remains at 0.08C per decade, although it is no longer demonstrably different from zero (95% confidence).

    Let’s think about that. If

    the results do not depend strongly on trends in said data

    yet when you detrend said data

    the continent-wide mean trend … is no longer demonstrably different from zero

    the only “result” you can claim is that the continent-wide trend is statistically indistinguishable from zero.

    Somehow they get to spin this as Antarctica is warming?
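
    A toy R illustration of that point (made-up numbers, not the paper’s data): a simulated series with a true trend of 0.08C per decade can easily yield a 95% confidence interval that spans zero, i.e. a trend “not demonstrably different from zero”.

    set.seed(1)
    yr = 1957:2006
    x = 0.008 * (yr - 1957) + rnorm(length(yr), sd = 0.5)  # 0.08C/decade plus noise
    fit = lm(x ~ yr)
    confint(fit, "yr", level = 0.95)  # does the interval include zero?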

  76. Mike Davis
    Posted Jan 23, 2009 at 9:50 AM | Permalink

    96 Craig: That is the answer! It is all in the song lyrics!

  77. RomanM
    Posted Jan 23, 2009 at 10:01 AM | Permalink

    Thanks to Steve’s scraping up the data I was able to do some looking at what is going on.

    Because of a lot of missing values, it is difficult to calculate annual anomaly values for many of the temperature sequences. Instead, I thought that a good initial look might be obtained by calculating the trends for each sequence by month. This way, you don’t have to calculate anomalies and missing values in some months play a lesser role. So, I wrote an R script for calculating trends by month for one or more stations:

    # Calculate regressions (temp ~ time) for each month separately for a given time series
    # will only do regression when at least lim (default = 3) non-NAs exist for a given month
    # Outputs trend (slope) and degrees of freedom for residuals

    regress = function(tsdat, lim = 3) {
      star = min(time(tsdat))
      slopes = rep(NA, 12)
      df = slopes
      for (i in 1:12) {
        dat.win = window(tsdat, start = c(star, i), deltat = 1)
        yr = time(dat.win)
        if (sum(!is.na(dat.win)) > (lim - 1)) {
          lmmod = lm(dat.win ~ yr)
          slopes[i] = coef(lmmod)[2]
          df[i] = lmmod$df.residual
        }
      }
      list(slope = slopes, df = df)
    }

    # example
    regress(Data$surface[,1])

    # run regress program for a set of time series
    all.reg = function(alldat) {
      nc = ncol(alldat)
      slop = matrix(NA, nrow = nc, ncol = 12)
      df = slop
      colnames(slop) = month.abb
      colnames(df) = month.abb
      rownames(slop) = colnames(alldat)
      rownames(df) = colnames(alldat)
      for (j in 1:nc) {
        reg = regress(alldat[, j])
        slop[j, ] = reg$slope
        df[j, ] = reg$df
      }
      list(slope = slop, df = df)
    }

    # Examples:
    # do all surface time series
    trsurf = all.reg(Data$surface)

    #do all surface from 1980 to present
    trsurf.1980 = all.reg(window(Data$surface, start=1980))

    #do all aws
    traws = all.reg(Data$aws)

    # Plot graph of all surface for each month
    matplot(t(trsurf$slope), type = "l", main = "Spaghetti Graph of 43 Stations",
            ylab = "Annual Trend (C/Year)", xlab = "Month")
    text(1:12, -.6, month.abb, col = "red")

    The graph generated looks like this:

    Since the number of available temperatures varies so widely between series (and between months within a series), the results have to be read carefully. That is why I also included the degrees of freedom for the residuals of each of the regressions done in the script. The wildly varying series in the graph (with a trend of +1.3C per year in July!!!) is based on only five consecutive years of data in the 1980s.

    I haven’t tried to do any summarizing or analysis of the results (for example, how trends relate to latitude and longitude), but I hope this will help for looking at the data. If anyone finds any glitches in the script, please let me know.
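
    As a small follow-on sketch using the trsurf object computed above: blank out any monthly slope estimated with fewer than (say) 10 residual degrees of freedom, so short fragmentary records like that one don’t dominate the plot. The cutoff of 10 is arbitrary.

    ok = !is.na(trsurf$df) & trsurf$df >= 10
    slopes.screened = ifelse(ok, trsurf$slope, NA)  # ifelse() keeps the matrix shape
    matplot(t(slopes.screened), type = "l", main = "Stations with 10+ df per month",
            ylab = "Annual Trend (C/Year)", xlab = "Month")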

  78. Steve McIntyre
    Posted Jan 23, 2009 at 10:19 AM | Permalink

    #101. Roman, when I read a post by someone else with a script, it renews my efforts to do so myself. All too often people post interesting things up, but it’s hard to tell what they’ve done without a script.

    Memo to Excel Users: one of the many disadvantages of Excel for statistical analysis is replicability – a cornerstone of CA objectives. You may have done the prettiest spreadsheet in the world, but it’s no substitute for being able to paste an R-script into a console and repeat someone’s analysis in a way that you can vary it if you want. (Matlab as well, but R is an open source platform.)
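
    For the Excel users, a one-line sketch of the escape route (assuming the collated Data object from this thread is loaded): write the surface series out to a CSV file that any spreadsheet can open.

    write.csv(data.frame(year = as.numeric(time(Data$surface)), Data$surface),
              "antarctic_surface.csv", row.names = FALSE)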

    • RomanM
      Posted Jan 23, 2009 at 10:53 AM | Permalink

      Re: Steve McIntyre (#102),

      it’s no substitute for being able to paste an R-script into a console and repeat someone’s analysis in a way that you can vary it if you want.

      I think that being able to easily vary parts of the script to adapt them to do different analyses is a major time and effort saver. It is a simple matter to change what I wrote to include R-squares or standard errors for the slopes or even the entire regression as part of the output. Not to mention that I pick up occasional tips to improve my own programming 🙂 .

  79. Craig Loehle
    Posted Jan 23, 2009 at 10:23 AM | Permalink

    If the trend is not statistically different from 0, then that is that. End of story. You don’t get to claim it is warming!

  80. Steve McIntyre
    Posted Jan 23, 2009 at 10:24 AM | Permalink

    #101. I have a hard time distinguishing those trends from zero. Amazing that taking the eigenfunctions of the covariance matrix 🙂 would have such an impact.

  81. Steve McIntyre
    Posted Jan 23, 2009 at 11:05 AM | Permalink

    Not to mention that I pick up occasional tips to improve my own programming

    Me too. I’ve learned it on the run and am always alert for new techniques and methods. Reader Nicholas was especially clever at downloads and scrapes and compression and I consult his scripts all the time.

  82. Steve McIntyre
    Posted Jan 23, 2009 at 11:06 AM | Permalink

    Eric Steig responded very promptly to an inquiry saying that the data would be online next week.

  83. Phil
    Posted Jan 23, 2009 at 12:12 PM | Permalink

    Looking at various web sites that describe the instrument used in this study (the USGS Advanced Very High Resolution Radiometer [AVHRR]), there are certain statements that may or may not have an impact on the data used in this study. Below, I have excerpted certain quotes that may raise questions about the data, although it is unclear from the descriptions whether the data used in this study are affected. Please refer to the complete description on the respective websites for context.

    From http://edc2.usgs.gov/1km/paper.php:

    Under 2.2 Data Acquisition Network: …onboard tape recorders to acquire LAC data. The one exception is Antarctica where complete land coverage is not always available (emphasis added). Since the major uses of the data sets are related to surface vegetation cover, this is currently not regarded as a major limitation.

    Does this mean that there are gaps in the satellite data used in this study? It is hard to tell one way or the other from this description. Also, the statement that the main intended use of the data sets is related to surface vegetation cover raises the question of whether this instrument was designed to be used to measure temperatures in Antarctica, where there is essentially no vegetation. Again, this may or may not affect the data used in the study.

    Under 2.4 Orbital Pass Generation: …The global land 1-km AVHRR data set consists of only the afternoon (ascending) passes, and no descending (night time) data, from the NOAA polar orbiting satellites. … each half-orbital pass did not stretch from pole to pole.

    Is it possible that only afternoon data was collected and no nighttime data? If this is true, it would have a major impact on the credibility of the study. Unfortunately, one cannot tell from this description if this statement applies to the data used in the study.

    Under 3.3 Atmospheric Correction: … Several approaches for correction of water vapor exist but there is no community agreement on a feasible method. The basic problem is determination of the spatial and temporal variability of water vapor concentrations. The same circumstances affect aerosol corrections. Therefore no water vapor or aerosol correction will be applied.

    The input to the atmospheric correction process is radiance values from the calibrated visible and near-infrared channels. The output of the atmospheric correction process is surface reflectance (in percent) of the visible and near-infrared, albeit without corrections for water vapor and aerosols.

    Presumably, there is little water vapor and there are few aerosols over Antarctica, but maybe not. The fact that no corrections are made for water vapor and aerosols may or may not be relevant to the study.

    From http://edc.usgs.gov/guides/avhrr.html:

    Under Acquisition … Night acquisitions are acquired upon request only. … Prior to March 1990, approximately 40 percent of the data received were archived. (i.e. 60 percent of data prior to March 1990 may be missing)

    The study states under “METHODS-Data” that the

    passive infrared brightness measurements … are continuous beginning January 1982 and constitute the most spatially complete Antarctic temperature data set.

    The description from the above referenced AVHRR website may or may not be in conflict with the statement in the study in that night acquisitions may not have been requested for this time period and that data prior to March 1990 may be very incomplete. Again, it is difficult to tell whether these statements apply to the data in the study, but I think it is worth asking.

    • Alan Wilkinson
      Posted Jan 24, 2009 at 3:06 AM | Permalink

      Re: Phil (#110),

      Pielke Sr made a similar comment that the satellite data is time-dependent and doesn’t measure max/min values as surface stations do.

      The satellite IR data commenced in 1982, yet Fig 2 purports to show reconstructed satellite data stretching back to 1957. Furthermore, the “warming” occurs prior to 1982. Yet Mann claimed the measured warming trend was independent of the surface station data and its admitted quality defects.

      I don’t believe this can be true.

      • Alan Wilkinson
        Posted Jan 24, 2009 at 3:15 AM | Permalink

        Re: Alan Wilkinson (#125),

        Correction, it was Steig, not Mann, who said:

        The AWS records are useful because they provide a totally independent estimate of the spatial covariance pattern in the temperature field (which we primarily get from satellites). I agree with you that in general the data quality at many of the AWS sites is suspect. However, the trends in our results (when we use the AWS) don’t depend significantly on trends in the AWS data (in fact, the result changes little if you detrend all the AWS data before doing the analysis).

  84. Mark T.
    Posted Jan 23, 2009 at 1:18 PM | Permalink

    I’m sorry. My brain melts often, too, if it’s any consolation. 🙂

    Mark

  85. thefordprefect
    Posted Jan 23, 2009 at 6:33 PM | Permalink

    Cooling/warming in the Antarctic:
    CO2 is acknowledged to be reasonably homogeneous over the earth.
    However, there is an anthropogenic ozone hole over the Antarctic (and a smaller one over the Arctic).
    From Wikipedia, ozone contributes 3–7% to the “greenhouse” effect. The lack of ozone and the increase in CO2 over the Antarctic must be fighting it out for dominance.

    When the ozone hole starts closing, then perhaps the ice cover will really start melting! Time to get those old cans of 1980s deodorants out and start spraying.

    • Luis Dias
      Posted Jan 23, 2009 at 7:27 PM | Permalink

      Re: thefordprefect (#118),

      Or otherwise. The Antarctic is very, very cold. A slight increase in temperature would make the surrounding sea warmer, which in turn would cause more precipitation, which would fall as snow, which in turn would cause the Antarctic ice sheets to grow. Which in turn would raise the albedo…

      Oh, these chaotic phenomena are so hard to predict 🙂

    • John M
      Posted Jan 23, 2009 at 7:34 PM | Permalink

      Re: thefordprefect (#118),

      Either I’m dense, or the writing in the Steig et al. paper is pretty obtuse. Where in the paper does it say that ozone depletion and AGHGs are duking it out in the Antarctic in a radiative forcing sense?

      The discussion I see indicates that there is some confusion as to how ozone depletion ought to impact wind patterns.

      Our reconstruction differs from the results of modelling experiments that tie Antarctic surface temperature change to stratospheric ozone loss through changes in the SAM. In such simulations, the largest negative temperature anomalies in East Antarctica occur in summer, whereas in our reconstruction, East Antarctic cooling is restricted to autumn (Fig. 3). The simulations show warming in austral summer and autumn, restricted to the peninsula, whereas in our reconstruction the greatest warming is in winter and spring, and in continental West Antarctica as well as on the peninsula.

      Later on, in conjunction with zonal trends:

      Radiative forcings alone are inadequate to account for the observations (Supplementary Information).

      I know in their PR campaign, the authors speculate that ozone recovery will accelerate warming in the Antarctic, but where is it in their peer-reviewed paper?

      BTW, if you want to argue that ozone depletion is leading to cooling by a decrease in radiative forcing, shouldn’t the cooling anomalies be greatest in the Antarctic spring?

      http://www.climateaudit.org/?p=4914#comment-321699

      • thefordprefect
        Posted Jan 23, 2009 at 8:14 PM | Permalink

        Re: John M (#120), Well when I wrote that it was just “my” idea. However having looked it seems to be a very old idea!

        Ozone Hole Recovery May Reshape Southern Hemisphere Climate Change
        April 24, 2008
        A new study by CU-Boulder, NOAA and NASA shows the recovery of the Antarctic ozone hole may amplify warming in the continent’s interior…
        A full recovery of the stratospheric ozone hole could modify climate change in the Southern Hemisphere and even amplify Antarctic warming, according to scientists from the University of Colorado at Boulder, the National Oceanic and Atmospheric Administration and NASA…
        The scientists found that as ozone levels recover, the lower stratosphere over the polar region will absorb more harmful ultraviolet radiation from the sun. This could cause air temperatures roughly 6 to 12 miles above Earth’s surface to rise by as much as 16 degrees Fahrenheit, reducing the strong north-south temperature gradient that currently favors the positive phase of SAM, said the research team…
        NASA’s Pawson said ozone recovery over Antarctica would essentially reverse summertime climate and atmospheric circulation changes that have been caused by the presence of the ozone hole…

        http://www.colorado.edu/news/r/203359a9370cd63b7591db525a1a656a.html

        So it’s not the GHG effect only (peaking in spring–summer, October/November?), but a change in prevailing winds occurring later.

        • John M
          Posted Jan 23, 2009 at 8:38 PM | Permalink

          Re: thefordprefect (#121),

          So it’s not the GHG effect only (peaking in spring–summer, October/November?), but a change in prevailing winds occurring later.

          Yes, I know that’s the argument, but doesn’t the Steig et al. paper now call into question exactly what the effect of ozone depletion on the circulation is? I thought what I quoted says that their “reconstruction” differed from the original models and they had to re-examine the models. Since their results seem to be “inconsistent” (sorry) with previous models, I’m asking where their paper says added ozone should lead to enhanced warming.

      • Urederra
        Posted Jan 24, 2009 at 10:40 AM | Permalink

        Re: John M (#117),

        I know in their PR campaign, the authors speculate that ozone recovery will accelerate warming in the Antarctic, but where is it in their peer-reviewed paper?

        I am looking at the IPCC AR4 report, and in the second section, the one entitled “Changes in Atmospheric Constituents and in Radiative Forcing”, on page 204 there is a chart that shows the weight of the different types of anthropogenic radiative forcings. For stratospheric ozone they give a “re-evaluated” value of -0.05 +/- 0.10 W/m^2.

        I wonder how they can predict or project anything if they don’t even know whether the anthropogenic change in stratospheric ozone cools or warms the atmosphere. It is like trying to send a rocket to Mars when you don’t know whether gravity pulls or pushes.

        In my opinion, the levels of uncertainty in the anthropogenic RFs shown in that chart prevent anybody from running any reliable computer model. Just look at the cloud albedo effect (-0.70 [-1.1, +0.4] W/m^2) – that’s a huge uncertainty in the value! How can you rely on any climate model that uses such fuzzy parameters?

        • John M
          Posted Jan 24, 2009 at 2:44 PM | Permalink

          Re: Urederra (#136),

          Along those lines, in the Steig et al. paper:

          The [literature] results show mean trends of 1.1+/-0.8 C per decade and 0.45+/-1.3 C per decade at Siple and Byrd respectively (ref 13). Our reconstruction yields 0.29+/-0.26 C per decade and 0.36+/-0.37 C per decade over the same interval. In our full 50-year reconstruction, the trends are significant, although smaller, at both Byrd (0.23+/-0.09 C per decade) and Siple (0.18+/-0.06 C per decade).

          Interesting how 0.18+/-0.06 is considered smaller than 0.29+/-0.26 and 0.23+/-0.09 is considered smaller than 0.36+/-0.37.

          And these data are what are used in part to draw all the pretty colored pictures showing the contour lines and significant warming, and are also used (presumably with the radiative forcing data you quote) to fine tune the models, including accounting for all the ocean and wind flow data that now explains everything.

    • Jeff Alberts
      Posted Jan 23, 2009 at 10:56 PM | Permalink

      Re: thefordprefect (#115),

      As far as we know the “ozone holes” have always been there and always will. There’s zero evidence that they are human-caused, human-influenced, or “recovering”. There’s been extremely little change in them since they were identified in the mid 1950s.

      • thefordprefect
        Posted Jan 23, 2009 at 11:13 PM | Permalink

        Re: Jeff Alberts (#121), The Antarctic ozone hole was discovered in 1985 by British scientists Joseph Farman, Brian Gardiner, and Jonathan Shanklin of the British Antarctic Survey.
        http://www.theozonehole.com/

        Mike

        • Urederra
          Posted Jan 24, 2009 at 9:47 AM | Permalink

          Re: thefordprefect (#123),

          There is no such thing as an ozone hole.

          Ozone readings have never dropped to 0 Dobson units at either pole.

        • Jeff Alberts
          Posted Jan 24, 2009 at 7:35 PM | Permalink

          Re: thefordprefect (#123),

          Actually it was discovered in 1956 by Dobson, the guy the Dobson Unit is named after.

      • Phil.
        Posted Jan 24, 2009 at 12:07 AM | Permalink

        Re: Jeff Alberts (#121),

        As far as we know the “ozone holes” have always been there and always will. There’s zero evidence that they are human-caused, human-influenced, or “recovering”. There’s been extremely little change in them since they were identified in the mid 1950s.

        That really is denial, see below!

        • Ron Cram
          Posted Jan 24, 2009 at 10:18 AM | Permalink

          Re: Phil. (#124),

          Where does this data come from? You provide an image but no link. According to this, published in 2007, the ozone hole has not been shrinking. And according to this, the ozone hole was bigger in 2008 than in 2007.

          I do not understand why deodorant sprayed in North America would congregate over the South Pole and cause a hole in the ozone layer. We are getting OT here. Perhaps you would like to explain the mechanism of this physical theory on the message board?

        • Jeff Alberts
          Posted Jan 24, 2009 at 7:37 PM | Permalink

          Re: Phil. (#124),

          No it isn’t; see below:

          Besides a step change, which would be totally inconsistent with gradually increasing use of CFCs, the trend is flat.

        • thefordprefect
          Posted Jan 25, 2009 at 7:44 AM | Permalink

          Re: Jeff Alberts (#147),
          Factfile
          The discovery of the ozone hole was first announced in a paper by British Antarctic Survey’s Joe Farman, Brian Gardiner and Jonathan Shanklin, which appeared in the journal Nature in May 1985.
          Ozone in the atmosphere is measured using the Dobson Spectrophotometer – equipment designed in the 1920s, but still the world standard. Ozone is measured in Dobson Units, DU and a typical measurement is about 300 DU.
          An ozone hole is defined as an area of the atmosphere having ozone values less than 220 DU.

          Click to access the_ozone_hole_2008.pdf

          When was the ozone hole discovered?
          Ozone depletion by human-produced CFCs was first hypothesized in 1974 (Molina and Rowland, 1974). The first evidence of ozone depletion was detected by ground-based instruments operated by the British Antarctic Survey at Halley Bay on the Antarctic coast in 1982. The results seemed so improbable that researchers collected data for three more years before finally publishing the first paper documenting the emergence of an ozone hole over Antarctica (Farman, 1985). Subsequent analysis of the data revealed that the hole began to appear in 1977. After the 1985 publication of Farman’s paper, the question arose as to why satellite measurements of Antarctic ozone from the Nimbus-7 spacecraft had not found the hole. The satellite data was re-examined, and it was discovered that the computers analyzing the data were programmed to throw out any ozone readings below 180 Dobson Units as impossible. Once this problem was corrected, the satellite data clearly confirmed the existence of the hole.
          http://www.wunderground.com/education/holefaq.asp
          The graph shown in Phil. (#124) is from:
          http://www.atm.ch.cam.ac.uk/tour/part3.html
          Remember, the ozone hole refers to the minimum levels, not the average.
          Mike

        • Jeff Alberts
          Posted Jan 25, 2009 at 2:53 PM | Permalink

          Re: thefordprefect (#152),

          So, how is a step change explained by gradual increases in CFC usage? And how does a ban on CFCs not show a similar step change back to “normal”.

          Perhaps a hole was first “imaged” in the 80s, but the seasonal variation was discovered long before that. The imaging was just a formality.

        • thefordprefect
          Posted Jan 25, 2009 at 4:52 PM | Permalink

          Re: Jeff Alberts (#155),
          You’ll need to check your source data; going to the NASA page you get this:

          Some oddities but no step change.
          http://ozonewatch.gsfc.nasa.gov/

          Other links I have given explain the chemistry behind ozone depletion. Release CFCs and they diffuse out. To remove them they have to be chemically changed – a different process.

  86. John M
    Posted Jan 23, 2009 at 8:39 PM | Permalink

    Oops. Forgot to blockquote. That first sentence is yours.

  87. Steve McIntyre
    Posted Jan 23, 2009 at 11:06 PM | Permalink

    Realclimate says:

    The results don’t depend on the statistics alone. They are backed up by independent data from automatic weather stations,

    I just ran Roman’s script on the AWS data and the average of all the trends is almost perfectly flat – 0.000037282 deg C/year.

    Is there any basis for the RC claim or is this just them saying things?
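
    For reference, the calculation just described is a two-liner given Roman’s all.reg() script upthread and the collated Data object:

    traws = all.reg(Data$aws)
    mean(traws$slope, na.rm = TRUE)  # grand mean of all monthly AWS trends, deg C/year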

  88. Calum
    Posted Jan 24, 2009 at 4:03 AM | Permalink

    Real Climate have posted a postscript with regard to this contentious study (some scientists within the pro-AGW community have publicly expressed skepticism):

    Postscript
    Some comment is warranted on whether our results have bearing on the various model projections of future climate change. As we discuss in the paper, fully-coupled ocean-atmosphere models don’t tend to agree with one another very well in the Antarctic. They all show an overall warming trend, but they differ significantly in the spatial structure. As nicely summarized in a paper by Connolley and Bracegirdle in GRL, the models also vary greatly in their sea ice distributions, and this is clearly related to the temperature distributions. These differences aren’t necessarily because there is anything wrong with the model physics (though schemes for handling sea ice do vary quite a bit model to model, and certainly are better in some models than in others), but rather because small differences in the wind fields between models results in quite large differences in the sea ice and air temperature patterns. That means that a sensible projection of future Antarctic temperature change — at anything smaller than the continental scale — can only be based on looking at the mean and variation of ensemble runs, and/or the averages of many models. As it happens, the average of the 19 models in AR4 is similar to our results — showing significant warming in West Antarctica over the last several decades (see Connolley and Bracegirdle’s Figure 1).

    This new qualification – linking ‘similarity’ in warming trends with models – is not the same as the study’s original claim that the Antarctic warming trend ‘matches’ the mean global average over the same period.

    As Real Climate now contends, “The Antarctic is cold/warm (delete as appropriate)? Yeah, we knew that.”

    • Posted Jan 24, 2009 at 8:41 AM | Permalink

      Re: Calum (#127),

      Real climate also had this up in February

      Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.

      Maybe they had their computer monitors upside down.

  89. Vernon
    Posted Jan 24, 2009 at 6:41 AM | Permalink

    Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models. Andrew Monaghan, David Bromwich, and David Schneider. Geophysical Research Letters, April 5, 2008

    “We can now compare computer simulations with observations of actual climate trends in Antarctica,” says NCAR scientist Andrew Monaghan, the lead author of the study. “This is showing us that, over the past century, most of Antarctica has not undergone the fairly dramatic warming that has affected the rest of the globe. The challenges of studying climate in this remote environment make it difficult to say what the future holds for Antarctica’s climate.”

    The authors compared recently constructed temperature data sets from Antarctica, based on data from ice cores and ground weather stations, to 20th century simulations from computer models used by scientists to simulate global climate. While the observed Antarctic temperatures rose by about 0.4 degrees Fahrenheit (0.2 degrees Celsius) over the past century, the climate models simulated increases in Antarctic temperatures during the same period of 1.4 degrees F (0.75 degrees C).

    The error appeared to be caused by models overestimating the amount of water vapor in the Antarctic atmosphere, the new study concludes. The reason may have to do with the cold Antarctic atmosphere handling moisture differently than the atmosphere over warmer regions.

    That shows that, based on physical evidence, there was only 0.2C of warming over the century.

  90. Steve McIntyre
    Posted Jan 24, 2009 at 9:48 AM | Permalink

    Turner et al 2005:

    Only two stations from the interior of the Antarctic have long temperature records, so it is not possible to make any clear statement about change over this vast area.

    However, the data from Vostok shows no statistically significant change over a record that extends back over 40 years. Comiso (2000) found a slight cooling on the high plateau of East Antarctica over the period 1979 to 1998, and this is also reflected in the READER data for Vostok over this period. However, this occurred after several decades of slight warming since the station was established in 1958. Clearly, more work is needed on this decadal time scale variability of temperatures over East Antarctica.

    At Amundsen–Scott Station there is a cooling in all seasons, but only the annual trend of −0.17 °C decade−1 is statistically significant at the 10% level. However, it should be noted that it has not been possible to obtain much metadata for the station, and the study of Hogan et al. (1993) has highlighted changes in the nature of the temperature record around the time of the relocation of the station in December 1974. Clearly the South Pole temperature record requires further investigation.

  91. Calum
    Posted Jan 24, 2009 at 10:30 AM | Permalink

    Another way to look at this study is not to look for similarities with the projections of GCMs, as the authors of this study are now having difficulty doing, but to look for similarities with ‘natural variability’, introducing the notion of ‘locality’.

    If you do this, then the initial period of ‘deduced’ warming followed by the current period of ‘observed’ cooling in the Antarctic region may indicate not only natural variation, but that the magnitude of natural variation is much larger than anyone expected. These ‘localised’ natural variations may be so significant that they make the projected warming trends of GCMs meaningless.

    I think that this study highlights an unstated fact: no matter what aspect of Antarctica we humans look at, it continues to surprise. We can only conclude that this place is special and that scientists are continually being forced to re-draft and refine their explanations of what Antarctica is all about.

    The boys at Real Climate may have done everyone a favour.

  92. Bill Illis
    Posted Jan 24, 2009 at 10:31 AM | Permalink

    I downloaded the temperature data for the Amundsen-Scott station at the South Pole (from GISS) and there is monthly data going back to 1957.

    What this study really does is call out all those dedicated scientists (freezing their _ off) collecting the best temperature data they can presumably using the best methods and equipment available – and this study says their data is wrong.

    Here is the South Pole annual temp data back to 1957 – the trend is -0.0666C per decade. Mann and Steig are effectively saying this data has been collected improperly.

    Here is the monthly temp series (and I don’t know why the study linked above says there needs to be “investigation” of it).

    • Gerald Machnee
      Posted Jan 24, 2009 at 10:53 AM | Permalink

      Re: Bill Illis (#135),
      This data cannot be better than infilled.

    • Simon Evans
      Posted Jan 26, 2009 at 5:04 PM | Permalink

      Re: Bill Illis (#135),

      And you did the same for Vostok too, did you? Or are you only interested in putting a case to question the paper rather than presenting a balanced assessment (the paper specifically references the SP cooling trend – see figure 4)?

      • MartinGAtkins
        Posted Jan 26, 2009 at 6:46 PM | Permalink

        Re: Simon Evans (#164),

        And you did the same for Vostok too,

        My linear trend line gives a different result from Bill’s but matches other documented observations.

        Vostok since 1958: -0.9C, or -0.176C per decade.
        Amundsen-Scott since 1957: -0.8C, or -0.153C per decade.

        • Simon Evans
          Posted Jan 26, 2009 at 8:24 PM | Permalink

          Re: MartinGAtkins (#165),

          That’s interesting, Martin. The Steig paper declares the Vostok trend as being +0.1C/decade over 1957-2006. Perhaps you have discovered a major fault in the paper? I think you should follow this up.

        • MartinGAtkins
          Posted Jan 27, 2009 at 6:43 AM | Permalink

          Re: Simon Evans (#166),

          I followed it up and found a bug in “gnuplot”. If you run many linear plots for a prolonged time, it starts to give erroneous results. Annoying, because if I can’t find a fix I’ll have to change the software.

          The only chart I can find indicates a cooling trend, but it only goes up to about 1999 and has no linear trend line.

  93. Michael Jankowski
    Posted Jan 24, 2009 at 11:30 AM | Permalink

    I’d love to see a study taking data from a comparable array of random US stations and the satellite data and seeing how well this methodology works. It would seem to me that demonstrating success of the methodology in such a manner before applying it elsewhere would be the proper way of doing things, scientifically speaking.

    • Craig Loehle
      Posted Jan 24, 2009 at 11:55 AM | Permalink

      Re: Michael Jankowski (#138), excellent point. Always test a new method where you have known data for testing (even if you have to use artificial data for that). Of course, they view RegEM as established methodology.
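
      A toy version of the test being described (illustrative only, and nothing like RegEM): generate artificial data with a known trend, delete the early half, infill it naively, and compare the recovered trend with the truth.

      set.seed(2)
      mon = 1:600                             # 50 years of months
      x = 0.002 * mon + rnorm(600, sd = 0.5)  # known trend: 0.24C per decade
      x.miss = x; x.miss[1:300] = NA          # first 25 years "unobserved"
      x.fill = ifelse(is.na(x.miss), mean(x.miss, na.rm = TRUE), x.miss)
      120 * coef(lm(x.fill ~ mon))[2]         # recovered trend; compare with 0.24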

    • Posted Jan 24, 2009 at 12:30 PM | Permalink

      Re: Michael Jankowski (#138),

      If I’m right, this is a problem because the sat data was actually a surface emissivity measurement, not LT microwave. The surface measurement responds to trees, dirt and water differently than ice. It’s difficult to replicate anywhere else.

      • tty
        Posted Jan 24, 2009 at 2:07 PM | Permalink

        Re: Jeff Id (#140),

        Much of the higher latitudes of the northern hemisphere north of the treeline in winter should work. The emissivity would hardly be affected by whether there is ice or dirt under the snow.

        • Posted Jan 24, 2009 at 4:37 PM | Permalink

          Re: tty (#141),

          If there are enough stations and few enough trees it would work; heck, I’m an engineer, verification is about all I can understand. I wonder, with interpolations, how much slight adjustment would be required to still get your pre-determined result. Mann’s comment in Steve’s next post almost reads like relief that they were able to show warming, which is not terribly scientific.

  94. Bernie
    Posted Jan 24, 2009 at 2:16 PM | Permalink

    Jeff:
    Re: tty (#141), How about Greenland?

    • tty
      Posted Jan 24, 2009 at 3:48 PM | Permalink

      Re: Bernie (#142),

      Very few weather stations up on the icecap in Greenland. There is a fairly good coastal station net, but those are mostly in a maritime low-arctic climate. The Russian or Canadian tundra, or even the northern Steppes/Prairies are probably better Antarctic analogs in winter.

  95. Gerald Machnee
    Posted Jan 24, 2009 at 8:32 PM | Permalink

    Do we know who peer-reviewed this study?

  96. Posted Jan 24, 2009 at 9:00 PM | Permalink

    I’ve collected an assortment of graphs and pictures that support a more reasonable picture of Antarctica here. I find Prof Humlum’s thumbnail Antarctica temperature pics, classed by season and decade, very revealing. Also last year’s study Antarctic Temperature and Sea Ice Trends.

  97. Alan Wilkinson
    Posted Jan 24, 2009 at 10:37 PM | Permalink

    The immediate questions I would pose as an auditor (or a businessman considering an investment based on this work) would be:

    a) what does a sensitivity analysis show to be the critical data points for establishing these trends?

    b) what does examination of the sources of those critical measurements indicate about their reliability?

    c) why does a reanalysis of existing data combined with new (satellite IR) data that dates back only to 1982 justify a reversal of sign of previously observed temperature trends, particularly since the major reversal is prior to the start of the new data?

    d) are the means and relativities of the different data series inputs reliably and sufficiently aligned to support a trend calculation that spans them?

    e) do the confidence limits placed on the trend results survive proper treatment of autocorrelation and realistic degrees of freedom for the study data? (A rough R sketch of this check follows below.)
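
    On (e), a minimal sketch of the standard first-order adjustment (assuming x is a monthly anomaly series): deflate the effective sample size for lag-1 autocorrelation of the residuals before judging whether a trend is significant.

    trend.ci = function(x, t = seq_along(x)) {
      fit = lm(x ~ t)
      r1 = acf(resid(fit), plot = FALSE)$acf[2]  # lag-1 autocorrelation of residuals
      n.eff = length(x) * (1 - r1) / (1 + r1)    # effective sample size
      se.adj = summary(fit)$coefficients[2, 2] * sqrt(length(x) / n.eff)
      coef(fit)[2] + c(-2, 2) * se.adj           # rough 95% interval for the slope
    }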

  98. DJA
    Posted Jan 25, 2009 at 5:29 AM | Permalink

    Just a thought, are letters to Nature peer reviewed?

  99. thefordprefect
    Posted Jan 25, 2009 at 11:45 AM | Permalink

    http://www.antarctica.ac.uk/bas_research/our_views/climate_change.php
    A non-hysterical statement.

    Not peer reviewed
    No statistical evidence

  100. janama
    Posted Jan 25, 2009 at 3:49 PM | Permalink

    The University of Wisconsin has maintained around 100 AWS in the Antarctic since 1980.

    http://amrc.ssec.wisc.edu/databook/fieldreports/fldrep03.doc

    Why haven’t the Australian stations been included?

    http://aws.acecrc.org.au/datapage.html

  101. RomanM
    Posted Jan 25, 2009 at 4:29 PM | Permalink

    I tried to find a free digital data set for plotting those neat maps of Antarctica as viewed from above (from below?) so I could plot station locations in R. I didn’t come across anything useful (and didn’t find what I needed in R), so I came up with a fairly simple way to do it. You will need Steve’s data set for plotting the station values:

    # function to convert lat and long to south polar view
    trans.spole = function(lat, lon, R = 1) {
      crad = pi/180
      x = R * sin(crad * (90 - lat)) * sin(crad * lon)
      y = R * sin(crad * (90 - lat)) * cos(crad * lon)
      list(x = x, y = y)
    }

    library(maps)

    # extract Antarctica from world map
    temp = map("world", plot = FALSE)
    anta = map("world", region = temp$names[grep("Anta", temp$names)], plot = FALSE)

    # convert to south polar view
    anta.p = trans.spole(anta$y, anta$x)
    anta$x = anta.p$x
    anta$y = anta.p$y
    rm(anta.p, temp)

    To illustrate the use, I calculated the mean of the monthly trends for the years 1969 to 2000, using only the stations with (on average) 12 or more years of temperature records, and plotted the mean trends on the map of Antarctica using blue and red for negative and positive, with the size of each dot proportional to the value:

    # Calculate regressions (temp ~ time) for each month separately for a given time series
    # will only do regression for at least lim non-NAs in a given month (default 3 or more)
    # Outputs trend (slope) and degrees of freedom for residuals

    regress = function(tsdat, lim = 3) {
      star = min(time(tsdat))
      slopes = rep(NA, 12)
      df = slopes
      for (i in 1:12) {
        dat.win = window(tsdat, start = c(star, i), deltat = 1)
        yr = time(dat.win)
        if (sum(!is.na(dat.win)) > (lim - 1)) {
          lmmod = lm(dat.win ~ yr)
          slopes[i] = coef(lmmod)[2]
          df[i] = lmmod$df.residual
        }
      }
      list(slope = slopes, df = df)
    }

    # run regress for a set of time series
    all.reg = function(alldat, lim = 3) {
      nc = ncol(alldat)
      slop = matrix(NA, nrow = nc, ncol = 12)
      df = slop
      colnames(slop) = month.abb
      colnames(df) = month.abb
      rownames(slop) = colnames(alldat)
      rownames(df) = colnames(alldat)
      for (j in 1:nc) {
        reg = regress(alldat[, j], lim = lim)
        slop[j, ] = reg$slope
        df[j, ] = reg$df
      }
      list(slope = slop, df = df)
    }

    # get station lat and long information
    sta = Info$surface[-c(2, 16, 38), 1:4]

    # convert station lat and long to south polar view
    sta.coord = trans.spole(sta$lat, sta$long)

    # do all surface from 1969 to 2000
    # restrict to 12-year or longer records
    trsurf.latex = all.reg(window(Data$surface, start = 1969, end = 2000), lim = 12)
    decmean.latex = 10 * rowMeans(trsurf.latex[[1]])

    plot(sta.coord$x, sta.coord$y, pch = 19, asp = 1,
         col = c("blue", "red")[1 + (decmean.latex > 0)],
         cex = 4 * abs(decmean.latex), xlab = "", ylab = "", axes = FALSE,
         main = "Plot of Mean Station Trends")
    map(anta, fill = FALSE, add = TRUE)

    Hopefully, somebody might find it useful. Here is the result:

    • thefordprefect
      Posted Jan 25, 2009 at 5:29 PM | Permalink

      Re: RomanM (#157), This may be of interest?

      Click to access trends2006.col.pdf

      Antarctic near-surface temperature trends 1951-2006
      (Minimum of 35 years’ data required for inclusion)

      • RomanM
        Posted Jan 25, 2009 at 6:03 PM | Permalink

        Re: thefordprefect (#160),

        The point of my post was to give people some ability to generate a particular type of graph in R when they are examining the temperature data. My example was not intended as the result of some long study meant for deep discussion.

    • Q.F.
      Posted Jan 27, 2009 at 9:26 AM | Permalink

      Re: RomanM (#157)

      Just regarding the plot, the eye (mine at least) compares the circles by area not radius. It might be better to use:

      cex=k*sqrt(abs(decmean.latex))

      for some choice of k.

      Steve:
      I agree with this point; I’ve used this form of expression from time to time and like it a lot.

      • RomanM
        Posted Jan 27, 2009 at 10:38 AM | Permalink

        Re: Q.F. (#168),

        Excellent point! This type of adjustment is usually done when plotting histograms with unequal-width bars because there, too, the eye sees the area instead of the height of the bar.

        I actually tried this (AFTER I had already posted the script). k = 2 works pretty well.

        This same effect was noticeable when I tried plotting the results for the AWS as well as the stations on the same graph. I used squares for the AWS and circles for the stations. Even adjusting the relative sizes of the two symbols to match area didn’t help a lot because it got too crowded with a lot of overlapping.

  102. janama
    Posted Jan 25, 2009 at 5:08 PM | Permalink

    On Monday, June 9, 2008, the 3rd Antarctic Meteorological Observation, Modeling, and Forecasting Workshop was held in Madison.

    http://amrc.ssec.wisc.edu/meeting2008/program.shtml

    Here is their chart of the temperature variations observed by their Automatic Weather Stations from 1980 to 2005.

    When they compared their measurements to the computer model projections, they reached the following conclusions:

    - Observed temperature and precipitation for Antarctica show no statistically significant trends overall.
    - IPCC AR4 models have approximately the right snowfall sensitivity to temperature, but warm far too strongly relative to observations.
    - The cause is not related to the leading mode of atmospheric variability, the Southern Hemisphere Annular Mode; anomalously strong coupling between water vapor and temperature is implicated, which overwhelms the actual dynamical effect.
    - Obviously a lot more research is needed to isolate the precise cause among the models.
    - This does raise flags regarding the reliability of future projections of climate change over Antarctica.

    http://amrc.ssec.wisc.edu/meeting2008/presentations/Day3/DBromwich_AMOMFW_2008-2.pdf

  103. thefordprefect
    Posted Jan 25, 2009 at 5:34 PM | Permalink

    Apologies – there is also this:

    Click to access reader.temp.pdf

    Antarctic near-surface temperature trends 1971-2000
    (Minimum of 27 years’ data required for inclusion)

  104. Vernon
    Posted Jan 26, 2009 at 10:11 AM | Permalink

    I asked the author (Eric) over on RC whether the problems found in the following assessments of RegEM carry over into his work.

    Smerdon, Jason E., Kaplan, Alexey, Carver, Alexander J. Biases and variance losses in RegEM pseudo-proxy reconstructions for different coupled-model integrations: The impact of standardization procedures. Found here:

    Which was based on the conclusions from the Smerdon, J.E., and A. Kaplan (2007), Comment on “Testing the fidelity of methods used in proxy-based reconstructions of past climate”: The role of the standardization interval, by M.E. Mann, S. Rutherford, E. Wahl, and C. Ammann, Journal of Climate, 20(22), 5666-5670 paper.

    They draw the following conclusions about the use of RegEM:

    – The Rutherford et al. (2005) formulation of RegEM causes warm biases and variance losses in derived pseudo-proxy reconstructions.

    – Given real-world constraints, the Mann et al. (2005) pseudo-proxy test used a RegEM formulation that makes it inapplicable.

    – The Rutherford et al. (2005) RegEM reconstruction of historical climate is likely an underestimate of past variability from annual to centennial scales, and by comparison, so too is the Mann et al. (1998) result.

    – Our results do not invalidate RegEM as a suitable reconstruction technique, but suggest that currently documented results suffer from the shortcomings that we describe.

    Further in 2008 they published:

    Smerdon, J.E., A. Kaplan, and D. Chang, On the origin of the standardization sensitivity in RegEM climate field reconstructions, Journal of Climate, 21(24), 6710-6723

    “When standardizations are confined to the calibration interval only, pseudoproxy reconstructions performed with RegEM-Ridge suffer from warm biases and variance losses.”

    • Posted Jan 30, 2009 at 11:55 AM | Permalink

      Re: Vernon (#163), even as a non-statistician, it looks to me like you’ve got a potential hit with those conclusions of weakness in the work of Mann, Rutherford, Wahl and Ammann:

      “When standardizations are confined to the calibration interval only, pseudoproxy reconstructions performed with RegEM-Ridge suffer from warm biases and variance losses.”

      From the paper (no, Letter):

      “We use a method (notes 9,10) adapted from the regularized expectation maximization algorithm11 (RegEM) for estimating missing data points in climate fields. RegEM is an iterative algorithm similar to principal-component analysis… We assess reconstruction skill using reduction-of-error (RE) and coefficient-of-efficiency (CE) scores as well as conventional correlation (r) scores.”

      Note 10 is “Mann, M. E., Rutherford, S., Wahl, E. & Ammann, C. Robustness of proxy-based climate field reconstruction methods” and note 9 refers back to Rutherford.

      Janama @ #156, “no Australian data”: the Australian sector is on the cooler East side.

      The link to the letter @ #109 has gone dead now; is that the norm after x days? (Thankfully I’ve got it.) Cannot get Methods without paying – all this seems like more varieties of data that should be publicly available and isn’t.

  105. RomanM
    Posted Jan 27, 2009 at 10:39 AM | Permalink

    Hey, I didn’t see the inline response when I submitted my comment. Honest!!! 🙂

  106. Steve McIntyre
    Posted Jan 27, 2009 at 10:49 AM | Permalink

    #170. Inline responses aren’t always a good idea. I used this cex method with some plots showing weights of Mann proxies and it nicely showed the role of bristlecones. I’ve done a couple on Mann CPS and meant to post them last week.

  107. Not sure
    Posted Jan 27, 2009 at 5:24 PM | Permalink

    That seems to make quite a difference:

    Did I screw up the R?

    split.screen(c(1, 2))
    screen(1)
    plot(sta.coord$x, sta.coord$y, pch = 19, asp = 1,
         col = c("blue", "red")[1 + (decmean.latex > 0)],
         cex = 4 * abs(decmean.latex), xlab = "", ylab = "", axes = FALSE,
         main = "Plot of Mean Station Trends (Radius)")
    map(anta, fill = FALSE, add = TRUE)
    screen(2)
    plot(sta.coord$x, sta.coord$y, pch = 19, asp = 1,
         col = c("blue", "red")[1 + (decmean.latex > 0)],
         cex = 2 * sqrt(abs(decmean.latex)), xlab = "", ylab = "", axes = FALSE,
         main = "Plot of Mean Station Trends (Area)")
    map(anta, fill = FALSE, add = TRUE)

    • RomanM
      Posted Jan 27, 2009 at 8:30 PM | Permalink

      Re: Not sure (#172),

      No, you didn’t screw it up. A 2 to 1 ratio of radii becomes a 4 to 1 ratio in area, and the eye reacts to that. The square root takes it back to the correct relative relationship. Changing the 4 to a 2 reduces the size of the larger circles.

      It does make a difference!

  108. Jean S
    Posted Jan 28, 2009 at 8:46 AM | Permalink

    Some data is now here:
    http://faculty.washington.edu/steig/nature09data/ReadMe.html

    Of course, no code, and only links to raw data (the AVHRR data does not even seem to be up to date)…

One Trackback

  1. […] Antarctic RegEM […]