CRUTEM and HadCRU October 2008

Released today on the promised schedule are CRUTEM3 and HadCRUT3 for October 2008. October 2008 was among the top 8 CRUTEM3 Octobers (0.517 deg C) and the top 6 HadCRUT3 Octobers (0.440 deg C).

Because our collective eyes right now are fairly attuned to the colors of these grids and how changes in individual stations affect the contours, it’s interesting to take a look at these new contoured maps and compare them both to each other and to the GISS contour map.

First, the CRUTEM3 (top), HadCRUT3 (middle), and GISS (Nov 13 version – 250 km smoothing rather than the 1200 km of the front-page GISS image).

First a small point. Living in Toronto, I often look first at how the maps represent Toronto since I know what the weather’s been like here. For the most part the land portion of the HadCRUT3 map is identical to the CRUTEM3 map, but not where I live. In CRUTEM world, we experienced a colder than average October (which is how it felt on the ground here), while in HadCRU world we experienced a warmer than average October. One possible and even likely explanation is that HadCRU includes temperatures from the Great Lakes (which weren’t warm for swimming this year.) I didn’t go swimming at our place on Lake Ontario once this year. But maybe October water was less chilly than usual.

More points after you look at the graphics.



A second small point, in light of our watching the ball as stations got added to the GISS map of the Canadian Arctic. The land station contributions to HadCRUT3 look like a slightly later GHCN version than CRUTEM3. HadCRUT3 has a gridcell a bit to the southwest of Ellesmere Island (presumably Resolute) that got added after Nov 7, according to Gavin Schmidt. This gridcell is absent from the CRUTEM3 version. The Nov 13 GISS version lacks both Resolute and Alert, presumably because of an oversight as they put patches on patches.

A third point – the global average in the GISS 250 km version is 0.78 deg C, while it’s 0.61 deg C in the 1200 km version.

The CRUTEM3 version looks a lot like the GISS version. Although these compilations are often described as “independent”, recent events have clarified (if clarification were needed) that these compilations are not “independent”. Both rely almost exclusively on GHCN – GISS adds a few series around the edges.

The reverse engineering of CRUTEM3 looks almost pathetically easy given that we’ve already waded through step 0 of GISS, where they collate different GHCN versions (dset0) into a single station history (dset1.) CRU doesn’t have the bewildering sequence of smoothing operations that Hansen uses at multiple stages (though Hansen, mercifully, doesn’t use Mannian butterworth smoothing).

To my knowledge, unlike GISS, CRU does not make the slightest attempt to adjust for UHI, relying instead on articles like Jones et al 1990 purporting to show that UHI doesn’t “matter”.

We can already emulate GISS step0 – not that it makes any sense, but it provides a benchmark. Here’s all that seems to be necessary to produce a gridded CRUTEM3 series given a dset1 data set. First, create an anomaly-version of the series. I have a simple function anom on hand and this could be done as follows:

dset1.anom = apply(dset1, 2, anom)   # convert each station series to anomalies

Then one could make an average of dset1 series within gridcell i as follows, where info is an information dataset in my usual style containing for each station, inter alia, its lat, long and gridcell number (called “cell” here):

for (i in 1:2592) grid[, i] = apply(dset1.anom[, info$cell == i, drop = FALSE], 1, mean, na.rm = TRUE)   # 2592 = 72 x 36 five-degree gridcells; drop=FALSE guards cells with a single station

This would yield the CRUTEM3 series. My guess as to why they don’t want to show their work is that they probably use hundreds of lines of bad Fortran code to do something that you can do in a couple of lines in a modern language. Anyway, I’ll experiment with this at some point, but this is my hypothesis on all that’s required to emulate CRUTEM3. CRU has been funded by the US DOE; if, like GISS, they are doing nothing other than trivial sums on GHCN data, one feels that the money would be better spent on beefing up QC and data collection at GHCN.
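
For readers who want to follow along, a bare-bones version of such an `anom` function might look like the sketch below. This is my illustration only – not CRU’s code – and it simply subtracts each calendar month’s 1961-90 mean, the base period CRU uses for its normals; CRU’s actual normals calculation has additional missing-data rules.

```r
# Sketch of a simple anomaly function (illustrative only): given a monthly
# series x starting in January of `start`, subtract each calendar month's
# mean over the 1961-1990 base period.
anom <- function(x, start = 1850, base = c(1961, 1990)) {
  months <- rep(1:12, length.out = length(x))
  years  <- start + (seq_along(x) - 1) %/% 12
  inbase <- years >= base[1] & years <= base[2]
  clim   <- tapply(x[inbase], months[inbase], mean, na.rm = TRUE)
  as.numeric(x - clim[months])
}
```

Applied down the columns with apply(dset1, 2, anom), this reproduces the anomaly step; treat it strictly as a benchmark.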

I downloaded CRUTEM3.nc (today’s version) and checked for gridcells with October anomalies of 5 deg C or higher, and then checked to see which stations were in those gridcells. I obtained the following list, all but one in Siberia, the exception being Barrow, Alaska (where there is an extraordinary contrast between nearby stations that deserves comment).

1710 22223724000 NJAKSIMVOL’ 62.43 60.87
1716 22223921000 IVDEL’ 60.68 60.45
1729 22224817000 ERBOGACEN 61.27 108.02
1720 22224125000 OLENEK 68.50 112.43
1723 22224329000 SELAGONCY 66.25 114.28
1721 22224143000 DZARDZAN 68.73 124.0
1724 22224343000 ZHIGANSK 66.77 123.4
1686 22220069000 OSTROV VIZE 79.5 76.98
1688 22220292000 GMO IM.E.K. F 77.72 104.3
1693 22221432000 OSTROV KOTEL’ 76 137.87
1685 22220046000 GMO IM.E.T. 80.62 58.05
3361 42570026000 BARROW/W. POS 71.3 -156.78

It looks like CRU has been paying attention to the GHCN commotion and has avoided using one of the problem versions. It is however interesting that some of the above stations (Olenek, Erbogacen etc) were problem stations and one would hope that one of the data distributors has actually checked these stations against original daily versions.

59 Comments

  1. conard
    Posted Nov 14, 2008 at 1:14 PM | Permalink

    Nice post. I predict very little heat.

    As to your last sentence, is there a specific results-oriented reason you feel this is bad procedure? (I asked Nicholas the same thing in an earlier thread.) Unless the values are discrete, I will usually allow multiple bins or, less often, treat the boundary condition like the proverbial donkey and hay piles.

    Changing the title was a nice gesture. Thanks.

    Steve: I moved the comment about filters to an earlier spot in the post. I’m wary of Mannian-padded butterworth filters because they are designed for a completely different purpose – things like audio receivers – and focus on recovering frequencies rather than end points. In climate studies, you have negligible interest in recovering specific frequencies and considerable interest in not screwing up the end points. So they seem about as poor a choice of filtering method as one could make. Plus I instinctively resist the idea that an applied study should concurrently be developing smoothing functions. If Mann wants to advocate Mannian smoothing, he should publish in a statistics journal, not in a climate journal where none of the reviewers or readers really has any idea whether the concept makes any sense or not.

  2. Chris
    Posted Nov 14, 2008 at 1:33 PM | Permalink

    Steve, in my layman’s way, I’ve gone through the stations you listed above. Basically, what I’ve tried to demonstrate is that EVERY single one of them with data from pre-1951 (i.e. the start of the 1951-1980 GISS baseline) had at least one pre-1951 October that was milder than October 2008. This has obvious implications in terms of how appropriate the 1951-1980 baseline may be in judging what is “normal”. (I doubt you’ll be surprised by what I’ve come up with, but I hope it’s useful work anyway.)

    Also, the following links provide an interesting graphical representation of a couple of the mildest pre-1951 Siberian Octobers, to compare against October 2008:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=10&sat=4&sst=0&type=anoms&mean_gen=10&year1=1943&year2=1943&base1=1951&base2=1980&radius=250&pol=reg
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=10&sat=4&sst=0&type=anoms&mean_gen=10&year1=1947&year2=1947&base1=1951&base2=1980&radius=250&pol=reg
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=10&sat=4&sst=0&type=anoms&mean_gen=10&year1=2008&year2=2008&base1=1951&base2=1980&radius=250&pol=reg

    1710 22223724000 NJAKSIMVOL’ 62.43 60.87

    Oct 2008 +2.9C
    Oct 1944 +3.0C
    Oct 1955 +3.3C

    1716 22223921000 IVDEL’ 60.68 60.45

    Only goes up to 1990?

    1729 22224817000 ERBOGACEN 61.27 108.02

    Oct 2008 -1.1C
    Oct 1947 -0.7C

    1720 22224125000 OLENEK 68.50 112.43

    Oct 2008 -6.9C
    Oct 1947 -3.3C
    Oct 1948 -6.2C

    1723 22224329000 SELAGONCY 66.25 114.28

    Only goes up to 1990?

    1721 22224143000 DZARDZAN 68.73 124.0

    Oct 2008 -6.6C
    Oct 1943 -6.5C
    Oct 1947 -3.6C
    Oct 1949 -6.4C
    Oct 1951 -6.0C

    1724 22224343000 ZHIGANSK 66.77 123.4

    Only goes up to 1990?

    1686 22220069000 OSTROV VIZE 79.5 76.98

    Note: only goes back to 1951
    Oct 2008 -5.4C
    Oct 1985 -4.5C
    (Oct 1983-1985 average -5.3C)

    1688 22220292000 GMO IM.E.K. F 77.72 104.3

    Oct 2008 -8.1C
    Oct 1938 -6.9C
    Oct 1943 -5.2C
    Oct 1945 -7.1C
    Oct 1948 -7.5C
    Oct 1954 -7.9C
    Oct 1947 -3.9C

    1693 22221432000 OSTROV KOTEL’ 76 137.87

    Oct 2008 -7.2C
    Oct 1943 -6.4C
    Oct 1947 -4.6C

    1685 22220046000 GMO IM.E.T. 80.62 58.05

    This is weird. Only appears to go up to ~2000, but then looking at the data it has anomalies for a few months since then. The only month with data in the last 3 years is…… Oct 2008!
    (Note: only goes back to 1957)
    Oct 2008: -6.7C
    Oct 1959: -5.2C

    3361 42570026000 BARROW/W. POS 71.3 -156.78

    Oct 2008: -5.1C
    Oct 1902: -3.8C
    Oct 1911: -2.7C
    Oct 1938: -3.8C
    Oct 1940: -4.8C
    Oct 1949: -4.5C

    • John S.
      Posted Nov 14, 2008 at 3:38 PM | Permalink

      Re: Chris (#2),

      Nice work, Chris! The raw station data often show warmer periods than now half a century or more ago. Alas, this is often obscured by quaint “homogenization” of station records in the anomaly sausage-machine, which introduces segmented trend-adjustments around some “knee” date. Those expecting results reflecting unadulterated measurements can bury their heart in that knee.

  3. RomanM
    Posted Nov 14, 2008 at 2:07 PM | Permalink

    There is something that I don’t quite understand about the middle (HadCru) map. According to the web site you linked it to:

    A plus sign in any grid box indicates that the temperature anomaly in that grid box this month is the highest since the dataset starts in January 1850. Similarly a minus sign signals the lowest anomaly since 1850.

    If you look at roughly (100W, 60S), there are a + and a – in two adjacent grid blocks. Although records in opposite directions located next to each other in the same month might be possible, I would think it highly unlikely. However, that isn’t the strangest thing. At (120-140W, 80N), there are three minus signs in grid blocks which appear to be coloured yellow and red – indicating positive anomalies, with one of them in the one to three degree range. Most of the other minus signs are in blue boxes. Since these anomalies pertain to the specific grid blocks that they occupy, this would appear to be an oxymoron: a “positive minimum anomaly”. Any idea on what’s going on here?

  4. Posted Nov 14, 2008 at 3:44 PM | Permalink

    This is a very enlightening post!

    Toronto is warmer in HadCRUT than in CRUTEM probably because, as Steve suggests, they included lake surface temperatures.

    To test this hypothesis I used a primary synop station located 100 km from where I live on the Adriatic sea coast in Italy – 16149 Rimini.
    October was a warm month with an anomaly of +2.04 °C.

    Correctly, CRUTEM and Gistemp show Italy and all eastern Europe as being quite warm.

    Unfortunately, the second half of September and the first week of October were very cold, and the water of the Adriatic sea, a very shallow basin, got unusually cold. The cold sea water persisted through October even though air temperatures were very warm for the season.
    HadCRU accordingly turned the warm October experienced in Italy into a cold one. That’s a miracle!

    Now look at the British Isles. It was quite cold there but HadCRU says it was very warm. The same happened in France, in Morocco and in the North Atlantic around Iceland. In the SH the only place I found was Queensland.

    With the exception of Italy, all the other places warmed from CRUTEM to HadCRU and I’m sure this had an impact on global anomaly.

    Sure, SST leads air temperature, but someone could tell Jones and his colleagues at CRU that strong wind advection can and does produce sustained situations in which the sea water and the air above it are decoupled. They simply change the facts!

  5. Ed
    Posted Nov 14, 2008 at 4:03 PM | Permalink

    A recent study published in Nature and based on a simple GCM is predicting that we will be going into a quasi permanent “big chill”. Hansen’s comment is as follows:

    James E. Hansen, NASA Goddard Institute for Space Studies:
    “Look at Figure 3 in our “Target” paper Target CO2: Where should humanity aim? [pdf]. Yes, the Earth has been on a 50-million-year cooling trend with superposed glacial-interglacial oscillations.* It would take only a small further reduction in climate forcing (less long-lived GHGs or whatever) to yield more ice during the glacial phase of glacial-interglacial oscillations. But that is entirely academic at this point, unless humans go extinct. Although orbital variations instigate the glacial-interglacial swings, the mechanisms for climate change are changes in GHG amount and surface albedo (as we show in Fig. 1 of our paper) — those mechanisms are now under the control of humans. Another ice age cannot occur unless humans go extinct. It would take only one CFC factory to avert any natural cooling tendency. Our problem is the opposite: we cannot seem to find a way to keep our GHG forcing at a level that assures a climate resembling that of the past 10,000 years”

    Do I see an alternate causality (albedo, i.e. land use) other than CO2 in the above statement that could explain any slight AGW over the last 150 years?

    See link
    http://dotearth.blogs.nytimes.com/2008/11/13/more-on-whether-a-big-chill-is-nigh/#more-503

    • Scott Brim
      Posted Nov 15, 2008 at 8:59 AM | Permalink

      Re: Ed (#7),

      In contrasting the competing philosophies of climate-as-living-organism versus climate-as-mostly-machine, it is in Dr. Hansen’s opinion that humans could somehow prevent another ice age through CO2 management practices that the climate-as-machine type of thinking reaches its most preposterous and absurd proportions.

      Being a child of the 1960s television era, it is easy for me to envision a Smokey the Bear cartoon character pointing his finger out at the audience and intoning in a deep resonant voice, “Onlyyy youuu can prevent global warming.”

      Now, it is apparent that even the goal of taking the earth’s temperature accurately while employing some acceptable level of scientific rigor and precision is a very difficult and expensive proposition, one which has not so far been entirely successful — let alone thinking we could somehow be successful at controlling the earth’s temperature and climate through carbon management practices.

      In watching the ebb and flow of debate over just how accurately and objectively the earth’s temperature is currently being measured, one might think to ask: is the instrumental approach necessarily the most “accurate” means we have at our disposal?

      Is it not possible that, with enough scientific rigor and discipline, secondary, non-instrumental indicators might prove to be more useful tools in determining where temperature trends have been, and where they might be going, assuming of course that our examination of the secondary indicators is pursued in an intellectually honest and accountable fashion?

  6. steven mosher
    Posted Nov 14, 2008 at 4:50 PM | Permalink

    Well, we know that GISTemp doesn’t adjust for grids that are part sea and part land (at least I find no reference). HADCRUT does…

    Blending a sea-surface temperature (SST) dataset with land air temperature makes an implicit assumption that SST anomalies are a good surrogate for marine air temperature anomalies. It has been shown, for example by [Parker et al., 1994], that this is the case, and that marine SST measurements provide more useful data and smaller sampling errors than marine air temperature measurements would. So blending SST anomalies with land air temperature anomalies is a sensible choice.

    I wonder if Parker looked at the Great Lakes??

    Continuing

    Previous versions of HadCRUT [Jones, 1994, Jones & Moberg, 2003] blended land and sea data in coastal and island grid boxes by weighting the land and sea values by the area fraction of land and sea respectively, with a constraint that the land fraction cannot be greater than 75% or less than 25%, to prevent either data-source being swamped by the other. The aim of weighting by area was to place more weight on the more reliable data source where possible….

    And then they go on to describe the new method. I believe RomanM has addressed some problems with it elsewhere on CA (am I remembering correctly, Roman?)

    But does SST include lakes?

    suggests maybe not. I’ll have to go get the data.

    • RomanM
      Posted Nov 15, 2008 at 8:32 AM | Permalink

      Re: steven mosher (#8)

      And then they go on to describe the new method. I believe RomanM has addressed some problems with it elsewhere on CA (am I remembering correctly, Roman?)

      You’ve got it right! This violates the principles of stratified sampling (see here for example) where estimates from two or more different strata are combined into a single estimate. The weights that should be used are determined not by how much information is available from each stratum, but by the (fixed) proportion of the population that the particular stratum represents. If you combine your estimators otherwise, you introduce bias (systematic over- or under-estimation of the parameter in question) thereby losing any possible gain in the reduction of variability.

      In the case of the temperatures, presumably the relative weights for land and water in a grid box could also change over time as measurement sites are added or removed, creating artificial differences in the anomaly sequence. Since the relative weights differ from grid box to grid box, changes in SSTs would also produce effects non-uniformly across the grid boxes.
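
      A toy numerical sketch of the bias (my own hypothetical numbers, not HadCRU’s): take a grid box that is 70% sea and 30% land, and compare weighting by area against weighting by how many observations each stratum happens to have.

```r
# Hypothetical grid box: 70% sea, 30% land (illustrative numbers only)
sea_anom  <- 0.2                      # true sea-stratum anomaly (deg C)
land_anom <- 1.0                      # true land-stratum anomaly (deg C)

# Correct stratified estimate: weight by the (fixed) area fractions
true_cell <- 0.7 * sea_anom + 0.3 * land_anom               # 0.44

# Weighting by data availability instead (say 2 ship reports, 8 stations)
obs_wts <- c(2, 8) / 10
biased  <- obs_wts[1] * sea_anom + obs_wts[2] * land_anom   # 0.84
```

      The 0.4 deg C gap is pure weighting artifact; and since observation counts change over time, so would the artifact.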

      Re: tty (#9),

      Your explanation doesn’t hold water! (chuckle) I can’t imagine someone actually basing anomaly records on virtually non-existent information. 😉

  7. tty
    Posted Nov 14, 2008 at 5:10 PM | Permalink

    RomanM:

    “Although records in opposite directions located next to each other in the same month might be possible, I would think it highly unlikely.”

    Not that unlikely. Remember that HadCRU uses surface measurements for SST. These gridboxes are down in the Southern Ocean where there is essentially no ship traffic. Say that there is one previous October measurement in each box. Now a ship passes through and takes two new measurements. One is slightly higher than the previous one and the other slightly lower. Presto! Two new records in opposite directions.

    As for Hansen’s argument about ice ages, the problem is that we know quite well that it is insolation changes that move the climate both out of and into glaciations, and that the changes are quite abrupt, much faster than can be explained by changes in GHG (changes on the order of 5 degrees or more globally in 50 years). Apparently there are threshold effects where global circulation changes rather abruptly. As a matter of fact, this change between glacial and interglacial climates is the only “tipping point” for which there is any actual evidence.

  8. steven mosher
    Posted Nov 14, 2008 at 5:35 PM | Permalink

    the source data has coastal marine data for the great lakes

    need to read.

    Click to access rayner_etal_2005.pdf

    charts there seem to indicate no coverage??

    SteveMc: Might be instructive to do a map of

    http://hadobs.metoffice.com/hadsst2/data/download.html

  9. tty
    Posted Nov 14, 2008 at 5:42 PM | Permalink

    “But does SST include lakes?

    suggests maybe not. I’ll have to go get the data.”

    I don’t know if SST includes lakes, but there are at least three SST temperature records well inland in the October data. One near the southern Caspian might be understandable, but the other two would seem to refer to the lower Mackenzie River and Lake Peipus in Estonia respectively. “Curiouser and curiouser”.

  10. david elder
    Posted Nov 14, 2008 at 6:26 PM | Permalink

    Steve, can you help me clarify something. CRUTEM is land temperature from stations. HadCRU is land temperature from CRU plus Hadley temperature from the oceans. Is the Hadley ocean temperature from satellites? And what is GISS – land only, or sea data from satellite as well? Also, is there anywhere that the balloon temperature data since the late 50s can be accessed to compare it with the surface records?

  11. B Buckner
    Posted Nov 14, 2008 at 6:51 PM | Permalink

    D Elder,
    HadCRU uses sea surface temps from ships for its ocean air temperatures (it assumes the air temp anomaly equals the sea surface temp anomaly); GISS uses satellites.

  12. Jesper
    Posted Nov 14, 2008 at 7:18 PM | Permalink

    Anomalous warmth over Siberia is the primary surface temperature signature of the Arctic Oscillation’s positive phase.

    A large positive AO excursion occurred during October, as can be seen here.

  13. Steve McIntyre
    Posted Nov 14, 2008 at 10:47 PM | Permalink

    With this Siberian stuff, don’t forget the “Divergence” between GHCN Monthly and GHCN Daily records for a spot check on Verhojansk, a site in the heart of the Siberian zone.

  14. beng
    Posted Nov 15, 2008 at 7:33 AM | Permalink

    A very reliable, but UHI-contaminated station in western Maryland (USA) near me:
    http://i4weather.net/oct08.txt
    had October at 1.8F (0.85C) below avg.

    Oh wait, sorry, Hansen says UHI is insignificant.

  15. John S.
    Posted Nov 15, 2008 at 10:43 AM | Permalink

    Substitution of SSTs for air temps in HadCRUT time series is scientifically problematical in many respects. The two variables are seldom highly coherent even when co-located, as buoy measurements show. Ship reports are never truly co-located and, in most grid cells, are not frequent enough to provide a basis for constructing proper daily averages for every day of the month, necessary for avoiding aliasing errors. But most egregious is the wholesale adulteration of SST data by the unproven “adjustment” of bucket temperatures. No less than the “homogeneity” adjustment, this introduces a twiddle factor that is responsible for most of the trend seen in the annual global series. While Hadley may have been more prudent than GISS in dealing with the recent monthly replication fiasco, they produce a data sausage whose ingredients are even more unreliable.

  16. lgl
    Posted Nov 15, 2008 at 11:29 AM | Permalink

    Steve,
    wouldn’t it be better to use this GISS so all have the same 61-90 baseline?
    When I do that and compare to met.no, GISS and Hadley show Norway too warm. Judging from their maps, Norway was at least 1 deg C warmer, but met.no says it was 0.4 C warmer. For the western parts of the country it’s even worse. GISS and Hadley show +0.5 to +1 but reality says -0.5 to -2.

  17. lgl
    Posted Nov 15, 2008 at 1:16 PM | Permalink

    Another example of inaccurate GISS here
    Several areas of Siberia show a negative anomaly at meteo.ru for autumn 2005; they are all positive at GISS.
    Or is it just a matter of averaging?

  18. Steve McIntyre
    Posted Nov 15, 2008 at 2:02 PM | Permalink

    Does anyone know how to get at meteo.ru station data in an ASCII or zip form? I used to be able to get at it, but they changed the formats and now I can’t find it.

  19. Steve McIntyre
    Posted Nov 15, 2008 at 4:27 PM | Permalink

    #23. Sort of. But you used to be able to access zip files with temperature histories for a given station, which you could specify by id#. Now I can’t locate any URLs. I HATE it when agencies prevent access to data through hokey Java stuff.

  20. steven mosher
    Posted Nov 15, 2008 at 8:54 PM | Permalink

    http://meteo.infospace.ru/wcarch/html/e_sel_stn.sht?adm=612

    scroll to the location Verhoyansk, then click the file icon and select zip. I’ll try it and send it to you.

  21. Janama
    Posted Nov 15, 2008 at 9:05 PM | Permalink

    here it is

    http://www.panoramio.com/photo/8225894

  22. Nicholas
    Posted Nov 15, 2008 at 9:33 PM | Permalink

    conard,

    As to your last sentence, is there a specific results-oriented reason you feel this is bad procedure? (I asked Nicholas the same thing in an earlier thread.) Unless the values are discrete, I will usually allow multiple bins or, less often, treat the boundary condition like the proverbial donkey and hay piles.

    Sorry, I hadn’t seen your question. I deal mostly with frequency domain signals so I’m not really qualified to answer this in a statistical context. However, from what I have read, there are good reasons to avoid smoothing at all. However, if you have to do it, it strikes me that a Gaussian filter would be far superior in this type of situation. Butterworth is best used when the corner frequency is far below or above the frequencies you are interested in. In the case of climate data, I don’t think that’s the case. If your filter window is 20 years and your data is 600 years long, that’s simply too close to the sort of period which will make a difference. A 10 year distortion at the end could affect the trend enough to matter. Even just a simple moving average seems to me like it would have better understood and milder effects than a reactive type filter.
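
    To illustrate the end-point point (my example, not Nicholas’s code): a centered moving average simply leaves the end points undefined rather than inventing values for them, which is one reason its behaviour there is easy to reason about.

```r
# A centered 5-point moving average via convolution: the first and last
# two points have no full window, so they come back NA instead of being
# extrapolated (illustrative example only).
set.seed(1)
x  <- sin(2 * pi * (1:100) / 50) + rnorm(100, sd = 0.1)
ma <- stats::filter(x, rep(1/5, 5), sides = 2)
sum(is.na(ma))   # 4: two at each end
```

    A padded recursive filter, by contrast, must make up data beyond the ends, and the choice of padding feeds directly into the smoothed end values.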

  23. david elder
    Posted Nov 15, 2008 at 9:46 PM | Permalink

    To B Buckner, re the information in comment 13: thanks very much, from D Elder. Pity the Mann men aren’t as cooperative!

  24. Posted Nov 16, 2008 at 12:25 AM | Permalink

    Very nice write up in the UK Telegraph, with a corresponding link at the Drudge Report.

    The world has never seen such freezing heat By Christopher Booker

    A surreal scientific blunder last week raised a huge question mark about the temperature records that underpin the worldwide alarm over global warming. On Monday, Nasa’s Goddard Institute for Space Studies (GISS), which is run by Al Gore’s chief scientific ally, Dr James Hansen, and is one of four bodies responsible for monitoring global temperatures, announced that last month was the hottest October on record.

    The error was so glaring that when it was reported on the two blogs – run by the US meteorologist Anthony Watts and Steve McIntyre, the Canadian computer analyst who won fame for his expert debunking of the notorious “hockey stick” graph – GISS began hastily revising its figures. This only made the confusion worse because, to compensate for the lowered temperatures in Russia, GISS claimed to have discovered a new “hotspot” in the Arctic – in a month when satellite images were showing Arctic sea-ice recovering so fast from its summer melt that three weeks ago it was 30 per cent more extensive than at the same time last year.

  25. See - owe to Rich
    Posted Nov 16, 2008 at 3:10 AM | Permalink

    Written in haste, so I may not have seen if this was already discussed. Anyway, being UK-centric rather than Toronto-centric, I am concerned about the HadCRUT3 anomaly for the UK apparently being positive (by the colour on the map), when we had our first London snow in October for 70-odd years, and the CET daily max achieved its lowest value since 1974. Is this an example of grid cells overlapping land and sea, where the sea anomaly has somehow outweighed the land anomaly?

    Will they still do the same combination in 20 years time when we have colder sea temperatures and higher land temperatures?

    Expert explanations would be welcome.

    Rich.

  26. UK John
    Posted Nov 16, 2008 at 4:49 AM | Permalink

    #25 Janama,

    I am not going there for my holidays !

  27. Jeff C.
    Posted Nov 16, 2008 at 10:51 AM | Permalink

    Hmmm…the October anomaly map from GISS seems to have changed yet again. The Siberian hotspot has shrunk and the Canadian Arctic hot spot that Steve noted had appeared in the GISS Nov 12th revision is gone. Are we starting to see some actual “quality control” from GISS?

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=10&sat=4&sst=0&type=anoms&mean_gen=10&year1=2008&year2=2008&base1=1951&base2=1980&radius=1200&pol=reg

  28. steven mosher
    Posted Nov 16, 2008 at 2:55 PM | Permalink

    for grins do 100km radius.

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=10&sat=4&sst=0&type=anoms&mean_gen=10&year1=2008&year2=2008&base1=1951&base2=1980&radius=100&pol=reg

  29. steven mosher
    Posted Nov 16, 2008 at 2:57 PM | Permalink

    it would be interesting to do a sensitivity study of radius versus anomaly over time

  30. AlanB
    Posted Nov 16, 2008 at 3:20 PM | Permalink

    Steve: A third point – the global average in the GISS 250 km version is 0.78 deg C, while it’s 0.61 deg C in the 1200 km version

    On RealClimate in response to Comment 221

    [Response: The average is only over the area that is filled in – depending on how clustered the anomalies are, the map with the wider interpolation could have the same, less or more. Only the 1200km product is the ‘official’ number. – gavin]

  31. steven mosher
    Posted Nov 16, 2008 at 4:14 PM | Permalink

    RE. myself.

    the 100 km radius is actually 250 km. Over the “historical” record, looking at the annual mean over 10-year periods, there is no real difference between 250 km and 1200 km averaging.

  32. Bernie
    Posted Nov 16, 2008 at 4:27 PM | Permalink

    The piece also gets a shot in at the peripatetic head of the UN global heating mission:

    Dr Pachauri, a former railway engineer with no qualifications in climate science, may believe what Dr Hansen tells him. But whether, on the basis of such evidence, it is wise for the world’s governments to embark on some of the most costly economic measures ever proposed, to remedy a problem which may actually not exist, is a question which should give us all pause for thought.

    The Telegraph piece should get some attention among members of Congress!!

  33. Posted Nov 16, 2008 at 4:27 PM | Permalink

    Here is the RSS temperature anomaly for October:

    Let’s compare with the HadCRU map, where I put in evidence some areas that differ with the RSS map.

    Now I try to give some explanation for each of these differences.

    1) Barrow: all of Alaska was much colder than normal, with the exception of the northern coast. Possible explanations: more frequent wind onto the coast from the sea, which is usually warmer; UHI problems; later sea-ice formation or snow cover in that area.
    2) The middle of Labrador: surface station problem.
    3) USA Pacific waters: colder water with warmer air aloft; typical sea-air decoupling. CRU’s assumption regarding near-coast waters is wrong.
    4) Eastern North Atlantic: cold air advection onto warm sea in the extratropics. Nothing unusual; CRU’s assumption is wrong.
    5) Italy-Adriatic sea: colder water with warmer air aloft; typical sea-air decoupling. CRU’s assumption regarding near-coast waters is wrong.
    6) Eastern Equatorial Pacific: was SST really so much warmer than normal there? Anyway, air temperature was normal. Needs further investigation.
    7) Waters off Somalia: probable strong upwelling. Colder water with warmer air aloft; typical sea-air decoupling. CRU’s assumption regarding near-coast waters is wrong.
    8) Waters south of Java: colder water with warmer air aloft; typical sea-air decoupling.
    9) Eastern Australia: cold air advection onto warm sea in the extratropics. Nothing unusual; CRU’s assumption is wrong.
    10) Far Indian Ocean south of Madagascar: cold air advection onto warm sea in the extratropics. Nothing unusual; CRU’s assumption is wrong.
    11) Waters off Chile: cold air advection onto warm sea in the extratropics. Nothing unusual; CRU’s assumption is wrong.
    12) Argentina: it’s hard to understand what’s going on there. The only reason to have warmer temperature at the ground and colder aloft, I think, is a problem with the surface stations (UHI, homogenisation).

    What’s the effect of all these assumptions on global mean?
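    The gridcell comparison described above can be sketched numerically. This is a hypothetical illustration with synthetic arrays and an arbitrary threshold, not the actual RSS or HadCRU fields:

```python
import numpy as np

# Hypothetical illustration: flag gridcells where two anomaly fields
# (e.g. a satellite product and a surface product) disagree by more
# than a chosen threshold. The arrays and the threshold are made up.
rng = np.random.default_rng(0)
rss = rng.normal(0.3, 0.5, size=(36, 72))          # 5x5-degree grid, deg C
hadcru = rss + rng.normal(0.0, 0.2, size=(36, 72))  # a second, similar field

diff = hadcru - rss
threshold = 0.5                                     # deg C, arbitrary
flagged = np.argwhere(np.abs(diff) > threshold)     # (row, col) of big gaps
print(f"{len(flagged)} of {diff.size} cells differ by more than {threshold} C")
```

    Each flagged cell would then get a case-by-case physical explanation, as in the list above.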

  34. Posted Nov 16, 2008 at 4:35 PM | Permalink

    I’ll try again with the images.
    RSS:

    HadCRU

  35. Deep Climate
    Posted Nov 16, 2008 at 5:35 PM | Permalink

    Steve said:

    The Nov 13 GISS version lacks both Resolute and Alert for reasons, presumably because of an oversight as they put patches on patches.

    I don’t know who “they” are or what you mean by “patches on patches”. Perhaps you could clarify what you are talking about. I do hope that you understand that the change was in GHCN, not GISS:

    Nov. 13, 2008: NOAA corrected GHCN data for October, 2008 again (second correction). Monthly figures have been corrected (third version).

    • Gerald Machnee
      Posted Nov 16, 2008 at 6:32 PM | Permalink

      Re: Deep Climate (#41),
      When I checked a few days ago, the October mean for Resolute seemed a few degrees above normal. In November there were 3 days of data missing, so that may be why November is not used.

  36. Steve McIntyre
    Posted Nov 16, 2008 at 5:49 PM | Permalink

    #41, I’ve discussed problems at GHCN in the past on numerous occasions. They are nearly 20 years late in updating many stations and their update patterns are erratic. GISS uses them as a supplier and like any other distributor, they need to take care that their supplier has proper QC. Like most readers here, I don’t get the purpose of fingerpointing.

    I just examined versions of the data set and the GHCN version on Nov 14 12.30 am had removed the Resolute station (presumably because they went back to a version before it was added to try to fix the Russian problem.) It was in the Nov 12 version and is once again in the Nov 16 version.

    But in either case, the Canadian Arctic had been discussed in commentary; the change was noticed instantly by many readers including myself. Gavin Schmidt had already noted that the information was available as of Thursday; the change was readily noticeable so it’s pretty goofy that GISS didn’t work with their supplier to deal with the problem for their update.

  37. Steve McIntyre
    Posted Nov 16, 2008 at 6:41 PM | Permalink

    I just checked Resolute against Canadian source data. It looks like there was a carryover incident earlier in the year, operating the other way. I’ll do a post on it separately.

    • GeneII
      Posted Nov 16, 2008 at 7:17 PM | Permalink

      Re: Steve McIntyre (#44),

      I just checked Resolute against Canadian source data. It looks like there was a carryover incident earlier in the year, operating the other way. I’ll do a post on it separately.

      Am I missing something – aren’t these people paid to do this job? Is this kind of shoddiness common?

  38. steven mosher
    Posted Nov 16, 2008 at 6:55 PM | Permalink

    The GISS quality checks are made manually, prior to processing. Have a look at Karl 98 again for a flow diagram of the GHCN QC checks, figure 1.

  39. crosspatch
    Posted Nov 16, 2008 at 7:05 PM | Permalink

    “It looks like there was a carryover incident earlier in the year, operating the other way.”

    I had a hunch there would be more of those. I wouldn’t be surprised to learn that there are all sorts of these errors sprinkled through the record. Sounds to me like it is time for an overhaul somewhere and a wholesale regeneration of that data from the original source files is in order if there is to be any trust in those data.

    • Posted Nov 16, 2008 at 7:42 PM | Permalink

      Re: crosspatch (#46),

      Oddly enough, introducing extra noise above and beyond “weather noise” into the record makes it more difficult to demonstrate that projections are wrong. Noise of any sort means that we are more uncertain about mean trends.

      That means that these errors will impact attributions studies and our ability to discern whether models are on track or off track.
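      Lucia’s point can be shown with a toy simulation. All numbers here are invented for illustration, and `trend_se` is a hand-rolled OLS slope standard error, not any group’s actual code:

```python
import numpy as np

# Synthetic sketch: adding non-weather noise to a temperature series
# inflates the standard error of the fitted trend, so a wider range of
# projections becomes "not inconsistent" with the data.
rng = np.random.default_rng(1)
years = np.arange(30.0)
weather = rng.normal(0.0, 0.1, 30)       # ordinary "weather noise"
errors = rng.normal(0.0, 0.3, 30)        # extra data-handling errors

def trend_se(series):
    """Standard error of the OLS slope of series against years."""
    x = years - years.mean()
    slope = (x * (series - series.mean())).sum() / (x ** 2).sum()
    resid = series - series.mean() - slope * x
    return np.sqrt((resid ** 2).sum() / (len(x) - 2) / (x ** 2).sum())

clean = 0.02 * years + weather           # assumed 0.02 degC/yr trend
noisy = clean + errors                   # same trend, extra error
print(trend_se(clean), trend_se(noisy))  # the noisy series has a larger SE
```

      With a larger standard error on the trend, fewer model projections can be rejected, which is the attribution problem raised above.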

  40. steven mosher
    Posted Nov 16, 2008 at 8:02 PM | Permalink

    re 48. You’ll note, Lucia, that for attribution studies the PIs reduce the number of models they use, for example by screening out models that exhibit too much drift during control runs. That way they get a tighter ensemble when they do comparison studies (with and without CO2 forcing). If they ran all the models, as they do in hindcast studies, the probability of the with and without studies overlapping would be higher.

  41. crosspatch
    Posted Nov 16, 2008 at 9:12 PM | Permalink

    Re: lucia

    ‘introducing extra noise above and beyond “weather noise” into the record makes it more difficult to demonstrate projections are wrong’

    It would seem to me that it would make it more difficult to demonstrate anything at all.

  42. AlanB
    Posted Nov 17, 2008 at 10:39 AM | Permalink

    At the moment, if I try to make a map – regular or polar view – at GISTEMP with the 250 km smoothing radius, I get broken links. Do others get the same?
    Make map here. Be sure to clear cache first.

    (Mapmaker works fine for 1200km Smoothing)

  43. Posted Nov 17, 2008 at 12:45 PM | Permalink

    crosspatch (#51),
    Yes, it makes it more difficult to demonstrate anything at all. But if you have been following the “not inconsistent” chronicles by Roger Pielke Jr., noisy measurements mean one would constantly decree that all sorts of things are “not inconsistent” with the data.

  44. steven mosher
    Posted Nov 17, 2008 at 12:51 PM | Permalink

    re 52 worked fine yesterday. I did about 20 of them

    • AlanB
      Posted Nov 17, 2008 at 1:08 PM | Permalink

      Re: steven mosher (#54), Thanks Mosh. Looks like it is fixed now. The comment about the “graphics bug” has been removed as well, I think. Just a hiccup!

  45. Steve McIntyre
    Posted Nov 17, 2008 at 12:57 PM | Permalink

    Steve Mosher, if you feel like sticking needles in your eyes again — can you see anywhere in the Hansen programs where any QC is done? Hansen et al stated:

    Our analysis programs that ingest GHCN data include data quality checks that were developed for our earlier analysis of MCDW data. Retention of our own quality control checks is useful to guard against inadvertent errors in data transfer and processing, verification of any added near-real-time data, and testing of that portion of the GHCN data (specifically the United States Historical Climatology Network data) that was not screened by Peterson et al. [1998c].

    I took a quick look at Step 0 and Step 1 and didn’t see anything that could be construed as QC.

  46. Chris M
    Posted Nov 17, 2008 at 4:42 PM | Permalink

    As a UK resident I know we just had a cold October – the official UK Met Office site has data showing the month was 0.7C below the 1961-90 average! Yet the HadCRU map (and this is produced by a research ‘arm’ of the UK Met Office) apparently shows a positive temperature anomaly of about +0.5C. Seems very odd.

  47. steven mosher
    Posted Nov 17, 2008 at 9:01 PM | Permalink

    RE 55. I’ll redownload and have a look. Some of the QA/QC is done offline, and some is manual by their own admission. I’ll run through the code later tonight after dinner.

  48. John Finn
    Posted Nov 18, 2008 at 3:55 AM | Permalink

    Re: #57

    Chris M

    Several UK stations had the same Oct=Sep error as those in Siberia – though with less dramatic effect obviously.

  49. steven mosher
    Posted Nov 18, 2008 at 4:17 AM | Permalink

    RE55.

    Most of the files were relatively easy to check, kinda like a bad dream. I’ll double check it all again, but I couldn’t find anything like outlier analysis, for example. It’s clear there are checks for “bad” data, but it’s data that has been labelled “bad” somewhere else. No in-situ QA. The approach: a file is prepped and fed to GISTEMP. GISTEMP checks for obvious blunders in file length, missing data, etc. So there appear to be no checks within GISTEMP for bad data being ingested.

    Most of the QA performed at GISS, per their documentation, happens offline, like dropping stations or dropping periods for stations. These are steps that have never been fully documented.

    A few more notes:
    Step 1 has changed since my last download: new Python code written May 31, 2008. There are a few adjustments to records (as mentioned in Hansen 2001, plus calculation of stats functions) and undocumented routines like “dropstrange.py”. I’m currently seeing nothing I would call QA except rudimentary checks (missing data, etc.). I found this though:

    C?*** Special title for Jim’s plots..

    I’ll go over it again, but it appears that SOME OTHER program or process was used to identify problem data (like St Helena), and that input files were then altered (like start date, end date) so that GISTEMP assumes the data AS INGESTED is good. Remember, there are steps described in Hansen 2001 that are not captured by the published source code.

    More later. I want to go over the Python again, since it’s new code since I last looked at it. I might not get to that until Thursday.

    It sure would be nice if they used version control.
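    For reference, the kind of in-situ outlier screening that appears to be absent might look like the following minimal sketch. The threshold and the fake station record are invented; this is not GISTEMP code:

```python
import numpy as np

# Minimal sketch of in-situ outlier QC (hypothetical): flag monthly
# values more than n_sigma standard deviations from that station's
# long-term mean for the same calendar month.
def flag_outliers(monthly, n_sigma=4.0):
    """monthly: array of shape (years, 12); returns a boolean mask."""
    mean = np.nanmean(monthly, axis=0)   # per-calendar-month mean
    sd = np.nanstd(monthly, axis=0)      # per-calendar-month spread
    return np.abs(monthly - mean) > n_sigma * sd

# Fake 30-year station record with an annual cycle plus noise...
record = np.tile(np.linspace(-20.0, 5.0, 12), (30, 1))
record += np.random.default_rng(2).normal(0.0, 1.0, record.shape)
record[10, 9] = 25.0   # ...and one injected carryover-style blunder
print(flag_outliers(record)[10, 9])
```

    A screen like this would catch the September-into-October carryovers discussed in this thread before they reached the gridding step.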

  50. Sven
    Posted Nov 19, 2008 at 3:18 PM | Permalink

    Can someone enlighten me on how the HADCRUT yearly anomaly figures are calculated? The most logical thing would seem to be to take a simple average of the monthly figures, adding the monthly anomalies together and dividing the result by 12. I started looking at years similar to 2008 and found that (apart from 1995) there are significant differences between the HADCRUT yearly figure and the average of the monthly anomalies. For example (the first figure being the HADCRUT yearly value and the second the average of the months):
    1990 – 0.248, 0.254
    1995 – 0.276, 0.276
    1997 – 0.355, 0.350
    1999 – 0.262, 0.296
    2000 – 0.238, 0.270
    2008 – 0.304, 0.315
    The largest differences are, oddly, in the years 1999 and 2000. I wonder how the higher anomalies for these years would influence the trend line? Would there already be a decline since 1997? Strange anyhow…
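    Two plausible conventions for turning monthly anomalies into an annual figure – a simple mean versus a day-weighted mean – give small differences of roughly the size reported above. The monthly values below are invented for illustration; which convention HadCRUT actually uses is exactly the open question:

```python
# Hypothetical monthly anomalies (deg C) for one year, non-leap.
months = [0.21, 0.25, 0.30, 0.28, 0.26, 0.24,
          0.27, 0.29, 0.31, 0.44, 0.35, 0.33]
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

simple = sum(months) / 12                                   # plain mean
weighted = sum(m * d for m, d in zip(months, days)) / sum(days)  # day-weighted
print(round(simple, 3), round(weighted, 3))
```

    With these made-up numbers the two conventions differ by well under 0.01 deg C, so a day-weighting difference alone would not explain the larger 1999/2000 gaps.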

  51. Sven
    Posted Nov 20, 2008 at 3:38 AM | Permalink

    I checked the trend lines. With the official HadCRUT3 annual data, the trend lines starting with 97, 98, 99 and 00 all go up, and only with a starting point of 2001 or later do they go down. But if I use simple averages of the monthly data for all the years between 1997 and 2008, then the trend line with a 1997 starting point is almost flat, a 1998 starting point shows a downward trend, and only using 1999 or 2000 as the starting point do I get a clear upward trend…
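    The start-year sensitivity described above can be reproduced with an ordinary least-squares fit from each candidate start year. The annual anomalies below are invented for illustration, not HadCRUT3 data:

```python
import numpy as np

# Fit an OLS trend to annual anomalies from each candidate start year.
# The anomaly values are made up to mimic the pattern described above:
# a late-90s dip, a mid-decade plateau, then a decline.
years = np.arange(1997, 2009)
anoms = np.array([0.36, 0.52, 0.30, 0.27, 0.41, 0.46,
                  0.46, 0.43, 0.47, 0.42, 0.40, 0.31])

for start in (1997, 1998, 1999, 2000, 2001):
    mask = years >= start
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]  # degC per year
    print(start, f"{slope:+.4f} degC/yr")
```

    With short windows like these, moving the start year by one or two years can flip the sign of the fitted slope, which is why the choice of starting point dominates the conclusion.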

4 Trackbacks

  1. […] #7 by “Ed” on Climate Audit on the source story of Long View #2 […]

  2. […] normal. But when expert readers of the two leading warming-sceptic blogs, Watts Up With That and Climate Audit, began detailed analysis of the GISS data they made an astonishing discovery. The reason for the […]

  3. […] error was so glaring that when it was reported by the US meteorologist Anthony Watts and Steve McIntyre…GISS began hastily revising its […]

  4. […] long before my own FOI requests for CRU station data, I discussed CRU calculations in more detail here, concluding with the observation that “if, like GISS, they are doing nothing other than […]