Christy et al (J Clim 2009), Surface Temperature Variations in East Africa and Possible Causes, is a really excellent article that will interest many readers who follow surface temperature data sets. It’s interesting on a number of counts, not all of which I have time to summarize today.
It is a detailed study of station records from Kenya and Tanzania in East Africa, an area which more or less covers nine 5-degree gridcells from 10S to 5N and from 30E to 45E. They collected and digitized original British East Africa and German colonial station data, as well as GHCN, GISS and their sources (but not CRU data), resulting in a substantial expansion of available data. Christy obviously has an excellent record of placing data online, and I hope that this extends to the newly collected station data (which is not online at the moment).
Although surface data is the backbone of temperature history, detailed analyses of station data are rare and detailed analyses of non-US non-European data are even rarer. The absence of such analyses is an indictment of the authors of the major temperature indices (CRU, GISS, NOAA). They are funded to publish temperature indices and this sort of technical study should be part and parcel of their obligations.
Christy et al approach the calculation of gridcell temperatures a little differently than GISS (CRU still not providing an operational description of methodology). First, they try to identify different versions of the same station and to obtain a station history from these versions. This is along the lines of GISS’ calculation, which collects various versions in dset0 and combines them in dset1. The approach of Christy et al looks more sensible and more logical than the GISS approach, though I doubt that the difference “matters” a lot to the final answer.
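As a rough illustration of what such a version-merge involves (the function name, combination rule and sample values here are mine, not the actual Christy et al or GISS dset0-to-dset1 procedure):

```python
# Hypothetical sketch: combining several digitized versions of one
# station's record into a single history. Where versions overlap we
# average them; elsewhere we take whichever version has data. The
# real procedures differ in detail.

def merge_station_versions(versions):
    """Combine several {year: temp} dicts for the same station."""
    merged = {}
    for series in versions:
        for year, temp in series.items():
            merged.setdefault(year, []).append(temp)
    # average overlapping values, keep unique ones as-is
    return {year: sum(vals) / len(vals)
            for year, vals in sorted(merged.items())}

# Two hypothetical digitized copies of the same station record:
v1 = {1950: 24.1, 1951: 24.3, 1952: 24.2}
v2 = {1951: 24.5, 1952: 24.2, 1953: 24.4}
combined = merge_station_versions([v1, v2])
```

The point of doing this before any adjustment is that each colonial-era or GHCN copy may cover different years, so the merged history is longer than any single version.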
Next, Christy et al apply a breakpoints algorithm to detect unreported step changes in the station history, citing the radiosonde method of Haimberger et al 2007. Breakpoint detection, as I read the article, is done through internal properties of the series, rather than through neighbor comparison (a la USHCN), neighbors being a lot sparser than in the US. They set the sensitivity of the changepoint parameter at three different settings and report on its impact on the trend (it’s non-trivial). Breakpoint detection is a complicated statistical procedure and not one that I’ve studied enough to have an independent opinion on the merits of the various approaches (both the Christy et al code and the USHCN code are unarchived in any event). However, Christy et al describe their test statistic (p. 3345), and their methodology seems clearly preferable to the weird GISS two-legged coercion (CRU methodology, needless to say, is unknown).
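To give a flavor of the "internal properties" idea, here is a toy mean-shift scan: it is not the Haimberger et al 2007 statistic or the test on p. 3345, just the generic approach of comparing segment means against within-segment scatter, with a made-up threshold playing the role of their sensitivity parameter:

```python
import math
import statistics

def most_likely_breakpoint(series, threshold=3.0):
    """Toy single-breakpoint scan: for each candidate split point,
    compare the shift in segment means to the pooled within-segment
    standard deviation (a t-like statistic). Illustrative only; the
    threshold stands in for the paper's sensitivity setting."""
    best_k, best_t = None, 0.0
    n = len(series)
    for k in range(2, n - 2):  # need at least 2 points per segment
        left, right = series[:k], series[k:]
        pooled = math.sqrt((statistics.pvariance(left) * len(left)
                            + statistics.pvariance(right) * len(right)) / n)
        t = abs(statistics.mean(right) - statistics.mean(left)) / max(pooled, 1e-6)
        if t > best_t:
            best_k, best_t = k, t
    return (best_k, best_t) if best_t >= threshold else (None, best_t)

# A made-up anomaly series with an unreported ~0.8 deg C step after index 10:
anoms = [0.1, -0.1, 0.0, 0.2, -0.2, 0.1, 0.0, -0.1, 0.1, 0.0,
         0.9, 0.7, 0.8, 1.0, 0.6, 0.8, 0.9, 0.7, 0.8, 0.8]
k, t = most_likely_breakpoint(anoms)
```

Lowering the threshold flags more (possibly spurious) breaks and raising it flags fewer, which is presumably why the choice of sensitivity has a non-trivial effect on the resulting trend.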
Then they average (“merge”) the station anomaly series. I can’t tell on a first reading when they converted the data into anomalies; their Figure 1 shows anomaly series.
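The usual way such a step works (this is my sketch of the standard anomaly-and-average recipe, not a reconstruction of their code; station names, base period and values are invented) is to subtract each station's base-period mean and then average the resulting anomalies year by year:

```python
def to_anomalies(series, base_years):
    """Convert {year: temp} to anomalies relative to a base-period mean."""
    base = [series[y] for y in base_years if y in series]
    clim = sum(base) / len(base)
    return {y: t - clim for y, t in series.items()}

def merge_anomalies(stations):
    """Average station anomaly series year by year, equal weights,
    using whichever stations report in a given year."""
    years = sorted({y for s in stations for y in s})
    return {y: sum(s[y] for s in stations if y in s)
               / sum(1 for s in stations if y in s)
            for y in years}

# Two hypothetical stations, anomalies relative to a 1961-1962 base:
station_a = {1961: 24.0, 1962: 24.4, 1963: 24.8}
station_b = {1961: 20.0, 1962: 20.4}
cell = merge_anomalies([to_anomalies(s, range(1961, 1963))
                        for s in (station_a, station_b)])
```

Whether the anomaly conversion happens before merging (as here) or after can matter when station records start and stop at different times, which is presumably why I'd like to know where in the pipeline they did it.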
Then they compare their trends with trends calculated from HadCRUT3v, CRUTEM3v and GISS, reporting that they were unable to replicate the high trends of the major indices. It seems that the CRU series for the central cell is dominated by the Nairobi airport station:
The recent trends of TMean calculated from global datasets do not agree with our results for this cell. As shown in Table 2, the 1979–2004 TMean trend of the central cell as produced by HadCRUT3v, CRUTEM3v, and GISS (0.31, 0.47, and 0.35 deg C/decade, respectively) are markedly inconsistent with all of the time series for that cell constructed in this study. Evidently, the main signal used by HadCRUT3v for this cell since 1979 is derived from the single Nairobi, Kenya, station at Jomo Kenyatta Airport (P. Jones 2004, personal communication). Our unadjusted time series for this site does indeed show significant warming since 1979 (0.25 deg C/decade), but the higher trend is not corroborated by the many nearby stations used in our analysis. Such differences were also found in central California (Christy et al. 2006) and northern Alabama (Christy 2002), where our more comprehensive reconstructions were on average about 0.1 deg C/decade more negative in the cells covering those areas versus values for the cell from global databases.
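For readers who want to check what a "deg C/decade" trend means operationally, here is the standard OLS slope scaled to decades (my sketch; the synthetic series below is constructed to give 0.25 deg C/decade over 1979-2004, the paper's unadjusted Nairobi figure, and is not their data):

```python
def trend_per_decade(years, anoms):
    """Ordinary least-squares slope of anomaly vs year,
    scaled to deg C per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))
    return slope * 10.0

# Synthetic series warming at exactly 0.025 deg C/yr over 1979-2004:
years = list(range(1979, 2005))
anoms = [0.025 * (y - 1979) for y in years]
trend = trend_per_decade(years, anoms)
```

On this scale, the gap between the gridded products (0.31-0.47 deg C/decade) and the reconstructions from the denser station network is large relative to the trends themselves.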
Christy et al continue with an interesting discussion of Tmin versus Tmax, arguing that Tmax samples a bigger volume of air than Tmin and is a more reliable index of large-scale changes (citing Pielke et al 2007).
The idea that the Jones CRUTEM series is dominated by Nairobi airport will come as little surprise to CA readers – recently we saw that the CRUTEM Hawaii series is little more than an alter ego for Honolulu airport.
I think that it is time to recognize that calling CRUTEM an index of “Land” temperature is a misnomer as, in addition to “Ocean” and “Land”, there is a third important surface in major temperature indices: airport tarmac. In appreciation for Phil Jones’ efforts at measuring airport temperatures around the world, perhaps it would be appropriate that his series be rebranded CRUTAR.