RSS versus UAH: Battles over Tropical Land and Ocean

Until I recently examined the underlying technical literature on the construction of the UAH and RSS satellite data sets, I had little appreciation of the complicated adjustment and estimation procedures involved in constructing the satellite temperature indices.

The UAH and RSS satellite temperature indices are constructed from many different satellites (TIROS-N, NOAA-6,7,8,9,10,11,12,14,…), each of which has distinctive instrumental and orbital properties, which need to be allowed for in the estimation of the “true” tropospheric temperature. The estimation of these parameters is done differently by Christy and his associates on the one hand and Mears and his associates on the other. Controversies over the merits of these adjustment procedures have been ongoing for over a decade.

From the perspective of CA readers, there is a substantial statistical component to the estimation of the various adjustment and bias correction parameters, as can be seen from examining Christy et al 2000 url and Mears et al 2003 url, both of which describe statistical procedures, though not always in ordinary statistical terms.

In looking at tropical tropospheric results recently, I happened to do a crosscut of UAH versus RSS, which seemed to me to provide an interesting perspective on this debate. I am not nearly familiar enough with the overall issues to venture an opinion on who is “right” and who is “wrong” in any of these disputes. I am merely presenting a graph that intrigued me.

Before doing so, I want to present a few graphics that illustrate the history of a few relevant parameters for the satellites considered in the two articles referenced above (which do not include the most recent satellites), and which illustrate the scale of variation that can occur.

First, here is a graphic from Christy et al 2000 showing their adjustment by satellite for orbital decay. Aside from particulars of orbital decay, this graphic also provides information on satellites used for construction of the index and major transitions. For example, in 1986, there was a switchover from N-8 to N-10, with N-6 being briefly called out of retirement because of the short lifespan of N-8. In 1992, there was another transition from N-10 to N-12 with overlap being provided by N-11.

Figure 1. From Christy et al 2000. Impact of orbital decay.

Second, here is another graphic showing changes in local equatorial crossing time (LECT), another effect adjusted for by both parties. This is presented here primarily to help readers get their eye in on when the satellite switches took place.

Figure 2. From Christy et al 2000. Changes in local equatorial crossing time.

Next is a crosscut comparing UAH and RSS presented by Tamino last year in a surprisingly sober assessment of the differences between UAH and RSS.

Tamino observed:

There are two differences which are apparent to the eye. First, there’s a “step change” at 1992, with RSS being higher than UAH after that but lower before that. Second, in the most recent time period (from about 2003 on) there’s an annual cycle, with RSS being relatively higher during northern hemisphere summer and UAH relatively higher in northern hemisphere winter.

1992 and Other Steps

The existence of a 1992 step appears to be common ground among the parties. It was specifically mentioned by John Christy in a recent email to me following my post on satellites (in which he also kindly provided a variety of references on the interesting adjustment issues).

The 1992 step has been attributed by the various parties to differences in handling the NOAA-10 to NOAA-12 satellite switch.

Recently, I experimented with strucchange on RSS versus UAH in the tropics, stratifying the land and ocean series separately, which yielded a pretty interesting crosscut relative to the above debate. I think that there are a variety of interesting points in this graphic, which I’ll discuss below.

Figure 3. Tropical T2LT UAH versus RSS, stratified by Ocean and Land, showing breakpoints from strucchange.

First, there is a remarkable lack of similarity between the Ocean and Land differentials. Over tropical oceans, RSS increases relative to UAH quite consistently, whereas the patterns over tropical land are quite erratic. Since 2004, UAH tropical land has increased dramatically relative to RSS, overlying the strong annual cycle in the difference series.

In the land series, strucchange picks out several breakpoints, each of which can be plausibly identified with a satellite changeover. The 1992 breakpoint at the transition from NOAA-10 to NOAA-12 is picked out. However, the 1992 breakpoint is not as unique as presumed in previous discussion, nor even primus inter pares. There is a substantial breakpoint in 1986 at the transition to NOAA-10 (from a patchwork of satellites in the immediately prior period). There is a noticeable breakpoint in 1998 at the end of NOAA-12.
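For readers who want to experiment, the segmented-trend idea behind strucchange can be sketched in miniature. The following Python fragment is a toy single-breakpoint version on synthetic data (not the actual strucchange algorithm, and not the real RSS/UAH series): it scans every candidate split and keeps the one minimizing the combined residual sum of squares of two OLS line fits.

```python
import numpy as np

def best_breakpoint(year, x, margin=10):
    # Brute-force single structural break: fit separate OLS trend lines
    # to the two segments and keep the split minimizing total SSE.
    def sse(t, y):
        coef = np.polyfit(t, y, 1)              # slope and intercept
        return np.sum((y - np.polyval(coef, t)) ** 2)
    best_k, best_sse = None, np.inf
    for k in range(margin, len(x) - margin):
        total = sse(year[:k], x[:k]) + sse(year[k:], x[k:])
        if total < best_sse:
            best_k, best_sse = k, total
    return best_k

# Synthetic monthly difference series with an artificial step at month 120
rng = np.random.default_rng(1)
year = 1979 + np.arange(240) / 12.0
x = (0.001 * np.arange(240) + 0.3 * (np.arange(240) >= 120)
     + 0.05 * rng.standard_normal(240))
k = best_breakpoint(year, x)   # lands at or very near month 120
```

strucchange does the multi-break analogue of this exhaustively via dynamic programming, which is why it can date several changeovers in one pass.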

There is also a noticeable breakpoint around 2004-2005. Mears et al 2009 observe:

In order to continue the atmospheric temperature record past 2004, when measurements from the last MSU instrument degraded in quality, AMSU and MSU measurements must be intercalibrated and combined to extend the atmospheric temperature data records.

While the earlier history of the respective RSS and UAH adjustments has been much debated, it looks like another chapter needs to be written on the MSU/AMSU changeover. Far from diminishing relative to prior analyses, the differences here are increasing, with the RSS/UAH difference trends being opposite in sign over tropical land and tropical ocean, a point not apparent in the combined difference series, where the trends offset.

Both Christy et al (J Clim 2009) and Mears et al (Journal of Atmospheric and Oceanic Technology, 2009a,b) see url contain recent technical discussions, but I didn’t notice any discussion of this point in a quick perusal.


  1. Andrew
    Posted Jul 18, 2009 at 2:09 PM | Permalink

    It would be cool if we could get Mears and Wentz or Christy and Spencer to do a guest post on these issues.

    BTW, the difference between the land and ocean probably has to do with the diurnal adjustment procedures; apparently ocean temperatures are thought to need very little, if any, diurnal correction.

    At least I THINK that’s what the deal is. I could be totally wrong.

  2. stephen richards
    Posted Jul 18, 2009 at 2:20 PM | Permalink

    This is something of a concern to me and I’m sure many other people. Anthony W has shown that the surface station data are unreliable and I was rather pinning my hopes on the satellite data to fill the gap, so to speak. This analysis, although by no means complete, does hint at problems of sufficient magnitude to make the data unusable to the second decimal place.

  3. DeWitt Payne
    Posted Jul 18, 2009 at 3:00 PM | Permalink

    It has been suggested that the change in 2003 is related to the use of AQUA, which has station-keeping ability (fuel) and does not require diurnal or orbital decay correction, by UAH and not by RSS. I haven’t done the land vs sea comparison, but all the geographic subregions show very different behavior for RSS-UAH anomalies. The global comparison is probably the least informative.

  4. Ryan O
    Posted Jul 18, 2009 at 5:34 PM | Permalink

    Steve, your RSS-UAH graphs look remarkably similar to Antarctic AVHRR-ground temperature graphs, including the step change in 1987 and the big spike in 1995:

    The Wilcoxon test is non-parametric, and so would not require correction for serial correlation of the residuals.
    Needless to say, this gives me an opinion on whether I think RSS or UAH is more accurate.

    • Steve McIntyre
      Posted Jul 18, 2009 at 8:07 PM | Permalink

      Re: Ryan O (#4),

      Ryan, I had the AVHRR data in the back of my mind when I was collating satellite information. I wonder how they do the MSU-style adjustments.

      • Ryan O
        Posted Jul 18, 2009 at 9:01 PM | Permalink

        Re: Steve McIntyre (#10), The exact procedures are not the same, of course, because they depend on the different characteristics of each instrument, but the principles are the same and they involve a lot of approximations and assumptions. I used to be more confident about satellite measurements until I realized how sensitive the resulting trend calculations were.

        Re: Jeff Id (#13), Yep. I actually trust the ground data more. That’s the data part of the ground data, not the uber-adjusted final index part. I would second the opinion of the importance of Watts’s project.

        • Steve McIntyre
          Posted Jul 18, 2009 at 9:09 PM | Permalink

          Re: Ryan O (#15),

          unfortunately, the station project has had negligible penetration outside the US – and that’s the biggest question mark.

        • Geoff Sherrington
          Posted Jul 20, 2009 at 12:28 AM | Permalink

          Re: Steve McIntyre (#16),

          Steve, in some ways it might have helped. If you look at the photos of tidy Australian BOM high-quality stations on BOM web sites, you might feel that Anthony’s work highlighted the need for site cleanup. I do not know if it was already under way before then, but it continues. OTOH, how much adjustment of past results has been made is less clear.

          Here’s another spaghetti graph to show that even relatively clean sites have adjustment problems. There was a station site change about 1942. There is guesswork in-filling of a small % of the daily data.

          Would you be doing mathematical hara-kiri by assuming no change of any significance in 130 years?

    • Ulises
      Posted Jul 20, 2009 at 3:46 AM | Permalink

      Re: Ryan O (#4),

      The Wilcoxon test is non-parametric, and so would not require correction for serial correlation of the residuals.

      Surprises me a bit; transformation into ranks smooths the outliers away, but it should not remove the autocorrelation structure, should it?

      • Ryan O
        Posted Jul 20, 2009 at 9:43 AM | Permalink

        Re: Ulises (#39), You are right to be surprised. I was incorrect. The Wilcoxon test still assumes independence of the residuals. Its benefit is that it does not require an assumption of normality for the sample distributions. However, for serially correlated residuals, the computed confidence levels will be too tight.
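Ryan’s correction can be illustrated numerically. The Monte Carlo below uses a plain sign test rather than the full Wilcoxon statistic, only so that the sketch stays self-contained in numpy, but the point carries over: under AR(1) serial correlation a nominal 5% test for median zero rejects far too often.

```python
import numpy as np

def sign_test_rejects(x, z_crit=1.96):
    # Two-sided sign test for median zero, normal approximation.
    n = len(x)
    k = np.sum(x > 0)
    return abs(k - n / 2) / np.sqrt(n / 4) > z_crit

def ar1(n, phi, rng):
    # AR(1) series with zero mean and a stationary start.
    x = np.empty(n)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi ** 2)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(0)
n_sim, n = 2000, 100
rej_iid = np.mean([sign_test_rejects(rng.standard_normal(n)) for _ in range(n_sim)])
rej_ar1 = np.mean([sign_test_rejects(ar1(n, 0.9, rng)) for _ in range(n_sim)])
# rej_iid sits near the nominal 0.05; rej_ar1 is several times larger
```

The same inflation affects rank-based confidence intervals: independence, not normality, is the assumption that serial correlation violates.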

  5. Gerald Browning
    Posted Jul 18, 2009 at 5:41 PM | Permalink

    Steve McIntyre,

    I have discussed the problems with satellite retrievals of temperatures from radiances a number of times on this site. One expects the most problems over oceans and in the tropics because of lack of in situ data (radiosondes and surface measurements) to help with the ill posed inversion of the integral equation (exactly as you have now noticed). And in the presence of clouds, the inversion process is even more dicey.

    Note that exactly where satellites are supposed to be most helpful (in areas like the southern hemisphere where there is very little in situ data), they have the most problems.


  6. Robert Wood
    Posted Jul 18, 2009 at 5:47 PM | Permalink

    stephen richards:

    It will forever be ridiculous to attempt temperature reconstructions to any accuracy greater than 1C (or 1F if you wish). The three decimal places are the result of computers, not measurements, and should be ignored.

  7. Geo
    Posted Jul 18, 2009 at 7:16 PM | Permalink

    Doesn’t make him wrong, of course, but I think it pretty clear that Tamino enjoys any opportunity to stick a thumb in UAH’s eye.

    But having an agenda and being wrong don’t necessarily go together.

  8. DG
    Posted Jul 18, 2009 at 7:52 PM | Permalink

    This was discussed in part at WUWT last year

    Wouldn’t the calibration procedure have a lot to do with the data quality? How are the two products calibrated and to what traceable standard?

    Tamino also stated:

    Note: Having compared RSS and UAH to the HadAT2 data set, I find that there’s more divergence between RSS and HadAT2 at the 1992 step than between UAH and HadAT2. So I withdraw my opinion that the step change represents a reason to prefer RSS over UAH.

    As per J. Christy (again IIRC) their product is compared to balloon data, but not adjusted to it. If there are problems with the satellite (and hence balloon) data as is inferred, and there are problems with surface station values as has been documented, does it mean then there really is no reliable metric to monitor earth’s near surface and atmospheric temperatures?

    • Steve McIntyre
      Posted Jul 18, 2009 at 8:13 PM | Permalink

      Re: DG (#8),

      The 1992 “step” is only one of many bias corrections. While the handling of the NOAA-10 to NOAA-12 transition is interesting and relevant, it is only one of a number of transition and drift issues. I see no reason why one should “prefer” one series over another based on a superficial analysis of this step. The other steps are relevant too, as is drift.

      The various satellite authors have put a lot of work into their indices and unless one has mastered their methods, it’s silly to “prefer” one index to another.

  9. Steve McIntyre
    Posted Jul 18, 2009 at 8:04 PM | Permalink

    I see little value in people opining on the relative merits of UAH and RSS. There are multiple adjustment issues and procedures. It is quite possible that one vendor handles one adjustment better and the other vendor handles a second adjustment better.

    Readers need to be aware that the construction of the index requires the estimation of literally dozens of adjustment parameters and each parameter has an uncertainty associated with it which is not all that easy to estimate.

    Without wading through the procedures by which these parameters are estimated, it is impossible to express a preference.

    All that a third party can say is that the uncertainty has to encompass the estimates of each vendor.
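One crude way to formalize that last point: a third party can at least take the union of the two vendors’ confidence intervals as a conservative bound. A minimal sketch, with purely illustrative trend numbers rather than the actual UAH or RSS values:

```python
def encompassing_interval(est_a, se_a, est_b, se_b, z=1.96):
    # Union of two normal-approximation confidence intervals:
    # the widest bounds consistent with either vendor's estimate.
    lo = min(est_a - z * se_a, est_b - z * se_b)
    hi = max(est_a + z * se_a, est_b + z * se_b)
    return lo, hi

# Hypothetical trends in C/decade (illustrative only)
lo, hi = encompassing_interval(0.125, 0.02, 0.145, 0.02)
```

This deliberately throws away any judgment about which adjustment scheme is better, which is exactly the agnostic position argued above.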

  10. Harry Eagar
    Posted Jul 18, 2009 at 8:14 PM | Permalink


    Reminds me of reading about constructing medieval and early modern price deflators from proxies. After several months of slogging through these papers, I concluded that you cannot do it, to any useful range of error.

  11. Posted Jul 18, 2009 at 8:29 PM | Permalink

    My own experience with RSS and UAH was only to do with a single transition point at 1992. By using the ground data UAH seemed more accurate at handling the 1992 transition which turns out to be the same conclusion that Dr. Christy came to using radiosonde data.

    After homogenization, the RSS was very close to UAH, but there are a number of slightly divergent elements pointed out here.

    The analysis never left an engineering-style feeling of comfort, though, as the correction was somewhat sensitive to the values chosen. It’s very interesting because the satellite info is so much less susceptible to human interaction on an individual day-by-day basis. The sats seem like they have to be better, but after doing my own reading of the papers creating the data, the corrections are amazingly complex.

    Like SteveM, my own comfort with the trend of satellite data is shaky. It doesn’t seem like we have any good measurements of temp over even 30 year time periods. Anthony Watts’s project is again one of the most important things going on in climatology. Consider what kind of calibrations can be done if we can identify good data!!!

  12. Steve McIntyre
    Posted Jul 18, 2009 at 8:58 PM | Permalink

    Given all the previous discussion of the 1992 step, there has been negligible discussion of the post-2004 divergence over tropical land – which seems to be larger than the 1992 discrepancy.

    • Posted Jul 18, 2009 at 9:11 PM | Permalink

      Re: Steve McIntyre (#14),

      I’ve seen it discussed in passing a few times; my understanding is that UAH is using a station-keeping sat now whereas RSS is using the old-style NOAA sats. The level of correction required for a station-keeping sat is massively reduced. If it were my decision in Washington, I would put multiple multi-sensor station-keeping sats up in a GPS-style redundant fashion (and a worldwide ground network)… [snip – policy]. The lack of station keeping has nearly destroyed the best quality dataset we could have, so if Steve will permit it: WTF!

    • Ryan O
      Posted Jul 18, 2009 at 9:21 PM | Permalink

      Re: Steve McIntyre (#14), 1998 was the introduction of the AMSU-A and AMSU-B, replacing the original MSU sensor design. This corresponds fairly well to that weird divergence starting in 1998. ~2004 was probably UAH’s switchover to NOAA-17. I don’t remember how long of an overlap Christy uses prior to switching, nor do I know RSS’s method, either. But I’m pretty sure RSS uses a significantly smaller overlap, so RSS may have switched ~late 2002 or 2003.
      Could be that the in-orbit degradation characteristics of the new AMSU sensors aren’t well known, so each of the parties uses different assumptions to model them. But it’s pretty clear that there are defined breaks that correspond well to satellite switchover times.

  13. J Christy
    Posted Jul 18, 2009 at 9:51 PM | Permalink


    Quick comments:

    1. RSS Tropical value in Nov 1980 is way out of bounds.

    2. The RSS warm shift in tropical temps in 1992 occurs relative to every dataset we’ve checked – HadAT, HadCRUT3v, GISS, UAH, ERSST, RATPAC, RICH, ZOUMSU, RAOBCOREv1.2-1.4, all Tropical sondes in Christy et al. 2007, etc.

    3. I think the annual cycle differences during the AMSU period (1998-) are at least in part UAH artifacts due to the merging method which at one point assumes AMSU should look just like the previous MSU values but can’t do it perfectly (too complicated to explain in a short note). It is trivial to unforce this (I’ve done it) and the annual cycle differences are reduced. But no way at this point to determine the exact magnitude. Key point – this does not affect the overall trend (global trend values ranged from 0.123 to 0.125 C/decade for various versions of the annual cycle corrections, i.e. the annual cycle corrections add no bias.) If I’m convinced I’ve made a better product, (I’m not there yet) I’ll put out a version 5.3 of the UAH datasets.

    4. RSS applies much stronger diurnal corrections over land. Note that when a satellite drifts to cooler diurnal times, RSS warms relative to UAH (i.e. NOAA-14, 1997-2003), which in our view is an overcorrection. RSS then cools relative to UAH when the satellite drifts to warmer temps (NOAA-15, 2003-2009), again, in our view an overcorrection. RSS bases the diurnal correction on climate model simulations of the diurnal cycle; UAH bases the corrections on empirical values from different local-time positions along the cross-track scan. Some may remember we had an error in the LT version of this correction back in 2005 that Wentz and Mears were clever to discover.

    5. When global trends are compared for 70S-85N (RSS domain) the difference is only 0.02 C/decade and is getting closer as the relative warm shift of RSS in 1992 is being mitigated by the relative cooler drift over the NOAA-15 period. So we are looking at relatively small difference issues in the larger context.

    6. In independent, separate comparisons with both the US VIZ and Australian radiosondes, the UAH products displayed consistently smaller error characteristics than RSS. And, curiously, RSS revealed a noticeable annual cycle in the differences vs. the sondes while UAH did not. However, I still think at least part of the annual cycle feature is a UAH problem.

    7. The MSU to AMSU conversion has been discussed in the literature. Basically, for UAH, AMSU5 is made to look like MSU2, and a slant-wise retrieval of AMSU5 can generate MSU2LT. There’s much more in the literature.

    8. Mears overall method is quite defensible. We believe UAH’s is also. Note that in virtually all cases, when our error bars are applied, they overlap one another quite a bit.

    9. The recent BAMS Climate Summary of the Globe 2008 indicated global LT trends of +0.14 C/decade +/- 0.02 C/decade with the range representing the full range of all of the versions of tropospheric data sets available to us.

    10. Since 2003 UAH has used the AMSU on AQUA which has on-board propulsion, and thus rigorous station-keeping. As a result, no diurnal drifting occurs, so UAH needs no diurnal correction for that period – which gives evidence to our hypothesis that the RSS diurnal corrections are a bit too strong.

    Will be traveling to several meetings next week and difficult to contact.

    John C.

    • Posted Jul 18, 2009 at 10:12 PM | Permalink

      Re: J Christy (#19),

      There’s not much for me to say immediately except thanks for stopping by. There’s enough in your post to blog about for a long time.

      While I understand the difference between sensors, time and math, I still think the recent annual variance may be partially a different response of the atmosphere due to whatever changes have occurred. IMO it’s beyond the accuracy of current sensors to do anything with it.

    • Richard
      Posted Jul 19, 2009 at 3:50 AM | Permalink

      Re: J Christy (#19), Thanks John for a short, but very clear post. That answered a lot of questions. I for one appreciate your effort to post.

    • Ryan O
      Posted Jul 19, 2009 at 10:35 AM | Permalink

      Re: J Christy (#19), Thank you very much, Dr. Christy.

    • Posted Jul 19, 2009 at 2:04 PM | Permalink

      Re: J Christy (#19),
      Thanks for stopping by.

      RSS bases the diurnal correction on climate model simulations of the diurnal cycle, UAH bases the corrections on empirical values from different local-time positions along the cross-track scan.

      My preference is for empirical data to use empirically determined corrections whenever possible. So, I’d prefer the UAH method (unless there is some very strong positive evidence the empirical method is poor.)

    • Posted Jul 19, 2009 at 3:24 PM | Permalink

      Re: J Christy (#19),

      10. Since 2003 UAH has used the AMSU on AQUA which has on-board propulsion, and thus rigorous station-keeping. As a result, no diurnal drifting occurs, so UAH needs no diurnal correction for that period – which gives evidence to our hypothesis that the RSS diurnal corrections are a bit too strong.

      Yet the years for which AQUA has been used, 2003-8, show a pronounced minimum in the anomaly in May/June of ~0.2ºC which isn’t apparent in RSS and suggests to me that the UAH drift correction prior to AQUA might not be right.

  14. Posted Jul 18, 2009 at 10:05 PM | Permalink

    I don’t know if this is helpful, but here is a post from Roy Spencer describing the differences between UAH and RSS. I thought the comment about UAH using data from further South was pretty important.

    1) we calculate the anomalies from a wider latitude band, 84S to 84N whereas RSS stops at 70S, and Antarctica was cooler than average in April (so UAH picks it up).

    2) The monthly anomaly is relative to the 1979-1998 base period, which for RSS had a colder mean period relative to April 2009 (i.e. their early Aprils in the 1979-1998 period were colder than ours.)

    3) RSS is still using a NOAA satellite whose orbit continues to decay, leading to a sizeable diurnal drift adjustment. We are using AMSU data from only NASA’s Aqua satellite, whose orbit is maintained, and so no diurnal drift adjustment is needed. The largest diurnal effects occur during Northern Hemisphere spring, and I personally believe this is the largest contributor to the discrepancy between UAH and RSS.
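Spencer’s second point, about base periods, is easy to verify in a toy calculation: switching the climatology base period shifts each calendar month’s anomalies by a constant but leaves the fitted trend essentially untouched. A hedged sketch on synthetic data (not the actual UAH or RSS processing):

```python
import numpy as np

def monthly_anomalies(x, base_start, base_end):
    # Anomalies of a monthly series (assumed to start in January) relative
    # to per-calendar-month means over the base period [base_start, base_end).
    x = np.asarray(x, dtype=float)
    anom = np.empty_like(x)
    for m in range(12):
        clim = x[base_start + m:base_end:12].mean()
        anom[m::12] = x[m::12] - clim
    return anom

rng = np.random.default_rng(2)
t = np.arange(360)                        # 30 years of months
x = 0.001 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(360)
a_early = monthly_anomalies(x, 0, 240)    # first 20 years as base
a_late = monthly_anomalies(x, 120, 360)   # last 20 years as base
trend_early = np.polyfit(t, a_early, 1)[0]
trend_late = np.polyfit(t, a_late, 1)[0]
# The anomaly levels differ (an earlier, colder base makes recent anomalies
# look warmer), but the two fitted trends agree to well within the noise.
```

So a base-period difference can explain a gap in a single month’s reported anomaly, as Spencer says, without implying anything about the trend comparison.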

  15. Barclay E. MacDonald
    Posted Jul 18, 2009 at 10:33 PM | Permalink

    J. Christy, thank you for checking in. Interesting that you point out the LT global trends appear to give us some reassurance regarding accuracy in the big picture. I was looking at the >1C differences consistently occurring between RSS and UAH in the Tamino and the Tropical Land graphics above and was immediately losing faith in the satellite info. Analysing this looks like a real Pandora’s box. Definitely looking forward to hearing more from you when you have the time.

  16. Ausie Dan
    Posted Jul 19, 2009 at 12:05 AM | Permalink

    Steve, re your remarks about problems with satellite and all other temperature data. Would it be good practice for all such reports to be accompanied by an error estimate?
    We could then evaluate the significance of reported trends.

  17. Geoff Sherrington
    Posted Jul 19, 2009 at 4:35 AM | Permalink

    From a limited number of rural ground stations in Australia I analysed a year ago, I wrote on David Stockwell’s Niche Modeling that

    “Many of the stations show a reduction in scatter about year 1990 +/- 3 years. The cause is unknown to me. Many of these stations switched to 30 minute recording around 1992-4. The near-global 1998 high temperature is virtually absent from this data set. Cause unknown, again.”

    I have since found more examples of the “necking” behavior, which shows as Tmax and Tmin moving closer together for about 5 years around the early 1990s. Maybe the mean would stay about constant and not be detected by strucchange-type analysis.

    It’s just another input which needs explanation when comparing ground to satellite reconstructions.
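Geoff’s conjecture, that “necking” of Tmax and Tmin could leave the mean unchanged and hence be invisible to mean-based break detection, is easy to illustrate with a toy numpy sketch (synthetic data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240                                            # 20 years of months
spread = np.where(np.arange(n) < 120, 10.0, 6.0)   # Tmax-Tmin range "necks" down
tmean = 15.0 + 0.5 * rng.standard_normal(n)        # the mean itself has no break
tmax = tmean + spread / 2
tmin = tmean - spread / 2

# The diurnal range shows a clear step; the mean shows none.
range_step = (tmax - tmin)[120:].mean() - (tmax - tmin)[:120].mean()
mean_step = tmean[120:].mean() - tmean[:120].mean()
```

A mean-based breakpoint search, like the strucchange runs in the post, sees only tmean and so misses the change entirely; running the same search on Tmax and Tmin separately, or on the diurnal range, would catch it.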

    Now, here’s an example of a different type of “necking”, where several different record keepers seem to converge in the same early-1990s period. (Note: there has been some subjective guesswork infilling in this graph. It is a quite small amount, but it makes the results irreproducible unless I post the raw data.)

  18. Allen63
    Posted Jul 19, 2009 at 4:56 AM | Permalink

    J. Christy, thanks for the insights.

    I still “feel” better about believing satellite global temperatures vs. GISS temperatures (for example).

    Problem is, based on this post, it “seems” satellite data temperature trends “might” be in error by 0.1C to 0.2C per 30 years (maybe more).

  19. KevinUK
    Posted Jul 19, 2009 at 7:29 AM | Permalink


    Thanks for this great post. I’ve been hoping you’d give the satellite measurements the same M&M treatment you’ve given thus far to the proxy temperature reconstructions and the ‘recorded’ (but heavily adjusted) temperature records. From what I can gather from your findings so far, the UAH and RSS indices are just as problematic as the HADCRUT, GISS etc indices.

    In both cases the ‘adjustments’ appear to be of the same magnitude as the claimed trends in temperature increase. Given this situation, how does it remain possible for the IPCC to justify its continued existence? How is it possible to claim that there has been any warming trend towards the latter part of the 20th century at all when it looks to me as though any warming trend in the global mean surface temperature (if physically there could ever be such a thing) anomaly could be entirely an artefact of the adjustments made to the measurements? Even more importantly how can such a claimed warming trend be attributed to man and not be almost completely explainable (and much more likely) due to natural climate variability?

    Now I know part of my second paragraph is editorialising Steve, but please let it stand as it is an important point that everyone reading this important thread should understand whether or not they be ‘warmers’, ‘luke warmers’ or ‘die hard AGW skeptics’. If we can’t really be sure whether or not there has been a warming trend in the recent past caused by our continued use of fossil fuels, why on earth are we funding organisations like the IPCC to tell us that we must drastically limit how much fossil fuels we burn over the next century? If the answer is that we must apply the ‘precautionary principle’ just in case then we are all fools as, in effect, we will be basing our future livelihood on a very low probability event (a possible large increase in global mean surface temperature due to our continued use of fossil fuels) just because of the possibility of a high consequence (significant man-caused climate change). We apply the ALARP principle in all our everyday activities and as the saying goes ‘we take the risk and we accept the consequences’. Why should our continued use of fossil fuels be any different? Our world economy, health, well being and livelihood benefit immensely from our continued use of fossil fuels. Surely the benefits we receive far outweigh the (more often perceived rather than real) consequences that could arise from our continued use of fossil fuels?


  20. Barclay E. MacDonald
    Posted Jul 19, 2009 at 11:49 AM | Permalink

    Kevin UK

    Your second paragraph is the bottom line. The third paragraph gets us to the politics. Nice synopsis!

    But before I draw firmer conclusions on the second paragraph, I would like to first have a much greater understanding of the satellite data and its analysis. Can’t hurt. Especially since the press has yet to pay any serious attention to this kind of detail anyway. Let’s continue to pursue where SM is leading. It’s fascinating.

  21. John S.
    Posted Jul 19, 2009 at 1:01 PM | Permalink

    What struck me when I first examined the RSS-UAH discrepancies in the tropical series was the Nov 1980 outlier, the 1992 step, and the clear change in the spectral character around 2000. But there also seems to be a difference in the seasonal cycle that is removed from each series in computing monthly anomalies, which results in a dip in cross-spectral coherence at the annual frequency. It is a discrepancy that has not been addressed here.

  22. Steve McIntyre
    Posted Jul 19, 2009 at 1:10 PM | Permalink

    I had an inquiry offline on how these plots were done. I used the R package strucchange by Achim Zeileis.


    If x is a time series, I first calculated a simple OLS trend:

    fm0 <- lm(x ~ year)

    I then calculated the breakpoints using the function breakpoints in strucchange and recovered a factor for the segments using the function breakfactor:

    bp <- breakpoints(x ~ year, breaks = nbreaks)
    fac0 <- breakfactor(bp)

    In order to recover separate trends for each segment (which was the problem that interested my correspondent), I did a new regression against both year and fac0*year as follows:

    fm1 <- update(fm0, x ~ year + fac0*year)

    The function fitted(fm1) recovers the fitted values.

    For convenience, I organized these steps into a function as follows:

    make.bp <- function(x, nbreaks = 5) {
      bp <- breakpoints(x ~ year, breaks = nbreaks)
      fac0 <- breakfactor(bp)
      fm0 <- lm(x ~ year)
      fm1 <- update(fm0, x ~ year + fac0*year)
      list(bp = bp, fm = fm1)  # return the breakpoints and the segmented fit
    }

    In the case at hand, the following command obtains the values of interest:


    The plot of a given panel is then:

    plot(year, A$x, type="l", axes=FALSE, xlab="", ylab=ylab0, ylim=ylim0, yaxs="i")
    mtext(side=3, "Tropical T2LT: Land minus Ocean", font=2, cex=1, line=.7)

  23. Kenneth Fritsch
    Posted Jul 19, 2009 at 5:27 PM | Permalink

    So what is a climate scientist to do who uses the available temperature time series? Use them all and hope that they all support his thesis or use the one(s) that do support his thesis and then attempt to show why that series is the correct one or point out that his results are not robust across the series spectrum or note that his results are very dependent on the reported accuracies and confidence intervals of the series but that he has reservations about the reported values or simply use the series that puts his conclusion(s) in the most favorable light and let the reader do the sensitivity tests?

    Or do these scientists whose investigations depend strongly on these series being valid within the reported limits band together in an attempt to get an independent measure of the series? Also how leery must a scientist be of his conclusions changing when these series are periodically corrected? And does that concern tend to inhibit significant changes from being made or proposed?

    • Geoff Sherrington
      Posted Jul 20, 2009 at 12:11 AM | Permalink

      Re: Kenneth Fritsch (#33),

      You might see a similar frustration in my post 24 above. I can only suggest that we keep probing.

      At the moment, methods like break points and other deconvolutions seem popular. There is scope to widen these analyses to include other climate data like rainfall, evaporation, Tmax and Tmin separately, rather than Tmean, etc, to try to distinguish instrument variables from climate variability. Sooner or later there will be better confidence in both hypotheses and data.

      The correspondence of temperature data between different agencies has to continue under the microscope because the agreement is still not good enough for some purposes. Thank you, Dr Christy, for your openness there.

      You make a good point that authors might look at their older papers and either issue a caution about findings or nullify them. Few authors seem to be doing this, which adds to the confusion, but it is a sustainable proposition that some past papers are no longer relevant because their temperature data basis was wrong. E.g. there were past hypotheses based on Tmax and Tmin converging over decades, but that pattern might not have been so reliable everywhere.

  24. Posted Jul 20, 2009 at 2:22 AM | Permalink

    Re: Geoff Sherrington (#35), One would have to be concerned about whether breakpoints were instrumental or real. As the 1978 change in Australia shows in an average of all Australian stations, one cannot assume that a step change or break is the result of a station move or some other methodological bias. It might be real. Have GISS shown that the breaks are not real? The assumption that all climate changes are gradual should be questioned too.

    • Geoff Sherrington
      Posted Jul 20, 2009 at 3:26 AM | Permalink

      Re: David Stockwell (#36),

      I agree. What I meant was that if you suspect a satellite sensor for temperature is degrading, you might get useful supplementary information if you look at other data gathered by other instruments to see if there is a non-temperature climate change at the same time. If quite a few diverse climate measures change, it lessens the likelihood of a satellite instrument problem. Sorry for my poor wording. The several station changes that I have studied give remarkably large temperature break points even though the move might be a few km at best. Importantly, they do not give break points solely because the microclimate is different. In some datasets they give break points because adjusters assign them different trends before and after. There might be less inclination to adjust (say) rainfall records in hindsight. It’s messy.

  25. Posted Jul 21, 2009 at 7:13 PM | Permalink

    Mr McIntyre,

    As you have discovered, connecting and integrating data from different satellite sensors is a very challenging effort. But what is important to note is that satellite sensors are well characterized: they can have internal calibration and can be calibrated against known targets or against well-calibrated ground sensors. The point is that the satellite sensor is a single known entity with measurable or defensible precision/errors. Therefore its global measurements are self-consistent to the highest degree.

    Having noted the challenge of integrating these systems as they are updated or age, now contemplate the same problem with thousands of different ground sensors with disparate calibration or state information that may be completely outdated: sensors that have been moved, replaced, deleted, etc. If you thought it was hard to get a defensible temperature profile out of one reference-calibrated sensor that samples the globe for years, think of the mess we have with ground sensors.

    This logically leads me to conclude (without the need to do it mathematically) that the ground-based sensor network used to produce a global temperature cannot produce a more accurate measurement. It is impossible for an ill-defined network of sensors of varying quality to create a more accurate number than one sensor used over many years across the globe.

    I may be oversimplifying it, but that is an engineer’s back-of-the-envelope quick test that usually gives the direction of the answer, if not also a reasonable magnitude.
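    That back-of-the-envelope can be made concrete. Averaging N independent sensor errors shrinks the random part as 1/sqrt(N), but any shared systematic bias survives the average untouched, while a single calibrated sensor's bias is, in principle, removable. A Monte Carlo sketch with invented error magnitudes (none of these numbers describe real instruments):

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 15.0       # "true" temperature; all numbers here are invented
trials = 2000

# Hypothetical ground network: 1000 sensors, each with 0.5 deg random noise,
# a fixed per-sensor calibration offset (sd 0.4 deg), and a network-wide
# systematic bias of 0.3 deg that averaging cannot remove.
n = 1000
offsets = rng.normal(0.0, 0.4, size=n)
readings = truth + 0.3 + offsets + rng.normal(0.0, 0.5, size=(trials, n))
net_err = readings.mean(axis=1) - truth

# Hypothetical single calibrated sensor: larger random noise per reading
# (0.2 deg) but no unremoved bias after calibration.
sat_err = rng.normal(0.0, 0.2, size=trials)

net_rmse = float(np.sqrt((net_err**2).mean()))
sat_rmse = float(np.sqrt((sat_err**2).mean()))
print(net_rmse, sat_rmse)
```

    With these assumed numbers the network's random error averages down to almost nothing, yet its RMSE stays pinned near the shared 0.3 deg bias, above the single sensor's. The comparison turns entirely on how large the shared, unremovable biases actually are, which is the empirical question.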

    I would bet that if someone used one of these satellite sensors to assess the ground sensor network used by NOAA and GISS they would not only discover the precision (or lack of it) for each sensor site, but could prove how poorly the ground network performs when compared to the satellite sensor.

    Cheers, AJStrata

  26. Kenneth Fritsch
    Posted Jul 27, 2009 at 6:19 PM | Permalink

    I wanted to combine my analysis of the Version 1 and 2 USHCN stations, which I posted in Post #19 on the thread titled “USHCN V2 Deletions and Additions”, with some breakpoint calculations using the methods and R scripts of Steve M as posted at the following three threads:
    More Tropical Troposphere: UAH versus NOAA
    RSS versus UAH: Battles over Tropical Land and Ocean
    June 2009 and the Big Red Spot

    Before proceeding, I needed to compare the Steve M breakpoint method from R with the breakpoints described and graphed for global NOAA/NCDC temperature anomalies (1880-2006) in the paper titled “Abrupt global temperature change and the instrumental record” by Matthew J. Menne, NOAA/NESDIS/NCDC, Asheville, NC, linked at:

    Menne was the main authority referenced for determining the breakpoints used in the USHCN temperature series adjustment for Version 2. The NOAA global data is linked here:

    The NOAA/NCDC Global Temperature Anomaly breakpoints, using the library(strucchange) method in R, were at the years 1906, 1945 and 1973, which are very close to those determined in Menne. The graph of this series with the breakpoints is presented below.

    Secondly, before looking at breakpoints for the V1 and V2 versions of USHCN, I wanted to look at the breakpoints for the V2 series for the period 1895-2006 for the contiguous US. For that calculation I found a single breakpoint at the year 1963, and the graph for it is shown below.

    Thirdly, before proceeding I calculated the breakpoints for the difference series USHCN V2 – GISS and USHCN V2 – USHCN V1. Those breakpoints are shown graphically below, and the years were as follows:

    V2-GISS: 1912, 1929, 1986

    V2-V1: 1939, 1992

    Finally, I calculated the breakpoints for the V1 and V2 versions for individual USHCN stations for the periods 1895-2006 and 1920-2006 and tabled the results as shown below. The stations were subdivided by the trend differences between the V1 and V2 versions into the largest differences and the smallest differences. All stations with small V1 to V2 trend differences, which were used in the analysis, had significantly positive trends.

    The results of these breakpoint calculations point to some potentially significant differences between the US temperature series from USHCN V1 and V2 and between V2 and GISS. Without any a priori evidence or inklings for assigning the breakpoints to specific differences, at least at this point in the analysis, the breakpoints and their times of occurrence are just an interesting observation.

    The analysis of breakpoints for the individual station V1 and V2 series could perhaps reveal whether the adjustment methods for the V2 series under- or over-adjusted. Over-adjusting would imply that real climate-caused breakpoints are improperly being adjusted out of existence. Under-adjusting would be indicated by breakpoints remaining substantially unchanged in both versions V1 and V2. Stations with the largest trend differences from V1 to V2 could indicate larger adjustments of breakpoints, while those with smaller trend differences could be evidence of fewer breakpoints needing adjustment.

    The stations with the larger trend differences did tend to have more breakpoints, but it does not appear, from my somewhat meager number of stations studied, that the V2 version for those large trend difference stations has fewer breakpoints. In some cases the adjustment evidently only changed the timing of the breakpoint(s). Among the stations that I analyzed, the group with the smallest trend differences actually showed a larger reduction in breakpoints, on going from V1 to V2, than the group with the largest trend differences.

    I did my analysis using two time periods: the longer 1895-2006 and the shorter 1920-2006. The shorter period contains much less missing data and, in my mind, is probably more reliable. The question to be answered is whether the choice of time period changes the breakpoints significantly. Overall the changes are not dramatic, but there are changes in the number of breakpoints and their times of occurrence.

    Again, I am not sure how to interpret these differences, but I do think that the authors of the USHCN V1 to V2 change should have done and reported some of this breakpoint analysis, since their adjustments depend strongly on locating breakpoints.
