New Satellite Data

Get ready for a new round of satellite disputes. Spencer and Christy have version 6.0 out in draft form as of Sept 9, 2006. Here’s a plot of the data from the website. The SH result is particularly intriguing – now showing virtually no change. In the recent U.S. CCSP report, as Fred Singer pointed out in Stockholm, in addition to the low trend in tropospheric temperature, there is the fingerprint problem: tropospheric temperature trends are supposed to be running hotter than surface temperature trends – but observations show the opposite.

I’m sure that there will be a new furore. There is a readme at the site about the diurnal corrections in version 6.0.


  1. Peter Hearnden
    Posted Sep 25, 2006 at 7:33 AM | Permalink

    Humm, I was going to say your graphs seem to show greater variability in the SH (which would be very odd indeed) until, just in the nick of time…, I saw the scales aren’t the same – doh! To clear up confusion I think the scales should be the same 🙂

  2. Michael Hansen
    Posted Sep 25, 2006 at 7:36 AM | Permalink

    Quickly eyeballing the HADCRUT3 data for 1978-2005 gives:

    GLOBAL: 0.16 degree/decade
    NH: 0.21 degree/decade
    SH: 0.10 degree/decade

    This should be compared to the MSU satellite data:

    GLOBAL: 0.128 degree/decade
    NH: 0.20 degree/decade
    SH: 0.05 degree/decade

    If tropospheric temperature is supposed to rise faster than surface temperature, the scoreboard is: no wins, one tie, and two losses.

    I predict less than 48 hours before RealClimate will claim victory for the climate models 🙂 (sorry, I couldn’t resist. I’m perfectly aware that it’s much more complex than this… I just wish that RC, and others, would be more humble about what the capabilities of the models are at this point in time)
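The per-decade figures quoted above are normally obtained as an ordinary least-squares slope on monthly anomalies, scaled to decades. A minimal sketch of that calculation on synthetic data (the series and numbers here are invented, not HADCRUT3 or MSU):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly anomalies for 1978-2005 (28 years): an assumed
# underlying trend of 0.16 C/decade (0.016 C/year) plus weather noise.
n_months = 28 * 12
t_years = np.arange(n_months) / 12.0
anoms = 0.016 * t_years + rng.normal(0.0, 0.1, n_months)

# Trend = OLS slope in C/year, scaled by 10 to C/decade.
slope_per_year = np.polyfit(t_years, anoms, 1)[0]
print(f"trend: {slope_per_year * 10:.2f} C/decade")
```

With 28 years of monthly data the fitted slope lands close to the assumed 0.16 C/decade; the real disputes are about the inputs, not this arithmetic.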

  3. Steve McIntyre
    Posted Sep 25, 2006 at 7:53 AM | Permalink

    #1. OK, done.

  4. Posted Sep 25, 2006 at 8:06 AM | Permalink

    Yes, and the SH temperatures should be less prone to confounding factors such as UHI (in the thermometer) and land use effects. So SH temperatures should be a more reliable indicator of the magnitude of global warming.

  5. Taipan
    Posted Sep 25, 2006 at 8:09 AM | Permalink

    Looking at those graphs, I wonder what the outcome would be if we removed just the 1998 year. It appears to be such a huge spike.

  6. Hank Roberts
    Posted Sep 25, 2006 at 8:57 AM | Permalink

    1998 is the last big El Nino?

  7. bender
    Posted Sep 25, 2006 at 9:03 AM | Permalink

    Re #5 The same musing could be applied to the highly anomalous 2005 hurricane season.

    Of course, these data streams (storm frequency, temperature) may not be independent of one another. In which case it would be necessary to dismiss more than just one observation in order to kill the hypothesis of an underlying warming trend. You’d have to dismiss 1998 temperatures, 2005 hurricanes (and so on, for all the other temperature-dependent data streams that are showing a rising trend).

    IOW perturbing one data point in one data stream has no impact on all those others that are trending upward. In general, I think skeptics are going to have to start thinking more about the totality of data. The tendency is to focus too much on one data stream.

    I say this because it is precisely this myopia that allows the warmers to “move on” so easily when one line of argumentation starts looking a little sketchy for them. Breadth of awareness will keep the skeptic honest – and effective in holding the warmers to account.

  8. bender
    Posted Sep 25, 2006 at 9:28 AM | Permalink

    Re #6
    So what?

    All these upward trending processes have quasi-cyclic components to them. 1998’s strength was not 100% El Nino. Under the AGW hypothesis it was largely El Nino but partly GW trend (choose your coefficients). Dismissing 1998 in its entirety would be “throwing the baby out with the bathwater”. Skeptics do themselves harm with these “all-or-nothing” arguments. Like the ridiculous suggestion that a cool 2006 implies a climate trend reversal. Just because a cycle (or noise) dominates a trend does not mean the trend is not there.

  9. Steve McIntyre
    Posted Sep 25, 2006 at 9:35 AM | Permalink

    I agree with Bender on this. You have to include all the data. 1998 was a big El Nino, but so what. It’s there.

  10. Barney Frank
    Posted Sep 25, 2006 at 9:45 AM | Permalink


    I agree with you about the number of uptrends.
    The question for me is not GW, it’s the A in AGW. Not because I think it is inevitable that GW will continue, but because if there is little or no A to it we can’t do anything about it, and arguably shouldn’t even if we could. Of course, I doubt there’s much we can do even if there is significant A to GW.
    I know most folks here have a much more pure science perspective than I do. I’m more interested in making sure policies are based on some sound science than in the science itself. And so far I don’t see much sound science anywhere, just a bunch of conjectures based on some rudimentary knowledge and a lot of bias. The most disturbing behavior I see is data showing warming, cooling or stasis being used by one side or the other as confirmation, not that the planet is warming or cooling, but that their pet theory is advanced, whether or not the data shed any light on causation, which so far very little of it seems to.

  11. bender
    Posted Sep 25, 2006 at 10:55 AM | Permalink

    Of course I don’t want to dismiss the skeptical nature of #5. A comparable and fair question to ask is what the inferred trend would look like if 2007 and 2008 turned out much like 2006. This is an interesting question because it suggests one should be looking at 95% prediction intervals (which are for future observations), which are a lot wider than 95% confidence intervals (which are for the mean).

    Similarly, it is also a fair question, when a series contains many cycles overtop a trend, to ask what the trend looks like if one estimates from:
    (1) cycle trough to cycle peak (slope biased high) vs.
    (2) cycle peak to cycle trough (slope biased low) vs.
    (3) cycle peak to cycle peak (fair comparison)

    Warmers choosing (1) and coolers choosing (2) should be viewed with equal suspicion.
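bender’s distinction between the two intervals can be made concrete with the textbook OLS interval formulas, where the prediction interval carries an extra “1 +” under the square root for the variance of a single new observation. A sketch on made-up data (nothing here is the actual satellite series):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.arange(30, dtype=float)            # hypothetical years
y = 0.02 * x + rng.normal(0.0, 0.15, 30)  # trend + noise

n = len(x)
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))   # residual standard error
sxx = np.sum((x - x.mean())**2)
x0 = x.max() + 1                          # the "next" year
tcrit = stats.t.ppf(0.975, n - 2)

# 95% CI for the mean response at x0 vs. 95% PI for one new observation.
half_ci = tcrit * s * np.sqrt(1.0/n + (x0 - x.mean())**2 / sxx)
half_pi = tcrit * s * np.sqrt(1.0 + 1.0/n + (x0 - x.mean())**2 / sxx)
print(f"CI half-width: {half_ci:.3f}  PI half-width: {half_pi:.3f}")
```

The PI is always the wider of the two, which is the point: asking “what if 2007 looks like 2006?” is a question about single future observations, not about the mean.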

  12. John Hekman
    Posted Sep 25, 2006 at 11:06 AM | Permalink

    It’s not just that there has been little or no warming in the SH; what is striking to me is that NH and SH have diverged quite a bit in the last year or so.

  13. John Hekman
    Posted Sep 25, 2006 at 11:20 AM | Permalink

    Changes in the NH and SH values are negatively correlated (insignificantly) since January 2005. The observations themselves are insignificantly positively correlated.
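For what it’s worth, the two correlations mentioned above (the observations themselves vs. their month-to-month changes) are mechanically different calculations. A sketch on synthetic series (invented data, not the actual NH/SH records):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(60) / 12.0                   # five hypothetical years, monthly
nh = 0.02 * t + rng.normal(0.0, 0.1, 60)   # invented NH-like anomalies
sh = 0.01 * t + rng.normal(0.0, 0.1, 60)   # invented SH-like anomalies

# Correlation of the observations themselves...
r_levels = np.corrcoef(nh, sh)[0, 1]
# ...vs. correlation of the month-to-month changes. Differencing removes
# any common trend, so the two can easily differ in size and even in sign.
r_changes = np.corrcoef(np.diff(nh), np.diff(sh))[0, 1]
print(f"levels r = {r_levels:.2f}, changes r = {r_changes:.2f}")
```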

  14. Posted Sep 25, 2006 at 11:25 AM | Permalink

    I’m curious. Has anyone constructed a statistically reliable graph of surface temps that excludes metropolitan UHI areas, then compared those measures to ones that include them?

  15. Jack
    Posted Sep 25, 2006 at 12:48 PM | Permalink

    Ocean cooling! The SH is dominated by the ocean-atmosphere interactions, the NH by land-atmosphere interactions. If, as reported, the ocean SST was decreasing over 2002-2005, then this would be expected to pull down the SH atmospheric temperatures, reducing the trend.

    I wonder what Wentz et al. think of this.

  16. Ken Fritsch
    Posted Sep 25, 2006 at 12:56 PM | Permalink

    Warmers choosing (1) and coolers choosing (2) should be viewed with equal suspicion.

    We waste too much space looking at and discussing small samples and recent reversals from trends because, as any statistician knows, the uncertainty in any conclusions drawn would be from the floor to the ceiling. I was always taught that arbitrary removal of outlying points is statistical heresy.

    That my Chicago Bears are now 3-0, does not make me project success for them that much more than I did at the season’s start. Unfortunately for many of my fellow fans, thinking Super Bowl, the statistical message is lost.

  17. John Cross
    Posted Sep 25, 2006 at 12:58 PM | Permalink

    Re Singer’s point, is he taking into account the contamination by stratospheric cooling (i.e. Fu)?


  18. Steve Bloom
    Posted Sep 25, 2006 at 1:11 PM | Permalink

    Back to the new S+C results, I suspect that they will get much less attention than before due to the history of errors in their prior work and their “bait and switch” vis-a-vis the CCSP report.

  19. Steve McIntyre
    Posted Sep 25, 2006 at 1:36 PM | Permalink

    #18. It’s amazing to me that so many climate scientists are quick to conclude that MBH errors don’t “matter” and S&C errors do – especially when one compares the cooperative approach of S&C and the obfuscation of MBH.

  20. Pat Frank
    Posted Sep 25, 2006 at 1:45 PM | Permalink

    #19 — The disparity in fine-toothed-comb attention is telling, isn’t it. In that continuing mode, we now have a small correction due to satellite drift turned into a “history of errors.” Here’s my prediction: Steve B.’s #18 dismissal notwithstanding, S+C #6 will get far more critical attention than Phil Jones #anything.

  21. Posted Sep 25, 2006 at 1:47 PM | Permalink

    Global warming apparently doesn’t exist in one hemisphere, but that is surely the hemisphere controlled by the oil companies and their propaganda.

    The inappropriate behavior of the Southern Hemisphere can’t affect the consensus of all the hemispheres, led by the Northern Hemisphere, that we live in a global warming era that will end in doom by Christmas 2016 unless we follow al-Gore’s instructions.

    More seriously, how uniform are the increasing CO2 concentrations? How big are the differences the climate models predict between the two hemispheres, in overall temperature and its trends?

    Best wishes

  22. Jeff Norman
    Posted Sep 25, 2006 at 1:57 PM | Permalink

    Regarding the 1997/1998 el Nino spike.

    What I find interesting is that during the el Nino a vast amount of heat energy was introduced into the troposphere and subsequently lost to space. AGW is supposed to trap heat in the atmosphere, or at least slow its loss to space. Assuming temperature is a representative proxy for heat energy, it would appear heat energy is not being trapped in the troposphere.

    Pursuant to this thought, I also find it interesting that this vast amount of heat energy seemingly appeared out of nowhere. I know it came out of the Equatorial Pacific Ocean but why wasn’t this heat loss represented in the SSTs? Or was it? Or is it only just showing up now?

    I suspect that Steve Bloom will continue to get much less attention here than before due to his history of errors and his bait and change the subject arguments.

  23. nanny_govt_sucks
    Posted Sep 25, 2006 at 2:02 PM | Permalink

    So, looking at the NH vs SH charts, it’s obvious that all the human emissions of CO2 are stuck in the NH, right? There seems to be no trend in the SH, so that must mean no CO2 forcing down there.

  24. bender
    Posted Sep 25, 2006 at 2:20 PM | Permalink

    Re #16
    Except that Rex Grossman is your quarterback, and as a Gator I can tell you he is truly under-rated. 3-0 is a trend.

  25. David Archibald
    Posted Sep 25, 2006 at 2:38 PM | Permalink

    Twenty-eight years of data, and the Southern Hemisphere is the same temperature it was twenty-eight years ago. This is proof that there has been no global warming over the last twenty-eight years, and that global warming, should it ever occur, has yet to begin.

  26. Steve Bloom
    Posted Sep 25, 2006 at 2:59 PM | Permalink

    Re #20: One “small correction”? *snork* A couple of errors with pretty large consequences, as I recall. S+C’s stuff got a lot of attention mainly because they sought to portray their findings as an anti-AGW argument, underlined that stance by aligning themselves with the ExxonMobil-funded FUDtanks, and wound things up with the aforementioned bait-and-switch. Consider by way of contrast the treatment by all sides of the ocean cooling findings of Lyman et al.

    Re #22: Sounds like you need to read the basics on ENSO. Wikipedia would be a good place to start; follow the links there for details.

  27. Steve McIntyre
    Posted Sep 25, 2006 at 3:08 PM | Permalink

    #26. So where are you on MBH, Steve B?

  28. Spence_UK
    Posted Sep 25, 2006 at 3:37 PM | Permalink

    Remember folks:

    Serial egregious errors, not to be trusted

    Trifling differences that don’t matter, anyway we’ve "moved on"

    You heard it here first! Actually, you probably heard it on Realclimate first. Credit where it is due…

  29. Kevin
    Posted Sep 25, 2006 at 3:38 PM | Permalink

    Re #6: Much of the apparent trend in the NH in the earlier versions was driven by the El Chichón and Mt. Pinatubo eruptions and the El Nino in ’98, and this (unsurprisingly) still appears to be the case.

    Re #2: We must be mindful that decadal trend calculations are based on simple linear regression, which is very sensitive to outliers. If I have time this week I’d like to have a go with econometric methods.
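The outlier sensitivity is easy to demonstrate. One robust alternative (an illustration only, not necessarily what is meant by “econometric methods” above) is the Theil-Sen estimator, the median of all pairwise slopes. On a synthetic flat series with one short, late, El Nino-like spike:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t = np.arange(120) / 12.0          # ten hypothetical years, monthly
y = rng.normal(0.0, 0.03, 120)     # no underlying trend at all
y[95:98] += 0.8                    # one short, large late-series spike

ols_slope = np.polyfit(t, y, 1)[0]       # least squares: pulled by the spike
ts_slope = stats.theilslopes(y, t)[0]    # Theil-Sen: median pairwise slope
print(f"OLS: {ols_slope*10:+.3f} C/decade  Theil-Sen: {ts_slope*10:+.3f} C/decade")
```

Here OLS reports a spurious warming trend from three anomalous months while Theil-Sen stays near zero, even though the underlying trend is exactly nil.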

  30. Steve McIntyre
    Posted Sep 25, 2006 at 3:44 PM | Permalink

    #28. It is so pathetic, isn’t it.

  31. Kevin
    Posted Sep 25, 2006 at 3:52 PM | Permalink

    Very sloppy. (But then, I understand the EPA presumes a linear dose-response function to set safe levels. This is even more stupid, since dose-response functions will always be logistic except in the middle dose ranges.) Steve, these data have been centered; do you know if the raw data are available?

  32. bender
    Posted Sep 25, 2006 at 3:58 PM | Permalink

    Re #31 OT. But aren’t EPA dose-responses calculated on a probit scale? That would solve the nonlinearity issue, would it not?

  33. Kevin
    Posted Sep 25, 2006 at 4:26 PM | Permalink

    Re 32: As I recall from various sources, it’s a crude linear extrapolation developed ages ago. I have heard there is some movement to revamp the method, but as far as I know they’re still doing it the old way. In any event, they should be using logistic regression.

  34. bandwidth
    Posted Sep 25, 2006 at 4:35 PM | Permalink

    Probit regression goes back to Finney (1971). Their method is older than that?

  35. Ken Fritsch
    Posted Sep 25, 2006 at 6:41 PM | Permalink

    Except that Rex Grossman is your quarterback, and as a Gator I can tell you he is truly under-rated. 3-0 is a trend.

    Two forced passes for interceptions is a small sample as is a 4th quarter TD pass. I want to believe, and especially so in Rex’s case, but then it is never that simple and especially if one pays attention to statistics.

  36. bender
    Posted Sep 25, 2006 at 6:49 PM | Permalink

    Re #35. The sample is small, but your inference is correct, point well-taken. Rex’s decision making will improve as he matures. Remember, he never played his senior year. You want to score touchdowns, not field goals? Your receivers need to get better on those timing routes. Way too many communication breakdowns. Should have been 35-16.

  37. John Cross
    Posted Sep 25, 2006 at 6:57 PM | Permalink

    #19 Steve: without meaning to open up many cans of worms, there is a difference. According to Wegman and others changes to MBH will not change our understanding of global warming. However the issue of current warming of the troposphere is important especially if it is not what is expected.

    #28 Spence: I see, a change from 0.08 to 0.13 (IIRC) is not important as long as you can produce a graph where it looks small.

  38. Jeff Weffer
    Posted Sep 25, 2006 at 6:57 PM | Permalink

    What happened in 1998 besides El Nino? The spike seems to be such an outlier that more investigation of this figure and its cause is called for.

  39. Pat Frank
    Posted Sep 25, 2006 at 7:14 PM | Permalink

    #37 — What understanding of global warming? GCM confidence limits have never been calculated; or, at least, published. That degree of carelessness could be a first for physics.

  40. nanny_govt_sucks
    Posted Sep 25, 2006 at 7:14 PM | Permalink

    How is it that all that extra well-mixed CO2 in the atmosphere somehow can’t raise the temperature in the Southern hemisphere?

  41. John Cross
    Posted Sep 25, 2006 at 7:21 PM | Permalink

    #39 Pat: I am not sure if that is what Dr. Wegman had in mind when he said it. You could always e-mail him.


  42. nanny_govt_sucks
    Posted Sep 25, 2006 at 7:26 PM | Permalink

    #37, #41: So Wegman’s word is gospel when it comes to Global Warming, but his statistics can’t be trusted?

  43. Tim Ball
    Posted Sep 25, 2006 at 7:34 PM | Permalink

    Who wrote the Wikipedia entry on this subject? As I understand it, anyone can alter Wikipedia entries, and although this is noted, most people are not aware of it. A potentially good idea has been politically usurped. I also understand there is an active campaign by AGW proponents to make sure the entries about climate and related matters fit their view. In my opinion Wikipedia is a completely unreliable, manipulated source of information.

  44. Paul Williams
    Posted Sep 25, 2006 at 7:51 PM | Permalink

    #21 Lubos, that doom is due by Christmas 2015, not 2016. Al, et al, have been banging on about doom in ten years for at least a year now. Unless it’s a non-standard, moving average type of ten years, that is.

  45. Steve Sadlov
    Posted Sep 25, 2006 at 7:55 PM | Permalink

    RE: # 21 – Global warming may be a land / continent thing.

  46. Pat Frank
    Posted Sep 25, 2006 at 8:26 PM | Permalink

    #41 — It doesn’t matter what Wegman meant, John, I was reacting to the generalized presumption that we understand global warming. We don’t.

    #26 — Try blowing your nose before posting, Steve B.; it might help your prose. Excluding such concepts as “ExxonMobil-funded FUDtanks” from your method of thinking might improve your analytical acuity. The correction to S+C didn’t help your AGW charade, because even after it the troposphere still wasn’t warming fast enough for you (the rate needs to at least double). And even if the rate were faster your position would be scientifically untenable because GCMs are not adequate to falsifiably predict climate response to small forcings. And apparently no one is checking their confidence limits, surprise, surprise.

  47. TCO
    Posted Sep 25, 2006 at 9:01 PM | Permalink

    46. If the satellite data comes to match what models predict, that doesn’t help the AGW case at all?

  48. Steve Bloom
    Posted Sep 25, 2006 at 9:01 PM | Permalink

    Re #46: “rate needs to at least double”… Wrong.

  49. Nicholas
    Posted Sep 25, 2006 at 9:11 PM | Permalink

    Interestingly, when I look at the NH graph, it seems to me that the temperature was basically flat between 1979 and 1998 and then after that big El Nino it stayed high. I think if I were to put straight line(s) through that graph I would put a flat one along 0 between 1979 and 1998 and then another flat(ish) one at about +0.2 from 1998 onwards.

    Is there some physical explanation for that? It certainly doesn’t seem to correlate well with the theory of steady warming over the period, as you would expect if the steadily rising CO2 levels are driving it. I suppose you could argue it’s some kind of bistability which has been exposed by forcing(s)?

  50. Pat Frank
    Posted Sep 25, 2006 at 11:15 PM | Permalink

    #48 — “Wrong.”

    Am I? Let’s see. Here’s a link to the April 2006 final global temperature re-analysis report by Karl et al. (caution: automatic 9 MB download). Let’s first look at the surface and lower tropospheric trends in Figure 3, page 12 (Executive Summary), predicted and measured, in Celsius.

                    Predicted           Measured

    Global Average
      Surface:      0.17 (+0.17/-0.12)  0.13, 0.13, 0.17 (Avg: 0.14)
      Lower Trop.:  0.20 (+0.2/-0.15)   0.09, 0.1, 0.1, 0.17 (Avg: 0.12)
    Tropics
      Surface:      0.16 (+0.21/-0.18)  0.12, 0.12, 0.13 (Avg: 0.13)
      Lower Trop.:  0.22 (+0.28/-0.22)  0.06, 0.04, 0.05, 0.13 (Avg: 0.07)

    Hmmm. Looks like about double to me, Steve B., except in the Tropics, where the Lower Tropospheric prediction is about 3x too large. Let’s also notice the very large variances (+n/-n) in the predicted values. The predictions were from 49 runs of 19 different models and varied all over the place. But somehow the errors partly cancel. Fortuitously, perhaps? Do we need more evidence that GCMs are unreliable?

    And Figure 5.6, page 114, plotting T(surface) vs. T(trop.) for two different tropospheric data sets: Not one of the sets of measured data actually lies on the predicted trend lines. And the e.s.d.s of the predicted values are almost invariably larger than the values themselves.

    Doesn’t look good for your side, Steve B.

    Here’s a real descent into humor: a side-bar comment on page 7 of the final Report: “Errors in observed temperature trend differences between the surface and the troposphere are more likely to come from errors in tropospheric data than from errors in surface data.” (bolding added)

    Incredible. How does Phil Jones do it?

    To add irony to fantasy, Richard Kerr, commenting on the report in the May 12, 2006 Science, wrote: “Global warming contrarians can cross out one of their last talking points. A report released last week … finds that satellite-borne instruments and thermometers at the surface now agree: The world is warming throughout the lower atmosphere, not just at the surface, about the way greenhouse climate models predict.”

    One might ask: Which models? What agreement?

    There’s another quote in Kerr’s article that bears considerable attention here in Steve M.’s contextual world. Kerr again: “Hashing out those differences [among analyses with colleagues] over the same table ‘was a pretty draining experience,’ says Christy.”

    So, tell us Steve B., how is it that a stooge of the “ExxonMobil-funded FUDtanks” who engages in “bait-and-switch” deceptions would sit down and work things out with his tough-minded dissenting colleagues, while Drs. Mann, Ammann, Wahl, (D. Div. notwithstanding) and Schmidt, saints of AGW all, obfuscate, stonewall, evade, and engage in hostile polemics; do everything, in short, but behave as their professional ethics demand?

  51. Steve McIntyre
    Posted Sep 25, 2006 at 11:23 PM | Permalink

    I’ve heard second-hand that comments were put in the CCSP report that Christy disagreed with and that he was not allowed to include dissenting comments.

  52. DeWitt Payne
    Posted Sep 26, 2006 at 12:46 AM | Permalink

    #49 — step change

    I noticed that too. I know that linear regression doesn’t really tell you that much, but…

    Monthly data from 12/1978 – 12/1997

    Global 0.02 C/decade
    NH 0.11
    SH -0.04

    7/2001 – 8/2006

    Global -0.11
    NH 0.22

    Looking at just the data for the North Pole and South Pole, the data show similar trends of slight cooling through 1993 and then diverge drastically. Note that the 1998 El Nino does not show up in the data from the poles.

    NoPol 0.86 C/decade
    SoPol 0.26

    Any ideas on what might have happened in the time frame of 1993-1994 that caused such a dramatic shift?

    Only the data from 2006 has been corrected using version 6.0, or at least that was my understanding from the readme. That’s why it’s _6.0p. The rest of the data will be reprocessed when version 6.0 of the diurnal corrections is completed sometime in the next couple of months.
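The “step change” reading in #49 and #52 can be checked crudely by comparing the residual sum of squares of a single straight line against a two-level step model with the break imposed where the eye puts it; both models spend two parameters. A sketch on synthetic data built to contain a step (invented numbers, not the MSU series):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 336                           # 28 hypothetical years, monthly
t = np.arange(n) / 12.0
y = rng.normal(0.0, 0.1, n)
y[240:] += 0.2                    # a step up after "year 20"

# Model 1: one straight line (intercept + slope).
b1, b0 = np.polyfit(t, y, 1)
sse_line = np.sum((y - (b0 + b1 * t))**2)

# Model 2: two flat levels with a break fixed at year 20.
m1, m2 = y[:240].mean(), y[240:].mean()
sse_step = np.sum((y[:240] - m1)**2) + np.sum((y[240:] - m2)**2)

print(f"SSE line: {sse_line:.2f}  SSE step: {sse_step:.2f}")
```

When the data really do contain a step, the step model wins on equal parameter count; on real data, though, the break point is usually chosen after looking at the series, so the comparison flatters the step model and should be treated accordingly.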

  53. Posted Sep 26, 2006 at 12:49 AM | Permalink

    51 has the ring of truth. I find it hard to believe that Christy would agree with the report. Roger Pielke Sr resigned from the CCSP, didn’t he? The CCSP can’t reconcile the surface temperature data with the satellite data until the warm bias has been corrected in the surface data.

  54. epica
    Posted Sep 26, 2006 at 1:03 AM | Permalink

    Was the sign error in the original MSU data actually Christy’s personal mistake, or was it simply made under his responsibility? Was the new data set again produced by him and Spencer alone, or did someone spend the time and have the patience to check their coding again? If, just as a thought experiment, we added the same error back onto the new data set, I think the trends would be in perfect agreement with the predictions. Do I see it right that predictions and MSU are already in agreement, within the error bars of each type of study? Interesting.

  55. Pat Frank
    Posted Sep 26, 2006 at 1:13 AM | Permalink

    #51 — That would be despicable, if true. On the other hand, had that happened to me in Christy’s place, and if the changes were truly objectionable, I’d have required my name to be taken right off the author list. Then, when people asked me why I wasn’t there, I’d explain that I was there, but had bailed because editorial insertions had ruined the objective honesty of the work.

    Let the chips fall…

    Christy left his name attached. Presumably, any imposed changes weren’t bad enough to set him off.

    By the way, I don’t know if she’s ever been openly acknowledged here, but IMHO, whatever else one may think, Sonja Boehmer-Christiansen deserves a very large vote of thanks for cracking open AGW to debate by publishing M&M 2003 and initially making the paper free access.

  56. Pat Frank
    Posted Sep 26, 2006 at 1:16 AM | Permalink

    #54 — I predict tropospheric warming of 0(+/-30) C. My prediction and MSU are in agreement within the error bars of each type of studies. Interesting?

  57. Posted Sep 26, 2006 at 1:30 AM | Permalink

    The Christy ‘error’ was much hyped:

    “A new article in Science by researchers Carl Mears and Frank Wentz from Remote Sensing Systems (RSS) identified a problem with how the satellites drifted over time, so that a slight but spurious cooling trend was introduced into the data. When this drift is taken into account, the temperature trend increases by an additional 0.035 degrees per decade, raising the UAH per-decade increase to 0.123 degrees centigrade. Christy points out that this adjustment is still within his and Spencer’s +/- 0.5 margin of error.”

  58. epica
    Posted Sep 26, 2006 at 2:32 AM | Permalink

    #56 Not really interesting, but very much confirming: a prediction of 0±30°C is in perfect agreement with someone who seriously claims there is no literature on error estimates of GCM simulations.

  59. Steve Bloom
    Posted Sep 26, 2006 at 2:51 AM | Permalink

    Re #50: Pat, if your representation of those figures is correct the conclusions of the report would be substantially inconsistent with its contents, which of course they are not. One might even characterize what you did as obfuscation.

    Re #53: Truthiness comes to Climate Audit! I suppose it was inevitable.

    Re #54: I don’t know that S or C have ever made it clear who originated the error, and I’m pretty sure nobody else has checked the new work. I think part of the concern expressed by the report with regard to data transparency is that RSS was unable to completely check S+C’s work so as to resolve the remaining discrepancy. It will be very interesting to see the response of the rest of the community if, after the over-polite manner in which S+C were treated throughout the report process, they rub salt in the wound of having effectively repudiated the report by also refusing to come clean with their data and methods. As I noted above, anyone here who thinks S+C’s new product is going to be treated respectfully under such circumstances may be in for a surprise.

  60. Willis Eschenbach
    Posted Sep 26, 2006 at 2:53 AM | Permalink

    Steve B, in #18 you say regarding Spencer and Christy and the UAH MSU analysis:

    Back to the new S+C results, I suspect that they will get much less attention than before due to the history of errors in their prior work

    And in #26, you say

    Re #20: One “small correction”? *snork* A couple of errors with pretty large consequences, as I recall. S+C’s stuff got a lot of attention mainly because they sought to portray their findings as an anti-AGW argument, underlined that stance by aligning themselves with the ExxonMobil-funded FUDtanks, and wound things up with the aforementioned bait-and-switch. Consider by way of contrast the treatment by all sides of the ocean cooling findings of Lyman et al.

    Setting aside your various ad-homs, I’d like to make a couple of points about Spencer and Christy:

    1) They have always been totally open about their procedures, their data, and their errors.

    2) All but one of the errors they identified themselves, and duly reported them.

    3) When the RSS folks found one error, they acknowledged it, and adjusted their dataset accordingly.

    “An artifact of the diurnal correction applied to LT
    has been discovered by Carl Mears and Frank Wentz
    (Remote Sensing Systems). This artifact contributed an
    error term in certain types of diurnal cycles, most
    noteably in the tropics. We have applied a new diurnal
    correction based on 3 AMSU instruments and call the dataset
    v5.2. This artifact does not appear in MT or LS. The new
    global trend from Dec 1978 to July 2005 is +0.123 C/decade,
    or +0.035 C/decade warmer than v5.1. This particular
    error is within the published margin of error for LT of
    +/- 0.05 C/decade (Christy et al. 2003). We thank Carl and
    Frank for digging into our procedure and discovering this
    error. All radiosonde comparisons have been rerun and the
    agreement is still exceptionally good. There was virtually
    no impact of this error outside of the tropics.”

    Note that, far from “pretty large consequences”, this is 0.035°C/decade. Also, note that they thanked RSS for finding the error. That’s how real science works, Steve, not the Piltdown Mann style …

    Here’s another:

    Update 8 April 2002 **********************

    Roy Spencer and I are in the process of upgrading
    the MSU/AMSU data processing to include a new
    non-linear approximation of the diurnal cycle
    correction (currently the approximation is linear).
    In preliminary results, the effect is very small,
    well within the estimated 95% C.I. of +/- 0.06
    C/decade. In the products released today, some
    minor changes have been included (though not the
    new non-linear diurnal adjustment). The 2LT trend
    is +0.053 C/decade through Mar 2002. The difference
    in today’s release vs. last month’s is a slight
    warming of monthly data after 1998. Essentially,
    this release corrects an error in the linear diurnal
    adjustment and produces better
    agreement between the MSU on NOAA-14 and the AMSU
    on NOAA-15. The single largest global anomaly
    impact is a relative increase of +0.041 (April 2001)
    while most are within 0.02 of the previous values.
    The net change in the overall trend was toward a more
    positive value by +0.012 C/decade. Again, this is
    still an interim change, and we anticipate a final
    version (“E” or “5.0”) next month.

    Now, you know what, Steve? The discovery and correction of these errors do not make me trust the S+C data less; they make me trust it more. Why? Because it’s been under the microscope, both by the creators and the detractors, and every error found so far has been fixed. That process leads me to more confidence in their results than in, say, the Phil Jones dataset, which he won’t release because he’s terrified that someone will find an error …


  61. Peter Hearnden
    Posted Sep 26, 2006 at 3:14 AM | Permalink

    ‘Nice’ ad hom of Phil Jones that, Willis.

    I wonder, since you seem to have time, if you might try excluding areas N and S of 70 degrees from the S&C record and showing us what you get? I’ve read not unconvincing arguments that ice and high plateaus cause problems with the S&C record.

  62. Steve Bloom
    Posted Sep 26, 2006 at 3:25 AM | Permalink

    Re #60: Willis, I don’t have the history memorized and don’t have time to look just now, but assuming the accuracy of your quotes

    “The new global trend from Dec 1978 to July 2005 is +0.123 C/decade” (current) and

    “The 2LT trend is +0.053 C/decade through Mar 2002”

    it does make it seem like a pretty big error over a very short time.

    Regarding open data and methods, who do you think the report’s recommendations were aimed at if not S+C?

    On the ad hom accusation, are you saying that my characterization of TCS and CEI is unfair?

  63. Willis Eschenbach
    Posted Sep 26, 2006 at 4:15 AM | Permalink

    Re #61, Peter H., you say:

    ‘Nice’ ad hom of Phil Jones that, Willis.

    Actually, it’s not an ad hom. It’s what he said.

    Steve B., what you are quoting is the difference in the trend between 2002 and 2005, not the size of the error. The error is 0.035°C/decade, so over the three-year period you point out, the error made a difference of 0.01°C/decade in the trend.

    Also, you say:

    On the ad hom accusation, are you saying that my characterization of TCS and CEI is unfair?

    Not at all, I make no judgement of the fairness of the characterization. I’m saying it’s irrelevant.

    Finally, Peter, per your request, the UAH data without the North and South Poles:


  64. Peter Hearnden
    Posted Sep 26, 2006 at 4:28 AM | Permalink

    Willis, I don’t recall Phil Jones ever using the word ‘terrified’ or implying that he was terrified (though you clearly do imply such – the ad hom: he’s so weak that he’s terrified), but I’d take your word for it.

  65. Willis Eschenbach
    Posted Sep 26, 2006 at 5:04 AM | Permalink

    Peter, you’re right … he didn’t say “terrified”. He said:

    We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.

    However, you seem to be working very hard to miss the point. The point is, in contrast to Phil Jones, S+C are open about their data, their methods, and the errors in their analysis. Perhaps, in that context, you would like to comment on the Jones quote above, rather than trying to distract the conversation.


  66. John Cross
    Posted Sep 26, 2006 at 5:37 AM | Permalink

    Willis, I commend S&C for being open about their data and information. However, you seem to have missed the episode where Spencer (and to a lesser degree Christy) raged against Fu because he published results claiming S&C didn’t account for the stratospheric cooling. Rather than investigate his work, they attacked him and the journal in which he published it.

    Speaking of which, I have not seen anything published that shows that we should not use the correction that Fu proposed. If we do use it then it significantly raises the trends.


  67. Anders
    Posted Sep 26, 2006 at 6:23 AM | Permalink

    #66, John Cross do you mean you could not find this one: Spencer, R.W., and J.R. Christy, 1992: Precision and radiosonde validation of satellite gridpoint temperature anomalies, Part II: A tropospheric retrieval and trends during 1979-90. J. Climate, 5, 858-866. ?

    Spencer did not attack the journal and Fu; he simply observed that what Fu had “discovered” was something Spencer and Christy had known about, and corrected for, for the past 13 years. He also noted that this could have been picked up and corrected before publication if he and Christy had been used as referees, as would be normal.

    If you have not seen anything else on Fu it might be because there was nothing else.

  68. beng
    Posted Sep 26, 2006 at 6:56 AM | Permalink

    RE 22: Jeff Norman writes:

    I know it came out of the Equatorial Pacific Ocean but why wasn’t this heat loss represented in the SSTs? Or was it? Or is it only just showing up now?

    Jeff, the way I look at it is the ocean conveyor redistributes warm water poleward (sinking) at the “cost” of cool, upwelling water in lower latitudes. The ENSO cycling of the SA west-coast upwelling should eventually balance somewhere. IOW, the big slowdown of cool upwelling in the east Pacific should eventually (a year?) cause an equal slowdown of high-latitude sinking & thus poleward movement of warm water. That may partially explain the quick cool-down after the 1998 El Nino (another way would simply be radiation to space).

    Perhaps this “cooling down” (less sinking) may be continuing in the SH since 1998 & helping to cause its slight cooling trend since then (w/the NH incurring more sinking, which might explain the North Atlantic warmth).

  69. Paul Linsay
    Posted Sep 26, 2006 at 7:05 AM | Permalink

    #66: As I understand it, Fu tried to separate the tropospheric and stratospheric contributions to the temperature data by estimating that the data in the satellite’s viewing angle was mixed as

    f*T + (1-f)*S

    where T is the tropospheric and S the stratospheric temperature. The only physical solution is f in the range [0,1]. He got f greater than 1, which turned a falling stratospheric temperature profile into an additional positive contribution to the troposphere.
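
    (A toy numerical illustration of why a weight outside [0,1] is unphysical; the trend values below are invented and are not from Fu’s paper. With a cooling stratosphere, f > 1 puts a negative coefficient on S, so stratospheric cooling is added back as apparent tropospheric warming.)

```python
# Toy illustration: weighted combination of a tropospheric trend T and a
# stratospheric trend S, as f*T + (1-f)*S. Trends in deg C/decade are invented.

T_trend = 0.10   # hypothetical tropospheric trend
S_trend = -0.50  # hypothetical stratospheric (cooling) trend

def mixed_trend(f):
    return f * T_trend + (1 - f) * S_trend

print(round(mixed_trend(0.9), 2))  # physical weight: cooling drags the trend down
print(round(mixed_trend(1.1), 2))  # f > 1: the cooling flips sign and inflates the trend
```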

  70. welikerocks
    Posted Sep 26, 2006 at 7:15 AM | Permalink

    “will not change our understanding of global warming”

    What is that?

    Arguing fractions of temps with less than 50 yrs of data over HUGE time spans; down-playing earth age along with the error margins (flawed data, less understood proxies, fudged data, hidden data, and bad statistics as well); throwing in the ad homs about industry and funding; wrapping all that up and calling it “understanding of AGW” as in “certain” (Is it really certain to an earth scientist with no emotional/political attachment?)

    Or is it the “understanding” of a person who holds strong political and social beliefs who happens to hold a scientific degree?

    Which reminds me… was it Christy who mentioned his time spent in a poorer country, and the need for modernization there? “Please don’t forget them,” he says to the Dem Senator at the hearing (who believed in global warming and was very snippy).

    I thought he was very sincere and brave at that point.

    Oh hey NASA don’t forget the Sun! Sheesh.

  71. bender
    Posted Sep 26, 2006 at 7:19 AM | Permalink

    Re #58

    The prediction of 0±30°C is in perfect agreement with someone claiming seriously there is no literature about error estimates of GCM simulation.

    epica, could you provide a few references illustrating how error and propagation of error is studied in these GCMs? This question has come up before, so it’s clearly an important one. Thanks!

  72. John Cross
    Posted Sep 26, 2006 at 9:14 AM | Permalink

    #67 Anders: Thanks, yes Spencer did bring that paper up, but in doing so he showed that he did not really understand Fu’s methodology. If you look at what he says after a time, his response is much more mild. From TechCentralStation Spencer says:

    As is often the case, the press release that described the new study made claims that were, in my view, exaggerated. Nevertheless, given the importance of the global warming issue, this line of research is probably worthwhile as it provides an alternative way of interpreting the satellite data.

    So there is more on the topic but nothing (as I said) as to why we shouldn’t use it.


  73. John Cross
    Posted Sep 26, 2006 at 9:23 AM | Permalink

    #69 Paul, the paper I was thinking about was the first one in 2004 by Fu (Contribution of stratospheric cooling to satellite inferred tropospheric temperature trends). In it he used the radiosondes to derive a better weighting function for the satellites. This weighting function had a negative part which was used to counter the contamination from the stratospheric cooling.

    What you are describing sounds like the old S&C method and I am not aware of Fu using it, but I have not followed his work in the past year or two. If you have a reference I would appreciate it.


  74. Pat Frank
    Posted Sep 26, 2006 at 11:22 AM | Permalink

    #58 — Suppose you direct me to a published study in which the parameter uncertainties have been propagated through a GCM calculation to yield some sort of confidence limit.

    Ensemble comparisons are not error analyses. They are studies of calculational divergence. Likewise initial conditions studies and reproducibility studies: Calculational divergence. They are a measure of (un)reliability, not the error metric.

  75. Pat Frank
    Posted Sep 26, 2006 at 11:25 AM | Permalink

    #59 — Suppose you go to the report and show where my numbers in #59 are incorrect, Steve B. If you can’t do that, or won’t, your criticism is just more of the hot air you spend your life worrying about.

  76. Pat Frank
    Posted Sep 26, 2006 at 11:37 AM | Permalink

    #75 — meant to refer to the ‘numbers in #50’ not 59.
    #71 — should have read ahead. 🙂

  77. bender
    Posted Sep 26, 2006 at 11:41 AM | Permalink

    Re #76
    Your clarifications in #74 as to what we are after are helpful. Calculational divergence is a separate, but equally horrifying, issue.

  78. Sam
    Posted Sep 26, 2006 at 9:43 PM | Permalink

    Re #60 Willis:

    The hair cutting reminds me of a recent point I tried to make in our local rag. I was trying to demonstrate how Warmers cherry-pick the facts they will use when they find them advantageous, but ignore the same things when they are inconvenient. For instance, I mentioned how in 2004 the record numbers of tornadoes confirmed in Kansas/Nebraska were trumpeted as a sign of AGW. This year, however, the complete absence of any confirmed tornadoes in the same region through the end of June was completely ignored. Following this observation, one of the Warmer faithful chastised me by claiming to have searched on-line and found the “report” of one possible tornado in March. She was very proud that she had rebutted my point.

  79. Thomas Palm
    Posted Sep 27, 2006 at 9:37 AM | Permalink

    Has anyone audited Spencer & Christy’s algorithms yet? If not, why should we trust them?

  80. Steve McIntyre
    Posted Sep 27, 2006 at 9:56 AM | Permalink

    #79. No one should "trust" results. If, after all the blue chip panels, NRC, IPCC, CCSP – you don’t know whether any of them audited Spencer and Christy results, then it must be your view that the climate science verification process is in total disarray.

  81. Dave Dardinger
    Posted Sep 27, 2006 at 10:12 AM | Permalink

    re: #79,

    The question isn’t whether anyone has, but whether anyone could. I am quite certain that if anyone had tried to audit them and run into the sort of problems Steve M has in getting the proper information, that Steve would have been happy for such a person to post the facts here and would then expect S&C or someone associated with them to come along and either expedite the matter or rebut the claim.

  82. Paul Linsay
    Posted Sep 27, 2006 at 10:39 AM | Permalink

    #73, John, I only read the first Fu paper. My comment was a simplified description of what Fu did. It is unphysical to have a negative weighting function.

    #79. Thomas. Has there ever been a thorough audit of the ground based temperature data and algorithms? There are examples of weather stations located in parking lots, on roofs of houses, in alleys and other inappropriate places. Land use changes have a large effect on nearby weather stations. Hans Erren [I think he’s the one!] has a plot of UHI corrections to temperature data that turns a cooling trend into a warming trend. The ground based temperature data covers maybe 15% of the earth’s surface, it certainly has no coverage of the 70% covered by the oceans. The list of problems goes on and on, yet what is essentially a Mom and Pop home based weather network is treated as the gold standard by the AGW community. Meanwhile the modern hi tech weather satellite with 100% earth coverage, which uses the same technology that is used to measure the radiation from the Big Bang, is constantly trashed. This is science?

  83. Douglas Hoyt
    Posted Sep 27, 2006 at 10:42 AM | Permalink

    I seem to recall that Spencer and Christy turned over all their code to Wentz and others to examine and that is how Wentz found an error. So that would constitute auditing.

  84. glrmc
    Posted Sep 27, 2006 at 1:59 PM | Permalink

    Re #49, and the related posts about the 1998 El Nino temperature seeming to be an unusually large outlier. These types of comment have been made before, and perhaps need exploring more carefully.

    Hansen et al.’s new PNAS paper has produced a 2005 T-Rex spike higher than for 1998. This (i) conflicts with both the Hadley version of T-Rex and the new S&C MSU time-series under discussion here; and (ii) “depends on the positive polar anomalies” which were in part based on estimated values “up to 1,200 km from the nearest measurement station”.

    In other words, Hansen et al.’s high 2005 result is a contrived figure. Whether or not the contrivance is justified might be said to be a matter of opinion. But, given that prior to publication of the PNAS estimate T-Rex had flatlined for seven years, at the very least there was strong subliminal pressure to produce such a figure.

    Taking oneself back to 1998-99, AGW alarmists were looking at a T-Rex that had been rising for a number of years and then had a major anomaly superimposed on it. Similarly to now, strong AGW subliminal pressure was present, in this case to “make the most” of the El Nino benefice.

    Would it be stretching credulity to imagine that in statistically treating the 1998 anomaly some new data manipulations were introduced? And that those manipulations were then maintained in succeeding years, thus producing not only a high 1998 figure but also the upward step-shift in T-Rex across 1998?

    Note that I am NOT imputing motive in asking this question. Handling the exceptional 1998 event could well have posed statistical problems that caused changes in data handling that seemed entirely justifiable at the time.


  85. John Cross
    Posted Sep 27, 2006 at 6:37 PM | Permalink

    RE # 82: Paul, I went back through what I refer to as the first Fu paper (Contributions of stratospheric cooling …) and I do not agree with your interpretation of his methodology. Essentially what Fu did was to take the channel 4 and channel 2 MSU data but use the radiosondes to develop an effective weighting function in the 350 – 800 hPa range.

    This effective weighting function does have a negative part but that should be no surprise since he is combining channels and attempting to remove a portion of the signal. Fu mentions this in his paper which is why he calls it an effective weighting as opposed to a physical weighting.
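
    As a sketch of the distinction, combining two channel trends with a deliberately negative coefficient on the stratospheric channel looks like this (all numbers and coefficients below are invented for illustration; they are not Fu’s published regression weights):

```python
# Sketch of combining two MSU channel trends. Channel 2 (mid-troposphere)
# is contaminated by stratospheric cooling; channel 4 is mostly stratospheric.
# Trends and coefficients here are illustrative, not Fu's actual values.

ch2_trend = 0.09   # hypothetical channel-2 trend, deg C/decade
ch4_trend = -0.40  # hypothetical channel-4 (stratospheric) trend

# "Effective" weighting: the negative coefficient on channel 4 subtracts the
# stratospheric signal mixed into channel 2, rather than averaging it in.
a2, a4 = 1.15, -0.15
corrected = a2 * ch2_trend + a4 * ch4_trend
print(round(corrected, 3))  # larger than ch2_trend once the cooling is removed
```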

    While your second point was not addressed to me, I do agree with it. However, I will note that of the three main groups doing this, S&C produces the lowest increase.

    Finally, to anyone wondering about an “audit” of S&C, from what I know they have been quite open with their data and methodology.

  86. Paul Linsay
    Posted Sep 27, 2006 at 6:56 PM | Permalink

    #85. John, I still think Fu is fubar. [Couldn’t let the opportunity pass!]

  87. Posted Oct 1, 2006 at 3:51 AM | Permalink

    I don’t think anyone has seen S+C’s code; the lack of interest in auditing it here is quite amusing.

  88. Willis Eschenbach
    Posted Oct 1, 2006 at 3:58 AM | Permalink

    William, why should we audit S&C’s code when RSS (and others) have put thousands of hours into that very task already? Perhaps you’d like to comment on the lack of auditing of the ground temperature stations by AGW proponents, which I find “quite amusing”, and on Phil Jones’ refusal to release his data … in contrast to S&C …


  89. Posted Oct 1, 2006 at 6:06 AM | Permalink

    WE: you are confused. RSS haven’t audited S+C’s code, because they haven’t seen it. Unlike, well, MBH’s code. S+C release data in exactly the same manner as Jones: they release the final product only.

    I don’t know quite what you mean about the ground stations: but if you mean the UHI, this has been extensively studied. There is a nice paper by Peterson, available via the wiki UHI article, if you haven’t already read it. And of course the IPCC provides a nice summary of the research as of 2001.

  90. TCO
    Posted Oct 1, 2006 at 7:32 AM | Permalink

    William: Do you approve or disapprove of Jones’s refusal to disclose the source data and calculations?

  91. Steve McIntyre
    Posted Oct 1, 2006 at 7:58 AM | Permalink

    #89. William – is there nothing that you won’t say? "unlike MBH code" — Mann archived his code only after a congressional committee asked him for it. Even then, he didn’t archive code that worked and didn’t archive all his code – for example, nothing on confidence interval calculations, nothing on Preisendorfer’s Rule N. The code did show that Mann calculated the verification r2 statistic, which, subsequently, he denied calculating.

    It’s my understanding that S&C provided their code to Wentz. If that understanding is wrong, as you suggest, then my question is: how many blue chip reviews have there been trying to reconcile surface and tropospheric temperatures? You mean that, after all of this, no one has looked at S&C code? If that’s the case, these supposedly blue-chip panels have consisted of "pathetic amateurs", to borrow a phrase, and proper due diligence is long overdue.

  92. Peter Hearnden
    Posted Oct 1, 2006 at 8:06 AM | Permalink

    Re #91, ‘you people’, ‘pathetic amateurs’. I thought this place had rules?

  93. Steve McIntyre
    Posted Oct 1, 2006 at 8:14 AM | Permalink

    #92. I’ve edited my comment to make the comment about the panels in their public capacity, rather than people as posters on the board. If after all the time and money and policy concern, no one’s checked the S&C code, it simply takes my breath away. Having said that, I believe that the code has been sent to Wentz and Connolley is simply spreading disinformation.

  94. Posted Oct 1, 2006 at 8:38 AM | Permalink

    I don’t see any evidence it went to Wentz, though I suppose it might have. What I recall from the text of the Mears et al Science paper is that they found it by working through what the output was supposed to be, not by looking at the code. As for all your whinging about MBH – at least the code is there; S+C’s still isn’t publicly visible. If your language is anything to go by, you seem rather sensitive on this issue.

  95. TCO
    Posted Oct 1, 2006 at 9:16 AM | Permalink

    William, do you approve or disapprove of Jones refusal to share source data and methods for his instrumental temps?

  96. Posted Oct 1, 2006 at 9:41 AM | Permalink

    Dear TCO, I think people (for example S+C) should make their code publicly available, and that applies to Jones too. Whether Jones does share his code, I don’t know. But the data Jones uses is all from-other-people: none of it is his own (just like the input MSU data; it would make no sense to ask S+C to share that, it’s up to the people that generated it).

    Do you approve or disapprove of S+C not making their code public?

  97. TCO
    Posted Oct 1, 2006 at 9:44 AM | Permalink

    William: so you agree that all should share their data and methods. Could you please go look at the Jones behavior and see if they are complying? I just want to make sure that you hold each side to the same standard. (BTW, I am completely comfortable with holding my side to that standard, am in the process of beating Steve up.)

    Steve: TCO, all code for our articles was archived at the time of publication. The code has been operated by third parties. If there’s something that you don’t understand about the code, that’s a different issue than its availability. I’ve never suggested that Mann had an obligation to provide tutorials on his methodology. One of the reasons for providing source code is so that information on methodology that the author may not have explained in the text can be observed by subsequent readers. If you read Gary King or McCullough and Vinod, you’ll get the idea.

  98. Steve McIntyre
    Posted Oct 1, 2006 at 10:15 AM | Permalink

    #94. Mann did not archive all his code, only a portion. "Whinging" – excuse me. Mann refused to disclose his code. It took a request from a congressional committee and then he only archived a portion of it under protest. For you to claim that Mann has behaved more appropriately than Spencer and Christy is completely laughable. This is a guy who denied even calculating the verification r2 statistic to the NAS Panel, who, to their discredit, let the falsehood pass.

  99. Posted Oct 1, 2006 at 10:40 AM | Permalink

    Dear William #96,
    Spencer and Christy periodically post their data on the internet, which is incidentally very useful for observing the climate in real time (e.g. Steve Milloy uses them). Whether or not you can easily find their full codes, the availability of their work is higher than the average, while the degree of co-operation in the hockey team is almost certainly below the average.


  100. Posted Oct 1, 2006 at 10:43 AM | Permalink

    S+C’s code is still not publicly available. It may have been disclosed to W+S, but for that we seem to have nothing but your “understanding”, which knowing your love of audit, I’m sure you wouldn’t want anyone to put any weight on.

    Lubos – you’re confusing input and output. AFAIK, no-one has ever complained about MBH output not being available.

  101. Steve McIntyre
    Posted Oct 1, 2006 at 11:05 AM | Permalink

    #100. Christy told me that he had sent the code to Wentz and it is my understanding that he filed correspondence to the House Energy and Commerce Committee showing that, when challenged on it by Waxman. Having said that, I quite agree that my information is second-hand and that confirmation on whether the S-C code was provided to Wentz-Mears should come from the principals. Having said that, I think that you’re going to find that it was provided and that your comments on this are wrong. I presume that you just “winged” your comments.

    I have complained repeatedly and vociferously about MBH output not being available. (1) The results of the individual steps are still not available. Mann refused to provide this and Nature refused to require him to do so. If you can obtain the results of the individual steps, I’d appreciate it. (2) The verification r2 for the individual steps as calculated by Mann is not available. (3) The output from the CENSORED calculations was never made available.

    I’ve complained about each of these.

  102. TCO
    Posted Oct 1, 2006 at 11:06 AM | Permalink

    William: People have complained about Mann code not being available and he told reporters that showing the algorithm would be “giving in to intimidation”.

  103. TCO
    Posted Oct 1, 2006 at 11:08 AM | Permalink

    Steve, when you say that the “code has been operated by third parties” does that mean that third parties have repeated your ARFIMA work?

  104. Posted Oct 1, 2006 at 12:04 PM | Permalink

    I meant “output” in the sense that Lubos was confused about: that of the final product. S+C make this available, as MBH do, but none of the intermediate steps are.

    Code: would be nice to know for sure. Is there any reason S+C don’t just dump it on their web site? You do for yours (and this is commendable).

  105. TCO
    Posted Oct 1, 2006 at 12:06 PM | Permalink

    Come on Bill: The Jones failure to disclose methods is very well known. Why can’t you examine and comment on that?

  106. Mike Carney
    Posted Oct 1, 2006 at 12:08 PM | Permalink

    William you have an amazingly aggressive point of view for a wiki editor. You have already told Willis he was confused when you had no evidence that the code was not shared with RSS. At the Wegman congressional hearings Christy testified that he did share code with Wentz. Further, you stated that Christy only provides final numbers — again without evidence. In the same hearing, under oath, Christy stated that he provided intermediary results to Wentz so he could further check the work. To be complete, at the same hearing a letter from Wentz was produced that complained Christy did not provide enough code (but you might consider him whining at that point and I know you don’t like whining). Wentz did not refute that Christy had provided code and data.

    The basic equation is data + process = results. Jones provides only the results. In the case of satellites the data is available. So there is no complaint there. The idea that Jones shouldn’t produce his data set because it came from other sources is… silly. Reacquiring the data from the original sources would be an enormous, costly undertaking and probably some of the data is no longer available from the original providers. Jones was paid from the public coffers to acquire this data. It’s not available for what reason? World governments are preparing to spend trillions of dollars partially based on evidence from Phil Jones and he won’t provide data and methods for what reason?

    There are thousands of studies that Steve (I referring to the unpaid one) has not audited. The fact we can’t move on from the few he has undertaken is largely due to the effort which you and others spend denying results of those audits which have occurred. There have been problems and if those were admitted, data and process provided, the discussion would end. Other studies that you think need to be audited could be addressed. Or you could do the audit.

  107. bender
    Posted Oct 1, 2006 at 12:20 PM | Permalink

    Re #103
    I imagine he means what he says: other people have been able to access and run his code without assistance or oversight. I would suspect one of those people would be Wegman. Wegman didn’t report using the AR(p) models simulated via Hosking, but instead used a simpler low-order red noise model – an option contained in the M&M code.

    Here you are taking over yet another thread on this same topic. It was OT last time. It’s OT this time. This thread is supposed to be about satellite data. Post this stuff in an ARMA or Ritson thread.

  108. William Connolley
    Posted Oct 1, 2006 at 12:26 PM | Permalink

    If you can show me where C says he has shared code with W, I’ll be happy to change my mind. Mind you, that can be qualified: if the sharing occurred *after* the Aug 11th paper was published, that would be somewhat different. But it would be nice to be sure, so do provide a something a bit more precise so we can all check.

  109. Steve McIntyre
    Posted Oct 1, 2006 at 12:31 PM | Permalink

    #105. TCO, Connolley is staying "on-message".

    #104. The verification statistics are not an “intermediate product”. The output from the steps is not an “intermediate” product.

  110. TCO
    Posted Oct 1, 2006 at 12:35 PM | Permalink

    Is that something you want me to emulate or was that a subtle dig at him? Or both? Ya subtle, couldabeena Fields/theoretical finance Nobel medalist, ya. 🙂

  111. Posted Oct 1, 2006 at 1:02 PM | Permalink


    Incredible. How does Phil Jones do it?

    By removing the errors in surface data.

    Because of these non-climatic factors, both SST and MAT data must be corrected (or ‘homogenized’) to remove their effects.

    (Jones et al, Nature 322 July 1986)

  112. Pat Frank
    Posted Oct 1, 2006 at 1:14 PM | Permalink

    #108 — Why don’t you ask Christy yourself, William, or Wentz. Both men are available by email. Steve M. shouldn’t have to do your leg-work for you. Let us know what you discover.

  113. Steve McIntyre
    Posted Oct 1, 2006 at 1:25 PM | Permalink

    I’ve already sent an email to Christy. But I know what the answer will be because I talked to Christy in Washington after the E&C Committee hearings. He sent the code to Wentz.

    But I don’t know where Connolley thinks he’s going with this. Let’s suppose that Christy had never sent his code to anyone. How does this enhance confidence in climate science quality control? What have all these blue chip panels been doing?

  114. Hans Erren
    Posted Oct 1, 2006 at 1:37 PM | Permalink

    re 111:

    You can’t homogenise sparse regions (Africa, Siberia), because you cannot intercompare stations. Automated homogenisation is also error prone.
    Kriging already gives a fair approximation of uncertainty, based on sample density.
    Where are the uncertainty estimates in the global gridded data?
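
    A crude way to see the sample-density point (purely illustrative; real kriging derives its variance from a fitted variogram, and the station coordinates below are invented): treat the uncertainty at a grid point as growing with the distance to the nearest station, so sparse regions flag themselves.

```python
# Crude stand-in for a density-based uncertainty: distance to the nearest
# station. Station coordinates are hypothetical, for illustration only.
import math

stations = [(51.5, -0.1), (48.9, 2.3), (40.7, -74.0)]  # invented (lat, lon)

def nearest_station_km(lat, lon):
    """Great-circle distance (km) from a grid point to the closest station."""
    def haversine(p, q):
        r = 6371.0  # mean Earth radius, km
        la1, lo1, la2, lo2 = map(math.radians, (*p, *q))
        a = (math.sin((la2 - la1) / 2) ** 2
             + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))
    return min(haversine((lat, lon), s) for s in stations)

# A grid point near a station is far better constrained than one in a
# sparse region (here, central Siberia):
print(nearest_station_km(51.0, 0.0) < nearest_station_km(60.0, 100.0))  # True
```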

  115. Posted Oct 1, 2006 at 1:49 PM | Permalink


    Canvas buckets, unknown size and speed of ships, unknown locations… Uncertainty due to these factors would be interesting indeed. Significant effect or no?

  116. Steve McIntyre
    Posted Oct 1, 2006 at 2:01 PM | Permalink

    UC, I posted some comments on bucket adjustments and Reid (1991) also commented on this. The bucket adjustments do not appear to be incidental but have a definite function for IPCC.

  117. Spence_UK
    Posted Oct 1, 2006 at 3:32 PM | Permalink

    Of course, we should expect the same standard from all scientists; as we expect Mann to release his code, it is only right that we expect the same from Spencer and Christy. It would be interesting to confirm what did happen in this case, but from a decent source rather than trying to second guess from reading between the lines in the paper.

    In the interests of consistency:

    Mind you, that can be qualified: if the sharing occurred *after* the Aug 11th paper was published, that would be somewhat different.

    Not picking holes here, just for clarity – in what way would it be different? Likewise, I assume it is fair to apply this “difference” to Mann in the event he failed to provide his code until after the M&M, vS&Z or B&C papers were published? (Again, I don’t know whether his partial code release was before or after these papers, just want to make sure we have a level playing field)

  118. Steve McIntyre
    Posted Oct 1, 2006 at 3:47 PM | Permalink

    First of all, I completely agree that Spencer and Christy should share their code. If Connolley is correct and they haven’t, this should have been dealt with long ago. (However I’m sure that S&C provided code to Wentz long before the Aug 11 paper, if only because I talked to Christy in Washington in late July.) Mann’s partial release of code was after the MM papers and vS (2004) and after submission of the VZ Comment on MM (2005) and the BC paper.

    I guess that Connolley is now a supporter of Barton’s intervention – unlikely bedfellows to be sure – since without Barton, there would not even be the partial Mann code availability.

  119. Mike Carney
    Posted Oct 1, 2006 at 4:00 PM | Permalink

    William: Let’s see. You made a statement with no supporting evidence and now you will deign to “change your mind” if I provide detailed evidence to the contrary.

    Audio archive is at a house committee website. Skip over to 1:05:00 to hear Christy’s testimony. Here are a few quotes on RSS using some of Christy and Spencer’s work that Christy provided to RSS: “it would be fine to use and critique, that’s sort of what science is all about”, “if there was a mistake we wanted it fixed”, “expressed our gratitude to RSS for discovering our error”.

    Compare those to this quote from Phil Jones: “We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.” And he didn’t. Is there something wrong with it? We don’t know and clearly Phil doesn’t want to know.

    William, you want to beat up on Christy for not being forthcoming enough? He probably could make more information available. But make a believer out of me that you really care about data transparency. Should Jones release his data and methods?

  120. Posted Oct 1, 2006 at 4:16 PM | Permalink

    The chances of Connolley actually being evenhanded about data disclosure are indistinguishable from zero.

    After all, he has a reputation and a political career to maintain.

  121. Hank Roberts
    Posted Oct 1, 2006 at 5:07 PM | Permalink

    About the new satellite data?

  122. Posted Oct 1, 2006 at 5:12 PM | Permalink

    About anything in climate science.

  123. Hank Roberts
    Posted Oct 1, 2006 at 6:59 PM | Permalink

    Sorry, I wasn’t clear.
    I assume the link is still to draft files, but don’t see a timestamp; has there been any update on the new satellite data?

  124. Steve McIntyre
    Posted Oct 1, 2006 at 7:11 PM | Permalink

    Please note that William Connolley has a comment on this topic at his blog here .

  125. John Cross
    Posted Oct 1, 2006 at 8:51 PM | Permalink

    Re # 118: Steve, just to confirm: when you said you talked to Christy in Washington in July, did you mean July of 2005? That is when M&W published their paper.

    While this really does not say one way or another here is a quote from Spencer:

    “While their criticism of the UAH diurnal cycle adjustment method is somewhat speculative, Mears & Wentz were additionally able to demonstrate to us, privately, that there is an error that arises from our implementation of the UAH technique. This very convincing demonstration, which is based upon simple algebra and was discovered too late to make it into their published report, made it obvious to us that the UAH diurnal correction method had a bias that needed to be corrected.”

  126. Steve McIntyre
    Posted Oct 1, 2006 at 9:20 PM | Permalink

    I meant July 2006, so this particular date has no significance in the matter if the article was in Aug 2005. However, Christy said that he supplied code to Wentz. He did not do so immediately after Wentz’s first email request, but did so afterwards, and Wentz sent him an acknowledgement and thanks. Accordingly, Christy was very angry about questions from Waxman, to whom Wentz had sent the first email expressing reluctance.

    Again I reiterate to you that you are on the horns of a dilemma. Either you are falsely impugning Christy, or the verification procedures in climate science are an even worse joke than anyone previously thought.

  127. TCO
    Posted Oct 1, 2006 at 9:24 PM | Permalink

    There’s no horn of a dilemma, Steve, just the truth. It’s not a dilemma if the truth makes one look bad or fails to support some larger argument or objective. Truth is truth. Just let the chips fall.

  128. Posted Oct 2, 2006 at 12:30 AM | Permalink


    Interesting indeed. But maybe they are learning – check:

    Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones, 2006: Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. J. Geophysical Research 111, D12106

    Figure 12 a. (12b, sea temps are more accurate??)
    Figure 13. Land data 95% CI is 0.5 C. Same as Mann’s millennial.

  129. Posted Oct 2, 2006 at 1:47 AM | Permalink

    #119: I was hoping there was a written record. None of those quotes you provide indicate that Christy shared any code.

    By contrast, JC’s #125 does strongly imply no code sharing by that point.

    However, it would be nice to see something more definite, so if McI is emailing Christy it will be interesting to see the reply.

  130. bender
    Posted Oct 2, 2006 at 4:27 AM | Permalink

    Re #128
    That Figure 13 of the new land surface temperature data of Brohan et al. 2006 is interesting.

    The uncertainty increases fairly dramatically as one proceeds back in time, but even today it is significant. Also, for the most recent decade the old measurements (HadCRUT2) are at the high end of the new (HadCRUT3) 95% CI.

  131. bender
    Posted Oct 2, 2006 at 4:34 AM | Permalink

    Re #16
    Care to reconsider?

  132. John Cross
    Posted Oct 2, 2006 at 9:27 AM | Permalink

    Re #126: Steve, I don’t follow your post. Did you ever say that to me before? I don’t consider myself on the horns of a dilemma at all. I have not falsely accused Christy of anything (in fact, I stated before that I thought he had shared information; however, I am not so sure now). The only accusation I made of Christy was that of ignoring and downplaying Fu’s work, which is not a false accusation at all.

    In regards to open science, I do believe that science should be shared and in fact I am probably willing to take that further than most would be.

  133. Steve McIntyre
    Posted Oct 2, 2006 at 10:34 AM | Permalink

    #132. John C, the second paragraph was directed more at Connolley than at you and repeats a point made previously. If Connolley is seriously concerned with the lack of verification of key tropospheric calculations – and this is after a couple of NRC panels and IPCC reports – then the systems within climate science for validating work could hardly be more pathetically amateur.

  134. Michael Jankowski
    Posted Oct 2, 2006 at 1:50 PM | Permalink

    I had no idea Connolly was such an advocate for releasing source code! I wonder how many requests he sent to Mann?

  135. Michael Jankowski
    Posted Oct 2, 2006 at 2:02 PM | Permalink


    Why don’t you ask Christy or Wentz yourself, William? Both men are available by email. Steve M. shouldn’t have to do your leg-work for you. Let us know what you discover.

    [sarcasm on]I am sure Dano and the red-carded Dr. John Hunter were chomping at the bit to lambast and ridicule this poster for being so lazy and irresponsible as to not email Dr. Christy with his questions directly before raising them on internet blogs.[/sarcasm off]

  136. Posted Nov 20, 2006 at 4:01 AM | Permalink

    Version 6.0 (draft) has been withdrawn

    Update 10 Nov 2006 *******************************

    Notice that data products are back to version 5.2 for LT and 5.1 for MT and LS.

    We had hoped to solve the inconsistencies between NOAA-15 and NOAA-16 by this
    time, but we are still working on the problem. The temperature data for LT
    and MT are diverging, and we had originally thought that the main error
    lay with NOAA-15. However, after looking closely, there is evidence that
    both satellites have calibration drifts. We will assume, therefore, that
    the best guess is simply the average of the two. This is what is represented
    in LT 5.2, MT 5.1 and LS 5.1. These datasets have had error statistics
    already published, so we shall stick with these datasets for a few more months
    until we get to the bottom of the calibration drifts in the AMSUs. However,
    the error statistics only cover the period 1978 – 2004. The last two years
    cover the period where the two AMSUs are drifting apart, so caution is
    urged on the most recent data.

    I wonder if Steve could redo the data from version 5.2?
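    The readme’s merging step – taking the simple average of the two AMSU series as the best guess while their calibrations drift apart – can be sketched as follows. The anomaly values here are made-up illustrative numbers, not real UAH data:

    ```python
    import numpy as np

    # Hypothetical monthly LT anomalies (deg C) from the two AMSU instruments
    # over the same recent months; purely illustrative values.
    noaa15 = np.array([0.20, 0.25, 0.18, 0.30, 0.27, 0.33])
    noaa16 = np.array([0.24, 0.31, 0.26, 0.40, 0.39, 0.47])

    # Divergence between the instruments: where calibration drift shows up.
    divergence = noaa16 - noaa15

    # The readme's "best guess": the simple average of the two series.
    merged = (noaa15 + noaa16) / 2.0
    ```

    A growing `divergence` over the last months is the readme’s cue for caution on the most recent data; averaging splits the difference without deciding which instrument is at fault.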

  137. Steve McIntyre
    Posted Nov 20, 2006 at 1:51 PM | Permalink

    There are quite a few differences between 6.0p and 5.2. Unfortunately, I didn’t save the earlier data, but you can compare the graphics.

  138. Posted Dec 7, 2006 at 12:55 PM | Permalink

    If CO2 is the bugaboo it’s supposed to be, and if there’s supposed to be this direct link between CO2 concentration and air temperature, then how about the following: if the global data can be broken down into the NH and SH, can it not be broken down further into North America, Eurasia, Australia, etc., as well as the North Atlantic and North Pacific, without losing accuracy? And if that’s possible, then why not post a plot of these areas vs. industrial output [i.e., CO2 production] vs. air temps… or am I way off?

    thanks much

  139. Dave Dardinger
    Posted Dec 7, 2006 at 1:49 PM | Permalink

    re: #138 Hi, Doug.

    I’m afraid you are far off. The problem is that CO2 is a well-mixed gas, at least on an annual basis. There’s a bit of a lag between the NH and the SH, but it’s still not noticeable compared with the other natural cycles. And the GHG effect of CO2, whether large or small, depends on the total concentration of CO2, which is vastly larger than the amount added in any one year (the additional amount appearing in the atmosphere is about 0.5% per year). So you can only look for a CO2 signal on the scale of decades. The basic question separating warmers and skeptics is just how large this signal is. We skeptics say a degree C or a little more (for doubling the initial CO2 concentration). The warmers basically start with 2 degrees C and go up from there [depending on the fund-raising situation for the environmental group, political party or university we’re talking about. {grin}] And of course for warmers, any warming, whether summer/winter or day/night, is bad.
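    The per-doubling figures above can be turned into rough warming estimates via the standard logarithmic approximation ΔT = S · log2(C/C0). A minimal sketch, where S is the assumed per-doubling sensitivity and the concentrations (≈280 ppm pre-industrial, ≈380 ppm mid-2000s) are round illustrative numbers, not precise measurements:

    ```python
    import math

    def warming(c_now, c_ref, sensitivity):
        """Warming (deg C) under the logarithmic approximation:
        delta_T = S * log2(C / C_ref), S = per-doubling sensitivity."""
        return sensitivity * math.log2(c_now / c_ref)

    # Illustrative concentrations: ~280 ppm pre-industrial, ~380 ppm mid-2000s.
    # Sensitivities span the skeptic (~1 C) to warmer (2+ C) range in the comment.
    for s in (1.0, 2.0, 3.0):
        print(s, round(warming(380.0, 280.0, s), 2))
    ```

    At S = 1 °C per doubling, the 280 → 380 ppm rise gives well under half a degree, which is why the sensitivity value, not the concentration record, is where the argument lies.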

  140. John Toomay
    Posted May 1, 2007 at 3:07 PM | Permalink

    Having read all 100+ entries, I have found a lot of interesting comments. Thank you.

One Trackback

  1. […] Satellite data — An archive here.  Esp note here, here, here, here, and here. […]
