Opportunism and the Models

Many CA readers have probably been checking out some interesting posts at Lucia’s about Stefan Rahmstorf’s opportunistic smoothing of temperature observations in Copenhagen. See here, here and here at Lucia’s. Also see David Stockwell’s recent post here and his recent E&E paper on Rahmstorf et al (Science 2007) (Rahmstorf here). Also see the recent Copenhagen Synthesis Report here.
[Update – Jul 2] David Stockwell had an excellent comment on this issue in April 2008 here – see Rahmstorf comment at #14 (thanks to PaulM for drawing this to my attention).

Rahmstorf, a realclimatescientist (a literal translation of the compound German noun), stated in Rahmstorf et al 2007 that the trends were worse than we thought:

The data now available raise concerns that the climate system, in particular sea level, may be responding more quickly than climate models indicate… The global mean surface temperature increase (land and ocean combined) in both the NASA GISS data set and the Hadley Centre/Climatic Research Unit data set is 0.33°C for the 16 years since 1990, which is in the upper part of the range projected by the IPCC.

This was illustrated with the following figure, which has a couple of interesting features. First, the method of smoothing is described only as follows: “All trends are nonlinear trend lines and are computed with an embedding period of 11 years.”

Figure 1. Excerpt from Rahmstorf et al 2007. “All trends are nonlinear trend lines and are computed with an embedding period of 11 years.”

Commenters at Lucia’s and David’s state that Rahmstorf refused to disclose his smoothing method (which ultimately proved to be a sort of Mannian smoothing) on the basis that he did not hold the copyright. Eventually Jean S figured it out and his method was used in David Stockwell’s E&E paper. David has an R port of the method online (using an R package, ssa, presently unavailable for Windows).

Secondly, Rahmstorf has zeroed both models and observations on 1990. I recall some controversy about Willis Eschenbach zeroing GISS models on 1958; I have a vague recollection of Hansen’s dogs saying that this was WRONG. I don’t vouch for this recollection, but, if the events were as I vaguely recall, I don’t see any material difference in Rahmstorf’s centering here.

In David Stockwell’s E&E article, he observed that Rahmstorf’s method applied to updated GISS and CRU resulted in the smooth tapering off. See the online article for the following image. Obviously the tapering off diminishes the rhetorical impact considerably.

Figure 2. Stockwell’s extension of R2007 using the same methodology.

The new Copenhagen Report has an update of the Rahmstorf 2007 diagram (Rahmstorf is said to have added 2007 and 2008 data). The data is said to be smoothed over 11 years (as in Rahmstorf 2007):

Changes in global average surface air temperature (smoothed over 11 years) relative to 1990. The blue line represents data from Hadley Center (UK Meteorological Office); the red line is GISS (NASA Goddard Institute for Space Studies, USA) data… (data from 2007 and 2008 added by Rahmstorf, S.)

However, the Copenhagen diagram doesn’t taper off – a highly important difference in rhetorical effect from the updated version of the Rahmstorf diagram published by David Stockwell.

Figure 3. Rahmstorf’s extension – note the absence of the taper seen in the Stockwell version.

Jean S once again figured this out – Rahmstorf opportunistically changed the smoothing parameter to one that yielded an image that was rhetorically more effective and failed to disclose the change in accounting procedure, falsely reporting that he used the same parameter as in the prior article. This is what the “Community” calls “GARP” – Generally Accepted Realclimate Procedure.

I’ve done a few experiments comparing AR4 A1B models to updated CRU data. More on this tomorrow. I’ve done my own port of Rahmstorfian smoothing to R without using the ssa package – working from first principles. It uses nothing more complicated than svd and a quasi-Mannian padding. (Rahmstorf’s “copyright” pretext is absurd BTW. The Community really has to tell the Team to stop such nonsense.)
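For readers who want to experiment before tomorrow’s post, the guts of such an SSA trend extraction fit in a few lines. Here is an illustrative sketch in Python (to be clear: this is not my R port and not Grinsted’s ssatrend.m; the function name and details are for illustration only, and the endpoint padding is omitted). It embeds the series in a lagged trajectory matrix, takes the SVD, keeps the least oscillatory component using the |sum|/sum-of-|entries| selection rule quoted from ssatrend.m in the comments below, and diagonally averages back to a series.

```python
import numpy as np

def ssa_trend(x, L=11):
    """Illustrative SSA-style trend extraction: embed the series in a
    lagged (Hankel) trajectory matrix, take the SVD, keep the least
    oscillatory component, and diagonally average back to a series.
    No endpoint padding is done here."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory matrix: each column is an L-long lagged window
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Selection rule as quoted from ssatrend.m: the "trend" eigenvector
    # is the one with the largest |sum of entries| relative to the sum
    # of |entries| (a non-oscillatory column scores near 1)
    score = np.abs(U.sum(axis=0)) / np.abs(U).sum(axis=0)
    k = int(np.argmax(score))
    # Rank-one component for the selected eigenvector
    Xk = s[k] * np.outer(U[:, k], Vt[k])
    # Diagonal (Hankel) averaging: average Xk over its anti-diagonals
    trend = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            trend[i + j] += Xk[i, j]
            counts[i + j] += 1
    return trend / counts
```

Nothing exotic: svd plus bookkeeping, which is why the “copyright” pretext is so hard to take seriously.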


  1. Richard Henry Lee
    Posted Jul 1, 2009 at 11:13 PM | Permalink

    The selection bias in climate studies has been an ongoing problem, yet the peer reviewers for scientific journals continue to fail to do their job. Thanks to Steve and the many other talented people he inspires who are trying to make sure we get the science right. Kudos all around.

    If Diogenes had been alive today, he would have found an honest man in Steve McIntyre and the many others who have the same intellectual curiosity on this and other similar blogs.

  2. theduke
    Posted Jul 1, 2009 at 11:26 PM | Permalink

    This is what the “Community” calls “GARP” – Generally Accepted Realclimate Procedure.

    I wonder if the Germans have a compound noun for that particular method.

    When I was in college studying history and language, I had an expository writing professor who would use the term “gorp” to describe lackluster writing. I had no idea what it meant at the time, and assumed it meant something akin to garbage. I’ve since learned that “gorp” is an acronym for “good old raisins and peanuts.” In other words, it means trail mix. A substitute for sustenance in the absence of a hearty meal.

    “Garp,” “Gorp,” whatever.

  3. pete m
    Posted Jul 1, 2009 at 11:34 PM | Permalink

    a realclimatescientist (a literal translation of the compound German noun),

    hehe – can’t wait for The Community to come here pointing this out as the reason for their (past) lack of co-operation with you.

    Here’s a tissue in advance + 1.

    3-4 years more of the current trend will see this sort of nonsense disposed of once and for all. You cannot ignore a 10 year trend and claim “recent” events have caused etc.

    Nice work, as always.

  4. anonymous
    Posted Jul 2, 2009 at 1:57 AM | Permalink

    Obviously they need to start the temperature trend from 1970 to get the necessary scare slope – it’s incredible that even non-scientific journalists don’t point out that the trend from, say, the 1940s instead is much, much smaller.

  5. Hal
    Posted Jul 2, 2009 at 2:07 AM | Permalink

    duke #2
    “I wonder if the Germans have a compound noun for that particular method. ”

    You could make a compound word out of it:


    But it would generally be broken up in common usage:

    Allgemein anerkannte realklimatische Behandlungsmethode

    Alternately, one could say:

    Allgemein anerkannte realklimatische Prozedur

    but the AARP would claim precedence for that acronym.


    Reading Rahmstorf’s bio, he sure has made a good living from AGW. Who would want to give that up?

  6. Spence_UK
    Posted Jul 2, 2009 at 2:44 AM | Permalink

    I see the Texas Sharpshooters are back in town.

    Good work by David, Lucia and JeanS.

    What do you mean when you say the observations are zeroed on 1990? The graph doesn’t seem to show that. Have I missed something?

    • Spence_UK
      Posted Jul 2, 2009 at 2:49 AM | Permalink

      Re: Spence_UK (#7),

      OK I’m a bit slow this morning, I realise now the trends are pinned at 1990 – not quite the same as pinning a single year, although it still offers scope for a very small amount of fine-tuning. Lots of small incremental tuning steps are easier to argue around, as the team can isolate each step and insist that this alone “doesn’t matter” (ignoring the rather large resulting bias when all steps are taken together).

      • Ulises
        Posted Jul 2, 2009 at 5:39 AM | Permalink

        Re: Spence_UK (#8),

        There is a difference between David’s plot and the other ones. In Fig. 2 above, the year ticks, the data points and the crosshair all align, but the data points are not zero in 1990. In the other ones, ticks and crosshair match, but the data points appear to be placed with a half-year shift relative to the ticks. So y=0 does not then hold for the 1990 measurements, but for an interpolation between the 1989 and 1990 values, maybe to account for a difference between mid-year (the midpoint of an all-year measurement) and Jan 1 (the starting point of the IPCC forecast?). If the grey forecast funnel is placed accordingly, it’s not a problem.

        Additionally I’d like to note that IMHO the y-axis legend in the plots is wrong : It should not be ‘Temperature Change’, for which I would expect a difference or derivative, but ‘Temperature Anomaly’.

        • Jean S
          Posted Jul 2, 2009 at 6:02 AM | Permalink

          Re: Ulises (#10),
          Yes, Rahmstorf is plotting the anomalies with a half-year shift (i.e., the anomaly for 2000 is plotted at 2000.5). As I said in #9, the y=0 condition only holds for the “trends”, but the same (corresponding 1990 trend) value is also subtracted from the measurement anomalies. I.e., the 1990 anomaly is not forced to be zero, but measurements are relative to the 1990 trend value, whatever the meaning of that is.

          For the other concern of yours, you should contact Stefan Rahmstorf 😉
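          In code terms, the centering described above amounts to the following (an illustrative Python sketch; the function and variable names are mine, not Rahmstorf’s):

```python
import numpy as np

def center_on_1990(years, anomalies, trend):
    """Subtract the smoothed trend's 1990 value from both the trend and
    the raw anomalies: the trend then passes through zero in 1990, but
    the 1990 anomaly itself need not be zero."""
    offset = trend[np.where(np.asarray(years) == 1990)[0][0]]
    return anomalies - offset, trend - offset
```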

    • Jean S
      Posted Jul 2, 2009 at 3:25 AM | Permalink

      Re: Spence_UK (#7),
      Thanks! The “trend” value (1990) is also subtracted from the observation anomalies.

  7. schnoerkelman
    Posted Jul 2, 2009 at 5:49 AM | Permalink

    Ah yes, the world according to GARP. In the original, Irving concludes “In the world according to Garp, we are all terminal cases.” In the sequel I guess it’s “We’re all gonna fry!” 🙂

  8. Posted Jul 2, 2009 at 6:07 AM | Permalink

    Steve, I’m glad you have picked up on this (I’d call it Copenhagen Report Averaging Procedure). You have so far only reported a fraction of the story, which is in fact much worse. Here is a summary.

    Rahmstorf et al (2007) did not explain how they handled the endpoint problem. They referred to a paper by Moore et al, but that does not explain the method, it merely says ‘a variation on the minimum roughness criterion’ (MRC).

    In April 2008 David Stockwell observed that Rahmstorf was not using MRC, since his smoothed curve did not go through the final point, and wondered what was being used. Rahmstorf broke the RC convention and responded to a skeptic blog, but did not reveal the algorithm and made a number of false statements: (a) that the algorithm was “distributed in the matlab file ssatrend.m” (nobody has been able to find this); (b) that he did not use padding (you have to use padding near the endpoints); (c) that the conclusions don’t depend on the choice of algorithm (simply false); (d) that only the last 5 years are affected (in fact it is 11); (e) that the algorithm is described in the Moore et al paper.

    Now for figure 3 of the new Copenhagen report.
    As you say, the caption says it is “smoothed over 11 years”. In an astonishing piece of detective work, Jean S figured out not only what the endpoint algorithm was, but that the smoothing period was greater than 11 years. After several days his comment got through the RC censor and Stefan admitted (comment 255 here) that he had used ‘M=15’, because the 11 year period was ‘too short to determine a robust climate trend’ (translation: the 11-year period showed the true fact that temps are levelling off, which Stefan wants to hide). In comment 363 on the same thread, Charlie wondered why the caption had not been changed, to which Stefan replied implausibly that he ‘hadn’t noticed the error’, before repeating the falsehood that it makes no difference.

    Worse still, it turns out that ‘M=15’ is the length of the padding period, meaning that the averaging is in fact over 29 years, so that all of the red curve in Fig 3 after 1993 is influenced by Rahmstorf’s choice of endpoint smoothing. So Rahmstorf has repeated the factor of 2 blunder that he made on David’s blog a year ago.

    By the way, if anyone wonders what the algorithm is, it’s very simple:
    “Here, I will use a variation of the minimum roughness criterion where the series is padded with a linear extrapolation based on the m preceding points.” (From Grinsted’s thesis).
    Matlab code snippets from Jean S are at Lucia’s and David’s blogs.
    If Rahmstorf was a proper scientist, he would just have said this when asked.
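    That one sentence from Grinsted’s thesis translates almost directly into code. Here is an illustrative Python sketch of the padding step (the function name is mine, and the real ssatrend.m may differ in detail; padding is shown at both ends, though the end of record is the contentious one):

```python
import numpy as np

def pad_linear(x, m):
    """Pad a series at each end with a linear extrapolation fitted to
    the m points nearest that end -- the 'variation of the minimum
    roughness criterion' described by Grinsted. Illustrative only;
    the real ssatrend.m may differ in detail."""
    x = np.asarray(x, dtype=float)
    t = np.arange(m)
    # Straight line through the first m points, extended m steps back
    slope0, icept0 = np.polyfit(t, x[:m], 1)
    head = icept0 + slope0 * np.arange(-m, 0)
    # Straight line through the last m points, extended m steps forward
    slope1, icept1 = np.polyfit(t, x[-m:], 1)
    tail = icept1 + slope1 * np.arange(m, 2 * m)
    return np.concatenate([head, x, tail])
```

    With m = 15, roughly the last 15 smoothed values are influenced by a straight line fitted to the preceding data rather than by the data alone, which is why the choice of m matters so much for the recent end of the red curve.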

    • Steve McIntyre
      Posted Jul 2, 2009 at 7:26 AM | Permalink

      Re: PaulM (#12),

      Thanks for the link to David Stockwell’s posts on this last year which I hadn’t followed closely (tho I posted a comment on one of the threads).

      Rahmstorf’s comments at David Stockwell definitely complied with GARP.

      Nicolas Nierenberg posted on this matter recently here; he obtained a copy of ssatrend.m from Grinsted (who apparently did not supply the code to Rahmstorf and did not intend it for the use to which Rahmstorf applied it.)

      I’ve looked at ssatrend.m and have emulated it from first principles – which enables a closer look at the algebra. Since it involves SVD and retained principal components, it is well within one of the specialties of this blog and I’ll comment on these issues.

      The method turns out to have a data mining aspect which will amuse readers. (Whether the data mining “matters” is a different issue, but it’s interesting to see another example.) Stay tuned.

      • Posted Jul 2, 2009 at 10:52 AM | Permalink

        Re: Steve McIntyre (#13),


        The same comments on trends being worse than previously forecast also generally refer to increasing 2100 sea level forecasts. I am only aware of two of these since AR4. One is by Rahmstorf, and the other is by Grinstead. The Rahmstorf model is incredibly naive using a single variable of global temperature to explain the rate of sea level rise from both expansion and increased mass (ice melt). It is quite interesting that in defending his model against responses from other scientists Rahmstorf used the same type of techniques as were discussed in your post. He changed the smoothing period, as well as the end points to try to hold his hypothesis together without disclosing either. I commented on the Rahmstorf sea level paper here.

        Grinstead’s model is more sophisticated although similar in concept. From my viewpoint it is interesting, but relies heavily on temperature projections from before the instrumental period as well as a very short period of recent observed sea level rise from satellites.

        Despite these couple of papers which are based on global naive models, I haven’t seen any evidence that the authors of the AR4 chapter on sea level rise have changed their views since it was published. But that is based on monitoring sites like realclimate. I would be interested to read any recent papers by those authors if someone could point me to them.

  9. Fred
    Posted Jul 2, 2009 at 8:12 AM | Permalink

    Not being a stats whiz – I just got by in my uni days – it seems to me that the numerical jiggery-pokery being employed to keep the dream alive is getting more extreme as the real data and the current theory diverge ever more strongly.

    The question seems to be: how much longer can this naked emperor stay bare-buffed before ordinary (non-stats-whiz) folks go, “Look, I can see his buttocks”?

    The challenge seems to be to produce a simple-to-understand example that would make clear to an 8th grader or a hockey mom that “selective” methods have been carefully chosen to produce desired results.

  10. Posted Jul 2, 2009 at 8:20 AM | Permalink

    (Rahmsdorf’s “copyright” pretext is absurd BTW. The Community really has to tell the Team to stop such nonsense.)

    That would be the first step down the path of self-immolation on their part. Not going to happen.

  11. Jean S
    Posted Jul 2, 2009 at 8:21 AM | Permalink

    Dr. realclimatescientist commenting over RC:

    2. When someone says that climate aspects are progressing faster than was expected a few years ago, it sounds to me like that verification of that statement would consist mostly of looking at the effect of adding recent data. Correct?

    [Response: Not correct – anyone who looks at the actual report can see that this is not what it is about. If climate trends are calculated over sufficiently long periods, trends do not change a lot by adding two or three more years. What the report is about is scientific progress: new scientific studies that appeared since the deadline for the IPCC report. This is said directly in the bit we quote above: “Since 2007, reports comparing the IPCC projections of 1990 with observations show…” It is new analysis, not adding a couple of years of data.]

    I’m speechless.

    • Mike B
      Posted Jul 2, 2009 at 10:23 AM | Permalink

      Re: Jean S (#16),

      Wow. Just Wow.

      Things have deteriorated to the point that “scientific progress” is “refining your method in the face of new data so that your conclusions don’t change.”

      So now, in addition to data snooping (bristlecones and foxtails) and metric snooping (%Cat 4/5), we can add method snooping. It’s a rogues’ gallery of Texas sharpshooter fallacies.

    • Posted Jul 3, 2009 at 1:55 PM | Permalink

      Re: Jean S (#16),
      Translated, this says
      “the trend in the Team’s ability to spin the data has to outstrip the opposite trend in the data itself”
      Re: Hubris « the Air Vent (#37), ROTFLMAO with these “stiffen end-points”

  12. Basil
    Posted Jul 2, 2009 at 8:31 AM | Permalink

    So the Team invented a smoothing procedure that can produce different smooths depending on some parameter?

    I know this is just repeating a frequent criticism of Steve’s, but since there are already established algorithms of all kinds for smoothing, why not use a method that is already accepted in some fashion by an outside body of experts? All this use of obscure and difficult to reproduce methods simply sets off alarm bells, because we all know “If you torture the data long enough, it will confess, even to crimes it did not commit.”

  13. John D
    Posted Jul 2, 2009 at 8:52 AM | Permalink

    I would really prefer it if these publications would use a y-scale that actually offered some real perspective, say min and max values at -10 and +10 degrees. I think that would really help to provide an idea of how serious these trends are.

  14. Posted Jul 2, 2009 at 9:09 AM | Permalink

    I have a much younger sister; when she was about 3, there was a birthday cake sitting on my parents’ counter overnight. In the morning the top of the cake had several small fist-sized chunks ripped out of the corner, complete with little 3-year-old-sized finger slots through the cake. Of course, the cake being of the birthday variety, it wasn’t politicized, so the family pointed out the problem with her otherwise perfect crime with great enjoyment.

    She of course didn’t do it either.

  15. clazy
    Posted Jul 2, 2009 at 9:10 AM | Permalink

    re JeanS @16: Perhaps Herr Doktor should write in German and hire a qualified translator who understands the difference between expect and believe/understand/assume/etc. Perhaps not.

  16. Peter D. Tillman
    Posted Jul 2, 2009 at 10:13 AM | Permalink

    Note in particular the Finnish recalculation, extended to 2008, at

    –and discussed at http://landshape.org/enm/rahmstorf-revisited/
    The Finnish statistician wrote “The updated trend is just as significant as the original one, done by Rahmstorf with other top scientists of the IPCC – it was calculated with the very same method.”

    Cheers — Pete Tillman

  17. Paul Penrose
    Posted Jul 2, 2009 at 10:55 AM | Permalink

    Rahmstorf’s name is misspelled in the last sentence of the main post. I don’t normally nit pick on spelling since that’s one of my particular failings, however people can be pretty sensitive about their names. I’m surprised that nobody else noticed it before I did.

    Fixed; all other occurrences were correct. At realclimate, Rahmstorf drew attention to a mis-spelling of his name at David Stockwell’s, implying that this somehow showed general inattention at Stockwell’s, and was not amused when Ian Castles pointed out that Rahmstorf himself had misspelled Ammann’s name in a paper. Steig also complained about a mis-spelling of his name on one occasion at CA (although I did not make the spelling error, Steig held me responsible for it 🙂). Maybe this is one of the lessons at realclimatescientistcharmschool.

  18. TAG
    Posted Jul 2, 2009 at 10:59 AM | Permalink

    (Rahmsdorf’s “copyright” pretext is absurd BTW. The Community really has to tell the Team to stop such nonsense.)

    Copyright protects the expression of an idea and not the idea itself. It does not apply to a computer algorithm, just to the expression of that algorithm in some sort of program or code. So someone could not copy a copyrighted piece of computer code but is still quite free to describe the algorithm.

    Part of my job is to assist lawyers, so I have picked up some knowledge from them. But one must be aware of the dangers of a little knowledge. Hopefully a trained lawyer will comment on this.

    • Steve McIntyre
      Posted Jul 2, 2009 at 11:10 AM | Permalink

      Re: TAG (#26),

      Here is Rahmstorf’s online comment on this:

      The smoothing algorithm we used is the SSA algorithm (c) by Aslak Grinsted (2004), distributed in the matlab file ssatrend.m.

      SSA – singular spectrum analysis – has been around for ages and, on this ground, aside from any other grounds, I’m quite sure that Aslak Grinsted would be unable to copyright SSA. It would be like Mann purporting to copyright principal components.

      Could Mann perhaps copyright “Mannian principal components” – the method that both the NAS panel and Wegman agreed was incorrect? (Although, as I’ve observed elsewhere, Mann’s terms of employment at the time probably meant that any copyright belonged to the University of Massachusetts or U of Virginia, given that there is no evidence that they renounced their rights in Mann’s work product in favour of Mann.)

      Similarly, I presume that Aslak Grinsted’s employer would possess rights to the code.

      BTW I haven’t seen any evidence that Grinsted claimed rights over the “SSA algorithm”. To my knowledge, “SSA algorithm (c) by Aslak Grinsted” is merely Rahmstorf’s affectation.

      • Posted Jul 2, 2009 at 11:47 AM | Permalink

        Re: Steve McIntyre (#27),

        Geez, I misspelled Aslak Grinsted’s name in my post. Apologies.

        Actually, the “ssatrend.m” source file I received from Grinsted did say “(c) Aslak Grinsted 2004” in the header. ssatrend.m doesn’t do the SSA part of the algorithm; it uses the first PC to filter the values, along with some endpoint smoothing. I don’t believe that Grinsted is asserting copyright in any event.

        • Steve McIntyre
          Posted Jul 2, 2009 at 12:30 PM | Permalink

          Re: Nicolas Nierenberg (#28),

          it uses the first PC to filter the values

          This is what you’d expect, but take another look at how they select the retained eigenvector. 🙂 It’s a little different than anyone would expect. It’s almost Mannian (c).

        • Posted Jul 2, 2009 at 12:53 PM | Permalink

          Re: Steve McIntyre (#30),

          In the case of the temperature and sea level stuff it was always the first one which explained the vast majority of the variance. This bit of code selected it. What am I missing?

          [mx,trendidx]=max(abs(sum(E))./sum(abs(E))); %trend would be non oscillatory and therefore have a large sum in the index

          %throw away everything but the trend:

        • Steve McIntyre
          Posted Jul 2, 2009 at 1:45 PM | Permalink

          Re: Nicolas Nierenberg (#31),

          mathematically it picks the eigenvector with the max(abs(sum(E))).

          in perhaps the majority of practical cases, this will pick the PC1, but the formula implies that there are cases in which this formula will pick lower order PCs. I haven’t experimented with data to identify circumstances in which a lower order PC is selected. It’s even possible that the protocol is totally pointless. I remember doing a post on crossing numbers increasing by one with each eigenvector under certain circumstances (spatial autocorrelation), but I don’t recall offhand whether these conditions were restrictive or not.
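          For anyone following along at home, the quoted Matlab selection line ports to Python in one step. Here is an illustrative sketch (the function name is mine), together with a toy matrix where the criterion passes over the first column:

```python
import numpy as np

def trend_index(E):
    """Python port of the quoted Matlab line: score each column by
    |sum of entries| / sum of |entries| and pick the maximum -- a
    non-oscillatory (trend-like) column scores near 1."""
    score = np.abs(E.sum(axis=0)) / np.abs(E).sum(axis=0)
    return int(np.argmax(score))

# Toy example (not real eigenvectors): column 0 oscillates, so its
# entries nearly cancel; column 1 has constant sign. The criterion
# picks column 1 -- i.e., not necessarily the first column.
E = np.column_stack([np.cos(np.linspace(0, 4 * np.pi, 20)),
                     np.full(20, 1.0 / np.sqrt(20))])
```

          So the formula does what the comment in ssatrend.m says, but it is a heuristic: nothing guarantees the winner is the PC1.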

      • TAG
        Posted Jul 2, 2009 at 2:04 PM | Permalink

        Re: Steve McIntyre (#27),

        SSA – singular spectrum analysis – has been around for ages and, on this ground, aside from any other grounds

        The particular expression of SSA in computer code could be copyrighted. However, as you note, there may be any number of other existing expressions of SSA, and anyone is perfectly free to create their own expression.

        Additionally, and this is often misunderstood, mathematical algorithms are not patentable. Because of this, computer programs, in themselves, are not patentable. So a mathematical calculation can never be patented. However, if the calculation is part of a method that has a useful external effect when implemented on a specific computer, then the overall method and physical system to accomplish that specific effect may be patented. The software isn’t patentable but its use to do something in the world is. So an SSA algorithm is not patentable. Assuming SSA is well known, the use of SSA would be considered obvious and so again would not be part of a patentable invention. However, if an improved SSA algorithm made a non-obvious improvement to an external output, then the use of the improved algorithm could be patented.

  19. Craig Bear
    Posted Jul 2, 2009 at 12:04 PM | Permalink

    And to think I thought I didn’t have to worry, because the raw data plotted against the “models” was always showing a significant error (which is a nice way to explain to any lay person how unreliable a computer model is). But now they are “fixing” the raw data so that this error is not there, because obviously the data was wrong…

    Anyone ever heard of overtuning?

    Also… why are we back to starting at 1970 again? Convenient, much?

  20. MikeN
    Posted Jul 2, 2009 at 2:08 PM | Permalink

    Regarding zeroing of data. Almost all of the charts in that report look like they have been zeroed to make things look bad. For example Greenland ice from 2003, with 2003 set to 0.

  21. Plimple
    Posted Jul 2, 2009 at 2:38 PM | Permalink

    Is that really your recollection of what Eschenbach did? Can you indicate how you know Rahmstorf arbitrarily shifted only one of his datasets relative to the other?

    As I recall, Willis wasn’t happy that the beginning of the timeseries in the obs and model output were divergent, and so he shifted the obs centering onto a single year, the beginning of the timeseries, 1958. What was really “WRONG” was that he failed to apply the same centering to the model output and so they were inconsistent with each other. Centering both the model output and the obs on 1958 would have preserved the divergence and they would have remained consistent.

    So, can you indicate how you know Rahmstorf applied this same method?

    Steve: As I said in the post, I don’t have a specific recollection about what the dispute was about, other than something that Willis did was said to be WRONG. I obviously agree that series should be centered consistently and, if Willis failed to do so, then the criticism would be justified. As to how much it “mattered”, I guess that would depend on the difference between a 1958 reference point and a ? 1951-1980 reference point, if that was the alternative. Willis can represent himself; I’m not vouching for his procedure one way or the other. My interest here is merely that Rahmstorf has sanctified 1990 centering. That’s fine with me as one does not thereafter have to argue about whether this procedure has been sanctified.

  22. Posted Jul 2, 2009 at 4:07 PM | Permalink

    Steve: Looking forward to your further comments on the method.

    In the case of the temperature and sea level stuff it was always the first one which explained the vast majority of the variance.

    I did some experiments with simulated trends and noise. I think that ssa attempts to fit data with a linear sum of periodic functions (ssa assumes stationarity). Because of this, a linear trend (like presumed AGW) with a long embedding period (L/2) will be split into two periodic components, a concave one and a convex one. You can see some plots of this on my site. Also, the rahmstorfsmoothingmethod uses the first PC. So I imagine it could be quite unstable as to whether a linear trend is fit with a convex or concave slope.

    Wouldn’t be surprised if this is a little shop of horrors.

    • Steve McIntyre
      Posted Jul 2, 2009 at 9:17 PM | Permalink

      Re: David Stockwell (#36),

      David, stay tuned on this tomorrow.

      This little story is going to turn into a comedy.

  23. Willis Eschenbach
    Posted Jul 3, 2009 at 1:13 AM | Permalink

    Steve, you say:

    Secondly, Rahmstorf has zeroed both models and observations on 1990. I recall some controversy about Willis Eschenbach zeroing GISS models on 1958; I have a vague recollection of Hansen’s dogs saying that this was WRONG. I don’t vouch for this recollection, but, if the events were as I vaguely recall, I don’t see any material difference in Rahmstorf’s centering here.

    Indeed your memory is correct. I got heavily and repeatedly slagged on the weasel’s blog for this very thing indeed, but of course, Rahmstorf is a team player and I’m not …


  24. Posted Jul 3, 2009 at 5:48 AM | Permalink


    Since 2007, reports comparing the IPCC projections of 1990 with observations show…

    So, they are comparing data to the FAR? Those are “the IPCC projections of 1990”. Even if Stefan thinks the TAR projections should be tested since 1990, they were published in 2001. So, they are “the IPCC projections of 2001.”

  25. Rob Spooner
    Posted Jul 3, 2009 at 6:05 AM | Permalink

    I’m not a scientist, real climate or otherwise, just an ordinary layman with some background in math. As I read this discussion, two concerns arise.

    The first is that an 11-year-smoothed calculation does not result in something on the y-axis that can be called temperature. It’s an index with no physical meaning, just as the Dow Jones Industrial Average has no financial meaning.

    However, this is a small concern, since the result of the calculation is supposed to be something like a weighted average of temperature. I can live with that. The x-axis, though, is a problem. It’s one thing when data is smoothed but the granularity is too small to be seen. An annual chart of daily temperatures that takes a day-night average can show July 3 as a point even though it’s a 24-hour span.

    On the other hand, an 11-year smoothing on a graph that shows annual variations raises the question, what are we looking at? The x-axis shows specific years but they are endpoints of ranges that dwarf the difference between successive points.

    Just as importantly, why are the values reported for the endpoints? That’s a rhetorical question, to which the answer is obvious. It’s not going to make much of a headline to announce that latest research has shown that a smoothed temperature index kept rising right up through December 31, 2003.

    • Craig Loehle
      Posted Jul 3, 2009 at 7:35 AM | Permalink

      Re: Rob Spooner (#41), Bingo! When I published my paper on climate paleo-reconstruction and used the midpoint for my smoothed data (ending 1/2 period before the end of the data) commenters kept nagging that I didn’t go all the way to the end.

      • Posted Jul 3, 2009 at 11:10 AM | Permalink

        Re: Craig Loehle (#42),

        Perhaps because they were misled by the title into believing that your reconstruction covered 2000 years?

        • Craig Loehle
          Posted Jul 3, 2009 at 1:21 PM | Permalink

          Re: Phil. (#43), The DATA covered 2000 years before smoothing (1996 years to be precise).

  26. Jeff Norman
    Posted Jul 9, 2009 at 10:06 AM | Permalink

    Oh, now I have it.

    Variated Integer Accumulation Graphical Regression Analysis.

One Trackback

  1. By Hubris « the Air Vent on Jul 2, 2009 at 8:58 PM

    […] while climatology lately has often stiffen the endpoints of graphs to hide temperature down trends HERE, (check the NOAA also) they probably didn’t decide to stiffen the downtrend in Figure 1 but […]
