Russian Bias

There has been much discussion on this site regarding the methodology employed by Dr. Hansen’s “Step 1”, also known as the “bias method” in HL87. Readers unfamiliar with the topic may want to read through the material here, here, and here to gain a better understanding of the method and issues raised with employing it.

It was clear from the moment we first unraveled the process that it was destined to corrupt the combined station data and, as its name implies, add a bias to that combined data. What was not clear was what the net effect would be on the regional and worldwide temperature record.

Thanks to Steve’s help, I recently completed an initial look into the effect the bias method has on the temperature record.

On September 18 Steve posted his R implementation of Hansen Step 1, along with Jean S’ Matlab implementation. Although my own implementation in Visual Basic was nearly complete, I decided to be pragmatic, abandon that attempt, and use Steve’s R code (R is free; Matlab is not). I stubbornly refused to let go of my familiar VB and Excel and fully embrace R, but fortunately Steve graciously made a few modifications to remove some minor annoyances, and he refined the station ordering for the case where an MCDW record does not exist.

With code in hand I applied Step 1 against all Russian GHCN stations (Europe and Asia) and collected the bias applied to the individual scribal records in a spreadsheet. I calculated the average bias across all scribal records on a year-by-year basis. What one can see from the following plot is that, for Russian records, the method introduces an artificial cooling to records before 1987.
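
For readers who want to see the mechanics, the year-by-year averaging can be sketched in Python. This is a hedged illustration only: the actual tally was done in a spreadsheet from the output of Steve's R code, and the record names and data layout below are invented.

```python
from collections import defaultdict

def yearly_mean_bias(biases):
    """Average the bias applied to each record over the years it covers.

    `biases` maps a record id to (bias_degC, first_year, last_year);
    this layout is illustrative, not Step 1's actual data format.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for bias, first_year, last_year in biases.values():
        for year in range(first_year, last_year + 1):
            totals[year] += bias
            counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

# Invented example: the first record in the combination order is never
# biased; a later scribal record picked up a -0.1C bias.
example = {
    "222XXXXX-0": (0.0, 1950, 1990),
    "222XXXXX-1": (-0.1, 1960, 1987),
}
print(yearly_mean_bias(example)[1970])  # -> -0.05
```

Note that records carrying zero bias still count in the denominator for the years they cover, which dampens the yearly average rather than inflating it.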

Yearly bias of Russian scribal records

I have already started work on other regions of the world and want to stress that the results shown may not apply to those regions. All of this is interesting, but preliminary. Keep in mind too that the bias method implemented in Step 1 is but one of many problems, or “algorithms of interest”, already uncovered or waiting to be discovered.


  1. Steve McIntyre
    Posted Sep 28, 2007 at 7:35 PM | Permalink

It’s funny that the errors we’ve identified tend to increase the trends.

As John Goetz observes, this effect is different from whatever error has caused the increase in trend at Wellington NZ in the “urban adjustment” process.

  2. Anthony Watts
    Posted Sep 28, 2007 at 7:55 PM | Permalink

    John, excellent work, can you plot a trend line with this graph? While I can visualize one, having one would drive the point home.

  3. Posted Sep 28, 2007 at 8:02 PM | Permalink

    This came to my mind as well after reading this post, tried to find some possible reasons:

    1) we are selective, find/publish only those errors that tend to increase trends

    2) Intentional data manipulation by them

    3) biased scientific method in mainstream climate science, results with no trends or showing MWP etc. go through more extensive testing than results showing the opposite. Averages to zero with case 1) 🙂

    4) change

  4. Kristen Byrnes
    Posted Sep 28, 2007 at 8:19 PM | Permalink

    It’s the Berlin Wall adjustment

  5. John Goetz
    Posted Sep 28, 2007 at 9:25 PM | Permalink

    Anthony, I don’t think a trend line is appropriate here, because it would reflect only the trend of the bias. The biases are added to whatever the temperature records were each year. Thus, the trend of those temperatures before and after the bias was applied is of interest. I have not gotten that far (I think it is a Step 2 problem).

  6. John Goetz
    Posted Sep 28, 2007 at 9:33 PM | Permalink

    #3 UC

1) I am only selective in what I publish in the sense that I can fully defend what I publish. I try hard not to cherry pick. That is why I looked at all of Russia. Not just Siberia, Asian Russia, or European Russia, but all of it. Now I am looking at the ROW. I tend to go very slowly because I manually look at the data and the results.

    2) I do not believe there is intentional data manipulation “by them”. I think it is just chance, or luck. You can decide if it is good or bad luck. An interesting experiment would be to look at the records as they existed when HL87 was written and the same methods were applied to see if the same conclusions would be drawn (added to the to-do list).

    3) and 4) I do not follow

  7. Posted Sep 29, 2007 at 3:06 AM | Permalink

    ANTHONY WATTS! HELP! Sorry to raise my ‘voice’ but I am trying to attract the attention of Anthony Watts because I am unable to enter his blog, which is a bit unfair considering the number of plugs I’ve given it over at my place. Anyway, all I get when I try to enter is the following message:
    “You don’t have permission to access /watts/ on this server.
    Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.”
    Please contact me here, or via my site and let me know what the pass word is!

  8. Louis Hissink
    Posted Sep 29, 2007 at 3:21 AM | Permalink

Not wishing to rush into this thread prematurely, but have we not considered the fact that the algorithms enunciated here might have some divergence, statistically, from the algorithms described by Mann et al?

  9. Johan i Kanada
    Posted Sep 29, 2007 at 3:45 AM | Permalink

The bias seems very small, approx. 0.06 degrees from approx. 1935 to 1988 or so.
Would that make a significant difference in the temperature trend itself?
Compared to the famous 0.6 degrees it is an order of magnitude smaller.

  10. Dn Keiller
    Posted Sep 29, 2007 at 5:21 AM | Permalink

re #9 But it all adds up: lower the early part of the record, raise the later part. What’s a piffling little 0.06C, or 0.1C, or even 0.2C here and there compared with the massive 0.6C?

    Mysteriously all the “errors” seem to have the same effect. Much like bank “errors”, the vast majority of which favour the bank.
I’m sure there are good statistical tests (Steve?) that can determine whether we are looking at a random or non-random (fiddled) “error” process here.

  11. Posted Sep 29, 2007 at 6:24 AM | Permalink

    You’ve done it again, Steve!

  12. Johan i Kanada
    Posted Sep 29, 2007 at 6:26 AM | Permalink

    I am not in any way defending the sloppy data management approach apparently so prevalent among the Team members.
    But it would be (potentially) very illuminating to calculate two trends (for Russia in this case), one with and one without this bias, and then compare the two.

  13. Willis Eschenbach
    Posted Sep 29, 2007 at 6:35 AM | Permalink

    Johan, you say:

    The bias seems very small, approx .06 degrees from approx 1935 to 1988 or so.
    Would that make a significant difference in the temperature trend itself?
    Compared to the famous 0.6 degrees it is a magnitude smaller.

    The real problem is the 0.06° bias from 1988 – present. This is the time when the maximum AGW is claimed, and represents 0.3°/century …


  14. Dave B
    Posted Sep 29, 2007 at 7:08 AM | Permalink

#11…BCL…you really don’t understand this, and its implications, do you?

  15. steven mosher
    Posted Sep 29, 2007 at 7:22 AM | Permalink

re 11. BCL, when you artificially cool the past you increase the warming TREND.

I’ll keep it simple for you. If the temperature were a constant 10C from 1900 to 2007
you’d have no warming trend. If you artificially cool the record from 1900 to 1950
to 0C and then draw a trend line, it looks like a warming trend. You have a chance to
change your post on your site, or I can come over and embarrass you there.

  16. Don Keiller
    Posted Sep 29, 2007 at 7:49 AM | Permalink

    re #11 BCL, thanks again for giving me a good laugh. Your site’s a hoot!

  17. Mike
    Posted Sep 29, 2007 at 8:31 AM | Permalink

    Good one BCL. I’ll tell everyone I know to check out your site to get the catastrophic AGW view. Don’t stop writing.

  18. PabloM
    Posted Sep 29, 2007 at 8:54 AM | Permalink

    What is BCL? “Bringing Constant Laughter” or something?

  19. Posted Sep 29, 2007 at 10:09 AM | Permalink

    BCL is the nick BigCityLib

  20. Posted Sep 29, 2007 at 10:12 AM | Permalink

    That must have been a lot of work John Goetz. Thanks for doing it.

    I have a few questions and comments to make sure I understand what you did. These are not criticisims, only questions. I don’t expect any of them would make a major difference, but I want to understand your results:

    1. First, I appreciate the analysis of a large region rather than just a handful of stations. It may be beneficial to group the stations by latitude instead of by country. Possibly latitude bands like northern polar, northern mid, equatorial, southern mid, southern polar. If the effect of the bias is not truly random, there may be a pattern by latitude. Let me know if I can help sort the stations into latitude bands.

    2. When you average the biases, is the denominator the total number of stations (including those without bias) or the number of biased stations?

    3. When averaging station biases, are the biases for each station included in all years with data from the station? This is a tough one to explain. Maybe an example will work:

    Stations A1 and A2 are scribal variations. Station A1 has data from 1950 to 1980 and station A2 has data from 1960 to 1980. A2 has a bias of -0.1C. Is the A2 bias included in all years from 1960 to 1980, or just 1980?

    4. Were you able to distinguish between scribal variations and independent measurements at the same location?

    5. Do you have a hypothesis for what happened between ~1987 and ~1992?

  21. Johan i Kanada
    Posted Sep 29, 2007 at 10:40 AM | Permalink

    #13 “The real problem is the 0.06° bias from 1988 – present. ”
    What do you mean, there is no bias from 1990 and on, only in earlier years.
    Which means that the resulting trend, if any, is exaggerated, provided one starts
    sometime before 1990 and ends sometime after 1990.

  22. John Goetz
    Posted Sep 29, 2007 at 10:42 AM | Permalink

    #20 John V.

    I don’t mind questions at all. I’ve been checking and rechecking my work multiple times to make sure I have not done something stupid. Regarding your questions:

1) I agree that grouping stations by latitude would be interesting. As I have examined individual records I have not noticed any particular pattern, but that does not mean it does not exist. I have asked Steve for a few enhancements to the output of his Step 1 program, and if he is able to do that then running this analysis should be less cumbersome. In the meantime I have started analyzing some other regions and might have results tomorrow night (the weather is awfully nice here today so I won’t be in for long).

    2) The denominator is all records, whether or not they show a bias. Remember that the first record in the order is never biased, and stations with a single record are not biased either. Thus, if I were to remove from the denominator the count of stations with zero bias, the magnitude of the bias would grow.

3) The A2 bias is included in all years for which the record was valid. For example, if a record extends from 1960 to 1980 and is biased -0.1C, then all years from 1960 to 1980 have that -0.1C counted toward the yearly average. However, if a valid annual average for, say, 1962, cannot be calculated because the data is insufficient, that -0.1C is not included in the year 1962.

    4) I have not done that analysis. Many records look largely the same, but are not identical. Some are obviously scribal versions, but for others it is a little harder to tell. I will post an interesting example in a comment later, once I dig it back up (it is on my to-do list).

    5) The period of 1987 to 1991 or 1992 is the typical period of overlap between the MCDW record and earlier records. The MCDW record – if it exists – is the first one in the combination order and therefore always has a 0 bias. The older records rarely extend beyond 1992, which is why you see the rapid tailing off.

  23. Posted Sep 29, 2007 at 10:45 AM | Permalink

    Correction to question 3 in post #20 above:
    “Stations A1 and A2 are scribal variations. Station A1 has data from 1950 to 1980 and station A2 has data from 1960 to 1980. A2 has a bias of -0.1C. Is the A2 bias included in all years from 1960 to 1980, or just 1960?”

  24. John Goetz
    Posted Sep 29, 2007 at 10:49 AM | Permalink

    #9 Johan

    I agree that the magnitude of 0.06 (which is in fact the average across all years) is small, but then again it is 10% of the 0.6 you mention. 10% is not something I would normally ignore. But remember I have only looked at Russia. This number may be different globally – and it largely does not affect the USHCN records.

    Perhaps the more interesting question is, how does it affect the trend? Note that the yearly temperature anomaly is calculated against the 1951 to 1980 mean. In Russia (and possibly not the rest of the world), that “benchmark mean” is biased downward.

  25. Posted Sep 29, 2007 at 11:06 AM | Permalink

    Stephen, Come on over.

    But it seems you also would make the cooling trend between 1940 and 1970 seem less, and you would understate the warming trend between about 1975 to whatever year the bias ends.

  26. Posted Sep 29, 2007 at 11:11 AM | Permalink

    “…you also would make the cooling trend between 1940 and 1970 seem less.”

    By which I mean that the pre bias-correction numbers make the cooling trend seem greater than
    the corrected numbers (which make it seem less).

  27. Mark T
    Posted Sep 29, 2007 at 11:25 AM | Permalink

    The 1940 to 1970 difference is on the order of 2 hundredths of a degree. The trend difference is nearly nothing. From 1940 to 2000, however, it is on the order of 8 hundredths, which isn’t a whole lot overall, but nearly a tenth and the entire argument is only a few tenths in the first place. You’ve had grade school math, right BCL? If not, don’t take it, I’d much rather continue laughing at your posts than see you suddenly start making sense.


  28. Posted Sep 29, 2007 at 12:51 PM | Permalink

    #22 John Goetz:
    Thanks for the detailed answers. From my perspective, it looks like you did everything right.

  29. Stephen Richards
    Posted Sep 29, 2007 at 1:52 PM | Permalink


    Where can I get R. I’ve done a MS Live search with no luck. Help much appreciated.

  30. John Goetz
    Posted Sep 29, 2007 at 2:01 PM | Permalink

    Start here and here.

  31. Stephen Richards
    Posted Sep 29, 2007 at 2:03 PM | Permalink

    Thanks John. I shall have a go at moving from excel to R

  32. Johan i Kanada
    Posted Sep 29, 2007 at 2:40 PM | Permalink

    #24 “Perhaps the more interesting question is, how does it affect the trend?”

    Exactly, this is what I would like to see. One trend (if any) based on raw data, another trend (if any) based in the ‘biased’ data set.

  33. steven mosher
    Posted Sep 29, 2007 at 2:54 PM | Permalink

    RE 25.

    Sorry BCL you can’t get my email that easily, plus somebody already pointed out
    your boneheaded comment.

    Cherry picking a “regime” to discuss differences in trend is easy. For example, since 1998
    the world has cut its rate of warming.

    Your contention that SteveMc “warmed” the record is simply false. Deal.

  34. steven mosher
    Posted Sep 29, 2007 at 3:09 PM | Permalink

RE 20. JohnV, the geography questions seem a bit strange here.

    In a nutshell here is the scribal record problem.

    You have a SITE. lat lon X, Y.

    You have 4 records for that site coming from different authorities.

This is like compiling an “edition” of Shakespeare where you have 4 versions of the same text
(that work SUCKS, I can tell you).

Anyway, assume you have 4 records of one site, 10 years each.

    An X indicates a missing record.

Record1. 1 1 1 2 X X X 2 2 3
Record2. X 1 1 2 2 2 2 2 X X
Record3. 1 1 X 2 2 X 2 2 2 X
Record4. X X X 2 2 2 X X X 3

Now, you have 4 records of the same site from different authorities.
Reconstruct the SOURCE.

Hansen does it by averaging and mucking about with the various versions of the same
text. The clearly correct approach is to fill the missing data with data from other
records. The question has been: does Hansen’s wacky approach BIAS the record?

One thought was the wackiness would even out. In Russia it hasn’t.
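
    The contrast mosher describes can be cartooned in Python using his 4-record example above (None marks a missing value). This is a toy of the two combination strategies, not Hansen’s actual Step 1 code:

```python
records = [
    [1, 1, 1, 2, None, None, None, 2, 2, 3],
    [None, 1, 1, 2, 2, 2, 2, 2, None, None],
    [1, 1, None, 2, 2, None, 2, 2, 2, None],
    [None, None, None, 2, 2, 2, None, None, None, 3],
]

def combine_by_average(records):
    """Average whatever records report each year (averaging-and-mucking)."""
    out = []
    for year in zip(*records):
        vals = [v for v in year if v is not None]
        out.append(sum(vals) / len(vals) if vals else None)
    return out

def combine_by_infill(records):
    """Keep the first record and fill its gaps from the other records."""
    out = list(records[0])
    for i, v in enumerate(out):
        if v is None:
            for rec in records[1:]:
                if rec[i] is not None:
                    out[i] = rec[i]
                    break
    return out

print(combine_by_average(records))
print(combine_by_infill(records))
```

    With this toy data the overlapping values agree, so the two combinations coincide; the Step 1 bias question arises precisely when the versions disagree in the overlap.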

  35. Steve McIntyre
    Posted Sep 29, 2007 at 3:42 PM | Permalink

Again, folks, don’t forget there’s another tranche of Hansen adjustments that we’ve just scratched the surface on – go back to the Wellington posts. Sometimes Hansen makes a positive UHI adjustment; sometimes negative. In the US, the adjustments seem to go in the physically expected direction. Outside the U.S. there is no framework of decent stations and it appears erratic.

    Plus there are issues at the GHCN level, which we haven’t scratched yet – the Wellington station splices two different stations at the GHCN level and has the scribal continuation as a different station. This creates another bizarre Hansen bias. How prevalent is this? Who knows?

  36. Posted Sep 29, 2007 at 4:29 PM | Permalink


    2) I do not believe there is intentional data manipulation “by them”. I think it is just
    chance, or luck. You can decide if it is good or bad luck.

    That’s possible. And it is quite difficult to find evidence for misconduct, as the algorithms involved are beyond simple.

    3) and 4) I do not follow

    It was a comment on Steve’s

    It’s funny that the errors that we;ve identified tend to increase the trends.

Which can easily be extrapolated to hockey-stick studies (stick and blade). Mann’s PCA or Juckes’ CVM are good examples. There are numerous publications that verify the AGW theory, and it seems that whatever happens it always confirms the theory. It can be so because the theory is correct, but at least partly it is so because anything that does not fit the agenda is subjected to rigorous attempts at falsification. On the other hand, Mann’s papers seem to slip out of reviewers’ hands like slippery soap. As a result, there are more errors that increase the trends. That’s what I meant by 3).

4) was a typo (sorry about that); I meant that maybe the sample size (errors in climate science) is still so small that the errors found are hockey-stickish by chance.

  37. neil
    Posted Sep 29, 2007 at 4:49 PM | Permalink

    Re 35
    I am the NZ lurker who alerted David Wratt to this blog.
It is not only Wellington that GISS has modified. Have a look at Christchurch, where they have muddled up the 2 sites (Airport and Gardens) by making an error with their coordinates. There is a similar splice as for Wellington.
PS the Christchurch Gardens site has been in the same location since the 1860s and has data up to Jul 2007; the airport started in 1954.

  38. Steve McIntyre
    Posted Sep 29, 2007 at 4:55 PM | Permalink

    #37. It’s amazing how quick someone like Wratt is to say that it is inappropriate for us to show the unadjusted data and how negligent he is in alerting NASA of obvious errors in their work.

  39. Posted Sep 29, 2007 at 5:13 PM | Permalink

    #34 steven mosher:
    I understand the scribal record problem.
    I merely suggested that a logical grouping would be by latitude rather than by country. (Climate is pretty insensitive to political boundaries).

    John Goetz:
    I am interested in attempting to distinguish the scribal series bias from the independent series bias. I don’t have the code (or the time) to calculate the bias for each series as you have done. Is your spreadsheet of series biases available? Thanks.

  40. steven mosher
    Posted Sep 29, 2007 at 5:21 PM | Permalink

    RE 39.

    Sorry JohnV my bad.

  41. John Lang
    Posted Sep 29, 2007 at 5:27 PM | Permalink

    I’m assuming this is only Step 1. It is easy to forget that this is Step 1 of about 5 in which the bias/adjustment is positive to the trend in all cases.

  42. John F. Pittman
    Posted Sep 29, 2007 at 5:28 PM | Permalink

#22 Appreciate that you avoided the altitude/latitude red herring. As has been posted many times (would provide links if I could figure out how, but the 3 or 4, or is it 5, Steves can), an approximate altitude (which also tracks latitude, by way of MacArthur, Geographical Ecology; Harper and Row, 1972; SBN 06-044152-6) adjustment has been made. The actual lapse rates are most decidedly site specific.

    #20 John V Item

    5. Do you have a hypothesis for what happened between ~1987 and ~1992?

    Answer: From Hansen:

    We suggest equal emphasis on an alternative, more optimistic, scenario that emphasizes reduction of non-CO2 GHGs and black carbon during the next 50 years. This scenario derives from our interpretation that observed global warming has been caused mainly by non-CO2 GHGs. Although this interpretation does not alter the desirability of slowing CO2 emissions, it does suggest that it is more practical to slow global warming than is sometimes assumed.

This is the time period when, on the first part, the power companies in the lower 48 were found to be in violation of US environmental laws (the EU was first). For 2007:

    Worldwide Shift from Incandescents to Compact Fluorescents
    Could Close 270 Coal-Fired Power Plants

On the second part, the new regulations of 1995 reduced PM10, sulphates, and NOx.

    Perhaps the details are in the Devil (Mosher forgive me, DUKE blue devils, go Bill Gates).

    Duke Energy dukes it out in Supreme Court
    Company contends government pollution-control rule changes over past 25 years were ‘arbitrary.’

    WASHINGTON, DC, April 2, 2007 (ENS) – The U.S. Supreme Court today unanimously upheld a federal program designed to clean up the nation’s oldest coal-fired power plants, vacating a lower court ruling that has derailed major air pollution enforcement efforts against Duke Energy and other utilities.

  43. Posted Sep 29, 2007 at 8:21 PM | Permalink

    I spent a little time looking at scribal vs independent series tonight.

    For every station in the GHCN database (7364 stations), I did a series-by-series comparison. For each series, I calculated the fraction of matching values with every other series at the same station. If series A is a perfect scribal record of series B, then A and B have 100% matching records (for the months where both have values).

    The following is a break-down of the maximum fraction of matching values for every series in the GHCN database (worldwide):

    BIN | CUM %

  44. Posted Sep 29, 2007 at 8:23 PM | Permalink

    Oops — I used the less-than sign in my previous post. Here it is again:

    I spent a little time looking at scribal vs independent series tonight.

    For every station in the GHCN database (7364 stations), I did a series-by-series comparison. For each series, I calculated the fraction of matching values with every other series at the same station.

    If series A is a perfect scribal record of series B, then A and B have 100% matching records (for the months where both have values).

    The following is a break-down of the maximum fraction of matching values for every series in the GHCN database (worldwide):

BIN  | CUM %
10%  | 16%
20%  | 22%
30%  | 24%
40%  | 27%
50%  | 30%
60%  | 34%
70%  | 38%
80%  | 46%
90%  | 55%
100% | 100%

    The first column is a histogram bin for the maximum fraction of matching values with another series at the same station. The second column is the cumulative percentage. Stations with only one series are excluded. For example, 30% of the series worldwide have less than 50% matching records with other series at the same station.
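
    The matching statistic can be sketched in Python. This is a stand-in illustration, not the OpenTemp implementation; it assumes series are month-aligned lists, with None for missing months:

```python
def matching_fraction(series_a, series_b):
    """Fraction of overlapping months where two series report equal values.

    Only months where both series have a value are counted; if there is
    no overlap at all, return 0.0.
    """
    both = [(a, b) for a, b in zip(series_a, series_b)
            if a is not None and b is not None]
    if not both:
        return 0.0
    return sum(a == b for a, b in both) / len(both)

a = [1.2, 3.4, None, 5.6, 7.8]
b = [1.2, 3.5, 2.0, None, 7.8]
print(matching_fraction(a, b))  # 2 of the 3 overlapping months match
```

    A perfect scribal copy scores 1.0 on the overlap; independent instruments at the same site would rarely match exactly month after month.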

    The main Russian country code is 222. The following is a similar breakdown for station series in the 222 country code (I do not know exactly which stations John Goetz used in his analysis, but country code 222 should be a close estimate):

BIN  | CUM %
10%  | 8%
20%  | 20%
30%  | 22%
40%  | 24%
50%  | 31%
60%  | 34%
70%  | 38%
80%  | 44%
90%  | 59%
100% | 100%

    Based on these results, I think a few things should be done:

    First, my results should be reviewed to ensure the percentages above are correct.

    Second, we should decide on a percentage of matching values required for a series to be considered a scribal variation. The large jump in cumulative percentage between 90% and 100% matching values leads me to believe that the scribal variation cutoff should be 90% or higher matching values.

    Third, the GISTEMP bias should be re-calculated for scribal variations and independent series separately.

    The revised version of OpenTemp that computes the scribal variations, my input data, and summary spreadsheet are available here:

  45. John Goetz
    Posted Sep 30, 2007 at 9:40 AM | Permalink

    #44 John V:

    If I understand, you are also saying that 45% of the worldwide series have 90% or greater matching values?

Some of the records you have examined, I am sure, have long periods where the two versions match. Thus, at least sections of the records can be considered identical scribal versions. I have also seen records that differ occasionally by three or four tenths of a degree, but otherwise match. Given we are talking about monthly averages, those could be scribal variants as well. It only takes one or two days each month of misrecorded data (someone records a 5 instead of a 2 or a 9 instead of a 4) to throw the monthly average off by a few tenths of a degree.

    Then there are records (or portions of records) that consistently differ by some fixed amount, or some fixed amount plus noise. That may be due to different locations, measurement equipment, or both.

    And then there are others with odd patterns of differences. I posted elsewhere on this site my observation of Bratsk. Bratsk record 0 and record 1 overlap from 1951 through 1990. Looking at the period from 1951 to 1965, I see an interesting pattern of differences on a month-by-month basis. Subtracting record 0 from record 1, I see that most (not all) differences are as follows:

    Jan – 0
    Feb – 0
    Mar – -0.2
    Apr – 0
    May – +0.2
    Jun – +0.3
    Jul – +0.2
    Aug – +0.1
    Sep – -0.1
    Oct – 0
Nov – +0.1
Dec – +0.1

    This pattern seems to indicate some sort of adjustment is going on, although it could be a scribal variation.
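
    The kind of month-by-month tabulation described for Bratsk can be sketched as follows. The Python layout and the sample values are invented for illustration; the real comparison used the GHCN monthly records:

```python
from collections import Counter

def modal_monthly_diffs(rec0, rec1):
    """For each calendar month, the most common (record1 - record0)
    difference across the overlap years.

    rec0, rec1: dicts mapping (year, month) -> monthly mean temperature.
    """
    by_month = {m: Counter() for m in range(1, 13)}
    for key, v0 in rec0.items():
        if key in rec1:
            _, month = key
            by_month[month][round(rec1[key] - v0, 1)] += 1
    return {m: c.most_common(1)[0][0] for m, c in by_month.items() if c}

# Invented June values for three overlap years; differences cluster at +0.3
rec0 = {(1951, 6): -1.0, (1952, 6): 0.5, (1953, 6): 2.1}
rec1 = {(1951, 6): -0.7, (1952, 6): 0.8, (1953, 6): 2.4}
print(modal_monthly_diffs(rec0, rec1))
```

    A seasonal pattern in the modal differences, as at Bratsk, is hard to explain as transcription noise and looks more like an applied adjustment.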

    Characterizing the differences between records and even portions of records is important. Combining separate records should be done in such a way as to remove peculiar mathematical artifacts, such as the current dependency on station order. I think that Steve has indicated in the past that multiple algorithms may be needed to combine records, depending on the patterns of differences seen in the records.

  46. Posted Sep 30, 2007 at 10:53 PM | Permalink

    #45 John Goetz:
    It is my opinion that it is very difficult (if not impossible) to distinguish scribal variations from independent records with closely calibrated instrumentation.

    We both know that the GISTEMP algorithm for calculating the bias could be improved. I suggest that a sensible algorithm for combining station series should be developed (call it the CA algorithm). The most obvious would be to average the monthly difference between the series for all months where both series are reporting. Scribal variations should average to a zero bias while independent series should have a non-zero bias.

You have already calculated the GISTEMP bias. The corresponding bias can be calculated using the CA algorithm (combining stations in the same order as GISTEMP). The relevant comparison is then GISTEMP bias vs CA bias, rather than GISTEMP bias vs zero bias.

    It is important to somehow remove the effect of independent series because a few large, justifiable biases could easily swamp the 0.06C net bias you have found for Russia.
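
    The proposed “CA algorithm” bias could be sketched like this in Python. This is a sketch of the suggestion above, not code from GISTEMP or OpenTemp; series are assumed to be month-aligned lists with None for missing months:

```python
def ca_bias(reference, candidate):
    """Average monthly difference (reference - candidate) over the months
    where both series report; None if there is no overlap."""
    diffs = [r - c for r, c in zip(reference, candidate)
             if r is not None and c is not None]
    return sum(diffs) / len(diffs) if diffs else None

# Invented overlap of 2 months with a consistent -0.1C offset
ref = [10.0, 11.0, None, 12.0]
cand = [10.1, 11.1, 11.5, None]
print(ca_bias(ref, cand))
```

    Under this scheme a scribal variation should average to roughly zero bias, while an independent series with a real offset yields a non-zero, physically meaningful bias.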

  47. Demesure
    Posted Oct 1, 2007 at 1:40 AM | Permalink

That’s crazy. Adjustments aside, Hansen has decided to keep the data that make an upward trend and discard those that don’t.
For Wellington, he deletes hot temperatures before 1940. For France, he keeps low temperatures before 1940 that even Meteo France considers NOT reliable. That’s not cherry picking, that’s banana picking!

I’ve compared GISS and the European database and this data massaging is eye-popping (graphs of the 9 stations common to the two databases are here; note also the inexplicable GISS 1990 jump on nearly all stations, a jump that is absent in the European database).

  48. Demesure
    Posted Oct 1, 2007 at 2:09 AM | Permalink

    Oh, I should add, the only consistent station in all GISS’s stations for France is Mont-Aigoual (temperatures for GISS & European database are the same).

This station, like nearly all stations from the European db, shows the warming in France over the last 30 years is not continuous but came as a sudden jump around June 1986 (nicknamed the “Chernobyl effect”). Then over the last 10 years, no real temperature increase anymore. It seems Hansen doesn’t like it, so he truncates post-1990 data for Mont-Aigoual (a high quality rural station maintained for weather forecasts by Meteo France with permanent professional observers).

  49. Willis Eschenbach
    Posted Oct 1, 2007 at 2:42 AM | Permalink

    Demesure, thanks for the links to the graphs. One thing that stands out is the radical difference in the 1980-1981 change in the two datasets. In the KNMI data, in almost all of the stations you showed, the temperature drops from 1980 to 1981. In the GISS data, on the other hand, it spikes radically … do you have any guesses about why that might be? It seems to be happening across the board, lasts for three years, and then the two records become close again in 1983.

    Weird …


  50. Posted Oct 1, 2007 at 3:25 AM | Permalink

GISS ANN is the DEC-NOV average. Are you certain the KNMI average is not JAN-DEC?
    Take a look at:

  51. Demesure
    Posted Oct 1, 2007 at 4:05 AM | Permalink

I used monthly temperatures for both databases, then calculated annual values with “tapply” in R. With Mont-Aigoual (graph above), I get the same results, so the calculations are kosher. What is not kosher is Hansen’s data picking.

    And look at his 1990 peak for nearly all French stations (you may check for the Netherlands) !

  52. Demesure
    Posted Oct 1, 2007 at 4:08 AM | Permalink

#49, Willis, you meant the 1990-91 “divergence”, I suppose.
No, I’ve no idea why, and I suspect it is not an isolated “incident”, for Europe anyway.
I can check other European countries if anyone is interested.

  53. Hans Erren
    Posted Oct 1, 2007 at 5:14 AM | Permalink

re 51:
I prefer long time series; indeed, a jump in 1986 and then flat (note that the 2003 summer heat wave doesn’t show up in the annual average; winter temperature variation dominates the annual average).

  54. JerryB
    Posted Oct 1, 2007 at 6:23 AM | Permalink

    Re #48,

    The Mont Aigoual data in GHCN V2 ends in 1988. It was not Hansen who truncated it,
    it was an effect of how the GHCN data were collected, and how subsequent updates
    are collected. Non-US stations which are not MCDW stations do not get updated in
    GHCN V2.

  55. Steve McIntyre
    Posted Oct 1, 2007 at 6:44 AM | Permalink

    #45 John Goetz:
    It is my opinion that it is very difficult (if not impossible) to distinguish scribal variations from independent records with closely calibrated instrumentation.

If you have scribal variations, then the difference series has a very disproportionate number of zero values. So the first thing to do is to check the median of the 5% truncated difference series. If it’s scribal, it will be 0. This isn’t an absolute rule, but practically it works pretty well.
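
    That quick test can be sketched in Python; the trim handling and the names here are illustrative assumptions, not Steve's actual code:

```python
def looks_scribal(series_a, series_b, trim=0.05):
    """Trim the extreme `trim` fraction from each end of the sorted
    difference series and check whether the median is exactly zero."""
    diffs = sorted(a - b for a, b in zip(series_a, series_b)
                   if a is not None and b is not None)
    k = int(len(diffs) * trim)
    trimmed = diffs[k:len(diffs) - k] if k else diffs
    n = len(trimmed)
    median = (trimmed[n // 2] if n % 2 else
              (trimmed[n // 2 - 1] + trimmed[n // 2]) / 2)
    return median == 0

a = [5.0, 6.0, 7.0, 8.0, 9.1]  # one mis-keyed month
b = [5.0, 6.0, 7.0, 8.0, 9.4]
print(looks_scribal(a, b))  # True: most differences are exactly zero
```

    A series pair with a genuine instrumental offset would have few or no exact-zero differences, so the trimmed median lands away from zero and the test fails.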

    Of course, you get into bizarre QC problems in GHCN, such as the case documented at Wellington, where one scribal version gets spread into two different series and a “bias” gets calculated that has no conceivable meaning. To drain the swamp, you need to determine provenance of the GHCN series. I’ve written and the answer so far is that it’s in someone’s university notes and they’re looking for the information.
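
    The trimmed-median screening test described above can be sketched as follows (in Python for illustration; the record values are invented):

```python
# Sketch of the screening test described above: difference two candidate
# versions of a station record, trim the extreme 5% from each tail, and
# check whether the median of what remains is zero.
# All numbers below are invented for illustration.

def trimmed_median(diffs, trim=0.05):
    """Median of `diffs` after dropping the lowest/highest `trim` fraction."""
    s = sorted(diffs)
    k = int(len(s) * trim)
    core = s[k:len(s) - k] if k > 0 else s
    mid = len(core) // 2
    return core[mid] if len(core) % 2 else (core[mid - 1] + core[mid]) / 2.0

def looks_scribal(v1, v2, trim=0.05):
    """True if the trimmed median of the difference series is (near) zero."""
    diffs = [a - b for a, b in zip(v1, v2)]
    return abs(trimmed_median(diffs, trim)) < 1e-6

# Two "scribal" versions: identical apart from a few transcription slips
v1 = [10.0, 11.2, 9.8, 12.1, 10.5] * 8
v2 = list(v1)
v2[3] += 0.5
v2[17] -= 0.3
print(looks_scribal(v1, v2))   # True: most differences are exactly zero

# An independent instrument: a constant offset shifts every difference
v3 = [x + 0.4 for x in v1]
print(looks_scribal(v1, v3))   # False: median difference is -0.4
```

    Trimming absorbs the handful of genuine transcription discrepancies, so a scribal pair collapses to a zero median while independently measured records do not.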

  56. Steve McIntyre
    Posted Oct 1, 2007 at 9:07 AM | Permalink

    #47. I’d threaded this, but have taken the thread offline until I can verify which GISS versions and which KNMI versions were used.

  57. Steve McIntyre
    Posted Oct 1, 2007 at 9:15 AM | Permalink

    #47. KNMI’s webpage for Bourges says that

    they use the GHCN adjusted version for the data. So what we’re talking about here is the difference between GHCN adjustments and GISS adjustments. The GISS adjustment in question here is the Step 2 adjustment, which does not pertain to the Step 1 bias discussed in this post, but is more along the lines of the issue being considered in the Wellington posts.

  58. Posted Oct 1, 2007 at 9:23 AM | Permalink

    #56, #57 Steve McIntyre:
    Should the thread be restored with a retraction? It was getting a lot of attention.

    Steve: I briefly transferred Demesure’s comments above as a new thread since they were for a part of the world not commented yet. I thought that he had used original data, but it turned out that the KNMI data was GHCN adjusted – so it was simply a comparison of GHCN adjusted versus GISS adjusted – which is a different sort of beauty contest. If we’re going to talk about that, I’d rather do so under my own thread where I have properly controlled the analysis. The GHCN adjustments are a whole new can of worms that need to be disentangled systematically and I’d rather do that my way. So I’ve simply left his comments in the present thread, tho they are a bit off topic.

  59. JerryB
    Posted Oct 1, 2007 at 9:46 AM | Permalink

    GHCN adjusted? Almost gobsmackingly bad move.

    Restoring it with a retraction wouldn’t make it a good move.

  60. JerryB
    Posted Oct 1, 2007 at 10:01 AM | Permalink

    John V,

    A while back, during a discussion of Dawson, Yukon Territory, Canada, and
    GHCN adjusted data, I took a look at some, and would not recommend its
    use to anyone. The following sample may suggest why.

    GHCN V2 has two sets of temperature data for Dawson, Y.T. which include
    the early 20th century. They differ slightly. They are each adjusted,
    but with very different adjustments. A few samples below: the leading 0
    or 1 is the set number, the first line of each pair is “raw”, and the
    second line is adjusted.

    SET YEAR    JAN    FEB    MAR    APR    MAY    JUN    JUL    AUG    SEP    OCT    NOV    DEC

    0 1901 -29.10 -28.40 -11.50 -4.70 6.50 14.00 16.30 12.30 7.60 -2.00 -16.30 -22.20
    0 1901 -27.30 -27.90 -12.90 -8.10 1.60 8.60 11.20 8.30 5.50 -1.90 -14.60 -20.00

    1 1901 -29.10 -28.40 -11.50 -4.70 6.50 14.00 16.30 12.30 7.60 -2.00 -16.30 -22.20
    1 1901 -27.10 -26.20 -9.70 -3.80 6.10 12.60 14.60 10.90 6.60 -2.70 -16.30 -21.10

    0 1902 -26.30 -21.70 -21.40 -2.80 7.90 14.40 16.60 13.10 6.10 -1.10 -21.00 -30.10
    0 1902 -24.50 -21.20 -22.80 -6.20 3.00 9.00 11.50 9.10 4.00 -1.00 -19.30 -27.90

    1 1902 -26.40 -21.80 -21.70 -2.90 7.90 14.50 16.70 13.00 6.10 -1.20 -21.20 -30.20
    1 1902 -24.40 -19.60 -19.90 -2.00 7.50 13.10 15.00 11.60 5.10 -1.90 -21.20 -29.10

    0 1903 -32.40 -22.80 -14.20 -4.40 6.30 14.90 15.80 13.90 5.80 -6.10 -19.80 -17.70
    0 1903 -30.60 -22.30 -15.60 -7.80 1.40 9.50 10.70 9.90 3.70 -6.00 -18.10 -15.50

    1 1903 -32.50 -22.90 -14.30 -4.50 6.50 15.00 15.90 13.90 5.90 -6.20 -19.90 -17.80
    1 1903 -30.50 -20.70 -12.50 -3.70 6.10 13.60 14.20 12.50 4.90 -6.90 -19.90 -16.70

    0 1904 -29.50 -31.50 -19.10 1.30 7.30 12.50 14.20 11.90 2.80 -1.60 -17.50 -16.00
    0 1904 -27.70 -31.00 -20.50 -2.10 2.40 7.10 9.10 7.90 0.70 -1.50 -15.80 -13.80

    1 1904 -29.70 -31.60 -19.20 1.40 7.30 12.60 14.30 12.00 2.90 -1.70 -17.60 -17.60
    1 1904 -27.70 -29.40 -17.40 2.20 6.90 11.20 12.60 10.60 1.90 -2.40 -17.60 -16.50

    0 1905 -31.00 -20.70 -11.20 0.80 9.60 15.10 15.20 12.90 4.20 -4.10 -12.00 -23.60
    0 1905 -29.20 -20.20 -12.60 -2.60 4.70 9.70 10.10 8.90 2.10 -4.00 -10.30 -21.40

    1 1905 -30.20 -21.00 -11.20 0.90 9.70 15.20 15.30 13.00 4.20 -4.60 -12.10 -23.70
    1 1905 -28.20 -18.80 -9.40 1.80 9.30 13.80 13.60 11.60 3.20 -5.30 -12.10 -22.60

  61. Bob B.
    Posted Oct 1, 2007 at 10:03 AM | Permalink

    Steve, What happened to the Hansen in France Blog?

    Steve: see Demesure’s comments in Russian Bias.

  62. Posted Oct 1, 2007 at 10:08 AM | Permalink

    Those adjustments do look terrible. (Never mind how terrible a monthly *average* of -32.5C looks, even for a Canuck). I have been avoiding adjusted data as much as possible.

  63. Steve McIntyre
    Posted Oct 1, 2007 at 10:14 AM | Permalink

    #60. I think that we talked about the Dawson data in connection with Rob Wilson’s tree ring data. He ended up in a pickle about what to do in his proxy studies, because the adjustments were so problematic. This would be a good station to verify in detail.

  64. JerryB
    Posted Oct 1, 2007 at 10:27 AM | Permalink

    Re #60,

    Yes, it was in that connection.

    I had done some checking on differences among multiple streams of data for
    given locations in GHCN “raw” mean data; I think I will go run that program
    through GHCN adjusted mean data.

  65. Posted Oct 1, 2007 at 11:32 AM | Permalink

    Here are the GHCN-v2 and GHCN-v2-adjusted sets along with the GISS combined set of the raw data and the GISS-adjusted for the station in Malta:

    The two GHCN series and the two GISS ones are the same since 1982.
    Going back further, the GHCN adjustment warms the values by 0.1-0.4 °C; the other three sets are very similar since 1963.
    In earlier years the GISS urban adjustment cools Luqa by 0.2 °C from 1951 to 1954 and by 0.1 °C from 1955 to 1962.

    The next plot has data since 1870

    The big oddity is that GHCN adjusts the period 1920-40 upward, by as much as 1.5 °C.

    Can anyone explain this?

    N. B. If the two figures don’t show up, I provided the links.

  66. Demesure
    Posted Oct 1, 2007 at 1:40 PM | Permalink

    #57. Oh yes, sorry for the OT and for not giving the link to my data.
    Steve, it seems the data at your KNMI link (GHCN V2, black curve in the graph below) is still different from the data I retrieve from “KNMI” at this link : . In fact, it’s the “European Climate Assessment Dataset”. I found it via a link on the KNMI site, so I called it KNMI, but I shouldn’t have because that’s misleading. I’ve now renamed it ECAD (blue curve).

    Too bad I wasn’t here; maybe I’ve missed something important. I must confess, all those GMTs (genetically modified temperatures) are a real nightmare for me, and I’d love to have a “Hansen in France” thread to get things clearer.

  67. Demesure
    Posted Oct 1, 2007 at 1:56 PM | Permalink

    OT again, to reply to Willis #49
    The link given by Steve to Bourges (GHCN V2) shows missing months for 1990-91.
    That may explain the huge difference between blue and red curves above.
    Missing months are ridiculous for such recent data: the ECAD database even has daily temperatures!
    I’ll check other French stations.

  68. Demesure
    Posted Oct 1, 2007 at 2:20 PM | Permalink

    I got it now. For France, most of the GISS data is missing January, February, and March of 1991 (see the .TXT files retrieved from the GISS site).

    Remove the cold months and you make the 1991 temperature jump. Voilà, Waldo!
    I don’t know whether GHCN or GISS fathered him, but he’s here.

  69. MarkR
    Posted Oct 1, 2007 at 2:42 PM | Permalink


    Both blended and non-blended ECA series are available for download. Blended series are series that are near-complete by infilling from nearby stations. They are also updated using synoptical messages. Only these blended series are further analysed in ECA&D.

    In this procedure the gaps in a daily series are also infilled with observations from nearby stations, provided that they are within 25km distance and that height differences are less than 50m.

    Apparently one can get “blended”, or “unblended”. Interesting that their criteria for infill are a lot closer to home than GISS.
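
    The quoted ECA&D criterion (neighbours within 25 km horizontally and 50 m vertically) amounts to a simple filter. A sketch in Python; the station names and coordinates below are hypothetical, not ECA&D data:

```python
# The ECA&D infill criterion quoted above, sketched as a filter:
# keep only neighbours within 25 km and within 50 m in elevation.
# Station coordinates and names here are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine, mean Earth radius 6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def infill_candidates(target, stations, max_km=25.0, max_dh=50.0):
    """Neighbours close enough, horizontally and vertically, to infill from."""
    lat, lon, elev = target
    return [name for name, (slat, slon, selev) in stations.items()
            if km_between(lat, lon, slat, slon) <= max_km
            and abs(selev - elev) <= max_dh]

target = (47.05, 2.37, 160.0)                  # (lat, lon, elevation in m)
stations = {
    "near_same_height": (47.10, 2.40, 140.0),  # ~6 km away, 20 m lower
    "near_but_high":    (47.10, 2.40, 260.0),  # close, but 100 m higher
    "too_far":          (48.00, 2.37, 160.0),  # ~105 km away
}
print(infill_candidates(target, stations))  # ['near_same_height']
```

    The elevation cap is what makes this much stricter than a distance-only rule: a nearby mountain station is excluded even though it is well inside 25 km.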

  70. Steve McIntyre
    Posted Oct 1, 2007 at 5:42 PM | Permalink

    Demesure, I’ll do a post on Bourges, which will be a good way of drawing attention to the ECAD data set. However, I can’t tie your representation of GISS back to GISS data, and this needs to be sorted out first. Also, you need to specify which GISS version you’re using. Below is dset=1.

  71. Willis Eschenbach
    Posted Oct 2, 2007 at 2:02 AM | Permalink

    Demesure, I can’t recreate your data. I went to the GISS site, and their data for Bourges peaks in 1990 (see Steve’s graph above), while yours peaks in 1991 … what’s up with that?


  72. Demesure
    Posted Oct 2, 2007 at 4:35 AM | Permalink

    #69 Mark, I used blended data.
    BTW, for Bourges, ECAD blended and non-blended are identical.

  73. Demesure
    Posted Oct 2, 2007 at 4:47 AM | Permalink

    #70, 71
    The data I use for GISS is from the “Download monthly data as text” link under the graphs generated by GISS (dset = 1) (for example here : )

    Well, I realize that the graphs I made from the GISS monthly data use a simple annual mean: missing months are simply skipped rather than interpolated (see line 34 of the R source code). Hence the 1991 peak in my graphs but not in GISS’ for Bourges and other towns.

    But it clearly points to a real problem in the GISS datasets: for ALL stations in France, 1991 Jan-Feb-Mar are missing (try Bourges, Marseille, Lyon…). I’m not aware of any national disaster that struck us and made the data disappear for 3 months in 1991!
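
    The effect of the missing winter months is mechanical. A toy seasonal cycle (invented values, roughly mid-latitude-shaped) shows why dropping Jan-Mar inflates a simple annual mean:

```python
# Toy seasonal cycle (invented monthly temperatures, Jan..Dec) showing
# why skipping the missing winter months inflates a simple annual mean.
monthly_1991 = [2.0, 3.0, 7.0, 10.0, 14.0, 18.0, 20.0, 19.0, 16.0, 11.0, 6.0, 3.0]

full_mean = sum(monthly_1991) / 12
partial = monthly_1991[3:]              # Jan, Feb, Mar missing
partial_mean = sum(partial) / len(partial)

print(full_mean)     # 10.75
print(partial_mean)  # 13.0 -- the spurious "peak"
```

    A mean over the nine remaining months is warm-biased by more than 2 °C here, which is the scale of the 1991 jump under discussion.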

    And if you look at neighbouring Spain or Italy, it seems that 1991 is also a peculiar year for GISS, since a lot of stations have data ending or starting around that date.
    Germany, GB, and the Netherlands seem at first glance to be spared this mysterious 1991 disease.

    So the million-dollar question is: what happened to the GISS data in Southern Europe in early 1991?

  74. Posted Oct 6, 2007 at 10:38 AM | Permalink

    John Goetz:
    Have you had any time to consider my suggestion above? The more I think about it, comparing the “Hansen bias” to a “sensible bias” is the only fair comparison.

  75. John Goetz
    Posted Oct 6, 2007 at 7:36 PM | Permalink

    #74 John V

    No, I have not had a chance to start working on an alternative “CA” combining algorithm. I have had a horrendously busy week of work and extra-curriculars, so all I have done is get partway into slogging through the rest of the world. Maybe I will be done with just that by next weekend.

    I agree that the only relevant comparison is with a widely accepted algorithm. However, I will be the first to admit I am a horrendously slow programmer. I know what I want to do in my head, but Visual Basic and Excel are woefully underpowered for this task and I am very new to R. I had started looking at modifying Steve’s code to do what you suggest, which I agree is, to first order, a better means of combining records. But I have barely started; I spend 10x as much time reading documentation and looking at examples as actually coding. Part of me knows that I won’t be able to respond fast enough to the very good suggestions that will follow. After all, I am sure a number of people will find places where a simple average is inappropriate.
