Adjusting USHCN History

Although the USHCN version used in Hansen’s 1999 press release seems to have been expunged from official U.S. government records, it was fortunately preserved by John Daly. Jerry Brennan sent me a link to Daly’s archived copy yesterday, from which I was able to replicate the version in the 1999 press release, as shown below.

 uhcn8.gif  uhcn9.gif

Figure 1. USHCN Version from 2000. Left – From Contemporary Press Release; Right – from data archived by John Daly. Both are 1951-1980 anomalies in deg C.

Now here’s the fun part. Here’s a recent NOAA version of the same data. If you look carefully, you can see that the late 1990s are higher and the 1920s and 1930s are lower. Compare 1999 to earlier years.
uhcn16.gif

The graph below shows the difference between the version archived in 2000 and the version downloaded from the table-making NOAA webpage. These are obviously not small adjustments: the adjustment is approximately the same magnitude as the observed temperature increase itself.

uhcn10.gif
Figure 2. Difference between 2000 and 2007 Versions

The effect of the adjustments since 2000 has been to bring the USHCN history more in line with the CRU version. One wonders exactly what adjustments have been performed by CRU and others. The recent admission by Brohan et al 2006 that original versions of many series have been lost (or were never collated by CRU in the first place), leaving only the adjusted versions at CRU, with the nature of some or all of the adjustments undocumented and unknown, is extremely disquieting.
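The comparison in Figure 2 boils down to re-baselining each version to 1951-1980 anomalies and differencing them year by year. Here is a minimal Python sketch of that bookkeeping, with hypothetical data (the actual work, against the real USHCN series, is done by the R script in the first comment below):

```python
# Re-baseline two versions of an annual temperature series to a common
# anomaly period, then difference them to expose the adjustments.
# Hypothetical data; the real comparison uses the archived USHCN versions.

def to_anomaly(series, start_year, base=(1951, 1980)):
    """Convert absolute annual values to anomalies vs. a base period."""
    years = range(start_year, start_year + len(series))
    base_vals = [v for y, v in zip(years, series) if base[0] <= y <= base[1]]
    base_mean = sum(base_vals) / len(base_vals)
    return [v - base_mean for v in series]

def version_difference(old, new, start_year):
    """Year-by-year new-minus-old difference of the anomaly series."""
    return [n - o for o, n in zip(to_anomaly(old, start_year),
                                  to_anomaly(new, start_year))]
```

Note that a uniform offset between versions vanishes after re-baselining; only year-dependent adjustments survive the differencing, which is what Figure 2 displays.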

Update: Jerry Brennan observed that the comparison above was between the 2000 GISS version and the 2007 NOAA version, and that the more appropriate comparison was between GISS versions. I’ve redone the comparison from the GISS data, yielding the differences below: red – GISS 2007 vs GISS 2000; blue – NOAA 2007 vs GISS 2000. So the effect exists with the GISS adjustments as well. The proposed NOAA beta version looks as though it will increase the adjustments shown below even more (as noted above).

uhcn15.gif


129 Comments

  1. Steve McIntyre
    Posted Feb 16, 2007 at 9:44 AM | Permalink

    Here’s the script to produce these figures.
    ##COMPARE CURRENT USHCN PLOT TO OLDER VERSION

    ##JOHN DALY VERSION FROM 2000
    v00<-read.table("http://www.john-daly.com/usatemps.006",skip=7,fill=TRUE,nrow=1999-1880)
    v00<-ts(v00[,2],start=1881)
    mean(v00[(1951:1980)-1880]) #[1] -4.419479e-18
    #so this is zero-ed on 1951-1980; it's also in deg C

    ##SUPPLEMENTARY: RECONCILE DALY VERSION TO FIGURE IN http://eobglossary.gsfc.nasa.gov/Study/GlobalWarm1999/
    layout(1);par(mar=c(0,0,1,1))
    plot(1881:1999,v00,type="l",ylim=c(-1.4,1.9),axes=FALSE,xlim=c(1884.2,1995.75))
    axis(side=1,tck=0.025,at=seq(1880,2000,20));axis(side=2,las=1,at= -1:2,tck=0.05);box();grid(col="grey40")
    lines(1881:1999,filter(v00,rep(.2,5)),col="red",lwd=2)
    points(1881:1999,v00,pch=19,cex=.5)

    ## DOWNLOAD CURRENT DATA AND MAKE ANNUAL ANOMALY SERIES 1951-1980 and DEG C
    #table was downloaded from http://lwf.ncdc.noaa.gov/oa/climate/research/cag3/na.html
    #loc<-"d:/climate/data/jones/ushcn/jan2007.txt"
    #table saved to climateaudit

    loc<-"http://www.climateaudit.org/wp-content/uploads/2007/02/jan2007.txt"
    test<-read.table(loc)
    v07<-ts(rev(test[,2]),start=test[nrow(test),1],end=2006) #1895 to 2006
    v07<-(v07-mean(v07[(1951:1980)-1894]))*5/9

    ###DIFFERENCE FIGURE
    layout(1);par(mar=c(1,1,1,1))
    plot(1895:1999,-v00[(1895:1999)-1880]+ v07[(1895:1999)-1894],type="h",col="blue",lwd=2,las=1,ylab="")

    #COMPARE VERSIONS (not shown)
    nf<-layout(array(1:2,dim=c(2,1)),heights=c(1.1,1.3))
    par(mar=c(0,3,1,1))
    plot(1895:1999,v07[(1895:1999)-1894],type="b",pch=19,axes=FALSE,xlab="",ylab="")
    axis(side=1,labels=FALSE);axis(side=2,las=1);box();#mtext(side=2,line=2,"deg C")
    text(1890,1,"2007 Version",font=2,pos=4)

    par(mar=c(3,3,0,1))
    plot(1895:1999,v00[(1895:1999)-1880],type="b",pch=19,axes=FALSE,xlab="",ylab="")
    axis(side=1);axis(side=2,las=1);box();#mtext(side=2,line=2,"deg F")
    text(1890,1,"2000 Version",font=2,pos=4)

  2. Posted Feb 16, 2007 at 9:57 AM | Permalink

    The results from United States clearly represent an upper limit to the urban influence on hemispheric temperature trends.

    (0.15 C 1901-84), Jones 1990

  3. JerryB
    Posted Feb 16, 2007 at 10:36 AM | Permalink

    Just to clarify who’s on first, the data for the graphs is from GISS, which
    does use USHCN data, but also uses data from GHCN US stations that are not
    part of the USHCN collection. There seem to be about 600 GHCN US stations
    in the “lower 48” that are not included in USHCN.

  4. Jean S
    Posted Feb 16, 2007 at 10:38 AM | Permalink

    Steve, I think that’s actually based on the “raw” USHCN data, the “adjusted” USHCN version (from 1996?) looks different. See (Plate 2) of

    Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl 2001. A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354.

    http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html

    Overall, the paper is rather “insightful” discussion on these adjustments. The press release figure in the previous post, on the other hand, seems to be the older GISS analysis of the GHCN network data, see (figure 6):

    Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato 1999. GISS analysis of surface temperature change. J. Geophys. Res. 104, 30997-31022, doi:10.1029/1999JD900835.

    http://pubs.giss.nasa.gov/abstracts/1999/Hansen_etal.html

  5. BKC
    Posted Feb 16, 2007 at 10:42 AM | Permalink

    Re. #2

    How does he come to that conclusion?

    Is he assuming the temp change btw 1901 and 1984 should be greater than or equal to 0° without UHI?

  6. John G. Bell
    Posted Feb 16, 2007 at 10:43 AM | Permalink

    How much data has been lost? Do we know? Was data selectively lost? That is, is the remaining data biased by the loss? How much additional data is contaminated (destroyed for this purpose?) by the uncertainty over its modification?

    Please, I don’t care how this came about or who if anyone is involved. I just wonder about the data and the use we can make of it. Is there a problem here? Does Brohan think there is a problem or anyone else who is familiar with the archiving of this data?

  7. JerryB
    Posted Feb 16, 2007 at 10:43 AM | Permalink

    I should have mentioned that the differences graph should compare old GISS
    data to new GISS data, rather than to new NOAA data.

  8. Jean S
    Posted Feb 16, 2007 at 10:48 AM | Permalink

    #4 (contd) If you look at plot (i) of Plate 2 (“total adjustments”) in Hansen et al 2001, it follows your Figure 2 pretty closely. Then if you look at figures (d)-(h), you can see where most of these adjustments come from.

  9. Steve Sadlov
    Posted Feb 16, 2007 at 11:07 AM | Permalink

    This is why I have given up on the surface record as being anything more than a rubbish bin. Dumpster diving for data is not my cup of tea.

  10. Milton
    Posted Feb 16, 2007 at 11:16 AM | Permalink

    I am very sympathetic with your work exposing “global warming climate scientists” for their biased work. However, it has become very tedious to keep up with this never-ending study after study by everyone concerned (I read RealClimate and others). Every day there is a new study of temps or glaciers or Antarctica or tree rings or sea levels or precipitation, etc. And it seems every government on the planet has its own study groups that dream up all kinds of climate issues. To me, the first order of business about future climate should be to determine if there is anything that humans can do to control it, whether climate changes are manmade or natural. If we have a proven method to control climate then we should have a plan for how to control a dangerously warming climate (we’ll know when it gets warmer every year for x number of years), and a plan for a dangerously cooling climate. If we don’t know how to control the climate then we’ll adjust to whatever occurs, like we always have, until we can develop a proven method of climate control. Why does no one address how to control the climate instead of fighting over who can prove what the past climate was or what the future climate will be? It would be nice to know that if we get into a new little ice age or new dust bowl we can do something about it.

  11. Larry Huldén
    Posted Feb 16, 2007 at 11:17 AM | Permalink

    When looking at Steve’s Figure 2, it seems very unlikely that these adjustments represent only corrections of various kinds, such as outlier removal. It looks like a systematic adjustment that is not related to correcting the temperature data; it is more like an adjustment to correct a trend.

  12. Onar Åm
    Posted Feb 16, 2007 at 11:34 AM | Permalink

    Re #10

    “( we’ll know when it gets warmer every year for x number of years),”

    Will we? In case you haven’t noticed, this discussion is about whether we can trust the global surface temperature measurements. Steve has uncovered what can only be described as a disconcerting effort to eliminate the 1930s heat period.

  13. Ken Fritsch
    Posted Feb 16, 2007 at 11:39 AM | Permalink

    Re: #4

    What I find interesting is that the US, a technically developed nation, has had a relatively high density of temperature stations going back in time, and yet even with the “corrections” the temperature anomaly trend does not impressively display the signature of GW over the past century.

    From the link in #4 for the 2001 Hansen paper, we see the total temperature anomaly adjustments from 1900 to 2001 are on the order of +0.3 degrees C for the US — a large part of the warming trend.

  14. Bill F
    Posted Feb 16, 2007 at 11:41 AM | Permalink

    Has anybody emailed them to ask for a detailed explanation of the basis for the adjustments? Any response you could get from them would undoubtedly help in figuring out why the adjustments seem so much like a trend correction.

  15. Jean S
    Posted Feb 16, 2007 at 11:49 AM | Permalink

    Looking at Plate 3 of Hansen et al 2001, it seems that the largest USHCN adjustments (1-3 C !!!) have been made in SW Texas. Does anyone have any raw data from that area?

  16. jae
    Posted Feb 16, 2007 at 12:17 PM | Permalink

    Why does no one address how to control the climate instead of fighting over who can prove what the past climate was or what the future climate will be? It would be nice to know that if we get into a new little ice age or new dust bowl that we can do something about it.

    Err, several states are addressing this, and the US Congress will be addressing this. Unfortunately, whatever they do will only harm the domestic economy and transfer wealth to China and India, without affecting the climate. It is foolish to try to control climate, when we don’t even know what’s going on. And we certainly cannot control Mother Nature.

  17. Milton
    Posted Feb 16, 2007 at 12:28 PM | Permalink

    Ref # 12
    Yes, I noticed. I don’t trust global temperature measurements. Never have. Why? Because there were no global temperatures being taken 125 years ago.

    That was my point. Does it really matter if we know for sure? It’s a nice exercise, but knowing that won’t help us if the planet enters a dangerous climate trend. What we need to know is how to control the climate in the future.
    Why can’t the climate scientists get together and develop a plan to control a dangerous climate no matter what caused it? Just because it’s natural doesn’t mean it’s not dangerous.

  18. Steve McIntyre
    Posted Feb 16, 2007 at 12:33 PM | Permalink

    Milton, these are large questions, but there are lots of people worrying about big issues. I’m just trying to understand the data, brick by brick. There are lots of other approaches to the issues, but I can’t do everything. Let’s discuss data issues on this thread.

  19. Jean S
    Posted Feb 16, 2007 at 12:33 PM | Permalink

    #15 (contd) I plotted a few of those “adjusted” USHCN stations available:

    http://cdiac.ornl.gov/epubs/ndp/ushcn/state_TX_mon.html

    I suppose people in, e.g., Alpine TX (USHCN 410174) are completely unaware that they are living in one of the places that have been hit hardest by global warming…

  20. Darwin
    Posted Feb 16, 2007 at 12:38 PM | Permalink

    Roger Pielke Sr. in November submitted a study to Geophysical Research Letters, “Unresolved issues with the Assessment of Multidecadal Global Land Surface Temperature Trends”. It includes information from Lu et al about adjustments indicating a sort of hop-step up in temperatures after adjustment. In 1989, Thomas Karl of NCDC, now head of CCSP, published a piece with Phil Jones on the biases created by the urban heat island effect. It said: “Results indicate that in the United States the two global land-based temperature data sets have an urban bias between +0.1°C and +0.4°C over the twentieth century (1901–84). This bias is as large or larger than the overall temperature trend in the United States during this time period, +0.16°C/84 yr. Temperature trends indicate an increasing temperature from the turn of the century to the 1930s but a decrease thereafter. By comparison, the global temperature trends during the same period are between +0.4°C/84 yr and +0.6°C/84 yr. At this time, we can only speculate on the magnitude of the urban bias in the global land-based data sets for other parts of the globe, but the magnitude of the bias in the United States compared to the overall temperature trend underscores the need for a thorough global study.” His subsequent work went on to diminish the effect, and a 1996 Nature article, “Trends in high-frequency climate variability in the twentieth century”, claimed to show that extremes of climate in the U.S. have increased since 1976, compared to the previous 65 years, consistent with the theory of global warming. Pielke Sr. et al’s paper was in response to actions by Karl in writing on reconciling surface and atmospheric temperature trends.
Pielke wrote: “The type of analyses that are presented in our paper should have been included in the CCSP Report ‘Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences.’ Unfortunately, as I discuss in my Public Comment on the CCSP Report, the CCSP Report failed to provide the appropriate breadth of perspectives that the policymakers need. That CCSP Report, therefore, is an advocacy document which promotes the narrower perspective of its authors on the subject of reconciling surface and tropospheric temperature trends. The JGR paper that we have completed should, therefore, be considered as adding information to be communicated to policymakers on the robustness of the multi-decadal surface temperature trends.” There seem to be a lot of scientists questioning the land surface numbers and what’s being done to them.

  21. Peter Lloyd
    Posted Feb 16, 2007 at 12:38 PM | Permalink

    re: Milton/10

    While in sympathy with your frustration, I strongly disagree with your suggestion that we forget the argument and get on with efforts to control/adapt to climate.

    Unless we get down to the truth of what is happening to our climate and why, we are bound to make a mess of our reaction to it. Many good climatologists believe that CO2 levels are not the cause of climate change but the result, plus some help from anthropogenic sources. If they are right, then all proposals to impose carbon taxes and set up carbon trading schemes will have no effect on global temperature rise and climate. But they will take billions out of the economy while they divert attention and resources from the real issues, i.e., cutting back massively on our dependence on fossil fuels and using energy much more efficiently. Success in these latter areas would boost economies, whereas carbon taxes will cripple them. Moreover, when it eventually becomes obvious that carbon controls have failed to affect climate, we will still be faced with the same problem, but by then with depleted resources.

    [snip - no DDT here. Discuss it elsewhere]

    If we get the reaction to climate change wrong, the result will be much worse – billions will suffer all over the planet.

    That’s why we need to use good science, and get the answers right.

    By the way, Milton, mankind will never, ever, be able to control climate. Just as well – the challenges thrown at us by Mother Nature are as nothing to the mess politicians would make of it.

  22. Michael Jankowski
    Posted Feb 16, 2007 at 12:48 PM | Permalink

    Why does no one address how to control the climate

    Maybe because it’s not possible.

    Proving what the past climate was and what the future climate will (might) be would tell us how much of an effect (if any) we could have on the future climate, what we might need to do to adapt, etc.

  23. Steve McIntyre
    Posted Feb 16, 2007 at 12:50 PM | Permalink

    #7. Jerry, I’ve updated to show a GISS-GISS comparison as well as a NOAA-GISS comparison.

  24. JerryB
    Posted Feb 16, 2007 at 12:51 PM | Permalink

    A few things may be getting combobulated here.

    USA differences between new NOAA NCDC numbers, and old GISS numbers,
    tend to exceed differences between new, and old, GISS numbers.

    GISS numbers changed in large part due to GISS switching from “raw”
    USHCN data to “adjusted” USHCN data.

    A glimpse of USHCN adjustments may be had by looking at

    and some brief discussion thereof is at

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    If “time of observation bias” is a new concept to you, there is a brief
    discussion of it at http://www.john-daly.com/tob/TOBSUM.HTM

    I’ve got to get some snow off of my driveway, or I would be a bit less brief.

  25. Reid
    Posted Feb 16, 2007 at 2:40 PM | Permalink

    “Why does no one address how to control the climate”

    When humans can build a Dyson sphere for the sun we will be able to control the climate.

  26. PHE
    Posted Feb 16, 2007 at 2:41 PM | Permalink

    Please clarify. I’m misunderstanding something. I cannot see any visual difference between the two graphs at the top of the article. Should I?

  27. David Smith
    Posted Feb 16, 2007 at 3:15 PM | Permalink

    Re # 26 I think that the top two graphs simply show that the Daly data replicates Hansen’s 1999 chart. They should be the same, and they are.

  28. Bill F
    Posted Feb 16, 2007 at 3:20 PM | Permalink

    I think the two graphs side by side are Steve’s demonstration that he can duplicate the slide with the digital data set. So if there is no difference, it means he has successfully duplicated the slide.

  29. John Hekman
    Posted Feb 16, 2007 at 4:30 PM | Permalink

    Given that there was a “new ice age” scare in the Seventies, I wonder if there were any articles published at that time with long-term temperature graphs for the U.S. showing how temps had plunged since the Thirties. These articles would be pretty amusing today. In addition, one could splice the more recent temps onto the graphs to get additional versions of how warm it is today relative to the Thirties.

    In Pasadena, where I live, there was a “historic” heat wave last July, when temps reached 105 F. But then they mentioned that the all-time high temp for Pasadena was 110 F, in 1933.

  30. Posted Feb 16, 2007 at 4:40 PM | Permalink

    Steve:

    forgive me if I am writing this in the wrong place.
    Seems to me that your original work exposing the “hockey stick” is becoming more widely known, but to me the real issue is the refusal to release data (from whoever), and that the IPCC is using information which cannot be replicated or tested because not all of the data and code have been made available (please correct me if I am wrong).

    Most people don’t know this. By most people I mean the man on the street. One of the biggest problems faced by those who question the IPCC findings is the technical nature of many of the arguments and discussions. There are many here in NZ who see the bias in the media and within the government, but making the man in the street aware of the real issues is very hard. All we see is a picture of a polar bear and a statement by Greenpeace or the WWF saying they are in danger from GW. It tugs at heart strings and people believe it. Most of these people have the attention span of a gnat, and most discussions on AGW end quickly due to them becoming too complicated. Therefore, when these issues are raised in an attempt to bring them into the public arena (which they are by a number of talkback hosts in NZ who do question the government’s stance on GW and the IPCC), they are quickly ended, as the media will not spend any time on matters which require a bit of thinking.

    Is there a way that the problems you face with the data etc. can be simplified in a way that joe average might be able to understand? I think this is essential to gaining some public support. The Education Minister here in NZ is contemplating having all NZ schoolchildren watch Al Gore’s “An Inconvenient Truth”, hockey stick and all. This shows how absurd and ignorant our politicians are, but we need something to fight them with.

    Thanks

  31. Steve Sadlov
    Posted Feb 16, 2007 at 4:45 PM | Permalink

    RE: #29 – Not to be in any way conspiratorially disposed, but …. in my 4 decades of observing California weather, I’d have to chalk up last year’s heat wave as being severe, but hardly unique. Anywhere inland from the fog belt, it is a fact of life that it gets hot in the summer. Indeed, 105F in Pasadena (or Calistoga, Fairfield, Concord, South San Jose, Hollister, Paso Robles, Ojai, Sherman Oaks, Ontario, Temecula, etc) is not unheard of. It’s almost as if there has indeed been a Big Brother like “adjustment” of all surface records for many locations to erase heat waves like the ones of the 80s and the 70s that I personally remember so well.

  32. jae
    Posted Feb 16, 2007 at 4:56 PM | Permalink

    20: That paper by Pielke is great. One of the photographs of a met station actually shows a barbeque grill sitting directly under the station. I hope they move it before firing it up!

  33. John Hekman
    Posted Feb 16, 2007 at 5:03 PM | Permalink

    The issues of temperature “adjustments” and refusals to release data, or claims to have lost data, are a lot easier to understand than the statistics behind the hockey stick and the Wegman report. I would think that John Stossel or someone else who is skeptical of the conventional wisdom could put together a pretty good show on these issues, with the appropriate interviews of “officials” trying to explain why the public does not need this data.

  34. Peter Lloyd
    Posted Feb 16, 2007 at 5:04 PM | Permalink

    re 30

    Same happening here in UK. Copies of Gore’s movie going to all State schools. Shades of Lysenko!

    It’s frustrating – even as the AGW bandwagon becomes accepted as an everyday certainty in the public mind, the good scientific evidence against it gets better, stronger, harder to dismiss. Paul is right on – how to put the message across? Any good PR guys out there?

  35. george h.
    Posted Feb 16, 2007 at 5:27 PM | Permalink

    re #30 and 33,

    Sorry about one more off-thread comment, but I agree with both of you about the need for a simplified but technically accurate counter to the AGW hysteria which is being crammed down our throats by the MSM and the ecosocialists who are about to give Gore an Oscar. Jon Stossel is very good on this, and he has a great grasp of economics, which one doesn’t often see in the average reporter. I’d love to see Jon Stossel and Michael Crichton collaborate on a primetime special or documentary which would present the other side.

  36. Steve Sadlov
    Posted Feb 16, 2007 at 5:32 PM | Permalink

    RE: #33 and 35 – Assuming they would agree to be interviewed …. I am just picturing Mann, Jones and others, being picked apart by a Stossel or a Rivera. That would be priceless!

  37. David Smith
    Posted Feb 16, 2007 at 5:40 PM | Permalink

    Here’s a US city, chosen at random (more or less):

    A plot of Baton Rouge, LA (US) annual temperatures from NCDC is given here.

    Baton Rouge annual temperature from GISS is given here.

    While I can’t eyeball a difference in trend, I do see significant variation in individual years. How can that be? It’s the same site, same thermometer, same raw data.

    When there’s such apparent adjustment and interpretation of one station’s annual numbers it becomes easy to imagine that a tweak here and a turn there opens the door for mischief.

  38. Steve Sadlov
    Posted Feb 16, 2007 at 5:41 PM | Permalink

    RE: #31 – Heck, now that I think about it ….. from personal experience:
    ~ Late springs, summers of 1978, 82, 83, 84, etc – 103 Deg F at least once in places like Sunnyvale, Santa Clara, Milpitas – e.g. along the south shore of SF Bay
    July or Aug 1985 (e.g. right before the big Ojai fire), during a sundowner, 117 Deg F at Santa Barbara
    Early June 2000 – 110 deg in my backyard (lower slopes of a coastal mountain range near San Francisco) due to a triple barrel High pressure series / extreme compression

    These proclamations we often hear seem to miss all these events. And this is obviously a small sample.

  39. David Smith
    Posted Feb 16, 2007 at 5:43 PM | Permalink

    RE #37 Sorry, NCDC apparently doesn’t allow linkage. Here’s the site; click on Baton Rouge (southern US) and then select the annual temperature display.

  40. Henry
    Posted Feb 16, 2007 at 5:49 PM | Permalink

    Of the five NOAA adjustments to the USHCN raw data, four happen to raise the temperatures towards the end of the period, helping confirm the message that temperatures have risen. This is not statistically significant. But add in the GISS adjustments and, in the language of the IPCC, it allows the conclusion to be that the adjustments are “very likely” to have been chosen with a particular impact in mind.

    My personal guess is that this is in fact not deliberate, but I would not be surprised if there are unconscious biases involved.

  41. Gerald Machnee
    Posted Feb 16, 2007 at 7:08 PM | Permalink

    RE #4 **Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato 1999. GISS analysis of surface temperature change. J. Geophys. Res. 104, 30997-31022, doi:10.1029/1999JD900835.

    http://pubs.giss.nasa.gov/abstracts/1999/Hansen_etal.html**

    I think I am still having a difficult time understanding what is going on.
    Do I read the graphs at the top of this thread correctly, that after about 1970 they have INCREASED the temperatures and DECREASED them before that? Now my understanding would be that to account for UHI you would do the opposite. So are they adjusting the temperatures to be closer to the so-called “global temperatures”? Reading from the Hansen report I quoted here, that appears to be the problem for them – the USA was warmer in the 1930’s than now.

  42. Lee
    Posted Feb 16, 2007 at 8:32 PM | Permalink

    SteveM
    You say:
    “One wonders exactly what adjustments have been performed by CRU and the recent admission by Brohan et al 2006 that original versions of many series have been overwritten leaving only the adjusted versions is extremely disquieting.” Given that you are talking in this article about the adjustments from raw data to the final analysis, and that this sentence is talking about such adjustments, the sentence implies rather strongly that the data used by HADCRUT has been overwritten by their adjustments. This is not the case.

    I assume (given that you didn’t quote it, one must make an assumption) you are referring to this, from page 6 of Brohan et al 2006 – emphasis added:

    “Inhomogeneities are introduced into the station temperature series by such things as changes in the station site, changes in measurement time, or changes in instrumentation. The station data that are used to make HadCRUT have been adjusted to remove these inhomogeneities, but such adjustments are not exact – there are uncertainties associated with them.

    For some stations both the adjusted and unadjusted time-series are archived at CRU and so the adjustments that have been made are known [Jones et al., 1985, Jones et al., 1986, Vincent & Gullet, 1999], but for most stations only a single series is archived, so any adjustments that might have been made (e.g. by National Met. services or individual scientists) are unknown.”

    They are referring here to the data available to them, not to changes they have made. And it is not an “admission”; it is a simple factual description of the data available to them. They are pointing out the unremarkable fact that some available station data had been adjusted for “homogeneity”, to correct for changes in station conditions prior to archiving, and they may or may not know how or whether those adjustments were made. And they make that description in the context of describing the uncertainties in the data and how they deal with those uncertainties. There is nothing even the least bit untoward here.

  43. JerryB
    Posted Feb 16, 2007 at 8:38 PM | Permalink

    Re #41,

    Gerald, see the links at comment 24. The main pitch is that times of observation among USA coop observers, mostly volunteers, have gradually shifted from evening to morning. Evening observations have a high bias, while morning observations have a low bias. The time of observation bias (TOB) adjustments would therefore lower the temperature estimates of the periods with relatively more evening observations, and raise the temperature estimates of the periods with relatively more morning observations. I trust I make myself obscure, to quote a line from A Man for All Seasons.
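    The TOB mechanism can be illustrated with a toy simulation. The Python sketch below (synthetic sinusoidal temperatures, not real station data) mimics a max/min thermometer read and reset once a day at a fixed hour: resetting shortly after the afternoon maximum lets hot afternoons spill into the next day’s reading, while resetting near the early-morning minimum does the same for cold mornings.

    ```python
    import math

    def hourly_temps(base_by_day, amp=6.0):
        """Synthetic hourly temperatures: a per-day level plus a diurnal
        cycle peaking at 14:00 and bottoming at 02:00."""
        return [base + amp * math.cos(2 * math.pi * (h - 14) / 24)
                for base in base_by_day for h in range(24)]

    def mean_of_daily_extremes(temps, obs_hour):
        """Average of (max+min)/2 over 24 h windows ending at obs_hour,
        mimicking a max/min thermometer read and reset once a day."""
        n_days = len(temps) // 24
        daily = []
        for k in range(1, n_days):  # the first window would be partial; skip it
            window = temps[(k - 1) * 24 + obs_hour : k * 24 + obs_hour]
            daily.append((max(window) + min(window)) / 2)
        return sum(daily) / len(daily)
    ```

    With a daily level that alternates between warm and cool days, an observer resetting at 14:00 reads warmer on average, and one resetting at 02:00 reads cooler, than midnight-based calendar days. That is the bias, in miniature, that the TOB adjustment is meant to remove.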

  44. DaleC
    Posted Feb 16, 2007 at 9:29 PM | Permalink

    re #19 Jean S and the annual mean temperature for Alpine, Texas, CDIAC ORNL plot:

    Using the USHCN raw daily data aggregated annually for max and min, and then taking the simple average, shows a plot which is higher to about 1960, and then lower to 2002 – that is, a lot flatter. The data at http://cdiac.ornl.gov/epubs/ndp/ushcn/state_TX_mon.html appears to go only to 2002.

    Chart with comparative scaled overlay is here.

    Several points to note:
    Since there is no daily data until 1927, how is the CDIAC series inferred from 1900 to 1927?
    Why is the CDIAC plot 2 degrees F lower?
    The observation record is very patchy, causing a big spike at 1962. How does the CDIAC plot handle this?
    The number of observations drops dramatically from 2001.

    Assuming that the CDIAC annual mean of the monthly means is created from exactly the same daily data as I have used (NDP070 downloaded September 2006), the point of this is to show the extent of the data adjustments which have been applied, and the extent to which different methods of summarizing the data can lead to quite different trends.

    The observation record seems too patchy to me to trust a trend, but the good people of Alpine might be somewhat comforted to know that according to the simple average they have in fact been enjoying a downward trend of 2 degrees F since 1930.
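    The sensitivity to summarization method can be made concrete with a toy Python example (hypothetical numbers): with patchy coverage, a flat average of all daily means weights each month by how many observations it has, whereas an average of monthly means weights every observed month equally.

    ```python
    from statistics import mean

    def annual_mean_flat(daily):
        """Simple average of all daily (max+min)/2 values.
        daily is a list of (month, tmax, tmin) tuples."""
        return mean((tmax + tmin) / 2 for _, tmax, tmin in daily)

    def annual_mean_by_month(daily):
        """Average of monthly means of daily (max+min)/2 values."""
        months = {}
        for month, tmax, tmin in daily:
            months.setdefault(month, []).append((tmax + tmin) / 2)
        return mean(mean(vals) for vals in months.values())
    ```

    A cold month with only a handful of readings barely moves the flat average but counts fully in the monthly-mean version, so a record as patchy as Alpine’s can yield visibly different annual series, and different trends, depending on the method.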

  45. Jim Mitroy
    Posted Feb 16, 2007 at 10:11 PM | Permalink

    Just looking at the two graphs.

    They look the same to me!

  46. David Smith
    Posted Feb 16, 2007 at 10:50 PM | Permalink

    Re #43 Jerry B, thanks for the links. A question: why would the time of observation shift slowly over decades? It seems like there would be a measurement protocol, followed by all cooperative observers, and any change of protocol would be sudden and clean.

    Also, I assume that the raw data includes observation time and that the climate people adjusted each station’s data. Do you know if that is correct?

  47. John Baltutis
    Posted Feb 16, 2007 at 11:49 PM | Permalink

    Re: #42
    What’s your point? Steve’s wondering what adjustments have been performed by CRU on all station data contained in the Had-CRUT and voicing a concern that most unadjusted station data may have been overwritten by CRU or lost, since it wasn’t archived.

  48. Lee
    Posted Feb 17, 2007 at 12:04 AM | Permalink

    re 47:

    As near as I can tell, SteveM is voicing that concern based on no evidence.

    That paragraph in Brohan et al is talking about the condition of the historical data available to them, not what they have done to the data.
    Steve implied that CRU overwrote data – there is no evidence for that, and they certainly didn’t “admit” in that paragraph that they had done any such thing.

  49. Posted Feb 17, 2007 at 1:03 AM | Permalink

    Why would a temperature record ever need to be adjusted upward? I can understand downward due to heat island effects. But upward? Was the little structure sitting in a big puddle of water? The measuring devices weren’t calibrated? Neither of those two makes sense, but that is all I can think of.

  50. James Lane
    Posted Feb 17, 2007 at 1:23 AM | Permalink

    Lee, this is what Steve wrote:

    One wonders exactly what adjustments have been performed by CRU and the recent admission by Brohan et al 2006 that original versions of many series have been overwritten leaving only the adjusted versions is extremely disquieting.

    Brohan et al (2006) report that:

    For some stations both the adjusted and un-adjusted time-series are archived at CRU and so the adjustments that have been made are known [Jones et al., 1985, Jones et al., 1986, Vincent & Gullet, 1999], but for most stations only a single series is archived, so any adjustments that might have been made (e.g. by National Met. services or individual scientists) are unknown.

    I don’t see any inconsistency here (unless we’re going to get into some tedious debate about the meaning of “overwritten”). Do you think it’s satisfactory that for most stations the unadjusted data is now unavailable? And that the nature of any adjustments is unknown?

  51. George Taylor
    Posted Feb 17, 2007 at 2:10 AM | Permalink

    In 2002 I wrote a short paper and submitted it to the AMS Applied Climate Conference. It compared long-term trends for the original (1994) HCN data set with a revised (1999) release. The main difference seemed to be in adjustments NCDC was making to the later data. Most of the adjustments were made in the early part of the record (before 1948) and generally amounted to lowering the early data (thereby increasing the long-term trend).

    From the paper, “The original HCN data showed relatively little warming at Oregon rural stations, with virtually no change since the 1930s. The most recent HCN data set, however, has been adjusted significantly, especially early in the record. This causes linear trends at most stations to be a great deal higher than they were previously.”

    The paper is available at

    http://www.ocs.orst.edu/pub/pdfs/hcn.pdf

  52. JerryB
    Posted Feb 17, 2007 at 6:18 AM | Permalink

    Re #46,

    David,

    The US NWS coop volunteers get to choose their time of observation. They are
    asked (urged, required, take your pick) to be consistent about it. It seems that
    over the years, more and more successive volunteers opted for evening, rather
    than morning, times of observation.

    For the USHCN, the station history file at ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/
    includes such information up to October 1, 1994. The temperature data files
    include “raw” and adjusted numbers. Changes of the usual adjustments for each
    station coincide with changes noted in the station history file for that station, but
    the rationale for the precise amount of the adjustment is not described there.

  53. John Lang
    Posted Feb 17, 2007 at 7:11 AM | Permalink

    You can adjust data to account for Urban Heat Island, changing weather station locations and time of day bias, …

    .. but if a flat or declining trend then turns into a rise of 0.6C, you can’t then blame that on rising greenhouse gases, the increase is then caused by UHI.

    And isn’t the adjustment BACKWARDS? If you are adjusting for UHI, shouldn’t you adjust the past temperatures UP, not down, if you are trying to tease out the trend without UHI? You do not adjust current temperatures UP to take OUT the rising UHI effect.

    It is a complete crock.

  54. JerryB
    Posted Feb 17, 2007 at 7:13 AM | Permalink

    Re #52,

    Oooops, that should have indicated that more volunteers
    opted for morning, rather than evening, times of observation
    in more recent years. Sorry about that.

  55. Jean S
    Posted Feb 17, 2007 at 7:14 AM | Permalink

    There are a few “historic” (from 1994, including Jones & Hansen temperature histories) series available here:

    http://cdiac.ornl.gov/ftp/trends93/

    Might be interesting to compare those to the current ones.

  56. Gerald Machnee
    Posted Feb 17, 2007 at 7:27 AM | Permalink

    RE #43 – the only problem is that they are shifted down before about 1970 and up after that – the time shift cannot account for much of that. It looks like getting rid of a MWP in the 1900s.

  57. Brooks Hurd
    Posted Feb 17, 2007 at 8:00 AM | Permalink

    Lee,

    When I read the statement in Brohan 2006 that:

    for most stations only a single series is archived, so any adjustments that might have been made (e.g. by National Met. services or individual scientists) are unknown.

    I understand that the HadCRU records include only some of the original (unadjusted) data. Hopefully, the individual met stations have their own unadjusted records, so that data may still exist in its original unadulterated form. The instrument records (some spliced to proxy reconstructions) which one sees in the IPCC SPMs and in the media come from GISS, HadCRU, and USHCN. All of these datasets are adjusted, not once, but many times. When the datasets are updated, they add the latest average temperatures including any adjustments which they determine the raw data needs.

    When Brohan states that they adjust data because

    Inhomogeneities are introduced into the station temperature series by such things as changes in the station site

    he is saying that they are adjusting the data to remove the effect of UHI. Then they go back and use the adjusted data to show that there is no significant UHI. Do you find this sort of behavior acceptable?

  58. Steve McIntyre
    Posted Feb 17, 2007 at 8:53 AM | Permalink

    “for most stations only a single series is archived, so any adjustments that might have been made (e.g. by National Met. services or individual scientists) are unknown”

    I don’t recall any prior such admission that unadjusted data was no longer available. If they’ve admitted it previously, then I’d be happy to note this, but, if so, someone should have made an issue of it previously.

  59. David Smith
    Posted Feb 17, 2007 at 8:55 AM | Permalink

    RE #52 Thanks, Jerry.

    In the README file of the link you provide, I saw this with regards to the time-of-observation, which is raw data that greatly affects the adjustments:

    ____________________________________________________________________________

    Time of Observation Data

    (3) the third position is the code for the quality of the available observation
    times for a given station:

    F = information concerning the observation times for the station during that
    year was suspect or “flaky”;
    G = information concerning the observation times for the station during that
    year was “good” and the information was judged to be accurate;
    Blank = information concerning the observation times was not available for the
    station during that year and the data are represented as missing.

    _____________________________________________________________________________

    I like people with a sense of humor. I wonder how many “flaky” datasets they found, and what they did with those, and how they handled missing data.

    As best I can tell, they calculated a bias factor for groupings of stations rather than calculating the bias of each individual station. My inclination would be to do the adjustment to each station’s data, not to a group of stations. Perhaps adjusting a group makes no difference to the outcome, but it just seems odd, if that is what they did.

  60. JerryB
    Posted Feb 17, 2007 at 9:54 AM | Permalink

    Re #59,

    David,

    All of the USHCN adjustments are applied on a station by
    station, month by month, basis. The magnitude of the time
    of observation bias adjustments are calculated on a latitude,
    longitude basis, and derived from “nearby” stations that
    had hourly temperature readings.

    Of 131,623 station/years of data, 6,230 station/years are
    flagged as having “flaky” observation time information. :-)

  61. Lee
    Posted Feb 17, 2007 at 12:10 PM | Permalink

    SteveM, you write a sentence that starts by talking about adjustments performed by CRU, and then continues in the same sentence to say that original data series have been overwritten by adjusted data. As I clearly and specifically said above, the sentence very strongly implies – it just barely skips around stating outright – that CRU have overwritten original data with adjusted data – and that is not true.

  62. Lee
    Posted Feb 17, 2007 at 12:14 PM | Permalink

    re 57:
    “When Broohan sates that they adjust data because

    Inhomogeneities are introduced into the station temperature series by such things as changes in the station site

    he is saying that they are adjusting the data to remove the effect of UHI. Then they go back and use the adjusted data to show that there is no significant UHI. Do you find this sort of behavior acceptable?”

    No, that is not what they are saying. They specifically describe in the paper many of the kinds of changes that introduce inhomogeneities, including things like changes in station location or height, reading time, and so on.

  63. K
    Posted Feb 17, 2007 at 12:50 PM | Permalink

    Lee #61. Overwritten is a pretty clear word. And it shouldn’t have been raised. It would be the rare computer program designed to actually write over the input. And who would be so foolish as to manually alter an original record?

    But there is a legitimate question. Does CRU keep the original or oldest record available after adjusted tables are made and presented? Information that is no longer available might as well not exist.

    A sentence or two from someone acquainted with CRU repository practice would be welcome.

  64. David Smith
    Posted Feb 17, 2007 at 1:41 PM | Permalink

    Lee, can you help me with the following question (thanks in advance):

    Figure 4 of Brohan indicates 763 known adjustments, with what appears to be a typical value of about 0.5C. The other 3,600 or so stations show no record of adjustments in the CRU files.

    Per Brohan, adjustments are for such things as changes in station site, changes in measurement time or changes in instrumentation. It would be an unusual station which has not seen relocation, changes in instrumentation or (as we’ve seen with NCDC) changes in time of measurement, especially stretching back 50 or 100 or 150 years.

    Yet, there’s no record of such changes (as indicated by retention of the unadjusted data in CRU files) for 80% of the stations. How does one construct a reliable 150-year history when one doesn’t know what’s already been adjusted, and why, and when, and by how much?

  65. Lee
    Posted Feb 17, 2007 at 2:01 PM | Permalink

    re 64:
    Your Socratic argument/question can be restated as – they remove as much noise as they can from their data, but there is still noise. How does one extract a reliable signal from noisy data?
    I would hope you don’t disagree that it is possible to extract reliable signals from noisy data. Brohan 2006 is largely about how they do so in this case.

  66. Brooks Hurd
    Posted Feb 17, 2007 at 2:43 PM | Permalink

    Lee,

    Adjusting the data to reduce temperatures in the 1930s and increase temperatures after the 1980s is not a noise reduction exercise. The effect of such adjustments would enhance a trend, but it is not a technique to reduce noise.

    It is surely possible to extract signals from data with large standard deviations. However when one adjusts data with the aim of making the signals more clear, one may be creating signals where there were none.

  67. Lee
    Posted Feb 17, 2007 at 3:21 PM | Permalink

    Brooks, you make several claims about what they did, with no supporting evidence.
    They did not simply reduce earlier temps and raise later temps – that may be the result of the adjustments, but it was not simply a matter of changing data to get that result. What they did was adjust temp series that had inhomogeneities – and THAT is a technique to reduce noise.
    They adjust the data with the aim of removing known errors in the data – that does NOT create signals where there were none, it reduces bad data.
    From where I sit, it seems that if one starts with the assumption that they are cooking their data, one easily comes to the conclusion that they are cooking their data – and that charge is then often hinted at without solid supporting evidence. And that is egregious.

  68. Ken Fritsch
    Posted Feb 17, 2007 at 4:08 PM | Permalink

    Lee, I believe you get undeserved attention simply because you act as a sounding board for people asking rhetorically what is happening with these adjustments. They do not necessarily expect you to deliver a specific answer or details, and that you do not should not disappoint anyone.

    The real question for this thread comes through loud and clear to those interested in better defining and understanding AGW and GW, and has little to do with how you want to interpret how Steve M posed it. One sees a century trend in temperatures measured over the US that is (was?) relatively small compared to what was reported for the globe as a whole. The magnitudes of the temperature adjustments that the agencies publish (if not the details of how they were derived) show that the adjustments are a large part of that trend. These are adjustments that have changed significantly over rather short periods of time, and in the recent past. Those changes have got to pique interest in how the adjustments specifically come about (not just the general terms of the published references) in all but those with unflinching faith in the people doing the adjusting, or those entirely content with the current consensus on GW.

  69. David Smith
    Posted Feb 17, 2007 at 4:23 PM | Permalink

    An interesting exercise is to compare the current NCDC US temperatures with the satellite-derived data for the US, from RSS. The RSS monthly data, from 1979-2006, is given here.

    Now, RSS measures the lower troposphere, and their definition of “continental US” won’t exactly follow the coastline, but it should be reasonably close to the actual US surface temperature. Otherwise, the atmospheric lapse rate is changing, which would be a profound development.

    My Excel spreadsheet puts the satellite trend for the US (1979-2006) at about 0.15 to 0.20C/decade while the NCDC temperature data looks to be running 0.25 to 0.30C/decade.

    That should, at the least, be a cause for reflection at the NCDC. I’ve not looked at GISS or Hadley US data, but I imagine the results would be similar.
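    For anyone who wants to replicate this kind of check without a spreadsheet, a decadal trend from monthly anomalies is just an ordinary least squares slope rescaled to 120 months. A short sketch (the series below is synthetic, not the RSS or NCDC data):

```python
def trend_per_decade(anomalies):
    """OLS slope of a monthly series, in degrees per decade."""
    n = len(anomalies)
    x_mean = (n - 1) / 2
    y_mean = sum(anomalies) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(anomalies))
    var = sum((x - x_mean) ** 2 for x in range(n))
    return (cov / var) * 120      # 120 months per decade

# Sanity check: a series warming 0.002 deg every month, 1979-2006 length.
synthetic = [0.002 * m for m in range(336)]   # 336 months = 28 years
print(round(trend_per_decade(synthetic), 2))  # 0.24 deg/decade
```

    Running both the satellite and surface series through the same slope calculation removes any worry that the discrepancy is an artifact of differing trend methods.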

  70. Steve McIntyre
    Posted Feb 17, 2007 at 5:43 PM | Permalink

    In light of some editorial parsing, I’ve edited the text to read:

    One wonders exactly what adjustments have been performed by CRU and others and the recent admission by Brohan et al 2006 that original versions of many series have been lost (or never even collated by CRU in the first place) leaving only the adjusted versions at CRU (with the nature of some or all of the adjustments undocumented and unknown) is extremely disquieting.

    Jones and Mann and the rest of the Hockey Team could remove much criticism by simply archiving data and methods without quasi-litigation. It’s a ridiculous issue to contest and I don’t know why other climate scientists put up with it. If their results are valid, they will survive careful scrutiny of the data. Relying on unaudited statements would be unthinkable in business and the resistance of Jones and others to archiving data should be of concern.

    I’ve never expressed any opinions on policy other than scientists should be required to properly archive data and methods.

  71. Lee
    Posted Feb 17, 2007 at 6:02 PM | Permalink

    “Making a histogram of the adjustments applied (where these are known) gives the solid line in figure 4. Inhomogeneities will come in all sizes, but large inhomogeneities are more likely to be found and adjusted than small ones. So the distribution of adjustments is bimodal, and can be interpreted as a bell-shaped distribution with most of the central, small, values missing.

    Hypothesising that the distribution of adjustments required is Gaussian, with a standard deviation of 0.75°C, gives the dashed line in figure 4, which matches the number of adjustments made where the adjustments are large, but suggests a large number of missing small adjustments. The homogenisation uncertainty is then given by this missing component (dotted line in figure 4), which has a standard deviation of 0.4°C. This uncertainty applies to both adjusted and unadjusted data: the former have an uncertainty on the adjustments made, the latter may require undetected adjustments.

    The distribution of known adjustments is not symmetric – adjustments are more likely to be negative than positive. The most common reason for a station needing adjustment is a site move in the 1940–60 period. The earlier site tends to have been warmer than the later one – as the move is often to an out of town airport. So the adjustments are mainly negative, because the earlier record (in the town/city) needs to be reduced [Jones et al., 1985, Jones et al., 1986]. Although a real effect, this asymmetry is small compared with the typical adjustment, and is difficult to quantify; so the homogenisation adjustment uncertainties are treated as being symmetric about zero.

    The homogenisation adjustment applied to a station is usually constant over long periods: the mean time over which an adjustment is applied is nearly 40 years [Jones et al., 1985, Jones et al., 1986, Vincent & Gullet, 1999]. The error in each adjustment will therefore be constant over the same period. This means that the adjustment uncertainty is highly correlated in time: the adjustment uncertainty on a station value will be the same for a decadal average as for an individual monthly value.

    So the homogenisation adjustment uncertainty for any station is a random value taken from a normal distribution with a standard deviation of 0.4°C. Each station uncertainty is constant in time, but uncertainties for different stations are not correlated with one another (correlated inhomogeneities are treated as biases, see below). As an inhomogeneity is a change from the conditions over the climatology period (1961–90), station anomalies will have no inhomogeneities during that period unless there is a change sometime during those 30 years. Consequently these adjustment uncertainty estimates are pessimistic for that period.

    Figure 4 also demonstrates the value of making homogenisation adjustments. The dashed line is an estimate of the uncertainties in the unadjusted data, and the dotted line an estimate of the uncertainties remaining after adjustment. The adjustments made have reduced the uncertainties considerably.”

  72. Jean S
    Posted Feb 17, 2007 at 6:24 PM | Permalink

    Lee: I would hope you don’t disagree that it is possible to extract reliable signals from noisy data. Brohan 2006 is largely about how they do so in this case.

    Absolutely not – Brohan et al handle the data in a way no sensible statistician would. That’s why there is a whole field called “robust statistics” for cases like this. Extracting “signals” from “noisy data” is in some other contexts called “signal detection”, and Brohan et al has nothing to do with that either.

    Lee, you came to this place claiming an “open mind”. From what I’ve seen of your contributions, the open mind in your case means opposing anything that Steve, or anyone whose views disagree with yours, writes here. Seriously, is that really an “open mind”?

    To make this case clear, do you have anything to say about how Brohan, Hansen, Jones etc. could have done better, to avoid an indefensible situation like the topic here: there is simply no way the temperature measurements in the 1900s were better than the ones in the 1930s. And if you look at Steve’s Figure 2 with an “open mind”, you realize that this is what it is implying: these guys implicitly believe the temperature measurements in the 1900s were better than the ones done in the 1930s. I can think of only one reason for that, but maybe you “with an open mind” can offer other possibilities.

    Seriously, I do not know if I’m the only one here, but I’m really tired of you. You claim to be somehow “scientific” but the only thing you have to offer is some type of whining. I can understand the value of feedback from a strongly AGW-committed person to this forum. IMHO, there are some “overreactions” from the “skeptic side” on this board. But even from that perspective you are a pale shadow of a person like Steve Bloom. So why in hell do you keep posting here?

  73. Lee
    Posted Feb 17, 2007 at 7:33 PM | Permalink

    Jean:
    1. I have modified my ideas on at least a couple of things as a result of being here. Specifically, I used to believe that the dendro reconstructions were very likely essentially valid; I am now neutral and waiting for the argument to conclude. The reason “in hell” that I am here is that I prefer to have my ideas challenged, and to explore challenges to my defenses of my ideas, even the ones I am pretty sure of.

    2. When I see statements that strongly imply or state things that are simply not true, I’ll point them out. Deal.

    3. The magnitudes of the homogeneity adjustments in Figure 2 DO NOT IMPLY ANYTHING ABOUT WHETHER THE MEASUREMENTS WERE BETTER. They are a function of changes that have happened over time at the stations, and of the period picked for a baseline, not of the accuracy of the measurements. This is pretty basic. If I measure 30 years of data, then move the station to a nearby location where it consistently reads 1°C higher, and then measure 30 more years, neither set of data is inherently better. But to compare the data, I have to adjust one or the other – that adjustment does not mean that the adjusted data were less or more reliable, only that there was an inhomogeneity between the two periods.

  74. John G. Bell
    Posted Feb 17, 2007 at 7:57 PM | Permalink

    What would happen if people in cold-climate areas of the US, when given the choice, decided to go out to look at the thermometer at a generally warmer time of the day? Wouldn’t that cause the adjustment to be doubly wrong? I don’t know if that is how it breaks, but without looking at the raw data, who can say?

  75. Brooks Hurd
    Posted Feb 17, 2007 at 8:52 PM | Permalink

    Re: 67 Lee,

    If you look at the article which Steve linked about the USHCN Version 1 and Version 2 Beta, you will see the adjustments. I discussed what I found by looking at the adjustments for the warmest 25 years at this link.

    If you look at the article, you can easily see that what I said is correct. “As I got past the first ten, I realized that I could predict that if the year were prior to 1986, then it would be lower in V2. If the data were after 1986, then the V2 ranking would be higher. 1986 moves up on V2. Two years, 1987 and 1921 did not change. There are two exceptions, 2006 and 1998 swap positions. In addition, 1941 drops off the V2 chart and 1995 is added to V2.”

  76. Brooks Hurd
    Posted Feb 17, 2007 at 9:07 PM | Permalink

    At the CA link that I referenced, the post was #29.

  77. Lee
    Posted Feb 17, 2007 at 9:33 PM | Permalink

    Brooks – so what? That the adjustments have that effect does not mean that the adjustments were made with that intent. Your observation essentially restates the obvious – that the newer analysis shows more warming in recent years than the older analysis.

  78. Ron Cram
    Posted Feb 17, 2007 at 9:39 PM | Permalink

    re: 73

    Lee,
    The issue of what adjustments are being made and why has been a recurring question because data and methods are not being archived and made available for other scientists to audit. Karl Popper would call this pseudo-science and not science.

    Most people would make an off-the-cuff assumption that any changes would either adjust older temps upward or more current temps downward in order to adjust for the UHI effect. As far as I know, there are no general conditions (or changes over time) that would cause older temps to be adjusted down or newer temps to be adjusted up. And there is certainly no reason why both should happen. It seems to me that, when making adjustments for changes over time, you would adjust one or the other.

    I would be interested to learn more about why these adjustments were made.

  79. Steve McIntyre
    Posted Feb 17, 2007 at 10:44 PM | Permalink

    #78. Both CRU and USHCN temperature calculations are more like accounting systems than scientific research. The science is relatively trivial compared to the accounting – and is limited to things like calculating the difference between canvas and wooden buckets.

    Calculating a temperature index is a lot like calculating a Consumer Price Index. In the real world, the Consumer Price Index is calculated by a professional statistical service, not by academics writing little papers for journals. In bizarro-world, the temperature indices are calculated by scientists working part-time as amateur accountants, and obviously not doing a very good job as accountants – as witness the lack of audit trails, the inability to locate (say) key diskettes, and their either losing the original unadjusted data or failing to take care to collect unadjusted data in the first place (or to collect/save the adjustment process where adjusted data was used). Temperature indices should be calculated by a professional statistical service, who understand data integrity, not by climate scientists who are untrained in statistical management and doing it on a seat-of-the-pants basis. Also, the accounting obviously shouldn’t be done by people who are also advocates. It taints the ability of third parties to trust their results.

  80. johnmccall
    Posted Feb 17, 2007 at 10:50 PM | Permalink

    re:56 & 78 “getting rid of a MWP in the 1900’s” and “pseudo-science” — sums it all up nicely.

  81. Dave Dardinger
    Posted Feb 17, 2007 at 11:00 PM | Permalink

    re: #78 and others.

    The thing to note about the “excuse” for lowering earlier temperatures is that it is clothed in the fig-leaf of trying to allow for moving a station to a cooler location. This could be correct at the actual time of the move, but there’s no reason to suspect that the relative positions of the two stations would remain the same over the years, temperature-wise. A rural airport quickly becomes surrounded by urban growth, just because it’s a vital piece of infrastructure. Thus while a central city may have continued to grow and become warmer, the rural airport is now part of an urban landscape and has most probably warmed rather more than the old station. But you won’t see that taken into account in current temperature measurements at such airports.

    I know when I was a kid, the airport in Columbus, Ohio, which was close to where we lived, was nearly rural. My grandparents had a farm which abutted the airport property on the north. There were other farms in the area. Now when you fly into Columbus, you pass over several miles of houses and light industry / warehousing before you get to the airport. I’m sure Columbus isn’t unique in that regard.

  82. JPK
    Posted Feb 18, 2007 at 9:33 AM | Permalink

    Most, if not all, of the weather stations in the 1930s were at airfields. Their prime purpose was not to gather climate data, but to provide surface readings for aircraft safety. In some of the more rural areas, the air-traffic controllers would double as weather observers and give hourly “limited weather observations”. In those days, the observer would use a sling psychrometer to get his wet bulb and dry bulb temperatures. Today most stations use an electronic device such as the TMQ-11. The sling psychrometer, of course, still yields very accurate surface readings. All weather observers, even back in the 30s, had to pass federal certification tests.

    Hard data like dry and wet bulb temperatures were very accurate – even in the 1930s. Today, NOAA uses many volunteers who give limited weather observations – I wonder how many of these observations have made it into the official weather database? High school physics classes can now have wind, temp, and pressure data transmitted via the web in XML format to NOAA; this has been going on for at least 5 years. Other volunteers – mainly hobbyists – do the same. Most of this information is collected automatically, but none of the equipment is certified like the FAA’s equipment.

  83. JerryB
    Posted Feb 18, 2007 at 10:48 AM | Permalink

    The US NWS always used many volunteers as its official
    weather observers. I don’t have the numbers handy, but I
    would guess that in the 1930s, more non-airport observers,
    than airport observers, were the official sources for NWS
    weather data. Besides the many personal volunteers,
    including many farmers, such observers included employees
    of electric power companies, water companies, radio
    stations, colleges/universities, and national parks.

  84. James Erlandson
    Posted Feb 18, 2007 at 11:30 AM | Permalink

    Cooperative Weather Observers (NOAA History)

    Climatological records get more valuable with time. The climatological base generated through the efforts of the volunteer Cooperative Weather Observer provides not only the cornerstone of our nation’s weather history, but also serves as the primary data for research into global climatic change.

  85. Tom C
    Posted Feb 18, 2007 at 12:01 PM | Permalink

    #79 Steve, Let’s discuss action (strategy and tactics) for reforms and for dealing with violations of professional, scientific, academic, and public interest norms, standards, guidelines, etc. This thread on adjustment of temperatures is another example of the superb scientific work that you and other stalwarts at ClimateAudit are doing to uncover a host of violations of the science used in the Global Warming issue.

    I recommend that ClimateAudit establish a permanent thread to discuss, plan, and organize actions for reforms and for dealing with these violations, specifically in regard to the science used in the Global Warming issue.

    1. Various mechanisms are available to deal with different types of violations. For example, the peer review process in scientific journals can deal with violations of scientific norms and standards. However, these mechanisms are either not being used or not being used sufficiently. The peer review process works to some degree, but has failed to catch gross violations that you and other contributors to ClimateAudit have exposed. This failure indicates the need to reform the peer review process (a strategy) and to develop specific ways to do so (tactics).

    2. The urgent need for action (strategy and tactics) is prompted by the February 2, 2007 IPCC release of a Summary for Policy Makers (SPM), which is being used to claim that the science is settled and to stifle scientific investigation and public debate. However, the IPCC has refused to release the scientific basis (Working Group 1 report) for the SPM until May 2007. In the world of science, the scientific basis is published before, or at the same time as, any finding or summary. In the world of propaganda, the IPCC Summary for Policy Makers is free to make headline-grabbing, alarmist claims without allowing the public, the press, or scientists to check the scientific basis for the claims.

    R.K. Pachauri, the IPCC chairman, told Reuters:

    I hope this report will shock people, governments into taking more serious action as you really can’t get a more authentic and a more credible piece of scientific work.

    The IPCC touts this Fourth Assessment as an historic scientific study. It is historic, but only because it is the greatest charade of science in history. It is a disgrace that scientists concocted and/or participated in this most unscientific process (the SPM published before the Working Group 1 report; cart before the horse) and lent their prestige to claiming it is science or a science-based process.

    The IPCC knows that whatever blunders scientists and the public might uncover in the Working Group 1 report in May or later will never get the headlines and policy-maker attention that the IPCC SPM is getting now in February. The IPCC gets an A+ grade for Propaganda, and an F- grade for Science.

    But the question is: What do we do about IPCC’s gross violation that has worldwide impact on policy makers, legislators, the media, scientists, and the public? What multi-pronged strategies and tactics are needed to deal with this tipping point: IPCC’s historic violation of the scientific process?

    3. I recommend that ClimateAudit establish a permanent thread to discuss actions (strategy and tactics) for reforms and for dealing with IPCC’s violation and all the other violations that ClimateAudit has done an excellent job of exposing. Various mechanisms are available to deal with different types of violations. Actions would include strategy and tactics for 1) more or better use of existing mechanisms, 2) reform of existing mechanisms, 3) development of new mechanisms. The thread would offer the opportunity to 1) propose various strategies and tactics, 2) assess the pros and cons of strategies and tactics, 3) discuss priorities for actions, 4) form working groups to organize, participate in, or report on actions for reforms and for dealing with violations of professional, scientific, academic, and public interest norms, standards, guidelines, etc.

    Some examples of topics the thread would discuss:

    A. Short-term: What mechanisms (existing or new) can be used to stop the hemorrhaging caused by the February 2, 2007 IPCC release of a Summary for Policy Makers (SPM)?

    What mechanisms can be used to inform the public and policy-makers that the release of SPM before release of the scientific basis for SPM is the greatest scientific scandal in history? The SPM findings are being adopted and acted on now by world leaders and legislators, before the scientific basis is released and scrutinized. The far-reaching consequences of IPCC’s audacious abuse of science may be irreversible.

    B. Long-term: Reform of IPCC process. (Is reform feasible? If not, what is an alternative?).
    C. Requirements for government funded research for transparency, public disclosure, data access, audit trails, and archiving.

    D. Reform of peer review process at scientific journals.

    E. New legislation or regulations, where applicable and appropriate, to address gross violations affecting the public interest. For example, if government agencies that fund Global Warming research do not adopt requirements for transparency, public disclosure, data access, and archiving, then regulations or legislation may be needed.

    F. Standards and guidelines for members of scientific societies and organizations.
    G. University (academic) standards for professors and university sponsored research.

    What do you think about this recommendation for ClimateAudit to establish a permanent thread to discuss, plan, and organize actions to deal with violations of professional, scientific, academic, and public interest norms and standards uncovered by ClimateAudit?

  86. Gator
    Posted Feb 18, 2007 at 1:59 PM | Permalink

    http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf

    Aren’t the corrections explained in the above paper? Note, for example, they make a great effort to find stations in rural areas by looking at population and satellite light maps of the US at night. The largest effect in the adjustments of the temperature measurements came from “time of observation debiasing”.

    Following on the post above (#85), this makes you all look very unprofessional. Do your reference searches; follow the paper. You would look really silly filing an official complaint about unprofessional behavior, only to have someone point to a paper and say “That was published in 2001. Didn’t you read it?”

    Looking at #85, though, I guess this isn’t the point. It seems Tom C wants a political overclass that would censor and censure whatever didn’t fit his political point of view. All this masked in the name of “auditing” … pretty OTT when no one here seems able to find the papers that are in the public domain in the first place…

  87. John G. Bell
    Posted Feb 18, 2007 at 2:10 PM | Permalink

    Reform won’t occur until scientists outside of climatology rise up in revolt. Right now I think the physicists, astronomers, statisticians, geologists (insert your field here) each in general think: well, they kind of mucked it up in my field, but perhaps they got the other stuff right. Anyway, I’ve got my own research to do and I don’t want to be associated with these people. Dang, but they sure have found a cash cow, haven’t they?

    Well, the fallout will hit all of the sciences. Science will lose esteem in the public eye, and funding will be harder to get because of it. Joe Sixpack doesn’t know a climatologist from a physicist. How hard the fall is depends on whether climatologists get led off the mountain before they walk off a cliff.

    I think a few FOI requests could sell a lot of newspapers. I hate to say it but the middle game will be driven by money and ambition.

  88. John A
    Posted Feb 18, 2007 at 2:14 PM | Permalink

    Steve,

    It might be useful to compare the US composite record with the satellite record, putting them to the same arbitrary zero in 1979.

  89. John G. Bell
    Posted Feb 18, 2007 at 2:19 PM | Permalink

    Well you know what I meant :).

  90. bruce
    Posted Feb 18, 2007 at 3:14 PM | Permalink

    Re #85: Good points. In your point E you say:

    New legislation or regulations, where applicable and appropriate, to address gross violations affecting the public interest. For example, if government agencies that fund Global Warming research do not adopt requirements for transparency, public disclosure, data access, and archiving, then regulations or legislation may be needed.

    It would also be good to invite a lawyer to review existing legislation to see whether charges can be brought. In Australia there is a clear body of law (mainly incorporated in the Trade Practices Act) relating to “false and misleading conduct”. I would be surprised if US law didn’t have similar requirements.

    All of the work on this site, and other sites (Warwick Hughes, Pielke Snr, etc.), actually marshals the evidence in support of a charge that the IPCC and its contributors have knowingly engaged in false and misleading conduct with the aim of influencing policy, and scaring the public into supporting initiatives to continue lucrative funding.

    I would have thought that at least some observers of this site will have a legal background, and be able to offer an opinion as to whether a legal case can be made, and whether, and in what jurisdiction, charges might be brought.

    For example, it would be interesting to know whether it can be proven that there have been breaches of the Data Quality Act referred to on another thread (post #8 on HadCRU3 versus GISS thread).

    I realise that it is unpleasant for all involved to engage in litigation. However, it is likely that nothing else will be as effective in causing people to be careful about the robustness of their work and the truth of their statements. Considering the massive consequences for countries and economies of the AGW scare campaign, is it not reasonable to hold these people accountable for the truthfulness of their statements?

  91. Jean S
    Posted Feb 18, 2007 at 3:32 PM | Permalink

    Steve, I still think (see my #4) that the temperature series in the GISS press release (and preserved by Daly) shown here (Figure 1) is not “2000 GISS version”. The series seems to match perfectly to the figure Plate 2/(A)/(a) in Hansen 2001 labeled “USHCN: Raw Data”. It does not match with
    1) GISS 1999 shown in Hansen 1999/Figure 6 [I think this is actually the “other news release” figure of the previous post, http://www.climateaudit.org/wp-content/uploads/2007/02/uhcnh2.gif]
    2) GISS 2001 shown in Hansen 2001/Plate 2/(A)/( c)
    nor
    3) Adjusted USHCN 2000 shown in Hansen 2001/Plate 2/(A)/(b) [I think this is actually your “NOAA version of the same data”, the unlabeled figure above.]

    So I think your figure 2 is actually the difference between (old, i.e., “still current”) USHCN adjusted – USHCN raw, i.e., essentially showing the total USHCN adjustments (essentially the same information as in Hansen 2001/Plate 2/(B)/(i))

    Please check, and if I’m correct re-read my #8 as in that case Hansen’s figures give much additional information about the source of the USHCN adjustments.

  92. Larry Huldén
    Posted Feb 18, 2007 at 3:32 PM | Permalink

    From Lee’s comment: “That the adjustments have that effect does not mean that the adjustments were made with that intent. Your observation essentially restates the obvious – that the newer analysis shows more warming in recent years than the older analysis.”

    If you think of Philip Jones making adjustments, you would not trust him at all. He is hiding data which he later adjusts. If we never get to know which data he used in previous statements, we will never know if the new statement has any meaning.

    I have seen volunteers checking temperatures when the outdoor temperature was -35 to -40 centigrade in northern Finland, and I feel that they do a good job, even at night. I don’t think there could be any large-scale trend errors because of wrong times of checking.

    Lee is right that changes in a station site do not directly imply corrections concerning UHI. Whatever the corrections, they would be random relative to general temperature trends when you look at a large set of stations. The errors occur when some thermometers come more often into sunshine, and others into shade, because of many local changes in the environment.

    Another problem is the large set of different algorithms used for calculating the daily mean values. I have seen a figure that some 200 different methods are in use in the US temperature data set. As far as I know they have been checked for discrepancies. So the only big adjustment to be done is UHI. The few studies I have seen indicate that deviations are sometimes unexpectedly large. The differences have been several degrees Celsius between the city and the surrounding countryside (measured by engineers constructing houses). The conclusion is that the large-scale “urban correction” done in climate research has no meaning. The complex relationship is so far unknown; we don’t know under what conditions the difference affects measurements of climate trends.

  93. David Smith
    Posted Feb 18, 2007 at 5:10 PM | Permalink

    Re #88 The RSS satellite data for the US, lower troposphere, is here. The monthly numbers need to be averaged into annual values.

    Again, it should not be a perfect fit with US surface data, but it should be reasonably close. You’ll find that it is not close.
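    A minimal sketch of that monthly-to-annual collapse (the anomaly values below are made up for illustration; real RSS data would be parsed from the downloaded text file):

```python
# Collapse monthly anomalies into annual means before comparing series.
# The values are hypothetical, keyed by (year, month).
monthly = {(1999, m): 0.1 * m for m in range(1, 13)}

def annual_means(monthly):
    """Average all monthly values available for each year."""
    by_year = {}
    for (year, _month), value in monthly.items():
        by_year.setdefault(year, []).append(value)
    return {y: sum(vals) / len(vals) for y, vals in by_year.items()}

print({y: round(v, 2) for y, v in annual_means(monthly).items()})  # {1999: 0.65}
```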

  94. JerryB
    Posted Feb 18, 2007 at 5:57 PM | Permalink

    Re #91

    Jean and Steve,

    Some background which may be of assistance:

    The data with which the right graph of Steve’s figure 1 was generated came from
    the then predecessor of the GISS page http://data.giss.nasa.gov/gistemp/graphs/
    on which you can see the current version near the bottom of that page.

    From there it went to my PC, and from there to John Daly’s website. At that
    time, John did not download that digital file. He downloaded the graph, and
    later became interested in the digital file after I brought to his attention that
    some of his “station of the week” graphs, which were generated from GISS
    digital station data, seemed not to match the then current GISS digital station
    data, but for reasons which neither he, nor I, knew at that time. He then
    inquired of Hansen about the apparent discrepancies, and Hansen referred him
    to the PDF to which you are referring, Steve, or to a preprint thereof.

    None of the graphs in this thread should be labeled “USHCN” even though the
    reason for this thread is the change by GISS from using “raw” USHCN data,
    to using adjusted USHCN data, as part of their sources for USA data.

  95. Posted Feb 18, 2007 at 6:16 PM | Permalink

    Dear Milton #17,

    Why can’t the climate scientists get together and develop a plan to control a dangerous climate no matter what caused it. Just because its natural doesn’t mean its not dangerous.

    Because this would be equivalent to fighting against the laws of Nature. Moreover, it is not hard to see that if the climate variations are natural, they are extremely unlikely to be dangerous in a few decades in the future – simply because life has existed for billions of years and even homo sapiens has been around for millions of years.

    Your last quoted sentence demonstrates a bias. Maybe I am biased in the opposite direction but if I were choosing a similar sentence to yours, I would choose the opposite one about the opposite question. ;-) More explicitly:

    Just because the climate change may be (partially or dominantly) man-made doesn’t mean that it is dangerous. :-)

  96. Tom C
    Posted Feb 18, 2007 at 7:14 PM | Permalink

    #86 It’s not about anyone’s point of view; it’s about upholding scientific standards whatever anyone’s point of view is on Global Warming.
    It’s not about politics; it’s about scientific methods and professional standards that apply, or should apply, to all scientists.
    It’s not about what scientists say or conclude; it’s about whether scientists will provide access to data so that other scientists can see if the data supports the scientist’s conclusion, or so that other scientists can try to replicate the scientist’s work.
    It’s not about new standards for scientists; it’s about traditional standards for scientists.

    As for censure and censorship, there already is a political overclass (the Global Warming establishment) that fully occupies that role. For example, the attempt to silence GW skeptics includes referring to GW skeptics as “deniers” on a par with Holocaust deniers. Al Gore helped establish this particularly odious smear and censure. In his 1992 book Earth in the Balance, Gore conjures up an “ecological holocaust” and says “the evidence of an ecological Kristallnacht is as clear as the sound of glass shattering in Berlin.”

    With the Feb 2, 2007 IPCC SPM, unfettered by public scrutiny of its scientific basis until May, the Global Warming establishment is now reaching a crescendo of censure and censorship.

  97. David Smith
    Posted Feb 18, 2007 at 8:09 PM | Permalink

    RE: Time of Observation Adjustment

    I agree with the basic idea of time-of-observation adjustments (“TOA”). I am still struggling with how it is applied.

    Jerry B or anyone, do you know if this NOAA memorandum describes the current way that TOA is calculated and applied?

    If it is current, then I have some concerns.

    * Only 7 of the 48 states were sampled (California, New York, Colorado, Washington, North Carolina, Illinois and Indiana)

    * Only 6 of the roughly 1,200 possible months were sampled (Januaries of 1931, 1941, 1951, 1965, 1975 and 1984)

    * Only 1 of the 12 months was sampled (January)

    * The observations were placed into one of three time-bins, instead of using each actual time. The article notes that the “differences of the biases were less than 0.3F” for this binning method, though I don’t grasp what that number means, and it seems big when one is looking for small trends.

    * the model uses what appears to be a longitude-latitude grid of some kind (for application to areas outside those 7 states?), but I haven’t grasped what is being described.

    To me, it seems reasonable to calculate the adjustment station by station, using actual station observation times for each month of each year. Big job yes, but it’s a critically-important issue.

    I wonder if they ran tests to show that using but 1 of 12 months, and but 7 of 48 States, is a robust practice.

    Anyway, perhaps this is an obsolete memorandum (though it is still posted on the website) or perhaps I am misunderstanding it. Any help is appreciated.
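    As a toy illustration of why observation time matters at all (purely synthetic numbers; this is not the Karl model or any NOAA procedure): the same hourly record yields different monthly means depending only on when a max/min thermometer is reset, because a warm or cold reading just before reset can carry over into the next day’s extreme.

```python
import math
import random

random.seed(0)

# Synthetic hourly temperatures: a diurnal cycle peaking at 15:00 plus
# random day-to-day "weather" offsets. Numbers are illustrative only.
n_days = 33
offsets = [random.gauss(0.0, 3.0) for _ in range(n_days)]
hourly = []
for day in range(n_days):
    for h in range(24):
        diurnal = 5.0 * math.cos(2.0 * math.pi * (h - 15) / 24.0)
        hourly.append(15.0 + offsets[day] + diurnal)

def monthly_mean(obs_hour):
    """Mean of daily (max+min)/2 for a max/min thermometer reset once a
    day at obs_hour, so each 24 h window ends at the observation time."""
    daily = []
    start = obs_hour
    while start + 24 <= len(hourly):
        window = hourly[start:start + 24]
        daily.append((max(window) + min(window)) / 2.0)
        start += 24
    return sum(daily[:30]) / 30.0  # same 30 "days" for every schedule

means = {h: monthly_mean(h) for h in (0, 7, 17)}  # midnight, 7 am, 5 pm
spread = max(means.values()) - min(means.values())
print(round(spread, 2))
```

The printed spread is nonzero: identical weather, different "observers," different monthly means. That is the bias the TOB adjustment tries to remove.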

  98. Tom C
    Posted Feb 19, 2007 at 12:26 AM | Permalink

    Re #90 - Bruce, your call for stronger, more effective action, and your specific suggestions, are exactly what is needed at this time. See Paul’s call for action at #103 on Unthreaded #4.

    http://www.climateaudit.org/?p=1136#comment-85575

    See follow-up discussion in #107 on Unthreaded #4.

    http://www.climateaudit.org/?p=1136#comment-85621

    If we get to a critical mass on ClimateAudit, maybe we can make a difference.

  99. Armand MacMurray
    Posted Feb 19, 2007 at 4:24 AM | Permalink

    RE: #95

    …even homo sapiens has been around for millions of years.

    Just a correction for this sidetrack — homo sapiens has been around for 100-200k years.

  100. Posted Feb 19, 2007 at 7:32 AM | Permalink

    Thanks, Armand #99, you’re right. So I really wanted homo (genus) that is 1.5-2.5 million years old.

    http://en.wikipedia.org/wiki/Homo_%28genus%29

  101. Gaudenz Mischol
    Posted Feb 19, 2007 at 7:43 AM | Permalink

    Now that we are discussing and comparing the graphs of US temps (some of them stored on john-daly.com), all of a sudden the homepage of John Daly is down. Coincidence?

  102. JerryB
    Posted Feb 19, 2007 at 8:12 AM | Permalink

    Re #101,

    Gaudenz, I would guess that it was a coincidence. As of
    this time, it is up again, and its access log indicates
    that it resumed operating about five minutes ago. I do
    not have information about what caused the outage. Sorry.

  103. Gaudenz Mischol
    Posted Feb 19, 2007 at 9:38 AM | Permalink

    No conspiracy thoughts from me, the next time I’ll wait a little bit longer!

  104. bruce
    Posted Feb 19, 2007 at 10:47 AM | Permalink

    re #92:

    The few studies I have seen indicate that deviations are sometimes unexpectedly large. The differences have been several degrees Celsius between the city and the surrounding countryside (measured by engineers constructing houses).

    There is a way that we can all get a direct experience of UHI effects. Two years ago I bought a new Subaru Outback, which has an outside temperature readout on the dash – the first time I have had a car with that. We travel regularly from our house near the coast in Sydney, past Sydney Airport, down to our place in the country some two hours (180 km) southwest, i.e., both inland and at an elevation of 600 m.

    It is fascinating to watch how much the temperature can fluctuate on that two hour journey. There have been times when we left the country place when the temperature readout was -4 deg C (a morning frost), increased to a peak of 26 deg C at Sydney Airport, and ended up being 22 deg C at our Sydney house.

    More typically, last weekend when we left the farm the thermometer read 22 deg C; it increased to 28 deg C on the outskirts of Sydney, read 31 deg C at Sydney Airport, stayed at that level past Sydney City, and dropped back to 27 deg C at our house.

    The conclusions I draw from these experiences are two-fold. First, temperatures fluctuate all over the place even within a traverse of 180 km, making a nonsense of grid-cell averages. Second, there is a measurable UHI effect associated with Sydney Airport and Sydney City.

    I strongly suggest that anybody interested in whether or not there are significant UHI effects purchase a thermometer for their car (readily available at electronics or auto aftermarket shops) and observe the UHI effects directly.

  105. Tom C
    Posted Feb 19, 2007 at 11:38 AM | Permalink

    Re #79 - Steve, this thread about adjustment of USHCN history and temperature records is a perfect example of why I and other folks are recommending stronger, more effective actions than simply publishing articles in peer reviewed scientific journals. This effort is not about adding a political dimension to Climate Audit. This effort is about building on and emphasizing the legitimate means (mechanisms) that scientists can use to correct abuses of science, in addition to writing peer reviewed articles in scientific journals.

    For example, you took stronger, more effective action in your effort to correct the hockey stick problem. Another example is the pursuit of remedies under the Data Quality Act that TAC #4, MarkR #8, and you #9 discussed on the HadCRU3 vs GISS thread.

    So, other folks and I are recommending expanding these efforts and taking actions to a higher level. See Paul’s call for stronger, more effective action at #103 on Unthreaded #4.

    http://www.climateaudit.org/?p=1136#comment-85575

    See Bruce’s call for stronger, more effective action at #90 on thread: Adjusting USHCN history.

    http://www.climateaudit.org/?p=1142#comment-85442

    Because ClimateAudit wants not only to discuss abuses of science in regard to GW but also to correct such abuses, it seems that CA would benefit from establishing a permanent thread to coordinate legitimate efforts to correct them. Some reasons for such a thread are 1) to avoid duplication of efforts and waste of resources, 2) to find scientists and other folks (e.g. lawyers) who would like to participate, in various capacities, in such efforts, 3) to help differentiate the efforts and methods in different countries, and 4) to provide a base to share information on the methods and status of various efforts to correct abuse of science in regard to GW.

    I hope this explanation helps to clarify my recommendation for CA to establish a permanent thread to coordinate legitimate efforts to correct abuses of science. See the recommendation for action (strategy and tactics) in #85 on thread: Adjusting USHCN history.

    http://www.climateaudit.org/?p=1142#comment-85390

  106. Steve McIntyre
    Posted Feb 19, 2007 at 12:09 PM | Permalink

    #105. I understand your point but I don’t want to encourage mere venting.

    I’m a believer in writing letters and formal requests and making people answer. However I haven’t really pushed data issues for a while. If people are interested, I think that it might be worthwhile for lots of people to make formal requests to various institutions for data that they should have archived long ago. At a certain point, they might decide that it would be easier to simply archive the data than to continue having to handle requests, appeals and new requests. If it’s just me asking, then it’s fairly easy for them to stonewall. But if 10 or 20 people all ask separately for slightly different things, then maybe there would be some progress.

    I’ll return to this issue some time. However, I’d like people to stop thinking in emotional terms like “abuse of science” and more in practical terms like making a request for data in accordance with subsection x(y)(iiii)(2) or whatever. If there’s an overarching point to be made, it’s made better by having a proper file of requests and refusals – and it’s better if it’s not just me being refused.

  107. Posted Feb 19, 2007 at 12:11 PM | Permalink

    [snip -too much politics]

  108. Ken Fritsch
    Posted Feb 19, 2007 at 12:23 PM | Permalink

    Re: #97

    David Smith, I have been delving into the background references to the Hansen papers as you obviously have. It appears that some like Lee and Gator (in comment #86 in this thread) judge that the Hansen papers contain the pertinent data for evaluating the details of the temperature adjustments when in fact one has to go to the main references of these papers and secondary ones to get close to the details.

    Several references were made to Karl, et al. (1988) and other years for these authors for the specifics for the adjustments. The individual and total adjustments noted in the Karl paper over the past 100 years were:

    Time of Observation: +0.30 degrees C; shift to max/min thermometers: +0.03 degrees C; station history adjustment procedures: +0.25 degrees C; filling in missing station data: +0.08 degrees C; UHI effect: -0.10 degrees C. The total adjustment (from a graph): +0.54 degrees C.
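    As a quick arithmetic check using only the figures quoted above, the listed components sum to roughly the total read off the graph:

```python
# Adjustment components over ~100 years as quoted from Karl et al. (deg C).
components = {
    "time of observation": +0.30,
    "shift to max/min thermometers": +0.03,
    "station history procedures": +0.25,
    "filling in missing data": +0.08,
    "UHI effect": -0.10,
}
total = sum(components.values())
print(round(total, 2))  # 0.56, close to the +0.54 read off the graph
```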

    With the adjustment being a large portion of the temperature trend increase over this period, one would think there would be interest in just how these adjustments were made, and, even more importantly, in what alternative methods were available, if any. The possibility of data snooping the results using alternative methods of adjustment would seem to be something one would want to investigate.

    I would be more wary of ignorance of data snooping in the adjustment process by these people than of their doing anything more premeditated. Certainly the procedures you listed for the TOBS adjustment might lead to different results if, for instance, other states or other months were selected. I agree that the extra work of using all the available data would make better sense, and I wonder why it could not be scanned in with the modern technology available.

  109. JerryB
    Posted Feb 19, 2007 at 1:49 PM | Permalink

    Re #97 and #108,

    The TOB (Time of Observation Bias) adjustment procedure
    described in post 97 is quite different than the procedure
    used in USHCN adjustments. In the USHCN procedure, the TOB
    adjustment is applied for each month to each station’s data.

  110. David Smith
    Posted Feb 19, 2007 at 1:52 PM | Permalink

    Re #109 Is there a reference available on the internet that describes the specifics? I haven’t found one.

  111. David Smith
    Posted Feb 19, 2007 at 2:25 PM | Permalink

    Re #108 Jerry, the abstract of Karl’s paper says:

    Abstract
    Hourly data for 79 stations in the United States are used to develop an empirical model which can be used to estimate the time of observation bias associated with different observation schedules. The model is developed for both maximum and minimum monthly average temperature as well as monthly mean temperature. The model was tested on 28 independent stations, and the results were very good. Using seven years of hourly data the standard errors of estimate using the model were only moderately higher than the standard errors of estimate of the true time of observation bias. The physical characteristics of the model directly include a measure of mean monthly interdiurnal temperature differences, analemma information, and the effects of the daily temperature range due to solar forcing. A self-contained computer program has been developed which allows a user to estimate the time of observation bias anywhere in the contiguous United States without the costly exercise of accessing 24-hourly observations at first-order stations.

    That sounds like what the link in # 97 describes.

    The USHCN references the Karl method, and that’s all I’ve been able to find.

    Thanks for the help.

  112. David Smith
    Posted Feb 19, 2007 at 2:38 PM | Permalink

    Re #108 Ken, I am hoping that the process the USHCN uses is more robust than what is described in #97, and that I’ve simply stumbled across an obsolete document. I hope Jerry can help.

    Otherwise, if the #97 document does reflect current practice, then an examination of the TOA model needs to be done for different months, years, and locations to establish that it is robust.

  113. JerryB
    Posted Feb 19, 2007 at 3:16 PM | Permalink

    David,

    The Karl et al paper is available at

    http://ams.allenpress.com/perlserv/?request=get-abstract&issn=1520-0450&volume=025&issue=02&page=0145

    My blurb is at http://www.john-daly.com/tob/TOBSUM.HTM

    Your comment 112 makes me suggest that you go back and read
    my comment 109. :-) They both use Karl’s program, but they
    use it very differently.

  114. Posted Feb 19, 2007 at 4:41 PM | Permalink

    Steve: re: 106

    You make the point well that responses should not be emotional. Emotion is exactly what the AGW supporters use, and its limit will be reached as it becomes overkill. What you suggest is absolutely what I had in mind, but I believe we need assistance to ask the right questions in the right way.

    It is hard for these things not to become politicised in this day and age, and again I would remind you that you cannot wish the rules were different but can only work within the current climate (sorry!). As the tide turns, influence can then be used to change the rules for the better for the future.

    As an example, I am quite happy to write to our government ministers, government scientists, etc. here in New Zealand to seek answers from them on the relevant points, but as I am sure you are aware, they tend to try to wriggle their way out of hard questions, so questions need to be phrased in such a way as to embarrass them into giving answers rather than fluffy statements.

    We need to be sure the questions are based on solid ground and solid information and that they elicit responses. If the same questions are asked of different governments in different countries and of separate organisations within a country, I suspect we will obtain a range of answers which may well be revealing. I suggest that Steve list those questions that he believes will achieve this aim; we can then use those same questions within our own letters, with the responses then being archived here for all to view.

    It would be nice to have an idea of how many countries are represented by visitors/contributors to the site.

  115. David Smith
    Posted Feb 19, 2007 at 6:16 PM | Permalink

    RE #113 Thanks for the link, Jerry. What I’ll do is to try to summarize what I think is being done and you can confirm/correct that.

  116. DaleC
    Posted Feb 19, 2007 at 9:17 PM | Permalink

    JerryB, re comment #39 in USHCN Versions,

    Any progress on the file of global daily temperatures?
    URL available?

    Thanks.

  117. David Smith
    Posted Feb 19, 2007 at 9:44 PM | Permalink

    Jerry, are you aware of any USHCN time series which uses only the subset of first-order stations? Those would not have TOB adjustments.

  118. JPK
    Posted Feb 20, 2007 at 7:08 AM | Permalink

    If anyone has checked the FMH1-B Handbook: hourly surface observations cannot begin earlier than 10 minutes before the hour, and must be transmitted before the hour. I believe this regulation is worldwide. I think the TOA is unneeded; that 10-minute window could only yield very small biases one way or another.

  119. JerryB
    Posted Feb 20, 2007 at 9:14 AM | Permalink

    Re #116,

    A sample of a very preliminary summary is at http://www.john-daly.com/ghcndsum.smp
    and it points to a zip file of the full very preliminary summary,
    which is about one megabyte in size. (I will need to do a
    summary of the summary eventually.)

    Re #117,

    I am not aware that anyone has extracted such a series from
    USHCN and placed it where others might access it.

    Re #118,

    Most USHCN sites are not first order stations.

  120. JerryB
    Posted Feb 20, 2007 at 9:34 AM | Permalink

    Re #119,

    Pardon my typos, The first sentence in 119 should have
    started as: A sample of a very preliminary summary …

  121. Steve Sadlov
    Posted Feb 20, 2007 at 12:37 PM | Permalink

    RE: #79 – In the days when +/- 1 deg C was considered good enough (e.g. for farmers, airmen, etc.), it drove a certain set of expectations. All of a sudden, the world is debating +/- 0.1 deg C (or finer gradations). It’s hard to admit that the historical data cannot inform the debate because they are too coarse and too lacking in quality. That is a “no go” for the AGW side of the debate, a taboo.

  122. Michael Jankowski
    Posted Feb 21, 2007 at 11:19 AM | Permalink

    If Spencer and Christy revise the UAH history to make the results “more accurate” via adjustments they have recently become aware of, the AGW crowd is up in arms, saying it’s a sign of flawed data and methods. The surface record gets “adjusted,” and not a peep is heard.

  123. Posted Feb 23, 2007 at 11:53 AM | Permalink

    Science…

    To the ‘faith-based’ theorists, you start with a conclusion and then construct hypotheses to support it.

    No dissent will be tolerated.

  124. Paul Everett
    Posted Feb 23, 2007 at 11:47 PM | Permalink

    [snip - no DDT]

    I have a really nasty speculation. Pure speculation. It is “known”, I guess, that Antarctica was once tropical. If that is true, I say IF, then that would argue that it was not always in its current position. OR, that the planet was once so much hotter that no ice existed at the poles. That’s certainly possible. However, it is also “known” that either Mercury or Venus flips on its axis rather often, I’ve forgotten how often but relatively often.

    Just suppose that the melting of the ice caps and the redistribution of all that weight into the liquid form of the great oceans destabilizes the axial rotation of the earth in such a way that it rotates 90 degrees, putting Antarctica and Arctic at the new “equator” and the equator at the new poles.

    I think you could say that would be the end of humanity as we now know it, or at least of technological humanity. Just a very wild speculation about possible consequences of the melting of the poles.

    Paul

  125. george h.
    Posted Feb 24, 2007 at 11:43 AM | Permalink

    Also courtesy of Numberwatch:

    Maier’s Law:

    If the facts don’t conform to the theory, they must be disposed of.

    Corollaries:
    1) The bigger the theory, the better.

  126. David Smith
    Posted Feb 25, 2007 at 9:00 AM | Permalink

    A couple of papers on temperature reconstruction problems are here and here.

    One section from the Pielke Sr et al paper discusses time of observation adjustments (p. 425-426):

    “We attempted to apply the time of observation adjustment using the paper by Karl et al. The actual implementation of this procedure is very difficult so, after several discussions with NCDC personnel familiar with the procedure, we chose instead to use the USHCN database to extract the time of observation adjustments applied by NCDC… An example is shown here for Holly, Colorado (Figure 1), which had more changes than any other site used in our study. What you would expect to see is a series of step function changes with known time of observation changes. However, what you actually see is a combination of step changes and other variability, the causes of which are not all obvious…”

    See Figure 1.

    I see no fundamental problem with adjusting the data to correct for TOB or other biases, but there needs to be a clear, easy-to-find record of exactly what adjustments were made, and why.

    On a related issue, as best as I can tell, the USHCN uses Karl’s Fortran program, based on his 1986 paper, to make the TOB changes. I hope that Karl’s reasoning and programming are proper and validated, as a lot of adjustment is based on that “black box” into which latitude, longitude, month, observation time, etc are entered and an adjustment is cranked out.
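    To make that “black box” concrete, here is a toy simulation of why a time-of-observation bias exists at all. This is not Karl’s 1986 procedure, and every number in it is synthetic: the point is only that a max/min thermometer read and reset once a day near the afternoon maximum re-seeds the next day’s running maximum with today’s peak, so warm afternoons get double-counted and the record drifts warm relative to a midnight reset.

```python
import math
import random

# Toy illustration of time-of-observation bias (TOB).
# NOT Karl's 1986 algorithm; all numbers here are synthetic.

def hourly_temps(n_days=365, seed=0):
    """Synthetic hourly temperatures: a diurnal cycle peaking at
    3 pm plus persistent random day-to-day weather swings."""
    rng = random.Random(seed)
    temps, offset = [], 0.0
    for _ in range(n_days):
        offset = 0.8 * offset + rng.gauss(0.0, 3.0)  # weather persistence
        for h in range(24):
            diurnal = 6.0 * math.sin(2.0 * math.pi * (h - 9) / 24.0)
            temps.append(15.0 + offset + diurnal)
    return temps

def maxmin_mean(temps, reset_hour):
    """Mean of daily (max+min)/2 from a max/min thermometer read and
    reset once a day at reset_hour. After each reset the running
    extremes restart from the *current* reading -- the carry-over
    that creates the bias."""
    hi = lo = temps[0]
    daily = []
    for i, t in enumerate(temps[1:], start=1):
        hi, lo = max(hi, t), min(lo, t)
        if i % 24 == reset_hour:
            daily.append((hi + lo) / 2.0)
            hi = lo = t  # reset seeds the next day with the current temp
    return sum(daily) / len(daily)

temps = hourly_temps()
bias = maxmin_mean(temps, 15) - maxmin_mean(temps, 0)
print(bias > 0)  # 3 pm resets read warmer than midnight resets
```

    The same synthetic station record yields different means depending only on when the observer resets the instrument, which is why station histories with changed observation times need some correction, and why the correction procedure deserves documentation.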

  127. David Smith
    Posted Feb 25, 2007 at 9:05 AM | Permalink

    RE #130 Sorry, the second paper is here.

  128. John G. Bell
    Posted Feb 25, 2007 at 1:47 PM | Permalink

    I am trying to understand this in general terms. My lack of background prevents me from being able to contribute much, but I did find this in the Keim paper, right before the summary and conclusions:

    “Even at the climate divisional level, the USHCN pattern is more geographically cohesive in that no division cooled over the period of record, and the region of significant warming are all contiguous divisions in the southeastern portion of the study region (figure 1). This seems much more logical than the NCDC data pattern where adjacent divisions have significant trends, but in opposing directions, e.g., MA-1 and MA-2″

    That may be so, but do we really know that contiguous divisions behave this way relative to each other? They were put in different divisions because they represented different ecological niches, after all.

  129. Posted Feb 28, 2007 at 7:21 AM | Permalink

    In Brohan et al, adjustments are shown as a histogram (there goes the time information). One could take the Figure 2 adjustments and make a similar statement:

    Hypothesising that the distribution of adjustments required is Gaussian, with a standard deviation of 0.75 C gives the dashed line in figure 4 which matches the number of adjustments made where the adjustments are large, but suggests a large number of missing small adjustments. The homogenisation uncertainty is then given by this missing component (dotted line in figure 4), which has a standard deviation of 0.4 C. This uncertainty applies to both adjusted and unadjusted data, the former have an uncertainty on the adjustments made, the latter may require undetected adjustments.

    I’m almost sure that they are only joking.
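    The quoted reasoning can be sketched numerically. This is only an illustration with my own assumed cutoff, not the paper’s actual procedure: if the adjustments a record requires were Gaussian with standard deviation 0.75 C, but homogenisation only ever detected those larger than some limit, the undetected residue would have a computable spread. With a hypothetical detection limit of 0.75 C, that residue’s standard deviation comes out near the 0.4 C figure quoted.

```python
import math
import random

# Sketch of the Brohan et al. argument quoted above. Flagged
# assumption: the 0.75 C detection limit is my illustrative
# choice, not a figure taken from the paper.
rng = random.Random(1)

# Hypothesised "required" adjustments: Gaussian, sd = 0.75 C
required = [rng.gauss(0.0, 0.75) for _ in range(100_000)]

# Suppose only adjustments larger than the limit are detected;
# the small ones are "missing" and become the claimed
# homogenisation uncertainty.
limit = 0.75
missing = [a for a in required if abs(a) < limit]

# Spread of the undetected residue (its mean is ~0 by symmetry)
sd_missing = math.sqrt(sum(a * a for a in missing) / len(missing))
print(round(sd_missing, 2))  # close to the 0.4 C in the quote
```

    Note what the argument implies: the uncertainty attributed to “missing” adjustments applies to adjusted and unadjusted series alike, which is exactly the part the commenter finds hard to take seriously.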

5 Trackbacks

  1. [...] the National Oceanic & Atmospheric Administration (NOAA) in the USA. The remarkably restrained Steve McIntyre has been looking at some of the data they put out. And guess what? It has changed recently! They [...]

  2. [...] global warming “hockey stick” graph, notices some funny business when it comes to the reporting of historical temperature data. He’s noting that, in more recent reports, temperatures reported from the 1920s and 30s are [...]

  3. [...] (USHCN) and the UK-based Climatic Research Unit (CRU). Steve McIntyre of Climate Audit has been researching past data quite extensively and to his surprise discovered recent changes. It seems that the data [...]

  4. By Numberwatch by John Brignell » Fakery! on Feb 24, 2007 at 11:13 AM

    [...] what we long suspected has been shown to be true. Records of past temperature data are being altered to exaggerate the apparent rate of global warming. The perpetrators and their acolytes make the [...]

  5. By Desde el exilio on Feb 27, 2007 at 5:55 AM

    In the future, the past will be colder…

    Orwell sends his regards! Steve McIntyre, author of Climate Audit, presents an interesting case of climate-data manipulation. It turns out that, even though the “original” of Hansen’s 1999 press conference has disappeared from the…
