Atmoz Agrees on USHCN Adjustment Defect

A theme in many recent posts has been whether the USHCN and NASA adjustments are successful in achieving their goals.

On a number of occasions, we’ve observed that the USHCN station history (SHAP) adjustment appears to be an odd statistical procedure and can be objectively seen to be unsuccessful in picking up recorded station moves. This issue was revisited in connection with Lampasas TX, where a 2000 move to a non-compliant location was not identified and corrected for by the USHCN adjustment algorithm. In a post on Lampasas, I observed that the SHAP/FILNET algorithm seemed to have the effect of blending stations, in the following terms (though similar observations have been made on other occasions):

My impression of the impact of the SHAP/Filnet adjustments is that, whatever their stated intention, they end up merely creating a blend of good and bad sites, diluting the “good” sites with lower quality information from sites that are “bad” in some (objective) sense. When this version gets passed to Hansen, even his “unlit” sites no longer reflect original information, but are “adjusted” versions of unlit sites, in which it looks to me like there is blending from the very sites which are supposed to be excluded in the calculation.

Atmoz has now analyzed an arbitrary USHCN site (Saguache), mentioning Watts Up (but not CA), concluding:

In this post, I examined one surface station record to determine the effects of microsite bias. In doing so, I found that the SHAP adjustment as applied by NOAA does not account for all the station moves in the station history. A simple and tractable correction method is outlined in this case study which uses regional anomalies to correct stations for local effects.

This is the third occasion [Lampasas, TX; Miami, AZ] where SHAP corrections have been documented to not fully account for station moves. Furthermore, this analysis was done on a random station in the USHCN; it was not cherry-picked to prove a point. This suggests the SHAP algorithm does not correct for all microsite issues related to station moves. People using the SHAP-corrected data should be aware that not all microsite biases have been removed, and they should attempt to account for these issues themselves.

At this point, I’ve not evaluated whether Atmoz’ proposed method is more successful than the USHCN method. However, it’s nice that a third party has confirmed that the USHCN adjustment algorithm has failed to identify station moves – a point made here and at Watts Up, occasioning some unwarranted derision elsewhere.
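For readers who want to experiment, the kind of regional-anomaly correction Atmoz describes can be sketched roughly as follows. This is a hypothetical illustration under simplifying assumptions (annual series, equal neighbor weighting, a clean pre-move baseline period), not Atmoz’s actual code:

```python
import numpy as np

def regional_anomaly_correction(station, neighbors, base=slice(0, 30)):
    """Remove local (non-regional) drift from a station temperature series.

    station:   1-D array of annual mean temperatures for the target station
    neighbors: 2-D array, one row per nearby station, same years as `station`
    base:      slice defining the baseline period used for anomalies
    """
    # Anomalies relative to each series' own baseline-period mean
    station_anom = station - station[base].mean()
    neighbor_anom = neighbors - neighbors[:, base].mean(axis=1, keepdims=True)

    # The regional signal is the average anomaly across neighboring stations
    regional = neighbor_anom.mean(axis=0)

    # Whatever the station does beyond the regional signal is treated as a
    # local effect (station move, microsite change); subtract it back out
    local_effect = station_anom - regional
    return station - local_effect
```

With a synthetic regional trend plus an artificial step change at the station, this recovers the step-free series; with real data the “local effect” is noisy and would need smoothing or changepoint detection before being removed.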


  1. steven mosher
    Posted Mar 2, 2008 at 2:37 PM | Permalink

    Some sharp grad student could publish a pile of papers wading through this swamp.

  2. Posted Mar 2, 2008 at 2:41 PM | Permalink

    I appreciate the professional way Atmoz has approached this, casting aside the rhetoric seen elsewhere and focusing on the problem.

    I agree, there’s papers in the making on this.

    Mosh, you have to get that double-click problem of yours fixed; about half the posts you make are doubled, like the one above.

  3. Posted Mar 2, 2008 at 3:14 PM | Permalink

    I just invited Atmoz to run the same analysis on Dillon, CO. This is a USHCN site that I have not written up nor commented on, but that has been unknowingly surveyed by both myself and Bob Thompson within hours of each other.

    I’ll withhold any comment on this station’s siting or posting pictures of it until he or Steve or both runs an analysis on it.

  4. George M
    Posted Mar 2, 2008 at 4:07 PM | Permalink

    There is more to the Lampasas story. I spoke to the NWS COOP manager by telephone last week, and he told me that the offending building was built after that station was moved to the present location. As I understand it, there is no location entry on the forms to indicate that what was likely a marginally acceptable location became unacceptable without any move taking place. So the level of imponderability is worse than initially suspected. How many other sites have been encroached on in this same way? And unless local, knowledgeable citizens are actively interviewed for each site, how would anyone know?

  5. Stan Palmer
    Posted Mar 2, 2008 at 4:27 PM | Permalink

    I hope that this is not too obvious a question.

    However, what sort of V&V was done on this algorithm before it was applied to production data?

    Some have asked for the difference between an engineering “report” and a scientific “paper”. An engineering report would be required to detail the V&V done to certify a proposed algorithm. A scientific “paper” could just be the reporting of a potentially useful idea.

  6. Sam Urbinto
    Posted Mar 2, 2008 at 5:01 PM | Permalink

    Atmoz seems to be a rather thoughtful individual, and if only everyone were as lucid, rational, and logical, we probably wouldn’t have all the circular snark on a regular basis, methinks.

    As far as the siting issues go:

    “unwarranted derision”? What, is there a problem pointing out this is nothing more than a bunch of misguided anti-environmentalists taking pictures in an attempt to falsify global warming?


  7. Julie KS
    Posted Mar 2, 2008 at 5:10 PM | Permalink

    George M, that’s very interesting. That could explain why the MMS obstruction data, if I read it right, did not indicate that any buildings were nearby.

    Some of the other MMS data on the last location contradict each other, too.

    Did the coop manager say the radio station bldg was new, or the Ace store bldg? The official coordinates show the station to be at the corner of the Ace hardware store bldg next door to its current location. I wonder, did they move the sensor when the Ace store was built, and not record the move? Hmmm.

  8. Posted Mar 2, 2008 at 7:01 PM | Permalink

    Re: #13
    A link to the paper “McKitrick, Ross R. and Patrick J. Michaels. (2007) Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded surface climate data” is here.

    I believe this is a very important paper. The conclusion is: Fully correcting the surface temperature data for “nonclimatic effects reduces the estimated 1980-2002 global average temperature trend over land by about half.”

    Ross McKitrick gives a very entertaining summary here.

    Note the IPCC’s response to the finding that their temperature data are severely contaminated. They conceded the evidence of contamination, but claimed that the strong correlation of socioeconomic development to warming is due to “strengthening of the Arctic Oscillation and the greater sensitivity of land than ocean to greenhouse forcing owing to the smaller thermal capacity of land.” The claim is obviously preposterous. McKitrick says de Laat and Maurellis had emphasized that climate models made no such prediction.

    Also note that the acknowledgments section of the paper thanks our host “Stephen McIntyre for assistance with the figures.”

  9. steven mosher
    Posted Mar 2, 2008 at 7:22 PM | Permalink

    Re 6. Stovepipe.

    “Can you show that either site (Lampasas or Miami) is having a material impact on:

    The locale observed trend;
    The regional observed trend; or
    The global observed trend?”

    Of course not. Don’t be silly. That is not the question.

    When the IRS audits your taxes, you do not ask if that one personal phone call you charged to business makes a “material impact” on the tax you paid or the revenue the US government received. You fix your return. They note the problems, and you are expected to correct them. It’s a very narrow, very focused question and solution.

    Some people keep trying to turn this into a global warming question. But until you finish the audit, until all the corrections are made, it’s not a global warming question. It’s a “fix your damn report” question. The best course of action is to avoid making broad claims or meaningless defenses.

  10. George M
    Posted Mar 2, 2008 at 8:04 PM | Permalink

    Sorry for the OT: Julie, please drop me an email at w5vpq (at) I have some additional info for you to add to the surface stations posting.
    Thanks, George M

  11. Posted Mar 2, 2008 at 8:23 PM | Permalink

    RE3, Anthony:
    The Dillon station seems uninteresting from a station move standpoint… at least post-1948, which is the first entry in the station history.

    I’m unsure if Anthony would appreciate me hotlinking images from his site, so here’s a link to the Dillon Filnet record. The jump in the 1930s is odd. It appears in several nearby stations as well: Steamboat Springs, Chama, Laramie, and Canon City.

    I’m unsure what’s going on here, if it’s a real effect or not.

  12. Posted Mar 2, 2008 at 8:37 PM | Permalink

    Hi Atmoz,

    Thank you for doing that. There is something else I want to point out, but NCDC is not loading from my location, so it will have to wait.

    You are welcome to hotlink images, just so long as the source is identified.

    BTW, you’ll also notice the three individual prominent peaks on the GISS record for Dillon on an extremely rural station called Cheesman Lake.

    Same spacing, about 20-24 years each. I don’t see these equated to station moves, but perhaps something else.

    As soon as NCDC MMS is available to me I’ll elaborate further on what I was hoping you’d find, it may still be there.

  13. Posted Mar 2, 2008 at 8:54 PM | Permalink

    Ok NCDC MMS came back up for me, and I could double check something. Do you see anything pop out from April 2002 to the present related to the entire record?

    I won’t say why at this point other than to say something happened then that the observer told me about when I was there that is supported in the station records.

    I want to see if your method can pick it up.

    Atmoz thanks again for your willingness to do additional investigation.

  14. Posted Mar 2, 2008 at 9:32 PM | Permalink

    From the second image above, the “corrected” values (red) for the year 2002 and onward are lower than the “raw” values. This means the anomaly at Dillon is higher than the regional average. I originally noted this, but because the change is so small, I thought it was due to an artifact of the procedure. The mean temp at Dillon is ~0.3F higher than surrounding stations after April 2002 compared to the time period from 1996 through April 2002. This is well within the “noise” of the procedure, as evidenced by the first plot above.

    Also note, I’m working with maximum temperatures. A change that would influence mainly the minimum temps would not be noticed by this procedure.

  15. Joe Black
    Posted Mar 3, 2008 at 5:51 AM | Permalink

    From the plot at it looks like some of the NOAA adjustments are done on a seasonal basis.

  16. Joe Black
    Posted Mar 3, 2008 at 5:54 AM | Permalink

    Link worked in preview.

  17. Joe Black
    Posted Mar 3, 2008 at 6:18 AM | Permalink

    Note that annual temperatures are built from daily max/min → monthly max/min/mean → annual. If the monthly data are averaged using a days-in-the-month weighting (months with 31 days carry more weight), one gets a different annual mean than from using equal monthly weights, and there can be a trend in the difference, as the monthly trends at a given site do not necessarily move in a uniform fashion.
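    Joe’s point about the two weightings can be illustrated with a toy example (all temperatures made up):

```python
# Two ways to build an annual mean from twelve monthly means.
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
monthly_means = [30.0, 32.0, 45.0, 55.0, 65.0, 75.0,
                 80.0, 78.0, 70.0, 58.0, 45.0, 33.0]  # deg F, hypothetical

# Equal monthly weighting: every month counts the same
equal_weighted = sum(monthly_means) / 12

# Days-in-month weighting: months with 31 days carry more weight
day_weighted = sum(m * d for m, d in zip(monthly_means, days)) / sum(days)

# Equal weighting gives 55.5; day weighting gives ~55.63, warmer here
# because the cold month (February) is short and so gets less weight.
print(equal_weighted, day_weighted)
```

    Since the under- and over-weighted months differ in trend from station to station, the two annual series can also differ in trend, which is the point at issue.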

  18. Pierre Gosselin
    Posted Mar 3, 2008 at 6:37 AM | Permalink

    Here’s an interesting experiment:

    A young student, with the help of his dad, mounted a GPS-tracked thermometer to the family car and drove it through Phoenix recording the temperature and UHI. I don’t have time to translate the German text, but the plot says a lot.

  19. Pierre Gosselin
    Posted Mar 3, 2008 at 6:51 AM | Permalink

    The a.m. PHX measurement is described at ClimateSkeptic:

  20. Hoi Polloi
    Posted Mar 3, 2008 at 7:44 AM | Permalink

    I wonder how many public funds were allocated to this Phoenix experiment? I reckon the usual climatologists wouldn’t do it for under $300,000 and would require 12 months’ preparation time…

  21. MarkW
    Posted Mar 3, 2008 at 8:41 AM | Permalink

    The critical question still remains. Could they get a latte at StarBucks between runs?

  22. steven mosher
    Posted Mar 3, 2008 at 9:58 AM | Permalink

    GOBS of TOBS

    Atmoz is looking at things with fresh eyes. That’s a good thing. According to NOAA, USHCNv2 will “account” for documented and undocumented changes using a variety of mechanisms. I’m still not seeing the USHCN data using these methods. Jerry B? What say you?

    Anyway, on to the TOBS point. As I searched in vain over many an ftp site, I stumbled on this description of DATA QUALITY WRT TOBS. Next time you guys prepare a report for a government agency, use the ‘F’ designator and see how far it gets you.

    QUESTION: how many of the TOBS adjustments (which warm the record) are ‘F’ quality?

    ” but MOSHPIT, what does F stand for?”
    Not that word.. not the F word?

    Time of Observation Data

    (3) the third position is the code for the quality of the available observation
    times for a given station:

    F = information concerning the observation times for the station during that
    year was suspect or “flaky”;
    G = information concerning the observation times for the station during that
    year was “good” and the information was judged to be accurate;
    Blank = information concerning the observation times was not available for the
    station during that year and the data are represented as missing.

    Filnet Data

    (3) the third position is the code for the temperature indicator for the Time of
    Observation bias correction:

    O = corrected;
    Blank = no observation time correction (treated as a station move).

    Confidence Record

    (3) the third position is the code for representing the significance level at which
    the initial adjustment was made:

    1 = sigma of 1.0 and confidence interval of 16% to 84%;
    2 = sigma of 2.0 and confidence interval of 5% to 95%;
    3 = sigma of 2.57 and confidence interval of 1% to 99%;
    5 = sigma of 3.75 and confidence interval of 0.01% to 99.99%;
    C = closed station value; the station has missing values at the end of the
    period of record for at least one calendar year;
    U = the algorithm was unable to adjust the entire series due to the station
    density of the network, but an estimate for the missing data is given by
    using neighboring stations;
    X = the algorithm was unable to adjust the data.
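    The flag meanings in the listing above amount to a small data dictionary. Captured as a lookup (the code meanings are taken from the listing; the surrounding record layout is not reproduced here, so this is illustrative only):

```python
# Third-position codes from the USHCN metadata listings quoted above.

TOBS_QUALITY = {
    "F": "observation times suspect ('flaky')",
    "G": "observation times judged accurate",
    " ": "observation times not available; data represented as missing",
}

FILNET_TOBS = {
    "O": "time-of-observation bias corrected",
    " ": "no observation time correction (treated as a station move)",
}

CONFIDENCE = {
    "1": "sigma 1.0, confidence interval 16% to 84%",
    "2": "sigma 2.0, confidence interval 5% to 95%",
    "3": "sigma 2.57, confidence interval 1% to 99%",
    "5": "sigma 3.75, confidence interval 0.01% to 99.99%",
    "C": "closed station (missing values for at least one final calendar year)",
    "U": "series not fully adjustable; missing data estimated from neighbors",
    "X": "algorithm unable to adjust the data",
}

def describe(code: str, table: dict) -> str:
    """Translate a one-character flag into its documented meaning."""
    return table.get(code, "unknown code")
```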

    Now, since there is an official government code for calling some TOBS data Flaky, can I use this
    word without being accused of horrible crimes against climate science?

    quatloo bets on how long before this is changed.


  23. Gary
    Posted Mar 3, 2008 at 10:14 AM | Permalink

    Moshpit, we want Flakey crusts on our cherry pies.

  24. Mike B
    Posted Mar 3, 2008 at 10:56 AM | Permalink

    Moshpit #22:

    I don’t think it’s going to change anytime soon. The flaky crust of TOBS has been discussed at CA before.

    But seriously, reliable metadata on time-of-observation is required for calculation of a reliable TOBS adjustment.

  25. stEven Mosher
    Posted Mar 3, 2008 at 11:40 AM | Permalink

    re 23. TOBS adjustment. I have read through Karl’s paper. I have looked at Jerry B’s data (fine, fine work). And the question of TOBS comes down to this.

    1. Do you select stations with CONSISTENT TOBS over the course of history and live with the sparse temporal/spatial field?

    2. Do you “adjust” stations based on a model that has an error larger than your observation error? And then do you pretend that your adjustments improve your statistics?

    3. Do you ignore this and live with the variance due to bias?


  26. steVen Mosher
    Posted Mar 3, 2008 at 11:55 AM | Permalink

    re 24. I’ve had my fill of flaky for February. March Madness is upon us.

    TOBS remains one of those areas that is ripe for a restudy. Atmoz???

    In grad school you have two options…

    1. Carve out a new little piece of territory… the albedo of Speedos in Monte Carlo; OR
    2. Cut down a beanstalk, like TOBS, and watch a giant tumble from the heavens.

    I like #2. Just remember to duck and run for cover.

  27. Joe Black
    Posted Mar 3, 2008 at 12:10 PM | Permalink

    It seems like there is TOBS (Temperature at time of observation – as listed on station log sheets) and tOBS (time of observation). You blowing smoke again? 😉

  28. Sam Urbinto
    Posted Mar 3, 2008 at 12:19 PM | Permalink

    Joe Black:

    Note that annual temperatures are built from daily max/min → monthly max/min/mean → annual. If the monthly data are averaged using a days-in-the-month weighting (months with 31 days carry more weight),

    Regardless of how they come up with the daily figure (I understand it’s usually min/max, but in some cases it’s weighted somehow by frequency or something, not sure), it is my understanding the daily figures are averaged into the month and then the anomaly calculated. So it shouldn’t matter if it’s 28, 29, 30, or 31 days.


  29. Sam Urbinto
    Posted Mar 3, 2008 at 12:32 PM | Permalink

    MarkW: Regarding swimsuits. You are correct.

    Positive Proof of Global Warming

  30. Joe Black
    Posted Mar 3, 2008 at 12:43 PM | Permalink

    Sam Urbinto says:
    March 3rd, 2008 at 12:19 pm

    #28 as of post

    The annual data are what GISS et al. seem to use and are concerned with. NOAA seems to use an equal monthly weighting in determining an annual temp. It would seem to be more accurate to use a days-in-month weighting.

    Given that we are always talking about “climate”, a monthly approach would seem to be a more accurate assessment of climate “change”.

    Just saying, as a Winter aficionado.

  31. Sam Urbinto
    Posted Mar 3, 2008 at 12:52 PM | Permalink

    Joe, but if the number for the month is averaged, isn’t it already weighted? On the other hand, if the month were weighted by days, rather than by a mean that doesn’t reflect the structure of the days, that would be better, much as doing it by day would be.

    But then again, we are talking about some nebulous quality of means of spot temperatures that I wouldn’t think stable enough (even if everything were perfect regarding calibration, placement, measurement devices over time) to base some reflection of trends upon in the first place.


  32. Joe Black
    Posted Mar 3, 2008 at 12:59 PM | Permalink

    I think you are missing the point.

    The two ways of calculating an annual mean I’m referring to are:

    (sum of monthly means) / (number of months)

    and

    (sum of monthly mean × days in month) / 365

    It turns out there is a difference, and a difference in trend.

  33. steven mosher
    Posted Mar 3, 2008 at 1:16 PM | Permalink

    re 27. I am no Bill Parrish.

    I’m referring to the “adjustment” made to data in the USHCN that “corrects” for differing times of observation. This adjustment is affectionately referred to as TOBS.

  34. Joe Black
    Posted Mar 3, 2008 at 1:24 PM | Permalink

    I know, but I still think it should be referred to as tOBS, to avoid confusion.

  35. steven mosher
    Posted Mar 3, 2008 at 1:30 PM | Permalink

    re 30. Joe, have you looked at daily data? A while back I compared the monthly USHCN outputs with the daily data; there were some odd differences.

  36. K
    Posted Mar 3, 2008 at 1:43 PM | Permalink


    Suppose every day in Jan. was constant at 60F, and every day of Feb. was constant at 62F.

    Averaging the two months gives 61.00F. Averaging by day over those 59 days gives slightly less, 60.95F.

    OTOH, when comparing two months such as Jan. 2007 v. Jan. 2008, it doesn’t matter how many days are in the month.

    There is no concern if a consistent method is used for all data and dates. Ha!

  37. Joe Black
    Posted Mar 3, 2008 at 2:00 PM | Permalink

    K says:
    March 3rd, 2008 at 1:43 pm

    There is no concern if a consistent method is used for all data and dates. Ha!

    No, at many stations the trend of each month is not the same. Improper monthly weighting (all months weighted equally) results in a different (incorrect) annual trend.

  38. jae
    Posted Mar 3, 2008 at 2:01 PM | Permalink

    1930 Weather Station.

  39. Joe Black
    Posted Mar 3, 2008 at 2:04 PM | Permalink

    Joe, have you looked at daily data? A while back I compared the monthly USHCN outputs with the daily data; there were some odd differences.

    I’ve started, but haven’t finished yet. I ignored months where NOAA indicated there were more than two days of missing data (2/~30).

  40. Joe Black
    Posted Mar 3, 2008 at 2:20 PM | Permalink

    I’m about to head out tomorrow for an 8-week road trip to go do some “gravity research”, so any work on historical US air temps will only be done in my “spare” time.

  41. Steve McIntyre
    Posted Mar 3, 2008 at 7:24 PM | Permalink

    I’ve deleted some OT posts on rounding – which is not a subject that I wish to bother discussing here.

  42. Posted Mar 3, 2008 at 7:56 PM | Permalink

    Oops. Two Wrongs Don’t Make a Right… Unfortunately. This bug should not invalidate previous work showing the SHAP correction did not properly modify the temperature record for all station movements.

  43. K
    Posted Mar 3, 2008 at 11:43 PM | Permalink

    #36, which, unfortunately, I wrote, is about as messed up as matters get.

    #37 Joe Black and some others saw that it is a mess. Joe’s short comment gave a clue about what was wrong.

    About all I can say is that I have had insomnia for perhaps 90 hours; I can’t even be sure of how long. And while I thought about weighting, I had a great insight which was actually an illusion. (#36 sure looked right this morning.)

    I still haven’t slept, but my thoughts seem to clear and then fog at various times; it isn’t constant. And I see why Joe is right.

  44. Brian
    Posted Mar 4, 2008 at 7:06 AM | Permalink

    I read the following at:

    “March 1, 2008: Starting with our next update, USHCN data will be taken from NOAA’s ftp site rather than from CDIAC’s web site. The file will be uploaded each time a new full year is made available. These updates will also automatically include changes to data from previous years that were made by NOAA since our last upload. The publicly available source codes were modified to automatically deal with additional years.”

    Does the foregoing mean that the latest USHCN data become available more frequently or less?

  45. Joe Black
    Posted Mar 5, 2008 at 8:42 PM | Permalink

    steven mosher says:
    March 3rd, 2008 at 1:30 pm

    re 30. Joe, have you looked at daily data? A while back I compared the monthly USHCN outputs with the daily data; there were some odd differences.

    Well, I looked at two month-long daily station data sheets and came up with monthly means that were different from the NOAA monthly data by between 1.6 and 1.9 deg F. Rounding, that would make the differences (100% – 2/2) even, not odd.

    Now I have to look at the daily data from the .dly files to see what it looks like.

1 Trackback

  1. […] regional average as the 10 closest stations within 1000km. The results from this analysis have been commented on at Climate Audit, and have generally received praise from the ‘coolers’ group, and not so much praise […]
