The HO-83 Hygrothermometer

In the discussion of the Tucson weather station, Ben Herman of the University of Arizona observed that there were serious biases with the HO-83 hygrothermometer – introduced in the early 1990s – which was said to be a contributor to the uptick in Tucson values. Although USHCN has implemented adjustments to U.S. data to deal with time-of-observation bias and station history, both of which resulted in significant upward adjustments of recent data relative to earlier data, I have been unable to find any evidence that either NOAA or NASA made any attempt to adjust for the upward bias of recent readings from the HO-83 thermometer, although its problems are thoroughly discussed in the specialist literature.

Problems with a warm bias in the HO-83 system were first reported in print in Gall et al 1992 in connection with Tucson here. Kessler et al 1993, a follow-up article about stations in New York by some of the same authors, said:

They indicated that the HO-83 maximum temperature readings at Tucson were probably too warm by 1-2 deg C on sunny, light wind days, that the problem could probably be attributed to the design of the instrument and that an investigation was under way… Their investigation (Gall et al 1992) revealed that the warm bias (~1 -2 deg C) maximized shortly after solar noon and that the bias was very small or even slightly negative shortly before sunrise. Consequently recalibration checks done in the early morning would fail to reveal any bias in the HO-83 hygrothermometer.

Cyrus G. Jones and Kenneth C. Young 1995 said:

Examination of differences between the two instruments found that the original version of the HO-83 read approximately 0.6 deg C warmer than the redesigned instrument. Significant changes in the differences between the two instruments were noted between winter and summer. It is suggested that for stations with climatology similar to the ones used in this study monthly mean temperatures reported by the original version of the HO-83 be adjusted by adding -0.4 deg C to June, July August and Sept observations and by adding -0.7 deg C for the remainder of the year.
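The recommendation above amounts to a month-dependent offset; a minimal sketch (the function name and structure are mine, not Jones and Young's):

```python
def adjust_ho83_monthly_mean(temp_c, month):
    """Apply the Jones and Young (1995) correction to a monthly mean
    temperature reported by the original HO-83 (month is 1-12)."""
    # Jun-Sep: add -0.4 deg C; all other months: add -0.7 deg C
    correction = -0.4 if month in (6, 7, 8, 9) else -0.7
    return temp_c + correction
```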

These results were noted in Karl et al 1995, which was discussed in Karl and Knight 1996 as follows:

Karl et al. (1995) show that, on average, the HO-83 increased the maximum temperature by about 0.5°C relative to the HO-63 instrument and also increased the minimum but only by 0.1°C. Much larger effects have been noted in Tucson, for example (Gall et al. 1992), and Jones and Young (1995) also find a consistent positive bias at several stations they examined in the southern and central plains. This suggests that the trends of maximum T in Chicago are biased warm not only due to increased urbanization but by the introduction of the HO-83 instrument in 1986.

Gaffen and Ross 1999 cite these results as follows:

Karl et al. (1995) suggest that the change to the HO-83 hygrothermometers in the 1980s may have lead to spurious increases of 0.5 C in daily maximum temperature and ‘‘maybe’’ 0.1 C in daily minimum temperature, but that ‘‘the exact bias of any one instrument is unknown.’’

Karl et al 2002 discusses the adjustments, even showing the figure below:
Karl et al 2002 Figure 2.

The problem is discussed again in Lin et al (2003), which said:

The HO-83 maximum temperature readings were about +1 to +3 C higher on sunny, light wind days. The research of both Kessler et al. (1993) and Gall et al. (1992) illustrates the need to account for shield/sensor bias prior to analysis of operational NWS temperature data and determination of short-term temperature trends. Other researchers (Robinson 1990; Canfield and McNitt 1991; Meyer and Hubbard 1992; Croft and Robinson 1993; Blackburn 1993; Easterling et al. 1993; Guttman and Baker 1996; Andresen and Numberger 1997), in various ways, pointed out the climate data discontinuities and trends or changes in variability resulting from changes of temperature measuring systems and site locations.

It is also discussed at length in Peterson 2003 (which has been discussed previously at CA): Peterson actually carries out an adjustment for HO-83 (an adjustment not evidenced in USHCN, as discussed below). Peterson 2003 said:

However, C. Jones (2000, personal communication) reprocessed the data with extreme daily maximum and minimum hourly temperatures as surrogates for maximum and minimum temperature. The mean temperature adjustment is the average of the maximum and minimum since mean temperature in the U.S. cooperative network is the average of the maximum and minimum daily temperature. The annual mean temperature adjustment that results from this methodology, +0.6 deg C, is in keeping with the ‘‘approximately 0.6 deg C warmer’’ value determined by Jones and Young (1995). This value is added to the data from the HO-83 to make them comparable to the ASOS instrument. This adjustment to the ASOS standard essentially removes the HO-83 bias.
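Peterson's procedure, as quoted, reduces to averaging the daily maximum and minimum and then applying a constant offset to the HO-83 data; a minimal sketch (the names are mine, and the sign convention follows the quoted text literally):

```python
HO83_TO_ASOS_OFFSET = 0.6  # deg C annual mean adjustment, per the quoted passage

def adjust_ho83_mean(t_max, t_min, offset=HO83_TO_ASOS_OFFSET):
    """Mean temperature per the U.S. cooperative-network convention
    ((max + min) / 2), with the HO-83-to-ASOS adjustment applied."""
    return (t_max + t_min) / 2.0 + offset
```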

Peterson 2003 also discussed Comrie 2000 (discussed previously in connection with problems in Tucson) in a disapproving way, stating that part of the urban bias reported in Comrie 2000 may be due to HO-83 problems not discussed by Comrie:

Comrie (2000) did not reference Gall et al. (1992), which looked at then-current measurements at the National Weather Service Office in Tucson and found ‘‘daytime temperatures that are two to three degrees too high.’’

USHCN Adjustments

So problems with the HO-83 thermometer are amply documented in the specialist literature. USHCN has been quick to adjust for time-of-observation bias, which worked in the direction of increasing 20th century trends – what have they done to adjust for the HO-83 bias, which imparted an upward bias to measurements in the 1990s?

USHCN version 1 adjustments are summarized here and include no HO-83 adjustment, although they purport to adjust for the introduction of the MMTS as well as making the TOBS and station history adjustments discussed elsewhere. Likewise, USHCN Version 2 does not adjust for HO-83 bias.

I must confess that this indicates at least the possibility of a bias in adjustment decisions: one is hard-pressed not to take away the impression that, had the HO-83 adjustment increased 20th century trends, Karl and Hansen would have been on it like a dog on a bone; but, since the adjustment would lower late 20th century temperatures, it mysteriously becomes too elusive to implement. Just an impression.

References:

Gall, R., K. Young, R. Schotland, and J. Schmitz, 1992. The Recent Maximum Temperature Anomalies in Tucson: Are They Real or an Instrumental Problem? Journal of Climate, Volume 5, Issue 6 (June 1992), pp. 657–665. url
Lin, X., K. G. Hubbard, E. A. Walter-Shea, J. R. Brandle, and G. E. Meyer, 2003. Some Perspectives on Recent In Situ Air Temperature Observations: Modeling the Microclimate inside the Radiation Shields. Journal of Atmospheric and Oceanic Technology, pp. 1470–1484. url
Jones, Cyrus G., and Kenneth C. Young, 1995. An Investigation of Temperature Discontinuities Introduced by the Installation of the HO-83 Thermometer. Journal of Climate, Volume 8, Issue 5 (May 1995), pp. 1394–140. url
Kessler, Ronald W., Lance F. Bosart, and Robert S. Gaza, 1993. Recent Maximum Temperature Anomalies at Albany, New York: Fact or Fiction? Bulletin of the American Meteorological Society, Volume 74, Issue 2 (February 1993), pp. 215–226. url
Gaffen, Dian J., and Rebecca J. Ross, 1999. Climatology and Trends of U.S. Surface Humidity and Temperature. Journal of Climate 12, 811–828. url
Karl, T. R., and Coauthors, 1995. Critical issues for long-term climate monitoring. Climatic Change, 31, 185–221.
Karl, Thomas R., and Richard W. Knight, 1996. The 1995 Chicago Heat Wave: How Likely Is a Recurrence? BAMS. url
Trenberth, Kevin E., Thomas R. Karl, and Thomas W. Spence, 2002. The Need for a Systems Approach to Climate Observations. BAMS. url
Peterson, Thomas C., 2003. Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found. Journal of Climate. url

122 Comments

  1. Curtis
    Posted Aug 22, 2007 at 8:39 PM | Permalink | Reply

    Wow. Are we sure the cooling trend of the 1970s hasn’t continued?

    Just joking, but all these little problems adding a 1/2 degree upward bias here, another there, problematic observational data everywhere, ever expanding urban thermal islands — could it be possible that there is no trend in global temperatures?

    How can they say with any certainty that there is any trend in temperatures, global or local, as all these errors pile up and present an error margin greater than the trend they think they’re observing?

    Two weeks ago I was worried about global warming; now I am wondering about the next Ice Age.

  2. Larry
    Posted Aug 22, 2007 at 9:17 PM | Permalink | Reply

    I thought that I read that the main problem with the HO-83 turned out to be an unreliable aspiration fan. That would explain the high bias in the PM, and the low or nonexistent bias in the dark. It would also mean that it’s an intermittent problem, so it would be hard to apply a correction factor to an individual station, because that station may or may not have a problem.

  3. Posted Aug 22, 2007 at 9:23 PM | Permalink | Reply

    I’m still unclear on exactly what the USHCN is trying to measure, especially going into the future. Is it the temperature at the 1200 stations? US temp in open rural grassland? The temperature of the shell of air 4-5 ft above the surface of the US? Or the temp trends. What about any interaction of temperature with rainfall?

    Also is the temperature during the day well represented by the max min or has the shape of the temperature curve during the day changed?

  4. TAC
    Posted Aug 22, 2007 at 9:26 PM | Permalink | Reply

    Honestly! First you break the shaft of your opponent’s stick; then you break the blade. What’s next?

    Well, don’t be surprised when the gloves come off. ;-)

  5. Larry
    Posted Aug 22, 2007 at 9:35 PM | Permalink | Reply

    O2converter says:
    August 22nd, 2007 at 9:23 pm

    A big part of the problem is that these weather stations were never designed to measure long-term climate trends to within 0.1C. What were they designed to do, or what do they want them to do?

    The temperature measuring part is supposed to measure air temperature (NOT radiation), and if they’re sited correctly, elevation shouldn’t make any difference.

  6. Bob Meyer
    Posted Aug 22, 2007 at 9:37 PM | Permalink | Reply

    This is leading to extraordinarily complicated adjustments. It seems that wind and sun data are required for the corrections. Does such data exist? Of the stations that I visited, I found only one that had a device for measuring sunlight, though a few had anemometers.

    If you know that your station has some kind of uncorrectable error should the data be thrown out?

    Maybe a yearly sinusoidal correction could reduce some of the error but I’m not sure how to prove that without at least a year long experiment in several places.

    As for only looking for biases that move data in the direction of the CO2 theory, you don’t need to attribute bad motives to these people. Years ago I read about the “Millikan Effect”. Robert Millikan was a brilliant experimenter who first figured out how to measure the charge of an electron. Unfortunately, he arrived at the wrong value. He wasn’t far off, but just enough to cause problems with later experimentation.

    Scientists who tried to replicate his results got answers somewhat different from Millikan’s, but his reputation as an experimenter intimidated some of those doing the replications, so they kept finding reasons to move their value closer to his. Each successive experimenter moved his results a little less until finally they closed in on the correct value.

    People tend to see what they expect to see. I’ve watched this with umpires for years. Take a look at Barry Bonds’ strike zone and compare it to some rookie’s. I’ll bet that it is an easy 3 inches smaller all around. This is because umpires expect to see Bonds swing only at good pitches. After all, Bonds has a great batting eye.

    The NASA people are just seeing what they expect to see and it is up to others to point out the things that they ignore. It’s only when they get militantly obstructionist that their motives need to be questioned. But I have to mention that their refusal to open their books does move them closer to the obstructionist side of the argument.

  7. wildlifer
    Posted Aug 22, 2007 at 9:37 PM | Permalink | Reply

    Wow. Are we sure the cooling trend of the 1970s hasn’t continued?

    The point of this exercise is that we can’t be sure of any temperature that’s ever been taken. For all we know, the globe’s warmed 5C in the last century.

  8. steven mosher
    Posted Aug 22, 2007 at 9:37 PM | Permalink | Reply

    TAC,

    You pull the jersey over the head next! Trust me, it sucks when they do that to you.

  9. Steve McIntyre
    Posted Aug 22, 2007 at 9:47 PM | Permalink | Reply

    #7. My issue here is why neither NASA nor NOAA addresses the HO-83. That’s a different issue from whether meaning can be assigned to overall results.

  10. Philip_B
    Posted Aug 22, 2007 at 9:48 PM | Permalink | Reply

    What they are trying to measure is the global trend in near surface temperatures. However, it’s by no means clear there is a global trend. Maybe, there is just a bunch of local trends that sorta looks like a global trend.

    BTW, I am still looking for an explanation of how temperature datasets extending over decades can have a significant and increasing TOB, and why the TOB adjustment is required. Was the criterion used: it gives us more warming, so it’s in?

  11. Posted Aug 22, 2007 at 9:48 PM | Permalink | Reply

    Do we know what instrumentation is used in the rest of the world?

    How frequently are the instruments’ calibrations checked against a local precision thermometer?

    I’m familiar with the “resistor check” however that only proves the electronics. It tells us nothing about thermistor drift.

    Mann, this has got to be a real blow to the “high quality” network of continuously monitored “global” temperatures.

  12. Posted Aug 22, 2007 at 9:57 PM | Permalink | Reply

    Philip_B August 22nd, 2007 at 9:48 pm,

    In addition wouldn’t the TOB adjustment be different for each station depending on actual TOB differences and local conditions?

    How is the adjustment value for each station derived? Or is it something like a blanket correction?

    i.e. a 12 hour diff = .6 deg C and so a 2 hour diff = .1 deg C?

  13. Anthony Watts
    Posted Aug 22, 2007 at 10:02 PM | Permalink | Reply

    I just completed the equipment inventory for the USHCN network, and there are 64 ASOS stations in USHCN. Here is the breakdown by equipment type:

    NIMBUS 196
    MMTS 674
    CRS w/ MAX-MIN 251
    ASOS HYGROTHERM 64
    THERMOGRAPH 5
    OTHER NS EQUIP 19
    UNKNOWN 12
    Total: 1221

    ASOS and HO-83 account for about 5% of USHCN. What we don’t know yet is what percentage of GISS they account for. The ASOS instrumentation has also been exported by the manufacturer.

    The NWS recognized the problems and, on this page, talks about replacement of the HO-83 with a new model, the DTS1.

    NASA also recognized the problem, and in specifying the ASOS platform for KSC had this to say in their contractor report:

    The hygrothermometer used in the ASOS is a slight modification of the fully automated HO-83 hygrothermometer which has been in operational use since 1985. The minor modifications made were intended to improve its performance.

    What we don’t know is what percentage of the 64 ASOS systems in USHCN were deployed with the HO-83, and what percentage remain operational, if any.

    Given that the problem went undetected for 4 years in Tucson, and the high temperature records there stand, it stands to reason that the HO-83 has contaminated part of the USHCN and GISS record in other instances for perhaps as long or longer.

  14. Philip_B
    Posted Aug 22, 2007 at 10:13 PM | Permalink | Reply

    How is the adjustment value for each station derived? Or is it something like a blanket correction?

    The link below explains how it is estimated on a monthly basis; I assume the TOB-adjusted monthly data is averaged to produce the yearly data, so an average TOB adjustment results. The problem is that while you would expect a TOB bias over a month, you would expect the monthly TOB biases to average or cancel out over a year. So if your monthly estimated TOBs don’t average out to zero, or close to it, what you have (in the yearly average TOB) is noise or estimating error.

    http://www.cdc.noaa.gov/USclimate/Data/
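    The cancellation argument above can be sketched numerically (the monthly values below are hypothetical, purely for illustration):

```python
def annual_mean_tob_adjustment(monthly_adjustments):
    """Average of twelve monthly TOB adjustments. If the monthly values
    cancel over the year, this is near zero, and any residual is
    estimating error rather than signal."""
    assert len(monthly_adjustments) == 12
    return sum(monthly_adjustments) / 12.0

# Hypothetical monthly adjustments (deg C), seasonally varying
example = [-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0, -0.1, -0.2]
```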

  15. JMS
    Posted Aug 22, 2007 at 10:28 PM | Permalink | Reply

    Wait, that was a hygrothermometer, right? Those are used to measure RH, right?

  16. JMS
    Posted Aug 22, 2007 at 10:31 PM | Permalink | Reply

    Plus, shouldn’t this be accounted for in the station history adjustment?

  17. Steve McIntyre
    Posted Aug 22, 2007 at 10:31 PM | Permalink | Reply

    Gall et al 1992 says:

    If [the HO-83 problem] is a design problem, then it is serious because these instruments have been installed at all first-order sites across the country and all of these could be providing erroneous readings.

    So it looks like this affects all ASOS measurements. In addition to the USHCN records, GISS uses a lot of ASOS records without any adjustment for TOBS, station history or, it seems, HO-83 bias.

  18. Curtis
    Posted Aug 22, 2007 at 10:33 PM | Permalink | Reply

    The point of this exercise is that we can’t be sure of any temperature that’s ever been taken. For all we know, the globe’s warmed 5C in the last century.

    I was being sarcastic. ;) We can’t be sure if there is any trend in temperatures, so yes, maybe the world is 5 degrees warmer. It could also be about the same, but with some undiscovered influence changing weather patterns…

    I mean when I was a kid, Halloween pictures frequently featured snowbanks taller than me… Now we’re lucky to have snow for Christmas.

  19. Steve McIntyre
    Posted Aug 22, 2007 at 10:34 PM | Permalink | Reply

    #16. The station history adjustment is not a statistical technique that is known off the island, and its properties look very suspicious to me, so there’s no guarantee that it will work. Ben Herman observed that the Tucson history looks as though the HO-83 bias is still carried in the Tucson data without any adjustment, so the station history adjustment was ineffective in this case.

  20. Posted Aug 22, 2007 at 10:35 PM | Permalink | Reply

    From the NASA KSC contractor report I cited in 13, there was this bit of information about the thermohygrometer tested in ASOS:

    During the reliability test, an excessive number of dew point data quality failures occurred. One failure resulted from the reported dew point temperature exceeding the ambient temperature by more than two degrees. Another failure caused a dew point temperature jump of nine degrees in one minute, which exceeded the data quality limit of six degrees in one minute. An investigation of the problem resulted in a revised optical loop adjustment procedure for the hygrothermometers which was subsequently validated during the ASOS test at Sterling, VA and also at several remote test sites.

    and it appears they have more “bugs” in the system:

    The Extended Reliability Test performed at the Sterling Research & Development Center in Sterling, VA, focused primarily on reliability. Results of the LEDWI’s precipitation identification reliability test indicated that the insecticidal paint used on the LEDWI sensor head as part of Systems Management Incorporated’s (SMI, a subsidiary of AAI Corp.) insect abatement program was not totally effective. Approximately three months after the application, spider activity was observed near the sensor head and resulted in false sensor reports of snow.

    And NOAA’s own testing of ASOS wasn’t too complimentary either:

    Table 2.9. Sterling Research & Development Center ASOS System Reliability*

    System            Operational Hours   Number of Failures   MTBF (Hours)
    Multiple Sensor   9408                11                   855
    Single Sensor     9408                13                   724

    *(NOAA/NWS 1993)

    I’d point out the MTBF for a mercury thermometer is in years, especially if you don’t drop it.

  21. Posted Aug 22, 2007 at 10:42 PM | Permalink | Reply

    RE15, the HO-83 measured dewpoint and air temperature – they used an “all in one” system for both measurements. Once dewpoint and temperature are known, RH is derived mathematically.
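    A common way to derive RH from temperature and dew point uses the Magnus approximation for saturation vapor pressure (a sketch of the standard math, not necessarily the formula the HO-83 firmware used):

```python
import math

def relative_humidity(temp_c, dewpoint_c):
    """Relative humidity (%) from air temperature and dew point,
    using the Magnus approximation for saturation vapor pressure."""
    def sat_vapor_pressure(t_c):
        # Magnus formula, hPa; constants valid over liquid water
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return 100.0 * sat_vapor_pressure(dewpoint_c) / sat_vapor_pressure(temp_c)
```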

  22. Posted Aug 22, 2007 at 10:47 PM | Permalink | Reply

    Philip_B August 22nd, 2007 at 10:13 pm ,

    If I understand the link correctly the adjustments are not based on the individual stations but a semi-blanket adjustment based on averages.

    And only .3 C difference. Wow. Just 1/2 of the estimated warming signal.

  23. steven mosher
    Posted Aug 22, 2007 at 10:50 PM | Permalink | Reply

    RE 17.

    Don’t forget that Parker site selection was heavily weighted to airports and presumably ASOS.

  24. steven mosher
    Posted Aug 22, 2007 at 10:53 PM | Permalink | Reply

    RE 12.

    I have a couple of issues with TOBS, but this instrument thing is low-hanging fruit.

  25. Bob Meyer
    Posted Aug 22, 2007 at 10:56 PM | Permalink | Reply

    Steve McIntyre said:

    My issue here is why neither NASA nor NOAA address HO-83? That’s a different issue than whether meaning can be assigned to overall results.

    Dr Xavier won’t let me play with “Cerebro” anymore so I’ll have to guess that the reason that NASA & NOAA don’t look at the HO-83 is that they don’t want to appear incompetent once again.

    It’s more than the bias issue. It is inexcusable that these instruments would be deployed without rigorous testing for the purpose for which they were intended – long-term climate trend analysis. If they weren’t intended for this, then they shouldn’t be used for that purpose. It’s lose/lose for them. Either they’re incompetent because they failed to deploy the proper instruments or they’re incompetent because they used instruments that could not reasonably be expected to accurately measure long-term trends.

    There simply was no upside to looking into the HO-83, so they didn’t.

  26. Posted Aug 22, 2007 at 10:58 PM | Permalink | Reply

    Anthony Watts August 22nd, 2007 at 10:35 pm ,

    An MTBF of around 800 hours means about one failure per month for a given instrument. This is a high quality network?

    What is the MTBF for low quality stations?

    Note that vacuum tubes run at their rated filament voltage were designed for a MTBF of 2,000 hours.

    Your typical fan used in electronics has an MTBF of at least 10,000 hours and more usually 100,000 hours.

    Consumer electronic equipment these days is designed for at least 100,000 hours MTBF. Any lower and the return rate is excessive. One of the reasons for “on demand” fans in desktop computers is to reduce the failure rate.
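    The arithmetic behind “about one failure per month” can be sketched (assuming a constant failure rate; the function name is mine):

```python
HOURS_PER_MONTH = 730  # average hours in a calendar month

def expected_failures_per_month(mtbf_hours):
    # With a constant failure rate, expected failures = exposure time / MTBF
    return HOURS_PER_MONTH / mtbf_hours
```

    For the Sterling figures, an MTBF of 855 hours gives roughly 0.85 failures per month and 724 hours gives roughly one per month.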

  27. Posted Aug 22, 2007 at 11:06 PM | Permalink | Reply

    BTW here is what the HO-83 Hygrothermometer looks like on ASOS stations:

    The picture is from the ASOS in Mount Shasta, CA and part of USHCN that I recently surveyed. Surprisingly, the same old microsite biases were present even in this modern installation. Buildings, vehicles, asphalt, tree shading, and vegetation heights near the sensor were all noted.

    See the photo essay here:
    http://gallery.surfacestations.org/main.php?g2_itemId=664

  28. Posted Aug 22, 2007 at 11:12 PM | Permalink | Reply

    “Adjusting the optics” means that the dew point was measured by a cooled mirror. The mirror would be cooled by a thermoelectric cooler and the point at which the mirror fogged would be the dew point temperature.

    Note that thermoelectric coolers produce a LOT of heat. Inside the enclosure. Which means that temperatures taken after a dew point measurement would need to be adjusted – probably done by instrument software. Or else there would need to be a cooling off period before temperature readings are taken. It also means that there would be a delay between the measurement of air temperature and the measurement of dew point temperature. The fact that it looks like the dew point is measured once a minute seems problematic.

    It would be good to get actual technical specs for these instruments. Including internal layout.

    And how about those improvements? What was improved?

  29. Jeff C.
    Posted Aug 22, 2007 at 11:17 PM | Permalink | Reply

    Re # 13, Anthony

    “What we don’t know is what percentage of the 64 ASOS systems in USHCN were deployed with the HO-83, and what percentage remain operational, if any.”

    Much of this information is available from the station history file available from NCDC. http://www1.ncdc.noaa.gov/pub/data/ushcn/station.history.Z

    The file sets a flag under instrumentation that can include “Hygrothermometer (type unknown)”, “Hygrothermometer – H06x series” or “Hygrothermometer – H08x series”. There are 46 USHCN stations with the H08x flag set. The unknowns may contain additional sites, but that is not clear. This file is only current through 1996, so when the H08x series was removed cannot be determined.

    I’ll put the list here for reference. Steve – please snip if this is taking up too much space.

    USHCN Station Name, state, begin date
    MUSCLE SHOALS FAA AP, AL, 19851001
    TUSCALOOSA ACFD, AL, 19870501
    FRESNO WSO AP, CA, 19870218
    REDDING WSO, CA, 19870501
    APALACHICOLA WSO AP, FL, 19910701
    FORT MYERS FAA AP, FL, 19850425
    KEY WEST WSO AP, FL, 19850612
    PENSACOLA FAA AP, FL, 19880315
    TALLAHASSEE WSO AP, FL, 19850913
    SAVANNAH WSO AP, GA, 19850727
    BATON ROUGE WSO AP, LA, 19851002
    LAFAYETTE FCWOS, LA, 19851018
    NEW ORLEANS AUDUBON, LA, 19870601
    PORTLAND WSFO AP, ME, 19851210
    BALTIMORE WSO CITY, MD, 19910101
    MOUNT CLEMENS ANG BASE, MI, 19810508
    MINNEAPOLIS WSFO AP, MN, 19851211
    CUT BANK FAA AP, MT, 19850801
    GLASGOW WSO AP, MT, 19930921
    GREAT FALLS WSCMO AP, MT, 19850709
    HELENA WSO, MT, 19841127
    KALISPELL WSO AP, MT, 19850926
    MILES CITY FCWOS, MT, 19910926
    ELKO FAA AP, NV, 19850801
    RENO WSFO AP, NV, 19841122
    WINNEMUCCA WSO AP, NV, 19850508
    ROSWELL FAA AP, NM, 19851106
    ALBANY WSFOAP, NY, 19850206
    BINGHAMTON WSO AP, NY, 19851205
    BUFFALO WSCMO AP, NY, 19860707
    ROCHESTER AIRPORT, NY, 19850606
    SYRACUSE WSO AP, NY, 19850603
    ASTORIA WSO AP, OR, 19851004
    BAKER FAA AP, OR, 19851009
    NORTH BEND FAA AP, OR, 19861099
    ALLENTOWN WSO AP, PA, 19850612
    ERIE WSO AP, PA, 19841102
    WILLIAMSPORT WSO AP, PA, 19850529
    PROVIDENCE WSO AP, RI, 19850599
    GREENVILLE-SPARTANBURG AP, SC, 19850629
    EL PASO WSO AP, TX, 19841113
    SAN ANTONIO WSFO, TX, 19860501
    BURLINGTON AP, VT, 19851099
    SPOKANE WSO AP, WA, 19930921
    CHEYENNE WSFO, WY, 19870731
    LARAMIE AP, WY, 19890101

  30. Posted Aug 22, 2007 at 11:23 PM | Permalink | Reply

    If dew point was taken once a minute that means a lot of cycling of the thermoelectric cooler.

    The cycling on time would depend on dew point. Which means the heat generated in the enclosure would be dew point dependent.

  31. Philip_B
    Posted Aug 22, 2007 at 11:30 PM | Permalink | Reply

    The originator of the TOB adjustment method says there is significant error in the method.

    Karl et al. (1986) mentioned that the uncertainties in
    TOB adjustment are from one-fourth to one-third the
    magnitude of the TOB bias, which in turn depends
    on the season and time of observation.

    http://climatesci.colorado.edu/publications/pdf/R-318.pdf

    I’d say averaging the monthly TOB adjustments over a year, removes the TOB and leaves the uncertainties (error) behind.

  32. Armand MacMurray
    Posted Aug 22, 2007 at 11:49 PM | Permalink | Reply

    For those interested in a nice description of all the different climate-related instrument networks in the western USA (at least), I found this recent presentation by Kelly Redmond useful: http://climate.washington.edu/ewachnet/jun2007workshop/Redmond-eWaCHnetPresentation.pdf

    Another presentation at this same meeting describes the apparently quite successful setup of a real high-quality network throughout Oklahoma – the “Mesonet”. It includes some interesting graphs comparing mean temps from “colocated” HCN, Mesonet, and CRN sites. For example (slide 24), a year’s worth of mean temps from the COOP (HCN) site near the Stillwater CRN station had only 65.8% of the readings within +/- 1C of the CRN reading. The local Mesonet station, however, had 99.4% of its readings within +/- 1C of the CRN reading. The presentation, by Chris Fiebrich, is at: http://climate.washington.edu/ewachnet/jun2007workshop/Fiebrich-eWaCHnetPresentation.pdf
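    The within-1C statistic quoted from the slides is simple to compute; a minimal sketch (function and argument names are my own):

```python
def fraction_within(readings, reference, tol_c=1.0):
    """Fraction of paired readings falling within +/- tol_c
    of a reference series (e.g. colocated CRN readings)."""
    pairs = list(zip(readings, reference))
    return sum(1 for r, ref in pairs if abs(r - ref) <= tol_c) / len(pairs)
```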

  33. Posted Aug 22, 2007 at 11:52 PM | Permalink | Reply

    Re: TOB adjustment link above:

    This should effectively eliminate most of the biases (over 2 Deg.F) in some climate divisions that have become part of the divisional averages. These biases affect both trends and actual estimates of divisional averages.

    Does “most” = more than 1/2? More than 2/3? If 1/3 of the 2F remained unadjusted, that would leave a .6F difference, i.e. about 1/2 the warming signal.

    Note I made a mistake above (August 22nd, 2007 at 10:47 pm): the .3 degrees I referred to was F, not C. My rocket just crashed.

  34. Armand MacMurray
    Posted Aug 22, 2007 at 11:57 PM | Permalink | Reply

    Another item from the meeting seems to indicate that the Weather Service wants to modernize the HCN stations (1000 sites) by 2013 (with optimal funding, longer with less). This seems to be a very recent effort in the planning stages, and looks like a CRN-lite effort. Among other things, they want to leverage (not sure if that means “adopt”, or instead “adopt to the extent the budget allows”) CRN equipment standards, siting standards, metadata standards, and calibration standards.
    Here’s the link: http://climate.washington.edu/ewachnet/jun2007workshop/Bair-eWaCHnetPresentation.pdf

  35. Bob Meyer
    Posted Aug 23, 2007 at 12:16 AM | Permalink | Reply

    I wanted to find out how the HO-83 works so I could figure out why the reliability was so low (if insects weren’t the sole cause of problems). I ran across the following NWS website with some fascinating information.

    NWS Central Region

    It contains part of the results of a comparison between an ASOS (Automated Surface Observing System) and an HO-83 over a period of six months. The article opens:

    Temperature measurements from ASOS tended to be lower (colder) than those from the standard HO-83. Figure 4 shows that the daily average ASOS temperatures were equal or below the output from the standard equipment for all but three days during the six-month period studied. The difference during the three days in which ASOS was warmer than the HO-83 was only one degree.

    I don’t know if the image will reproduce properly – it is very poor quality – but it shows a consistent difference in average temperature between the ASOS and the HO-83. The “high” temperatures are even worse, with differences in temperature as great as 8 F!

    Their conclusions were interesting in that they were trying to explain the difference in temperature readings using microsite differences even though the two instruments were separated by only 75 feet. Apparently, microsite problems can be troublesome after all.

    As would be expected, they assumed that the new instrument package (the ASOS) had a problem, not the HO-83. This is not to fault them; I would have begun by assuming the new instrument was the problem too. Here is part of their conclusions:

    The next most striking difference was the consistently lower temperatures reported by ASOS. This was although the hygrothermometer used by ASOS is essentially the same as the standard one currently in use. The differences were most significant in the daily maximum temperatures. The correlation between these temperature differences and the daily average wind speeds suggests that the slight difference in the environment between the sensors may be a factor. The most apparent difference in environment between the two equipment sites is the proximity of a nearby drainage ditch to the ASOS combined sensor group. This drainage ditch has been suspected to affect local temperatures, measured with the standard equipment, long before ASOS was installed. Wind speeds are likely a significant factor in the differences between ASOS and the standard equipment temperatures, but other weather factors may also be at play.

    One possible solution to the consistently lower ASOS temperatures would be to relocate the ASOS combined sensor group to another location in the airfield away from the drainage ditch. More data are needed to confirm the temperature differences, especially to analyze the equipment’s performance during the summer months. Closer analysis of weather conditions during the times of the largest temperature differences may also shed further insight on the source of the temperature discrepancies.

  36. Bob Meyer
    Posted Aug 23, 2007 at 12:26 AM | Permalink | Reply

    Re: #35

    I almost forgot the interesting part. The ASOS is supposed to contain an HO-83, which means that the older instrument almost certainly drifted upwards in a non-uniform way. It appears to have become more sensitive to sunlight. The other explanation is that the product was improved and the newer instrument, with its lower readings, is the more accurate one.

  37. Posted Aug 23, 2007 at 12:34 AM | Permalink | Reply

    Bob Meyer August 23rd, 2007 at 12:16 am ,

    The maximum delta T was at low wind speeds. This might indicate an internal heating problem for the HOs.

    It would be interesting to see how the deltas correlate with the dew point. Something the reviewers didn’t check.

  38. Posted Aug 23, 2007 at 12:42 AM | Permalink | Reply

    BTW the difference between the dew point temp and air temp would be highest in winter and on the coldest days when the RH was lowest.

    Of course this is just a hypothesis. It would need to be tested.

  39. Bob Meyer
    Posted Aug 23, 2007 at 12:57 AM | Permalink | Reply

    M. Simon said:

    The maximum delta T was at low wind speeds. This might indicate an internal heating problem for the HOs.

    If we could find out if the power consumption changed from the older instruments to the newer ones it would give your hypothesis a pretty good boost.

    Do you know what kind of sensor is used in the HO-83? Thermistor? RTD? Semiconductor? Each has its own peculiarities.

  40. aurbo
    Posted Aug 23, 2007 at 1:01 AM | Permalink | Reply

    Re #18;

    I mean when I was a kid, Halloween pictures frequently featured snowbanks taller than me… Now we’re lucky to have snow for Christmas.

    The simple answer to the snow-bank observation is: how tall were you when you were a kid? When I was a kid I remember snow banks in NYC were almost over my head. Of course I was probably about 3 feet tall then. Think back to early childhood. Remember when the kitchen table was over your head? End of problem.

    BTW, white Christmases were few and far between in NYC. Ironically, after the song “White Christmas” was written (1942), I don’t think NYC had a white Christmas for over 20 years. (The 1947 heavy snow came a day late, on Dec 26th.)

    Re the NYC temperature (and sensor problem) here is an interesting site that discusses this problem quite extensively.

    Finally, as I remember it, the ASOS specifications for temperature accuracy were ±1°C! That makes a mercury-in-glass thermometer read like a precision instrument. The development and implementation of the HO-83 was a scandal that was never fully exposed, right from the contract award through delivery. Numerous alterations to the instrument had to be made until they got it accepted, which adds credence to the well-known aphorism: “close enough for government work”.

  41. Posted Aug 23, 2007 at 1:03 AM | Permalink | Reply

    M. Simon August 23rd, 2007 at 12:42 am ,

    I remembered the chart wrong:

    http://www.sp.uconn.edu/~mdarre/NE-127/NewFiles/psychrometric_inset.html

    The lowest delta Ts (wet bulb – dry bulb) for a given RH are at lower temps.

    I wonder why they didn’t put the two instruments in an atmospheric simulation chamber and measure the differences to start with, to get a baseline under controlled conditions.

  42. Posted Aug 23, 2007 at 1:13 AM | Permalink | Reply

    Bob Meyer August 23rd, 2007 at 12:57 am,

    In another thread Steve Sadlov said they used thermistors. I think someone else agreed with him.

    What I’m basing my guesses on is a general understanding of instrumentation and sensors. I’d like to go over the construction of the instruments and see if I can come to any conclusions.

    It is possible the upgrade of the HOs might have been as simple as adding a thermal shield or a baffle where it would do the most good. A total redesign would probably have been given a different series number, where 8x, 8y, and 8z might indicate different instruments in the package, and 9x would be a total redesign of 8x.

  43. Posted Aug 23, 2007 at 1:22 AM | Permalink | Reply

    aurbo August 23rd, 2007 at 1:01 am ,

    I lived in Chicago in ’79. The snow was definitely over my head by the end of winter. I’m 186.69 cm. (6′ 1 1/2″). The snow was piled at least 8 ft high. Maybe 10 ft. That was one cold SOB too. I was just a kid then (35). LOL

    How did the company get the contract? Inside connections? Ear marks? Low bid?

  44. aurbo
    Posted Aug 23, 2007 at 2:11 AM | Permalink | Reply

    How did the company get the contract? Inside connections? Ear marks? Low bid?

    Principally through inside connections. Earmarks were not an issue. There may have been a low-ball bid as well. The problem for the Government was that it had severe budgetary and time constraints, and it was difficult to change horses in midstream. The original design of the hygrothermometer was a disaster, including goofs that even a high-school weather student wouldn’t have made.

    For example, originally the sensor looked like an immature mushroom. There was a vertical cylinder containing the instruments and aspirator, and a mushroom-shaped top to keep precipitation away. The original attempt by the contractor (who had never made weather instruments before) had the aspirator at the bottom of the cylinder, upwind from the sensors! They got that fixed early on, but then the problem was that the aspirated air exited the cylinder at the top and promptly circulated down just outside the cylinder, only to be drawn back into the instrument at the bottom. In short, the instrument did a lot of recirculating of the air it exhausted. The temperature sensor was a thermistor, and the dew point sensor a Peltier-effect thermoelectrically cooled mirror with an electronic feedback system which cooled the mirror until dew formed on its surface. A light was focused on the mirror at an angle where its reflection could be picked up by a photo-sensor. When dew formed, the reflection was sharply reduced, and the temperature of the mirror was read at that moment. All very fancy and imaginative, but the design was poorly constructed and had the problem of introducing heat into the very cylinder that was also supposed to be measuring air temperature. There were other problems too numerous to mention here.
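    The chilled-mirror feedback loop described here can be caricatured in a few lines of Python. The reflectance model, detection threshold and cooling step are invented purely to illustrate the principle, not taken from the HO-83 design:

```python
# Illustrative sketch of a chilled-mirror dew point sensor's feedback loop:
# cool the mirror until dew forms (seen by the photo-sensor as a drop in
# reflected light), then report the mirror temperature as the dew point.
def chilled_mirror_dewpoint(air_temp_c, true_dewpoint_c, step_c=0.1):
    mirror_t = air_temp_c
    while mirror_t > -40.0:                 # safety floor so the loop terminates
        # Toy optics: a clear mirror reflects strongly; dew scatters the light.
        reflectance = 1.0 if mirror_t > true_dewpoint_c else 0.2
        if reflectance < 0.5:               # photo-sensor sees the dimmed reflection
            return mirror_t                 # read the mirror temp at that moment
        mirror_t -= step_c                  # Peltier cooler steps the mirror down
    return None                             # never saw dew (out of range)

print(chilled_mirror_dewpoint(25.0, 12.0))  # ~12.0, within one cooling step
```

    The point of the real hardware's trouble, as described above, is that everything in this loop dissipates heat inside the same cylinder that houses the temperature sensor.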

    The NWS was up against deadlines and had to scramble to try to bring the instrument up to specifications which were pretty loose to start with.

    BTW, the “fix” for the recirculation problem was brilliantly [sarcasm] solved by welding a skirt around the cylinder above the base to deflect the descending air away from the inlet. This can be clearly seen in the photo posted above in #27.

    The loose specs were accepted because the ASOS system was designed to replace human observers whose jobs were already stricken from the budget. It was never conceived as an instrument for precise climatology. There were many other problems with the whole ASOS system. You can get the gist of this by reading the GAO report to Congress outlining them in some detail.

    The long and short of it is that, due to the lack of interest in developing a system that could be used as a climatological benchmark, ASOS was designed to serve the needs of the aviation community, in which temperatures (outside the freezing range and below the high runway temperatures that affect gross weight as a function of runway length) and precipitation were of little operational interest. The poor design was largely a result of the incompetence of the contractor who won the bid and the lack of sufficient oversight on the part of the Government employees whose job it was to evaluate the proposal and monitor its development and production.

  45. Bob Meyer
    Posted Aug 23, 2007 at 2:16 AM | Permalink | Reply

    M. Simon said

    I wonder why they didn’t put the two instruments in an atmosphere simulation chamber and measure the differences – to start? To get a baseline under controlled conditions.

    [sarcasm on] Probably because they were scientists and not engineers. [sarcasm off]

    Funny, but NOAA is not very far from NIST in Boulder, CO and the people at NIST can measure almost anything. It is one of the few government agencies that can do what they are supposed to do. If they were given two instruments they would know which is reading correctly and why in a matter of a few weeks at the outside.

    Thermistors – I have just had a major nightmare with thermistor drift. Certain epoxy bead thermistors seem to be sensitive to pressure and can drift when bonded into a hole in a block of aluminum due to the slow strain relief of the bonding epoxy. We had drifts of half a millidegree per hour continuing for weeks even though the thermistors were kept at a stable temperature. We moved to glass beads which don’t have that problem.

    The hardest part in these investigations is finding temperature indicators that are more stable than the thermistors that you are testing. The first platinum RTDs we used were worse than the thermistors.

  46. Bob Meyer
    Posted Aug 23, 2007 at 2:35 AM | Permalink | Reply

    Re 44

    aurbo – That’s quite a story. I was looking for something subtle like thermistor drift but what you describe is classic FUBAR.

    Just think, if Schmidt and Hansen had simply accepted that Steve’s discovery was significant and not tried to belittle it we might never have discovered just how screwed up the temperature data is. We should be grateful for their irrational intransigence.

  47. Louis Hissink
    Posted Aug 23, 2007 at 3:22 AM | Permalink | Reply

    I have an Oregon Scientific weather station here in the caravan in Halls Creek and it has an outside sensor unit that measures humidity and temperature. The inside base station adds a barometer to the sensor group.

    It is dry season here in the Kimberley, (means no rain until the monsoons start in December) but the Oregon reckons right now that it will be cloudy and rainy. Huh?

    Reading about the various “errors” in the professional instrumentation here makes me wonder whether meteorologists and climate scientists actually understand what drives the weather, because the Oregon weather station leaves me rather disillusioned. It isn’t cheap, and while I was tempted to get the PC-friendly model (costs more), I am now unwilling to get it.

    BTW, if anyone is interested, I have a separate temperature logger that records ambient temperature every minute, and the graphs of the daily fluctuations are really interesting. Some here have expressed an interest in seeing what temperature does during the day, though Halls Creek is urbanised rural.

    Halls Creek is also a major BOM station, since there is an important airfield here.

    Anyone interested in getting a copy of the raw data? (I have to correct the time data, since the longitude I am at is not on the WST time in Oz that the sensor is programmed for.)

    Send me an email at hissinkl1947 XX bigpond.com if you would like the data – it’s all ASCII stuff and easy to email.

  48. MarkW
    Posted Aug 23, 2007 at 5:04 AM | Permalink | Reply

    Larry,

    The height of the station makes a big difference in what temperature is read at night. In still air, cool air sinks, so close to the ground it becomes decidedly cooler than higher up. Readings taken 4 ft off the ground can sometimes differ from those at 6 ft by up to 1°C.

  49. MarkW
    Posted Aug 23, 2007 at 5:22 AM | Permalink | Reply

    Simon M.

    Another factor to consider: the greater the difference between the current air temperature and the dew point, the longer and harder the thermoelectric cooler has to work in order to cool the mirror down to the dew point.

    Perhaps that’s why they saw a big warm bias during the heat of the day but very little at night (night-time temperatures often drop down near the dew point).

  50. MarkW
    Posted Aug 23, 2007 at 5:32 AM | Permalink | Reply

    Louis H.

    The rainy/dry indicator on those things is just run off the barometer. If the pressure is falling, it calls for rain; if it is rising, it calls for sunny. Approaching low-pressure systems often precede rain.

    I remember an old analog barometer that my parents had. The needle would move up and down as the pressure changed. At the low end of the scale they had a picture of some clouds, at the high end, a sun.
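    The icon logic described above amounts to a threshold on the pressure tendency. A sketch, with guessed thresholds (consumer units vary, and the real firmware is unknown):

```python
# Toy forecast icon keyed off pressure tendency alone, as described above.
# The +/-1 hPa per 3 h thresholds are invented for illustration.
def forecast_icon(pressure_change_hpa_per_3h):
    if pressure_change_hpa_per_3h < -1.0:
        return "rain"        # falling pressure: approaching low
    if pressure_change_hpa_per_3h > 1.0:
        return "sunny"       # rising pressure: building high
    return "no change"

print(forecast_icon(-2.5))   # prints "rain"
```

    Which is why such a unit will happily predict rain in the middle of the Kimberley dry season whenever a pressure trough passes through.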

  51. bernie
    Posted Aug 23, 2007 at 7:57 AM | Permalink | Reply

    OK, I get that some of the current and recently deployed instruments have non-trivial biases and that these do not appear to have been explicitly factored into any existing adjustments, but (a) what is the overall impact on the temperature record in the US and ROW? and (b) what is the official position on these problems?
    This is the kind of finding that should give any thinking person reason to pause before accepting any simplistic pronouncements about current climate trends.

  52. John F. Pittman
    Posted Aug 23, 2007 at 8:28 AM | Permalink | Reply

    Is the quote from Peterson a bit garbled? “The adjustment to the ASOS standard removes the HO-83 bias” and “This value is added to the data from the HO-83”. Shouldn’t this have read that the 0.6 was subtracted from the HO-83 and that the adjustment to the HO-83 standard removes its bias? Otherwise, I would assume that they found the HO-83 in error and corrected the good data set to match the erroneous one.

  53. Bob Meyer
    Posted Aug 23, 2007 at 9:12 AM | Permalink | Reply

    Re: 35

    I found the first part of the report that I linked in 35. It is:

    Part A

    This site has quite a few reports that might prove useful.

  54. Kenneth Fritsch
    Posted Aug 23, 2007 at 9:16 AM | Permalink | Reply

    I believe it was Dr. Lindzen who noted that errors in surface temperature measurements could well be nearly equal in both directions, but that a tendency to look for them, and legitimately discover them, in only one direction could bias the adjusted temperatures.

  55. Larry
    Posted Aug 23, 2007 at 9:21 AM | Permalink | Reply

    #18

    I mean when I was a kid, Halloween pictures frequently featured snowbanks taller than me… Now we’re lucky to have snow for Christmas.

    Moving from Minnesota to California will do that.

  56. Curtis
    Posted Aug 23, 2007 at 9:35 AM | Permalink | Reply

    Moving from Minnesota to California will do that.

    ;)

    Yes, it would. However, I haven’t moved that far… I live in Calgary, AB, and grew up about 600 miles NE of here…

  57. Howard
    Posted Aug 23, 2007 at 10:00 AM | Permalink | Reply

    If the HO-83 makes up 5% of the USHCN and the bias is 0.6 deg C, would the resulting influence on the US average be 0.03 deg C? That seems fairly insignificant unless the ASOS stations are given a higher weight in the computed average.

    It seems that if HO-83 is used internationally where the total # of stations is quite low, then the bias could possibly have a significant impact on the Global Average Temperature.
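    The back-of-the-envelope arithmetic above can be written out explicitly. The 5% share and 0.6 deg C bias are the figures from the comment; equal weighting of all stations is an assumption:

```python
# If a fraction f of equally weighted stations carries a bias of b degrees,
# the network-average bias is roughly f * b.
def network_bias(fraction_biased, station_bias_c):
    return fraction_biased * station_bias_c

us_impact = network_bias(0.05, 0.6)   # HO-83 share of USHCN, per-station bias
print(round(us_impact, 3))            # prints 0.03, matching the estimate above
```

    For a sparse international network, the same formula says the impact scales directly with the fraction of stations affected, which is the point about the Global Average Temperature.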

  58. Larry
    Posted Aug 23, 2007 at 10:18 AM | Permalink | Reply

    57, if it were just one issue, I would agree, but the point of auditing the surface stations is to show that it’s a preponderance of issues. This one is simply a little more identifiable than most.

    The larger context that this fits into is the debate over the reliability of surface data v.s. satellite data. The satellites certainly have some potential sources of error, but the surface measuring network has so many degrees of freedom that it’s not possible to say with any certainty that the net effect is reliable data.

  59. Rob
    Posted Aug 23, 2007 at 10:21 AM | Permalink | Reply

    So Curtis, you grew up near Ft Mc, and are trying to compare Ft Mc winter weather to Calgary? I grew up in the Calgary area and still live here and the one thing I know is that Calgary weather is completely unpredictable. About 5 years ago, I had snow fall at my place every month of the year, and I had never experienced that in 30 years.

    I wonder what equipment they use in Calgary, especially with our Chinooks that can swing the temp up to 30 degrees Celsius in around 1 hour. I wonder if Environment Canada uses any corrections on Calgary data, as that would have to be one complicated formula to account for all of the factors.

    Steve, you’re Canadian, do we use the same equipment as the US to measure temps?

  60. Murray Duffin
    Posted Aug 23, 2007 at 10:27 AM | Permalink | Reply

    From reading the Tucson HO-83 analysis I inferred that the failure was the muffin fan, and that it took about 2 years to crap out.
    So far I have done 7 stations for surfacestations.org, in Fl and Ga. This morning I was checking the GISS temp plots for these stations to see if there was any correlation to historic events with no obvious results. Since I was on to that anyway, I decided to check the HO-83 stations listed above.
    Fort Myers – Possible upward jump ’82-’84. Slight cooling trend since 1990.
    Key West – Possible upward shift ’84-’88, but not very certain. Slight warming since 1990.
    Pensacola – Major jump ’83-’85. No trend since about ’86/’87.
    Tallahassee – Possible shift ’81-’82 or ’84-’85, but not at all clear.
    Savannah – Possible jump ’84-85. Possible reverse ’95-’96.
    As a judgment call, I would say that the HO-83 did introduce some upward bias.
    As the stations I have visited showed no average warming trend, I decided to check the plots for all stations in Fl and southern Ga. Interestingly, based on eyeball analysis I can see no overall global warming trend in the raw data, and very little in the adjusted data. (I only checked all 3 data sets for my 7 stations.) Arcadia Fl is one really questionable station. The raw and “combined sources” plots look very similar and show a slight downtrend since the 1930s. The “homogeneity adjusted” plot shows a clear uptrend 1910-2005. Seems very questionable. This adjustment bias wasn’t clear to the eyeball in the other stations I checked. There were no cases of downward adjustment bias.
    Looking only at the “combined sources” plots for all Fl stations, there are several strangely different trends, e.g. Belle Glade FL up 1.6 degrees C 1922-2005; Ocala Fl no trend 1938 to 2005; Ft. Lauderdale a steady uptrend 1910 to 1990, then flat to 2005; Miami up from 1915 to 1988, then flat to 2005, with what looks like a major shift up from ’83-’88; Miami WSO city up 1 degree C 1966-2005; Jacksonville (3 sites) downtrend; Mayport down 1 degree C 1986-2005; Atlanta Ga down from 1985-2005!!
    Most relatively rural sites show the high 1930s, a decline from ca 1940 to ca 1975, then a rise to ca 1990 or 1998, then flat, with little or no overall trend since the late 1920s. The strong rising trends are clearly sites that experienced rapid urbanisation, and they seem to go flat since ca 1988/1990.

  61. MarkW
    Posted Aug 23, 2007 at 10:29 AM | Permalink | Reply

    Howard,

    Now add that 0.03C to the 0.15C that Steve discovered a couple of weeks ago.

    Then remember that we have only begun to look.

    (Nor have we even begun to figure out what the micro-site issues and urbanization issues that Anthony Watts is uncovering are doing to the record.)

  62. Larry
    Posted Aug 23, 2007 at 10:34 AM | Permalink | Reply

    I have a question that Steve or someone else here may be able to address. It’s struck me as strange that they take these temperature numbers with a resolution of 0.1C. Do we know what the resolution of the original mercury thermometers was, and, as they replaced mercury thermometers with various electronic sensors, what the resolution and the published accuracy of those various instruments were? I know from experience that early electronic instruments would publish accuracy to within ±0.1%, but in practice you’d be lucky to keep them calibrated to within 1%. They drifted a lot. Newer electronics are much better, but as someone mentioned, the sensing elements themselves have manufacturing issues.

    I have a hard time believing that any of this data that was collected before maybe 1990 is actually good to within 0.1C.

  63. steven mosher
    Posted Aug 23, 2007 at 10:35 AM | Permalink | Reply

    RE 60.

    You need to check an HO-83 site against a nearby non-HO-83 site.

    Look at the differences in TREND for Tmax and Tmin.

  64. Larry
    Posted Aug 23, 2007 at 10:39 AM | Permalink | Reply

    60, the fans on these are known to be problematic, but I was looking at surfacestations, and somewhere in California IIRC, they had simply cut the wire to the fan.

    Welcome to the world of real instrumentation, where if someone has a problem, he solves it in the most expedient way, and doesn’t tell anyone. If it affects the accuracy of the measurement, oh well…

  65. Murray Duffin
    Posted Aug 23, 2007 at 10:50 AM | Permalink | Reply

    Oops! I overlooked Apalachicola – Clear jump ’84-’85 ending a long downtrend from the late ’20s, then a slight uptrend to 2005. Looks like a clear HO-83 artifact. Murray

  66. SteveSadlov
    Posted Aug 23, 2007 at 10:53 AM | Permalink | Reply

    Ft. Mc versus Calgary… see “banana belts” (in this case, as already noted, Chinook-driven). Also, many people make the mistake of confusing drought with warmth; they are often completely opposite regimes. Case in point: the past winter in California. It was on par with truly horrible ones such as 1990-91 and 1975-76, and it was also the driest since US governance began (150-plus years).

  67. steven mosher
    Posted Aug 23, 2007 at 11:08 AM | Permalink | Reply

    RE 65.

    Murray: download the GISS raw data for Fort Myers, Arcadia and Everglades
    (as a text file).

    The Everglades record starts in 1926, so limit the other files to that.
    I stopped at 1995 (two of the three files had missing annuals in the same year after that).
    You will have to infill a couple of dates in 1964 and 1993.

    Then do a difference on Fort Myers – Arcadia and Fort Myers – Everglades.

    SOMETHING weird goes on in 1983: a rapid cooling at Fort Myers and then
    a rapid warming… weirdness before this as well.

  68. Curtis
    Posted Aug 23, 2007 at 11:13 AM | Permalink | Reply

    http://weather.gladstonefamily.net/cgi-bin/wxphoto.pl?station=1

    This is kinda neat. It’s a photo archive of weather stations listed by geotag. What’s so neat about it is that it includes weather stations from all over the world (Canada, Spain, France, UK, Russia, as well as the USA). Also, there is a link to nearby stations from each station, so you can skip around the world finding stations that are near to each other…

  69. Murray Duffin
    Posted Aug 23, 2007 at 11:16 AM | Permalink | Reply

    Re 60 and 65:
    Right, where do I get min/max? Why did you choose those stations? Murray

  70. Posted Aug 23, 2007 at 11:28 AM | Permalink | Reply

    MarkW August 23rd, 2007 at 5:22 am ,

    That makes sense.

  71. Posted Aug 23, 2007 at 11:31 AM | Permalink | Reply

    John F. Pittman August 23rd, 2007 at 8:28 am ,

    Is the quote from Peterson a bit garbled? “The adjustment to the ASOS standard removes the HO-83 bias” and “This value is added to the data from the HO-83”. Shouldn’t this have read that the 0.6 was subtracted from the HO-83 and that the adjustment to the HO-83 standard removes its bias? Otherwise, I would assume that they found the HO-83 in error and corrected the good data set to match the erroneous one.

    I read that and assumed they got the sign wrong. Maybe a bad assumption.

  72. steven mosher
    Posted Aug 23, 2007 at 11:35 AM | Permalink | Reply

    RE 69.

    I picked those stations because they are the two closest.

    Start with GISS.

    like so:

    http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=26.6&lon=-81.87&datatype=gistemp&data_set=0

    At the bottom of the page you will see a link to download the file as text.

    Like so.

    http://data.giss.nasa.gov/work/gistemp/STATIONS//tmp.425747960010.0.1/station.txt

    Select all. Copy. Paste into Excel. Fuss about with text-to-columns.

    If you want MIN MAX you have to go to USHCN.

    http://cdiac.ornl.gov/cgi-bin/broker?_PROGRAM=prog.climsite.sas&_SERVICE=default&id=083186

    Get a comma delimited file.
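    For anyone who would rather not fuss with Excel, the downloaded station text can also be parsed with a short script. This sketch assumes whitespace-separated columns under a header row beginning with YEAR; check the layout of the actual GISS file before relying on it:

```python
# Parse a simple whitespace-delimited station file: one header row naming the
# columns (YEAR JAN FEB ...), then one row of numbers per year.
def parse_station_text(lines):
    header, rows = None, []
    for line in lines:
        parts = line.split()
        if not parts:
            continue                                  # skip blank lines
        if header is None and parts[0].upper() == "YEAR":
            header = parts                            # column names
        elif header is not None and parts[0].isdigit():
            rows.append([float(p) for p in parts[:len(header)]])
    return header, rows

# Tiny invented sample, just to show the shape of the output:
sample = ["YEAR JAN FEB", "1983 18.2 19.1", "1984 17.9 18.5"]
header, rows = parse_station_text(sample)
print(header, rows[0])   # ['YEAR', 'JAN', 'FEB'] [1983.0, 18.2, 19.1]
```

    Real GISS files carry missing-value codes (e.g. 999.9) and extra summary columns, so those would need handling before doing the differences.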

  73. Jeff C.
    Posted Aug 23, 2007 at 11:40 AM | Permalink | Reply

    Re 65 and 67,

    I have put the USHCN station history file in an Excel spreadsheet, decoded and formatted it, and added headers and comments. There is a wealth of info here on location, station moves, and the instrumentation type and dates for each site (including specifically addressing the HO-8x series) up through about 1996. This is the source from which I compiled the list of stations in #29 above. If anyone is interested and wants to dig into this deeper, email me at jeffc1728 – at – yahoo – dot – com and I’ll forward it. It’s a big file (around 30 MB) but zips down to around 6 MB.

  74. Anthony Watts
    Posted Aug 23, 2007 at 11:57 AM | Permalink | Reply

    And here from NCDC is an Excel file of all USA ASOS stations, when they were commissioned, and other details.

    It should be a simple matter to cross-check with GISS to see which ASOS stations are used in GISS.

    ftp://ftp.ncdc.noaa.gov/pub/data/inventories/ASOSLST.XLS

  75. steven mosher
    Posted Aug 23, 2007 at 12:39 PM | Permalink | Reply

    RE Fort Myers…

    Looking at the daily data, there are apparent discontinuities in the 1970–1990 regime.

    I loaded up daily Tmin and Tmax from USHCN, then did a difference
    (FORT MYERS – EVERGLADES). Since all we care about is the “trend” or anomaly,
    this paired difference can signal issues with a site.

    Over long periods of time Fort Myers and the Everglades should exhibit similar trends.

    Anyway, weirdness (Gavin’s term) is present in this record. More later, perhaps.
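    The paired-difference check can be sketched like this (the station values below are invented for illustration, not real USHCN data):

```python
# Subtract a nearby reference station's series from the target station's
# series. The shared climate signal largely cancels, so a step or drift in
# the difference points at a site or instrument change at one of the two.
def paired_difference(target, reference):
    return [t - r for t, r in zip(target, reference)]

fort_myers = [23.1, 23.0, 23.4, 24.6, 24.7]   # hypothetical annual means, deg C
everglades = [23.0, 22.9, 23.2, 23.1, 23.2]   # hypothetical nearby reference
diffs = paired_difference(fort_myers, everglades)
# diffs stays near 0.1-0.2 for the first three years, then jumps to ~1.5:
# that kind of step is what flags a possible discontinuity.
```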

  76. Murray Duffin
    Posted Aug 23, 2007 at 1:09 PM | Permalink | Reply

    Re 74: That list is all at airports, and mostly FAA, with only some NWS. Only 3 correspond to the ones I checked above, and they are 1996/97. They could be replacements for the HO-83, but if so they seemed to stay at the new HO-83 level. Hmmm. Murray

  77. Michael Jankowski
    Posted Aug 23, 2007 at 1:32 PM | Permalink | Reply

    SOMETHING weird goes on in 1983: a rapid cooling at Fort Myers and then
    a rapid warming… weirdness before this as well.

    FWIW, 1983 was when Ft Myers’ new airport (SW Int’l) opened. IF the station moved from the old Page Field to the new airport in 1983, it was relocated from a spot about 1-2 miles from the wide Caloosahatchee River on the southern edge of downtown Ft Myers to what was then a completely rural spot 6-8 miles to the southeast and well away from the water.

  78. steven mosher
    Posted Aug 23, 2007 at 1:47 PM | Permalink | Reply

    RE 77.

    Interesting… Note, guys: when I say “cooling” I’m talking about cooling relative to a
    nearby site (Fort Myers – Everglades in this example). It’s the quick and dirty method
    of finding an issue.

  79. Murray Duffin
    Posted Aug 23, 2007 at 1:53 PM | Permalink | Reply

    Re 72: Steven – The scale on the min/max/mean bar charts is in 2-degree increments. At this scale I can detect no trend at all for Fort Myers, and possibly a slight uptrend in Tmin and downtrend in Tmax, with no visible trend in Tmean, for Arcadia. I have no skill at playing with Excel, so I am not going to even try making my own plots from the data files, especially as the data are only in whole degrees, which will make tenths-of-degrees trends hard to detect. I imagine you are much more skillful. If you want to try some analyses, I will be very interested in your results. Murray

  80. Mike B
    Posted Aug 23, 2007 at 3:19 PM | Permalink | Reply

    This TOB adjustment really has me perplexed. I’ve read the Karl paper, and I can’t make sense of it.

    So I set up a crude simulation, where I superimposed a daily sinusoid on top of an annual sinusoid, with random noise added to both.

    On the first iteration, I found a result not unlike what Karl predicted. Morning observations were very slightly biased in one direction, with evening observations biased in the opposite direction. But subsequent iterations reversed the signs, with others showing all times of day biased in the same direction (some positive, some negative), and some showing no bias at all!

    The short of it is that I’m not seeing a Time of Observation Bias. Or am I just being dense about this?
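    Mike B’s crude simulation above can be sketched as follows. The amplitudes, baseline, noise level and reset hours are arbitrary choices for illustration, not values from Karl’s paper:

```python
import math, random

# Hourly temperatures: an annual sinusoid plus a daily sinusoid plus noise,
# then "observed" with a max thermometer reset once per day at reset_hour.
# Returns the mean recorded max minus the mean true calendar-day max.
def simulate_tob(reset_hour, days=365, seed=0):
    rng = random.Random(seed)
    temps = [10 + 10 * math.sin(2 * math.pi * d / 365)     # annual cycle
             + 8 * math.sin(2 * math.pi * (h - 9) / 24)    # daily cycle, afternoon peak
             + rng.gauss(0, 1)                             # weather noise
             for d in range(days) for h in range(24)]
    true_max = [max(temps[d * 24:(d + 1) * 24]) for d in range(days)]
    recorded, running = [], []
    for i, t in enumerate(temps):
        running.append(t)
        if i % 24 == reset_hour:               # observer resets the instrument
            recorded.append(max(running))
            running = [t]
    return sum(recorded) / len(recorded) - sum(true_max) / len(true_max)

# An afternoon reset can re-count a hot afternoon in the next day's window;
# the size (and sometimes the sign) of the bias varies with the noise draw.
print(simulate_tob(reset_hour=17), simulate_tob(reset_hour=0))
```

    Re-running with different seeds and reset hours shows how unstable the estimated bias can be on a single year of data, which may be part of what the varying signs in the iterations above reflect.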

  81. JerryB
    Posted Aug 23, 2007 at 3:33 PM | Permalink | Reply

    Re #80,

    Mike B,

    Try
    some information about TOB and how it works
    .

  82. Posted Aug 23, 2007 at 3:55 PM | Permalink | Reply

    Mike B August 23rd, 2007 at 3:19 pm says:

    This TOB adjustment really has me perplexed. I’ve read the Karl paper, and I can’t make sense of it.

    I thought it was just me.

    I read some papers (Svensmark, Spencer, McIntyre, and heck, even string theory from Lubos) and I can pick up pretty quickly what they are all about. The reasoning is clear and logical, the exposition devoid of obfuscating prose. Then I read things like the TOB adjustment and my head turns to mush. I wonder how I went from reasonably smart to totally stupid in a matter of hours?

  83. Larry
    Posted Aug 23, 2007 at 4:07 PM | Permalink | Reply

    82, it’s not that hard; you just have to realize that the min and max thermometers take the respective min and max over a period that runs from reset to reset, and they are reset when an operator is available, i.e. mid-day. Depending on when the reset is done, the peak (or valley) that you catch may be from the present day or from the previous day, and that may vary on a day-by-day basis. This introduces a source of error that can’t be assumed to be random. That makes it a bias.
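    The mechanism can be shown with a toy example (all numbers invented): a hot day followed by a cool day, read with a max thermometer reset at 5 pm.

```python
# Day 1 peaks at 90 F, day 2 at 70 F; sparse hour -> temp tables for brevity.
day1 = {15: 90, 16: 88, 17: 85, 23: 75}
day2 = {15: 70, 16: 69, 17: 67, 23: 60}

# True calendar-day maxima:
true_maxes = [max(day1.values()), max(day2.values())]        # [90, 70]

# With a 5 pm reset, day 2's observation window starts at 5 pm on day 1, which
# is still warmer than anything day 2 reaches, so the hot day is counted twice.
window2 = [t for h, t in day1.items() if h >= 17] + list(day2.values())
recorded_maxes = [max(day1.values()), max(window2)]          # [90, 85]

print(true_maxes, recorded_maxes)   # [90, 70] [90, 85]
```

    Since hot-then-cool and cool-then-hot sequences don’t occur symmetrically, the error doesn’t average out, which is why it’s a bias rather than noise.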

  84. Posted Aug 23, 2007 at 4:25 PM | Permalink | Reply

    Larry August 23rd, 2007 at 4:07 pm,

    This introduces a source of error that can’t be assumed to be random. That makes it a bias.

    I get that. So tell me what the correction factor should be based on first principles.

    And shouldn’t that correction average to zero over a year?

    How can we tell what the bias is? Suppose it was max 90 yesterday and 70 today (we had one of those last week). What should the TOB correction be? You pick the measurement hours you like and give me a number.

  85. Larry
    Posted Aug 23, 2007 at 4:47 PM | Permalink | Reply

    You have to assume that you know at what hour the peak actually occurred. Then you can decide which day that was the max for. No, it’s not foolproof.

  86. Neil Fisher
    Posted Aug 23, 2007 at 4:47 PM | Permalink | Reply

    This introduces a source of error that can’t be assumed to be random. That makes it a bias.

    Huh – perhaps it’s the cynic in me talking, but it seems to me that when a potential bias increases temperature, it is assumed to be random unless someone can demonstrate otherwise, yet if there is a potential bias that decreases temperature, we can’t assume it’s random.

    Damn, I wish bender was still around – another one for his list.

  87. Posted Aug 23, 2007 at 5:06 PM | Permalink | Reply

    Larry August 23rd, 2007 at 4:47 pm ,

    Larry, make your own assumptions about when the data were read and when they should have been read. Then give me a number.

    You say it is not foolproof. I’ll go further. It can’t be determined. Prove me wrong. Give me a number. Show your reasoning.

    I can be convinced.

    Or suppose it was max 70 yesterday and 90 today (had one of those last week too). What is the adjustment?

    Assume the month of August.

    I believe TOB adjustments can’t be done in aggregate. Temperature fluctuations will skew the results. Fer instance the TOB for 85 yesterday and 90 today will be different from 70 yesterday and 90 today. As I understand it corrections are not done on a day by day basis. Perhaps that is an incorrect assumption on my part.

  88. steven mosher
    Posted Aug 23, 2007 at 5:08 PM | Permalink | Reply

    RE 82.

    Even if one does a TOB adjustment, the adjustment is an ESTIMATE with an error.
    The way it’s implemented, the adjustment rectifies the mean and leaves the
    variance untouched. It should come in with its error.

  89. Posted Aug 23, 2007 at 5:16 PM | Permalink | Reply

    steven mosher August 23rd, 2007 at 5:08 pm ,

    Wouldn’t the estimate need to be derived based on day to day fluctuations?

    Is it?

    Stats is not one of my strong points. I do know my way around a Gaussian curve. I understand a little of the rest.

    So you have an estimated bias with an estimated error. What is the confidence level? How is it determined?

  90. Posted Aug 23, 2007 at 5:22 PM | Permalink | Reply

    steven mosher August 23rd, 2007 at 5:08 pm,

    What is the reasoning on why variance should be conserved? Or is that just an assumption?

  91. Posted Aug 23, 2007 at 5:30 PM | Permalink | Reply

    Let me put it in a simple analogy.

    You have different instruments measuring the length of different objects.

    What do the stats on one instrument tell you about how to correct the stats on another instrument? How can you tell the best estimate of length A based on instrument 1 measurements from measuring length B on instrument 2?

    Seems to be a lot of “magic goes here” between the numbers and the result.

  92. Philip_B
    Posted Aug 23, 2007 at 5:45 PM | Permalink | Reply

    The key point to bear in mind about TOB is that it results from including temperatures from the day prior to the reporting period. If the reporting period is a day or even a month it could be significant, but if the reporting period is a year any error from one day is averaged over 365 days and hence would be trivial. It is simply not possible to get an average TOB anything like 0.2C, because it would require a TOB error of 0.2C * 365 from one day.

    Another way to look at this is that in any 365 day period, irrespective of time of observation, 364 of the observations can only include temperatures actually occurring in the year. Only one day can possibly include temperatures occurring outside the year. And if you then average the daily temperatures to get an annual mean, it’s irrelevant if the temperature recorded on day x actually occurred on day x-1 (except of course for day 1).

  93. Posted Aug 23, 2007 at 6:37 PM | Permalink | Reply

    Philip_B August 23rd, 2007 at 5:45 pm,

    That is what I thought too (not in as much detail and based on a different chain of reasoning – yours is better).

  94. BarryW
    Posted Aug 23, 2007 at 7:12 PM | Permalink | Reply

    If you are recording at a time where the temp is not a max or min, then you may skew one of the values (max or min) by a day, with an inclusion from the previous month and an exclusion of the last day of the month in the monthly average. If you measure around a max or min, then even resetting the thermometer would still cause it to register the max for that day in the next day’s reading. Consider the case of a 90 deg day followed by a 70 deg day. If I record when the temp is at its max (say 3 o’clock) on the first day and reset my thermometer, the temp will climb back up to 90 deg. The next day, when I read it, the max will still show 90. So I will have double counted the max from the previous day. In the opposite case of 70 followed by 90, I would record a seventy on the first day and a 90 on the second. So I would get a warming bias. Mins would work in reverse.

    What this all means in terms of TOB corrections algorithms I haven’t figured out yet.
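    BarryW's two cases can be reduced to a tiny rule, sketched below with hypothetical daily peak values only: if the register is read and reset right at the daily peak, the next reading can never fall below the previous day's peak, so cooling days get double counted while warming days do not.

```python
# BarryW's two cases, reduced to daily peak values (hypothetical numbers).
# Reading at the daily max means the register climbs back to today's peak
# after the reset, so tomorrow's reading can never fall below it.

def afternoon_maxes(daily_peaks):
    """Max recorded each day when the reading is taken at the daily peak."""
    recorded, carry = [], None
    for peak in daily_peaks:
        recorded.append(peak if carry is None else max(peak, carry))
        carry = peak
    return recorded

print(afternoon_maxes([90, 70]))  # [90, 90] -- the 90 is counted twice
print(afternoon_maxes([70, 90]))  # [70, 90] -- no double count
```

    The asymmetry is the point: double counting only ever pushes the record up, never down, which is why afternoon observation times carry a net warm bias.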

  95. Posted Aug 23, 2007 at 7:42 PM | Permalink | Reply

    Barry August 23rd, 2007 at 7:12 pm,

    I agree.

    Now how can you tell from the temperature record that the events you describe (correctly) actually happened?

    It seems to me that the information is lost.

  96. Philip_B
    Posted Aug 23, 2007 at 8:19 PM | Permalink | Reply

    Re#94

    I can see how that could result in bias over a short period but not a systematic bias over a year at many sites, which the TOB adjustment requires. BTW, I’ve asked a real statistician if he can explain this.

  97. Philip_B
    Posted Aug 24, 2007 at 12:40 AM | Permalink | Reply

    Having read the link below in detail, I now see how a TOB could occur over an entire year. In simple terms, if your time of observation is in the early morning, you will tend to double count cold minimums and if time of observation is late in the day, you will tend to double count warm maximums.

    http://www.john-daly.com/tob/TOBSUMC.HTM

    Which leaves the question of why there is a trend from 1950 of over -0.2C, i.e. a +0.2C TOB adjustment. For this to occur there must have been a wholesale shift of time of observation from late in the day to early in the day. I’d say somewhere in the region of at least 25% of all stations, if the sizes of the annual TOBs in the link above are representative.

    The NWS advises observers to record min/max temperatures late in the day. So I don’t know why so many observers have changed their time of observation (assuming they have, of course, and the change in TOB is not estimating error). Does anyone know if such a change has occurred?

    http://www.nws.noaa.gov/om/coop/forms/b91-notes.htm

    WHEN TO TAKE OBSERVATIONS. Take your observations at the same hour each day, if at all possible. Prior approval is needed to change the scheduled time of observation. Routine River and/or Rainfall observations should ALWAYS be taken in the MORNING, preferably at 7 a.m. Temperature observations should be taken as late in the day as is convenient after 5 p.m.

    And if changes to time of observation have occurred, the NWS should have a record.
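    Philip_B's double-counting picture can be checked with a crude Monte Carlo sketch. All figures below are hypothetical, and the carry rule is deliberately exaggerated (yesterday's extreme always spills into today's window) just to make the sign of the effect obvious.

```python
import random

# Crude Monte Carlo sketch of the double-counting argument (hypothetical
# figures; the carry rule is simplified so yesterday's extreme always
# spills into today's window).  Evening observations near the daily peak
# double count warm maximums; morning observations near the daily minimum
# double count cold minimums.  The slips do not cancel over a year,
# because max() can only raise a reading and min() can only lower it.

random.seed(42)
days = 365
true_max = [85 + random.gauss(0, 8) for _ in range(days)]
true_min = [60 + random.gauss(0, 8) for _ in range(days)]

rec_max = [true_max[0]] + [max(true_max[i], true_max[i - 1]) for i in range(1, days)]
rec_min = [true_min[0]] + [min(true_min[i], true_min[i - 1]) for i in range(1, days)]

max_bias = (sum(rec_max) - sum(true_max)) / days   # positive: warm bias
min_bias = (sum(rec_min) - sum(true_min)) / days   # negative: cold bias
print(round(max_bias, 2), round(min_bias, 2))
```

    The annual-mean bias does not shrink with averaging, because the error is one-sided rather than random; this is why a shift in observation time from evening to morning shows up as a spurious cooling trend that the adjustment tries to undo.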

  98. Posted Aug 24, 2007 at 1:36 AM | Permalink | Reply

    Philip_B August 24th, 2007 at 12:40 am,

    Doesn’t that imply that the TOB should vary by month over the year? Also, because of the noise in the signal, the inflection points in the winter/summer corrections are going to be skewed. Some years high, some years low. Plus the bias will depend to some extent on the warming/cooling trend.

    I still don’t see how a blanket correction can work. The correction, it seems to me, would be dependent on the delta T of the max (min)from day to day.

    Also why in the heck are we still doing twice daily manual observations when the stations are electronic? Haven’t these guys heard of modems?

    They are still doing readings in the same way they did when glass thermometers were the norm. What is this? Nostalgia?

  99. Philip_B
    Posted Aug 24, 2007 at 3:38 AM | Permalink | Reply

    M. Simon, I see how you can get a reasonable estimate of the average annual TOB for a particular time of day and adjust all sites based on what the actual time of observation is or even what you estimate the time of observation to be. Although I’m not sure how accurate the time of observation estimation process is. So the adjustment is an average based on an estimate and is unrelated to the actual TOB at a particular site for a particular year.

    Let’s say that in 1950 most observations were manual and at the directed time – late in the day. That would result in an upward TOB and hence the need for a minus TOB adjustment. If over time, manual reading is replaced by automated readings, which presumably don’t have TOB, then we should see a steady reduction in the TOB adjustment. In fact, in 1950 there was a small minus correction indicating a (small) majority of observations late in the day. However, since then there has been a steadily increasing positive adjustment.

    My point is that this change in TOB could be caused by a large proportion of observers shifting to early observation. If they haven’t then the TOB results from estimating error.

    http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif

    I guess it could also be due to a shift to automated TOB-free readings, and the apparent 0.2C TOB adjustment is in fact a reducing negative TOB adjustment applied as a +ve correction. If so, it needs to be explained exactly why the adjustment is implemented this way and why the absolute temperature is artificially inflated.

  100. Mike B
    Posted Aug 24, 2007 at 7:46 AM | Permalink | Reply

    Glad to have the discussion about TOB, and glad to hear I’m not the only one confused.

    A couple of quick reactions to some of the issues.

    1) I agree that we cannot assume that there is no TO bias. So we need to test if there is. Since anecdotal arguments (what if the high was 90 yesterday and only 70 today?) have limited probative value, I ran a simulation. The simulation showed no bias. I’ll grant that the simulation was crude, so I’d be interested in other simulations or analyses performed on real hourly min-max readings.

    2) Double counting – Obviously, if data are not collected at regular 24 hour intervals, you can have double counting. But as long as the readings are conducted at regular 24-hr intervals (say 5:00 am), how do you double count anything? The high or low may not be reported on the correct day, that’s easy to see. If double counting can occur when data are collected at regular intervals, how does collecting the data at midnight eliminate it?

    Mushily written papers by agendists are not enough to satisfy my skeptical nature.

    Thanks.

  101. steven mosher
    Posted Aug 24, 2007 at 8:16 AM | Permalink | Reply

    RE 89.

    The TOBS adjustment is an estimate. For example, if you change from a noon
    observation to a midnight TOBS, the difference in TMAX would be 0.5C +- (error)
    (example figures).

    Now when Hansen adjusts a series, here is what happens:

    Before : After
    14.0 : 14.0
    13.9 : 13.9
    14.1 : 14.1
    13.6 : 14.1 (TOBS adjust)
    13.7 : 14.2 (TOBS)
    13.5 : 14.0 (TOBS)

    Since he adds a constant he preserves the variance. BUT TOBS is an estimate
    with an error. I contend you don’t get to do the adjustment for FREE.
    The adjustment “fixes” the mean but it has error.

    Minor picky thing
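    The variance point above can be demonstrated directly. The before/after series and the 0.5C shift are the example figures from the comment; the two error figures are hypothetical, added only to show how a propagated uncertainty would combine.

```python
import math
from statistics import pvariance

# Sketch of the variance point, using the example figures above plus
# hypothetical error values.  Adding a constant rectifies the mean and
# preserves the variance of the adjusted segment, but the estimate's own
# error should widen the uncertainty of every adjusted value.

before = [14.0, 13.9, 14.1, 13.6, 13.7, 13.5]
tobs_shift = 0.5        # example figure: noon -> midnight TMAX difference
tobs_sigma = 0.1        # hypothetical standard error of that estimate
meas_sigma = 0.05       # hypothetical per-reading measurement error

after = before[:3] + [t + tobs_shift for t in before[3:]]

# the constant shift leaves the adjusted segment's variance unchanged...
assert math.isclose(pvariance(before[3:]), pvariance(after[3:]))

# ...but a properly propagated error is larger than the raw one
combined_sigma = math.sqrt(meas_sigma**2 + tobs_sigma**2)
print(round(combined_sigma, 3))  # ~0.112, vs 0.05 for unadjusted readings
```

    In other words, the adjustment is not "free": the adjusted values should carry roughly twice the uncertainty of the unadjusted ones under these assumed error figures.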

  102. JerryB
    Posted Aug 24, 2007 at 10:04 AM | Permalink | Reply

    A few comments about TOB comments in this thread:

    They distract from the topic of this thread.

    Several of them simply express lack of understanding of TOB, and therefore
    are not informative even if they were on topic.

    Some may be misleading.

    Perhaps Steve may some day post a thread on TOB, and that would be the place
    for extended discussion thereof.

  103. steven mosher
    Posted Aug 24, 2007 at 10:46 AM | Permalink | Reply

    Defer to JerryB.

    I think a TOBS discussion would be in order at some point

  104. Posted Aug 24, 2007 at 12:19 PM | Permalink | Reply

    steven mosher August 24th, 2007 at 8:16 am,

    I don’t understand why variance must be preserved. If the highs are too high or the lows are too low wouldn’t variance in the real world actually be less than the assumed variance?

    It is my contention that a real TOB adjustment must be done on a station by station basis based on the actual delta Ts from one day to the next.

  105. steven mosher
    Posted Aug 24, 2007 at 1:18 PM | Permalink | Reply

    RE 104.

    This discussion of variance is independent of TOBS.
    I am NOT saying the variance should be preserved. I am saying the opposite.

    I am saying Hansen’s method of adjustment is variance preserving, and it should not be.

    He is adding and subtracting random variables to a series, but he is only adding the means
    and not the errors. This gives his final measures more accuracy than they deserve.

  106. Mike B
    Posted Aug 24, 2007 at 1:28 PM | Permalink | Reply

    JerryB:

    Sorry to take things off topic. Not the first time it’s happened at CA.

    TOB adjustment has come up in several threads lately, and given the impact that it has on the instrument record, verifying its accuracy and precision is vital.

    Regarding the following comments:

    “Several of [the comments] simply express lack of understanding of TOB, and therefore
    are not informative even if they were on topic.

    Some may be misleading.”

    Sounds rather like the AGW community’s original response to M&M’s analysis of MBH98. :)

    Having had my say, I’ll remain mute on the topic until there is a more appropriate thread.

  107. Robert
    Posted Aug 24, 2007 at 1:48 PM | Permalink | Reply

    I can solve ALL of these adjustment problems. It’s right there, in the second and fourth graphs on this page:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    All the corrections cross, at zero, in 1906. All we have to do is collect data exactly as it was done then. Since NOAA has determined that 1906 was the high point of temperature collection methodology, and requires no adjustments, all the adjustments we use today must be simply an attempt to reproduce the accuracy our great-grandparents knew. That technology isn’t lost. We can still build equipment like they used in that golden age, before our 100 year long technological slide. Then, we can just measure the temperature, and immediately post the undisputed data. No TOB, no UHI, no MMTS. No spending a decade debating whether or not 1998 was REALLY warmer than 1934. Just actual measurements.

    Does anyone know Hansen’s phone number? This idea should save him a lot of trouble. I’m sure he’ll jump on it.

  108. Gunnar
    Posted Aug 24, 2007 at 3:03 PM | Permalink | Reply

    >> That technology isn’t lost. We can still build equipment like they used in that golden age, before our 100 year long technological slide.

    (giggling) This comment is priceless. Thank you for brightening my friday afternoon with this great sarcasm.

  109. Murray Duffin
    Posted Aug 24, 2007 at 3:47 PM | Permalink | Reply

    Re: 60 and 65
    I have checked the nearest surrounding stations for each of the 6 HO-83 sites. Ft. Myers, Key West and Apalachicola nearby stations don’t have a same year jump, so it looks like a sensor problem. Tallahassee is mixed so it could be real variation. Pensacola and Savannah nearby stations also show an ’84-’85 jump, so it would seem to be temperature and not necessarily the HO-83. Strangely mixed results. Certainly not conclusive. Apalachicola seems the most likely to have had an instrumentation problem. Murray

  110. Posted Aug 25, 2007 at 2:07 AM | Permalink | Reply

    steven mosher August 24th, 2007 at 1:18 pm,

    Much clearer. Thanks!

    And pretty much what I thought.

    Robert August 24th, 2007 at 1:48 pm,

    Actually it will not be easy to duplicate the old machines. The metals today have different impurities (probably fewer). I suppose analysis could be done and adjustments done. I’m assuming you mean chart recorders.

    Thermometers can be individually calibrated. So that would help. And don’t forget to sling your psychrometer.

    Which makes me think. The old records should have 4 temps and a time. Max, Min, Wet bulb, Dry bulb, time of observation. That might be useful. 50% more temperature data and one data item with a time.

  111. Posted Aug 25, 2007 at 2:35 AM | Permalink | Reply

    Also it seems to me that collecting 10X or 15X as much data by modem would be very helpful. That would mean it would take a few years to fill a 250G hard drive. Say a reading every two hours plus daily min max. And burn DVDs every day as back up.

  112. Geoff Sherrington
    Posted Aug 25, 2007 at 2:49 AM | Permalink | Reply

    Re # 19 Steve

    The Australian Bureau of Meteorology (BoM) uses a TOB field which gives the number of days over which a cumulative reading was taken.

    Thus, if it rained for a week, the weekly downfall could be measured as one figure and fitted into the data with some small error.

    But, if there was a thermometer which recorded only the highest maximum and lowest minimum in that period, the results would be biased towards extremes. I am at present discovering if this TOB was used on temperatures, or only on rainfall. It is possible that the USA is not the only island using some form of TOB.

    Does anyone have information on which models of temperature sensors have been used, and for which terms, in Australia? Do we have an HO-83 problem?(This interest is part-time for me and I do not have inside access via friends here).

    Re # 107 Robert –

    you beat me by a few hours. I was going to suggest that some contemporary temperatures be taken alongside modern instruments to see if there was a systematic difference. There must be many oldies lying around, in museums, in NASA, etc.

    Geoff.

  113. steven mosher
    Posted Aug 25, 2007 at 7:24 AM | Permalink | Reply

    You might find interesting things here on instruments
    used around the world:

    http://www.wmo.ch/pages/prog/www/IMOP/publications/IOM-94-TECO2006/PROGRAMME.HTML

    Also the state of instruments and networks in several countries.

  114. tom s
    Posted Aug 25, 2007 at 3:50 PM | Permalink | Reply

    With all this information about sensors in mind, and all the pitfalls that are associated with the data-set, the findings of Steve Mc et al and the recent publicity in the press, how long until this current issue sees some light in the MSM? I sense that the House of Usher is beginning to crack and crumble. This data set is not and was not ever meant to measure a ‘global temperature’. It’s been a preposterous notion to this meteorologist since 1988, when my career and the AGW hypothesis were born.

  115. steven mosher
    Posted Aug 25, 2007 at 8:23 PM | Permalink | Reply

    re 114 Tom

    Go to http://www.surfacestations.org do a survey of a site

    That is your best option for making a difference.

  116. Papertiger
    Posted Aug 26, 2007 at 2:14 PM | Permalink | Reply

    A short while ago I went looking into why so many stations were closed down in Australia in the early 90’s. Found out that this is when the Aus gov. went full bore to modernize and automate their climate change network. All of their stations are to the siting specifications, with clearances from asphalt, concrete, trees and such. They even have site pictures online.
    http://www.bom.gov.au/climate/change/reference.shtml You just click the orange dot and the picture pops up.

    And with all that due diligence they still have the warming trend. The stand-alone Aus time series looks just about like the world average.
    http://www.bom.gov.au/cgi-bin/silo/reg/cli_chg/timeseries.cgi?variable=tmean&region=aus&season=0112

    I wonder if they are using the HO-83’s?

  117. Willis Eschenbach
    Posted Aug 26, 2007 at 5:21 PM | Permalink | Reply

    Papertiger, thank you for the interesting links. You say:

    A short while ago I went looking into why so many stations were closed down in Australia in the early 90’s. Found out that this is when the Aus gov. went full bore to modernize and automate their climate change network. All of their stations are to the siting specifications, with clearances from asphalt, concrete, trees and such. They even have site pictures online.
    http://www.bom.gov.au/climate/change/reference.shtml You just click the orange dot and the picture pops up.

    And with all that due diligence they still have the warming trend. The stand-alone Aus time series looks just about like the world average.
    http://www.bom.gov.au/cgi-bin/silo/reg/cli_chg/timeseries.cgi?variable=tmean&region=aus&season=0112

    So I went to the site, and the first one that I clicked on (Townsville, 032040) shows a site on what looks like either gravel or asphalt, in the middle of a grassy field … hmmm … so I tried some more. Moree (005315) has a car parked about twenty feet from the temperature sensor. And Tibooburra Post Office (046037) is in someone’s front yard, with a car parked right next to the sensor. Can’t say I’m too impressed … although many of the sites seem quite good.

    However, the graph that you show contains data since 1910. The climate network whose map you linked is not the full set of Australian stations used in the graph. It is the “Reference Climate Network” (RCN), which was established after a WMO request for such networks in 1990. The RCN contains 103 sites. The graph you showed, on the other hand, is from the Australian High Quality Network, which contains 133 sites. However, only 51 of these sites are RCN sites, and half of them were not established until after 1990.

    In other words, the majority of the data you showed did not come from the network you showed. And the post 1990 data in your graph, which contains (among other stations) the RCN stations, shows little change.

    Finally, there is (AFAIK) no adjustment for UHI in the Australian data.

    w.

  118. Papertiger
    Posted Aug 26, 2007 at 10:37 PM | Permalink | Reply

    Willis

    That is a relief.
    For a while I thought it might be game over.

    Game on! PT

  119. R John
    Posted Aug 27, 2007 at 1:07 AM | Permalink | Reply

    Very interesting discussion…

    As a chemistry professor and long time weather buff I will add that temperature measurement in the laboratory is one of the least precise measurements that I deal with. Scales are easy to calibrate and give consistent numbers regardless of price. On the other hand, whether it is a mercury, alcohol, or electronic thermometer, they have such inconsistent and different results that you question ANY and ALL of them. So, it is not a surprise to question the reliability of data collected by any modern or even older probes.

    #43 – I was in Chicago on 1/1/99 and we could walk across the tops of parked cars in the streets as you would normally walk down the street due to the blizzard – and this was only a day after the second warmest year ever.

  120. SteveSadlov
    Posted Aug 27, 2007 at 1:29 PM | Permalink | Reply

    RE: #119 – I’ve done some Six Sigma coaching, and have found that thermal measurement instruments of any kind are the perfect tool for conducting a “learn by doing” lesson regarding Measurement Systems Analysis (MSA), aka Gage Repeatability and Reproducibility (Gage R&R). Funny thing …. whenever I bring this up at Real Climate (or even arguing with the climate science orthodoxy participants at other venues) they cast it aside as “what does this have to do with climate science” or “haven’t you heard about the effects of large numbers” etc. I simply laugh.

  121. DeWitt Payne
    Posted Aug 27, 2007 at 3:31 PM | Permalink | Reply

    R John

    The big problem with liquid-in-glass thermometers in the lab is stem temperature correction. Some thermometers are calibrated as total immersion. This means the entire mercury thread must be in the medium being measured, not necessarily the whole thermometer. Others are calibrated for partial immersion, 76mm being particularly common. In this case, there is an immersion line marked on the thermometer. The problem comes when a partial immersion thermometer is placed in a hot liquid to the correct depth. The calibration assumes the stem is at room temperature, but this isn’t always, or even usually, the case. If the mercury thread above the liquid being measured is also hot, the thermometer will read high. There are tables for stem temperature correction in the CRC handbook, IIRC. Of course, this assumes no thread separation, something else that is quite common. Alcohol in glass thermometers (red liquid usually) are little better than an educated guess. Air temperature measurement is less of a problem since total immersion is the norm.

    As far as analytical balances, yes, the precision and accuracy are high, but try weighing a 1 mg precipitate in a 100 g glass vessel. Buoyancy correction becomes the major source of error.

  122. Carl Saffron
    Posted Oct 15, 2007 at 9:50 PM | Permalink | Reply

    A couple of misconceptions here about the HO83…

    The HO83 was used by the NWS (and probably elsewhere) from about 1985 until the early 90s, when it was replaced by ASOS, which uses the 1088 (a new and improved HO83). The main difference between the two was the location of the fan and direction of air flow. The HO83 normally read about 1.5 degrees F higher than the 1088. During sunny days with light wind, it would read as many as 5 degrees F higher than the 1088. The problem with the HO83 wasn’t discovered until ASOS went in, because it was calibrated to a mercury thermometer standard at around 8am every morning. Once both the HO83 and ASOS 1088 were working side by side during the ASOS acceptance period, the problem became obvious. The major cause of HO83 failures was the mirror used to measure dewpoint. It would become pitted or dirty and have a tendency to freeze up during extremely humid weather. The temperature sensor seldom gave any trouble… unless one considers accuracy a problem;-).

5 Trackbacks

  1. [...] has been written about problems with artificially high temperature readings due to the HO83 aspirated air temperature/dewpoint temperature sensor used on NOAA Automated [...]

  2. [...] this sensor has been removed in the NCDC History File. You can find the details on HO-83 bias at Climate Audit and Watts Up With [...]

  3. By Climate Change Politics - Page 34 on May 28, 2010 at 8:29 AM

    [...] [...]

  4. [...] country. The situation has been covered in some detail in a blog by Steve McIntyre on ICECAP.US (http://www.climateaudit.org/?p=1954) for those wishing more detail on the history of this [...]

  5. By Detectives in Tucson « Climate Audit on Oct 30, 2011 at 8:20 AM

    [...] (Sep 4, 2007) I posted further on previous discussion of the HO-83 thermometer on Aug 22 here here.. The paper cited above (Gall et al, 1992 entitled The Recent Maximum Temperature Anomalies in [...]
