Inside the HO83 Hygrothermometer

Hygrothermometer
HO83 ASOS Hygrothermometer
(temperature/dewpoint sensor)

Much has been written about problems with artificially high temperature readings from the HO83 aspirated air temperature/dewpoint sensor used on NOAA Automated Surface Observing Stations (ASOS). The most famous problem occurred in Tucson, AZ in the mid-1980s, where a malfunctioning HO83 unit created dozens of new high temperature records for the city, even though surrounding areas had no such measured extremes. Unfortunately, those new high temperature records, including the all-time high of 117 degrees F, became part of the official climate record and still stand today. Here is a New York Times article that highlights the problem, and a research paper from Kessler et al. outlining similar problems in Albany, New York, as well as Tucson.

But we haven’t really known much about the details and inner workings of the HO83. Fortunately, I’ve located a NOAA training course on the original model HO83 and its improved replacement, the model 1088. The NOAA online tutorial provides some detail on its inner workings, with pictorials and schematics.

See the NOAA HO83/1088 online training course.

An internal NOAA document from 2002, outlining a software upgrade designed to improve the performance and reliability of the ASOS temperature and dewpoint system, includes a description of its operation:

1.1.2 New Dew Point Temperature Replacement Sensor

Currently ASOS uses a hygrothermometer (H083 or 1088) sensor for measuring both ambient and dew point temperatures. This sensor uses a platinum wire Resistive Temperature Device (RTD) to measure ambient temperature and a chilled mirror to determine dew point temperature. The mirror is cooled by a thermoelectric or Peltier cooler until dew or frost begins to condense on the mirror surface. The body of the mirror contains a platinum wire RTD and the mirror’s temperature is measured and reported as the dew point temperature. The ambient temperature sensor for both hygrothermometers meets ASOS performance requirements.

The dew point temperature sensor performance is below expectations.

In order to improve the performance of the dew point temperature sensor, the NWS looked for a more reliable technology. The new sensor measures relative humidity via capacitance and then the dew point temperature is calculated and processed through the ASOS algorithms. The ASOS data processing algorithms have not changed; only the dew point temperature sensor has been replaced. The new ASOS dew point temperature replacement sensor is the Vaisala DTS1.
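The DTS1’s approach, deriving dew point from a measured relative humidity and the ambient temperature, can be illustrated with the Magnus approximation. This is a common textbook formulation, not necessarily the exact ASOS algorithm, and the coefficients below are one standard choice:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Estimate dew point (deg C) from air temperature and relative humidity
    using the Magnus approximation. The coefficients (17.62, 243.12) are one
    common choice; the actual ASOS processing algorithm may differ."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c / (b + temp_c)) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

# At 25 C and 60% RH the dew point works out to about 16.7 C.
print(round(dew_point_c(25.0, 60.0), 1))  # prints 16.7
```

Note that at 100% RH the calculated dew point equals the air temperature, which is a quick sanity check on any such implementation.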

They also issued the ASOS Product Improvement Implementation Plan in 2002 which outlines all of the stations in the USA that were scheduled to get the upgraded temperature/dewpoint sensor.

One of the biggest problems was that the early design of the HO83 allowed exhaust air (warmed by the hot side of the Peltier chip) to recirculate from the mushroom-shaped cap down the sides of the chamber and back into the air inlet at the bottom. The problem was solved a few years later by the addition of a metal skirt that deflects the exhaust air.

[Image: ho83-original-modified.png]

Unfortunately, even though NOAA has modernization plans in place for the ASOS network, some of the original designs remain in operation today, such as this USHCN station, which is the official climate station of record for New Orleans:

[Image: NOLA_H083_closeup]
Photo from surfacestations.org volunteer Fred Perkins, 8/25/07; click for larger photo.

Thus, the HO83 induced bias first noted in the mid 1980’s continues in the surface temperature record even today.

While only 5% of the USHCN network is ASOS, the biases produced by the HO83 are quite large, and there appear to be no site-specific adjustments to remove the bias. Since determining the individual maintenance records and biases of each ASOS station would be a significant task, the simplest solution would be to remove all ASOS stations from the USHCN record set.

49 Comments

  1. Larry
    Posted Jan 11, 2008 at 9:41 AM | Permalink

    That would seem like an error that could be characterized, but it would be highly dependent on wind; the error would only manifest if the air is still.

  2. steven mosher
    Posted Jan 11, 2008 at 10:24 AM | Permalink

    Larry? Got 17 minutes to waste?

    http://blog.makezine.com/archive/2008/01/make_your_own_vaccum_tube.html

  3. Sam Urbinto
    Posted Jan 11, 2008 at 10:39 AM | Permalink

    Regarding the original, non-deflecting bottom tube; is the fan powerful enough to deflect the upper air far enough down to enter the bottom under calm conditions in the first place?

  4. Posted Jan 11, 2008 at 10:52 AM | Permalink

    The capacitance sensors have problems. One is drift over time. Another is temperature sensitivity. Also poisoning from airborne contaminants.

    The mirror dew point is the most accurate instrument. The real solution to the problem should have been to upgrade the temperature measurement on the dew point sensor.

    Temperature and humidity are both tough. Drift over time is a problem. The answer is to use two different sensors for each – say RTD and thermistor for temp and mirror and capacitance sensor for RH. When they drift too far apart call in the instrument and find out why. Assuming accuracy and drift are important to you.

    In addition it would be good to have some kind of fan check. i.e. turn on the peltier cooler watch the temps, then turn on the fan and see if the delta T indicates a working fan.
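The two-sensor cross-check suggested above can be sketched as a simple comparison routine. The 0.5 degC disagreement limit and the sensor names are illustrative assumptions, not values from any NWS procedure:

```python
def check_pair(readings_a, readings_b, limit_c=0.5):
    """Compare paired readings from two dissimilar sensors (e.g. an RTD and
    a thermistor) and flag the instrument for service when they disagree by
    more than a chosen limit. The 0.5 C default is illustrative only."""
    flagged = []
    for i, (a, b) in enumerate(zip(readings_a, readings_b)):
        if abs(a - b) > limit_c:
            flagged.append((i, a, b))
    return flagged

rtd = [20.1, 20.2, 20.3, 20.4]
therm = [20.0, 20.2, 21.1, 20.5]   # third reading has drifted
print(check_pair(rtd, therm))      # -> [(2, 20.3, 21.1)]
```

When the pair diverges, you still do not know which sensor drifted; the point is simply to know it is time to call the instrument in, as described above.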

  5. Posted Jan 11, 2008 at 10:57 AM | Permalink

    Sam,

    It is a laminar flow problem in still air. Laminar flow doesn’t take much power even if the heater (peltier cooler) makes the air significantly hotter. The hot air is going to “stick” to the wall of the tube.

  6. Posted Jan 11, 2008 at 11:29 AM | Permalink

    Mosher,

    That one is making the rounds on tech blogs. I saw it 4 or 5 days ago. Very nice.

  7. Larry
    Posted Jan 11, 2008 at 12:37 PM | Permalink

    4, agreed. Or a current sensor. A shunt resistor is a cheap motor monitor.

    I have to wonder how much testing they do on these things before they start shipping them out all over.

  8. DeWitt Payne
    Posted Jan 11, 2008 at 12:52 PM | Permalink

    There’s no substitute for regular calibration. The design is flawed if you can’t easily replace the temperature sensors with freshly calibrated ones at regular intervals. Redundancy helps, but is not sufficient.

  9. Larry
    Posted Jan 11, 2008 at 1:09 PM | Permalink

    8, agreed, although platinum RTDs are quite stable, and the electronics available these days are a lot better than they used to be. Drift isn’t as much of a problem as outright failure (and the fan is probably the most likely thing to fail). You don’t replace RTDs unless they outright fail, they’re never the problem. You calibrate the electronics.

  10. Larry
    Posted Jan 11, 2008 at 1:14 PM | Permalink

    FWIW, the RTD isn’t an el cheapo sensor; if they were trying to cut corners, they’d have used thermistors.

  11. novoburgo
    Posted Jan 11, 2008 at 1:39 PM | Permalink

    Ahh for the good old days: observer takes shaded readings using a sling psychrometer, converts wet bulb to dew point, records result, repeats process at next scheduled time.
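The wet-bulb-to-dew-point conversion the observer performed can be sketched with the standard psychrometric equation. The psychrometer coefficient (6.6e-4 per degC for a well-ventilated instrument) and the Magnus constants below are textbook values, not the actual NWS conversion tables:

```python
import math

def sat_vp(t_c):
    """Saturation vapor pressure (hPa), Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point_from_wet_bulb(t_dry, t_wet, pressure_hpa=1013.25):
    """Sling-psychrometer conversion: dry-bulb and wet-bulb temperatures
    (deg C) to dew point. The psychrometer coefficient 6.6e-4 is a textbook
    value for a well-ventilated instrument, not the exact NWS table."""
    e = sat_vp(t_wet) - 6.6e-4 * pressure_hpa * (t_dry - t_wet)
    ln_ratio = math.log(e / 6.112)
    return 243.12 * ln_ratio / (17.62 - ln_ratio)

# Dry bulb 25 C, wet bulb 18 C: dew point comes out around 14 C.
print(round(dew_point_from_wet_bulb(25.0, 18.0), 1))
```

A saturated reading (wet bulb equal to dry bulb) should return a dew point equal to the air temperature, which is a useful check on the arithmetic.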

  12. DeWitt Payne
    Posted Jan 11, 2008 at 2:09 PM | Permalink

    Re: #9

    You calibrate the electronics.

    But that should be trivial. Just switch in a couple of precision resistors in place of the RTD. I was also under the impression that RTD’s didn’t drift, but I vaguely remember someone commenting a while back that drift was a problem. My experience with an RTD was with one of those three foot long quartz tube enclosed devices and a high precision resistance bridge for calibration of other thermometers.
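The precision-resistor check described above exercises only the electronics: a known resistance should convert to a known temperature through the standard Pt100 curve. A minimal sketch using the IEC 60751 Callendar-Van Dusen coefficients (valid for 0 C and above):

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a standard Pt100
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def pt100_temp(r_ohms):
    """Resistance (ohms) to temperature (deg C) for T >= 0 C, by inverting
    R = R0 * (1 + A*T + B*T**2) with the quadratic formula."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - r_ohms / R0))) / (2 * B)

# A 100.00-ohm precision resistor should read 0 C if the electronics are in
# calibration; 138.51 ohms corresponds to roughly 100 C.
print(round(pt100_temp(100.0), 2))   # -> 0.0
print(round(pt100_temp(138.51), 2))
```

If the displayed temperatures match the resistor values, the amplifier and ADC are fine; it says nothing about whether the RTD element itself has drifted, which is the distinction drawn in the comments below.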

  13. Anthony Watts
    Posted Jan 11, 2008 at 2:10 PM | Permalink

    The real solution to this problem would have been to have two aspirated assemblies, one for RH/Dewpoint and the other for air temp. Forcing the single aspiration system to do dual duty may be more economical but also leaves the system prone to the bias we’ve seen.

    The upgraded model 1088 hygrothermometer adds the “fan failure” and “dirty mirror” alarm features. So at least they know about the issues when they happen as opposed to before when they could go on for extended periods.

    The current upgrade to a Vaisala DTS1 only affects the dewpoint/RH sensor, the Platinum RTD Temperature sensor remains unchanged. With the removal of the Peltier device from the chamber, the bias issues should start to disappear.

    But sorting out what failed when in the past, and the magnitude of the bias introduced by those failures may be a nearly impossible job. Like I said the simple solution is to remove ASOS stations from USHCN altogether.

  14. Sam Urbinto
    Posted Jan 11, 2008 at 2:36 PM | Permalink

    But taking pictures is a waste of time, waaaahhhh!!!

  15. Anthony Watts
    Posted Jan 11, 2008 at 3:10 PM | Permalink

    RE14, Sam more on that to come, stay tuned…

  16. Sam Urbinto
    Posted Jan 11, 2008 at 3:20 PM | Permalink

    😀

  17. Anthony Watts
    Posted Jan 11, 2008 at 3:30 PM | Permalink

    For those interested, I’ve located the training on the DTS1 upgrade to ASOS. It appears they have added a second aspirated assembly.

    Imagine that!

    See this: http://www.nwstc.noaa.gov/MAINT/DTS1wbt/DTS101.html

  18. Posted Jan 11, 2008 at 4:45 PM | Permalink

    DeWitt,

    That just calibrates the electronics. Which is not the same as temperature calibration. RTDs of the type you would find in such an instrument drift with time. The thin film jobs are better than the wound wire jobs. The problem with the wound wire jobs is that the support structure relaxes with time. That changes the tension on the wire. Which not only changes the calibration resistance which can be factored out. It also changes the response to temperature (slope).

    It is expensive to keep instruments in calibration.

  19. Carl
    Posted Jan 11, 2008 at 4:59 PM | Permalink

    I seem to recall the H083 fan being at the bottom. I also believe that the direction of the air flow was opposite from the 1088 (the 1088 exhausts via the top and the H083 exhausts via the bottom). Actually, I know that the direction of airflow was opposite, and if you mount the fan on the 1088 upside down, you’ll get the same performance as the old H083 (e.g. approx. 1.5 degrees F too high in the morning to 5 degrees F too high on sunny days).

    Using a mirror for measuring dewpoint sucks. It’s impossible to keep the mirror clean in the field and the direct/indirect sensors aren’t sensitive enough to distinguish between a clear mirror and ice. Consequently, during cold rainy days when the dewpoint rises thru 32 degrees, the mirror transitions from frost to ice and proceeds to build up a little dome while happily reporting 31 degrees F. It was not uncommon to have every airport in a state reporting a 31 degrees F dewpoint after a warm front moved thru during the winter. Pitted and frozen mirrors were the number one maintenance headache of both the H083 and 1088. They won’t be missed.

  20. Posted Jan 11, 2008 at 5:00 PM | Permalink

    Let me add that besides drift of the sensor you have the problem of thermocouple junctions in the sensor reading electronics. Because of the small response of the RTDs microvolts matter. Any delta T in the instrument itself is going to be a problem. You have to actually look at the design of the circuit board to see if the design was done right. And even then if the board is not aluminum clad in the sensor area and any major heat generators kept off the sensor board, well, you have problems. Then there is the thermal mass problem.

    As I said in a previous thread on the subject. Any reasonably competent engineer can get 12 bits of accuracy out of a 24 bit converter. There are any number of subtle mistakes that can be made. And we haven’t even discussed contaminants from the air causing leakage currents.

    BTW if you check with NBS (now NIST) they will give calibration intervals that depend on desired accuracy and environment.

    My guess is that the weather people have no full up calibration schedule. They pop a resistor into the circuit and check the electronics. Very good. Not the same as calibration.

  21. Larry
    Posted Jan 11, 2008 at 5:08 PM | Permalink

    18, industrial RTDs are never calibrated. The electronics are calibrated against a resistance standard, as DeWitt describes. There’s a reason why they use platinum (and it isn’t because it’s cheap).

  22. Posted Jan 11, 2008 at 5:12 PM | Permalink

    Carl,

    There are special dew point mirror sensors that are designed for harsh environments (coal plant smoke stacks). They are much more expensive than the type used for HVAC.

    You keep ice off the mirrors by reversing the peltier junctions periodically to cause the ice to melt.

    To make all this work you have to continually get feedback from the field and adjust your designs accordingly.

    As I said, if accuracy is important, two different kinds of sensors must be used and checked against each other continually.

    Personally I like the transistor sensors for temp. They are fairly linear and produce a large signal. However. All temp sensors are subject to mechanical relaxation with age to varying degrees.

    The best way to think of all this is that the universe is made of rubber of varying stiffness and response to the environment.

    Measurement is hard. Accurate measurement harder.

  23. Larry
    Posted Jan 11, 2008 at 5:12 PM | Permalink

    20, you must be thinking of something else. These are low impedance devices. They’re used in bridges, and they typically are 100 ohms. And the kinds of problems that you’re describing are design problems, not maintenance problems. If it’s properly designed, it won’t have those problems.

  24. DeWitt Payne
    Posted Jan 11, 2008 at 5:15 PM | Permalink

    Re: #18

    It is expensive to keep instruments in calibration.

    But not as expensive as acting on bad data from poorly calibrated instruments.

  25. Posted Jan 11, 2008 at 5:20 PM | Permalink

    Larry,

    #21 – if you don’t put every instrument in a chamber and run a curve on its actual operation and periodically bring units in from the field for rechecking what you have is not a measurement system. What you have is a crap shoot. Even if your thin film RTD is perfect and never changes over time. How do you know you don’t have an infestation of critters?

    Ever hear of traceable to NIST? To get that you have to follow standards.

  26. Carl
    Posted Jan 11, 2008 at 5:26 PM | Permalink

    Simon,

    guess I shoulda said the H083/1088 mirror sucked 🙂

  27. Posted Jan 11, 2008 at 5:40 PM | Permalink

    Larry,

    It is exactly the low impedance that is the problem. If you want high output out of them you have to pump in a lot of current – that causes heat. With low current you have signal level problems. In any case bridge sensors need amplification. RTDs are notorious for their small delta V/delta T. Almost as bad as thermocouples. Even with thin film RTDs you have the problem of substrate relaxation over time, or stress changes caused by the way they are mounted. RTDs could be considered strain gauges in other contexts.

    Self heating problems are one of the chief error sources in electronic temp measurement with any kind of sensor. Worse with thermistors. Present in all sensor types. What you would like to do is periodically energize them. However, there is danger there as well due to heat stress/unstress changing them.

    Any measurement of a system changes the system. Sad but true facts of life.

    As someone pointed out, glass thermometers are good in that respect. However, that introduces operator error possibilities.

    The only way to be sure is periodic calibration. Esp if you hope to keep the errors over time below .1 deg C.

  28. Posted Jan 11, 2008 at 5:44 PM | Permalink

    Carl,

    Ultimately they all suck. The smoke stack sensors can last longer between calibrations. Note that I say “can”. Nothing is guaranteed.

  29. Posted Jan 11, 2008 at 5:48 PM | Permalink

    Trust but verify is not just for arms races.

  30. Carl
    Posted Jan 11, 2008 at 5:52 PM | Permalink

    H083/1088 calibration was performed quarterly and consisted of switching in precision resistors to test the electronics and comparing the current temperature reading to a sling-psychrometer. A value +/- 1.5 degrees F was considered acceptable (evidently, tree rings have better accuracy:-/). We didn’t bring an ice bath with us. Even if we had, the fan at the bottom of the H083 sensor would’ve resulted in something like a fountain. At NWS offices, the temperature was also compared every morning to a thermometer in an instrument shelter. Acceptable tolerance depended on how close the shelter was to the sensor. +/- 2 degrees F was typical. Doing the comparison during the morning turned out to have unfortunate consequences as we noticed when the ASOS’s went in and we could compare the temps 24/7. Oops. Please understand, though, that back in the 90s we didn’t know our data was going to be used to save the world. Hell, we were just trying to keep the meteorologists and airport directors off our backs.

  31. Evan Jones
    Posted Jan 11, 2008 at 6:20 PM | Permalink

    So, lessee. The idea is to pay 5 garbonzos for a thingie that doesn’t need to be checked–that needs to be checked. That costs another thou just to repair. (And the way they are checking it is wrong, anyway?)

    Maybe it would be more reliable just to go with a $1000 Stevenson Station and give the long-suffering volunteer the extra swag? (And actually make ’em walk the lousy 100 feet this time? Free the CRN4?)

  32. Larry
    Posted Jan 11, 2008 at 6:22 PM | Permalink

    27, part of the difference here is industrial instrument people aren’t foolish enough to believe that you can keep a temperature instrument of any kind within 0.1 C over an extended period. That’s an unrealistic spec, regardless of how many people publish it.

  33. DeWitt Payne
    Posted Jan 11, 2008 at 6:30 PM | Permalink

    Larry,

    My Standard Platinum Resistance Thermometer was good to millidegrees, but it wasn’t a field instrument by any measure. The resistance bridge and associated null meter took up a lot of space and the bridge had mercury wetted contacts. A zero check against a water triple point cell was part of the SOP. I actually thought about buying a zinc freezing point cell too.

  34. MikeF
    Posted Jan 12, 2008 at 2:50 AM | Permalink

    I would like to address a couple of points here. It seems that the prevailing opinion is that you cannot design a temperature sensor that does not require periodic calibration. It also seems that some people think that an RTD is the best way to measure temperature. I believe that those opinions are not entirely correct.
    The last point is my personal opinion: given a perfect thermometer, it is still extremely hard to measure air temperature accurately.

    RTDs have 2 advantages over NTC thermistors: linearity and range. Since we are measuring ambient temperature at places where people live, ambient temperature falls well within the normal operating range for thermistors. If the designer decides to use fairly simple digital processing instead of Op-Amps then linearity stops being an issue at all. An RTD has an extremely low temperature response, very low resistance and a host of other problems associated with it. It’s great if you want to measure LN2 and melting lead temperatures using the same sensor. Otherwise it sucks.
    High quality NTC thermistors have about 1 mK per year long-term stability. Good “zero-drift” op-amps will get you excellent long-term stability and temperature performance. You can have enough gain to get you less than a mK worth of self-heating. You can get a 24-bit ADC for under $4. You can get a nice micro-controller to deal with the thermistor’s nonlinearity and for calibration for under $6. I can design and build a sensor that has relative stability and repeatability of better than 0.1 degC for under $50 of cost (including PCBs, assembly and testing in 1K volumes). I have designs in the field that use thermometers that are 0.1 degC stable for 10 years without any maintenance or calibration (in a telecom temperature range, but I see no reason why they wouldn’t work for a wider range with minor modifications). That stability is absolutely critical for proper operation of other components, and has been confirmed by many thousands of hours of reliability testing and field operation over many years. That said, if your goal is to measure outdoor temperature remotely you still need to periodically verify proper operation of the whole measuring system. Specs of a thermometer are one thing; real world performance out in the field, in outdoor conditions, is a very different thing. I agree that you have to have some form of independent verification (whether by manual calibration, visual inspection, redundant measurement – and it has to be real redundancy, not just 2 sensors in the same enclosure – etc.)

    Despite all this I can not say that I can measure temperature with absolute accuracy of 0.1 C. Or keeping same accuracy when you change sensors. Or measuring air temperature (as opposed to measuring temperature of a solid object). Based on my experience with temperature measurements (and granted, I never really cared about absolute temperature measurements, only about using my temperature measurement to make other things temperature independent) I would have trouble measuring temperature to under 1 degC of absolute accuracy.
    The following are my personal observations. I have not read about any research into it, but I did some to understand effects I was seeing. I was trying to understand why 2 sensors would show different temperatures depending on their actual positions relative to each other (they were within a few inches of each other). I came to the conclusion that blowing air across an object actually changes its temperature. And no, I’m not talking about cooling a sensor that is being self-heated. Expanding air cools, compressed air heats. Turbulent air is being compressed or expanded at different points in time and space. Blow air just right on an object of the same temperature and it gets hotter or cooler. There are a few companies that make vortex coolers: blow air through a specially designed passive device with 2 outputs and you get cold air coming out of one and hot out of the other. I had observed over 1 degC fluctuations in temperature by just changing airflow patterns over sensors at the same temperature as the ambient air. Pretty mild airflow too. As far as I’m concerned, a fan is a huge no-no if you want to measure air temperature accurately. You’d probably want to have a large enclosure which makes the air as still as possible, and then embed the sensor inside a nice heavy block of a few pounds of solid copper located in the middle of that enclosure.
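The microcontroller linearization mentioned above is commonly done with the Steinhart-Hart equation. This sketch uses illustrative coefficients for a generic 10 kOhm NTC; a real design fits a, b, c to three calibration points per device:

```python
import math

def ntc_temp_c(r_ohms, a=1.129148e-3, b=2.34125e-4, c=8.76741e-8):
    """Steinhart-Hart conversion for an NTC thermistor:
    1/T = a + b*ln(R) + c*ln(R)**3, with T in kelvins. The default
    coefficients are illustrative values for a generic 10k thermistor."""
    ln_r = math.log(r_ohms)
    inv_t = a + b * ln_r + c * ln_r ** 3
    return 1.0 / inv_t - 273.15

# With these coefficients a 10 kOhm reading corresponds to about 25 C.
print(round(ntc_temp_c(10000.0), 1))
```

In a microcontroller the same math is usually done with fixed-point arithmetic or a lookup table, but the fitted curve is the same.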

  35. Posted Jan 12, 2008 at 3:27 AM | Permalink

    #30 Carl,

    I was expecting calibration would have to be within +/- 2F for purposes of figuring runway length and air speed for take off.

    So that sounds about right to me. Also the 3 month check is not bad either. About what I would expect to prevent more than another 1 deg F drift.

    Other than the accuracy it seems like a pretty good program. If you had to recalibrate was the error at the time of recalibration logged?

  36. Posted Jan 12, 2008 at 3:44 AM | Permalink

    #34 Mike F.

    Thanks. One point:

    You’d probably want to have a large enclosure which makes the air as still as possible, and then embed the sensor inside a nice heavy block of a few pounds of solid copper located in the middle of that enclosure.

    Lags. You no longer have a minute by minute or “instantaneous reading”. You now have an average. Probably a 10 to 30 minute average without moving air across the block. Of course once you start moving air you have real problems again.

    BTW how did you handle thermocouple effects? (less important for the larger signal a thermistor can produce). How did you handle linearization? When I had to use a thermistor (aircraft engine temps) I did a segment table and interpolation to 32 bits. I could have done 16 bits but I wanted to eliminate all significant rounding errors (I managed under .1 deg F IIRC)

    I looked into thermistors a while back and found that they were used for oil well logging with an individual calibration curve for each. They used the standard thermistor equation which is some kind of funny inverse second order job (if memory serves).
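The segment-table-and-interpolation approach mentioned above can be sketched as a piecewise-linear lookup. The table values here are purely hypothetical; a real table would be fitted from chamber calibration of the finished instrument:

```python
import bisect

# Hypothetical calibration table: (raw ADC reading, temperature in deg C).
TABLE = [(1000, -40.0), (8000, 0.0), (20000, 25.0), (40000, 85.0)]

def lookup(raw):
    """Piecewise-linear interpolation between calibration points,
    i.e. the segment-table approach described in the comment above."""
    xs = [x for x, _ in TABLE]
    i = bisect.bisect_right(xs, raw)
    i = min(max(i, 1), len(TABLE) - 1)          # clamp to the table ends
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)

print(lookup(14000))   # midway between 8000 and 20000 -> 12.5
```

More segments give a closer fit to the thermistor curve; the trade-off is table storage versus residual linearization error.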

  37. Peter D. Tillman
    Posted Jan 12, 2008 at 10:33 AM | Permalink

    The most famous problem occurred in Tucson, AZ in the mid 1980’s where a malfunctioning HO83 unit created dozens of new high temperature records for the city, even though surrounding areas had no such measured extremes. Unfortunately those new high temperature records includign the all time high of 117 degrees F, became part of the official climate record and still stand today.

    I spent the “official” 117º day logging core in a sheet-iron shed at the Mission mine south of Tucson, and can attest that, even if it wasn’t a record, it was pretty damned hot.

    Cheers — Pete Tillman

  38. Carl
    Posted Jan 12, 2008 at 10:38 AM | Permalink

    Simon,

    I’m trying hard to remember here… I’ll see if I can dig up a manual… but as I recall there was no adjustment for the temperature sensor. If it didn’t meet specs it was replaced. Where I’m a little fuzzy is during the switching in of the precision resistors. Could we make adjustments there? Possibly… either that or we replaced the board. The temperature was hardly ever out of spec, though, so it didn’t make much of an impression on me. There were several adjustments for dewpoint that were always required (we just cleaned the mirror) and are therefore stamped indelibly on my brain.

    We did fill out a calibration sheet that was sent to headquarters. Any sensor or board replacement would’ve been noted there. Would we have noted the temperature before and after the repair? I don’t remember if that was on the form.

    As for the seemingly wide tolerance (+/- 1.5 degrees F at the sensor), that’s probably due to budget and the requirements of the airports. If airports had wanted +/- .1 degrees C, they should’ve asked for it and upped the NWS budget a couple billion… or planted some trees.

  39. steven mosher
    Posted Jan 12, 2008 at 6:33 PM | Permalink

    Here is the future system guys:

    Click to access Goodge.pdf

  40. Posted Jan 12, 2008 at 11:29 PM | Permalink

    #39 Mosher,

    Fascinating report. It just goes to show that measurement is hard and accurate measurement is harder.

  41. Posted Jan 13, 2008 at 5:09 AM | Permalink

    Seems like the USA this Winter needs more Anthony Watts per sq m. Is it true they named this unit after you, Anthony?

    OT, some road traffic revenue raising devices here were calibrated using a music tuning fork and the judicial system accepted that as reasonable.

    On thread, if you think air temperature measurement is hard to sustain with accuracy, think about ice core temperature reconstructions and the accuracy claimed.

  42. Posted Jan 13, 2008 at 11:34 AM | Permalink

    In the 1990s I used Austrian-made KRONEIS dew-point sensors for many years in a poorly ventilated cavern to measure humidity levels between 95% and 100% (which are impossible to measure with capacitive sensors, which get soaked and never quite recover without strong air ventilation). Initially, all dew-point sensors worked fine for a couple of weeks or even months; then, despite a seemingly robust and sturdy construction, they drifted away.
    So my experience with mirror-chilled dew-point sensors is not the best. BTW, I still think that measuring humidity correctly and accurately over long times is the toughest of the tough tasks.

  43. George M
    Posted Jan 13, 2008 at 9:28 PM | Permalink

    Closely reading Steven Mosher’s (39) future systems paper is telling. Some of it sounds like it was written by Anthony Watts in a previous life. But, even with all this inherent inaccuracy, what is the probability that all the temperature measurements will drift in the same general direction, towards hotter? Or is that due to the ‘corrections’?

  44. aurbo
    Posted Jan 14, 2008 at 11:46 AM | Permalink

    Having spent 6 years involved with the oversight of the NWS modernization and implementation program back in the 1990s, I have some experience on ASOS problems.

    From the outset, the original design was a disaster. An early document outlining some of these problems can be found here.

    Many of the subsequent fixes in the original equipment (HO-83) and later the HO-1088 sensors were often patches rather than a total revamping. The original contractor for ASOS had never built meteorological instruments prior to their contract, and when deficiencies in the product were noted by the oversight committee, it was too late to start from scratch as budget and personnel requirements were already committed to eliminating the NWS observer employee specialty and new money was not available.

    One should also check out this detailed comparison of ASOS and CRN temperature measurements.

  45. Geoff Sherrington
    Posted Jan 16, 2008 at 4:49 AM | Permalink

    Re # 44 aurbo

    A lovely feature about Climate Audit is that one can ask a question and be rewarded with information that would be near impossible to find on a search. I had wondered aloud a few months back if spectral response, especially IR, affected different types of temperature devices. Now I have a comparison of two. It would be interesting to see how these compare in a similar experiment with the historic Hg in glass thermometer. The closest I have got does not answer this question, but is worth a revisit. Work in Australia by Jane Warne, Bureau of Meteorology,

    Click to access ITR649.pdf

  46. Geoff Sherrington
    Posted Jan 16, 2008 at 8:19 PM | Permalink

    ASOS (Automated Surface Observing System) reliability.

    I read the report first referenced in aurbo #44. Although the report is late 1990s, records taken in that period that relied on the ASOS system were not without problems. I have selectively quoted two of the worst parts of the report, which admittedly does have some better news sprinkled through it. ASOS costs at that time were $US351 million and procedures such as aircraft safety and flood prediction relied on it to a degree.

    From the report:

    Furthermore, critical failures and errors that were caught and corrected by weather observers were not included in these results. Once these errors are included, ASOS failure and error rates are 730 and 1,680 hours respectively.

    (Text deleted for brevity here. Not all errors are temperature related. Some are precipitation, wind etc. A MTBF of 700 hours for a remote, automated, long-term system is simply deplorable.)

    The reason that ASOS’ reliability problems were not discovered during testing and corrected prior to system deployment and operation is
    that ASOS program management repeatedly chose to defer testing of
    mean time between failures. Instead, the program office relied on
    the results of a model run by the contractor to predict system
    reliability, rather than testing reliability.

    One might conclude that some current uses of USA observations are beyond the capability of the data because of errors that cannot be adjusted in hindsight.

  47. Raven
    Posted Jan 16, 2008 at 8:30 PM | Permalink

    There is a message in this that goes way beyond the problems with particular sensor:

    Instead, the program office relied on the results of a model run by the contractor to predict system reliability, rather than testing reliability.

  48. Geoff Sherrington
    Posted Jan 17, 2008 at 3:44 AM | Permalink

    Yes Raven, I thought the same which is why I included that quote. Models? Reliability? No testing? Spend $351 million?

    Dreamin’, as we say here.

  49. MarkW
    Posted Jan 17, 2008 at 5:30 AM | Permalink

    #46,

    Seems that relying on models instead of actual data is rife in the climatology community.