Another "High Quality" USHCN station

This picture, taken by www.surfacestations.org volunteer Don Kostuch, needs no commentary from me other than to say that it is the Detroit Lakes, MN USHCN climate station of record.

Detroit Lakes, MN - USHCN station

The complete set of pictures is available in the online database here.

Here is the GISS plot:

NASA GISS Plot RAW - Detroit Lakes, MN

SM Note: GHCND identification 42500212142

43 Comments

  1. Fred
    Posted Jul 26, 2007 at 11:22 AM | Permalink

    I ran a quick, super-crude analysis of the Detroit Lakes data, removing the April-October time frame when the air conditioning would be used in Minnesota. I still came up with a significant jump and sustained high plateau starting in 97/98. Was there another change other than the introduction of the units?

  2. SteveSadlov
    Posted Jul 26, 2007 at 11:25 AM | Permalink

    In addition to the plume from that heat pump, I would reckon that this site has major issues with condensation and radiation fog.

  3. EP
    Posted Jul 26, 2007 at 11:27 AM | Permalink

    Something for the skeptics: wouldn’t it be possible to set up an accurate thermometer near some of these stations, where a less biased reading could be made and compared with the official one? Do we know the times at which readings are officially taken?

  4. Posted Jul 26, 2007 at 11:29 AM | Permalink

    Fred, well, it was moved to an all-around warmer place; see the other pix in the image database. It’s within about 8-10 feet of a building, and also a swamp. The building is a radio station. Note the thing that looks like a fence rail. It’s not. That’s the elevated enclosure for the transmission line to the tower. At 1,000 watts driven into the antenna, there’s some loss as heat in the transmission line, but mostly it would be from the transmitter, which is very likely just on the other side of the wall. The A/C units likely run even in winter. At my radio station, KPAY, they run year round too.

  5. Posted Jul 26, 2007 at 11:40 AM | Permalink

    I just took a look at the MMS data from NCDC, and it shows the station has always been located at the radio station, at the same lat/lon, since 1951. Assuming the MMS data is correct, it also appears that they were using the Stevenson Screen up to August 2006, when they switched to the MMTS temperature sensor seen in the photo.

    So something had to change in the environment near the sensor. And here it is in the notes from our survey form:

    Curator notes: A/C units were moved from roof of building to current location 5/5/1999.

    But that raises the question: if the A/C units were already in place, why didn’t the NOAA/NWS COOP manager insist on a better location when the MMTS was installed?

  6. Joel McDade
    Posted Jul 26, 2007 at 11:41 AM | Permalink

    The A/C units look new. If they are, I wonder whether they replaced older equipment at the same location.

  7. bernie
    Posted Jul 26, 2007 at 2:24 PM | Permalink

    #6
    I had the same impression, plus it looks like a heat exchanger rather than an A/C unit. Is there an HVAC expert on call?

  8. JP
    Posted Jul 26, 2007 at 2:58 PM | Permalink

    Let’s see: the location is in a swamp, so diurnal changes in surface temps will be smaller due to the humid environment. The A/C exhausts are good for another couple of degrees, and because it sits in a lowland relative to the surrounding topography, surface winds will have lower speeds. I’m sure it meets USHCN standards.

  9. CAS
    Posted Jul 26, 2007 at 3:15 PM | Permalink

    Has a new trend been identified to be called RHI, or Rural Heat Island?

  10. Ian Rae
    Posted Jul 26, 2007 at 4:17 PM | Permalink

    Wow, how is anyone supposed to measure trends of 1/100 of a degree per year with things like A/C spew cluttering up the data!

  11. Paul
    Posted Jul 27, 2007 at 2:39 AM | Permalink

    OK, this is from a complete novice.

    But I have looked at the post on this blog that describes the adjustments made to the surface record (overwhelmingly positive adjustments). Then I look at all the potentially problematic recording sites, and all of them indicate a need for a negative adjustment (essentially due to obvious development: introduced tarmac, increased car traffic, and air-conditioning units that obviously weren’t there 50 years ago).

    RealClimate is being completely disingenuous when they claim that this work is of no real import.

  12. SteveSadlov
    Posted Jul 27, 2007 at 1:17 PM | Permalink

    RE: #11 – It is clear to me that those who espouse and promote acceptance of the “killer AGW” scenario are highly threatened by serious auditing of the surface record and of the network used to obtain it. I have been in both academe and industry, with a career spanning 25 years. I am responsible for worldwide efforts with budgets well into the millions. I can well recognize when someone is trying to hide poorly done, low-quality science. In this case, I smell blood.

  13. Jim Edwards
    Posted Jul 27, 2007 at 1:33 PM | Permalink

    #6, Joel McDade:

    If you’re looking at the units on site, it’s usually pretty easy to tell if they’re completely new installations. I can’t tell from this angle and can’t access the other photos. If a 3-ton unit is replacing a 3-ton unit, people generally use the existing ductwork for packaged units (which these appear to be from this angle…), or the same insulated copper refrigerant lines for split-system condensing units (as seen in the ‘Rain in Maine’ photos a few threads ago…). Ducted units usually have to have a new transition fabricated, so one might see old ductwork connected to new ductwork, or old brown oxidized copper refrigerant lines connected to shiny fittings on the unit if it’s a retrofit. Other clues are fairly old electrical disconnects next to brand new units, or the outline of a slightly differently shaped prior unit on the concrete pad a unit was sitting on. Also look for old, oxidized copper drain lines with new copper fittings within a foot of the unit.

    If a hypothetical 3-ton unit from the 1980s were replaced with a new (improved, and warmer! …) 5-ton unit, it would be harder to gauge by looking, b/c most of the ductwork, refrigerant lines, and other facilities would be undersized and replaced with new. Sometimes a clue in these cases would be an old, abandoned electrical disconnect near a new larger one for the new A/C unit. Old refrigerant lines may be abandoned on site, b/c they’re a hassle to remove.

    Sometimes when you see two units of different sizes, as you do here, you’ll notice that the smaller, older unit is simply shut off at the electrical disconnect and abandoned in place – so that will tell you quite a lot about the history of the place:

    20 yrs ago, they had 4 employees w/ 2 computers and a coffeepot. It was a 2-ton load but they oversized the installation and put in a 3-ton unit.

    Over the next 15 years they added 3 dedicated employees who were willing to sacrifice for the cause and work in close quarters. They added a copy machine so they could spread the word on AGW. The heat load went up to 4-tons. On hot days they each melted a glacial ice core.

    5 yrs ago the (now) middle-aged director finally sprang for some carbon credits and had a 5-ton unit installed next to the old retired unit. The quarterly fundraising newsletter mentioned that they had stopped using the old A/C unit and were now carbon-neutral.

  14. steven mosher
    Posted Jul 27, 2007 at 1:36 PM | Permalink

    Anthony, have you seen the open letter from the state climatologists
    asking Congress to FIX the network?
    It’s linked on junkscience.com today.

  15. Jeff C.
    Posted Jul 27, 2007 at 1:55 PM | Permalink

    This has gone beyond ridiculous. The entire network needs to be surveyed, but we’ve seen enough that it’s clear these aren’t a few isolated incidents.

    I wonder if Senator Inhofe’s office has been following this, as it seems clear an investigation is warranted. The Coop site operators are unpaid volunteers; I appreciate their efforts and don’t blame them. However, there are highly paid professionals working for NWS, NCDC and GISS who have no legitimate excuse. I’m a Systems Engineer responsible for signing off on the performance of commercial satellite payloads before they are placed in orbit. It is my job not only to analyze the test data, but to sign off on the integrity of the data. If I don’t perform due diligence, I’m responsible for a potential lost satellite that can cost the company hundreds of millions of dollars. Why is it any different when the taxpayers are footing the bill?

  16. Anthony Watts
    Posted Jul 27, 2007 at 1:58 PM | Permalink

    RE14 I have. I saw it last week and I’m in agreement with it, and after sending a letter to Dr. Knight, the chair of the AASC, to ask if it would be OK to present it, I’m scheduling a meeting with my Congressman.

  17. Anthony Watts
    Posted Jul 27, 2007 at 2:01 PM | Permalink

    I got this letter today:

    Dear Mr. Watts,

    I fully agree with the points you made in the interview just released on FOX. The terrible exposures at the majority of NWS Coop sites are inexcusable [the siting, not your points] and call into question whether the data is worthy of any serious use.

    I spent 2.5 years on a sabbatical assignment at NWS HQ [ended on 10/1/06] attempting to help with COOP modernization. I will tell you I ran into a ‘closed shop’ when it came to the modernization. Outside ideas were not welcomed. I tried to bring the best ideas from the Oklahoma Mesonet with me to Silver Spring. Nobody wanted to hear anything about our successes and our standards. I even wrote three documents on site standards and documenting metadata. Furthermore, I developed the interrelated software designed to store and make available all metadata [excepting the names of individuals].

    Those documents, written in 2005, are still on the shelf and will go nowhere. The software developed has been turned off. What I gave the NWS was a package better than what we used [at the time] in the Oklahoma Mesonet, but the Mesonet has since taken advantage of the developments made for the NWS.

    I say: KEEP IT UP.

    [name withheld for privacy reasons]

  18. bernie
    Posted Jul 27, 2007 at 2:17 PM | Permalink

    #10
    The apparently settled wisdom is that when measuring something, many repeated estimates of that thing give you a more precise measure of the true value than the precision of any single estimate. For example, if one person guesses my height, they are likely to be off by a certain amount. If 20 people guess my height, the average of their guesses is likely to be closer to my true height than any single estimate. With 100 estimates, even greater precision is achieved. This is one of the arguments against Anthony’s efforts to identify possible problems with the measurement stations; see for example the July 22 entry and thread at http://www.rabett.blogspot.com.

    Now, for certain things and under certain conditions, this is true. However, it does not apply to the existing temperature record. First, there is no real “thing” equivalent to my height; the record is a collection of many things that are mathematically combined into a single number. Second, even if there were such a thing, how the measurement is taken makes a difference. For example, if the 100 people who estimated my height stood at varying distances from me and at varying altitudes relative to me, in what way are they still measuring the same thing, especially if no record is kept of their positions relative to me when they estimate my height?

    Bottom line: more accurate recordings of temperature, taken under the same conditions (or under precisely known conditions with known effects on the temperature record), are needed to ensure reasonable estimates of the actual temperature. The folks here are helping to do exactly what is needed, though it is causing significant embarrassment to, and anxiety among, those who have been assuming a high-quality data set.
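    A minimal sketch in Python of the “settled wisdom” half of this argument, with made-up numbers: independent, unbiased guesses really do average toward the truth, with the typical error shrinking roughly as 1/sqrt(n). The catch, as the comments below point out, is the word “unbiased”.

        import random

        random.seed(42)
        TRUE_HEIGHT = 70.0  # inches; the "true" value everyone is estimating (invented)

        def average_guess(n_guessers, spread=2.0):
            # Each guess = truth + independent, zero-mean error (no shared bias).
            guesses = [TRUE_HEIGHT + random.gauss(0, spread) for _ in range(n_guessers)]
            return sum(guesses) / n_guessers

        for n in (1, 20, 100, 10000):
            # Typical size of the error in the averaged guess, over 200 trials.
            errs = [abs(average_guess(n) - TRUE_HEIGHT) for _ in range(200)]
            print(f"n={n:6d}  typical error ~ {sum(errs) / len(errs):.3f} inches")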

  19. Bill F
    Posted Jul 27, 2007 at 2:28 PM | Permalink

    More importantly, if all of the stations trying to measure temperature are handicapped by MMTS units with short cables that prevent them from being placed any significant distance from a building, then we should assume that ALL of the units suffer from the same types of biases documented by Anthony’s work so far, until evidence from specific sites proves otherwise.

  20. Anthony Watts
    Posted Jul 27, 2007 at 3:01 PM | Permalink

    RE19: I think one of the most important observations from this survey so far is that the implementation of the MMTS electronic sensor, as a replacement for the mercury max/min thermometers and the CRS (Stevenson screen), has indeed forced observations to be made closer to the observer’s domicile or office, inviting positive biases.

    Oddly, according to the MMTS specifications: http://www.srh.noaa.gov/ohx/dad/coop/specs-1.html

    The cable can go up to 1/4 mile in length and is “silver coated solid copper”. Yet almost every installation I’ve seen has a cable length far shorter than that. That means the decision to use a shorter cable could come down to one of four things:

    1) I don’t want to dig a ditch and lay conduit that far.

    2) The original spec for 1/4 mile didn’t hold up in field performance tests, and the actual usable max length is less.

    3) The “silver coated solid copper” cable was too expensive to continue deploying for longer distances.

    4) Local siting issues imposed by the observer i.e. “my yard isn’t that big” or “to put it at the same location as the CRS was we’ll have to dig up/tunnel under my driveway, walkway, etc. and I don’t want to do that.”

    Since MMTS deployment started in the late ’80s, its use has gradually spread through the network. While I know there is a known, lab-derived positive bias for MMTS vs. CRS shelters that is applied as a correction, I wonder whether any “domicile proximity bias” has been accounted for. And given the gradual nature of the MMTS incursion into the network over the last 15 years, would it be noticed as an artifact if no correction were applied, or would it masquerade as a genuine warming trend? In some cases, the step offset may not be large enough to be detected in the noise, or as apparent as it is in the Detroit Lakes GISS plot.

  21. Jeff C.
    Posted Jul 27, 2007 at 3:11 PM | Permalink

    From the NWS San Diego website (http://www.wrh.noaa.gov/sgx/cpm/temperature.php?wfo=sgx)

    “Currently, the MMTS requires a cable to connect the sensor with a display. Future plans are for wireless displays. This would eliminate many of the problems associated with cabled systems.”

  22. BarryW
    Posted Jul 27, 2007 at 3:31 PM | Permalink

    Re 18

    Your example, and the opponents of the survey, assume the errors are random. If a bias exists in the data (in your example, say you’re wearing high-heeled cowboy boots), the answers will converge, but on the wrong value. It’s obvious from the survey so far that the errors are far from random.

  23. David Smith
    Posted Jul 27, 2007 at 3:41 PM | Permalink

    Re #20 The co-op observer I spoke with in Liberty, TX expressed surprise and mild amusement at the MMTS placement at his site. For many years he walked a long distance in all weather to take readings in the Stevenson box, presumably because sensor distance was important; then, when the MMTS and indoor display came along, they placed the sensor just a few steps outside his back door.

    Those of us who live in areas with grass and light wintertime frost know that proximity to structures makes a difference in the amount of frost. Generally the grass within 15 feet of a house or tree gets no frost, while the farther from the house or trees one goes, the greater the frost. Locating the MMTS closer to structures, as in what I’ve seen in the photos, would, in my estimation, raise the overnight minimum temperature by some amount. It may only be tenths of a degree, but that’s big enough to impact a climatological record.

    I wonder what the outrage would be if drug makers had a similar lack of quality control in their efficacy and no-harm trials. It’s simply amazing.

  24. David Smith
    Posted Jul 27, 2007 at 4:09 PM | Permalink

    By the way, I plan to survey a half-dozen USHCN sites in the lower Mississippi River valley in August. That is one of the few regions in the US, or world, which reports little or no net warming in the last 100 years.

    I wonder what I’ll find nearby – a collection of BBQ pits and AC units, or just grass. The Baton Rouge site, already audited, looks well-placed, surrounded by grass. And it shows no net warming.

  25. Bill F
    Posted Jul 27, 2007 at 4:18 PM | Permalink

    Anthony, I am just guessing, but I suspect that the MMTS units were probably sent out with a standard cable length (probably very short, based on a competitive bid from a government supplier). Getting a longer cable probably meant a special request or having to fill out extra paperwork, and it is doubtful that the people doing the installations felt like doing that extra level of effort…especially if it would mean they had to dig a longer trench to bury the cable.

  26. Jonathan Schafer
    Posted Jul 27, 2007 at 4:28 PM | Permalink

    #20,

    Why don’t they simply design one with a wireless transmitter? Yes, it may be a little more expensive, but it would resolve all the siting issues caused by cable restrictions.

  27. Sam Urbinto
    Posted Jul 27, 2007 at 4:42 PM | Permalink

    #3: Another reading might tell us something. (Impromptu experiments by some here have shown the reading does change radically depending on where the thermometer is.) But you’d really have to set up an identical setting, change one variable, and measure it over time. Then put that back and change a different variable, and so on. (Which is rather what the paint test is like, if it’s still going on.) Certainly another sensor in a different, “better” setup could be used to see if the anomaly is different, but why not just make the site better? That still does nothing for the historical record, but it would show whether the siting makes a difference in the anomaly. It seems obvious that it would. But that’s not the important part.

    The goals (and/or byproducts) of this project, though, seem to me to be:

    1. Photograph the stations so as to begin a photographic record.
    2. Identify which sites do not meet the standards.
    3. Identify which stations may be contaminated, and remove the data or improve the station or both.
    4. Prove or disprove the notion that this is a high quality network.
    5. Develop a baseline by which to integrate the old data into whatever the CRN produces.
    6. Develop a baseline by which to determine the sitings of the CRN network.

    And so on. It seems that there are not enough datapoints yet to do many of these things, but the goal is not to determine how much of an error there is or what exactly comprises it.

    My train of thought is this: if there are published standards on site settings and audit requirements for data validation, there is a reason those standards exist: to ensure the data is meaningful, because it is indicative and accurate. If a station does not meet those standards, that is prima facie evidence the data is not to be trusted. The requirement is not to prove that the data is good or bad, that it should be trusted or not; the requirement is to point out that the standards are not met, and therefore the station should either be improved or the data ignored.

    Put another way, it is not anyone’s “job” to prove the data is not good, nor why; it is the job of whoever collects the data to prove the data is good. The way to prove it’s good is not to provide adjustments that may or may not be correct, not to explain why we need adjustments, not to say “well, the site is supposed to reflect some other area, so that’s why it’s there,” and not to question the motives or sanity of the people pointing out the failure to meet the standards. The way to “prove” it’s good is to have it meet the existing standards that are in place to ensure the data is good.

    As for that law of large numbers thing that somebody may bring up: I don’t care how many “guesses” you have as to the temperature at a given spot; you won’t get any degree of accuracy in the “measurements” unless you have a device to measure them, and measure them to two decimals of precision if you’re going to track them to one. Has anyone done any experiments to see how good the best guess anyone has ever made of the ambient temperature at their hand or ears or elbow or forearm is, and how close the best at it comes?

    Lastly: I’m not a skeptic. I don’t think most people here are. We just want to know what is actually going on. I’m not skeptical of anything. I am curious as to what the answers are. I don’t care what they are. I just want them.

  28. crosspatch
    Posted Jul 27, 2007 at 4:45 PM | Permalink

    “Locating MMTS closer to structures similar to what I’ve seen in the photos would, in my estimation, raise the overnight minimum temperature by some amount. It may only be tenths of a degree, but that’s big enough to impact a climatological record.”

    The problem is even worse than that, in my opinion. If all of the stations were replaced with the new equipment in the same year, one would expect to see a large step change and then a continuation of some pattern, albeit slightly different than the pattern before the change.

    But instead, we see a gradual changing out of the stations which means we are going to see a gradual increase in the aggregate output of the network until all stations are replaced. This could be interpreted as some kind of “global warming” signal or trend when in fact it is nothing more than a trend in sensor type and location change. But at some point even that “trend” should level off as the overall majority of stations are changed.

    Still, there appears to be enough error in the location of these sites that I would have little faith in the data produced by the overall network. It seems safe to say that the overall quality of the raw data is low and the nature and degree of siting problems seem to preclude any blanket “adjustment” philosophy. Each site would require its own adjustment scheme that could only be determined by a survey of the site.

    The bottom line is that surface measurement is so poorly done at this point as to be useless in any kind of aggregate sense. The only way we can have any kind of useful record is to use data from only properly sited locations and this should be done in sort of a “blind” process where someone selects sites from only looking at the survey without knowing which station it actually is. That way there can be no influence from someone checking the temperature record of that site before selection in order to fit the station selection to any agenda. I suggest a set of stations be selected without knowing the locations of them and then looking at what the data from those locations show after they have been selected. I would be most interested in the results of such an experiment.
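    Crosspatch’s changeover scenario is easy to simulate. A rough sketch with invented numbers (1,000 stations, a hypothetical +0.5C step at each conversion, conversion years spread at random): even though the simulated climate is flat, the network mean drifts steadily upward and then levels off, exactly the artificial “trend” described above.

        import random

        random.seed(1)
        N_STATIONS = 1000   # invented network size
        N_YEARS = 20
        STEP_BIAS = 0.5     # hypothetical warm offset (deg C) added when a station converts

        # Each station switches to the new sensor in a random year.
        convert_year = [random.randrange(N_YEARS) for _ in range(N_STATIONS)]

        for year in range(N_YEARS):
            # True climate is flat at 0; the only "signal" is the equipment change.
            readings = [(STEP_BIAS if convert_year[s] <= year else 0.0)
                        + random.gauss(0, 0.3) for s in range(N_STATIONS)]
            print(f"year {year:2d}  network mean anomaly = {sum(readings) / N_STATIONS:+.3f}")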

  29. Sam Urbinto
    Posted Jul 27, 2007 at 5:09 PM | Permalink

    The trick is to find stations that have a history and that are:

    1. Good stations (meeting standards)
    2. Dispersed somewhat equidistantly while providing good coverage (so that the coverage is, well, good coverage)
    3. Free from unnatural (not-of-nature) influences on the temperature (e.g., no airport tarmac)

  30. A.Syme
    Posted Jul 27, 2007 at 8:13 PM | Permalink

    It may be a poor location, but it’s better than a parking lot!

  31. Evan Jones
    Posted Jul 27, 2007 at 10:24 PM | Permalink

    Re 10 & 18

    That principle only applies where there is no constant bias (all in the same direction).

    What’s going on here is that I am wearing elevator shoes (the microsite/exurban bias). So the 100 folks estimating my height will have a 2″ bias throughout, and they’ll come out roughly 2″ too high, no matter how many are doing the estimating.

    Naturally, the more people doing the estimating, the greater the odds that they’ll come out closer to exactly 2″ too high.

    But it will still be 2″ too high!
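    A companion sketch to the one after #18, again with made-up numbers, putting the elevator-shoes point in runnable form: give every guess the same constant offset, and no amount of averaging removes it; the mean just converges ever more precisely on the wrong value.

        import random

        random.seed(0)
        TRUE_HEIGHT = 70.0  # inches (invented)
        BIAS = 2.0          # the "elevator shoes": a constant offset every guesser sees

        def biased_average(n, spread=2.0):
            # Every guess carries the same 2-inch bias on top of its random error.
            guesses = [TRUE_HEIGHT + BIAS + random.gauss(0, spread) for _ in range(n)]
            return sum(guesses) / n

        for n in (10, 100, 10000):
            avg = sum(biased_average(n) for _ in range(200)) / 200
            print(f"n={n:6d}  mean estimate = {avg:.2f}  (off by {avg - TRUE_HEIGHT:+.2f})")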

  32. Posted Jul 28, 2007 at 6:23 AM | Permalink

    Wait a minute. If they change to wireless sensors, doesn’t that give our adjusters another opportunity to adjust?

    This could get more interesting.

  33. bernie
    Posted Jul 28, 2007 at 8:54 PM | Permalink

    #22 & #31:
    Evan and Barry:
    I absolutely agree: there are two issues, precision and accuracy. The systematic bias you mention is an issue of accuracy: are we measuring the right thing, my height or my height in 2″ heels? The question of how a rough measure of temperature, i.e., to the nearest degree, can produce trends measured in hundredths of a degree per year is about precision and the power of multiple estimates. Ian’s comment (#10) was about precision.

    To pursue the height analogy: we have people trying to estimate the average height of humans, where some are measuring people in Vietnam, some in Holland, some are measuring men, some women, some are measuring 10-year-olds, some 25-year-olds, and some 90-year-olds, and of course some subjects are wearing heels, some platforms, and some flip-flops. However, all those doing the estimation were told to measure 25-year-old men wearing no shoes, using a standard tape measure, and to keep detailed records of where, when, and whom exactly they measured. They simply have not followed instructions!

  34. gdn
    Posted Jul 29, 2007 at 12:20 AM | Permalink

    #20

    The “silver coated solid copper” cable was too expensive to continue deploying for longer distances

    Silver-coated cable, while about triple the cost of non-coated cable, isn’t terribly expensive in the context of a sensor array. I note the gauge isn’t specified, nor the number of pairs described, but silver coating is used on a variety of common communications cables. Most likely the connector heads cost about as much as the cables themselves. DS1s use two 22 AWG twisted pairs, each pair shielded, plus a ground wire. IIRC, the bulk price is about $150 per thousand feet, and about $500 per thousand feet with “silver tinning”. 30-pair armored ABAM is about $1.75/foot, but that’s unsilvered.

  35. gdn
    Posted Jul 29, 2007 at 1:06 AM | Permalink

    Multiple accurate estimates to the nearest degree can increase your certainty that the temperature is indeed within a range of plus or minus half a degree, but how can they possibly tell you more? Significant digits and all that.

    I can see how you might detect an apparent warming trend within that, but you would only be able to say that it was smaller than a degree, or greater than a degree (or two, three, four), right? Given that the readings were subjective, it could be even less certain than that.

    As I recall, there was a big to-do a few years back about average human body temperature and how it wasn’t 98.6F but rather 37C. 37C to the nearest degree means 36.5 to 37.5C, while 98.6F to the nearest tenth implies 98.55 to 98.65F: the precision implied by the converted figure was never in the original measurement.

    As for the USHCN stations, some of them appear to have been biased by as much as 4C in single episodes (of induced bias). If one in five stations showed that level, it would pull the whole network off by 0.8C. For a shift the size of a decade’s worth of warming, it would only take one in fifty stations to have that effect. That would be only about 24 stations.
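    A quick back-of-envelope check of those fractions (the network size is an assumption here: USHCN has roughly 1,221 stations):

        # Sketch of the arithmetic above; N_STATIONS is an assumed network size.
        N_STATIONS = 1221
        EPISODE_BIAS = 4.0  # deg C, the worst single-episode bias cited above

        for fraction in (1 / 5, 1 / 50):
            affected = round(N_STATIONS * fraction)
            shift = EPISODE_BIAS * fraction
            print(f"{affected:4d} stations ({fraction:.0%}) biased by {EPISODE_BIAS} C"
                  f" -> network mean off by {shift:.2f} C")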

  36. Steven B
    Posted Jul 29, 2007 at 11:01 AM | Permalink

    Re 35, and others,

    Multiple estimates can give you more significant figures, because you’re getting more data. 100 data points with 3 significant figures each is actually 300 significant figures of data. Nearly all of that is redundant (the same bit of information repeated), but if the errors are even partly independent, then it’s equivalent to more than 3 digits of information in total.

    (The business about systematic biases is a separate matter, and you’re all quite right about that. The error in the average converges (if at all) on the average of the systematic biases. There are other problems too.)

    As I point out in another thread (Milestone), there’s a limit beyond which you cannot go. Collecting more data eventually becomes entirely redundant, and you cannot simply go on taking bigger sample sizes to get indefinite improvement in accuracy. But it isn’t correct to say you can’t get any improvement by averaging, either. You can get a bit, but then it stops.

    The people running these networks don’t seem to have thought about it either, and until they do the calculation they can’t be sure they can really get 0.01C accuracy or whatever. They should rightly be criticised for that. But that doesn’t mean you can’t get any improvement, and if you keep insisting it does, they’ll dismiss the entire argument with contempt.

  37. Posted Jul 29, 2007 at 12:57 PM | Permalink

    Just an OT comment. I hope they have good fans around the thermometers outside the city of Sundsvall. About plus 5 degrees Celsius and just a bit snowy today. An image and an article:


    http://www.aftonbladet.se/vss/nyheter/story/0,2789,1129554,00.html

    The AGW alarmism goes on in Sweden anyway. Our prime minister (whom I voted for) has dropped all other issues, he says… 😛 (pure insanity)

  38. L Nettles
    Posted Jul 29, 2007 at 1:25 PM | Permalink

    On the issue of cable length for the MMTS, I will note that 3 of the sites I have inspected appear to have cable lengths longer than the standard 15 feet or so: Newberry, SC; Cheraw, SC; and Sumter, SC. Only Cheraw is currently posted.

  39. Posted Jul 29, 2007 at 3:05 PM | Permalink

    re: #36

    Thanks for the heads-up, Steven. Could that be limited to 100 estimates of the exact same property? Does it apply to measurements of the temperature at the same location but at different times? Do you have a handy reference for us? I have John Mandel, The Statistical Analysis of Experimental Data; John R. Taylor, An Introduction to Error Analysis; and F. B. Hildebrand, Introduction to Numerical Analysis. An example calculation and analysis would also be very helpful.

  40. Derek Kite
    Posted Jul 29, 2007 at 3:49 PM | Permalink

    They are air-conditioning condensing units, Ruud brand. The closest one is quite recent, probably 13 SEER, so maybe 2 years old.

    If it is a radio station, they probably run all year round.

    Derek

  41. Steven B
    Posted Jul 29, 2007 at 4:27 PM | Permalink

    Dan,

    It’s not necessarily limited to the exact same property, but it gets more complicated for other cases. It’s just a matter of maths with random variables. The variance of a sum of random variables is the total of all the numbers in their joint covariance matrix – whether the random variables estimate one quantity or many. If you take the mean by dividing the total by n, the variance of this mean is divided by n^2. All that matters is adding up random variables and dividing by a constant, but in context it wouldn’t make much sense to be averaging estimates of different quantities. (If what you’re talking about is the fact that the actual temperature is different at each physical location, that’s OK. If you average these different values to get an estimate of the average global temperature, the average of the errors in them is still supposed to be zero, and our variance calculates the variance in that average.)

    For temperatures at different times you’ll commonly get a more complicated covariance structure. Observations close together in time will be more strongly correlated than those far apart. (To some degree, this applies in the spatial case as well.) Depending on whether this decays to zero and how fast, you can get convergence of the mean, but it will be a lot slower than you expect.

    In the other thread I give a link to a reference. This quotes the only bit of this argument that isn’t derived from perfectly standard properties of variances, which is the fact that the variance of a sum of correlated data is the sum of all the elements of the covariance matrix. Every stats book I’ve come across always assumes the special case of perfect independence. When I first wanted the result myself, I had to work it out from first principles; it involves some messy algebra, but is pretty straightforward as maths goes. It seems like it should be a standard result, but it’s rare to even find it simply quoted (as my linked reference does) let alone derived.

    If you check my Milestone post, you’ll see the formula when the covariances are constant off the diagonal is Var(y-bar) = (r + (1-r)/n)Var(y_i), where y is the data being averaged, r the cross-correlation coefficient, and n the sample size. So if r is 0.001 and Var(y_i) is 0.25, then the average of 100 points will have variance (0.001 + 0.999/100)*0.25 = 0.01099*0.25 ~= 0.0027, which gives an accuracy about ten times better. But if we increase the sample size to a million points, we get (0.001 + 0.999/1000000)*0.25 = 0.001000999*0.25 ~= 0.00025, which gives an accuracy only three times better still, even though the naive prediction would be that it should be 100 times better. Increasing the sample size any more will make virtually no difference to the accuracy of the average.

    Obviously, I don’t know how independent temperature measurements actually are. It would take a detailed knowledge of the entire measurement process to even make a guess. If you’ve got a huge correlation, around 0.1, then those hundredth of a degree accuracies are in major trouble. If r is down around 0.001 as in the above example, then it’s probably not an issue in the situation under discussion. But if you think about how temperature anomaly (as opposed to the mean temperature) is calculated, how likely do you think it is that the correlation is so small?

    My apologies for the length of the post. I can do a more detailed discussion of any gaps tomorrow if people ask for it, and if Steve M doesn’t object, but I hope that with these pointers it should be possible to leave it as ‘an exercise for the student’. 🙂
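    For anyone who wants to check the formula without the algebra, here is a small Monte Carlo sketch using the numbers from Steven B’s example (r = 0.001, Var(y_i) = 0.25, n = 100). Equicorrelated variables are built from one shared draw plus independent noise, which gives cross-correlation exactly r; the empirical variance of the mean should land near (r + (1-r)/n)*Var(y_i) ~= 0.0027.

        import math
        import random

        random.seed(7)
        r, var_y, n = 0.001, 0.25, 100  # cross-correlation, Var(y_i), sample size
        sigma = math.sqrt(var_y)

        def correlated_mean():
            # One shared component plus independent noise gives Var(y_i) = var_y
            # and Corr(y_i, y_j) = r exactly.
            shared = random.gauss(0, sigma)
            ys = [math.sqrt(r) * shared + math.sqrt(1 - r) * random.gauss(0, sigma)
                  for _ in range(n)]
            return sum(ys) / n

        means = [correlated_mean() for _ in range(20000)]
        mu = sum(means) / len(means)
        var_emp = sum((m - mu) ** 2 for m in means) / (len(means) - 1)
        print(f"empirical Var(y-bar) = {var_emp:.5f}")
        print(f"formula   Var(y-bar) = {(r + (1 - r) / n) * var_y:.5f}")  # ~0.00275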

  42. Sam Urbinto
    Posted Jul 30, 2007 at 1:58 PM | Permalink

    The question is: are the individual measurements accurate enough to produce daily min/max numbers accurate enough to combine into a true .01-or-better resolution for the monthly mean? Is the monthly reading for October 1990 66.11 or 66.22? Or is the monthly for Oct 1990 in fact 66 +/- 1, or +/- .1, or even 66 +/- .01? Or can we get it to 66 +/- .001 or better?

    ??

    I doubt it, but even if we are getting an accurate 66 +/- .000001, all we’re getting is how the material acts and how it mixes with the air 5 feet up.

    Plus, I’d think it’s more like 66 +/- .1 anyway. And even if not, and we consider the temp as accurate and indicative of the location of the thermometer, it’s not necessarily indicative of the area “being measured.”

  43. Posted Aug 1, 2007 at 10:00 PM | Permalink

    You might go read the station history. There is a problem, but it is almost certainly not the air conditioner or the building.

2 Trackbacks

  1. By Watts Up With That? « Rick’s Weblog on Jul 14, 2008 at 11:19 PM

    […] Detroit Lakes MN last week, surveyed by volunteer Don Kostuch, and cross posted it to the website http://www.climateaudit.org/?p=1828#comments that had two air conditioner units right next to it. It looked like an obvious cause and effect […]

  2. By Anonymous on Jun 3, 2010 at 9:41 AM

    […] a nice photo of the station of the Hydrographic Office of Bolzano: Just kidding. This one comes from here: Another "High Quality" USHCN station Climate Audit […]