Statistics of Record-Breaking Temperatures

Luboš Motl has kindly directed our attention to the following interesting paper by S. Redner and M. Petersen, On the Role of Global Warming on the Statistics of Record-Breaking Temperatures, scheduled for publication in Physical Review E, presently online here with abstract:

We theoretically study long-term trends in the statistics of record-breaking daily temperatures and validate these predictions using Monte Carlo simulations and data from the city of Philadelphia, for which 126 years of daily temperature data is available. Using extreme statistics, we derive the number and the magnitude of record temperature events, based on the observed Gaussian daily temperatures distribution in Philadelphia, as a function of the number of elapsed years from the start of the data. We further consider the case of global warming, where the mean temperature systematically increases with time. We argue that the current warming rate is insufficient to measurably influence the frequency of record temperature events over the time range of the observations, a conclusion that is supported by numerical simulations and the Philadelphia temperature data.

I won’t have an opportunity to go through it in detail, but on a quick browse it seemed to be a sensible treatment of the topic. The submission history of the article suggests that it had previously been submitted to the Journal of Climate. Perhaps these findings were inconsistent with Journal of Climate editorial policy. It would be interesting to see why the article was rejected by the Journal of Climate (if this was the case), as the treatment seems professional enough.
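One easily checked piece of the paper's baseline: for iid observations, the expected number of record highs in n years is the harmonic number H_n ≈ ln n + 0.577. Here is a short Monte Carlo sketch of that (my own illustration, not the authors' code):

```python
import random

def count_records(n, trials=2000, seed=0):
    """Average number of record highs among n iid Gaussian 'years'."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        best = float("-inf")
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)
            if x > best:       # new record high
                best = x
                total += 1
    return total / trials

n = 126  # length of the Philadelphia series
harmonic = sum(1.0 / k for k in range(1, n + 1))  # exact iid expectation H_n
print(count_records(n), harmonic)  # simulation tracks H_126, about 5.4
```

With 126 years of data, the iid theory predicts only about five record highs for any given calendar day, which gives a sense of how little room there is for a trend to show up in record counts.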

131 Comments

  1. Jo Calder
    Posted Jan 13, 2007 at 4:44 PM | Permalink

    I think there was mention of this paper at RC, hence the phrasing of this comment. The timing is roughly right for the Redner and Petersen paper to have been mentioned in the Ritson thread or the Mann and Jones thread, both of which are masterpieces in their own way.

  2. Posted Jan 13, 2007 at 5:49 PM | Permalink

    Thanks for the posting, Steve, and thanks to Jo for his extra links. You, Jo, wrote that they assume iid. That’s the simplest model they use in their estimates, and they find pretty good agreement with it.

    Temperature is of course heavily auto-correlated, but if you look at January 13th of different years only, the autocorrelation implied by inertia goes away. If there were no trend, they would be iid. Of course, these distributions look like variations around something that may be a trend, whatever it is. But their point is that they can’t prove the existence of such a trend from this kind of statistics.
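    A quick sketch of that sampling point (my own illustration; the AR(1) coefficient is just an assumed stand-in for daily thermal inertia):

```python
import random

rng = random.Random(0)

def ar1_series(n, phi=0.9):
    """AR(1) process x[t] = phi*x[t-1] + noise: a stand-in for daily inertia."""
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x

def lag_corr(x, lag):
    """Sample correlation between x[t] and x[t+lag]."""
    a, b = x[:-lag], x[lag:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

daily = ar1_series(126 * 365)
print(lag_corr(daily, 1))    # day-to-day: large, near 0.9
print(lag_corr(daily, 365))  # same date next year: near 0 (0.9**365 is tiny)
```

    So a strongly autocorrelated daily series still yields nearly independent samples if you look at one calendar day per year, which is why the iid model is a reasonable starting point.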

    What I am somewhat confused about is e.g. their figure 8 (Redner and Petersen), where both record-high and record-low Philly temperatures seem to mimic the k=1 power law. On the other hand, it seems that the record lows are clearly above it while the record highs are below the k=1 curve. Do I read the graph correctly that after 100 years, the probability of record highs is 10 times higher than the probability of record lows? If true, I don’t understand in what sense this effect would be unmeasurable.

    Jo: I don’t see the paper mentioned in the last two threads you mentioned. What do you exactly mean by saying that the threads/papers are masterpieces?

  3. Lee
    Posted Jan 14, 2007 at 12:01 AM | Permalink

    Just to make sure it’s clear, this argument applies to the frequency of daily temperature extremes, and is not relevant to the record of average annual temperatures.

  4. Willis Eschenbach
    Posted Jan 14, 2007 at 2:36 AM | Permalink

    Lee, here are the statistics of the “record of average annual temperatures” you mention. I have used the HadCRUT3 monthly dataset.

    The real question is not whether the trend since 1975 is different from zero. It is whether the trend is unusual for this dataset. I realized yesterday that one way to do this is to look at the distribution of the Mann-Kendall “tau”. Tau is a non-parametric measure of the existence of a trend. Tau for the period of interest (1975-) is 0.63, which is significant (p=0.0002) based on the usual distribution of tau (standard deviation of tau = 0.034 for this size dataset). However, tau is affected by autocorrelation, so we need some other method. To see if this rise is unusual for the dataset, we can calculate the true variance of tau for this dataset. This will let us determine whether any given trend is unusual.

    To do this, I looked at all other trends of the same length (382 months from January 1975 to September 2006) in the dataset. Using just the 1850-1975 portion of the dataset (to avoid influence from the recent warming) we find that the average tau is 0.15 ±0.007, with a standard deviation of 0.23 ±0.005. The positive tau, of course, reflects the fact that temperatures warmed over the period.

    This means that the 95% confidence interval for tau is from -0.38 to +0.67 (mean ±1.96 standard deviations, including errors in quadrature). Since the post 1975 tau (0.63) is within the confidence interval of trends in the earlier part of the dataset, we know that the recent result is not anomalous for this dataset. However, it does show that the trend is significantly different from zero, which is not surprising during a warming period.
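    The sliding-window tau comparison can be sketched as follows. Since HadCRUT3 itself isn't reproduced here, the series below is a synthetic no-trend AR(1) stand-in, and the window is shortened for speed; the 382-month window and real data would slot straight in:

```python
import math
import random

def kendall_tau(x):
    """Kendall's tau of series x against its time index:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))

# Synthetic stand-in for a monthly anomaly series: AR(1) noise with NO trend.
rng = random.Random(42)
series, prev = [], 0.0
for _ in range(1500):
    prev = 0.6 * prev + rng.gauss(0.0, 0.1)
    series.append(prev)

win = 120  # months per window (382 in the comment above; shortened for speed)
taus = [kendall_tau(series[i:i + win]) for i in range(0, len(series) - win, 12)]
mean = sum(taus) / len(taus)
sd = math.sqrt(sum((t - mean) ** 2 for t in taus) / (len(taus) - 1))
print(f"null distribution of tau: mean {mean:+.2f}, sd {sd:.2f}")
# A candidate trend is 'unusual' only if its tau falls outside
# mean +/- 1.96*sd of these same-length windows.
```

    Run over the real 1850-1975 data with a 382-month window, this same machinery produces the empirical mean and standard deviation of tau used in the comparison above.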

    So we can conclude (assuming the dataset accurately represents global temperatures) that the world has warmed since 1975, but that the recent warming is not statistically significant.

    w.

  5. Francois Ouellette
    Posted Jan 14, 2007 at 8:22 AM | Permalink

    #4 Willis, you say

    So we can conclude (assuming the dataset accurately represents global temperatures) that the world has warmed since 1975, but that the recent warming is not statistically significant.

    I don’t understand. Do you mean it cannot be distinguished from natural variability? If so, how do you define the “natural” variability? Or do you mean the trend since 1975 is not different from that between 1850 and 1975? Can you clarify your post please?

  6. Douglas Hoyt
    Posted Jan 14, 2007 at 8:43 AM | Permalink

    Back in 1981, I published the following article:

    Hoyt, D. V., 1981. Weather records and climatic change. Climatic Change, 3, 243-249.

    The conclusion of that article was that the expected frequency of weather records in year n equals 1/n, where n is the number of years of observations. Weather records at a number of stations in the US were examined, and in each case they were indistinguishable from white noise or an iid process. There were some oscillations in the records, but they were never statistically significant.

    The variance of the number of new weather records in year n is 1/n – 1/n^2 for white noise. For large n, say n = 100, the variance is close to 1/n, the mean expected number of weather records, much as for a Poisson count. So to be significant at the 95% level, you need about 3 times the normal number of weather records, and even then it wouldn’t be telling you much, since this will happen once every 20 years on average anyway.
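    The 1/n rule itself is easy to verify numerically (a sketch of my own, not the code from the 1981 paper):

```python
import random

def record_rate(n, trials=20000, seed=1):
    """Fraction of white-noise series in which observation n sets a new record."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        if xs[-1] == max(xs):  # the n-th observation is the largest so far
            hits += 1
    return hits / trials

for n in (10, 25, 50):
    print(n, record_rate(n), 1.0 / n)  # empirical rate tracks 1/n
```

    The same holds for record lows by symmetry, and for any continuous iid distribution, not just the uniform draws used here.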

    A few points can be made: 1) There will always be weather records, no matter how long the weather is observed. 2) It is important to have a homogeneous time series. Sometimes you will get a jump in weather records if you move the station a few feet and these inhomogeneous records should not be used in climate studies. 3) It is very hard, using weather records, to show a change in climate.

    I did the above study because Sir Crispin Tickell said at the time (1970s) that increasing weather records were a sign that people were cooling the Earth by burning fossil fuels. Now Tickell says increasing weather records are due to warming the Earth by burning fossil fuels. Neither statement is true in the sense of being statistically significant.

    Before writing the paper, I asked climatologists about the math behind weather records, but none knew what it was. It took me a few days to figure out the math on my own and then later I discovered that it had been known for a long time even in 1980. It is good to see Physical Review publishing on the topic and going beyond white noise to see the effects on the statistics.

  7. Posted Jan 14, 2007 at 4:34 PM | Permalink

    The authors of this paper should have consulted Al Gore, James Hansen, or Elizabeth Vargas. Any one of them could have told them confidently that current temperatures are the warmest in at least a million years.

    To #6: Is it possible that climatology students are not being trained in the math of basic weather records? If all that most of them are learning is ad hoc modeling of theoretical climates, how do any of them get the historical grounding in actual weather systems that they need to make public pronouncements?

  8. Willis Eschenbach
    Posted Jan 14, 2007 at 7:48 PM | Permalink

    Francois O, always a pleasure to hear from you. You say:

    Willis, you say

    “So we can conclude (assuming the dataset accurately represents global temperatures) that the world has warmed since 1975, but that the recent warming is not statistically significant.”

    I don’t understand. Do you mean it cannot be distinguished from natural variability? If so, how do you define the “natural” variability? Or do you mean the trend since 1975 is not different from that between 1850 and 1975? Can you clarify your post please?

    Sorry for my lack of clarity. What I meant is that based on the Mann-Kendall tau, we cannot statistically distinguish the post-1975 trend from the rest of the dataset.

    w.

  9. Lee
    Posted Jan 14, 2007 at 8:01 PM | Permalink

    re 4 – willis, those are not “THE” statistics of that record, they are an analysis of one feature of that record. It really doesn’t matter if the trend is larger, smaller, or the same – the ‘trend’ could be significantly lower than previous, but if that slower trend is taking us into novel temperatures, we are still moving into novel temperatures. I don’t see how your analysis is at all relevant to that issue, or how this is “the real question.”

  10. Willis Eschenbach
    Posted Jan 15, 2007 at 1:43 AM | Permalink

    Lee, since the majority of the proxies I’ve looked at (ice cores, treelines, Mg/Ca cores, pollen records in lakes, etc.) say that it was warmer during the MWP than it is now, you must be using a new definition of “novel temperatures” …

    w.

  11. Posted Jan 15, 2007 at 2:01 AM | Permalink

    Here’s another recent article on extreme value statistics:

    Physical Review E 73, 016130 (2006):

    Extreme value statistics in records with long-term persistence, Jan F. Eichner, Jan W. Kantelhardt, Armin Bunde, and Shlomo Havlin

    http://www.uni-giessen.de/physik/theorie/theorie3/publications/PRE-73-016130.pdf

  12. Michael Jankowski
    Posted Jan 15, 2007 at 7:32 AM | Permalink

    Just to make sure it’s clear, this argument applies to the frequency of daily temperature extremes, and is not relevant to the record of average annual temperatures.

    But “daily temperature extremes” are supposed to occur with greater frequency due to anthropogenic global warming, right?

    Maybe you’re more level-headed than the dopes who cry “global warming” with a record high temp, high hourly/daily rainfall, high hourly/daily snowfall, etc. But they at least should be interested in this.

  13. Lee
    Posted Jan 15, 2007 at 3:43 PM | Permalink

    re 10 – willis, that is not an answer to my response to your statement about analysis of the ‘trend’.

  14. Willis Eschenbach
    Posted Jan 15, 2007 at 8:53 PM | Permalink

    Lee, you seem not to realize that the recent warming trend has been described in both the popular press and scientific papers using terms like “unusual”, “unprecedented”, and the like. My post showed that given the dataset, the recent trend is not any of those things.

    In response, you said that wasn’t the issue, the issue was that it was taking us into “novel temperatures”. I pointed out that the current temperatures are not “novel” in the usual sense of the word, having been exceeded in the recent past during the MWP and the Holocene Temperature Optimum. Actually, it’s worse than that. We don’t even know if we’ve exceeded the temperatures of the 1930s or the 1880s … here’s the HadCRUT3 graph of their own error estimates …

    I fail to see how that is not a response to your statement.

    w.

  15. Lee
    Posted Jan 15, 2007 at 11:29 PM | Permalink

    willis, this thread is about the existence of temperature RECORDS, not the rate of warming.

    I pointed out that the paper cited discussed daily records, not annual. The popular press is not where I get my science – and most of that discussion is about actual temperatures anyway, not rates of change. I suspect you are right that the recent rate of increase is not statistically distinguishable from the earlier rate – though the fact that it is toward the upper end of ‘indistinguishable’ bears keeping an eye on over the coming years.

    However, that rate is irrelevant to the question of whether the AMOUNT of change is sufficient to show as an increase in the number of record days, or of record years. And it certainly is not THE statistical analysis of that data.

  16. Lee
    Posted Jan 15, 2007 at 11:31 PM | Permalink

    willis, again –

    I also didn’t say we were experiencing novel temperatures, BTW. Go re-read my OP a bit more carefully.

  17. Dano
    Posted Jan 15, 2007 at 11:52 PM | Permalink

    14:

    I pointed out that the current temperatures are not “novel” in the usual sense of the word, having been exceeded in the recent past during the MWP and the Holocene Temperature Optimum.

    This and similarly-minded comment threads are the only places where such a statement of certitude is given.

    The published empirical evidence refutes this statement, and until someone publishes robust numbers or starts their own journal, the rest of the world will be unaware of this knowledge.

    It is your duty, willis, to share your knowledge with the world, as the rest of the world is ignorant of it. You must share this blockbuster with the world. Jus’ sayin’.

    Best,

    D

  18. welikerocks
    Posted Jan 16, 2007 at 6:54 AM | Permalink

    Willis says: “I pointed out that the current temperatures are not “novel” in the usual sense of the word, having been exceeded in the recent past during the MWP and the Holocene Temperature Optimum.”

    Dano says:
    This and similarly-minded comment threads are the only places where such a statement of certitude is given.

    Not true, Dano. RC-speak and Hockey Stick worship don’t allow the discussion. This is a symptom of the bubble world you belong to. All anyone has to do is google “Dano + climate” and they can get the gist. Get off the internet and sign up for a geology course, and you will find Willis’ statement reasonable and accepted. After all this time posting, you still CANNOT come up with something you find compelling or new about the topics here. I find this fact illustrates your true intent.

    If you haven’t noticed, many folks from all over the world have come forward lately and commented here on the site, thanking SteveM for this place (or maybe you have noticed).

    In the meantime, freezing temperatures in Southern California are killing trees, crops and plants. The trees in my yard now have black and frozen leaves.

  19. Dave B
    Posted Jan 16, 2007 at 12:01 PM | Permalink

    lee said in #9:

    “It really doesn’t matter if the trend is larger, smaller, or the same – the ‘trend’ could be significantly lower than previous, but if that slower trend is taking us into novel temperatures, we are still moving into novel temperatures.”

    then in #16, lee said:

    I also didn’t say we were experiencing novel temperatures, BTW. Go re-read my OP a bit more carefully.

    so lee used some nuance; he qualified his statement by saying “if” … but without showing “novel” temperatures (or defining “novel”), his statement becomes entirely non-useful blather.

  20. Lee
    Posted Jan 16, 2007 at 12:32 PM | Permalink

    No, Dave, my comment was on the (il)logic of the argument about slope in this context. Whether we are or are not at or entering novel temperatures, in some time context, is of course its own large issue – I’ve stated my thoughts on that many times here.

    My point in those posts was that it is the comparative extremes of the temperatures that is at question in this thread – and that willis’ argument about the slope of the increase is irrelevant to that question.

  21. welikerocks
    Posted Jan 16, 2007 at 12:56 PM | Permalink

    Meanwhile in So. Cal plants are freezing and dying. No big deal right? Talk about comparative extremes…

    Here’s a cool website: Historical Climatology of New England (link for search). You can search through handwritten documentation of the climate in America back to the 1600s (I believe). Main page: link

  22. Lee
    Posted Jan 16, 2007 at 1:19 PM | Permalink

    rocks – it’s winter. So what?

  23. Dano
    Posted Jan 16, 2007 at 1:40 PM | Permalink

    In the meantime, freezing temperatures in Southern California are killing trees, crops and plants. The trees in my yard now have black and frozen leaves.

    Regarding weather: in ’91, IIRC, the Sac Valley lost a number of its eucalypts to a freeze, and many others stump-sprouted after they lost their tops. Sunset mag’s climate zones explain this well. We could analyze these cold troughs to see if their frequency is increasing, but the difficulty is going back past the 1850s, when all the plants on the valley floor were native, making comparison difficult.

    BTW, thanx for your opinion, rocks. I checked ISI and I didn’t find any such statement of certitude wrt the MWP. That’s where folk go, BTW.

    Best,

    D

  24. welikerocks
    Posted Jan 16, 2007 at 1:57 PM | Permalink

    Lee and Dano, let me get this straight before I comment. Are you saying/arguing that an 8-degree-below-normal winter in my area can be compared to the half-degree temperature rise claimed from a computer model of the earth, plotted on a graph called the Hockey Stick (using bad statistics and proxy temperature guesses for the whole globe over huge time scales), or not?

  25. Lee
    Posted Jan 16, 2007 at 2:17 PM | Permalink

    rocks-
    First, your comment is nearly incomprehensible.
    Second, a week’s worth of not-that-unusual winter temperatures in one small part of the world is utterly irrelevant in itself to questions of global temperature or global temperature change.

  26. welikerocks
    Posted Jan 16, 2007 at 2:26 PM | Permalink

    Lee, I’ll try again then; please clarify this: you guys are arguing for a half a degree of temperature rise plotted on a graph by a computer model with bad statistics and proxy data, even when talking about the MWP or my local temperatures?? Correct?? No?

  27. Lee
    Posted Jan 16, 2007 at 2:36 PM | Permalink

    no, rocks.

    When we are talking about your local temperature, we are talking about your local temperature. Your local temperature, or mine, or that of any given location anywhere in the world at any given time, is not “a half of a degree of temperature rise plotted on a graph by a computer model with bad statistics and proxy data.” It is your local temperature at that time.

    Mine is currently about 8C, according to the thermometer on the front porch. That is an observation – it is what the instrument says right now. It has NOTHING AT ALL to do with computer models or proxies.

  28. welikerocks
    Posted Jan 16, 2007 at 4:07 PM | Permalink

    Ok Lee, gotcha.

    Too bad California fruit and avocado growers couldn’t have been more informed. They think it is even colder than other times a “freeze” has occurred. link This article mentions no climate change but does mention Mother Nature. Guess She’s just in charge of cold these days.

  29. Dave B
    Posted Jan 16, 2007 at 4:12 PM | Permalink

    20, lee said:

    “My point in those posts was that it is the comparative extremes of the temperatures that is at question in this thread – and that willis’ argument about the slope of the increase is irrelevant to that question.”

    so if the slope is irrelevant to the argument, then so is your entire argument about “novel” temperatures: they are beyond the scope of this article, and are neither relevant nor useful. just an effort at obfuscation.

    could you please define “novel” in this context? please don’t send a wiki link-just your own working definition will be fine.

  30. Willis Eschenbach
    Posted Jan 16, 2007 at 4:45 PM | Permalink

    Here are a few thought experiments for y’all about the statistics of temperature.

    1) Suppose I have a room which is perfectly climate-controlled to, say, a thousandth of a degree everywhere in the room. Let’s say the air temperature is 14.716°C. I give a person a typical weather-observation thermometer with a Centigrade scale on it. Since it’s a thought experiment, they are able to measure the temperature without affecting it … yes, I know, Mr. Heisenberg, thank you.

    The person reports the temperature of the room to the nearest degree, just like a weather observation. They come back and fill in a form. The form says that the room is at 15°C.

    2) Same setup, only I give the same thermometer to 1000 different trained weather observers. They all come back and every single one reports that the room is at 15°C.

    3) Same setup, only I give the same thermometer to 1000 different poorly trained observers. 923 report 15°, 15 report 14°, 22 report 16°, 1 reports 17°, 1 reports 5°, 3 report 25°, 1 reports 11°, 21 forgot to take the observation, and 19 lost the form.

    4) Same setup as #3, only I use 1000 different thermometers that are all within ±1°C of the mean, with an unknown distribution. 35 report 13°, 300 report 14°, 450 report 15°, 126 report 16°, 11 report 17°, 8 report 24°, 13 report 25°, 7 report 26°, 4 report 5°, 23 forgot the observation, 13 forms are illegible, 9 forms are missing, 1 contains only a drunken screed pouring vitriol on the Weather Service.

    In each case, assuming the room temperature is unknown, what is the measurement and sampling error?
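    A sketch of experiments 2 and 4 (my own illustration; the Gaussian noise level is an assumption for illustration, not a claim about real instruments):

```python
import random

def rounded_mean(true_temp, n_obs, noise_sd, seed=0):
    """Average of n_obs readings, each perturbed by Gaussian instrument/observer
    noise and then rounded to the nearest whole degree, as on a weather form."""
    rng = random.Random(seed)
    readings = [round(true_temp + rng.gauss(0.0, noise_sd)) for _ in range(n_obs)]
    return sum(readings) / n_obs

true_temp = 14.716
# Experiment 2: perfect observers, identical thermometer -- everyone writes 15,
# and averaging 1000 identical rounded readings cannot shrink the 0.284 bias.
print(rounded_mean(true_temp, 1000, 0.0))  # prints 15.0
# Experiment 4 (roughly): instrument scatter dithers the rounding, so the
# average drifts back toward the true value.
print(rounded_mean(true_temp, 1000, 1.0))
```

    The point: averaging beats rounding error only when the readings are independently scattered; identical rounded readings all share one common bias that no amount of averaging removes.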

    I ask because Phil Jones et al. say:

    Calculating error bars on a time series average of HadCRUT3 is not a trivial process. Each component of the uncertainty on the gridded values has a different correlation in space and time, and these correlations have to be allowed for when averaging the errors.

    • The station errors have large autocorrelations in time, but no correlation in space, and the grid-box sampling errors have little correlation in either space or time, so these two error components are small for global and large-scale averages.

    • The biases (urbanisation, bucket correction etc.) have strong correlations in space and time, so they are just as large for decadal global averages as for monthly grid-point values.

    • Spatial averages contain an additional source of sampling error as there are regions of the world where there are no observations. We calculate the size of the error bars due to this lack of global coverage by looking at the effect of reducing coverage in a globally-complete reanalysis dataset.

    Their estimate of the combination of all of these errors is ±0.147° currently, and ±0.347° in 1850 …

    Finally, one last question. If we take two equal volumes of dry air, one at 0°C and one at 30°C, and we mix them, what will be the final temperature (ceteris paribus)?

    w.

    PS – A cop pulls over a speeding physicist, and asks “Hey, buddy, you know how fast you were going?”. The physicist replies “Actually, no … but I know where I was.”

  31. Lee
    Posted Jan 16, 2007 at 4:49 PM | Permalink

    Dave, by definition, a record temperature is novel in the context of the time extent of the record – that is the point of the article under discussion in this thread.

    rocks, there is not one word in that article you cited that says that farmers think this is even colder than past freezes. I grew up in California – I’ve seen colder. Sadlov has recently referred to colder past events. Again – it is cold, it is winter, temperatures this cold are not frequent, but not unusual either.

    And all this is irrelevant to the climate question – you are indulging the same kind of error, with reverse sign, that so many rail at the ‘warmers’ for doing. Not that it isn’t frequently engaged in here, too.

  32. Dano
    Posted Jan 16, 2007 at 5:10 PM | Permalink

    28:

    They think it is even colder than other times a “freeze” has occurred.

    This is anecdotal evidence. They are tired from fighting the cold, and when they go back and look at their journals, they will be reminded. I lived in Sacto for 14 years; in ’91 (IIRC the year) it was below 0ºC the whole week whenever I left my house and rode my bike, and 3-4 days it was in the mid-20s. This event has a larger spatial component to it, and your food prices will rise, hence the publicity.

    30:

    Your thought-points 1-2 are germane; 3-4 are not relevant. Unless you can show that observers were so poor that they couldn’t read a Hg thermometer to the nearest degree, only your stats experiments 1-2 count. I also find it fascinating that some folk think they can ‘splain the whole world by statistics alone, but can’t understand the methodology in a paper…

    Best,

    D

  33. JMS
    Posted Jan 16, 2007 at 5:57 PM | Permalink

    Yep, I remember what was it? Just before Xmas ’90. The high in Santa Cruz one day was 19F. The current snap is reputed to be around the same magnitude as the ’98 snap. Y’all ought to come out here to MT! A couple of days ago our high was -5F and our low (according to the digital thermometer on my porch) was -22F! That’s cold!

  34. Dave B
    Posted Jan 16, 2007 at 6:19 PM | Permalink

    lee said:

    “Dave, by definition, a record temperature is novel in the context of the time extent of the record – that is the point of the article under discussion in this thread.”

    so today is colder than the past 10 months where i live (buffalo ny). the temperature is “novel” over the course of that time?

  35. jae
    Posted Jan 16, 2007 at 7:00 PM | Permalink

    Their estimate of the combination of all of these errors is ±0.147° currently, and ±0.347° in 1850

    Willis, what do you think of this estimate? I fail to see how they can come up with an estimate like this. Have they told us how it was done?

  36. jae
    Posted Jan 16, 2007 at 7:02 PM | Permalink

    Finally, one last question. If we take two equal volumes of dry air, one at 0°C and one at 30°C, and we mix them, what will be the final temperature (ceteris paribus)?

    Same pressure?

  37. Jeff Weffer
    Posted Jan 16, 2007 at 7:58 PM | Permalink

    Just thought I would note that NO record low temperatures have occurred in North America in the last 150 years.

    All the record low temperatures would have happened 18,000-plus years ago, when half of North America was covered by mile-high glaciers.

    What is the average temperature in New York at this time of year? Answer: -15C. For 100,000 years at a time the average temperature (on top of the glacier) is about -20C in January. For 15,000 years at a time, the average temperature is about 4C.

  38. welikerocks
    Posted Jan 16, 2007 at 8:24 PM | Permalink

    These are record-breaking temperatures all over the state of California, and Schwarzenegger has declared a state of emergency. The state Disaster Assistance Act was activated, and California National Guard armories were ordered to open to the public as warming facilities. Homeless shelters statewide have been filled to capacity. Along with citrus and avocado growers etc., the “Valentine’s Day” crops of flowers are in danger as well. This is occurring from Northern California all the way down to San Diego.

  39. Dave Dardinger
    Posted Jan 16, 2007 at 11:16 PM | Permalink

    re: #36

    Same pressure?

    Yeah, that’s the trick. Presumably we’re talking two equal-volume bottles equilibrated at some constant pressure to the two given temperatures and then sealed and mixed together. The point is that the masses of the two portions of gas will differ, since the colder gas is denser. So the mixture, if measured at the same pressure, will be less than 15 deg C.

    But if we instead took two identical bottles, put them in the same pressure/temperature regime, and then closed them off and equilibrated one to 0 deg C and one to 30, when you mix them they will indeed read 15 deg C (or very close anyway … there are corrections to the ideal gas law which might have to be considered).
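    The constant-pressure case can be worked out directly from the ideal gas law; a sketch assuming ideal dry air and a temperature-independent heat capacity:

```python
# Mixing equal volumes of ideal dry air at the same pressure:
# mass of each parcel scales as 1/T (ideal gas law), and mixing at
# constant pressure conserves enthalpy, i.e. the mass-weighted mean of T.
T1 = 273.15  # 0 C in kelvin
T2 = 303.15  # 30 C in kelvin

m1, m2 = 1.0 / T1, 1.0 / T2              # relative masses at equal volume & pressure
T_mix = (m1 * T1 + m2 * T2) / (m1 + m2)  # reduces to the harmonic mean of T1, T2
print(f"{T_mix - 273.15:.2f} C")         # prints "14.22 C", not the naive 15 C
```

    So the mass-weighted (enthalpy-conserving) mix comes out below the naive arithmetic mean, as described above: the colder parcel carries more mass and pulls the mixture down.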

  40. Steve McIntyre
    Posted Jan 17, 2007 at 12:23 AM | Permalink

    #32. Like this example from the quality-controlled HadCRU gridcell: http://www.climateaudit.org/?p=307 – see the bottom panel, where readings are out by a factor of 10. I noticed this by doing a very trivial form of quality control that CRU didn’t bother doing. (The top panel shows autocorrelation coefficients.) Similar quality control indicates that the SST data are replete with errors, especially in the Southern Ocean.

  41. bruce
    Posted Jan 17, 2007 at 1:30 AM | Permalink

    Given the above discussion, I thought it pertinent to extract a quote from the rebuttal of Stern on another thread.

    The contemporary global temperature series as used by the IPCC plays
    as central a role in climatology as the Consumer Price Index plays in
    national economic research. The Review shows it as Figure 1.3. Yet it is
    not produced by a proper statistical agency working under transparent and
    rigorous protocols. Instead, it is produced by a small, secretive group of
    researchers at the Climatic Research Unit (CRU) at the University of East
    Anglia, an organization closely affiliated with the Hadley Centre. The
    CRU has an explicit policy of refusing to allow external examination of
    how they produce their global temperature series. In response to a request
    to examine the underlying data and methods, Dr Phil Jones of the CRU
    stated: “Why should I make the data available to you, when your aim is to
    try and find something wrong with it?” Since scepticism and efforts to falsify
    hypotheses are fundamental elements of scientific method, we find
    this statement remarkable. The request came from Australian researcher
    Warwick Hughes, who wished to examine possible Urban Heat Island
    (UHI) effects and other bias in the CRU instrumental temperature series.
    Dr Jones repeated his statement to German climatologist Prof. Hans von
    Storch, who, in a presentation to the US National Academy of Sciences
    on March 2, 2006, made clear his astonishment and contempt towards this
    attitude.

    I am very interested in what Lee, Dano, and Mr Bloom have to say about this.

    PS, there are equally pertinent comments about the work of Mr Mann et al.

  42. Willis Eschenbach
    Posted Jan 17, 2007 at 3:31 AM | Permalink

    Re 36, 39, same pressure, a.k.a. ceteris paribus. I got to thinking about the difficulty of averaging temperatures in Barrow, Alaska, with temperatures in Fiji. And we haven’t even touched the question of the effect of humidity on total enthalpy …

    Re your question in 32, Dano: since Steve has shown these types of errors both in the past and today, and since the weather observers are human, and since the people typing up the records hit the wrong key occasionally, and since weather observers in many parts of the globe are not well trained and sometimes are drunk or just don’t care, and since records from the last century were written by hand and are sometimes illegible, and since Jones refuses to let outsiders see his data (why?), I think you’ve set a personal best record here: your snark is simultaneously massively incorrect, totally pathetic, contains the most meaningless linky I’ve seen to date (click to verify), and is completely out of line. Congratulations, you’re on a roll.

    Re 35, Jae, with thermometers read to the nearest degree, I don’t see any way you can get an error that small. Given the example of the perfectly temperature controlled room and the 1000 trained observers, we still only get an answer to a quarter of a degree. The measurement error in experiments 1 & 2, it seems to me, has to be half a degree, and that’s the best case scenario under perfect conditions.

    Now, consider the real world, where we would have one thermometer about every 30,000 square miles (75,000 square km) on land, with each one covering much more area on the ocean. Then you have to remember that most of them are clustered in certain small parts of the globe, so some are called on to represent much larger areas than that. But I’ll take the average for illustrative purposes.

    Now call me crazy, but I don’t believe that one single solitary thermometer can determine the temperature of 30,000 square miles to an accuracy of ±5°, much less ±0.147° (not ±0.148°, mind you), and that’s just one single area, not the whole world … but then I thought, hey, maybe that’s just me, which is why I asked the question.

    Unfortunately, as noted above, Phil Jones and his merry men won’t let outsiders see the dataset, so we have no way to know how corrupted the underlying data is, or what kind of quality control procedures they have in place. For me, the very first thing I do with a dataset, before taking any other measures to identify possibly bad data, is to graph it for a preliminary scan … but as Steve has shown, they haven’t even done that.

    w.

    PS – IIRC, Michael Mann has stated that we can determine the temperature in the MWP to within ±0.5° … which by coincidence is the same accuracy we can get in a perfectly controlled room today …

    PPS – I haven’t touched on other, subtler errors, such as the all-too-human tendency to round hot temperatures up, and cold temperatures down …

    PPPS – As a quick and not too meaningful test, I just took the average of the HadCRUT monthly dataset (which has 3 significant digits, by some kind of … let me call it “prestidigitation”, given the fact that the data before averaging only has 2 significant digits maximum), N=1882 (coincidentally, about the number of temperature stations worldwide), mean = -0.174. Then I rounded them all to the nearest degree, as though they were measured with a thermometer by a perfect observer, and averaged them … mean = -0.094.
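
    The PPPS experiment can be mimicked in miniature with synthetic numbers (a sketch with made-up values drawn around the quoted mean, not the actual HadCRUT series):

    ```python
    import random

    # Toy version of the rounding experiment: draw 1882 values centered on
    # the quoted mean (the spread here is invented for illustration), then
    # round each to the nearest whole degree before averaging, as a perfect
    # observer reading a whole-degree thermometer would.
    random.seed(42)
    readings = [random.gauss(-0.174, 1.0) for _ in range(1882)]

    mean_raw = sum(readings) / len(readings)
    mean_rounded = sum(round(r) for r in readings) / len(readings)

    print(round(mean_raw, 3), round(mean_rounded, 3))
    ```

    The two means differ even though each individual rounding error is bounded by half a degree, which is the point of the PPPS.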

  43. Dano
    Posted Jan 17, 2007 at 6:54 AM | Permalink

    41:

    thank you dubya.

    As the quote is not in context, it is hard to judge what it means, but I suspect that Dr Phil didn’t want to release data to a non-climate scientist because he feared that non-sci would waste his time. Just a thought.

    I’m sure this matter could be cleared up with a quick e-mail, rather than have the worst intentions ascribed to it.

    We know that the data get used in the technical journals, so it’s not as if it doesn’t get circulated, so I’m not sure of the point here. If the point is that climate scientists shouldn’t work on the data, but rather it should be statisticians, that’s like saying plumbers shouldn’t work on copper pipes, metallurgists should. This shouldn’t be construed as saying I think statisticians shouldn’t work on climate datasets; rather, most papers are written in teams.

    I’m sure a team could be assembled (including a statistician), a research proposal written and submitted, and the datasets released. Write it up and let us know.

    Best,

    D

  44. DaleC
    Posted Jan 17, 2007 at 7:06 AM | Permalink

    re #40 and other comments on data quality, I have conducted a rudimentary analysis of the USHCN/Daily data set using the methodology commonly and idiomatically known as ‘eye-balling’, and in quite a few instances it rather appears that Willis has some justification for his data collection scenarios. For a generally well-regarded data set, it does not inspire much confidence. Some clearly bad items are flagged as such, but many others, as far as I can see, are not.

    It is of course entirely possible that I have misunderstood the meta-data flags in some way. If anyone familiar with this data would like to review my charts, I would appreciate any comments.

    The worry is that automated software processes for identifying record-breaking variations in the record, or for calculating averages, etc., are picking up possibly absurd outliers.

    The file is a 6 meg Word doc, so be patient.

  45. welikerocks
    Posted Jan 17, 2007 at 7:56 AM | Permalink

    #43 & 41 I am not a scientist, but I can think like one, and my husband is an environmental scientist. He has to release his data as soon as it is published to anybody who asks for it. He doesn’t think twice about that, and he and his peers welcome any comments, even if sharing it shows there is a mistake. They also don’t care who asks. Are you kidding me that you think otherwise of any other scientist, Dano? What you are implying is that climate science and the Team are special that way, or that science has changed. Your excuses here are the lamest yet, especially if you superimpose them onto what you said in the other thread, which I will quote: “That’s the deal with science – being able to test the theory. Otherwise, it’s not science.” You also forget the Wegman Report in regard to examining the statistics of the Hockey Team. What happened? They ignored and dismissed that as well. Powerful group, but not a group of what I would call good scientists, especially when you remember that website RealClimate they run too.

    Willis, thank you for illustrating what I was trying to ask Lee, which he called nearly incomprehensible. That half of a degree, which he and the likes of Dano keep close to their hearts and which drives their every single thought, makes Lee say to me, after I quote an 8 degree below normal event in a very large region of the country: “so, it’s winter”. Well, you know what, Lee? You claim a global temperature rise of half of a degree in the last 50 years, from thermometers spread out every which way and shape, and not calibrated. So what? It’s the Earth.

  46. Dano
    Posted Jan 17, 2007 at 8:14 AM | Permalink

    45:

    Are you kidding me you think otherwise of any other scientist Dano?

    A constant theme of mine here is that people are humans, with egos and suspicions. Many commenters on vanity sites act as if people don’t act like people, and act as if they are shocked when they do; I suspect this is for rhetorical advantage & thus I merely continue to point out this tactic.

    Your second para is incomprehensible; if you are making some point about my 32, my point there is that big freezes occur in CA, and relying on anecdotal evidence to contextualize them is counter to what many on this board profess to want. That is: have dubya make one of his nice charts about temp anomalies and use that rather than news articles to analyze whether the frequencies of freezes are increasing, decreasing, or staying the same.

    Best,

    D

  47. welikerocks
    Posted Jan 17, 2007 at 8:27 AM | Permalink

    No, Dano it’s not incomprehensible-what is incomprehensible, is that you deny that a half of a degree drives your whole nasty attitude toward fellow human beings. Even why you call people names.

  48. Steve Sadlov
    Posted Jan 17, 2007 at 9:55 AM | Permalink

    RE: #30 – Good way to explain gage R & R, similar to how they do it in Six Sigma training (something which I can see Dan-o the troll has either never had or is deathly afraid of, since Six Sigma and Gage R&R are things AGW fanatics do not want the sheeple to be aware of. If the sheeple were to become Six Sigma aware en masse, they would be asking some very difficult questions.).

  49. jae
    Posted Jan 17, 2007 at 11:55 AM | Permalink

    42, Willis: Could the reliabiliyt of the SAT figure be rationalized by saying that “the errors “cancel out” (average to zero)?”

  50. jae
    Posted Jan 17, 2007 at 11:55 AM | Permalink

    Make that funny word in my last post “reliability.”

  51. jae
    Posted Jan 17, 2007 at 11:59 AM | Permalink

    I just am amazed that there are HUNDREDS of scientists relying on this “global average temperature” value, when there are so many potential problems with it! If the climate science community had any integrity, it would demand an audit on Jones’ calculations. I don’t think we have ANY proof of significant warming, especially since it is not shown by the tropospheric data.

  52. Lee
    Posted Jan 17, 2007 at 12:59 PM | Permalink

    re 42, willis:

    “but I don’t believe that one single solitary thermometer can determine the temperature of 30,000 square miles to an accuracy of ±5°”

    Yes, willis, the spatial correlation of actual temperature is poor. Which is, of course, the reason one uses temperature anomaly, which is spatially correlated, instead of raw temperature.

    But you know this.

  53. EW
    Posted Jan 17, 2007 at 3:43 PM | Permalink

    About the data sharing:

    I’m working in a totally different area – phylogenetics based on DNA sequences. The resulting phylogenetic trees are constructed by various algorithms from aligned sequences of various species. As these sequences differ, the way the alignment is done and which parts are left out (because of untreatable differences) may influence the resulting phylogenetic trees.

    So the recent standard is that the aligned dataset of sequences must be given with the manuscript and deposited in a public database, and also all the software, options, algorithms and other parameters used in tree modelling must be given and freely accessible in public databases. And if a non-geneticist wants to have some fun, he may download it all. Therefore I just don’t get how someone can say that he will not share the data and method.

  54. Willis Eschenbach
    Posted Jan 17, 2007 at 6:45 PM | Permalink

    Lee, your posts are excellent examples of what people believe. You say:

    re 42, willis:

    “but I don’t believe that one single solitary thermometer can determine the temperature of 30,000 square miles to an accuracy of ±5°”

    Yes, willis, the spatial correlation of actual temperature is poor. Which is, of course, the reason one uses temperature anomaly, which is spatially correlated, instead of raw temperature.

    But you know this.

    Lee, this may come as a shock to you, but the correlation between two temperature datasets is identical to the correlation between the same datasets expressed as an anomaly. Correlations are not affected by linear transforms, so whether the datasets are normalized, expressed as anomalies, each multiplied by different numbers, or subjected to any other linear transform does not change the correlation in the slightest.
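
    A quick numerical check of this claim, with two synthetic “stations” standing in for real records (all numbers invented for illustration):

    ```python
    import random

    # Pearson correlation is unchanged by any linear transform of either
    # series, including conversion to anomalies (subtracting each series'
    # own mean). The two series here are synthetic.
    random.seed(0)
    n = 120
    x = [random.gauss(15.0, 8.0) for _ in range(n)]
    y = [xi - 3.0 + random.gauss(0.0, 4.0) for xi in x]  # correlated neighbor

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        va = sum((ai - ma) ** 2 for ai in a)
        vb = sum((bi - mb) ** 2 for bi in b)
        return cov / (va * vb) ** 0.5

    mx, my = sum(x) / n, sum(y) / n
    ax = [xi - mx for xi in x]   # anomaly: subtract the series mean
    ay = [yi - my for yi in y]

    print(abs(corr(x, y) - corr(ax, ay)) < 1e-12)  # → True
    ```

    The same holds for scaling by a positive constant, or any other linear transform.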

    This is one reason why, when I post, I try to avoid the condescending tone of your posting and comments such as “But you know this.” It makes you look like a malicious fool when you are wrong. I don’t mind being wrong; I’ve been wrong lots of times. But being wrong in a personal snark such as your post is one of the reasons why you are not very popular on this blog.

    w.

  55. Ken Fritsch
    Posted Jan 17, 2007 at 7:20 PM | Permalink

    From all this discussion of temperature/climate anecdotes and average global temperatures, I think we must stop, ask some questions, and review the issues which, at least I think, are the pertinent ones.

    Firstly, I would ask those with lots of faith in the current temperature records from Phil Jones to tell me why he should not reveal his methods and whether his failure to do so, for whatever reasons, could/should affect our confidence in them.

    The other questions about global temperature averages and/or temperature anomalies are, first, one related to differences in temperatures over time and what might affect those differences, and, second, how well a sampling of global temperature (as we currently gather it) can be shown statistically to represent the entire global temperature average. I would like to see a statistical analysis related to the latter, as I have not yet discovered a formal one in the literature. As to the former question, I would have to concede that unless there has been a documented bias in time with temperature measurements, one would have to be inclined to accept temperature anomalies without necessarily accepting the absolute accuracy of the measurements or the methods used to collect them.

    Secondly, I think that the small globally realized temperature change does not mean much for most people locally. Tacking a half degree C onto each and every day’s temperature would not adversely affect most people. People do observe and perceive their local temperature/climate changes, which, of course, occur on a much more variable basis. A warm Midwest winter, such as the one we here in the Midwest are currently experiencing, would not arouse much alarm among Midwesterners in terms of global warming; contrarily, it would seem to make most of us yearn for more of the same. A very cold local winter would make some question whether we truly have significant warming and again yearn for more warming. Extremely warm summers on a local basis, on the other hand, would make people more susceptible to the arguments of alarmists and not-so-alarmists about global warming, even though the warming they are talking about is the smaller incremental global warming. People’s perception of temperature and climate change, and their reaction to it, comes primarily from personal local experience and from emphasis added by the media.

    The dilemma of those delivering an alarmist/warning message about global warming is to put small incremental temperature changes that are estimated for the global condition into a context that means something to people who see much larger variations than these over short periods of time. I think the frustration with this dilemma has encouraged the use of warnings/concerns/alarm about extreme weather/climate changes.

    Another effective alarmist argument might be to talk about the small changes in temperature that we estimate occurred during the LIA and the historical correlations with the adversities experienced, in general, by the people of that period. The problem with that argument is that if one does not acknowledge that prior periods in recorded history were warmer than today, the only basis we have for looking at warmer climates is coming out of the ice age. Since that period would be considered a major improvement for civilization, an extrapolation of that condition would predict more improvements.

    If much of my comment appears related to public perception and public relations, that is because I consider most of these discussions oriented in that direction.

  56. Lee
    Posted Jan 17, 2007 at 7:42 PM | Permalink

    willis – that is irrelevant to the objection you made earlier.

    If the temp in Sacramento is 95F, then the temp in Placerville is likely to be about 85F, and the temp in Chico may be, say, 98. The correlation of absolute temps to each other across space is low; temps vary a lot across that area. You are right – the temp in Sacramento is NOT a good predictor of temps in the large area surrounding Sacramento, especially up the foothills, even to within 5°C. Absolute temps in Sacramento and in Placerville do not correlate well at any given time; temps in Sacramento do not tell us temps in Placerville – but the relative variation of that temp from the local average is likely to be very similar among those sites.

    That is, if temps in Sacramento are 5°C above average for a given day, then temps in Placerville and Chico are very likely to also be above average, and by an amount close to (proportional to, at the local site) the 5°C in Sacramento. THOSE are the datasets that will have identical correlation to the anomaly – the temps over time. But that is not what you were objecting to – you were objecting to using Boston to predict temps across a wider area – and that is not what they are doing, and you know it. They are using Boston to predict CORRELATED DELTAS in temps across that area, expressed as anomaly SO THAT IT IS REFERRING TO THE CORRELATED DELTAS FROM ‘NORMAL’, not the absolute temps.

    It doesn’t matter that temps can vary by greater than 5°C within that area and that temps in Sacramento are 5°C different from other temps within the area (which was your objection) – IF the temps are expressed as a temp anomaly, the anomaly is, as you say, the same as the correlation in the time series. And that is the desired information – changes in the correlated time series, not the absolute temperatures.

    Your statement is true – the correlation OVER TIME is the same whether it is expressed as absolute temps or as anomaly. But that isn’t what you were objecting to; you were objecting to using Boston to determine temps across a wider area – i.e. saying that temps in Worcester are not close to those in Boston. This is true, but irrelevant to the actual analysis being done. OVER SPACE at any ONE given time (your objection to predicting temps over an area by looking at one site), the correlation is poor – the temps vary widely. That is why I said SPATIAL correlation.

    Expressing as an anomaly essentially converts those spatially diverse measurements at any given time to temperatures correlated across time (referring to the baseline) and so makes the correlation ‘the same.’ It collapses those large temp differences across space – as you say, perhaps 5°C or greater differences – into anomalies that tend to be very similar across the entire area.

    This is kindergarten-level basic stuff in the field. It is hard to imagine that your feigned amazement at my statement is anything other than an attempt to finesse past that, or pretend this isn’t the issue. It’s the same kind of thing you did in pretending that a line fit to a noisy time series has no error and is an exact representation of the trend of that time series – even though using a different time period of that record yields a trend with different slope. And it’s the kind of thing that got you schooled by Tamino.
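
    The anomaly point can be put in a toy model (Python; all numbers invented, not real station data): sites with very different base climates that share a regional weather signal have absolute temps several degrees apart, yet their anomalies track each other closely.

    ```python
    import random

    # Toy model: base climate + shared regional signal + local noise.
    random.seed(1)
    days = 365
    regional = [random.gauss(0.0, 3.0) for _ in range(days)]  # shared signal

    def station(base):
        return [base + r + random.gauss(0.0, 1.0) for r in regional]

    sac = station(16.0)   # hypothetical valley site
    plc = station(11.0)   # hypothetical foothill site, ~5 C cooler

    def anomaly(series):
        m = sum(series) / len(series)
        return [t - m for t in series]

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        return cov / (sum((u - ma) ** 2 for u in a)
                      * sum((v - mb) ** 2 for v in b)) ** 0.5

    print(sum(sac) / days - sum(plc) / days)   # mean gap, roughly 5 C
    print(corr(anomaly(sac), anomaly(plc)))    # high: shared signal dominates
    ```

    Whether real station pairs behave this way is exactly what is in dispute in this thread; the model only shows what the anomaly method assumes.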

  57. Earle Williams
    Posted Jan 17, 2007 at 10:17 PM | Permalink

    Re #56

    Lee,

    Educate this simple kindergarten soul if you please. What exactly is the ‘normal’ that the anomaly is calculated from?

    Thanks in advance,
    Earle

  58. Bob Weber
    Posted Jan 17, 2007 at 10:27 PM | Permalink

    Per Worldclimate.com, the average temp for Jan
    Sacramento WSO (38.58°N 120.31°W) 46.9°F. The temp at 6:50PM was 42°F or 3.9°F below normal
    Placerville (38.28°N 120.31°W) 40.8°F. The temp at 6:50PM was 39°F or 0.8°F below normal
    Doesn’t this contradict what you say:

    That is, if temps in Sacramento are 5°C above average for a given day, then temps in Placerville and Chico are very likely to also be above average, and by an amount close to (proportional to, at the local site) the 5°C in Sacramento.

    Bob

  59. Bob Weber
    Posted Jan 17, 2007 at 10:29 PM | Permalink

    Oops. My quote is from Lee #56

    Bob

  60. paminator
    Posted Jan 17, 2007 at 11:08 PM | Permalink

    RE #30- Willis:

    Have you seen the April 2006 USCCSP report on surface, satellite and radiosonde temperature trends? They claim fantastically tight confidence intervals for the decadal surface trends, some as low as 0.01 degrees C.

    http://www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm

    Now for some excerpts from the summary and from chapter 3 of the same report:

    Quote 1-
    “Systematic local biases in surface temperature trends may exist due to changes in station exposure and instrumentation over land, or changes in measurement techniques by ships and buoys in the ocean. It is likely that these biases are largely random and therefore cancel out over large regions such as the globe or tropics, the regions that are of primary interest to this Report.”

    Quote 2-
    “Subtle or widespread impacts that might be expected from urbanization or the growth of trees around observing sites might still contaminate a data set. These problems are addressed either actively in the data processing stage (e.g. Hansen 2001) or through data set evaluation to insure as much as possible that the data are not biased (e.g. Jones 1990, etc…).”

    Quote 3 regarding Marine temperatures-
    “Accordingly, only having a few SST observations in a grid box (added- grid box of 2 x 2 or 5 x 5 degrees Longitude or latitude) for a month can still provide an accurate measure of the average temperature of the month.”

    Quote 4 on one of three methods used to verify the absolute errors of the surface temperature record-
    “The second technique is sub-sampling a spatially complete field, such as model output, only where in situ observations are available. Again the errors are small (e.g. the standard errors are less than 0.06 degrees C for the observing period 1880 to 1990; Peterson et al., 1998b).”

    Quote 5-
    “The fidelity of the surface temperature record is further supported by work such as Peterson et al (1999) which found that a rural subset of global land stations had almost the same global trend as the full network and Parker (2004) that found no signs of urban warming over the period covered by this report.”

    Based on this small sampling of caveats in the report, I am not overwhelmed with confidence in the absolute accuracy of the surface temperature record over any length of time. I wish we had microwave sounding units in orbit back in the 1940’s.

  61. Lee
    Posted Jan 17, 2007 at 11:10 PM | Permalink

    re 57 – any common baseline period – it really doesn’t matter, as long as it is consistent. Typically one uses an average across some common time period for the various sites to establish the baseline.

  62. Lee
    Posted Jan 17, 2007 at 11:12 PM | Permalink

    re 58: No – noise averages across multiple observations and multiple sites.
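
    In sketch form (synthetic numbers; this covers random reading error only, not a systematic bias):

    ```python
    import random

    # Each simulated reading carries a random error on the half-degree
    # scale; the error of the mean shrinks as observations accumulate.
    random.seed(3)
    true_value = 10.0

    def mean_of(n):
        return sum(true_value + random.gauss(0.0, 0.5) for _ in range(n)) / n

    for n in (10, 100, 10000):
        print(n, round(abs(mean_of(n) - true_value), 4))
    ```

    A bias that pushes every reading the same way, by contrast, does not average away, which is the caveat raised elsewhere in this thread.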

  63. Earle Williams
    Posted Jan 17, 2007 at 11:18 PM | Permalink

    Re #61

    Lee,

    OK, so help me understand where you’re going please. I’ve got a time series for Sacramento and a time series for Placerville. Is the baseline for Sacramento an average value for Sacramento only, or is it an average for Sacramento, Folsom, Placerville, El Dorado, etc? Also is the baseline a constant over time or does it vary?

    Thanks,
    Earle

  64. Willis Eschenbach
    Posted Jan 18, 2007 at 1:57 AM | Permalink

    Lee, thanks for your post. You say:

    willis – that is irrelevant to the objection you made earlier.

    If the temp in Sacramento is 95F, then the temp in Placerville is likely to be about 85F, and the temp in Chico may be, say, 98. The correlation of absolute temps to each other across space is low; temps vary a lot across that area. You are right – the temp in Sacramento is NOT a good predictor of temps in the large area surrounding Sacramento, especially up the foothills, even to within 5°C. Absolute temps in Sacramento and in Placerville do not correlate well at any given time; temps in Sacramento do not tell us temps in Placerville – but the relative variation of that temp from the local average is likely to be very similar among those sites.

    That is, if temps in Sacramento are 5°C above average for a given day, then temps in Placerville and Chico are very likely to also be above average, and by an amount close to (proportional to, at the local site) the 5°C in Sacramento. THOSE are the datasets that will have identical correlation to the anomaly – the temps over time. But that is not what you were objecting to – you were objecting to using Boston to predict temps across a wider area – and that is not what they are doing, and you know it. They are using Boston to predict CORRELATED DELTAS in temps across that area, expressed as anomaly SO THAT IT IS REFERRING TO THE CORRELATED DELTAS FROM ‘NORMAL’, not the absolute temps.

    Now, think about this process for a minute. Let’s take our single thermometer in the middle of 30,000 square miles. You are saying that on average, if the temperature at that thermometer goes up by 5°, so on average do all the other thermometers in the 30,000 square miles, because the errors cancel out.

    If this is the case, then it would also be true for 300,000 square miles, or for 197,378,412 square miles. So why don’t we just use one thermometer to measure the temperature of the earth?

    The rude truth is that if the raw temperatures are not correlated, neither are the temperature anomalies, and vice versa. The other factor is that the correlation between temperature decreases with distance, often dramatically if you go to other climate regimes. You are 100% correct that we can do better at predicting anomalies than temperatures in a 30,000 square mile area, but the errors in both cases are large.

    How large? Well, let’s take your example above, Placerville and Sacramento. They’re only 62 km (39 mi) apart, so their CORRELATED DELTAS should be quite close. The temperatures are fairly well correlated (r^2 = 0.63). I took the overlap period in the records (1890-1977), removed the average monthly variations, converted both to anomalies, and calculated the correlation and RMS error between the two. The r^2 is 0.63 (surprise, surprise), and the RMS error is 0.93°C.
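
    For reference, the RMS-error step of that calculation is just the following (the values here are invented stand-ins, not the actual Sacramento/Placerville anomalies):

    ```python
    # Root-mean-square of the pointwise differences between two anomaly
    # series; a and b are made-up example values.
    a = [0.3, -1.1, 0.8, 2.0, -0.4, 1.2]
    b = [0.5, -0.6, 1.1, 1.4, -1.0, 0.9]

    rms = (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    print(round(rms, 3))  # → 0.445
    ```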

    Now, since the RMS error is 0.93°C between two nearby stations that are in the same climate regime, perhaps you’d be prepared to estimate the RMS error between two stations 150 miles apart in different climate regions …

    However, all of this is a digression. To return to our regular programming, I started this whole question because I wanted to see if the HadCRUT3 error estimates were reasonable. So far, I have seen nothing that might convince me that they are. The underlying instrument error is ±0.5°C. Given that, do you really think we can determine the global temperature to ±0.147°C? I’d be very interested in your “yes” or “no”, with explanation.

    w.

  65. MarkR
    Posted Jan 18, 2007 at 2:24 AM | Permalink

    45 year snowy weather record?

    Most of the snow fell south of Sunset Boulevard and just east of the 405 Freeway. Residents told NBC4 that several inches of snow fell in their yards.

    The last snowfall recorded at Los Angeles International Airport was in January 1962, according to the National Weather Service. Trace amounts — less than 0.5 inches — were reported, according to the NWS.

    Link

    Lee’s take on this (#22 spelling corrected):

    rocks – its winter. So what?

    Where are the headlines; “New Ice Age in Southern Cali”, “Save the Palm Trees”

    This must be the result of those clever climate model forcing multiplier effects, mustn’t it? Or maybe a teleconnection? Never mind, I expect Michael Mann et al. will saw a tree or two in half and tell us all the answer.

  66. welikerocks
    Posted Jan 18, 2007 at 7:38 AM | Permalink

    re 65 MarkR, Yes where are the headlines? It also snowed in Malibu Canyon yesterday -I wonder what Babs and the rest of Hollywood thought about all that? LOL

  67. welikerocks
    Posted Jan 18, 2007 at 8:25 AM | Permalink

    Willis, my thermometer outside on the porch right now says 33 degrees F, but my home page accuweather report says 38 degrees for Huntington Beach (the “beach”, or main part of the city, with the pier, main street and civic center, is 8 miles away from my house, but I don’t know where that exact thermometer is located either).

  68. Lee
    Posted Jan 18, 2007 at 1:04 PM | Permalink

    sure, willis, we don’t have universal coverage. What we do have is sampling of a very large fraction of the earth’s surface, much larger than just the several thousand immediately local sites where the temperatures are taken, because of the spatial correlation.

    Note that the Sac- Placerville example is toward the ‘bad’ end of the spectrum, given that one is in a large flat valley, and one is in a western-facing ridge-and-stream valley mountain climate – and they still are quite correlated (“surprise, surprise”). For most of the globe, oceanic air temperature for example, spatial correlation is MUCH better. And because sampling does work, one does not need to cover the entire planet.

    “Given that, do you really think we can determine the global temperature to ±0.147°C? I’d be very interested in your “yes” or “no”, with explanation.”
    Answer- Hell yes. Reason – Statistical analysis of a very generous sample. You do remember the role of sampling in statistical analysis, don’t you?

    You are disputing generalities – the papers go into great detail about what they did to correct errors and calculate uncertainties. Rather than ask me to reinvent the wheel, why don’t you actually discuss what they did.

  69. Lee
    Posted Jan 18, 2007 at 1:15 PM | Permalink

    I have come to expect (but am not any less astounded by) the rush by so many ‘denialists’ here to point at single local weather events as evidence against climate warming – especially given that so many of them are the exact same people who decry the exact same behavior by ‘warmers’ on the other extreme.

  70. MarkR
    Posted Jan 18, 2007 at 1:45 PM | Permalink

    #69 Lee, (re post #65), it’s a joke, and the joke is on the warmers, and particularly the media, who trumpet every alleged warm weather “record” event whilst ignoring every cold one.

  71. Earle Williams
    Posted Jan 18, 2007 at 1:50 PM | Permalink

    Re #70

    MarkR,

    You’ve discovered a new relationship in human behavior: the humor detector correlates precisely with the irony detector.

    Cheers,
    Earle

  72. David Smith
    Posted Jan 18, 2007 at 1:56 PM | Permalink

    RE #69 Lee, I think it’s mostly nose-tweaking.

  73. Steve Sadlov
    Posted Jan 18, 2007 at 3:35 PM | Permalink

    Lee’s level of conceptual (lack of) understanding of Gage R & R / variation science is, sadly, widespread amongst far too many individuals in both science and engineering. There are even many PhDs who do not really have a solid, in depth understanding of it. Major caveat – this applies mostly to Westerners, Americans in particular. In Japan they are far less afflicted (witness Taguchi).

  74. Curt
    Posted Jan 18, 2007 at 3:52 PM | Permalink

    The LA Times a couple of days ago had a front-page article on the horrible devastation the present cold wave in California is wreaking on the citrus crop ($1B+ losses). Right below it was an article on the warm winter in the Alps. The article on the Alps immediately put it in the context of global warming; the California citrus article was silent on that subject. That’s the kind of thing people here are reacting to.

  75. Ken Fritsch
    Posted Jan 18, 2007 at 6:10 PM | Permalink

    I do not think the personal attacks in this discussion add much to the knowledge base. What we need are referenced links to discuss and analyze.

    I am most surprised that Lee does not provide a link to a statistical study showing how we can validly determine the sampling error of global temperature measurements and the use of that information to determine a global average anomaly. We obviously have far less than complete or even uniform coverage of the globe for temperature measurements to expect a simple average of all measurements to give a global average. I have done preliminary searches of the internet for studies of this kind without success — to this point.

    I would think that one could at least do a sensitivity study of how well the array of global temperature measurements represents some global mean anomaly, by simply taking various random portions of these measurements and comparing the resulting temperature anomalies.
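
    A first cut at such a sensitivity study might look like this (entirely synthetic station series, with an assumed shared trend plus station noise; real stations are neither independent nor uniformly noisy, so this is only a sketch of the mechanics):

    ```python
    import random

    # Build synthetic station anomaly series, then compare the full-network
    # mean with means of random half-sized subsets of stations.
    random.seed(7)
    n_stations, years = 500, 50

    signal = [0.01 * y + random.gauss(0.0, 0.2) for y in range(years)]
    stations = [[s + random.gauss(0.0, 0.5) for s in signal]
                for _ in range(n_stations)]

    def network_mean(subset):
        return [sum(st[y] for st in subset) / len(subset) for y in range(years)]

    full = network_mean(stations)

    devs = []
    for _ in range(3):
        sub = network_mean(random.sample(stations, n_stations // 2))
        devs.append(max(abs(f - s) for f, s in zip(full, sub)))

    print([round(d, 3) for d in devs])  # worst-case yearly deviations
    ```

    If subsets of real stations scattered much more than this, that would be direct evidence that the network undersamples the field.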

    Would not such a search and analysis be better than going back and forth with personal harangues?

  76. Steve Sadlov
    Posted Jan 18, 2007 at 6:54 PM | Permalink

    The basics of gage R & R:

    http://www.sixsigmaspc.com/dictionary/RandR-repeatability-reproducibility.html

  77. Steve Sadlov
    Posted Jan 18, 2007 at 6:56 PM | Permalink

    FYI:

    http://www.itl.nist.gov/div898/handbook/mpc/section4/mpc46.htm

  78. Ken Fritsch
    Posted Jan 19, 2007 at 10:56 AM | Permalink

    Steve Sadlov, the ability to precisely measure temperature anomalies over time does not necessarily require that the measurements be absolutely accurate. This is the point that Lee is apparently attempting to make. If we had an instrumental error or even a measuring process error that was consistent over time then, while the absolute measurements would be in error, the differences or changes over time could still be conceivably measured — and with precision and accuracy. If biases in the measurement error occurred for many measurements over time, an average of many measurements could yet accurately and precisely determine the differences if we assume that the biases occurred randomly. If we have a bias like UHI that is in one direction it will not “average” out and must be compensated with estimates by some other means.

    My point in this discussion has been: what do the published results in statistical studies and analyses of the temperature anomaly say about this situation? In the case of a global temperature anomaly you not only have the issue of the biases averaging out (or compensations for them), but also the consideration of whether the sampling measurements, which are obviously not complete in coverage nor uniform in geography, can be used with a reasonably estimated error to represent the global average temperature anomaly. I would guess that such studies do exist and that my not finding them says more about my searching abilities. I would think that comparing some random sub-samples of global temperature anomalies would be a logical starting point for such an analysis.

    It is those studies that I would like to discuss and I thought someone with great confidence in their accuracy and precision would be the logical candidate to help me find them.
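    The distinction drawn above, between a constant bias that drops out of differences and a one-sided growing bias that does not, can be illustrated with a toy trend calculation (all numbers assumed):

```python
import statistics

# Toy illustration (assumed numbers): a fixed instrument offset does not
# affect measured temperature *changes*, while a steadily growing,
# one-sided (UHI-like) bias inflates the trend and cannot average out.

true_temps = [10.0 + 0.02 * year for year in range(100)]      # 0.02 C/yr

constant_bias = [t + 0.8 for t in true_temps]                 # fixed offset
growing_bias = [t + 0.01 * yr for yr, t in enumerate(true_temps)]

def trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = statistics.mean(series)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

print(trend(true_temps))     # 0.02
print(trend(constant_bias))  # 0.02, the fixed offset drops out
print(trend(growing_bias))   # 0.03, the one-sided bias inflates the trend
```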

  79. jae
    Posted Jan 19, 2007 at 11:49 AM | Permalink

    Willis has an interesting question in #30, about averaging dry air at 0 degrees with air at 30 degrees at the same pressure. It would be about 14.2 degrees, not 15. And the temp. would generally be even lower, since the pressure of cold air is generally higher. This probably doesn’t make much difference when looking at changes from one year to the next, but it certainly matters when looking at actual temperatures.

  80. Steve Sadlov
    Posted Jan 19, 2007 at 12:11 PM | Permalink

    RE: #78 – When we measure temperature, what we are really trying to ascertain, whether we know it or not, are two things – heat content and thermal resistance en masse. Here is my concern. We measure what appears to be a positive anomaly, either at a specific point and time, or over some integral. We might assume that the anomaly is telling us that de facto thermal resistance has increased (the ongoing claim of the “killer AGW” gang). But what if what we were really seeing was an increase in heat content, at least part of which was due to higher input of energy? AGW fanatics will then say – “but we can correct for that.” Perhaps. All “correction” schemes I am aware of assume that only “urban” measuring points would be impacted by increased (presumably anthropogenic) energy input. That is a dangerous assumption. So called “rural” measurement points are not immune to anthropogenic increases in energy input. In fact, how do we know that rural points have not been equally biased, albeit by rural activities and land uses. No one really knows. Although Pielke Jr. seems to be on the right track toward unravelling it.

  81. Steve Sadlov
    Posted Jan 19, 2007 at 12:12 PM | Permalink

    RE: #80 – I meant Pielke Sr.

  82. Ken Fritsch
    Posted Jan 19, 2007 at 2:17 PM | Permalink

    Jae, thanks for the lead to Willis E’s #30 post, as it contained the reference for which I was searching here.

    The link was to an article that contains an analysis of the HadCRUT methods by authors including Jones. They do look at a sub-sampling of the data to estimate the uncertainty of the incomplete coverage of the globe in temperature measurement. It is papers like that one that I would prefer to discuss. While I think some of the areas covered in the report have been mentioned in this thread, it keeps the discussion more on track when one can reference a paper.

  83. Michael Jankowski
    Posted Jan 19, 2007 at 2:26 PM | Permalink

    The article on the Alps immediately put it in the context of global warming; the California citrus article was silent on that subject. That’s the kind of thing people here are reacting to.

    Give it some time…the Cali citrus freeze will probably also be attributed to “climate change due to anthropogenic global warming” at some point. It seems that every year, there is a severe cold weather event that gets blamed on global warming.

  84. Ken Fritsch
    Posted Jan 19, 2007 at 3:23 PM | Permalink

    Give it some time…the Cali citrus freeze will probably also be attributed to “climate change due to anthropogenic global warming” at some point. It seems that every year, there is a severe cold weather event that gets blamed on global warming.

    It has been done in general: reference global warming and the problems thereof, then put up the main heading Extreme Weather, and under that two sub-headings, one Cold and one Hot. Neat.

    http://news.independent.co.uk/environment/article2165452.ece

  85. Steve Sadlov
    Posted Jan 19, 2007 at 3:51 PM | Permalink

    RE: #84 – Interestingly, there is a sort of bimodal pattern to winter outbreaks of this sort here in Cali. I have a strong suspicion that it has to do with the PDO or perhaps phase changes thereof. My reliable personal observations stretch back to around 1970 (prior to that I was too young). During the last negative PDO phase we had one or more outbreaks of this pattern (a very dry Siberia Express) per year, the last and most memorable being the Feb 1976 low elevation snow event (in the midst of the first year of a severe drought …. ye have been warned!) right at the end of that negative phase. After ’76, we had wetter, warmer winters, with the lone exception of early winter 1990 – 1991, until the El Nino of 1997 – 1998 died out. Significantly, its death was dramatic – an Arctic outbreak for a number of days following December 19th that started with a pretty decent low elevation snow event. Since that 1998 outbreak, we have not had what I would deem a proper El Nino – even the ENSO positive years have been cooler and less flood prone than ones like 1977 – 78, 1981 – 83 and 1997 – 98. One change apparent since 1998 has been lousy summers (in spite of a handful of very hot interior outbreaks which got lots of media hype, but were bracketed by cool overall weather). Another change since 1998 has been the near disappearance of our meteorological spring (consumed by a lengthening meteorological winter). Dave Smith opined that he thinks we are flipping or have flipped to negative PDO; I agree.

    I have two areas for investigation. Firstly, just the simple correlation with the general observations above with PDO. Secondly, the possibility that PDO transition times may allow the “Siberia Express” to assert itself more brutally than normal.

  86. Curt
    Posted Jan 19, 2007 at 7:15 PM | Permalink

    It seems that every year, there is a severe cold weather event that gets blamed on global warming.

    I’ve already seen one column trying to blame the early-fall blizzard in Buffalo NY (when the leaves were still green) on global warming. To be fair, it did not appear to be by anyone with any scientific background.

  87. Willis Eschenbach
    Posted Jan 21, 2007 at 10:48 PM | Permalink

    Well, I’ve been thinking more about the HadCRUT3 dataset and the errors therein. I decided to take a look at the coverage, to see what effect that might have on the data. I had read that coverage has been declining lately for GISS, but I hadn’t seen similar figures for HadCRUT. Here’s the coverage, global and by hemisphere, since 1850.

    The HadCRUT3 coverage has decreased, but not as much as the GISS coverage. I next looked at the most recent figures from HadCRUT, for December 2005. Here is the coverage for the most recent month:

    I was surprised to see how poor the coverage was in the polar regions. I knew it was low, but I didn’t realize just how low it was. To get a sense of how this had changed over time, I looked at the historical coverage by latitude band. Here are the two hemispheres, by latitude:

    As you can see, the polar coverage has always been poor, even in the best years. This is particularly true in the southern hemisphere. It is also interesting to see the drop in coverage during WWI and WWII.

    So … how does one get a global average when the coverage is so poor over a crucial part of the planet? For parts of the polar regions, we have no temperature records at all, but HadCRUT still reports an average … how do they do that?

    Well, they use that mainstay of climate science, the EOF. Unfortunately, for their EOF method, you need to have values for every gridcell … which they don’t have. So here’s how they do it:

    Correlation functions and EOFs were calculated from globally complete anomaly fields for 1948-1999. These were created using a Poisson technique (Reynolds, 1988) to interpolate HadCRUTv with 2m temperature anomalies from the National Centers for Environmental Prediction Reanalysis (Kalnay et al., 1996). This allowed truly global or hemispheric error estimates to be made.

    Now the NCEP reanalysis is a computer model with a best fit to the observed data. Fair enough, but as we have seen, we have very, very little observed data for the poles. So the NCEP is a best-guess at the poles. To that best guess, we are re-fitting the observed HadCRUT data … which again is missing polar data. What’s wrong with this picture?

    This still doesn’t solve the problem of the earlier years, however. The NCEP reanalysis only covers 1948-1999. Here’s their description of how they get around that problem:

    In a non-stationary climate, EOFs based on the relatively well observed 1948-1999 period may not be ranked similarly in earlier periods, particularly the first EOF which describes long-term warming. Thus we reordered our fixed EOF modes through time by projecting each of the 1948-1999 EOFs onto the annual HadCRUTv anomalies in overlapping 52-year periods: 1875-1926, 1876-1927, …, 1948-1999. The variances of each projection gave new eigenvalues for each period, reordering the EOFs. We truncated the reordered EOFs to retain approximately 90% of the 52-year variance, so that the number of EOFs retained varied in time.

    Here, we are getting into sketchier territory. I’d like to see both a complete description of the “projection” process, and a theoretical justification of the idea that we can predict earlier years merely by a reordering of the EOFs of later years.

    What about the first and last parts of the record? Here, they say:

    The truncated set of EOFs was used to calculate the optimal average for the 26th year of each 52-year period, though the EOFs selected for 1948-1999 were used for all years from 1974 onwards, and the EOFs selected for 1875-1926 were used for years prior to 1900 because data were sparse. This procedure little affects the optimal averages, but realistically increases e^2 in the nineteenth century. Optimal methods can be biased towards zero anomaly and so underestimate climate change (Hurrell and Trenberth, 1999), but our 52-year periods are long enough to isolate a global warming EOF as an important source of variance.

    I’d like to see something more than a flat statement that “this procedure little affects the optimal averages” to support this procedure. Also, given the ~60 year cycle in the temperature records, I’d like to see something more than a statement that “our 52-year periods are long enough to isolate a global warming EOF as an important source of variance.” It seems to me that using a set of re-ordered EOFs is bound to affect the average, as will the length of the period.

    What is the effect of this procedure on the calculated errors? They say:

    We calculated the effect of the truncation on e^2 [the error squared] for the period 1948-1999 when complete data were available. The value of e^2 was reduced by 8%, so estimates of e^2 calculated from the above equation were increased accordingly in all years.

    This seems doubtful. It seems to me that truncation of the EOFs will have a greater effect when the EOFs are poorly defined because of the lack of data. Using reordered EOFs from complete data to estimate the error from earlier, partial data needs some kind of formal justification.

    What can we conclude from all of this?

    1) The underlying EOFs are based, not on data, but on a combination of data and computer model results.

    2) Since it is the difference between the HadCRUT averages and the NCEP reanalysis which is considered as the “error” in the HadCRUT analysis, it does not include the error in the NCEP analysis.

    3) In the polar regions, we don’t really know what’s going on. Coverage is minimal in the north, and almost nonexistent in the south. In those regions, it’s almost all computer model results, not data.

    4) Since Jones refuses to reveal his data, we have no chance of a replication of his underlying methods.

    5) Since the information in the error analysis paper is inadequate to determine the exact methods used for the calculation of the averages and errors, there is no way to determine if they have been done correctly or reported accurately. There may be a more detailed description of the averaging method elsewhere, but I have been unable to locate it. The only reference to the use of the Optimal Averaging method is to a paper by one of the authors of the current paper. No mention is made in that paper of the reordering of EOFs to use on earlier data. In fact, that paper says:

    It is concluded that the regional averaging procedure developed in this paper is reliable and accurate, and takes into account the spatial inhomogeneity. However, our method cannot deal with nonstationarity even though the normalization of the weights in the OA procedure minimizes the distortion of the trends in the data.

    6) The method of reordering existing EOFs to use on earlier data needs justification, either Monte Carlo, theoretical, or subset analysis.

    7) Much is made of the ability of various computer models to replicate the historical HadCRUT averages. However, since these averages are themselves based in part on computer models, the argument is circular.

    w.
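    For readers unfamiliar with the mechanics, the EOF computation and truncation step quoted above can be sketched generically via an SVD on a synthetic anomaly field. This is emphatically not the HadCRUT code (the unavailability of which is the complaint), only an illustration of the standard technique being referred to, with invented data:

```python
import numpy as np

# Generic sketch of the EOF step described in the quoted text: compute EOFs
# of a (years x gridcells) anomaly field via SVD and truncate to retain
# about 90% of the variance. Synthetic data; this is NOT the HadCRUT
# procedure itself, only the standard technique it refers to.

rng = np.random.default_rng(0)
n_years, n_cells = 52, 200

# one common "warming" mode plus independent cell-level noise
warming = np.linspace(-0.5, 0.5, n_years)[:, None] * rng.normal(1.0, 0.2, n_cells)
field = warming + 0.3 * rng.normal(size=(n_years, n_cells))

anom = field - field.mean(axis=0)              # anomalies about each cell's mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)

var_frac = np.cumsum(s**2) / np.sum(s**2)      # cumulative variance explained
k = int(np.searchsorted(var_frac, 0.90)) + 1   # number of EOFs retained

reconstructed = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
print(f"retained {k} of {len(s)} EOFs for ~90% of the variance")
```

    Note that the truncation point depends on the noise level and the data available, which is part of why a flat statement that truncation “little affects” the result needs support.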

  88. Ken Fritsch
    Posted Jan 22, 2007 at 11:05 AM | Permalink

    Re: #87

    Thanks, Willis E, for your detailed analysis as it was just the sort of exercise for which I have been asking on this thread. Much food for thought here.

    I was attempting to put together in my mind the sorts of uncertainties in using available temperature data to obtain estimates of global averages and, of course, the issue of incomplete geographic coverage (which you discuss here) comes to mind, but another issue is that of coverage of the daily temperature cycle and the amount of uncertainty that one would estimate for that condition.

    I assume that modern temperature measurements can be digitally recorded at just about any frequency deemed necessary, but what about readings from further in the past, and the effects of those lower sampling frequencies on the uncertainties of longer-term comparisons of temperatures?

  89. Posted Jan 22, 2007 at 11:56 AM | Permalink

    #87

    And then there is the ‘Variance adjustment method’.. Not that I know what it means, it just sounds bad after CVM (or should I say LOC).

  90. Steve Sadlov
    Posted Jan 22, 2007 at 12:13 PM | Permalink

    RE: #87 – There is a story within the story. What we see here is the truth about “urban” vs “rural” stations. The truth is, “rural” stations are really stations located in small to medium sized towns, for the most part. As there are few towns of any sort north of 70, well, there you have it. Almost no stations.

  91. Steve Sadlov
    Posted Jan 22, 2007 at 12:15 PM | Permalink

    I would also be willing to be dollars to donuts that most of the North of 70 stations that do exist are located in Russia and Scandinavia.

  92. Steve Sadlov
    Posted Jan 22, 2007 at 12:16 PM | Permalink

    be > bet

  93. Dave Dardinger
    Posted Jan 22, 2007 at 2:23 PM | Permalink

    re: #91 Steve,

    dollars to donuts

    There’s an idiom which has just about outlived its usefulness. It used to be “dimes to donuts” But with inflation it had to be switched to dollars. But now the fancier donuts already cost a dollar (or 10 for $10 in supermarketese). Pretty soon now it would be a good bet to take as a hedge.

  94. Steve Sadlov
    Posted Jan 22, 2007 at 2:47 PM | Permalink

    LOL …. the “Dollars To Donuts Hedge Fund” …. I like it …

  95. Willis Eschenbach
    Posted Jan 22, 2007 at 7:23 PM | Permalink

    Ken F, thank you for your comments. You say:

    Re: #87

    Thanks, Willis E, for your detailed analysis as it was just the sort of exercise for which I have been asking on this thread. Much food for thought here.

    I was attempting to put together in my mind the sorts of uncertainties in using available temperature data to obtain estimates of global averages and, of course, the issue of incomplete geographic coverage (which you discuss here) comes to mind, but another issue is that of coverage of the daily temperature cycle and the amount of uncertainty that one would estimate for that condition.

    I assume that modern temperature measurements can be digitally recorded at just about any frequency deemed necessary, but what about readings from further in the past, and the effects of those lower sampling frequencies on the uncertainties of longer-term comparisons of temperatures?

    This issue has been investigated at some length during the changeover from the older systems to automated systems. The issue is threefold. Some places took temperatures twice daily, 12 hours apart, and the mean of those was used. Some systems had the old-style thermometers, and the mean of the highest and lowest was used. Some places have automated systems that record every hour, or every minute in some cases, and the mean of those is used.

    The first system is also affected by the time of the twice daily observations. A good deal of thought has gone into adjusting for these different systems to bring them all to the same basis. However, I have not seen any analysis of how this affects the error estimates. It is not mentioned in the HadCRUT paper. It is these kinds of adjustments that I was referring to when I said above:

    4) Since Jones refuses to reveal his data, we have no chance of a replication of his underlying methods.

    w.
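    The three daily-mean conventions described above can be compared on an assumed, deliberately non-sinusoidal day (for a perfectly symmetric sine-wave day they would all agree, which is exactly why the shape of the curve matters). All numbers are invented for illustration:

```python
# Sketch comparing the three daily-mean conventions on an assumed day:
# a long warm plateau, then a cold front dropping temperatures after 20:00.
# For a symmetric sinusoidal day all three estimators would coincide.

def temp(hour):
    """Assumed day: 15 C until 20:00, then a front drops 2.5 C per hour."""
    return 15.0 if hour < 20 else 15.0 - 2.5 * (hour - 20)

hourly = [temp(h) for h in range(24)]

mean_hourly = sum(hourly) / 24                  # automated hourly average
mean_maxmin = (max(hourly) + min(hourly)) / 2   # max/min thermometer style
mean_twice = (temp(6) + temp(18)) / 2           # two fixed daily readings

print(mean_hourly, mean_maxmin, mean_twice)     # 14.375 11.25 15.0
```

    Three conventions, three different “daily means” for the same day; any adjustment bringing them to a common basis carries an error that ought to appear in the uncertainty estimates.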

  96. Willis Eschenbach
    Posted Jan 22, 2007 at 7:51 PM | Permalink

    Steve S, you say:

    RE: #87 – There is a story within the story. What we see here is the truth about “urban” vs “rural” stations. The truth is, “rural” stations are really stations located in small to medium sized towns, for the most part. As there are few towns of any sort north of 70, well, there you have it. Almost no stations.

    As always in the climate world, there is more than one story within the story.

    One is that the northernmost US station at Barrow, Alaska, is known to be badly affected by UHI, and thus corrupts the record much much more than it would if there were more stations around.

    The second is that UHI occurs more easily when the weather is cold. Adding a bunch of houses that are kept at 70°F (20°C) won’t make much difference if that is the ambient temperature, but will have a big effect if the temperature is -40° (either C or F, take your pick, they’re the same …)

    The third, and most important, is how Jones et al. treat the UHI error. He gives it a value of 0°C up to the year 1900, and increasing linearly to ±0.06° in 2006. This seems strange for a couple of reasons:

    1) He provides no justification other than his own 1990 study for the purported size of the error.

    2) He treats the error as symmetrical:

    The urbanization uncertainty could be regarded as one sided: stations cannot be “too rural” but may inadvertently be “too urban” (Jones et al., 1990; Peterson et al., 1999). However, because some cold biases are also possible in adjusted semi-urban data, we conservatively model this uncertainty as symmetrical about the optimum average. We assume that the global average LAT uncertainty (2 sigma) owing to urbanization linearly increases from zero in 1900 to 0.1°C in 1990 (Jones et al, 1990), a value we extrapolate to 0.12°C in 2000 (Figure 1a).

    While stations cannot be “too rural”, it is quite possible that the majority of “rural” stations need adjustment for heat island effects. Also, a symmetrical error assumes that he has done the UHI adjustment in some basically correct manner. But we can’t tell, because …

    3) As far as I know he has never provided a detailed description of exactly how he is doing the adjustment for the UHI.

    w.

  97. Posted Jan 22, 2007 at 8:04 PM | Permalink

    Willis and others, do you think I understood the ‘Variance adjustment method(*)’ correctly:

    1) Detrend the data

    To ensure that the series is stationary, the anomalies in individual grid boxes were detrended using a six-year running average centred on the month of interest.

    2) Multiply the data by a factor smaller than one. Resulting data should have the variance of ‘optimal grid-box result’ (grid box full of thermometers)

    3) Put the trend back

    After the adjustment factor was applied, the smoothed series was added back to recover the variance adjusted time series.

    (I don’t know in which data sets this is used, and don’t know if it matters at all. But if I got it right, it scales the signal down, just like CVM..)

    * see Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850 , P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett & P. D. Jones
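    Taking the three quoted steps literally (my reading of them, not the actual Brohan et al. code), the effect is easy to demonstrate with made-up numbers: the high-frequency part of the series is scaled down by the factor while the smoothed trend is preserved:

```python
import statistics

# Literal sketch of the three steps listed above (my reading, not the
# actual Brohan et al. code): detrend with a running mean, shrink the
# residuals by a factor k < 1, add the smooth part back. Numbers invented.

def running_mean(series, window):
    """Centred running mean, shrinking the window at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# monthly anomalies: slow trend plus high-frequency wiggle
series = [0.005 * m + ((-1) ** m) * 0.5 for m in range(240)]

smooth = running_mean(series, 72)                      # ~6-year smoother
resid = [x - s for x, s in zip(series, smooth)]        # step 1: detrend
k = 0.7                                                # step 2: assumed factor < 1
adjusted = [s + k * r for s, r in zip(smooth, resid)]  # step 3: re-add trend

ratio = statistics.stdev([a - s for a, s in zip(adjusted, smooth)]) / statistics.stdev(resid)
print(ratio)  # 0.7: high-frequency variance scaled down, trend kept
```

    If that reading is right, it does scale the short-term signal down, as suspected.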

  98. Willis Eschenbach
    Posted Jan 22, 2007 at 8:07 PM | Permalink

    UC, you raise an interesting question:

    And then there is the “Variance adjustment method’.. Not that I know what it means, it just sounds bad after CVM (or should I say LOC).

    One of the problems with using the gridcell buckets for any kind of further analysis is that the gridcells contain a variable number of stations. Obviously, the variance of a gridcell with only one station will be larger than the variance of a gridcell which is an average of 20 stations.

    HadCRUT is really two complete datasets, one of which is “variance adjusted” and one of which is not. My understanding of the procedure is that the variance of each gridcell average is adjusted (increased) to match the average variance of the stations within the gridcell. For a gridcell with only one station, of course, this makes no difference. I generally use the variance adjusted dataset because it more closely matches the underlying variance of the record, but of course you can use either one.

    w.

  99. David Smith
    Posted Jan 22, 2007 at 8:26 PM | Permalink

    Willis, another factor I’ve wondered about is the impact of increased UHI wind on minimum temperatures. More wind, even a few extra km/hr, mixes colder near-surface air with warmer above-ground air at night. The effect can be substantial.

    A heat island not only has the obvious heat retention but also can have convective air movement, which can draw wind towards the warm region. And, the large open areas of modern airports have a lot of fetch for a night breeze to develop, versus older weather stations closer to treelines. It’s a complex issue when one is looking for temperature changes of a few tenths of a degree.

  100. Willis Eschenbach
    Posted Jan 22, 2007 at 9:57 PM | Permalink

    David, there is an extensive discussion of this very valid question on Roger Pielke’s website here.

    w.

  101. Posted Jan 23, 2007 at 12:38 AM | Permalink

    My understanding of the procedure is that the variance of each gridcell average is adjusted (increased) to match the average variance of the stations within the gridcell.

    Increased or decreased? Appendix A Eq. 9, k is less than one. Not sure, doesn’t matter, etc., but again something new to learn.

  102. Steve McIntyre
    Posted Jan 23, 2007 at 9:18 AM | Permalink

    I don’t know whether this is applicable but Briffa, Jones, Osborn adjust variance downwards in paleoclimate as the number of contributing series declines using an “effective” number depending on the correlation between contributing series. Maybe there’s something similar here.
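    The Osborn, Briffa and Jones adjustment, as I understand it (this is a sketch of the rbar idea, not their actual code), replaces n by an effective number of independent series, n_eff = n / (1 + (n - 1) * rbar), where rbar is the mean inter-series correlation:

```python
# Sketch of the "effective number of series" idea mentioned above (the
# Osborn, Briffa and Jones rbar formulation, as I understand it; not their
# actual code). The mean of n equally correlated series behaves like a mean
# of n_eff independent series, so its variance is sigma^2 / n_eff.

def n_eff(n, rbar):
    """Effective number of independent series for mean correlation rbar."""
    return n / (1 + (n - 1) * rbar)

def variance_of_mean(sigma2, n, rbar):
    """Variance of the mean of n series, each with variance sigma2."""
    return sigma2 / n_eff(n, rbar)

# with rbar = 0.3, twenty series are worth only about three independent ones
print(n_eff(20, 0.3))              # ~2.985
print(variance_of_mean(1.0, 20, 0.3))
```

    So as the number of contributing series falls, the variance of the composite rises accordingly, which may indeed be analogous to the gridcell case.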

  103. jae
    Posted Jan 23, 2007 at 11:24 AM | Permalink

    The urbanization uncertainty could be regarded as one sided: stations cannot be “too rural” but may inadvertently be “too urban” (Jones et al., 1990; Peterson et al., 1999). However, because some cold biases are also possible in adjusted semi-urban data, we conservatively model this uncertainty as symmetrical about the optimum average. We assume that the global average LAT uncertainty (2 sigma) owing to urbanization linearly increases from zero in 1900 to 0.1°C in 1990 (Jones et al, 1990), a value we extrapolate to 0.12°C in 2000 (Figure 1a).

    I don’t understand this; how can there be “cold biases” in the data? How can there be any symmetry here?

  104. Steve Sadlov
    Posted Jan 23, 2007 at 12:20 PM | Permalink

    Wherever there are human constructed dwellings, roads, electrical grid, management of land, flora and fauna, etc., there will be a warm bias, especially, as noted, at high latitudes during winter. Most “rural” stations are impacted. Warmers hate to confront this fact.

  105. John Hekman
    Posted Jan 23, 2007 at 1:47 PM | Permalink

    As I understand it, GISS uses the USHCN dataset, with adjustments. Pasadena, CA., where I live, is in the USHCN. Pasadena is contiguous with Los Angeles and is for all intents and purposes part of LA. But it’s in the USHCN. Over the last 100 years, the average temp in the USHCN for Pasadena has risen steadily and is now 5 degrees F higher than a century earlier.

    Of course, much of this warming is due to the growth of Pasadena and LA over that period. So how much “adjustment” is done for this?

  106. Michael Jankowski
    Posted Jan 23, 2007 at 2:15 PM | Permalink

    I don’t understand this; how can there be “cold biases” in the data?

    It probably refers to all those urban areas which were razed and turned into farmland over the 20th century, such as Metropolis, Gotham City, etc.

  107. Mr.welikerocks
    Posted Jan 23, 2007 at 4:26 PM | Permalink

    #104

    Steve S. Again, right on the money. As a young soldier stationed in South Korea in the 80’s, I noticed the effect every time I got a pass to Seoul in the winter. The temperature difference between Seoul and where I was stationed up north near the DMZ was on the order of 10 degrees in mid winter, only 20 miles away! It was SO much colder in the really rural areas than in the city that I was mind-boggled at the time, and as a jr. scientist/grunt I wrote my engineer father to find out why this was. He wrote back explaining the “Heat Island” effect. He also explained why the coldest time of day is right AFTER the sun comes up. That was another phenomenon we noticed, being in the field most of the time. Chow used to freeze to our plates if we didn’t eat it immediately. I had some difficulty explaining the science to my peers, brave guys all, but not always the sharpest weapons in the rack.

  108. jae
    Posted Jan 23, 2007 at 6:00 PM | Permalink

    Comment no. 4 at Pielke’s site is very interesting, relative to the so-called Average Temperature. Wow, it has increased 0.0076 degrees C per year!

  109. jae
    Posted Jan 23, 2007 at 6:06 PM | Permalink

    Re: my post 108. OOps, I should have read further; the satellite data were evidently in error and have been changed. The link is still worth reading, however.

  110. John Baltutis
    Posted Jan 23, 2007 at 8:42 PM | Permalink

    According to http://www.uah.edu/News/newsread.php?newsID=291

    “Averaged globally, the satellite data show a 27-year atmospheric warming of 0.35 C (about 0.63º Fahrenheit) since late November 1978, when the first temperature sensors were launched aboard a NOAA satellite.”

    That equates to 0.013 °C/year over the 27-year period (1978-2205).

  111. John Baltutis
    Posted Jan 23, 2007 at 8:43 PM | Permalink

    Ooopps! 27-year period (1978-2005)

  112. Dano
    Posted Jan 23, 2007 at 10:48 PM | Permalink

    107:

    Now if you can show evidence that the GHCN stations are sited within the UHI and that the corrections for UHI are incorrect, then you’ll really have something more than an anecdote.

    Best,

    D

  113. Mr.welikerocks
    Posted Jan 24, 2007 at 12:21 AM | Permalink

    #112,

    10 degrees is significant. I am not saying I am proving anything. The anecdote is a real world observation, and worth noting since the corrections haven’t yet been audited.

  114. Earle Williams
    Posted Jan 24, 2007 at 3:24 AM | Permalink

    Re #112

    Dano,

    GHCN is the Global Historical Climatology Network developed by NOAA, the U.S. National Oceanic and Atmospheric Administration. See the NOAA web page here.

    On the other hand, the temperature data that has been discussed in this thread is the HadCRUT3 temperature dataset maintained by the Hadley Centre Climatic Research Unit. You may see their web page here.

    What is not apparent from your post in #112 is whether you are aware the temperature data under discussion is the HadCRUT3, not GHCN. At issue is whether HadCRUT3 adequately deals with heat island effect. Because Dr. Jones of the Hadley Centre has not been forthcoming with the methodology and corrections (see other threads at CA for example) we have no way of knowing whether or not heat island corrections are adequate.

    Regards,
    Earle

  115. bruce
    Posted Jan 24, 2007 at 4:58 AM | Permalink

    Re #112, 114: Congratulations Earle on a very polite response to Dano. I prepared a response, but decided not to post!

  116. MarkW
    Posted Jan 24, 2007 at 6:58 AM | Permalink

    42,

    To your thought experiment, now make the temperature vary over a 24-hour period. The high temperature usually occurs around noon, but the exact time varies, sometimes by a lot. Now tell those students that they are to take their readings twice a day, at the same time each day. Oh, and each student is taking temperature readings of his own box, and the shape of the temperature curve will be unique for each box. Now we have to assume that the readings are being taken according to instructions, by every student, and that the readings, even if accurately taken, are actually representative of the high and low for any 24-hour period. Then what do the high and low actually tell you about the scenario? Was the temperature curve a smooth sinusoid? Or did it stay high for 23 hours of the day and then plummet as a cold front came through?

    I’ve read that there is a problem with the Siberia numbers during the Soviet period. It seems that towns in that region received subsidies from Moscow to help pay the heating bills. These subsidies were based on the previous year’s average temperature, so there existed a reason for the people keeping the records to fudge their readings: the colder the temperature, the more they were paid. With the break-up of the Soviet Union, that reason disappeared. Did the bias also disappear? Was there a bias? That’s the problem: we can’t prove that there was a bias in the historical data from the region, and even if there was, it would be almost impossible to prove what the magnitude of the bias was.

  117. Steve McIntyre
    Posted Jan 24, 2007 at 7:53 AM | Permalink

    #114. A couple of years ago, before I had any notoriety, I asked Phil Jones for the data used in his UHI study. He said that it was on a diskette somewhere but it would be too hard to find. At the time I wasn’t aware of the systematic obstruction and didn’t pursue the matter.

    Given that Jones’ work has been funded by the U.S. Department of Energy (starting with the Carbon Dioxide Information Analysis Center at Oak Ridge National Laboratory in Tennessee), the flaccid administration of the DOE is equally or more to blame for acquiescing in Jones’ general intransigence.

    An interesting sidenote on HadCRU. Briffa’s reported correlation between gridcell temperature and the foxtail chronology did not hold for the HadCRU data said to have been used in Osborn and Briffa. When I pursued this, it turned out that he used CRUTem instead of HadCRU data. The CRUTem series for his gridcell started only in 1888, while the HadCRU series started in 1870. He said that the HadCRU dataset had some bad data for 1870-1888. I tried to get an explanation of what was wrong with HadCRU and right with CRUTem but got nowhere with Science. Neither Briffa nor Science felt that a corrigendum was required to provide a correct identification of the data actually used in his calculation.

  118. MarkW
    Posted Jan 24, 2007 at 8:10 AM | Permalink

    The August issue (either 2005 or 2004) of the Journal of the American Meteorological Society deals with the quality
    of rural weather station in (I think it was) Colorado. Sorry for being imprecise, the article in question is at home.

    The conclusion of the article was that quality control on these stations was often poor to awful.

  119. MarkW
    Posted Jan 24, 2007 at 8:11 AM | Permalink

    Another point regarding weather stations, most of the known quality control issues will bias the
    station warm. There are very few that will bias the station cold.

  120. Steve Sadlov
    Posted Jan 24, 2007 at 11:02 AM | Permalink

    RE: #119 – I believe that RTDs will drift upward as they age since in most cases they use resistance as a proxy for temperature. Aging electronics increase in resistance.

  121. Nordic
    Posted Jan 24, 2007 at 11:22 AM | Permalink

    Earle: Thanks for the link to the temp. datasets. I was curious to see what stations they used in my area, but was not able to locate a map or listing. Do you know where I can find such a list? I am most interested in HadCRUT3.

  122. Earle Williams
    Posted Jan 24, 2007 at 12:41 PM | Permalink

    Re #121

    Nordic,

    I’m a babe in the woods with respect to these datasets. I haven’t even figured out how to read the HadCRUT3 gridcell data into R. I suggest rooting around the HadCRU web site and emailing them if the info isn’t available.

    Luck,
    Earle

  123. Steve McIntyre
    Posted Jan 24, 2007 at 1:46 PM | Permalink

    #121. You will have no luck trying to get station identifications from HadCRU. This has been an ongoing complaint. When asked, Jones said: “We have 25 years invested in this – why should we let you see the data when your only objective is to find something wrong with it?”

    On downloading HadCRUT3 into R – I’ve done this recently and I’ll post up a methodology for retrieving it.
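    [For illustration only — the actual HadCRUT3 file layout is not described in this thread, so the format below is a made-up stand-in: gridded temperature products of this era were commonly distributed as whitespace-separated ASCII with a sentinel value such as -99.99 marking missing gridcells, and parsing one looks roughly like this.]

    ```python
    MISSING = -99.99  # assumed missing-value sentinel, not confirmed for HadCRUT3

    def parse_grid(text):
        # Parse whitespace-separated gridcell anomalies, mapping the
        # missing-value sentinel to None (hypothetical file format).
        rows = []
        for line in text.strip().splitlines():
            rows.append([None if abs(v - MISSING) < 1e-6 else v
                         for v in map(float, line.split())])
        return rows

    # A tiny synthetic 2 x 3 grid with two missing cells:
    sample = """
     -0.12   0.34  -99.99
      1.05  -99.99   0.88
    """
    grid = parse_grid(sample)
    ```

    The real dataset would also carry per-month headers and a fixed 72 x 36 cell layout; consult the Hadley/CRU documentation for the authoritative format.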

  124. MarkW
    Posted Jan 24, 2007 at 2:03 PM | Permalink

    120,

    I was thinking more along the lines of environmental issues.
    Anything that decreases the average wind velocity in the area of the sensor will make it read hotter, be it a
    hedgerow or dust collecting on the vent screens.
    The boxes are supposed to be painted pure white, but as they age the paint starts turning grey, so they have to be
    repainted regularly. Concrete or asphalt laid down in the vicinity of the sensor will warm it. Even the addition of
    lights to the exterior of nearby buildings can have a small influence.

    Off the top of my head, the only thing that would make the sensors bias cold would be a plant that partially shaded
    the sensor.

    The JAMS article I referenced earlier was rather shocking. The boxes are supposed to be on poles (6 ft tall, I think),
    well away from buildings. They had a picture of one that was mounted on the southern side of a building, just a few
    feet away from an air-conditioner unit. It’s been a while since I read the report, but my recollection is that
    some 20 to 30% of the surveyed sensors had environmental problems severe enough that, in the opinion of the authors,
    the data from those sites was compromised.

  125. Nordic
    Posted Jan 24, 2007 at 3:17 PM | Permalink

    Steve: RE #123 I have seen that quote posted here before, but had no idea they didn’t even release the names of the stations they had used. That is just incredible.

    Not releasing the details of how they have adjusted and corrected data is one thing, but not even allowing people to see what stations were used and experiment with different assumptions themselves is preposterous.

  126. Earle Williams
    Posted Jan 24, 2007 at 3:30 PM | Permalink

    Re #125

    Nordic,

    I agree with you 100%, but it is at least worth a shot. You may be lucky enough to get a clerk or student to send you the info.

    Earle

  127. jae
    Posted Jan 24, 2007 at 4:40 PM | Permalink

    Off the top of my head, the only thing that would make the sensors bias cold would be a plant that partially shaded
    the sensor.

    I don’t see this as a cold bias. They should all be shaded. A white box still picks up a lot of solar and IR radiation.

  128. bez
    Posted Jan 25, 2007 at 2:43 AM | Permalink

    Hi All, been lurking for a while now.

    Re: 125 and previous posts

    CRU is part of UEA, hence the university’s freedom-of-information page

    http://www1.uea.ac.uk/cm/home/services/units/is/strategies/infregs/foi/

    may or may not be of some use for getting data out of them.

  129. MarkW
    Posted Jan 25, 2007 at 11:23 AM | Permalink

    jae,

    I haven’t been able to find a copy of the regulations, but every picture I have ever seen of a sensor box
    shows it standing in an open grassy area. No shade.

  130. MarkW
    Posted Jan 25, 2007 at 11:46 AM | Permalink

    This might have been the article I was thinking of. I hadn’t noticed this the first time,
    but our own Roger A. Pielke Sr. was one of the authors.

    http://ams.allenpress.com/archive/1520-0477/86/4/pdf/i1520-0477-86-4-497.pdf

    I did make two mistakes in my earlier reference: it was the April 2005 issue, not August,

    and it’s the Bulletin of the American Meteorological Society, not the Journal.

  131. Michael Jankowski
    Posted Jan 25, 2007 at 12:23 PM | Permalink

    Off the top of my head, the only thing that would make the sensors bias cold would be a plant that partially shaded
    the sensor.

    In addition to the fact that the sensors should already be shaded: if such a plant were close to the sensor, it would affect the humidity. I’ve seen it recommended by energy conservation folks that while well-placed trees can help shade and cool a home in the summertime, poorly placed trees will make it more difficult to cool the house.

    Maybe someone more familiar with this could give better details, but it seems to me that in such a situation one would see night-time lows higher near a plant than away from it.
