First Thoughts on BEST

Rich Muller sent me the BEST papers about 10 days ago so that I would have an opportunity to look at them prior to their public release. Unfortunately, I’ve been very busy on other matters in the past week and wasn’t able to get to it right away and still haven’t had an opportunity to digest the methods paper. (Nor will I for a week or two.)

As a disclaimer, Rich Muller is one of the few people in this field who I regard as a friend. In 2004, he wrote an article for MIT Review that drew attention to our work in a favorable way. I got in touch with him at the time and he was very encouraging to me. I started attending AGU at his suggestion and we’ve kept in touch from time to time ever since. While people can legitimately describe Phil Jones as not being a “nuclear physicist”, the same comment cannot be made of Rich Muller in either sense of the turn of phrase.

The Value of Independent Analysis
The purpose of audits in business is not to overturn the accounts prepared by management, but to provide reassurance to the public. 99% of all audits support management accounts. I’ve never contested the idea that it is warmer now than in the 19th century. If nothing else, the recession of glaciers provides plenty of evidence of warming in the last century.

Nonetheless, it is easy to dislike the craftsmanship of the major indices (GISS, CRU and NOAA) and the underlying GHCN and USHCN datasets. GISS, for example, purports to adjust for UHI through a “two legged adjustment” that seems entirely ad hoc and which yields counterintuitive adjustments in most areas of the world other than the US. GISS methodology also unfortunately rewrites its entire history whenever it is updated. CRU notoriously failed to retain its original separate data sets, merging different stations (ostensibly due to lack of “storage” space, though file cabinets have long provided a low-technology method of data storage). GHCN seems to have stopped collecting many stations in the early 1990s for no good reason (the “great dying of thermometers”) though the dead thermometers can be readily located on the internet.

Even small changes in station history can introduce discontinuities. Over the years, USHCN has introduced a series of adjustments for metadata changes (changes in observation times, instrumentation), all of which have had the effect of increasing trends. Even in the US where metadata is good, the record is still plagued by undocumented discontinuities. As a result, USHCN recently introduced a new method that supposedly adjusts for these discontinuities. But this new method has not been subjected to thorough scrutiny by external statisticians.
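The kind of undocumented break such methods look for can be illustrated with a toy difference-series test. This is only a sketch, not USHCN's actual pairwise homogenization algorithm, which is considerably more elaborate:

```python
import numpy as np

def step_change(candidate, reference):
    """Locate the most likely single step change in a candidate series
    relative to a reference series (difference-series approach; a toy
    stand-in for operational pairwise homogenization).

    Returns (index, shift): the split point and the mean offset after it.
    """
    diff = np.asarray(candidate) - np.asarray(reference)
    n = len(diff)
    best_i, best_stat = None, -np.inf
    for i in range(2, n - 2):  # require a few points on each side
        left, right = diff[:i], diff[i:]
        # two-sample mean-shift statistic (larger = stronger break)
        pooled = np.sqrt(left.var(ddof=1) / len(left) +
                         right.var(ddof=1) / len(right))
        stat = abs(right.mean() - left.mean()) / (pooled + 1e-12)
        if stat > best_stat:
            best_stat, best_i = stat, i
    return best_i, diff[best_i:].mean() - diff[:best_i].mean()
```

A station that was moved or re-instrumented shows up as a step in its difference from a well-correlated neighbour, which is exactly what such an algorithm flags.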

The US has attempted to maintain a network of “rural” sites, but, as Anthony Watts and his volunteers have documented, these stations all too often do not adhere to formal standards of station quality.

The degree to which increased UHI has contributed to observed trends has been a longstanding dispute. UHI is an effect that can be observed by a high school student. As originally formulated by Oke, UHI was postulated to be more or less a function of log(population) and to affect villages and towns as well as large cities. Given the location of a large proportion of stations in urban/town settings, Hansen, for example, has taken the position that an adjustment for UHI was necessary while Jones has argued that it isn’t.

Unlike the statistical agencies that maintain other important indices (e.g. Consumer Price Index), the leaders of the temperature units (Hansen, Jones, Karl) have taken strong personal positions on anthropogenic global warming. These strong advocacy and even activist positions are a conflict of interest that has done much to deter acceptance of these indices by critics.

This has been exacerbated by CRU’s refusal to disclose station data to critics, while readily providing the same information to fellow travellers. Nonetheless, as I reminded CA readers during CRU’s refusal of even FOI requests, just because they were acting like jerks didn’t mean that the indices themselves were in major error. Donna Laframboise’s “spoiled child” metaphor is apt.

The entry of the BEST team into this milieu is therefore welcome on any number of counts. An independent re-examination of the temperature record is welcome and long overdue, particularly when they have ensured that their team included not only qualified, but eminent, statistical competence (Brillinger).

Homogeneity
They introduced a new method to achieve homogeneity. I have not examined this method or this paper and have no comment on it.

Kriging
A commenter at Judy Curry’s rather sarcastically observed that, with my experience in mineral exploration, I would undoubtedly endorse their use of kriging, a technique used in mineral exploration to interpolate ore grades between drill holes.

His surmise is correct.

Indeed, the analogies between interpolating ore grades between drill holes and spatial interpolation of temperatures/temperature trends have been quite evident to me since I first started looking at climate data.

Kriging is a technique that exists in conventional statistics. While I haven’t had an opportunity to examine the details of the BEST implementation, in principle, it seems far more logical to interpolate through kriging rather than through principal components or RegEM (TTLS).
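For readers unfamiliar with the technique, here is a minimal ordinary-kriging sketch. The exponential covariance model and the range/sill/nugget values are illustrative assumptions, not BEST's actual choices:

```python
import numpy as np

def ordinary_krige(xy_obs, z_obs, xy_new, range_=500.0, sill=1.0, nugget=0.05):
    """Ordinary kriging with an assumed exponential covariance model.

    xy_obs : (n, 2) station coordinates (e.g. km); z_obs : (n,) values;
    xy_new : (m, 2) points to estimate. Returns (estimates, kriging variances).
    """
    def cov(d):
        # exponential covariance; the nugget is added on the diagonal below
        return sill * np.exp(-d / range_)

    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # ordinary-kriging system: covariances plus a Lagrange-multiplier row/col
    # enforcing that the weights sum to one
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(d_obs) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0

    est = np.empty(len(xy_new))
    var = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        d0 = np.linalg.norm(xy_obs - p, axis=1)
        k = np.append(cov(d0), 1.0)
        w = np.linalg.solve(K, k)
        est[i] = w[:n] @ z_obs
        # kriging variance grows as the target moves away from the data
        var[i] = (sill + nugget) - w @ k
    return est, var
```

The appeal of the method is visible in the last line: the same machinery that produces the interpolated value also produces an error estimate that widens in data-sparse regions, which is precisely the behaviour wanted for the dark parts of the map.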

Dark Areas of the Map
In the 19th century, availability of station data is much reduced. CRU methodology, for example, does not take station data outside the gridcell and thus leaves large portions of the globe dark throughout the 19th century.

BEST takes a different approach. They use available data to estimate temperatures in dark grid cells while substantially increasing the error bars of the estimates. These estimates have been roundly condemned by some commenters on threads at Judy Curry’s and Anthony Watts’.

After thinking about it a little, I think that BEST’s approach on this is more logical and that this is an important and worthwhile contribution to the field. The “dark” parts of the globe did have temperatures in the 19th century and ignoring them may impart a bias. While I haven’t examined the details of their kriging, my first instinct is in favor of the approach.

The Early Nineteenth Century
A second major innovation by BEST has been to commence their temperature estimates at the start of the 19th century, rather than CRU’s 1850/1854 or GISS’s 1880. They recognize the increased sparsity of station data with widely expanded error bars. Again, the freshness of their perspective is helpful here. (They also run noticeably cooler than CRU between 1850 and 1880.) Here is their present estimate:

The differences between BEST and CRU have an important potential knock-on impact in the world of proxy reconstructions – an area of technical interest for me. “Justification” of proxy reconstructions in Mannian style relies heavily on RE statistics in the 19th century based on CRU data. My guess is that the reconstructions have been consciously or subconsciously adapted to CRU and that RE statistics calculated with BEST will deteriorate and perhaps a lot. For now, that’s just a dig-here.
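For reference, the RE statistic at issue compares squared reconstruction errors in the verification period against a null that simply carries forward the calibration-period mean; a minimal version, with hypothetical inputs:

```python
import numpy as np

def reduction_of_error(obs, recon, calib_mean):
    """RE statistic: 1 - SSE(reconstruction) / SSE(calibration-mean null).

    obs, recon : verification-period observed and reconstructed values;
    calib_mean : mean of the observed series over the calibration period.
    RE > 0 means the reconstruction beats the naive null.
    """
    obs, recon = np.asarray(obs, float), np.asarray(recon, float)
    sse = np.sum((obs - recon) ** 2)
    sse_null = np.sum((obs - calib_mean) ** 2)
    return 1.0 - sse / sse_null
```

Because the observed series enters both the numerator and the null, swapping the verification target from CRU to a cooler BEST 19th century changes RE directly, which is why the knock-on effect on the reconstructions is worth checking.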

It’s also intriguing that BEST’s early 19th century is as cold as it is.

BEST’s estimate of the size of the temperature increase since the start of the 19th century is much larger than previous estimates. (Note- I’ll update this with an example.)

The decade of the 1810s is shown in their estimates as being nearly 2 degrees colder than the present. Yes, this was a short interval and yes, the error bars are large. The first half of the 19th century is about 1.5 degrees colder than at present.

At first blush, these are very dramatic changes in perspective and, if sustained, may result in some major reinterpretations. Whereas Jones, Bradley and others attempted to argue the non-existence of the Little Ice Age, BEST results point to the Little Ice Age being colder and perhaps substantially colder than “previously thought”.

It’s also interesting to interpret these results from the context of “dangerous climate change”, defined by the UN as 2 deg C. Under BEST’s calculations, we’ve already experienced nearly 2 deg C of climate change since the early 19th century. While the authors of WG2 tell us that this experience has been entirely adverse, if not near catastrophic, it seems to me that we have, as a species, not only managed to cope with these apparently very large changes, but arguably even flourished in the last century. This is not to say that we would do equally well if faced with another 2 deg C. Only that if BEST estimates are correct, the prior 2 degrees do not appear to have been “dangerous” climate change.

Comparison to SST
They do not compare their land results to SST results. These two data sets have been said to be “independent” and mutually reinforcing, but I, for one, have had concerns that the results are not truly independent and that, for example, the SST bucket adjustments have, to some extent, been tailored, either consciously or subconsciously, so that the SST data cohere with the land data.

Here is a plot showing HadSST overlaid onto the Berkeley graphic. In the very early portion, the shape of the Berkeley series coheres a little better to HadSST than CRUTem. Since about 1980, there has been a marked divergence between HadSST and the land indices. This is even more marked with the Berkeley series than with CRUTem.

Station Quality
I have looked at some details of the Station Quality paper using a spreadsheet of station classifications sent to me by Anthony in August 2011 and cannot replicate their results at all. BEST reported a trend of 0.039 deg C/century from “bad” stations (CRN 4/5) and 0.0509 deg C/century from “good” stations (CRN 1/2). Using my archive of USHCN raw data (saved prior to their recent adjustments), I got much higher trends (they reported in deg C/century, not, as I had first assumed, deg C/decade), with trends at good stations being lower than at bad stations in a coarse average. The station counts for good and bad stations don’t match the information provided to me. (Perhaps they’ve applied their algorithm to USHCN stations. Dunno.)

                 Watts Count   Rohde Count   Rohde Trend   Trend (1950)   Trend (1979)
Bad (CRN 4/5)            506           705        0.0388           0.16           0.27
Good (CRN 1/2)           185            88        0.0509           0.15           0.22
As was observed in very early commentary on surface stations results, there is a strong gradient in US temperature trends (more negative in the southeast US and more positive in the west). The location of good and bad stations is not spatially random, so some care has to be taken in stratification.

In my own quick exercises on the topic, I’ve experimented with a random effects model, allowing a grid cell effect. I’ve also experimented with further stratification for rural-urban (using the coarse USHCN classification) and for instrumentation.

On this basis, for post-1979 trends, “rural bad” had a trend 0.08 deg C/decade greater than “rural good”; “small-town bad” was 0.07 deg C/decade greater than “small-town good”; and “urban bad” was the opposite sign relative to “urban good”: 0.01 deg C/decade cooler.

Stratifying by measurement type, “CRS bad” was 0.05 deg C/decade warmer than “CRS good”, while “MMTS bad” was 0.15 deg C/decade warmer than “MMTS good”.

Combining both stratifications, “MMTS rural good” had a post-1979 trend of 0.11 deg C/decade while “CRS urban bad” had a corresponding trend of 0.42 deg C/decade.
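These stratified comparisons can be sketched roughly as follows. This is a simplified stand-in, using hypothetical station records, for the analysis in the actual script; the real random-effects model also allows a grid-cell effect:

```python
from collections import defaultdict
import numpy as np

def ols_trend(years, temps):
    """Least-squares slope, converted to deg C per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

def stratified_trend_gap(stations):
    """Mean (bad - good) trend difference within each stratum.

    stations : list of dicts with keys 'stratum' (e.g. ('MMTS', 'rural')),
    'quality' ('good' for CRN 1/2, 'bad' for CRN 4/5), 'years', 'temps'.
    """
    trends = defaultdict(lambda: defaultdict(list))
    for s in stations:
        trends[s['stratum']][s['quality']].append(
            ols_trend(s['years'], s['temps']))
    gaps = {}
    for stratum, by_q in trends.items():
        # only compare strata containing both quality classes
        if by_q['good'] and by_q['bad']:
            gaps[stratum] = np.mean(by_q['bad']) - np.mean(by_q['good'])
    return gaps
```

Computing the good/bad difference within each (instrument, siting) stratum, rather than in one coarse average, is what guards against the spatial gradient in US trends discussed above.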

Details of the BEST calculation on these points are not yet available, though they’ve made a commendable effort to be transparent and I’m sure that this lacuna will be remedied. I’ve placed my script for the above results online here. (The script is not turnkey as it relies on a spreadsheet on station quality that has not been released yet, but the script shows the structure of the analysis.)

Conclusion
As some readers have noticed, I was interviewed by Nature and New Scientist for their reports on BEST. In each case, perhaps unsurprisingly, the reporters chose to emphasize criticisms. For example, my nuanced criticism of the analysis of the effect of station quality was broadened by one reporter into a sweeping claim about overall replicability that I didn’t make.

Whatever the outcome of the BEST analysis, they have brought welcome and fresh ideas to a topic which, despite its importance, has had virtually no intellectual investment in the past 25 years. I am particularly interested in their 19th century conclusions.


223 Comments

  1. Bruce
    Posted Oct 22, 2011 at 9:47 AM | Permalink

    “Stratifying by measurement type, “CRS bad” was 0.05 deg C/dec warmer than “CRS good” while “MMTS bad” was 0.15 deg C warmer than “MMTS good”.”

    Was the 2nd measurement of this sentence supposed to be 0.15 deg C/dec ?

  2. Craig Loehle
    Posted Oct 22, 2011 at 9:48 AM | Permalink

    The large amount of warming since the early 19th century cannot be blamed on human impacts without serious logical contortions.
    Even if everything they are doing is wrong (which is not true, of course) just the fact that someone is doing this who is not part of IPCC or allied with Greenpeace is a huge huge thing.

    • cce
      Posted Oct 22, 2011 at 10:49 AM | Permalink

      There were 5 major volcanic eruptions between 1812 and 1815, including Tambora — the largest eruption in over a thousand years. This was also during the Dalton Minimum.

      • Craig Loehle
        Posted Oct 22, 2011 at 10:52 AM | Permalink

        The Dalton Minimum is a solar effect, not human.

        • cce
          Posted Oct 22, 2011 at 5:40 PM | Permalink

          Nor is volcanic activity, which is the point. Few would “blame” human effects for the subsequent warming. Perhaps Steve could overlay BEST on WG1 figure 6.13. (which isn’t exactly the same thing, of course).

          Although, I wonder how accurate BEST’s uncertainty bars are during periods of extreme volcanism. Ordinary spatial relationships might fly out the window in such circumstances.

      • Don McIlvin
        Posted Oct 22, 2011 at 4:09 PM | Permalink

        In fact I believe 1816 was the “year without a summer” with snow in July in New England – due entirely to the effects of volcanic activity.

        • John
          Posted Oct 22, 2011 at 5:55 PM | Permalink

          Don, from what I have read — not definitive, obviously — the combination of the Dalton Minimum plus Tambora is what gave New England the year without a summer.

        • Don McIlvin
          Posted Oct 22, 2011 at 7:50 PM | Permalink

          Agreed (delete “entirely”).

      • Posted Oct 22, 2011 at 7:23 PM | Permalink

        To all in this sub-thread, notice that the downward slope of 1800-1815 or so was pretty much over by the time of Tambora or the other 1812-1815 eruptions. If anything, shortly after the year without a summer, the slope climbed like a mofo. So what was with that?

        One might as well point to the 1811-1812 winter earthquakes near New Madrid, MO. Quakes don’t have any connection, and volcanoes only brief connections. But volcanoes also don’t have >10 year effects, positive or negative.

        • Don McIlvin
          Posted Oct 22, 2011 at 7:48 PM | Permalink

          Well one factor would be the peak warmth of the 60-year Pacific Oscillation. The late 90s had the highest amplitude ever recorded. If you count back, 1818 would be a peak. Of course the oscillation may not be precisely 60 years. There also may be slight anomalies in the time calibration of the graph. Point being, there are multiple factors producing a forcing in different directions in that short period of time.

        • Don McIlvin
          Posted Oct 22, 2011 at 7:53 PM | Permalink

          But, yeah, one might expect an even more precipitous drop than represented in the graph. But other volcanic events seem to show a decline of a year or two and then a rebound thereafter, of course with lesser effect.

        • Geoff Sherrington
          Posted Oct 23, 2011 at 1:25 AM | Permalink

          Remember that there is averaging of NH and SH, which might be out of phase; of ocean and land, which have different response times; of Tmax and Tmin, which otherwise might show more character if shown separately; and of volcanoes, whose effect might be felt more strongly in the hemisphere of occurrence when distant from the Equator. There is a lot of data smearing and smoothing on a single summary graph, and all of this can make an event seem smaller than intuition would suggest. After all, 1.7 deg in the 200 years of Tav is fairly small compared with every day to every night variation at home.
          As cross-posted at the Bishop’s, I have prepared a large set of Australian data and given it to BEST, who had given me much of their early data a year ago. Australia is different to USA because it has about 5% of the population density but a lot of weather stations, so there are many I have labelled as “pristine” to set a baseline for comparison with UHI changes. All of the info is available to anyone who wants to massage it as a sub-case of the global BEST. I’ve added features like Census population for some years, distance of station from ocean (watch this one), checked coords and altitude, looked at several hundred stations on Google Earth, measured the present distance from weather station to nearest suburb and a number of other factors. In my very limited analysis so far, I can’t disagree much with the BEST results, but the big sleeper that I want to read about is the amount of adjustment done to the data by others before BEST got it.

          I have often wondered, if 1800 was near the low of the little ice age, where the heat came from to give that 2 deg climb that so many people accept. I can see a portion of it in the change from glass thermometers to thermistors and the derivation of daily Tmax and Tmin and I hope to show more of this in due course.

          sherro1 at optusnet com au

  3. Scott Brim
    Posted Oct 22, 2011 at 10:04 AM | Permalink

    Steve, would you have an opinion as to which particular varieties of the Kriging interpolation method are most applicable to temperature and to temperature trends?

  4. Ron Cram
    Posted Oct 22, 2011 at 10:11 AM | Permalink

    Steve,
    Thank you for this analysis. I am glad to see your comments on BEST’s analysis on LIA. I have often thought CRU and others have fudged the data on LIA.

    I am also intrigued by this comment:

    “Combining both stratifications, “MMTS rural good” had a post-1979 trend of 0.11 deg C/decade while “CRS urban bad” had a corresponding trend of 0.42 deg C/decade.”

    Surely this is an important observation from your analysis. I don’t understand how BEST could be downplaying or making errors on something like this. To me, it is a foregone conclusion you are correct. I expect Muller will have a much different attitude than Michael Mann when you identify his errors. I look forward to reading more about this.

  5. Posted Oct 22, 2011 at 10:43 AM | Permalink

    Ha,

    I suggested to Andy Lacis that the most interesting aspect of BEST was the relation to proxy reconstructions. Crickets after that.

    In my mind the next logical step is to recompile the recons!

    Hansen says recons are more vital than models.

  6. Ron Cram
    Posted Oct 22, 2011 at 11:11 AM | Permalink

    Steve,
    I know next to nothing about kriging. My guess is that it might work very well at fairly long distances, maybe 400 miles, in the huge prairie of west Texas, but not so well at shorter distances with different microclimates. Very different microclimates exist on either side of the Sierra Nevada mountains. One day weather stations on both sides might show the same temperature, but the next day one could be up 10 degrees and the other down 10 degrees. From a satellite, the two locations look to be just a few miles apart.

    Similarly, San Bernardino valley is surrounded on three sides by mountains. On the other side of the mountains to the north is the High Desert area of Victorville. On the other side of the mountains to east and south is the Low Desert of Palm Springs area. Three very different microclimates in close proximity. Temperatures do not go up and down in any kind of relationship. I have been in Palm Springs in June when it was 100F, took the tram to the top of the mountain where it was 60F and had patchy snow on the ground. Needless to say, a 40 degree difference between mountain top and desert floor is not constant.

    • Posted Oct 22, 2011 at 11:40 AM | Permalink

      Very different microclimates exist on either side of the Sierra Nevada mountains. One day weather stations on both sides might show the same temperature, but the next day one could be up 10 degrees and the other down 10 degrees. From a satellite, the two locations look to be just a few miles apart.

      It doesn’t take a mountain range to have such an effect. I live on Whidbey Island, towards the middle of the northern end at an elevation of about 220′. On the west side of the island at roughly the same altitude, facing Puget Sound, the temperature can differ from my house by more than 10 degrees, and the distance is less than 10 miles. It depends on what the water is doing any particular day.

      • Posted Oct 22, 2011 at 1:59 PM | Permalink

        Wrong parameter.
        What you’re interested in is the TREND, not the temperature.
        Read through their paper and look at the work on correlation.

        • Bruce
          Posted Oct 22, 2011 at 3:09 PM | Permalink

          Maybe some of us are interested in the TREND in relation to the quality of the stations. If Tokyo and Paris warm by 2C or 3C we know why. If truly rural stations do not warm or even cool, while 27% of the stations (urban) warm by 1 or 2C and drag up the overall trend, then we want to know.

        • Steven Mosher
          Posted Oct 22, 2011 at 6:18 PM | Permalink

          do some math on percentages

        • Bruce
          Posted Oct 22, 2011 at 8:19 PM | Permalink

          ““CRS urban bad” had a corresponding trend of 0.42 deg C/decade.”

          Which is .31 deg C/decade more than “MMTS Rural Good”.

          3.1C / century

          We don’t know the ratio of urban to rural in each grid cell, or when exactly UHI started, or whether those in between rural and urban had a slightly smaller UHI, but assuming it is evenly spread, 27% gets me to .775 C for the 20th century just from UHI.

        • Posted Oct 23, 2011 at 5:53 PM | Permalink

          check your math. use google

        • Bruce
          Posted Oct 24, 2011 at 11:10 AM | Permalink

          Say something useful.

        • Posted Oct 24, 2011 at 6:41 PM | Permalink

          I did. u missed it as usual

        • Ron Cram
          Posted Oct 22, 2011 at 9:52 PM | Permalink

          Mosher,
          If the trend can be different in the short term, it can be different in the longer term.

        • Steven Mosher
          Posted Oct 24, 2011 at 2:16 AM | Permalink

          Of course, there is no law of logic dictating it.
          The question is .. what is the short term, what is the long term, and how different.

          It would seem quite strange that one spatial location would see warming over 30 years, while another 10 miles away would fail to see the same warming. One thing that could drive that would be special geography, like a coastal station (which follows SST) versus an inland station.

        • Brian Eglinton
          Posted Oct 24, 2011 at 7:33 AM | Permalink

          There has been quite a lot of reaction to BEST so far – and it would appear that there are a lot of tangents being taken.

          As some have remarked – it has opened up a lot of new data and analysis for perusal. And amongst that is the most remarkable summary observation that 1/3 of sites show a cooling trend and 2/3 of sites show a warming trend. Hence – add them all together and behold a warming conclusion.

          Does anyone find this a little strange though. My first thought was to wonder at the distribution of the cool vs warm trends. It seems BEST have actually published some of this. It is posted at http://notalotofpeopleknowthat.wordpress.com/2011/10/22/kansas-temperature-trend-updatemuller-confirms-there-is-a-problem/
          and it shows a reasonably even mix across the USA – and a lot of contrasting stations very close together.
          Hence furthering Steven’s statement above “It would seem quite strange that one spatial location would see warming over 30 years, while another 10 miles away would fail to see the same warming.” – I think we can say that the BEST results are “quite strange”

          Perhaps what it highlights is that there really is a problem with the input data quality?

        • Bruce
          Posted Oct 24, 2011 at 9:37 AM | Permalink

          Wow. What a map. Thanks!

          I wonder if the blue and red stations have something in common other than changes in CO2? UHI? Sensor type? Cleaner air? Dirtier air? Albedo changes? Altitude?

        • Phil R
          Posted Oct 24, 2011 at 6:21 PM | Permalink

          I’m certainly on the layman’s side of the discussion, but I like the quote in the link where he almost offhandedly states, “though some clumping is present…”. Being slightly familiar with U.S. geography, it appears that there is obvious clumping (of red dots) around the Seattle area, the west coast of California (SF & LA), possibly Las Vegas (though there are no state borders on the map), and what is generally referred to as the Megalopolis (Washington-Baltimore-Philadelphia-New York-Boston corridor). Not sure if this means anything since UHI apparently does not have much of an effect, but it just stood out to me.

        • Posted Oct 24, 2011 at 6:47 PM | Permalink

          That same phenomenon has been noticed before, here at CA for example with USHCN.
          tonyb also noticed it. I’ve got reams of examples of it. They tend to share some common pattern.

          The existence of persistent cooling trends in the midst of a field that is warming is very interesting. It doesn’t, however, change radiative physics or energy balance physics.

          Long ago I started a regression project to try to identify some characteristics these stations share. Geography (being coastal) is a player. Being in a harbor is also a player. Mountain valleys are also important.

          I view them as “eddies” in the field; understanding their structure and evolution is not easy given available data.

        • Bruce
          Posted Oct 24, 2011 at 8:12 PM | Permalink

          UHI explains the warming ones.

        • Steven Mosher
          Posted Oct 26, 2011 at 12:47 AM | Permalink

          oh we are still in the LIA, when’s the frost fair

        • Bruce
          Posted Oct 26, 2011 at 2:58 PM | Permalink

          Did you know 1826 was the 2nd warmest Jun/Jul/Aug in HadCET?
          And 1846 was the 6th warmest Jun/Jul/Aug?
          And 1818 was the 4th warmest Sep/Oct/Nov?
          And 1834 was the 2nd warmest Dec/Jan/Feb?

          The 1818/1840 time frame was as warm as now for some seasons in some years. The difference between now and then is they had some colder winters too to bring down the average.

          Your cracks about the LIA show you are quite misinformed.

          http://www.metoffice.gov.uk/hadobs/hadcet/ssn_HadCET_mean_sort.txt

        • Brian Eglinton
          Posted Oct 25, 2011 at 12:16 AM | Permalink

          I don’t often comment, but taking a step back and getting a wider view, it seems that part of the issue is a determination to get an answer.

          I mentioned this awhile ago when discussing paleo results. Everyone was arguing about what the proxies revealed, but few were prepared to consider that the proxies may not be proxies at all. If a pattern was drawn out of all the noise then surely that was a real measurement of the temperature history. My comment at the time is that being smart humans, we would always be able to derive some sort of signal out of the noise, but to attribute a meaning to it was probably to fool ourselves.

          The BEST project and similar ones do not even consider this a question worth asking. But I suggest to you that with 1/3 of all sites showing a cooling trend, this is not a small anomaly or climate “eddies”, but an essential feature of the statistics. As such it should be explored in depth before any conclusions are made about the results.

          The quote from BEST is:
          “As with the world sample, the ratio of warming sites to cooling ones was in the ratio of 2:1. Though some clumping is present, it is nonetheless possible to find long time series with both positive and negative trends from all portions of the United States. This reemphasises the point that detection of long term climate trends should never rely on individual records.”
          I think the last sentence should be: “This emphasises the point that a sane measurement of a long term climate trend should not be expected from an average of many records.”

          Steven, you have made it clear that you are as thoroughly persuaded about the basic physics as Gavin and lots of others are. And I suspect you know that lots of other “sceptics” are persuaded as well. But that ought not be a foundational principle in approaching this data analysis, as if all that is required is to wrangle out, by way of complex statistical manipulation, a figure for the rising temperature trend. i.e. Because we ‘know’ that temperature must be rising, therefore the data we are using ‘must’ be able to give us a figure showing this.

          I greatly appreciate the analysis efforts you have put into this subject over a long time. But as you say -
          “understanding their structure and evolution is not easy given available data.”
          Yet if BEST does anything, it should be the provision of a better platform to explore just this very issue. Dealing with this anomaly is far more important in my view than glossing over it to pronounce some agreed temperature profile.

        • Romain
          Posted Oct 26, 2011 at 7:45 AM | Permalink

          Brian Eglinton, I could not agree more.

          Having warming stations so close to cooling stations is intriguing.
          Apologies if this has been said numerous times in the past, but you would intuitively expect a stronger spatial correlation. And a strong spatial correlation is the prerequisite for averaging temperatures from somewhat distant stations.
          How are the authors dealing with this when evaluating their uncertainties? Surely having a grid much larger than the signal auto-correlation must impact the uncertainty? Is this documented somewhere?
          (I must say, in the few papers I’ve read, including these BEST ones, the uncertainty estimation was very poorly covered.)

        • Posted Oct 25, 2011 at 5:18 AM | Permalink

          My original post detailed temperature trends of all USHCN stations in Kansas over the past century. It was interesting that BEST confirmed the same sort of picture across the whole US.

          I came out with a maximum divergence of 1.69C between stations that should be rural.

          http://notalotofpeopleknowthat.wordpress.com/2011/10/20/temperature-trends-in-kansas/

  7. Doug
    Posted Oct 22, 2011 at 11:13 AM | Permalink

    Doug Keenan’s post-BEST-paper correspondence with Richard Muller and James Astill, Energy & Environment Editor of The Economist, can be read at the link below.

    http://www.bishop-hill.net/blog/2011/10/21/keenans-response-to-the-best-paper.html


    Steve: Muller’s lengthy reply to Keenan looks thoughtful and responsive to me.

    • Duster
      Posted Oct 22, 2011 at 4:23 PM | Permalink

      They were both rather grumpy in a more or less polite and academic fashion. Keenan outright states, in one response to Astill, that the BEST application of statistical methods would be a fail in a third-year college course. Muller, also writing to Astill, indicates more subtly that he expects Keenan will be quite pig-headed about the use of data generated from smoothing raw data and that they may never see eye to eye about it. Both had good points.

  8. Rob Wilson
    Posted Oct 22, 2011 at 11:30 AM | Permalink

    Hi Steve,
    Och – entering the fray again.
    Ever the muggins me.

    re. your comments:
    “The differences between BEST and CRU have an important potential knock-on impact in the world of proxy reconstructions – an area of technical interest for me. ‘Justification’ of proxy reconstructions in Mannian style relies heavily on RE statistics in the 19th century based on CRU data. My guess is that the reconstructions have been consciously or subconsciously adapted to CRU and that RE statistics calculated with BEST will deteriorate, and perhaps a lot. For now, that’s just a dig-here.
    It’s also intriguing that BEST’s early 19th century is as cold as it is.
    BEST’s estimate of the size of the temperature increase since the start of the 19th century is much larger than previous estimates. (Note – I’ll update this with an example.)
    The decade of the 1810s is shown in their estimates as being nearly 2 degrees colder than the present. Yes, this was a short interval and yes, the error bars are large. The first half of the 19th century is about 1.5 degrees colder than at present.”

    I am not sure I agree with you. I think the myriad of attribution studies examining forcing for the last 200 years or so are quite clear w.r.t. volcanic influences, changes in the sun’s output and anthropogenic forcing.

    Admittedly, there is quite a large range in the different large scale proxy records (see IPCC 07 spaghetti plot) which will influence attribution studies, but it is clear that the reconstructed amplitude has generally increased over the last decade as new large scale composites enter the mix. For example, just look at Figure 4 in my 2007 paper:

    Wilson, R., D’Arrigo, R., Buckley, B., Büntgen, U., Esper, J., Frank, D., Luckman, B., Payette, S., Vose, R. and Youngblut, D. 2007. A matter of divergence – tracking recent warming at hemispheric scales using tree-ring data. JGR – Atmospheres, Vol. 112, D17103, doi:10.1029/2006JD008318.

    http://www.st-andrews.ac.uk/~rjsw/all%20pdfs/Wilsonetal2007b.pdf

    Just looking at this study alone, I am not surprised by the BEST estimates for the early 19th century as they are entirely in line with my dendro based reconstruction (and others) – although note I do conclude the 2007 paper by stating that the TR data should probably be reconstructing summer temperatures and not annual – but that is another story.

    Rob

    • Steve McIntyre
      Posted Oct 22, 2011 at 2:03 PM | Permalink

      Responding to Rob Wilson’s observation, here is the AR4 spaghetti graph with the D’Arrigo Wilson reconstruction replotted with black dots and Berkeley temperature (resmoothed) in red:

      A couple of observations.

      It looks to me like my surmise holds: RE statistics for reconstructions in the Mannian style would deteriorate using the extended temperature record, as there is a pronounced divergence in the early 19th century for the lower-amplitude series. This is only a surmise and would depend on SST as well (Berkeley is land only).

      The Esper reconstruction is the one that runs colder in some earlier periods. The Esper recon looks different from the others not for the reasons attributed in the litchurchur, but primarily because it uses Polar Urals rather than Yamal; it is the only reconstruction to do so.

      As Rob observed, the D’Arrigo Wilson reconstruction is one of the higher-amplitude reconstructions and, in the early 19th century, is the coldest of all the AR4 reconstructions. I’ve commented elsewhere on the D’Arrigo Wilson reconstruction, which is pretty much equivalent to Briffa 2000 in site selection for MWP comparison. As I’ve noted elsewhere, although D’Arrigo et al said that they used Polar Urals in their reconstruction, this was an incorrect representation in their article that they did not feel obliged to correct. In fact, they used the Yamal series. It’s too bad that Briffa hasn’t released the combined Polar Urals-Yamal regional reconstruction referred to in the Climategate emails and that CRU has refused FOI requests for this data.

    • Posted Oct 22, 2011 at 2:03 PM | Permalink

      Sounds like a testable hypothesis.

      The temperature series of BEST and CRU differ substantially in the early periods.

      Surely it makes sense to perform due diligence and recompile the recons.

      If it makes no difference (the 0.5C delta), that is also indicative of something, I would think.

      Clearly a proof of sorts is in order, and it is easy to do.

  9. Ivan
    Posted Oct 22, 2011 at 11:36 AM | Permalink

    I always wonder why people choose the more complicated path instead of a very simple and obvious one. If they did not believe the official indices (and there are very good reasons to be skeptical, the divergence in recent decades between them and HadSST being just one of the most obvious), why not try to isolate the rural stations all over the world and then see what happens? One author already did exactly that for the USA 48, which arguably has the best-kept weather network in the world, and surprise, surprise, he found almost no significant warming in the last three decades or so! The rural rate of warming is even almost 3 times lower than in the UAH data for the USA 48! Is that not weird? Especially when coupled with the similar finding of a Russian team, whose rural-versus-urban comparison shows a similar divergence, is this not where the auditing efforts should concentrate the most? Yes, the world has warmed since the 19th century, but that is beside the point. The IPCC itself says that warming before 1975 has little to do with human CO2 emissions, so of what importance is it to confirm that there has been some warming since 1800?

    If this is the case, is the real auditing task not then to see how much the world has warmed since 1975? HadSST and the other indices appear to agree very well until 1970. After that we have a very strong divergence. How come? Or how come almost the entire warming trend over the USA 48 has been created by artificially “warming” the cool rural data, rather than “cooling” the urban data? Was it any different in the rest of the world? Possibly, but no one knows at this point.

    This BEST experiment is just a red herring.

    • kramer
      Posted Oct 22, 2011 at 1:08 PM | Permalink

      Ivan,

      I wrote an email to the BEST team a few months ago asking that they just plot the unadjusted raw rural global data, and I got a response a few days ago saying it would be in this latest report.

      I also think it would be more accurate to just plot the rural unadjusted data. If the Earth is warming, it will show up in this data set. I also think it would be easier than massaging (adjusting) the urban data (unless that’s a form of ‘job security’ for some scientists…).

      • Rob Burton
        Posted Oct 23, 2011 at 12:02 AM | Permalink

        Ivan, kramer.

        I agree: why doesn’t anyone take the easiest option and just work with the best data that we have, rather than trying to massage the poorer data? Any warming will show up in the best/cleanest sites (the long Armagh record, for example). UHI can’t actually be corrected for explicitly. The best you could do is compare an urban record with nearby ‘good’ rural records and call any divergence UHI. There is no way that this could ever be proved, though, so it would be best to just throw the urban records away.

        I still think these reconstructions have UHI in them and exaggerate to some degree the real warming that has occurred over the last few decades. Are the raw records in an available database somewhere?

    • Posted Oct 23, 2011 at 7:38 PM | Permalink

      I’ve done that study numerous times.

      Rural warms more than urban. It’s weird. Same thing in the 30s.

      And don’t cite Long’s study, it’s garbage. Undocumented, poorly executed garbage.

      • Dave Dardinger
        Posted Oct 23, 2011 at 9:58 PM | Permalink

        Rural warms more than urban. It’s weird.

        I think it’s pretty clear why that’s the case. UHI scales with the log of population (roughly; there are obviously other factors involved). So what we really should be looking for are constant-population areas. Even if they are urban, a constant population, with other factors figured in, should not have a UHI increase over time. OTOH, a small village which doubles or triples its population over a period of time will have a greater UHI increase than a city which only increases its population by, say, 50%. Given that the world population has more than doubled in the past century, just how many rural areas aren’t going to have a largish UHI increase? Not too many, I’d think.
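        Dave's log-scaling argument can be put in numbers. Assuming UHI ~ c·log10(population) with an arbitrary coefficient c (both the coefficient and the exact functional form are illustrative assumptions, not figures from BEST or the UHI literature):

```python
import math

def uhi_increase(pop_before, pop_after, coeff=1.0):
    """Change in UHI under the assumed scaling UHI = coeff * log10(pop).
    coeff is an arbitrary placeholder constant."""
    return coeff * (math.log10(pop_after) - math.log10(pop_before))

village = uhi_increase(500, 1500)            # village triples: log10(3) ~ 0.48
city = uhi_increase(1_000_000, 1_500_000)    # big city grows 50%: log10(1.5) ~ 0.18
print(village > city)  # the growing village sees the larger UHI *increase*
```

        Under this assumption, trend contamination depends on population *growth ratio*, not absolute size, which is exactly why nominally rural stations can carry a largish UHI signal.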

      • Ivan
        Posted Oct 24, 2011 at 11:56 AM | Permalink

        Somebody has done the comparison between BEST and a global rural network. It seems pretty startling:

        http://hidethedecline.eu/pages/posts/ruti-global-land-temperatures-1880-2010-part-1-244.php

    • Posted Nov 8, 2011 at 10:48 PM | Permalink

      “…why not try to isolate the rural stations all over the world and then see what happens? One author already did the same thing for the USA 48, that have arguably the best kept weather network in the world, and surprise, surprise, he found almost no significant warming in recent three decades or so!”

      YES!! This is so important to do properly (proxies are not good enough). The ‘very-rural’ group is sufficiently small to be checkable – who is going to do it? I have first-hand experience looking at data from Australian ‘really-rural’ sites compared with 2 big metropolitan sites: Melbourne and Sydney are MUCH hotter, while the ‘really-rural’ sites hardly change in temperature over the last 100 years.

  10. Oakden Wolf
    Posted Oct 22, 2011 at 11:43 AM | Permalink

    Loehle, are you kidding me? Blaming warming since 1975 on anything other than primarily human influence requires dismissal of 99% of climate science.

    • Mark F
      Posted Oct 22, 2011 at 12:41 PM | Permalink

      I agree, but properly punctuated – climate “science” /sarc off

    • Bruce
      Posted Oct 22, 2011 at 1:33 PM | Permalink

      Oakden, some people in “Climate Science” study albedo (Earthshine Project) and Bright Sunshine (Martin Wild and the BSRN).

      Albedo was down in the late 1990s, and bright sunshine was up. By significant amounts.

    • John
      Posted Oct 22, 2011 at 6:05 PM | Permalink

      Not so — blaming warming since 1975 on anything other than primarily human influence does NOT require dismissal of 99% of climate science — and BEST says so.

      Look at page 12 of the temperature report, where the BEST authors note that the AMO has increased by 0.55 degrees since 1975, and that the BEST land-based record has increased by 0.8 degrees. Earlier, the authors make the point that land temperatures are highly correlated with the AMO. If the AMO drives land temperature, and anthropogenic influences are added on, then these human influences (CO2 emissions, methane and ozone and black carbon emissions, deforestation, sulfate reductions) in sum might amount to about 0.25 degrees or more. The BEST authors note that while it is possible that the AMO drives land temperatures, it is also possible that human influences drive both; they are being thoughtfully scientific about it. They conclude that if the AMO drives temperatures to some degree, then the “…human component of global warming may be somewhat overestimated.”

      • Don McIlvin
        Posted Oct 22, 2011 at 8:22 PM | Permalink

        And solar irradiance. Lean et al 1995 reconstruct solar irradiance from 1610 and show the major effect since the Maunder Minimum. They say solar irradiance accounts for a third of the warming over the period 1970 (to 1995). Since then, the solar scientists have all been talking about the solar maximum in the early 2000s and lately have been forecasting a reduction of solar irradiance amplitude in the next solar cycle.

    • Carrick
      Posted Oct 22, 2011 at 9:11 PM | Permalink

      Oakden Wolf:

      Loehle, are you kidding me? Blaming warming since 1975 on anything other than primarily human influence requires dismissal of 99% of climate science.

      Considering this:

      1) Before circa 1975, IPCC AR4 modeling suggests that net anthropogenic forcing was not resolvable from natural warming.
      2) That means we can ascribe all of the warming prior to 1975 to natural causes without dismissing any climate science.
      3) There was a period of similar warming, 1905-1940, which is thought to be entirely natural in origin, and which is statistically indistinguishable from the 1975-2010 warming period (the intervals are 35 years each, apples to apples; according to my estimates the current period has 1-sigma more warming than the earlier period).
      4) The difference in the central values of the two trends is about 50% more warming from 1975 to the present (1-sigma more warming). That means, comparing the two periods (central values only), 1/3 of the warming from 1975 to the present is associated with anthropogenic forcings and 2/3 with natural forcings.
      5) The uncertainties are very large, but given natural warming of similar magnitudes in the past, there is roughly a 33% chance that none of the warming is anthropogenically induced, and a 5% chance that all of it is.

      So unless you mean something else when you say science (like science in scare quotes), I think you just dismissed 99% of climate science when you said that.
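      Carrick's back-of-the-envelope attribution in points 3 and 4 amounts to simple arithmetic on the two central trend values (a sketch of his stated reasoning, not an independent analysis):

```python
def anthro_fraction(natural_trend, observed_trend):
    """Fraction of the observed trend left over after subtracting the
    earlier, presumed-natural trend (central values only)."""
    return (observed_trend - natural_trend) / observed_trend

# 1975-2010 warming ~50% larger than the 1905-1940 (natural) warming:
print(anthro_fraction(1.0, 1.5))  # -> 1/3 anthropogenic, 2/3 natural
```

      The 1/3 figure is sensitive to the assumption that the whole earlier trend was natural and that the natural component repeated at the same size; that is the hinge of the whole argument.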

  11. Posted Oct 22, 2011 at 11:43 AM | Permalink

    I would just like to know where McKitrick’s “There is no global temperature” fits into all this.

    • TomRude
      Posted Oct 22, 2011 at 12:25 PM | Permalink

      Right on, Jeff! But that would spoil a good deal of discussion on something irrelevant to climatic evolution. This, of course, is one of BEST’s objectives, imo.

    • Posted Oct 22, 2011 at 2:04 PM | Permalink

      see the first few paragraphs of their paper when they develop the statistical model

      • TomRude
        Posted Oct 22, 2011 at 7:54 PM | Permalink

        Indeed, it shows how this entire concept of evaluating climatic changes is flawed, be it CRU or BEST.

        • Posted Oct 22, 2011 at 8:22 PM | Permalink

          Was the LIA cooler than today?

        • TomRude
          Posted Oct 23, 2011 at 11:34 AM | Permalink

          Surface temperature distribution depends on lower lawyers atmospheric circulation and the intensity of meridional exchanges. The same temperature data can reflect various synoptic realities, depending on the length of the achieved Tmax or Tmin and other factors. It is those synoptic realities and their evolution that represent a robust consequence of climatic evolution. Temperature by itself is not a robust parameter for evaluating climatic changes. Let me quote Marcel Leroux: “Our approach takes firmly into account the fact that meteorology-climatology is incontrovertibly a geographical discipline, a science of Nature: geographical factors (normally ignored) existing in the lower layers of the troposphere do in fact have a fundamental thermal and dynamic role.

          The essential aim (…) the workings of meridional exchanges, variations in the intensity of general circulation, and the production and spatial distribution of weather with special reference to the migration of pluviogenic structures. This basic knowledge (with which all real climatologists ought to be thoroughly familiar) about the real mechanisms of meteorological phenomena, and about the processes whereby climatic modifications are transmitted, is necessary for the analysis and understanding of climatic evolution, across all scales of intensity, space and time.”

          So was the LIA colder? My answer would be yes, but it depends where, as renewed warm-air advections along certain paths could produce warming trends in some regions during a cooling phase. Hence BEST, or worst, or CRU are the products of model-minded people who hardly take into account the reality of circulation.

        • TomRude
          Posted Oct 23, 2011 at 11:36 AM | Permalink

          correction: lower layers circulation… lawyers are only for bottom circulation… LOL

    • bill
      Posted Oct 23, 2011 at 5:45 AM | Permalink

      Yes, how do you construct a meaningful global temperature when records from all of Africa in the colonial era were patchy, and maintaining them properly was not high on anyone’s agenda (post-colonially, the records are much worse); when nothing from post-1949 China can be trusted, nor anything from post-1917 Russia; add in the period of chaos in Europe 1939-1945, the chaos in Russia and Eastern Europe 1990-2000, etc. We can infer, we can extrapolate, we can make intelligent assumptions, but we can’t know what past global temperatures were, and not knowing that, we can’t say with certainty that the global temperature is warming.

      • bill
        Posted Oct 23, 2011 at 5:50 AM | Permalink

        The best science can say as to whether global temperature is rising is “well, probably”. So is science’s “probably” a sure enough foundation to take us through the next step (that the cause of the warming is man plus CO2) to fundamental shifts in public policy?

  12. Carrick
    Posted Oct 22, 2011 at 11:44 AM | Permalink

    Rob:

    although note I do conclude the 2007 paper by stating that the TR data should probably be reconstructing summer temperatures and not annual – but that is another story.

    Summer or daytime summer? Isn’t it true that the temperature matters more when light energy is available for photosynthesis? (We have an issue with changing daily integration periods over the summer if this is true too since farther north sites have longer summer daylight hours.)

  13. Joel Heinrich
    Posted Oct 22, 2011 at 11:48 AM | Permalink

    So they either made up temperatures for the rest of the world in the early 19th century, or they are comparing temperature anomalies from Europe and the eastern US then to world anomalies now. Both possibilities seem worthless to me.

    And they definitely didn’t make a thorough homogenisation, as there are still obviously bogus stations (per the metadata).

    BTW, has anyone got a usable version of their data.txt file? They put up a new version today, but it still has data like a Tavg for Chicago of 10.4°C in January and 12.3°C in July.
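    A crude scripted sanity check of the kind Joel is implying is straightforward; for example, flagging mid-latitude stations whose seasonal cycle is implausibly flat (the record layout below is hypothetical, not BEST's actual data.txt schema):

```python
def flag_suspect_stations(records, min_range=10.0):
    """records: iterable of (station_id, latitude, jan_tavg, jul_tavg) in deg C.
    Flags Northern Hemisphere mid/high-latitude stations whose
    January-to-July spread is implausibly small."""
    return [sid for sid, lat, jan, jul in records
            if lat > 40.0 and (jul - jan) < min_range]

sample = [
    ("chicago_x", 41.9, 10.4, 12.3),   # the values Joel quotes: 1.9 C spread
    ("oslo_ok",   59.9, -4.3, 17.0),   # 21.3 C spread: passes
]
print(flag_suspect_stations(sample))  # -> ['chicago_x']
```

    A check this crude would miss plenty, but it catches gross unit, sign, or placement errors of exactly the Chicago kind.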

  14. FrancisT
    Posted Oct 22, 2011 at 11:56 AM | Permalink

    William Briggs seems somewhat less impressed, despite the statistical eminence involved in the paper. In addition to the Keenan critique, he has a number of complaints regarding uncertainty and error bars:

    http://wmbriggs.com/blog/?p=4530

    I’m not enough of a stats guy to know who’s right here, but it seems that these results are maybe not quite as definitive as they have been PR’ed to be.

    On the other hand, it does look like they’ve released the code and data (more or less), so no doubt we can try various tweaks and see what happens. That is something I’m strongly in favor of, as it means that if they have made some kind of statistical howler, it should be fairly easy to redo the analysis fixing that one error.

  15. Nullius in Verba
    Posted Oct 22, 2011 at 11:58 AM | Permalink

    I would agree Kriging is a great improvement on previous methods, although I would say more work is needed on the details.

    Looking at the averaging methodology paper, figure 1 shows a broad range of correlations narrowing with greater distances. The average dips and then rises, approaching a small positive value at the longest ranges. The reasons for this are not demonstrated, but judging from figure 3 of the same paper, seem likely to be due to the correlation structure being non-uniform. Near the equator, the zero-range correlation is much lower, and the long-range correlation much higher. It seems likely that correlations show a more complex structure, and that sharp boundaries in climatology (due to coastlines and mountain ranges) will result in nearby stations spanning the borders showing uncorrelated behaviours. Some of this has evidently been tested for and found, but I would expect that assuming a single mean correlation structure everywhere will result in some observations from stations near boundaries being extended further across them, with less uncertainty, than is desirable.

    They note that the limit of the best fit correlation as separation tends to zero is about r=0.88, which gives r^2=0.77. (I’m assuming this is differences from the height/latitude climatology averaged annually; that wasn’t clear.) They conclude from this that 12% of the variation is due to independent errors. I don’t understand the logic here. If 77% of the variance is explained by the location, that leaves approximately 23% to be explained jointly by the errors in each station. Even divided equally between the two stations, 12% of the variance doesn’t mean 12% of the error’s magnitude.
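    The distinction between a variance fraction and an error magnitude can be checked by simulation. Assuming the simple model "station = common signal + independent noise" (my reading of what the nugget argument amounts to, not necessarily BEST's actual model), a noise share of 12% of the variance corresponds to a noise standard deviation of about 35% of the total, and the zero-range cross-correlation between two such stations comes out near 0.88:

```python
import math
import random

random.seed(0)
noise_var_frac = 0.12                  # the ~12% attributed to independent errors
signal_sd = math.sqrt(1.0 - noise_var_frac)
noise_sd = math.sqrt(noise_var_frac)   # ~0.35 of the total standard deviation

n = 100_000
a_vals, b_vals = [], []
for _ in range(n):
    s = random.gauss(0.0, signal_sd)                 # shared local climate signal
    a_vals.append(s + random.gauss(0.0, noise_sd))   # station A
    b_vals.append(s + random.gauss(0.0, noise_sd))   # station B

ma = sum(a_vals) / n
mb = sum(b_vals) / n
cov = sum((a - ma) * (b - mb) for a, b in zip(a_vals, b_vals)) / n
va = sum((a - ma) ** 2 for a in a_vals) / n
vb = sum((b - mb) ** 2 for b in b_vals) / n
r = cov / math.sqrt(va * vb)

print(round(r, 2))         # ~0.88, the zero-range intercept
print(round(noise_sd, 2))  # 0.35: the error *magnitude* is 35%, not 12%
```

    On this toy model the cross-correlation r itself, not r², equals the signal's share of the variance, which may be where the 12% figure comes from; either way, Nullius's point stands that a small variance share does not imply a correspondingly small error magnitude.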

    Kriging is an interpolation method. The observations are presumed to be exact; only the values of the function between them are uncertain. I’m not clear on how the uncertainty in the measurements themselves has been incorporated. Looking at the results in general, I find myself surprised that, despite taking more sources of error and uncertainty into account in a more careful study, the uncertainty in the result has nevertheless shrunk, to an amazingly precise 0.06 C. I’d be interested in a more intuitive accounting of how that happened.
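    The "observations presumed exact" property is easy to see in a toy simple-kriging example (a generic textbook construction with a made-up Gaussian covariance; nothing here reflects BEST's actual implementation): at an observed location the prediction reproduces the observation and the kriging variance collapses to zero, so measurement error has to be injected separately (e.g. via a nugget term) if it is to show up at all.

```python
import numpy as np

def simple_krige(xs, ys, xq, length=1.0):
    """1-D simple kriging with a unit-variance Gaussian covariance model.
    Returns (prediction, kriging variance) at query location xq."""
    cov = lambda d: np.exp(-(d / length) ** 2)
    K = cov(np.abs(xs[:, None] - xs[None, :]))  # obs-obs covariance matrix
    k = cov(np.abs(xs - xq))                    # obs-query covariances
    w = np.linalg.solve(K, k)                   # kriging weights
    return float(w @ ys), float(1.0 - k @ w)

xs = np.array([0.0, 1.0, 2.5])
ys = np.array([3.0, 5.0, 4.0])

pred, var = simple_krige(xs, ys, 1.0)   # query AT an observation
print(pred, abs(var) < 1e-10)           # reproduces 5.0 with ~zero variance

pred, var = simple_krige(xs, ys, 1.7)   # query BETWEEN observations
print(var > 0.0)                        # positive uncertainty only here
```

    This is why the treatment of measurement uncertainty deserves an explicit answer: plain kriging, by construction, assigns zero uncertainty exactly where the thermometers sit.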

    I was also surprised that the “dying of the thermometers” didn’t have a very big effect on the estimated error. (Figure 4.) Apparently, the interpolation between them is considered sufficient to fill in the gaps left.

    Figure 5 shows the reconstruction with error bounds. It’s very noticeable that the variance is far bigger in the early record than at the end. They comment on this as a real but unexplained effect. Their error estimates are not broad enough to explain it as error/uncertainty. The rate of change during some of the early part of the record appears greater than that at the end. They also comment on the 10-year running average showing no levelling off at the end, which seems odd as the unsmoothed data does appear to do so. (Although it doesn’t look significant.)

    A few other comments on the method – they say they include outliers with a small, fixed (and arbitrary?) weight, and their “scalpel” doesn’t include station moves and instrumentation changes (they say they don’t have the metadata). There are no doubt many other quirks and details to look at.

    A big step forward, I think many of us would agree, but I’m not impressed at the press coverage treating it as a definitive vindication of the existing reconstructions.

    • Chris E
      Posted Oct 22, 2011 at 4:48 PM | Permalink

      There’s been some considerable efforts put into kriging for climate interpolations in Austria, expanding on the US ‘DAYMET’ model. Have a look at: Petritsch, R., Hasenauer, H., 2007. ‘Interpolating input parameters for large scale ecosystem models’. Austrian Journal of Forest Science 124, 135–151. The journal is not online yet, but if you Google around a bit you can find a copy of the paper.

    • Chris E
      Posted Oct 22, 2011 at 5:01 PM | Permalink

      Oops, sorry, wrong paper. Petritsch and Hasenauer 2011, ‘Climate input parameters for real-time online risk assessment’, Natural Hazards, DOI: 10.1007/s11069-011-9880-y

  16. Joe Crawford
    Posted Oct 22, 2011 at 12:24 PM | Permalink

    …to a topic which, despite its importance, has had virtually no intellectual investment in the past 25 years.

    Ouch!

    • Posted Oct 22, 2011 at 8:08 PM | Permalink

      Well, Muller DID say in his YouTube video that he wouldn’t trust the science of these people. (paraphrased)

  17. Oxbridge Prat
    Posted Oct 22, 2011 at 12:30 PM | Permalink

    Like Ivan I find the growing divergence between the various land temperature indices and HadSST the most salient feature.

    • David Holland
      Posted Oct 23, 2011 at 8:53 AM | Permalink

      That divergence struck me too.

      Almost a decade ago, when I started to look at this area, I looked up the world cement production figures up to 2000. I have put an Excel file here. Cement production shows a remarkable uptick at about 1945, and plotting the cumulative figures shifts the point at which the curve steepens to about 1960. Cement and asphalt (which has shown similar growth) have long residency, as people do not always take them away when they stop using them. When they do, the material often goes into new buildings or roads anyway.

      This is the plot for cumulative world cement production.

      • David Holland
        Posted Oct 23, 2011 at 8:57 AM | Permalink

        If the plot Cumulative Cement graph does not appear it is here

  18. Carrick
    Posted Oct 22, 2011 at 12:46 PM | Permalink

    Nullius:

    I would agree Kriging is a great improvement on previous methods, although I would say more work is needed on the details.

    It’s not obvious to me that kriging is better than an EOF-based interpolation method (e.g. NCDC’s approach). I think there is enough data over a long enough period of time for the two methods to be compared, but where there is overlap they don’t look remarkably different to me.

  19. Stephen Richards
    Posted Oct 22, 2011 at 12:52 PM | Permalink

    Steve

    Thanks for this initial work. Your comments look, as do Muller’s, well considered and thoughtful. I await with bated breath your further analysis, which I would do myself if I had your talents.

  20. Anthony Watts
    Posted Oct 22, 2011 at 1:41 PM | Permalink

    Thanks Steve,

    Muller does not have the current siting metadata, nor did he ask for it. That explains much of the difference.

    I think if you’ll run the calculations against Menne et al 2010, you’ll see they don’t match his results either.

    • Posted Oct 23, 2011 at 12:38 PM | Permalink

      Would it have been a good idea to have sent him your data when you knew he was doing the research?

      • Steven Mosher
        Posted Oct 26, 2011 at 12:50 AM | Permalink

        I wondered about that as well. Wasn’t there a ruckus when Muller testified using data he was given in confidence? Was that not the right data? I’m confused.

    • miketor
      Posted Oct 23, 2011 at 10:24 PM | Permalink

      Anthony Watts, you said above (Posted Oct 22, 2011 at 1:41 PM):

      “Muller does not have the current siting metadata, nor did he ask for it. That explains much of the difference.”

      That seems very “nasty”!

      At the project start you were helping Muller.
      If he was using your data, why did you not provide the latest?
      If you updated your data, why did you not automatically update him?

      Did you retain the latest so you could discredit the result if it did not fit your beliefs?

      Your attitude to this project is very bipolar!

      • dickj
        Posted Oct 23, 2011 at 10:36 PM | Permalink

        Replying to miketor (Posted Oct 23, 2011 at 10:24 PM):

        I have to agree there. The attitude of Watts to his data is unbelievable.

        What was the call? “Free the Data, Free the Code!”

        • Posted Oct 24, 2011 at 12:57 AM | Permalink

          Can you guys read? AW said Muller didn’t ask for it. All he had to do was ask.

        • CoPete
          Posted Oct 24, 2011 at 2:07 AM | Permalink

          Why did he need to ask? isn’t the file the one stored as SI to the Fall et al paper:

          http://www.surfacestations.org/fall_etal_2011.htm

          Or is there some new file that is supposed to be better?

          I thought archiving all data/code was supposed to take the asking and deciding factors out of this sort of thing, as well as the ambiguity in what was actually being done. Why isn’t this working out that way?

        • Posted Oct 24, 2011 at 3:01 AM | Permalink

          Ask him why he didn’t need to ask. Apparently he didn’t feel he needed it. Since you’re so indignant, maybe you can get Lonnie Thompson or Mike Mann to provide stuff that people HAVE been asking about for years.

        • Posted Oct 24, 2011 at 6:35 AM | Permalink

          Jeff Alberts (Posted Oct 24, 2011 at 12:57 AM) wrote: “Can you guys read? AW said Muller didn’t ask for it. All he had to do was ask.”

          As I recall, Watts was consulted early on. Would it have hurt Watts to have suggested at that point that the latest data be used?

        • Jeff Id
          Posted Oct 24, 2011 at 10:27 AM | Permalink

          I’m not sure but I think Steve has provided links to the data at CA for some time. The process would be to request the data though.

        • Steven Mosher
          Posted Oct 26, 2011 at 12:55 AM | Permalink

          But hasn’t Anthony’s complaint been that he somehow gave Muller access to the data, was first upset that Muller used that data and talked to Congress, is now mad because Muller used 60 years, and now reveals that he, what, gave Muller the wrong data because Muller did not ask for the right data?
          I’m confused.

        • Posted Oct 26, 2011 at 2:35 AM | Permalink

          Wait. Anthony gave him the data, I thought:

          Letter of response from Anthony Watts to Dr. Richard Muller’s testimony, 3/31/2011:

          “It has come to my attention that data and information from my team’s upcoming paper, shared in confidence with Dr. Muller, is being used to suggest some early conclusions about the state of the quality of the surface temperature measurement system of the United States and the temperature data derived from it.”

        • Posted Oct 26, 2011 at 9:49 AM | Permalink

          This sub-thread was about the metadata for each site. Something that was collected via the Surfacestations project, as far as I know.

  21. Anthony Watts
    Posted Oct 22, 2011 at 1:50 PM | Permalink

    I should add that Muller sent me his siting paper at about the same time as he sent it to you. He changed the title per my suggestion, adding “United States”, because I said he was giving a false impression of world coverage, but he left in other errors I pointed out, including getting the citation of Fall et al wrong at least six times. I got confirmation via email that he was aware of the errors I pointed out.

    Most importantly, the siting metadata covers a 30-year span. Nobody knows what the siting characteristics of these stations were from 1950-1979; much of that has been lost in the fog of time. Muller used a 60-year analysis back to 1950, whereas we (and Menne et al) used the 30-year period 1979-2008, because we both knew that the siting metadata was invalid beyond that.

    Yet Muller kept the 60-year period despite my objections. Normally we would have these things resolved in peer review, but that chance for quiet, reflective discourse has pretty much been blown by the BEST media blitzkrieg on March 20th.

  22. justbeau
    Posted Oct 22, 2011 at 1:51 PM | Permalink

    Brillinger is a reputable statistician, and probably not a rabid activist like Dr. Hansen.
    And Dave hails from Toronto, IIRC.

    • Posted Oct 22, 2011 at 2:29 PM | Permalink

      Indeed he is. But oddly, he’s not listed as an author of the main stats paper on Averaging.

    • diogenes
      Posted Oct 22, 2011 at 6:25 PM | Permalink

      but Brillinger did not co-author the paper; make of that what you will

  23. pax
    Posted Oct 22, 2011 at 1:52 PM | Permalink

    Doug Keenan is very critical of feeding smoothed data sets into further statistical analyses. Any thoughts on that?

  24. DocMartyn
    Posted Oct 22, 2011 at 2:05 PM | Permalink

    In 1812 Napoleon’s army retreated in the face of ‘General Winter’. Moscow had three very cold winters in 60 years, 1940/40 (-42.2C) and 1941/42 (-40C) and most recently in 1997.
    The winter of 1941-42 was the coldest European winter of the 20th Century.
    I can see the Volga freezing in 1812, along with the famine throughout the Mediterranean and China.
    I can’t see the freezing of the Volga in 1940/41 and 41/42.
    I can’t see the great freeze of 97/98.
    Now I know that this is cherry picking.
    But why is it that the European freeze, and the one on the US East coast, roughly from 1939-42, isn’t in the record?
    This is where the peak is.

  25. Steve McIntyre
    Posted Oct 22, 2011 at 2:17 PM | Permalink

    BTW for people wanting to plot the Berkeley data, here is a retrieval script:

    download.file("http://www.berkeleyearth.org/downloads/analysis-data.zip","temp.zip",mode="wb")
    handle=unz("temp.zip","Full_Database_Average_summary.txt")
    x=scan(handle,sep="\n",what="")
    close(handle)
    writeLines(x,"temp")
    x=read.table("temp",skip=19,colClasses=rep("numeric",5))
    names(x)=c("year","anom","unc","smooth","sm_uncert")
    berk=ts(x[,2:5],start=x[1,1])
    ts.plot(berk[,"anom"])

    • Bruce
      Posted Oct 22, 2011 at 4:42 PM | Permalink

      It would have been nice if BEST had broken it down by hemisphere and/or grid square.

    • Duster
      Posted Oct 22, 2011 at 5:16 PM | Permalink

      There’s a rather random mix of left and right quotation marks and standard double (keyboard style) apostrophes. Is that my browser (FF) or the site software doing that?

      • Bruce
        Posted Oct 22, 2011 at 6:21 PM | Permalink

        I replaced everything with single quotes.

      • Jeff Id
        Posted Oct 24, 2011 at 10:28 AM | Permalink

        WordPress causes the quote thing. There is a bit of HTML which Steve could use to correct for that but just do a replace on the quotes with the normal style.

    • HenrikM
      Posted Oct 23, 2011 at 2:48 PM | Permalink

      Where is 2010?

  26. Leonard Weinstein
    Posted Oct 22, 2011 at 2:22 PM | Permalink

    Tambora (1815), and several smaller volcanic eruptions just before that time, were (as mentioned by cce) the main cause of the large dip in that period. Tambora was the only VEI 7 eruption in the last several hundred years. The large dip due to Krakatoa (high VEI 6) in 1883 is also clear. Smaller dips in recent times also show up, due to Santa Maria (low VEI 6) in 1902, Novarupta (medium VEI 6) in 1912, and Pinatubo (medium VEI 6) in 1991.

    The several large variations following the large 1815 dip to about 1880 may have been some sort of moderately damped oscillatory recovery from Tambora rather than new volcanic eruptions, since no major eruption is known in that period.

  27. Dr Darko Butina
    Posted Oct 22, 2011 at 2:42 PM | Permalink

    Steve,

    I am greatly disappointed in the way your blog site is rapidly becoming a site for the ramblings of statisticians who are completely losing touch with reality and the experimental world. There is a saying that if you have nothing to say, it is better to say nothing. You start with the statement that you do not have time to analyse the BEST papers, and then you bring in your friendship with ‘Rich’ and how helpful he was to you. What has that to do with the science? If you did not have time to properly read their paper, why have 6 pages of your ethereal thoughts on the subject? You emphasise the ‘value of independent analysis’ and yet the data come from the same sources! You talk about filling in ‘dark areas of the map’ as a ‘worthwhile contribution to the field’, yet anyone looking at the experimental data, i.e. thermometer readings, knows very well that the temperature patterns are chaotic and that global temperature is an utterly useless parameter that has nothing to do with reality. Therefore, anyone suggesting anything to fill in those holes is correct, since it is impossible to prove it either right or wrong. It is a real shame that you are disappearing into the virtual world of statistics, where you spend pages discussing the validity of R^0.007! Your blog site is not worth visiting for anything scientific.

    Dr DB-UK

    • Willem Kernkamp
      Posted Oct 22, 2011 at 4:15 PM | Permalink

      Dr. Butina,

      The BEST papers have received a lot of publicity. I welcome Steve’s first impressions here. In fact, I had been looking for them on a daily basis. At the same time, I would expect more thorough examination to follow in due time as usual. Something to look forward to for both of us.

      Regards,

      Will

    • John Tofflemire
      Posted Oct 22, 2011 at 7:53 PM | Permalink

      Dr. Butina,

      Steve noted his friendship with Dr. Muller for full disclosure. That seems reasonable.

      The BBC Environment “correspondent” Richard Black’s initial report on the BEST data and paper release sneered that Climate Audit had failed to react to that release, suggesting that “skeptics” like Steve were left dumbfounded by the BEST results. Although Black has curiously erased his comments about Climate Audit’s lack of immediate reaction from the BBC website, everyone who read his initial report would have seen his snarky comment on Steve’s “failure” to react instantly to the BEST release. In that light, Steve’s initial reaction here is reasonable.

  28. Carrick
    Posted Oct 22, 2011 at 4:25 PM | Permalink

    pax:

    Doug Keenan is very critical of feeding smoothed data sets into further statistical analyses. Any thoughts on that?

    Addressed to Steve, but I’ll respond anyways.

    There’s a difference between low-pass filtering then decimating (sample rate reduction) and “smoothing”.

    When you are low-pass filtering data, it’s often because you don’t trust the data above a certain frequency to be accurate, whether due to loss of coherence in the data, noise, aliasing issues, or so forth.

    Knowing what I know right now, I probably wouldn’t use subannual data were I to use the anomaly method, as per RomanM’s objection.

    Whether you decimate the data after low-pass filtering it depends on how good your statistical kung fu is. You can handle the autocorrelation introduced by the smoothing operation in ways other than decimating at the Nyquist sampling rate (“resampling”).

    It’s arguably easier to handle autocorrelation introduced by smoothing, because you at least have a model for where the autocorrelation arose. In “real world” unprocessed data, the underlying noise model associated with the autocorrelation may not be fully understood.

    It’s really only a mistake if either a) you are throwing away information by smoothing or b) you didn’t appropriately model the effect of the smoothing on your estimate of uncertainty in the corresponding results.
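
    A minimal numerical sketch of that point (Python; the white noise, window length and sample size are all invented for illustration): a running mean induces strong autocorrelation in white noise, and decimating by the window length removes most of it.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
raw = rng.standard_normal(100_000)   # white noise: no autocorrelation

w = 12                               # running-mean window (illustrative)
smooth = np.convolve(raw, np.ones(w) / w, mode="valid")

r_raw = lag1_autocorr(raw)           # ~0
r_smooth = lag1_autocorr(smooth)     # ~(w-1)/w = 0.92, induced by the filter
# Decimating by the window length makes adjacent samples share no
# input values, removing the induced autocorrelation (at a cost in samples)
r_decim = lag1_autocorr(smooth[::w])
```

    In this sketch the smoothed series shows lag-1 autocorrelation near the theoretical (w-1)/w of about 0.92, while the decimated series drops back to roughly zero; that is the autocorrelation bookkeeping the downstream analysis has to account for.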

    • RomanM
      Posted Oct 22, 2011 at 5:59 PM | Permalink

      Carrick, I am in complete agreement with you. More succinctly put, IMHO: you can use “smoothing” or other similar data techniques of the kind you refer to in your comment IF you understand what you are doing and can legitimately account for the impact of using such a procedure as the analysis progresses down the road.

      For some of us, this is a part of our professional bread and butter, however the caveat I have always advised is “Don’t try this at home without consulting competent help.”

      • Posted Oct 23, 2011 at 11:32 PM | Permalink

        I’ve finally read the papers and don’t see any reference to individual time series smoothing, only spatial smoothing by Kriging. I can’t figure out Keenan’s critique. He discusses corrections for autocorrelation along these lines, but the paper’s jackknife method seems reasonable as did the spatial covariance derivation. I’ll probably write a post this week sometime but more than one critique has been made which I don’t understand or simply don’t agree with.

        • HAS
          Posted Oct 24, 2011 at 12:26 AM | Permalink

          I had assumed that the criticism relates to the lack of the global temp field at earlier time periods on the RHS of equation [1] in the Averaging Process paper.

        • Jeff Id
          Posted Oct 24, 2011 at 10:24 AM | Permalink

          I’ll have to reread the critique and eq 1 again later but if anyone can shed some additional light on why Keenan thinks that the paper misunderstood autocorrelation, it would be appreciated.

    • RC Saumarez
      Posted Oct 23, 2011 at 5:53 AM | Permalink

      I recently (18th October) wrote a post on this very topic at Judith Curry’s blog: “Does the Aliasing Beast Feed the Uncertainty Monster?”

      http://judithcurry.com/2011/10/18/does-the-aliasing-beast-feed-the-uncertainty-monster/

      As a signal processor, I do worry about the way that statisticians treat time series.

      Why do they smooth, and why do they use running means? A running mean is a filter, but it has a lousy frequency response. In performing this operation they are making a judgement about which parts of the signal are important and which are not. It would be better to state this explicitly and design a proper filter to process the data. As for not being able to smooth the last few years of the record because one is using a running average, this problem can be vastly reduced by using a properly designed filter.

      The effects of filtering on signal statistics are well known and straightforward, at least for data with Gaussian distributions. Filtering simply changes the bandwidth and DoFs, and variances are easily calculable from the distribution of the power spectrum (chi-squared, with each harmonic having 2 DoF and being negative-exponentially distributed), allowing one to apply small-sample statistics to the data.

      I agree with comments that the most interesting part of this reconstruction is the 19th and early 20th century. Obviously, the errors are far greater during this period, but even so, the recent warming looks less exceptional than I had assumed.
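
      The poor frequency response of a running mean is easy to exhibit numerically (a Python sketch; the 12-point window is an arbitrary illustrative choice): beyond the nominal cutoff the response has large sidelobes, and the first sidelobe inverts the sign of what it passes.

```python
import numpy as np

w = 12  # running-mean window length (illustrative, e.g. 12 months)

# Frequency response of a w-point running mean (moving average):
# H(f) = sin(pi*f*w) / (w*sin(pi*f)), with f in cycles per sample.
f = np.linspace(1e-6, 0.5, 5001)
H = np.sin(np.pi * f * w) / (w * np.sin(np.pi * f))

# The nominal cutoff is the first null at f = 1/w; beyond it the
# response should ideally stay near zero, but it does not.
stopband = H[f > 1.0 / w]
worst_leak = float(np.max(np.abs(stopband)))                        # ~0.22
worst_sign = float(np.sign(stopband[np.argmax(np.abs(stopband))]))  # -1: sign inversion
```

      The first stopband sidelobe passes about 22% of the amplitude with inverted sign, which is what “lousy frequency response” means in practice; a properly designed low-pass filter can push that leakage down by orders of magnitude.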

  29. Chip
    Posted Oct 22, 2011 at 4:48 PM | Permalink

    The dip at 1810 corresponds well to the horrific winter experienced in 1812 by Napoleon’s army, so that makes sense to me.

    • Bruce
      Posted Oct 22, 2011 at 6:38 PM | Permalink

      1944 to 2007
      HADCRUT3 .121 to 0.402 = .281
      BEST .255 to 1.205 = .940

      Over triple the difference.

      1944 was the peak warming year for both datasets in the 1940s. The peaks are a little different in the 2000s.

      • Posted Oct 22, 2011 at 8:32 PM | Permalink

        Bruce, as Carrick points out, you used the wrong data set.

        Further, as he notes, you fail to account for the manner in which CRU do their land masking.
        Finally, when Roman and Jeff similarly showed a series warmer than CRU, where were you?

        Go back to googling; that way you can blame your mistakes on others.

      • HaroldW
        Posted Oct 22, 2011 at 8:54 PM | Permalink

        Bruce,
        You shouldn’t compare BEST to HadCRUT3 which is a global average, but to CRUTEM3 which is land-only (as is the BEST data).

        For the same years as you chose, viz. 1944 & 2007,
        CRUTEM3 0.082 to 0.678 = .596 rise

        So for those years, BEST’s estimator has increased ~50% more than CRU’s. But focusing on single years is not likely to give a reliable impression. For example, one could choose 2009 (last full calendar year in BEST):
        BEST .905 = +.650 since 1944
        CRUTEM3 .642 = +.521 since 1944
        and now BEST is higher by 25%

        It’s apparent in the graph above that BEST, in recent years, is above CRUTEM3, but not by anything approaching 50%. If I’ve done the math right, the final points in the graph, representing an average over the 10-year period ending in May 2010, are (relative to the baseline 1950-1979)
        BEST 0.891
        CRUTEM 0.730
        and BEST is higher by 22%

        • HaroldW
          Posted Oct 22, 2011 at 8:55 PM | Permalink

          nevermind…answered already

  30. Don McIlvin
    Posted Oct 22, 2011 at 5:14 PM | Permalink

    Steve, you wrote above;

    “Unlike the statistical agencies that maintain other important indices (e.g. Consumer Price Index), the leaders of the temperature units (Hansen, Jones, Karl) have taken strong personal positions on anthropogenic global warming. These strong advocacy and even activist positions are a conflict of interest that has done much to deter acceptance of these indices by critics.”

    That brings to mind a quote from a book review I read today in the online WSJ.

    “All scientists, not least Social Scientists, should be wary of adhering to any belief system in their professional lives other than the one that requires fidelity to their data. As soon as you have a cause, you have a conflict of interest.”

    - Christopher F. Chabris, a psychology professor at Union College and co-author of “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us”, writing in a laudatory review of “Thinking, Fast and Slow” by Daniel Kahneman, a cognitive psychologist who won the Nobel Prize in economics in 2002.

  31. Carrick
    Posted Oct 22, 2011 at 7:03 PM | Permalink

    Bruce:

    1944 to 2007
    HADCRUT3 .121 to 0.402 = .281
    BEST .255 to 1.205 = .940

    Over triple the difference.

    If you want to do this right, you need to compare BEST to CRUTEM, and then you need to use linear regression to estimate the trend, not pick individual years.

    I get 0.128 °C/decade (CRUTEM3GL) and 0.155 °C/decade (BEST) for 1944-2007 inclusive. To do a detailed comparison you’d have to understand how they compute their land-only mean, since different groups do this part differently.

    (It also appears to me that you cherry-picked your starting and ending intervals.)

    Anyway… triple the difference? Hardly.
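
    A small simulation (Python; the trend, noise level and counts are invented for illustration) of why regression beats endpoint differencing: both estimators are unbiased, but the endpoint estimator’s spread is several times larger because it lets two noisy years decide everything.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1944, 2008)   # 1944..2007 inclusive
true_trend = 0.015              # deg C / year, invented for illustration
noise_sd = 0.15                 # interannual noise level, also invented

end_est, ols_est = [], []
for _ in range(2000):
    series = true_trend * (years - years[0]) + rng.normal(0.0, noise_sd, years.size)
    # Endpoint differencing: two noisy years decide the whole "trend"
    end_est.append((series[-1] - series[0]) / (years[-1] - years[0]))
    # OLS regression: every year contributes, averaging the noise down
    ols_est.append(np.polyfit(years, series, 1)[0])

sd_end = float(np.std(end_est))   # spread of the endpoint estimator
sd_ols = float(np.std(ols_est))   # spread of the OLS estimator
```

    Under these invented settings the endpoint estimator’s standard deviation comes out roughly three times that of OLS, which is why single-year comparisons between indices are unreliable.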

    • Bruce
      Posted Oct 22, 2011 at 8:27 PM | Permalink

      I don’t need to compare BEST to CRUTEM. And I picked 1944 because in both datasets it was the peak year in the 1940s. It stands out in the HADCRUT3 graph.

      I picked 2007 because it was the largest value in the 2000s for BEST.

      It may be unfair to compare BEST to HADCRUT3 because BEST makes up data where there is none and HADCRUT3 does not.

      BEST is kinda GISTEMP on steroids. A gross exaggeration of what may have happened.

      • Carrick
        Posted Oct 22, 2011 at 10:24 PM | Permalink

        BEST and CRUTEM are land-only. HADCRUT combines land + ocean, so comparing BEST with HADCRUT is meaningless. I take it you don’t know how to compute trends.

        • Bruce
          Posted Oct 22, 2011 at 11:09 PM | Permalink

          I prefer to look at extremes to get a feel for what is being claimed. Can anyone take seriously that it warmed 1.418C from 1976 to 2007? That’s almost .5C a decade — not the .155C you claim.

          And since one third of the land cooled, that means the regions that warmed must have warmed by even more than that. Not believable.

        • Bruce
          Posted Oct 22, 2011 at 11:51 PM | Permalink

          Even stranger.

          CRUTEM3 warmest year 1998 = .820.

          BEST warmest year 2007 = 1.205 which is .297 warmer than 1998.

          (HADCRUT3, 1998: .548)

  32. Carrick
    Posted Oct 22, 2011 at 7:24 PM | Permalink

    RomanM:

    For some of us, this is a part of our professional bread and butter, however the caveat I have always advised is “Don’t try this at home without consulting competent help.”

    Yes I agree.

    It is possible for smoothing to negatively impact the results of your analysis, but it’s also possible for high-frequency noise to affect the results (the shot noise example I gave above, or any blue noise source will negatively impact your effective signal to noise if not handled correctly).

    I just wanted to point out that it’s overstating the case to say it’s always better to use unfiltered or unsmoothed data.

    If it really matters whether or not you smooth, and you’re not trained to handle this situation (in particular if the word “verification” doesn’t mean anything to you), you probably should consult competent help before trying to analyze the data.

  33. Ursus Augustus
    Posted Oct 22, 2011 at 8:16 PM | Permalink

    I know some seemed disappointed that the BEST work seemed to support the HADCRU/GISS/NOAA temperature records and the apparently rapid temperature increase over the later 20th century, but as Steve points out, the implications of the cooler 19th century are significant: they undermine the hockeyschtick thesis and also illustrate just how much global temperature can change, and how quickly, without any help from us evil humans.

    Not so much a frontal assault on the AGW castle gates as a sapping under the walls or even that old favourite, a trojan horse it seems to me.

    I am most interested to hear the final verdict on the kriging interpolation. My concern is that this may be a simplistic mathematical fit using, say, a polynomial of some order, rather than a form derived from detailed, empirical understanding of the actual temperature variation grid cell by grid cell, or of the steepness of the gradient in the vicinity of any given UHI.

    If, for example, UHI effects dissipate steeply as one moves away from the urban zone but the kriging model has them dissipate slowly, the effect is to overstate the UHI. It is unlikely that an overly steep gradient will be modelled. It is the AGW mob that point out that UHI covers only some 2% of the planet’s surface, which is fine (I can assume for the moment that is a fairly accurate figure), but the issue is what the mean effective area is under the mathematics of the global temperature analyses.

  34. Posted Oct 22, 2011 at 8:50 PM | Permalink

    I am curious about the ‘Rich Muller as reasonable guy’ hypothesis. In 2008 he was quoted as saying ‘If [Al Gore] reaches more people and convinces the world that global warming is real, even if he does it through exaggeration and distortion — which he does, but he’s very effective at it — then let him fly any plane he wants.’ http://www.grist.org/article/lets-get-physical

    Does this mean that exaggeration and distortion are simply means to an end for Muller, or does this mean that he was and/or is unguarded in his comments?

    • BlueIce2HotSea
      Posted Oct 25, 2011 at 1:56 AM | Permalink

      ZT

      Muller holds politicians and lawyers to a lower standard. Perhaps because exaggeration and distortion are tools of their trade?

      Look in your linked article. Muller says James Hansen has been cherry-picking and is not really being a scientist. He says Hansen is being a lawyer. Muller disapproves.

      • Posted Oct 25, 2011 at 10:14 AM | Permalink

        And does that mean that Muller holds the public with whom politicians communicate in contempt and unworthy of the truth?

        • BlueIce2HotSea
          Posted Oct 25, 2011 at 5:25 PM | Permalink

          ZT

          No. I think Muller’s problem is more with Al Gore. Praise for “exaggeration and distortion” is not exactly unqualified admiration. Muller was more clear (elsewhere) when he called An Inconvenient Truth a pack of half-truths. And when he blew his stack at the ‘Team’ for hiding the declining proxy temps.

          Muller is an activist for “non-partisan science”. Maybe that explains the crooked path he has taken through the political minefield.

  35. Ed_B
    Posted Oct 22, 2011 at 9:04 PM | Permalink

    I can’t reconcile the chart of anomalies with so many record temperatures from the 1930s. If temperatures have climbed that much, why do these old record highs still stand? Maybe I am mistaken.

  36. ianl8888
    Posted Oct 22, 2011 at 9:15 PM | Permalink

    SMc quote:

    >Indeed, the analogies between interpolating ore grades between drill holes and spatial interpolation of temperatures/ temperature trends has been quite evident to me since I first started looking at climate data.<

    Yes, indeed. I regard kriging as the better method for infilling "empty spaces", second only to actual empirical data.

  37. Manfred
    Posted Oct 22, 2011 at 10:56 PM | Permalink

    The difference from sea surface temperatures increases even further with the new HadSST3:

    http://climateaudit.org/2011/07/12/hadsst3/

    From the peak in the 1940s there is only an increase of 0.3 deg to the recent warming peak, and possibly only 0.2 deg without the measurement data changes you reported.

    • DocMartyn
      Posted Oct 23, 2011 at 5:47 AM | Permalink

      “From the peak in the 1940s”
      From this

      ‘The winter of 1939/1940 was one of the coldest on record, with persistent cold weather from 22nd December through January. Temperatures were the lowest for at least 100 years in many parts of Europe. It is now theorised that the intense military activity in the North Sea was responsible for disturbing the sea temperature and therefore the climate. The movement of ships in convoys, the laying of huge mine fields by both Britain and Germany, the sinking of ships and the widespread use of explosives to sink mines, and in the use of depth charges to seek to destroy submarines were responsible for ‘mixing up’ the warm and cold water layers of the sea. Up to 10,000 depth charges a month were being used to systematically hunt for submarines, using patterns of explosives designed to cover wide areas and different depths. The loss of heat from the sea led to more cold air from the arctic being pulled into the European region, resulting in much colder weather overall.’

      http://ww2today.com/a-cold-winter-arrives-in-europe

      • Scott Brim
        Posted Oct 23, 2011 at 11:38 AM | Permalink

        Doc, the explanation for the cold winter of 1939/1940, as published on ww2today.com, strikes me as preposterous on its face.

        The total energy inputs from these various manmade sources — those which might affect ocean mixing patterns if they had enough oomph — couldn’t hold a candle to those energy inputs supplied by natural processes.

      • diogenes
        Posted Oct 23, 2011 at 1:03 PM | Permalink

        Doc… the winter of 1939/40 was hardly the high point of the Battle of the Atlantic. By this logic, the following winter should have been much colder, as the Battle was much more intense then.

        http://en.wikipedia.org/wiki/Losses_during_the_Battle_of_the_Atlantic_%281939-1945%29

      • Sean Inglis
        Posted Oct 26, 2011 at 4:23 AM | Permalink

        I note the total failure to account for the role of mermaids and sea-serpents.

        No more ludicrous than any other “cause” described here.

  38. Dale R. McIntyre
    Posted Oct 22, 2011 at 11:04 PM | Permalink

    Dear Mr. McIntyre

    Thank you for the thoughtful and insightful analysis of this new and important paper: after reading RealClimate, Climate Progress and Dot Earth, it is always a pleasure to turn to Climate Audit for some adult conversation.

    I feel one point needs clarification, however: the web post above states that Muller et al is finding a trend of 0.388 degrees per decade in “bad” stations.

    The copy of Muller et al’s “Earth Atmospheric Land Surface Temperature and Station Quality” paper which I was able to obtain shows, in its Table 1, (p.7) that those “bad” stations had a mean slope of 0.388 degrees C per century, not per decade.

    Do you know which is correct? I have not yet been able to obtain all four of the papers in question.

    Best Regards, and please keep up your work on this vital topic.

    Dale R. McIntyre

    Steve: You’re right. Their trends are lower than the ones that I’d calculated from USHCN. I had presumed that their trends would be higher and thus didn’t pick up that they were denominated in deg C/century rather than deg C/decade. I’ve updated my comments. The inconsistency remains. Perhaps they will archive their code for this article so that I can reconcile.

  39. Geoff Sherrington
    Posted Oct 23, 2011 at 1:53 AM | Permalink

    Just for memory lane, I emailed Phil Jones in 2006 suggesting he explore geostatistics (of which kriging is a part). He replied that CRU had looked at it. It has become rather complex though widely used in ore resource calculations. One of the initial steps is to try to determine the separation of points which are close enough to have predictive effects on each other. Maybe this starts to answer some of the speculations above about locations either side of the Rockies. It would be prudent for those not very familiar with geostatistics to brush up on its present strengths and weaknesses.
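
    For readers unfamiliar with the technique, here is a minimal ordinary-kriging sketch (Python; the station layout and the exponential covariance model are invented for illustration, not BEST’s actual choices). The kriging system solves for weights that sum to one, down-weights clustered stations, and, with no nugget, reproduces observations exactly at station locations.

```python
import numpy as np

def exp_cov(h, sill=1.0, range_km=500.0):
    """Exponential covariance model (assumed for illustration):
    correlation decays with separation; range_km sets roughly how far
    one station remains informative about another."""
    return sill * np.exp(-h / range_km)

def ordinary_krige(xy_obs, z_obs, xy_new):
    """Ordinary-kriging estimate at a single target point."""
    n = len(z_obs)
    # Station-to-station separation matrix
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    # Kriging system [C 1; 1' 0][w; mu] = [c0; 1]:
    # the Lagrange multiplier mu enforces sum(w) = 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy_obs - xy_new, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z_obs)

# Three hypothetical stations (coordinates in km) with anomalies in deg C
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
z = np.array([0.5, 0.7, 0.3])

# Infill an unobserved location: a distance-aware weighted average
z_mid = ordinary_krige(xy, z, np.array([50.0, 50.0]))

# With no nugget, kriging reproduces an observation exactly
z_at_station = ordinary_krige(xy, z, np.array([100.0, 0.0]))
```

    This is only a toy: a real geostatistical analysis estimates the covariance structure (variogram) from the data, which is exactly the step that determines how far apart stations still have predictive effect on each other.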

  40. Larry Huldén
    Posted Oct 23, 2011 at 9:08 AM | Permalink

    DocMartyn wrote: “In 1812 Napoleon’s army retreated in the face of ‘General Winter’”.
    Fact is that the temperature did not matter. The bulk of his army (about 80%) perished from disease before the first frost in the autumn of 1812.
    Look at the troop-strength timetable:
    22 Jun: 600,000 (422,000 + foreign service); the expedition starts by crossing the Niemen River towards Kaunas (Kovno). Napoleon loses 470,000 to disease before Borodino.
    07 Sep: 130,000; the only BIG battle, at Borodino; Napoleon loses 30,000.
    14 Sep: Napoleon enters Moscow with 100,000 men; no frost.
    18 Oct: Napoleon starts the retreat, still no frost; 75,000 lost, mostly to disease, before Krasnoi.
    14 Nov: cold spell, -26 centigrade; 24,000 leave Krasnoi.
    05 Dec: Napoleon leaves the remnants of his troops, about 12,000 men.
    06 Dec: cold spell, -38 centigrade.
    13 Dec: 1,300 men return to Kaunas; +6,000 escaped foreign troops survived the expedition.
    There are differences in the troop-number estimates given by 25 different historians, but the big pattern does not change.

    The famous impressive paintings by the officer Faber du Faur have over-emphasized the cold winter and distorted the interpretation of the causes of the failure of Napoleon’s expedition. The two major factors which actually contributed are: 1) vector-borne diseases and 2) Napoleon’s failure to secure logistics and supplies along the plundered route to Moscow. Returning troops finally starved.

    Larry Huldén
    Finnish Museum of Natural History

    • JT
      Posted Oct 23, 2011 at 7:35 PM | Permalink

      Don’t keep us in suspense, what were the diseases? Not Malaria, by any chance?

      • Larry Huldén
        Posted Oct 24, 2011 at 12:58 AM | Permalink

        Highest mortality among Napoleon’s troops was caused by spotted fever (typhus) and dysentery.

        Spotted fever is vectored by body lice, which were very common among soldiers who could not wash their clothes in field conditions. Relapsing fever is also vectored by body lice but was probably not important in this case.

        Dysentery (severe diarrhoea with blood) is caused by Shigella bacteria and Entamoeba histolytica. Several fly species spread these agents among soldiers during the warm summer.

        Occasional malaria (Plasmodium vivax) certainly occurred but was completely marginal, because mortality from the northern malaria was only about 2%. In addition, a malaria epidemic can occur only when troops are stationary for a longer time (as during the Continuation War in Finland in 1941-44). In Napoleon’s case the troops were mostly on the move.

    • Alex Heyworth
      Posted Oct 25, 2011 at 5:26 AM | Permalink

      Many also died during river crossings because they couldn’t swim. This is evident in Tufte’s famous map showing the progress of the advance and subsequent retreat (one of the most impressive pieces of graphical presentation of data).

  41. Noblesse Oblige
    Posted Oct 23, 2011 at 9:41 AM | Permalink

    Does anyone have an idea what is going on here: http://sowellslawblog.blogspot.com/2011/09/us-long-term-temperature-trend-from.html ? The NOAA area-weighted trend of the 48 individual states appears to be significantly less than the published trend for the continental 48 as a whole.

  42. Kenneth Fritsch
    Posted Oct 23, 2011 at 10:17 AM | Permalink

    Damned spaghetti graphs. The important point of these graphs to me is how the reconstructions handle the sudden warming of the most recent times and how that compares to earlier times. The spaghetti and the obligatory instrumental record spliced onto the end always get in the way or, worse, are never discussed.

    Take temperature series with an assumed long term persistence and parameters obtained from some of these long term reconstructions (and without assumed physically attributable trends) and I can obtain any of these peaks and valleys and runs in between.

  43. Kenneth Fritsch
    Posted Oct 23, 2011 at 10:49 AM | Permalink

    “Most importantly, the siting metadata is for a 30 year span. Nobody knows what the siting characteristics of these stations were from 1950-1979, much of that has been lost in the fog of time. Muller used a 60 year analysis back to 1950, where we (and Menne et al) used a 30 year period of 1979-2008 because we both knew that the siting metadata was invalid beyond that.

    Yet Muller left the 60 year period despite my objections. Normally we have these things resolved in peer review, but that chance for quiet reflective discourse has been pretty much blown with the BEST media blitzkreig on March 20th.”

    I would doubt that the 30 year period is that meaningful either, since you do not know (1) what portion of the stations in a particular rating group changed during that period, nor (2) from what rating to the current rating a station changed. I have seen more significant changes occurring when looking at trends versus CRN rating group as one goes back in time many decades before 1980. These longer-term trends suffer from the same effects that I mentioned above for a 30 year recent history.

    Overall I think the implied changes from Menne or Fall have to be thought of not as quantified measures but only as qualitative ones, capturing only part of the effect of all stations going from a given CRN rating to another given rating.

    Further, it might well be that the criteria for rating and differentiating these stations leave out some important yet-to-be-discovered differences. Perhaps a more direct way would be to construct stations in the same location with the various different characteristics noted in the Watts evaluation and make measurements.

    The variations in trends within a CRN group are large, and statistical significance takes either large differences in group averages or large numbers of samples. The 9 regions used in Fall show a significant and large variation from region to region, and these differences have to be accounted for in any comparison of CRN groupings. The regions could cover the latitude and longitude effects, but I notice that no accounting was made for altitude differences.

    By the way, I have been impressed with the BEST paper and find it the most comprehensive treatment of temperature averaging and infilling of all the papers I have read to date. It covers most of the effects that I have judged important and that other papers have ignored or only mentioned as future work. I need to read more in order to determine where all the assumptions in the BEST paper lead and whether their analysis accounts for all the uncertainties those assumptions imply. I have been looking at the breakpoint algorithms used by Menne and BEST for homogenization and attempting to make sense of my analysis of their results.

    • j ferguson
      Posted Oct 23, 2011 at 11:44 AM | Permalink

      Kenneth:
      I like the idea of building a “proper” thermometer station near maybe 30 of the existing doubtful stations and running a couple of years of readings to see how they differ. At first, it occurred to me that there would be a difference simply because the new ones would be new and the old ones not new. But the cure for that would be to erect two new stations associated with each site, one co-located at the doubtful site and subject to the same signal contaminations and the other in a “clean” site.

      The co-located station would give you the new/old step, if any. If there is any value to this scheme, wouldn’t you think you could get usable data within maybe three seasons?

      I suppose someone would say that this would never be done because of the cost.

    • Posted Oct 24, 2011 at 2:01 PM | Permalink

      The “field” test you suggest has been done. CRN ratings represent the SPREAD of high to low, the variance, NOT the bias to the mean. The mean bias, measured in the field, was a constant .1C for every rating.

      • j ferguson
        Posted Oct 24, 2011 at 5:25 PM | Permalink

        Thanks Mosh:
        with 0.1C “bias,” which I assume was collective, one wonders whether the fuss about poor siting was anything other than aesthetic.

        • Steven Mosher
          Posted Oct 24, 2011 at 8:33 PM | Permalink

          There was actually a huge disagreement over what CRNX meant, where X is a number.

          Anthony (and I, for a while) held that it must mean a BIAS of X.

          However, one of the scientists who did the field test actually appeared here, and then later at Lucia’s, to explain what they meant.

          Hu, I believe, translated one of their “papers” from French.

          The story was, according to him, that the number was given to represent the maximum divergence you could see at any given time. Over time there was a slight warm bias.

          When you think about the physics it makes sense.

          Not that people listened. It took me a while to change my mind, but there you have it

      • diogenes
        Posted Oct 24, 2011 at 5:50 PM | Permalink

        Really? Does that seem credible, or merely convenient? Unless you are using very imprecise instruments….

        • Posted Oct 24, 2011 at 6:30 PM | Permalink

          Doesn’t seem credible at all to me. Place a thermometer over grass for a few days, then over asphalt for a few days and tell me there isn’t a difference.

        • Steven Mosher
          Posted Oct 25, 2011 at 10:59 AM | Permalink

          Well, you guys want experiments. Long before this became an issue the CRN guidelines were set up.
          The people involved did a field test. They used the results of that test to establish the ratings 1-5. The 1-5 refers to the spread: a CRN 5 will see temps -2.5 to +2.5 about the mean;
          a CRN 4 sees -2 to +2.

          Later, I found the guideline and pointed Anthony at it. Then I went looking for the original work.
          At some point one of the scientists came here to CA to explain in the comments. Later again at Lucia’s.

          You don’t like the experiment? Redo it. The problem is you have to control for things like season and wind and rain, all sorts of things that drive the spread of the data and drive the mean.

          The other thing is that the bias has to affect TMAX or TMEAN… that means it has to be timed perfectly. Same with AC units: it’s not enough to raise the temp after TMAX.

        • Bruce
          Posted Oct 25, 2011 at 12:06 PM | Permalink

          “Class 1 – Flat and horizontal ground surrounded by a clear surface with a slope below 1/3
          (<19º). Grass/low vegetation ground cover 3 degrees.
          Class 2 – Same as Class 1 with the following differences. Surrounding Vegetation 5º.
          Class 3 (error 1ºC) – Same as Class 2, except no artificial heating sources within 10
          meters.
          Class 4 (error ≥ 2ºC) – Artificial heating sources <10 meters.
          Class 5 (error ≥ 5ºC) – Temperature sensor located next to/above an artificial heating
          source, such a building, roof top, parking lot, or concrete surface."

          http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf

        • Posted Oct 25, 2011 at 2:44 PM | Permalink

          You need to go to leRoy’s original work and contact the guys who actually came up with the rating system. That document is the BEGINNING of the journey. I started it 4 years ago. Get crackin’.

        • Bruce
          Posted Oct 25, 2011 at 2:50 PM | Permalink

          So … NCDC is lying? And you have no reference for your version? Uh huh.

        • Steven Mosher
          Posted Oct 25, 2011 at 2:53 PM | Permalink

          Wrong. NCDC got the ranking from leRoy. leRoy started his work in 1998. His early work is in French; I believe Hu has a copy of that, and you can find it for yourself. Recently leRoy has offered the ranking system up to the WMO.

          Here is one of the latest. Note closely that the number (2, 3, 4, 5) refers to the increased UNCERTAINTY, that is, the spread in the data, not the bias.

          http://www.jma.go.jp/jma/en/Activities/qmws_2010/CountryReport/CS202_Leroy.pdf

        • Bruce
          Posted Oct 25, 2011 at 4:07 PM | Permalink

          I’m not going to do your research for you, Mosher. You made a claim. Back it up.

        • Bruce
          Posted Oct 25, 2011 at 4:13 PM | Permalink

          I found Leroy’s ppt. I was right. You were wrong.

          “Class 3 (additional estimated uncertainty added by siting up to 1°C)
          Class 4 (additional estimated uncertainty added by siting up to 2°C)
          Class 5 (additional estimated uncertainty added by siting up to 5°C)”

          http://www.google.ca/url?sa=t&rct=j&q=meteo-france%20siting&source=web&cd=7&ved=0CEcQFjAG&url=http%3A%2F%2Fwww.knmi.nl%2Fsamenw%2Fgeoss%2Fwmo%2FTECO2010%2Fppt%2Fsession_5%2F5(01)_leroy.ppt&ei=TCOnTpLYKuLkiAKglYybDQ&usg=AFQjCNHb8OJAQKFunObdPH9QdrQ-yS18DQ

        • Steven Mosher
          Posted Oct 26, 2011 at 1:00 AM | Permalink

          You’re wrong, Bruce. Understand the difference between UNCERTAINTY and BIAS.

          Horse’s mouth:

          http://rankexploits.com/musings/2010/uhi-in-the-u-s-a/#comment-38730

          http://rankexploits.com/musings/2010/uhi-in-the-u-s-a/#comment-38741

          http://rankexploits.com/musings/2010/uhi-in-the-u-s-a/#comment-38772

          Christian also posted the same thing here in 2007 when we discussed it.

          The ranges are the 95% confidence interval for the bias.

          If you want the internal report from leRoy, write him or Christian and learn French

        • diogenes
          Posted Oct 25, 2011 at 5:13 PM | Permalink

          I read French….is this the Mosher who wants everything to be open?

        • Steven Mosher
          Posted Oct 25, 2011 at 3:14 PM | Permalink

          For some history and the leRoy connection go back to 2007

          http://wattsupwiththat.com/2007/08/24/specs-on-weather-stations/

          http://www.realclimate.org/index.php/archives/2007/08/friday-roundup-2/#comment-50403

          or try this.. where did NCDC get the rating system?

          http://www.google.com/url?sa=t&rct=j&q=leroy%20france%20siting%20guideline%20wmo&source=web&cd=8&ved=0CFYQFjAH&url=http%3A%2F%2Fams.confex.com%2Fams%2Fpdfpapers%2F71817.pdf&ei=WBKnTtmsNuemiQKh3a2gDQ&usg=AFQjCNFMPu87TjQRD5ab6dKZJm3hw_miKw&sig2=NM7uu-PItDbfSeqbtu92Jg

          See this for leRoy explaining that he is the originator of the rating system and that it is now in use by the US.

          Like I said, you have 4 years of catching up to do. leRoy came up with the system, did the field experiment, and applied the standards to France’s network. The number he assigned has not always been clear to casual readers: it’s the peak error you can see. That is, on some days you might see a 2C delta; on AVERAGE you don’t. He is referring to the uncertainty bounds, peak to trough, not the average BIAS. Here’s another:

          http://www.google.com/url?sa=t&rct=j&q=leroy%20france%20siting%20guideline%20wmo&source=web&cd=9&ved=0CF8QFjAI&url=http%3A%2F%2Fwww.knmi.nl%2Fsamenw%2Fgeoss%2Fwmo%2FTECO2010%2Fppt%2Fsession_5%2F5(01)_leroy.ppt&ei=WBKnTtmsNuemiQKh3a2gDQ&usg=AFQjCNHb8OJAQKFunObdPH9QdrQ-yS18DQ&sig2=34YaBfiDIXKEd_N-kRJbVA

          and another

          http://www.google.com/url?sa=t&rct=j&q=leroy%20france%20siting%20guideline%20wmo&source=web&cd=16&ved=0CDQQFjAFOAo&url=ftp%3A%2F%2Fwww.wmo.ch%2FDocuments%2FSESSIONS%2FCIMO-XV%2FEnglish%2FDOCs%2Fword%2Fd04_en.doc&ei=oBanTqnzNq3KiALRhpigDQ&usg=AFQjCNGTWCjm0RQjPkFz3GwBmiCIuv3GDw&sig2=6ZGSsj2gR0rds-uie9bpuQ

          If you question the quality of leRoy’s work, have a gander at his lab work on rainfall gauges:

          http://www.ccrom.org/documentspublics/2007thematique/documentstechniques/Rapport_essai_pluviometre-rapportR045.pdf

          More? If you want to read about the field test of CRN1-5 for temperature, read all of CA; you’ll find it. Or try Lucia’s; the scientist showed up there. Write him.

        • diogenes
          Posted Oct 25, 2011 at 5:15 PM | Permalink

          ok…Mosher holds the cards….can we play LORD MOSHER? please?

        • Steven Mosher
          Posted Oct 26, 2011 at 1:07 AM | Permalink

          No, I get a little peeved with people who have not followed this from its inception 4 years ago and who expect me to do their homework. You can find the comments from Christian yourself here on CA. I’ve linked his other conversation on Lucia’s. You can write him and ask him for the reports, or just use your head. Go back to 2007 and read every post and all the comments, or GIYF. I’m not.

          Class 4 (additional estimated uncertainty added by siting up to 2°C)

          What does that mean? According to Christian, it means this:

          A class 4 will see -2 to +2 C swings; this is the 95% CI of the instantaneous bias.
          It’s not the MEAN bias, and it’s not just warming; it’s also cooling (from shade).

          If leRoy wrote

          Class 4 (additional estimated BIAS added by siting up to 2°C)

          then that would mean something entirely different.

          And if the mean bias was 2C for these stations… well, think about it. According to BEST we are 1.5C warmer than the LIA. If the mean BIAS was 2C… well, cool, frost fairs: we are still in the LIA.
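The peak-versus-mean distinction being argued here is easy to see in a toy simulation. Every number below is an assumption for illustration (spike frequency, spike size, noise level); nothing comes from leRoy's field data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten years of daily siting error for a hypothetical "CRN4-like" site:
# most days contribute little, a few calm sunny days spike up to ~2 C,
# and shading contributes a small steady cooling.
n_days = 3650
base = rng.normal(0.0, 0.3, n_days)        # everyday microsite noise
spike_days = rng.random(n_days) < 0.05     # ~5% of days: strong spike
spikes = np.where(spike_days, rng.uniform(0.5, 2.0, n_days), 0.0)
error = base + spikes - 0.05               # small cooling offset from shade

peak = np.quantile(np.abs(error), 0.975)   # the "up to 2 C" style rating
mean_bias = error.mean()                   # what actually moves a long-term mean
print(f"97.5th-pct |error| ~ {peak:.2f} C, mean bias ~ {mean_bias:.2f} C")
```

The 97.5th-percentile error lands near the rating-sized value while the mean bias stays near a tenth of a degree, which is the shape of the claim being made: a large instantaneous spread is compatible with a small average bias.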

        • Steven Mosher
          Posted Oct 25, 2011 at 3:18 PM | Permalink

          Here, Bruce, start reading about leRoy here, where TCO and I discussed it back in 2007.

          http://climateaudit.org/2007/08/05/more-on-asphalt/#comment-97948

          get crackin..

        • Steven Mosher
          Posted Oct 25, 2011 at 3:33 PM | Permalink

          http://www.powershow.com/view/230d90-N2Y1M/METADATA_TO_DOCUMENT_SURFACE_OBSERVATION_flash_ppt_presentation

          or have a look at his TECO98 presentation in casablanca.

          other resources

          http://www.wmo.int/pages/prog/www/IMOP/publications-IOM-series.html

        • Kenneth Fritsch
          Posted Oct 25, 2011 at 4:34 PM | Permalink

          Steve Mosher, I do not believe that what you linked bears on my suggestion for experiments to estimate the effect on trends of a change in a station’s CRN rating.

          The CRN ratings as originally developed were apparently as you have noted, and the field measurements provided a variation in temperatures to be expected from a given rating: the higher the numerical rating, the higher the variation. That effect by itself would indicate that a CRN 5 rated station would show more variance, and thus uncertainty, than a CRN 1 station. I would also think that the more variation, the more likely one could see a bias in the resulting measurements.

          The Fall and Menne papers dealt with trends and as I recall did not deal with the variation in trends with CRN rating. I need to go back and look. Also I am not at all sure what the measurement was that gives a quantitative or semi-quantitative measure of the temperature variation with CRN rating.

          My observation was that while one can safely state that neighboring USHCN station temperature series correlate well, the variations in trends over time can vary greatly both within a CRN-rated group and within the same regional group. Looking for significant differences in trends between groups requires, as I have stated too many times already, large differences in group averages and/or large numbers of samples (stations). How much of that station-to-station variation in trends can be attributed to the CRN rating? The large variation from station to station that I note above is in the context of the average US trends over time.

        • Posted Oct 26, 2011 at 12:41 AM | Permalink

          Yes, Kenneth. I am trying to put to rest the silly notion that the mean bias is 2C, 3C, 4C or 5C. If it were, Anthony would have found that, as would you.

        • diogenes
          Posted Oct 25, 2011 at 5:47 PM | Permalink

          LORD MOSHER…what are you trying to suggest…this presentation suggests that 10% of the French climatological network is unreliable…it says nothing about your assertions about biases around mean temperatures. It does not talk about the global system of gathering climate data. The other presentations I have looked at seem to replicate what Bruce has already shown. This is just an attempt to appear knowledgeable on your part. Can you actually back up your assertion about the bias being the same across all classes of thermometer?

        • Posted Oct 26, 2011 at 12:37 AM | Permalink

          First things first

          1. Establishing that leRoy is the originator of the metric, as I claimed.

          Then you have to actually read all the material and see that he talks about the percentage of error. Then you have to read the dialogues between sod, myself, and leRoy’s co-worker, who explains that the errors are “peak” values and not MEAN values. Don’t expect me to do all your work.
          Then you need to put your thinking cap on. IF the MEAN error was 5C, you would not see temperatures rising only .8C from 1900 to today. If the mean error was 2C, you would not see temps rising only .8C from 1900 to today.
          If the MEAN BIAS was 1.5C we would be having LIA frost fairs. Get it?

          On any given day you might see a 2C or 5C spike. Not every day. The mean bias is a function of the frequency of these spikes. The CRN value captures the peak error, not the average error over time. But you are welcome to go find a 5C difference between CRN5 and CRN1. Anthony found……. no difference.
          Hypothesis tested… and no cookie.

          What did we find when we looked at CRN1 versus CRN5 with 300 stations? I think JohnV and I found… about .15C. Go figure.

          Did that square with what we learned from leRoy’s colleague? Yup.

        • Kenneth Fritsch
          Posted Oct 26, 2011 at 10:06 AM | Permalink

          I saw no reply button at your latest post, Steven Mosher, so I’ll reply to the previous post.

          Your input that the CRN ratings are only a measure of some less-than-quantifiable variation to be expected from a station assigned that rating is valuable and informative. I would suspect that most people who have given the CRN ratings any thought would have come to a similar conclusion. Where I would disagree with what I surmise is your take on these findings, and those of Menne and Fall concerning the effects of CRN ratings on temperature trends, is that I do not think these analyses, or the revelations from leRoy, by themselves resolve, finally and totally, the effect on temperature trends of a change in a station’s CRN rating.

          To reiterate what I have said too many times previously: if a CRN5-rated station was CRN5 from the beginning and never changed, and the same for a CRN1-rated station, I would expect these stations, if they were measuring identical climates, to have nearly the same trends, with perhaps the CRN5 station’s trend being more uncertain. That uncertainty would, of course, have to be determined using a number of CRN1 and CRN5 stations. Therefore, if the period analyzed had no changes in stations’ CRN ratings, I would expect no significant differences in the trends of groups – given that other factors like region and altitude are taken into consideration.

          Another feature of these analyses has to be that the ratings were only a snapshot in time. Thus, while the stations’ current ratings are known at the snapshot, we do not know what the CRN ratings of the stations were at any past potential change, i.e., are we looking at a change from CRN1 to CRN5 or a change from CRN4 to CRN5? We therefore cannot quantify a change from one CRN rating to another. A further complication is that even if we suspect changes occurred at some stations in a given time period, those changes would involve some unknown fraction of all stations, so we cannot quantify them.

          I have been studying the questions posited above with an analysis that includes breakpoints. I am continuing to attempt to resolve some rather complicated issues or at least to satisfy my own curiosities.

          My general complaint with some who post at the more skeptical blogs is that, in answering some rather simplistic and wrongheaded issues coming out of the less-informed skeptics group, they sometimes seem to think, or at least leave the impression, that their answers somehow close the issue.

          If leRoy had used his field data to estimate the effects of a CRN rating change on long-term temperature trends, that would have been informative, but on my quick read through the papers I saw no such attempt. Even if he had related the field data to uncertainty in trends, we would have gained valuable knowledge.

        • Steven Mosher
          Posted Oct 26, 2011 at 11:49 AM | Permalink

          Yes, I had a brief discussion with Muller about this and suggested that he look for differences in variance between the best and the worst. I never followed up with him to see if they tried that. I agree that the complications that arise from only having a snapshot in time are considerable. Also, there is precious little information on the actual field test. For me it was enough to settle what I thought was an obvious misunderstanding. I find the same misunderstanding in UHI. People point to a single city, they cite a peak UHI which is usually reported to show how bad the problem can be, and they walk away thinking that cities see this amount of UHI every day. The bottom line for me is that both of these effects, UHI and micro-site, are small.
          They have to be; otherwise we are no warmer than the LIA… haha. This, combined with the noise in the record, makes identifying a signal difficult. Look at BEST’s 2-sigma spread: ±0.19C. That spread combined with a small effect size makes detection a PITA – no power in the test. I apply some simple logic: if the effect was large, it would be easy to find.

          On the relation of CRN rating to trends: I was thinking about doing some synthetic testing, but got distracted by other issues.

        • Kenneth Fritsch
          Posted Oct 26, 2011 at 3:05 PM | Permalink

          Mosher, to be fair to you, I know that you questioned Zeke over at the Blackboard on calculating the variance of the various CRN-rated trends. He did find the CRN1-rated stations lower, but I did this also and there is a problem with the small sample size for CRN1 stations. If I recall incorrectly and he did CRN1 and 2 together, then I retract my comment. It might have been that he found CRN5 variation larger. Actually I have done all this and need to review my own analysis.

          I guess my biggest problem with measuring and reporting temperature trends is not so much the mean trend found, but rather reporting correctly the uncertainty in those measurements, particularly as we go back in time – or reporting when we are uncertain about the overall uncertainty measure. I thought BEST covered their assumptions well.

        • Kenneth Fritsch
          Posted Oct 26, 2011 at 3:17 PM | Permalink

          I went back to my data on variations in trends for the various groups of CRN-rated stations and found no apparent significant differences between groups for the TOB or Adjusted series. Either leRoy’s field data variations do not translate to the trends, or his measured variations are not applicable for quantifying them.

        • Posted Oct 27, 2011 at 2:06 AM | Permalink

          Clarify, Kenneth: how did you do your test?

          The notion would be something like this: for a given area, you’ve got a variance due to daily changes in the weather, a variance due to siting, and a variance due to measurement.

          You’d best find stations in the same area and look at daily data.

        • Kenneth Fritsch
          Posted Oct 27, 2011 at 11:28 AM | Permalink

          I corrected all the CRN group data using the 9 region differences from Fall et al (2011) and using monthly data. I have also constructed linear models using CRN Ratings and the 9 Regions as factors – and will include altitude in future analysis.

          It appears that you are saying that the variations leRoy found would perhaps be revealed better in variations in daily temperature data. That will be an interesting analysis for me to do in the future. I suspect that any variations would be detected over short time periods and could be measured close to the snapshot time of the CRN ratings, thus making that association more reliable.

          I would think, however, that average daily temperature variability would translate to monthly variability.
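One reason daily data can reveal siting spread that monthly series hide: if the siting error has little day-to-day persistence, averaging ~30 days shrinks its spread by roughly a factor of sqrt(30). A toy check under that white-noise assumption (purely illustrative; real siting errors have seasonal persistence that would average down more slowly):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1200 synthetic months of independent daily siting errors, unit spread.
daily = rng.normal(0.0, 1.0, size=(1200, 30))   # months x days
monthly = daily.mean(axis=1)                    # monthly-mean error
print(f"daily sd ~ {daily.std():.2f}, monthly sd ~ {monthly.std():.2f}")
```

So a ±2 C daily spread could shrink to a few tenths of a degree at monthly resolution, which bears on whether daily variability "translates" to monthly variability in any detectable way.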

        • Bruce
          Posted Oct 27, 2011 at 2:24 PM | Permalink

          Mosher: “People point to a single city, ”

          Nonsense. How about 42 cities?

          “By comparing 42 cities in the Northeast, they found that densely-developed cities with compact urban cores are more apt to produce strong urban heat islands than more sprawling, less intensely-developed cities.

          The compact city of Providence, R.I., for example, has surface temperatures that are about 12.2 °C (21.9 °F) warmer than the surrounding countryside, while similarly-sized but spread-out Buffalo, N.Y., produces a heat island of only about 7.2 °C (12.9 °F), according to satellite data. Since the background ecosystems and sizes of both cities are about the same, Zhang’s analysis suggests development patterns are the critical difference.”

          http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

  44. ferd berple
    Posted Oct 23, 2011 at 11:56 AM | Permalink

    I am surprised that none of the analyses of temperature looks at subtracting the trend in sea surface temperatures from the trend in land surface temperatures, to isolate the portion of the trend that is specific to land use.

    It would seem likely to me that human land use, such as land-clearing, agriculture and UHI, has much more of an effect on land temperatures than on ocean temperatures. Looking at the graph above, the divergence between land and ocean temperatures is something like 0.4 C since 1980, which is a significant amount of the unexplained heating referred to by the IPCC that scientists attribute to AGW.

    From what I can see in the graph, this heating is primarily over the land, which doesn’t make sense if CO2 is the primary cause.

  45. Hu McCulloch
    Posted Oct 23, 2011 at 12:13 PM | Permalink

    Whereas Jones, Bradley and others attempted to argue the non-existence of the Little Ice Age, BEST results point to the Little Ice Age being colder and perhaps substantially colder than “previously thought”.

    While the 19th c was clearly cooler than the 20th, the LIA per se was pretty much already over. See the bimillennial Loehle and McC (2008) reconstruction at
    http://econ.ohio-state.edu/jhm/AGW/Loehle/ . The LIA in the data Craig assembled runs roughly 1450-1750. The 17th c comes in about 0.4 dC below the 19th. Even then, as Craig has noted, uncertainties in proxy dating tend to attenuate the measured composite signal.

    • Bruce
      Posted Oct 23, 2011 at 12:34 PM | Permalink

      For me, the advance of glaciers in Europe 1816-25 would suggest the LIA was not over in 1750.

      “1806-08: Crop failures and famine in Estonia (Tannberg et al. 2000).

      1810-1819: Coldest decade of the last 1250 years in the French Alps, according to tree-ring data (Corona et al. 2011). Mean summer temperature was 3 °C lower than the warmest decades (810s and 1990s).

      1816: Coldest single year on record in many places in Europe and North America, following the 1815 eruption of Mount Tambora in Indonesia.

      1816-25: New glacier offensive throughout Europe; all Alpine glaciers showed advances reaching positions slightly short of 17th century Alpine maximum limits.

      1830-40s: Moderate retreat shown by many glaciers.”

      http://academic.emporia.edu/aberjame/ice/lec19/holocene.htm

      • Steven Mosher
        Posted Oct 23, 2011 at 10:12 PM | Permalink

        Corona

        “About 45% of the temperature variance is reconstructed. Despite the use of the newly updated meteorological data set, the reconstruction still shows colder temperatures than early instrumental measurements between 1760 and 1840.”

        So, the study you cite shows colder data than instruments. But you like this data, so you endorse treemometers. Except in those cases where you don’t endorse treemometers.

        • Bruce
          Posted Oct 29, 2011 at 9:45 AM | Permalink

          Didn’t I highlight the part about advancing glaciers? Did “all Alpine glaciers” advance or didn’t they?

          The early 1800s are quite dramatic climate-wise. Way more ups and downs than now. HADCET has 1826 as the 2nd warmest summer ever. Yet there were the years without summer too.

          I mean, if CO2 can’t produce a warmer summer than 1826 it’s quite whimsical isn’t it?

      • Carrick
        Posted Oct 23, 2011 at 10:53 PM | Permalink

        Bruce:

        For me, the advance of glaciers in Europe 1816-25 would suggest the LIA was not over in 1750.

        I would agree, 1550-1850 is the general period for the LIA in most chronologies.

        It may have started earlier than 1550. If you read The Little Ice Age: How Climate Made History, the pattern may have started in the 1300s.

  46. Oxbridge Prat
    Posted Oct 23, 2011 at 12:40 PM | Permalink

    Ferd Berple, that was exactly my point. The strong divergence between land and sea is exactly what one would expect from UHI and other land-use change effects.

    Is this divergence discussed properly anywhere?

  47. Kenneth Fritsch
    Posted Oct 23, 2011 at 2:50 PM | Permalink

    “The LIA in the data Craig assembled runs roughly 1450-1750. The 17th c comes in about 0.4 dC below the 19th. Even then, as Craig has noted, uncertainties in proxy dating tend to attenuate the measured composite signal.”

    You should have added that it comes with a 95% CI range of approximately 0.8 degrees C, and perhaps explained why, again, we should have more confidence in non-dendro over dendro, rather than merely considering it another reconstruction representing a proxy measure with long-term persistence that did not necessarily respond faithfully to temperature.

  48. TopCD
    Posted Oct 23, 2011 at 3:43 PM | Permalink

    One thing that sticks out in this article is SMcI’s endorsement of Kriging for interpolating temperature data. I am a geologist (not a geostatistician) who writes code to carry out Kriging on geological phenomena. So I, like SMcI, agree that it is probably the best method that one could use for regional (in the local sense) variation. But if BEST are using this to interpolate temperature data on a global level, then I cannot agree. In general language:

    Firstly, how does one deal with regional and global trends (e.g. altitude, distance from coast, or latitudinal variation)? Surely this raises questions about the Kriging type. One would need to Krige with a variable trend, which is the same as interpolating via a spline and then Kriging the residuals. In that case the estimate of confidence supplied by the Kriging variance will be superficially suppressed beyond the range of “spatial correlation”, and ultimately all you’re expressing where data is sparse is the variable trend (equivalent to the spline). I hope they haven’t done this.

    Secondly, the model of spatial correlation will vary depending on the part of the world you’re in. Therefore, using a single global model will produce erroneous interpolation. In short, I’m not sure Kriging could be practically applied while preserving the strengths of the technique – we can only predict what the nature of the control allows.

    Finally, if the covariance is time-dependent – varies from year to year – is the scientific priority not to find out why, rather than to just go ahead and interpolate blindly?
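The behavior of the kriging variance that this comment worries about can be illustrated with a minimal ordinary-kriging sketch in one dimension. The exponential covariance model and the station coordinates/values are made up for illustration; this is not BEST's implementation:

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=500.0):
    # Exponential covariance model: correlation decays with distance h (km).
    return sill * np.exp(-h / corr_range)

def ordinary_krige(xs, zs, x0):
    # Ordinary kriging of values zs at 1-D coordinates xs, predicted at x0.
    n = len(xs)
    H = np.abs(xs[:, None] - xs[None, :])
    # Kriging system with a Lagrange multiplier forcing weights to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(H)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.abs(xs - x0))
    w = np.linalg.solve(A, b)
    pred = w[:n] @ zs
    # Kriging variance: prior sill minus explained covariance (incl. multiplier).
    var = exp_cov(0.0) - w @ b
    return pred, var

xs = np.array([0.0, 100.0, 250.0, 400.0])   # hypothetical station positions, km
zs = np.array([11.2, 11.8, 12.1, 12.6])     # hypothetical station values
p_near, v_near = ordinary_krige(xs, zs, 150.0)    # inside the data cluster
p_far, v_far = ordinary_krige(xs, zs, 5000.0)     # far beyond the correlation range
print(f"var near data ~ {v_near:.2f}, var far from data ~ {v_far:.2f}")
```

Beyond the correlation range the variance climbs back to (and past) the prior sill, so the honest answer far from data is "we don't know". The comment's concern is that kriging residuals about a fitted trend can suppress exactly this signal, since sparse regions then just echo the trend surface.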

  49. DocMartyn
    Posted Oct 23, 2011 at 4:07 PM | Permalink

    I have a statistical request.
    Let us just use the BEST land data from 1900-2010.
    Now, given the 95% CI, can anyone tell me what is the THEORETICAL largest fraction of stations that can have a negative slope, with the others having a positive slope?
    I wish to understand if the contention that one third of the stations exhibit a negative slope is plausible, given their confidence intervals.
    I suspect it isn’t, unless they are using the term confidence interval in a way I am unfamiliar with.
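The question above can be framed with a quick Monte Carlo: if station trends scatter around a positive mean, the fraction with negative slope follows directly from the ratio of mean to spread. The mean trend and scatter below are placeholders, not BEST's numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

mean_trend = 0.9   # assumed mean land trend, C/century
sd = 1.2           # assumed station-to-station trend scatter, C/century
trends = rng.normal(mean_trend, sd, size=30_000)
frac_negative = (trends < 0).mean()
print(f"fraction of stations with negative slope ~ {frac_negative:.2f}")
```

With these placeholder numbers the fraction comes out near Φ(−0.9/1.2) ≈ 0.23; getting "one third of stations cooling" requires the station-level scatter to be roughly 2.3 times the mean trend or more, which is the arithmetic behind the plausibility question.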

  50. Mark
    Posted Oct 23, 2011 at 4:12 PM | Permalink

    I’m happy to hear that Steve has a working relationship with Muller. Once Steve dives into the data and does what he does so well (ask insightful questions about apparent lurking anomalies), his pre-existing relationship may provide him access to encourage BEST to release further data (perhaps intermediate steps or alternative subset slices). This would be beneficial to the cause of science.

    As for the BEST release itself, I’m saddened by their PR-centric grab for attention. It undermines the reputation they were trying to build. However, I’m not surprised by their findings, except the apparent inability to perceive UHI. That’s going to be fascinating to watch the community untangle. The most disappointing aspect was seeing how much the media continues to take fairly neutral and unsurprising results and spin them in one direction only.

    That’s why the most important thing about the BEST effort remains their stated intention to be fully transparent. It doesn’t matter so much what BEST’s interpretation of the data is as long as everyone else can get full access to make their own interpretations. That’s the long-term value here for those that care about understanding what’s really happening in the physical world.

  51. ausiedan
    Posted Oct 24, 2011 at 6:31 AM | Permalink

    New report just up at WUWT
    Heading QUOTE
    Unadjusted data of long period stations in GISS show a virtually flat century scale trend
    UNQUOTE
    Reference: http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#more-49836

    I’m still not completely convinced that we have a good handle on global temperature.

    • Steven Mosher
      Posted Oct 24, 2011 at 8:28 PM | Permalink

      It’s a pretty feeble attempt. You cannot use unadjusted data because of known biases WRT TOBS; this has been discussed here ad nauseam. You have two choices:

      1. apply a TOBS correction
      2. cut the data series and use a BEST like method.

      Next, the averaging method is untested. He needed to do synthetic tests on the method. Missing months are NOT randomly distributed; think Jan and Dec.

      Next there was no area weighting.

      Next, he seems to have a different data source and doesn’t call it out directly.

      Giant waste of time

      • JamesG
        Posted Oct 29, 2011 at 7:45 AM | Permalink

        TOBS is not a known bias, it is an assumed bias with no real justification and with a standard correction by Peterson that is admitted to be largely guesswork with zero testing against reality.

        Why assume a TOBS bias anyway? It is a correction that makes great sense over a few days, but you’d expect it to even out over the long term unless people did the obs consistently later or earlier over a multi-year span. And of course, the correction is not applied consistently, which makes it even more ridiculous to combine the differing datasets in the first place.

        • Steven Mosher
          Posted Oct 29, 2011 at 11:27 AM | Permalink

          Wrong. Read Karl’s paper. Or read the TOBS thread here. Or get the data and prove it to yourself.

          1. It was tested against reality; in fact, that’s how the method was developed.
          2. There is a BIAS. You can see this for yourself: look at the data posted by JerryB. If you don’t know who he is, go read the rest of CA before you comment.
          3. The rest of your comments make no sense.

        • Hu McCulloch
          Posted Oct 29, 2011 at 1:18 PM | Permalink

          JamesG — I empathize with you completely, since I too was once a TOBS Denialist. However, after a lengthy discussion on CA, at http://climateaudit.org/2007/09/24/tobs/ , Mosh, Aurbo and others made me into a True Believer. See my concession comment on that thread 9/28/07, 9:20 AM.

          Although TOBS is a substantial issue, my preference would be to just treat a change in TOBS as a break in the series, rather than to apply Karl’s complicated adjustment.

          (The 2007 CA TOBS thread is unusual because it has no lead article. At my request, Steve pasted over about 20 comments from an earlier thread on USHCN biases, and then the discussion begins in earnest.)

    • JamesG
      Posted Oct 29, 2011 at 8:06 AM | Permalink

      This is US temperature I believe, not global temperature. But it is no surprise that there is no significant long-term trend in the US. Neither is there any in the Arctic data.

      OK, the world MAY have warmed despite the only two datasets with sufficient data to be relied upon showing insignificant long-term warming, but would anyone here make a bet on that in any other sphere where their own money was at stake?

      Of course, 0.6 degrees or so over the last century could easily be regarded as remarkably stable anyway. It all depends on your terms of reference, or even on how optimistic or pessimistic you are. And prior to the last century it is blindingly obvious that all signals are much smaller than the error bounds. As such, all of these efforts at global temperature spatial and temporal infilling are largely meaningless: the data are just not sufficient, so adjustments, extrapolations, smoothings, etc. are just plain unreliable. This used to be a principle that was well understood.

  52. Jeff Patterson
    Posted Oct 24, 2011 at 8:50 AM | Permalink

    I am curious about the 19th century oscillations that are characteristic of the step response of an underdamped system (note the exponential reduction in the swing amplitude as time progresses). One wonders if there wasn’t some 18th century event that caused a step change in forcing. If so, eigenvalue analysis could yield important clues about climate sensitivity and stability.

    • Posted Oct 24, 2011 at 1:57 PM | Permalink

      You can draw a line through the 95% CIs without these swings, so they (or a large part of them) may just be measurement error caused by the sparse coverage.

      The big dip centered on 1816 is surely due to Tambora and for real. It starts before 1816, but then this is a 10-year, presumably centered, average. (I’m not sure how they center a 10-year average, but it’s no big deal for this discussion.)
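      For what it's worth, one common way to center an even-length average is an 11-point window with half weight on the two end years (a "2x10" filter). A toy sketch, purely illustrative and not necessarily BEST's actual filter:

```python
# Toy annual series with a linear trend, "years" 1800..1839.
temps = [1800.0 + i for i in range(40)]

# An even (10-yr) window has no middle year; a standard fix spans
# 11 years with half weight on the two end years (weights sum to 1).
w = [0.5] + [1.0] * 9 + [0.5]
norm = sum(w)  # = 10.0
centered = [
    sum(wi * temps[k + i] for i, wi in enumerate(w)) / norm
    for k in range(len(temps) - len(w) + 1)
]
# Because the weights are symmetric about the middle year, a purely
# linear series passes through unchanged at the window center.
```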

      I doubt that there is any mechanism that would lead to a harmonic echo of this dip 20 years later, however, so the second dip is either just another chance event or measurement error.

      • Kenneth Fritsch
        Posted Oct 24, 2011 at 2:59 PM | Permalink

        I wrote a previous post (in moderation) that incorrectly noted that the non dendro LM reconstruction had a 95% CI range of 0.8 degrees C, when in fact the range is closer to 0.6 degrees. Nonetheless, that uncertainty precludes, in my view, any capability of determining where a Little Ice Age might have ended. I was also taken aback a bit by the seeming assuredness that the LM reconstruction was the arbiter in the matter of the ending of the LIA. I thought the LM non dendro reconstruction was used as an illustration to show that, depending on the proxies included, different reconstructions can show significantly different past temperature series. Has something changed from that view?

        • Posted Oct 26, 2011 at 12:09 PM | Permalink

          Ken —

          I can’t speak for Craig, but I certainly don’t regard LM as the “arbiter” of when the LIA ended. It’s merely our two bits (or rather Craig’s two bits, with my CIs) about what the bimillennial temperature looked like. However, I’d take it over the HS or Briffa MXD any day, which I don’t think are worth even a plugged nickel.

          I would hope that Craig or someone would eventually update it with more series. In itself, it’s largely an extension of Moberg (2005)’s low-frequency reconstruction, which was based on only 9 series, a few of which were dendro. Craig expanded Moberg’s set to 18 and threw out the dendros, with rather similar point estimates in terms of an LIA and MWP, but greater precision. (Moberg didn’t even compute CIs on his low-frequency reconstruction, but with 18 you can do a lot better than with 9.)

          The dating is imprecise on most of the LM series. This has 2 effects: First, it attenuates the signal, as Craig has noted, so that the actual signal (whenever it was) could well have been stronger than we show, and second, it makes the dates of changes in the reconstruction less accurate. So perhaps an improved/expanded update of LM will show the LIA extending into the 19th c.

          As it stands, however, the point estimates show it receding around 1750. This is not to say that the 19th c was not still cool, just less extremely so. All the series were interpolated and then subjected to a tridecadal smoother, so an event like 1816 would not be expected to show up very much.

          BEST uses precisely dated thermometer data, and so dating is not an issue. Its early 19th c sample is apparently quite small (and local to Europe?), but it does show the early 19th c as cooler than the later 19th, so perhaps this was still LIA. However, BEST doesn’t run earlier than 1800, so one can’t directly compare its 19th c to the 17th c.

          I don’t view LM as just an exercise to see what difference excluding dendro data makes, but rather as an attempt to reconstruct past temperatures without relying on the particularly questionable dendro data. Even apart from selection issues, TR width and MXD are questionable because they should not be monotonic functions of temperature. If they were, the bristlecone pines in the Amazon would be the size of redwoods! But in fact bristlecones prefer (or at least do relatively well in, compared to other species) harsh climates. Same goes for larches.

          See LM recon at http://econ.ohio-state.edu/jhm/AGW/Loehle/ .

        • Posted Oct 26, 2011 at 12:54 PM | Permalink

          Note that my LM08 CIs are actually tighter than the initial BEST CIs when their series begins in 1800, by a factor of about 2. To be sure, I took no account of possible correlations from closeness of series, since Craig’s 18 series are for the most part pretty far apart. I suspect the BEST series for 1800 are all in Europe and quite close to one another, relatively speaking.

        • Kenneth Fritsch
          Posted Oct 26, 2011 at 2:51 PM | Permalink

          Hu, you bring to the fore an important point, and that is how the spatial uncertainties are determined for reconstructions that cover only a few tens of locations when they claim or imply to be making an estimate of global temperatures.

        • Posted Oct 27, 2011 at 3:59 AM | Permalink

          Hu,
          “I suspect the BEST series for 1800 are all in Europe”
          Mostly, but not quite. Here is a list. They are basically the GHCN stations. There are a few in N America – Philadelphia starts 1758, and the next one, surprisingly, is Churchill on Hudson Bay, starting 1768.

        • Tom Gray
          Posted Oct 27, 2011 at 8:09 AM | Permalink

          The Hudson’s Bay Company was founded in 1670, so it may not be as surprising as it first seems. They have been trading in the north for 341 years, with forts and factories (trading posts) for all of that time. Perhaps there are more early temperature records in the records of “the Bay”.

        • Tom Gray
          Posted Oct 27, 2011 at 8:12 AM | Permalink

          York Factory is also on the list. This is another Hudson’s Bay Company trading post. So perhaps the Bay’s records would have more.

        • Hu McCulloch
          Posted Oct 28, 2011 at 9:46 AM | Permalink

          Thanks, Nick – There are a lot more 1800 stations in all than I would have suspected, but indeed most of them are in Europe.

          One interesting exception is Boston Logan Airport, which goes all the way back to 1743!

          Also — Philadelphia, Churchill on Hudson Bay, Pamplemousses in Mauritius, Albany, Madras (Chennai) and Natchez, Miss. So there’s a little Indian Ocean in addition to N Am, but not much outside Europe really.

        • Steven Mosher
          Posted Oct 29, 2011 at 11:23 AM | Permalink

          NCDC added the colonial series into the mix.

        • Hu McCulloch
          Posted Oct 29, 2011 at 1:49 PM | Permalink

          Nick:

          They are basically the GHCN stations. There are a few in N America – Philadelphia starts 1758,

          Indeed, Google Earth shows BEST’s Philadelphia coordinates (39.87N, 75.23W) as being within rounding error of the S end of Runway 35 at PHL.

          The heavy reliance of GHCN/GISS/CRU/BEST on airports makes me wonder how much of the increase they show since 1950 is just the growth of aviation since then. Not to mention since 1758! ;-)

        • Kenneth Fritsch
          Posted Oct 27, 2011 at 11:04 AM | Permalink

          “I don’t view LM as just an exercise to see what difference excluding dendro data makes, but rather as an attempt to reconstruct past temperatures without relying on the particularly questionable dendro data. Even apart from selection issues, TR width and MXD are questionable because they should not be monotonic functions of temperature. If they were, the bristlecone pines in the Amazon would be the size of redwoods! But in fact bristlecones prefer (or at least do relatively well in, compared to other species) harsh climates. Same goes for larches.”

          That there are more obvious reasons for doubting dendro proxies as faithful responders to temperature changes does not incline me to judge that non dendro proxies are therefore better.

          I will note that none other than Mann 2008 made an effort to obtain better visualizations of the proxy responses during the most recent part of the modern warming period. The divergence of dendro proxies/reconstructions was pointed to, and then, almost as an afterthought, the divergence of the non dendro proxies/reconstructions was also noted. It appears that what we see as divergence is not unique to dendro proxies and could well be reasoned to be an expectation of a poor model constructed with in-sample data being tested out-of-sample. In an earlier Mann paper, dendro proxies were truncated and rejiggered in the instrumental period and before, because the response was much larger than expected.

          I have found plotting and modeling individual temperature proxies to be a very constructive and informative strategy. I see lots of peaks and valleys in these proxies that would correspond with those of models with LTP and without a deterministic trend.

          Based on what I surmise of the proxies, I would never assign a model involving LTP to temperatures, because I judge that the proxies are not much affected by temperature. I also found it interesting that proxies covering a shorter time period have a better “chance” of correlating with the instrumental record, while those covering a longer period have much less chance. I associate this with the fact that those constructing the reconstructions had fewer proxies to choose from when searching for ones with longer time periods.

          Having said all that I plan to see how well an ARFIMA model fits the BEST data.
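          For readers unfamiliar with the model, the long-memory part of an ARFIMA fit reduces to fractional differencing: the operator (1-B)^d expands into binomial weights that decay slowly for 0 < d < 0.5. A minimal sketch of those weights (the value d = 0.3 is arbitrary, for illustration only):

```python
# Fractional differencing, the "FI" in ARFIMA: (1-B)^d has binomial
# weights w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k, which decay
# slowly for 0 < d < 0.5 (long-term persistence).
d = 0.3
w = [1.0]
for k in range(1, 50):
    w.append(w[-1] * (k - 1 - d) / k)

def frac_diff(x, weights):
    """Apply the truncated (1-B)^d filter to a series x."""
    return [
        sum(weights[j] * x[t - j] for j in range(min(t + 1, len(weights))))
        for t in range(len(x))
    ]
```

          With d estimated from the data, an ARFIMA fit then models the fractionally differenced series as ordinary ARMA.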

      • Jeff Patterson
        Posted Oct 25, 2011 at 9:40 AM | Permalink

        The decline in temp appears to start well before Tambora erupted in 1815. In fact, the eruption is coincident with a steep rise in temp, which seems counter-intuitive. As to your other point, an under-damped 2nd-order system does produce what you call echoes when driven with an impulsive forcing, and they look strikingly similar to the plot, both in terms of the decreasing swing amplitude and the increasing period (time between lows). Probably coincidence, but an intriguing one to a control systems guy like me.
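        The textbook behavior being described can be sketched from the closed-form step response of an underdamped second-order system, whose overshoots decay geometrically. This is only an illustration of the dynamics, not a claim about the actual climate system (parameter values are arbitrary):

```python
import math

z, wn = 0.2, 1.0                       # damping ratio, natural frequency
wd = wn * math.sqrt(1 - z * z)         # damped oscillation frequency
phi = math.acos(z)

def step_response(t):
    """Closed-form unit step response of an underdamped 2nd-order system."""
    return 1 - math.exp(-z * wn * t) / math.sqrt(1 - z * z) * math.sin(wd * t + phi)

# Successive overshoots occur at odd multiples of pi/wd and shrink
# by a constant factor exp(-2*pi*z / sqrt(1 - z^2)) each cycle --
# the exponentially decaying "echoes" described above.
peak1 = step_response(math.pi / wd)
peak2 = step_response(3 * math.pi / wd)
```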

  53. Jeremy
    Posted Oct 24, 2011 at 10:30 AM | Permalink

    …The “dark” parts of the globe did have temperatures in the 19th century and ignoring them may impart a bias…

    Ah, it seems here that you’re arguing that imagination is better than acknowledged ignorance. If that’s oversimplifying, I’ll listen to a technical explanation why I’m wrong for saying that.

  54. DocMartyn
    Posted Oct 24, 2011 at 1:38 PM | Permalink

    May I draw attention to Figure 3 of the UHI paper.
    Here the distribution of the temperature change is plotted as a function of record length.
    The plots for 30-year record lengths are distinctly non-Gaussian.

  55. Edward McCann
    Posted Oct 24, 2011 at 6:47 PM | Permalink

    “Homogeneity
    They introduced a new method to achieve homogeneity. I have not examined this method or this paper and have no comment on it.
    Kriging
    A commenter at Judy Curry’s rather sarcastically observed that, with my experience in mineral exploration, I would undoubtedly endorse their use of kriging, a technique used in mineral exploration to interpolate ore grades between drill holes.
    His surmise is correct.
    Indeed, the analogies between interpolating ore grades between drill holes and spatial interpolation of temperatures/temperature trends have been quite evident to me since I first started looking at climate data.
    Kriging is a technique that exists in conventional statistics. While I haven’t had an opportunity to examine the details of the BEST implementation, in principle, it seems far more logical to interpolate through kriging rather than through principal components or RegEM (TTLS).
    Dark Areas of the Map
    In the 19th century, availability of station data is much reduced. CRU methodology, for example, does not take station data outside the gridcell and thus leaves large portions of the globe dark throughout the 19th century.
    BEST takes a different approach. They use available data to estimate temperatures in dark grid cells while substantially increasing the error bars of the estimates. These estimates have been roundly condemned by some commenters on threads at Judy Curry’s and Anthony Watts’.
    After thinking about it a little, I think that BEST’s approach on this is more logical and that this is an important and worthwhile contribution to the field. The “dark” parts of the globe did have temperatures in the 19th century and ignoring them may impart a bias. While I haven’t examined the details of their kriging, my first instinct is in favor of the approach.”

    All the above is just great at masking UHI. Even kriging using rural areas that are growing will mask UHI. All these methodologies used in the datasets are just smoke screens to hide the UHI. Do not trust any liberal scientist where the liberal cause (global warming) justifies the means, even if you think one is your friend, as I said before in the early hockey stick days.
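    For readers unfamiliar with the technique quoted above, the core of kriging is solving a small linear system so that interpolation weights reflect spatial covariance rather than raw distance. A minimal simple-kriging sketch with two hypothetical stations and an assumed exponential covariance (an illustration of the idea only, not BEST's implementation):

```python
import math

def cov(h, sill=1.0, rng=500.0):
    """Assumed exponential covariance as a function of distance h (km)."""
    return sill * math.exp(-abs(h) / rng)

x1, x2, x0 = 0.0, 300.0, 100.0          # two stations and one target point
a, b = cov(0.0), cov(x2 - x1)           # 2x2 covariance matrix [[a, b], [b, a]]
c1, c2 = cov(x0 - x1), cov(x0 - x2)     # covariances from stations to target

# Simple kriging (known zero mean): weights = C^-1 c, inverted by
# hand for the 2x2 case.
det = a * a - b * b
w1 = (a * c1 - b * c2) / det
w2 = (a * c2 - b * c1) / det

z1, z2 = 1.2, 0.4                       # hypothetical station anomalies (deg C)
estimate = w1 * z1 + w2 * z2
krig_var = cov(0.0) - (w1 * c1 + w2 * c2)   # estimation variance at x0
```

    The nearer station gets the larger weight, and the kriging variance grows as the target moves away from the data, which is analogous to how error bars widen over sparsely sampled regions.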

    • Bernie
      Posted Oct 24, 2011 at 7:27 PM | Permalink

      Edward:
      I agree with you on the UHI/Micro climate issues. Whatever interpolation takes place has to include a proper array of meta-data including local human factors in order to deal with “UHI” effect. It will be interesting to hear what Ross McKitrick’s take is on BEST.

      • Posted Oct 25, 2011 at 9:35 AM | Permalink

        Yes, I expect some observers will find my analysis of BEST to be of interest, when I am in a position to release it. I agreed to serve as a referee back at the beginning of September and submitted my report almost a month ago. In so doing I accepted the journal’s request not to discuss the matter publicly while the paper is under review, and I intend to respect that commitment. It did not occur to me at the time that the BEST authors would fail so spectacularly to respect their corresponding obligations, or that the media, having shown zero interest in the many peer-reviewed and published papers on the topic, would so willingly join the tub-thumping on behalf of an unpublished, unreviewed PDF on someone’s website. I suppose all this should have occurred to me at the time; you’d think I’d have learned by now.

        • Sean Inglis
          Posted Oct 26, 2011 at 7:40 AM | Permalink

          On the bright side, it’s material for Donna L’s follow-up book.

        • cce
          Posted Oct 26, 2011 at 10:43 PM | Permalink

          You mean like tub-thumping the “T3 tax” 3 years before the paper was published?

        • Posted Oct 27, 2011 at 11:02 AM | Permalink

          I was thumping the tub for a simple idea, not a set of unpublished empirical results on a large new data set. Had I gone around in 2008 making claims about the results of calculations that didn’t appear until the publication in 2010 then your point would be valid. The intuitive argument behind the T3 tax itself was transparent and easily conveyed in op-ed formats. The derivation of the specific form (equation 8 in the paper) required going through all the math, and the empirical demonstration of its correlation to the unobservable optimum required 10,000 simulations, and I didn’t make any claims about those things until after the paper was published.

          By way of contrast, Tim Vogelsang and I have some empirical results on the tropical troposphere that I think are quite important. We put out a discussion paper and we’ve presented the results at universities. That’s legit, and if that’s all Muller were up to then it wouldn’t be grounds for concern. But if I were to call up news reporters and announce the findings through the press, then write an op-ed for the WSJ and launch an international media campaign while the paper is still under review, I would rightly be condemned for circumventing and undermining the peer review process.

        • JamesG
          Posted Oct 29, 2011 at 8:22 AM | Permalink

          I guess he was jealous that his name hadn’t been in the papers much up to that point. Now he’s semi-famous. Tub-thumping gets you on the band-wagon. Lucrative speaking engagements to follow…maybe a prize or two down the line.

        • cce
          Posted Oct 30, 2011 at 10:43 AM | Permalink

          Why do you consider a “simple idea” more valid than “unpublished empirical results on a large new data set” which happen to be consistent with “published empirical results on large old datasets.” Why is the first worthy of op-eds but not the second? And your “simple idea” wasn’t “transparent” as anyone who knows anything about satellites and what they actually measure pointed out.

          You find the land surface data extremely suspect despite broad agreement between various analyses, but you want to set tax policy on the average of UAH and RSS? A little intellectual consistency would be great.

        • bernie
          Posted Oct 27, 2011 at 8:06 AM | Permalink

          Ross:
          I assume “tub thumping” is the Aristotelian equivalent of “pot banging”?

  56. Posted Oct 26, 2011 at 11:05 AM | Permalink

    The question arises: what is the best metric for measuring and discussing global warming or cooling? For some years I have suggested, in various web comments and on my blog, that the Hadley SST data is the best metric, for the following reasons:
    1. Oceans cover about 70% of the surface.
    2. Because of the thermal inertia of water, short-term noise is smoothed out.
    3. All the questions re UHI, changes in land use, local topographic effects, etc. are simply sidestepped.
    4. Perhaps most importantly – what we really need to measure is the enthalpy of the system – the land measurements do not capture this aspect because the relative humidity at the time of temperature measurement is ignored. In water the temperature changes are a good measure of relative enthalpy changes.
    5. It is very clear that the most direct means to short-term and decadal-length predictions is through the study of the interactions of the ocean current and temperature regimes (PDO, ENSO, SOI, AMO, AO, etc.), and the SST is a major measure of these systems.

    Certainly the SST data has its own problems, but these are much less than those of the land data.
    It would be nice if everyone could get on the same page by using SSTs as the generally accepted global climate metric.
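    Point 4 can be made concrete with a back-of-envelope moist enthalpy comparison. The constants are standard approximations and the two air masses are invented for illustration:

```python
# Moist enthalpy per kg of air, h = cp*T + Lv*q, ignoring smaller
# terms; constants are approximate textbook values.
cp = 1005.0       # J/(kg K), specific heat of dry air
Lv = 2.5e6        # J/kg, latent heat of vaporization

def moist_enthalpy(T_kelvin, q):
    """q is specific humidity in kg of water vapour per kg of air."""
    return cp * T_kelvin + Lv * q

dry_hot = moist_enthalpy(303.0, 0.005)     # ~30 C, dry air mass
humid_cool = moist_enthalpy(301.0, 0.018)  # ~28 C, humid air mass
# The cooler but more humid air mass carries the greater heat content,
# which is why temperature alone is an incomplete measure over land.
```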

  57. EdeF
    Posted Oct 29, 2011 at 12:49 PM | Permalink

    Have just returned from an overseas trip to England and Scotland, and what I am struck by upon visiting those countries is the amount of forest management over many centuries. Large tracts of land have been specifically managed to create parkland, or to produce pulp for paper mills or collieries, etc. I would be cautious about relying on tree ring data from trees on large country estates, due to selective cutting and thinning of surrounding trees, and I am not sure how many centuries you would have to go back until this is not a problem. I even found at Culzean Castle, on the west coast of Scotland, in the castle history room, some recent cross-sections of pines that were cut down due to a large fire circa 1998. They showed the typical tree growth pattern: wide rings to start with, then small rings with age, up to a point where the tree was damaged, then very wide rings after the damage period for a while, much like the tree under discussion during the start of Climategate. I would handle very carefully any urban data, and get way out in the hinterlands or use ocean data where possible.

    • EdeF
      Posted Oct 29, 2011 at 1:03 PM | Permalink

      Sorry, that was a large storm circa 1998 on the Argyll coastline that took out many trees, including about 100 mature ones.

  58. Don McIlvin
    Posted Oct 22, 2011 at 5:03 PM | Permalink

    That was minus 1.5Wm

19 Trackbacks

  1. [...] as Steve McIntyre demonstrates, the BEST work with my station siting quality isn’t replicable. The devil is in the details. He says boldly: I have looked at some details [...]

  2. [...] as Steve McIntyre demonstrates, the BEST work with my station siting quality isn’t replicable. The devil is in the details. He says boldly: I have looked at some details of [...]

  3. [...] exaggerated the effect. He says no, but his efforts in that regard have already been taken apart by McIntyre, Watts, Eschenbach and Keenan (the latter in email correspondence with Astill). In any case, it [...]

  4. [...] Steve McIntyre:  First Thoughts on BEST [...]

  5. [...] “First thoughts on BEST” by Steve McIntyre, October 22, 2011. [...]

  6. [...] Climate Audit välkomnas BEST av Steve McIntyre. Han pekar bl.a. på en intressant sak i deras rekonstruktion. [...]

  7. [...] in Reihen der Republikaner und weiterhin erklärter und überzeugter Gegner wie Steve McIntyre (Climate Audit: First Thoughts on BEST) fernab der leise vor sich tropfenden Polkappen über solcherlei frevelhaftes Tun nicht für [...]

  8. By First Thoughts on BEST « Bee Auditor on Oct 24, 2011 at 2:31 AM

    [...] Source: http://climateaudit.org/2011/10/22/first-thoughts-on-best/ [...]

  9. By Climate Audit « Newsbeat1 on Oct 24, 2011 at 5:48 PM

    [...] Steve McIntyre [...]

  10. By BEST audit…data surprises | pindanpost on Oct 24, 2011 at 9:19 PM

    [...] First Thoughts on BEST Oct 22, 2011 – 9:34 AM [...]

  11. [...] First Thoughts on BEST [...]

  12. [...] expect debate over a number of points of the analysis from different quarters, including Steve McIntyre at Climate Audit. After all, the magnitude of actual/factual warming is a theory-driver about the sensitivity of [...]

  13. [...] McIntyre at climateaudit.org has pointed out the BEST results show that temperatures from 1810 to 1820 were about 2 degrees [...]

  14. [...] So this is a city cooling when we argue about Urban Heat Islands, cities warming faster than the surrounding rural areas, and whether or not there should be compensation for UHI in global temperature calculations.  To quote Steve McIntyre: [...]

  15. [...] creates numerous problems with the methodology of the BEST study.  Steve McIntyre, proprietor of Climate Audit noticed “BEST’s estimate of the size of the temperature increase since the start of the 19th [...]

  16. By Global Warming Data Twisted By Advocates on Aug 30, 2012 at 6:56 PM

    [...] creates numerous problems with the methodology of the BEST study.  Steve McIntyre, proprietor of Climate Audit noticed “BEST’s estimate of the size of the temperature increase since the start of the 19th [...]

  17. By BEST(?) PR | Climate Etc. on Sep 28, 2012 at 12:24 PM

    [...] The reason I think a press release was appropriate for the release of the BEST data is aptly stated by Steve McIntyre: [...]

  18. [...] [...]

  19. […] McIntyre, proprietor of Climate Audit noticed “BEST’s estimate of the size of the temperature increase since the start of the 19th […]
