Closing Thoughts on BEST

In the 1980s, John Christy and Roy Spencer revolutionized temperature measurement through satellite observation of oxygen radiance in the atmosphere. This accomplishment sidestepped the intractable problems of creating (what I’ll call) a “temperature reconstruction” from surface data known to be systemically contaminated (in unknown amounts) by urbanization, land use changes, station location changes, measurement changes, station discontinuities, etc.

Also in the 1980s, Phil Jones and Jim Hansen created land temperature indices from surface data, indices that attracted widespread interest in the IPCC period. The source data for their indices came predominantly from the GHCN station data archive maintained by NOAA (who added their own index to the mix.) The BEST temperature index is in this tradition, though their methodology varies somewhat from simpler CRU methods, as they calculate their index on “sliced” segments of longer station records (see CA here) and weight series according to “reliability” – the properties of which are poorly understood (see Jeff Id here.)

The graphic below compares trends in the satellite period to March 2010 – the last usable date in the BEST series. (SH values in Apr 2010 come only from Antarctica and introduce a spurious low in the BEST monthly data.) Take a look – comments follow.


Figure 1. Barplot showing trends in the satellite period. (deg C/decade 1979-Mar 2010.) Left – “downscaled” to surface; right – not downscaled to surface.

The BEST and CRU series run hotter than TLT satellite data (GLB Land series from RSS and UAH considered here), with the difference exacerbated when the observed satellite trends are “downscaled” to surface using the amplification factor of approximately 1.4 (the factor that underpins the “great red spot” observed in model diagrams). An amplification factor is common ground to both Lindzen and (say) Gavin Schmidt, who agree that tropospheric trends are necessarily higher than surface trends simply through properties of the moist adiabat. In the left barplot, I’ve divided the satellite trends by 1.4 to obtain “downscaled” surface trends. In a comment below, Gavin Schmidt observes that an amplification factor is not a property of lapse rates over land. In the right barplot, I’ve accordingly shown the same information without “downscaling” (adding this to the barplot in yesterday’s post.) (Note Nov 2, 2011 – I’ve edited the commentary to incorporate this amendment and have placed the prior commentary in this section in the comments below.)

The UAH trend over land is 0.173 deg C/decade (0.124 deg C/decade downscaled) and the RSS trend is 0.198 deg C/decade (0.142 deg C/decade downscaled). I will examine this interesting property of the Great Red Spot on another occasion, but, for the purposes of this post, defer to Gavin Schmidt’s information on its properties.
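For concreteness, the “downscaling” here is nothing more than division by the amplification factor. A minimal sketch of the arithmetic, using the trend values quoted above (the 1.4 is the model-derived factor discussed earlier):

```python
# "Downscaling" observed TLT satellite trends to surface, as in Figure 1:
# divide by the model-derived amplification factor of ~1.4.
AMPLIFICATION = 1.4  # tropospheric trend / surface trend

# Observed TLT trends over land (deg C/decade, 1979 - Mar 2010), as quoted above
tlt_trends = {"UAH": 0.173, "RSS": 0.198}

downscaled = {name: t / AMPLIFICATION for name, t in tlt_trends.items()}

for name in tlt_trends:
    print(f"{name}: {tlt_trends[name]:.3f} -> {downscaled[name]:.3f} deg C/decade")
```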

The simple barplot in Figure 1 clearly shows which controversies are real and which are straw men.

Christy and Spencer are two of the most prominent skeptics. Yet they are also authors of widely accepted satellite data showing warming in the past 30 years. To my knowledge, even the most adamant skydragon defers to the satellite data. BEST’s attempt to claim the territory up to and including satellite trends as unoccupied or contested Terra Nova is very misleading, since this part of the territory was already occupied by skeptics and warmists alike.

The territory in dispute (post-1979) is the farther reaches of the trend data – the difference between the satellite record and CRU, and then between CRU and the outer limits of BEST.

BEST versus CRU
On this basis, BEST (0.282 deg C/decade) runs about 0.06 deg C/decade hotter than CRU (0.22 deg C/decade). My surmise, based on my post of Oct 31, 2011, is that this results from the combined effects of slicing and “reliability” reweighting, the precise proportion being hard to assign at this point and not relevant for present purposes.

Commenter Robert observed that CRU now runs cooler than NOAA or GISS. In the corresponding 1979-2010 period, NOAA, also using GHCN data, has a virtually identical trend to BEST (0.283 deg C/decade). It turns out that NOAA changed its methodology earlier this year from one that was somewhat similar to CRU’s to one that uses Mennian sliced data. (I have thus far been unable to locate online information on previous NOAA versions.)

This indicates that the difference between BEST (and NOAA) versus CRU is probably more due to slicing than to reweighting.


CRU vs Satellites

CRU runs about 0.03-0.05 deg C/decade warmer than TLT satellite trends over land in the 1979-2010 period, and about 0.08-0.10 deg C/decade warmer than downscaled satellite data.

Could this amount of increase be accounted for by urbanization and/or surface station quality problems?

In my opinion, this is entirely within the range of possibility. (This is not the same statement as saying that the difference has been proven to be due to these factors. In my opinion, no one has done a satisfactory reconciliation.) From time to time, I’ve made comparisons between “more urban” and “more rural” sites in relatively controlled situations (e.g. Hawaii, around Tucson following a predecessor local survey) and, when I do the comparisons, I find noticeable differences of this order of magnitude. I’ve also done comparisons of good and bad stations from Anthony’s data set and again observe differences that would contribute to this order of magnitude. But these comparisons fall short of proof either way.

In the past, I’ve been wary of “unsupervised” comparisons of supposedly “urban” and supposedly “rural” subpopulations in papers by Jones, Peterson, Parker and others purporting to prove that UHI doesn’t “matter”. Such papers set up two populations – one “urban” and one “rural” – purport to show that the trends for each population are similar, and claim that this “shows” that UHI is a non-factor in trends. In my examination of prior papers, each one has tended to founder on similar points. All too often, the two populations are very poorly stratified – with the “rural” population containing genuinely urban sites, sometimes even rather large cities.

The BEST urbanization paper is entirely in the tradition of prior studies by Jones, Peterson, Karl etc. They purport to identify a “very rural” population by MODIS information and show that they “get” the same answer. Unfortunately, BEST have not lived up to their commitment to transparency in this paper. Code is not available. Worse, even the classification of sites between very rural and very urban is not archived, with the pdf of the paper disconcertingly pointing to a warning that the link is unavailable (making it appear that no one even read the final preprint before placing it online). Mosher has noted inaccuracies in their location data and observes that there are perils for inexperienced users of MODIS data; he reserves his opinion on whether the lead author of the urbanization paper, a grad student, managed to avoid these pitfalls until he has had an opportunity to examine the still unavailable nuts and bolts of the paper.

Mosher, who’s studied MODIS classification of station data as carefully as anyone, observes that there are no truly “rural” (in a USHCN target sense) locations in South America – all stations come from environments that are settled to a greater or lesser degree. Under Oke’s original UHI concept, the cumulative UHI effect was, as a rule of thumb, proportional to log(population). If “urbanization” is occurring in towns and villages as well as in large cities – which it is – then the contribution of UHI increase to temperature increase will depend on the percentage change in population (rather than absolute population). If proportional increases are the same, then the rate of temperature increase will be the same in towns and villages as in cities.
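The arithmetic behind that last point is worth spelling out. If cumulative UHI is proportional to log(population), per the Oke rule of thumb, then the UHI increase over a period depends only on the ratio of final to initial population. A toy sketch (the coefficient c is hypothetical, chosen purely for illustration, not a value from Oke):

```python
import math

# Oke rule of thumb: cumulative UHI ~ c * log10(population).
# c is a made-up illustrative coefficient, not an empirical value.
c = 2.0  # hypothetical deg C per unit of log10(population)

def uhi(population):
    return c * math.log10(population)

# A village and a large city whose populations both double:
village_increase = uhi(2_000) - uhi(1_000)
city_increase = uhi(2_000_000) - uhi(1_000_000)

# log10(2P) - log10(P) = log10(2): the UHI increase depends only on the
# proportional growth, so the village and the city warm by the same amount.
print(village_increase, city_increase)
```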

If one takes the view that satellite trends provide our most accurate present knowledge of surface trends, then one has to conclude that the BEST methodological innovations (praised by realclimate) actually provide a worse estimate of surface trends than even CRU.

In my opinion, it is highly legitimate (at least as a null hypothesis) to place greatest weight on satellite data and presume that the higher trends in CRU and BEST arise from some combination of urbanization, changed land use, station quality, Mennian methodology, etc.

It seems to me that there is a high onus on anyone arguing in favor of a temperature reconstruction from surface station data (be it CRU or BEST) to demonstrate why this data with all its known problems should be preferred to the satellite data. This is not done in the BEST articles.

“Temperature Reconstructions”

In discussions of proxy reconstructions, people sometimes ask: why does anyone care about proxy reconstructions in the modern period given the existence of the temperature record? The answer is that the modern period is used to calibrate the proxies. If the proxies don’t perform well in the modern period (e.g. the tree ring decline in the very large Briffa network), then the confidence, if any, that can be attached to reconstructions in pre-instrumental periods is reduced.

It seems to me that a very similar point can be made in respect to “temperature reconstructions” from somewhat flawed station records. Since 1979, we have satellite records of lower tropospheric temperatures over land that do not suffer from all the problems of surface stations. Yes, the satellite records have issues, but it seems to me that they are an order of magnitude more tractable than the surface station problems.

Continuing the analogy of proxy reconstructions, temperature reconstructions from surface station data in the satellite period (where we have benchmark data) should arguably be calibrated against satellite data. The calibration and reconstruction problem is not as difficult as trying to reconstruct past temperatures with tree rings, but neither is it trivial. And perhaps the problems encountered in one can shed a little light on the problems of the other.
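Here is a minimal sketch of what such a calibration might look like: regress a surface-station series against a satellite benchmark over the overlap period and rescale it, exactly as one would calibrate a proxy. The series below are random placeholders, not real data; a fitted slope well below 1 would indicate that the surface series runs hot relative to the benchmark.

```python
import random

random.seed(0)

# Placeholder monthly anomaly series for the overlap (calibration) period.
# The "surface" series is constructed to run ~30% hotter than the benchmark.
satellite = [0.02 * t + random.gauss(0, 0.10) for t in range(120)]
surface = [1.3 * s + 0.05 + random.gauss(0, 0.05) for s in satellite]

# Ordinary least squares of satellite on surface: satellite ~ a*surface + b
n = len(surface)
mx = sum(surface) / n
my = sum(satellite) / n
a = sum((x - mx) * (y - my) for x, y in zip(surface, satellite)) / \
    sum((x - mx) ** 2 for x in surface)
b = my - a * mx

# Calibrated surface series, rescaled to the satellite benchmark
calibrated = [a * x + b for x in surface]
print(f"calibration slope: {a:.2f}")  # well below 1: surface series runs hot
```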

Viewed as a reconstruction problem, the divergence between the satellite data and the BEST temperature reconstruction from surface data certainly suggests some sort of calibration problem in the BEST methodology. (Or alternatively, BEST have to show why the satellite data is wrong.) Given the relatively poor scaling of the BEST series in the calibration period relative to satellite data, one would have to take care against a similar effect in the pre-satellite period. However, the size of the effect appears likely to have been lower: both temperature trends and world urbanization were lower in the pre-satellite period.

I have one great regret about BEST’s overall strategy. My own instinct as to the actual best way to improve the quality of temperature reconstructions from station data is to focus on quality rather than quantity: to follow the practices of geophysicists using data of uneven quality – start with the best data (according to objective standards) and work outwards, calibrating the next best data against the best data.

They adopted the opposite strategy (a strategy equivalent to Mann’s proxy reconstructions): throw everything into the black box with no regard for quality and hope that the mess can be salvaged with software. Unfortunately, it seems to me that slicing the data actually makes the product (like NOAA’s) worse than CRU’s (using satellite data as a guide). It seems entirely reasonable to me that someone would attribute the difference between the higher BEST trend and satellite trends not to the accuracy of BEST with flawed data, but to known problems with surface stations and artifacts of Mennian methodology.

I don’t plan to spend much more time on BEST (due to other responsibilities).
[Nov 2 – there’s a good interview with Rich Muller here where Muller comes across as the straightforward person that I know. I might add that he did a really excellent and sane lecture (link) on policy implications a while ago that crosscuts most of the standard divides. ]

A Closing Editorial Comment
Finally, an editorial comment on attempts by commentators to frame BEST as a rebuttal of Climategate.

Climategate is about the Hockey Stick, not instrumental temperatures. CRUTEM is only mentioned a couple of times in passing in the Climategate emails. “Hide the decline” referred to a deception by the Team regarding a proxy reconstruction, not temperature.

In an early email, Briffa observed: “I believe that the recent warmth was probably matched about 1000 years ago.” Climategate is about Team efforts to suppress this thought, about Team efforts to promote undeserved certainty – a point clearly made in CRUTape Letters by Mosher and Fuller.

The new temperature calculations from Berkeley, whatever their merit or lack of merit, shed no light on the proxy reconstructions and do not rebut the misconduct evidenced in the Climategate emails.

[Nov 2. However, in fairness to the stated objectives of the BEST project, I should add the following.

Although I was frustrated by the co-mingling of CRUTEM and Climategate in public commentary – a misunderstanding disseminated by both Nature and Sarah Palin – and had never contested that a fairly simple average of GHCN data would yield something like CRUTEM, the two have nonetheless become co-mingled in much uninformed commentary.

In such circumstances, a verification by an independent third party (and BEST qualifies here) serves a very useful purpose, rather like business audits which, 99% of the time, confirm management accounts, but improve public understanding and confidence. To the extent that the co-mingling of Climategate and CRUTEM (regardless of whether it was done by Nature or Sarah Palin) has contributed to public misunderstanding of the temperature records, an independent look at these records by independent parties is healthy – a point that I made in my first post and reiterate in this post. While CA readers generally understand and concede the warming in the Christy and Spencer satellite records, this is obviously not the case in elements of the wider society and there is a useful function in ensuring that people can reach common understanding on as much as they can. ]

252 Comments

  1. Steve Fitzpatrick
    Posted Nov 1, 2011 at 12:48 PM | Permalink

    Steve,
    Does ‘down-scaling’ the satellite data greatly reduce the trend? I can’t see any big difference between RSS and Hadley land-only trends.

    http://www.woodfortrees.org/plot/best/from:1979/to:2009/offset:-0.2/trend/plot/crutem3vgl/from:1979/to:2009/trend/plot/rss-land/from:1979/to:2009/offset:0.26/trend/plot/uah-land/from:1979/to:2009/offset:0.4/trend

    Steve: from ~0.17 to ~0.124 deg C/decade for UAH. I’ve edited to add this info.

    • Steve Fitzpatrick
      Posted Nov 1, 2011 at 1:17 PM | Permalink

      Steve McIntyre,

      Thanks for that clarification. But I thought that level of “downscaling” (1.4, based on models) was a lot less than certain (e.g. the long-running controversy over the lack of a tropical tropospheric hot spot), more applicable over ocean (high surface-level humidity) than over land, and more applicable to the mid troposphere than the lower troposphere. I could be wrong about this, of course.

  2. phi
    Posted Nov 1, 2011 at 12:48 PM | Permalink

    Beautiful!
    And now, just the last little bit:

  3. Posted Nov 1, 2011 at 12:58 PM | Permalink

    Somebody ought now to list all the cretins who prematurely celebrated BEST, for future reference …

  4. Posted Nov 1, 2011 at 1:05 PM | Permalink

    Phi – you ought to provide more details. Who’s Schweingruber and where is the 1C/century CRU bias coming from?

  5. Posted Nov 1, 2011 at 1:29 PM | Permalink

    Beautifully put. I don’t think I’ve ever seen my own opinions on the temperature records so clearly described before.

    1) The only real purpose of thermometer records (whether land or sea) is to attempt to reconstruct temperatures in the pre-satellite era, and until we can resolve this divergence problem it is hard to have substantial confidence in current reconstructions.

    2) Current methods of looking for UHI are flawed, as the trend is affected by urbanisation not urban-ness itself. The only comparison that makes sense is between pristine rural sites and others. (The divergence between land and sea is a strong hint, but not more than that.)

    3) The situation pre-thermometers is obviously much worse.

    • Steven Mosher
      Posted Nov 1, 2011 at 1:52 PM | Permalink

      Well then the onus is on you to do the following

      1. define what you mean by pristine.
      2. provide some case studies that indicate the kinds of effects one can see in moving from “pristine” to non-pristine.

      Much of the UHI argument gains traction from images of Tokyo, transects across Reno; huge UHI is a poster child of sorts (the skeptic’s version of an HS icon). However, I have found that as I offer up definitions of rural, many people move the goal posts. Now the goal post is being moved to pristine – not even an outhouse in view. If there were as much research on the move from “pristine” to 3 sheds and a cow and how much that influenced the record, then I would give more weight to the argument. As it stands, all the detailed work on UHI (for city planning) is focused on larger urban areas and the suburbs. Simply, there is little field research to give us an idea of the effects one sees in going from pristine to 3 sheds and a cow. I would not rule out an effect; there is simply not a wealth of research indicating the size of the effect.

      Here is an idea. Take a look at the 200 or so CRN stations. If you classify those as Pristine, then it’s a small matter to find similar sites. Again, this work is long and hard and sometimes the programs run for 2 days. So, I’m not going to go in search of pristine unless someone defines what they mean.

      One could of course say that no site is pristine and we can say nothing and are stuck with what Steve shows.

      There is a difference in trend that can be associated with one of three options.

      1. Models have issues with amplification
      2. the satellite record is wrong
      3. UHI is bounded by the discrepancy between the trends.

      • don monfort
        Posted Nov 1, 2011 at 2:20 PM | Permalink

        The pristine sites are the sites that do not have thermometers. Isn’t that a problem, Steve? Isn’t the onus on climate science to settle the argument over UHI effect, and haven’t they had sufficient time and resources to do it? Or at least to have made a better effort than their BEST effort? I believe that with a few bucks and a little help from a couple of grad students, you could do a significantly better job of it. No offense to you, but that is a sad commentary on the climate science.

        • Posted Nov 1, 2011 at 3:20 PM | Permalink

          That’s a rather assumptive definition. What you are effectively saying is that the very act of placing a thermometer alters the temperature in a measurable way. Kind of Heisenberg-like, except at a macro level. In fact, it looks to be almost untestable. There isn’t a simple onus in this matter. The UHI effect is real and has been measured many times under many conditions. But to my knowledge there is no evidence that the mere placing of a thermometer changes the temperature. That is what you are claiming. Now, we have physics that explains why pouring concrete changes the urban energy balance. We have physics that explains why building height is the principal driver, for example, of UHI. But I know of no empirical study which supports what you claim. So, since you imply that the mere placing of a thermometer changes the temperature, how do you support this claim? Code and data please. Conjecture in ink doesn’t cut it.

        • don monfort
          Posted Nov 1, 2011 at 4:17 PM | Permalink

          You took that very literally Steve. That was shorthand to point to the fact that the takers of temperature have not placed a lot of thermometers in areas that are not populated. And areas that are not populated would serve as a plausible surrogate for pristine, wouldn’t they?

          As I recall 27% of GHCN stations are/were in cities of 50,000 or more. Maybe you can say what percentage of thermometers are placed in, or in close proximity to towns/cities of 5,000 or more? How about 2,000? And so on. How close to dividing populated areas from almost unpopulated areas can you get, Steve? I know that you can do a better job than BEST did, and they claim that they have this crap nailed with their negative UHI finding.

          I wonder how hard the climate science community is trying to figure this out. Isn’t it important? You have allowed that UHI effect could be as much as .3C of warming? Is that chickenfeed with regards to attribution? Doesn’t this stuff add up? The alarmists look the other way on these little details, or employ some kind of misdirection to change the subject. Should we not worry about this?

        • Steven Mosher
          Posted Nov 2, 2011 at 2:48 AM | Permalink

          Let’s assume that you give me station location data that is good to 1/100th of a degree or so. Then I can tell you the population at that location. And you will find zero-population stations; you’ll find everything from 0 people to tens of thousands. Also, don’t trust the GHCN population data. It’s old.

          Meh.. on your other questions

      • Bruce
        Posted Nov 1, 2011 at 2:38 PM | Permalink

        I think of UHI affecting every city. Not just Tokyo. And by huge amounts.

        “Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16 °F) warmer than surrounding rural areas over a three year period, the new research shows. The complex phenomenon that drives up temperatures is called the urban heat island effect.”

        http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

        But I think this is the key:

        “The compact city of Providence, R.I., for example, has surface temperatures that are about 12.2 °C (21.9 °F) warmer than the surrounding countryside, while similarly-sized but spread-out Buffalo, N.Y., produces a heat island of only about 7.2 °C (12.9 °F), according to satellite data. Since the background ecosystems and sizes of both cities are about the same, Zhang’s analysis suggests development patterns are the critical difference.

        She found that land cover maps show that about 83 percent of Providence is very or moderately densely-developed. Buffalo, in contrast, has dense development patterns across just 46 percent of the city.

        Providence also has dense forested areas ringing the city, while Buffalo has a higher percentage of farmland. “This exacerbates the effect around Providence because forests tend to cool areas more than crops do,” explained Wolfe.

        If it used to be forest, and it isn’t anymore, it is not pristine and has UHI.

        • Posted Nov 1, 2011 at 3:31 PM | Permalink

          Bruce.

          Nobody argues that Tokyo or Providence is free from UHI. Both are urban by any and all classification systems. That is not the issue, never has been the issue, and is utterly beside the point.

          If there were 10,000 Tokyos in the station inventory or 10,000 cities the size of Providence, finding UHI would be so easy even you could do it.

          However, a sizeable portion of stations (thousands) have NO population, while a few are like Tokyo and Providence. You can in fact go look on the web for the nice little study that “residual analysis” did on the UHI effect of large cities in GHCN (pop greater than 1 million). The problem? Those cases don’t happen with great frequency. Remove them and your mean moves a tiny bit.

        • Bruce
          Posted Nov 1, 2011 at 4:08 PM | Permalink

          You didn’t get one of the points I was making. NASA discovered that farmland has UHI compared to forests. Neither have populations of any significance.

          If you read WMO/CRN guidelines for placing thermometers, they always say keep it away from trees.

          “Flat and horizontal ground … No shading when the sun elevation >3 degrees.”

          The siting guidelines ensure warmer.

        • Steven Mosher
          Posted Nov 2, 2011 at 2:50 AM | Permalink

          U = urban
          H = heat
          I = Island

          are you goddard in disguise?

        • Ron Cram
          Posted Nov 2, 2011 at 8:59 AM | Permalink

          Land use changes are similar to UHI. While technically different, land use changes should not be excluded from the conversation about bias.

        • Steven Mosher
          Posted Nov 2, 2011 at 4:54 PM | Permalink

          Ron nobody is excluding discussions of land use change. However, Bruce has a habit of changing the topic rather than following through with a discussion. I object to that. We can discuss UHI, raise issues, close on issues, suggest studies. If after that somebody wants to discuss land use changes.. I got that covered as well, but Bruce, unlike Don, is not interested in discussion. He does not read beyond the abstract and is a waste of my time.

        • Bruce
          Posted Nov 2, 2011 at 9:28 PM | Permalink

          I’m not sure why you keep ignoring NASA’s findings. They found 42 cities in the northeast with UHI and they found that farmland is warmer than forest and the UHI effect is larger when cities are surrounded by cooler forests. Same study.

          The ability to measure current UHI lies with satellites, not a bunch of code massaging bad station data. Past UHI is problematic, but it won’t be solved by massaging bad data either.

        • Steven Mosher
          Posted Nov 2, 2011 at 11:25 PM | Permalink

          The issue isn’t finding cities with UHI.

          The issue is finding STATIONS with UHI.

          We all know that cities have UHI. The question is which STATIONS are in urban areas, which urban areas, and how bad the UHI is in THOSE urban areas. Not some other areas, the actual areas where the stations are located.

          Cities have UHI. some big some small. Cities do have UHI. more than 42 I can assure you.

          Some stations are in some cities; some big, some small

          Some stations are not in cities. anywhere between 25 and 40% depending on how we categorize sites.

          The question is NOT do cities have UHI. they do. we map it. we measure it. cities have UHI
          The question is which STATIONS are either IN cities or close enough to cities to ALSO have
          some measure of UHI. and How much UHI do THEY have. not 42 cities that Nasa studied, but the actual stations
          used in the global average.

          Cities have UHI. They do have it. That has never been the question, and it’s why I ignore your obvious statements of fact.

        • Posted Nov 3, 2011 at 8:11 AM | Permalink

          Mosher
          Isn’t the problem the change in the UHI effect over time? A static UHI gives an offset, and possibly max/min differences, but the rate of change changes little whether there is UHI or not.

        • Bruce
          Posted Nov 3, 2011 at 9:46 AM | Permalink

          Not just cities. And we know the UHI is high. As high as 12.2C.

          I suspect what you are really ignoring is that measuring UHI by coming up with a new GAT like BEST won’t work. Satellites are going to be needed.

        • Steven Mosher
          Posted Nov 3, 2011 at 10:36 AM | Permalink

          well Bruce cities have UHI. U = urban; H= heat; I = Island.
          if you want to document some other kind of heat island.. like farmers field in Iowas Island
          please come up with a snappy acronym. Until such time UHI is the heat island you get in cities

          Yes UHI is 12.2C and Higher!! I’ve seen UHI of way more than that.

          We dont measure UHI with GATs

          no GAT purports to measure UHI

          The question is? How come those 12C UHI bias never show up in the GAT? Who is stealing all the UHI?

        • Bruce
          Posted Nov 3, 2011 at 12:03 PM | Permalink

          “How come those 12C UHI bias never show up in the GAT? Who is stealing all the UHI?”

          Very good questions. Someday someone might figure it out. Maybe they will take the NASA research/data and compare it to temperature data and try and make some sense of it.

          And playing ostrich about land use and new discoveries is not science … just because you are too literal about what the U stands for.

        • Ed Snack
          Posted Nov 4, 2011 at 2:24 AM | Permalink

          Perhaps your problem, Steven, is that you seem fixated on “Cities”. We’re talking of “Urban”, which is the U in UHI. And, FYI, urban =/= cities. OK? UHI is known to exist in population “centres” as low as 1-2,000 people; and populations change over time.

          And although maybe digressing a bit from strict UHI, land use changes are a variable that ultimately needs to be accounted for. There are undoubtedly land use changes over the past 100 years or so, all over the world.

          Then again, in the end we are bound by what data we actually have. If we can define a set of sites with minimal population changes and minimal land use changes, AND the data we derive from those stations is not so different from the data for all stations, then we do need to deal with that fact. That UHI exists and can be very significant, and that it seems to vary with population (a log relationship seems generally accepted), seems incontestable, so why might we not be seeing it? Is this data already processed to remove such effects?

        • Posted Nov 5, 2011 at 9:15 PM | Permalink

          Ed.

          Listen carefully. When you select very rural with no built pixels within 11km you get areas where the population is 0 to 5 people per sq km

          And you can also look at land use.

          And when you look at stations with zero population and no land use issues..

          Guess what?

          they are not on average much different from all the rest. Less than .1C per decade.

          The world is warming.

        • Bruce
          Posted Nov 2, 2011 at 9:52 AM | Permalink

          mosher, why are you ticked with NASA for noting that “forests tend to cool areas more than crops do”? Aren’t you interested in science and discovering new things?

          Of course, John Christy noted this a few years ago (as I’ve pointed out to you several times).

          “Irrigation has turned much of the San Joaquin Valley’s dry, light-colored soil dark and damp, says Dr. John Christy, director of the Earth System Science Center at The University of Alabama in Huntsville (UAH). While the valley’s light, dry desert ground couldn’t absorb or hold much heat energy, the dark, damp irrigated fields “can absorb heat like a sponge in the day and then, at night, release that heat into the atmosphere.”

          Irrigation most likely to blame for Central California warming

        • Steven Mosher
          Posted Nov 2, 2011 at 4:58 PM | Permalink

          Bruce. one topic at a time.

          1. You have not gotten back to me with the Christy code or data.
          2. I know that irrigation matters; that is why I have metadata on crop land, rain-fed cropland, irrigated cropland, and bluewater consumption for every damn station.
          3. I also have forest data and historical changes to land use.

          When you have a paper I haven’t read or a data set I haven’t looked at, I will let you know.

          Until such time, please go bug Christy for his code and data. And don’t forget it was guys like Willis and Steve and me who bugged climate science for code and data. Do your part and get back to me when you have actual data rather than ink on paper.

        • Steven Mosher
          Posted Nov 3, 2011 at 11:53 AM | Permalink

          Get that Christy data and code yet? Fetch it, Bruce.

        • Tom Gray
          Posted Nov 3, 2011 at 12:14 PM | Permalink

          Steven Mosher would be much more convincing and effective without the condescension and patronizing tone. This is without rancor. The interaction could then be about the science only.

        • Gary
          Posted Nov 2, 2011 at 8:10 AM | Permalink

          The Providence station has been located at the State airport ten miles south of the city since the early 1950s (see: http://gallery.surfacestations.org/main.php?g2_itemId=2164). The location is densely suburban and the forested ring itself is ten miles to the west. The east is bounded by Narragansett Bay with its own climatic influences (sea breezes and the moderating effect of the water). Prior to the move, the station had been located at various spots in Providence since 1831, each with its own local microclimate, according to the metadata. So the UHI effect here is tangled up in more than just a growing population.

      • BarryW
        Posted Nov 1, 2011 at 2:55 PM | Permalink

        How about the actual CRN sites? We’ve got a number of years of data now. They are supposedly “pristine”.

        Steve: BEST has some CRN sites in it. Unfortunately they’ve “seasonally adjusted” the data. It would be nice if they also archived their data as they commenced using it.

        • Posted Nov 1, 2011 at 3:23 PM | Permalink

          I have a package to get CRN data.

          I can also create the metadata for CRN. Here is my suggestion: people can go look at the CRN sites and say whether they are rural or not.

          Then I'll tell you how they'd be classified by MODIS.

        • Ron Cram
          Posted Nov 1, 2011 at 3:33 PM | Permalink

          Steven,
          Why can’t we get the data as used by BEST? That’s what you demanded of Nicola Scafetta.

        • Steven Mosher
          Posted Nov 2, 2011 at 2:52 AM | Permalink

          Why don't you ask both of them?

          One will say they have delivered preliminary data and plan a second release
          The other will insult you.

          Next brain buster.

        • Ron Cram
          Posted Nov 2, 2011 at 9:04 AM | Permalink

          Steven,
          Your bias is showing. BEST promised transparency but have studiously avoided it. I don’t trust people who promise one thing and do another. Delivering preliminary data looks to be an excuse to put out results-oriented claims. Nicola insulted you because he thinks you are lazy. It is much easier to reproduce Nicola’s results than BEST results because the data sets are much smaller.

          Steve: it is not fair to say that they have “studiously” avoided transparency. They’ve archived a lot of data and code. The archive is incomplete for a couple of papers. But at this point, I’m willing to attribute this to oversight rather than avoidance. There are lots of things in the organization of the archive that I dislike, but I’m hopeful that they will address these things and do not agree with condemnation.

        • Ron Cram
          Posted Nov 2, 2011 at 3:00 PM | Permalink

          Steve,
          Perhaps I was a little tough on them, but when transparency is the entire reason BEST was founded I had high hopes they would take it seriously. They haven’t.

          I can understand your personal loyalty to Muller and think it is commendable. But I’m certain you will understand that others will not feel the same sense of loyalty that you do. I am hopeful that Muller will correct the deficiencies in transparency and his penchant for overstating the meaning of his results – but I’m not as confident of these as I would like to be.

        • Steven Mosher
          Posted Nov 2, 2011 at 5:01 PM | Permalink

          Studiously avoided it?

          1. They shared an early release with me so I could get some work done on it in R.
          2. They pre-released data and code.
          3. They promise a full release.

          I am on record with the NYT saying I will not be happy until everything is done in R.

          Contrast that with Scafetta. If you can't see the difference, your bias is showing.

      • KnR
        Posted Nov 1, 2011 at 3:21 PM | Permalink

        Pristine! Well, a start would be that the site is actually known to meet the requirements laid down for this type of station.
        How many KNOWN sites like that exist – that is, sites that have been actually physically checked, not guessed at?
        After all, there is a reason why there is supposed to be a standard, and no one would accept the use of lab equipment that failed to meet necessary standards, so why should it be different outside the lab?

  6. Posted Nov 1, 2011 at 1:50 PM | Permalink

    This is an extremely valuable piece of writing–clear, logical and cogent. (Not saying you don’t normally write to this standard, just that this is exceptional.) I would hope you would consider trying to find wider distribution for this.

    • justbeau
      Posted Nov 1, 2011 at 7:36 PM | Permalink

      Yes, this essay is cogent and powerful.
      It will be read at this web site, as well as relayed far and wide, by popularizers.

    • stan
      Posted Nov 1, 2011 at 7:42 PM | Permalink

      Makes for a nice companion to Matt Ridley’s recent talk which is a very clear, well-structured argument in support of his views.

  7. Gary
    Posted Nov 1, 2011 at 2:06 PM | Permalink

    “It was the BEST of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.”

    Charles Dickens, A Tale of Two Cities

  8. Stephen Richards
    Posted Nov 1, 2011 at 2:22 PM | Permalink

    thomaswfuller

    Exactly. Mosher as well, and I rarely agree with the moshpit. There can be NO "pristine". Every site will have its own 'local' parameters. Pristine to me means clinically perfect. Not possible.

    • Steven Mosher
      Posted Nov 1, 2011 at 6:21 PM | Permalink

      yes.

      for example, you can well imagine that some people would think there were problems with this

      30.54850,-87.87570

      They might be forced to admit that the UHI here is less than Tokoyo, but they would never say
      it was a good site.

      • Bruce
        Posted Nov 1, 2011 at 6:48 PM | Permalink

        “The urban heat island effect can modify rainfall patterns. Mobile, Alabama is one
        example (Taylor, 1999). The city has expanded rapidly since the 1980s, replacing
        forests with impervious surfaces. The resulting heat island appears to intensify
        daily summer downpours. Sea breezes are laden with moisture and are the source
        of daily rainfall in the summer. Northeast Mobile has a concentration of mall
        parking and local annual rainfall can be 10 to 12 inches more than less paved
        areas. Consequently, nearby croplands receive less rain.”

        Click to access Trees_Parking.pdf

        • Steven Mosher
          Posted Nov 2, 2011 at 2:57 AM | Permalink

          yes, the effect is known in some cases to extend 20km downwind of the city.

        • Stephen Richards
          Posted Nov 2, 2011 at 7:56 AM | Permalink

          Mosh

          It’s Tokyo

        • Steven Mosher
          Posted Nov 2, 2011 at 6:00 PM | Permalink

          There are 39000 tokyo’s in the dataset I cant be expected to spell every one of them the same.

        • simon abingdon
          Posted Nov 3, 2011 at 10:01 AM | Permalink

          Every Tokoyo makes me say Tokyo to myself with a broad brummie accent. Can’t help it.

        • Steven Mosher
          Posted Nov 3, 2011 at 2:15 PM | Permalink

          its Tokeyo

        • cdquarles
          Posted Nov 4, 2011 at 10:22 AM | Permalink

          What do they mean by "expanded rapidly"? The city of Mobile's population has barely changed since 1970. I lived in Mobile for a time in the 80s and I can tell you that Mobile itself has hardly grown. The surrounding suburbs, particularly those across the Bay in Baldwin County, have indeed grown rapidly since the 60s.

  9. Posted Nov 1, 2011 at 2:40 PM | Permalink

    What do I mean by pristine? Well off the top of my head, something like

    0) No station moves of any kind;
    1) No man-made structures (other than the Stephenson screen, or equivalent, itself) within 20m or ten times its own height, whichever is the larger, at any point;
    2) No significant change in land use over the period considered (including no change in grazing patterns, no change in forestation, and no change in nearby lake extents);
    3) No change in recording method over the period considered unless there is at least 5 years overlap of the two recording methods to enable cross-calibration.

    I strongly suspect there are no pristine stations by this definition which would allow us to extend the temperature record back to (say) 1900. Pristine does indeed mean clinically perfect. And it is indeed not possible.
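The four criteria above read naturally as a checklist. A toy sketch of such a checklist, assuming hypothetical metadata fields (this is nobody's actual station-screening code):

```python
# Toy checklist for the 'pristine' criteria listed above.
# Every field name here is a hypothetical illustration.

def pristine_issues(meta):
    """Return the list of criteria a station record fails."""
    issues = []
    # 0) no station moves of any kind
    if meta["n_moves"] > 0:
        issues.append("station moves")
    # 1) nearest structure at least max(20m, 10 x its height) away
    clearance = max(20.0, 10.0 * meta["nearest_structure_height_m"])
    if meta["nearest_structure_distance_m"] < clearance:
        issues.append("structure too close")
    # 2) no significant land use change over the period
    if meta["land_use_changed"]:
        issues.append("land use change")
    # 3) method changes need a 5-year cross-calibration overlap
    if meta["method_changed"] and meta["overlap_years"] < 5:
        issues.append("uncalibrated method change")
    return issues

example = {"n_moves": 1, "nearest_structure_height_m": 3.0,
           "nearest_structure_distance_m": 25.0,
           "land_use_changed": False,
           "method_changed": True, "overlap_years": 2}
print(pristine_issues(example))
```

A station is "pristine" only when the returned list is empty, which (per the comment) is likely never the case for any century-long record.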

    • Steve McIntyre
      Posted Nov 1, 2011 at 3:29 PM | Permalink

      In practical terms, it's presumably a matter of "more pristine" and "less pristine". In the 2000s, the CRN stations are designed for climate recording and are about as good as one could hope for. I looked at the one near Tucson and there was a noticeable difference in trend between CRN and Tucson airport in only a few years.

      My own take on the past 30 years is that it also places a pretty hard ceiling on potential urbanization contributions. The amount of urbanization in the past 30 years has been unprecedented. The contribution in earlier periods to any warming will be less (but so is the warming.)

      The larger problems in earlier periods are Anthony-type problems: undocumented changes in station quality. Because the station populations are sparse, it also becomes harder to cross-compare. Some of the high 19th-century temperatures in CRU look suspicious to me and are likely to be something in the measurement method.

      • Posted Nov 1, 2011 at 3:54 PM | Permalink

        The CRN stations are indeed as good as we’re likely to get. If we had a network of CRN stations across the world dating from 1900 we would be having a very different conversation than we are now. But we don’t, and we aren’t.

        Given the limited temporal and spatial coverage of the CRN our only hope is to use them to try to estimate how bad the other stations are, and then try to make some sort of correction for it. That would require not just making comparisons between CRN stations and others, but also making direct measurements of the effects of transitions from Stephenson screens to MMTS, and from Min/Max recording to fixed time recording, and so on. But once again I agree with you (Steve McIntyre): even if we had all that sorted out our knowledge of the detailed history of many of the stations is too patchy to have great confidence in any correction method.

        • Steven Mosher
          Posted Nov 1, 2011 at 6:07 PM | Permalink

          Well maybe I’ll do a post on CRN metadata.

          Mc, what do you think?

        • Steven Mosher
          Posted Nov 1, 2011 at 6:14 PM | Permalink

          Examples

          30.54850,-87.87570

        • cdquarles
          Posted Nov 2, 2011 at 11:15 AM | Permalink

          Mobile AL huh Mosh? Ah Fairhope, Alabama, suburban Mobile, and I’d call it urban 🙂 I once worked near there and lived in Mobile near the Bay.

        • Steven Mosher
          Posted Nov 2, 2011 at 6:04 PM | Permalink

          You can call the moon urban because we left junk there

          http://maps.google.com/maps?q=30.54850,-87.87570&hl=en&ll=30.547645,-87.874167&spn=0.011679,0.026157&sll=37.0625,-95.677068&sspn=43.799322,107.138672&vpsrc=6&t=h&z=16

          The point is that people will shift their definition of urban as cases come up. Either that or they won't define it. On one day they will point to a station (CRN) and say.. THIS is the standard. And on another day they will point to the same station and say… oh wait.. I call that urban.

        • cdquarles
          Posted Nov 4, 2011 at 10:10 AM | Permalink

          I looked at Google Earth before replying, Mosh 🙂 . The pointer ended up in the road until you zoomed in very close. Baldwin County, AL has been the fastest- or second-fastest-growing county in Alabama for the last 40 years by population percentage change rate. The area you point to has built up even more since I left it.

          Largest county population in Alabama by rank: Jefferson, Mobile, Madison, Montgomery, Shelby, Tuscaloosa, Baldwin, then it gets muddled with Morgan, Houston, Lee, Calhoun, Etowah, Lauderdale, Marshall, Limestone, Talladega, Cullman, St Clair, Walker, Autauga, Elmore, Blount, Coffee, Colbert, Dale, Dallas, and, well my memory is failing me now.

        • cdquarles
          Posted Nov 4, 2011 at 10:41 AM | Permalink

          One final note on the definition of rural. I call an area of less than 10 people per square mile rural. Take Sumter County, AL as an example: most of that 1000 sq mi county is rural and has been so for 60 or more years. Urban is anything more than that.

        • Posted Nov 5, 2011 at 9:18 PM | Permalink

          Then the BEST "very rural" stations count as rural.

    • Steven Mosher
      Posted Nov 1, 2011 at 3:47 PM | Permalink

      0) No station moves of any kind;
      1) No man-made structures (other than the Stephenson screen, or equivalent, itself) within 20m or ten times its own height, whichever is the larger, at any point;
      You forgot trees that grow and shade the site. What is the effect if the structure is within 19 meters?
      Maybe it should be 50 meters? Some field research would be nice.. got any?
      2) No significant change in land use over the period considered (including no change in grazing patterns, no change in forestation, and no change in nearby lake extents)
      Within what radius? For example, if there is a lake 10km away from the site and its extent
      changes by 3 inches, does that count? 2km away and changes by 1mm? Grazing patterns? You
      will probably want to equip cows with GPS to determine that grazing patterns have not changed.
      No change in forestation? Order trees to stop growing? You mean the exact same number of trees,
      no change in area? How about tree type? Tree height?
      The issue here is that you have no basis to assume that every change will cause a measurable change in temperature.
      I will contrast that with the evidence we have that large-scale changes have a measurable effect. That's
      a real concern. The blanket "no changes whatsoever" is a practical impossibility; worse than that,
      it's undefinable, or rather you have not defined it very well.
      3) No change in recording method over the period considered unless there is at least 5 years overlap of the two recording methods to enable cross-calibration.
      How about triple redundant sensors?

      You can effectively define all measurements out of existence. That has never been the question. There is always an approach which results in us knowing nothing rather than knowing something with uncertainty.

  10. Stephan
    Posted Nov 1, 2011 at 3:00 PM | Permalink

    Well, for “pristine” areas we’ve got pretty good coverage via the mountain glacier record. And there’s overwhelming evidence of global glacier recession.

    • phi
      Posted Nov 1, 2011 at 3:57 PM | Permalink

      snip – not related to the specific issue here

  11. Posted Nov 1, 2011 at 3:03 PM | Permalink

    Steve,

    I’m thinking you have the right approach. With no response to my emails yet, I’m not sure they are taking my critique seriously. Currently they have a trend which is higher than the rest and some stats which are absolutely worse than the rest. In a recent article at Nature blog, they indicated that the models and data were closer than ever. I think we know now which one was changed.

    • don monfort
      Posted Nov 1, 2011 at 4:25 PM | Permalink

      Jeff,

      You may have seen it, but Judith says that Muller has assured her that they are taking critiques seriously. He also told her that the premature press blitz was to get the attention of the IPCC. Of course the press release says they are in the IPCC AR5, and other explanations are given in the FAQs with no mention of the IPCC. So the discrepancy between what he says and what he does is not surprising.

      • Posted Nov 1, 2011 at 5:35 PM | Permalink

        I’m still hoping to hear something but my patience for stalling or non-response is limited.

      • Stephen Richards
        Posted Nov 2, 2011 at 7:59 AM | Permalink

        I think that JC has been somewhat naive. Muller is playing with her emotions and her loyalty. Sad, very sad.

        • Posted Nov 2, 2011 at 8:16 AM | Permalink

          JC is also very pushed for time. I understand that patience may be limited but I predict it will look more like wisdom by the end of the week than it does today.

        • Posted Nov 2, 2011 at 8:24 AM | Permalink

          Hmm, that URL had a climateaudit.com prefix inserted, presumably to deter spammers (and temporary?). Remove anything between the http:// and judithcurry.com and you'll arrive where I intended. Judy's recent comment immediately below mine is well worth taking in.

  12. DocMartyn
    Posted Nov 1, 2011 at 3:04 PM | Permalink

    “The calibration and reconstruction problem is not as difficult as trying to reconstruct past temperatures with tree rings, but neither is it trivial”

    Doesn’t the BEST heterogeneity plot of temperature rates suggest that, even with the ‘unprecedented’ warming over the last 50 years, the chance of an individual tree, stuck as it is in one location, recording a negative trend is 1 in 3?

  13. TJA
    Posted Nov 1, 2011 at 3:05 PM | Permalink

    How hard would it be to gather such data? I have noticed that on exceptionally cold nights, -25F around here, the temp varies widely: it might be -15 in the city, Burlington, VT; -19 on the highway; -22 on the lightly traveled road to my house; and -25 at my garage door, 700 ft from the road.

    I assume the effect is from automobiles disturbing the stratification of the air.

  14. TJA
    Posted Nov 1, 2011 at 3:11 PM | Permalink

    How hard would it be to plant solar powered satellite reporting temp stations in wilderness areas? Really, how hard?

    • Steven Mosher
      Posted Nov 1, 2011 at 3:49 PM | Permalink

      Wilderness is not pristine according to the definition given above. Don't confuse the definitional question with the measurement issue.

      • don monfort
        Posted Nov 1, 2011 at 4:29 PM | Permalink

        Steve, you are just knocking down others' thoughts and ideas on this without proposing how to tackle the problem. Is this all we are going to get? If so, I will stop wasting my time asking you about it.

        • Steven Mosher
          Posted Nov 1, 2011 at 5:57 PM | Permalink

          No, I am trying to get you to think hard about the problem. I know that if I define rural, somebody somewhere will say.. oh look, three sheds and a cow, I bet that cow breathes on the thermometer. So, you point to tokoyo as a poster child, and then when you are shown a remote station located at a ranger station, you say.. oh, that's not pristine, there's a person who reads the thermometer.

          What I can do is this. Looking at the literature from Oke on, we can define certain things known to cause UHI.
          Known and proven to cause it, and to cause it in a way that is measurable. I can sort through sites that don't have those issues. When I do that work, and it's brutal effin work, I know that some armchair bozo will say.. holy crap, there's a lake 10 miles away and maybe the water level changed. That's not rural.

          So, what I'm expecting isn't much, except to think about the problem in a way where some progress can be made.

          I have a proposal for Steve. It's related to CRN, which I've just finished processing.

        • don monfort
          Posted Nov 1, 2011 at 7:16 PM | Permalink

          Thank you Steve. I am not trying to give you a hard time. And I haven’t said anything about Tokyo (Bruce), or criticized the work you have reported on sorting out the rural/very rural areas, using MODIS, and cross-checking with the light thingy, and population densities. That was Tilo. And he may have a little point or two, but I believe you are capable of and are interested in doing it right, and I trust you to be honest. That is why I keep asking you. I will look forward to anything that you and Steve M. will share with us. snip – policy

        • Steven Mosher
          Posted Nov 2, 2011 at 3:06 AM | Permalink

          I realize that, Don, sorry. There are those folks who want to insist that every station has 9C of UHI, that the world is really cooling and that, oops, we are also coming out of an LIA. err wait.. my favorite: it's not getting warmer and increased sunshine is the cause.. err.. wait.. global temperature doesn't exist, but that non-existent thing was greater in the MWP… or the temperature record is a total crock, except when it correlates with the sun.

          I’ll put together a little example of what kinds of things you can know and how that might help.

        • Posted Nov 2, 2011 at 11:17 PM | Permalink

          “There are those folks who want to insist that every station has 9C of UHI”

          Really? Can you name one? Or is this just hyperbole, again?

        • Steven Mosher
          Posted Nov 2, 2011 at 11:29 PM | Permalink

          So I take it you think that the UHI bias is less than 9C?

          Great, care to give an estimate? You can use Steve's work above to take a stab at it.
          If not, then I'll assume you think it's 0.

        • Posted Nov 3, 2011 at 12:20 AM | Permalink

          You’re moving the pea. You stated “There are those folks who want to insist that every station has 9C of UHI”. I asked you to name one. Instead you changed the subject.

          As you stated elsewhere, it’s not whether UHI exists, it’s whether STATION UHI exists. I’ve never seen anyone state “There are those folks who want to insist that every station has 9C of UHI”.

        • don monfort
          Posted Nov 3, 2011 at 1:26 AM | Permalink

          I think Steve is talking about Bruce, but employing a little bit of hyperbole.

          “I’ve never seen anyone state “There are those folks who want to insist that every station has 9C of UHI”.” That is incorrect, since we have just seen Steve say that very thing. See how it works? Anyone can be pedantic. Your do-rag suits you.

        • Posted Nov 3, 2011 at 9:35 AM | Permalink

          If being pedantic means expecting someone to back up their statements, then we should all be more pedantic. I suppose you agree with Mosh then.

          How about this: I’ve never seen anyone state “every station has 9C of UHI”. Better?

        • don monfort
          Posted Nov 3, 2011 at 10:54 AM | Permalink

          OK, if you think that Steve meant that literally, then you are not being pedantic. You are just being silly.

        • Steven Mosher
          Posted Nov 3, 2011 at 2:26 PM | Permalink

          Of course it's hyperbole. The simple fact is this: if you try to have a serious discussion about UHI, an honest discussion, you will find a Bruce in every crowd. So let me ask you.

          You see Steve's analysis. It is rather logical:

          Satellite measures the temperature far above the surface, away from UHI.
          Let's stipulate that the surface (1% of it) has UHI.
          The concern is that we are measuring the 1% and avoiding the 99%, so that we have a bias.
          Fair enough concern.

          We can also note that by the time the air parcels from this 1% rise and mix with surrounding air,
          the effect is washed out. That means when we measure a trend miles above the surface, that trend is
          free of UHI.

          Let’s say that Trend is .2C per decade
          We note that the land trend is .18C per decade

          Note the above is just done for illustration of the logic.

          What that implies is that the UHI bias could be as high as .2 − .18, or .02C/decade.

          In the context of this argument it does not make sense for Bruce or you to make comments like
          "Providence has a UHI of 5" or 4 or whatever.

          The debate is over methods to estimate the final and full effect of UHI on the entire record

          Yes, you will find cities and fields and all sorts of hot spots on the surface. The point is:
          what is the BIAS after collecting the samples at the surface? What is the total bias?

          Simple question: do you agree with Steve's approach? Remember, it was an approach used by skeptics.
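The bounding logic in the comment is a single subtraction; spelled out with the comment's illustrative numbers (not real trends):

```python
# Illustrative numbers from the comment above, not measured trends.
tlt_trend = 0.20    # satellite (TLT) trend, deg C/decade, taken as UHI-free
land_trend = 0.18   # surface land-index trend, deg C/decade

# If the satellite trend is uncontaminated, the gap between the land
# index and the satellite trend bounds the net UHI bias that could be
# hiding in the land index.
max_uhi_bias = abs(land_trend - tlt_trend)
print(f"{max_uhi_bias:.2f}")  # prints 0.02 (deg C/decade)
```

This is why single-city UHI magnitudes (Providence, Tokyo) are beside the point: the argument constrains the aggregate bias in the full record, not any one station's local effect.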

        • Posted Nov 4, 2011 at 9:26 AM | Permalink

          “OK, if you think that Steve meant that literally, then you are not being pedantic. You are just being silly.”

          That was the reason I mentioned it. His comments in this thread have been laced with hyperbole, which means you can’t have a serious discussion. He’s the one being silly, not me.

      • TJA
        Posted Nov 1, 2011 at 5:01 PM | Permalink

        I disagree with your premise. Certainly, even in a wilderness site, there will be factors that change over time unrelated to climate. The big question is a systematic bias from population growth. To get a ten-year reference period far from the madding crowd, and to compare it to other stations for trend or lack of trend, does not seem an overly expensive project for the value returned.

        Whatever your point about uncertain data, we know that certain uncertainties are ignored in the noise and bluster. I would like to know if UHI is an issue or not.

  15. Eric Anderson
    Posted Nov 1, 2011 at 3:47 PM | Permalink

    “Throw everything into the black box with no regard for quality and hope that the mess can be salvaged with software.”

    Beautiful description.

  16. DaveS
    Posted Nov 1, 2011 at 3:53 PM | Permalink

    I’ve asked this before but have never been organised enough to check for any responses. All the discussion on UHI that I’ve seen is in the context of increasing population. If the theories put forward are correct, then the opposite effect should be apparent in areas of decreasing population. So can anything be learned from somewhere like Detroit, which I gather has experienced quite a decline in population?

    • mpaul
      Posted Nov 1, 2011 at 4:14 PM | Permalink

      It seems to me that a lot of factors could go into UHI, not all of which reverse with declining population. For example, Detroit built up a lot of paved roads, concrete structures, etc, etc. These things are still there, even though they are experiencing a declining population.

      • Posted Nov 1, 2011 at 8:04 PM | Permalink

        Also consider that somewhere in the urbanization process, asphalt streets are replaced with concrete. I noticed this driving US-287 in north-west Texas at 104-108 deg F. Blacktop outside the city, but white pavement in the towns. It might be a stretch to think the effect strong enough to create Urban Cooling, but it could mitigate other heating.

        • cdquarles
          Posted Nov 2, 2011 at 11:09 AM | Permalink

          In my own lifetime I have seen interstates go from concrete to asphalt then back to concrete then back to asphalt both in and out of the cities. The materials used depend upon costs of materials at the time and/or subsidies to try out ‘new’ materials.

          What gets lost, in my opinion, is that all biological organisms alter the local environment to enhance their own survival; and that the most altering organisms of all to the local environments over time are single celled plants, fungi and bacteria.

          Urban heat island and other land use changes make measurable and notable changes in the local weather. We should be looking for measurable changes in the local weather attributable to changes in the atmospheric boundary layer first.

  17. Manfred
    Posted Nov 1, 2011 at 4:30 PM | Permalink

    Assuming satellite trends and/or model scaling factors are wrong and UHI / land use change effects are about zero

    – would not explain the increasing difference between sea surface and land-based trends. Land is indeed warming faster, but after 150 years of warming and CO2 increase, one would expect trends to converge, not diverge as they have in recent decades.

    – would not explain McKitrick et al.'s results on the correlation between socio-economic development and temperature increase.

    – would not explain Pielke et al.'s results on the correlation between land use changes and temperature increase.

    Only attributing the difference between downscaled satellite trends and surface trends to UHI and land use changes would tick all the boxes, and would even roughly match the estimated sizes of the effects.

    The writing is on the wall!

  18. theduke
    Posted Nov 1, 2011 at 4:47 PM | Permalink

    Yes, Steve, this is a very fine summation. Thank you for providing clarity.

  19. Tilo Reber
    Posted Nov 1, 2011 at 5:12 PM | Permalink

    “To follow the practices of geophysicists using data of uneven quality – start with the best data (according to objective standards) and work outwards calibrating the next best data on the best data.”

    This objection can be illustrated using a UHI example. BEST fits the pieces of their short or broken temperature time series according to the idea that variation between two time series is influenced only by altitude and latitude. So if a series is included in their model and it differs from those around it, then that difference has to be justified by a difference in latitude or altitude; if it is not, they assume a discontinuity that must be resolved by adjustment. Since BEST does not allow for variation by UHI, any significant difference between urban stations and nearby rural stations is a discontinuity that must be adjusted for. This does not get rid of the UHI; it simply distributes it equally across the nearby stations. The right way would have been to begin with the truly rural stations (less than 2% built) and adjust nearby temperature strings to those stations, rather than simply stirring it all together.
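Tilo's "calibrate outward from the best stations" alternative can be sketched in miniature: anchor on a trusted rural record and offset-adjust a neighbouring urban record to it over a common period, rather than blending all segments at once. This is entirely illustrative toy code, not BEST's (or anyone's) actual algorithm:

```python
# Toy illustration of calibrating outward from the best data:
# shift a lower-quality series so its mean over a common period
# matches a trusted rural reference.

def calibrate_to_reference(reference, series):
    """Shift `series` by a constant so its mean matches the reference mean."""
    offset = (sum(reference) / len(reference)) - (sum(series) / len(series))
    return [x + offset for x in series]

rural = [10.0, 10.2, 10.1, 10.3]   # trusted anchor record (deg C)
urban = [11.5, 11.7, 11.6, 11.8]   # runs warm, e.g. a constant UHI offset

adjusted = calibrate_to_reference(rural, urban)
print([round(x, 2) for x in adjusted])  # matches the rural record
```

A real implementation would match anomalies over an overlap window and carry uncertainty through the adjustment; the sketch only shows the direction of calibration, from the best-quality data outward, rather than an all-at-once blend that smears urban bias across rural neighbours.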

  20. Posted Nov 1, 2011 at 5:28 PM | Permalink

    I’d like to comment briefly on what Steve Mosher was writing about in regards to pristine environments and the UHI.

    As noted by TJA, even a pristine environment would change over time, and this would affect climate. However, absent other factors, these changes would broadly even out.

    It should also be noted that there are effectively zero ‘pristine’ environments around–it’s not a real world issue.

    In my opinion (just an opinion), the first cut is the deepest, in the sense that humanity’s first impacts on an environment are likely to be the most severe–wide ranging burning, extinction of grazing megafauna, etc. What happened on Easter Island probably didn’t affect climate much–but it certainly affected the environment.

    But again, those effects might easily balance out, and they’ve certainly been with us through up and down cycles of temperatures. UHI is meant to be specific–whether it succeeds in that is apparently still somewhat in question. It was first measured in London a couple of centuries ago, and it is real, quantifiable and projectable.

    What we are postulating in this conversation is a suburban heat island, a rural heat island and maybe even more. And it may all be real. But I don’t think the work done on this issue to date is easily relatable to those possible effects. So Steve’s point is (IMO) more pertinent to the wider discussions about UHI, its scope and effect on global temperatures (recalculated by Phil Jones in 2005 as about 0.5C, IIRC).

    The effect you are discussing here, however you label it, is probably worth examining. A smaller temperature change over a larger area of landmass due to lesser effects is a reasonable hypothesis, although I sincerely wish you all good luck in figuring out how to measure it.

    But I do believe it’s a different discussion altogether.

    • don monfort
      Posted Nov 1, 2011 at 6:16 PM | Permalink

      I hoped that we were past semantic quibbling over what pristine is. The world’s population increased from 2.5 billion to nearly 7 billion, from 1950 to 2010. Most of the thermometers are where the people are, not where the buffalo roam. As cities make up less than 1% of the land surface, all the more reason to be concerned about the under-representation of the more sparsely populated areas in the temperature record. I won’t believe, given what we know for sure about human impacts on the environment, that rapid population growth along with the accompanying infrastructure has not had a significant effect on the temperatures that we have been attempting to measure.

      • Steven Mosher
        Posted Nov 2, 2011 at 4:44 PM | Permalink

        In H2010, Hansen used Nightlights (pitch black) to classify the 1200 or so USHCN stations.
        He found 300 – that’s 25%. The population associated with those is less than 10 people per sq km, heavily skewed toward 0 people. So perhaps 25% of the stations are where the buffalo roam. BEST had 40%, but I think they need to refine their protocol somewhat to get to very rural.

        • don monfort
          Posted Nov 3, 2011 at 1:47 AM | Permalink

          Actually Steve, I set you up. The buffalo no longer roam. Virtually all of them were killed by human hunters, who themselves have since left the prairies for the big cities. But less than 10 people per sq km, heavily skewed toward nobody, is interesting. Are you suggesting that maybe 25% of the 39,000 stations in the BEST analysis would fit that description? I wonder why they didn’t go that route, instead of that rural vs. very rural BS.

  21. Robert
    Posted Nov 1, 2011 at 5:38 PM | Permalink

    This post makes several assumptions that I think should be noted.

    Firstly, the assumption is made that the satellite data from UAH and RSS are still free of spurious cooling. This is not necessarily the case. In fact, a recent paper (Zou et al. 2010) found issues with the satellite data that have introduced extra cooling into the satellite record. STAR has released their analysis updated with the new corrections, and their trends are stronger at the various altitudes compared to both UAH and RSS. They are in the process of working on a TLT channel, and if the relationship between the higher altitudes and TLT (a synthetic channel) is similar to that of UAH and RSS, then the STAR analysis will produce a significantly higher trend than UAH and RSS.

    Secondly (and most importantly), Steve compares only BEST with CRU in his figures and seemingly ignores the strong agreement among NOAA Land, GISS (after a land mask has been applied) and BEST, with CRU appearing as an obvious outlier. Now, according to work JeffID has done previously, CRU’s station combination method will tend to underestimate trends. In addition (and what I call more important), the CRU dataset is missing some of the regions which warmed the most over the past ten years. This is confirmed by the European Centre for Medium-Range Weather Forecasts.

    http://www.metoffice.gov.uk/news/releases/archive/2009/land-warming-record

    Although it is clear that BEST claims too much confidence in their answer, and how they answered the UHI question (and the AMO one) is debatable, it is not warranted to consider them as being unreasonably high when you consider their agreement with the other (Arctic-including) temperature series, and when you consider the significant uncertainties in the satellite data.

    • Steve McIntyre
      Posted Nov 1, 2011 at 6:02 PM | Permalink

      I disagree with your point about NOAA and GISS if the satellite data is as represented. If the same barplot with NOAA, for example, shows a larger difference to downscaled satellite, then that simply means that the urbanization and quality issues are worse than we thought – with CRU’s lesser coverage partly offsetting the error. Saying that NOAA runs hotter than CRU is not in itself an argument against a person who places reasonable reliance on satellite results, since, like CRU, NOAA makes no adjustment for urbanization.

      I’m not arguing that one side or the other has won on the urbanization contribution. I’m merely observing that someone can reasonably rely on the satellite record, and the fact that different parties have calculated warmer trends using the same contaminated GHCN data doesn’t, in itself, oblige that reasonable person to change his mind.

      I’m not familiar with Zou et al. Higher satellite trends would obviously reduce the difference and leave less to be attributed to urbanization. I presume that Zou’s trends remain less than those observed in the station data.

    • Steven Mosher
      Posted Nov 1, 2011 at 6:03 PM | Permalink

      Robert, good points all around. However, I think you can agree with the methodology.

      Satellite TLT trend = X
      Model amplification factor = 1.4
      Imputed ground trend Z = X / 1.4
      Observed land trend = Y

      Y – Z ≈ 0

      Deviations from zero need to be explained. That does not say what the explanation is; it just says one is needed.

      At least you can agree to that. Then the fiddling begins, but all the fiddling happens within this general model of how things should be.
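      As a sketch, using the UAH and BEST figures quoted elsewhere in the thread as placeholders (the 1.4 factor is the disputed model amplification):

```python
# Sketch of the bookkeeping: X = satellite TLT trend, Z = imputed ground
# trend, Y = observed land trend. Numbers are placeholders taken from the
# thread (UAH and BEST, deg C/decade); the 1.4 amplification is the
# disputed model factor, not an established value.
X = 0.182                  # satellite TLT trend (UAH land)
amplification = 1.4        # model amplification factor
Z = X / amplification      # imputed ground trend
Y = 0.28                   # observed land trend (BEST)

residual = Y - Z           # should be ~0 if everything is consistent
print(round(residual, 2))  # 0.15 -- a deviation that needs an explanation
```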

      • Robert
        Posted Nov 1, 2011 at 6:34 PM | Permalink

        I might need a reference for the model amplification factor of 1.4x. Where can I find that ?

        • Gavin
          Posted Nov 1, 2011 at 8:12 PM | Permalink

          Nowhere. The expected land-only amplification of MSU-LT over SAT is close to zero (actually equivalent to a factor of ~0.95 +/- 0.07 according to the GISS model).

          The ocean-only tropical amplification is related to the moist adiabat, which is not the dominant temperature structure over land since deep convection is mostly a tropical ocean phenomenon. Ocean temperatures are rising more slowly than land temperatures; therefore, even if tropical land tropospheric temperatures were being set by a moist adiabat over the ocean, they would still have a smaller ratio with respect to the land temp.

        • Posted Nov 2, 2011 at 1:29 AM | Permalink

          I’ve just read Steve’s survey for the first time and this seems the only important correction. I presume you are disputing the following for temperature reconstructions from land-based thermometer records:

          Both Lindzen and (say) Gavin Schmidt agree that tropospheric trends are necessarily higher than surface trends simply through properties of the moist adiabat.

          Do Lindzen and Schmidt agree on the ~0.95 over land?

        • Posted Nov 2, 2011 at 7:56 AM | Permalink

          What John Christy once permitted me to publish was this:

          The global-mean short term tropospheric amplification factor of 1.2 (it’s 1.3 in the tropics) indicates (a) that the ocean’s thermal inertia (sfc datasets use SSTs) works against large shorter-term changes while the atmosphere is much less massive and can respond to a greater extent and (b) there is a lapse-rate feedback process where the lapse rate tends to move toward the moist adiabat when thermally forced from below. Why we don’t see this amplification factor in the trend metric (which models show also occurs for the trend) likely deals with the feedbacks of the climate system – there appear to be negative feedbacks on longer time scales that models don’t capture. This is a hypothesis we want to test.

          John C.

          It was presented to me that, because the data was farther from the ground, the variance was less suppressed by thermal inertia, which didn’t affect the long-term trend. I don’t believe the 1.2 number was model-related but rather from radiosondes.

        • Steve McIntyre
          Posted Nov 2, 2011 at 8:58 AM | Permalink

          Thanks for this comment. I’ve amended the commentary to reflect this observation.

        • Steven Mosher
          Posted Nov 2, 2011 at 4:45 PM | Permalink

          Thanks Gavin that helps immeasurably

        • phi
          Posted Nov 2, 2011 at 5:30 PM | Permalink

          Gavin,

          Correct me if I’m wrong. The factor of 0.95 is relative to all land, so including the Antarctic. Poleward of the 60th parallel, the expected TLT–T2m ratio is reversed. For the northern hemisphere alone (with relatively little representation beyond the 60th parallel) we can expect a factor of about 1.1. The observations (UAH–CRUTEM) give a ratio of 0.69!

        • Carrick
          Posted Nov 3, 2011 at 10:46 PM | Permalink

          Gavin, sorry that I’m late to the party, but have you seen anyone who has studied the effect of the land atmospheric boundary layer on temperature measurements, and whether it can lead to an additional amplification of the surface air temperature over land?

          Things get really crazy near the ground, especially in the first couple of meters. On a hot day you get perhaps a 5–7°C change in the first 10 meters off the ground (and at night you have a similar temperature inversion).

          Here’s some representative data. (Source and data on request.)

          Regardless, I think it’s worth noting that satellite measurements don’t (in general) measure the same quantity as surface measurements. Marine measurements are actually performed by measuring sea surface temperature, and it is assumed that this can be equated with air temperature (after anomalizing to remove what is hopefully a constant offset, though the constancy of that offset is not established and probably depends on meteorology).

        • curious
          Posted Nov 4, 2011 at 1:19 AM | Permalink

          Carrick – is “daytime” midday?

        • Carrick
          Posted Nov 4, 2011 at 11:26 PM | Permalink

          If I remember right, it was 2pm.

        • Tilo Reber
          Posted Nov 1, 2011 at 11:29 PM | Permalink

          Robert: “Where can I find that ?”

          Here:

          Remote Sensing 2010, 2, 2148-2169

          “What Do Observational Datasets Say about Modeled Tropospheric Temperature Trends since 1979?”

          John R. Christy 1,*, Benjamin Herman 2, Roger Pielke, Sr. 3, Philip Klotzbach 4, Richard T. McNider 1, Justin J. Hnilo 1, Roy W. Spencer 1, Thomas Chase 3 and David Douglass 5

          Abstract: Updated tropical lower tropospheric temperature datasets covering the period 1979–2009 are presented and assessed for accuracy based upon recent publications and several analyses conducted here. We conclude that the lower tropospheric temperature (TLT) trend over these 31 years is +0.09 ± 0.03 °C decade−1. Given that the surface temperature (Tsfc) trends from three different groups agree extremely closely among themselves (~ +0.12 °C decade−1) this indicates that the “scaling ratio” (SR, or ratio of atmospheric trend to surface trend: TLT/Tsfc) of the observations is ~0.8 ± 0.3. This is significantly different from the average SR calculated from the IPCC AR4 model simulations which is ~1.4. This result indicates the majority of AR4 simulations tend to portray significantly greater warming in the troposphere relative to the surface than is found in observations. The SR, as an internal, normalized metric of model behavior, largely avoids the confounding influence of short-term fluctuations such as El Niños which make direct comparison of trend magnitudes less confident, even over multi-decadal periods.

        • Robert
          Posted Nov 2, 2011 at 1:30 AM | Permalink

          According to Gavin the number is 0.95 over land rather than 1.4

          This gives trends of the following:

          UAH
          0.182 (°C/decade)

          RSS
          0.208 (°C/decade)

          If I recall correctly
          BEST
          0.28 (°C/decade)

          GISS (land-masked using CCC)
          0.24 (°C/decade)

          NOAA
          0.28 (°C/decade)

          Cru
          0.22 (°C/decade)

        • Steven Mosher
          Posted Nov 2, 2011 at 6:27 PM | Permalink

          Nice:

          BEST 0.28 (°C/decade) – UAH = 0.10 °C “budget” for possible UHI
          BEST 0.28 (°C/decade) – RSS = 0.07 °C “budget” for possible UHI
          CRU 0.22 – UAH = 0.04 °C “budget” for possible UHI
          CRU 0.22 – RSS = 0.01 °C “budget” for possible UHI

          So between uncertainties in the satellite record, uncertainty in the downscaling and uncertainty in the surface record, it would appear that there is room for a modest UHI bias.

          Not zero, but not Tokyo-sized.
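          The four “budget” differences can be recomputed directly (trend figures as listed in Robert’s comment above; labels and rounding are mine):

```python
# Recomputing the "UHI budget" differences (deg C/decade). Trend values are
# those quoted in Robert's comment; the dictionary labels are mine.
trends = {"BEST": 0.28, "CRU": 0.22, "UAH": 0.182, "RSS": 0.208}

for land in ("BEST", "CRU"):
    for sat in ("UAH", "RSS"):
        budget = trends[land] - trends[sat]
        print(f"{land} - {sat}: {budget:.3f} C/decade possible UHI budget")
```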

        • Steve Fitzpatrick
          Posted Nov 3, 2011 at 8:51 AM | Permalink

          Yes, any plausible UHI effect during the satellite era must be modest. If I remember correctly, Douglass, Pielke Sr and others suggested the discrepancy between land surface and satellite lower troposphere trends is mainly due to changes in the surface boundary layer in low wind (esp. winter night) conditions. If that is correct, then the remaining plausible UHI effect during the satellite era has to be very small.

          That doesn’t prove it was similarly small before the satellite era, but it is hard to see why the situation would suddenly change when the satellites started measuring oxygen microwave emissions in 1979. I’ve never been able to get too excited about UHI effects… and still can’t.

        • Tilo Reber
          Posted Nov 2, 2011 at 6:55 PM | Permalink

          Robert: “According to Gavin the number is 0.95 over land rather than 1.4”

          Is there a source for that .95 other than Gavin?

        • Steven Mosher
          Posted Nov 2, 2011 at 11:31 PM | Permalink

          Go get the model data and calculate it for yourself. Christy’s paper used 1.1 – is that source OK?

        • Jeremy Harvey
          Posted Nov 2, 2011 at 4:23 AM | Permalink

          Tilo, this issue of the scaling of surface temp changes to satellite-detected tropospheric temperature changes seems to be a contentious one, as shown by Gavin’s rapid claim that Steve’s use of a downscale by 1.4 is incorrect and that he should have used 1. Reading the characteristically snarky RealClimate post he links to shows indeed that this point has been much argued about, and also shows that one must make distinctions between land-only, sea-and-land, and ‘lower’ troposphere (20 deg. S to 20 deg. N) and global troposphere.

          I note that whether the factor is 1 or 1.4 is not so important concerning one of the key parts of Steve’s beautifully clear essay, i.e. it is irrelevant when trying to judge whether BEST is indeed notably better for recent periods than CRU or other such as GISS. Nevertheless, it does matter for the other argument Steve presents, which is that satellite estimates of surface temperatures should be used to calibrate surface-based measurements and to attempt to estimate the role of UHI in the latter. To do that, you need to sort out what the inferred UHI-free surface temperature trend over land is, based on satellite observations. It would be very helpful if someone could point to any references about that question.

          Steve: I’ve restated the section in question to show the results without the proposed downscaling as well.

        • Manfred
          Posted Nov 2, 2011 at 2:03 AM | Permalink

          Another paper uses 1.1 over land and 1.6 over sea:

          Click to access klotzbachetal2009.pdf

          With 1.1, there would still be significantly higher land based trends.

          Factors of 1.4 or 1.6 as given for sea surface, make sea surface data significantly higher as well (or the models produce false factors). The size of this error would affect the global trend even more than the land error.

        • Steven Mosher
          Posted Nov 2, 2011 at 6:32 PM | Permalink

          “completely contained within the 30-year record used here.
          Thus, in 19 realizations this consistent ratio was calculated.
          This was also demonstrated for land-only model output
          (R. McKitrick, personal communication, 2009) in which a
          24-year record (1979 – 2002) of GISS-E results indicated an
          amplification factor of 1.25 averaged over the five runs.”

          But Ross says this is unacceptable referencing,
          and the data he referred to wasn’t really the right data?

          How does one get 1.1?

        • HaroldW
          Posted Nov 2, 2011 at 6:55 PM | Permalink

          The factor of 1.1 is derived in http://pielkeclimatesci.files.wordpress.com/2010/03/r-345a.pdf with the following: “Utilizing an appropriate landmask and data provided on his [Gavin Schmidt’s] FTP site at http://www.giss.nasa.gov/staff/gschmidt/supp_data_
          Schmidt09.zip, we have redone our calculations and found amplification factors of 1.1 over land and 1.6 over ocean.”

        • Steven Mosher
          Posted Nov 2, 2011 at 11:32 PM | Permalink

          However, that data was only a subset, as Gavin pointed out.

        • HaroldW
          Posted Nov 2, 2011 at 11:57 PM | Permalink

          Steven Mosher,
          I didn’t notice where Gavin made a point about the Klotzbach et al. paper using a subset…Although obviously I noted that his factor of 0.95 differs from Klotzbach’s 1.1. Can you clarify the “subset” reference, please?

        • Steven Mosher
          Posted Nov 3, 2011 at 2:29 PM | Permalink

          For reference, the amplification is related to the sensitivity of the moist adiabat to increasing surface temperatures (air parcels saturated in water vapour move up because of convection where the water vapour condenses and releases heat in a predictable way). The data analysis in this paper mainly concerned the trends over land, thus a key assumption for this study appears to rest solely on a personal communication from an economics professor purporting to be the results from the GISS coupled climate model. (For people who don’t know, the GISS model is the one I help develop). This is doubly odd – first that this assumption is not properly cited (how is anyone supposed to be able to check?), and secondly, the personal communication is from someone completely unconnected with the model in question. Indeed, even McKitrick emailed me to say that he thought that the referencing was inappropriate and that the authors had apologized and agreed to correct it.

          So where did this analysis come from? The data actually came from a specific set of model output that I had placed online as part of the supplemental data to Schmidt (2009) which was, in part, a critique on some earlier work by McKitrick and Michaels (2007). This dataset included trends in the model-derived synthetic MSU-LT diagnostics and surface temperatures over one specific time period and for a small subset of model grid-boxes that coincided with grid-boxes in the CRUTEM data product. However, this is decidedly not a ‘land-only’ analysis (since many met stations are on islands or areas that are in the middle of the ocean in the model), nor is it commensurate with the diagnostic used in the Klotzbach et al paper (which was based on the relationships over time of the land-only averages in both products, properly weighted for area etc.).

        • HaroldW
          Posted Nov 3, 2011 at 9:55 PM | Permalink

          Thanks Steven, that was perfectly clear and very helpful.

        • Steve Fitzpatrick
          Posted Nov 2, 2011 at 1:47 PM | Permalink

          I said downscaling was “more applicable over ocean (high surface-level humidity) than over land, and more applicable to the mid troposphere than the lower troposphere.”
          It seems Gavin and I agreed on something! That is at least a 3 sigma event.

    • Posted Nov 1, 2011 at 6:35 PM | Permalink

      Robert,

      Non-detection of UHI is NOT a sign of an accurate result. There isn’t much to debate IMHO. They didn’t detect it for the same reasons that others haven’t – bad methods. When a kid can drive a thermometer through the center of a city and find a 2C hump (old science project at WUWT), we know the effect happens because the city wasn’t there two hundred years ago. It is also visible in steps in the temp data when you plot it yourself. I’m very very skeptical about the UHI result.

      Steve’s critiques about how steps are detected and corrected are also pertinent and represent a potential bias far greater than Roman’s station combination methods fix.

      It’s all just fun though. How well can we mash together the data? What is the right method?

    • Steve McIntyre
      Posted Nov 1, 2011 at 9:38 PM | Permalink

      I just looked at the new NOAA version of GHCN. My recollection was that NOAA used to be fairly close to CRU. However, NOAA’s switched to sliced segments and its trend is now exactly the same as BEST’s. See ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/blended/ghcnm-v3.pdf for information on the changeover to Mennian methods.

      I can’t locate any backhistory of past NOAA versions. If anyone can locate them or has back copies, I’d appreciate it.

      • Tilo Reber
        Posted Nov 1, 2011 at 10:55 PM | Permalink

        One of these days one of these groups is going to make a modification that actually cools the results, and I will faint.

  22. Tilo Reber
    Posted Nov 1, 2011 at 5:51 PM | Permalink

    As an illustration of Steve’s point about UHI, I posted this at Judith’s about a week ago as a highly simplified explanation:

    Let’s say the year is 1950 and we are going to put a thermometer in a growing city. But the city is already there and already has a very high built density. So, let’s say that the city already has 1C of UHI effect. Over the next 60 years the city continues to grow, mostly around the perimeter. The UHI effect goes up, and by 2010 there is 1.5C of UHI effect. The thermometer was only there since 1950, so the thermometer will only see the delta UHI change from 1950 to 2010 as an anomaly. So, by that thermometer, the delta UHI effect for that period is .5C.

    Now, in the same year, 1950, we put another thermometer into a medium size town. Let’s say that it has a UHI effect of .1C at the time we put the thermometer there. The town grows over the next 60 years, there is a lot of building that happens close to the thermometer, and by 2010 it has .6C of UHI effect. Again, the thermometer will not register that first .1C as anomaly. But it will register the next .5C as anomaly.

    So, in 2010, what we end up with is that the urban thermometer has 1.5C total of UHI effect, and the rural thermometer has .6C of total UHI effect. But, the delta UHI for both thermometers since they were installed is .5C. It is that .5C that both of them will show as anomaly.

    Now BEST comes along and decides that they will measure UHI by subtracting rural anomaly from urban anomaly. Let’s also say that there has been .3C of real warming over those 60 years. So the rural thermometer shows .8C of warming anomaly and the urban thermometer shows .8C of warming anomaly. BEST subtracts rural from urban and gets zero. Their conclusion is, “either there is no UHI or it doesn’t affect the trend”. But, as we have just seen, .5C of the .8C in the trend of both the urban station and the rural station was UHI.

    With their results, BEST has failed to discover the pre thermometer urban UHI effect, the post thermometer urban UHI effect, the pre thermometer rural UHI effect, and the post thermometer rural UHI effect. They have also failed to discover the UHI addition to the trend in either place. In other words, their test is a total fail. Even if they did their math perfectly, ran their programs perfectly, and did their classification perfectly, their answer is still completely wrong. Why? Because the design of the test never made it possible to quantify UHI. Now, many of you may object to my scenario.

    Some of you may wonder if it is reasonable to expect a small town to grow at a rate that pushes up the delta UHI as fast as a city. This is where the definitions of rural and urban come in. MODIS defines an urban area as an area that is greater than 50% built, and there must be greater than 1 square kilometer, contiguous, of such an area. So, for example, if you have two .75 square kilometer areas that are 60% built, separated by one square kilometer of 40% built, it’s all rural. So the urban standard is high enough that an area must be strongly urban to qualify. The rural standard is anything that is not urban. And that allows for a whole lot of built. 10 square kilometers of 49% built is all classified as rural.

    BEST then goes on and further refines the rural standard as “very rural” and “not very rural”. Unfortunately, they make no new build requirements for “very rural”. The only new requirement is that such an area be at least 10 kilometers from an area classified as urban. But a “very rural” place could still have up to 49% build.

    This means that you can have towns, small cities, and even some suburbs that are classified as rural. In such areas there is still plenty of room to build, and build close to the thermometer. In the urban areas, there is little room to build. So either structures are torn down in the city to make room for new structures, or structures are put up at the edge of the city, expanding it. The new structures being put up at the edge of the city are far from the thermometer, and while they still affect it, the further away they are, the less effect they have.

    In the rural area there is still space to grow close to the thermometer. So, in the rural area you can actually have more UHI effect with less change in the amount of build. So, if a rural area goes from 10% built to 30% built it will still be rural, and it can have the same UHI effect on the thermometer as the city where most of the new building is around the edges. The urban area may go from 75% built to 85% built around the thermometer, and it may have its suburbs growing, but the total effect will be close to that of the rural build.

    All of this is essentially confirmed by Roy Spencer’s paper and by BEST’s own test results.
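    Tilo’s numeric scenario at the start of this comment reduces to a few lines of arithmetic (all numbers hypothetical, as in the comment):

```python
# Hypothetical numbers from the scenario above: each thermometer is installed
# in 1950 and only sees post-installation UHI growth as anomaly.
real_warming = 0.3                        # deg C of genuine warming, 1950-2010

urban_uhi_1950, urban_uhi_2010 = 1.0, 1.5
rural_uhi_1950, rural_uhi_2010 = 0.1, 0.6

urban_anomaly = real_warming + (urban_uhi_2010 - urban_uhi_1950)
rural_anomaly = real_warming + (rural_uhi_2010 - rural_uhi_1950)

# Urban minus rural sees nothing...
print(round(urban_anomaly - rural_anomaly, 2))  # 0.0
# ...even though 0.5 of each 0.8 anomaly is UHI.
print(round(urban_anomaly - real_warming, 2))   # 0.5
```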

  23. Vigilantfish
    Posted Nov 1, 2011 at 6:30 PM | Permalink

    I want to echo the comments commending the beautiful clarity of what is essentially a clinical, calm evisceration of BEST methods. I really appreciated your discussion of the BEST and CRUtem approaches to ‘resolving’ the UHI issues in temperature reconstructions, Steve. Thanks!

  24. stan
    Posted Nov 1, 2011 at 7:52 PM | Permalink

    “The new temperature calculations from Berkeley, whatever their merit or lack of merit, shed no light on the proxy reconstructions and do not rebut the misconduct evidenced in the Climategate emails.”

    It is really unfortunate that Steve even has to write this. Clearly there have been cheerleaders who made the silly claims requiring his rebuke, but how could anyone with a straight face even suggest that BEST’s stat crunching (or anyone’s stat crunching) could ‘cleanse’ a record of misconduct?

  25. DocMartyn
    Posted Nov 1, 2011 at 8:37 PM | Permalink

    “Steven Mosher
    No I am trying you to think hard about the problem. I know that if I define rural that somebody somewhere will say.. oh look three sheds and cow, I bet that cow breathes on the thermometer.”

    In other fields scientists have this thing they call an ‘experiment’.

    One could take a large field, a large lake site and a large desert; cover the area with thousands of thermometers; and then add things like tarmac/concrete roads, buildings, irrigation ditches – you know, stuff. This would inform one of what affects micro-climates and what doesn’t.
    However, this would involve more than looking at noisy data-sets.

    • Tilo Reber
      Posted Nov 1, 2011 at 10:23 PM | Permalink

      DocMartyn:

      Actually, Doc, there is a simpler way to do it. Simply look at the satellite images and compare how much heat the cities are putting out compared to the rural areas around them. That is what was done here.

      http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

      And the UHI that they found was huge. That is what makes the BEST results of a negative UHI effect so absurd.

      • Coldish
        Posted Nov 7, 2011 at 3:33 PM | Permalink

        The NASA workers found, for instance, that the UHI effect at Providence, Rhode Island (which is set in largely forested surroundings) peaks at about 12 degrees C (22 F), while at Buffalo, NY (mostly surrounded by farmland), it peaks at about 7 degrees C (13 F). How did BEST account for such data in their finding that UHI is negligible?
        At Providence some of the hottest areas appear to be in and around the airport. How do the BEST team take the ‘Airport Effect’ into account in calculating single station temperature trends for those stations which happen to be sited at airports?
        As Buffalo and Providence have roughly similar population size, can one also conclude that there exists a ‘Farmland v Forest’ effect amounting in some cases to something of the order of 5 degrees C (9 F)?

  26. sky
    Posted Nov 1, 2011 at 8:44 PM | Permalink

    There are two components to the BEST “philosophy” that cloud the physical significance of all their results. First is the bureaucratic impulse to take ALL data–no matter how obviously bad–and throw it into a processing hopper. Second is the blithe reliance upon statistical models that, at best, are academic idealizations to massage the indiscriminate data into a plausible result.

    No one truly experienced with geophysical data–met data in particular–would take such a sanguine view of the multiple flaws that can occur in station records or of the power of statistical models to establish accurate datum levels where no reliable data are available. Both the temporal and spatial variability of actual near-surface temperatures, as evidenced by cross-spectrum analysis, are too great and too complexly divergent to be reliably modeled under the assumption of statistical homogeneity that is tacitly incorporated in BEST’s algorithm.

    In particular, there is no universal “correlation distance” that allows reasonable interpolation of low-frequency variations at stations a few hundred kilometers apart. Nor is there any reliable way of estimating temperatures at a single station that are a few decades apart. The actual uncertainties in both cases are of the same order as the variations. Meanwhile, throughout much of the globe, such lacunae in reliable data coverage are commonplace.

    BEST’s results speak more to the stylized properties of the data base, which is largely urban to begin with, than to the properties of the actual temperature field.

    • Tilo Reber
      Posted Nov 1, 2011 at 10:06 PM | Permalink

      Sky, when you look at the BEST video of global temps through time you can see where they show India, right at the beginning. You can also see the arc of coverage across the top of India. I think the radius of that arc is equivalent to the distance that they are willing to interpolate (or krig) to. My rough guess is 2000 kilometers – maybe more. That’s even further than GISS does it. Also notice that all of India changes color together, indicating that the source is one single thermometer. The problem with doing that kind of thing is that it ignores the fact that there can be very large thermal discontinuities over that kind of distance. So any interpolation method that crosses mountains is likely to have large errors. Interpolating (or kriging) from ocean shore stations that are warmed by ocean currents inland to areas that are not warmed by ocean currents can cause large errors. But that is what is done in Siberia, Northern Canada and Greenland – by both BEST and GISS. The result is that these inland northern regions are assigned temperatures that are too hot. So the fact that GISS and BEST support each other only means that they commit the same errors. Here is an article about how such interpolation across the Arctic ice from just melted shore stations will cause errors. The idea applies equally well to interpolation inland from such shore stations.

      http://reallyrealclimate.blogspot.com/2010/01/giss-temperature-record-divergence.html

      • sky
        Posted Nov 2, 2011 at 6:55 PM | Permalink

        I very much agree that extrapolation of temperature variations over distances ~2000km is an academic pipedream. There need not even be great intervening mountain ranges for the complete extinction of spatial correlation at such a range. Witness the R^2 =.024 between the yearly series at Begampet on the Deccan plateau and Srinagar in the Vale of Kashmir (1917km away). Similarly, R^2 = .002 between Begampet and Dibrugarh (2035km away). All three locations are south of the Himalayan range.
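        This kind of spatial-correlation check is easy to repeat once two yearly series are in hand. A short sketch (the arrays below are toy placeholders, not the Begampet, Srinagar or Dibrugarh data):

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation between two yearly series, using only
    years in which both stations report a value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ok = ~(np.isnan(x) | np.isnan(y))
    r = np.corrcoef(x[ok], y[ok])[0, 1]
    return float(r * r)

# Toy check: a perfectly shared signal gives R^2 near 1; the quoted
# Begampet pairings came out near 0.
a = np.array([0.1, -0.2, 0.4, 0.0, 0.3])
print(r_squared(a, a))
```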

        But you are preaching to an old member of the skeptic choir, who never once believed that the determination of climatic variations is just a routine matter of running “the code” on data of dubious integrity.

  27. dixonstalbert
    Posted Nov 1, 2011 at 10:11 PM | Permalink

    Regarding ‘pristine’ sites:

    I am wondering if you could work backwards. Is it possible to examine the raw records and identify the “bottom” 500 stations – the ones with the largest ‘cooling’ trend – then study their physical characteristics to see if there is a common factor making them ‘more rural’ than the others?

    • Max_OK
      Posted Nov 2, 2011 at 11:04 AM | Permalink

      Sounds like a good idea.

    • Steven Mosher
      Posted Nov 3, 2011 at 2:31 PM | Permalink

      yes, it’s called data snooping.

      It’s pretty easy to pull the trends and the metadata and do a stepwise regression to hunt for variables that make sense. But picking the bottom 500 stations by trend is itself a selection on the outcome, so you’d have to control for that.
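      A minimal sketch of the kind of forward-stepwise hunt described here (toy metadata and invented numbers – not the BEST data or any published code):

```python
import numpy as np

def forward_stepwise(X, y, max_vars=3):
    """Greedy forward selection: repeatedly add the column (e.g. a metadata
    variable) that most reduces the residual sum of squares of an OLS fit."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    for _ in range(min(max_vars, p)):
        best_j, best_rss = None, None
        for j in remaining:
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in chosen + [j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if best_rss is None or rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# Toy data: column 1 ("population", say) drives the trend; column 0 is noise.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 2.0], [0.0, 3.0]])
y = 2.0 * X[:, 1]
print(forward_stepwise(X, y, max_vars=1))  # [1]
```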

      • EdeF
        Posted Nov 3, 2011 at 5:58 PM | Permalink

        In California, the coolest stations appear to be located right on the beach, such as when they moved LA to LAX. You get that constant on-shore breeze. I have found wx stations up in the boonies that look good, but they are not used in the big studies. They only use SF, LAX, San Diego and I think Sacramento. No “rural” stations, possibly due to record length, data completeness, etc. Weather records are most numerous where the people are, as someone said above. I grew up in a lumber town in the Sierras that doesn’t exist anymore. Great wx records from 1941 to 1971, then pfthft.

        • Steven Mosher
          Posted Nov 3, 2011 at 9:33 PM | Permalink

          coastal stations warm less rapidly than inland stations.

  28. Max_OK
    Posted Nov 1, 2011 at 10:34 PM | Permalink

    Land temp trends for RSS and CRU are different in Figure 1 above, but look the same at woodfortrees.org.

    http://www.woodfortrees.org/notes.php#best

    Does anyone know why?

  29. BioBob
    Posted Nov 1, 2011 at 10:45 PM | Permalink

    I must admit to some personal confusion about the future station siting / UHI /Pristine discussion.

    If you wish to collect valid samples from a notionally normally distributed population, one assumes that n replicated random samples, meeting acceptable variance and standard-error requirements, are needed.

    If the n random samples meet your variance/error requirements, the nature of the site does not matter since error and variance are within acceptable limits and the sampling sites purportedly reflect reality of the normally distributed temperature data set. You could also stratify your random sampling scheme to test UHI, etc. effects but this should not really be required at the basic level.

    Excuse me if I misremembered some important aspect of my graduate stats classes. I realize that random sampling on this level is unlikely and difficult but the alternative may be simply a repeat of what ‘we’ struggle with now.
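    A toy illustration of the sampling point above (the population parameters are invented; nothing here is real station data): for n random samples, the standard error of the mean falls as 1/sqrt(n), regardless of where the samples happen to sit.

```python
import numpy as np

rng = np.random.default_rng(0)
# Notional normally distributed "temperature" population: mean 15, sd 2.
population = rng.normal(loc=15.0, scale=2.0, size=100_000)

ses = {}
for n in (10, 100, 1000):
    # 200 replicate sample means at each n; their spread estimates the SE.
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(200)]
    ses[n] = float(np.std(means))  # empirical SE, roughly 2/sqrt(n)

print(ses)
```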

  30. greg smith
    Posted Nov 1, 2011 at 11:23 PM | Permalink

    Steve
    A very cogent post, as always. I see you are getting the attention of the team following Gavin’s dropping over. Quite a change from a few years back. The UHI, to me, is one of the principal issues in this whole mess, and I am astounded, given the money that has been thrown at climate science in general over the years, how little serious experimentation and measurement has been devoted by any workers to properly documenting the effect, rather than to manipulating uncertain data sets.

  31. Geoff Sherrington
    Posted Nov 1, 2011 at 11:26 PM | Permalink

    Thank you Steve Mc, for a summary which neatly reflects the position of my colleagues.

    Here is a pristine Australian weather station: look on Google Earth at -15.7426, 136.8192. Australia is a good subset for study because of its low population, low land use change over large areas and a reasonably long official climate record.

    Here is a spreadsheet that I presume was produced by BEST a couple of years ago. It has 589 Australian stations, of which more than 10% can sensibly be classed as pristine. Indeed, I have classified all 589 into 4 classes; my added data form columns I to Q inclusive.
    http://www.geoffstuff.com/Working%20UHIAPRIL011.xls

    If you want to improve understanding of UHI, use this spreadsheet. All you have to do is buy a copyrighted CD of climate data from the Australian Bureau of Meteorology, then compute.

    You will find that separation of signal and noise will require a judgement call at some stage. Your conclusions will reflect your call. This is what happened to BEST, this is what happened to Phil Jones in the 1990s.

  32. Manfred
    Posted Nov 2, 2011 at 2:05 AM | Permalink

    However, I would guess that the factors deduced from the models are the result of “convective and cloud parameterization, feedback effects and other energy transfer processes”, and that these parameters are themselves partly deduced by training against measured data, which includes UHI/LUC. High UHI/LUC would then generate factors over land that are too low, and the low factors would hide the UHI/LUC?

  33. Alexej Buergin
    Posted Nov 2, 2011 at 6:48 AM | Permalink

    Did I read on WUWT that a US schoolgirl wrote a study comparing rural and urban stations located relatively near to each other? I remember thinking to myself: That is the best I have ever seen about that topic.
    (Yes, her father helped her.)

  34. David
    Posted Nov 2, 2011 at 8:39 AM | Permalink

    Steve, I’ve commented on this before and it bears repeating. You refer to the grad student who is the lead author on the urbanization paper. During my grad student days in Physics, I was lead author on a Physical Review Letter. However, make no mistake about it, my advisor was truly the driving force behind the experiments and write-up. Although I did most of the grunt work and the writing, and had a full understanding of the Physics behind the theory and the experiments, everything I did and wrote had to go through my advisor, and he was very demanding. The point I’m trying to make is that just because a grad student was the lead author of the paper doesn’t mean that the research was substandard. The quality of the research was fully dependent on the advisor, who would be one of the other authors on the paper. I would certainly allow that it’s fully possible that neither the grad student nor the advisor fully understood the MODIS data and how to analyze it properly. However, I would suggest that stating that the lead author is a grad student is irrelevant. If an advisor has a grad student on their team, papers from the advisor almost always have a grad student as the lead author.

  35. Steve McIntyre
    Posted Nov 2, 2011 at 9:07 AM | Permalink

    I edited the text discussing the barplot to reflect Gavin Schmidt’s observation about the absence of a Great Red Spot over land. The text previously read:

    The BEST and CRU series run hotter than TLT satellite data (GLB Land series from RSS and UAH considered here), with the difference exacerbated when the observed satellite trends are “downscaled” to surface. For the purpose of this barplot, I’ve divided the satellite trends by 1.4 (relying on models here) to obtain “downscaled” surface trends. Both Lindzen and (say) Gavin Schmidt agree that tropospheric trends are necessarily higher than surface trends simply through properties of the moist adiabat. The downscaling described here reduces the UAH trend from 0.173 to 0.124 deg C/decade and the RSS trend from 0.198 to 0.142 deg C/decade.

    On this basis, BEST (0.282 deg C/decade) runs more than twice as hot as either downscaled UAH (0.124 deg C/decade) or downscaled RSS (0.142 deg C/decade). CRU (0.221 deg C/decade) is 0.08–0.10 deg C/decade warmer than the downscaled satellite series. A direct comparison with RSS (or UAH) shows a smaller difference, but such a comparison does not allow for amplification of trends in the troposphere. BEST is a further 0.06 deg C/decade warmer than even CRU.

    For further information, here is a direct plot of BEST and the downscaled-to-surface UAH lower troposphere data. link
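    The downscaling arithmetic quoted above is easy to verify (my own check; the 1.4 factor is the model-based assumption discussed in the post, not something derived here):

```python
# Observed TLT trends (deg C/decade, 1979-Mar 2010) divided by the
# assumed tropospheric amplification factor of ~1.4.
AMPLIFICATION = 1.4

tlt = {"UAH": 0.173, "RSS": 0.198}
surface = {name: trend / AMPLIFICATION for name, trend in tlt.items()}

for name, trend in surface.items():
    print(name, round(trend, 3))
# 0.173/1.4 = 0.1236, matching the quoted 0.124 for UAH; 0.198/1.4 = 0.1414,
# so the quoted 0.142 for RSS was presumably computed from an unrounded trend.
```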

    • Posted Nov 2, 2011 at 9:29 AM | Permalink

      Ah, an audit trail. That makes me think of a good name for a climate blog. Oh yes, you already thought of that. Like most things.

  36. Bill Illis
    Posted Nov 2, 2011 at 9:20 AM | Permalink

    Peter Thorne and Thomas Peterson report in this paper that the climate models have the tropospheric level that UAH and RSS measure warming at 1.272 times the surface rate in the Tropics (Figure 7).

    Click to access ThorneEtAl.WIREs2010.pdf

  37. Steve McIntyre
    Posted Nov 2, 2011 at 10:07 AM | Permalink

    I’ve added the following commentary at the end:

    However, in fairness to the stated objectives of the BEST project, I should add the following.

    Although I had never contested that a fairly simple average of GHCN data would yield something like CRUTEM, CRUTEM and Climategate have become co-mingled in much uninformed commentary – a misunderstanding disseminated by both Nature and Sarah Palin – and that co-mingling has frustrated me.

    In such circumstances, a verification by an independent third party (and BEST qualifies here) serves a very useful purpose, rather like business audits which, 99% of the time, confirm management accounts but improve public understanding and confidence. To the extent that the co-mingling of Climategate and CRUTEM (regardless of whether it was done by Nature or Sarah Palin) has contributed to public misunderstanding of the temperature records, an independent look at these records by independent parties is healthy – a point that I made in my first post and re-iterate in this one. While CA readers generally understand and concede the warming in the Christy and Spencer satellite records, this is obviously not the case in elements of the wider society, and there is a useful function in ensuring that people can reach common understanding on as much as they can.

    • Posted Nov 2, 2011 at 11:10 AM | Permalink

      a misunderstanding disseminated by both Nature and Sarah Palin

      You have a way of twisting the knife Steve. Don’t tell me you don’t know how embarrassing such a reminder will be of the dodgy company kept at the end of 2009 – by the former governor of Alaska.

  38. Posted Nov 2, 2011 at 10:57 AM | Permalink

    After reading all the posts here with great interest, I am convinced that trying to measure land surface temperatures with any degree of precision is an exercise in futility. Many years ago, as a young air conditioning engineer, I learned very quickly that a 36,000 BTU air conditioner installed in a 2,000 sq ft house in the heart of a city was not adequate, while the same size AC in a house of similar construction out in the boondocks was more than adequate.

    • Max_OK
      Posted Nov 2, 2011 at 12:25 PM | Permalink

      It’s change in temperature that’s being measured.

      • stan
        Posted Nov 2, 2011 at 12:44 PM | Permalink

        Yes, and cities change much more and more often than the hinterlands.

        • Max_OK
          Posted Nov 3, 2011 at 12:29 AM | Permalink

          Big cities are too big to change fast.

  39. Posted Nov 2, 2011 at 11:10 AM | Permalink

    A few points:

    First, if you had applied the same logic about which record should be considered better (satellite vs surface) in the 1980s, when the series were first released, the surface record would have been much closer to our current best understanding of global temperatures (via either record). You need to bear in mind that satellites are hardly direct measurements of temperature, and are subject to their own (historically rather large) biases and uncertainties. Granted, satellite records have been much improved in the interim, but it’s not beyond the realm of possibility that there are still uncorrected biases lurking in the data.

    Second, there are differences between the CRU and NCDC (or BEST) land records that go beyond the choice of homogenization procedures. NCDC version 1 (based on GHCN v2) had a record nearly identical to that of NCDC version 2 (based on GHCN v3), and was still quite a bit higher than CRU (see http://moyhu.blogspot.com/2010/09/beta-version-of-ghcn-v3-is-out.html ). CRU uses a rather novel weighting strategy for its land record, where the hemispheres are calculated separately and combined into a single record via 0.68 × NH + 0.32 × SH, whereas both NCDC and BEST use spatial weighting (grid boxes and kriging) over all available land area (with a land mask applied).
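    The two weighting conventions can be sketched as follows (the 0.68/0.32 split is the CRU figure quoted above; everything else is an illustrative toy, and kriging weights differ in detail from a simple cos(latitude) average):

```python
import numpy as np

def cru_combine(nh, sh):
    """CRU-style land index: fixed hemispheric blend."""
    return 0.68 * nh + 0.32 * sh

def area_weighted(lat_deg, anomalies):
    """NCDC/BEST-style idea: weight grid-box anomalies by area, here
    approximated with cos(latitude)."""
    w = np.cos(np.radians(np.asarray(lat_deg, float)))
    return float(np.average(np.asarray(anomalies, float), weights=w))

# Toy numbers (invented): the two conventions generally disagree.
print(cru_combine(1.0, 0.4))                              # approx 0.808
print(area_weighted([0.0, 45.0, 60.0], [1.0, 0.8, 0.4]))
```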

    Finally, while the BEST approach to UHI is far from perfect (and there will be more on this in the next year), no matter how you slice it there does not seem to be a large residual UHI effect after homogenization (at least for the US; global work is ongoing): http://rankexploits.com/musings/2011/uhi-presentation-at-the-ams-conference/

    • Posted Nov 2, 2011 at 11:13 AM | Permalink

      I should add that the NCDC record based on GHCN version 2 had very minor differences between adjusted and raw versions of the dataset: http://rankexploits.com/musings/wp-content/uploads/2010/07/Picture-477.png

      • Steve McIntyre
        Posted Nov 2, 2011 at 12:30 PM | Permalink

        Zeke, did you happen to save the older versions?

        • Posted Nov 2, 2011 at 12:48 PM | Permalink

          Steve,

          It’s available here: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/

          v2.mean is raw data, v2.mean_adj is adjusted.

          It’s worth noting that various reconstructions using v2.mean (by myself, Mosher, Nick Stokes, Chad, Jeff Id, etc.) all show results quite close to NCDC’s record.

        • Tilo Reber
          Posted Nov 2, 2011 at 6:48 PM | Permalink

          “Its worth noting that various reconstructions using v2.mean (by myself, Mosher, Nick Stokes, Chad, Jeff Id, etc.) all show results quite close to NCDC’s record.”

          And how did you, Mosher, Nick Stokes, Chad, Jeff Id, etc. deal with UHI?

        • Posted Nov 2, 2011 at 12:50 PM | Permalink

          Also, here is the official NCDC temperature record using GHCN v2: ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/usingGHCNMv2/


          Steve: does that use sliced data or not?

        • phi
          Posted Nov 2, 2011 at 3:32 PM | Permalink

          There is a lot of smoke in those data. The raw data are probably not raw but homogenized, and the adjustments are probably variance adjustments.

        • phi
          Posted Nov 2, 2011 at 4:06 PM | Permalink

          Well, I may be wrong, but there is something odd somewhere:

          This effect of homogenization is of the same order of magnitude all over the world. So why does this comparison between raw and adjusted give tiny differences?

        • Steven Mosher
          Posted Nov 3, 2011 at 2:06 AM | Permalink

          jeez that chart is so USHCNv1-v2..

        • Steven Mosher
          Posted Nov 3, 2011 at 2:13 AM | Permalink

          The chart you show is more than homogenization. Back in the old days there were several steps or adjustments (gosh, let’s see if I can remember them all):

          1. SHAP – station history adjustments. Basically, when you move a station it would get adjusted, so a change in elevation would get a lapse rate adjustment.
          2. FILNET – filled in missing values.
          3. TOBS – most of the rise you see there. It has been empirically demonstrated that changing the time of observation from, say, 5 PM to 7 AM imposes a change in the computed average temperature, so a lot of what you see there is TOBS.
          4. Homogenization.

          This whole process has been redone, so the chart you have is out of date – circa 2007. It was one of the early charts that got me really interested in the whole adjustment thing. Then I actually read the papers and looked at the process, and suddenly that chart wasn’t so interesting. Shrugs… sometimes when you question things you find out that your good question had a good answer.

        • phi
          Posted Nov 3, 2011 at 2:40 AM | Permalink

          Whatever it is, it warms, so it has a bias. What is the explanation?

          There may be a TOBS component, but there is no reason for it to look linear over such a period.

          In the case of CRUTEM, from version to version, the bias tends to increase.

        • Posted Nov 3, 2011 at 4:48 AM | Permalink

          phi, you can read more about the sources that make up the adjustments in the graph you linked to here. The main adjustment contributing to the warming is TOBS.

        • phi
          Posted Nov 3, 2011 at 5:38 AM | Permalink

          The link does not work yet. But a few points:

          – The TOBS issue is generally characterized by one or two adjustments per century, on well-defined dates. That looks nothing like the linear progression observed on the graph.

          – There may be something specific to the United States, such as changes of observation hours coinciding systematically with site moves (?). In that case it is not possible, a priori, to separate the TOBS effect from the site effect.

          – Case studies show that it is the site effect which is primarily responsible for the cooling bias at discontinuities, and that this bias is of the same order of magnitude as the one that can be read off the graph.

        • phi
          Posted Nov 3, 2011 at 8:03 AM | Permalink

          Now the link is OK, so I can add the following.

          It seems that changes in observation hours are more frequent in the United States than elsewhere and follow a very different pattern.

          A priori, TOBS (black) is not expected to change the trend (it should be neutral). The fact that the effect is not neutral probably means that something else is involved, and that something else is probably related to station moves (the coupling of the two is not surprising).

          “Application of the Station History Adjustment Procedure (yellow line) resulted in an average yearly increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas.”

          This is the typical case. If this treatment was performed before TOBS, the two would probably sum.

          Note that the explanation of the adjustment procedure implicitly admits that the UHI effect is important, yet assumes it has not increased since measurements began. Very odd!!!

        • Steven Mosher
          Posted Nov 3, 2011 at 10:57 AM | Permalink

          speak english

          1. The slide you showed is out of date and does not reflect how the process works.
          2. I will not waste my time explaining to you what you can learn for yourself, the hard way, by reading papers.
          3. Start by reading this blog: every post and every comment.

        • phi
          Posted Nov 3, 2011 at 11:25 AM | Permalink

          “speak english”

          I try hard but the result is not very good. Sorry.

          1: I cited it for the magnitude of the adjustments; that range has not changed.

          2: I could say the same thing to you.

          3: That does not mean anything. Be specific.

          I still do not see why Zeke’s graph showed no difference between the raw and adjusted data. Any idea?

        • Steven Mosher
          Posted Nov 3, 2011 at 2:34 PM | Permalink

          Start by reading the TOBS thread on this site. Then go read the primary literature. Then go get JerryB’s dataset and do the numbers for yourself.

        • sleeper
          Posted Nov 3, 2011 at 4:55 PM | Permalink

          You’ve been cleaning bender’s pool too long.

        • Steven Mosher
          Posted Nov 3, 2011 at 9:32 PM | Permalink

          Bender schooled me well. I took direction. It was the best thing I ever did. Why wouldn’t I pass on great advice?

        • Tom Gray
          Posted Nov 3, 2011 at 7:13 PM | Permalink

          2. I will not waste my time explaining to you what you can learn for yourself, the hard way by reading papers.

          “God said, Let Newton be! and all was light.”

          We can be thankful that Newton was not of the same opinion.

        • Posted Nov 2, 2011 at 5:43 PM | Permalink

          The v2.mean data is completely unhomogenized apart from a basic QC check (exceeds world records and whatnot). I’m actually not sure what homogenization is applied to v2.mean_adj, though I know it wasn’t the Menne and Williams PHA.

          Though it is worth noting that a few very old records may have been homogenized by national MET offices with TOU adjustments and whatnot prior to being sent to the WMO (and the unhomogenized numbers were not retained), but since the 1930s or so it should all be just raw monthly means.

        • phi
          Posted Nov 2, 2011 at 5:54 PM | Permalink

          How do you reconcile these two charts, knowing that for all regions where the information is available homogenization causes a rise of the same order of magnitude, and that CRUTEM applies these homogenizations?


        • Posted Nov 3, 2011 at 12:47 AM | Permalink

          Easy, one is for the US and the other for the globe. Oddly enough, the two are not equivalent.

        • phi
          Posted Nov 3, 2011 at 2:55 AM | Permalink

          I know. That is not the question. The bias is about 0.5 °C per century in CRUTEM3 (the USHCN graph is only there for the order of magnitude). NCDC exceeds CRUTEM, so for NCDC the difference between raw and adjusted should be greater than 0.5 °C per century. That does not appear in your graph. Why?

        • Steven Mosher
          Posted Nov 3, 2011 at 2:15 AM | Permalink

          Phi, the USHCN chart is very much out of date and represents an entirely different process. It’s more than homogenization: it’s FILNET, SHAP and TOBS.

        • JR
          Posted Nov 2, 2011 at 5:58 PM | Permalink

          It is worth noting that there are more than “a few” records that have been homogenized in v2.mean, and they extend beyond the 1930s – to 1950 in many cases.

        • Posted Nov 2, 2011 at 6:58 PM | Permalink

          JR,

          I’d love to see some citations for that, because I’ve only found a few very old temperature books with evidence of TOB adjustments.

        • JR
          Posted Nov 2, 2011 at 8:18 PM | Permalink

          They may exist, but I have not seen citations for these adjustments. One needs to look in the notes section at the beginning of the World Weather Records volumes and then compare what is in GHCN v2.mean with what is printed in WWR. Sometimes GHCN has the raw data and sometimes the homogenized. It is not trivial work. For U.S. stations that were “homogenized” prior to 1951, the adjustment tends to be a slight cooling compared to the raw data – big surprise, huh?

    • Tilo Reber
      Posted Nov 2, 2011 at 6:32 PM | Permalink

      Zeke: “Finally, while the BEST approach to UHI is far from perfect (and there will be more on this in the next year), no matter how you slice it there does not seem to be a large residual UHI effect after homogenization (at least for the US; global work is ongoing):”

      I can understand how homogenization might reduce or eliminate the difference between rural and urban, but that is simply taking the effect and spreading it out evenly. How do you conclude that homogenization actually gets rid of the UHI effect?

      • Posted Nov 2, 2011 at 6:59 PM | Permalink

        Tilo,

        Watch the presentation. I go into detail about a new variant of homogenization that uses only rural stations as the homogenization reference, rather than all co-op stations, for several different urbanity proxies.

        • Tilo Reber
          Posted Nov 2, 2011 at 10:40 PM | Permalink

          Zeke:

          Once again you sent me to a source that did not answer my question – just like you did regarding the “MMTS cooling bias”. A question that you still haven’t answered.

          Zeke, you do understand the difference between removing UHI as a difference between urban and rural on the one hand and actually removing the UHI effect on the other. After watching your video, I still see nothing that tells me how homogenization can possibly remove the UHI effect. Again, as I said before, I can see how it can distribute the effect across both sets, but I don’t see how it can remove it.

          Now one of your variants was to use only rural stations to homogenize to, and this is certainly a better idea. However, that still leaves you with whatever UHI existed in the rural stations. And since your samples were such that for some, the majority of stations were classified as rural and for others at least half were classified as rural, it seems to me that you could still have rural stations with UHI. And that’s not to mention what you were calling the micro effect of things like parking lots next to thermometers in rural areas. Also, you show a recognition of the fact that a large UHI effect may already exist in an urban area before a thermometer is ever placed there. But I don’t see anything in your method to deal with that.

          Then I want to get back to your reference to all the people who have produced temperature reconstructions that support NCDC – You Mosher, Nick Stokes, Chad, Jeff Id, etc. How are they all dealing with UHI? Do they even do the homogenization to rural thermometers only that you did in your study?

        • Posted Nov 3, 2011 at 12:49 AM | Permalink

          For the “UHI that exists for rural stations” (and mind you, this is for four different definitions of urbanity via nightlights, ISA, GRUMP, and population growth), there may well be residual uncorrected bias. If you have suggestions on the best way to define truly rural (i.e. no UHI effect), I’d be happy to try them out.

        • Tilo Reber
          Posted Nov 3, 2011 at 9:47 AM | Permalink

          Zeke:
          We are not talking about residual uncorrected bias here Zeke. I’m telling you that you didn’t correct for any of the bias in most of your tests and you only corrected for part of your bias in your “homogenize to rural tests”.

          Look, your own tests found a substantial UHI effect in unhomogenized data sets. So when you remove the UHI, that means you lose that warming and the slope of your temperature trend decreases. If that didn’t happen as a result of your homogenization, then you didn’t remove UHI bias through homogenization – you simply moved it around.

          Again, I see no answer to the question of what you and all the others are doing about UHI. It looks like you are not even doing the homogenize to rural step. So how is your claim that NCDC is correct because you all got the same results meaningful?

        • Posted Nov 3, 2011 at 10:07 AM | Permalink

          In the US, TOBS and MMTS adjustments are by far the largest positive contributors to the trend, and are issues that are mostly independent of UHI.

          By the way, since you are still focusing on it, what exactly is difficult to understand about the MMTS adjustments? If you adjust the MMTS stations up to correct for the mean cooling bias, or if you adjust the CRS stations down under the assumption that the MMTS instrument is more accurate, it produces identical anomalies and identical trends. As long as the baseline remains constant, both approaches are the same. In practice, MMTS transitions are treated like any other inhomogeneity: detected via breakpoint analysis and corrected by reference to surrounding stations that do not have noticeable inhomogeneities around the same period.
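          The equivalence claimed here reduces to the fact that a constant offset drops out of anomalies; a toy demonstration (invented numbers, not real station data):

```python
import numpy as np

def anomalies(series):
    """Anomalies relative to the series' own baseline mean."""
    s = np.asarray(series, float)
    return s - s.mean()

# A constant instrument offset cancels in the anomalies, so "adjust MMTS up"
# and "adjust CRS down" by the same amount yield identical anomaly series,
# and therefore identical trends.
mmts = np.array([10.0, 10.5, 11.0, 11.5])  # toy monthly means
bias = 0.3                                 # assumed constant offset

print(np.allclose(anomalies(mmts + bias), anomalies(mmts)))  # True
```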

        • Tilo Reber
          Posted Nov 3, 2011 at 11:49 AM | Permalink

          Zeke:
          “In the US, TOBS and MMTS adjustments are by far the largest positive contributors to the trend, and are issues that are mostly independent of UHI.”

          Okay, Zeke, I think we are making progress. I take it by your lack of response that you understand that homogenization doesn’t remove UHI, but it simply spreads it around. I also take it, again by your lack of response, that despite the fact that you found a real UHI effect in your test, that you and the others you mentioned do nothing to account for UHI in your reconstructions. And so your finding that you get the same results as NCDC is hardly a recommendation for NCDC, or for BEST.

          Getting back to your above comment, the fact that TOBS and MMTS are the largest positive contributors to the trend in the US is known. And yes, they are issues independent of UHI. But I don’t see how that has anything to do with whether the UHI effect is properly accounted for in the temperature records. It looks like there should be a substantial decrease in warming and in the temperature trend due to UHI. The fact that this is not done is not evidence that it should not be done; it is evidence of an error in the temperature reconstructions.

          Zeke: “If you adjust the MMTS stations up to correct for the mean cooling bias, or if you adjust the CRS stations down under the assumption that the MMTS instrument is more accurate, it produces identical anomalies and identical trends.”

          No Zeke, there is no evidence that it is an MMTS cooling bias. It could just as easily be a CRS warming bias. All we know is that there is an MMTS–CRS divergence; we don’t know where the bias lies. If you adjust for a cooling bias you get more heat and more trend. If you adjust for a warming bias you get less heat and less trend. Naturally, as always, the assumption made was the one that adds heat and trend. What I was asking you is: what is the physical rationalization for such an assumption?

        • Steven Mosher
          Posted Nov 3, 2011 at 2:04 AM | Permalink

          Go ask JeffId or RomanM.

          As for me, how did I deal with UHI?

          First off, I looked for Tilo’s objective definition of what constituted a rural station versus an urban station…

          Oops, he didn’t give one.

          So I tried a whole bunch of definitions – population, nightlights, ISA, lots of them – looking for that huge UHI signal…

          Bzzt. Didn’t find one.

          So I’m waiting for Tilo’s definition. When I have it, then we can make a bet.

        • Tilo Reber
          Posted Nov 3, 2011 at 11:23 AM | Permalink

          I know that you are very proud of that temperature reconstruction that you did, Mosher. But you didn’t account for UHI, and so now you want to fight for the idea that there is no UHI.

          But go back and read Steve’s post on the subject:

          New Light on UHI

          And go back and read this NASA site on the subject:

          http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

          What did they find?

          “Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16 °F) warmer than surrounding rural areas over a three year period, the new research shows. The complex phenomenon that drives up temperatures is called the urban heat island effect.”

          Here are some of the numbers:

          Providence, RI – 12.2 C of UHI
          Buffalo, NY – 7.2 C of UHI
          Philadelphia, PA – 11.7 C of UHI
          Lynchburg, VA – 5.5 C of UHI
          Syracuse, NY – 10.6 C of UHI
          Harrisburg, PA – 7.6 C of UHI
          Paris, France – 8.0 C of UHI

          Note: Lynchburg has a population of only 70,000

          Here is the abstract from the Imhoff paper:

          http://pubs.casi.ca/doi/abs/10.5589/m10-039

          “Globally averaged, the daytime UHI amplitude for all settlements is 2.6 °C in summer and 1.4 °C in winter. Globally, the average summer daytime UHI is 4.7 °C for settlements larger than 500 km2 compared with 2.5 °C for settlements smaller than 50 km2 and larger than 10 km2.”

          And here are some charts from Spencer showing the relationship between UHI and population. Notice that the effect starts at very low population densities.

          It looks like even this young boy is able to find the UHI.
          http://www.youtube.com/user/TheseData?blend=21&ob=5

        • don monfort
          Posted Nov 3, 2011 at 11:45 AM | Permalink

          Steve, I hope you will entertain a hypothetical question. Let’s suppose that in 1950 a couple of smart guys like us predicted which uninhabited places on land would remain that way up to 2010, uninhabited meaning fewer than 2 people per sq km (not enough to procreate). Let’s say we found that 18% of land area qualified as pristine; that’s what we called it back then. We put some thousands of thermometers in those areas, spaced so that we could compare their readings with the records of the 39,000 stations in the BEST study. Comparing our thermometers’ readings to the BEST 39,000, do we expect to find a similar trend? How similar? Comparing our pristine results to stations in cities over 50,000 population, what do we expect? Notice I did not mention Tokyo.

        • don monfort
          Posted Nov 3, 2011 at 6:10 PM | Permalink

          OK Steve, I will reveal the answer. It’s .3423C less warming in the pristine areas. We have determined the hypothetical effect of human habitation on the measurement of the earth’s temperature record from 1950 to 2010. Probably with as much certainty as anybody else has done.

    • targs
      Posted Nov 4, 2011 at 7:35 AM | Permalink

      Zeke and Mosh,

      How do you reconcile the BEST findings with McKitrick and Nierenberg (2010)? I know that you are both (certainly Mosh is) familiar with this paper, which seems to show persuasively that “evidence for contamination of climatic data is robust across numerous data sets, it is not undermined by controlling for spatial autocorrelation, and the patterns are not explained by climate models.” To my knowledge this paper has never been refuted, and Gavin’s attempt to find fault with earlier McKitrick papers on the topic was undone by Gavin’s mistake with regard to autocorrelation in the residuals. So, assuming MN 2010 stands, does this suggest a problem with the BEST paper?

  40. John Hekman
    Posted Nov 2, 2011 at 12:30 PM | Permalink

    Some great commentary here on the definition of urban. The problem with most if not all of the quantifiable definitions is that they are done from some guy’s office instead of being the result of going out in the field and studying what really causes local heat effects. I think the most accurate measurement of UHI will eventually come from someone’s careful use of the Surfacestations data and photos. Thanks again, Anthony!

    • stan
      Posted Nov 2, 2011 at 12:47 PM | Permalink

      snip – editorializing

  41. John Whitman
    Posted Nov 2, 2011 at 1:51 PM | Permalink

    Strategically, to build more confidence in satellite measurement of lower-troposphere temperatures, investing more resources in orbital assets and in the supporting analytical infrastructure would be very prudent.

    With government budget cuts, reaching out to private ventures from within the current government-sponsored satellite facilities would mitigate the loss of satellite measurement and analysis capability.

    I am personally willing to volunteer to help create and coordinate such integration of future private and existing public satellite efforts.

    John

  42. Posted Nov 2, 2011 at 8:02 PM | Permalink

    Great approach but how do you conclude that homogenization can actually get rid of the UHI effect?

  43. P. Solar
    Posted Nov 2, 2011 at 10:06 PM | Permalink

    One thing struck me on first seeing the BEST temp graph: there was no recent plateau, it just kept going up at a fairly even rate.

    I know Dr. Curry has been very displeased by the way the running average results in much of the recent non-warming not being shown, but I could not see how it could hide it all.

    Today I found the other reason.

    They chopped 1998 El Nino in half and it now falls in line with the general long term trend rather than being the end of that trend.

    All other datasets show 1998 as a major feature. In BEST it’s just a bump like the rest, as if 1998 were nothing special.

    Like the pretentious choice of the project name BEST, their algorithm gets called a “scalpel”, the implication being that their use of it will have surgical precision.

    That is again rather silly and misleading PR word-play rather than science.

    Attention Dr Muller, a scalpel is merely a dangerously sharp object. The surgical precision depends on how you use it. What you have actually achieved is an accidental mastectomy.

    • Posted Nov 3, 2011 at 10:55 AM | Permalink

      There is little wonder why BEST could not isolate UHI (or even dUHI) in its processed data. UHI and GW are very low frequency signals. The scalpel technique is a low-cut filter. It eliminates low frequencies when creating the “good” splices. (Rasey here in CA 11/1/11)

      Best’s use of scalpel to cut the record up into short bits of good data and splice them back to get a good long record reminds me of “Emperor’s New Clothes”. We are meant to marvel at BEST’s fine garment of Global Warming record, yet we know that back on the spinning wheels BEST was cutting out every long-term, low-frequency thread of data before it got to the loom. The end product is stitched together lint and fuzz where all the trustworthy bits are in discarded heaps about the feet of the weavers.

      • Posted Nov 3, 2011 at 11:05 AM | Permalink

        Clarification: “BEST was cutting out every long-term…thread”

        “Cutting out” was the wrong verb. “Dicing” is closer to the mark.

        The sought-after Low Frequencies weren’t just removed by the scalpel. They were pureed into high frequency fuzz. The only Low Frequencies that result are from the labor of the tailor.

      • P. Solar
        Posted Nov 4, 2011 at 5:31 AM | Permalink

        They have not cut out all LF; they have kept the long-term trend. I’ve looked at the FFT of their data and there are small amounts of LF and a finite, zero-frequency dT/dt.

        However, they have cut out nearly all intermediate frequencies.

        Here are plots of the 10y and 12y OLS “trends” over the entire record. 10y is about what I would expect. But look at the 12y plot.

        10y: http://tinypic.com/r/21ophd/5
        12y: http://tinypic.com/r/1449ol/5

        According to B-est there was virtually NO 12y period in the last 200 years that had a non-zero trend.

        • HaroldW
          Posted Nov 4, 2011 at 10:28 PM | Permalink

          P.Solar —
          I think you have a scripting problem. I get the following for 10- and 12-year OLS trends:
          10 yr: http://img15.imageshack.us/img15/9977/longhistorybest10yeartr.jpg
          12 yr: http://img33.imageshack.us/img33/2779/longhistorybest12yeartr.jpg
          [NB: these graphs use the *start* of the associated interval as the independent variable.]

          I agree with your comment that it’s odd that BEST does not plateau over the last ~10 years; the trend stays at about the same value (2-3 deg C/century). This is in sharp contrast with the global metrics (GISS, HadCrut3) and also with the CRUtem3 land-only index. All of those show the most recent 10-year trend as near zero, either slightly positive or slightly negative, but much lower than, say, 10 years ago.
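
          For concreteness, here is a minimal sketch of the sliding-window OLS trend calculation being compared above (my sketch, not HaroldW’s actual script; it assumes the series is already loaded as a plain 1-D monthly anomaly array):

```python
import numpy as np

def rolling_ols_trends(anomalies, window_years=10, months_per_year=12):
    """OLS trend (deg C/century) over each sliding window of a monthly series."""
    w = window_years * months_per_year
    t = np.arange(w) / months_per_year             # time in years within one window
    trends = []
    for start in range(len(anomalies) - w + 1):
        seg = np.asarray(anomalies[start:start + w])
        slope_per_year = np.polyfit(t, seg, 1)[0]  # least-squares slope, deg C/yr
        trends.append(slope_per_year * 100.0)      # express as deg C/century
    return np.array(trends)
```

          Plotted against the window start date (as in the linked graphs), a genuinely flat last decade would show the final 10-year trends falling toward zero.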

  44. Richard
    Posted Nov 3, 2011 at 5:34 AM | Permalink

    I do wish we could get the basic terminology correct. People are conflating UHI with dUHI.

    UHI is the well observed temperature differential that occurs with human based modification to the land surface. BEST did not comment on that as far as I can tell.

    dUHI is the difference in the rate of temperature change over time between sites that definitely show UHI and those that supposedly don’t. BEST calculates that this component is small. That is, regardless of which group a site is placed in (UHI/non-UHI), the overall trend in temperatures is similar.

    • Tilo Reber
      Posted Nov 3, 2011 at 1:04 PM | Permalink

      Richard: “BEST observes that this component is small from their calculations.”

      Their test, as designed, is incapable of determining either total UHI or delta UHI. See my post here on Nov 1 at 5:51.

      • Richard
        Posted Nov 3, 2011 at 3:25 PM | Permalink

        I did not comment on the value of their calculations, only that a number of people are confusing UHI (on which BEST has no opinion) with delta UHI (dUHI), a completely different beast.

        Therefore we get things like “BEST must be wrong because they show no UHI” when it should be “BEST calculations show no observable dUHI effect”.

        Note that this does not make any comment on the validity of the dUHI calculation which may well be wrong or right.

        It is just that discussing the right thing helps in clarity.

        • Tilo Reber
          Posted Nov 3, 2011 at 3:49 PM | Permalink

          Richard:
          In order to get a meaningful temperature record it’s necessary to get rid of both UHI and delta UHI. 27% of the thermometers are in urban areas. But urban areas only make up about .5% of the total earth surface. So basically, BEST is wrong because they don’t find the UHI that is really there and they don’t find the delta UHI that is really there. The test that they designed never had the capability to find either.

          BEST is also wrong because they do statistical extrapolation and interpolation between locations that have thermal discontinuities that extend beyond latitude and altitude.

        • Steven Mosher
          Posted Nov 3, 2011 at 10:07 PM | Permalink

          Since Tilo wont answer the question I will ask you.

          If UAH TLT warms at .18C decade
          If BEST thinks the land warms at .28C decade
          If the amplification is ~1 as Gavin suggests (or 1.1 as Pielke suggests)

          Can you estimate the UHI Bias ( upper bound ) by looking at

          .28 – .18/amplification?

          or do you think that TLT can warm while the surface cools and magic happens between the surface and the Trop?

          Put another way.

          If the surface has zero warming since 1979, would you expect the TLT to warm?

        • Tilo Reber
          Posted Nov 4, 2011 at 12:47 AM | Permalink

          Okay, with that wild April 2010 data point ~corrected, I get .288 C/decade for BEST. And we’ll use .173 for UAH. So we get a divergence of about .11 C/decade for the satellite era. So, is that the UHI signal? Who knows? If you are just asking me my opinion, I would guess that more than half of it is UHI. Again just opinion, but I think that some of that .11C is also due to inappropriate kriging across thermal discontinuities. So if we dropped .1C per decade off BEST, would I consider them to be in the ballpark? Probably. Note, that is not the same as saying that the trend since 79 will be the trend for the next one hundred years.

          Mosher: “or do you think that TLT can warm while the surface cools and magic happens between the surface and the Trop?”

          Nope. Who’s claiming that the surface has cooled since 79? I certainly don’t hold that opinion. I do, however, hold the opinion that the claim of no UHI warming is a big blunder.

          There does seem to be a little magic going on though. Spencer is showing the SSTs going down for the last 10 years while the LT is going up. I can buy the ocean going up slower than the surface or the LT, but I can’t buy it going in the other direction. RSS, on the other hand, has the LT going the same direction as the SSTs. But then the differences are small enough and the time is short enough that it’s hard to draw any firm conclusions.

        • Steven Mosher
          Posted Nov 4, 2011 at 3:11 PM | Permalink

          “Okay, with that wild April 2010 data point ~corrected, I get .288 C/decade for BEST. And we’ll use .173 for UAH. So we get a divergence of about .11 C/decade for the satellite era. So, is that the UHI signal? Who knows? If you are just asking me my opinion, I would guess that more than half of it is UHI. Again just opinion, but I think that some of that .11C is also due to inappropriate kriging across thermal discontinuities. So if we dropped .1C per decade off BEST, would I consider them to be in the ballpark? Probably. Note, that is not the same as saying that the trend since 79 will be the trend for the next one hundred years.”

          Some Progress.

          So, if we use RSS we get .288 - .198 = .09C
          and if we use UAH we get .11C

          And you guess that more than half of that, let’s say .05C-.1C, is UHI

          That seems to be a reasonable estimate.

          people sometimes don’t get the importance of making this estimate. what it means is this:

          1. Noise in the trends will hurt your ability to detect this delta
          2. errors in your classification will also hurt you.

          So, if the debates we have, the debates between you and me and don monfort and anthony and willis
          and every other person of good will happens within this framework we can make progress.

          we cant make progress if party A says “there is no UHI” and party B says “Tokyo, Tokyo,Tokyo”
          we cant make progress if we try to discuss UHI and people change the topic– hehe a high schooler can find UHI

          when we bound the size of the thing we are looking for and understand the complexity then we have a hope of progress.

          personally, I have no trouble saying UHI is between .01C and .1C decade, since 1979.
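
          As an arithmetic sketch of the bound being discussed here (my gloss on the calculation, using the trend numbers quoted in this thread; the amplification factor is the assumed free parameter):

```python
def uhi_upper_bound(land_trend, tlt_trend, amplification=1.0):
    """Upper bound on UHI bias (deg C/decade): the land-index trend minus the
    satellite TLT trend downscaled to the surface by the assumed amplification
    factor.  Attributing the entire residual to UHI is what makes it an upper
    bound rather than an estimate."""
    return land_trend - tlt_trend / amplification
```

          With BEST at 0.28 and UAH at 0.18 C/decade and amplification ~1, the bound is ~0.10 C/decade; with RSS at 0.198 it is ~0.09.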

        • don monfort
          Posted Nov 4, 2011 at 4:21 PM | Permalink

          So the .3423C for 1950-2010 that we calculated in our little experiment above, on the pristine areas, is sort of mas o menos in the middle of our acceptable range 🙂 Assuming that prior to 1979, the UHI effect was no less than it has been since 1979. Is that a reasonable assumption? (say yes, and we can move on)

        • Steven Mosher
          Posted Nov 4, 2011 at 6:06 PM | Permalink

          I need to think about whether it could be less.. from the log argument it is not likely to be less.

          One weird thing I noted when I did some studies of 1900 to 1940 was that rural was warming more.

          For that I used historical population data.. might be a land use issue.

          The way I work, I would start with the estimate that it’s the same before as after.

          Then look at arguments for greater and arguments for lesser.

          Then see if those arguments are testable.

          So I wouldn’t assign a > or < up front. I’d say, absent other information, we start by assuming the past was like the present. Then take arguments on both sides.

        • Tony Hansen
          Posted Nov 5, 2011 at 6:57 AM | Permalink

          ‘One weird thing I noted when I did some studies of 1900 to 1940 was that rural was warming more’.

          Could precipitation/cloud cover have some sort of effect on this?

          If the 30 year precip average can be +/- 10%…. but then again is cloud cover any indication of precipitation?

        • P. Solar
          Posted Nov 5, 2011 at 8:03 AM | Permalink

          Perhaps the proportional increase in rural population was more significant than the proportional increase in urban population. If Oke’s log effect is generally valid, a lot of people seem to have got stuck on a simplistic “cities are warmer, therefore they must see more warming”.

          It’s easier for a rural community to double than a metropolis.

          Most lay commentators and a lot of scientists seem to be missing this.

          How does that relate to the studies you did? May be what you saw was not so weird as you think.

        • Manfred
          Posted Nov 5, 2011 at 1:59 PM | Permalink

          I would also guess that rural settings experience larger increases from land-use change (which does not matter as much in an already urban setting), from the paving of dirt roads, or from increased energy use and the installation of AC if the rural station is close to a building.

          I would also guess that stations in poor urban settings have been relocated more frequently, because those settings may have become extremely poor.

        • steven mosher
          Posted Nov 5, 2011 at 5:55 PM | Permalink

          What Land use change would you like to test?

          Looking at the trends of long stations and regressing on land use… yields… bupkis.

          One probably needs to look at historical changes, but before I look, I’d rather have somebody predict which land-use change will be the most powerful. Since there are more than a handful of land-use categories, looking at all combinations isn’t something I want to devote my spare time to unless somebody has an actual testable hypothesis.

        • steven mosher
          Posted Nov 5, 2011 at 5:51 PM | Permalink

          My test went like this.

          select all sites with ZERO population within a 5 arc minute grid in 1900.

          Divide sites according to their 1940 population:

          Zero and non zero

          Calculate a global trend for each subset
          subtract
          take the trend of the difference.

          Buzzznt. I got a nonsense answer, so I decided to focus on building better software tools.
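
          In outline, that test reads like this (a toy sketch with synthetic station names and data, not Mosher’s actual code or the actual population database):

```python
import numpy as np

def trend_of_difference(series_by_station, pop_1940, months_per_year=12):
    """Split stations by 1940 grid-cell population (zero vs non-zero), average
    each subset into a composite monthly series, difference the composites, and
    return the OLS trend of the difference in deg C/decade."""
    zero  = [s for sid, s in series_by_station.items() if pop_1940[sid] == 0]
    grown = [s for sid, s in series_by_station.items() if pop_1940[sid] > 0]
    diff = np.mean(grown, axis=0) - np.mean(zero, axis=0)
    t = np.arange(len(diff)) / months_per_year   # time in years
    return np.polyfit(t, diff, 1)[0] * 10.0      # slope * 10 -> deg C/decade
```

          A clearly positive result would suggest growth-related warming; a nonsense answer is consistent with the noise and classification problems listed elsewhere in this thread.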

        • steven mosher
          Posted Nov 5, 2011 at 6:02 PM | Permalink

          The real issue I have with population is that it is not clearly a direct cause of UHI (except as waste heat).

          My favorite example is a city in Australia that has ZERO population, but should have UHI if physical theory is correct.

          Huh? a city with no population?

          http://en.wikipedia.org/wiki/Moomba,_South_Australia

          In a population database this city has zero official population. But by all other measures it shows up as urban.
          Australia has a number of these that made me pull my hair out.

          The other issue is that population figures come in two varieties: Ambient and resident.

          or daytime and nighttime population.

          Buildings and surfaces don’t move. They are also what causes UHI.

          Airports have zero population.

          dont get me started

        • Geoff Sherrington
          Posted Nov 5, 2011 at 8:32 PM | Permalink

          Steven M, re Moomba
          If you look at Google Earth coords -28.1125, 140.2102 you will see that the weather station is surrounded for up to 1 km away by heavy industrial plant for the compression, purification and pumping of natural gas, whose first step is the separation of CO2 from the natural gas and its release (into the air, I presume), probably with significant leakage of methane. The weather station was erected about the same time as the plant, so the only data are for the industrial period. What’s more, some of the people might be fly-in, fly-out, but they do not all do this daily; they rotate over periods of weeks, typically 3 weeks on, one week off. You don’t leave an installation like this alone in the desert. There is a resident population at the camp; they just go to Adelaide now and then. Finally, the plant and any stray heat it produces run 24/7. It is NOT a candidate for uncontaminated rural by any criterion.

          It is a neat example of why UHI can’t be completely solved by working from population. Each case really has to be classed on its merits, and that’s what I’ve tried to do for you. Local knowledge helps.

        • Posted Nov 6, 2011 at 10:52 AM | Permalink

          “buildings and surfaces dont move. they are also what causes UHI.”

          No, but they do appear over time where they didn’t exist before.

        • Bruce
          Posted Nov 7, 2011 at 2:01 PM | Permalink

          NASA suggests that in desert regions UHI might be low or negative.

          “In a quirk of surface heating, the suburban areas around desert cities are actually cooler than both the city center and the outer rural areas because the irrigation of lawns and small farms leads to more moisture in the air from plants that would not naturally grow in the region.

          “If you build a city in an area that is naturally forested — such as Atlanta or Baltimore — you are making a much deeper alteration of the ecosystem,” said Imhoff. “In semi-arid areas with less vegetation — like Las Vegas or Phoenix — you are making less of a change in the energy balance of the landscape.””

          http://www.nasa.gov/mission_pages/terra/news/heat-islands.html

        • steven mosher
          Posted Nov 5, 2011 at 6:05 PM | Permalink

          it could be any number of things

          1. bad population data
          2. low station count
          3. spatial inhomogeneity
          4. luck
          5. land changes
          6. station changes
          7. UHI is smaller than you all think

          The list of reasons why you don’t find UHI is pretty long.

        • Steve McIntyre
          Posted Nov 5, 2011 at 7:32 AM | Permalink

          when we bound the size of the thing we are looking for and understand the complexity then we have a hope of progress.

          Part of the problem in this field is that people too often frame things in terms of a “null hypothesis” when the issue is the size of the effect. We see this in the Trenberth v. Curry exchange, where Trenberth is engaging in a stale argument about a null hypothesis of “is there AGW?”, as opposed to the live issue of the size and sign of feedbacks.

    • Manfred
      Posted Nov 3, 2011 at 1:09 PM | Permalink

      According to the log population law, dUHI is larger for a population increase from 10 to 20 than for one from 1 million to 1.5 million.

      An increase of 10 people may have more effect than an increase of 500,000. A paved pathway around a sensor may have more effect than an increase of 500,000.

      BEST did not “observe” small dUHI; rather, their method failed to observe it.

      Beyond this, part of land-use change may already increase land temperatures (and perhaps tropospheric land temperatures) on a global scale, due to the vast areas of urbanisation, agriculture, irrigation etc. This component isn’t directly measurable with any thermometer subset, and it is usually not attributed.
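
      To illustrate the log-law point numerically (a sketch only: the coefficient here is an illustrative placeholder, not Oke’s fitted value, which varies by region and wind conditions):

```python
import math

def oke_duhi(pop_before, pop_after, coeff=2.0):
    """Change in UHI (deg C) for a population change under a log law,
    UHI ~ coeff * log10(population).  Under such a law the *ratio* of
    populations is what matters, so doubling a village adds as much
    UHI as doubling a metropolis."""
    return coeff * (math.log10(pop_after) - math.log10(pop_before))
```

      Going from 10 to 20 people gives coeff*log10(2) ≈ 0.60 C with this placeholder coefficient, while going from 1 million to 1.5 million gives coeff*log10(1.5) ≈ 0.35 C, which is the point being made above.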

      • Richard
        Posted Nov 3, 2011 at 3:27 PM | Permalink

        I am unaware of any dUHI papers. I am aware of many UHI papers. Can you help?

        • Manfred
          Posted Nov 3, 2011 at 6:23 PM | Permalink

          dUHI may in part be extracted from dTemp over population or dTemp over population density plots. See here:

          Spencer's UHI -vs- population project – an update


          http://www.drroyspencer.com/2010/03/

          An approximately logarithmic (or sometimes square-root) law is not new in peer review; the square-root form dates back to Mitchell 1953 and the logarithmic form to Oke 1973.

          Such plots, however, cover dUHI only for a thought experiment at a single point in time, when increasing population from A to B. It doesn’t include additional dUHI over time due to increased energy consumption, advent of new technologies such as air conditioning, paving of roads etc.

      • Steven Mosher
        Posted Nov 4, 2011 at 3:27 PM | Permalink

        I’m surprised how few people actually read Oke’s paper to understand the limitations of the log “rule”, and how few actually follow his subsequent writing.

        Some issues.

        1. The actual (physical) size of the cities studied. This is a problem for stations that are located away from a city but given the name of that city.

        2. The limited range of cities Oke studied: populations of 1,000 to 2 million. Extrapolating the “log rule” into the sub-1,000 range would require some actual validation.

        3. The geographical homogeneity of the cities he studied: basically western civilization.

        Some specific Issues with Spencer

        1. It looks like he used GRUMP for population. It’s at alpha release. There is also a question about whether it tracks resident or ambient population. Basically, I’d like to see his data.

        2. The dataset. Not QA’d.

        3. The location data. Not QA’d.

        It’s an interesting approach. However, people forget to ask what the average population is at the BEST very rural stations.

        Want to guess?

  45. Jacob
    Posted Nov 3, 2011 at 9:04 AM | Permalink

    We know for sure UHI is real. It’s also clear that it is terribly difficult, or outright impossible, to quantify it.

    Does it follow that a zero UHI correction – as per BEST – is the correct answer?
    I think it is obvious and certain that the zero UHI correction is wrong. We don’t know the correct amount, but zero is absolutely wrong. (And they actually found -0.19, not zero.)

    Compare that to the aerosol “fudge factor”. The aerosol influence is even more difficult to quantify. Still some number is assigned to it, by a pure guess, and used in models.

    So, the (rhetorical) question is: why do they assign some influence to aerosols, but zero to UHI?

  46. Jacob
    Posted Nov 3, 2011 at 12:52 PM | Permalink

    About defining parameters for “urban”:
    I think two concepts have been mixed here. A rural station that is near the farmer’s house, parking lot, cow and AC exhaust suffers from poor siting (per Surfacestations), not UHI. These are distinct problems, despite being somewhat similar.

    UHI stations should be defined as those that have some density of housing and population in the adjacent area.

    And one doesn’t have to define just one value for these variables (population density and adjacent area). You can try multiple analyses, with different values, and see where it takes you. You don’t need an ironclad definition of urban before you start the investigation.

  47. Richard Sharpe
    Posted Nov 3, 2011 at 10:19 PM | Permalink

    Gneiss accuses McI of skullduggery with respect to demos that the hockey stick can be extracted from random data.

    It has a reference to BEST.

  48. P. Solar
    Posted Nov 4, 2011 at 7:42 AM | Permalink

    An earlier post is awaiting moderation, probably more than one image blocking it. Please refer to that once it passes for more details.

    Here is a plot of 12y trends in B-est data over the full record:

    http://tinypic.com/r/1449ol/5

    The odd thing is, there’s 200 years of noisy data where virtually every possible 12y trend is ZERO.

    Something badly wrong there.

    • Steven Mosher
      Posted Nov 4, 2011 at 5:59 PM | Permalink

      Weird. Something’s wrong.

  49. Posted Nov 4, 2011 at 10:19 AM | Permalink

    I made a couple of enquiries of Spencer at WUWT, none yet answered. Both question the accuracy/repeatability of satellite temperatures:

    “May I enquire what happened to the LT 1 km level temperatures which were discontinued in 2009. These were showing a 0.9 C rise from 1998 to 2009. I see even the early data has been expunged from the discover page.

    http://3.bp.blogspot.com/–0yttATfXBw/TgYPk6xfuVI/AAAAAAAAAIU/MynlTkN63mM/s1600/low.jpg

    Also, is there any record of the reason for the changes to data occurring around July last year [in most data streams] (not massive but noticeable).”

    followed by:

  50. jjthoms
    Posted Nov 4, 2011 at 10:37 AM | Permalink

    followed by:
    Roy Spencer says: November 3, 2011 at 2:07 pm
    JJThoms:
    AMSU ch. 4 *failed* on Aqua, it wasn’t “expunged” from the record. That’s my story, and I’m sticking to it. 🙂

    Channel 4 was not the one I was referring to; it was channel LT, the 1 km temperature.
    The data is no longer available from your discover site.

    I understand that satellites die and you use another. However, in the swap to Aqua many of the same-named channels have large errors compared with the original. If you are measuring the temp at the same height, why is there this error? For example, the data for CH13 shows an error of up to 0.3 C in 2002; see
    http://4.bp.blogspot.com/-3TGf7juf3EM/TgYCVKXHwBI/AAAAAAAAAIQ/00YqMQ_RRGc/s1600/36km.jpg

    Is it possible that the current Aqua temperatures have a 0.3 C error, and if not, how can you prove it?
    Thanks

    As I said above – no answer yet.

    But if NOAA-15 data was believed to be correct for 12 or more years, and then Aqua comes along and its data over the overlapping period (7 years) is different, which is to be believed?

    In the data downloads there is nothing to say what has changed.

  51. Jacob
    Posted Nov 4, 2011 at 6:55 PM | Permalink

    The approach of calculating the bounds of UHI by taking BEST minus UAH is of limited value.
    What you actually say is that you believe UAH but not BEST, so the difference must be the UHI that BEST missed. This amounts to discarding the BEST analysis, and the surface temps as an independent data set, assigning them zero credibility and UAH 100% credibility.
    It’s quite possible that BEST has other errors besides UHI, and that the UAH set also has biases.

    I think that case studies might give us some indication of the UHI effect magnitude.
    Select a few (10 or 50) rural stations and the same number of nearby urban stations, and calculate the difference in trends. You will not be sure that this is the correct global UHI effect, but you’ll get some indication of its possible magnitude, a tangible indication, not a speculative one.

  52. ferd berple
    Posted Nov 5, 2011 at 12:18 AM | Permalink

    “If proportional increases are the same, then the rate of temperature increase will be the same in towns and villages as in cities.”

    This seems so obvious that it is surprising that climate scientists did not account for it in their UHI studies. You need to compare towns that have grown into cities against towns that have remained unchanged, and see if there is any difference in the rates of temperature increase.

    Otherwise, if you simply consider current size, you are effectively trying to prove that big cities are warmer than rural towns, which makes no sense. The cities and rural towns are in different locations and will have different temperatures for reasons unrelated to size, which will mask any attempt to find a correlation.

  53. P. Solar
    Posted Nov 5, 2011 at 8:11 AM | Permalink

    Relating to my suspicions on how B-est splicing would handle a volcanic event:

    http://tinypic.com/r/29ff6zb/5

    It seems that the cooling caused by Mt Agung eruption occurred in the middle of a local climatic hotspot. The notoriously cold winter of 1963 was in fact close to the long term anomaly zero and was followed by an exceptionally warm summer. !??

    hmm.


  55. Robert
    Posted Nov 5, 2011 at 6:19 PM | Permalink

    Steve, SkepticalScience discusses some of the stuff done here (and by Willis). It is what it is, but it might be worth pointing out that the Land-only GISS has been found.

  56. don monfort
    Posted Nov 5, 2011 at 6:49 PM | Permalink

    Steve,

    “My test went like this.

    select all sites with ZERO population within a 5 arc minute grid in 1900.

    Divide sites according to their 1940 population:

    Zero and non zero

    Calculate a global trend for each subset
    subtract
    take the trend of the difference.”

    How about comparing zero to over 1,000?

    I wonder what you would find if you did this by continent (or country). Do a separate analysis for North America, and Africa. Or do a separate analysis for the US and Nigeria. Compare. I have done this in my mind 🙂 and detected a clear UHI effect in North America, particularly in the US. Don’t ask for details. I didn’t write anything down.
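    The test quoted above can be sketched roughly as follows. All station records and field names here are synthetic and hypothetical; this is a minimal illustration of the zero-population split, not the actual code behind that test:

    ```python
    import numpy as np

    def annual_trend(years, series):
        """Least-squares slope, in deg C per year."""
        return np.polyfit(years, series, 1)[0]

    def population_split_test(stations, years):
        """Sketch of the quoted test.

        `stations` is a list of dicts with keys 'pop1900', 'pop1940' and
        'anoms' (an anomaly series over `years`). Keep only sites with
        zero 1900 population, split by 1940 population, average each
        subset into a crude "global" series, subtract, and take the
        trend of the difference.
        """
        base = [s for s in stations if s['pop1900'] == 0]
        grew = np.mean([s['anoms'] for s in base if s['pop1940'] > 0], axis=0)
        stayed = np.mean([s['anoms'] for s in base if s['pop1940'] == 0], axis=0)
        return annual_trend(years, grew - stayed)

    # Synthetic check: sites that gained population warm 0.01 C/yr faster.
    years = np.arange(1900, 1980)
    rng = np.random.default_rng(1)
    stations = (
        [{'pop1900': 0, 'pop1940': 500,
          'anoms': 0.015 * (years - 1900) + rng.normal(0, 0.2, years.size)}
         for _ in range(20)] +
        [{'pop1900': 0, 'pop1940': 0,
          'anoms': 0.005 * (years - 1900) + rng.normal(0, 0.2, years.size)}
         for _ in range(20)]
    )
    print(population_split_test(stations, years))  # close to 0.01 by construction
    ```

    Restricting the subsets by continent or country, as suggested, would just mean filtering `stations` on an extra metadata field before the split.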

    Another interesting field to plow might be the US Great Plains. Compare areas with declining population to the growing cities in and around the Great Plains.

    Link: p25-1137.pdf

    I will do this myself, this evening, when I am into my third glass of scotch.

  57. don monfort
    Posted Nov 5, 2011 at 7:35 PM | Permalink

    I wonder if we can learn anything about UHI by comparing data from Remote Automated Weather Stations to data from urban areas? Not a lot of data in RAWS, but who knows? Not me.

  58. Posted Nov 6, 2011 at 10:33 AM | Permalink

    I have been looking at the output of the “discover” page run by Spencer.
    I believe this is the valid satellite record? If not, then I have wasted my time!

    see here:
    http://climateandstuff.blogspot.com/

    I have recorded the data before NOAA-15 started failing, and have compared this data with the now-used AQUA.

    Despite an overlap of 7+ years, the two data streams have not been “homogenised”.

    In consequence, the absolute temperatures differ by up to 2 K and the slopes of the data vary all over the place.

    Why are the latest values from these proxy temperatures considered a gold standard when the whims of the converter can simply change the data?
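    For what it’s worth, the usual first step when splicing two instrument records with a common period is to estimate the inter-instrument offset over the overlap and remove it. A minimal sketch with made-up numbers (real homogenisation would also fit seasonal and drift terms, not just a constant):

    ```python
    import numpy as np

    def homogenise(older, newer, overlap):
        """Crude single-offset homogenisation of two instrument records.

        `older` and `newer` map month index -> temperature; `overlap` is
        the set of months present in both. Estimate the mean offset over
        the overlap and shift the newer record onto the older baseline,
        so there is no spurious step at the changeover.
        """
        offset = np.mean([newer[m] - older[m] for m in overlap])
        return {m: v - offset for m, v in newer.items()}

    # Example: the newer instrument reads a constant 1.5 K warm.
    older = {m: 250.0 + 0.01 * m for m in range(0, 120)}
    newer = {m: 250.0 + 0.01 * m + 1.5 for m in range(60, 180)}
    adjusted = homogenise(older, newer, range(60, 120))
    print(abs(adjusted[60] - older[60]) < 1e-6)  # True: no step left
    ```

    If the two streams also differ in slope over the 7+ year overlap, a single constant cannot reconcile them, which is presumably why the published merges are more elaborate.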

9 Trackbacks

  1. […] though, most definitely does have very finely honed skills as a mathematician and statistician. His verdict on BEST: I don’t see anything in the BEST corpus that would cause a reasonable person with views […]

  2. […] Closing Thoughts on Best […]

  3. By Closing Thoughts on BEST « Bee Auditor on Nov 3, 2011 at 6:28 PM

    […] Source: https://climateaudit.org/2011/11/01/closing-thoughts-on-best/ […]

  4. […] has been renewed interest in comparisons of different land records, with good discussions here and here for example. There has been quite a bit of confusion over differences between BEST, NCDC, CRUTemp, […]

  5. […] […]

  6. […] discussed a range of problems with recent posts by Willis Eschenbach at WUWT and Steve McIntyre at ClimateAudit. As a part of this we detailed how Eschenbach had presented data from the UAH and RSS Satellite […]

  7. […] […]

  8. By Un-Muddying the Waters « Climate Audit on Nov 7, 2011 at 11:03 AM

    […] few days ago, in a comment at CA, NASA blogger Gavin Schmidt stated that, over land, the ratio of lower troposphere trends to […]

  9. By Surface Stations « Climate Audit on Jul 31, 2012 at 9:05 AM

    […] impact of contamination of surface stations, a point made in a CA post on Berkeley last fall here. Over the continental US, the UAH satellite record shows a trend of 0.29 deg C/decade (TLT) from […]