Hansen and "False Local Adjustments"

Over the last few days, I’ve shown that Hansen et al 1999 illustrated and discussed the effect of the NASA adjustment for two stations (Phoenix, Tokyo) where the NASA urban adjustment yielded the expected adjustment (denoted in these posts as a “positive” adjustment). In an earlier post, I’d observed that negative urban adjustments (i.e. for nonclimatic urban cooling) had occurred in some Peruvian stations, followed by a post carrying out an inventory of all NASA adjustments – the number of negative adjustments in the ROW proved to be only slightly lower than the number of positive (expected) adjustments.

Unfortunately Hansen et al 1999, the primary reference, did not contain a systematic discussion of any sites with negative adjustments. However, Hansen et al were aware of the existence of negative urban adjustments and it will now be useful to review the original account of the present-day NASA adjustments.

Here’s an important paragraph from Hansen et al 1999:

Examination of this urban adjustment at many locations which can be done readily via our website shows that the adjustment is quite variable from place to place and can be of either sign. In some cases the adjustment is probably more an effect of small-scale natural variability of temperature (or errors) at the rural neighbors rather than a true urban effect. Also the nonclimatic component of the urban temperature change can encompass many factors with irregular time dependence such as station relocations and changes of the thermometer’s environment, which will not be well represented by our linear adjustment. Such false local adjustments will be of both signs and thus the effects may tend to average out in global temperature analyses but it is difficult to have confidence in the use of urban records for estimating climate change. We recommend that the adjusted data be used with great caution, especially for local studies.

Later in the paper, Hansen added the following:

local inhomogeneities are variable; some urban stations show little or no warming, even a slight cooling relative to rural neighbors. Such results can be a real systematic effect e.g. cooling by planted vegetation or the movement of a thermometer away from the urban center or a random effect of unforced regional variability and measurement errors. Another consideration is that even rural locations may contain some anthropogenic influence.

I didn’t notice any other relevant discussions, but will amend this post if any other relevant quotes are brought to my attention.

Let’s review the negative adjustment of 3.3 deg C at Puerto Maldonado in the context of these comments.

First, I submit that a negative adjustment of 3.3 deg C rises above a “slight cooling relative to rural neighbors”. In an engineering-quality assessment, such results would require specific investigation and explanation.

Second, while “cooling by planted vegetation” can be a feasible mechanism for a type of negative urban heat island in desert settings (e.g. here), it seems implausible that this has affected Puerto Maldonado, which is located on an Amazon tributary, or that this is relevant to the vast majority of sites receiving negative urban adjustments.

Third, while we know virtually nothing of the metadata for Puerto Maldonado, it seems unlikely that the cooling relative to “rural” neighbors is due to the “movement of a thermometer away from the urban center”.

Fourth, while I know nothing of the local particulars of Puerto Maldonado climate relative to (say) nearby Cobija, I’d be amazed if there was a 3.3 deg C swing due to “unforced regional variability”. One could certainly not just assume this without some kind of proof.

It seems by far the most likely that the inconsistency between Puerto Maldonado and Cobija is due to something in the station histories – some change in instrumentation at Puerto Maldonado, or perhaps at the Bolivian sites, or perhaps both. In the US, where extensive metadata is available, step changes for station moves or instrument changes are included in the various USHCN adjustments (TOBS, MMTS, SHAP, FILNET); no such adjustments are done for the Peruvian and Bolivian stations, and in any event such a step-change adjustment is not a “true urban” adjustment.

In Hansen’s terminology, this adjustment would be a “false local adjustment”.

“Averaging Out”

Hansen postulated in very guarded language that these “false” local adjustments would cancel out:

Such false local adjustments will be of both signs and thus the effects may tend to average out in global temperature analyses.

In an engineering-quality study (as opposed to an exploratory scientific article), it would obviously be unacceptable to leave matters in such a state. An engineer would have been obliged to determine whether the effects actually did average out and would not have been permitted to simply leave the matter hanging.

Note that Hansen did not limit the concept of “false local adjustments” to negative adjustments. He clearly contemplated and stated that false local adjustments (i.e. adjustments that did not adjust for urban effect but for local station history issues) would be “of both signs” and, as noted above, that false positive and false negative local adjustments would “average” out.

In my previous post, I calculated the total number of positive and negative NASA adjustments. Based on present information, I see no basis on which anything other than a very small proportion of negative urban adjustments can be assigned to anything other than “false local adjustments”. Perhaps there are a few incidents of vegetative cooling resulting in a true physically-based urban cooling event, but surely this would need to be proved by NASA, if that’s their position. Right now, as a first cut, let’s estimate that 95% of all negative urban adjustments in the ROW are not due to “true urban” effects, i.e. about 1052 out of 1108 are due to “false local adjustments”.

On the reasonable assumption that there will be as many positive “false local adjustments” as negative ones, this will yield a total of approximately 2100 “false local adjustments” out of a total population of 2341 adjustments (disregarding bipolar adjustments). In other words, there is a valid case that about 90% of all NASA adjustments are “false local adjustments”.
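The first-cut arithmetic above can be checked with a minimal sketch. The 95% figure and the equal-split assumption are, as stated, rough estimates rather than established facts; the counts 1108 (negative ROW adjustments) and 2341 (total adjustments) come from the previous post’s inventory:

```python
# Back-of-the-envelope check of the "false local adjustment" estimate.
# Counts come from the previous post's inventory of NASA adjustments.
negative_row = 1108   # negative urban adjustments in the ROW
total_adj = 2341      # total adjustments, disregarding bipolar cases

# First cut: assume 95% of negative adjustments are "false local adjustments".
false_negative = int(0.95 * negative_row)   # -> 1052

# Assume an equal number of positive "false local adjustments".
false_total = 2 * false_negative            # -> 2104, i.e. roughly 2100

share = false_total / total_adj
print(f"{false_negative} false negatives, {false_total} total, {share:.0%} of all adjustments")
# prints "1052 false negatives, 2104 total, 90% of all adjustments"
```

Under these assumptions the estimate that about 90% of all NASA adjustments are “false local adjustments” follows directly.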

If the purpose of NASA adjustments was to do station history homogenizations (a la USHCN), then this wouldn’t matter. But the purpose of the NASA adjustments was to adjust for the “true urban” effect. On this basis, one can only conclude that the NASA adjustment method is likely to be completely ineffective in achieving its stated goal. As other readers have observed (and anticipated), it appears highly likely that, instead of accomplishing an adjustment for the “true urban” effect, in many, if not most cases, the NASA adjustment does little except coerce the results of one poorly documented station to results from other equally poorly documented stations, with negligible improvement to the quality of whatever “signal” may be in the data.

This does not imply that the NASA adjustment introduces trends into the data – it doesn’t. The criticism is more that any expectation of using this methodology to adjust for urban effect appears to be compromised by the overwhelming noise in station histories. Needless to say, the problems are exacerbated by what appears to be poor craftsmanship on NASA’s part – pervasive use of obsolete station versions, many of which have not been updated since 1989 or 1990(!), and use of population data that is obsolete (perhaps 1980 vintage) and known to be inaccurate.

Some readers have wondered why Hansen even bothered with the entire NASA adjustment project. That seems a very reasonable question. There are a lot of stations where there is no adjustment – wouldn’t it make sense just to use these stations? This takes you into the metadata problem – right now Hansen shows a lot of ROW “rural” stations, but how many of them are actually tropical towns and cities?

I’ll discuss this further tomorrow.


  1. BarryW
    Posted Mar 1, 2008 at 7:45 PM | Permalink | Reply

    Yet Hansen threw out Northern Ca stations that showed anomalous negative trends. Why adjust some and reject others?

  2. steven mosher
    Posted Mar 1, 2008 at 7:56 PM | Permalink | Reply

    Sometimes I wonder if they regret telling folks to just read the papers. The funny thing is every engineer who reads this knows he would have to examine and categorize the effects of the algorithm
    on the raw data and do exactly what you are doing. The glibness with which they assume that “errors will zero out” is head-shakingly sad. That’s what you assume, but you actually have to prove it, not assert it. I’ve seen this sleight of mind in Jones and Hansen and others. It’s a vacation pass from doing grunt work. A variant of the cool parks protocol.

  3. steven mosher
    Posted Mar 1, 2008 at 8:08 PM | Permalink | Reply

    RE 1. he deleted portions of records that showed anomalous early 20th century NORCAL cooling trends. Effectively this creates a less steep warming trend. H2001’s removal of this data is inconsequential. The problem is not documenting the rationale in each and every case. I looked at one case, Lake Spaulding, and on its face, the station had trends that were divergent from its rural neighbors, and
    UNNATURAL – a nearly linear decrease indicative of some kind of non-climatic, non-weather
    causation. In any case, for every station snipped a proper study would have documented the
    rationale used for the snipping.

    I dont know if its wrong, but it’s sloppy. Like my typing.

  4. Posted Mar 1, 2008 at 8:40 PM | Permalink | Reply

    Okay, another rookie question. Sorry, but I’ve spent way too much time reading here today, my head hurts, and I need to ask one final question before calling it a night. And if there is a more appropriate place for me to ask the following, feel free to move it there and tell me where to go to find it.

    If Hansen is contorting raw data in an effort to compensate for UHI, does that mean he rejects the conclusion in Jones et al 1990 that UHI’s effect is negligible? Because if Jones 1990 is right, which I doubt, then there is no reason for Hansen’s statistical contortions. However, if Hansen is right that UHI needs to be compensated for, then Jones et al should be ignored (probably should be regardless of what Hansen thinks).

    They can’t both be right. Either UHI matters, or it doesn’t.

    Jones and Hansen seem to be members of the same team, yet their work goes in opposite directions, but they arrive at the same destination.

    Has the mainstream climate science community discussed the contradiction between the two? Is it considered a contradiction? Am I making more of it than what is actually there?

    And yes, my head is spinning.

    Finally, a word of praise for Mr. McIntyre. I have read nearly every one of the now 284 comments over at Tamino’s Open Mind blog and couldn’t help but be overwhelmed by the vitriol oozing out of that place, and yet Steve McIntyre still provides a link to Tamino, which is not reciprocated. Just that one gesture alone tells me who has an open mind and who doesn’t.

    Thanks for maintaining a civil tone on the issue and for tolerating naive and uninformed questions from rookies.

  5. Barclay E. MacDonald
    Posted Mar 1, 2008 at 8:51 PM | Permalink | Reply

    I would expect that a conclusion in Hansen ’99 like “…it is difficult to have confidence in the use of urban records for estimating climate change. We recommend that the adjusted data be used with great caution…”, would be closely followed by diligent efforts to monitor, maintain and improve the data collection and analysis at all sites. So far I have not seen evidence of any such new and focused programs. Am I missing something obvious? It does appear they are “working hard” with the analysis.

  6. steven mosher
    Posted Mar 1, 2008 at 8:59 PM | Permalink | Reply

    RE 4. this is a puzzle. There are two claims that get made.

    1. ONLY rural sites are used in GISS final numbers so dont worry.
    2. We adjust Urban to match rural so dont worry.

    Well, if #1 is true WHY INGEST NON RURAL DATA? to exercise the bits?
    Does testing the urban adjustment in the US mean that Hansen uses urban adjusted data
    in the ROW? Did he prove the approach in the US and then use the adjusted data in the ROW?
    Seems weird. Nothing can be concluded from this.

    Here is what one wants: When the GISS chart goes up for 2008 one wants the list of stations
    that are used for that year. Not the list of stations (station_list.txt) read into the program.
    But the stations actually used.

    Steve: In my collation giss.info.dat, if you look at the column end_adj, the ones with 2008 are the current contributors.

  7. jae
    Posted Mar 1, 2008 at 9:21 PM | Permalink | Reply

    What a messy maze. Hope someone can find a way through it.

  8. Jim Clarke
    Posted Mar 1, 2008 at 9:33 PM | Permalink | Reply

    It sounds like the bottom line here is that you have falsified the contention that surface temperatures have been adjusted for UHI. It also sounds like a true determination of UHI cannot be accomplished without a detailed study of the histories of every site in the study. Making broad-based assumptions and applying a simple algorithm based on population, neighboring stations or anything else, would be useless considering the large potential variability of non-climate factors at individual sites, whether urban or rural.

    I believe that more detailed studies have identified that the UHI effect is real, but the above indicates that we can not assume the value of the UHI from one location to the next.

    Can we conclude that the surface temperature trend is still contaminated with a spurious warming due to the UHI effect, even if we can not quantify it?

    While this does not invalidate the AGW crisis theory one way or another, I have to agree with Mr. McIntyre that it does provide another example of the poor scientific practices that are readily embraced by the AGW crisis community to support their position.

  9. Posted Mar 1, 2008 at 9:41 PM | Permalink | Reply

    Steve mentioned population data that are out of date. Even with up-to-date population data, there is still a problem using the data: in most countries, assessment of urbanization depends both on the population data and the stability of administrative boundaries.

    Cities can be under-bounded (spilling over into rural areas) or over-bounded (containing rural areas), characteristics that change with time. To determine the degree and extent of urbanization, the researcher needs a time series of maps showing boundaries for the city and nearby counties.

    Urbanization can be depicted by a series of maps showing density of population. However, in a study of urbanization in Ohio during the period 1860 to 1920, it could not be determined whether the population increase for a specific county arose through the growth of scattered unincorporated urban centers, ribbon development along roads or expansion and overspill of the built-up area of the bordering city.

    In many countries data for counties often masks the existence of unincorporated urban areas. Even when cities can be identified their urbanization history cannot be determined. Census data for the island of Java is available by ward (village/desa), with about 400 wards per county. Processing the data is an onerous task. Similar considerations apply in other countries, including those in Europe and Latin America.

    Urbanization can be modeled based on data for small areas, but the geography of an area must be known: some cities expand in the direction of groundwater, such as Cairo.

    Even in rural areas land-use change affects surface temperatures. In developing countries, rural sites used as controls may be biased by clearing for cropland. In the US, some sites may be biased by rural depopulation.

    When it comes to measuring variations in the urban heat-island effect, the noise in the population data swamps the signal. In my opinion, surface temperature series reflect population change, but in non-systematic ways. Adjusting such data is equivalent to applying a random variable.

  10. John Norris
    Posted Mar 1, 2008 at 10:59 PM | Permalink | Reply


    First, I submit that a negative adjustment of 3.3 deg C rises above a “slight cooling relative to rural neighbors”. In an engineering-quality assessment, such results would require specific investigation and explanation.

    I agree. Adjustments are okay if you can substantiate them with some test results or analysis, AND the adjustments are not overwhelming in scope. If you are adjusting an input 3.3 degrees, and you think you are getting accuracy from the entire data set to tenths of a degree, you need to think again.

    You need a UHI test or UHI analysis to show that your adjustments are good to tenths of a degree; particularly if you are adjusting 47% of your inputs.

    Show the UHI test or UHI analysis that provides that accuracy, or go back and get data that accurate. No further review required until then.

  11. Steve McIntyre
    Posted Mar 1, 2008 at 11:00 PM | Permalink | Reply

    #6. Can you figure out where the Summary of Day information comes from?

  12. Gary Luke
    Posted Mar 1, 2008 at 11:01 PM | Permalink | Reply

    I want to thank you all for confusing the hell out of me. I used to think that temperature was something that could be measured. Now it doesn’t seem to matter whether the nominal temperature is in Fahrenheit or centigrade, with a supplementary optional positive or negative twist. Down here in Australia we used to say that we don’t have weather, we only have climate, but now I don’t know even where they’ve hidden that. The Y2K apocalypse folly at least had a cut-off date. When do we get to the end of this horrible warm period that’s causing all the cold.

    Future sociologists are going to write PhDs about our era – about how we rediscovered mediaeval superstitions and transferred them onto something we used to call science.

    Thank you all for doing all that valuable (and entertaining) slog work. This lurker really appreciates it.

  13. bkc
    Posted Mar 1, 2008 at 11:36 PM | Permalink | Reply

    This does not imply that the NASA adjustment introduces trends into the data – it doesn’t.

    However, if the false negative adjustments cancel the “true” positive UHI adjustments, then the spurious positive trend caused by UHI “may” not be corrected.

  14. Steve McIntyre
    Posted Mar 1, 2008 at 11:51 PM | Permalink | Reply

    #15. That’s the issue and I’ll discuss it some more tomorrow. It looks to me like the NASA adjustments just shuffle things around a little so that if there was UHI in the original data set, it would remain in the “adjusted” data set. This post does not prove the existence or non-existence of UHI in the original data set – merely that the supposed adjustment does not seem to do the job that it was supposed to do.

  15. Bernie
    Posted Mar 2, 2008 at 12:06 AM | Permalink | Reply

    “Shuffle things around a little” – but how do we know what the impact of the UHI adjustments actually is?

  16. Brooks Hurd
    Posted Mar 2, 2008 at 1:35 AM | Permalink | Reply

    Steve M,
    Another issue of which Hansen seems to be unaware is the error that his adjustments add to the data set. Adjustments could be used to reduce errors, but only after the sort of engineering evaluation which you and Steven M have mentioned. Without performing a case by case analysis prior to adjusting the data, it is nothing but an assumption on Hansen’s part to say that the errors “may tend to average out in global temperature analyses.”

    All measurements have errors. Instruments have errors. There are errors in reading instruments. We have discussed the errors added to the record by moving an instrument or changing the time of observation. However, when Hansen blindly allows records to be adjusted then he is adding large errors into the temperature record. In my opinion, until it can be proven otherwise (and not by arm waving) the adjustments themselves must be considered to be errors.

  17. Bob Koss
    Posted Mar 2, 2008 at 3:31 AM | Permalink | Reply

    Here are a few stations in the ROW still being called rural. They weren’t even rural 20 years ago.

    id            city           country   lat.    lon.    population
    228485000004  PRACHUAP KHIR  THAILAND  11.83   99.83   494000
    130602650001  OUARZAZATE     MOROCCO   30.93   -6.90   56000
    205510760002  ALTAY          CHINA     47.73   88.08   89000
    205565710002  XICHANG        CHINA     27.90   102.27  127000

  18. RW
    Posted Mar 2, 2008 at 3:40 AM | Permalink | Reply

    I see frequent statements here that such-and-such wouldn’t happen in an engineering-quality study, and so-and-so doesn’t meet engineering standards. These statements are all completely meaningless. You never define what ‘engineering quality’ is, or why climate science (or any science) should meet it. Can you state what the standard is, please, and list some example scientific papers that you consider to meet it?

  19. James Erlandson
    Posted Mar 2, 2008 at 6:26 AM | Permalink | Reply

    Two engineering-quality studies for your consideration:
    NTSB Aircraft Accident Brief concerning the in-flight breakup of a motorized glider. (large PDF)
    Simulation of Crack Propagation in Steel Plate with Strain Softening Model. (another large PDF)

  20. Ron Cram
    Posted Mar 2, 2008 at 7:00 AM | Permalink | Reply


    I agree. GISS claims UHI is adjusted out, but the adjustment is reversed through unwarranted adjustments of the opposite sign. In your previous post, your analysis looked at the size of the problem. I have one other question. Here it is:

    Assuming that 80% or 90% or 95% (you pick the number) of these opposite sign adjustments are unwarranted, and selecting those stations randomly on the global grid – how much of late 20th century warming is a product of unwarranted adjustments?

    In other words, is this a huge problem or is it statistically insignificant? My guess is that it is fairly significant, but I would like to see your analysis.

    Keep up the great work, Steve!

  21. Tim Ball
    Posted Mar 2, 2008 at 8:24 AM | Permalink | Reply

    Ironically, there is also a climate change problem with any urban heat island effect (UHIE) especially in the middle latitudes (30° to 65°). In the late 1960s and early 1970s we did heat island studies in Winnipeg, Manitoba. It provides an ideal location because it is essentially an isotropic plain with little topographic effect. The weather station is on the northwest side of the city and has been gradually engulfed as the city grew. We found the Central Business District (CBD) manifested the greatest UHIE (3.3°C average at time of publication), however this shifted through the day as the wind moved the warm dome downwind. Over time the wind patterns shifted as the Circumpolar vortex (improperly called the Jet stream) shifted from zonal to meridional flow. I would suggest a standard UHIE adjustment is likely invalid unless changes in wind pattern are considered.
    Also at some point the wind patterns are affected by the local circulation created by the UHIE dome itself. The dome becomes like an adiabat with warm air rising in the centre drawing air from outside. A critical factor in the pattern of the shifting of the dome downwind and the establishment of the small scale adiabat was the strength of the large scale pressure pattern winds. Changnon first identified the shift of the UHIE downwind with his studies of La Porte Indiana. This raises a question about the location of the rural stations relative to a large urban center and their suitability for comparative studies. Atkinson studied the UHIE on precipitation patterns in London and they have implications for temperature measures because of moisture variations and evaporative cooling.

    So the application of a standard adjustment is simply unjustified for urban stations as comment #9 says, but for some rural stations as well and then the adjustments would have to change over time with changing wind conditions.

  22. Paul Linsay
    Posted Mar 2, 2008 at 9:19 AM | Permalink | Reply


    Could you plot up the average adjustment so that we can see the net effect of all the tinkering and test Hansen’s claim of them randomly averaging to zero? Maybe three plots showing the US, Europe, and the ROW?

  23. Scott-in-WA
    Posted Mar 2, 2008 at 9:21 AM | Permalink | Reply

    RW #20 I see frequent statements here that such-and-such wouldn’t happen in an engineering-quality study, and so-and-so doesn’t meet engineering standards. These statements are all completely meaningless. You never define what ‘engineering quality’ is, or why climate science (or any science) should meet it. Can you state what the standard is, please, and list some example scientific papers that you consider to meet it?

    Where I come from in the world of nuclear systems management, the various facets of the science itself and the techniques used to pursue and document that science are all one thing.

    For purposes of licensing a nuclear system, a weakness or a failure in the process and methodology is held to be a failure or a weakness in the science and technology itself.

    Several months ago, National Geographic had an article on global warming which included a map, based on Hansen’s NASA data, purporting to illustrate the extent of worldwide global warming using a continuous topology of temperature anomalies across the globe, so that one could look at any region of the globe and find the extent of the temperature anomaly for that particular region.

    For the average AGW-interested person looking at this very scientific-looking map, there is an inherent assumption that such a map could be produced only if there are sufficient observations to plot the data, and also that a thoroughly professional job has been done from data collection on through data analysis and data trending. (Would we expect anything less of NASA?)

    It is rational and appropriate to ask the question, to what extent has data interpolation, data interpretation, and data estimation been used to produce this finely-detailed topological map of worldwide temperature anomalies?

    If the answer is that very significant data interpolation, very significant data interpretation, and very significant data estimation has been done to produce this map — then the processes, methods, and techniques used to perform those data interpolations, data interpretations, and data estimations are fair game for auditing and criticism — both from a scientific perspective and from an analysis-process / business-process perspective.

    This is what ClimateAudit is all about: the interaction of the science itself with the methods, processes, and techniques used to pursue and document that science.

    As this web site evolves, it is becoming ever more apparent that if we are to take the professional pursuit of AGW science as seriously as we do the professional pursuit of nuclear science and technology, then AGW science and the methods used to pursue that AGW science are all one thing.

  24. Patrick Henry
    Posted Mar 2, 2008 at 10:46 AM | Permalink | Reply

    A very interesting statistic would be what percentage of GISS sites demonstrate final adjustments (TOBS + Urban + etc.) which increase the slope relative to the raw data.

    I would do it myself, but don’t know where the data is.

  25. BarryW
    Posted Mar 2, 2008 at 11:15 AM | Permalink | Reply

    Re 3

    Except that “unnatural” is a judgement call not a verifiable fact. Without having ground truth it’s just an assumption, even if it’s based on expert opinion. In other cases the sites were adjusted based on the surrounding stations rather than deleted. My complaint is that there is not a rigorous process for determining what to do. Do you know of cases where Hansen deleted stations that showed anomalous warming or are they only adjusted? Throwing out data just because you don’t like it very easily leads to ‘cherry picking’. But then I don’t have a positive reaction to adjustments either. It strikes me as analogous to having the IRS adjust the taxes you owe based on a comparison of your income trend relative to your neighbors’ income trends.

  26. Schlew
    Posted Mar 2, 2008 at 12:20 PM | Permalink | Reply

    … The point is well taken that depending on how you present your data, you can take either side in this debate. Certainly both sides of the argument are guilty of excluding inconvenient data while emphasizing the data that supports their claim. I believe the express purpose of CA has been to identify the misuse or abuse of such data, and toward that goal, I believe it has been marvelously successful.

  27. Bruce Foutch
    Posted Mar 2, 2008 at 12:49 PM | Permalink | Reply

    re #30
    Hello everyone at CA. I am a lurker of a few years and have only recently sent in a couple of posts containing links to information that I thought might be of interest to you. I wish I could provide more substantive responses, but I’m not a scientist or statistician like many of you and I am hesitant to add personal commentary.

    However, since I have extra time each day, I am happy to spend that time researching for information that might be pertinent to your discussions. I hope you will let me know whenever I am out of line or off track. I appreciate all the great conversations here at CA and hope to learn enough to provide a balanced presentation about climate change at my workplace.

    I have a question that seems to fit in this thread. Does anyone have any thoughts on the U.S. Climate Reference Network? If it becomes operational, could it provide the kind of “Ground Truth” that Barry W speaks of?

    Program Overview – “The U.S. Climate Reference Network (USCRN) is a network of climate stations now being developed as part of a National Oceanic and Atmospheric Administration (NOAA) initiative. Its primary goal is to provide future long-term homogeneous observations of temperature and precipitation that can be coupled to long-term historical observations for the detection and attribution of present and future climate change.”


    Link to their site selection guidance:


    Thank you and hello to the other Bruce (#29)

  28. Steve McIntyre
    Posted Mar 2, 2008 at 12:51 PM | Permalink | Reply

    I’ve deleted OT posts on trends.

  29. Bruce Foutch
    Posted Mar 2, 2008 at 12:53 PM | Permalink | Reply

    seems the post numbering was changed when I submitted what now shows as #28. My re #30 should now be re #26. Sorry.

  30. Eager Beaver
    Posted Mar 2, 2008 at 12:58 PM | Permalink | Reply

    Arguably, what the S&ST graph shows (#8) is that there really is no material bias in the surface temp. record due to recent station moves, changes, or surrounding land use changes. Otherwise, we would see a far different relationship between the data. Moreover, the UHI is arguably overstated as well, not only because of the obvious correlation between the data — but for other reasons, such as the fact that warming has been observed in the arctic and over the seas. There are other indications that this is so as well, such as the sudden retreat of tropical glaciers at points spanning the globe.

  31. Allen C
    Posted Mar 2, 2008 at 1:14 PM | Permalink | Reply

    snip – Allen, the problem is that the post is a little too angry. I know what bothers you and lots of people agree but for third parties, it tends to be piling on a bit.

  32. steven mosher
    Posted Mar 2, 2008 at 1:38 PM | Permalink | Reply

    RE 30. Beaver. knaw on this.

    This excercise of looking at Hansen’s work has ONE Goal.

    Cajole the climate science community to step their game up a notch WRT documentation
    and explanation. Refusing to release data, refusing to release code, refusing to explain
    the studies in engineering detail is unacceptable. It’s unacceptable to a host of men
    and women who work day in and day out documenting their work. Climate scientists don’t
    get a pass on this. The theory may well be true, but that doesn’t give you licence to ignore
    your scientific responsibilities.

    Here is a very simple test. H2001 deleted the Crater Lake weather station from its calculations.

    You can go look at that station on surface stations. You can view the area, the station history,
    and all the data.

    ONE THING you can’t figure out or explain or defend is why H2001 deletes the entire record.

    1. Does it make a material difference? NO.
    2. Is that the point? NO.

    The point is that H2001 has nearly ZERO traceability WRT the data decisions that are made.

    Let’s hold the FDA to the GISS standard. Let’s hold car mileage estimates to the GISS standard.

  33. henry
    Posted Mar 2, 2008 at 2:21 PM | Permalink | Reply

    steven mosher said (March 2nd, 2008 at 1:38 pm)

    Let’s hold the FDA to the GISS standard. Let’s hold car mileage estimates to the GISS standard.

    GISS according to Car Mileage standards: “Your temp record may vary, depending on what you’re driving at…”

  34. Harry Eagar
    Posted Mar 2, 2008 at 2:29 PM | Permalink | Reply

    I continue to read CA on Hansen the way you keep probing a sore tooth with your tongue.

    From a newspaperman’s perspective — that’s what I am — I have no hesitation in describing the GISS statements as [snip]

  35. M. Jeff
    Posted Mar 2, 2008 at 2:31 PM | Permalink | Reply

    steven mosher, March 2nd, 2008 at 1:38 pm says:

    Lets hold the FDA to the GISS standard.

    There may be varying standards depending on just what work is being done. For example, my work at the FDA was supposed to be of sufficient quality and documented to such an extent that it could withstand legal examination. My own claim to “fame” is that Judge Sarah T. Hughes, who gave the oath of office to LBJ, once asked me to speak louder. The FDA did win that case.

  36. steven mosher
    Posted Mar 2, 2008 at 5:23 PM | Permalink | Reply

    RE 35. That’s a good one. This SOB http://www.af.mil/bios/bio.asp?bioID=5383

    Stopped me in mid-briefing and asked “Are you blowing smoke up my ass, son?”
    Moshpit: “Not yet, general. Be patient.”

    My boss pulled me aside later. “What were you thinking, Moshpit!!!!!!!”
    “I was thinking, what should I say? And I had it down to two comebacks:”

    1. Not as far as you know general
    2. Not yet general, be patient.

    I thought the latter would get me in less trouble.

    hehe. true story.

  37. Gary
    Posted Mar 2, 2008 at 8:05 PM | Permalink | Reply

    Mosh,hehe indeed. So when’s the autobiography coming out? You tell a good story.

  38. steven mosher
    Posted Mar 2, 2008 at 9:56 PM | Permalink | Reply

    RE 37. Nobody would believe it.

    My old college buddy Knepper would

  39. Brooks Hurd
    Posted Mar 3, 2008 at 6:34 AM | Permalink | Reply

    Re: 18
    With more than 30 years of experience in engineering, let me put it succinctly: an engineering quality study is one which, when completed, will result in a product or service that will not hurt anyone or, much worse, kill someone.

    If you want a practical example, what do you suppose the result would be if Boeing designed aircraft using the sort of data manipulation, cherry picking, and tweaking of models to obtain the forecast you would like to see? In other words, how many planes would be flying if aircraft were engineered with the quality that characterizes much of what I see in climate science?

  40. RW
    Posted Mar 3, 2008 at 6:49 AM | Permalink | Reply

    Brooks Hurd – your statement doesn’t define the standard, or give an example of a scientific paper that meets it. Without this information it’s impossible to judge what McIntyre means by ‘engineering standard’. In engineering, a certain approach is taken, which, incidentally, doesn’t always work. Why should that approach be in any way applicable to climate science? The demand doesn’t really make sense.

    I’m sure an engineering quality document rigorously defines its terms so that there is no ambiguity. I think McIntyre’s constant demand for ‘engineering quality’ falls below the very standards he is demanding. The least he could do, if he wants something, is to clearly explain exactly what that thing is.

    Steve: I apologize for not defining the term. The observation was made on the basis of having read engineering reports. The look and approach of them was quite different than papers in academic journals. As to what defines the difference? I’m not an engineer myself. I suspect that there are differences in the codes of practice that lead to the difference but I haven’t studied the matter sufficiently to be able to provide a formal definition myself. I think that I can illustrate a few cases showing instances where the approach differs. As to your question, I can’t think of any science article – especially ones in Nature and Science – that looks remotely like an engineering report. The most obvious difference is length. An engineering report will run hundreds or thousands of pages, while an article in Nature will be 3-4 pages. But it’s not just length. I’ll try to illustrate the difference in connection with the Hansen material in a forthcoming post.

  41. Brooks Hurd
    Posted Mar 3, 2008 at 8:16 AM | Permalink | Reply


    Ordinarily when there is a disaster such as the one you linked, it is the result of someone taking short cuts. Based on the projects on which I have worked on 3 continents, my suspicion is that it was not an engineering failure as I infer from your post, but rather the failure of at least one contractor to follow the engineering design. Engineers design conservatively because that way, things are less likely to fail. Engineers check and re-check their calculations to reduce the likelihood of problems.

    If Hansen had conservatively designed his data handling code and checked and re-checked his calcs, this topic would not be under discussion.

    Steve: Again, there are other differences in approach besides coding and, while there are many readers interested in software, there are other relevant differences.

  42. James Erlandson
    Posted Mar 3, 2008 at 9:11 AM | Permalink | Reply

    Brooks Hurd and RW: You may be interested in the Federal Highway Administration interim report (pdf) on the I-35W Mississippi River bridge collapse.

    This is an example of a process (design, build, use and maintain a bridge) that was started more than 40 years ago, failed dramatically and is now being systematically reviewed to learn why it failed and improve the process. Steve M. and many CA readers believe that the process used to forecast climate is not as well defined as that used to build and maintain bridges.

  43. Michael Jankowski
    Posted Mar 3, 2008 at 9:42 AM | Permalink | Reply

    One funny thing is that some engineering codes have been changed because of the anticipated impacts of AGW, with many more to follow. So “engineering reports” are being based on criteria which come from sources (often GCMs) which do not meet “engineering report” level standards.

    For instance, I’ve seen where clearance requirements for new bridges over some waterways have been increased up to 1m in anticipation of future sea level rises due to AGW. It seems like a cart before the donkey sort of situation.

  44. Posted Mar 3, 2008 at 11:28 AM | Permalink | Reply


    One funny thing is that some engineering codes have been changed because of the anticipated impacts of AGW, with many more to follow. So “engineering reports” are being based on criteria which come from sources (often GCMs) which do not meet “engineering report” level standards.

    This has always been the case. Engineering design dictates that one must be conservative in estimating known or anticipated dangers that might present themselves during the anticipated design lifetime of whatever you are creating. It’s usually better to boost a bridge 1 m rather than build it and find it underwater later. Similarly, one anticipates earthquakes that might never arrive, hurricanes that may never hit, and whatnot.

  45. MarkW
    Posted Mar 3, 2008 at 12:43 PM | Permalink | Reply


    Bridges etc, are designed for the worst case foreseeable conditions. Buildings in places without a history of strong earthquakes are not built to the same earthquake standards as are buildings in California. Buildings 100 miles inland aren’t built to withstand storm surge.

    The fact that 1m is being added to the height of this bridge means that somebody out there has decided that the risk of sea level rising is great enough to justify the extra cost.

    Michael’s point is that the actual risk of sea level rise is dependent upon the output of these climate models.

    I could write a model that proves that magnitude 9.5 earthquakes are possible in Idaho, but nobody would take me or my model seriously.

    BTW, Al Gore says we should be worrying about 20 meter increases, not 1 meter.

  46. Posted Mar 3, 2008 at 1:26 PM | Permalink | Reply

    Of course they designed for worst foreseeable conditions; this includes probabilistic estimates.

    Michael thought it was ironic that engineering decisions are influenced by projections that are not documented in “engineering level” reports.

    My only point is that it is normal for engineers to take projections seriously even if those projections are not documented with the level of detail in engineering reports. We did that at Hanford. We used information from peer-reviewed literature, and cited the information.

    The purpose of the engineering report is to fully describe the basis for decision, making sure that third parties from a range of disciplines can evaluate them without becoming specialists themselves.

  47. Sam Urbinto
    Posted Mar 3, 2008 at 2:08 PM | Permalink | Reply

    Tim Ball:

    Also at some point the wind patterns are affected by the local circulation created by the UHIE dome itself.

    I have made this point myself; you put in a city the size of Chicago, and it affects the weather patterns not just in and around it, but wherever the wind decides to take the heat and auto exhaust and so on and so forth.

    The metro area covers 3 states, is almost 11,000 square miles, and has over 9 million people!


    How about New York? You don’t think 8 million people and 7,000 square miles influence the weather all over Earth, all because of some nonsense like “it’s only a small percentage of the Earth’s surface”?


    Give me a break.

  48. Michael Jankowski
    Posted Mar 3, 2008 at 7:09 PM | Permalink | Reply


    Yes, engineering does tend to be worst-case conservative. But I don’t see the high end of IPCC 2001 estimates to be “worst case” if I am engineering a structure to last through 2100. I would think an engineer would put in a factor of safety to truly be conservative.

    And before the code was actually changed, I would think a rigorous analysis (i.e., something similar to an “engineering report”) would be performed to determine the accuracy of the 1m projection, provide a cost/benefit analysis of various clearance changes (or none at all), the probability of the rise actually being 1m, etc.

    I still find it “funny”…as an engineer myself.

  49. Posted Mar 3, 2008 at 7:46 PM | Permalink | Reply

    Is 1m the worst case IPCC estimate for whatever civil engineering project you are discussing?

    If you think incorporating IPCC estimates is funny, you should see the sorts of worst-case scenarios that get incorporated into clean-up at Hanford. I once had to prove that the pores in a HEPA filter were indeed smaller than 1/8th inch in their largest dimension. Anyone can tell this by looking at the things, but safety engineers wouldn’t consider it officially proven until I managed to get the company to certify this. (The measurements “proving” this had only been done because the owner indulged his son. The HEPA filters were known to filter particles larger than several microns in diameter.)

    So… I don’t find it surprising that engineering quality reports compile information that is not, itself, already documented in an engineering quality report. If that were necessary, we’d never get anything done.

  50. MarkW
    Posted Mar 4, 2008 at 6:10 AM | Permalink | Reply


    Anyone can come up with worst case scenarios. Proving that they might actually occur is another matter. Hence my example of a huge temblor in Idaho that Steve keeps snipping.

  51. Posted Mar 4, 2008 at 7:42 AM | Permalink | Reply

    I’m not entirely sure what your point is. Idaho Falls has earthquake requirements for general buildings. Nuclear power plants in Idaho must comply with earthquake requirements.

    Michael says he’s surprised that engineering reports and engineering designs consider information, theories and results of computations that originated outside engineering reports. Engineering reports do routinely consider these bits of information. So do town boards, zoning commissions and various regulatory commissions.

    Heck, these groups solicit opinion from the public.

    There is no principle of civil engineering that says anyone must exclude information or concerns simply because the plausibility of a concern is not proven as firmly as the Law of Gravity.

    Lots of people believe the seas will rise; engineering firms are going to consider this in their designs. Town boards are going to consider it. The voting public is going to consider it when various boards solicit the public opinion.

    All I’m saying is: this is normal. It’s not surprising.

  52. MarkW
    Posted Mar 4, 2008 at 8:11 AM | Permalink | Reply

    My point is that worst case scenarios have to have some grounding in reality before it’s realistic to use them. Idaho does have earthquakes, but it’s not realistic to expect a 9.5 temblor there. For the same reason, while it does snow in S. California, it’s not reasonable to require that roofs be built there to handle 2 feet of snow.

    Additionally, any board that adjusts its design requirements because of uninformed opinions from the general public deserves to be fired.

  53. MarkW
    Posted Mar 4, 2008 at 9:13 AM | Permalink | Reply

    Another way to put it would be:

    We require builders to build for 100-year floods.
    We don’t require them to build for 1,000- or 1,000,000-year floods.
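As an aside, the “100-year flood” is shorthand for a 1% annual exceedance probability, and the cumulative odds over a structure’s life are larger than the name suggests. A minimal sketch (the 50-year design life is an assumed, illustrative figure, not taken from any code):

```python
# "N-year flood" = annual exceedance probability of 1/N.
# Probability of at least one such flood during a design life of L years:
def exceedance_prob(return_period_years: float, design_life_years: int) -> float:
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** design_life_years

for rp in (100, 1_000, 1_000_000):
    print(f"{rp:>9,}-year flood over a 50-year design life: "
          f"{exceedance_prob(rp, 50):.2%}")
```

A 100-year flood has roughly a 40% chance of occurring at least once in 50 years, a 1,000-year flood about 5%, and a 1,000,000-year flood is negligible, which is one way to see why codes draw the line where they do.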

  54. Michael Jankowski
    Posted Mar 4, 2008 at 10:27 AM | Permalink | Reply


    Yes, it’s done, has been done, and will be done. As I said in #43, there will be more and more AGW-related stuff hitting codes.

    But I am not talking about “considering these bits of information” in a design. I am talking about having to design to a county or state standard that comes directly from a hodge-podge of what really amounts to guesswork from non-engineers. It’s funny…not ha-ha funny, just “funny.” Does it “surprise” me? No. Is it “normal?” Well, I’ve yet to come across any other engineering standards pulled out of a lit review of non-engineering publications. I’ve seen codes changed to be “feel good” or due to public pressure. I’ve seen codes changed to suit a county administrators vested interests. But those were dealing with things like structure setbacks, driveway entrance spacings on a highway, etc…not really engineering standards. I don’t see lit-reviews from an intergovernmental panel of non-engineers changing pavement design, earthquake engineering, etc. Hell, you have to jump through hoops (often unsuccessfully) to get a typical city, county, or state to either change their code more make an exception for a product, material, design, or manufacturer with a long-proven track record of success and suitability.

  55. Posted Mar 4, 2008 at 11:46 AM | Permalink | Reply


    Well, I’ve yet to come across any other engineering standards pulled out of a lit review of non-engineering publications.

    I have! And with good reason. When working on one-of-a-kind applications, the standards don’t exist. Do you think ASTM has a standard for pumping radioactive solid-liquid waste slurries with poorly known physical properties? I quickly looked for a relatively short report of some work done to consider the feasibility of clean-up strategies – look at the reference list on this article.

    Do you think there is an engineering standard for pneumatic conveyance of gas/solid/liquid mixtures?

    We cited peer-reviewed articles constantly in both R&D phases. The engineers who cited our papers when designing also cited peer-reviewed articles. Those peer-reviewed articles are not themselves ‘engineering reports’. But what were we supposed to do, use nothing? If an adverse consequence was suggested in a peer-reviewed paper not of engineering quality, we mentioned the adverse consequence, and cited that paper. Should we ignore it?

    Of course not. That would violate the spirit of engineering standards!

    Heck, the concerns of non-engineers (called “tribal leaders”) were addressed in some of our designs. So, yes, it’s normal to have non-engineers’ guesswork or concerns included in designs and even codes. In the case of Hanford, the non-engineers’ concerns were incorporated into law by way of treaty agreements!

    Hell, you have to jump through hoops (often unsuccessfully) to get a typical city, county, or state to either change their code or make an exception for a product, material, design, or

    Of course you have to jump through hoops to relax engineering or local standards.

    But non-engineers raise them all the time. The basis for raising them is often not proven to engineering standards. It’s not surprising.
    I realize it may seem “funny” in the sense of illogical, but this is pretty common.

    The result is: the fact that a code was raised doesn’t actually tell us raising it was logical, or that strong proof was required to elevate the standard. Sometimes, codes are unduly stringent (sometimes the opposite). I’m not surprised you are finding some codes require bridges be raised. Lots of people are worried about water levels rising. Whether or not they are correct, boards are going to incorporate these into codes.

  56. Michael Jankowski
    Posted Mar 4, 2008 at 1:13 PM | Permalink | Reply


    The issue of bridge clearance isn’t “one of a kind.” We’re not out to split the atom :)

    And as much respect as I have for the folks who’ve done DOE and DOD work over the years…Hanford is going down as the biggest environmental mess in U.S. history. Aren’t they still trying to figure out how to stop a groundwater plume from reaching the Columbia River?

    Of course you have to jump through hoops to relax engineering or local standards.

    I’m not talking about “relaxing” them. I’m talking about using an alternative or even superior product, method, or design.

  57. Posted Mar 4, 2008 at 3:19 PM | Permalink | Reply


    Hanford is going down as the biggest environmental mess in U.S. history. Aren’t they still trying to figure out how to stop a groundwater plume from reaching the Columbia River?

    Yep. It’s the biggest mess. The mess was started long, long ago. Heck, during WWII, the scientists just poured chemicals in dirt pits.

    Most of the mess was out of the hands of engineers or even the DOE. For example, Jimmy Carter stopped reprocessing of existing waste, thus causing the mess at K-Basins, which were never designed to store waste indefinitely. No one funded any projects to do anything with the stuff.

    I could go on and on about things that happened. But if you look at the problems, they are rarely engineering failures. They are political failures. Politicians decided to implement certain projects (like filling tanks with waste) and then, if there was no immediate issue, cut funding to deal with the aging waste. (Until tanks started doing things like accumulating hydrogen and burping. Then, suddenly, it’s an emergency.)

    My point isn’t that you should take tips from Hanford: It’s that engineering documents can, and do, include information from things like peer reviewed articles which do not include the level of detail of an “engineering report”.

    This material is often used to show that you have considered all possible hazards– including newly suspected ones.

    Engineering standards are good for designing to avoid known hazards. They aren’t particularly useful for identifying newly discovered issues. There are no standards for these because people don’t know how to deal with them.

    I’m not talking about “relaxing” them. I’m talking about using an alternative or even superior product, method, or design.

    Yes. Chicago still requires conduit I think. For all I know they require iron pipes in houses. It’s hard to get things changed unless the boards want to change things.

  58. steven mosher
    Posted Mar 4, 2008 at 4:40 PM | Permalink | Reply

    re 57. There is no such thing as an engineering failure. There are only failures
    in design specifications.

  59. Ellis
    Posted Mar 4, 2008 at 9:33 PM | Permalink | Reply

    Steve M., I do not know if this is any help, and this is not stated explicitly in either Hansen 1999 or 2001. However, when I looked at Brohan et al 2006, page 6,

    The distribution of known adjustments is not symmetric — adjustments are more likely to be negative than positive. The most common reason for a station needing adjustment is a site move in the 1940-60 period. The earlier site tends to have been warmer than the later one — as the move is often to an out of town airport. So the adjustments are mainly negative, because the earlier record (in the town/city) needs to be reduced [Jones et al., 1985, Jones et al., 1986].

    I became a little curious. Hansen says that UHI adjustments were 58/42 percent negative to positive; is there any way to see when during the 20th century the negative adjustments were made? I do not know if it makes a difference, but it would seem to me that if the lion’s share of the negative adjustments were made at the beginning of the temperature record and most of the positive adjustments at the later part of the record, could not a spurious trend result?

    Steve: Jones’ adjustments and Hansen’s adjustments are different animals. Don’t use a comment from one as containing any information on the other. We’ve not looked at Jones’ adjustments yet.
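Ellis’s timing question can be made concrete with a deliberately artificial sketch. Every number below (station count, adjustment size, the 1950 breakpoint) is invented for illustration and is not taken from GISS or CRU:

```python
# Flat station records acquire a spurious warming trend in the network
# mean when equal-sized negative adjustments act on the early record and
# positive adjustments act on the late record.
import numpy as np

years = np.arange(1900, 2001)            # 101 annual values
n_stations = 100

# True climate signal: flat (zero trend) at every station.
raw = np.zeros((n_stations, years.size))

adjusted = raw.copy()
adjusted[:50, years < 1950] -= 0.3       # negative adjustments, early record
adjusted[50:, years >= 1950] += 0.3      # positive adjustments, late record

# Counts balance (50 negative, 50 positive), but their timing differs.
mean_series = adjusted.mean(axis=0)
trend = np.polyfit(years, mean_series, 1)[0] * 100.0  # deg C per century

print(f"spurious trend in the network mean: {trend:.2f} C/century")
```

Even though the negative and positive adjustments here are equal in number and size, the fitted trend of the network mean comes out to about +0.45 C per century because they act at opposite ends of the record. Whether the real adjustments have any such time structure is exactly the question being asked.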

  60. Ellis
    Posted Mar 4, 2008 at 10:39 PM | Permalink | Reply

    Sorry, I did not make my question clear; of course CRU and GISS are two different animals. However, and please correct me, is not the metadata for ROW stations the same? I guess what I was trying to say is that Hansen makes a pretty big deal that the trends are the same for rural, peri-urban, and urban, adjusted and non-adjusted. If this is the case, then why even bother doing an adjustment, unless you look at UHI not as a late 20th century trend, but rather as an early 20th century problem? As I said, probably nothing, but I would still like to see the negative adjustments as a function of time.

  61. Posted Mar 5, 2008 at 1:07 AM | Permalink | Reply


    I do not know if it makes a difference, but it would seem to me if the lion’s share of the negative adjustments were made at the beginning of the temperature record and most of the positive adjustments at the later part of the record, could not a spurious trend result?

    …and then Brohan plots a histogram of applied adjustments and fits a Gaussian distribution to it, obtaining the “hypothesised distribution of the adjustments required”. The difference is then fitted to a new Gaussian distribution, and “So the homogenisation adjustment uncertainty for any station is a random value taken from a normal distribution with a standard deviation of 0.4 C”. Time is not taken into account, and this 0.4 C, of course, disappears almost completely as we have so many stations. I don’t know how Hansen computes the homogenization adjustment uncertainty; hopefully it is something different.
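The “disappears almost completely” step is the usual 1/√n behaviour of independent errors. A minimal sketch, assuming Brohan’s 0.4 C per-station figure and an illustrative network of 2000 stations:

```python
# Independent per-station homogenisation errors of sigma = 0.4 C shrink
# roughly as sigma / sqrt(n) in the mean of an n-station network.
import numpy as np

sigma = 0.4        # per-station adjustment uncertainty, deg C (Brohan et al.)
n_stations = 2000  # illustrative network size
n_trials = 5000
rng = np.random.default_rng(42)

# Each trial: one adjustment error per station, then the network mean.
errors = rng.normal(0.0, sigma, size=(n_trials, n_stations))
network_mean_sd = errors.mean(axis=1).std()

print(f"per-station sd:  {sigma:.4f} C")
print(f"network-mean sd: {network_mean_sd:.4f} C "
      f"(theory sigma/sqrt(n) = {sigma / np.sqrt(n_stations):.4f} C)")
```

Note the cancellation rests on the errors being independent of one another and of time; if the adjustments had a systematic time structure, the 1/√n argument would no longer apply.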

  62. James Erlandson
    Posted Mar 12, 2008 at 7:07 AM | Permalink | Reply

    lucia and others re: Bridges
    Government Reports Warn Planners on Sea-Rise Threat to U.S. Coasts (New York Times)

    A rise in sea levels and other changes fueled by global warming threaten roads, rail lines, ports, airports and other important infrastructure, and policy makers and planners should be acting now to avoid or mitigate their effects, according to new government reports.

    Noting that 60,000 miles of coastal highways are already subject to periodic flooding, the academy panel called for policy makers to survey vulnerable areas — “roads, bridges, marine, air, pipelines, everything,” Dr. Schwartz said — and begin work now on plans to protect, reinforce, move or replace on safer ground. Those tasks will take years or decades and tens of billions of dollars, at least, he said.

  63. Posted Mar 12, 2008 at 2:51 PM | Permalink | Reply

    Quite honestly, looking at infrastructures is probably a good thing regardless. It’s often difficult to justify the expense to the public. So, in this case, I’m afraid I’m in favor of surveying roads and moving things that are vulnerable. In most cases, they’ll end up moving or shoring up things that were built too close to the water in the first place.

  64. Andrew
    Posted Mar 12, 2008 at 3:15 PM | Permalink | Reply

    Still, a sea level rise measured in inches (unless you believe Hansen, or, less extreme, Rahmstorf) hardly threatens infrastructure. I am bothered that the excuse for scare stories is that they don’t seem to do any obvious harm and sometimes have positive results. But oh well, if people want to live that way…

  65. James Erlandson
    Posted Mar 12, 2008 at 3:21 PM | Permalink | Reply

    lucia: You’re right. But if you look at pictures of New York and other harbors from 100 years ago you’ll notice that things have changed. Nothing from that time remains. Everything has rotted, burned or simply outlived its economic life and been replaced. And over the next 100 years the process will continue — just two or three feet higher. The marginal cost of that two or three feet will be vanishingly small. Not headline material.

  66. Andrew
    Posted Mar 12, 2008 at 3:26 PM | Permalink | Reply

    James Erlandson, where do you get that “two or three feet” from?

  67. James Erlandson
    Posted Mar 12, 2008 at 8:55 PM | Permalink | Reply

    Andrew: The draft assessment referenced in the NYT article uses the IPCC projection of a sea level increase between 0.18m and 0.59m. Michael Jankowski commented above, “I’ve seen where clearance requirements for new bridges over some waterways have been increased up to 1m in anticipation of future sea level rises due to AGW.”

  68. Andrew
    Posted Mar 13, 2008 at 5:11 AM | Permalink | Reply

    Thanks. I think they downgraded their numbers in the final report.

2 Trackbacks

  1. [...] GISS ROW Adjustments Last year, I reviewed GISS adjustments outside the US in a series of posts. These adjustments are a pig’s breakfast. In many cases, GISS makes UHI adjustments the “wrong” way, i.e. their adjustments presume a UHI cooling effect. These goofy results are mentioned passim by Hansen as “false local adjustments”. At the end of the day, there is no evidence that Hansen’s “UHI” adjustments outside the U.S. even begin to deal with the problem. Posts were here here here here here here. [...]

  2. By Urban Heat Island Effect | Dr. Tim Ball on May 4, 2011 at 2:10 PM

    [...] There are more detailed questions about the techniques used by those agencies such as the Goddard Institute for Space Studies (GISS) to adjust for the UHIE. The crux of the problem is examined here. [...]
