Pielke Sr on the "New" USHCN

The new USHCN was scheduled to come out a couple of years ago. A paper describing it has finally appeared, discussed by Pielke Sr here. I haven’t reviewed the new paper – something that I’ll be looking for is whether they rely on “homemade” changepoint methods to supposedly achieve homogeneity – “homemade” in the sense that the changepoint methods were developed within USHCN and are not algorithms described in Draper and Smith or a similar statistical text, or in the statistical literature off the Island.

If so, intuitively, I’m suspicious of the idea that software by itself is capable of fixing “bad” data. For me, one of the main lessons of the Hansen Y2K episode was that it refuted the claim that Hansen’s wonder adjustments were capable of locating and adjusting for bad data – simply because the GISS quality control mechanisms were incapable of locating substantial Y2K jumps throughout the USHCN network. The argument with Mann’s bristlecones is similar – Mann’s “fancy” software was incapable of fixing bad data; in that case, the opposite happened: it magnified bad data.

These are the sorts of things that one has to watch out for when a “fancy” method without a lengthy statistical pedigree is introduced to resolve a contentious applied problem.
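
As an illustration of the sort of changepoint method that does have a statistical pedigree “off the Island”, here is a minimal R sketch using the off-the-shelf strucchange package (Bai-Perron break detection) on a toy candidate-minus-reference difference series. The data and the differencing setup are invented for illustration and are not a description of the USHCN algorithm.

    library(strucchange)

    # Toy candidate-minus-reference difference series with a 0.5 deg step at 1970
    set.seed(42)
    yrs <- 1895:2006
    d   <- ts(rnorm(length(yrs), sd = 0.3) + ifelse(yrs >= 1970, 0.5, 0), start = 1895)

    bp <- breakpoints(d ~ 1)   # documented Bai-Perron break-in-mean detection
    summary(bp)
    breakdates(bp)             # estimated break year(s); should land near 1970 here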

65 Comments

  1. Posted May 12, 2009 at 12:02 PM | Permalink

    I’m also suspicious that automatic algorithms can fix bad data. They can sometimes spot bad data, which is a different thing.

    Last October, we learned that GISS doesn’t have procedures in place to spot data quality issues. This makes it very difficult to believe they have methods to fix bad data.
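
    To make the spot-versus-fix distinction concrete, here is a minimal sketch of “spotting”; the threshold, the toy data and the error are arbitrary assumptions. A flag tells you a value is suspect; it does not tell you what the true value was.

        # Flag values far from a robust centre; the cutoff k is arbitrary.
        flag_suspect <- function(x, k = 4) {
          z <- abs(x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE)
          which(z > k)                  # indices of suspect values only
        }

        set.seed(1)
        temps <- rnorm(120, mean = 12, sd = 1)
        temps[50] <- temps[50] + 8      # a gross error, e.g. a keying slip
        flag_suspect(temps)             # spots index 50; says nothing about the truth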

  2. Andrew
    Posted May 12, 2009 at 12:07 PM | Permalink

    I have often wondered how their al-gore-ythms are supposed to correct for data errors. Would be interesting to see how that change-point “fix” works – or doesn’t.

    BTW lucia, your site is down. WUWT?

  3. RomanM
    Posted May 12, 2009 at 12:38 PM | Permalink

    Found the abstract to the paper here and a copy of the pre-print of the paper itself here.

  4. John
    Posted May 12, 2009 at 12:43 PM | Permalink

    As a grumpy empiricist I would have to say that bad data can never be fixed. The best you can do is remeasure and replace the data. If that is not possible – as in temporally ordered data without time travel – whatever value is plugged into the empty place is either a reasonable or an unreasonable guess. It can’t be data in the proper sense of a measurement made of your phenomenon of interest.

    • Andrew
      Posted May 12, 2009 at 12:56 PM | Permalink

      Re: John (#4), Oh, pooh. You mean all that work was worthless?

      Re: Jeff Id (#5), Maybe they can use a little MAGICC?

  5. Posted May 12, 2009 at 12:53 PM | Permalink

    There’s no need to fix bad data whatsoever, there’s plenty of temp stations to choose from after all. They can just correlate it with what they think the data should be and chuck the rest.

    Problem solved. 😀

    • John
      Posted May 12, 2009 at 3:11 PM | Permalink

      Re: Jeff Id (#5),

      I like how you restate my point. It sounds much more professional than “guess.”

      John

    • Konrad
      Posted May 13, 2009 at 1:25 AM | Permalink

      Re: Jeff Id (#5),
      Jeff,
      “and chuck the rest” sounds good to me. The hard work of Anthony and his many volunteers has identified around 11% of surveyed stations as CRN-1/2. A temperature reconstruction based on these stations could be of value. I believe that no adjustment can be applied to data from stations rated CRN-3 and below; the adjustment issues are plainly too complex. However, eliminating CRN-3 and below may produce useful results.
      While I could envisage better methods for adjusting for TOB and UHI than presently used, I would still see great value in reconstructing temperature trends using adjustments accepted by warmists using only CRN-1 / 2 station records.
      While I appreciate the work done in debunking that unfortunate Antarctica temperature reconstruction, I would have to note that the effort in producing such disingenuous material seems far less than the effort in debunking it. I would suggest that a switch from reactive to proactive might result in a quicker kill.

      • Posted May 13, 2009 at 10:55 AM | Permalink

        Re: Konrad (#21),

        I agree completely with chucking bad stations according to the surfacestations project and using what should be uncorrected or measurement-time-only corrected level 1 station data. You may already get the reference, so this may be for others: when I said correlate and chuck, it was an offhand reference to how some hockey stick temp curves are created from bogus data. I find myself holding my tongue on the rest [snip comments here again].

        As far as proactive on the Antarctic, remember that I’m just an aeronautical engineer who happened to get mad about the aforementioned hockey stick math. I have no will to become a climatologist; it’s not the most exciting science, but it may be the most messed up. [chop]

        Surfacestations is one of the most important studies in climatology right now, in my opinion it is the most important. Redoing an Antarctic reconstruction pales in comparison to validating the obviously massively flawed records which humanity is foolishly (intentionally so) relying on. My guess is that the biggest lesson of the project will be the least published on, the historic records around the globe cannot be relied on to determine tenth degree trends for the past hundred years. Perhaps I’m wrong though and everything will match up with GISS/HadCRUT.

        • John S.
          Posted May 13, 2009 at 11:49 AM | Permalink

          Re: Jeff Id (#28),

          Amen to your thoughts on the global data picture, which is far more dismal than what we have here. Thanks to M. F. Maury, the USA has a vast network of non-urban stations to select from in constructing regional indices extending back 100 years or more. In most other countries with a fairly dense station network (e.g., Japan, India, Russia) most of the stations are urban. And the heart of Africa has not a single station that covers the entire 20th century.

        • Konrad
          Posted May 13, 2009 at 6:39 PM | Permalink

          Re: Jeff Id (#28),
          “I’m just an aeronautical engineer” Just? It appears to me from a couple of years of lurking that it is the engineers, geologists, statisticians and meteorologists that are doing the most at correcting the mess climatologists have created.

  6. Posted May 12, 2009 at 1:15 PM | Permalink

    I’d like to know how THIS can be corrected for via software.

    • Navy bob
      Posted May 14, 2009 at 11:59 AM | Permalink

      Re: Jeff Alberts (#7), Jeff – nothing to it: MMTS is in heavy shade from surrounding shrubbery, causing readings significantly below ambient. Tin-roof deck overhang adds additional shade for about half of day. Overhang appears to be an add-on to main house. From amount of weathering, probably installed 10-11 years ago. Algorithm therefore should add 3-4 degrees to readings obtained, increasing gradually to adjust for vegetative growth, with additional upward displacement at time of roof construction, say around 1998. This isn’t rocket science.

      • Scott Brim
        Posted May 14, 2009 at 12:33 PM | Permalink

        Re: Navy bob (#35)
        I am curious as to what objective approach might be employed to confirm that applying these corrections to this station does in fact yield accurate and consistently reliable temperature measurements, while it is operating under all its commonly-experienced seasonal periods and environmental conditions.

        • Neil Fisher
          Posted May 14, 2009 at 3:56 PM | Permalink

          Re: Scott Brim (#36), Yes Scott, this is exactly the problem – any “corrections” to the raw data *must* be shown and *must* add to the uncertainties in any and all subsequent calculations involving said data. The consequences of what the major economies of the world are embarking on are huge and we cannot afford to delude ourselves about uncertainties. Which simply gets back to SM’s calls for “engineering grade” reports on these things – all of it, not just 2 x CO2.

  7. Kenneth Fritsch
    Posted May 12, 2009 at 1:47 PM | Permalink

    The Version 2 USHCN data series has been public for some time now. I downloaded it a few months ago. As I recall, when I ran it against the GISS and Version 1 USHCN series, it had a larger trend for the US than either GISS or Version 1. GISS has the smallest trend, with USHCN Version 1 nearly 0.2 degrees C larger for the 20th century. I’ll have to go back to my notes to determine the Version 1 and 2 differences, but it was smaller than 0.1 degree C per century.

    The GISS and USHCN Version 1 differences were attributed by some to the differences in handling of the UHI factor (Karl versus Hansen). Version 2 uses break points to adjust for all potential homogeneity problems and no longer uses a separate UHI adjustment.

    • Kenneth Fritsch
      Posted May 12, 2009 at 2:15 PM | Permalink

      Re: Kenneth Fritsch (#8),

      So much for my trust in my memory. Here is what I found in December of 2008. The USHCN Version 2 series has a lesser trend than Version 1 and GISS was smaller than Version 1. I do not recall looking at the differences for statistical significance.

      Regression values from regressing annual temperatures over the time period 1895-2006; trends are listed in degrees C per decade:

      Ver2 Adjusted Temperature:
      Adj. R^2 = 0.42; Trend = 0.117; StdError Trend = 0.013; Lag1 r = 0.072

      Ver 1 Urban Adjusted Temperature:
      Adj. R^2 = 0.43; Trend = 0.144; StdError Trend = 0.016; Lag1 r = 0.263

      GISS Homogenous Adjusted Temperatures:
      Adj. R^2 = 0.28; Trend = 0.110; StdError Trend = 0.016; Lag1 r = 0.180
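
      For readers who want to reproduce these quantities for any annual series, here is a sketch of the calculation; the series below is synthetic, and “Lag1 r” is taken here to be the lag-one autocorrelation of the regression residuals, which is an assumption on my part.

          # Synthetic stand-in for an annual average temperature series
          set.seed(2)
          ann <- data.frame(year = 1895:2006)
          ann$temp <- 0.01 * (ann$year - 1895) + rnorm(nrow(ann), sd = 0.5)

          fit <- lm(temp ~ I(year / 10), data = ann)   # slope = trend per decade
          s   <- summary(fit)
          r   <- resid(fit)

          c(Adj.R2   = s$adj.r.squared,
            Trend    = coef(s)[2, "Estimate"],
            StdError = coef(s)[2, "Std. Error"],
            Lag1.r   = cor(head(r, -1), tail(r, -1)))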

      • Kenneth Fritsch
        Posted May 15, 2009 at 8:46 AM | Permalink

        Re: Kenneth Fritsch (#9),

        The table I presented above was in degrees F per decade and not degrees C as indicated in the post. The corrected table is given below. My calculations are in reasonable agreement with those of the authors of the paper (linked in RomanM’s Post #3 above) explaining the Version 2 and Version 1 differences.

        In a future post I want to present some USHCN Version 1 and 2 differences by station. Such an analysis can give greater insight into the differences than averages can.

        Regression values from regressing annual temperatures over the time period 1895-2006; trends are listed in degrees C per decade:

        Ver2 Adjusted Temperature:
        Adj. R^2 = 0.42; Trend = 0.065; StdError Trend = 0.007; Lag1 r = 0.072
        Ver 1 Urban Adjusted Temperature:
        Adj. R^2 = 0.43; Trend = 0.080; StdError Trend = 0.009; Lag1 r = 0.263
        GISS Homogenous Adjusted Temperatures:
        Adj. R^2 = 0.28; Trend = 0.061; StdError Trend = 0.009; Lag1 r = 0.180

        • bender
          Posted May 15, 2009 at 11:27 AM | Permalink

          Re: Kenneth Fritsch (#46),
          Plots would be useful. For example, the lag1 r’s are quite different. WUWT? A plot would help diagnose the cause.

        • Kenneth Fritsch
          Posted May 15, 2009 at 12:42 PM | Permalink

          Re: bender (#48),

          Bender, I intend to do just that when I get my hands back around the data and my previous calculations in Excel. Since I have been doing analyses in R, I have a much better historical record of what I did. Unfortunately, these calculations of USHCN Ver 1 versus Ver 2 were before my enlightenment.

  8. JamesG
    Posted May 12, 2009 at 3:42 PM | Permalink

    I wonder what the raw unadjusted data would say – assuming that errors should go both ways and largely cancel out. Something tells me that it wouldn’t be that alarming.

  9. John S.
    Posted May 12, 2009 at 4:08 PM | Permalink

    Anytime any data “adjustment” is made, it takes us away from the realm of the measured and down the rabbit hole of the expected, desired, or just plain imagined. The idea of large-scale spatial “homogeneity” of temperature variations has never been proven, to begin with, and there’s much empirical evidence indicating quite the opposite for lowest-frequency components, even at scales on the order of tens of kilometers. The whole purpose of the exercise is to wrest control of “trends” from proper sampling and place it in the hands of the analyst. Throw in Caddy Shack instrument shelters, such as at Olga WA, and you can produce “homogenized records” seasoned to any taste. JamesG is correct in his hunch that the raw, unadjusted data tell a whole different story.

  10. John A
    Posted May 12, 2009 at 5:08 PM | Permalink

    Somebody please tell me how any algorithm can adjust for a dataset which is 90% bad (see Anthony Watts’ report on the USHCN). Miraculous wouldn’t begin to describe such a procedure.

  11. Roger Dueck
    Posted May 12, 2009 at 5:19 PM | Permalink

    Steve
    As a petroleum geologist I often “fill in” missing data. It’s called a “Drilling Prospect” and then you go out and find someone to put up money to drill and “Prove” your imaginary data is correct. There is no way of proving any of the “adjustments” correct or incorrect and therefore they should be rejected out of hand. Once data is screwed with it becomes corrupt.

  12. Anthony Watts
    Posted May 12, 2009 at 5:36 PM | Permalink

    When I was invited to speak at NCDC last year, I had a lengthy conversation with Matt Menne, one of the authors.

    What I learned was this:

    1) The USHCN2 is designed to catch station moves and other discontinuities.

    2) It will not catch long term trend issues, like UHI encroachment. Low frequency/long period biases pass through unobstructed and undetected.

    On item 1, we have several test cases which we have identified that can tell us if it works or not.

  13. tarpon
    Posted May 12, 2009 at 7:44 PM | Permalink

    Wouldn’t it be easier to fix the bad data collection so that it complied with their guidelines? Software is playable.

  14. Bill Jamison
    Posted May 12, 2009 at 7:45 PM | Permalink

    The data isn’t bad it’s just maladjusted 😉

  15. Paul Penrose
    Posted May 12, 2009 at 10:50 PM | Permalink

    I’m always surprised at what people claim software can accomplish. Someone once told me something that I’ve found useful to keep in mind over the years: Computers are idiots; very fast idiots to be sure, but still idiots. Software only makes them slightly smarter.

  16. steven mosher
    Posted May 13, 2009 at 1:49 AM | Permalink

    Anthony,

    I don’t suppose Menne is going to fork over his code is he?

  17. stan
    Posted May 13, 2009 at 6:53 AM | Permalink

    OT, but did you notice the quotes highlighted by Pielke Sr a few days ago at this post http://climatesci.org/2009/05/08/paper-titled-regimes-or-cycles-in-tropical-cyclone-activity-in-the-north-atlantic-by-aberson-2009/

    “ A cautionary tale in which previously published results are shown to be invalid due to the lack of statistical analyses in the original work.”

    “statistics continue to be misused or altogether neglected in the refereed literature, with the inevitable result of misleading or erroneous conclusions.”

    • bender
      Posted May 13, 2009 at 7:50 AM | Permalink

      Re: stan (#23),
      Journal of Statistical Climatology, anyone?

  18. Abdul Abulbul Amir
    Posted May 13, 2009 at 9:25 AM | Permalink

    Fix bad data. The only fix is to pitch it. There needs to be a quality control mechanism to ensure that only bad data is pitched and that all bad data is pitched.

    • John S.
      Posted May 13, 2009 at 11:25 AM | Permalink

      Re: Abdul Abulbul Amir (#25),

      Amen! The whole idea that one can accurately compensate for unknown, highly site-specific UHI effects is preposterous. It’s an open invitation for, ahem, “data management,” which is the antithesis of data integrity.

      And UHI is by no means the only adulteration of historic temperature records that is encountered in the USA. Vast stretches of land west of the Mississippi have undergone severe changes in land use during the 19th and 20th centuries, especially in the High Plains and in the Central Valley of California. Even the best-sited stations in those areas will reflect the effects of land clearing, crop changes, and irrigation. Throw in dam construction, which alters the soil moisture downstream, and you easily get a grossly changed environment. That’s why rural records that rival major cities in their upward temperature trend are not uncommon in the West.

      The construction of truly meaningful climatic temperature indices requires far more than an algorithm. It requires good station records from stable environments. Discarding records from unstable ones will no doubt bring forth charges of cherry-picking, but it must be done to get an unbiased reading of any changes in regional climate, uncorrupted by land-use effects.

  19. Craig Loehle
    Posted May 13, 2009 at 10:11 AM | Permalink

    The problem here is “armchair science” where no one bothers to go look at the stations (and metadata) to find the actual station moves and adjust accordingly, but rather “let the computer fix it”. It couldn’t be that expensive, when Watts and a few volunteers have done all they did with no $ at all!

  20. Rod Smith
    Posted May 13, 2009 at 10:14 AM | Permalink

    I would opine that any “quality control” needs to be applied at the highest level, not the lowest. I think it is long past time that NASA “management” of the USHCN receive a bit of very direct and precise “re-engineering.”

    Or to put it another way, hardball, not software.

  21. Posted May 14, 2009 at 5:03 AM | Permalink

    Fixing the bad data is similar to fixing the imperfections of the free markets. While some local patterns may convince someone that it makes sense and things can improve, all the imperfections actually exist for good reasons and they can play a “positive” role in many other contexts. The imperfections are “real”, and such a “fixing” therefore inevitably twists the information encoded in the data; it is ultimately a bad thing.

    Steve: Luboš, I don’t want to get into discussion of market imperfections. I strongly support the idea of “full, true and plain disclosure” as a legal requirement for dealing in public securities and this is very much based on reducing market imperfection. But this would be better discussed at your own (excellent) blog.

    • Tolz
      Posted May 14, 2009 at 4:40 PM | Permalink

      Re: Luboš Motl (#34),

      Steve, I don’t think he went OT–that was a pretty good analogy to the topic at hand. And there is a clear distinction between “disclosure” and “fixing” when it comes to free markets as well as climate data.

  22. srp
    Posted May 14, 2009 at 4:43 PM | Permalink

    This topic of surface temperature history could perhaps be “demilitarized” by noting that the policy issues are not affected that much by the trend in the time series of temperature. (Here I concur with our host.) There seems to be agreement by all sides on how much extra energy is captured by any given increase in CO2 (ignoring feedbacks) but not much understanding of a) how long it takes for that energy to show up in surface temperatures, b) what the temperature trends in time and space would be absent the CO2 increase, or c) the spatial distribution of the temperature deltas due to CO2.

    So a true flat or downward historical trend could easily be “bad” news because it means there are big increases being stored up in the system somewhere and an upward trend could be “good” news because it means there’s less of an impact to come. Or the conventional interpretations could be correct instead. Thus, there’s little reason to “root” for one trend or the other regardless of one’s feelings about policy.

    If this were recognized more widely, the issues of data accuracy, data disclosure, and proper statistical techniques raised on this site could perhaps get their due free of some of the shenanigans and polemics that have become standard. On the other hand, fewer people would probably pay attention to these details without their “rooting” motivation, so this recognition might be a mixed blessing from a scientific point of view.

    • John S.
      Posted May 14, 2009 at 6:54 PM | Permalink

      Re: srp (#39),

      The “agreement by all sides on how much extra energy is captured by any increase in CO2” extends only insofar as a cloud-free column of air in the laboratory is concerned. The situation in vitro does not translate into anything credible in situ.

      Re: david_a (#40),

      Assuming that there is an “energy imbalance” that is best revealed by heat content in the deep ocean begs the entire question. There is scarcely any physical basis for the assumption of an imbalance, given the uncertainties of the dynamics of the real–rather than model–climate system. And heat content below the thermocline is pretty well isolated from climatic surface variations. The whole idea of “unrealized heat in the pipeline” deep in the ocean is thermodynamically specious, since it is surface heat that works its way downward into the cooler waters, not the other way around.

      Measurements in the wind-mixed surface layer may offer advantages over surface stations, insofar as freedom from corruptive effects is concerned. The major disadvantage, however, is that we’d have to wait a century to establish any reasonable baseline for natural variations in that layer.

      • srp
        Posted May 14, 2009 at 7:23 PM | Permalink

        Re: John S. (#41),
        I don’t want to hijack the thread, so I will stop after this, but

        1) I tried to set aside for argument’s sake all the cloud and H20 feedbacks

        2) I seem to recall hearing from some of our regular commenters that the climate system may have some complex internal dynamics (e.g. all those Spanish children) and that every now and then heat stored somewhere in the oceans might burble up into the atmosphere. (I suppose it is also possible that kinetic energy stored in moving masses of fluids could get exchanged into heat and vice versa, but nobody ever discusses that so I assume such effects are thought by informed people to be minimal.)

  23. david_a
    Posted May 14, 2009 at 5:24 PM | Permalink

    To get any sort of handle on the energy imbalance the focus is far better on ocean heat content than on surface temps, be they reconstructions or direct measurements. At 2.5 meters depth (1/4 the mass and 4x the specific heat) you have the equivalent specific heat*mass of the entire atmospheric column. Since the ocean is a couple of thousand meters deep you have at least 2-3 orders of magnitude more ‘storage space’ than the atmosphere. Since energy has to be conserved, it is the place where you can accurately measure the cumulative energy budget over time. Because it is a natural integrator, the accuracy is going to be way way higher than trying to measure the differential, which is essentially what you get measuring the surface temps. Additionally there are far fewer problems with regard to the instrumentation as it is almost all computerized now with very little room for ‘operator error’, and there are very few UHI effects in the middle of the ocean beneath the surface. The current IPCC report backs out a constant energy addition for the ocean of about 0.6 W/m2, which translates to about 1.0 x 10^22 joules per year.
    From a purely political standpoint these are the numbers that need to be promulgated. It is a simple concept that everyone can grasp, there is little room for screwing around with the numbers, and all the other ‘predictions’ of the GCMs are meaningless unless this one comes to pass. If the oceans don’t warm, there ain’t no global warming; it’s as simple as that and needs to be communicated as such.
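
    As an aside, the two figures quoted here check out on a back-of-envelope basis; a quick arithmetic sketch using standard rounded constants (my assumptions):

        g      <- 9.81      # m/s^2
        p0     <- 1.013e5   # Pa, mean surface pressure
        cp_air <- 1005      # J/(kg K)
        cp_h2o <- 4186      # J/(kg K)
        rho_w  <- 1000      # kg/m^3

        col_air <- (p0 / g) * cp_air   # heat capacity of the air column, ~1.0e7 J/(m^2 K)
        col_air / (rho_w * cp_h2o)     # ~2.5 m of water matches it, as stated above

        area <- 4 * pi * 6.371e6^2     # Earth's surface area, m^2
        0.6 * area * 365.25 * 86400    # ~1.0e22 J per year from a 0.6 W/m^2 imbalance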

  24. David_a
    Posted May 14, 2009 at 7:31 PM | Permalink

    John
    It’s not the deep ocean where the energy imbalance would accumulate, but in the top few hundred meters. I agree that deep ocean (>1000 m) heat storage as unrealized pipeline heat is silly. The unrealized warming would be the implied lag in longwave re-emission as it took time to warm the ocean. Depending on the assumed variance of albedo effects and LW absorption effects due to clouds, one could make a reliable confidence interval for the max allowable ‘no ocean warming’ if the GCMs were correct. My guess is that the keepers of the GCMs know these numbers but would not publish them because they would set a clear metric for the falsification of the models.

    • John S.
      Posted May 14, 2009 at 9:02 PM | Permalink

      Re: David_a (#43),

      Although SW absorption and thermalization extend tens of meters into the ocean, and heat is mixed downward to the thermocline (typically at <100m), both LW emission and absorption are concentrated literally in the mm-thick surface skin, which is also evaporating. There is no lag in LW radiation from the oceans whatsoever. The only oceanic lag is in the downwelling of heat, which has a negligible effect upon climate at the surface. The high thermal inertia of the oceans, on the other hand, introduces a seasonal lag of ~1 month in near-surface air temperatures, with a consistent net annual transfer of heat to the atmosphere. The relatively unretentive atmosphere does not heat the oceans.

  25. Willem de Lange
    Posted May 14, 2009 at 8:08 PM | Permalink

    Oceans gain their “heat” at wavelengths shorter than 3 microns (solar radiation). Their absorption at longer wavelengths is negligible, so the delayed loss of “heat” associated with the Greenhouse Effect is not a source of energy transfer to the oceans. If you want to have stored heat in the pipeline, it comes from the Sun, not the Greenhouse Effect. You could argue that solar energy is stored for variable time periods by ocean circulation, and that there are feedbacks involving clouds (which both affect the input of solar energy into the oceans, and the release of energy by latent heat and radiation). GCMs ignore all these aspects and assume in their parameterisations that infrared energy warms the oceans. This is wrong.

  26. kuhnkat
    Posted May 14, 2009 at 9:20 PM | Permalink

    Willem de Lange,

    http://www.martin.chaplin.btinternet.co.uk/vibrat.html

    From the link:

    “Water is the main absorber of the sunlight in the atmosphere. The 13 million million tons of water in the atmosphere (~0.33% by weight) is responsible for about 70% of all atmospheric absorption of radiation, mainly in the infrared region where water shows strong absorption.”

    Can you point us to information refuting this analysis??

  27. david_a
    Posted May 15, 2009 at 7:45 AM | Permalink

    John,
    I am not implying that surface temperature lag is due to storage of heat in the ocean which then over some long period of time reaches equilibrium with the atmosphere. The lag is due to the fact that the suns output (for this discussion) is constant and continuous. If you place a device around the earth which is energy transparent in one direction (towards the earth) and hold all other things equal you will gradually capture the suns energy below the device. The only reason that below the device the temperature does not increase to infinity is that eventually the device itself will heat and its emission back into space will balance the incoming radiation. This is what is meant by the ‘heat in the pipeline’ phrase. Until the device has warmed the system is still moving towards equilibrium.
    There is nothing physically wrong with this idea.
    My initial comment was directed towards how best to measure the top of atmosphere radiative imbalance, should it exist. To the extent that it does exist and that the earth system is gaining energy, the most reliable way to measure it is in the ocean because the heat capacity of the ocean is orders of magnitude greater than that of the atmosphere. Various processes can move energy around between the ocean and the atmosphere so that for relatively short periods of time and space they can be grossly out of thermal equilibrium. Measuring atmospheric temperature captures all of this variance and increases the ‘noise’ relative to any signal of total earth system energy content. It is a poor way to try and answer the question of whether or not the injection of GHG’s into the atmosphere changes the TOA radiation budget, and that is the only question which needs answering. Everything else is just derivative.

    • John S.
      Posted May 15, 2009 at 10:35 AM | Permalink

      Re: david_a (#45),

      You’re now apparently referring to the LW-absorbent atmosphere as the “pipeline.” Those of us who analyze real-world measurements know that the atmospheric e-folding time constant in response to solar radiation is measurable in hours, which produces a widely observed temperature lag ~3 hours in the diurnal cycle. Despite significant increases over the decades in CO2 concentrations, this response characteristic has not changed measurably.

      Certainly the oceans dwarf the atmosphere in their heat capacity. But capacity should not be confused with content. The reason temperature doesn’t even come close to going to infinity is because a) only a finite amount of power is supplied by the sun and b) the radiative response of climate system is quite immediate. The whole idea of “heat in the pipeline” is little more than an arm-waving device to distract the analytically challenged from recognizing that nature is ignoring AGW prognostications.

      • bender
        Posted May 15, 2009 at 11:41 AM | Permalink

        Re: John S. (#47),

        The whole idea of “heat in the pipeline” is little more than an arm-waving device to distract the analytically challenged from recognizing that nature is ignoring AGW prognostications.

        This is opinion. I’m not sure there’s a place for this at CA.

        • John S.
          Posted May 15, 2009 at 1:31 PM | Permalink

          Re: bender (#49),

          See my #52 above. Frankly, your unprofessional obiter dicta are tiresome. And is it you alone that is entitled to express an opinion at CA? I see no point in any future reply to your opinion.

      • D. Patterson
        Posted May 15, 2009 at 12:39 PM | Permalink

        Re: John S. (#47),

        b) the radiative response of climate system is quite immediate. The whole idea of “heat in the pipeline” is little more than an arm-waving device

          Convective, advective, and phase change heat exchanges are not “immediate”, and they predominate over radiative transfers. Oceanic heat pools exist. Oceanic currents with differential thermal properties exist. The Hadley Cell exists. Given that USHCN cannot measure oceanic thermal properties, the assertion that an “immediate” radiative response invalidates “heat in the pipeline” is without merit, along with subsequent conclusions about their effects upon the USHCN.

        • John S.
          Posted May 15, 2009 at 1:14 PM | Permalink

          Re: D. Patterson (#50),

          Please read more carefully. My response was to David a’s (#45) assertion that:

          I am not implying that surface temperature lag is due to storage of heat in the ocean which then over some long period of time reaches equilibrium with the atmosphere. The lag is due to the fact that the suns output (for this discussion) is constant and continuous. If you place a device around the earth which is energy transparent in one direction (towards the earth) and hold all other things equal you will gradually capture the suns energy below the device. The only reason that below the device the temperature does not increase to infinity is that eventually the device itself will heat and its emission back into space will balance the incoming radiation. This is what is meant by the ‘heat in the pipeline’ phrase. Until the device has warmed the system is still moving towards equilibrium.

          I’m well aware of the strong difference between time-scales involved in radiative response and those in advective and evapo-convective redistribution of heat. And obviously USHCN is not concerned with oceanic data. But, I drew no conclusions regarding USHCN in my #47.

      • Ron Cram
        Posted May 19, 2009 at 4:32 PM | Permalink

        Re: John S. (#47),

        You write:

        Those of us who analyze real-world measurements know that the atmospheric e-folding time constant in response to solar radiation is measurable in hours, which produces a widely observed temperature lag ~3 hours in the diurnal cycle. Despite significant increases over the decades in CO2 concentrations, this response characteristic has not changed measurably.

        I am not sure I follow. A three hour temp lag seems reasonable, but what exactly are you measuring? Are you expecting temps to be higher during the three hour lag? Or, are you expecting the temp lag to become four hours? What is wrong with the physical theory that GHGs hold in more heat and trap it in the oceans?

  28. David_a
    Posted May 15, 2009 at 9:00 PM | Permalink

    John
    How about the following thought experiment
    The earth has zero atmosphere and is at black body equilibrium. The oceans are solid blocks of ice. You now instantaneously create an atmosphere around the planet. How long does it take for a new equilibrium temperature to be achieved? Would it be correct to say that at the moment the atmosphere is created there is ‘heat in the pipeline’?
    Since the answer is obviously yes, the question then is what is the lag time associated with injecting a given amount of GHG into the atmosphere until a new equilibrium is reached. From your prior post it appears you believe the answer to be a few hours, given the current rate of GHG injection.
    Since I’m typing on a phone I’ll save the rest for another time…

    • John S.
      Posted May 16, 2009 at 10:18 AM | Permalink

      Re: David_a (#54),

      First, if you have oceans in any state, you don’t have a black body. Second, the instantaneous creation of an atmosphere constitutes a wholesale, abrupt change of the planetary system, which bears no resemblance to anything at hand. Even that radical change doesn’t necessarily put “heat in the pipeline,” since no specification of its thermal state at creation, nor of its material properties and physical dimensions, has been made. Third, my reference to the empirically-determined atmospheric temperature time constant applies only to the atmosphere that we actually have and to near-surface temperatures.

      The scientific answer to your thought-experiment question rests entirely in specifications you apparently deem inconsequential. Yet you draw the parallel between changes in trace GHG concentrations and radical change in the whole system. The stability of the empirical near-surface time constant in the face of substantial changes in CO2 militates against such a parallel.

      Contrary to your earlier suggestion that sub-surface ocean temperatures would provide a better signal-to-noise ratio for detecting AGW, I think they would provide nothing of the kind. Why work with measurands that are far removed from the action? IMHO, any possible AGW effect would best be detected as systematic changes in the near-surface conditions. It is here that thermalization takes place and we have a strong diurnal signal to calibrate upon. The atmosphere acts as a low-pass filter and any changes in its thermal capacitance component are more accurately detected at transition- and stop-band frequencies than at the pass-band, where the system response is flat. Although one always wishes for better data and better global coverage, there is no evidence that the system response function has changed. Models, on the other hand, almost invariably show much-overdamped diurnal cycles, i.e., a longer time constant than found in situ.

  29. Mike C
    Posted May 16, 2009 at 3:41 PM | Permalink

    The problem with USHCN V2 is probably going to be in the low frequency variation (UHI, fading paint, etc.), which was discussed by Claude Williams at an AMS function.
    Audio here:

    http://ams.confex.com/ams/Annual2006/techprogram/paper_100746.htm

    The poor siting of the stations should also be an issue because of the large percentage of problem stations, and because the computer fix relies on comparing station records to each other.

  30. Kenneth Fritsch
    Posted May 16, 2009 at 7:31 PM | Permalink

    I attempted to replicate the differences reported above between the 1895-2006 trends for the Version 1 and 2 USHCN data series. I was not able to do it, but noted that going back that far in history results in a large percentage of missing data. I found that a better comparison period, with far less missing data, was 1920 to 2006. I used only those stations that had no missing data and were common to Versions 1 and 2. The portion qualifying was a relatively high percentage, with 1073 stations out of approximately 1220.

    The results are tabulated and graphed below and show that there exist rather large trend differences between the Versions 1 and 2 at individual USHCN stations. By my calculations the average difference in trend for all stations is statistically significant. The average difference was 0.0148 degrees C per decade or 0.148 degrees C per century out of a trend for Version 1 that was 0.44 degrees C per century for the same time period of 1920-2006.

    In my view we have a data series that is claimed to be rather accurate and precise by the authors, i.e. Version 1, and then a new improved version is produced by the same authors, i.e. Version 2, that shows large differences from the former version when individual stations are examined and a significant average difference. I really think that a more detailed analysis would show some reasons for less confidence in the stated uncertainty for either version.

    In an off topic comment, I downloaded the data series using R as it is much more efficient than Excel. I, however, admit that I used the Pivot Table in Excel to manipulate the data, as I have not yet reached the proficiency in R to do the same. I have attempted to use the melt and cast functions from the reshape package for this purpose without success. Can anyone give me some clues on how to do in R what Pivot Tables do in Excel?

    USHCN Version 2 – Version 1 for 1920-2006 for 1073 Individual USHCN Stations:

    Average difference = 0.0148 degrees C per Decade
    Standard Deviation = 0.0739; n=1073; t=6.56; Maximum Negative Difference = -0.238 degrees C per decade; Maximum Positive Difference = 0.312 Degrees C per Decade
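
    On the pivot-table question and the per-station comparison above, one way to do it in R is sketched below; the toy data frame, the column names and the use of the reshape package's melt/cast are my assumptions, not the layout of the actual USHCN files.

        library(reshape)   # melt() and cast(): roughly R's answer to an Excel Pivot Table

        # Toy long-format stand-in: one row per station / year / month / version
        set.seed(3)
        monthly <- expand.grid(station = paste0("S", 1:5), year = 1920:2006,
                               month = 1:12, version = c("v1", "v2"))
        monthly$temp <- rnorm(nrow(monthly), mean = 12, sd = 2)

        # Pivot: rows = station-year, columns = version, cell = mean over months
        annual <- cast(melt(monthly, id = c("station", "year", "month", "version")),
                       station + year ~ version, mean)

        # Per-station trend difference (deg per decade) and the one-sample t statistic
        trend <- function(y, yr) coef(lm(y ~ I(yr / 10)))[2]
        diffs <- unlist(by(annual, annual$station,
                           function(d) trend(d$v2, d$year) - trend(d$v1, d$year)))

        c(mean = mean(diffs), sd = sd(diffs), n = length(diffs),
          t = mean(diffs) / (sd(diffs) / sqrt(length(diffs))))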

  31. david_a
    Posted May 17, 2009 at 10:21 AM | Permalink

    John:

    As I understand the energy balance it can be summarized as follows:

    EC(t+1) = EC(t) + Ein(t) - Eout(t)

    If (Ein(t) == Eout(t)) then
    EC(t+1) = EC(t)

    Where EC is the energy content of the earth system (which for this discussion can be approximated by thermal energy alone), Ein is the incoming energy (for this discussion entirely due to the constant energy production of the sun) and Eout is the energy radiated away from the earth system. If the amount of incoming radiation is the same as the amount of outgoing radiation over some time period, then by the law of conservation of energy there will be no net change in the energy content of the earth system.

    My understanding of the AGW hypothesis is that by injecting GHG’s into the atmosphere (and holding all else constant) Eout can be reduced so that Ein > Eout, and while this relationship holds, EC(t+1) > EC(t). Since the mass and specific heat of the earth system can be thought of as constant, an increase in EC (energy content) will correspond to an increase in temperature.

    If the incoming energy were to remain greater than the outgoing energy then the energy content and hence temperature would increase without bound. However, as the temperature rises the radiation spectrum changes, and this causes an increase in Eout which at some point will overtake the decline in Eout due to the GHG injection. And a new equilibrium will be found, though at a higher EC or temperature. The interval between the time the system is initially perturbed and the time it reaches a new equilibrium is what I understand is meant by the phrase ‘in the pipeline’. I do not know whether or not your disagreement is with the physics as I have laid it out or whether you believe that the lag between the perturbation and the new equilibrium state is vanishingly small.

    The reason it is better to measure the heat content of the oceans rather than atmospheric temperature is simply the relative variances of the two measurements. Ultimately the different variances can be traced back to the differing thermal masses of the ocean and the atmosphere. Since the oceans have a thermal mass several orders of magnitude higher than that of the atmosphere, their temperatures are much more stable, given that the two are in some sort of energetic equilibrium. Or put another way, any process which exchanges energy between the oceans and the atmosphere will produce a much larger percentage change in the energy content of the atmosphere than in the ocean.

    Both the ocean and the atmosphere will accumulate the difference between Ein and Eout; however, an earth system process which temporarily causes an energetic disequilibrium between ocean and atmosphere will consequently have a much greater effect on atmospheric measurements of temperature than on oceanic ones. This will make extracting the Ein – Eout signal much more difficult in the atmosphere.
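
    As a purely illustrative aside, the budget written out above can be put into a toy zero-dimensional model (a sketch under assumed, round-number parameters, not GCM output). The lag to the new equilibrium – the “pipeline” in this framing – is set entirely by the assumed effective heat capacity C and restoring strength lambda, which is exactly what the thread is arguing about.

        # C * dT/dt = F - lambda * T, with T the departure from the old equilibrium
        F_step <- 3.7                  # W/m^2: a step forcing, chosen only for scale
        lambda <- 1.2                  # W/m^2/K: radiative restoring strength (assumed)
        C_mix  <- 100 * 1000 * 4186    # J/(m^2 K): ~100 m of well-mixed ocean (assumed)

        dt   <- 86400 * 30             # one-month time steps
        Temp <- numeric(200 * 12)      # 200 years
        for (i in 2:length(Temp)) {
          Temp[i] <- Temp[i - 1] + dt * (F_step - lambda * Temp[i - 1]) / C_mix
        }

        c(T_equilibrium = F_step / lambda,           # ~3.1 K eventually
          tau_years     = C_mix / lambda / 3.156e7,  # e-folding time, ~11 yr here
          T_after_10yr  = Temp[120])                 # still well short of equilibrium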

    • John S.
      Posted May 17, 2009 at 11:09 PM | Permalink

      Re: david_a (#58),

      The elementary idea of steady-state thermal equilibrium based on an energy budget is fine–as far as it goes. Where it goes awry in the realistic context is by:

      a) Glossing over the oscillating nature of received insolation, due to the Earth’s rotation and to the elliptic orbit around the Sun.

      b) Assuming that such equilibrium can be achieved throughout the climate system by radiative processes alone.

      c) Presuming that modest changes in trace GHGs have a profound effect upon that radiative energy balance.

      Powered by diurnally and seasonally oscillating insolation, Earth’s climate system never achieves steady-state (no temperature changes, no mass motion) equilibrium. All that is possible is some chaotic semblance of global thermal stability over climatic time-scales–what might be termed “climatic equilibrium.”

      To be sure, on a planetary basis, only radiative processes are operative (SW input, LW output). But on a planet >70% of whose surface is water, moist convection is the principal means by which thermal energy is carried aloft from the surface, most of it wrapped up as latent heat. It is released as sensible heat only during cloud formation, which blocks the radiative path back to Earth. The incoherently back-scattered LW radiation from the atmosphere that provides habitable surface temperatures comes primarily from highly variable water vapor content and cloud bottoms.

      Although trace gases account for ~15% of the total backscattered radiation in the LW spectrum, no compelling empirical evidence has yet surfaced that the overall LW “optical depth” of the atmosphere has been measurably changed by anthropogenic infusion, as reported recently by Steve Gregory. Along with the stable time constant, this is yet another indication that a firm grasp of the workings of the climate system continues to elude theoreticians and modellers.

      Radiative heat transfer operates near the speed of light. The instant that emission exceeds absorption, the air parcel or surface starts cooling, and vice versa. And “greenhouse” gases not only discharge their energy radiatively, but lose it in a largely one-way process through molecular collisions with inert gases that constitute ~98% of the atmosphere. The latter gases, rather than the rapidly discharging GHGs, provide the bulk of what little thermal capacitance the air possesses. It is the enormously larger capacitance of the oceans along with that of clouds that insulates Earth’s surface from the extremes of temperature that would be otherwise experienced on a daily basis.

      Our moon, where surface temperatures exceed 100C and drop below -150C in the course of a lunar day (~28 Earth days) provides dramatic evidence of the difference our atmosphere and rotation rate make. Furthermore, lunar eclipses, with drop rates up to 60C/minute [sic!] and rise rates up to 45C/min, show how unretentive the regolith really is. Perhaps in that context, a clear-sky time constant on the order of half a day, leading to a typical lag somewhat over 3hrs at the diurnal frequency for land-surface temperatures here on Earth is no longer surprising. Vestiges of whatever thermalization took place today will drop below the noise level by tomorrow. So much for the home-spun idea of “heat in the pipeline.”

      Now if the Sun were to ramp up its output abruptly, then that idea would have traction. It indeed might take appreciable time for the energy content throughout our climate system to effectively “re-equilibrate.” But such a step-response to active excitation cannot be expected reasonably from gradual changes in a minor passive capacitance component in the system. The Clausius statement of the Second Law of Thermodynamics tells us that the flow of “excess” heat can be only in one direction: out to space.

      As far as your stated preference for “low-variance” oceanic measurements over land-based is concerned, I suspect you’re conflating variance with noise. I’ll always take a strong signal (high variance) over a weak one (low variance).

      Hope this clarifies all matters without further discussion, for which I simply cannot take more time.

  32. Kenneth Fritsch
    Posted May 17, 2009 at 10:52 AM | Permalink

    In my previous post, I inadvertently compared the Version 2 station trends for 1920-2007 to those of Version 1 for 1920-2006. This post is a correction where the 1920-2006 versions of both are compared. While the average difference between versions is reduced, it remains statistically significant, the individual station differences continue to be large, and my conclusions from the previous post remain the same. The corrected table and graphics are presented below.

    USHCN Version 2 – Version 1 for 1920-2006 for 1073 Individual USHCN Stations:

    Average difference = 0.0109 degrees C per Decade
    Standard Deviation = 0.0738; n=1073; t=4.84; Maximum Negative Difference = -0.243 degrees C per decade; Maximum Positive Difference = 0.307 Degrees C per Decade

  33. John S.
    Posted May 17, 2009 at 11:45 PM | Permalink

    Correction: It should read Ken Gregory. My apologies to him.

  34. Kenneth Fritsch
    Posted May 20, 2009 at 10:58 AM | Permalink

    In order to compare Versions 1 and 2 of the USHCN temperature series, I compared, by matched individual stations (as I did previously for 1920-2006), the trends for the periods 1895-2006, 1950-2006 and 1980-2006. The differences are tabulated and graphed below. Again the average differences are statistically significant, and the histograms of station trend differences show that a large percentage of the total stations have relatively large trend differences. I should also note here that Version 2 has less missing data, and in fact none for the 1920-and-beyond series that I looked at. All comparisons were for stations with no missing data for the period of interest.

    The recurring point I see in these analyses is that the original series, Version 1 (or the USHCN Urban mean, I assume – the authors call it USHCN Version 1), was touted as an accurate portrayal of temperatures in the US. The authors then apply methods that they judge are improvements over Version 1, and we see significant differences. While improvements are to be encouraged, if indeed the improvements are valid, they do show that the initial series was different (wrong). What will future improvements say about Version 2?

    The Version 1 and 2 trend differences from 1895-2006 were small, but those from 1920-2006, 1950-2006 and 1980-2006 were much larger. The authors of the paper describing the Version 1 to 2 changes pointed to the longest-term trend differences and said little about individual station differences (mentioning only the Reno station in passing).

    I want to next look at those individual stations with the largest differences between Versions 1 and 2.

    USHCN Version 2 – Version 1 for 1895-2006 for 518 Individual USHCN Stations:

    Average difference = 0.0076 degrees C per Decade
    Standard Deviation = 0.0639; n=518; t=2.70; Maximum Negative Difference = -0.200 degrees C per decade; Maximum Positive Difference = 0.232 Degrees C per Decade; Version 1 Average Trend = 0.052 Degrees C per Decade

    USHCN Version 2 – Version 1 for 1950-2006 for 1146 Individual USHCN Stations:

    Average difference = 0.0192 degrees C per Decade
    Standard Deviation = 0.0948; n=1146; t=6.85; Maximum Negative Difference = -0.359 degrees C per decade; Maximum Positive Difference = 0.495 Degrees C per Decade; Version 1 Average Trend = 0.134 Degrees C per Decade

    USHCN Version 2 – Version 1 for 1980-2006 for 1197 Individual USHCN Stations:

    Average difference = 0.0450 degrees C per Decade
    Standard Deviation = 0.217; n=1197; t=7.23; Maximum Negative Difference = -0.686 degrees C per decade; Maximum Positive Difference = 1.221 Degrees C per Decade; Version 1 Average Trend = 0.292 Degrees C per Decade

  35. bender
    Posted Dec 21, 2009 at 12:40 PM | Permalink

    The published paper, by Menne et al 2009, referred to above, is here:

    Click to access i1520-0477-90-7-993.pdf

    Menne, Matthew J., Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2. Bulletin of the American Meteorological Society

  36. bender
    Posted Dec 21, 2009 at 1:39 PM | Permalink

    Read the Menne paper. Looks like “homogenization” may be the mechanism by which surface records are warmed while supposedly controlling for UHI effects. Look at Fig 8b. Here’s the “trick” this time around – use changepoints to detect stations moved out of the UHI into rural. Then warm up the latter parts of these series to “homogenize”. Next, weakly attempt to isolate and remove the UHI effect.
    .
    As long as the changepoint detection method is more powerful than the UHI trend detection method, you will warm up the latter part of your records.
    .
    Dig here.
    .
    Someone might want to post this comment at WUWT and get Anthony’s take. (He’s probably miles ahead of me.)
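
    The statistical kernel of this – that a step is easier to detect than a slow drift of the same cumulative size – can be illustrated with a toy simulation; simple regression tests stand in for the paper's pairwise algorithm, and the effect size and noise level are arbitrary.

        set.seed(4)
        n <- 100; shift <- 0.15; noise <- 0.3; reps <- 1000
        p_slope <- function(y, x) summary(lm(y ~ x))$coefficients[2, 4]

        step_power <- mean(replicate(reps, {
          x <- as.numeric(seq_len(n) > n / 2)        # abrupt move halfway through
          p_slope(rnorm(n, sd = noise) + shift * x, x) < 0.05
        }))
        trend_power <- mean(replicate(reps, {
          x <- seq_len(n)                            # slow drift of the same total size
          p_slope(rnorm(n, sd = noise) + shift * x / n, x) < 0.05
        }))
        c(step = step_power, trend = trend_power)    # the step is detected far more often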

  37. bender
    Posted Dec 21, 2009 at 1:52 PM | Permalink

    Fig 13 documents the net increase in warming across the US resulting from “homogenization”.

2 Trackbacks

  1. […] Matthew J. Menne, Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2. Bulletin of the American Meteorological Society (in press). [url for a copy of the paper added thanks and h/t to Steve McIntyre and RomanM on Climate Audit]. […]

  2. […] Vote Comment on Pielke Sr on the "New" USHCN by david_a […]