NOAA versus NASA: US Data

Anthony has a post reporting NOAA’s 2008 results, with NOAA reporting:

For 2008, the average temperature of 53.0 degrees F was 0.2 degree above the 20th Century average.

Anthony showed the following image from NOAA:

Readers need to keep in mind that there is a substantial “divergence” between NOAA US and NASA US temperatures, as shown in the graphic below. Since 1940, NOAA’s US series has increased relative to NASA’s at a rate of 0.39 deg C/century, i.e. about 0.27 deg C in total.


Figure 2. Difference (deg C) between NOAA US and NASA US temperature anomalies.

At present, we don’t know very much about the NOAA calculation. To my knowledge, they make no effort to make a UHI adjustment along the lines of NASA GISS. As I’ve mentioned before, in my opinion, the moral of the surfacestations.org project in the US is mainly that it gives a relatively objective means of deciding between these two discrepant series. As others have observed, the drift in the GISS results looks like it’s going to be relatively small compared to results from CRN1-2 stations – a result that has caused some cackling in the blogosphere. IMO, such cackling is misplaced. The surfacestations results give an objective reason to view the NOAA result as biased. They also confirm that adjustments for UHI are required. Outside the US, the GISS meta-data on population and rural-ness is so screwed up and obsolete that their UHI “adjustment” is essentially random and its effectiveness in the ROW is very doubtful. Neither NOAA nor CRU even bothers with such adjustments, as they rely on various hokey “proofs” that UHI changes over the 20th century do not “matter”.


62 Comments

  1. Steve McIntyre
    Posted Jan 16, 2009 at 11:46 AM | Permalink | Reply

    source("http://data.climateaudit.org/scripts/spaghetti/noaa.us.txt") #noaa
    noaa=round((noaa-32)*5/9,2) #to Centigrade
    source("http://data.climateaudit.org/scripts/spaghetti/giss.us.txt") #giss.us
    #I had to manually copy data to my own directory so this may not work.
    source("http://data.climateaudit.org/scripts/utilities.txt")

    #COMPARE NASA AND NOAA
    Y=ts.union(ts.annavg(noaa),giss.us)
    m0=apply(Y[(1961:1990)-tsp(Y)[1]+1,],2,mean)
    Y=scale(Y,center=m0,scale=FALSE)

    Y=cbind(Y,Y[,1]-Y[,2])
    temp=(time(Y)>=1890);Y=ts(Y[temp,],start=1890)

    #library(GDD)
    # GDD(file=file.path("d:/climate/images/2009/gridcell","noaa_vs_nasa.gif"), type="gif", w=400, h=300)
    par(mar=c(3,4,2,1))
    plot(c(time(Y)),Y[,3],type="l",ylab="deg C")
    title(main="NOAA minus NASA: USA")
    abline(h=0,lty=2)
    index=c(time(Y))
    temp=(time(Y)>=1940)&!is.na(Y[,3])
    fm=lm(Y[temp,3]~index[temp]);coef(fm)
    lines(index[temp],fm$fitted.values,col="red")
    text(1890,.18,paste("Trend: ",round(100*coef(fm)[2],2)," deg/century"),col="red",font=2,pos=4)
    # dev.off()

    • MC
      Posted Jan 18, 2009 at 3:10 PM | Permalink | Reply

      Re: Steve McIntyre (#1), Just ran the code through R. From what I can gather, the difference covers the complete record between 1895 and 2008. From the graph, the NASA data are greater than the NOAA data for the most part until about 1970. I wonder if this is because the NOAA data have been adjusted down in the past. It’s surprising that the two series show a consistent difference between them instead of portions where the difference is zero.

  2. jae
    Posted Jan 16, 2009 at 12:16 PM | Permalink | Reply

    Yikes, maybe it’s a lot colder than we think it is!

  3. W F Lenihan
    Posted Jan 16, 2009 at 12:30 PM | Permalink | Reply

    The continuous US temperature graph appears to show that 1998 and 2005 were warmer than 1934. As I recall it was your work that forced Hansen to admit that 1934 was the warmest year in the 20th Century and to correct his data accordingly. Does the continuous temperature graph misrepresent the data?

    • Phil.
      Posted Jan 16, 2009 at 2:56 PM | Permalink | Reply

      Re: W F Lenihan (#3),

      The continuous US temperature graph appears to show that 1998 and 2005 were warmer than 1934. As I recall it was your work that forced Hansen to admit that 1934 was the warmest year in the 20th Century and to correct his data accordingly. Does the continuous temperature graph misrepresent the data?

      Hansen himself made that very point in 2001.

      The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the
      GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century.
      In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main
      reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing
      temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station
      history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S.
      temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.

      Steve: As I mentioned above, the NOAA graph has nothing to do with Hansen. The point of the post is that they are different series.

      • Phil.
        Posted Jan 16, 2009 at 4:05 PM | Permalink | Reply

        Re: Phil. (#14),

        Steve: As I mentioned above, the NOAA graph has nothing to do with Hansen. The point of the post is that they are different series

        Absolutely, I was addressing the point that Hansen had changed his mind about which year was the record, which the quote shows was not the case.

  4. Steve McIntyre
    Posted Jan 16, 2009 at 12:45 PM | Permalink | Reply

    #3. This is the NOAA graph, not the NASA graph.

  5. RICH
    Posted Jan 16, 2009 at 12:47 PM | Permalink | Reply

    “Outside the US, the GISS meta-data on population and rural-ness is so screwed up and obsolete that their UHI “adjustment” is essentially random and its effectiveness in the ROW is very doubtful.”

    Thank you very much for the info Steve. I have now linked your website to my favorites list.

  6. Novoburgo
    Posted Jan 16, 2009 at 12:54 PM | Permalink | Reply

    As W F Lenihan asks above, does the temperature graph misrepresent the data? The next question is: which data? Since NOAA has “modified” the records of numerous stations, their graph probably represents their data correctly. The big question is whether it represents the real temperature record!
    I would guess that if you believe the heat waves and record highs of the ’30s really didn’t happen, then the graph is O.K.

    snip – please don’t use this sort of language

  7. Posted Jan 16, 2009 at 1:20 PM | Permalink | Reply

    Any chance CRN could be added to the acronyms file for the benefit of us civilians?

  8. Urederra
    Posted Jan 16, 2009 at 1:33 PM | Permalink | Reply

    I think Steve means CRU.
    CRU____________Climatic Research Unit attached to the University of East Anglia, UK (prop: Dr Phil Jones)

    Steve: Not in this case. CRN is a station quality rating that Anthony uses at surfacestations.org.

  9. Novoburgo
    Posted Jan 16, 2009 at 1:38 PM | Permalink | Reply

    Steve, I apologize for the non-appropriate language but I feel that we are fighting a losing battle.

    snip – policy opinions are not permitted. As for me, I’m not “fighting” any battle against the policies that concern you here.

  10. henry
    Posted Jan 16, 2009 at 2:15 PM | Permalink | Reply

    Maybe it’s just me, but I’m seeing something strange:

    Looking at the early data (pre-1980) in Fig. 2, the spikes are sharp. After 1980, the spikes are blunted. In a couple of cases it appears (by eyeball) that the change in amplitude from one point to the next rises at the same rate as the trend line.

    Is there a statistical way to examine this?

  11. vivendi
    Posted Jan 16, 2009 at 2:23 PM | Permalink | Reply

    Shouldn’t NOAA and NASA, in the interest of their own cause and in the interest of the planet, make sure they come to the same (or comparable) results? They seem to compete with their temperature records, yet they want us to believe that the science is settled. I would at least expect them to explain where the difference between their data (and satellite data) comes from and what to make of it.
    So not only is there a (random) difference in absolute values, there is quite a remarkable trend underlying the difference …

  12. An Inquirer
    Posted Jan 16, 2009 at 2:47 PM | Permalink | Reply

    Steve M says in his article: ‘Neither NOAA nor CRU even bother with such adjustments as they rely on various hokey “proofs” that UHI changes over the 20th century do not “matter”.’
    Does anyone have a definite reference that reveals that CRU does not do UHI adjustments? In a discussion with an AGW believer, I mentioned that HadCru is often reported as not having a UHI adjustment.
    The believer adamantly said that it “most certainly does” and gave me Jones documentation. I read the reference thoroughly and repeatedly, coming to the conclusion that one could interpret the document’s UHI discussion in multiple ways, and I continue to wonder whether Jones/HadCru does adjust for UHI.

    • Phil
      Posted Jan 16, 2009 at 9:31 PM | Permalink | Reply

      Re: An Inquirer (#12),
      From: http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT3_accepted.pdf, page 11:

      A recent study of rural/urban station comparisons [Peterson & Owen, 2005] supported the previously used recommendation [Jones et al., 1990], and also demonstrated that assessments of urbanisation were very dependent on the choice of meta-data used to make the rural/urban classification. To make an urbanisation assessment for all the stations used in the HadCRUT dataset would require suitable meta-data for each station for the whole period since 1850. No such complete meta-data are available, so in this analysis the same value for urbanisation uncertainty is used as in the previous analysis [Folland et al., 2001]; that is, a 1σ value of 0.0055°C/decade, starting in 1900. Recent research suggests that this value is reasonable, or possibly a little conservative [Parker, 2004, Peterson, 2004, Peterson & Owen, 2005]. The same value is used over the whole land surface, and it is one-sided: recent temperatures may be too high due to urbanisation, but they will not be too low.

      • Phil
        Posted Jan 16, 2009 at 10:38 PM | Permalink | Reply

        Re: Phil (#23),

        From Current Climate Impact of Heating from Energy Usage by A.T.J. de Laat:

        For large energy consuming countries such as the United States, China, and India, the energy consumption is of the order of 0.2-0.4 watts per square meter. For smaller developed countries such as France, the United Kingdom, Germany, and Japan, energy consumption per square meter is larger, exceeding 1 watt per square meter. For small, densely populated countries such as the Netherlands, energy consumption exceeds 4 watts per square meter. On a city scale, such as central New York or Tokyo, energy use can exceed 100 watts per square meter [Makar et al., 2006].

        From Hinkel, et al, in the introduction:

        Urban warming is a manifestation of the direct and indirect alteration of the energy budget in the urban boundary layer. The direct impact is easily visualized as the transformation of stored chemical energy, typically in the form of high-quality fossil fuel. Known as anthropogenic heat, energy is converted to alternate forms to generate heat for buildings, steam for electrical power generation, to power motorized vehicles and drive industrial processes. Although several energy transformations may be involved, the stored chemical energy is eventually dissipated into the atmosphere as sensible heat. Because human activities are concentrated in urban areas, a net flux of heat into the atmosphere is often detectable.

        Consequently, it would follow that energy use of 100 watts per square meter in a large city would result in a large forcing and UHI should not be considered to be negligible. In fact, the UHI forcing may be substantially greater than any CO2 (or CO2 + H2O) forcing due to AGW in urban areas.

        • Posted Jan 17, 2009 at 7:10 AM | Permalink

          Re: Phil (#28),

          The subject of that paper by A.T.J. de Laat (2008) has actually been the subject of this paper published in 1973:

          H. A. Dwyer and T. Petersen, “Time-Dependent Energy Modeling”, Journal of Applied Meteorology, Vol. 12, pp. 36-42. February 1973. http://ams.allenpress.com/archive/1520-0450/12/1/pdf/i1520-0450-12-1-36.pdf

          The results given in the paper indicate that the effects of their estimate of the energy conversion activities by humans can easily be seen in the calculations with their model. The authors used 15.0 x 10^18 BTU/yr (1.58 x 10^22 Joules/yr) as the ‘heat generation’ by mankind over a period of 100 years.

          Additionally, I have mentioned the paper, kind of in-passing, in my Blog Post:

          http://danhughes.auditblogs.com/2007/06/03/dissipation-of-fluid-motions-into-thermal-energy/

          However, I have not been able to figure out what is meant by energy ‘consumption’ in the various data sources. That is, do the published numbers refer to the Demand end or the Supply end? If the Demand end, then the consumption needs to be multiplied by about 3, to account for the (in)efficiency, to get the actual energy added into the Climate System. If the Supply end, then no (in)efficiency accounting is needed.

          This is not an issue in the Dwyer and Petersen paper because they simply added the energy into the model calculations. I recall, however, that most (all?) of the added energy was over the Northern Hemisphere.

  13. Steve McIntyre
    Posted Jan 16, 2009 at 2:55 PM | Permalink | Reply

    Brohan et al 2006 is the most recent exposition. I’d look there. The HadCRU authors have published articles purporting to show the negligible impact of UHI – see refs in IPCC AR4 chapter 2.

  14. W F Lenihan
    Posted Jan 16, 2009 at 3:00 PM | Permalink | Reply

    Was not 1934 warmer than 1998 and 2005? If so, both NASA and NOAA are misrepresenting data. If I am wrong, please explain why.

    Steve: you’re assuming that there is “data” that gives you a straightforward answer; there isn’t. Measurement systems and environments have changed. Having said that, there are better and worse data sets and better and worse ways of handling data. There are dozens of threads on this.

  15. Hu McCulloch
    Posted Jan 16, 2009 at 3:09 PM | Permalink | Reply

    Here’s the GISS US48 chart (1951-80 = 0), for comparison to NOAA above. You may have to stretch the window to get it all on:

    Any difference between the two is pretty subtle, except that perhaps GISS has the 30’s a little warmer than NOAA.

  16. Peter Hartley
    Posted Jan 16, 2009 at 3:56 PM | Permalink | Reply

    Neither NOAA nor CRU even bother with such adjustments as they rely on various hokey “proofs” that UHI changes over the 20th century do not “matter”.

    The paper that seems to be cited most often in this regard is the Parker analysis of windy versus calm nights. The problem with this analysis in statistical terms is that the test has no power. His null is that if there is a difference in temperature between urban and rural sites, the difference will be greater on windy than on calm nights. He does not find the difference in the temperature gap to be significantly higher on calm nights. While this result would obtain if the true difference between urban and rural temperatures is zero (as he concludes), it would also obtain if windier nights according to his measure are unrelated to the temperature gap between the types of sites. As I recall, the key problem is that the measure of windiness he uses is poor thus biasing the difference in difference toward zero.


    Steve:
    I did some analyses of these papers a couple of years ago and nothing much withstands scrutiny.

    • Phil
      Posted Jan 16, 2009 at 10:05 PM | Permalink | Reply

      Re: Peter Hartley (#17),

      His null is that if there is a difference in temperature between urban and rural sites, the difference will be greater on windy than on calm nights.

      “The urban heat island in winter at Barrow, Alaska,” K. Hinkel et al., International J. of Climatology, Vol. 23, 2003, pp. 1889-1905, shows that the UHI is highest on calm nights (strongest at wind speeds below 2 m/s, or 4 knots) and essentially disappears when the wind speed is above 10 m/s (20 knots).

      From the abstract: “Here, we demonstrate the existence of a strong urban heat island (UHI) during winter. … The strength of the UHI increased as the wind velocity decreased, reaching an average value of 3.2 °C under calm (<2 m s−1) conditions and maximum single-day magnitude of 6 °C.”

      • Phil
        Posted Jan 16, 2009 at 10:15 PM | Permalink | Reply

        Re: Phil (#24),

        Another version of Hinkel, et al can be found here. The text is a little different and some of the graphs are in color.

  17. Don Keiller
    Posted Jan 16, 2009 at 4:46 PM | Permalink | Reply

    NOAA and CRU may dismiss UHI as “insignificant”, but we in the UK, at least, are told on a regular basis that it is not.
    Who by? None other than the BBC in their daily weather forecasts. Just the other night they were saying it would get quite chilly in Belfast – down to zero degrees C. But “remember, this is in the city; in rural areas, you may well get down to -2 or -3 degrees C”.

    Am I missing something here, or are NOAA and CRU?

    • DeWitt Payne
      Posted Jan 16, 2009 at 7:12 PM | Permalink | Reply

      Re: Don Keiller (#19),

      I believe the party line is that while there are in fact UHI’s, the actual measurement stations are so well sited and maintained that they are not affected. See WUWT for a counter argument.

  18. Colin Aldridge
    Posted Jan 16, 2009 at 5:04 PM | Permalink | Reply

    If the 2008 NOAA number is more or less at the long term mean and it is diverging upwards against NASA at the rate you demonstrate, then one would expect the NASA 2008 number to show 2008 as colder than the 20th century norm. However in 2008 the divergence suddenly goes below trend, with a delta of around +0.1 degree compared with 0.2 degrees for all previous years.

    Why would this sudden divergence occur? I think NASA do this retro adjustment of all previous temperatures to “normalise” them, but I thought this would have the opposite effect, i.e. exaggerating a cold year’s impact, not suppressing it.

  19. Mike C
    Posted Jan 16, 2009 at 5:20 PM | Permalink | Reply

    Steve,
    The answer is pretty simple. Hansen applies a low frequency variation and NOAA essentially does not. Hansen’s night lights scheme does catch much UHI, but as we all know, his NASA dark stations have many microclimate and UHI problems. The USHCN v2 program does not catch much of the low frequency variation and that is on the record by Claude Williams himself. Kristen’s home page has a post titled “NCDC wants UHI” and links a presentation by Dr Williams where he clearly states that USHCN v2 does not catch all UHI, and he says that’s not a bad thing.

  20. Posted Jan 16, 2009 at 10:20 PM | Permalink | Reply

    The NOAA/USHCN station data contain some interesting adjustments and patterns. It’s fun to simply explore the individual data. For example, the adjustments to Williams, AZ have this appearance:

    A closer view of the first decade shows

    Adjustments are made by season. It’s a peculiar pattern and hard for me to interpret physically. Why would Dec, Jan, Feb (at -2.1F) be so different from the preceding and following months? Dunno.

    The impact of the adjustments in the early years is to lower the reported temperatures by about 0.5F. That is not negligible.

    Even with these adjustments, the Williams temperature pattern looks odd:

    There’s a sharp change in trend circa 1932. Was that the result of some weather pattern shift? Well, a large weather shift should affect nearby stations, too. If we compare Williams with nearby Ft Valley we get this result:

    The two neighbors diverged abruptly in 1933, the same time as the Williams temperature jump. Such a divergence is hard to explain physically. I suspect there was an unreported station move at Williams, perhaps to a lower (and thus warmer) elevation.

    The slow divergence of the two neighbors after about 1940 is also interesting. I suspect microclimate factors are at play.

    • Kenneth Fritsch
      Posted Jan 17, 2009 at 1:06 PM | Permalink | Reply

      Re: David Smith (#26),

      David, in the third graph down you show a time series of temperatures at the Williams, AZ USHCN station with the major break around 1932. The Version 2 USHCN series is supposed to deal with these change points and make adjustments accordingly. I suspect the series you used here is not Ver 2, but I wonder what that would look like. I can post it.

  21. Steve McIntyre
    Posted Jan 16, 2009 at 10:33 PM | Permalink | Reply

    #26. David, I did lots and lots of graphics that look like this when I was trying to figure out Hansen’s Y2K goof.

    On adjustment patterns – Hansen does some adjustments at the same level by quarter, but his quarters are DJF, MAM,….

  22. Harry Eagar
    Posted Jan 17, 2009 at 1:34 AM | Permalink | Reply

    How urban could Barrow, Alaska, have been a century ago?

  23. Chris Wright
    Posted Jan 17, 2009 at 6:09 AM | Permalink | Reply

    It seems that the Parker study based on wind measurements is a major argument against strong UHI. This is completely bizarre. How difficult can it be to take temperature measurements at various places across an urban area?
    Of course, the problem for the warmers is this: if you actually take the measurements you will find a large UHI effect, often several degrees. The presenter of the BBC Climate Wars ably demonstrated this. He measured the temperature at the edge of Las Vegas and then drove into the city centre and repeated the measurement. Central Las Vegas was several degrees warmer, unintentionally providing a strong argument for the sceptical side!
    .
    I haven’t studied the Parker research, but the assumption that the UHI effect will be close to zero on windy nights may be wrong. Local winds will probably be generated by local heating. Cities actually generate their own wind systems due to ground heating and convection effects. It may be that UHI effects could actually be the cause of local winds. If so, then the Parker study may be invalid. It probably is, anyway, as there is a mountain of evidence that UHI is far stronger than the IPCC would have us believe.

    Chris

  24. Peter Hartley
    Posted Jan 17, 2009 at 8:47 AM | Permalink | Reply

    Re:

    Phil (#24)
    “The urban heat island in winter at Barrow, Alaska,” K Hinkel et al, International J of Climatology, Vol. 23, 2003, p. 1889-1905, shows that the UHI is highest on calm nights and essentially disappears when the wind speed is above 10 m/s (20 knots) and is highest for speeds less than 2 m/s (4 knots).

    I have read the Hinkel et al paper and do not have a problem with it. They carefully measured actual wind speeds and temperature differences on different nights for one location. The problem with Parker is that he uses a very noisy measure of wind speeds and does not have paired stations where the temperature difference is being measured like in the careful Barrow study.

    Do the following thought experiment. Assume that the wind measure is uncorrelated with the true wind speed at a given location. What would you find? No statistically significant difference between your group of urban and rural temperatures on different nights even if a correctly done study using paired stations and actual local and contemporaneously measured wind velocities would yield a difference in temperature gaps as a function of wind speed. In short, the statistical power of Parker’s test is crucially dependent on him having a good measure of wind speed for the stations in his sample. If that measure is lousy he will find a false acceptance of the null hypothesis that the difference is zero.
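    Hartley’s thought experiment is easy to simulate numerically. Below is a quick sketch (in Python rather than the R used elsewhere in the thread; all numbers are made up for illustration): a hypothetical 3 °C calm-air gap that shrinks by 0.3 °C per m/s of true wind is regressed first on the true wind speed and then on a nearly uncorrelated noisy measure of it, and the fitted slope collapses toward zero.

```python
import random

random.seed(42)

def slope(x, y):
    # ordinary least-squares slope of y on x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

n = 5000
true_wind = [random.uniform(0, 10) for _ in range(n)]          # m/s
# hypothetical urban-rural gap: 3 degC in calm air, -0.3 degC per m/s, plus noise
gap = [3.0 - 0.3 * w + random.gauss(0, 0.5) for w in true_wind]

good = slope(true_wind, gap)                                   # recovers roughly -0.3
noisy_wind = [w + random.gauss(0, 20) for w in true_wind]      # lousy wind measure
bad = slope(noisy_wind, gap)                                   # attenuated toward zero

print(round(good, 3), round(bad, 3))
```

    With a wind measure this noisy, the estimated wind-dependence of the gap is close to zero even though the true dependence is strong – exactly the false acceptance of the null that Hartley describes.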

  25. An Inquirer
    Posted Jan 17, 2009 at 10:07 AM | Permalink | Reply

    Thanks, Phil. My interpretation is that HadCru does not adjust for UHI (in fact, HadCru says that it lacks the data to do so), but there might be a tiny uncertainty of 0.055 degree C per century associated with its land values because of this lack of adjustment.
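    As a sanity check on that number (a quick Python sketch; the 0.0055 °C/decade figure is from the HadCRUT3 paper quoted above):

```python
# HadCRUT3's one-sided urbanisation uncertainty: 0.0055 degC per decade, starting in 1900
rate_per_decade = 0.0055
decades = (2000 - 1900) / 10
total = rate_per_decade * decades
print(round(total, 4))  # about 0.055 degC accumulated over the century
```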

  26. jae
    Posted Jan 17, 2009 at 10:25 AM | Permalink | Reply

    Evidently the consensus in Japan is that it’s not CO2. The data are said to be available, but you have to read some Japanese.

    Professor Itoh attacked the temperature record itself, saying “Data taken by the U.S. is inadequate. We only have satellite data of global temperatures from 1979 onwards”. Itoh, who has previously called global warming “the worst scientific scandal in history”, is also an expert reviewer for the IPCC.

  27. curious
    Posted Jan 17, 2009 at 11:33 AM | Permalink | Reply

    re: Phil at 28 and Dan at 31
    Wikipedia quotes world 2005 fuel consumption as equivalent to 16TW with 85% from fossil:

    http://en.wikipedia.org/wiki/World_energy_resources_and_consumption

    I make that approx 0.03 W/m2 at the earth’s surface (please check). So globally it is way off the forcings discussed as possible for CO2. I think on a very, (very) long view of nuclear, geothermal and world development it is perhaps worth a mention.
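    The 0.03 W/m2 figure is easy to check. A quick Python sketch, assuming 16 TW spread over the whole Earth (mean radius taken as 6371 km):

```python
import math

power_w = 16e12                       # world primary energy consumption, ~16 TW (2005)
area_m2 = 4 * math.pi * 6.371e6 ** 2  # Earth's surface area from mean radius

flux = power_w / area_m2              # average anthropogenic heat flux, W/m^2
print(round(flux, 3))                 # roughly 0.03 W/m^2
```

    That is well below the CO2 forcings discussed in the literature, consistent with the point that waste heat matters locally rather than globally.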

    • Phil
      Posted Jan 18, 2009 at 12:09 AM | Permalink | Reply

      Re: curious (#35),

      I make that approx 0.03W/m2 at the earth’s surface (please check).

      I don’t believe either of the sources I quoted claims that anthropogenic waste heat (AWH) is a significant “forcing” on a global basis, and I do not and did not mean to imply that. The question that I was raising is that AWH may be a very significant “forcing” in urban areas, where a lot of the surface temperatures are measured, and so may be biasing the calculated worldwide “average temperature” and, thus, the perception of AGW. Anthony Watts, through his surface stations project, has been raising very significant micro-siting issues. AWH is a similar, but different, issue. One could say that it is a macro-siting issue. Differences in temperature of 2-3°C (sometimes peaking at 6°C) between urban and surrounding rural areas would seem to be much larger than what HadCRUT3 is estimating (0.0055°C per decade over 100 years amounts to only about 0.055°C total).

  28. Kenneth Fritsch
    Posted Jan 17, 2009 at 12:48 PM | Permalink | Reply

    At present, we don’t know very much about the NOAA calculation. To my knowledge, they make no effort to make a UHI adjustment along the lines of NASA GISS. As I’ve mentioned before, in my opinion, the moral of the surfacestations.org project in the US is mainly that it gives a relatively objective means of deciding between these two discrepant series. As others have observed, the drift in the GISS results looks like it’s going to be relatively small compared to results from CRN1-2 stations – a result that has caused some cackling in the blogosphere.

    GISS defines stations as rural, small town and urban and then uses an algorithm to “smooth” the trends for all proximate urban stations to those of the rural stations. It is my impression that the USHCN Urban adjustment is similar, but it uses population (outdated as it might be) rather than night-time brightness levels as GISS does, and a different algorithm for adjusting to rural stations. GISS and USHCN admit to differences (that they cannot pinpoint) in the urban adjustments, with the GISS adjustment leading to a smaller long term temperature trend for the US. The new Version 2 of USHCN uses a change point adjustment for all homogeneity and urban effects adjustments that actually makes the final temperature trends higher than those derived from the formerly final adjustment in the Urban adjusted version. One also finds that the long term differences in temperature trends when comparing these versions can vary significantly when looking at intermediate term time periods.

    When I put together some USHCN station data for regressing temperature trends against the CRN123 and CRN45 ratings by the Watts team for RomanM to analyze statistically, we found some significant differences in trends with CRN rating. I can see where that analysis should be repeated using the GISS station data – once that data has been scraped from GISS and made available to the public.

    The GHCN temperature data for the world, excluding the US (USHCN), are not adjusted, as the USHCN data are, for TOBS or SHAP or MMTS, but are adjusted for an Urban effect using population. I am unclear how this adjustment is made. To my knowledge, the exact details of these adjustments, in general, are not available in a form outside the computer code in which they reside.

    I think that the major revelation coming out of the Watts team analysis, given the weight that the GISS series places on rural stations for deriving overall temperature trends, is the variation and the potential non-climate-related and micro-site temperature effects that they see in both rural and non-rural sites. The GISS website does discuss this particular issue and its potential influence, but without bothering to do or present further analyses that might afford better insights into its extent.

  29. Posted Jan 17, 2009 at 2:17 PM | Permalink | Reply

    Re #38 Yes, please post. It should be a good test of the effectiveness of v2.

    • Kenneth Fritsch
      Posted Jan 17, 2009 at 2:26 PM | Permalink | Reply

      Re: David Smith (#39),

      David, I graphed the USHCN Filenet, Urban and Version 2 temperature series for Williams, AZ USHCN 29359. The break point that you found appears to be removed, or at least mitigated, in the Version 2 series. Whether that adjustment is the correct one is a matter for more investigation.

  30. Posted Jan 17, 2009 at 2:38 PM | Permalink | Reply

    Re #40 Excellent! Kudos to USHCN for the improvement. The current GISS history for Williams, though, still shows the break.

    Pardon my ignorance, but is the NOAA plot given in this post based on v2? Also, Ken, do you have a convenient link to v2?

  31. Kenneth Fritsch
    Posted Jan 17, 2009 at 3:09 PM | Permalink | Reply

    David, here is the link to ver2 USHCN. The GISS version that you linked to is either the homogeneity adjusted version or the default version, which are based on the USHCN adjustments for TOBS, MMTS and SHAP, but not the USHCN adjustments for urban effects or for infilling missing data.

    The way I read what GISS does is that those versions are based on the USHCN Filenet version with their own Urban adjustments and without infilling for missing data. GISS simply leaves missing data missing, and when you compare USHCN versions with GISS versions this becomes very apparent. And that is not to say that USHCN temperature series do not have missing data, just that GISS has significantly more.

    Also, you are probably aware that GISS uses the meteorological year from Dec of the previous year through Nov of the next year. I had to do some recalculation after noting this in a recent analysis. Also, since the annual mean must be calculated from the months, one has to make sure that the months are properly weighted by the number of days in them, including the adjustment for leap years. Using the USHCN series is much simpler.
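    The day-weighting Kenneth describes can be sketched as follows (a Python illustration for a non-leap year; the station data handling itself is not shown):

```python
# Days in each month of a non-leap year
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def annual_mean(monthly, days=DAYS):
    # weight each monthly mean by the number of days in that month
    assert len(monthly) == 12
    return sum(t * d for t, d in zip(monthly, days)) / sum(days)

# Toy example: 0 degC everywhere except a +10 degC February
monthly = [0.0] * 12
monthly[1] = 10.0
print(round(annual_mean(monthly), 3))   # day-weighted
print(round(sum(monthly) / 12, 3))      # naive average slightly over-weights February
```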

    ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/

    You might want to go back to the GISS page that asked you to enter the GISS station ID and determine whether you used the default setting or the homogeneity one – they might be different regards the break point.

  32. Posted Jan 17, 2009 at 3:16 PM | Permalink | Reply

    If the CRN 1 and 2 stations are a good match for GISS, how do we resolve the trend discrepancy between satellite and ground data?

    • Raven
      Posted Jan 17, 2009 at 3:45 PM | Permalink | Reply

      Re: Jeff Id (#43)
      The continental US GISS measurements are likely not that bad after all is said and done. This conclusion is supported by the McKitrick & Michaels paper from last year, which looked for correlations between the temperature record and economic development.

      What is unknown is the quality of measurements for the rest of the world which would dominate the global temperature records which you compare to the satellite data.

      • Posted Jan 17, 2009 at 11:29 PM | Permalink | Reply

        Re: Raven (#44),

        I wondered if someone would answer that way. I have been working with the satellite data lately and trying to correct the data at 1992 which has an inaccuracy due to a transition to a new instrument. The slopes of RSS and UAH match very well on either side of the discontinuity.

        I just wondered if someone had a reasonable physical explanation of why the long term GISS vs Sat. slopes are so different. For me this became more interesting after John Christy’s 1.2 multiplier was confirmed by the SD of de-trended and 2-yr filtered UAH/GISS, indicating the expected agreement on the short term with apparently terrible long term agreement.

        Re: Kenneth Fritsch (#49),

        The long term trend difference cannot continue indefinitely; someone needs to make a correction or offer a physical explanation. If GISS is confirmed by good stations (and Raven’s point is a good one), the trend needs to change in the sat data. Since the sat data matches the balloons well enough (I say that because of my recent understanding of the offsets in the sondes), something must be really messed up in the ground data.
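Jeff's step-removal idea at the 1992 instrument transition can be sketched generically: fit a line to each side of a known breakpoint and shift the later segment so the two fits agree at the break. This is an illustration of the technique, not Jeff Id's actual procedure, and `remove_step` is a made-up helper:

```python
import numpy as np

def remove_step(t, y, t_break):
    """Fit separate linear trends before and after t_break and
    shift the later segment so the two fits meet at the break."""
    pre, post = t < t_break, t >= t_break
    b1, a1 = np.polyfit(t[pre], y[pre], 1)    # slope, intercept before
    b2, a2 = np.polyfit(t[post], y[post], 1)  # slope, intercept after
    offset = (b2 * t_break + a2) - (b1 * t_break + a1)
    y_adj = y.copy()
    y_adj[post] -= offset
    return y_adj, offset

# synthetic series: 0.015 deg/yr trend with a spurious +0.1 step at 1992
t = np.arange(1979, 2009, 1 / 12)
y = 0.015 * (t - 1979) + 0.1 * (t >= 1992)
y_adj, offset = remove_step(t, y, 1992.0)  # recovers the 0.1 offset
```

This only works if, as Jeff observes for RSS and UAH, the slopes on either side of the discontinuity genuinely match; if the break also changes the trend, a simple offset correction is the wrong model.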

        • Kenneth Fritsch
          Posted Jan 18, 2009 at 10:59 AM | Permalink

          Re: Jeff Id (#50),

          I think it is an appropriate time for me to reference an analysis of covariance that RomanM did, as a professional statistician, on some data (CRN rating, population, altitude, longitude and latitude) that I provided him, and also to give some history on what motivated this analysis.

          A poster at CA, known as John V, did some good work with the GISS code, as I recall, and was able to provide some station data for comparison with the Watts’ team CRN ratings very early in the process of the Watts’ team evaluations. He then proceeded to make some comparisons of the GISS temperature trends with the available rural CRN1 and CRN2 stations. As it turns out the variability in temperature trends from station to station regardless of CRN rating can be quite high and John V was using something like 16 CRN2 and 1 CRN1 rural stations in his comparison. It was obvious to me that larger numbers were required to make these comparisons given the size of station variability.

          I attempted to do a layperson’s statistical analysis and found from the advice of RomanM that some of my assumptions were incorrect. RomanM did a rather comprehensive analysis that was posted in the link below. He also included an analysis with the Watts’ ratings broken down into the two groups of CRN123 and CRN45. This breakdown provides larger samples because the CRN12 group is rather small and the CRN3 is considerably larger.

          See post #130 by RomanM at
          http://www.climateaudit.org/?p=3169

          My motivation in all these exercises has been to encourage a proper statistical analysis of the Watts’ team evaluations. I used CRN comparisons with USHCN station adjusted data (the Urban series) and would like to see that comparison expanded to the USHCN Version 2 series and the homogeneity adjusted GISS series.

          I have noticed that the latest Watts’ team CRN ratings are not readily available for viewing, which might be motivated by an attempt to prevent impatient people like me from drawing premature conclusions based on incomplete data. That would make sense to me, but I would also hope that someone will eventually do the proper statistical analyses and publish them – or at least post them.
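The core of the comparison Kenneth wants — whether mean trends in the well-sited and poorly-sited groups are distinguishable given the large station-to-station variability — can be sketched as a two-sample test on per-station trends. This is far simpler than RomanM's covariance analysis, and every number below is invented for illustration:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# hypothetical per-station trends (deg C/decade) grouped as RomanM did
crn123 = [0.28, 0.31, 0.25, 0.33, 0.30, 0.27]
crn45  = [0.36, 0.41, 0.38, 0.35, 0.44, 0.39]
print(welch_t(crn123, crn45))  # negative: CRN123 trends lower in this toy data
```

With station trends as noisy as John V's early sample suggested, the groups need to be considerably larger than 16 or 17 stations before a statistic like this can separate them — which is Kenneth's point about sample size.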

        • BarryW
          Posted Jan 18, 2009 at 1:13 PM | Permalink

          Re: Kenneth Fritsch (#52),

          Here is a link to the latest station list that Anthony has on the surface station gallery website. He’s removed the excel file he originally had for some reason. I don’t know how up to date this version is.

        • Kenneth Fritsch
          Posted Jan 18, 2009 at 6:37 PM | Permalink

          Re: BarryW (#54),

          Thanks, BarryW, for that link, of which I was aware. After a more careful comparison, it does have more stations rated than I had previously analyzed, but not as many as I thought I had seen reported somewhere as completed. I suspect now that this is my misconception.

        • BarryW
          Posted Jan 18, 2009 at 8:43 PM | Permalink

          Re: Kenneth Fritsch (#59),

          I think Anthony has not been able to keep up with the number of sites that have been surveyed (500+ marked in the database vs 700+ identified on the surfacestations site). Like many in today’s financial climate, he’s probably stretched pretty thin with his family, businesses, blog, and website.

        • Posted Jan 19, 2009 at 3:51 PM | Permalink

          Re: Kenneth Fritsch (#52),

          Thanks, I read the analysis when you put the link up and you’re right it was quite informative.

    • Kenneth Fritsch
      Posted Jan 17, 2009 at 11:12 PM | Permalink | Reply

      Re: Jeff Id (#44),

      If the CRN 1 and 2 stations are a good match for GISS, how do we resolve the trend discrepancy between satellite and ground data?

      While this might be true, I do not recall seeing a study posted here or in the literature that showed how good that match might be. It is also important and potentially informative to look at those matches over various time periods.

  33. Posted Jan 17, 2009 at 5:37 PM | Permalink | Reply

    Looks like the 1933 break point remains even after the GISS homogeneity adjustment (Graph). And here’s a plot of the GISS homogeneity adjustment for Williams since 1910 (Graph).

    Here’s what I sorely need

    • Kenneth Fritsch
      Posted Jan 18, 2009 at 11:32 AM | Permalink | Reply

      Re: David Smith (#46),

      Thanks for the graphs, which show that the GISS homogeneity adjustment, while apparently significantly changing the trend slope, does nothing to adjust for the breakpoint – as was the case with the USHCN Version 2 adjustment. These version differences make me more and more curious about the underlying adjustments, which can be large for individual stations and considerably smaller when averaged over the entire series.

      The linked paper below was coauthored by the owners of the GISS and USHCN/GHCN temperature series. In the paper the authors politely point to differences between the USHCN/GHCN and GISS series methodology and resulting temperature trends without ever really coming to grips with why they exist and what those differences mean for any CIs that might be attached to the resulting adjusted temperature trends.

      This paper was published prior to the release of the Version 2 from USHCN with Version 2’s differences from the Urban USHCN adjusted series in both temperature trend and methodology. I would like very much to see a follow up paper comparing Version 2 series to the GISS series, and, for that matter, Version 2 versus Urban.

      http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf

  34. JD
    Posted Jan 17, 2009 at 6:24 PM | Permalink | Reply

    The rapid upturn in the NOAA minus NASA (Fig.2) at about 1990 might coincide with the large drop out of stations reported by Joe D’Aleo and Ross McKitrick.

    http://www.uoguelph.ca/~rmckitri/research/nvst.html
    http://data.giss.nasa.gov/gistemp/station_data/

  35. Andrew
    Posted Jan 17, 2009 at 9:43 PM | Permalink | Reply

    “likely not that bad after all is said and done”

    I’m sorry again, but this is not a very scientific statement. I don’t mean that to make anyone mad, but it’s got three cliches in it – ‘likely’ (with no numbers), ‘not that bad’ (but still bad?) and ‘after all is said and done’ (when is that?).

    Andrew

  36. trevor
    Posted Jan 18, 2009 at 4:37 PM | Permalink | Reply

    That the UHI exists is evident to anybody who has a car with an outside temperature readout on the instrument panel. Both of our cars (2004 Subaru Outback and 2007 Toyota Kluger) have this capability – in the base models too – so it is highly likely that there are many thousands of cars out there that can directly observe the UHI effect as you drive past a city centre or airport.

    We drive from our house in the lower north shore of Sydney to our place in the country pretty much every week, and we have observed the UHI effect around Sydney airport nearly every time. Just yesterday, for example, the temperature readout was 21 degrees on the outskirts of Sydney, reached 27 degrees as we drove by the airport and then the city, and then dropped back to 21 degrees at our home.

    I am curious as to why there is not a lot more comment on the UHI – since it can so easily be observed by any of us. Perhaps scientists only drive older cars that don’t have temperature sensors?

    • Craig Loehle
      Posted Jan 19, 2009 at 8:20 AM | Permalink | Reply

      Re: trevor (#56), UHI effect is one of several elephants in the room that everyone is politely pretending is not there, along with the hockey stick linearity assumption, the presto changeo with the magnitude of the aerosol effect, the unphysicality of several GCM processes…The room is getting pretty crowded with elephants.

  37. curious
    Posted Jan 18, 2009 at 6:09 PM | Permalink | Reply

    Re: Phil at 51
    Agreed – the comment was in response mainly to Dan at 31, who seemed to be seeking clarification of the impact on the climate system of the energy liberated by aggregate human activity. To me this calc. gave a quick handle on whether or not it is likely to be a factor of influence. The post was in no way intended to undermine the AWH influence on urban areas and the temperatures recorded there. After I hit the post button I thought about clarifying – apologies if it gave the wrong impression. Also, I haven’t read the references, so I may have misunderstood the context.

    The further comment re: world development etc. was actually serious – I hadn’t dug out the data and done the sums, but I was thinking that it wasn’t so long ago that developed world energy consumption per capita was (say) a 100th of what it is now. However, a quick look at the WWF 2006 report, page 28, table 2, suggests it’s most unlikely to be a problem. On their basis, high income countries are approx 1/6 of global population, and adding the CO2 and nuclear “footprint” figures suggests a per capita impact of 4x the rest of the world. Taking their measures as an indication of energy use, getting the rest of the world up to the same energetic consumption would imply a fourfold increase in impact (again, please check) – so the globally averaged figure should equate to 0.12W/m2. I might be looking at this the wrong way, and the WWF data may not be a reliable proxy. I’ll check other approaches at some point; whilst it is a bit OT, I think it is relevant to the AGW discussion although not here.

    FWIW I think the work of Anthony, Steve (and other contributors) is excellent and vital and I certainly wasn’t intending to “dis”!
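Taking up curious's "please check": under a literal reading of the stated figures (high-income countries ~1/6 of population at ~4x the per-capita footprint of everyone else), the multiplier from raising everyone to the high-income level comes out somewhat below fourfold. A sketch, with the WWF numbers taken entirely at face value:

```python
# stated assumptions from the comment above
rich_frac, rich_mult = 1 / 6, 4.0

# current global average impact, in rest-of-world per-capita units
current = rich_frac * rich_mult + (1 - rich_frac) * 1.0  # = 1.5 units

# everyone consuming at the high-income rate
everyone_rich = rich_mult                                 # = 4.0 units

print(everyone_rich / current)  # ≈ 2.67, i.e. nearer 2.7x than 4x
```

The fourfold figure would follow only if the current global average were taken as the rest-of-world level; either way, the conclusion that the globally averaged flux stays small is unaffected.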

  38. curious
    Posted Jan 18, 2009 at 6:11 PM | Permalink | Reply

    …and the Beyonce gag had tears running down my face! Best to all :)

  39. Posted Jan 19, 2009 at 1:01 AM | Permalink | Reply

    A little off topic, but I just finished a homogenization of satellite trends which has kept my interest for several weeks. There is a step in the satellite data due to a transition from NOAA-11 to 12. By comparison to GISS over that timeframe, UAH turns out to be the superior time series. It’s the same result John Christy got using balloon data.

    http://noconsensus.wordpress.com/2009/01/19/satellite-temp-homoginization-using-giss/

  40. Bobby Hawk
    Posted Dec 2, 2009 at 10:44 AM | Permalink | Reply

    This chart just proves that any normalized data can indicate whatever the author wants it to indicate. Further, to claim any type of trend in weather, all of the attributes which contribute to weather would have to be reviewed, not just temperature. I don’t mean to discredit a scientist’s work, but charts of this type are common in business, and what do they prove? Basically they prove that specific scientists know how to make charts which show what they want to show. The data is used to support further funding of research, and that’s about the end of the matter.
    Many questions come to mind when it comes to charts of this type: specifically, what data is represented? How was the data collected? Were the instruments that collected the data calibrated? What were the criteria used to select the location of each temperature collecting instrument? What time frame of the day, month and year was selected from which to choose samples?
    To me it’s just another chart that allows the presenter to direct its viewers in the direction the presenter wants to go. Without credibility behind the statistics involved, it’s basically cute but meaningless wallpaper.

8 Trackbacks

  1. [...] clear indicator that the two data sets are divergent. Steve McIntyre has coincidentally just done a similar comparison of NOAA USA yearly data vs. GISS USA yearly data, and came to the conclusion that the NOAA slope is [...]

  2. [...] has coincidentally just done a similar comparison of NOAA USA yearly data vs. GISS USA yearly data (http://www.climateaudit.org/?p=4852), and came to the conclusion that the NOAA slope is even steeper than GISS, diverging from UAH by [...]

  3. [...] “NOAA versus NASA: US Data“, Steven McIntyre, Climate Audit, 16 January 2009 — Excerpt: Readers need to keep in [...]

  4. [...] At present, we don’t know very much about the NOAA calculation. To my knowledge, they make no effort to make a UHI adjustment along the lines of NASA GISS. As I’ve mentioned before, in my opinion, the moral of the surfacestations.org project in the US is mainly that it gives a relatively objective means of deciding between these two discrepant series. As others have observed, the drift in the GISS results looks like it’s going to be relatively small compared to results from CRN1-2 stations – a result that has caused some cackling in the blogosphere. IMO, such cackling is misplaced. The surfacestations results give an objective reason to view the NOAA result as biased. It also confirms that adjustments for UHI are required. Outside the US, the GISS meta-data on population and rural-ness is so screwed up and obsolete that their UHI “adjustment” is essentially random and its effectiveness in the ROW is very doubtful. Neither NOAA nor CRU even bother with such adjustments as they rely on various hokey “proofs” that UHI changes over the 20th century do not “matter”. See post here. [...]

  5. [...] to the NOAA 48 data set that I’d previously compared to the corresponding GISS data set here (which showed a strong trend of NOAA relative to GISS). Here’s a replot of that data – there [...]

  6. [...] effort to adjust for UHI in the U.S. (outside the USA, its efforts are risible.) A few days ago, I showed the notable difference between the GISS (UHI-adjusted) version in the US and the NOAA unadjusted [...]

  7. By Surface Stations « Climate Audit on Jul 31, 2012 at 9:05 AM

    [...] have commented from time to time on US data histories in the past – e.g. here, here here, each of which was done less hurriedly than the present analysis. [...]

  8. By The Talking Points Memo « Climate Audit on Jul 31, 2012 at 2:09 PM

    [...] to the NOAA 48 data set that I’d previously compared to the corresponding GISS data set here (which showed a strong trend of NOAA relative to GISS). Here’s a replot of that data – [...]
