Today, I’d like to discuss an interesting problem raised recently by Joe d’Aleo here – has the temperature of New York City increased in the past 50 years? Figure 1 below is excerpted from his note, about which he observed:

Note the adjustment was a significant one (a cooling exceeding 6 degrees from the mid 1950s to the mid 1990s.) Then inexplicably the adjustment diminished to less than 2 degrees …The result is what was a flat trend for the past 50 years became one with an accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same and there has not been a population decline but a spurt in the city’s population in the 1990s.

I’ve spent some time trying to confirm his results and, as so often in climate science, it led into an interesting little rat’s nest of adjustments, including another interesting Karl adjustment that hasn’t been canvassed here yet.

Update (afternoon): I’ve been able to emulate the Karl adjustment. If one reverse engineers this adjustment to calculate the New York City population used in the USHCN urban adjustment, the results are, in Per’s words, gobsmacking, even by climate science standards.

Here is the implied New York City population required to justify Karl’s “urban warming bias” adjustments.

Here is the figure from Joe D’Aleo that prompted the inquiry:

Figure 1. From D’Aleo showing Central Park July temperatures.

First of all, the presence of a USHCN station in New York Central Park is an interesting choice, given the characterization of the USHCN network as being mostly “rural or small town”. Obviously one exception doesn’t disprove this characterization – it’s just that this is definitely an odd choice. But it provides an interesting opportunity to examine USHCN urban adjustment procedures in the context of what even climate scientists must surely acknowledge as being an urban location.

My initial attempt at replicating this result with USHCN v2 data, GISS data and/or GHCN data was unsuccessful. I asked Joe D’Aleo where his data came from and he pointed to two NOAA locations that I had previously not used – the top chart came from ERH here and the bottom chart came from Climvis here. In each case, I had to scrape the data from webpages (since there was no convenient ASCII version). All three USHCN v2 versions (raw, time-of-observation adjusted and adjusted) were virtually identical, and were virtually identical to the ERH version from 1912 on. There is a puzzling difference between 1871 and 1912 which remains unresolved, but which is not relevant for the present note. The Climvis version, however, did not tie in to any USHCN v2 series.

I exchanged a number of emails with Karin Gleason of NOAA, trying to determine the provenance of the Climvis series. She pointed to the USHCN v1 website and eventually to the following data set: http://www1.ncdc.noaa.gov/pub/data/ushcn/urban_mean_fahr.Z and, sure enough, this matched (although it only went up to 2003). Combining the Climvis values after 2003 with this USHCN v1 data set, I obtained the following results (annual instead of July as in the D’Aleo post), confirming a remarkable difference between the Climvis data (using one USHCN adjustment system) and ERH (using another USHCN adjustment system). As D’Aleo observed, these are not small differences: the differences in the 1960s and 1970s exceeded 3 degrees C. How is such a discrepancy possible?

Figure 2. Annual New York Central Park temperatures

USHCN v2 and USHCN v1
The Climvis information links to USHCN version 1 here: http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html . If you scroll down to the adjustments section, you see that the data package contains four versions (version 2 shows only three), listed as:

1. Raw: the data in this version have been through all quality control but have no data adjustments.
2. TOB: these data have also been subjected to the time-of-observation bias adjustment.
3. Adjusted: these data have been adjusted for the time-of-observation bias, MMTS bias, and station moves, etc.
4. Urban: these data have all adjustments including the urban heat adjustments.

They describe the 6th adjustment step as follows:

6. The final adjustment is for an urban warming bias which uses the regression approach outlined in Karl, et al. (1988). The result of this adjustment is the “final” version of the data. Details on the urban warming adjustment are available in “Urbanization: Its Detection and Effect in the United States Climate Record” by Karl. T.R., et al., 1988, Journal of Climate 1:1099-1123.

However, if one goes to the corresponding site for USHCN version 2 : http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ , you find that the 6th adjustment step (the Karl et al 1988 “adjustment for an urban warming bias”) is not included. Climvis, for some reason, is using USHCN version 1 data with Karl’s urban warming bias adjustments, updated somehow to the present.

We also have the interesting spectacle of two branches of NOAA seemingly unable to agree on the temperature at Central Park in the 1960s and 1970s within 3 deg C, while Mann purports to know global temperature in AD1000 within 0.2 deg C.

Update:
I’ve been able to replicate the urban adjustment between 1890 and 1990, as shown in the graphic below. The implications of this replication for the post-1990 data are, in Per’s words, gobsmacking. Karl et al 1988, the reference for the urban adjustment, provides the following formula for the urban adjustment (in deg C):

$adj = 1.8 \times 10^{-3} \cdot \mathrm{population}^{0.45}$

Candidate population data used by USHCN includes the following file ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/metrof_hybrid.dat which gives populations nominally in 10-year intervals from 1890 to 1990 (although 1890 is missing). I substituted these population figures into the formula above and show the negative values as bold points in the lower frame of the graphic above – yielding a virtually exact match. So we can safely conclude that this is how the USHCN “urban warming bias” adjustment was calculated between 1890 and 1990.
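For readers who want to check the arithmetic, here is a minimal sketch of the Karl et al 1988 formula (in Python rather than R, purely for illustration; the function name is mine), using a few of the decadal NYC-area populations from the NCDC FTP file discussed in the comments below:

```python
# Minimal sketch of the Karl et al 1988 "urban warming bias" formula:
# adjustment (deg C) = 1.8e-3 * population^0.45

def karl_adjustment(population):
    """Urban warming bias adjustment in deg C for a given population."""
    return 1.8e-3 * population ** 0.45

# A few decadal NYC-area populations from the NCDC station file
for year, pop in [(1900, 3437202), (1950, 12911994), (2000, 18087000)]:
    print(year, round(karl_adjustment(pop), 3))  # e.g. 1900 -> ~1.57 deg C
```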

Figure 3. As Figure 2, but bold dots in lower frame show estimates from population using the formula in Karl et al 1988.

But what happens after 1990? For the period from 1990 to 2006, we don’t know the population figures used by USHCN, but we do know the value of the adjustment and, by reversing the calculation, we can calculate the New York population assumed by NOAA in their calculations, which is shown in the figure below. In effect, USHCN has assumed that by 2003, the last year in the urban_mean_fahr version, New York City had reverted to the population of 1910, and that by 2006 (according to Climvis) it had reverted to the population of 1850 (1.2 million).
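The reverse calculation is just the same formula inverted. A sketch (the inversion is mine, done in Python for illustration; NOAA’s actual code is not available):

```python
# Invert adj = 1.8e-3 * population^0.45 to recover the population implied
# by a given urban adjustment (adjustment magnitude in deg C).

def karl_adjustment(population):
    return 1.8e-3 * population ** 0.45

def implied_population(adj):
    return (adj / 1.8e-3) ** (1.0 / 0.45)

# Round trip: inverting the formula recovers the input population
pop = 7454995  # a 1940 NYC-area population from the NCDC station file
assert abs(implied_population(karl_adjustment(pop)) - pop) < 1.0
```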

Figure 4. Estimated New York City Population according to Climate Scientists

If the population of New York is held constant after 1990, as opposed to the extermination supposed by USHCN, then the urban-adjusted values would be as shown below:

1. Michael Jankowski
Posted Jul 5, 2007 at 1:41 PM | Permalink

Fascinating stuff.

So what you have is a city which has had relatively minimal population changes over the last 60 yrs (plenty of population expansion in the metro area, of course), little change in land use over that period, and a station location that hasn’t changed since 1920…yet adjustments of up to 3 deg C are necessary?

2. Michael Jankowski
Posted Jul 5, 2007 at 2:24 PM | Permalink

6. The final adjustment is for an urban warming bias which uses the regression approach outlined in Karl, et al. (1988).

Well, I clicked on the link for Karl (1988), and here’s what it says in the conclusions (p1121, last full paragraph):

“…We do not recommend the use of the equations developed in this study to predict the impact of a heat island at any particular station as the explained variance of the equations is not high enough for such an appraisal to be valid…”

Isn’t step 6/”final adjustment” doing exactly what Karl recommended against and called invalid?

3. Anthony Watts
Posted Jul 5, 2007 at 2:48 PM | Permalink

Here is a picture of the NYC Central park weather station:

Note the MMTS shelter at the rear, where the official temperature for the USHCN network is taken.

4. Bob Meyer
Posted Jul 5, 2007 at 2:56 PM | Permalink

I grew up in Manhattan and am very familiar with Central Park and the weather station located on the castle near the lake. (Although forty years ago I didn’t give the station a second thought)

The population of New York has been relatively steady; however, the population of Manhattan has declined since 1910. Wikipedia has the following numbers, which correspond to what I remember about the history of Manhattan.

Census Pop. ±%
1790 33,111 —
1800 60,489 82.7%
1810 96,373 59.3%
1820 123,706 28.4%
1830 202,589 63.8%
1840 312,710 54.4%
1850 515,547 64.9%
1860 813,669 57.8%
1870 942,292 15.8%
1880 1,206,299 28.0%
1890 1,515,301 25.6%
1900 2,050,600 35.3%
1910 2,762,522 34.7%
1920 2,284,103 -17.3%
1930 1,867,312 -18.2%
1940 1,889,924 1.2%
1950 1,960,101 3.7%
1960 1,698,281 -13.4%
1970 1,539,233 -9.4%
1980 1,428,285 -7.2%
1990 1,487,536 4.1%
2000 1,537,195 3.3%
Est. 2006 1,611,581 4.8%

The unadjusted temperatures seem more closely related to population than the adjusted data, although the pre-1975 adjusted data does follow population pretty well. However, it is important to remember that far more people work in Manhattan than live there and that development took place mainly from the 1950s on. If I had to bet on what would track temperature the best, I would choose power consumption on the island.

As for Central Park being an unusual site to choose: you would be hard pressed to find a patch of land on Manhattan other than a city park that isn’t occupied by buildings. However, the castle in the park is probably located farther from people and from the two rivers (three if you count the Harlem River separately from the East River) that flow around Manhattan than any other place.

5. Bob Meyer
Posted Jul 5, 2007 at 2:58 PM | Permalink

I just saw the picture of the station and that is not the one that was on the castle forty years ago. I can’t say how far it is from the castle.

6. JG
Posted Jul 5, 2007 at 3:11 PM | Permalink

NOAA’s MMS system seems to be confused by the location of the weather station. Currently the system says it is located at 40.788890 (40°47’20″N) / -73.966940 (73°58’00″W), which would put it on top of a building next to the El Dorado on Central Park West. MMS also says it was moved there in June 1995 from 40.783330 (40°46’59″N) / -73.966670 (73°58’00″W), a distance of roughly 2000 feet.

However, all other records I can find state that the station has been located at the Belvedere Tower (more popularly known as Belvedere Castle) since 1920. In fact, the textual location description from MMS says it is at Belvedere Tower. The location of Belvedere Tower is 40°46’45.76″N / 73°58’8.51″W. This is approximately 3500 feet, or 2/3 mile, from the coordinates provided by MMS.

An image of the weather station can be found here

7. Keith
Posted Jul 5, 2007 at 3:19 PM | Permalink

So if it’s -4 C in Minsk, and +40 C in Buenos Aires, that means what?

This is so silly.

Now we track that it’s .003 degrees warming in a park in Paris, and equate that to something else.

Crazy.

8. Anthony Watts
Posted Jul 5, 2007 at 3:26 PM | Permalink

RE6 I’m not surprised. The MMS system sometimes has conflicting lat/lon. Is there anybody reading this in or near NYC that can go out and locate the station, get a GPS fix and some photos to add to this discussion and to http://www.surfacestations.org ? If so, please use contact page there and we’ll get you a site survey form and instructions.

9. Bob Meyer
Posted Jul 5, 2007 at 3:46 PM | Permalink

I looked at the map on the NOAA site that shows the present location and two previous locations for the weather station. None are at the Castle, yet the descriptions all say Belvedere Tower. No one that I knew ever called it anything but “the Castle”.

JG’s info was the same that I found when I checked out the castle on the web. When I was a kid the station was mounted on one of the towers, at least that was where I saw the anemometer. I wouldn’t have recognized any of the other instruments since I didn’t care about weather at the time.

In fairness to NOAA they probably never considered the coordinates to be very important since their site requires a different login to get directions to the stations. They might actually know how to get there.

10. Richard deSousa
Posted Jul 5, 2007 at 3:55 PM | Permalink

It’s about time the ground based temperature records are scrutinized. The warmers have been beating up on the satellite temperature records for the past two decades… what’s good for the goose is certainly good for the gander…

11. per
Posted Jul 5, 2007 at 4:19 PM | Permalink

you know, I can’t help but think that a systematic audit of even a sample of these sites could make a solid paper. You would have to think of a systematic approach. But then, there are some simple bits to cover (are stations compliant with their own standards?), and a little bit of analysis. I don’t think you necessarily have to solve the whole question; but just addressing some of the important issues in a meaningful way could make a significant contribution.

per

12. Roger Dueck
Posted Jul 5, 2007 at 4:21 PM | Permalink

#7 – Heat wave in Calgary
Tuesday Jul 3 +26.1C by 3pm cannot sit on my west-facing deck
+20.5C by 8pm can now sit on my west-facing deck
+14.2C by midnight now need sleeping bag to sit on deck
Wednesday Jul 4 +10.2C by 5am we have warmer days in January!
These are Calgary Int Airport temp; my deck probably has a wider range as it is west facing and along a river valley and so gets the diurnal winds from the mountains at about 10pm.
I am simply amazed at the scientific basis of “adjusting” data by an order of magnitude greater than the alleged UHI effect.

13. Bob KC
Posted Jul 5, 2007 at 4:32 PM | Permalink

Looks like a nice, shady spot they picked out for their weather station. How do shade trees affect the site rating?

14. John Daragon
Posted Jul 5, 2007 at 4:49 PM | Permalink

The layout of instruments in the Central Park Weather Station photograph appear to correspond to the structure you can see on Google Earth at 40deg46’44.40″ N, 73deg58’09.34″ W. That’s just south of the Belvedere building.

15. aurbo
Posted Jul 5, 2007 at 5:05 PM | Permalink

Re #6:

As mentioned in a much earlier post, I grew up in the NYC area around Central Park during the 30s and 40s. I used to ice-skate on the pond adjacent to Belvedere Castle (Tower) on those few weeks in winter when it was frozen over. As coincidence would have it, for several years in the late 30s I lived at 300 Central Park West (AKA The Eldorado)! I was later a frequent visitor to the weather site. During that period, the conical Tower was truncated with weather instrumentation located on the uncovered crenellated roof, principally the wind instrumentation. A CRS-type shelter for temperatures was located on a grassy plot adjacent to the tower. The closest thing to urbanization was the traffic in the 81st/79th Street transverse which was located just to the south of the Castle. The transverse was in a street-wide canyon below grade level that shielded the traffic from the view of those enjoying the ambiance of the park.

I, too, compared the actual Central Park record with the USHCN reconstruction of the annual mean temperatures for Station #305801 (Central Park) and found just what Steve has illustrated above. I also looked at the USHCN Central Park monthly mean max and min temperatures and sure enough, as the Karl, Diaz et al paper described, most of the urban adjustment was to the minimum temperatures, with only slight differences in the max temps. This showed that in order to lower the mid-Century means by 3°C, it was necessary to lower the observed minima by nearly 6°C! To me this makes the USHCN adjustments, whether valid or otherwise, an exercise in analysis. It certainly can’t be legitimately characterized as data.

As for access to the original raw data, one (so far) indelible source can be found in the morgues of the various NY newspapers where the daily records for Central Park and the other long-term NY database, the City Office data can be found. Despite their totally different exposures, the City Office data tracks quite well with the Central Park data.

16. Anthony Watts
Posted Jul 5, 2007 at 5:38 PM | Permalink

Here we go, amazing what you can find on the net these days. This is from Live Maps with my annotation added:

17. Anthony Watts
Posted Jul 5, 2007 at 5:47 PM | Permalink

And here is the reverse view, looking north:

18. Anthony Watts
Posted Jul 5, 2007 at 5:56 PM | Permalink

RE3: I’m going to self correct and point out that Central Park is not a USHCN station, though looking at the data and history it probably should have been.

The GISS plot is available here

19. JG
Posted Jul 5, 2007 at 6:50 PM | Permalink

Re: #18

Anthony, you say Central Park is not a USHCN station? It shows as one of the stations in the USHCN station inventory and history files:

305801-04 40.78 -73.97 130 NY NEW YORK CENTRAL PARK

20. Anthony Watts
Posted Jul 5, 2007 at 7:02 PM | Permalink

RE19: Ok its been one of those days, I’m fighting a head cold, and the world looks foggy. I looked for “Central Park” in my own list and didn’t find it…look again and its there as NEW YORK CENTRAL PARK.

I think I’ll just step away from the computer a day or two until the fog clears from my head…I’ll also try not to operate any heavy machinery.

21. Steve McIntyre
Posted Jul 5, 2007 at 8:46 PM | Permalink

Update: I’ve been able to emulate the Karl “urban warming bias” adjustment very accurately for 1890-1990. If one reverse engineers this adjustment to calculate the New York City population used in the USHCN urban adjustment for 1990-2006 – the period of the dramatic increase noted by Joe D’Aleo, the results are, in Per’s words, gobsmacking, even by climate science standards. Maybe I’ll think differently tomorrow, but tonight they seem breath-taking. Please re-read the head post for the additions at the bottom.

22. Gary
Posted Jul 5, 2007 at 9:37 PM | Permalink

So the urban adjustment is a multiplier based on population data for the local area (city, town, county?) meaning that another source of error is the accuracy and applicability of the census data. What if the population of a town doubled over the decades but was centered five miles from the (still) rural station? Warming bias.

23. Steve McIntyre
Posted Jul 5, 2007 at 9:39 PM | Permalink

#22. Gary – you missed the point. They’ve done something very weird with the NYC population. It almost looks like they’ve got a programming error in which they convolve the data back to 1850.

24. bernie
Posted Jul 5, 2007 at 9:57 PM | Permalink

This is astounding. Could this be an accidental error? On the opposite proposition – that this adjustment during the 90s was carefully and consciously made – NOAA is in danger of a real scandal; this is pretty close to a smoking gun. Are there any other urban locations with similar types of arbitrary and unwarranted adjustments?

If auditors found this kind of error in an inventory valuation – a complete physical inventory would be called for.

25. steven mosher
Posted Jul 5, 2007 at 10:49 PM | Permalink

SteveM..

Risking much stupidity here.. I was wondering.
What happens to the errors and CI before and after adjustments?

Just looking at Orland the other day… The linear fit for 1900 to 2005 (raw) improved just a bit after the GISS homogeneity adjustment. Not by much, but as they adjusted the temp record they also did so in a way that improved the linear fit.

Here was the weird thing (from memory, but I will check): from 1900 to current, Gavin told me that the grid saw a .8C positive linear trend. So the grid is +.8C since 1900.

When I looked at the Orland adjustment, they adjusted 1900 by -1.1C and current day by -.1C.
The adjustment is fed in in that stairstep fashion. The adjustment improves the linear fit…
The fit still sucked. Made me think about doing these kinds of adjustments (ramping in a
linear signal) if there is autocorrelation in the underlying signal… hmm. Above my pay grade…

BUT, essentially, they put in a trend adjustment that “mirrors” the “signal” of the grid…
It’s like they nudge the stations to a trend line… like an annealing process.

Am I being clear? The grid shows a .8C positive linear trend for the century. When they adjusted
Orland, the adjustment insinuates this trend signal into the data… They decreased the temp of the site, but
the adjustment IMPOSES a trend in the data that nudges it to the “grid” trend.
They NUDGE it in the direction of the rest of the grid… by cooling
it more in the past and less in the present.

It might be an iterative process… adjust the site, measure the grid… tweak some more… tweak the sites
that have outlying trends by applying “adjustments”. Sometimes the adjustments are made in a straight
linear fashion (say 1900 to current); other times the adjustments are fed in in two slopes… one from
the record start to 1950 and the second adjustment from 1950 on…

Essentially, you use these adjustments to weasel data into some kind of “homogeneity”… then you
analyze error after this weaseling..

Might be interesting to look at slopes of adjustments from site to site.

So, it almost looks like adjustments could add “information” to the record, in a way that makes me a bit
squeamish…

Am I being stupid here?

26. Anthony Watts
Posted Jul 5, 2007 at 11:27 PM | Permalink

Well, after getting some non-drowsy Sudafed, I thought I’d just run the population numbers from the NCDC FTP file Steve linked to, along with Karl’s 1988 formula.

From the FTP site I got NYC’s population data by decade, first is station ID followed by 1900, 1910…2000

StationID:30580104 3437202 4766883 5620048 6930446 7454995 12911994 16174478 18071522 17412203 18087000

Plugged those numbers into a simple scientific calculator program that has a running tape; results below:

Adjustment (°C) = 1.8 * 10^-3 * population^0.45

1.8 * 10^-3 * 3437202^0.45
Ans = 1.572406318
1.8 * 10^-3 * 4766883^0.45
Ans = 1.821704901
1.8 * 10^-3 * 5620048^0.45
Ans = 1.961803551
1.8 * 10^-3 * 6930446^0.45
Ans = 2.155832615
1.8 * 10^-3 * 7454995^0.45
Ans = 2.22778777
1.8 * 10^-3 * 12911994^0.45
Ans = 2.852459567
1.8 * 10^-3 * 16174478^0.45
Ans = 3.156793169
1.8 * 10^-3 * 18071522^0.45
Ans = 3.318334283
1.8 * 10^-3 * 17412203^0.45
Ans = 3.263297587
1.8 * 10^-3 * 18087000^0.45
Ans = 3.31961293

Then I imported the Ans= values into an Excel spreadsheet and graphed them over the past century:

1980-2000 seems pretty flat for the population-related adjustment; it looks like about 0.1°C of variance.

Did I miss anything or is it the Sudafed talking? I agree with SteveM, we should all sleep on this.

27. Tom
Posted Jul 6, 2007 at 12:09 AM | Permalink

Creating hockey sticks… unbelievable.

28. Posted Jul 6, 2007 at 12:39 AM | Permalink

In my own personal experience, when data have been “adjusted” incorrectly, it has often been traced to an individual who was given the authority to make adjustments without any real investigation into them. In my case I would be talking about test data, but one case involved a person who made adjustments in order to make something look better than it was. They were eventually discovered, and there was a process in place to review the adjustments and the justification for them, but the panel never bothered; they simply trusted the individual.

I wouldn’t be surprised to find many adjustments were created by an individual who was trusted to make them and nobody ever questioned them.

29. Steve McIntyre
Posted Jul 6, 2007 at 4:29 AM | Permalink

Anthony, your non-drowsy wasn’t working. Also Excel is a crappy way to do statistical analysis since there’s far too much manual handling of data and you got bit by it here. If you look at your plot, you left out 1920 on the x-axis, which spread the 1990 value over to 2000. Otherwise your values correspond more or less to the values shown in the bottom panel of my graphic (which plots the negative of these values.)

Here’s how you do this in R.

url="ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/metrof_hybrid.dat"; usid=305801
widths0=c(6,2,rep(9,11))
pop=read.fwf(url,widths=widths0)  # read the fixed-width population file
names(pop)=c("usid","loc",seq(1890,1990,10))
x= pop[(pop[,1]==usid),3:13]
index=seq(1890,1990,10)
adj= 1.8e-3 * x^0.45  # Karl et al 1988 urban adjustment (deg C)

# 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990
#714 1.379504 1.589877 1.841946 1.983601 2.179786 2.252541 2.884154 3.191869 3.355205 3.299556 3.356498

For reference, the file formats are described in two readmes, which say:

The population metadata files are fixed-length ASCII files with the following format:

Column Description
1:8 USHCN Station ID
10:17 1890 Population
19:26 1900 Population
28:35 1910 Population
37:44 1920 Population
46:53 1930 Population
55:62 1940 Population
64:71 1950 Population
73:80 1960 Population
82:89 1970 Population
91:98 1980 Population
100:107 1990 Population
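As a sketch of how one might read that layout (in Python for illustration; `parse_population_line` is my own name, not part of any NOAA code, and the sample line below uses made-up values):

```python
# Parse one line of the fixed-width USHCN population metadata file
# described above: station ID in columns 1:8, then eleven 8-character
# population fields (1890-1990) starting at column 10, every 9 columns.

def parse_population_line(line):
    usid = line[0:8].strip()
    pops = {}
    for i, year in enumerate(range(1890, 2000, 10)):
        field = line[9 + 9 * i : 9 + 9 * i + 8].strip()
        pops[year] = int(field) if field else None  # blank field -> missing
    return usid, pops

# Illustrative line in the documented layout (values are made up)
line = "30580104 " + " ".join("%8d" % p for p in range(100, 1200, 100))
usid, pops = parse_population_line(line)
```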

For the reverse engineering, I calculated the USHCN tobs value minus the USHCN urban-adjusted value to get the size of the adjustment.

30. Steve McIntyre
Posted Jul 6, 2007 at 4:37 AM | Permalink

I’ve calculated the USHCN urban-adjusted New York City Central Park series on the basis that NYC population has not been reduced to 10% of 1990 levels (which would undoubtedly have gone a long way towards compliance with Kyoto) and posted it in the head post. I’ve not researched NYC population, but I think that reasonable people can agree that this is a more realistic estimate of NYC population than Tom Karl’s. Maybe Karl could call up Gavin Schmidt or Jim Hansen and ask them if they’ve noticed any abandonment of NYC.

31. DocMartyn
Posted Jul 6, 2007 at 5:51 AM | Permalink

I am sure that it would be possible to look at the records of the water and electrical suppliers of NYC to find out how much heat has been generated in NYC. The amount of electricity should track well with human activity that generates heat.

Another point I have made before is the trees in Central Park. Why not look at the tree rings? We have a temperature record and trees. Moreover, some have been transplanted from other locations. It should be very easy to calibrate tree rings vs. temp (or not, as the case might be).

32. Bob Koss
Posted Jul 6, 2007 at 6:22 AM | Permalink

Wasn’t there a SciFi movie called Escape From New York? Maybe it was actually a documentary.

33. Gary
Posted Jul 6, 2007 at 6:53 AM | Permalink

#23-Steve,
No, I caught the bit about defaulting back to the original population value when calculating the 1990-2000 adjustments. The lack of QA is astounding. But I was thinking about how this equation may be applied at all the other stations. What if their population counts are erroneous – not only with missing data, as here at NYC Central Park, but with populations that are not centered on the station? Several tenths of a degree of error could be introduced even if the UHI is remote enough from the station to have no effect.

34. Earle Williams
Posted Jul 6, 2007 at 6:55 AM | Permalink

It would be fascinating to grid the adjustments for the USHCN and other network stations and compare that to the purported anthropogenically created temperature increase. Perhaps it is too cumbersome to attempt on the scale of a global average, but certainly for some of the hot spots it is worth calculating what the mean (?) adjustment is for those gridcells.

I for one am curious what the gridded temperature anomaly history looks like compared to the gridded adjustments, on a yearly basis.

Re #32:
Bob Koss? I thought you were dead!
;-)

35. John Lang
Posted Jul 6, 2007 at 6:56 AM | Permalink

From now on, we should just use the RAW temperature data. When I say we, I mean Climate Audit and GISS, NCDC and Hadley.

We should just split the raw data into three streams, truly rural stations, stations with some urban development bias and stations with large urban development bias and go with that.

These “adjustments” might have been an altruistic attempt to remove bias at one time, but now they have gotten out-of-hand and must be completely full of errors. There is no rationale for the adjustments seen here.

36. JG
Posted Jul 6, 2007 at 7:28 AM | Permalink

Unless a drop in population results in the removal of roads, parking lots, buildings, etc., shouldn’t adjustments based on population always be clamped at the highest-measured population?
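JG’s suggestion amounts to running the adjustment on a running maximum of the census series; a sketch (mine, not anything in the USHCN code):

```python
# Clamp the population used in the urban adjustment at its running maximum,
# so that a later decline in census population never reduces the assumed
# urbanization (roads, parking lots and buildings don't disappear).

def clamped_populations(populations):
    out, peak = [], 0
    for p in populations:
        peak = max(peak, p)
        out.append(peak)
    return out

# Manhattan-style example: growth, then decline - the clamp holds the peak
print(clamped_populations([2050600, 2762522, 2284103, 1867312]))
```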

37. Hans Erren
Posted Jul 6, 2007 at 7:37 AM | Permalink

yup

38. Anthony Watts
Posted Jul 6, 2007 at 8:09 AM | Permalink

RE 26, 29 I fixed the Excel generated graphic, and re-uploaded it, Thanks for pointing out the error.

I’ve used Excel for years, but here’s a perfect impetus for me to switch to R. I agree, too much manual data handling.

FYI, as a personal exercise I took the raw GISS plot for NYC Central Park from their website and did a quick overlay of my Excel graph of Karl’s adjustment, with some graphic x-y tweaks to get them to line up, just to see if they correlate. They do, and pretty well.

But I’ll leave that to SteveM should he want to display it.

39. Anthony Watts
Posted Jul 6, 2007 at 8:13 AM | Permalink

RE35 John, part of the problem right now is that MMS doesn’t always accurately define rural stations…some have been overrun as we’ve seen in photos that have been gathered.

40. Paul Zrimsek
Posted Jul 6, 2007 at 8:43 AM | Permalink

What’s the big puzzle? Obviously urban heat islands are sinking below the rising oceans just like all the other islands.

41. beng
Posted Jul 6, 2007 at 9:29 AM | Permalink

W/this newest revelation, I’m beginning to seriously doubt current global temps are any higher now than the 30s/40s, maybe not even as high. Summer record highs in that period were considerably higher than what has occurred since, at least in much of the US.

Science has been abandoned w/anything concerning AGW. The “consensus” finished results (lead by the IPCC) are mostly artwork. Marx would be proud.

42. Bob Meyer
Posted Jul 6, 2007 at 9:44 AM | Permalink

Re #36

I’m not sure that clamping the adjustments at the time of maximum population would work, particularly in NYC. The population of Manhattan (which I argue is more important than the population of the entire city with respect to the Central Park temperature station) has declined substantially since 1910. In 1910 there were still a lot of horse-drawn vehicles and virtually no power consumption at all by modern standards.

Also, if you look at Manhattan from an aerial view you can see that most of the surfaces struck by sunlight are the tops and sides of buildings. How does this affect the “ground temperature” when most of the “ground” is at least ten stories high and the effective surface area is much greater than an equivalent patch of flat land?

Most of the buildings in Manhattan are much higher than they were a hundred years ago when the population was 50% greater than it is now. The street that I grew up on has only one building from before 1950 left on it. The new buildings average about 15 stories while the old ones averaged only about five.

Comparing Manhattan today with the Manhattan of one hundred years ago isn’t even comparing apples and oranges. It’s more like comparing apples and aardvarks.

Finally, I object to the entire philosophy of adjusting the temperature data. The basic idea behind it is that there is a Platonic “true” temperature that must be inferred by systematically eliminating “unnatural” influences that consist entirely of human activities.

We see Gavin Schmidt insist that his models are pure and perfect because no real world inputs were used in producing them. His models are derived entirely from first principles and if their predictions are not borne out by subsequent temperature measurements then the temperature measurements are wrong. He, and apparently most climate modelers, seems to sincerely believe that all “corrections” to historical data must necessarily bring that data closer to the models. If that is not a formula for bias when adjusting data then nothing is.

There is no “real” temperature that actual temperature readings can only approximate in the way that the shadows on the wall of Plato’s cave approximate the true Forms. There are only the actual recorded temperatures, and if the models don’t agree with them then the models don’t agree.

43. Kay Chua
Posted Jul 6, 2007 at 11:30 AM | Permalink

#42

Bob, thanks for the post. I had not realized that the population of Manhattan has declined – presumably residential has been replaced by commercial? There was a time I would have given almost anything to live there.

Like you I have a problem with the UHI data adjustments. The adjustments result in a hypothetical data set (assuming that location was unaffected by UHI – something that does not exist) that is untestable in my opinion. Even if the adjustments appear reasonable, how would we know that they are valid for a particular location?

Kay

44. aurbo
Posted Jul 6, 2007 at 11:33 AM | Permalink

It’s a Chomskian world we live in. Temperatures no longer refer to the actual measurements of molecular kinetic energy we sense as hot or cold, but rather a fictitious number designed to score geopolitical points.
It appears as though Karl & Co are in direct competition with the Hilborn Company of Montreal*.

Data is created by direct observation and recording of defined physical parameters. Adjusting the data to account for such things as TOB, in order to normalize observations from a multiplicity of sources, is an analytical step that creates para-data which may or may not be an acceptable substitute for the original.

In the case of NYC, the TOB corrections can be evaluated by comparing the averages of 24 hourly observations beginning and ending at various hours of the 24-hour day. The hourly data for NYC Central Park is available from several sources including NCDC, local newspapers, etc., if anyone wants to go to the trouble of doing this. It is my view that TOB corrections based entirely on the actual observations will have no effect on the year-to-year trend of the climate series, and that is really what legitimate GW or AGW investigators seek to evaluate.
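The comparison aurbo describes can be sketched with synthetic hourly data. The diurnal cycle and trend below are invented for illustration, not real Central Park observations; the point is that a true 24-hour average is nearly insensitive to where the observation window starts:

```python
import math

# Synthetic hourly temperatures for 60 days: a diurnal cycle plus a slow warming trend.
hours = 24 * 60
temps = [15.0 + 8.0 * math.sin(2 * math.pi * (h % 24 - 9) / 24) + 0.001 * h
         for h in range(hours)]

def daily_means(series, start_hour):
    """Average each 24-hour window that begins (and ends) at start_hour."""
    days = []
    for d in range(len(series) // 24 - 1):
        window = series[d * 24 + start_hour : d * 24 + start_hour + 24]
        days.append(sum(window) / 24)
    return days

# Long-term mean computed with observation windows starting at different hours.
for start in (0, 7, 17):  # midnight, morning, afternoon "observation times"
    dm = daily_means(temps, start)
    print(start, round(sum(dm) / len(dm), 3))
```

The three means come out nearly identical, because the diurnal cycle sums to zero over any full 24-hour window. TOB is a problem for min/max thermometers that reset at observation time, not for genuine 24-hour averages.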

The same can be said for the unaltered data, provided there have been no significant changes in the observation site or in the accuracy of the temperature instrumentation vis-à-vis representing free-air temperatures. The site’s meta-data should provide information on data discontinuities arising from location changes and from the shift from mercury-in-glass thermometers to thermistors or other electronic devices incorporated in the MMTS instrumentation. The NIST (formerly the NBS) still regards calibrated mercury-in-glass thermometers as industry standards. Thermistors have a habit of drifting with time, and I have personal experience of such occurrences with the Central Park observations when the mercury-in-glass thermometers were replaced by thermistors to facilitate remoting the data to the NWS office, after the Belvedere Castle site was automated and observing personnel were no longer on the premises.

The USHCN Version1 compilation with urban adjustments for NYC Central Park is at best an embarrassment.

*The original manufacturer (in 1928) of the modern two-piece hockey stick and I believe still in business today. Come to think of it, the NYC adjusted data still resembles a two-piece hockey stick.

45. Bob Meyer
Posted Jul 6, 2007 at 12:53 PM | Permalink

Re 44

Thermistor drift is quite dependent upon the kind of thermistor. There is a NIST paper that showed that certain glass encased thermistors exhibited drifts of less than .001 degrees C per year while other kinds showed very large drifts. My personal experience with thermistors is that they can show drifts of several hundred microdegrees per hour after exposure to high temperatures and that this drift decays exponentially over a period of weeks. NIST had similar results with disc type thermistors.

I didn’t know that the MMTS heads used thermistors to measure temperature. That could be either good or bad depending upon the kind of thermistor and the mass that it is attached to. Small bead thermistors have time constants on the order of seconds in free air. This could cause short-term transients from things like passing cars or people lighting cigarettes to produce spikes in temperature readings. It all depends on how the instrumentation is set up, what kind of thermal mass is attached to the thermistor and what kind of averaging, if any, is used by the detection circuitry.
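The time-constant point can be illustrated with a simple first-order sensor model. The numbers here are invented; a real MMTS head's response will differ, but the qualitative behavior is the same:

```python
def sensor_response(air_temps, dt, tau):
    """First-order (exponential lag) response of a sensor with time constant tau."""
    out = [air_temps[0]]
    alpha = dt / tau  # forward-Euler step; requires dt <= tau for stability
    for t in air_temps[1:]:
        out.append(out[-1] + alpha * (t - out[-1]))
    return out

dt = 1.0                    # sample interval, seconds
air = [20.0] * 600          # steady 20 C air temperature...
for i in range(300, 330):   # ...with a 30-second, 5-degree transient (a passing heat source)
    air[i] = 25.0

fast = sensor_response(air, dt, tau=2.0)    # small bead thermistor in free air
slow = sensor_response(air, dt, tau=60.0)   # sensor coupled to a large thermal mass

# The fast sensor reports nearly the full spike; the slow one barely notices it.
print(round(max(fast), 2), round(max(slow), 2))
```

A sensor with a seconds-scale time constant passes the transient through almost unattenuated, which is why the thermal mass and any averaging in the detection circuitry matter so much.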

I am currently doing millidegree temperature control of navigation instruments and I am finding more error sources than I ever imagined.

46. steven mosher
Posted Jul 6, 2007 at 1:47 PM | Permalink

SteveM.

Interesting discussion at Columbia. Background: Columbia hosts a GCM based on
ModelE. I was just hunting around looking at time series and climate..

Stumbled on this.

http://iridl.ldeo.columbia.edu/dochelp/StatTutorial/Homogeneity/

I would expect, of course, a teleconnection betwixt Goddard and Columbia..

More later

47. crosspatch
Posted Jul 6, 2007 at 2:06 PM | Permalink

“I had not realized that the population of Manhattan has declined ”

Be careful. Just because the “population has declined” doesn’t mean there are fewer people there. As residential property is turned into commercial property, fewer people might be *sleeping* in Manhattan but I would venture a guess that the population at 12 noon is stable or increasing. People are just commuting in for the day and going off the island after work.

48. crosspatch
Posted Jul 6, 2007 at 2:28 PM | Permalink

Just found an article published by New York Magazine that lists the daytime population of Manhattan as 2,874,003 and a “residential” population of 1,562,723.

Link to article apparently published in August 2006 according to the “Issue Date” html meta tag in the document source.

49. Joe Ellebracht
Posted Jul 6, 2007 at 2:30 PM | Permalink

Well, OK, there is a version at NOAA of Central Park temperatures that is wrong. Other versions are not quite so wrong, apparently. What is this wrongest version used for? If nothing, then it doesn’t matter. If it is material for the periodic “warmest June since moon captured” press release, then it is more serious.

50. Joe Ellebracht
Posted Jul 6, 2007 at 2:38 PM | Permalink

If the earlier adjustments were for the population of the whole of NY City, then the current adjustment should also be on this basis, so Manhattan would be the wrong population basis to consider. A switch between these two bases might be the source of the error.

51. steven mosher
Posted Jul 6, 2007 at 2:48 PM | Permalink

Not sure if I posted this or if it hit the page..

One thing that seems odd is the GISS homogeneity adjustments..
I started searching a number of things..

Found this:

http://iridl.ldeo.columbia.edu/dochelp/StatTutorial/Homogeneity/

Now, Columbia also hosts a version of Goddard’s ModelE. So perhaps
to Columbia

52. Steve McIntyre
Posted Jul 6, 2007 at 2:48 PM | Permalink

#49. It’s hard to say. First, I’d like to know whether there is a computer programming error that affects other sites and how any bad data got inserted into the calculation. I’m not sanguine that this information will be made available. Also, while I’m sure that NOAA and others will say that the version is not used in the larger-scale indices, in a proper audit, one would want to confirm this. Again I doubt that this will be possible.

53. jae
Posted Jul 6, 2007 at 3:47 PM | Permalink

Wonder which dataset these folks used for Central Park.

54. aurbo
Posted Jul 6, 2007 at 3:50 PM | Permalink

Re #49:

Is a bank robber off the hook if he says, after he’s been caught, “I wasn’t going to spend the money anyway”?

Maybe that’s a bad analogy, but this absurd urban adjustment may easily have been used, either by accident or design, to resurrect the hockey stick.

It will be interesting to see if similar goofs were perpetrated on other members of the USHCN Version 1 database.

55. crosspatch
Posted Jul 6, 2007 at 4:43 PM | Permalink

Well, whatever the reasoning for the adjustments, I see this as absolute proof of humans causing global warming. Al might be right in an ironic sort of way.

56. aurbo
Posted Jul 6, 2007 at 4:47 PM | Permalink

Re #53:

In the linked CCSR report there is an interesting satellite derived IR view of Central Park. The picture shows the cooler temps in the park extending eastward to 5th Avenue while Central Park West on the other side of the park is warmer. This may be a combination of a light southwesterly wind and the morning sun angle which would have the shading from buildings on the east side of 5th avenue delaying the sun’s impact on surface temperatures there. The cool spots in the park are the reservoir in the northern half of the park and the lake in the southern half. The warm spot in the east 80s is the Metropolitan Museum of Art. In the area of Belvedere Castle the temps seem to represent the average for the park as a whole (sans the water areas).

57. Thomas T
Posted Jul 6, 2007 at 8:33 PM | Permalink

I remember going to Belvedere Castle in the sixties. I remember the weather station as being on top of the castle. I would think the census bureau would be a better source for population data than Wikipedia or New York Magazine. Should all of Manhattan even be included? How much does the north part of Harlem affect the temperature in mid Central Park? Does the lower end of Manhattan have a measurable effect?

58. Posted Jul 6, 2007 at 11:32 PM | Permalink

“I would think the census bureau would be a better source for population data than Wikipedia or New York Magazine.”

I would think census figures would be the worst possible source because they only count residents, not commuters, business travelers, and tourists who fill the commercial buildings during the day and hotels at night. If you rely on census figures for areas such as Manhattan, Washington DC, and San Francisco, you seriously undercount the number of people in the city on an average day.

59. Joe Ellebracht
Posted Jul 7, 2007 at 5:34 AM | Permalink

jae:
re #53
Interesting measurements of albedos of various urban surfaces.

60. Bob Meyer
Posted Jul 7, 2007 at 5:44 AM | Permalink

On this one blog thread we just listed a slew of reasons why there shouldn’t be any adjustments to historical data. Everything that we listed might be the dominant influence on any given day (working population vs. resident population, Manhattan population vs. total city population, power consumption, number and size of structures, winds from the Hudson vs. winds from Long Island, proximity of the reservoir and other bodies of water, change in location from the top of the castle to a partially shaded area). How can any simple formula hope to account for all of these influences?

The information just isn’t there to unravel what effect dominates at any given time. It becomes a guessing game with the final guess being justified by “consensus” which means, more often than not, the lowest common denominator.

This game may be okay for the AGW people who know a priori that CO2 is the sole cause of all warming. They can just pick any influence that moves the data into closer conformance with their theory and then declare that influence to be the dominant one. It’s a lot harder to actually quantify these effects, especially since there are no independent sources of measurements that are free from particular influences.

It seems to me that the choice is to accept the raw data as it is or else throw out all of the data. Everything else is just filtering the data through a pigeon hole designed to make the data conform to some theory or other.

61. Anthony Watts
Posted Jul 7, 2007 at 10:39 AM | Permalink

A thought occurred to me about the reverse engineered population graph SteveM created.

Census data is decadal. Yet the USHCN data goes to 2006. Could it simply be that since no new census data exists for 2006, they simply left a data field blank and the program assumed “blank data = 0” or maybe “blank data = starting condition magnitude”?

I’ve seen similar sorts of things happen in C++ code. A variable that suddenly has a null value gets reinitialized to the starting value. Sometimes the programmer doesn’t even know about it, because the operation may occur in some third-party plug-in or DLL used for data processing, where the programmer doesn’t know all the assumptions another programmer made.

As SteveM pointed out in my drowsy Excel exercise earlier, manual data handling can be fraught with errors. So can manual programming.

Such an oversight, leaving a data field blank, might explain how Steve’s reverse-engineered population graph could have a near-zero value for 2006.
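The hypothesized failure mode is easy to demonstrate. The adjustment formula and its constants below are a toy stand-in, not the actual USHCN code, and the 2006 blank is Anthony's hypothesis, not a confirmed fact:

```python
def urban_adjustment(population, coeff=1.8e-3, exponent=0.45):
    """Toy population-based urban correction in deg F; the power-law form and
    constants are illustrative assumptions, not the actual USHCN algorithm."""
    return -coeff * population ** exponent

# Decadal census figures for New York City, with 2006 left blank as hypothesized.
populations = {1980: 7_071_639, 1990: 7_322_564, 2000: 8_008_278, 2006: None}

for year, pop in populations.items():
    pop = pop if pop is not None else 0   # the suspected "blank data = 0" default
    print(year, round(urban_adjustment(pop), 2))
```

With the blank treated as zero population, the urban correction for 2006 silently vanishes while the earlier decades keep adjustments of a couple of degrees, which is exactly the shape of the discontinuity being discussed.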

Clearly more investigation is needed.

62. Geoff Olynyk
Posted Jul 7, 2007 at 12:17 PM | Permalink

#61 Anthony Watts

That’s why it’s important that an educated human look over any results a computer produces to make sure that the results make sense.

I really hope there’s a good reason why the urban adjustment was dropped to zero over the last few years. Because if there’s just a trust that the adjustment a section of code produces is correct and nobody actually verifies that the results make physical sense, that’s pretty discouraging.

63. BarryW
Posted Jul 7, 2007 at 1:04 PM | Permalink

#62

More to the point, the results were what they expected, since they “knew” the data would show warming, so they wouldn’t check any further.

64. Posted Jul 7, 2007 at 1:11 PM | Permalink

“if there’s just a trust that the adjustment a section of code produces is correct and nobody actually verifies that the results make physical sense, that’s pretty discouraging.”

That sort of thing happens more often than people might think and I see it fairly often in my work. A mechanism is developed and tested, passes the tests, is put into production and is just “trusted” to do the right thing going forward. And if the error produced is in the direction that was expected (though of an incorrect magnitude) it can be seen as validation of the expected trend rather than being seen as an error. It isn’t until later, in this case when 2010 census data becomes available and the previous adjustments are themselves adjusted, that anyone wonders why the trend they were using as a basis for their conclusions has disappeared and looks into what happened.

In the case of climate data, it looks to me like there is a culture of covering up errors and not talking about them rather than exposing them and revisiting past conclusions and that is the discouraging part for me.

65. Jeff C.
Posted Jul 7, 2007 at 4:13 PM | Permalink

#62 and 63

I suspect you are correct and that the error is the result of sloppy application of the correction and isn’t intentional. It is interesting how these “errors” almost always seem to be in the same direction. More than likely, erroneous data showing recent large cooling would be immediately challenged and corrected by the data keepers. However, erroneous data showing warming reinforces the data keepers’ prejudices and expectations, so it is never reviewed in depth. It is too good to check, so to speak. Human nature, but lousy science.

66. Douglas Hoyt
Posted Jul 7, 2007 at 6:19 PM | Permalink

Re #61

Could it simply be that since no new census data exists for 2006, that they simply left a data field blank and the program assumed “blank data = 0” or maybe “blank data = starting condition magnitude”?

If they did this for NYC, it is likely they have done it for every station worldwide. It would cause an increase in global mean temperature and may explain why GISS anomalies are higher than CRU anomalies in recent years.

Also in the US, MMTS shelters are replacing CRS shelters and in most cases the new sensors are closer to buildings than the old sensors (due to limitations on cable length). The net result should be an increase in temperatures because buildings are generally warmer than their surroundings, particularly at night and in winter. It is another net warming bias in the entire network.

67. Posted Jul 7, 2007 at 6:55 PM | Permalink

“The net result should be an increase in temperatures because buildings are generally warmer than their surroundings, particularly at night and in winter. It is another net warming bias in the entire network”

I would agree that this could be the case but I would suspect it would be a step change and not a change in trend over a long period of time. However, when data from large numbers of stations are averaged, individual step changes done over a long period of time *can* make the resulting average appear to trend up until all stations are replaced. I would suspect the same result from repainting of Stevenson screens with latex paint over the years. Individual stations see a step change as they are repainted and the overall average of a large sample of stations would see a rising trend.

68. Jim Clarke
Posted Jul 7, 2007 at 9:47 PM | Permalink

Population is a proxy used to determine UHI. The real variable that causes UHI is the land-use change that usually comes with increasing population, but the population itself is not necessary. If no one ever lived or worked in Manhattan, but all the buildings and roads were the same, the UHI would be much the same.

I have not been to Manhattan for decades, but I don’t think it could be any more ‘paved’ than it was 30 years ago. Therefore, the UHI adjustment should be the same for 2000 as it was for 1970.

If the climate community buys these ‘adjustments’, then perhaps they would be interested in buying a bridge to Brooklyn that I have for sale!

69. Tom T
Posted Jul 7, 2007 at 10:50 PM | Permalink

Re: #58 and #61
Having been a census taker, I can tell you that census data is not just decadal, and they do count commuters and business travelers and tourists. The amount of data that the census bureau collects is nothing short of phenomenal. They do a simple count, yes, but they do so much more. They do scientific surveys. They ask how much income you make, how far you commute, how many people you commute with, where you travel to, how much you spend, how you heat your home, how much that costs, etc., etc.

They do this much more often than every 10 years. There are some people who are asked the questions I stated, and many more, every six months. They take all this raw data and put it into databases. They know the population and demographics of every zip code in the country, probably within 2% at just about any moment in time. Where do you think Wikipedia or New York Magazine gets their data?
The question is: how to get the data in a useful form. The bigger question still is: what to do with it? I still don’t see a logical reason to use the population of Manhattan vs. that of all of NYC or part of New Jersey.

I really think that what Steve and Anthony are trying to do makes a lot of sense. I just hope there is a logical justification for how the data is handled.
I am tending to agree with #60. AGW people seem to have an agenda. Our goal should be the truth. Unless there is a compelling scientific reason to adjust the readings they ought to be accepted.

70. Posted Jul 8, 2007 at 5:56 AM | Permalink

All aspects of each and every piece of software used in all processes and procedures associated with any and all scientific, engineering, and technology projects that affect the health and safety of the public shall be subject to Independent Verification, Validation, and Qualification under approved and audited Quality Assurance procedures. Coding Verification, numerical solution methods Verification, input data Verification, calculation Verification, Validation of the mathematical models of physical phenomena and processes, user and application Qualification, … All independent of the developers of the software.

Peer review has never truly been applied to any publication for which these processes and procedures were not applied to the computer software that is the basis of the publication.

71. Bob Meyer
Posted Jul 8, 2007 at 8:44 AM | Permalink

Re #68

Jim Clarke said:

If no one ever lived or worked in Manhattan, but all the buildings and roads were the same, the UHI would be much the same.

I doubt this. The average residential household in NYC consumes 4696 kw-hrs per year. If you assume that an island with 1.5 million people has an average household size of five people then that works out to 300,000 households. The island has an area of 22.6 sq miles. Do the math and it works out to about 2.75 watts/sq meter. That is only residential electrical usage. If you add businesses plus heating oil plus gasoline plus natural gas usage then the figure would be substantially higher.

In 2001 Hansen estimated the increase in climate forcing due to GHG to be about 5 watts/sq meter per century (he added that this had slowed down after 1980 to 3 watts/sq meter). Even if we use Hansen’s high figure, which I suspect is nonsense, the electrical power consumption of NYC alone would equal half a century’s worth of GHG heating.

An average adult human body produces about 100 watts of power. Multiply that by 1.5 million and spread that out over 22.6 square miles and that works out to over 2.5 watts/sq meter. So the warm bodies in New York produce almost as much heat as the entire residential electrical consumption.

People account for as much heat in New York as the projected effects of increased GHG.
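The arithmetic above can be reproduced directly, using only the figures quoted in this comment (household consumption, household count, island area, and 100 W per person):

```python
# Residential electricity and body heat spread over Manhattan's area, in W/m^2.
KWH_PER_YEAR = 4696          # average NYC residential household consumption, kWh/yr
HOUSEHOLDS = 300_000         # 1.5 million people at 5 per household
AREA_M2 = 22.6 * 2.59e6      # 22.6 square miles converted to square metres

# Convert annual kWh to average continuous watts (8760 hours per year).
electrical_w = KWH_PER_YEAR * 1000 / 8760 * HOUSEHOLDS
print(round(electrical_w / AREA_M2, 2))   # ~2.75 W/m^2

body_heat_w = 100 * 1_500_000             # ~100 W of metabolic heat per person
print(round(body_heat_w / AREA_M2, 2))    # ~2.56 W/m^2
```

Both figures match the comment's numbers, so the comparison with Hansen's forcing estimate stands or falls on the input assumptions (household size, per-household consumption), not on the arithmetic.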

References:

NYC power consumption

Hansen, “Trends of measured climate forcings” Proceedings of the National Academy of Sciences December 18, 2001 vol. 98 no. 26 14778-14783

72. Jeff C
Posted Jul 9, 2007 at 12:43 AM | Permalink

Looks to me like the issue raised by Steve isn’t the only problem with the UHI correction. I’ve been poking through some of the other files in the same folder where the climvis folks pointed (http://www1.ncdc.noaa.gov/pub/data/ushcn/). The population tables used in calculating the UHI correction are interesting. Pasadena, CA has a 1990 population listed at 131,591, resulting in a UHI correction of -0.36 deg F using Karl’s formula (I verified the magnitude using the unadjusted/adjusted data). New York has a 1990 population of 18,087,000, resulting in a UHI correction of -3.31 deg F in 1990. While the 131K population number is probably correct for the City of Pasadena, it lies right in the middle of a metro area with a population of 12 million. Shouldn’t its correction be similar to New York’s? Pasadena is not an isolated town. Mesa, AZ is similar: the population is listed as 288K, yet it lies in the Phoenix metro area with a population of close to 2 million. I’ve done quick spot checks on others and seen similar issues. Looks to me like the UHI correction for any big-city suburb is understated significantly. The methodology here does not look well thought out.
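As a check, the two corrections quoted above are consistent with a single power law of the form bias = -k * population**0.45. An exponent of that kind appears in Karl's urban-adjustment regressions, but treat the exact form here as an assumption fitted to these two points:

```python
# Fit k from the Pasadena point, then see whether the same power law
# (assumed form: bias = -k * population**0.45) predicts the New York correction.
pasadena = (131_591, -0.36)       # (1990 population, UHI correction in deg F)
new_york = (18_087_000, -3.31)

k = -pasadena[1] / pasadena[0] ** 0.45
predicted_ny = -k * new_york[0] ** 0.45
print(round(predicted_ny, 2))     # close to the -3.31 deg F quoted above
```

The predicted New York correction lands within about 0.01 deg F of the quoted value, which supports the reading that a single population power law is being applied, with the listed city population (rather than the metro population) as input for places like Pasadena and Mesa.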

73. MarkW
Posted Jul 9, 2007 at 5:20 AM | Permalink

I would guess that the average household size in NYC is a lot closer to 3 than it is to 5.

Posted Jul 9, 2007 at 12:12 PM | Permalink

RE: #16 & 17 – I was just there last week, albeit sans camera :(

In any case, the photos don’t show it, but that is about 500 feet max from Central Park West. Prevailing winds are bringing thermal flux from the Upper West side right in there most of the time. During the rare Easterly outbreak, the massive thermal plume from the Met (think of all those HVAC systems they’ve got) will be heading West. Who knows how far that extends.

The neighborhoods of the Upper West Side have changed more than the Upper East Side. The Upper West Side used to be quite middle class, but is now highly unaffordable. Houses have gotten split into trendy flats, and lots of new commercial businesses have gone in. At the south end of the park, especially around Columbus Circle, high rises have replaced the brownstones that used to be there. Change has occurred and I’d have to say the changes mostly would result in more energy dissipation. Add to that the general increase in per capita dissipation since 1950, as we became more gadgetized and air-conditioned.

Posted Jul 9, 2007 at 12:20 PM | Permalink

RE: #56 – The Upper East Side is very anti-commercial – you need to get to Lexington Ave to break out of the old-money residential stuff. The Upper West Side is more welcoming of mixed zoning; there the commercial stuff is only one block west of the park. Also, density in the West has gone up as it got yuppified. The Met definitely outputs lots of heat.

76. Bob Meyer
Posted Jul 9, 2007 at 12:53 PM | Permalink

Re #73

MarkW

I picked a number for average household size that was intended to produce the smallest reasonable number of households so that the total electrical consumption would be on the low side. I haven’t lived in NYC for thirty five years so I don’t really know the size of households anymore. I tend to doubt that anyone who wants to raise children would live in Manhattan.

Posted Jul 9, 2007 at 2:39 PM | Permalink

RE: #76 – but a counterpoint would be that places that used to be homes of individual families have been split into flats. So the overall density has likely increased. Then there are all those high rise condo places that have gone in over the past 30 years. 30 years ago, no one wanted to live anywhere in NYC, except Staten Island, now, yupsters are fighting each other over studios and one bedroom places. The place has really gentrified and become a typical “urban gold coast” type of place.

Posted Jul 9, 2007 at 2:50 PM | Permalink

RE: #72 – Not only that, but the megalopolis (LA) that Pasadena is part of experienced wild growth since WW2, whereas growth in the NY-NE NJ one has been steadier over time. So, arguably, Pasadena has been spiked even more by UHI than NYC. On a related note, I recently had yet another night-time return to SFO from the Eastern US. There are patterns of urban light pollution that you come to recognize after you have flown a lot. There is a certain pattern from a megalopolis, such as, for example, Chicagoland, or, increasingly, places like Denver and Las Vegas. When one is equidistant from Vegas and the populated part of California, the light pollution pattern of the Bay Area, the San Joaquin Valley and SoCal looks just like Vegas does, albeit taking up a larger area. In other words, the Bay Area, San Joaquin and SoCal are on the verge of becoming one big urban-suburban-exurban blob. Now that is some serious UHI!

79. DeWitt Payne
Posted Jul 10, 2007 at 7:34 PM | Permalink

Re: #44

The NIST (formerly the NBS) still regards calibrated mercury-in-glass thermometers as industry standards.

While the NIST will calibrate mercury-in-glass thermometers, they are secondary standards at best. The primary standard for temperature measurement and for calibration of secondary measurement devices (actually for interpolation between fixed reference points like the triple point of water and the freezing point of pure zinc) in the range we experience is the standard platinum resistance thermometer; the reference points and measurement methods are specified in ITS-90. While a thermistor will give you great resolution, a thermocouple has lower thermal mass if you want rapid response time, as in an instant-read digital kitchen thermometer, and a PT100 resistance thermometer is more stable.

80. Aurbo
Posted Jul 11, 2007 at 8:47 AM | Permalink

#79,

Thanks for the info regarding NIST standards. Today’s temperature instrumentation must be capable of resolving temps to 0.001°F or less, well outside the capabilities of a mercury-in-glass instrument. But that wasn’t my point. Climatological data is rarely resolved to less than 0.1°F, well within the stability of calibrated mercury thermometers (if not within the capacity to read the instrument to that resolution). So, suggestions made in the past about the lack of the necessary precision of mercury-in-glass thermometers to record climate data accurately are utter nonsense.

Many good posts above re my home town (NYC). In uptown Manhattan buildings surrounding Central Park are now probably over 90% equipped with air conditioners. Back in the early 1940s and earlier, the only places one could find air conditioning on hot summer days were in department stores and movie theaters. So, technology, mostly that aspect of technology that consumes energy, is probably within an order of magnitude as important to UHI as raw population statistics.

Now for the gobsmacker of the week: I compared the NYC Central Park USHCN Version1 data with the just released Version2 data. Whereas Version1 lowered the NYC raw annually averaged temperature data by anywhere from 5°F to nearly 7°F during the 1960-1980 time frame, Version2 raises the raw temperatures by 1.4°F to 2.5°F with a fairly steady rise through the entire period of record.

Think about that. The difference between the average annual temperatures for NYC Central Park between USHCN Version1 and Version2 for some years is as much as 8°F! This is what NCDC describes as a high quality data base! Whatever the reason for this outrageous discrepancy, notwithstanding the possibility of it being a result of “programming errors” as suggested in some prior posts, the release of this data as benchmark historical climate information is inexcusable. Since NCDC is primarily responsible for maintaining National meteorological data, this sure smacks of major mismanagement.

Posted Jul 11, 2007 at 9:14 AM | Permalink

RE: #80 – 1950: The typical Upper West Side household was a middle-class family with a single-income union job, à la “The Honeymooners.” Definitely no A/C. Probably no TV. The only certain appliances would be the gas stove and the fridge. Maybe a washer, if there was a hookup for it. No dryer. Lighting would be either one ceiling-mounted two-socket lamp or a couple of table lamps. Etc. 2007: Investment bankers, architects, boutique owners. A/C is a must, the preference being central, failing that a window unit in every windowed room. High-end Bosch (or equivalent) stainless steel fridge, dishwasher, sink unit. Professional restaurant-grade cooktop and oven, microwave and overhead exhaust hood. Track lighting plus many lamps. The lighting is like modern shops; it noticeably heats the interior. Multiple PCs, big-screen TV, audiophile stereo system. Etc.

82. JerryB
Posted Jul 11, 2007 at 10:41 AM | Permalink

Re #80,

Aurbo,

“… I compared the NYC Central Park USHCN Version1 data with the just released
Version2 data….”

Where did you find USHCN V2 data? I do not see a link to it
at http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ .

83. aurbo
Posted Jul 11, 2007 at 3:23 PM | Permalink

Re #82;

They don’t make it easy or obvious.

Try here:

http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php

for the home page; then click on the ASCII Files (FTP) link on the upper left. You’ll get an FTP directory for a series of FTP files. The one I used was the v2.mean.Z [raw] and the v2.mean.adj [adjusted] zipped files. These are 11.6 mb and 8.42 mb Global files respectively. The data seems to be in a beta test mode at this time as it is constantly being updated.

This version of the GHCN data uses the international WMO station numbers, in which NCDC assigns the nearest WMO station number to nearby COOP stations where necessary. As a result there are 5 stations in the database using the NY WMO station number, which is 72503 (LaGuardia Airport). These stations are delineated by appending an additional 4-digit number to the WMO number. The country code for the US is 425. Putting this all together, NYC Central Park is listed under station number 425725030010 and contains the raw monthly mean temp data from Oct 1821 through March 2006. The v2 adjusted data runs from Jan 1835 through March 2006. Read the v2.temperature.readme text file for the details.
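The identifier structure described above can be unpacked mechanically. The field widths below follow aurbo's description (3-digit country code, 5-digit WMO number, trailing modifier), so treat them as his reading of the format rather than the official spec:

```python
# Split the GHCN v2 station identifier into its component fields,
# using the layout described in the comment above.
station_id = "425725030010"

country = station_id[:3]       # 425 = United States
wmo = station_id[3:8]          # 72503 = LaGuardia Airport
modifier = station_id[8:]      # distinguishes nearby coop stations sharing the WMO number

print(country, wmo, modifier)  # 425 72503 0010
```

This makes it easy to pull all five stations filed under LaGuardia's WMO number out of the v2.mean file by matching on the first eight digits.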

In the raw (and adjusted) data files, only one month is missing in this long series; Feb 1991. I vaguely recall that there was a computer outage in the data stream around that time and the data may not have been received in Asheville in real time. However, in the NCDC published LCD compilation for NYC CP, there is an entry of 40.0°F for that month (Feb 1991) which I entered into the raw file and I used the adjusted data for around that period to estimate an adjusted temperature of 40.4°F or 4.7°C.

I hope this is helpful to you.

84. aurbo
Posted Jul 11, 2007 at 4:43 PM | Permalink

Re #s 80, 82 and 83:

In looking over my comparison between versions 1 and 2 of the HCN databases for NYC (ref #80 above), I see I may be comparing apples to oranges. The Version1 data I examined was from the USHCN Version1 database. The Version2 data I used comes from the GHCN Version2 database. So in my #83, I may not have answered #82’s question accurately, as I have not yet seen the database which is to be referred to as USHCN Version2. In other words, the USHCN Version2 compilation and adjustments for NYC Central Park may be quite different from their Version1 and from what I had ascribed to USHCN Version2 in #80.

Notwithstanding all of that, both USHCN and GHCN carry the same raw data figures for NYC CP, and both of these data fields agree with the historic record I have, which was produced as these data were being created. So, that does not change the fact that there is up to a roughly 8°F difference in the annual means in the two adjusted databases…USHCN Version1 adjusted and GHCN Version2 adjusted, both assigned to the NYC Central Park source.

I’ll try to get this sorted out soon.

85. aurbo
Posted Jul 12, 2007 at 8:49 PM | Permalink

Re my #84;

After looking into the nomenclature of the various HCN databases, here’s what I’ve found.

There are two HCN databases handled at NCDC. The GHCN database is a global database; the current version is Version2. The domestic database, the USHCN, is a subset of the GHCN, with the current version being Version1, which should match GHCN Version2 before the putative UHI adjustments were added to the USHCN. Thus the adjusted versions of the USHCN and the more conservatively adjusted GHCN don’t agree.

A new version, Version2 of the USHCN is essentially complete and awaiting the blessing of the NCDC chief, currently in Geneva, before being released to the public. It will not contain any UHI adjustment, but will consist of several parts…the raw data, the TOB adjusted data and an additional adjustment essentially based on the meta-data. These will include corrections, missing elements and other site adjustments and will be determined on the basis of all of the qualifying neighboring stations which may number 30 or more, and not just one station pre-selected for its close correlation with the target station. The release is expected sometime within the next 4 weeks.

As for the GHCN, a new version of that database, Version3 is in the works and should be available soon. The US stations in that base should closely resemble Version2 of the USHCN.

86. Douglas Hoyt
Posted Aug 30, 2007 at 5:20 AM | Permalink

GISS has corrected its Y2K error.

What about the NYC depopulation error? Will they correct that as well?

The Y2K error manifests itself as a jump in the temperatures. The depopulation error manifests itself as an upward ramp in temperatures in the last 7 years. They appear to be different and independent errors.

The depopulation error probably occurs in any city where a UHI adjustment is made. It may be a problem that is worldwide.

87. Anonymous
Posted Sep 18, 2007 at 6:51 PM | Permalink

I’ve seen a lot of debate over how the temperature has been adjusted.

I haven’t seen anything in this thread noticing that the exposure is not correct, or noticing that the exposure has changed.

Notice all those trees and shrubs around the site? They have all grown up in the last 50 years… and have undoubtedly influenced temperature in a negative direction thanks to shading the site and evapotranspiration.

88. MarkW
Posted Sep 19, 2007 at 5:13 AM | Permalink

Trees can also cause warming, by slowing wind velocities, and by shielding the sensor from the colder night sky.

89. Gunnar
Posted Sep 19, 2007 at 8:31 AM | Permalink

>> Notice all those trees and shrubs around the site? They have all grown up in the last 50 years and have undoubtedly influenced temperature in a negative direction thanks to shading the site and evapotranspiration.

I think it’s a logic error to correct for natural temperature effects. What are we trying to measure?

90. Gunnar
Posted Sep 19, 2007 at 8:38 AM | Permalink

It’s only an error if the site does not represent the area. In other words, the error comes into play when we extrapolate the temperature at a certain site to be representative of the large area implied by averaging temperatures from sites all over the world. Since trees and shrubs are all over, this would not be an error.

By this logic, temperature should never be adjusted. The weight (the area that each sensor represents) should simply be reduced to the area that is similar to the site.

91. Mark T.
Posted Sep 19, 2007 at 10:02 AM | Permalink

They have all grown up in the last 50 years and have undoubtedly influenced temperature in a negative direction thanks to shading the site and evapotranspiration.

Also, shading is the goal anyway, and I don’t understand how anyone sees this as even remotely a “negative” bias. The entire intent of the screens is to “shade” the sensors so they actually measure the temperature of the air.

Mark

Posted Sep 19, 2007 at 10:30 AM | Permalink

RE: #87 – How long has Central Park been in existence? NYC is not some Johnny come lately Sun Belt slurb. How old are the trees? When were they planted? Do you really purport that they have grown enough since 1950 to make a difference? The questions I would be asking would be, when were they topped, were they ever topped, or are they left to nature? Which ones fell down and when? Etc.

93. Sam Urbinto
Posted Sep 19, 2007 at 11:00 AM | Permalink

Mark T, I wouldn’t think shading the sensor with the screen is the same as shading it and the area itself with vegetation. That runs into what you said SteveS. Is an open field with a sensor covered by a screen the same as a forest the same size with a sensor covered by a screen? We just don’t know the specifics, do we.

94. Mark T.
Posted Sep 19, 2007 at 11:05 AM | Permalink

Mark T, I wouldnt think shading the sensor with the screen is the same as shading it and the area itself with vegetation. That runs into what you said SteveS. Is an open field with a sensor covered by a screen the same as a forest the same size with a sensor covered by a screen? We just dont know the specifics, do we.

I think it is immaterial where the shading comes from; the goal is a completely shaded sensor. The screens themselves absorb heat, and the amount depends upon the material of the screen, the paint, etc. Shading simply isn’t a “negative” bias, though I won’t argue that cooling due to vegetation isn’t.

Ultimately, the goal is consistency, which is really what these audits are attempting to uncover.

Mark

95. Sam Urbinto
Posted Sep 19, 2007 at 11:17 AM | Permalink

If a sensor is supposed to be indicative of the area it’s in, the question then becomes how much of the park is covered with vegetation, how big the park is, and how much of the 5×5 grid cell is affected by this sensor. And how it compares to the other stations around it, as well as how much, if at all, this one influences the others in the adjustments. That it’s in shade (or not) isn’t really the question.

On a related note, what I wonder is how much a sensor is affected, if any, by being screened when the screen is in direct desert sunlight versus in a forest with no direct sunlight.

96. Gunnar
Posted Sep 19, 2007 at 11:17 AM | Permalink

Its a logic error to correct for natural temperature effects.

97. Sam Urbinto
Posted Sep 19, 2007 at 11:24 AM | Permalink

That’s a good point, Gunnar. But if a sensor is supposed to be indicative of an area, and 90% of an area is forest (or desert or water or city or farmland…) but the sensor is in the 10% that isn’t, that might be a problem.

Actually that brings up the entire “What are we trying to measure?” question you asked.

What does it tell us when we average and combine various scribal records for x number of randomly placed “air temperature sampling stations” of various conditions, over an arbitrary area that is a 5×5 grid, and then combine and average and etc. those 5×5 grids with each other? (Such placement may or may not be indicative of the area covered…)

What does it tell us to sample the seas by looking at the top, measuring surface sea temperatures by satellite over an entire arbitrary area of a 2×2 grid, turning those into 5×5 grids, and then combining and averaging and etc. those 5×5 grids with each other? (Those are probably fairly similar over the 2×2 by the nature of water, I’d guess. At least for the surface.)

What does it tell us when we combine the above two into readings over the entire 2592 5×5 squares into one monthly mean, then combine the months into years, then the years into decades and try and track change over “normal” in .1 C increments? (Oh, my.)

What are we trying to measure, indeed.
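For what it’s worth, the cell-combining step described above can be sketched in a few lines. Weighting each 5×5 cell by the cosine of its centre latitude is the usual way to allow for cells shrinking toward the poles; that choice, and the numbers, are mine for illustration, not taken from any particular product.

```python
import math

def global_mean_from_cells(cells):
    """Combine (lat_centre_deg, anomaly) pairs for grid cells into one mean,
    weighting each cell by the cosine of its centre latitude (an area proxy)."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total = sum(w * a for w, (_, a) in zip(weights, cells))
    return total / sum(weights)

# Invented example: a warm tropical cell and a flat high-latitude cell.
cells = [(2.5, 1.0), (62.5, 0.0)]
global_mean_from_cells(cells)   # ~0.68: the larger tropical cell dominates
```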

But wait there’s more! Then we try and correlate this information, and extrapolate and correlate that to GHG emissions and land use changes, combine that with the behaviors of weather systems, glaciers, migrations, vegetation growth and so on.

How do we get from there to “Reduce CO2 because we’re killing our planet!”? How can anyone say with any kind of certainty we’ve been looking at any of these patterns long enough, or understand any of the inter-related factors well enough to come to that conclusion? Or to know what unintended consequences might come out of implementing reduction plans, or what better benefits might be gathered from directing the costs other ways?

How do you measure the temperature of a planet in the first place, much less develop explicit cause and effect given such things as these and their interrelations:

Magnetic field
Core temperature and volume
Cosmic rays
Solar factors
Rotation
Gravity
Lunar factors
Wind
Precipitation
Clouds
Pollution and particulates from burning trees, volcanoes, farming and industrial processes
Creating and absorbing GHGs

I do agree whatever it is we’re measuring shows +.6C over 130 years. And that what we do influences the climate. Beyond that? Quite the puzzle.

98. Mark T.
Posted Sep 19, 2007 at 11:26 AM | Permalink

On a related note, what I wonder is how much a sensor is affected, if any, by being screened when the screen is in direct desert sunlight versus in a forest with no direct sunlight.

Yeah, that was part of what I was getting at. Anthony was doing a study (underfunded, of course) comparing different paints, basically attempting to test the absorption differences. Certainly a screen in direct sunlight will exhibit some otherwise unwanted heating during the day, but how much is unknown. Perhaps a screen over the screen? :)

Mark

99. Sam Urbinto
Posted Sep 19, 2007 at 11:33 AM | Permalink

I think he’s still doing the study. But it would seem obvious to me a screen in the sun will affect a sensor inside more than a screen that’s not. I think that’s an issue, they’re not all shaded.

I know, we’ll adjust for the bias!
:D

100. Gunnar
Posted Sep 19, 2007 at 11:43 AM | Permalink

>> but the sensor is in the 10% that isn’t, might be a problem.

Yes, but the correct response to this problem is to install a new sensor in the other area. Then, each sensor would have a weight that accounts for how much area it represents.
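Gunnar’s scheme can be sketched in a few lines; the surface types and fractions below are invented purely for illustration:

```python
def area_weighted_mean(readings):
    """readings: (temperature, area_fraction) pairs; fractions sum to 1."""
    return sum(temp * frac for temp, frac in readings)

# Hypothetical area that is 90% forest and 10% clearing, one sensor in each.
forest, clearing = 18.0, 21.0
area_weighted_mean([(forest, 0.9), (clearing, 0.1)])   # 18.3 (a plain
                                                       # average gives 19.5)
```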

So, I vehemently agree with your whole post.

I also keep thinking about the point that someone made: shouldn’t we be trending each site, then averaging the trends, rather than averaging sites, then trending the average?
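That question can be made concrete with a toy example (all numbers invented): with complete, equal-length records the two orders of operation agree, but when a warm station reports for only part of the period, averaging first manufactures a trend.

```python
def ols_slope(years, vals):
    """Ordinary least-squares trend in degrees per year."""
    n = len(years)
    my, mv = sum(years) / n, sum(vals) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, vals))
    return num / sum((y - my) ** 2 for y in years)

# Site A: flat 10.0 deg for years 0-9. Site B: flat 15.0 deg, years 5-9 only.
trend_a = ols_slope(range(10), [10.0] * 10)            # 0.0
trend_b = ols_slope(range(5, 10), [15.0] * 5)          # 0.0
mean_of_trends = (trend_a + trend_b) / 2               # 0.0: both sites flat

# Averaging the available sites each year, then trending, sees B's arrival
# in year 5 as a warming step even though neither site warmed at all.
avg = [10.0] * 5 + [12.5] * 5
trend_of_avg = ols_slope(range(10), avg)               # ~0.38 deg/yr, spurious
```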

>> On a related note, what I wonder is how much a sensor is affected, if any, by being screened when the screen is in direct desert sunlight versus in a forest with no direct sunlight.

The rest of the desert is in “direct sunlight”. What is the true temperature in the desert environment? Doesn’t the screen give us the temperature that would result, if one was in the shade? We all know that simply stepping into the shade provides instant relief from the heat. Wouldn’t the air flowing through the screen also feel this instant relief? Therefore, the temperature being measured is not representative of the outside environment.

I think its a logic error to correct for natural temperature effects. What are we trying to measure?

101. Posted Sep 19, 2007 at 11:43 AM | Permalink

I’ve been wondering what the history of air temperature sampling stations is in terms of the type of instruments (thermometers) used, what the number and location by year is, and how they are read. I know that in the beginning years of US weather records, mercury or alcohol in glass tubes were manually read two or three times per day. At some point, maximum and minimum records were started, but I don’t know when; and in later years, constant recording instruments began to be used. Even with those, I don’t know how extensively they have been used.

My point is that the accuracy of global temperature measurements has gradually developed from poor to fair and possibly to good. I don’t believe that accurate averages can be claimed for more than ten years, but I’d like some references to back that up.

102. Gunnar
Posted Sep 19, 2007 at 11:54 AM | Permalink

>> Ultimately, the goal is consistency

>> know, well adjust for the bias

Maybe we should just cut to the chase and put in the ultimate adjustment:

AdjustedSiteTemp = – ( MeasuredSiteTemp – GlobalAvg ) + MeasuredSiteTemp
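For anyone multiplying it out, the measured temperature cancels, so the “adjustment” returns the global average for every site:

```python
def adjusted_site_temp(measured, global_avg):
    # Gunnar's "ultimate adjustment", exactly as written above:
    # -(M - G) + M simplifies to G, whatever M was.
    return -(measured - global_avg) + measured

adjusted_site_temp(25.0, 14.5)   # 14.5
adjusted_site_temp(-3.0, 14.5)   # 14.5
```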

103. Sam Urbinto
Posted Sep 19, 2007 at 12:11 PM | Permalink

“AdjustedSiteTemp = – ( MeasuredSiteTemp – GlobalAvg ) + MeasuredSiteTemp”

lol, Gunnar

Yes, if we really want to know what’s going on, “each sensor would have a weight that accounts for how much area it represents.”

Douglas, I’ve long had the preliminary theory idea opinion hypothesis that most of the warming has been due to improvements in measuring devices, and most any that’s not is due to the way we use land, with some small amount due to our production of GHG and pollutants. The problem is, if you’re talking long-term climate change, 10 years isn’t enough. Hello, 100 years might not be enough.

That’s why we try and determine cycles by looking at the past. But this isn’t 1000 years ago, much less orders of magnitude longer ago than that.

I think many times, the crux of this matter is those that firmly believe we can tell what’s going on well versus those that firmly believe we can’t tell. Maybe, maybe not. A lot of people have an issue with answers of “We don’t know”. I’d like to say we do. But I don’t think we can.

104. Gunnar
Posted Sep 19, 2007 at 12:36 PM | Permalink

>> most of the warming has been due to improvements in measuring devices

Don’t forget Steve M’s point that the drive towards automation also drove sites to be relocated next to buildings.

105. Mark T.
Posted Sep 19, 2007 at 12:43 PM | Permalink

Maybe we should just cut to the chase and put in the ultimate adjustment:

AdjustedSiteTemp = – ( MeasuredSiteTemp – GlobalAvg ) + MeasuredSiteTemp

A friend of mine in college called this the “Ron Constant” which is precisely equal to the difference between your answer, and the answer in the back of the book.

Mark

106. Anonymous
Posted Sep 19, 2007 at 5:02 PM | Permalink

You guys are missing some things.

The first is that the guidelines for siting a climate station require vegetation and other obstructions to be at least 100 feet distance. Central Park currently fails that test by a LONG shot.

The second is that, yes, those trees are MUCH bigger than they were 50 years ago, not to mention 100 years ago. This has strongly influenced how warm the site gets during summer days. There are in fact many meteorologists in the NYC area who have deferred to other sites in the area during the summer because Central Park is reading artificially cool, and has been for at least 10 years now.

107. Anonymous
Posted Sep 19, 2007 at 5:07 PM | Permalink

A few more things…

Regarding shade, no, a climate site is properly sited in an UNshaded location. This is because such a site is easiest to maintain in the SAME STATE. It’s easy to prevent large trees and bushes from growing near a site… but MUCH harder to replace them when they are gone. If a site is shaded, and then the trees fall down or die, all of a sudden the microclimate is significantly altered. Trees and other woody vegetation DO SIGNIFICANTLY cool the local environment. If they die or are blown down or otherwise removed, the microclimate changes.

Finally, to the person who said that current equipment reads warm… actually, the current ASOS (which Central Park is) is believed to have a slight COOL BIAS compared to previous observing equipment.

108. D. Patterson
Posted Sep 19, 2007 at 6:20 PM | Permalink

Re: #100

>> On a related note, what I wonder is how much a sensor is affected, if any, by being screened when the screen is in direct desert sunlight versus in a forest with no direct sunlight.

The rest of the desert is in “direct sunlight”. What is the true temperature in the desert environment? Doesn’t the screen give us the temperature that would result, if one was in the shade? We all know that simply stepping into the shade provides instant relief from the heat. Wouldn’t the air flowing through the screen also feel this instant relief? Therefore, the temperature being measured is not representative of the outside environment.

I think its a logic error to correct for natural temperature effects. What are we trying to measure?

You are trying to measure the air temperature within the boundary layer of an air mass located a standard measured distance above a land surface which is a normal representative of the area or regional land surface being sampled. Think of the land surface air temperature measurement as a proxy measurement for gauging the heat content of the tropospheric air mass immediately above the land surface boundary layer, because it is too impractical to put the instruments and observers at a greater altitude on such a frequent observation schedule. Upper air observations by rawinsonde and pibal (pilot balloon) are used to directly measure characteristics of the tropospheric air masses, lower stratospheric air masses, and upper stratospheric masses; but their cost precludes using such methods on anywhere near the scope accomplished by the land surface observations.

The air spaces between the trees and other vegetation inside the boundaries of a forest tend to be more humid and different in other respects when compared to the air mass located directly above the forest and directly at the sides of the forest. Consequently, air temperature measurements taken from the air mass samples within the boundaries of a forest are not representative of the air temperatures and other characteristics of the air mass located 1.5 meters and higher above the forest canopy or a standard distance of meters from the nearest forest boundary. Since the objective of the air temperature measurement is to gauge the heat content of the unimpeded tropospheric air mass above the forest canopy and not the heat content of the air spaces within the forest canopy, land surface air temperature measurements representative of the tropospheric air mass must be sampled above the forest canopy, which is typically impractical, or within a clearing in the forest canopy which permits the tropospheric air mass to descend and circulate directly above the land surface without being modified and influenced by forest or other vegetation. In other words, you can think of the interior of a forest as a separate type of environment, in much the same way as the interiors of caves, burrows, soils, lawns, buildings, and vehicles have modified and highly modified air spaces which are not representative of the freely circulating air masses to be found in the open atmosphere on their exteriors. There are other record series for measuring forest temperatures, humidity, CO2, and so forth. There are also other record series measuring soil temperatures. The objective of land surface air temperature measurement is different, with its goal of determining the heat content of the tropospheric air mass and not the air masses within restricted air spaces such as forests, buildings, soils, etc.

Measurements of air temperatures are taken by thermometers within the shaded confines of an instrument shelter by necessity. When a thermometer is exposed to direct sunlight, the thermal measurement includes the heat energy conveyed to the instrument by conduction from the air mass plus the heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor. When a thermometer is not exposed to direct sunlight, the thermal measurement includes only the heat energy conveyed to the instrument by conduction from the air mass. The heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor is eliminated by shading the thermometer. Because the objective of the measurement is to determine only the heat content of the air mass, it is inappropriate to include measurements of the heat content of the soil or solar gain within the materials of the thermometer.

When trying to determine what is to be measured with respect to surface air temperatures, it must be remembered that the objective is to determine the heat content of the tropospheric air mass above and around the forest environment, the desert environment, the agricultural environment, and the urban environment; just as the marine air temperature is a measurement of the heat content of the air mass located immediately above the sea surface and not the sea surface temperature of the sea surface water or the submarine water temperatures.

Posted Sep 19, 2007 at 9:40 PM | Permalink

A point that anonymous misses is that heat from nearby dense urbanization can easily travel into Central Park.

Posted Sep 19, 2007 at 9:48 PM | Permalink

http://www.nytstore.com/ProdDetail.aspx?prodId=1093&refprod=2080

Vs.

“The second is that, yes, those trees are MUCH bigger than they were 50 years ago, not to mention 100 years ago.”

111. Anonymous
Posted Sep 19, 2007 at 10:56 PM | Permalink

First, I do not miss that point. However, the point I am trying to make is that any attempt to consider the urban heat island is overcompensating because it is assuming a properly sited climate station, which it is not.

Second, I realize that the trees in other parts of the park have been large for a long time. That is not the point. The point is that the trees in the vicinity of the ASOS have NOT been large for a long time.

112. D. Patterson
Posted Sep 19, 2007 at 11:40 PM | Permalink

Re: #111

Do you believe that putting the ASOS inside Grand Central Station is going to measure land surface air temperatures which are fairly representative of the heat content of the tropospheric air mass above and around the area of New York City and the 5 degree latitude and 5 degree longitude quadrangular geographic region the city is located within?

113. beng
Posted Sep 20, 2007 at 7:00 AM | Permalink

RE 107: Anonymous says:

Trees and other woody vegetation DO SIGNIFICANTLY cool the local environment.

Anonymous, granted, my observations are quite limited, but measuring temps for 14 yrs in a dense deciduous forest and comparing them to a nearby (~4.5 miles) NOAA station at an open airport showed that temps were indeed very close on an annual average. The difference was that daytime highs were reduced at my site, but the nighttime lows were equally increased. The effect was greatest during the warm season.

114. Posted Sep 20, 2007 at 7:07 AM | Permalink

Douglas, Ive long had the preliminary theory idea opinion hypothesis that most of the warming has been due to improvements in measuring devices, and most any thats not is due to the way we use land, with some small amount due to our production of GHG and pollutants. The problem is, if youre talking long-term climate change, 10 years isnt enough. Hello, 100 years might not be enough.

Sam,
I agree with this and would add that it’s not only improvements in the instruments, but a huge increase in the number of stations measuring temperature. I think that, included in the way we use land, asphalt paving and black roofing are large contributors. The land in the US paved for highways is 2-3% of the land area. CO2 is the 90 lb weakling on the beach.

115. Gunnar
Posted Sep 20, 2007 at 7:58 AM | Permalink

>> a climate site is properly sited in an UNshaded location.

Anonymous, this can’t be generally true. The site should reflect the area that it represents. Although a changing site is bad for time series, it is inappropriate to choose a site that misrepresents the area, just to ensure lack of change.

>> You are trying to measure the air temperature within the boundary layer of an air mass located a standard measured distance above a land surface which is a normal representative of the area or regional land surface being sampled.

Yes, I agree that this is the proper and correct answer to the question “what are we trying to measure”.

>> Since the objective of the air temperature measurement is to gauge the heat content of the unimpeded tropospheric air mass above the forest canopy

But this violates the definition just stated. If the area being represented is all forest, then the site should be in the forest.

>> Measurements of air temperatures are taken by thermometers within the shaded confines of an instrument shelter by necessity. When a thermometer is exposed to direct sunlight, the thermal measurement includes the heat energy conveyed to the instrument by conduction from the air mass plus the heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor.

I agree that absorption of solar energy by the glass and mercury instrument is introducing an error. However, measuring air not in direct sunlight (when that properly represents the area) is ALSO introducing error. From personal experience, I think the error introduced is bigger than the error eliminated, but that should be measured.

>> the thermal measurement includes only the heat energy conveyed to the instrument by conduction from the air mass.

By limiting air flow, the environment inside the screen may not be representative of the area of the site. Air flow speeds thermal equilibrium, and I don’t think it introduces error.

>> The heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor is eliminated by shading the thermometer.

Why not just cover the thermometer with a very reflective material? This would avoid glass solar absorption error, while not introducing the shade error.

>> air mass above and around the area of New York City and the 5 degree latitude and 5 degree longitude quadrangular geographic region the city is located within?

The error is in having a fixed grid size, and then over-extrapolating the site to represent it. A site that is to represent Manhattan would have to be outdoors, but then it could be right next to Grand Central Station.

116. Gunnar
Posted Sep 20, 2007 at 8:06 AM | Permalink

>> but a huge increase in the number of stations measuring temperature

But I think that Steve M has documented a huge decrease in stations in recent years, probably because there is a general reduction in weather hobbying.

117. Steve McIntyre
Posted Sep 20, 2007 at 8:18 AM | Permalink

#116. Misreported. Others have observed the supposed decrease in station numbers. I think that it’s possible that the apparent decrease reflects the failure of GHCN or any other organization to update their collation of non-airport records since 1990.

Posted Sep 20, 2007 at 10:30 AM | Permalink

RE: #111 – on my most recent walk near the castle and the station (early July of this year), the trees near it did not seem to be any older or younger on average than ones elsewhere. The substrate there is a bit rockier, so absolute heights at maturity may be impacted slightly. But I do not recall “younger” forest there. Do you have some specific data on this? If so, please share.

119. D. Patterson
Posted Sep 20, 2007 at 12:10 PM | Permalink

Re: #115

Gunnar says:

September 20th, 2007 at 7:58 am
>> a climate site is properly sited in an UNshaded location.

Anonymous, this cant be generally true. The site should reflect the area that it represents. Although a changing site is bad for time series, it is inappropriate to choose a site that misrepresents the area, just to ensure lack of change.

>> You are trying to measure the air temperature within the boundary layer of an air mass located a standard measured distance above a land surface which is a normal representative of the area or regional land surface being sampled.

Yes, I agree that this is the proper and correct answer to the question “what are we trying to measure”.

>> Since the objective of the air temperature measurement is to gauge the heat content of the unimpeded tropospheric air mass above the forest canopy

But this violates the definition just stated. If the area being represented is all forest, then the site should be in the forest.

It doesn’t violate the definition, because the forest environment is precisely what is NOT supposed to be measured for a land surface air temperature observation. The study of the environment inside the boundaries of a forest is the study of forest ecology. The area being represented by a land surface air temperature observation is absolutely NOT the forest. There is a separate and distinct set of research methods, record series, and air temperature and humidity studies devoted to that specific body of research. Land surface air temperature observation record series are NOT intended or designed to capture data about forest ecologies; they are specifically intended and designed to capture information and data about the tropospheric air mass currently located above and at the observing location outside of a forest environment. Forest interstitial air temperature, humidity, and other observations and land surface air observations serve two totally separate purposes and objectives, and substituting one for the other violates each other’s definitional purposes.

Yes, it is true that forests impart an effect upon any air mass which comes into contact with the forest. The same is true of any other type of environment an air mass contacts in the course of its evolution and movement through the atmosphere. It must be remembered, however, that an air mass is a mass of air, and not a mass of trees and vegetation which happens to be saturated with substantial quantities of air. The soils, the seas, artificial buildings, and the lower stratosphere also have substantial quantities of air within their environments, but we do not use or accept measurements of the air temperatures taken from their environments to represent samples of the troposphere’s air mass present at our geographical location at a given moment in time. We do not do so, because we are trying to learn about the air temperature of the environment inside the lower boundary layer of the tropospheric air mass and not the air temperature inside the environment of the forest mass, soil mass, sea mass, the lower stratospheric air mass, artificial building mass, or perhaps urban mass. Measurement of the air temperatures inside the other environments does not give us any direct information about what their imparted effects were upon the air temperatures inside the environment of the air mass of the troposphere existing outside those other environments. Land surface air temperature observations are meant to measure only the resultant air temperatures inside the environment of the air mass of the troposphere existing OUTSIDE those other environments.

Think of the other environments as separate body masses which have different characteristics in material, conduction, convection, humidity, and temperature. When any of these other environments having differential characteristics comes into contact with the environment of a tropospheric air mass, you can expect some form of heat exchange to take place, unless there were to be some type of impossible equipoise in temperature and all means of conduction, radiation, and other heat exchange. The objective of the land surface air temperature observation is to measure the resulting air temperature within the tropospheric air mass after the heat exchange has taken place with the other environments, with forests being among the other environments. To use an analogy, we do not measure the temperature of the iron frying pan and its internal environment of the iron bottom to measure the current temperature of a frying egg. If the frying egg just came out of the refrigerator, its temperature may not be anywhere near the temperature of the environment inside the iron frying pan. Likewise we do not use the temperature inside of a forest to represent and measure the environment and temperature of the air mass outside the forest environment. They are two separate environments that are exchanging heat with each other.

Meteorology and meteorologists measure the air temperatures of different air masses as a means of forecasting interactions between those air masses (think of hot and cold air masses) and the other environments they interact with on a short time scale (think forests, deserts, seas, urban areas). Climatologists measure long-term trends in those same interactions. Land surface air temperature observations are observations of just one part of one environment, the air mass in the lower boundary layer of the troposphere, and they were never intended to be, nor are they suitable for, representing any other type of environment. This includes the upper air of the troposphere, forests, the lower stratosphere, soils, marine surface air, and any other environments, each of which has its own separate air temperature and/or water temperature measurement record series.

>> Measurements of air temperatures are taken by thermometers within the shaded confines of an instrument shelter by necessity. When a thermometer is exposed to direct sunlight, the thermal measurement includes the heat energy conveyed to the instrument by conduction from the air mass plus the heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor.

I agree that absorption of solar energy by the glass and mercury instrument introduces an error. However, measuring only air that is shielded from direct sunlight, when sunlit air properly represents the area, ALSO introduces error. From personal experience, I think the error introduced is bigger than the error eliminated, but that should be measured.

Air has little mass available to absorb any more radiant energy than it already has, whereas any thermal instrument has a substantial amount of mass which is highly sensitive to transient changes in solar radiation due to clouds, haze, diurnal and seasonal solar angles of incidence, and much more. Measurement of the heat content of the air mass circulating through the instrument shelter provides a common reference environment which eliminates, or at least hugely reduces, the tremendous variables introduced by solar radiation, backscatter radiation, wet-bulb results from a dry-bulb instrument, and so forth. Also, the radiation environment of an instrument shelter is far more representative of the radiation environment within a tropospheric air mass under a 10/10 overcast than the radiation environment found in the open within the first two meters of reflective land surfaces. Confirmation of this can be obtained with a photographic color meter, or better still with an instrument that can measure infrared radiation within the proper frequency range.

>> the thermal measurement includes the only heat energy conveyed to the instrument by conduction from the air mass.

By limiting air flow, the environment inside the screen may not be representative of the area of the site. Air flow speeds thermal equilibrium, and I don’t think it introduces error.

Air flow contributes to the speed and scope of heat exchange and thermal equilibrium between dissimilar environments, with the measured air mass being one of those environments. Since the objective is to measure the heat content of the air mass after it has been removed from further influence and modification by the heat exchanges with other environments such as solar radiation, convection, and so forth; the instrument shelter is the best available compromise for sampling the heat content of the air mass in environmental conditions suitable for the purpose of standard reference comparisons.

>> The heat energy resulting from the direct absorption of solar energy by the glass and mercury instrument or thermistor is eliminated by shading the thermometer.

Why not just cover the thermometer with a very reflective material? This would avoid glass solar absorption error, while not introducing the shade error.

A thermometer cover is shade, but it is shade which does not allow enough space for the circulation of air to prevent the buildup of thermal energy trapped next to the sensor. The closely covered instrument would record higher measurements of heat content than actually exist in the larger air mass just beyond the cover. This is also true of current instrument shelters to a much lesser extent, but using equivalent shelters provides a necessary common basis of reference for comparative purposes, until the shelters become non-equivalent in environment and non-comparable in measurements.

>> air mass above and around the area of New York City and the 5 degree latitude and 5 degree longitude quadrangular geographic region the city is located within?

The error is in having a fixed grid size, and then over-extrapolating one site to represent it. A site that is to represent Manhattan would have to be outdoors, but then it could be right next to Grand Central Station.

The five-by-five-degree grid cells are not at all comparable to one another in volume or in heat content per volume. The five-by-five grid was originally created only as a convenient radio communications reporting format. Marine radio communications, and later weather reports transmitted by those mobile radio transmitters, lacked fixed geographical stations to identify the mobile transmitting locations and weather observing sites. Using the existing radio grid location formats, the marine weather observations were gathered into databases keyed by radio grid locations. As you move from the equator to the poles, the areas and volumes of the grid cells greatly decrease. As the grid volumes decrease, so does the maximum possible heat content for each such volume of air. There are innumerable other potential errors.
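The latitude dependence described here is easy to quantify. As a rough illustration, using a spherical-Earth approximation (the function and constant names are my own):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius, spherical approximation

def cell_area_km2(lat_south_deg, dlat_deg=5.0, dlon_deg=5.0):
    """Area of a latitude/longitude grid cell on a spherical Earth:
    A = R^2 * dlon * (sin(lat_north) - sin(lat_south)), angles in radians."""
    lat1 = math.radians(lat_south_deg)
    lat2 = math.radians(lat_south_deg + dlat_deg)
    return R_EARTH_KM ** 2 * math.radians(dlon_deg) * (math.sin(lat2) - math.sin(lat1))

equator = cell_area_km2(0.0)   # cell spanning 0-5 N
polar = cell_area_km2(60.0)    # cell spanning 60-65 N
print(f"0-5N: {equator:,.0f} km^2, 60-65N: {polar:,.0f} km^2, ratio {equator / polar:.2f}")
```

A five-degree cell at the equator covers more than twice the area of one starting at 60 degrees latitude, so treating the cells as equal units exaggerates high-latitude contributions unless they are area-weighted.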

I have to question the legitimacy of using land surface air temperatures measured from within the boundaries of an urban environment. It seems to me that the urban environment should be treated as an extension of the land surface up to an altitude where the environment comes into equipoise with the thermal environment of the land surface surrounding the urban area. In other words, think of the urban environment as a bubble and a hill in the land surface: the boundaries of the bubble intersect with the ground wherever the urban thermal environment transitions into the rural thermal environments, and extend to the zenith above the urban area where the thermal environment above the city matches that above the surrounding rural areas. The urban environment is then treated as just another land surface or sea surface feature that is exchanging heat with the dissimilar air mass environment whose separate air temperature we are measuring and reporting as a land surface air temperature. In effect, the tropospheric air mass is, for our purposes, making lower boundary contact with the land surface of the urban environment at some altitude above the actual ground of the urban geography.

This is not already being done for land surface air temperature observations because of the obvious problems there would be in making such observations at the necessary altitudes above urban communities. Such a departure from and compromise with theory was satisfactory for the purposes of typical meteorology, but it is not necessarily satisfactory for the purposes of environmental ecology or climatology. Such compromises in data quality used for incompatible purposes appear to be a recurring theme and a recurring problem wherever climatology attempts to use data acquired under methodologies and quality standards meant for the practical limitations and requirements of meteorology. Perhaps the time is long overdue to recognize that air temperatures for urban environments must be treated as a separate record series with their own appropriate methodologies and applications apart from meteorology.

120. Sam Urbinto
Posted Sep 20, 2007 at 12:13 PM | Permalink

D. Patterson says: You are trying to measure the air temperature within the boundary layer of an air mass located a standard measured distance above a land surface which is a normal representative of the area or regional land surface being sampled.

Well, in a way, yes. But not really, at least as far as the big picture. What we are trying to measure is the change of the mean global surface temperature over time.

Now ignoring the meaning of that information, and what we do with the information, how are we implementing the goal of measuring that? By sampling the air in various locations and determining the maximum and minimum daily values there. We use this as indicative of what the surface is doing.

How are we sampling the air in various locations; what are we sampling? We read the temperature of the surface around the measuring device by how the surface’s thermal characteristics react and mix with the air at the location of the sensor.

Now, to implement the goal of measuring the change of the mean global surface temperature over time, there are certain qualifications and criteria that have to be taken into account in order to ensure that we are indeed proceeding in a way which actually gives us that information and not something else, and that the information has meaning and is accurate.

We have to know the humidity at the sensor’s location also.
The sensors need to be shielded from wind, snow, rain, dust, and heat. And possible human or animal interference or general damage.
The shields need to not heat or cool the air or disrupt the free mixing of the surface’s thermal characteristics with the air.
The sensors need to all be at a uniform height consistent with measuring the atmospheric boundary layer: not so low that they measure only the surface directly under the sensor, but not so high that they are out of the surface layer of the ABL.
The sensors need to be far enough away from obstructions and other materials that might interact with or influence the surface/air mix.
The sensors need to be located in a place indicative of the majority of the area being sampled.
The area being sampled needs to be large enough to be meaningful, but not so large there are too many varied conditions in the area.
The area being sampled needs to be of a known size, so the sensor’s contribution to the whole can be weighted properly.

I might have got some of that wrong or forgotten something, but you get the point I hope.
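Criteria like these lend themselves to a mechanical check against station metadata. A toy sketch (the field names and thresholds below are my own assumptions, not any official standard):

```python
# Toy siting check against a few of the criteria listed above.
# Field names and thresholds are illustrative assumptions only.
def siting_issues(station):
    issues = []
    if not 1.2 <= station["sensor_height_m"] <= 2.0:
        issues.append("sensor not at a uniform standard height")
    if station["distance_to_obstruction_m"] < 30:
        issues.append("too close to obstructions")
    if not station["shielded"]:
        issues.append("unshielded from wind, snow, rain, dust, and heat")
    if not station["humidity_sensor"]:
        issues.append("no humidity measurement at the sensor")
    return issues

# Hypothetical station record, not real metadata for any actual site.
example = {
    "sensor_height_m": 1.5,
    "distance_to_obstruction_m": 10,
    "shielded": True,
    "humidity_sensor": True,
}
print(siting_issues(example))  # flags only the obstruction distance
```

A pass over every station record with a check like this would at least make the siting shortcomings quantifiable instead of anecdotal.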

121. D. Patterson
Posted Sep 20, 2007 at 12:33 PM | Permalink

Re: #121

Sam,

My comment referred ONLY to the purpose of the specific land surface weather observation record series and surface air temperatures being discussed with respect to Central Park, New York City. One of the problems with the IPCC and their contributors is their use of land surface weather observation records for inappropriate and invalid purposes. One of those suspect purposes is the pursuit of a perhaps mythical Global Mean Temperature, which has yet to be proven a valid scientific reality.

In the meanwhile, the land surface weather observations, methodologies, and procedures were designed and implemented for certain specific and limited purposes and objectives regarding one specific air environment of interest for meteorological purposes, and not for compilation of a Global Mean Temperature accurate to measurements of a tenth of a degree Celsius. Using the land surface weather observations to derive the Big Picture is, in my opinion, altogether and absolutely scientifically insupportable for innumerable reasons.

122. Gunnar
Posted Sep 20, 2007 at 1:32 PM | Permalink

D. Patterson, (http://www.climateaudit.org/?p=1798#comment-140027)

Very, very long-winded response. After reading all that, I’m left unconvinced. Your first paragraph is circular logic: land surface measurements should not measure the temperature in a forest, because we’re not trying to measure the forest?

The reasoning in my post stands. You stated the definition correctly, but then imply that what we’re actually trying to measure is the energy content of the troposphere. Errr. If the whole area is forest, we must position the site in the forest. We do want to eliminate human interference, but we don’t want to adjust or compensate for natural effects. If you start subtracting natural effects, you will eventually remove day, night, wind, sun, rain, forests, deserts, suburbs, swamps, tundras, mountainous areas, valleys, etc. Then you’ll end up with the ultimate adjustment that I previously mentioned. Gong!

Perhaps, each station should have the screen, but also thermocouples and an IR temp measurement checking the open air around the screen.

>> forest mass, soil mass, sea mass, the lower stratospheric air mass, artificial building mass, or perhaps urban mass

Why the straw man? We’re measuring the atmosphere, so solids, liquids, and human-controlled indoor spaces are off the table. If it’s air, just above the surface, we measure it. We do need to measure in urban environments, but we just need to weight the measurement by the size of the city. Imagine if we lived on Trantor, and we removed the UHI effect. Man does exist, we do build cities, and although probably negligible at the moment, our cities are warming the environment.
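The weighting Gunnar suggests can be sketched in a few lines. A minimal illustration (the station values and area fractions are invented):

```python
# Area-weighted regional mean: the urban station contributes in
# proportion to the fraction of the region the city actually covers.
# All numbers below are invented for illustration.
stations = [
    {"name": "urban",  "temp_c": 16.2, "area_frac": 0.03},
    {"name": "rural1", "temp_c": 14.1, "area_frac": 0.50},
    {"name": "rural2", "temp_c": 14.5, "area_frac": 0.47},
]
weighted = sum(s["temp_c"] * s["area_frac"] for s in stations)
simple = sum(s["temp_c"] for s in stations) / len(stations)
print(f"area-weighted {weighted:.2f} C vs simple mean {simple:.2f} C")
```

With the city down-weighted to its 3 percent of the region, the warm urban reading barely moves the regional mean, whereas an unweighted average of the three stations would overstate it.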

123. Sam Urbinto
Posted Sep 20, 2007 at 1:56 PM | Permalink

Re #122 Ah, okay. D.P. And very good explanations they are for what you’re trying to say. I understand your point, that’s not the point I’m making though, and I didn’t think that’s what the discussion was about. So we’re looking at it from different angles. I don’t disagree with you. I wasn’t really answering your comment specifically per se, but rather describing the entire situation in general: What are we trying to accomplish overall. What is the purpose of the stations as a whole, which is to derive the change of the mean global surface temperature over time. So that’s what we’re trying to measure, from that standpoint at least.

I don’t really think we’re here to argue about measuring “the Earth”, just how to do it the best we can if we’re going to try.

Overall, you’re more speaking of what each station is trying to measure, if that’s applicable to the area it’s in at all, and if the air the sensor is in is the correct air for the purpose that’s trying to be accomplished. And if doing that to gauge global temps is appropriate. That’s what I got out of it. Obviously, if you want “surface temp” of a forest, you put it in the forest. If you want to know the temp of the lower layer of the ABL, you put it above the forest. I don’t know what either of them really accomplishes, for other than practical purposes, but I’d really rather not get into more than one subject at a time.

My opinion on the goal itself is immaterial as far as discussing the best way to accomplish that goal. The validity or worth or importance of trying to accomplish it is a separate discussion. (How to build the Titanic, versus why to build the Titanic and/or why not to build the Titanic. lol. How, versus For/Against/Project Management. Anyway.)

As I said, I was ignoring the meaning of the information and what we do with the information. We know the non-CRN stations were made to tell people “This is ‘the temperature’ outside,” not to track climate to high degrees of accuracy. So all we get are estimates, which is all we’ll get, because it’s all we can get.

I’m simply laying out what would be needed to really attempt to derive anything near some kind of “global temperature” (Which according to NOAA is 47.3 F 1901-2000 for the land surface mean*). If that number is meaningful or not, well. If the descriptions of what everything should be like is some kind of basis for determining some conclusion based upon the fact that they’re pretty much not like that, well, that’s fine too…..

Oh, I forgot some:

We need to know the altitude.
The sensors need to have a high enough degree of resolution to track temperatures to 0.1 °C or lower.
The sensors need to be regularly calibrated.
The sensors should take frequent samples (every 10 minutes?) and report them via radio, IR, or buried cable to a recording station.
There should be multiple sensors (at least for temperature sensors, I’ve kind of been lumping in any other sensors at times).
(I’ll just mention maybe normalizing/equalizing/(whatever you call it) the readings based upon some common temperature/humidity/altitude calculation for each reading over the network of sites.)
An automated system to collect all this and balance everything together and calculate everything would be nice.
etc

And all that kind of thing would give us a far better idea than we are getting now; that’s the bottom line.
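One concrete payoff of frequent sampling is worth spelling out: with ten-minute samples one can compute a true daily mean instead of the (Tmax + Tmin)/2 midpoint that once-a-day min/max reads force. A sketch with an invented, deliberately skewed diurnal cycle:

```python
import math

# Invented diurnal cycle sampled every 10 minutes: a brief afternoon
# peak, so the min/max midpoint overstates the true daily mean.
samples = [10 + 6 * max(0.0, math.sin(math.pi * (m - 360) / 600))
           for m in range(0, 1440, 10)]
true_mean = sum(samples) / len(samples)
midpoint = (max(samples) + min(samples)) / 2
print(f"true mean {true_mean:.2f} C vs min/max midpoint {midpoint:.2f} C")
```

The two statistics agree only for a symmetric day; for anything else the midpoint carries a shape-dependent bias, which is one reason frequent automated sampling beats min/max thermometry.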

——————————

*As far as what NOAA has to say about their land surface mean:

Absolute estimates of global mean surface temperature are difficult to compile for a number of reasons. Since some regions of the world have few temperature measurement stations (e.g., the Sahara Desert), interpolation must be made over large, data sparse regions. In mountainous areas, most observations come from valleys where the people live so consideration must be given to the effects of elevation on a region’s average as well as to other factors that influence surface temperature. Consequently, the estimates below, while considered the best available, are still approximations and reflect the assumptions inherent in interpolation and data processing. Time series of monthly temperature records are more often expressed as departures from a base period (e.g., 1961-1990, 1901-2000) since these records are more easily interpreted and avoid some of the problems associated with estimating absolute surface temperatures over large regions.
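The anomaly approach NOAA describes can be illustrated directly (the station series and base period below are invented):

```python
# Convert an absolute yearly series to anomalies relative to a base
# period, as NOAA describes. All values are invented for illustration.
series = {1998: 15.2, 1999: 14.8, 2000: 15.0, 2001: 15.4, 2002: 15.6}
base_years = [1998, 1999, 2000]
baseline = sum(series[y] for y in base_years) / len(base_years)  # 15.0
anomalies = {y: round(t - baseline, 2) for y, t in series.items()}
print(anomalies)
```

Because each station is differenced against its own baseline, anomalies from a valley station and a mountaintop station can be compared and averaged even though their absolute temperatures cannot, which is why the time series are more easily interpreted in anomaly form.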

124. D. Patterson
Posted Sep 20, 2007 at 9:03 PM | Permalink

D. Patterson, (http://www.climateaudit.org/?p=1798#comment-140027)

Very, very long-winded response. After reading all that, I’m left unconvinced. Your first paragraph is circular logic: land surface measurements should not measure the temperature in a forest, because we’re not trying to measure the forest?

You are interpreting the paragraph as circular logic because you refuse to accept the fact that the WMO treats a forest as an integral part of the land surface and not as an integral part of the atmosphere that is being measured. Nonetheless, the WMO and its member weather organizations have been reporting land surface air temperatures while treating forests as an excluded part of the subsurface of the land for decades now.

Federal Standard For Siting Meteorological Sensors At Airports. CHAPTER 2 – SENSOR EXPOSURE
2.6 TEMPERATURE AND DEW POINT SENSORS. The temperature and dew point sensors will be mounted so that the aspirator intake is 5 ± 1 feet (1.5 ± 0.3 meters) above ground level or 2 feet (0.6 meters) above the average maximum snow depth, whichever is higher. Five feet (1.5 meters) above ground is the preferred height. The sensors will be protected from radiation from the sun, sky, earth, and any other surrounding objects but at the same time be adequately ventilated. The sensors will be installed in such a position as to ensure that measurements are representative of the free air circulating in the locality and not influenced by artificial conditions, such as large buildings, cooling towers, and expanses of concrete and tarmac. Any grass and vegetation within 100 feet (30 meters) of the sensor should be clipped to height of about 10 inches (25 centimeters) or less.

Note how the WMO standard requires the air temperature observation to be measured OUTSIDE the forest and where the air can freely circulate, meaning where the tropospheric air mass can freely circulate without obstruction by any form of vegetation.

The reasoning in my post stands. You stated the definition correctly, but then imply that what we’re actually trying to measure is the energy content of the troposphere. Errr. If the whole area is forest, we must position the site in the forest. We do want to eliminate human interference, but we don’t want to adjust or compensate for natural effects. If you start subtracting natural effects, you will eventually remove day, night, wind, sun, rain, forests, deserts, suburbs, swamps, tundras, mountainous areas, valleys, etc. Then you’ll end up with the ultimate adjustment that I previously mentioned. Gong!

I understand your reasoning perfectly well. Unfortunately, your reasoning is simply contrary to the practices and principles used by the WMO and its member organizations for many decades with respect to land surface weather observations. Their standard and general practice is to measure the air temperature at least 1.5 meters above the land surface and any vegetation, or at a distance of at least two to four times the height of the nearest obstructive vegetation away from it, with any grass and vegetation within 100 feet (30 meters) of the sensor clipped to a height of about 10 inches (25 centimeters) or less.

The WMO is not subtracting natural effects which exist in the air mass at least 1.5 meters above or a number of meters beside a forest, a mountain cave, a desert canyon, a tundra, or other natural feature that is part of the land surface and subsurface. It is measuring the air temperatures of the troposphere at the standard distances (1.5 meters etc.) adjacent to the natural effects as intended and regulated by the WMO and its member weather organizations. Whatever natural effects exist in the temperature of the tropospheric air mass at the standard distances above and beside the land surface feature are being measured. The WMO is measuring the temperature of the tropospheric air mass and not the temperature of the air pockets inside the tundra, the mountain cave, a desert rock, a lake, a sea, a bog, an agricultural soil, or a forest. These are all phenomena which have air spaces of their own which are surface and subsurface features of land and marine environments separate from the troposphere being measured and reported by the WMO land surface weather watch.

If you’ll notice, the land surface weather observations also do not include air temperature measurements at higher altitudes within the troposphere, and they do not include air temperature measurements taken within the lower stratosphere. These facts illustrate how the land surface weather observation includes air temperature data from only one limited area of the atmosphere and not other areas of the atmosphere above, below, or beside the defined area of observation. Rather than being a straw man argument as you alleged, the differences serve to illustrate how the land surface weather observation is unique and excludes observations of the air spaces within a forest.

Perhaps, each station should have the screen, but also thermocouples and an IR temp measurement checking the open air around the screen.

>> forest mass, soil mass, sea mass, the lower stratospheric air mass, artificial building mass, or perhaps urban mass

Why the straw man? Were measuring the atmosphere, so solids, liquids and human controlled indoors are off the table. If its air, just above the surface, we measure it. We do need to measure in urban environments, but we just need to weight the measurement with the size of the city. Imagine if we lived on Trantor, and we removed the UHI effect. Man does exist, we do build cities, and although probably negligible at the moment, our cities are warming the environment.

There is no straw man argument on my part, because I am giving you a straightforward and accurate report of the practices, standards, and principles used by the WMO and its member weather organizations when they conduct the Land Surface Weather Watch. In a scenario where a weather watch was being conducted on the surface of Trantor, the Trantorian Meteorological Organization would no doubt recognize the fact that the planetary surface was a 100 percent urban environment and deploy their surface weather observation sensors accordingly, perhaps requiring the observational siting to remain at least 1.5 meters above and a standard number of meters beside any non-urban features like frost fields, melt ponds, algal accumulations, and intergalactic zoos; not to mention all of those anthropogenic influences like vents, heat exchangers, surface pits, and super laser blasters. In any case, forests and the air spaces within them are classified by the WMO as an integral part of the land surface, requiring the land surface weather watch to be conducted 1.5 meters above and so many meters beside such a land surface feature. That is not a straw man argument. That is the literal fact about how the WMO’s member organizations report the land surface weather.

125. Gunnar
Posted Sep 21, 2007 at 9:20 AM | Permalink

>> You are interpreting the paragraph as circular logic because you refuse to accept the fact that the WMO treats a forest as an integral part of the land surface and not as an integral part of the atmosphere that is being measured

Ok, I see you’re just regurgitating WMO policy. I thought we were discussing what should be done, which is to measure the temperature of the spherical slice of atmosphere. I never realized when I was walking through the forest that I was actually walking through land. I should have been like a Horta and screamed “Pain” about the missing “children”.

>> I understand your reasoning perfectly well. Unfortunately, your reasoning is simply contrary to the practices and principles used by the WMO

And the logical argument is?

>> The WMO is not subtracting natural effects which exist in the air mass at least 1.5 meters above or a number of meters beside a forest

Ok, they are just avoiding them, like avoiding air in direct sunlight, and avoiding forests, and avoiding wind.

>> not the temperature of the air pockets inside the tundra

straw man again.

>> not include air temperature measurements at higher altitudes

That makes sense, since that would be outside the spherical slice.

>> Rather than being a straw man argument as you alleged

The straw man argument was comparing “measuring in a forest”, when the whole area IS forest, as being similar to measuring indoors or air pockets inside the soil or sea. Not measuring high altitudes in no way illustrates why measuring in a forest should be avoided.

>> because I am giving you a straightforward and accurate report of the practices, standards, and principles used by the WMO

Ok, I say the WMO is irrelevant, since I’m talking about what should be done, not what is done.

>> the Trantorian Meteorological Organization

Ok, at least you recognized the reference, so you can’t be all bad :). Why would the TMO want to avoid any normal effects, whether non urban or anthropogenic? If they did so, there wouldn’t be a suitable site anywhere on Trantor.

126. D. Patterson
Posted Sep 22, 2007 at 6:39 AM | Permalink

Re: #126

>> You are interpreting the paragraph as circular logic because you refuse to accept the fact that the WMO treats a forest as an integral part of the land surface and not as an integral part of the atmosphere that is being measured

Ok, I see youre just regurgitating WMO policy. I thought we were discussing what should be done, which is to measure the temperature of the spherical slice of atmosphere. I never realized when I was walking through the forest that I was actually walking through land. I should have been like a Horta and screamed Pain about the missing children.

>> I understand your reasoning perfectly well. Unfortunately, your reasoning is simply contrary to the practices and principles used by the WMO

And the logical argument is?

My logical argument is that you need to learn how to use land surface weather observations to produce valid weather forecasts. The logic is obvious when you try to use the data to produce a weather forecast. Until you learn and understand the basic principles, we could go on forever with my presenting explanations and you describing them as long-winded and dismissing them without understanding what you are denying. If you don’t want to believe these comments, try to calculate a valid prediction of the cross-isobar angle for a diverted ageostrophic flow using the weather observations you take from a weather observation site located inside a forest canopy. Until you can teach the meteorological community how to use weather observations taken at weather observation sites located within a forest canopy to consistently produce valid weather forecasts, your argument is without any logical merit whatsoever.

>> The WMO is not subtracting natural effects which exist in the air mass at least 1.5 meters above or a number of meters beside a forest

Ok, they are just avoiding them, like avoiding air in direct sunlight, and avoiding forests, and avoiding wind.

No, meteorologists know and understand why it is necessary to measure the weather conditions and effects in a particular atmospheric environment located outside the forest canopy and not inside the forest canopy. The problem here is that you refuse to learn what they already know.

>> not the temperature of the air pockets inside the tundra

straw man again.

You are just flat wrong. Observing the weather conditions inside the canopy of a forest is invalid for the needs of the meteorologist, for many of the same reasons that observing the weather conditions inside the air pocket in a piece of tundra, an ice cave, a mountain cave, or a piece of Kansas sod is invalid. They simply do not represent the environment and weather conditions which meteorologists need to measure for further use in their meteorological calculations. Misrepresenting such facts as a straw man argument is totally false and incorrect.

>> not include air temperature measurements at higher altitudes

That makes sense, since that would be outside the spherical slice.

No, the other air masses at the higher altitudes are different environments and are not spherical slices, especially since they are neither spherical, nor slices, nor globally continuous; just as the forests are not spherical slices either.

>> Rather than being a straw man argument as you alleged

The straw man argument was comparing measuring in a forest, when the whole area IS forest, as being similar to measuring indoors or air pockets inside the soil or sea. Not measuring high altitudes in no way illustrates why measuring in a forest should be avoided.

Until you understand why forests and other surface and subsurface environments cannot and do not represent the specific atmospheric environment meteorologists must observe and measure to accomplish certain meteorological tasks and objectives, you are never going to understand meteorology, climatology, or their needs and requirements for the proper and valid siting of observation stations. It really is not so difficult to understand, if you eliminate your erroneous preconceptions. Meteorologists are using the Land Surface Weather Watch observations to perform certain calculations in certain types of weather forecasting tasks. These particular weather observations must measure the weather conditions in just one certain atmospheric environment and none of the other adjacent environments. The forest environment IS NOT the atmospheric environment meteorologists need to observe and measure for the purpose of performing certain meteorological tasks which use the land surface weather observations. Misrepresenting this reality as a straw man argument is not valid and demonstrates your fundamental lack of understanding of the scientific principles that are being applied by the science of meteorology.

>> because I am giving you a straightforward and accurate report of the practices, standards, and principles used by the WMO

Ok, I say the WMO is irrelevant, since I’m talking about what should be done, not what is done.

Well, Gunnar, your claim that the WMO is irrelevant, and by implication that the scientific methods meteorologists must use to perform certain valid mathematical calculations are irrelevant too, makes your argument scientifically invalid and irrelevant. Using observations of weather conditions inside a forest canopy is no more a valid scientific method for certain meteorological tasks than it would be for you to use a rectal thermometer on squirrels in the forest to represent the pulmonary conditions of a flight of migratory birds overhead the same forest. They are two significantly different environments and applications of the data.

>> the Trantorian Meteorological Organization

Ok, at least you recognized the reference, so you can't be all bad :). Why would the TMO want to avoid any normal effects, whether non urban or anthropogenic? If they did so, there wouldn't be a suitable site anywhere on Trantor.

You failed to recognize one of my references in response. Maybe I can deal you back in.

The Trantorian Meteorological Organization does not need and is not going to use observations of irrelevant phenomena to perform mathematical calculations which have no scientifically valid application to this particular purpose and objective of TMO and Trantor. Remember, I said that ecologists can and do conduct weather observations inside a forest canopy and apply the weather conditions they observed to research problems relating to the environment of the forest. They often do so using a tower within the forest canopy, because the environment inside the forest canopy is often significantly different at various locations and altitudes within the forest. Likewise, the hypothetical authorities of Trantor would be expected to select the appropriate scientific method to apply to the appropriate research need and problem. They certainly would not conduct observations of an internal HVAC duct and use the data to forecast the Mean-Time-To-Replacement for the external Cosmic-Ray shield of a Lunar laser blaster station blister. The Mule would not be impressed by such foolishness.

127. Gunnar
Posted Sep 22, 2007 at 7:35 AM | Permalink

>> to produce valid weather forecasts. The logic is obvious when you try to use the data to produce a weather forecast.

I can’t believe you missed the whole point that we’re not trying to take measurements in order to predict the weather. We’re trying to measure the thermodynamic state of the earth for the purposes of CLIMATE studies.

The bottom line is that we operate on completely different levels of abstraction. Let’s just agree to disagree.

128. D. Patterson
Posted Sep 22, 2007 at 7:52 AM | Permalink

Re: #124

Sam, I understood what you meant. My comments were always intended to answer the question that was asked about what we are supposed to measure. Because the thread concerned the weather observation site/s in Central Park and elsewhere in New York City and the various air temperature datasets obtained from them as part of the Land Surface Weather Watch, I was confining my comments to what the Land Surface Weather Watch is and should be intended to measure. You must always recognize there are many different types of weather watches and other observation programs being conducted besides the Historical Climate Network (HCN) and the Climate Reference Network (CRN). Some of these observation programs and watches have nothing whatsoever to do with the Land Surface Weather Watch and the land surface observation requirements and applications. Those observation stations which do participate in the Land Surface Weather Watch must satisfy the data requirements of the meteorological community, because that is the primary purpose and funding for the Land Surface Weather Watch. Any organization that wants to conduct a forestry weather watch is free to obtain the required funding and develop the appropriate applications for the observational data. Meanwhile, the Land Surface Weather Watch stations are obligated to supply the necessary weather observations of the ABL and so forth to satisfy the scientific needs of the meteorological community and organizations who organized and funded the networks for such purposes.

I dont know what either of them really accomplishes, for other than practical purposes, but Id really rather not get into more than one subject at a time.

[….]

As I said, I was ignoring the meaning of the information, and what we do with the information. We know the non-CRN stations were made to tell people "This is the temperature outside," not to track climate to high degrees of accuracy. So all we get is estimates, which is all that we'll get, because it's all we can do.

“The temperature outside” is not the primary focus and purpose of the CRN or any other major observational weather network. Unfortunately, the controversy over the Global Warming alarmism has caused many people to lose sight of the fact, or never become aware of the fact, that observations of relative humidity, air pressure, winds, precipitation, and many other weather phenomena must also be accurately observed and reported in tandem with the air temperatures in order for the air temperatures to retain their usefulness to meteorology and climatology. This is another vital reason why land surface weather observation sites can only be stationed within one particular environment; otherwise they would not be capable of fulfilling the primary purposes for which they were created. The weather observations these land surface stations provide to meteorologists and climatologists are indispensable to weather forecasters and all of the human activities which find that weather forecasting is a critical service for their needs.

Any climate reference network is by definition, at least in part, a type of land surface and/or marine surface weather observation network. Accordingly, such a climate reference network must satisfy the same requirements for the same principal weather condition data used by other surface weather networks and weather forecasters. It remains to be seen whether or not there is an organization ready, willing, and able to make the tremendous improvements in observational accuracy and consistency required to meet or exceed all of the weather condition parameters, and not just the parameters for air temperatures, when attempting to account for the heat content of the Earth’s total climate systems to an accuracy of no less than one-tenth of a degree Celsius.

129. D. Patterson
Posted Sep 22, 2007 at 8:25 AM | Permalink

Re: #128

I’m sorry if I’m disappointing you, Gunnar; but it is yourself “who missed the whole point.” 1. The Central Park New York City and other USHCN observation stations and weather condition data are a part of the Land Surface Weather Watch and must satisfy the data requirements of the meteorological and weather forecasting organizations responsible for them. 2. Any new climate reference networks which hypothetically might be established for the sole purpose of measuring “the thermodynamic state of the earth for the purposes of CLIMATE studies” will be incapable of using weather condition data from sites not in conformance with standard land surface observation requirements in most mathematical calculations needed for the General Circulation Models and other key meteorological models the climatological models rely upon. This is not to say the weather condition data originating from sites inside a forest canopy cannot be used for anything by the hypothetical climate reference network. However, sites outside the canopy will remain necessary, indispensable to the intended climatological research, and far more numerous in scope and distribution. Sites within a forest canopy will necessarily have to be specialized in purpose and extremely limited in usefulness to the General Circulation Models and most other numerical models required by the climatologists. These limitations are due to the inherent nature of the planet’s structure, thermal distribution properties, and their critical interfaces to the various dynamic atmospheric environments.

130. Meteorological Technician
Posted Sep 29, 2007 at 5:40 PM | Permalink

Gunnar says:

September 22nd, 2007 at 7:35 am
>> to produce valid weather forecasts. The logic is obvious when you try to use the data to produce a weather forecast.

I cant believe you missed the whole point that were not trying to take measurements in order to predict the weather. Were trying to measure the
thermodynamic state of the earth for the purposes of CLIMATE studies.
The bottom line is that we operate on completely different levels of abstraction. Lets just agree to disagree.

How about we agree that you don’t have a clue and you stop embarrassing yourself?

131. Posted Nov 25, 2009 at 1:59 PM | Permalink

This issue actually suggests that the UHI effect has been quite over-estimated.

Ironic that the place observation bias kicks in is in the exact correction they must find painful to apply.

• bender
Posted Nov 25, 2009 at 11:16 PM | Permalink

Re: NikFromNYC (#132),
No it doesn’t. Without knowing what fraction of stations are like NYC CP you have no idea how many are over-corrected vs. under-corrected. The analysis of Ross McKitrick and Pat Michaels suggests that land-use/UHI effects are quite UNDER-ESTIMATED.

132. aurbo
Posted Nov 26, 2009 at 12:11 AM | Permalink

First, thanks to #132 above for resurrecting this CA post which I contributed to almost 2.5 years ago.

Also, one should note that through this period, the CP raw data tracked very well with other NYC sites…LGA, JFK, EWR, TEB and the City Office in Lower Manhattan. These relationships were solid enough so that when the CP site became unmanned and observations were remoted to the new Rockefeller Center location of the NYC WX Office, we could spot a temperature sensor drift within a day or two. At this point we would call the Rock Center office and within another day or two they would send someone out to the CP site to recalibrate the sensor system. We could tell exactly when this occurred because, depending on the degree of drift, there would be an observable discontinuity in the hourly METAR string. The problem was sharply reduced after the newer MMTS sensors were installed.

The point is that there can be no logical meteorological reason for altering the record by reducing NYC temps in the city by 6°F to 7°F in the 1960-1990 period, and then nearly linearly reducing the negative adjustment from 1990 to 2005 at about 0.17°C per year! That’s about the equivalent of moving the NYC thermometer southward to Richmond, VA.
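A back-of-the-envelope check on the roll-off rate implied by the numbers discussed in this thread (an adjustment shrinking from roughly 6.5°F to roughly 2°F between 1990 and 2005) can be sketched as follows. The two endpoint values are assumptions read off the figures under discussion, not from any official NCDC documentation.

```python
# Implied per-year rate at which the negative NYC adjustment was rolled off.
# Endpoint magnitudes are assumptions taken from the figures in this thread.

def f_to_c(delta_f: float) -> float:
    """Convert a temperature *difference* from Fahrenheit to Celsius."""
    return delta_f * 5.0 / 9.0

adj_1990_f = 6.5   # assumed negative adjustment (°F) around 1990
adj_2005_f = 2.0   # assumed negative adjustment (°F) around 2005
years = 2005 - 1990

rate_c_per_year = f_to_c(adj_1990_f - adj_2005_f) / years
print(f"Implied roll-off rate: {rate_c_per_year:.2f} °C per year")
# -> Implied roll-off rate: 0.17 °C per year
```

This is why a quoted rate of "1.7°C per year" cannot be literal: it would amount to removing roughly 25°C over the period, while the figures show a total change of about 2.5°C.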

I, and a few others, brought this obvious maladjustment to light some 2.5 years ago and have yet to receive an acknowledgment, much less a satisfactory answer.

Recently, NCDC quietly scrapped the USHCNv1 UHI scheme and replaced it with a new measure of “homogeneity” which treats inflection points in individual station data (AKA change-point analysis) as biases needing adjustment, and then combines the HCN stations with one of perhaps several nearby COOP stations to create a “pairwise” SHAP (Station History Adjustment Program) algorithm. A discussion of this program can be found in the 2009 July BAMS 90, 993-1007. Unfortunately, the SHAP algorithm and coding is not provided there or in the listed referenced papers.
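The flavor of the change-point idea described above can be sketched with a minimal single-breakpoint estimator on a target-minus-neighbor difference series. This is NOT the actual NCDC pairwise/SHAP algorithm (as noted, that code is not published in the BAMS paper); it is only an illustrative estimator for a step change in the mean.

```python
# Illustrative single change-point detector: split a target-minus-neighbor
# difference series at the index that minimizes the total within-segment
# sum of squared deviations. A hypothetical stand-in for the pairwise
# homogenization idea, not the NCDC implementation.

def find_step_change(diff_series):
    """Return (index, step) for the best single mean-shift breakpoint."""
    n = len(diff_series)
    best = (None, 0.0, float("inf"))
    for k in range(1, n):
        left, right = diff_series[:k], diff_series[k:]
        m_l = sum(left) / len(left)
        m_r = sum(right) / len(right)
        sse = (sum((x - m_l) ** 2 for x in left)
               + sum((x - m_r) ** 2 for x in right))
        if sse < best[2]:
            best = (k, m_r - m_l, sse)
    return best[0], best[1]

# Synthetic example: a 1.0-degree step introduced at index 10.
series = [0.0] * 10 + [1.0] * 10
idx, step = find_step_change(series)
print(idx, round(step, 2))  # -> 10 1.0
```

The breakpoint found this way is then interpreted as a station inhomogeneity (move, instrument change) rather than climate, which is exactly why the choice of neighbor series matters so much.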

133. Ian
Posted Dec 3, 2009 at 8:31 AM | Permalink

All the recent comments about trees and whether a UHI correction is appropriate miss the point. The stunning thing to me is the use of a shockingly wrong UHI correction to manufacture a hockeystick warming trend where none was initially present. It is particularly egregious since a UHI adjustment should be expected to reduce the size of any observed warming trend. I therefore find it extremely hard to believe that such an ‘error’ would go undetected, which makes me wonder whether it was in fact not an error at all. This kind of scientific misconduct is so outrageous that all involved should be ceremonially stripped of their doctorates, locked in a small cell, and forced to mark first year statistics exam papers for the rest of their natural lives.

134. Charlie A
Posted Jan 14, 2010 at 4:33 PM | Permalink

I have simple questions:

Is there anyone at NOAA that we can ask about the logic and justification for these specific corrections?

Has anyone at NOAA said whether or not they consider these corrections to be proper?

135. Hector M.
Posted May 14, 2010 at 3:35 PM | Permalink

Fascinanter and fascinanter.

What I’d like to remark is that using local resident population (e.g. residents of Manhattan) as a marker of the intensity of urbanization in the area may be misleading. Heat generation in Manhattan is partially created by non-residents that drive cars or sit in offices within the area (but live in the metro area or beyond). Thus the UHI effect should be measured in a different fashion. One proposal is using some station covering a similar historical period, located in the same general climate, but in a more rural area. In 1900 temps in Manhattan were probably not much different than temps in areas of N. Jersey or Conn. that are now leafy suburbs, but the two may have unaccountably diverged over time between, say, 1900 and 2010. The difference may be a measure of the UHI.
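The urban-minus-rural divergence measure proposed above can be sketched in a few lines: difference two co-regional annual series and fit a linear trend to the difference. The numbers below are invented for illustration; they are not real Manhattan or New Jersey data.

```python
# Sketch of UHI measured as the trend of an urban-minus-rural difference
# series. All data here are hypothetical placeholders.

def linear_trend(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1900, 1910))
urban = [12.0 + 0.10 * i for i in range(10)]   # hypothetical urban series
rural = [12.0 + 0.02 * i for i in range(10)]   # hypothetical rural series

divergence = [u - r for u, r in zip(urban, rural)]
uhi_rate = linear_trend(years, divergence)
print(round(uhi_rate, 3))  # -> 0.08 (degrees of divergence per year)
```

Differencing first removes the regional climate signal common to both stations, so whatever trend survives in the difference is a candidate measure of urbanization (plus any station-specific inhomogeneities, which is the usual caveat).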

Example from the city I live in now. The main station of Buenos Aires city is also in a kind of midtown park (the Agricultural School of BA University), surrounded by asphalt, cement, many-storied buildings and many factories. Nowadays this station routinely gives 3°C more than some stations located in the outskirts of the metro area (including the main airport), although all are within 30-40 miles of each other and within the same general climatic and geographical zone. I am now trying to reconstruct the series of the outskirts from records at various locations that start at different years and have some discontinuities, but my guess is I’ll find a divergence between the Agric School station and the more “rural” ones around this huge metro area (pop > 13 million). Has anybody done the same for NYC?
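Splicing partial records that start in different years, as the reconstruction described above requires, is commonly done by shifting each fragment so that its mean over the years it shares with the reference series matches. A minimal sketch of that idea, with invented numbers, might look like this:

```python
# Minimal sketch of splicing overlapping partial records into one series
# by matching means over shared years. Data are invented for illustration;
# this is one simple splicing convention, not the only one.

def splice(base, fragment):
    """Shift `fragment` (dict year -> temp) so its mean over the years it
    shares with `base` matches base's mean there, then merge the two."""
    overlap = sorted(set(base) & set(fragment))
    if not overlap:
        raise ValueError("no overlapping years to align on")
    offset = (sum(base[y] for y in overlap) / len(overlap)
              - sum(fragment[y] for y in overlap) / len(overlap))
    merged = dict(base)
    for y, t in fragment.items():
        merged.setdefault(y, t + offset)  # base values win on shared years
    return merged

base = {1900: 10.0, 1901: 10.1, 1902: 10.2}
frag = {1902: 11.2, 1903: 11.3, 1904: 11.1}   # reads 1.0 warm vs. base
combined = splice(base, frag)
print(round(combined[1904], 1))  # -> 10.1
```

The obvious limitation is that each splice inherits the noise of the overlap period, so short overlaps (or overlaps spanning a station discontinuity) can bias the reconstructed series; in practice one would align on anomalies and use several overlapping years.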