The USHCN Station History Adjustment procedure is credited to Tom Karl in Karl and Williams 1987. Karl has been very prominent in the IPCC movement. That’s not to say that the potential adjustment is wrong or unjustified, but that the adjustment needs to be scrutinized with the same care as one would scrutinize an adjustment by, say, Briffa, Jones or Mann.
Towards the end of Karl and Williams 1987, they say:
“Stations with nonclimatic progressive changes due to urbanization may lead to inappropriate adjustments at nearby stations… This latter problem is mitigated to some extent in the HCN since 70% of the stations have populations less than 10,000 in the 1980 census and 90% have populations less than 50,000.”
Here’s the full excerpt, together with a discussion of the IPCC AR4 consideration of the topic.
Obviously, as we look at more and more HCN stations, it becomes clear that, yes, many of these sites are in small towns, but, no, that doesn’t mean they are unaffected by nonclimatic factors. Indeed, many of the microsite problems, including grotesque non-compliance with WMO quality standards, would not occur at an urban airport (e.g. you would probably not have a garbage incinerator discharging onto a weather station adjacent to an airport runway). However, it also becomes increasingly clear that Jones, Hansen (and Karl) appear to have exercised no due diligence whatever in ensuring that the USHCN sites met the assumptions of the Karl adjustment.
At this point, one would have to say that the USHCN network contains a number of stations that do not meet the criteria necessary for the Karl adjustment procedure. The proportion in the total network is unknown, but based on the spot checks so far it would appear to be well over 50%. If 50% of the stations do not meet the standards necessary to start the Karl adjustment, then it becomes highly relevant to post-audit the entire system to assess the problems caused by recklessly applying an adjustment without any effort to ascertain whether it could be safely applied.
IPCC AR4 Consideration
IPCC AR4 has some comments on the Karl adjustment “off balance sheet”. They have several pages of text on issues that we’ve talked about here: bucket adjustments, urbanization adjustments. However, this discussion of problems is not included in the main report but exported to the Supplementary Material here. It is the only supplementary material. An appendix on smoothing, much of it taken from Mann’s article, is in the main document; go figure. BTW, there are some interesting comments in this appendix about Brohan et al 2006 that will make UC’s hair curl.
The second paragraph in the excerpt below describes the Karl adjustment method. They state that the accumulated station history effects are likely to cancel out, which implies that one could reasonably use USHCN data (the TOBS-adjusted version) without Karl’s station history adjustments (which seem highly speculative to me).
Note that they mention the microsite problem that we’ve been discussing through their citation of Davey and Pielke 2005, which raised this issue in respect of eastern Colorado. However, they clearly do not grasp the nettle of systemic microsite problems potentially affecting a majority of sites in their network.
Long-term temperature data from individual climate stations almost always suffer from inhomogeneities, owing to non-climatic factors. These include sudden changes in station location, instruments, thermometer housing, observing time, or algorithms to calculate daily means; and gradual changes arising from instrumental drifts or from changes in the environment due to urban development or land use. Most abrupt changes tend to produce random effects on regional and global trends, and instrument drifts are corrected by routine thermometer calibration. However, changes in observation time (Vose et al., 2004) and urban development are likely to produce widespread systematic biases; for example, relocation may be to a cooler site out of town (Böhm et al., 2001). Urbanisation usually produces warming, although examples exist of cooling in arid areas where irrigation effects dominate.
When dates for discontinuities are known, a widely used approach is to compare the data for a target station with neighbouring sites, and the change in the temperature data due to the non-climatic change can be calculated and applied to the pre-move data to account for the change, if the discontinuity is statistically significant. However, often the change is not documented, and its date must be determined by statistical tests. The procedure moves through the time series checking the data before and after each value in the time series (Easterling and Peterson, 1995; Vincent, 1998; Menne and Williams, 2005): this works for monthly or longer means, but not daily values owing to greater noise at weather timescales. An extensive review is given by Aguilar et al. (2003).
The impact of random discontinuities on area-averaged values typically becomes smaller as the area or region becomes larger, and is negligible on hemispheric scales (Easterling et al., 1996). … Urbanisation impacts on global and hemispheric temperature trends (Karl et al., 1988; Jones et al., 1990; Easterling et al., 1997; Peterson, 2003; Parker, 2004, 2006) have been found to be small. Furthermore, once the landscape around a station becomes urbanized, long-term trends for that station are consistent with nearby rural stations (Böhm, 1998; Easterling et al., 2005, Peterson and Owen, 2005). However, individual stations may suffer marked biases and require treatment on a case-by-case basis (e.g., Davey and Pielke, 2005); the influence of urban development and other heterogeneities on temperature depends on local geography and climate so that adjustment algorithms developed for one region may not be applicable in other parts of the world (Hansen et al., 2001; Peterson, 2003).
The patterns are derived using data for recent, well-sampled years, and the technique relies on the assumption that the same patterns occurred throughout the record.
Nevertheless, recent studies have estimated all the known errors and biases to develop error bars (Brohan et al., 2006).
For example, for SSTs, the transition from taking temperatures from water samples from uninsulated or partially-insulated buckets to engine intakes near or during World War II is adjusted for, even though details are not certain (Rayner et al., 2006).
systematic adjustments are necessary
The data have been adjusted by Rayner
Let’s see if I have this right: assume a cluster of 26 stations, A through Z. Station A has a “discontinuity” in relation to stations B through Z, so A is adjusted to reflect the “reality” defined by stations B through Z. Station B, when looked at in isolation, has a discontinuity with station A and stations C through Z. Station B is adjusted accordingly. And so on, looping back around until all adjustments become minimal. If that is the process, then if some significant subset of the stations is subject to a “smooth” UHI effect, won’t this procedure simply add this UHI effect to all the other sites? In short, how does this procedure prevent the systematic biasing of any smoothly operating factor, i.e., one that does not show visible discontinuities?
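To make the concern in #2 concrete, here is a toy sketch (my own illustration, not Karl’s actual algorithm; all series, magnitudes and function names are invented): a neighbour-difference test readily catches an abrupt station move, but a smooth, gradual urbanization drift produces no detectable jump and so survives a discontinuity-based adjustment untouched.

```python
import random

def series(n, drift_per_yr=0.0, step_at=None, step=0.0, seed=0):
    """Toy annual series: flat climate + noise + optional smooth UHI drift
    + optional abrupt step (e.g. a station move). Values are illustrative."""
    rng = random.Random(seed)
    out = []
    for t in range(n):
        v = 10.0 + rng.gauss(0, 0.05) + drift_per_yr * t
        if step_at is not None and t >= step_at:
            v += step
        out.append(v)
    return out

def max_jump_vs_neighbour(target, ref):
    """Largest year-to-year jump in the target-minus-reference series;
    a neighbour-difference test flags the station when this is large."""
    d = [t - r for t, r in zip(target, ref)]
    return max(abs(d[i + 1] - d[i]) for i in range(len(d) - 1))

ref = series(50, seed=2)
moved = series(50, step_at=25, step=0.8, seed=1)   # genuine 0.8 C station move
drifting = series(50, drift_per_yr=0.02, seed=3)   # smooth 1.0 C UHI drift over 50 yr

# The abrupt move stands out in the difference series; the smooth drift
# never produces a jump larger than the noise, so it is never flagged.
print(max_jump_vs_neighbour(moved, ref))
print(max_jump_vs_neighbour(drifting, ref))
```

Note the drifting station accumulates a full degree of warming over the period, yet passes the test at every single year.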
RE: #2 – I hope you are wrong, but suspect you may be right.
re 2:
Indeed, that’s why you need an independent metric for UHI, which is population growth. It worked beautifully for me in the Uccle – De Bilt comparison.
http://home.casema.nl/errenwijlens/co2/homogen.htm
Fellow Citizens,
Considering the fact that we use meteorological station data to make up the temperature reconstructions of planet earth (i.e. GISS, CRU, etc.), I thought it would only be fair to do a little comparison between what these stations measure and what they don’t.
There are differing numbers of stations used in differing temperature reconstructions, but take the roughly 4000 stations currently used in the GISS data set, which is referred to frequently in media stories and movies. Let’s be frank about it and say that these meteorological stations only sample the micro-climate surrounding the sensor. Let us assume an area around the sensor of, on average, about 25 square metres (5 m × 5 m). Too small? OK, how about 100 square metres, but even that is quite generous, since the sensor bulb itself really only measures the tiny area/volume immediately around it.
So then, with 100 m² around each sensor as the representative sample of the micro-climate, multiplied by 4000 sensors, roughly 400,000 square metres, i.e. 0.4 km², of the earth’s surface is actually represented in the meteorological station data used in temperature reconstructions. Compare this to the actual area of the earth’s surface, 510,065,600 km², and you see the folly of using this data set to measure a global mean temperature: the area covered by station data is about 0.00000008% of the earth’s total surface.
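For what it’s worth, the arithmetic can be checked directly, using the comment’s own assumptions (4000 stations, 100 m² sampled per sensor); note that 400,000 m² is 0.4 km², not 400 km²:

```python
# Back-of-envelope check of the sampled-area fraction. All inputs are the
# comment's own assumptions, not measured quantities.
stations = 4000
area_per_station_m2 = 100.0
earth_km2 = 510_065_600

sampled_m2 = stations * area_per_station_m2   # 400,000 m^2
sampled_km2 = sampled_m2 / 1_000_000          # 0.4 km^2, not 400 km^2
fraction_pct = 100 * sampled_km2 / earth_km2  # roughly 8e-8 percent
print(sampled_km2, fraction_pct)
```

The corrected fraction is three orders of magnitude smaller than the 0.00008% quoted, which if anything strengthens the point being made.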
“Guilt by association” may work sometimes in law, but for numerical analysis of independently conducted measurements it is absurd.
They are thinking of temperatures like space-time and gravity: one particularly heavy mass at one point on a square grid sinks deeper, distorting space, which causes the other objects on the grid to roll towards it. That’s fine if you assume all the objects are connected by a linear force, such as gravity. But when dealing with a chaotic system such as the earth’s atmosphere, this kind of process can quickly fall apart in its accuracy. Last time I looked, the atmosphere wasn’t linear at all, which explains why I missed my high-temperature forecast for yesterday on the radio by about 5 degrees.
And of course, SteveM is exactly right in pointing out that no consideration or due diligence was given to the micro-site effects, and they appear to have significant magnitude.
#5. Tom, I’m not particularly worried about the number of stations. If there were 1000 good stations, I think that would be more than enough. The problem is that many, perhaps most, of the stations are biased and there’s been no effort to QC the stations.
#4, Hans, maybe it worked in your case, but these US microsite effects are popping up in rural sites where population change won’t be a good index. Even if there are 4000 sites, the microsites should have been catalogued a long time ago. It sounds as though Jones, Hansen and Karl have made no effort to ensure QC at these stations either.
Citing McIntyre (“many of these sites are in small towns, but, no, that doesn’t mean that they are unaffected by nonclimatic factors”), I can say that this is absolutely true, and it is common not just in the USA but also in Italy and across much of Europe.
I can give simple but very clear evidence of it.
I live near Venice, in the city of Padua; over the last 20 years we have had a stable population of about 210,000-220,000 inhabitants.
We are surrounded by a belt of small towns. 20 years ago we considered such towns to be “countryside”, and indeed just outside the city’s urban area we were surrounded by fields and ditches before reaching those small towns.
Today the situation is quite different. Even with only a small growth in population, the small towns have expanded greatly, with both residential and industrial areas. “Countryside” as we understood it around my city is no longer common; it is now a single urban area of 400,000-500,000 inhabitants, yet many of its towns have fewer than 20,000 or even 15,000 people.
And this situation is not merely local: the triangle from Padua to Venice to Treviso is effectively a single city of 1.5 million people, with a very high density of industrialisation, and along the road from Padua to Treviso (about 50 km / 30 mi) it is very hard today to see a field rather than homes, shops, malls and factories. But we still have the same old administrative organisation, so every station would count as non-urban under such a definition.
And this is the situation not just in my area: almost all of the area from Venice to Milan and Turin is considered (rightly) a new megalopolis of 10-15 million people along 400 km / 250 mi.
And it is not just Northern Italy: take the Benelux, the Rhineland, Berlin or Paris, or the corridor from London to Birmingham to Manchester to Liverpool, etc.
Not to mention the main seaside and mountain resorts, where the resident population is far lower than the real number of inhabitants in summer or winter, so urbanisation is greater than the resident population would suggest.
Thus, in my opinion, it is quite nonsensical in the modern era to classify a station as urban or non-urban simply on the basis of the number of people who live in the municipality where the station is located.
RE: #9 – all the flat land around where you live especially along the Venice – Milan alignment, is easy to develop. I am not at all surprised with the figures you gave. Hopefully there is some form of agricultural preservation in force – one of the things that make the food in that region special is the combination of fresh and varied produce, seafood from the Adriatic and of course, the excellent wines from vines growing in the foothills to the north of there … 🙂
re #8
I totally agree, Steve, but in reality we are simply measuring the micro-climate immediately surrounding the sensor. This data is then extrapolated into grid cells representing a much larger area than the sensor is actually measuring. My point is that within any given geographical area, differing sensor placements may result in quite different-looking data sets. Hypothetically speaking, do we place the sensor on this 150 ft hill, or in this relative bowl on the airport tarmac? Or, in an open “rural area”, do we measure in a forested patch of nearby land, or in the open farm-like space across the road?
Hypothetically, if a given grid cell’s data is made up of, say, 5 sensors, do these sensors actually represent the entire grid’s geology or flora? How does one correct the grid for potentially differing land features? Say a grid is 50% forest, 30% grassland and 10% urban, and only one sensor represents this grid, and the sensor is placed in the urban area. How then does one represent the rest of the grid with the urban data, when the bulk of the grid may be grassland and forest? Just pondering here…
If you’re just measuring “anomalies”, i.e. changes, I don’t think it matters too much, so long as the landscape doesn’t change. I’m more concerned about incinerator exhaust and trailer park problems.
Right, Steve, and this gets to homogeneous data sets, which most of the station data is not. Where I live in Minneapolis, the sensor has been moved several times since 1890, when the data set begins. It was downtown at first; then 30 years later it moved to the airport, which was very rural then. And at the airport itself it has been moved several times since the 1930s. How do they adjust the data set for this?
I mean, about 8 years ago at MSP they moved the sensor to a relative low point on the tarmac. In all fairness, this seemed to add a cool bias to the data on clear, calm nights. An example: I am a meteorologist and was working a shift one night when they had just moved this sensor. I witnessed a radical temperature change in mid-summer one night with the airmass static. The temperature was hovering around 62F, with a 60F dewpoint, close to sunrise. The next hour’s ob came in at 52F! Just two degrees from a record low, if I recall correctly. This was due to some cold air draining into the new sensor area right before sunrise. The situation was brought to the NWS’s attention and the sensor was moved back. Any adjustments to this data? I dunno, but I doubt it. The sensor was moved after a few months of data were collected. Homogeneous? I don’t think so.
Re 8:
I agree that substandard stations are introducing a warming bias. Therefore you should only use government-monitored stations; e.g., I have not found any reliable data for East Africa. On the other hand, you only need three well-kept stations per 1000 km to get a good correlation in annual average temperature, despite all the handwaving in this thread about microclimate.
#10: of course, food is still very good 🙂 (but the best is in Central and Southern Italy) and we still have many agricultural areas, just outside our cities: but they have become less large, or concentrated in certain areas (e.g. along river Po) or even we got in small towns a “home+small factory+field system”, where it is impressive how people save some agriculture while building a new home and, next to, their own small factory. And, despite good food and excellent wine, we are maybe the most air-polluted area of Europe (this is due also to our peculiar geographical configuration, a kind of large valley, closed by mountains on 3 sides, and opened just to a very small sea).
Hans, thanks for your interesting posting, I learned much from trying to replicate it.
What is the source of your data? According to GISS, the R^2 between Vienna and Zurich is only 0.57, and the R^2 of Vienna and Uccle is 0.44. In general, the closer the cities, the better the correlation. The best correlation (R^2 = 0.94) is between De Bilt and Uccle, which are only 157 km apart, while De Bilt and Vienna (908 km apart) give only R^2 = 0.46.
Also, I suspect that the units on the left of the graph are incorrect, since Hohenpeissenberg has an average temperature about 3° colder than shown in your graph. My guess is that we are actually looking at anomalies, and the graph is mislabeled.
Finally, what is “Basel-Zurich”? As far as I know, Basel and Zurich are separate cities … but what do I know?
However, I found out something very interesting from doing the graphing, which is how fast the correlation drops off with distance. Here’s the relationship:
Of course, I’d like to see this with much more data, but this implies that correlation generally falls off linearly with distance, and hits zero somewhere around 1600 km … and that if we want an R^2 of, say, 0.50, the stations need to be spaced about 900 km apart.
This in turn implies that for an R^2 of 0.5, we’d need 440 stations to cover the globe (provided they were equally spaced). However, R^2 of 0.5 is not all that good … and if we want an R^2 of 0.75, we’d need 2,000 stations. I also note that the R^2 of the stations ~ 600 km apart ranges from 0.55 to 0.80 … and surely with more stations the spread would be wider.
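The spacing argument can be sketched numerically. This is my own back-of-envelope, assuming the linear falloff described above (R^2 reaching zero near 1600 km) and an idealized square grid over the whole globe; the counts it produces differ from the 440/2,000 figures in the comment, which presumably assumed a different coverage geometry, so treat everything here as illustrative.

```python
import math

# Assumptions read off the comment: R^2 falls off linearly with distance
# and crosses zero near 1600 km. Grid geometry is my own simplification.
EARTH_KM2 = 510_065_600
ZERO_CROSSING_KM = 1600.0

def max_spacing_km(target_r2):
    """Largest station spacing that keeps pairwise R^2 above the target,
    under the linear-falloff assumption R^2 = 1 - d / ZERO_CROSSING_KM."""
    return ZERO_CROSSING_KM * (1.0 - target_r2)

def stations_needed(target_r2):
    """Stations on an idealized square grid with that spacing
    (no land/ocean split, no clustering)."""
    s = max_spacing_km(target_r2)
    return math.ceil(EARTH_KM2 / (s * s))

print(max_spacing_km(0.5), stations_needed(0.5))    # ~800 km spacing
print(max_spacing_km(0.75), stations_needed(0.75))  # much tighter grid
```

Whatever the exact geometry, the qualitative conclusion is the same: the station count needed grows quickly as the target R^2 rises, because spacing shrinks linearly while coverage scales with its square.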
Finally, consider Susanville and Quincy, as seen in Steve M’s post here. Despite being only 56 km apart, the R^2 between the two is only 0.22 … go figure. Calling the discussion of microclimates “handwaving” is a bit premature at this point …
w.
Hans,
You state that you need just three, well kept stations per 1000 km sq.
The key phrase here is well kept.
A well kept station will not have any of the micro-climate issues that you dismiss as handwaving.
The problem is that we’ve been discovering that many of the stations are not well kept.
re 16:
source is here:
http://home.casema.nl/errenwijlens/co2/europe.htm
I don’t use raw giss, Uccle, De Bilt and Basel/Zurich are uhi corrected.
There is a readme in the excel file.
#17. We’re distinguishing here between “microsite” and “microclimate” issues. I don’t think that Hans disagrees with any of the “microsite” issues, but he isn’t worried about differences between “microclimates”.
I’m not particularly fussed about microclimate problems and I would prefer that people stay focussed on microsite issues. If some progress is made on this, leave microclimate problems for another day.
#16
Let’s see, that number sounds familiar… almost:
Shen et al. 1994, Spectral Approach to Optimal Estimation of the Global Average Temperature
re 19:
Correct. IMHO microclimate variations average out over a year. A well-kept sensor is just an absolute minimum requirement. It’s a justified reason for rejecting the data if the sensor is not well kept.
I still need to subtract the local long-term mean from all stations to get a more honest comparison; for the moment I have applied some ad hoc shifts to get the curves overlapping.
However the data is available, so anybody who wants, feel free.
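The baseline step Hans describes, replacing ad hoc vertical shifts with anomalies relative to each station’s own base-period mean, can be sketched like this (data values and the 1901-1930 base period are illustrative; the function name is my own):

```python
def anomalies(years, temps, base=(1901, 1930)):
    """Express a station series as anomalies from its own base-period mean,
    so stations can be compared without ad hoc vertical shifts."""
    base_vals = [t for y, t in zip(years, temps) if base[0] <= y <= base[1]]
    mean = sum(base_vals) / len(base_vals)
    return [t - mean for t in temps]

# Invented demo values; the base-period mean here is 9.5,
# so the anomalies centre on zero.
years = [1901, 1902, 1903, 1904, 1905, 1906]
temps = [9.0, 9.4, 9.2, 9.8, 9.6, 10.0]
print([round(a, 2) for a in anomalies(years, temps)])
```

Using each station’s own base period removes the absolute-level differences (altitude, latitude) that otherwise have to be shifted away by eye.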
Re #16
There is an R package for computing spatial non-parametric covariance functions that will produce these curves from a data matrix and will even construct the 95% confidence intervals using a bootstrapping method. It’s very fast and easy to use. For an example with weather data, check out Fig. 2 in this paper: http://www.sandyliebhold.com/pubs/peltonen_et_al_2002.pdf
To get our terms right, does “microsite” refer to local effects, e.g., poor maintenance, non-standard box, whitewash vs oil vs latex, vegetation growth/removal, tarmac, exhaust, reflecting bodies, and movement from one microclimate to another, while microclimate refers to stable anomalies associated with a particular site location, e.g., altitude, humidity, air turbulence, etc.?
Re #23 Yes, that’s how I was seeing this.
Just remember Bernie’s phrase “stable anomalies” with respect to microclimate. I strongly suspect poor siting can result in unstable microclimate that doesn’t average out nicely at all. Altitude is usually stable, but air turbulence can be unstable, humidity as well. We have entire reservoirs that are wet for years, then allowed to dry… sometimes permanently.
I was in urban planning in graduate school. We made a field trip to Columbia, Maryland, to see what was considered the desirable future. Columbia was a “new town”. There had been farm land which was bought and converted very rapidly into a new, rather low-density city. At the time (1970s) it was mentioned in passing that the local temperature had risen. Another Washington DC area “new town” was Reston, Virginia.
Records should exist somewhere. It should be possible to derive relatively unambiguous quantitative estimates of urban heating effects from empirical records and these could be used to calibrate a model.
Are there UHI models? If not, why not? Compared to global climate models they should be trivial. As I understand the posts on this subject, UHI adjustments are now made ad hoc by interested parties (interested here meaning one who has a financial or career interest). One consequence of this practice seems to be that the high temperatures of the nineteen-thirties (Dust Bowl) have been adjusted out of existence.
I think the posting of digital photos of the measuring sites by individuals is a good start. I wonder if something like Google Street View might be possible?
RE: #16 – to be fair, there is a significant difference between Quincy and Susanville. Quincy is in an intermontane valley between northern spurs of the Sierra Nevada (where they start to intermingle with the Cascades). The range to its west is lower than the one to its east. There is ample overall moisture, with orographics at work: definitely a classic “western slope” type of place. Susanville sits clearly east of the final eastern escarpment of the Sierra spurs, technically within the Basin and Range physiographic province: rain shadow, Pacific air masses cut off, exposure to true continental air masses. Susanville has more in common with Pocatello than it does with Quincy.
Is there a definitive ‘well kept site’ example?
Size of fenced area, composition of ground (gravel? grass?), height?
Which could lead to some method of ‘grading’ how well each site is maintained.
From siting.pdf:
The sensor should be housed in a ventilated radiation shield. The EPA recommends the sensor be no closer than four times an obstruction’s height, at least 30 m from large paved areas, and located in an open level area that’s at least 9 m in diameter. The open areas should be covered by short grass, or where grass does not grow, the natural earth.
Avoid these:
• large industrial heat sources
• rooftops
• steep slopes
• sheltered hollows
• high vegetation
• shaded areas
• swamps
• areas where snow drifts occur
• low places holding standing water after rains
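The numeric rules quoted above lend themselves to a mechanical check. Here is a toy compliance function encoding just the three quantitative criteria (obstruction distance at least 4× its height, at least 30 m from large paved areas, open level area at least 9 m across); field names and the example values are my own, for illustration only.

```python
def site_ok(obstructions, paved_dist_m, open_diameter_m):
    """Toy siting check. obstructions is a list of (distance_m, height_m)
    pairs for objects near the sensor."""
    # Each obstruction must be at least 4x its own height away.
    if any(dist < 4 * height for dist, height in obstructions):
        return False
    # Large paved areas must be at least 30 m away.
    if paved_dist_m < 30:
        return False
    # The open level area must be at least 9 m in diameter.
    if open_diameter_m < 9:
        return False
    return True

print(site_ok([(50, 10)], paved_dist_m=40, open_diameter_m=12))  # compliant
print(site_ok([(20, 10)], paved_dist_m=40, open_diameter_m=12))  # obstruction too close
```

Something of this sort could back the “grading” idea in #28: photograph a site, estimate the distances, and score it against the published criteria.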
#29
These are apparently EPA guidelines. Would the WMO be any different?
Now that you are addressing Tom Karl’s influence on temperature trends, it is interesting to go back to 1986 and note that in that year Jones et al launched their Northern Hemisphere tranche of “Global Warming”

http://www.warwickhughes.com/cru86/
and in the same journal, G. Kukla, J. Gavin and T. R. Karl published their paper “Urban Warming”. Follow the link to Read Online and look in the journal list for page 1265.
This paper found, “Meteorological stations located in an urban environment in North America warmed between 1941 and 1980, compared to the countryside, at an average rate of about 0.12°C per decade. Secular trends of surface air temperature computed predominately from such station data are likely to have a serious warm bias.”
Kukla et al examined USA data from 34 urban / rural pairs.
It so happens that Jones et al 1986 used 10 of these Kukla et al urban stations uncorrected.
A question that has puzzled me for some time is why Kukla et al did not comment on Jones et al inclusion of UHI affected data ?
It is not just the 10 stations: Jones et al used nearly 60 US cities of over 50,000 population; see the Earthlights image below.
Kukla et al was funded by the DoE, who also funded Jones et al.
Did Kukla et al suddenly decide, after looking at Jones et al, that their own paper was wrong? That city data are not affected by UHI after all?
Obviously Karl has gone on to write much about temperature data and urban data issues. More on that later.
This Earthlights image of the USA lower 48 has the Jones 1994 stations (521 US) superimposed, and you can see most are over lit areas.
In 1988 Wood commented on Jones et al 1986 and Wigley & Jones replied; all are scanned and commented on at:
http://www.warwickhughes.com/cru86/wood.htm
Not one US climate scientist stood with Wood as a co-author, although he obviously had help.
Warwick:
Very interesting and enlightening. What struck me in the tone of the Wigley and Jones reply was the level of defensiveness and the lack of openness to any of Wood’s points. Clearly, if the UHI effect is of the size Wood surmised, much of the AGW argument would have been dismissed as a local, i.e. urban, and unavoidable (population growth/density) effect.
Has Kukla backed off his original finding of the size of the UHI effect in the US? (See, your teasing worked!)
#30
It was based on EPA, WMO, and the American Assn of State Climatologists, not just the EPA.
re 21:
Done, relative to 1901-1930, the quietest part of the data.
compared with my earlier ad hoc shifts
not bad
RE: #31 – Warwick, speaking of the 1980s, here is something from my own past. In 1984, I was a hard core environmentalist, and traveled within such circles at the Univ of Calif. The following is an excerpt of a New York Times book review from that year. Given that I was, at the time, quite taken with Gaia worship, and was a blind true believer (an early adopter, I was!) in “killer AGW,” and yet, was taking my degree in Geophysics, I’d have to imagine that there were a number of others like me scattered across the academic and government research institutions. I am quite a bit younger than Hansen … his generation were the profs and budding activist leadership at the time. I had this book passed to me by a fellow geophysics major:
===================
* GREENHOUSE: It Will Happen in 1997. By Dakota James. (Donald I. Fine, $15.50.) When Orwell imagined Big Brother’s dreadful Oceania in ”Nineteen Eighty-Four,” readers could hope, ”It can’t happen here.” But in Dakota James’s first novel, ”Greenhouse,” the America of 1997 sounds all too horribly possible. Cities rot, trash piles up, jobs disappear, atomic wastes get buried under housing tracts, strip-mined national parks are converted to Moonscape Wonder resorts and carbon dioxide pollution has passed the tip-over point. Those who said the world would end by ice are swelteringly wrong – the South is a desert, retired Floridians have run with their golf clubs to the Yukon and West Antarctica has melted, flooding Bangladesh. In Chicago, where it’s 100 degrees in the winter shade, Casper Spent, assistant professor of philosophy, is denied tenure because of his resistance to the postmodern high-tech society championed by his dean.
===================
One must wonder, how many Team members also read it, at the time?
Hans:
Can you overlay population growth for these cities on your charts, growth in aggregate power consumption or some other driver of UHI?
re 36: Uccle, De Bilt and Basel have been UHI-adjusted; Vienna downtown (Hohe Warte) was stable; Hohenpeissenberg is a rural hilltop station.
If you want to check it yourself the zipped excel is here:
http://home.casema.nl/errenwijlens/co2/europe_pub.zip
the population data is here:
http://home.casema.nl/errenwijlens/co2/ucclepop.txt
http://home.casema.nl/errenwijlens/co2/utrechtpop.txt
http://home.casema.nl/errenwijlens/co2/debiltpop.txt
other historic population data is here
http://www.populstat.info/Europe/europe.html
re 27.
Yes, Susanville is Mars and Quincy is Venus. No sane person who has been to both would put them in the same weather bin or climate bin.
Just Google-map Quincy, CA, satellite view, and zoom out to 100 miles. See all that green? Conifers.
Susanville used to be a lumber mill town. Then it became a prison town. Susanville is “just down the road” but entirely different. On the north side of town (coming down the 36) you come out of the mountains down a steep grade and hit town. Think foothills and valley. Just out of town to the south it’s the frickin’ desert. Quincy and Susanville are not on the same planet.
re 35 — Steve S., were you at Berkeley? Just curious, as Mann was a UC Berkeley physics student, Class of ’89; John Holdren taught physics at UC Berkeley, and sponsored with Paul Ehrlich the Cassandra conference in ’88 in honor of John Cook. They declared that the probability of disorder went up with the square of the population. Maybe Karl based his temperature adjustments on that, and that’s where Mann learned his math for the hockey stick.
This message posted by Roger Pielke Sr. at his site (http://climatesci.colorado.edu/):
If you do not also check the weblog Climate Audit, and are interested in the diversity of problems with the robustness of the surface air temperature data that is used to construct the IPCC estimate of global warming, I recommend that you read the series of posts on that website. It is clearly time for further detailed independent assessments of this data source, which is being used as a foundation for much of the policy actions on climate change that are being proposed.
Given the UHI uncertainties in the land-based data, and the (IMHO) very uncertain ‘bucket’ correction value in the SST data, why was the ocean surface air temperature data given such a brusque dismissal in the AR4 appendix mentioned above? What does the SAT show compared with the uncorrected SST? Presumably the SAT was not susceptible to correction in the same way.
What research has been done — i.e., wet research involving throwing buckets over the side of ships and placing thermometers on ships, not modelling — to see if the assumptions regarding SSTs and SATs are correct? It should be a cheap project and should yield much good data very quickly. Surely it must have been done.
JF
Who is also puzzled by the land data station south of Iceland in the middle of the Atlantic.
I live about a half hour north of Burlington, VT. Burlington is not a large city. I live in a very small town at the same elevation (both are on Lake Champlain). In May one can see the UHI effect simply by observing how much earlier the leaves come out on the trees in Burlington compared to surrounding areas. I think they vastly underestimate how small a city can be and still generate UHI. There is probably a technique one could use to examine stations, in which one examined the progress of spring at a site and in a nearby wooded area. Kind of a real-time dendroclimatology 🙂
RE: #39 – In fact, I indeed attended grad school at Berkeley. Electrical Engineering. I overlapped slightly with Mann. My undergrad was at Santa Barbara, in Geophysics.
re 34:
Here is a comparison of 1940-2006 raw (GISS/GHCN) and adjusted (by me) relative to 1901-1930
The spread in the data, in particular after 1950, is evident.
Further to #31, Tom Karl was lead author of the influential 1993 BAMS paper “A New Perspective on Recent Global Warming: Asymmetric Trends of Daily Maximum and Minimum Temperature”. I have some comments at
http://www.warwickhughes.com/papers/karl93.htm
In particular on their Fig 7, which reflects the increasing UHI effect in PRC and Japanese data classified by increasing station population. The fascinating thing is how Tom Karl and his numerous well-known co-authors all turn a blind eye to this standout evidence of UHI reality when it comes to the paper’s Abstract and conclusions.
I am pulling together links to many of my reviews and comments on one page:
http://www.warwickhughes.com/papers/revgg.htm
#31 — “A question that has puzzled me for some time is why Kukla et al did not comment on Jones et al inclusion of UHI affected data?”
Warwick, did you ever email him and ask?
I have put in too many years of coming up against walls and getting no answers, sometimes no replies, Pat. But look, you’re right, it’s worth a try.
Kukla is now an AGW sceptic. At least, in his answers for the Czech webzine “Invisible Dog” he said that recent warming may well be a prelude to the next glacial, and entirely natural, probably caused by a combination of solar influence and the angle of the Earth’s axis to the ecliptic plane. At a low angle of up to 22 degrees the solar energy concentrates in the tropics, and ice grows at both poles (the recent situation). Average global temperature increases until the ice overflows onto the neighbouring continents. At an angle of 24 degrees the processes reverse.
He even called the IPCC “gang of Paris” and some members “fanatics”.
This is a link to a recent interview of Kukla
#47 — relevant to your experience, Warwick, my original question in #46 included the comment that for years, apart from John Daly, you must have felt very isolated in raising a skeptical voice about surface temperatures. A real ‘voice crying in the wilderness.’ Except, in contradistinction to the prophets of old, you actually knew what you were talking about. 🙂 But, I cut that part out before posting. Anyway, I’m very glad you persisted.