Recent study on decreasing US wind energy not as advertised

Wind speed trends over the contiguous USA by Pryor et al. (2009, in press, JGR)

Some (read on to see who) would say that this particular wind farm energy reduction study is speculative, inconclusive, preliminary, and premature, and with the authors’ hesitant equivocation in press interviews, even they may agree with that particular straw man. [Read comment 1 for more about bad science reporting]

The Associated Press story heralds this as a “first-of-its-kind study” that “suggests that average and peak wind speeds have been noticeably slowing since 1973, especially in the Midwest and the East.”

Quotes from various scientists:

“It’s a very large effect,” said study co-author Eugene Takle, a professor of atmospheric science at Iowa State University. In some places in the Midwest, the trend shows a 10 percent drop or more over a decade. That adds up when the average wind speed in the region is about 10 to 12 miles per hour.

The new study “demonstrates, rather conclusively in my mind, that average and peak wind speeds have decreased over the U.S. in recent decades,” said Michael Mann, director of the Earth System Science Center at Penn State University.

Jeff Freedman, an atmospheric scientist with AWS Truewind, an Albany, N.Y., renewable energy consulting firm, has studied the same topic, but hasn’t published in a scientific journal yet. He said his research has found no definitive trend of reduced surface wind speed.

Robert Gramlich, policy director at the American Wind Energy Association, said the idea of reduced winds was new to him. He wants to see verification from other studies before he worries too much about it.

Gavin Schmidt, a climate scientist at NASA’s Goddard Institute for Space Studies, told the Guardian the study had yet to establish a clear pattern of declining winds, and that it was too soon to be thinking about the effects on the wind energy industry.
“It’s still very preliminary. My feeling is that it is way too premature to be talking about the impact that this makes.”

Now, this is not an example of consensus, especially when Mann and Schmidt hold contrary views, which were marvelously printed one after another in the AP story. Mann’s glowing endorsement seems a bit premature, and some would wonder if he actually read the same study that his fellow Team member did.

Now, to the study. What does the actual text, meaning the abstract, body, and conclusions, actually say? Do the authors’ remarks in the press accounts actually represent the research they submitted to the journal, which underwent peer review and was accepted for publication? The following equivocation in the AP story suggests that some hyperbole is at work here:

Still, the study, which will be published in August in the peer-reviewed Journal of Geophysical Research, is preliminary. There are enough questions that even the authors say it’s too early to know if this is a real trend or not. But it raises a new side effect of global warming that hasn’t been looked into before.

The ambiguity of the results is due to changes in wind-measuring instruments over the years, according to Pryor. And while actual measurements found diminished winds, some climate computer models — which are not direct observations — did not, she said.

Yet, a couple of earlier studies also found wind reductions in Australia and Europe, offering more comfort that the U.S. findings are real, Pryor and Takle said.

It also makes sense based on how weather and climate work, Takle said. In global warming, the poles warm more and faster than the rest of the globe, and temperature records, especially in the Arctic, show this. That means the temperature difference between the poles and the equator shrinks and with it the difference in air pressure in the two regions. Differences in barometric pressure are a main driver in strong winds. Lower pressure difference means less wind.
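Takle’s pole-to-equator argument can be made concrete with the geostrophic wind relation, in which wind speed scales linearly with the horizontal pressure gradient. The numbers below are purely illustrative (they are not from the study); a minimal Python sketch:

```python
import math

def geostrophic_speed(dp, dn, lat_deg, rho=1.225):
    """Geostrophic wind speed (m/s) for a pressure difference dp (Pa)
    across a horizontal distance dn (m) at latitude lat_deg."""
    omega = 7.292e-5  # Earth's rotation rate (rad/s)
    f = 2 * omega * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return (dp / dn) / (rho * f)

# Illustrative numbers (assumptions, not from the paper):
# a 4 hPa pressure difference over 500 km at 45N ...
v1 = geostrophic_speed(400.0, 500e3, 45.0)
# ... versus a 3 hPa difference over the same distance after
# the meridional gradient weakens.
v2 = geostrophic_speed(300.0, 500e3, 45.0)
print(round(v1, 1), round(v2, 1))  # the weaker gradient gives proportionally less wind
```

Because the relation is linear in the gradient, a 25% weaker pressure difference gives a 25% weaker geostrophic wind, which is the qualitative mechanism Takle describes.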

Even so, that information doesn’t provide the definitive proof that science requires to connect reduced wind speeds to global warming, the authors said. In climate change science, there is a rigorous and specific method — which looks at all possible causes and charts their specific effects — to attribute an effect to global warming. That should be done eventually with wind, scientists say.

Let’s get this straight: the study is inconclusive and has many outstanding questions with ambiguous results, but it is consistent with what you would expect with global warming. Presto. But, in climate science, there is a rigorous and specific method of attribution — which the authors did not do — but suggest should be eventually done with wind measurements.

Figure 2: (a) Annual percentiles for 1200 UTC observations from site 724320 (5th, 10th, 20th … 90th, 95th percentile, where the 50th and 90th percentiles are shown in blue and red, respectively). Despite considerable inter-annual variability, data from this station exhibit a significant downward trend in both the 50th percentile (of approximately 0.7%/year) and the 90th percentile (of approximately 0.6%/year) wind speed. Output from the other data sources used herein for the grid cell containing Evansville are shown in frames (b)-(g).

The 10-meter wind speeds from the various climate model and/or reanalysis data sources are condensed in this figure to a single grid point or station location, which is a risky way to validate the observations, in this case Evansville. Originally, I thought that these medians were an area average over the entire United States, which would be more representative of the grandiose claims in the press releases. The reanalysis data sets range in grid spacing from 0.33 degrees (NARR) to 2.5 degrees (ERA-40 and NCEP), and the climate models run at 50 km grid spacing; all are too coarse to resolve the topography well enough to be representative of a given station location. The authors do point out, importantly:

Observational data – due to instrumentation changes, station moves, changes in land-use or obstacles, and observational sites may not be regionally representative

[Note: the land surface data are described as follows]
1. 00 UTC and 12 UTC NCDC land-based near-surface wind speeds, NCDC-6421 [Groisman, 2002]: 336 stations chosen out of 1655, covering 1973-2000.
2. DS3505 surface data, global hourly: 193 stations available; all stations are airports and military installations.

Reanalysis products ensure the data sets are homogeneous and complete, but the near-surface wind speeds are strongly influenced by model physics and data that are assimilated.

Also, it must be noted that the 10-meter wind speed is not an assimilated quantity in the models; rather, it is extrapolated from the lowest model level (often 50-100 meters in height) using Monin-Obukhov similarity theory.
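Under neutral stability, that Monin-Obukhov extrapolation reduces to the familiar logarithmic wind profile. A minimal sketch, assuming a 60 m lowest model level and a 0.1 m roughness length (both illustrative values, not from the paper):

```python
import math

def wind_at_height(u_ref, z_ref, z, z0=0.1):
    """Neutral log-profile: u(z) = u_ref * ln(z/z0) / ln(z_ref/z0).
    z0 is the roughness length (m); 0.1 m is an assumed value for
    open terrain, not a figure from the paper."""
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)

# Lowest model level at an assumed 60 m with an 8 m/s wind:
u10 = wind_at_height(8.0, 60.0, 10.0)
print(round(u10, 2))  # the diagnosed 10 m wind is weaker than the level-1 wind
```

The point is that the reanalysis 10-meter wind inherits whatever the model assumes about stability and roughness, which is one reason it need not match an anemometer at the same location.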

With the work of Anthony Watts on the surfacestations.org observational record with regard to temperature, it is perhaps a good idea to investigate and pay more attention to the type and location of the anemometer. The authors helpfully point out that observational data, “due to instrumentation changes, station moves, changes in land-use or obstacles,” may not be regionally representative.

They go on to say:

Studies that have analyzed wind speed data from terrestrial anemometers have generally found declines over the last 30-50 years (see summary in McVicar et al. [2008] and Brazdil et al.[2009]), the cause of which is currently uncertain. In part because of the difficulties in developing long, homogeneous records of observed near-surface wind speeds, reanalysis data have also been used to quantify historical trends and variability in near-surface wind speeds either in conjunction with in situ observations or independent thereof [Hundecha et al., 2008; McVicar et al., 2008; Pryor and Barthelmie, 2003; Trigo et al., 2008].

So, with these data caveats in mind, the authors continue on in their research avenue to address the following points (from the paper):

Herein we analyze 10-m wind speeds from a variety of observational data sets, reanalysis
products and Regional Climate Model (RCM) simulations of the historical period in order to:

Quantify the magnitude and statistical significance of historical trends in wind speeds and the consistency (or not) of trends derived using different data sets; direct observations, reanalysis products and output from RCMs. As a component of this analysis we provide preliminary diagnoses of possible causes of temporal trends in the in situ observations. Specifically, we examine trends in terms of their temporal and spatial signatures, and the role that major instrumentation changes may have played in dictating those trends.

The methodology is fairly straightforward: time series analysis of the 50th and 90th percentiles of the wind speed distributions and the annual mean wind speed using the 00 and 12 UTC observations or model output from each day of the year.
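As a sketch of that methodology, here is the same percentile-and-trend calculation in Python on synthetic twice-daily winds with an imposed decline (the station record, the gamma-distributed speeds, and the -0.7%/yr rate are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1973, 2001)

# Synthetic twice-daily (00 and 12 UTC) wind speeds with an imposed
# -0.7%/yr decline, standing in for one station's record.
p50, p90 = [], []
for i, yr in enumerate(years):
    speeds = rng.gamma(shape=4.0, scale=1.2, size=730) * (1 - 0.007) ** i
    p50.append(np.percentile(speeds, 50))  # annual median
    p90.append(np.percentile(speeds, 90))  # annual 90th percentile

def trend_pct_per_year(series):
    """Least-squares slope, expressed as percent per year of the period mean."""
    slope = np.polyfit(years, series, 1)[0]
    return 100 * slope / np.mean(series)

print(round(trend_pct_per_year(p50), 2), round(trend_pct_per_year(p90), 2))
```

Both recovered trends come out near the imposed -0.7%/yr, which is the magnitude the paper reports for some Midwest stations.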

Conclusions: (quoted from Pryor et al. 2009)

1. Magnitudes of trends in observed wind speed records for 1973-2000 and 1973-2005 are substantial – up to 1% per year at multiple stations.

2. Trends in reanalysis data sets and RCM output where present are generally of lesser magnitude and no other data source is as dominated by negative trends as the in situ observations.

3. Temporal trends in the data sets from in situ measurements are of largest magnitude over the eastern US, but negative at the overwhelming majority of stations across the entire contiguous USA. The trends in wind speed percentiles from in situ observations do not exhibit strong seasonality (Figure 6) or a clear signature from the introduction of the ASOS instrumentation (Figure 7). Hence the cause(s) of the declines remains uncertain.

4. Output from NARR for 1979-2006 indicate contrary trends in the 0000 UTC and 1200 UTC output with declining trends over much of the western US in the 0000 UTC wind speeds but increases in the 1200 UTC output (c.f. Figures 4 and 5).

As expected, the period of observation used in the trend analysis has a profound impact on the presence and absence of temporal trends and indeed the sign of trends.

And the most relevant for last:

Based on the analyses presented herein we conclude there are substantial differences between trends derived from carefully quality controlled observational wind speed data, reanalysis products and RCMs, and indeed between wind speeds from different reanalysis data sets and RCMs.

A few comments:

The quality of reanalysis data sets is not yet sufficient for this type of attribution study of high-resolution, regional climate change. This problem is exacerbated when looking at derived variables such as surface temperature and winds, which are neither assimilated observations nor representative of station locations. Also, while the authors chose to use ERA-40, the NCEP Reanalysis, and NARR (a regional reanalysis over North America), there are other sources of reanalysis data available that may be of significantly higher quality. A recently completed reanalysis, ERA-Interim, begins in 1989 and continues to today; it is based upon a very recent version of the ECMWF operational forecasting model, which is the best on the planet.

While it is by no means the final authority on the subject, I downloaded the data and conducted a few simple experiments to check the rather ambiguous results of Pryor and company. With access to my university servers and a program called GrADS, it is very easy to replicate Pryor’s work with minimal effort. It is just a few lines of code and a lot of processing to sort out the distributions of 10-meter wind at each grid point over the past 20 years.


Figure: Running calculations of the distribution of 10-meter winds over the United States, domain averaged (22.5N-51N; 232.5E-294E). The median each month is calculated from the ERA-Interim reanalysis data, sampled 4 times a day [00, 06, 12, 18 UTC], for a total of 1460 model data points per year.

Figure analysis: no trend. Statistically and physically speaking, there is no decreasing trend in the distribution of 10-meter winds over the contiguous United States according to the new ERA-Interim reanalysis. (Technical note: this also matches the results from the JRA-25 and NARR reanalyses, which extend from 1979-2008.)

What about globally? A fair way to determine where downward trends in wind speed are occurring is simply to subtract the average of the first 10-year period [1989-1998] from that of the second 10-year period [1999-2008] in the reanalysis. Only the annual medians are averaged to create each 10-year period. The units are meters per second [m/s]. The second figure is the difference in the 90th percentile wind speed.
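The decade-differencing check can be sketched as follows; the grid dimensions and wind values are synthetic placeholders for the reanalysis fields:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual-median 10 m winds: 20 years on a small lat x lon grid
# (the dimensions stand in for the reanalysis grid).
annual_median = 5.0 + 0.3 * rng.standard_normal((20, 12, 24))

first  = annual_median[:10].mean(axis=0)   # 1989-1998 analogue
second = annual_median[10:].mean(axis=0)   # 1999-2008 analogue
diff = second - first                      # m/s; negative = winds slowed

print(diff.shape, float(np.abs(diff).max()) < 1.0)
```

With no imposed trend, the decade differences here are just sampling noise well under 1 m/s, which is the kind of baseline against which a real 0.2 m/s difference over the USA should be judged.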

The relatively large tropical differences are associated with the trade wind modulation by ENSO. The absolute differences are less than 0.2 m/s over the USA, which is well less than a knot. This is in accord with Pryor et al. (2009) who stated in their conclusions that the reanalysis data did not show a decrease like the observations.

So, what does this all mean?
First of all, it is baffling that a press release would be produced for this journal manuscript, which can be described as nothing more than a simple observation vs. reanalysis/climate-model comparison study, with ambiguous results that only lead to more questions about the quality of the data used. There is no attempt in the paper to attribute the results to climate change or global warming, so the suggestions in the press accounts are rather egregious fear-mongering, or perhaps simply an example of journalistic malpractice.

This type of study is critical to the incremental improvement of our understanding of micrometeorology, which is modulated by the large-scale climate. Why the observations show one thing and all of the reanalysis data sets show the opposite is a good question. Thus, in this case, Gavin Schmidt’s assessment is right on the money: “It’s still very preliminary. My feeling is that it is way too premature to be talking about the impact that this makes.”

Or you can simply read the authors own equivocations in the press — in between the crazy, amateur hour so-called journalism by environmental correspondents.

Note: ECMWF ERA-Interim data used in this study/project have been provided by ECMWF/have been obtained from the ECMWF data server.

TAS vs TOS

My new script for scraping KNMI model data makes it very convenient to look at model data without a lot of setup overhead. Up till now, I’d only downloaded air temperature data (tas), so I tested downloading SST data (tos). KNMI’s collection of tos data is unfortunately quite spotty, and this information is not consistently available. I don’t know whether this is incompleteness on their part, at PCMDI (from which they derive their data), or on the part of the contributors.

For example the cccma_cgcm3_1 model has 5 20c3m runs with tas (air) values but only one with tos (SST) values. Here’s a plot of the difference between tas (masked to sea only) and tos for the overlapping run. For some reason, the model air temperature over the ocean has increased relative to the SST during the 20th century.
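The comparison itself is straightforward: mask tas to ocean cells, then difference against tos. A hedged sketch with synthetic fields (the mask, temperatures, and grid size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

nlat, nlon = 10, 20
sea = rng.random((nlat, nlon)) > 0.3             # placeholder land/sea mask
tas = 15.0 + rng.standard_normal((nlat, nlon))   # surface air temperature (C)
tos = 14.5 + rng.standard_normal((nlat, nlon))   # sea surface temperature (C)

tas_sea = np.where(sea, tas, np.nan)             # keep tas over ocean only
diff = np.nanmean(tas_sea - np.where(sea, tos, np.nan))
print(round(float(diff), 2))  # mean tas-minus-tos over ocean cells
```

A drifting tas-minus-tos difference in a model run, as in Figure 1, would show up as this scalar changing over time rather than staying near its climatological offset.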

Figure 1. Difference between TAS and TOS. Dotted red are major volcanos.

More on Retrieving KNMI Data

I’ve done a considerable upgrade to my function for retrieving model data from KNMI within R. This builds on the KNMI webpage but IMO is a considerable enhancement of it. I’ve made the script available here.

The function read.knmi.models is built as an emulator of the radio buttons.

Geert’s radio buttons (if I’m understanding things right) build a command in OpenDAP – the command excerpt “cgi?” comes from OpenDAP. OpenDAP was developed in the US for external retrieval of relevant portions of geographically structured data sets without downloading terabytes of data. There are some online manuals on OpenDAP, but I must confess that I’ve worked using Geert’s command structure as a template. The manuals on OpenDAP describe an installation program.

However, the system here: using R to access the KNMI installation seems to rather neatly accomplish the objectives of OpenDAP while working in a language and framework that is much more widely understood.

KNMI does not appear to have a convenient data frame of models and scenarios. I’ve collated information from their webpage as of a couple of months ago and placed it online. It is read as part of calling the functions, which can be done as follows:

source("http://data.climateaudit.org/scripts/models/collation.knmi.functions.txt")
dim(knmi.info) # 86 4
# model alias scenario Runs
#1 BCC CM1 bcc_cm1 20c3m 4
#2 BCCR BCM2.0 bccr_bcm2_0 20c3m 1
#4 CGCM3.1 (T47) cccma_cgcm3_1 20c3m 5

Now log on using your email address. KNMI asks users to register; I’ve done so. Their retrieval command requires an email address, and the script here assumes that you have one on file with KNMI (the email address yourname@you.com is a placeholder). The function download_html pings the website to log on, assuming that you’re registered.

email = Email = "yourname@you.com" ## REGISTER FIRST
logon = download_html(paste("http://climexp.knmi.nl/start.cgi?", Email, sep=""))

Here’s a sample of retrieving model data. I’ll describe the fields in a minute.

test = read.knmi.models(field="tas", model="giss_aom", scenario="sresa1b",
  landtype="land", lat=c(-20,20), long=c(0,360), version="anomaly")
plot.ts(test, main=paste("giss_aom", "sresa1b"))

I experimented with recovering the centigrade values – something that Lucia asked for this morning, but haven’t debugged this.

The fields are as follows:

field – "tas" is the field name for temperature. Only one tested so far.
model – use alias nomenclature from knmi.info.
scenario – use nomenclature from knmi.info; "20c3m" and "sresa1b" are examples. See the "Scenario Runs" radio buttons for nomenclature.
landtype – default is all. Can be "land" or "sea".
lat – south to north. Default c(-90,90).
long – 0 to 360. Default c(0,360).
version – default is "anomaly". "centigrade" gets Centigrade.
Info – requires knmi.info.

A Partial Victory for the R Philosophy

Obviously I think that R is a great language. But one of the reasons that it’s great is because it’s open source and because of the incredible energy and ingenuity of the packages contributed by the R Community for the use of others.

In a real sense (as opposed to a realsense), this sort of open source philosophy represents what a lot of us thought that climate science would be like (and should be like). I have a story today which shows a small victory for open source philosophy.

Cloud Super-Parameterization and Low Climate Sensitivity

“Superparameterization” is described by the Climate Process Team on Low-Latitude Cloud Feedbacks on Climate Sensitivity in an online meeting report (Bretherton, 2006) as:

a recently developed form of global modeling in which the parameterized moist physics in each grid column of an AGCM is replaced by a small cloud-resolving model (CRM). It holds the promise of much more realistic simulations of cloud fields associated with moist convection and turbulence.

Clouds have, of course, been the primary source of uncertainty in climate models since the 1970s. Some of the conclusions from cloud parameterization studies are quite startling.

Why the difference?

Here is a puzzling comparison of two zonal averages from Phil Jones’ CRUTEM3 gridded land data. Red shows the average from 20S to 20N, and black shows the average of the 20-30 degree bands (both N and S). These are calculated from gridded data at http://hadobs.metoffice.com/crutem3/data/CRUTEM3.nc.

I did this comparison because I noticed a difference between my own average of 20S-20N gridded data and the archived “low latitude” average (which was 30S to 30N).


Figure 1. CRUTEM3 zonal averages. Black: 20-30N and 20-30S. Red: 20S-20N.

The range of differences goes from -1.7 to 4.6 deg C. Explanations welcome.
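For what it’s worth, the band averaging itself can be sketched as follows, using cosine-latitude area weights on a synthetic 5-degree grid (CRUTEM3’s resolution); the anomaly field here is invented purely to exercise the code:

```python
import numpy as np

lats = np.arange(-87.5, 90, 5.0)   # 5-degree grid-cell centres
lons = np.arange(-177.5, 180, 5.0)
# Synthetic field that increases toward the equator, standing in for
# one month of gridded anomalies.
field = np.cos(np.radians(lats))[:, None] * np.ones((1, lons.size))

def band_mean(field, lats, lat_lo, lat_hi):
    """Area-weighted mean over cells whose centre lies in [lat_lo, lat_hi]."""
    sel = (lats >= lat_lo) & (lats <= lat_hi)
    w = np.cos(np.radians(lats[sel]))  # cell area is proportional to cos(latitude)
    return float(np.average(field[sel].mean(axis=1), weights=w))

tropics = band_mean(field, lats, -20, 20)
band_s = band_mean(field, lats, -30, -20)
band_n = band_mean(field, lats, 20, 30)
print(round(tropics, 3), round(0.5 * (band_s + band_n), 3))
```

With any field that varies with latitude, the 20S-20N mean and the 20-30 band mean will differ, so some disagreement between the two black/red series is expected; the puzzle is the size of it.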

Revisiting Detroit Lakes

Some long time Climate Audit readers may remember this famous picture of the USHCN climate station of record in Detroit Lakes, MN.

This is what I wrote on July 26th, 2007 about it in:

How Not to Measure Temperature, Part 25

This picture, taken by www.surfacestations.org volunteer Don Kostuch is the Detroit Lakes, MN USHCN climate station of record. The Stevenson Screen is sinking into the swamp and the MMTS sensor is kept at a comfortable temperature thanks to the nearby A/C units.

[Image: Detroit_lakes_USHCN.jpg]

The complete set of pictures is here

From NASA’s GISS, the plot makes it pretty easy to see there was no discernible multi-decadal temperature trend until the A/C units were installed. And it’s not hard to figure out when that was.

[Image: Detroit_lakes_GISSplot.jpg]

And as you know, that curious jump in the GISS record coincided with the placement of the A/C heat exchangers (I checked with the chief engineer of the radio station, and he pulled the invoices to confirm the date), but it turns out that wasn’t the most important issue.

Steve McIntyre of Climate Audit saw something else, mainly because other nearby stations had nearly the same odd jump in the data. That jump turned out to be the discovery of a data-splicing glitch in the NASA GISS process joining the data pre- and post-year-2000.

Banned at Sudbury Airport

At a friend’s request, I went up to northern Ontario this weekend to look at a gold prospect, which I might chat about some time. I got to trudge through bush for a few hours – exhausting work for city folk, drove a quad around empty logging roads (at a grandfatherly pace) – my sons would be envious.

It wasn’t all that far north, but it was sure cold. We had to wear winter jackets though it was the first week of June. When you see the lurid red dots in IPCC reports showing intense warming in the far north, don’t get the idea that Siberia (or northern Canada) has become warm enough to host an IPCC meeting. It hasn’t. The IPCC prefers to meet in warm venues.

When I arrived in Sudbury Airport on Friday night, I logged onto the airport internet terminal (conveniently free) and tried to access Climate Audit. Access was blocked. I was –

Banned in Sudbury.

To verify that Climate Audit was specifically blocked, as opposed to blogs, I visited realclimate, which loaded without event. While I had RC online, an instructor at the local university (Laurentian) happened to walk by and congratulated me for reading such a fine website, saying he didn’t often see people in Sudbury doing so. He did not seem to regard this as evidence of either common sense or connoisseurship.

The block was via the software Softforyou, advertised as parental guidance software.

GISS Gridded and Zonal Data

Last week, I wanted to determine what GISS’ tropical land-and-ocean time series was. This did not prove as easy as it sounds. Nothing in GISS is intrinsically complicated; it’s all just averaging and smoothing and adjusting. But the code is written as though the whole thing were being done on a Commodore 64 with negligible memory, which it probably was at one time. The code works to a degree, but what’s actually being done is hard to see amid all the operational load of inputting and outputting as though it were a Commodore 64.

To make matters more complicated, their online data is mostly in binary. This is a rather ingenious compromise between their post-Y2K requirement to put their data online and, at the same time, keeping the hoi polloi out of the data. If you’re running Fortran on a Unix machine (as medievalists and climate scientists do), then it’s not a problem; but if you’re working in modern languages, it is a problem.

Anyway, I think that I’ve come out alive and can now retrieve and handle GISS gridded data in a modern language and might even figure out the GISS tropical land-and-ocean time series. 🙂

Gridded Data SBBX
There are three gridded data sets online at GISS in ftp://data.giss.nasa.gov/pub/gistemp with older versions in subdirectories of ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/ARCHIVE/ such as ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/ARCHIVE/2009_04/binary_files.

Two of the data files are based on GHCN land data and one is based on SST. Surprisingly, there doesn’t seem to be a gridded version for the combined data.

SBBX1880.Ts.GHCN.CL.PA.250 – land with 250 miles influence. About 19 MB.
SBBX1880.Ts.GHCN.CL.PA.1200 – land with 1200 miles influence. About 42 MB
SBBX.HadR2 – SST. I think that this uses a NOAA version after 1980 and HadSST before 1980. About 33 MB.

The program read_sbbx will download and place this into an R list (say, sbbx) with two items:
sbbx$info – the information list with 8000 rows and 10 columns of information
sbbx$sbbx – the data as a time series starting in 1880 with 8000 columns. Missing data has already been converted to NA. The R objects take 30-50% less space than the binary data.
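For readers attempting the same thing in another modern language, the key detail of Fortran sequential unformatted files is that each record is wrapped in 4-byte (here big-endian) length markers. A hedged Python sketch that round-trips a fake record; the real SBBX layout (an info header followed by station records) differs in detail:

```python
import struct
from io import BytesIO

def write_record(f, payload):
    """Write one Fortran sequential unformatted record (big-endian markers)."""
    f.write(struct.pack(">i", len(payload)))
    f.write(payload)
    f.write(struct.pack(">i", len(payload)))

def read_record(f):
    """Read one record, checking that the leading/trailing markers agree."""
    n = struct.unpack(">i", f.read(4))[0]
    payload = f.read(n)
    assert struct.unpack(">i", f.read(4))[0] == n
    return payload

# Round-trip a record of four big-endian float32 values, standing in for
# a small slice of an SBBX file.
buf = BytesIO()
write_record(buf, struct.pack(">4f", 1.0, -2.5, 0.0, 9999.0))
buf.seek(0)
values = struct.unpack(">4f", read_record(buf))
print(values)  # 9999 is a common missing-value flag in GISS files
```

Once the record markers are handled, the rest is just unpacking big-endian integers and floats in the documented order.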

source("http://climateaudit.info/scripts/gridcell/read.binary.txt")
url = "ftp://data.giss.nasa.gov/pub/gistemp/SBBX1880.Ts.GHCN.CL.PA.250" # 18577
sbbx = read_sbbx(url)
dim(sbbx$sbbx) # 1560 8000

Lplot Data
There are a variety of series in the directory ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/lplots.

url = "ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/lplots/LOW.Ts.GHCN.CL.PA.lpl"
zon = read.lpl(url)
tsp(zon) # 1880.00 2009.25 12.00

Zonal Data
I think that the zonal data is sitting in the monthly archives (say)
ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/ARCHIVE/2009_04/ZON/. The dataset LOTI.zon.web appears to be the land-and-ocean time series for 46 latitudes from 90S to 90N in 4 degree increments – other zonal versions SST.zon, Ts.zon and Ts.zon250 appear to be zonal versions of the three gridded data sets.

It looks like I can extract the desired tropical data from this. After a LOT of experimenting – much of which was occasioned by my prior lack of understanding of how binary files worked – I developed a program to extract this binary data – following methods in Nicholas’ scripts to some extent, but modifying them to work better in R now that I sort of understand how binary files work.

url = "ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/ARCHIVE/2009_04/ZON/LOTI.zon.web"
loti = read.zon.web(url)
dim(loti) # [1] 1552 46
tsp(loti) # [1] 1880.00 2009.25 12.00

From this we can get the tropical average as:

lat = as.numeric(dimnames(loti)[[2]]); lat
loti.trp = ts(apply(loti[, abs(lat) <= 20], 1, mean, na.rm=TRUE), start=1880, freq=12)
ts.plot(loti.trp)

There, wasn’t that easy. You’d think that GISS would welcome this sort of software, but they seem uninterested in anything other than Fortran.

UK Met Office: Refuse and Delete

A couple of weeks ago, I noticed that the UK Met Office website contained the following statement:

Q. Where can I get the raw observations?

A. The raw sea-surface temperature observations used to create HadSST2 are taken from ICOADS (International Comprehensive Ocean Atmosphere Data Set). These can be found at icoads.noaa.gov/. To obtain the archive of raw land surface temperature observations used to create CRUTEM3, you will need to contact Phil Jones at the Climate Research Unit at the University of East Anglia. Recently archived station reports used to update CRUTEM3 and HadCRUT3 are available from the CRUTEM3 data download page.

On May 11, 2009, I sent an inquiry to John Kennedy, the maintainer of this webpage, asking him to “provide me the archive of raw land surface temperature observations as you hold them at the Hadley Center.”

Kennedy replied the same day:

Most of the station data was given to us by Phil Jones under conditions that don’t allow us to redistribute it. If you want the full archive, you will have to contact him.

I sent the following FOI request to the UK Met Office:

I request the “archive of raw land surface temperature observations used to create CRUTEM3” as held by the Hadley Center (referred to on your webpage http://hadobs.metoffice.com/indicators/index.html ) under the FOI Act or other applicable legislation.

In my request, I did not include any of the flowery salutations that some readers have recommended. I did not, for example, say “Thank you, John” or “thanking you in advance, Mr Hadley”.

I received the following response today:

Re: Environmental Information Regulations request

Your email dated 11 May 2009 has been considered to be a request for information in
accordance with the Environmental Information Regulations Act 2004.

You requested the archive of raw land surface temperature observations used to create
CRUTEM3, as held by the Hadley Centre (referred to on webpage
http://hadobs.metoffice.com/indcators/index.html

The Met Office does not hold this information. The information we hold is the value added
data, which is data that has been quality controlled and where deemed appropriate, adjusted
to account for apparent non-climatic influences.

I hope this answers your enquiry.

Kennedy had previously said that “most of the station data was given to us by Phil Jones under conditions that don’t allow us to redistribute it” indicating that they held it. The two answers seem a bit inconsistent, though I guess there are ways to split this hair. However, I’m puzzled as to exactly how they split this hair. They admit to holding some information and seem to have left an opening for a further FOI request, which I’ll pursue.

Oh, BTW, don’t think that my request had no impact whatever. If you go to the Hadley Center webpage linked in my previous request – the one where they supposedly answer the question:

Q. Where can I get the raw observations?

that part of the webpage has been deleted. 🙂 It now reads:

Q. Where can I get the raw SST observations?

A. The raw sea-surface temperature observations used to create HadSST2 are taken from ICOADS (International Comprehensive Ocean Atmosphere Data Set). These can be found at http://icoads.noaa.gov/.