The source code for Hansen’s Step 2, the “urban adjustment” step, is online. If anyone has been able to run the program through to Step 2, I’d be interested in some stage results for the stations discussed here. The verbal description is not clear and the code is a blizzard of old-fashioned Fortran subscripts, so it will take a little while to translate the procedure into modern languages and see what he’s doing.
Hansen et al 1999 says:
An adjusted urban record is defined only if there are at least three rural neighbors for at least two thirds of the period being adjusted. All rural stations within 1000 km are used to calculate the adjustment, with a weight that decreases linearly to zero at distance 1000 km.
In the stations that I’ve looked at, I’ve seen adjusted stations being calculated when the above condition doesn’t seem to hold. So any light that can be shed on the procedures would be appreciated.
Stations within 1000 km
The first step in Hansen’s Step 2 is the calculation of rural stations within 1000 km of an urban station to be adjusted. The archived script shows that this calculation is done from scratch in each run. No particular harm in that, although the information presumably remains the same in every run. I’ve compiled the list of “Hansen-rural” stations for each of the 7364 stations, including in each list the id, name, lat, long, start_raw, end_raw, start_adj, end_adj, distance (from target), GISS-population, GISS-urban and GISS-lights. I archived the result at http://data.climateaudit.org/data/giss/stat_dist.tab . The information is somewhat redundant with the station information lists, but it was handy having it directly accessible to aid analysis. The object is 33 MB. The script to do this is at http://data.climateaudit.org/scripts/station/hansen/step2A.txt
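For anyone replicating the neighbor screen in another language, here is a minimal Python sketch. The field names are mine, and I use the haversine great-circle formula, which is one reasonable choice; Hansen’s code may compute distances differently.

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km via the haversine formula."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def rural_neighbors(urban, stations, radius_km=1000.0):
    """Return (station, distance) pairs for rural ('R') stations
    within radius_km of the urban target, nearest first."""
    out = []
    for s in stations:
        if s["urban"] != "R":
            continue
        d = great_circle_km(urban["lat"], urban["long"], s["lat"], s["long"])
        if d < radius_km:
            out.append((s, d))
    return sorted(out, key=lambda t: t[1])
```

As a sanity check, Hambantota (6.12, 81.13) to Diyatalawa (6.80, 81.00) comes out near the 77 km shown in the tables below.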
In order to analyze Hansen’s actual urban adjustment mechanism, it’s nice to identify the stations with only a few contributing rural stations. There’s a very pretty R function that can do this in one line. The object containing the information stat_dist.tab is structured as a list in R with 7364 items, one for each GISS station. Each item is an R data-frame – which is structured as a matrix but the columns can be of different types. A very handy thing to be able to use. To obtain the number of rows for the 1000th station in the list, you can use the command
nrow(stat_dist[[1000]]) #86
To obtain the number of rows for all 7364 stations, all you need to do to get a vector is:
stat.length=sapply(stat_dist,nrow)
Just like magic. No blizzard of Fortran subscripts and no 5 pages of programming.
Now to locate stations with only a few contributing rural stations for analysis purposes, you can do the following (I’ve already got my GISS information http://data.climateaudit.org/data/station/giss/giss.info.dat loaded as stations.tab):
temp=(stat.length<3) ; index=(1:7364)[temp]
temp_urban=(stations$urban=="U")
stations[temp&temp_urban,1:10]
This yields the following list of sites, all in India:
country id name lat long altitude alt.interp urban pop topo
1180 207 20742867000 NAGPUR SONEGA 21.1 79.05 310 302 U 930 HI
1188 207 20743128000 BEGAMPET 17.5 78.50 545 550 U 1796 HI
1190 207 20743185000 MACHILIPATNAM 16.2 81.15 3 30 U 113 FL
In these three cases, I checked and there was no “adjusted” series for any of the 3 sites, which, in this case, complies with the Hansen et al 1999 3-rural station criterion. I then checked “U” sites with 3 neighbors, all in India, Brazil and New Zealand and again didn’t obtain any adjusted series.
index=(1:7364)[temp_urban& (stat.length==3)];stations[index,1:10]
country id name lat long altitude alt.interp urban pop
1189 207 20743149000 CWC VISHAKHAP 17.70 83.30 66 142 U 353
2054 303 30382599000 NATAL AEROPOR -5.92 -35.25 52 12 U 377
2062 303 30382900000 RECIFE -8.05 -34.92 7 18 U 1184
2088 303 30383781000 SAO PAULO -23.50 -46.62 792 883 U 7034
6011 507 50793116001 AUCKLAND -36.90 174.80 5 9 U 145
6012 507 50793116002 ALBERT PARK -36.85 174.77 49 4 U 145
6013 507 50793116003 AUCKLAND, ALBERT PARK -36.85 174.77 49 4 U 145
6014 507 50793119000 AUCKLAND AIRP -37.02 174.80 6 17 U 145
6024 507 50793890001 DUNEDIN AERODROME -45.93 170.20 1 151 U 77
6025 507 50793893001 DUNEDIN MUSSELBURGH NEW ZE -45.90 170.50 2 190 U 77
I then experimented with sites classified as “small” and got some puzzles as shown below.
temp_small=(stations$urban=="S"); index=(1:7364)[temp_small & (stat.length==3)]; stations[index,1:10]
country id name lat long altitude alt.interp urban pop topo
312 125 12567197000 FORT-DAUPHIN -25.03 46.95 9 100 S 14 HI
1185 207 20743041000 JAGDALPUR 19.08 82.03 553 512 S 47 HI
1855 224 22443436000 BATTICALOA 7.72 81.70 12 8 S 42 FL
1861 224 22443497000 HAMBANTOTA 6.12 81.13 20 42 S 11 FL
6023 507 50793844000 INVERCARGILL -46.70 168.55 4 28 S 49 FL
For the first two sites, there was no adjusted series, but for the third through fifth there were adjusted series. Batticaloa and Hambantota are very close, and it turns out that their three rural neighbors are identical. So based on the apparent Hansen adjustment process, in which the urban station trends are supposedly coerced to the rural reference stations, one would expect similar adjusted trends. This proves not to be the case. Why, I’m not sure right now, and would welcome any thoughts.
The three rural stations for Batticaloa and Hambantota are shown below – note that the distances from the two urban sites to the three rural comparanda are similar and in the same order.
id name long lat start_raw end_raw start_adj end_adj dist pop urban lights
1859 22443476001 DIYATALAWA, SRI 81.00 6.80 1901 1980.917 1901 1980.917 128.2031 NA R A
1200 20743339000 KODAIKANAL 77.47 10.23 1900 1980.917 1900 1980.917 542.0993 NA R A
1203 20743369000 MINICOY 73.15 8.30 1931 2007.917 1931 2007.917 943.8923 NA R A
id name long lat start_raw end_raw start_adj end_adj dist pop urban lights
1859 22443476001 DIYATALAWA, SRI 81.00 6.80 1901 1980.917 1901 1980.917 76.9864 NA R A
1200 20743339000 KODAIKANAL 77.47 10.23 1900 1980.917 1900 1980.917 609.3207 NA R A
1203 20743369000 MINICOY 73.15 8.30 1931 2007.917 1931 2007.917 913.2765 NA R A
The figure below shows the annual temperature values of the three rural stations. Two of them end in 1980 and only one (Minicoy) continues to the present. I tested the proportion of years with at least 3 stations against the number of years of adjusted record and found that 2 of the 3 series failed the test. So it would be interesting to locate exactly where Hansen implements the 2/3 criterion in his code. I haven’t been able to do so yet. One also sees that, in this case, much depends on the Minicoy station as the only one continuing to the present.
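My reading of the stated criterion, as a quick Python sketch (the variable names are mine; exactly where, and whether, the actual code implements this is the open question):

```python
def rural_coverage(counts_by_year, min_rural=3):
    """Fraction of years in the candidate adjustment period that have
    at least `min_rural` rural neighbors reporting."""
    good = sum(1 for n in counts_by_year.values() if n >= min_rural)
    return good / len(counts_by_year)

def passes_two_thirds(counts_by_year, min_rural=3, xcrit=2.0 / 3.0):
    """Hansen et al 1999 as stated: an adjusted record is defined only if
    at least 3 rural neighbors are present for at least 2/3 of the period."""
    return rural_coverage(counts_by_year, min_rural) >= xcrit
```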
Now for today’s puzzle: showing the dset=1 and adjusted versions of Hambantota and Batticaloa. How does Hansen get such different-looking adjustments from identical rural comparanda? If anyone can do runs of the actual code for these stations and save any intermediate work, it would help. (Also Wellington NZ.)
113 Comments
I’d suggest that, in general, stations in mountainous areas be adjusted by “nearby” stations on a much more careful basis than those on the plains (say by similar monthly trend conformity). Those on the plains should be adjusted by some sort of correlation of location based on prevailing wind direction (high weather correlation) and that station location compared to mean jet stream location be used to decide just which “nearby” stations be used for adjustments (being north of the JS in the NH is quite different than being south of the JS wrt “climate”- especially on a month to month basis).
Take a place like California (or, for that matter, New Zealand, or Central Europe). The topography is like a washboard. It would be nearly impossible, without hairpulling, painful, station-by-station review, to have any hope that, for any given highland or lowland station, “nearby” stations would ever be all similarly highland or lowland, let alone have the same general climate characteristics as other highland or lowland stations. As an added complication, in a place like California, we actually have Marine West Coast, Mediterranean, Midlatitude semi-arid and arid, and Subtropical Arid all in one state. One part of the state could be getting blasted by cP while the other part is kicking back in mT. Personally, I think the whole thing is completely discredited unless you happen to be sitting equidistant from the Rockies and Appalachians, or in the middle of the Eurasian steppes. But in most places, you’re into a radically different climate within 1000 km.
Hi,
I think Mark and “not sure” got through Step 2, but just compiling and not so much understanding.
I stopped looking at it when John V’s stuff showed up, so I can go back now.
There is some weirdness in the code that doesn’t follow the documents (the XCRIT variable).
Mark was able to output stuff in the end, I believe.
It’s becoming quite unclear to me whether the whole annualization of daily weather data into climate data is appropriate in any way.
1, 2 – you’d think that that thought would cross the mind of a Ph.D climatologist. I have to seriously question the seriousness of the methodology.
Or am I assuming something? What is Hansen’s Ph.D. in?
Is there an adjustment to the correction based on the altitudes of the rural stations versus the urban station they are adjusting?
6, that’s supposed to be in there. The bigger question is, what do these stations have to do with each other? For example, let’s take Los Angeles, San Francisco, and Death Valley. Death Valley has more of an influence on LA than SF does under Hansen’s scheme. Which two of the three do you think are actually more related?
Seems like it would be a lot less confusing and contentious if y’all just labeled the knobs “adjust this whichever way you need to to get the answers you want”.
If truth is an issue, it would seem like just using the numbers as they are would let the random stuff balance out.
But we already know I am just a noisy kibitzer.
I mostly wanted to say “I’m glad y’all are working so hard on this. It is important. Could save us a bundle.”
More bafflegab from Hansen. It looks like Hansen is playing the community of skeptics like a violin, metaphorically a replica of the one Nero used.
Using stations as much as 1000 km apart is absurd.
How about generating a trend from all of the stations used in the global analysis individually? First, just the raw data. Then redo the trend lines using the various adjustments: TOB, site and instrument changes, FILNET, etc. Then do a statistical analysis of the trend-line slopes, and then a climatological (not a statistical) analysis of meteorological factors affecting each site.
My guess is that a major percent of the useful information will be derived from the raw data directly. At the very least, interested observers should get a chance to view the raw data before less than objective alterations are made.
RE: #8
“Using stations as much of 1000km apart is absurd.”
Especially when the weighting factor is linear. It seems to me that it would make more sense to use some sort of inverse square of the distance as a weighting factor, which would give much more weight to rural stations well within the 1000 km and much less to stations near the 1000 km limit.
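For concreteness, here is a sketch contrasting the linear taper described in Hansen et al 1999 with the kind of inverse-square weighting I have in mind. The inverse-square form, including the softening scale d0 that keeps the weight finite at zero distance, is purely hypothetical on my part.

```python
def linear_weight(d_km, radius_km=1000.0):
    """Hansen's stated scheme: weight falls linearly to zero at the radius."""
    return max(0.0, 1.0 - d_km / radius_km)

def inverse_square_weight(d_km, d0_km=100.0, radius_km=1000.0):
    """A hypothetical alternative: inverse-square decay, softened by d0_km
    so the weight is finite at d = 0, cut off entirely at the radius."""
    if d_km >= radius_km:
        return 0.0
    return 1.0 / (1.0 + (d_km / d0_km) ** 2)
```

The point of the comparison: near the 1000 km limit the inverse-square weight is far smaller than the linear one, so distant stations contribute much less.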
Sorry, I was referring to #9 instead of #8.
New Zealand varies greatly from one end to the other. Top of North Island is sub tropical and South Island is Alpine. Huge differences in climate and you can feel them just driving up or down country.
RE 6.
The only adjustments made for altitude are made when stations change altitude.
I think you misunderstand what Hansen is doing. I will illustrate with a simple case.
Assume a station at sea level. Assume its temperature never changes; call it X.
Now, measure the temperature 1 km above this same spot. The ABSOLUTE temp at 1 km higher will be 6.5C lower than X, due to the lapse rate.
The question Hansen asks is not what the absolute temp is, but what the trend is. So, in his model you do not need to correct for altitude EXCEPT in those cases where the altitude changes during the recording.
So, if my sea level site is moved up to 500M then I need to adjust. And
if it moves up to 1000m then I need to adjust again.
He isn’t after absolute temp. He is after the change. For a while I played with removing altitude biases; trends don’t change, obviously, but errors get smaller.
RE 7.
Trend, trend, trend. That is all Hansen cares about. It’s not about causation.
His ’87 study claims a .5 R² correlation at 1200 km. That’s the analysis that needs to be investigated. Clearly 1200 km or 1000 km is not the best in all circumstances; in fact Hansen noted a difference between NH correlation at 1200 km and SH correlation.
Weather morphs into climate. I’m not sure of the dimensions of the average weather system (they may vary strongly by latitude), but it would seem to be on the order of a 300 to 600 mi. (500 to 1000 km) radius. That’s an off-the-top-of-my-head estimate; I’ll have to do slightly more study to come up with even an educated cocktail-napkin estimate.
Re #13
“The ABSOLUTE temp at 1KM higher will be X-6.5C lower due to the lapse rate.”
That’s assuming you move straight up in open space. But is that the same as moving to a location on land which is at the same elevation? Surely there are differences in temperature measured in free space at a particular altitude above sea-level, and a temperature measured on land that is at the same elevation. Does altitude adjustment account for this difference?
Let me get this straight;
The urban temp for St. Louis would be adjusted from Duluth, MN to New Orleans?
From Sault Ste Marie to Shreveport…Roanoke Va to Limon Co….Augusta Ga to
Rapid City S.D.???
What in the world are these folks thinking?
I may be just an old truck driver, but I can do a damn sight better than that.
There are three distinct weather patterns in Missouri, Illinois, Ohio,and to
a lesser extent Indiana.
In our regional weather Springfield Ill is usually quite different from St.
Louis and it’s only 90 mi NE.
Anyone who has ever driven a truck or flown for a living
could give a way better method than this.
The obvious answer is that the GISS adjusted values that are published are the input values for this step, not the output values. I would not guarantee that this is followed for every station.
If anybody can help me create WordPress-readable tables directly from R, I’d appreciate it. WordPress tables require HTML tags and quotation marks, and I can’t get R to output WordPress-readable tables; hence the many ragged tables.
re 16.
From what I understand, lapse rates do vary from place to place. But Hansen uses an estimate of the average lapse rate: 6C per km. Again, suppose you have a station on land at sea level where temp is recorded for 50 years. Let’s say the average is 14C. Then they move the station up a hill, say 100 m high. The average lapse rate would suggest that the site would then report temps of 13.4C. Hansen uses altitude to adjust for station changes only. He would cool the first 50 years of the record by 0.6C.
Since he is looking for trend and not some absolute temp, the approach makes sense. Otherwise you would see a false, non-climatic cooling at that station. The opposite would happen if you moved the site down the hill later.
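The worked example above (sea level to 100 m, cooling the earlier segment by 0.6C) can be sketched as follows. The 6C/km figure is the average lapse rate I’m attributing to Hansen here, and the function name is mine.

```python
LAPSE_RATE_C_PER_KM = 6.0  # average lapse rate assumed in this comment

def adjust_for_station_move(temps_c, old_alt_m, new_alt_m):
    """Shift the pre-move segment of a record so it is comparable with
    readings taken at the new altitude, using a constant average lapse
    rate.  Only the step change matters: trends within each segment
    are untouched, because every value shifts by the same amount."""
    delta_c = (new_alt_m - old_alt_m) / 1000.0 * LAPSE_RATE_C_PER_KM
    return [t - delta_c for t in temps_c]
```

With the numbers above: a 14.0C average at sea level becomes 13.4C after adjusting for a move 100 m up the hill.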
re 17.
The weather is different, to be sure, but Hansen is interested in the long-term climate and how the averages change over time. So what matters to him is how things average out. A simple example: let’s say that Duluth had an average Jan temp of 8C in 1900 and in 2006 it averaged 9C, while St. Louis was 12C in Jan of 1900 and 13C in Jan of 2006. Hansen “averages” these figures to see what the average trend in temperature is.
Re 20.
Thanks for your explanation, but my question was really whether a lapse rate applied to “elevated land” is legitimately applied. Lapse rate is defined for open space atmosphere, is it not? But doesn’t the atmosphere somewhat follow the contour of the land? In that regard, is lapse rate applied correctly when it is applied to a land surface that is elevated?
19,
I’m not familiar with R, but could you just do a Print Screen?
I need someone to explain this to me: if the urban heat island effect is so significant as to require adjusting the temperature data, why use it at all? Why not run only the rural data, without the urban areas, and use that as a baseline for a graph? Then run the urban areas and compare the two to determine the heat island effect. Has anyone done this? I would like to see what the real GAT graph looks like without all this adjusting and tweaking. While someone would object to not having all the areas (60 x 60 mile squares) to determine the GAT, having corrupted data is worse than having none at all. At least when you average the data you have, you can account for the areas you don’t have; think of it the way Microsoft Excel averages a group of cells: if a cell is blank, it disregards the cell when determining the average or mean, depending on the function.
Re #13
Then I’m confused by the graphs for Hambantota, which appear to be in Celsius. Is Hansen using the trends to adjust the actual temps and then creating the trend for the urban station from that, or is he using the trends to adjust the trend of the urban station directly?
No matter how he’s doing it, it doesn’t seem reasonable to assume that the trends at different latitudes and altitudes would be the same, except over very long time scales (much longer than Hansen has data for).
SteveM,
Will R output XML? If so just use xslt to get your data from xml to html. For simple transforms you can come up to speed on xslt in a day or two.
It appears that you can output XML but would have to install an add-on package, then define a DTD for your output and use that to tell R how to generate the XML. From there you have a file that you can manipulate with xslt, not to mention having your output in the most flexible cross-platform/application format around.
Does anyone know if R has been formally adopted for use as Open Source Software in a DOD or a DOE project somewhere?
Re: 14
Here’s the paper Steve Mosher is referring to:
Hansen, J.E., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372.
Abstract:
We analyze surface air temperature data from available meteorological stations with principal focus on the period 1880-1985. The temperature changes at mid- and high latitude stations separated by less than 1000 km are shown to be highly correlated; at low latitudes the correlation falls off more rapidly with distance for nearby stations. We combine the station data in a way which is designed to provide accurate long-term variations. Error estimates are based in part on studies of how accurately the actual station distributions are able to reproduce temperature change in a global data set produced by a three-dimensional general circulation model with realistic variability. We find that meaningful global temperature change can be obtained for the past century, despite the fact that the meteorological stations are confined mainly to continental and island locations. The results indicate a global warming of about 0.5-0.7°C in the past century, with warming of similar magnitude in both hemispheres; the northern hemisphere result is similar to that found by several other investigators. A strong warming trend between 1965 and 1980 raised the global mean temperature in 1980 and 1981 to the highest level in the period of instrumental records. The warm period in recent years differs qualitatively from the earlier warm period centered around 1940; the earlier warming was focused at high northern latitudes, while the recent warming is more global. We present selected graphs and maps of the temperature change in each of the eight latitude zones. A computer tape of the derived regional and global temperature changes is available from the authors.
—
I wonder if you can still get the computer tape :^)
Hansen says:
Can anyone help me identify where within his code he decides what the “period being adjusted” is?
Re: 30
Hi Steve,
It looks like all of the logic for processing the rural stations within 1000 km of an urban station is in PApars.f, which is in the STEP2 subdirectory. Here are the comment lines at the top of the FORTRAN listing:
C*********************************************************************
C ***
C *** Input files: 31-36 ANN.dTs.GHCN.CL.1 … ANN.dTs.GHCN.CL.6
C ***
C *** Output file: 78 list of ID’s of Urban stations with
C *** homogenization info (text file)
C *** Header line is added in subsequent step
C*********************************************************************
C****
C**** This program combines for each urban station the rural stations
C**** within R=1000km and writes out parameters for broken line
C**** approximations to the difference of urb. and comb.rur series
C****
C**** Input files: units 31,32,…,36
C**** Record 1: I1,INFO(2),…,INFO(8),I1L,TITLE header record
C**** Record 2: IDATA(I1–>I1L),LT,LN,ID,HT,NAME,I2,I2L station 1
C**** Record 3: IDATA(I2–>I2L),LT,LN,ID,HT,NAME,I3,I3L station 2
C**** etc. NAME(31:31)=brightnessIndex 1=dark->3=bright
C**** etc. NAME(32:32)=pop.flag R/S/U rur/sm.town/urban
C**** etc. NAME(34:36)=country code
C****
C**** IDATA(1) refers to year IYRBEG=INFO(6)
C**** IYRM=INFO(4) is the max. length of an input time series,
C****
C**** INFO(1),…,INFO(8) are 4-byte integers,
C**** TITLE is an 80-byte character string,
C**** INFO 2 = KQ (quantity flag, see below)
C**** 3 = MAVG (time avg flag: 1 – 4 DJF – SON, 5 ANN,
C**** 6 MONTHLY, 7 SEAS, 8 – 19 JAN – DEC )
C**** 4 = IYRM (length of each time record)
C**** 5 = IYRM+4 (size of data record length)
C**** 6 = IYRBEG (first year of full time record)
C**** 7 = flag for missing data
C**** 8 = flag for precipitation trace
C****
C**** The combining of rural stations is done as follows:
C**** Stations within Rngbr km of the urban center U contribute
C**** to the mean at U with weight 1.- d/Rngbr (d = distance
C**** between rural and urban station in km). To remove the station
C**** bias, station data are shifted before combining them with the
C**** current mean. The shift is such that the means over the time
C**** period they have in common remains unchanged. If that common
C**** period is less than 20(NCRIT) years, the station is disregarded.
C**** To decrease that chance, stations are combined successively in
C**** order of the length of their time record.
C****
C
Hansen says:
An adjusted urban record is defined only if there are at least three rural neighbors for at least two thirds of the period being adjusted.
In PApars.f, here are the parameters associated with:
* “at least three rural neighbors” –> NRURM
* “at least two thirds of the period being adjusted” –> XCRIT
C?*** Earth radius, lower overlap limit, min.coverage of approx.range
PARAMETER (REARTH=6375.,NCRIT=20,NRURM=3,XCRIT=2./3.)
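As a cross-check on the comment block above, the combining scheme it describes (combine longest record first; shift each incoming station so the common-period mean is unchanged; disregard stations with less than NCRIT = 20 years of overlap; blend with distance weight 1 − d/1000) might be sketched like this. This is my reading of the comments, not a translation of the Fortran, and all names are mine.

```python
NCRIT = 20  # minimum overlap in years, per the PARAMETER statement

def combine_rural(stations):
    """stations: list of (series, weight) pairs, where series maps
    year -> anomaly and weight would be 1 - d/1000 for distance d km.
    Combines successively, longest record first; each new station is
    shifted so its mean over the years shared with the running mean
    is unchanged, then blended in with its weight."""
    stations = sorted(stations, key=lambda sw: len(sw[0]), reverse=True)
    mean, wsum = {}, {}
    for series, w in stations:
        common = [y for y in series if y in mean]
        if mean and len(common) < NCRIT:
            continue  # too little overlap: station disregarded
        if common:
            shift = (sum(mean[y] for y in common) / len(common)
                     - sum(series[y] for y in common) / len(common))
        else:
            shift = 0.0  # first station sets the reference level
        for y, v in series.items():
            m0, w0 = mean.get(y, 0.0), wsum.get(y, 0.0)
            mean[y] = (m0 * w0 + w * (v + shift)) / (w0 + w)
            wsum[y] = w0 + w
    return mean
```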
RE 30..
Mark and I went through that code. There was a bug.
Here it is; XCRIT is the 2/3 variable.
I put some comments in for you.
195 CONTINUE
IF(N3.LT.NCRIT) THEN ! N3 = good years, NCRIT = 20 years SM
IF(RBYRC.NE.RBYRCF) THEN ! First he tries 500KM to find rural neighbors SM
write(*,*) 'trying full radius',RngbrF
RBYRC=RBYRCF
CSCRIT=CSCRIF
Rngbr=RngbrF
GO TO 125 ! retarded programming structure SM
END IF
WRITE(79,'(a3,i9.9,a13,i5,a15,i5,a50,a5)') CC(NURB),IDU(NURB),
* ' good years:',N3,' total years:',N3L-N3F+1,
* ' too little rural-neighbors-overlap - drop station',' 9999'
! The urban station in question can't be adjusted, so it is DROPPED SM
GO TO 200
ELSE IF(FLOAT(N3).LT.XCRIT*(N3L-N3F+1.)) THEN ! IF GOOD YEARS LT 66% SM
IY1=N3L-(N3-1)/XCRIT ! Clip the early years. SM
WRITE(79,'(a3,i9.9,a17,i5,a1,i4)')
* CC(NURB),IDU(NURB),' drop early years',1+IYOFF,'-',IY1-1+IYOFF
GO TO 191
ELSE
TMEAN=TM3/NXY3
NXY=NXY3
CALL GETFIT(15)
C**** Find extended range
IYXTND=NINT(N3/XCRIT)-(N3L-N3F+1)
write(*,*) 'possible range increase',IYXTND,N3,N3L-N3F+1
n1x=N3F+IYOFF
n2x=N3L+IYOFF
IF(IYXTND.lt.0) stop 'impossible'
if(IYXTND.gt.0) then
LXEND=IYU2-(N3L+IYOFF)
IF(IYXTND.le.LXEND) then
n2x=n2x+LXEND
else
n1x=n1x-(IYXTND-LXEND)
if(n1x.lt.IYU1) n1x=IYU1
n2x=IYU2
end if
end if
write(78,'(a3,i9.9,2f9.3,i5,5f9.3,I5,a1,I4,i5,a1,i4)') CC(nurb),
* idu(nurb),(fpar(i),i=1,2),nint(fpar(3)+X0),(fpar(i),i=4,6),
* (rmsp(i),i=1,2),N3F+IYOFF,'-',N3L+IYOFF,N1X,'-',N2X
END IF
Hope this helps
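For what it’s worth, here is a Python paraphrase of the branch logic in the Fortran above. The variable names follow the Fortran; the action strings are mine. Note the integer truncation in IY1=N3L-(N3-1)/XCRIT, which the sketch reproduces.

```python
NCRIT, XCRIT = 20, 2.0 / 3.0  # parameters from PApars.f

def screen_overlap(n3, n3f, n3l):
    """Paraphrase of the Fortran screening branch:
    n3  = number of 'good' years (enough rural-neighbor overlap),
    n3f / n3l = first / last good year.
    Returns an action string plus, when early years are clipped,
    the first year kept."""
    total = n3l - n3f + 1
    if n3 < NCRIT:
        return ("drop station", None)       # too little rural overlap
    if n3 < XCRIT * total:
        iy1 = n3l - int((n3 - 1) / XCRIT)   # IY1=N3L-(N3-1)/XCRIT, truncated
        return ("drop early years", iy1)
    return ("adjust", n3f)
```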
Re#21
I’m sorry Steven I possibly misread;
“An adjusted urban record is defined only if there are at least three rural neighbors for at least two thirds of the period being adjusted. All rural stations within 1000 km are used to calculate the adjustment, with a weight that decreases linearly to zero at distance 1000 km.”
Reads to me that rural data is being used to adjust urban
data.
It seems I’m not the only one suggesting that the whole notion is confusing.
It seems that the authors of #1, 2, 7, 9, 10, 12, and 24 are laboring under the same misunderstanding, if it is indeed one.
#22 brings up a good point: altitude. How can temp readings from Denver, CO, the mile-high city, be incorporated into the GAT when air temp drops 3 to 5F per 1000 ft? Is Denver’s air temp adjusted up by over 15F to fit the rest of the US at sea level?
What about Antarctica? “The US South Pole station (0° east, 90° south, 2835 m elevation) is the most well characterized site on the Antarctic Plateau.” http://www.journals.uchicago.edu/PASP/journal/issues/v116n819/204045/204045.html
How can you average in temps from different altitudes? Isn’t that like mixing apples and oranges?
#33. OK, got it. In the Christchurch NZ example, the count of rural stations hits 3 in 1951 and the adjusted version begins then. The number of rural stations decreases to 2 in 1989 and 1 in 1991.
It seems to be asymmetrical: the adjusted version starts only when there are 3 rural stations, but it continues with only 1 rural station. The critical value, as I read it, is defined only over the period from the first year with 3 stations to the last year with 3 stations; in effect it measures the proportion of intermediate years when the number dips below 2, and has no QC check against unavailability of late rural stations.
I’m sure this is intentional as there is a widespread change in character of the dataset to more urban stations after 1990.
#35. Orland is classified as both “R” and “bright”. In the US, he would adjust it as a “bright” site so that would explain your conundrum. The unlit criterion doesn’t seem to work too badly in the U.S. In effect, the only stations that appear to “matter” for the U.S. trend are the “unlit” ones – which is why Hansen’s US trend is different from NOAA and CRU. I don’t get the sense that his R-criterion in the ROW is working as well.
#36. They use deltas from an average. It’s not an issue.
RE 22
You wrote:
“Thanks for your explanation, but my question was really whether a lapse rate applied to elevated land
is legitimately applied. Lapse rate is defined for open space atmosphere, is it not?
But doesnt the atmosphere somewhat follow the contour of the land?
In that regard, is lapse rate applied correctly when it is applied to a land surface that is elevated?”
Well color me yellow and hit me cross court. I can only tell you
what Hansen does and the general theory behind it. I’m not defending
his approach ( he does that just fine) I’m just trying to burn down
strawmen. You’ll gain no ground if you mischaracterize or misconstrue his
positions. ( psst sometimes I do that for fun just to piss em off)
Now, I have some issues with Hansen’s lapse rate adjustments. He documents two of them. It’s way down the list of things to focus on, BUT knock yourself the hell out. There is tons of stuff to look at.
I believe he makes mention of it in his ’99 paper. Bug me more if you can’t find it and I’ll post the quotes. Or just search on CA for my name and St. Helena.
Re 38, the ‘bright’ is most likely caused by the large truck stops that service I-5. Anthony could probably give you an exact count.
RE: #7 – SF is in Sunset Western Garden Guide’s Climate Zone 17, and LAX is in Climate Zone 24. Death Valley is in, I believe, Zone 12 (would need to double check it). 17 and 24 are actually the two most similar Climate Zones in that they both are heavily marine influenced, Mediterranean and have the warmest winter temps for their latitude bands. In 17 the coastal stratus “feels” colder as do onshore winds. Both are classic California “beach climates.” Both are nearly frost free. 17 generally gets more rain (areal range 15 – 60 in) than 24 (areal range 9 – 40 in). There is a higher incidence of Santa Ana (compressive, dry, warm, offshore wind) outbreaks in 24. Death Valley is like another planet in comparison to either. Hottest warm season temps in CONUS for many days out of the year, zero marine influence and almost no rain or RH.
Further research indicates R is used quite extensively in DOD. Therefore, it likely has official standing as O.S.S. that has been developed and maintained using an acceptable software lifecycle development methodology.
RE 38 & 40
ORLAND.. BRIGHT? Well yes, if you Look at Hwy 5 and the crap right around
the exit there, yes, you have lights. Lights but no fricking people.
Oh, I forgot about the Olive stand. 2 people selling Olives. and a dog.
with three legs. and half a dozen fleas.
Stave,
Can we solidify and document the conclusions of Stage 1 so we have a more-or-less agreed base from which to move to Stage 2? Geoff.
Sorry, “Steve”. Typo.
You’ll probably want to throw some test data sets at Hansen’s instantiation and yours.
Some very simple tests come to mind.
I wonder if it’s not time for someone here to contemplate auditing the data being produced by NSIDC and Cryosphere Today, especially in light of the changes made earlier this year and just recently to Antarctica (without any valid explanation, except “software glitch”).
Larry, #5:
“Or am I assuming something? What is Hansen’s Ph.D. in?”
From Wikipedia:
—————————————————–
It sounds like Hansen began to suspect (invent?) AGW in the early ’70s. That’s his story and he’s sticking to it.
Theduke, the questions regarding the possible effects of CO2 on the earth’s climate were raised by Guy Stewart Callendar back in the 1930’s. Link. If anything, Callendar was the visionary.
You sure you don’t mean the Hansen two step?
Hoping to help with Step 2, I installed a FORTRAN system on my computer and began looking at the code. All the FORTRAN is compilable, but I am puzzled by the first subprogram, “text_to_binary.f”.
The input file, v2.inv, contains lines of 106 characters, such as:
10160355000 SKIKDA 36.93
6.95 7 18U 107HIxxCO 1x-9WARM DECIDUOUS C 49
(which I have broken here into two equal parts for legibility).
To read this file, the program has a loop containing these lines (and others following):
read(1, '(a64)') line
read(line(2:64), '(i4,i5,a12,i4,a36)') lat, lon, sid, ht, name
But, pretty clearly, this gives the following results:
lat = 0160
lon = 25500
sid = '0 SKIKDA '
and soon crashes on “illegal numerical input”.
I must be reading the wrong input file.
Can anyone point me to, or provide, the correct input file?
Also, there must be other input files to Step 2, which I do not find in Hansen’s release. Can any one provide those?
Deech56 said: "If anything, Callendar was the visionary."
If anyone could prove that with something other than what Lindzen calls the “lassitude argument,” that would be an accurate statement.
Deech56
Are you the same deech56 that claimed over on Rabbett that SteveMc wrote a post
on Tamino? And then bashed CA for being too “detail” oriented?
You realize that UC wrote the post. Detail
re 51.
You can’t, as far as I know, start with Step 2. The file you are reading is most likely modified in Steps 0 and 1.
Hansen’s pre-AGW work was on the atmosphere of Venus. Manabe was probably the visionary when it came to climate models. He started his work in the ’60s.
New Zealand.
As a part-timer more able in tactics than in stats, I’m getting confused by the terminology of the various data sources and their adjusted acronyms world-wide.
I’m puzzled by Wellington. Approximately, both North and South Island in NZ are 1000 km long. Wellington sits in the middle, so all stations are within 1200 km of Wellington. So is a lot of sea, so is a landform change from mediterranean sea-level plains to cold alpine glacial. East coast climate is rather different to west coast. The detail required to apply altitude lags etc to even this small country is formidable. The raw climate records go back 100 years in many places. Meta data should be fairly good. There are few large cities, so UHI corrections should be able to be applied in considerable detail. In short, it’s a neat little laboratory country. Nearly every one of perhaps 100 stations should be available as rural.
How about using NZ as a showcase example of how adjustments affect raw data? I see more and more writers seeking this type of comparison.
Did NASA scientist James Hansen, the global warming alarmist in chief, once believe we were headed for . . . an ice age?
He tried to make money off a coming Ice Age.
That didn’t work.
AGW seems to be working just fine.
Re #57:
Robert Frost
55,
It kind of makes sense that someone who is successful at modeling the greenhouse effect on Venus would want to take what he learned there and move on to Earth. The only problem is that there are no phase changes on Venus, which makes it a much, much easier problem. That also means there’s no feedback effect.
Moving on to the Earth is non-trivial, and I think that we can all draw our own conclusions regarding what happened when he tried.
I think he joined William Shockley and Linus Pauling and a long list of others who refused to recognize when they were out of their depth.
Steve, HTML formatting is standard in R via the R2HTML package (examples below). You can automagically generate layouts with tables & graphs, then cut & paste the HTML code into WordPress.
library(R2HTML)  # provides HTML() and HTMLInitFile()
tmpfile="t1.html"
xy=cbind(c(1981:2000), c(1:10))
HTML(xy, file=tmpfile)
Another example, using CSS formatting to harmonize your presentations:
tmpfic=HTMLInitFile(tempdir(),CSSFile="http://www.stat.ucl.ac.be/R2HTML/R2HTML.css")
data(iris)
HTML(as.title("Fisher Iris dataset"),file=tmpfic)
HTML(iris, file=tmpfic)
browseURL(tmpfic)
Download documentation with examples here:
http://www.stat.ucl.ac.be/R2HTML/R2HTML_1.51.zip
FYI ….
The ‘Old’ Consensus?
INVESTOR’S BUSINESS DAILY
Posted 9/21/2007
Climate Change: Did NASA scientist James Hansen, the global warming alarmist in chief, once believe we were headed for . . . an ice age? An old Washington Post story indicates he did.
On July 9, 1971, the Post published a story headlined “U.S. Scientist Sees New Ice Age Coming.” It told of a prediction by NASA and Columbia University scientist S.I. Rasool. The culprit: man’s use of fossil fuels.
The Post reported that Rasool, writing in Science, argued that in “the next 50 years” fine dust that humans discharge into the atmosphere by burning fossil fuel will screen out so much of the sun’s rays that the Earth’s average temperature could fall by six degrees.
Sustained emissions over five to 10 years, Rasool claimed, “could be sufficient to trigger an ice age.”
Aiding Rasool’s research, the Post reported, was a “computer program developed by Dr. James Hansen,” who was, according to his resume, a Columbia University research associate at the time.
So what about those greenhouse gases that man pumps into the skies? Weren’t they worried about them causing a greenhouse effect that would heat the planet, as Hansen, Al Gore and a host of others so fervently believe today?
“They found no need to worry about the carbon dioxide fuel-burning puts in the atmosphere,” the Post said in the story, which was spotted last week by Washington resident John Lockwood, who was doing research at the Library of Congress and alerted the Washington Times to his finding.
Hansen has some explaining to do. The public deserves to know how he was converted from an apparent believer in a coming ice age who had no worries about greenhouse gas emissions to a global warming fear monger.
This is a man, as Lockwood noted in his message to the Times’ John McCaslin, who has called those skeptical of his global warming theory “court jesters.” We wonder: What choice words did he have for those who were skeptical of the ice age theory in 1971?
People can change their positions based on new information or by taking a closer or more open-minded look at what is already known. There’s nothing wrong with a reversal or modification of views as long as it is arrived at honestly.
But what about political hypocrisy? It’s clear that Hansen is as much a political animal as he is a scientist. Did he switch from one approaching cataclysm to another because he thought it would be easier to sell to the public? Was it a career advancement move or an honest change of heart on science, based on empirical evidence?
If Hansen wants to change positions again, the time is now. With NASA having recently revised historical temperature data that Hansen himself compiled, the door has been opened for him to embrace the ice age projections of the early 1970s.
Could be he’s feeling a little chill in the air again.
Although it’s off topic, I thought readers might be interested:
http://www.investors.com/editorial/editorialcontent.asp?secid=1501&status=article&id=275267681833290
Looks like the good doctor was in on the Great Coming Ice Age fear mongering of the ’70’s.
I use a programs like Powerpoint and Excel to spread my AGW alarmism. Who knew that Bill Gates was in on the scam?
My guess is that he is going to push whichever climate agenda would bring climate scientists greater notoriety and influence (not to mention the increases in research grants). He was trying to make a name for himself then and he is trying to make a name for himself now. The one thing I am sure he would never go for is the notion that everything is going to be ok and there really is nothing that people can do about it. Notice that in both cases it was human caused. Humans increasing the particulates in the former, and humans with CO2 in the latter. I don’t believe he would ever go along with the idea that climate varies according to factors outside the control of humans (evidence of past climate variation notwithstanding). He wants influence, plain and simple. If there is little that people can do about it, he can have little influence in their actions (or be of little support to those who might want to have such influence).
You guys are getting a little freaky. A little too much Hansen on your brain. I would recommend getting back to analyzing the data. That is honest work.
In any case, I’m guessing that Hansen doesn’t drive the debate anymore. The Arctic ice extent data is far more influential in the public eye than the surface temperatures these days. A program of auditing the NSIDC seems just as important.
65, OTOH, the Arctic ice is a much easier argument. It has little to do with air temperatures, and everything to do with this:
In other words, it’s not related to global warming, it’s related to oceanic anomalies.
And at the same time that this is happening, Antarctic ice is at record levels, or at least it was until they Hansened the data (did I just coin “Hansen” as a verb?).
RE: steven mosher September 22nd, 2007 at 12:28 pm
“Deech56
Are you the same deech56 that claimed over on Rabbett that SteveMc wrote a post on Tamino?”
Why yes, that would be me. Thanks for asking. Yep, my bad. But the points I was trying to make were: 1. Are reports that do not come to the same conclusion as the consensus given the same level of scrutiny around these parts as the ones by Hansen, Mann and that crowd? And 2. What’s the context for all of this?
I still wonder what the purpose of this “auditing” is. The main question should be “Is the temperature of the earth changing?” The second question should be “How do we use the imperfect information we have to find out if the earth’s temperature is changing?”
You can speculate about the motives of Dr. Hansen all you want (talk about a faith-based effort) but these questions loom larger than any of this.
Hi Deech56.
First, generally speaking, there are folks who like the details and folks who like the big picture or “context”, as you described it on Tamino’s site. I see value in both. You seemed intent on criticizing SteveMc for not discussing Schwartz, and then tried to say “he had time enough to post on Tamino”. Spend some time here. If you steer clear of the ad homs you will have some fun.
So, you got the facts wrong. It happens. The other thing you miss (not your fault, because you don’t hang here) is the importance Tamino’s discussion of AR(1) has to this community and its history.
Generally speaking, if you look around (like on Unthreaded) you’ll see that discussions of CO2 and climate sensitivity are put on the back burner. Why? Because SteveMc has been looking for the right article. He asked Gavin for something. No response.
Another thing. There is a certain proclivity here to test what is accepted. I’ll give you an example: Parker’s paper on UHI. Everybody cites it, but there are major issues with it. Same with Peterson’s paper on UHI. Like Gavin says, peer review is necessary but not sufficient. We like to do the sufficient part.
So, when Schwartz’s paper comes out, and he’s got all these caveats, and it’s quickly rejected as wrong (Annan was on it pretty fast and pointed out bonehead things), then where’s the fun in that?
SO. On to your reply, for some loose ends:
“1. Are reports that do not come to the same conclusion as the consensus given the same level of scrutiny around these parts as the ones by Hansen, Mann and that crowd?”
Personally, I don’t read much “denialist” stuff and figure RC and company can handle it just fine. Once in a while folks here have posted stuff of that nature. If the claims are bogus, the stat guys who dominate here will throw the BS flag.
“2. What’s the context for all of this?”
I think SteveMc put it best when he talked about crossword puzzles. That’s his slant. If you start with his hockey stick studies you’ll see how it’s all connected: a network of ideas.
1. Misuse of statistics by people who won’t consult experts (Mann).
2. The refusal to share data and methods.
3. The temperature record that Mann used: its data and methods.
4. Other proxies, then
So it grows from that seed. Not so hard to understand. The CO2 stuff becomes interesting if there is a neat statistical issue.
“I still wonder what the purpose of this auditing is.”
Why do you balance your checkbook?
“The main question should be: Is the temperature of the earth changing?”
Of course it’s changing. Changing as we speak. You mean to ask (detail thing again): is there a temperature trend that is increasing, and increasing in a way that is abnormal, and how confident can we be in this?
“The second question should be: How do we use the imperfect information we have to find out if the earth’s temperature is changing?”
Well, first we ask people who are experts in imperfect information. Dr. Hansen is not a professional statistician. His programmers are not software development experts. He is not an economics expert. So, before we decide that we MUST use this imperfect information, we characterize it. We audit it. We see how much we know, and how well we know it.
You can speculate about the motives of Dr. Hansen all you want (talk about a faith-based effort) but these questions loom larger than any of this.
Re: 65 and 66
Erik
In general terms, I agree with your first observation and suggestion.
I have also made the point that Larry highlights in #66 more than once on different blogs. Based on the best available data and understanding of the cyclical nature of what’s going on in the Arctic, the ratio [in terms of actual effect] of ocean water temps vs. air temps/albedo, etc., is as high as 80:20. With the onset of a La Niña in the Pacific basin and the Atlantic due for a shift from warmer to colder, it will be most interesting indeed to see how long the ice anomalies in the Arctic persist. Meanwhile, mainstream media are studiously avoiding the fact that Antarctica is refusing to follow suit and is not warming.
RE 68: Oops, deech56, I meant your comment on Rabbett’s site about Tamino.
Details, see?
OK, I am embarrassed to ask this question, but where is the raw data? I have some algorithms developed for examining time-series data for anomalies that could be applied to this data, I think. Another question: does the raw or adjusted data (I don’t care which for now) contain 365 days, twice daily? If the data is in a binary format, is it documented anywhere?
RE 71.
What do you mean by raw? SteveMc has a page on data sources. As to the data’s “rawness”, JerryB is a good source, or you can read the USHCN and GHCN documents online. SteveMc can also address it.
Re #65
It would be nice if the media mentioned that warming in the Arctic occurred in 1920-1940 time period as well.
http://www.arctic-warming.com/what-offers-modern-science.php#attb
“The huge warming of the Arctic, which started in the early 1920s and lasted for almost two decades, is one of the most spectacular climate events of the 20th century. During the peak period of 1930-1940, the annually averaged temperature anomaly from the area 60°N-90°N amounted to around 1.7°C.”
SUV’s?
Away for the weekend at the cottage – glorious weather at the lake. About 250 comments since I looked last. Glad to see that people were pretty civil.
My favorite story of the day:
Thanks to Gore and Hansen, food is going to be a lot more expensive because they want us to burn corn … and it will produce more greenhouse gases.
#74
If the graph in #66 is to be believed, the anomalies are now 5 degrees C. This seems to be off the chart in terms of excursions from normal temperature. Beats looking for .5 degree anomalies with .2 degree errors.
Are you saying that the two anomaly values, now and in the 1930’s, are similar? I don’t think so, and I don’t think the Northwest Passage was freely open in the 1930’s as it is now. At least no successful voyages were reported.
I’ve been trying to promulgate the viewpoint in this blog that the signal for warming for both ocean and land temperatures in the Arctic is indisputable and unprecedented in more than 150 years. I challenge the audit team to prove me wrong. Going after the temperate zone data just seems painfully difficult in comparison – all those damn stations and their UHI problems.
Another bonus is that you don’t have to worry about Hansen. This isn’t his area of expertise.
Re#76:
Did the UK Timesonline story you quoted really mean “Nitrous Oxide”?
If that’s so it should make everyone happy. Nitrous Oxide [N2O] is commonly known as laughing gas. Perhaps they meant other oxides of Nitrogen that are a bit more stable.
#77. On an earlier occasion, http://www.climateaudit.org/?p=1266 , I looked at Hansen’s 64N-90N compilation. Levels from the 1930s were very high. He had made 1937-38 cooler in the past 10 years, and 2005 was the first year that had exceeded levels in the 1930s. Can someone check to see whether NOAA had data in the 1930s?
Re:#78
Yes. Nitrous oxide is actually quite stable, unlike NO and NO2. As a ghg, the strong absorption at 7.8 micrometers is most significant. It is a good oxidizer at high temperature. For elements with stable oxides, a nitrous oxide/acetylene flame is the preferred choice for flame atomic absorption/emission spectrometry. Of course, almost everybody uses plasma emission now.
#74
“The area covered by sea ice in the Arctic has now (September 14, 2007) shrunk to its lowest level since satellite measurements began nearly 30 years ago”
Gee … wasn’t it 30 years ago that the earth was in a period of cooling so profound that Hansen was predicting an ice age? Sure, it has warmed since then. But picking 1979 as a baseline is ridiculous.
http://mclean.ch/climate/Arctic_1920_40.htm
“The decrease of sea ice amounts in 1920–1940
The area of ice in the Greenland Sea in April–August of 1921–1939 was 15–20% less than in 1898–1920 (data of Karelin).
In the Barents Sea the area of ice was 12% less in 1920–1933 than in 1898–1920 (data of Zubov).
Vise pointed out that since 1929 the south part of the Kara Sea in September was free of ice, while in 1869–1928 the possibility of meeting ice there in September was about 30%.
The polar ice very often came close to the coast of Iceland in the last century and in the beginning of this century. During 1915–1940 the situation changed: no ice was observed in that region; negligible amounts of polar ice were noticed there only in 1929.
The thickness of ice determined during the Fram cruise was 655 cm; during the Sedov cruise it decreased to 220 cm (the reason for this was more intensive summer melting of ice).
Before Arctic warming, the strait of Jugorsky Shar froze near the 24th of November, but in 1920–1937 it became frozen two months later, in January.”
“The sailing conditions in the Arctic region became much more favorable in 1920–1940. This can be proved by the following cruises:
* Knipovich, 1932 (round Franz-Joseph Land)
* Sibiryak, 1932 (round Severnaya Zemlya)
* sailing of non-icebreaking ships along North Sea Route in 193no ice met
* possibility for non-icebreaking ships to double Novaya Zemlya every year since 1930.
The severe conditions of navigation in previous years can be proved by the following cruises:
* In 1912, the ship Foka, a member of the Sedov expedition, could not reach Franz-Joseph Land.
* In 1912, the ship St. Anna, a member of the Brusilov expedition, was trapped in ice near Yamal and carried out with the ice to the central Arctic.
* In 1901, the icebreaker Ermak failed to double Novaya Zemlya.”
#74, Erik Ramberg said: “I don’t think so, and I don’t think the Northwest Passage was freely open in the 1930’s as it is now. At least no successful voyages were reported.”
Well, who knows. The absence of passages is not evidence that it was unpassable. It was possibly more an indicator that the passage had been tried and reported to be somewhat challenging. Maybe the world was experiencing something that made the challenge of the unknowns in the Arctic less tantalising than before? Some kind of … recession … perhaps? Who knows.
In 1940, the Passage was navigated successfully by the St. Roch, a Canadian vessel. No easy task for sure, but still passable. Still, it can’t be taken as evidence that the ice cover was larger or smaller than today. Just that someone actually made the passage.
As I gather from scarce news reports, even today the Passage is not a very pleasant or easy journey to make. Although there may be less ice, the waters hold other challenges as well.
#66 Basically the cooling waters of the oceans, even in the NE Atlantic and outside the Iberian peninsula etc., are pushing the warm water north!? If the Arctic were a land mass just like Antarctica, things would have been somewhat different. Remember though that the Vostok cold record is from 1983, and the following years were among the coldest in NW Europe for decades. This winter, of course, “only” -81C as compared to -89C in 1983 at Vostok! Nothing repeats exactly, of course… Here in Sweden, though, the January and February “heat” of 1990 has not been exceeded since. Even the “Wasalopp” was cancelled…
Re#62:
Looks like the good doctor was in on the Great Coming Ice Age fear mongering of the 70s.
After all, Hansen could claim that he was right on average.
Artic Warming in the 1920-1940 era
Larry #66 — I, for one, will adopt your new verb (Hansened). I’ll also add a friendly amendment: enhansen and enhansening, -verb. The act of mathematically manipulating data to support a conclusion.
In fact, I can remember enhansening data in some of my Freshmen chemistry labs.
Ok, “data enhansenment” it is. Do they have data enhansenment algorithms in R?
Hello all: The codes and the maths leave me a little dizzy. I’m confident with the politics, though.
It seems clear to me that the advocates for GW do tend to fall into the left camp, more so than skeptics falling into the right-wing category.
The stuff about Hansen being an ice-ager does not surprise me. Why? Well, there seems to be a type of person from the developed world, a spoilt, very well brought up, affluent sort of person, who just seems to hate what their societies seemingly stand for. They then join the left. The left like to associate with any other group that proclaims to hate the developed west. This philosophy leads to a self-loathing, and as a result all history is distorted into a conspiracy of capitalism. Of course, to maintain this conspiracy the left only use the last 200-300 years. It is totally lost on them that humans have only recently, in historical terms, seriously developed into the world the left now take for granted, in the last 100 years. Hence Hansen, of that ilk, hates the west and looks for any angle to show that the west is wicked and is going to destroy the planet that we live on. You only have to look at his grandiose statements like “stewardship of the planet” to see his feelings of self importance. Not science per se, but many do have an axe to grind one way or the other; it’s a driving force factor.
1) Steve, do feel free to pull this post if it seems detrimental to the sterling work you, Anthony and others have done so far.
2) Is the aim eventually to see if the Hansen codes were biased?
3) The fact that many of the skeptic sources are not seemingly giving you just credit, by at least having the courtesy to highlight your hard work, does seem rather mean.
My apologies in advance; I do seem to grate people.
Lawrence
77
That’s taken directly from NOAA. You’re not sure if NOAA has accurate SST measurements?
I don’t believe that SST graph is totally accurate (in general, not just the current anomalies) above 60N
Sounds like we have some denial going on here…
RE: “The polar ice very often came close to the coast of Iceland in the last century and in the beginning of this century. During 1915–1940 the situation changed: no ice was observed in that region; negligible amounts of polar ice were noticed there only in 1929.”
Notably, the renewed occurrence of ice bridges between Iceland and the main sea ice mass during winter and spring has been observed since the turn of the century. Sales of arms and ammo meant for dangerous game have gone through the roof. Icelanders are not very worried lately about the demise of the polar bears.
#94
Steve, Thanks for the info.
Henning, R.; Die Erwärmung der Arktis, in: WuK, No. 1-2, 1949, p. 49-51.
Several more references to lengthened shipping season here.
Last I heard, the current shipping season is about 100 days. Yet more evidence that it was quite warm in the Arctic from 1920 to 1940. A cyclic shift in ocean currents still seems the simplest explanation for the loss of ice then and now.
Ref 94
http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/02/04/wbears04.xml
Would this be the chunk in question?
re 82
In 1942 the St. Roch made a return voyage (Halifax to Vancouver) through the Northwest Passage. This trip took a more northerly route than the first one. Very little ice was observed and the trip took only 86 days.
This anecdote does show that at least in the summer of 1942 the Canadian Arctic was free of any significant amount of ice.
RE: #95 – Bringing a whole new level of truth to the idea that when you walk out the door, you become part of the food chain (heck, even indoors, in certain cases!).
95
10C!
Interesting IBD article from Monday 8/24 listed above. It gives insight into the cause of Hansen’s methods.
http://www.ibdeditorials.com/IBDArticles.aspx?id=275526219598836
Awesome site here. Hope I haven’t detracted from the real work at hand.
Check out Investors Daily “Editorials and Opinions”: Soros Threat to Democracy, link with Hansen.
http://www.investors.com/editorial/editorialcontent.asp?secid=1501&status=article&id=275526219598836
$720 grand? Sheesh, if McIntyre got $72.00 worth of free gas from Exxon, the AGW crowd would be all over him for it.
I bought the article that S.I. Rasool and Stephen Schneider published in Science in 1971. That article and the Washington Post story were published on the same date, July 9, 1971, which indicates that the Post reporter had received the article several days earlier and had already written his story, but under an embargo agreement could not publish it until it appeared in Science. It’s a routine practice with press releases. That gave the reporter time to interview Rasool, who gave him some dramatic quotes.
Rasool and Schneider argued that an 8-fold increase in CO2 would only increase global temperature by 2°C, because the temperature response to increasing CO2 is logarithmic, that is, each successive increment of CO2 has less effect than the one that preceded it, so the temperature response curve becomes progressively flatter. In contrast, they argued, the cooling effect of aerosols produced by industrial activity increases exponentially as aerosol concentration increases; that is, the response curve gets progressively steeper. Rasool and Schneider wrote,
The exclamation mark is in the original. This is the only time I have ever seen it used in a peer-reviewed scientific journal article. I’m surprised the reviewers let them get away with it but then this is Science, a publication fond of climate-scare papers and prone to hype them when they come along. No wonder the Post reporter was interested.
The Science article shows that James E. Hansen not only developed the climate model Rasool and Schneider used to show that the cooling effect of aerosols could trigger a new ice age, he participated directly in their study. His contribution is acknowledged in footnote 16:
It’s commonly said by global warmists that the Ice Age scare of the 1970s was a media concoction, limited to magazines like Newsweek and Time and not taken seriously by scientists. The Science article shows that it was taken seriously by some of today’s most prominent global warming alarmists.
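The logarithmic response described in #105 is easy to make concrete with a little arithmetic. This is my illustration, not Rasool and Schneider’s actual model: an 8-fold increase in CO2 is three doublings, so a 2°C total implies roughly 2/3°C per doubling under a purely logarithmic law.

```python
import math

# Illustration of a purely logarithmic CO2 response (an assumption for this
# sketch, not Rasool and Schneider's actual model). Three doublings (8x CO2)
# totalling 2 deg C implies roughly 2/3 deg C per doubling.
def delta_t(co2_ratio, per_doubling=2.0 / 3.0):
    """Temperature change for a given CO2 concentration ratio."""
    return per_doubling * math.log2(co2_ratio)

print(round(delta_t(8.0), 2))  # three doublings -> 2.0
print(round(delta_t(2.0), 2))  # one doubling -> 0.67
```

The flattening is visible directly: each successive increment of CO2 buys less warming, exactly as the paper argued.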
Temperatures for grid cells that have no weather stations are “calculated” by reference to cells that do, and temperatures in urban heat islands are “adjusted” by reference to cells up to 1000 km away. Don’t these maneuvers have the same effect statistically as reducing the sample size?
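One way to make that question concrete, using the linear 1000 km weighting quoted at the top of the post: a standard measure of what weighting does to a sample is the Kish effective sample size, (Σw)²/Σw². The sketch below is mine, not GISS code, and the station distances are made up for illustration.

```python
# Sketch (mine, not GISS code) of the linear distance weight described in
# Hansen et al. 1999, plus the Kish effective sample size, a standard way to
# quantify how much a weighted average behaves like a smaller sample.
def weight(d_km, cutoff_km=1000.0):
    """Weight declining linearly to zero at the cutoff distance."""
    return max(0.0, 1.0 - d_km / cutoff_km)

def effective_n(weights):
    """Kish effective sample size: (sum w)^2 / sum(w^2)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

# Four hypothetical rural neighbors at these distances (km):
stations_km = [250.0, 500.0, 750.0, 1200.0]
ws = [weight(d) for d in stations_km]
print(ws)                          # [0.75, 0.5, 0.25, 0.0]
print(round(effective_n(ws), 2))   # 2.57: four stations act like ~2.6
```

So yes, in this illustrative case four nominal stations carry the statistical weight of roughly two and a half, which is one precise sense in which the maneuvers reduce the effective sample size.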
RE: #96 – I strongly disagree with the “chunk breaking off” idea. The main pack in fact reached Iceland this past season. Do note that Cryosphere Today underreports extent of any ice that is less than 60 or 70% concentration and also has significant underreporting problems at the ice edge. These issues are artifacts of reliance on the flawed passive microwave remote sensing techniques used by the raw data (satellite) collection network used at NSIDC (which is where CT get their feed, although CT’s interpretation algorithms are claimed to be different than NSIDC’s).
Where does the Rasool and Schneider article say that Hansen “developed the climate model they used”? If it was Hansen’s model, and Hansen’s conclusion, it would have Hansen’s name on it.
Re: “Where does the Rasool and Schneider article say that Hansen developed the climate model they used?
The article itself doesn’t say it specifically, but the author of the Washington Post article of July 9, 1971 cited Rasool saying that the model he and Schneider used was designed by Hansen.
Postscript to No. 107.
Rasool and Schneider said in their Science article that their parameters “are described further by Hansen and Pollack (17).” Footnote 17 references a paper by J.E. Hansen and J.B. Pollack titled “Near-Infrared Light Scattering by Terrestrial Clouds” that was published in the Journal of the Atmospheric Sciences in March 1970. That is probably the Hansen model Rasool was referring to in his statements to the Washington Post reporter. The abstract is here.
108, “Terrestrial clouds”? Is that as opposed to extraterrestrial clouds?
http://www.dailytech.com/NASA+James+Hansen+and+the+Politicization+of+Science/article9061.htm
Re: “Terrestrial clouds”? Is that as opposed to extraterrestrial clouds?”
Literally. The 1971 Washington Post article said that the model Rasool and Schneider used for their “global cooling” study was designed by Hansen for the study of clouds in the atmosphere of Venus.
Hansen has responded to this week’s bad publicity:
http://www.columbia.edu/~jeh1/
This has probably been said before, but it’s worth repeating. The Rasool and Schneider paper looked at the effect of quadrupling aerosols and found that it would cause severe cooling. For their analysis they used a program created by Hansen for analysis of clouds on Venus.
What part of that is controversial?
Do aerosols cause cooling? Yes.
Can software be written for one purpose and used for another? Yes.
Dear Sir,
I need a script to find out/measure the Urban Heat Island effect over Karachi, Pakistan. My data is on a monthly and annual basis. I am using mean monthly, mean maximum and mean minimum temperature of Karachi for two observatories: one located in the city centre (Airport) and a second outside the city (about 30 km away from the city centre).
I would be thankful if someone could guide me on how to make the script in Linux for Fortran 90…
best regards
Sajjad
Université de Strasbourg
France