GISS Gridded Data

GISS gridded data is online, but in a format that is unintelligible to people who work with modern computer languages (as opposed to Fortran) and who do not know whether their machines are “little-endian” or “big-endian” (see here for GISS discussion) – phrases rather reminiscent of Gulliver’s Travels, perhaps an apt text for Hansen. (For readers unfamiliar with Gulliver’s Travels: at one point in his travels, Gulliver was called upon to intervene in a war between Lilliputians over whether boiled eggs should be broken first at the big end or the little end.)
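(For the technically curious, “endianness” is nothing more than the order in which the bytes of a multi-byte number are stored. A minimal illustration – sketched in Python rather than R – showing that the big-endian and little-endian encodings of the same value are simply byte-reversals of one another:)

```python
import struct

x = 123.456
big = struct.pack(">f", x)     # big-endian ("network order") bytes
little = struct.pack("<f", x)  # little-endian bytes

# Same IEEE 754 single-precision value; the byte strings are reverses
assert big == little[::-1]
```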

Thanks to the resourceful Nicholas, the GISS binary files can now be accessed in R and thus from any other modern language. There are 3 gridded data files in the GISS directory ftp://data.giss.nasa.gov/pub/gistemp/download, one pertaining to 250 km, one pertaining to 1200 km and one to HadR2 (which I haven’t identified yet.) The functions here succeed for all three files.

Unlike Jones, who divides the world into 2592 five-degree lat-long boxes, Hansen divides the world into 8000 equal-area boxes (whose N, S, E and W limits have to be specified.)
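The equal-area construction isn’t mysterious: on a sphere, bands of equal area correspond to equal steps in the sine of latitude rather than in latitude itself. The sketch below (Python; this illustrates the principle only and is not Hansen’s exact box layout) computes the latitude edges of n equal-area bands:

```python
import math

def equal_area_band_edges(n):
    # Latitude edges (degrees) for n equal-area bands on a sphere:
    # step uniformly in sin(latitude) from -1 to 1.
    return [math.degrees(math.asin(-1 + 2 * k / n)) for k in range(n + 1)]

edges = equal_area_band_edges(4)
# edges run from -90 to 90; equal-area bands are widest (in latitude)
# at the poles and narrowest at the equator
```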

The following script will enable a reader to download one of these GISS binary files and organize the information into an information file grid.info and a list of 8000 monthly time series giss.grid (mutatis mutandis for the other files.) This takes a few minutes as the files being downloaded are 20-40 MB.

source("http://data.climateaudit.org/scripts/station/giss/read.binary.txt")
download.file("ftp://data.giss.nasa.gov/pub/gistemp/download/SBBX.Tsurf250","SBBX.Tsurf250",mode="wb")
handle <- file("SBBX.Tsurf250", "rb");
Data <- read_GISS_data_SBBX(handle);
close(handle);
length(Data[[2]])

f=function(X) {f=ts(X[[1]],start=c(1880,1),freq=12);f[f==9999]=NA;f}
giss.grid=sapply(Data[[2]],f)
##giss.grid250=giss.grid
grid.info=make.grid.info(Data)
grid.info$lat=apply(grid.info[,c("LATS","LATN")],1,mean)
grid.info$long=apply(grid.info[,c("LONE","LONW")],1,mean)
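For readers curious about what the decoding involves: the SBBX files look like standard Fortran sequential unformatted output, in which each record is framed fore and aft by a 4-byte length marker, with everything written big-endian. A sketch of the mechanics in Python (the framing convention is the usual Fortran one and is an assumption about these particular files):

```python
import struct

def read_fortran_records(path):
    # Read a Fortran sequential unformatted file written big-endian:
    # each record is <4-byte length> <payload> <4-byte length again>.
    records = []
    with open(path, "rb") as fh:
        while True:
            head = fh.read(4)
            if len(head) < 4:
                break                      # clean end of file
            (n,) = struct.unpack(">i", head)
            payload = fh.read(n)
            (tail,) = struct.unpack(">i", fh.read(4))
            assert tail == n, "corrupt record framing"
            records.append(payload)
    return records
```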

To plot the gridded data for the 250-km smoothed box containing Wellington NZ, you can do the following:

source("http://data.climateaudit.org/scripts/station/giss/step3.functions.txt")
i=unlist(hansenbox(lat=-41.3,long=174.8)); #6719
grid.info[i,]
# Next LATS LATN LONW LONE NR1 NR2 DL lat long
#6719 1548 -42.07 -39.79 174 177 4 1442 1116772928 -40.93 175.5
ts.plot(giss.grid[[i]]) #OK
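(hansenbox presumably does no more than search grid.info for the box whose edges bracket the given point. For readers without the CA functions, the lookup amounts to something like this Python sketch, with a hypothetical boxes list standing in for grid.info:)

```python
def find_box(boxes, lat, lon):
    # boxes: list of dicts with LATS, LATN, LONW, LONE edges, standing
    # in for the rows of grid.info (a sketch, not the actual CA code).
    for i, b in enumerate(boxes):
        if b["LATS"] <= lat <= b["LATN"] and b["LONW"] <= lon <= b["LONE"]:
            return i
    return None

# e.g. with the Wellington box printed above:
wellington = {"LATS": -42.07, "LATN": -39.79, "LONW": 174, "LONE": 177}
# find_box([wellington], -41.3, 174.8) returns that box's index (0 here)
```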

There are some mysteries in connection with the gridded data. At present, I can’t get from the station data to the gridded data, though I can get fairly close.

Not unexpectedly with code as crappy as this, it seems to require considerable ongoing maintenance. In the version of toSBBXgrid.f downloaded by Arthur Edelstein only a few weeks ago, there was a line as follows:

PARAMETER (IYREND=2007, IYBASE=1951,LYBASE=1980)

Obviously it makes no sense to hard-code IYREND at 2007. In the version presently online, this has been changed to parameterize the closing value – an improvement, but one you’d think would have been made at some point in the past 20 years.

I’m really stumped by the following problem: GISS gridded data has values for the year 1880, but the corresponding station data begins only in 1881. Where does the 1880 data come from? Or did I get wrongfooted somewhere? Suggestions welcomed.

Update: It looks like the “1880 problem” is only a problem arising from the web presentation, which starts in 1881 even though the underlying data series start in 1880. Until a couple of weeks ago the only way of getting at GISS data was scraping the web pages (GISS refusing to make any sensible format available.) However, there are now some improved versions available online, and embedded in them are series starting in 1880.

Here’s what I’ve been able to emulate so far for one test case:

[Figure: wellin21.gif – emulation of the Wellington NZ gridcell series]

52 Comments

  1. rafa
    Posted Jun 26, 2008 at 3:08 PM | Permalink | Reply

    little or big endian can make a big difference when doing floating point operations. Old VAX machines (probably used by NASA in the 80′s) used a technique, internal to the operating system, called hidden bit. That allowed the programmer to get an extra bit in the mantissa to increase the precision “assuming” all numbers were in a particular format. 102.3 should be 0.1023 x 10 exp(3). 0.013 should be 0.13 x 10 exp (-3) etc.

    That code must be real old stuff. Amazing. I’m surprised if it delivers the same results when running on two different machines.

    best

  2. rafa
    Posted Jun 26, 2008 at 3:10 PM | Permalink | Reply

    sorry, typo, should read “0.013 should be 0.13×10 exp(-1)”

  3. andrew
    Posted Jun 26, 2008 at 5:12 PM | Permalink | Reply

    Steve, the data file ts.txt on the ftp test site, which seems to be the initial input for step 2, does have data for 1880 for around 500 stations, and the analysis in the Fortran programs seems to start at 1880 as the base year. Maybe you have a different data set?

  4. Not sure
    Posted Jun 26, 2008 at 5:33 PM | Permalink | Reply

    The terms “big endian” and “little endian” in computer parlance were in fact inspired by Gulliver’s Travels: http://www.catb.org/jargon/html/B/big-endian.html

    Kudos to you Steve, for recognizing the reference.

    Most of the world’s personal computers are little-endian. Big Unix servers like the AIX machines NASA uses for GISS calculations are big endian.

  5. DeWitt Payne
    Posted Jun 26, 2008 at 5:33 PM | Permalink | Reply

    For any of you who haven’t eaten a soft boiled egg in the shell, it’s not where it’s cut first, it’s which end is removed to allow one access to the contents with a spoon. Besides a spoon, you need an egg cup to hold the egg. It’s a UK thing.

  6. jeez
    Posted Jun 26, 2008 at 5:48 PM | Permalink | Reply

    If it will help bring this portion of NASA into the 21st Century, I would be pleased to donate 200 dollars so that Dr. Hansen can receive and develop code on a modern computer.

    http://www.laptopgiving.org/en/index.php

  7. Ralph Becket
    Posted Jun 26, 2008 at 6:07 PM | Permalink | Reply

    This is off-topic, but I have to respond to rafa’s post. Floating point numbers are represented in mantissa/exponent notation. For example, pi would be represented as 0.314159… * 10^1 where 0.314159… is the mantissa and 1 the exponent.

    (1) Little or big endian should make *no* difference to floating point performance: the CPU can reorder the bits in an FP value as necessary at essentially no cost.

    (2) The “extra bit” comes from the *binary* mantissa/exponent representation used for FP numbers. In a normalised FP number the exponent is shifted to ensure that the mantissa is always of the form 0.xyyyyyyyy where x is non-zero. In binary, this means that x will always(*) be 1, hence there’s no need to represent it, thereby giving you an extra bit of precision for the mantissa. This is often called the “hidden” or “implicit” bit of the mantissa.

    (*) Where this gets slightly tricky is you need a special convention to represent zero, for instance. The same convention is extended to also cover plus and minus infinity and “Not a Number”.
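    For the concrete-minded, the fields of a single-precision value can be pulled apart directly (a Python sketch):

```python
import struct

def float_fields(x):
    # Sign bit, biased exponent and the *stored* 23-bit fraction of an
    # IEEE 754 single-precision value; the hidden leading 1 is not stored.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# 1.0 is 1.0 x 2^0: the leading 1 is implicit, so the stored fraction
# is all zeros and the exponent equals the bias (127).
# float_fields(1.0) gives (0, 127, 0)
```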

  8. John Lang
    Posted Jun 26, 2008 at 7:25 PM | Permalink | Reply

    I imagine this has been mentioned before, but if Hansen’s code is so bad for something as simple as collating a minimal number of station temperature readings, how can something as complicated as his General Circulation Model be of any use whatsoever?

    Furthermore, many of the other GCMs operated by other institutions have been developed using Hansen’s earlier GCM code as the base code. Makes one wonder.

  9. kuhnkat
    Posted Jun 26, 2008 at 7:26 PM | Permalink | Reply

    Here is a nice article on “endianess”.

    http://www.cs.umass.edu/~verts/cs32/endian.html

    Even includes a rudimentary routine to convert.

  10. Steve McIntyre
    Posted Jun 26, 2008 at 7:42 PM | Permalink | Reply

    Please – we don’t need any advice on endian-ness here. If you feel obliged to comment on endian-ness, feel free to email Hansen with suggestions; I’m sure he’d appreciate them. Perhaps like the Lilliputian warriors, Hansen will recommend putting little-endians on trial for high crimes – or maybe big-endians.

  11. Steve McIntyre
    Posted Jun 26, 2008 at 7:46 PM | Permalink | Reply

    OT – One of our valued readers has gotten frozen out by Spam Karma. I’ve tried whitelisting his IP address to no avail. Any ideas on what I can do to rescue him would be welcome.

  12. jeez
    Posted Jun 26, 2008 at 7:59 PM | Permalink | Reply

    Whenever SK2 flags a comment as spam, it will always list the reasons: blacklisting, abnormally high number of comments, etc. It is usually a good idea to go see what it had to say about your false positive, in order to make sure its error was based on reasonable motives:

    After recovering your comment, go to Admin screen >> Approved Comments.
    You should see your recently-recovered comment somewhere toward the top (if it was the last comment submitted, it will be at the top, otherwise, it could be farther down the list).

    * If using Firefox, Safari or any other half-decent browser: Hover with your mouse over the number in the Karma column.
    * If using MSIE: you may need first to untick the “Enable ‘hovering’ in spam report tables” advanced option, then go back to Approved Comments.

    Under the karma score there should appear a whole bunch of reasons why SK2 thought the comment may or may not have been spam (green is positive points, red, huh, negative). Associated with each reason, a breakdown of points taken or given is listed. If that comment was mistakenly eaten, chances are the total of its points is less than zero: have a look at the reasons why and, if they do not make sense, consider contacting me for a bug report.

    http://wp-plugins.net/doc/contact/

  13. Nicholas
    Posted Jun 26, 2008 at 8:01 PM | Permalink | Reply

    Endianness is only an issue because he’s storing floating point numbers in a binary file. Thus you need to read them in with the bytes in the same order as written. Since “network order” is big-endian (a decision I lament, but it was made long ago), this doesn’t seem totally unreasonable to me. Hey, at least they are IEEE 32 bit floating point values and not some bizarre old format which would take a lot of work to decode.

    As for the 1880 issue, if it truly is an issue, is it possible that there’s an off-by-one error somewhere and the first year of data is actually 1881? I suspect that andrew is probably right though, some stations start in 1880 so it isn’t an off-by-one error. If so I don’t know what’s going on.

  14. Nicholas
    Posted Jun 26, 2008 at 8:10 PM | Permalink | Reply

    I tried running the code above, there is a line missing. Before:

    i=unlist(hansenbox(lat=-41.3,long=174.8)); #6719

    There should be:

    source("http://data.climateaudit.org/scripts/station/giss/step3.functions.txt");

  15. Gary
    Posted Jun 26, 2008 at 8:12 PM | Permalink | Reply

    I’m really stumped by the following problem: GISS gridded data has values for the year 1880, but the corresponding station data begins only in 1881. Where does the 1880 data come from? Or did I get wrongfooted somewhere? Suggestions welcomed.

    Maybe an antique coding trick is being used to build a table with 1880 as element(0) so that 1881 can be element(1)? Just a guess.

  16. Steve McIntyre
    Posted Jun 26, 2008 at 9:25 PM | Permalink | Reply

    #15. But there are values for 1880.

    Hans Erren writes in:

    In January 2005 GISS truncated all station datasets at 1880, the gridded data could contain a data relict derived from the original station database.

    I’m not going to preempt any possibility, but as far as I can tell, the data is generated from the dset2 information and no dset2 series start in 1880.

    I’m baffled.

    I haven’t looked at the Ts.bin5 binary file yet (and am hoping that Nicholas will decode it for me), and maybe that will hold some clues.

  17. pjm
    Posted Jun 26, 2008 at 9:39 PM | Permalink | Reply

    Slightly OT (or maybe not). I recall being told back in the 70′s that once a program was compiled, corrections might be made by patching the object code, and not necessarily amending the source code. Presumably this related to the cost of computer time back then. The implications for the correctness of legacy Fortran code are not good! If this has happened, it becomes an issue whether the ‘official’ people are recompiling the Fortran or are using an emulator with the old object code.

    Peter

  18. Steve McIntyre
    Posted Jun 26, 2008 at 11:18 PM | Permalink | Reply

    It looks like the “1880 problem” is only a problem arising from the web presentation which starts in 1881 even though the underlying data series start in 1880. Until a couple of weeks ago the only way of getting at GISS data was scraping the web pages (GISS refusing to make any sensible format available.) However, there are now some more improved versions available online and embedded in them are series starting in 1880. So this one seems to be somewhat solved.

  19. Sven
    Posted Jun 26, 2008 at 11:23 PM | Permalink | Reply

    #17 I only did it when the correct version of all source code wasn’t available. (e.g. for y2k patching). For code that is in some way being maintained, and with source code available, this doesn’t make any sense. If the source code is on the border of being incomprehensible, the object code surely must have passed that border long ago, even for the authors.

  20. Sven
    Posted Jun 26, 2008 at 11:58 PM | Permalink | Reply

    #17 and #19: Of course I haven’t been involved in this kind of code, I was only speaking of maintaining old Fortran code.

  21. Posted Jun 27, 2008 at 12:51 AM | Permalink | Reply

    Actually, I downloaded the source code several months ago, not long after the first update. Kudos to Nicholas for writing the R script to decode the binary files. I suppose these might also work to read STEP2 output? This might be more general than using my Fortran-generated text output files.

  22. REX
    Posted Jun 27, 2008 at 1:21 AM | Permalink | Reply

    I think what people would like to know, in simple terms, is: are the calculations used by J Hansen credible? If yes, why; if not, why not? Can anybody address this simple question? Probably quite important to know for the next few months/year.

  23. Nicholas
    Posted Jun 27, 2008 at 4:13 AM | Permalink | Reply

    Steve, where’s this Ts.bin5 file? I’m sure I can work out how to read it when I have some spare time.

  24. John Goetz
    Posted Jun 27, 2008 at 6:18 AM | Permalink | Reply

    The object code is not patched. The scripts included with the GISTEMP source indicate that the code is compiled each time the process is run. I would presume they do this to guarantee updates to the code are captured.

  25. Nicholas
    Posted Jun 27, 2008 at 6:54 AM | Permalink | Reply

    Arthur : the binary file formats vary a bit, but you may be able to re-jig the code and re-use the existing routines to read other binary files.

    Steve : I looked around the FTP server, I couldn’t find a file called Ts.bin5 or similar, when you are ready e-mail me with the link and I’ll take a look.

  26. Steve McIntyre
    Posted Jun 27, 2008 at 7:09 AM | Permalink | Reply

    #22. Resolving Hansen’s GISTEMP code is not something that policy makers should be waiting breathlessly for. It’s more of a crossword puzzle than anything. And it will probably take more than a few months to fully come to terms with things. At some point, now that we’re finally getting control of GISTEMP, we will be doing detailed comparisons to CRU – where there are just as many mysteries and no code. One question is whether the GISTEMP urban adjustments do anything beyond a random permutation of data and, if that’s the case, does that matter?

    In one sense, it does – it would perhaps be a bit embarrassing for NASA if their famed urban adjustment did not rise above Peter, Paul and Mary’s Marvelous Toy – Hansen himself being long past embarrassment. But you’re not going to find anything so egregious as the data mining of Mann and the Hockey Team. The adjustment may be ineffective, but the trends are not being introduced by the adjustment; the trends exist in the data. However, the potential contribution of urban warming in the Hansen data set will likely be back in play as an issue (the main issue will be Step 2.)

    In policy terms, people will argue that even this didn’t “matter”, that there were multiple lines of evidence etc etc. So don’t think for a minute that there is a magic bullet here; this is more of a crossword puzzle.

    As an intangible, I doubt that many readers of the GISTEMP code will feel much confidence in Hansen’s ability to manage climate model code, but that would be merely an extrapolation.

  27. pat
    Posted Jun 27, 2008 at 7:50 AM | Permalink | Reply

    More on endianness – it was Jonathan Swift’s satire on the constant religious wars in Europe, fought over questions like whether the communion host was the actual or symbolic body of Christ. Swift savagely mocked the ridiculous reasons that mankind uses to commit mass murder.
    Sadly, nothing much has changed in 300 years.

  28. Demesure
    Posted Jun 27, 2008 at 7:57 AM | Permalink | Reply

    When I worked in Hong Kong, my Chinese colleagues taught me to eat 10-15 day incubated cane eggs.
    No hesitation is possible: you start at the big end, where the air sac has formed, first sucking the juice, then eating the yolk and finishing with the embryo with salt, pepper, and spicy herbs.
    Delicious, once you learn to like it.

  29. DAV
    Posted Jun 27, 2008 at 8:00 AM | Permalink | Reply

    FWIW: “big/little-endian” were terms created in the early days of what is now the Internet. The terms aren’t just reminiscent of Gulliver’s Travels — they are a specific reference to it and the silliness of arguing one over the other. Little-endian is better, of course ;)

  30. stan
    Posted Jun 27, 2008 at 8:43 AM | Permalink | Reply

    But I thought the emperor wore only clothes of the very finest materials which were evident only to those special enough to be able to envision Scientific Truth. After all, the clothes were even peer-reviewed and approved by 2500 members of the Enlightened!

    How distressing to learn that the clothes are old hand-me-downs from a bygone era, patched together with less skill than that of a marginally competent seamstress. Where’s a little boy when you need one?

  31. Evan Jones
    Posted Jun 27, 2008 at 8:45 AM | Permalink | Reply

    It was, like, so obvious from the getgo that Hansen’s code dump last year was one big “shut up and go away”. Digital caltrops.

    I bet the last thing he was expecting was that someone(s) would actually deconstruct it!

  32. DAV
    Posted Jun 27, 2008 at 9:06 AM | Permalink | Reply

    @30: Not sure what the emperor’s clothes have to do with it. This is a common problem. I’m just wrapping up a project where the telemetry downlink is a mixture of big and little endian. The headers are big-endian (in accord with the CCSDS standard) while the rest is little-endian. Why? Because the headers have no choice, and the rest is little-endian for convenience and reduced CPU utilization.

    The real problem is that the GISS data are being transmitted in binary. I fail to understand why anyone would want to do that, as it raises serious inter-communicability issues. My project (and most telemetry streams) uses binary transmission because bandwidth is at a premium on spacecraft; however, it causes a lot of grief for the ground systems.

    Even if the GISS data were not really intended for sharing, storing them in binary is only asking for trouble at a later date. If one is using FORTRAN/Fortran (big-lettered vs. little-lettered! – another issue :) ), though, a binary write is far simpler (less work) to code than a formatted write.

  33. Basil
    Posted Jun 27, 2008 at 9:26 AM | Permalink | Reply

    Steve #26

    As an intangible, I doubt that many readers of the GISTEMP code will feel much confidence in Hansen’s ability to manage climate model code, but that would be merely an extrapolation.

    Droll.

  34. DAV
    Posted Jun 27, 2008 at 9:55 AM | Permalink | Reply

    @26:

    As an intangible, I doubt that many readers of the GISTEMP code will feel much confidence in Hansen’s ability to manage climate model code, but that would be merely an extrapolation.

    I would be very surprised if Hansen, as the head of GISS, had much to do with the code any more than his boss Michael Griffin or his boss, GWB. It’s simply not his job (anymore, at least).

    As to the selection of FORTRAN vs. C or R or whatever, there are a lot of pragmatic reasons to stay with “grandfather” code — not the least is conversion cost. I’m sure there are a lot of youngsters at GISS that are grumbling over the pain of using it. I certainly wouldn’t want to be stuck with it and I’m not much younger than Hansen. Heck, I even think C is primitive as well. I seriously doubt Hansen directed its use for the purpose of obfuscation.

    I’m not any fonder of Hansen than you are but sniping about coding practices just doesn’t speak well for the speaker.

  35. WA
    Posted Jun 27, 2008 at 10:04 AM | Permalink | Reply

    Is it possible that the 1880 problem is one of setting a baseline year parameter from which to calculate year-to-year differences (e.g., change{1881} = temp{1881} – temp{1880})?

    Sorry if this is trivial, wrong or out-of-date. My initial experience with Fortran was with FORTRAN. We later upgraded to FORTRAN With Format. We had a leisurely dinner while it compiled.

  36. Sam Urbinto
    Posted Jun 27, 2008 at 1:10 PM | Permalink | Reply

    Actually, it wasn’t a “war between Lilliputians over whether boiled eggs should be cut first”; it was between Lilliput (sharp end) and Blefuscu (round end) over which end to eat (break) first. The “thirty-six moon” one. Which Dr. Gulliver finds out after he’s captured by the 6″ tall inhabitants of Lilliput. Yes, the story itself is about England and France, the egg the views of the major political forces (the Church of England and the Roman Catholic church), but it’s also about the Tories and Whigs, etc etc.

    The book is where the word YAHOO comes from. Supposedly, in the jester and crossword puzzle sense, it’s an encrypted anagram for ‘Irish’ (where Swift was born), much like Lilliput is almost ‘London’s Mile End’. Not sure where in France that Blefuscu is supposed to be, though. :)

  37. Steve McIntyre
    Posted Jun 27, 2008 at 1:20 PM | Permalink | Reply

    I placed sample data for one grid cell online here http://data.climateaudit.org/data/giss/step3/step3_test.dat ; these are the 4 dset2 versions used and the outcome. Hansen’s code is in Step 3 ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis. I can get close (0.98 correlation) but I’m missing something relevant.

    The basic strategy is Hansen’s “reference” method, longest to shortest (described elsewhere). In this case, the “bias” appears to be calculated by month. I’ve tried various permutations but can’t match it. I’m not suggesting some huge issue, merely that I’m being wrongfooted somewhere and maybe someone else can push this peanut along.
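    For readers who don’t read R, the gist of the combination as I understand it is roughly the following (a Python sketch of my emulation, not Hansen’s code; here the bias is a single mean over the overlap, whereas the actual calculation appears to do it by month):

```python
def combine_reference(series, weights, ncrit=20):
    # series: equal-length lists with None for missing values.
    # Take the longest series as the running reference; shift each
    # later series by its mean difference from the reference over the
    # overlap, then fold it in as a weighted average. Series with
    # overlaps shorter than ncrit are skipped.
    order = sorted(range(len(series)),
                   key=lambda i: sum(v is not None for v in series[i]),
                   reverse=True)
    ref = list(series[order[0]])
    wref = [weights[order[0]] if v is not None else 0.0 for v in ref]
    for i in order[1:]:
        s, w = series[i], weights[i]
        overlap = [t for t in range(len(ref))
                   if ref[t] is not None and s[t] is not None]
        if len(overlap) < ncrit:
            continue
        bias = (sum(s[t] for t in overlap) / len(overlap)
                - sum(ref[t] for t in overlap) / len(overlap))
        for t in range(len(ref)):
            if s[t] is None:
                continue
            v = s[t] - bias
            if ref[t] is None:
                ref[t], wref[t] = v, w
            else:
                total = wref[t] + w
                ref[t] = (ref[t] * wref[t] + v * w) / total
                wref[t] = total
    return ref
```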

    hansen_combine= function(X,w,ncrit=20) {
    tsp0=tsp(X);
    K=ncol(X); N=nrow(X)
    long0=apply(!is.na(X),2,sum)#calculates length of the candidate rural series
    order1=order(long0,decreasing=TRUE); # orders by decreasing length
    X=X[,order1];w=w[order1]
    W=t(array(rep(w,N),dim=c(K,N) ) )
    W[is.na(X)]=NA #matrix of weights:
    delta=rep(0,12)

    #start with the longest series
    if (max(long0)=ncrit) {
    bias= mean(Y[temp,2])-mean(Y[temp,1])
    bias= anom(Y[temp,2],method="hansen")[[2]]-anom(Y[temp,1],method="hansen")[[2]]
    delta=cbind(delta,bias)
    bias=rep(bias,N/12)
    Y[,2]=Y[,2]-bias #these have same centers
    g=function(x) weighted.mean(x[1:2],x[3:4],na.rm=T)
    Y[,5]=apply(Y[,1:4],1,g)
    reference=Y[,5]
    reference.seq=cbind(reference.seq,reference)
    weights1=apply(Y[,3:4],1,sum,na.rm=T)
    }
    }
    }
    names(delta)=dimnames(X)[[2]]
    hansen_combine=list(reference,delta)
    hansen_combine
    }

    #Make an annual anomaly Hansen style
    anom=function(x,method="anom",date0=c(1961,1990)) {
    if( (class(x)=="ts")|(class(x)=="mts")) K=floor(tsp(x)[1])
    if( is.null(dim(x))) {test=t(array(x,dim=c(12,length(x)/12)) )
    if(method=="hansen") m0=round(apply(test,2,mean,na.rm=TRUE),2) else m0=apply(test[(date0[1]:date0[2])-K+1,],2,mean,na.rm=TRUE)
    test=scale(test,center=m0,scale=FALSE)
    anom=round(ts(c(t(test)),start=c(K,1),freq=12),2)
    } else {anom=x;
    for(i in 1:ncol(x)) {test=t(array(x[,i],dim=c(12,nrow(x)/12)) )
    if(method=="hansen") m0=round(apply(test,2,mean,na.rm=TRUE),2) else m0=apply(test[(date0[1]:date0[2])-K+1,],2,mean,na.rm=TRUE)
    test=round(scale(test,center=m0,scale=FALSE),2)
    anom[,i]=c(t(test))}}
    anom=list(anom,m0)
    anom
    }

  38. Steve McIntyre
    Posted Jun 27, 2008 at 2:34 PM | Permalink | Reply

    Here is a short script to read the test case and compare to the target. The correlation is high (0.98) but there are 0.5 deg C differences, which shouldn’t exist with replication. As I mention above, I’m probably wrongfooted on something simple.

    chron=read.table("http://data.climateaudit.org/data/giss/step3/step3_test.dat",sep="\t",header=TRUE)
    chron=ts(chron,start=c(chron[1,1],1),freq=12)
    distance0=c( 71.6541, 194.1838, 200.6309, 240.3882) #calculated distance from gridcell center
    weights0=1-distance0/1200;weights0
    reftest=hansen_combine(chron[,3:6],w=weights0)
    anomtest=anom(reftest[[1]],method="ts",date0=c(1951,1980))

    nf=layout(array(1:3,dim=c(3,1)),heights=c(1.2,1,1.3) )
    par(mar=c(0,3,2,1))
    plot(chron[,1],chron[,2],type="l",xlab="",ylab="",axes=FALSE)
    axis(side=1,labels=F);axis(side=2,las=1);box();abline(h=0,lty=2)
    text(1938,2.1,"Target",font=2,pos=4)
    title("Emulate Wellington NZ Gridcell")

    par(mar=c(0,3,0,1))
    plot(chron[,1],anomtest[[1]],type="l",xlab="",ylab="",axes=FALSE)
    axis(side=1,labels=F);axis(side=2,las=1);box();abline(h=0,lty=2)
    text(1938,2.1,"Emulation",font=2,pos=4)

    par(mar=c(3,3,0,1))
    plot(chron[,1],chron[,2]-anomtest[[1]],type="l",xlab="",ylab="",axes=FALSE,ylim=c(-1,1))
    axis(side=1);axis(side=2,las=1);box();abline(h=0,lty=2)
    text(1938,0.8,"Delta – scale differs",font=2,pos=4)

    hansen_combine= function(X,w,ncrit=20) {
    tsp0=tsp(X);
    K=ncol(X); N=nrow(X)
    long0=apply(!is.na(X),2,sum)#calculates length of the candidate rural series
    order1=order(long0,decreasing=TRUE); # orders by decreasing length
    X=X[,order1];w=w[order1]
    W=t(array(rep(w,N),dim=c(K,N) ) )
    W[is.na(X)]=NA #matrix of weights:
    delta=rep(0,12)

    #start with the longest series
    if (max(long0)=ncrit) {
    bias= mean(Y[temp,2])-mean(Y[temp,1])
    bias= anom(Y[temp,2],method="hansen")[[2]]-anom(Y[temp,1],method="hansen")[[2]]
    delta=cbind(delta,bias)
    bias=rep(bias,N/12)
    Y[,2]=Y[,2]-bias #these have same centers
    g=function(x) weighted.mean(x[1:2],x[3:4],na.rm=T)
    Y[,5]=apply(Y[,1:4],1,g)
    reference=Y[,5]
    reference.seq=cbind(reference.seq,reference)
    weights1=apply(Y[,3:4],1,sum,na.rm=T)
    }
    }
    }
    names(delta)=dimnames(X)[[2]]
    hansen_combine=list(reference,delta)
    hansen_combine
    }

    #Make an annual anomaly Hansen style
    anom=function(x,method="anom",date0=c(1961,1990)) {
    if( (class(x)=="ts")|(class(x)=="mts")) K=floor(tsp(x)[1])
    if( is.null(dim(x))) {test=t(array(x,dim=c(12,length(x)/12)) )
    if(method=="hansen") m0=round(apply(test,2,mean,na.rm=TRUE),2) else m0=apply(test[(date0[1]:date0[2])-K+1,],2,mean,na.rm=TRUE)
    test=scale(test,center=m0,scale=FALSE)
    anom=round(ts(c(t(test)),start=c(K,1),freq=12),2)
    } else {anom=x;
    for(i in 1:ncol(x)) {test=t(array(x[,i],dim=c(12,nrow(x)/12)) )
    if(method=="hansen") m0=round(apply(test,2,mean,na.rm=TRUE),2) else m0=apply(test[(date0[1]:date0[2])-K+1,],2,mean,na.rm=TRUE)
    test=round(scale(test,center=m0,scale=FALSE),2)
    anom[,i]=c(t(test))}}
    anom=list(anom,m0)
    anom
    }

  39. John Goetz
    Posted Jun 27, 2008 at 5:30 PM | Permalink | Reply

    #37 Steve

    I still have trouble reading R, but one of your comments says “sequentially adds in the rural series”. GISS adds “in the remaining stations”, which I presume to mean the rural and urban-adjusted.

    Could you be missing the urbans?

    The weather here is supposed to stink this weekend, so maybe I will go through Step 3.

  40. steven mosher
    Posted Jun 27, 2008 at 5:43 PM | Permalink | Reply

    re 34. DAV. There are patches of Python code interspersed, so hope springs eternal.

  41. steven mosher
    Posted Jun 27, 2008 at 5:47 PM | Permalink | Reply

    re 38. Since you diverge at what appears to be 1951, I’d suspect the anomaly dance, although that seems odd.

  42. OldUnixHead
    Posted Jun 28, 2008 at 1:45 AM | Permalink | Reply

    re #38
    Sorry if this is just a hand-waving argument, but I haven’t looked into the Fortran code very deeply yet and, based on your less-than-perfect correlation and Wellington’s location, I was wondering if you are seeing a difference due to not including SST gridded adjustments. In stumping through the data.giss.nasa.gov site I found a ref to the file SBBX.HadR2 in a directory named OISST.

    Do you also get a non-100% correlation from a site in a grid cell that has no possible SST adjustments?

    On another note, were you aware that the URL you provided in the article for the link to “data.giss.nasa.gov/pub/gistemp/download” doesn’t work as it stands now? It seems to be missing an initial “f” — it now starts with just “tp://” ?

  43. Dishman
    Posted Jun 28, 2008 at 2:03 AM | Permalink | Reply

    @26:

    One question is whether the GISTEMP urban adjustments do anything beyond a random permutation of data and, if that’s the case, does that matter?

    I thought that was already answered by the “adjustments since 2000” graph in The Register and your previous analysis of Step 2.

    If it’s not clear enough, here’s another way to clarify it:
    Use the current data set.
    Pick a recent but not too recent time, like Jan 1990 or 1995.
    For each end date from selected date through today
    Run Step 2 for data up to end date
    Calculate unweighted average ROW temperature of Step 2 output for selected date
    Graph selected date ROW temperature vs end date

    If that shows a trend, then Step 2 is clearly having an effect. If it does, there’s probably merit in proceeding to the next analysis:
    Use the current data set.
    Pick a start date as before
    For each end date from selected date through today
    Run Step 2 for data up to end date
    Calculate unweighted average annual ROW temperature of Step 2 output for all years through selected date
    For all years 1880 to selected date
    Calculate best fit slope of adjusted temperature of that year
    Graph slopes of previous step (ie 1880-1990)

    The Register suggests that the zero crossing is around 1970. I’d make a SWAG that for Step 2, it’s actually 1980 – the year of the census data on which Step 2 is based. Further, the slope of the final graph will tell you roughly how badly Step 2 is currently distorting the data.

  44. clivere
    Posted Jun 28, 2008 at 5:40 AM | Permalink | Reply

    At the end of last year I got the impression that the code release was a rush job in direct response to the criticism and fallout from the problem Steve identified and got them to recognise.

    At the time they indicated that they would shortly provide a version that would be of more interest to scientists. A few weeks later they did provide a revised version, but I am not clear that this is of any greater interest than the previous release, or whether it is actually the improved version Jim Hansen was referring to.

    The versions that have been released appear to be quite old. Assuming they were originally used then they will help in understanding what they have done in the past. However there is a possibility that they are doing something different now. Even if the steps are eventually made to run they may not fully replicate the current output.

  45. Steve McIntyre
    Posted Jun 28, 2008 at 9:04 AM | Permalink | Reply

    #44. I agree that the release was a rush job. My suspicion is that NASA told Hansen to do it – which made Hansen really mad, since no one ever told him to do anything. The Y2K problem occurred during a NASA space launch, so some of the reporters on the story were space reporters, who were more critical of this sort of error than the newspaper reporters on the climate file. That’s my guess as to why it caught the attention of senior NASA officials, who did something about it.

    The originally unreported revisions a few weeks later did not have the effect of improving the organization or coherency of the code, but substituted a different USHCN data version, which happened to restore 1998 as being a titch warmer than 1934 in the U.S. Just a coincidence, I’m sure.

    One has to keep an eye on versions – but there are some pretty recent versions available, and I don’t think that this is the problem.

  46. Rod Smith
    Posted Jun 28, 2008 at 10:08 AM | Permalink | Reply

    Just a thought from a really “old time” computer guy.

    I haven’t looked at any of the code, but I was struck by the endian discussion. Assuming – possibly incorrectly – that this ‘problem’ has been sorted out, is it possible that differences in how compilers handle ‘negative zero’ data could be causing the delta flyers?

    It might be instructive to find out what kind of computers this stuff ran on. The world of mainframes wasn’t always comprised of little-endian, two’s-complement machines and compilers that functioned with them – and I expect NASA was loaded with various mainframes for a long time.

    It may also be unwise to assume that NASA code was necessarily compiled with even the ‘then current’ versions of compilers.

  47. RomanM
    Posted Jun 28, 2008 at 10:11 AM | Permalink | Reply

    #37, 38. Steve McI.

    The problem appears to be in weights0 defined in the lines:

    distance0=c( 71.6541, 194.1838, 200.6309, 240.3882) #calculated distance from gridcell center
    weights0=1-distance0/1200;weights0

    I took another approach which often helps in decoding the coefficients of linear combinations. Fitting a regression to predict the gridcell series (target) from the given stations recovers the “weights” used in the calculation. However, the monthly anomaly needs to be calculated as well. You can do both of these with an ANOVA, using month as a categorical variable and the station series as covariates. In R this can be done with the following:

    month = as.factor(rep(month.abb, 54))
    linmod = lm(chron[,2] ~ 0 + month + chron[,3:6], na.action = "na.exclude")
    summary(linmod)
    predanom = predict(linmod)
    cor(predanom, chron[,2], use = "pair") # .999815
    plot(chron[,1], chron[,2] - predanom, type = "l") # look at residuals

    The fit is very good, with a correlation of .999815. The coefficients for the months adjust for the anomalies (though not zeroed exactly to the reference period used by GISS), and the coefficients for chron[,3:6] are the weights used. This gives the following weights for the stations:

    X507934360010 0.62333689
    X507935460000 0.18488482
    X507933730000 0.15513372
    X507933090000 0.03618952

    which would correspond roughly to Weight = 0.872273 – Distance / 283.9 (my guess is approximately constant x (1 – Distance/250)). I would have checked these weights using your programme, however there seem to be some errors in the hansen_combine function, in particular, in the line:

    if (max(long0) reference=X[,1] #picks longest series to insert.

    Hope this helps you track down what is going on.
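    RomanM’s stated linear relation can be checked directly by regressing the recovered weights on the distances (both sets of values copied from this comment):

    ```r
    # Distances and recovered station weights from the comment above
    distance0 <- c(71.6541, 194.1838, 200.6309, 240.3882)
    w <- c(0.62333689, 0.18488482, 0.15513372, 0.03618952)
    coef(lm(w ~ distance0))
    # intercept ~ 0.872, slope ~ -1/283.9, i.e. consistent with
    # Weight = 0.872273 - Distance/283.9
    ```
    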

  48. Mike C
    Posted Jun 28, 2008 at 10:47 AM | Permalink | Reply

    Steve Mc,

    I’m not sure if this is your question in the original post but HadR2 refers to Hadley SST version 2. As if you didn’t already have buckets of problems.

  49. RomanM
    Posted Jun 29, 2008 at 9:48 AM | Permalink | Reply

    The weighting method that seems to be used for calculating gridcell averages bothers me!

    It is my understanding (correct me if I am wrong) that the following methodology applies: No stations outside of a given gridcell are directly used in the calculation of the temperature series of that gridcell. The series from the stations are combined using weights that are a decreasing function of the distance of the station from the center of the grid cell. If this is true, then there are serious problems with the current method.

    What’s wrong? If you want to merely estimate the temperatures at the center of the grid-cell (which is an arbitrary point defined as a result of the particular grid pattern chosen), this is not necessarily a bad way to do it. However, this value is a poor indicator of the temperatures anywhere else in the cell.

    There are several negative side effects. The obvious one is that stations are assigned a weight due not to their quality, but to their distance from a center point which is the result of the particular (arbitrarily chosen) gridding pattern. Notice that in Steve’s example, station 507934360010 has a weight more than 17 times as large as station 507933090000 in determining the gridcell result. Shift the boundaries of the gridcell and the weights will change, possibly reversing the weights for these two stations.

    The second side effect is less obvious. Suppose that we now use these gridcell values (area weighted) to calculate a “world temperature”. The weight of a station toward calculating the world temperature series is now basically the same as the weight of the station in calculating the gridcell value. Stations at the edges of the gridcells have virtually a zero contribution to the overall result! However, rotate the grid east or west a little, and you will get a different world average since the relative weights of the stations change!!! I can’t believe that the GISS folks would not be able to see this. Does it matter? At best, the error bounds would be larger than they need to be because of this type of unequal treatment of the stations. At worst, there is serious room for possible biases in the estimates. Either way, I doubt that this is a good way of doing things.

    Never let it be said that we can’t be constructive here on CA. Let me suggest a better approach. A more equitable treatment of the stations should not be based on the chosen gridding pattern. First, for each point on the globe, estimate the temperature using all stations, with weights similar to the GISS method, decreasing with distance to that point (and possibly further weighted by station quality). Then, for a particular grid pattern, average the estimated temperatures over all points in the gridcell. In practice, the procedure is reasonably easy to carry out. First, distance weighting can ignore all stations farther than some given distance, so that each point estimate is calculated from a relatively small number of stations. Secondly, the averaging over points within a gridcell becomes a two-dimensional integral of the weights over the points in the gridcell (which may have to be evaluated numerically, depending on the weight functions chosen). This needs to be done only once for each gridcell, and it results in a small number of stations (some of which may lie outside the gridcell) with an associated set of weights for that cell. Once these are calculated, they can be used to calculate the temperature series.
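    A minimal sketch of this proposal in R (my notation, not GISS code): estimate the temperature at each point of a fine mesh from nearby stations using a linear-in-distance taper, then average the mesh points falling in a gridcell. Distances here are naive planar distances, for illustration only:

    ```r
    # Point estimate from all stations within `radius` km, linear taper weights
    point.estimate <- function(px, py, st.x, st.y, st.t, radius = 1200) {
      d <- sqrt((st.x - px)^2 + (st.y - py)^2)
      w <- pmax(0, 1 - d / radius)          # GISS-style linear taper
      if (sum(w) == 0) return(NA)
      sum(w * st.t) / sum(w)
    }

    # toy stations (x, y in km; t an anomaly); the third lies outside the cell
    st.x <- c(0, 500, 900); st.y <- c(0, 300, -200); st.t <- c(0.2, 0.5, -0.1)

    # average the point estimates over a mesh covering the cell [0,500] x [0,500]
    mesh <- expand.grid(px = seq(0, 500, by = 50), py = seq(0, 500, by = 50))
    cell.value <- mean(mapply(point.estimate, mesh$px, mesh$py,
                              MoreArgs = list(st.x = st.x, st.y = st.y, st.t = st.t)),
                       na.rm = TRUE)
    cell.value
    ```

    Note that the station outside the cell still contributes, and the cell value would be unchanged if the grid were shifted, which is the point of the proposal.
    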

    Does this remedy the previous issues? The answer is yes. Although stations near the center of the gridcell will still have more influence in determining the cell value, stations near the edges have higher influence than they do in the current method. As well, their influence will extend into neighboring cells, so that in the overall structure each station is treated equally. Secondly, since the initial estimation was done independently of the grid pattern, the estimates of world temperatures would not depend on the grid structure chosen. Differences between HadCrut and GISS have been explained away by some as differences in the treatment of Arctic temperatures. One wonders if it goes deeper than that.

  50. Stan Palmer
    Posted Jun 29, 2008 at 10:06 AM | Permalink | Reply

    re 49

    Stations at the edges of the gridcells have virtually a zero contribution to the overall result!

    The weighting decreases the impact of a single station, but there are proportionally more stations (greater surface area). The effect washes out, so stations in each area contribute the same amount.

  51. RomanM
    Posted Jun 29, 2008 at 10:21 AM | Permalink | Reply

    #50 Stan

    I think you missed the point. In Steve’s example, station 507934360010 has a weight more than 17 times as large as station 507933090000 in determining the global average merely because of the artificially chosen gridding pattern. That is mainly because it does not contribute anything to estimating temperatures in neighbouring gridcells (at points that may be closer to it than any station in that neighboring cell).

  52. claytonb
    Posted Jul 1, 2008 at 10:51 AM | Permalink | Reply

    It is my understanding (correct me if I am wrong) that the following methodology applies: No stations outside of a given gridcell are directly used in the calculation of the temperature series of that gridcell.

    Really? That’s a huge oversight.
