For your Sunday edification, today’s crossword puzzle pulls together three examples of Hansen data “combining”. The question is to find an AlGoreithm that accounts for all three versions. The three data sets each have three columns: the first two are data versions; the third column is the Hansen-combined series. In each case there is an overlap between the data versions, and where the data versions overlap, they are identical. In each case, during the period of overlap, one version has one value missing, and as a result of this one missing value, the early values of the version with the earlier start are adjusted.
In one case, the missing value is a winter value; in the other two cases, it is a summer value. In the case where a winter value was missing, the earlier version was adjusted down; in the cases where a summer value was missing, the earlier version was adjusted up.
Praha http://data.climateaudit.org/data/giss/61111520000.dat
Joenssu http://data.climateaudit.org/data/giss/61402929000.dat
Gassim http://data.climateaudit.org/data/giss/22340405000.dat
These are all tab-separated and can be read into Excel.
The issue is not to find a rational algorithm that yields the adjustments: the only rational way to combine the data is simply to combine the available values. The issue is to find any algorithm that reproduces them, however improbable. I’ve posted notes on my efforts, but am now stuck. Some things seem likely, but don’t be limited by my guesses, as they may be wrong.
Notwithstanding these caveats, as noted before, it appears that Hansen identifies the first month and last month of overlapping values. One possibility was that he calculated the difference in means over available months, without any allowance for whether it was a summer or winter month missing, and then used a form of Hansen-rounding to calculate a one-digit adjustment. I was able to get a semi-plausible formula that fit Praha and then failed Joenssu; then one that fit Praha and Joenssu, but failed Gassim. Praha has a negative adjustment to early values; Joenssu and Gassim have positive adjustments. From inspecting quite a few series, I would say that negative adjustments are by far the more prevalent, but we’re working here with two-version examples for simplicity. Who knows what will happen when we get to three-version stations.
UPDATE #1
Thank you for the various contributions and suggestions, especially to John Goetz both for identifying the problem and solving the calculation of the delta. I think that we’ve solved quite a bit of what Hansen’s done and ended up turning over another large stone: how Hansen calculates seasonal and annual averages for individual station versions. Now that we’re close to describing what he did, it appears increasingly likely to me that Hansen has corrupted his entire data set. Whether the corruption of the data set “matters” is a different issue, but the evidence is now quite powerful that the data set has been corrupted through the problems discussed below.
Let me summarize what I think we know. In some cases there are perhaps alternative ways of arriving at an answer, and I’ve picked the approaches that seem most likely to me to have been used.
Single-Digit Adjustment
I think that the evidence is overwhelming that Hansen makes an adjustment in even tenths to one column in these two-column arrays of monthly data. The delta for Praha is -0.1 to the first column; +0.2 for Gassim and +0.1 for Joenssu. After adjusting one column, Hansen then calculates a combined monthly series that is not a direct average, but a “Hansen average” as discussed below. This leaves two problems: (1) how the single-digit delta is calculated; and (2) how the combined version is calculated from the adjusted matrix.
Integer Arithmetic
There is convincing evidence that Hansen uses integer arithmetic in tenths of a degree. It’s possible to emulate integer arithmetic to some extent in R with a little thought. Floating point arithmetic with conventional rounding won’t yield Hansen combined series.
Calculation of the Delta
I am convinced that John G has solved the calculation of the delta. By collating monthly series, I complicated the solution of the problem because I (and people following my collations) were working only with monthly data versions. Hansen’s GISS version also includes annual versions. While it is presently unknown how Hansen calculated the annual and quarterly averages (as his method of filling missing data is unknown), John G’s method (and I’ve confirmed this with a slight variation) will recover the deltas.
Briefly, the two versions are compared to determine which years have values for both versions – this is 4-5 years in the Praha, Gassim and Joensuu cases. Then the difference between the averages of the annual values is calculated, yielding the required delta (-0.1, +0.2, +0.1). In the Gassim case, the result is exactly 0.15, and when I calculated the result using floating point arithmetic, I got 0.1 instead of 0.2. So you do have to take care to emulate integer arithmetic.
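For concreteness, here is a minimal Python sketch of the delta recipe just described. The function name and the per-year rounding to integer hundredths are my own choices (the real code presumably ingests integer tenths directly); the point is that the magnitude is rounded half-up in integer arithmetic, which is what rescues borderline cases like Gassim’s 0.15. It reproduces the Praha delta from the annual (ANN) columns in the tables in this post:

```python
def hansen_delta(ann_old, ann_new):
    """Single-digit delta applied to the earlier version, per the recipe above.
    ann_old, ann_new: dicts year -> annual mean (deg C)."""
    years = sorted(set(ann_old) & set(ann_new))   # years present in both versions
    # difference new-minus-old, summed in integer hundredths of a degree
    a = sum(int(round(100 * (ann_new[y] - ann_old[y]))) for y in years)
    k = len(years)
    sign = 1 if a >= 0 else -1
    # round-half-up on the magnitude in integer arithmetic, then keep one digit
    return sign * round(((abs(a) + 5 * k) // k) / 100.0, 1)

# Praha annual columns (Versions 0 and 1) from the tables in this post
praha_v0 = {1987: 7.93, 1988: 9.40, 1989: 9.98, 1990: 10.15, 1991: 8.73}
praha_v1 = {1987: 7.50, 1988: 9.40, 1989: 9.98, 1990: 10.15, 1991: 8.77}
```

With these inputs the mean annual difference is 0.078 deg C, which the half-up rounding turns into 0.1; a Gassim-style mean difference of exactly 0.15 comes out as 0.2 rather than the 0.1 that naive floating point gives.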
Hansen Averaging
We’ve had a variety of suggestions on how to obtain the combined value once the adjusted matrix is created – some of which are probably equivalent. I’ve restated my own solution to the problem since I’ve spent time confirming that it works – without excluding the possibility that some other variation was used. (BTW I think that it is very unlikely that Hansen detoured through Kelvin in these calculations though.) As I noted in a post below, here’s the evidence to consider:
1) if there is only one available value from the two-column matrix after adjustments, the adjusted value is taken whether it comes from column 1 (adjusted) or column 2 (unadjusted). No problem here. So we only need to consider averaging where both columns have values.
2) If the sum of the two columns (in tenths) is even, then the combined value is the average – no problem.
3) for Praha, if the sum of the two columns (in tenths) is odd, then the combined value is averaged up. For Joensuu, if the sum of the two columns (in tenths) is odd, then the combined value is averaged down. For Gassim, there is only one case where the sum of the two columns (in tenths) is odd – May 1990 and in this case, the combined value is averaged down.
The rounding up or rounding down depends on the sign of the adjustment. If there is a negative adjustment to column 1, it’s averaged up; if there’s a positive adjustment, it’s averaged down. I was able to implement this by checking whether the sum of the two entries was even or odd, calculating the “Hansen-average” as follows:
1) if there’s only one available value, use it.
2) if the sum of the two values is even, take a simple average;
3) if the sum of the two values is odd, add 1 if the delta is positive (i.e. subtraction in the first column) and subtract 1 if the delta is negative (i.e. addition in the first column). Then take an average.
If it seems a little long-winded, remember that we have to work in integer arithmetic.
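As a sanity check, the three rules can be written down in a few lines of Python. The naming is mine: values are integer tenths, and `adjustment` is the signed number of tenths added to column 1 (so Gassim is +2, Praha is -1), matching the sign convention of the “averaged up / averaged down” statement above:

```python
def hansen_average(u, v, adjustment):
    """Combine two already-adjusted monthly values, in integer tenths.
    u, v: column values in tenths, or None if missing.
    adjustment: signed tenths added to column 1 (e.g. +2 for Gassim)."""
    if u is None:                              # rule 1: a lone value is taken as-is
        return v
    if v is None:
        return u
    s = u + v
    if s % 2:                                  # rule 3: odd sum, nudge before halving
        s += -1 if adjustment > 0 else 1       # positive adj -> down, negative -> up
    return s // 2                              # rule 2: even sum is a plain average

# Gassim May 1990: 30.4 adjusted by +0.2, paired with 31.5 -> combined 31.0
assert hansen_average(304 + 2, 315, +2) == 310
```

The odd-sum nudge is what distinguishes this from ordinary rounding: the direction of the half-tenth is fixed by the sign of the adjustment, not by the value being averaged.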
Seasonal Averages
Now we turn to the seasonal and annual averages that underlie the troublesome delta calculations. Here we have another crossword puzzle. I’ve shown below the relevant overlapping years for the two Praha-Libuš versions – I apologize for the ragged format. I’ve bolded the points of difference.
The 1986 year does not enter into the delta calculations, but December values enter into the quarterly DJF calculation of the following year. The only difference between the two versions here is that Version 0 has a December 1986 value of 1.4, while Version 1 is missing its December 1986 value. The 1987 DJF estimated first quarter for the version with a missing month (Version 1) is -3.7, while the 1987 actual DJF first quarter for the version with all three months is calculated arithmetically to be -2.0. There is a second smaller difference in 1991 values, which is hard to understand since all 1991 monthly values are identical in the two versions. (Explaining this is a new puzzle.) To calculate the deltas from raw data, we have to be able to replicate how Hansen fills in missing data.
In the example below, the actual Praha temperature in Dec 1986 according to Version 0 was 1.4 deg C. This information was missing from Version 1. By calculating the Dec 1986 temperature required to yield the 1987 DJF average, one can easily see that Hansen estimated the Version 1 Dec 1986 temperature at -3.6 deg C – fully 5.0 deg C lower than the actual temperature (Version 0). He then said: aha, Version 1 measures cold relative to Version 0, and therefore Version 0 (which contains the older data) needs to be adjusted down (in this case by 0.1 deg C). It’s crazy.
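The back-calculation is just algebra on the quarterly mean; a small hypothetical helper (names mine) makes it explicit. Note the minus signs, which the arithmetic requires once the stripped signs are restored: 5.0 deg C below the actual 1.4 deg C is -3.6 deg C.

```python
def implied_missing_month(q_mean, known_months):
    """Back out the value a quarterly average implies for one missing month:
    three times the mean, minus the two known months."""
    return round(3 * q_mean - sum(known_months), 1)

# Praha Version 1, DJF 1987: reported mean -3.7; Jan -6.7 and Feb -0.8 are known
dec_1986_estimate = implied_missing_month(-3.7, [-6.7, -0.8])
```

Running this recovers the -3.6 deg C estimate attributed to Hansen above.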
Praha Version 611115200000
year jan feb mar apr may jun jul aug sep oct nov dec DJF MAM JJA SON ANN
1986 0.1 6.4 3.9 9.4 16.2 17.1 18.2 17.8 12.6 9.4 5.0 1.4 -0.8 9.8 17.7 9.0 8.93
1987 -6.7 -0.8 -0.7 9.8 11.6 15.6 18.6 16.3 15.5 9.6 4.9 1.8 -2.0 6.9 16.8 10.0 7.93
1988 2.6 2.2 2.7 9.3 15.7 16.3 18.6 18.3 14.1 9.9 1.3 2.7 2.2 9.2 17.7 8.4 9.40
1989 1.2 3.6 7.5 9.2 14.7 16.2 18.8 18.3 15.0 10.6 1.9 1.6 2.5 10.5 17.8 9.2 9.98
1990 1.5 5.5 7.9 8.1 15.3 17.4 18.7 19.8 12.0 9.6 4.4 0.3 2.9 10.4 18.6 8.7 10.15
1991 1.3 -2.9 6.2 7.8 10.2 15.6 20.0 18.5 15.3 NA 3.2 NA -0.4 8.1 18.0 9.2 8.73
Praha Version 611115200001
year jan feb mar apr may jun jul aug sep oct nov dec DJF MAM JJA SON ANN
1986 NA NA NA NA NA NA NA NA NA NA 5.0 NA NA NA NA NA NA
1987 -6.7 -0.8 -0.7 9.8 11.6 15.6 18.6 16.3 15.5 9.6 4.9 1.8 -3.7 6.9 16.8 10.0 7.50
1988 2.6 2.2 2.7 9.3 15.7 16.3 18.6 18.3 14.1 9.9 1.3 2.7 2.2 9.2 17.7 8.4 9.40
1989 1.2 3.6 7.5 9.2 14.7 16.2 18.8 18.3 15.0 10.6 1.9 1.6 2.5 10.5 17.8 9.2 9.98
1990 1.5 5.5 7.9 8.1 15.3 17.4 18.7 19.8 12.0 9.6 4.4 0.3 2.9 10.4 18.6 8.7 10.15
1991 1.3 -2.9 6.2 7.8 10.2 15.6 20.0 18.5 15.3 NA 3.2 0.7 -0.4 8.1 18.0 9.4 8.77
Difference
year jan feb mar apr may jun jul aug sep oct nov dec DJF MAM JJA SON ANN
1987 0 0 0 0 0 0 0 0 0 0 0 0 1.7 0 0 0.0 0.43
1988 0 0 0 0 0 0 0 0 0 0 0 0 0.0 0 0 0.0 0.00
1989 0 0 0 0 0 0 0 0 0 0 0 0 0.0 0 0 0.0 0.00
1990 0 0 0 0 0 0 0 0 0 0 0 0 0.0 0 0 0.0 0.00
1991 0 0 0 0 0 0 0 0 0 NA 0 NA 0.0 0 0 0.2 0.04
Appropriateness of “Bias Method” for Scribal Errors
When one puts down what Hansen did in black and white, the inappropriateness becomes increasingly clear.
In this case, Hansen has estimated a value for the missing December 1986 value in Version 1. The most obvious method for estimating the value would be to use the available December 1986 value in Version 0, since these values coincide for all other points. In linear regression terms, the r2 is 1. Instead of doing this, Hansen used an unknown method to estimate the missing value. He then calculated the difference between his estimate and the actual values in the other version and stated that this difference was a bias in the versions. He then used this bias to adjust the data.
It’s pretty hard for a statistical method to be as “wrong” as this one. In the cases that we’ve seen, the error sometimes adds to early values; it is my impression that the error more often increases the apparent trend. Estimating the impact of this Hansen error on the data set is a different project. We’ll see where this goes.
UPDATE #2
The following R function appears to calculate the Hansen delta between two station versions. X and Y are R data frames in GISS format, with a column named “ANN” corresponding to the 18th column in GISS format.
hansen_bias = function(X, Y) {
temp1 = !is.na(X$ANN); temp2 = !is.na(Y$ANN)
test = match(Y$year[temp2], X$year[temp1]); temp = !is.na(test)
A = Y$ANN[temp2][temp]
B = X$ANN[temp1][test[temp]]
a = sum(floor(100*A - 100*B)); K = length(A)
hansen_bias = sign(a) * round(floor((abs(a) + 5*K)/K)/100, 1)
hansen_bias }
UPDATE #3 (Sept 8, 2007)
Hansen released source code yesterday. In a quick look at the code for this step, the combination of station information is done in Step 1 by the program comb_records.py, which unfortunately, like Hansen’s other programs (and, for that matter, Juckes’ Python programs), lacks comments. However, with the work already done, we can still navigate through the shoals.
The delta calculation described above appears to be done in the routine get_longest_overlap, where a value diff is calculated. Squinting at the code, there appear to be references to the annual column, confirming the surmise made by CA readers. It doesn’t look like there is rounding at this stage – so maybe the rounding occurs only at the output stage, contrary to some of the above surmise. I still think that there’s rounding somewhere in the system, but I can’t see where right now, and this could be an incorrect surmise.
After calculating the delta (“diff”), this appears to be applied to the data in the routine combine which in turn uses the routine add (in which “diff” is an argument).
There is an interesting routine called get_best which ranks records in the order MCDW, USHCN, SUMOFDAY and UNKNOWN, and appears to use them in that order. I don’t recall seeing that mentioned in the documentation. I’m going to look for the location of this information in the new data, as I don’t recall it having been available before.
208 Comments
Steve:
I will try my hand, but I also have a kind of dumb question: why are there multiple data sets for a single site/location with extended overlaps? I can see a 2- or 3-year overlap to figure out site or instrument change effects, but some of these are extremely odd.
The other thing that struck me is: why the adjustments to the single data record, as at Praha? Is Hansen already incorporating changes due to a site with a long record?
These are different scribal versions for the nth time. One might come from the Smithsonian, one from NCAR and there were little differences in the versions. Why was one version different? Some clerk copying.
In these cases, these are NOT different sites or stations, though this occurs in other locations.
I like the end of the second data set. If no data is available, add hockey stick.
#3. I collated one data set a little more recently than the other, which probably accounts for the differences. It takes about 24 hours to scrape each GISS version, so I haven’t scraped the dset=0 as recently as the dset=1. So don’t over-interpret the end differences.
As a usual lurker here, I’d like to decloak and ask a question that I don’t think has been discussed lately. What is Hansen’s team made up of? Does he have a group of grad students collating data per his guidelines, or is he doing all these manipulations himself? Maybe we can call in the FBI and get one of his henchmen to flip — turn “state’s evidence” so to speak (or at least wear a wire)?
Also, Waldo: South America is pretty much a “done” topic for discussion, but could someone take a look at my post #55 and see if they agree?
Thanks,
dwhite
OK, I’m just a lowly software developer here, not a climate scientist, but looking at the data, this is what I come up with:
1. Take each data set, and calculate a magic offset in tenths of a degree for it. I don’t know how they did this, but it appears that the offsets were as follows. Note, I’m using an integral representation of tenths of a degree for a reason. We’ll also be doing our math on the values in integral tenths as well, so, for example, 30.1 would be stored as 301.
223404050000 2
223404050001 0
614029290000 1
614029290001 0
611115200000 -1
611115200001 0
The later of the two records is always 0.
Then, to combine each month’s data we do something like:
If data exists in exactly 1 set, use that set’s value plus its magic delta.
If data exists in both sets, average the values using integer math, and average the offsets using integer math, and add them together.
That seems to work for these 3 sites.
Basically if you assume that the temperature was stored in integers corresponding to tenths of a degree it seems to work out.
To confirm this, btw, it would be interesting to look at sites where there are more than 2 records. Right now I’m assuming that it’s strictly an integral average, but looking at a site with more sets of data associated would probably help nail that down.
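If I’ve read Skip’s recipe right, it is only a few lines of Python. The offsets are taken as given (how they are derived is the open question), everything is in integer tenths, and C-style truncating division is assumed for the offset average, since Python’s `//` floors and would turn the (-1 + 0)/2 average for Praha into -1 rather than 0:

```python
def combine_month(v0, v1, off0, off1):
    """Sketch of Skip's proposed combine step, in integer tenths.
    offN are the 'magic' offsets; int() truncates toward zero, matching
    C-style integer division for the averaged offset."""
    if v0 is None:
        return v1 + off1
    if v1 is None:
        return v0 + off0
    # average the values and the offsets separately, both with integer math
    return (v0 + v1) // 2 + int((off0 + off1) / 2)
```

On the one Gassim month where the two columns differ (30.4 and 31.5, offsets +2 and 0), this yields 310 tenths, i.e. the published 31.0.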
Ok, here is a partial answer, do I get some candy ;) I think I can prove how Hansen is “averaging” after he has somehow adjusted for the bias (which I haven’t yet figured out, but it’s still Sunday here… I’m pretty close :) ).
The result comes from Pjarnu (58.4 N, 24.5 E, 61326231000x) series (which, I suppose is Pärnu, Estonia). It has only two original series (613262310000 and 613262310001) both ranging from Jan/1936 to Aug/1989. These series differ substantially, although they appear to be based on the same original measurements.
Now, there is one value that is missing from the first series but available in the second, and five values the opposite way. All of these values are exactly equal to the combined series. Hence no “bias” correction was made to either series, and the combined has to be simply the average of the two series. So how do we get an exact match? Here it is; let me introduce you to the Hansen average (x and y in degrees Celsius):
floor((10*x+10*y)/2)/10
Have fun!
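In other words, the formula is just floor division in tenths. A quick Python check of the equivalence (function names mine):

```python
import math

def hansen_avg(x, y):
    # Jean S's observed rule: floor the mean at the tenths level (x, y in deg C)
    return math.floor((10 * x + 10 * y) / 2) / 10

def hansen_avg_tenths(xt, yt):
    # the same rule in pure integer tenths; Python's // floor-divides
    return (xt + yt) // 2

assert hansen_avg(9.1, 9.2) == 9.1          # mean 9.15 floors to 9.1
assert hansen_avg_tenths(91, 92) == 91
```

Note that for negative temperatures, floor rounds downward (away from zero), which is worth keeping in mind given the truncation-versus-rounding issues raised elsewhere in the thread.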
#6 Skip, your algorithm does not account for the 0.1 degree upward bias of both records for Gassim during the period of overlap. Also, how is the magic number derived from the existing data?
Hi Skip. Can you say anything about rounding?
I humbly think all of this will be traced to the fact that Hansen uses the following naive algorithm for calculating averages (using single precision floats):
SUBROUTINE AVG(ARRAY,KM,NAVG,LAV,BAD,LMIN, DAV)
REAL ARRAY(KM,NAVG),DAV(NAVG)
DO 100 N=1,NAVG
SUM=0.
KOUNT=0
DO 50 L=1,LAV
IF(ARRAY(L,N).EQ.BAD) GO TO 50
SUM=SUM+ARRAY(L,N)
KOUNT=KOUNT+1
50 CONTINUE
DAV(N)=BAD
IF(KOUNT.GE.LMIN) DAV(N)=SUM/KOUNT
100 CONTINUE
RETURN
END
I think floating point errors accumulate and manifest themselves in these kinds of oddities (compounded by the possibility that they might be truncating rather than rounding numbers). If all of those operations were carried in integer (using tenths Celsius as is done in the GHCN) no errors would be possible.
— Sinan
My gut is telling me the odd behavior is a programming error related to identifying the period of overlap. I suspect the averages used to calculate dT are taken from dates that are offset from the actual period of overlap. They could be offset by one month, one year, or perhaps start/end on a calendar year boundary. One or both series may be offset. Gassim is a real puzzle though…
Re: 8
Jean S., I posted before I saw your comment. I think you hit the nail on the head.
#include <stdio.h>
#include <math.h>
int main(void) {
float xf = 9.1;
float yf = 9.1;
int xi = 91;
int yi = 91;
float af = floor( ( ( 10.0 * xf + 10.0 * yf ) / 2 ) / 10.0 );
float ai = ( ( xi + yi ) / 2.0 ) / 10.0;
printf("Hansen: %.1f\n", af);
printf("Integer: %.1f\n", ai);
return 0;
}
C:\Home\asu1\Src\algoreithm> gcc a.c -o a.exe -lm
C:\Home\asu1\Src\algoreithm> a
Hansen: 9.0
Integer: 9.1
— Sinan
It does, actually. Because the bias is 2 tenths of a degree on the earlier record. So the averaged bias of 2 tenths of a degree and 0 is 1 tenth of a degree.
For the other two, because the difference is only 1 tenth, if you’re doing integer math the average of 1 and 0 is 0.
As for how the magic number is derived? Who knows? There’s probably some rhyme or reason to it.
#8 Jean, I don’t think that he’s doing anything like that. I think that he’s using poor man’s fixed point, doing all his calculations with integers that represent the number of tenths. The easiest way to check that would be to look for a site with 3 records, say, that had the following ‘magic’ offsets.
Record 1 Offset 0.2
Record 2 Offset 0.1
Record 3 Offset 0.0
Or offsets of 2, 1 and 0 in integers. At that point, I’d expect to see:
Record 1 present – offset of 0.2
Record 2 present – offset of 0.1
Record 1, 2 present – offset of 0.1
Record 3 present – offset of 0.0
Record 1, 3 present – offset of 0.0
Record 2, 3 present – Offset of 0.0
Record 1, 2, 3 present – offset of 0.1
Basically any site with 3 records that don’t all have the same offset would allow us to check the algorithm, but as far as I can see this replicates how the series are combined.
#10 MarkR I don’t think he’s particularly doing anything with rounding; it really does appear to me that he’s just using integers. One record that jumped out was Gassim, May 1990, where we have an obviously odd data point, because in every other case the two series are identical when both are present:
223404050000 30.4
223404050001 31.5
Combined 31.0
The way I see it, he’s read the numbers in as the integers 304 and 315. Doing integer math, (304+315)/2 = 309. So he gets an unadjusted average of 30.9, and applies the adjustment calculated above. I think he’s completely ignoring rounding, letting the integer math do that for him.
We are going through some serious contortions trying to determine the magnitude of the correction, but why does the sign of the correction always seem to be inverted? For instance, the earlier series at Praha has a cooler average during the overlap period, but it is adjusted down (even cooler) as opposed to up (the intuitive correction). Each of the cases listed behaves like this: Praha runs cooler in the earlier series, while the other two run warmer in the earlier series and are adjusted even warmer.
Either this is just a boneheaded programming error or something else is going on that we’ve missed.
re 5
Somebody legally inclined needs to look at Hansen 87.
It was funded by an EPA grant. The work was completed by NASA and Sigma Data.
If it was an EPA grant I cannot imagine they can claim any ownership of the code.
Rights get assigned in these kinds of joint efforts.
What did EPA have as deliverables on the grant?
Re John #9
For Gassim, I think only the first (earlier) record is biased upward 0.2 deg, not both records.
#8. Jean S: does this differ from “Hansen rounding” as I described in the post before this: http://www.climateaudit.org/?p=2017 (I know that it’s hard to keep track of these notes.)
My simple mind sees simpler behavior
Station:
Praha 61111520000.dat and …0001.dat — “Praha1” and “Praha2”
Joenssu 61402929000.dat and …0001.dat — “Joenssu1” and “Joenssu2”
Gassim 22340405000.dat and …0001.dat — “Gassim1” and “Gassim2”
For Praha: if Praha1 alone is available, subtract 0.1
if Praha2 alone is available or both are there, use as is
For Joenssu: if Joenssu1 alone is available, add 0.1
if Joenssu2 alone is available or both are there, use as is
For Gassim: if Gassim1 alone is available, add 0.2
if Gassim2 alone is available, use as is
if both are available, average and add 0.1
Praha and Joenssu always have identical values when both are present, but Gassim’s station values are sometimes different.
I think Hansen estimated a correction factor for each station and used it consistently.
Let me try and summarize what I think we know regarding these three data sets.
1) The adjustment seems to be calculated based on the data average of the overlapping period for each data set.
2) The adjustment seems to be a fixed offset applied to the earlier dataset. The later dataset does not seem to be changed.
3) The sign of the offset applied to the first dataset is inverted from what would be expected. (e.g. a cooler dataset is cooled even more).
4) The magnitude of the offset appears to be *roughly* equal to the difference of the two averages from #1 above. There appears to be some sort of rounding issue.
5) The adjustment looks to be spurious for reconciling data from different scribal sources. The offset is driven by a small quantity of missing or bad data points. The adjustment seems better suited to reconciling data from different stations.
Am I missing or misstating anything?
#20: Yes. It’s just a simple floor. If you are working in the integer domain, you simply disregard all the decimals. Try it, you’ll get an exact result for Pjarnu. So that’s the last part of combining two stations. Hansen is combining stations in pairwise manner this way:
1) calculate the difference of the means dt = mean(x) - mean(y) in the overlap
2) subtract the difference from _the first series_, i.e. x = x - dt
3) calculate the average between x and y
Now my observation shows how to do 3). In the Pjarnu series there is no adjustment from 1), so it was possible to figure out 3). There shouldn’t be anything strange in 2), so it’s left to figure out how 1) is done. My hunch is that you have already done it correctly, but failed to notice due to the “wrong” thing in step 3)
#14 (Skip) Well, that gives the exact result for Pjarnu…
No candy for anyone until they can calculate Gassim, Joensu and Praha using the same algorithm.
One suggestion that I haven’t explored yet – John G emailed me wondering if there was some carryover from distance weighting.
An additional way that might help with some rounding touches is say using a weight of 2 on the continuing series and a weight of 1 on the older series.
But the real stumbling block remains:
the difference in means (with no allowance for which month is missing – though this is at present our only traction) as compared to the adjustments:
Diff_Mean Adjustment Value Missing
Praha -0.1316102 -0.1 1.4
Joensu 0.2698582 +0.1 16.2
Gassim 0.1149412 +0.2 33.6
Getting Joensu and Gassim to stand still is very perplexing.
#21. We know that. It’s calculating the deltas that is the quandary.
#22. Yes, we know these things. Except that (4) is confounded by the fact that the adjustments are in inverse order relative to the difference in means for Joensu and Gassim.
I think Joenssu – Joensu refer to the Finnish site Joensuu.
It is very interesting that Hansen uses a data set from Finland that is not homogenized.
We have de facto six series which are homogenized from 1880: Maarianhamina, Helsinki, Lappeenranta, Jyväskylä, Oulu and Sodankylä. These should be used in the first instance, not such self-fabricated data. One problem with the Finnish data is that it is secret; I think that only Phil Jones has access to it, but he cannot (will not) publish it.
I’ll try again :). In Praha, how did he get this?
2007.33333333333 NA NA 32.7
2007.41666666667 NA NA 34.8
2007.5 NA NA 34.5
Sorry! Two site names should be Jyväskylä and Sodankylä. The Finnish (Swedish) letters do not appear correctly in this connection.
#29. As I said earlier, this just reflects the different dates on which I scraped the source data. I scraped the combined version later than the dset=0, and it takes too long to update other than sporadically.
#22 Jeff C. I am not sure how you know which dataset for Praha is cooler.
Re 32
During the overlap period (1986.833 to 1991.833)
Praha dataset 0 overlap period average = 9.165 deg (cooler)
Praha dataset 1 overlap period average = 9.296 deg (warmer)
Dataset 0 is cooler over this period, yet it is offset -0.1 deg (made even cooler) to generate the combined values. Seems backwards. All three examples do this.
Keep in mind that if you have 3 records, (I think) you have to find the bias for the first pair, combine, and then use that new series as the starting point for calculating a new bias for combining the third record. This could complicate attempts to understand the formula for bias.
For example, if much of the data is identical then the first bias may work to force an opposite adjustment in the second step, masking the double adjustment and making the actual adjustments more difficult to reverse-engineer.
And frankly, I’m confused enough as it is.
KDT@34:
I don’t think so. I think that somehow the bias for each individual series is come up with individually using some method not clear to me, and then they would be combined as I described.
Now, the only plausible reason I can come up to do this is perhaps the machines that the calculations were done on are memoryconstrained. You could store each temperature value in a 16 bit short int rather than a larger float or double floating point value.
I guess my point was that the reason linebyline algorithms don’t turn out to transfer between records is likely because the answer you are looking for isn’t linebyline.
Skip #35
After describing how to combine two series, Hansen says:
All of the situations in the above paragraph should be noted. Any of them could confuse analysis.
Let me bold another part:
The “way” referred to here (I think) includes the bias calculation, not just the averaging.
#33 Jeff C.
You are right, it is backwards. HL87 says the bias δT is the difference between the station to be combined into the series and the existing series; δT is then subtracted from the station to be combined, and that result is then averaged with the existing series.
HL87 says the stations are ordered from longest to shortest record. In the cases of Gassim and Praha, station 0 has more valid records than station 1. Joenssu is the other way around. But it appears that rule is ignored, as the delta seems to always be calculated by subtracting station 1 from station 0, and then adding either the delta or some derivative of it back onto station 1, but not across the whole series as is supposed to be done.
By the way, I think Skip is right. The math is done using integers. Well, I think the data is probably ingested as integers given that the GHCN daily datasets are all integer records.
Using integer math, the three deltas I get for the stations are:
Station_0 Station_1 dT
Joensu 35 32 3
Gassim 248 246 2
Praha 92 93 -1
I am not sure how the “3” for Joenssu becomes a “1”, or how the crazy adjustment during the overlap period in Gassim is derived.
Another thought…we should think like a programmer from the mid-80’s, because that is when this code was originally written, and I doubt anyone has rewritten it since, or ported it to another language.
I can get exactly the same combined results using proper rounding (i.e. not Hansen’s) by the following method for Praha.
1. Average the temperatures over the overlapping period (1986.83 to 1991.83). This gives an average of 9.2 for column 1 and 9.3 for column 2, and hence the delta of -0.1 for column 1.
2. Add the delta to all column 1 values (i.e. subtract 0.1).
3. Add 100 to all values (this is to ensure all values are positive).
4. Average the values and subtract 100. You now have the combined value.
The reason for making all values positive is because most programs will round negative numbers differently than they will positive.
For consistent rounding you should add 0.05 and then discard everything past the first decimal place. However, most programs will add 0.05 for positive values and subtract 0.05 for negative values before discarding everything past the first decimal place.
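The asymmetry is easy to demonstrate. In the sketch below, round_half_away is the typical library behavior the comment warns about (half-values move away from zero), while round_half_up uses the +100 shift so that all halves round in the same direction; the function names and the shift constant are mine:

```python
import math

def round_half_away(x):
    """Typical 'add 0.05 for positive, subtract for negative, then truncate'
    behavior: half-values in tenths round away from zero."""
    return math.copysign(int(abs(x) * 10 + 0.5), x) / 10.0

def round_half_up(x, shift=100.0):
    """Shift everything positive first, so all halves round the same way."""
    return (int((x + shift) * 10 + 0.5) - int(shift * 10)) / 10.0

# the two agree for positive values but split on negative half-values
assert round_half_away(0.25) == 0.3
assert round_half_up(0.25) == 0.3
assert round_half_away(-0.25) == -0.3
assert round_half_up(-0.25) == -0.2
```

The test values are chosen to be exactly representable in binary (quarters), so the half-value behavior, not floating point noise, is what the comparison shows.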
#41. I previously posted a method to get Praha – and there may be others, not that they make sense. The issue is to get all three.
#41
The add 100 was an arbitrary value I used to make everything positive, but if it makes you feel more comfortable then convert to Kelvin (+273.2), average them, and convert back (-273.2). The result is the same.
Since the Czechs have the most accurate thermometers in the universe, this whole debate seems moot. Of all the points to be combined, only two require combining (at least only two that I saw skimming the data). The rest are identical.
#39 John
Makes sense. The shortest to longest ordering rule doesn’t seem to apply. What does seem to be consistant is that the earlier set is series 0, and the later set is series 1.
That would make series 0 the “existing” series and series 1 the “to be combined” series.
According to the excerpt from HL 87 above, the correction should be applied to series 1, the series being combined into the existing series. Yet in each case we’ve looked at, the correction is being applied to the existing series. The problem being that the correction is being calculated as if it were to be applied to the “to be combined” series (series 1 – the later series).
This is seemingly a programming error: instead of the correction being applied to the correct series, it is applied to the wrong series, which doubles the error.
That doesn’t explain the magnitude differences, as you noted. My instinct is that the algorithm selects the overlap period differently than we do by eyeballing it.
Here’s the clue to 1-across:
compare A to B
calculate bias A_B (by rule 1)
adjust A or B by A_B (by rule 2)
combine A and B to form AB
compare AB to C
calculate bias AB_C (by rule 1)
adjust AB or C by AB_C (by rule 2)
combine AB and C to form ABC
Now, just solve for rules 1 and 2. Solution in tomorrow’s paper.
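For concreteness, the skeleton above can be coded with placeholder rules. The bodies of rule 1 and rule 2 below are guesses standing in for exactly the unknowns we are trying to solve for, not Hansen’s actual rules:

```python
# Skeleton of the iterative combination above. rule1 and rule2 are the
# unknowns; the bodies below are placeholder guesses, not Hansen's rules.

def rule1(a, b):
    """Rule 1 guess: bias = mean difference over months both series have."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return round(sum(y - x for x, y in pairs) / len(pairs), 1)

def rule2(a, b, bias):
    """Rule 2 guess: shift the existing series A by the bias."""
    return [x + bias if x is not None else None for x in a], b

def merge(a, b):
    """Average where both exist, otherwise take whichever exists."""
    return [round((x + y) / 2, 1) if x is not None and y is not None
            else (x if x is not None else y)
            for x, y in zip(a, b)]

def combine_stations(series_list):
    combined = series_list[0]                 # A
    for nxt in series_list[1:]:               # B, then C, ...
        bias = rule1(combined, nxt)           # A_B
        combined, nxt = rule2(combined, nxt, bias)
        combined = merge(combined, nxt)       # AB, then ABC
    return combined
```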
#39 #41
For overlapping periods, average the values and then add one-half of the adjustment to the average to obtain the combined value, on the theory that one-half of the data points are “biased”. For differences of 0.1 absolute, the one-half adjustment is dropped, presumably due to rounding.
For Jaensuu, leave off the last data point in series 614029290001 for Dec of 1991 (just a data entry error). That will reduce the delta to 0.1.
re:40
If the high-level source code is mid-80s and Hansen is doing what Skip suggests, I wonder whether the floating-point operations would yield the same accuracy and results on different computers. Some of them used little-endian, some used big-endian, and some used a hidden-bit technique for having an extra bit to represent the mantissa in floating-point operations, etc.
best
#46. In the examples here, there is no C. The third column is the “answer” not a third version.
Did I download the wrong data sets? Of approximately 180 overlapping data points only 2 are not identical. The probability of two thermometers reading exactly the same temperature to 0.1 degrees C 99% of the time in separate locations seems a tad high to me. Whiskey tango foxtrot is up with the data sets? It really doesn’t matter how you massage the data if the data is suspect.
#47. OK, you may be onto another possibility.
Maybe the combined version for older data was calculated a long time ago and is not updated regularly. Maybe they calculated Gassim or Joensu with a different data set – in which case one of these results will be impossible to replicate.
My suggestion:
1. select the period with overlapping data
2. determine missing data by interpolation
3. determine the average for each data series
4. round the averages to 1 decimal place
5. subtract the averages and add the difference to one of the series.
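A sketch of those five steps, with step 2 done as a simple two-neighbour interpolation (one guess at what “interpolation” means here; only interior single gaps are handled):

```python
# Steps 1-5 above as code. Missing interior values are filled by
# averaging the two neighbours (an assumed form of step 2); the bias is
# the difference of the rounded overlap means (steps 3-5).

def bias_with_interpolation(s0, s1):
    def fill(series):
        out = list(series)
        for i in range(1, len(out) - 1):
            if out[i] is None and out[i-1] is not None and out[i+1] is not None:
                out[i] = (out[i-1] + out[i+1]) / 2    # step 2
        return out
    f0, f1 = fill(s0), fill(s1)               # steps 1-2
    overlap = [(a, b) for a, b in zip(f0, f1)
               if a is not None and b is not None]
    m0 = round(sum(a for a, _ in overlap) / len(overlap), 1)   # steps 3-4
    m1 = round(sum(b for _, b in overlap) / len(overlap), 1)
    return round(m1 - m0, 1)                  # step 5: added to one series
```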
Re 52,
Just tried your idea using an average of the two adjacent points for a missing data point. It makes the differences between the two series virtually disappear (less than 0.004 deg for all three cases).
Steve #49 my apologies.
My reply in #34 was meant for Skip #14:
Without that made clear my comments have rightly confused the world. Sorry!
#47 #51 #52 If this is true, it has nothing to do with UHI, unless Hansen’s lights=0 and wind determines UHI. If he is wrong on either account then he has gotten rid of UHI by including UHI despite the claims otherwise. By #53 it is mathematically the same as saying averaging all data gets rid of UHI even if UHI is present. Not a bet I would make.
#50. For the 800th time, they are scribal versions of the same data – not different stations.
#51 Steve, here’s an AlGoreithm that works for each of the Praha, Gassim and Joensuu series:
A. Take the average of overlapping data points in series 1 and round it to the nearest tenth of a degree:
Praha: 9.3
Joensuu: 3.4 (leave out Dec 1991)
Gassim: 24.6
B. Take the average of overlapping data points in series 0 and round it to the nearest tenth of a degree:
Praha: 9.2
Joensuu: 3.5
Gassim: 24.8
C. Subtract the values in B from those in A:
Praha: 0.1
Joensuu: 0.1
Gassim: 0.2
D. For all non-overlapping data points in series 0, add adjustments in C to the series 0 data points to obtain combined values.
E. For all overlapping data points, average series 0 and series 1 values, round to the nearest tenth of a degree, and then add one-half of the adjustments in C to the average to obtain combined values.
F. For all non-overlapping data points in series 1, series 1 values are equal to the combined values.
This gives the exact combined values for all three series, assuming only:
a) that one-half of the adjustment for Praha and Joensuu that should be added to the average of the overlapping data points is lost due to rounding or truncation and
b) that Dec 1991 in series 1 for Joensuu with a value of 4.9 was omitted due to data entry error or something equivalent.
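Steps A–F transcribe almost directly into code. Here is a sketch of my reading of the recipe, with assumption (a) implemented as truncation of the half-adjustment (so half of a 0.1 adjustment vanishes while half of 0.2 survives as 0.1):

```python
import math

# Steps A-F above. Assumption (a) is modelled by truncating half the
# adjustment to one decimal place, so half of 0.1 vanishes but half of
# 0.2 survives as 0.1.

def combine_earle(s0, s1):
    overlap = [(a, b) for a, b in zip(s0, s1)
               if a is not None and b is not None]
    mean0 = round(sum(a for a, _ in overlap) / len(overlap), 1)   # step B
    mean1 = round(sum(b for _, b in overlap) / len(overlap), 1)   # step A
    adj = round(mean1 - mean0, 1)                                 # step C
    half = math.trunc(adj / 2 * 10) / 10      # assumption (a): truncate
    out = []
    for a, b in zip(s0, s1):
        if a is not None and b is not None:   # step E
            out.append(round(round((a + b) / 2, 1) + half, 1))
        elif a is not None:                   # step D
            out.append(round(a + adj, 1))
        else:                                 # step F
            out.append(b)
    return out
```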
I give up! I have never been too good with puzzles and riddles and this is driving me insane.
The question is simply this: Is there really a systematic method being applied here, or is it really a hodgepodge of errors? While I can wrap my mind around oversights involving truncation versus rounding, I cannot even begin to speculate about how the same process that produced 223404050000/1 also produced 614029290000/1.
Ever since I put up the GHCN graphs, I have been trying to come up with a good, systematic method of running a whole bunch of regressions using monthly individual station data. So far, I have not been able to put together one.
I have seen SAS programs written by research assistants thought by other faculty to be SAS gods. In that light, it is not inconceivable to me that whoever wrote the processing programs, typing Fortran code in vi (vi rocks! ;) ) late at night kept doing this until his/her thesis adviser approved the result.
Steve, if you know the answer, please do tell :)
— Sinan
#58. Phil, that’s pretty good. I think that you have to assume that there’s an error in the data somewhere. Perhaps the dset=1 values were originally combined back in 1990 and never changed.
Without the source code, I guess we’ll never know. I’ve posted a query at realclimate asking for their assistance in this conundrum, given their view that Hansen’s methodology is sufficient to resolve it. We’ll see if Gavin lets it through.
From Joenssu
2007.333333 NA NA 9.1
2007.416667 NA NA 14
2007.5 NA NA 16.6
Q: How do you get an average from two non-numbers?
A: You use a different dataset for the calculation and then write the result into a database that doesn’t include the different dataset.
#57 Excuse me, combining scribal data that does not differ confuses me. The data sets you show for Gassim have only two points of difference requiring any calculation to combine the sets. Per the GISS raw data plot showing the combination of the two sets, there were significant differences at the point of overlap.
Since the sets are corrected to match at the overlap, the corrections made to combine the sets at the point of overlap are continued to the older set. Whether that is correct or not I do not know. Since the newer scribal set is biased higher than the original at the overlap per the GISS raw data plot, I would suspect instrumentation changes.
My solution to the Sunday Crossword is, “indeterminate based on the data provided”. If you combined the raw data, finding that the newer data set is upwardly biased, then the upward 0.2 degrees would be evident, not justified, but evident. At least that is my opinion. Then I still have to download the raw data to verify my thoughts.
#61
A possible answer to your question:
Q: How do you get an average from two non-numbers?
A: Use the average of the same month in previous years.
Maybe they used this method to determine missing data in overlapping periods. For example, in the data from Praha, the data for December 1986 is missing. The best estimate for this missing data is the average temperature for December calculated from the data of the same series.
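That suggestion is easy to sketch: fill a missing month with the mean of the same calendar month elsewhere in the same series. The data in the test is illustrative, not the actual Praha values:

```python
# Fill a missing month with the average of the same calendar month in
# other years of the same series, per the suggestion above.

def fill_from_climatology(series):
    """series: list of (year, month, value or None); returns a filled copy."""
    by_month = {}
    for _, m, v in series:
        if v is not None:
            by_month.setdefault(m, []).append(v)
    return [(y, m, v if v is not None
             else round(sum(by_month[m]) / len(by_month[m]), 1))
            for y, m, v in series]
```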
Note to 62, if you look at the first plotted point of each new set you will see the correction bias required versus the original data set. The actual correction is not specified and the raw data does not include the correction, as you have noted. There may be a lack of transparency.
#63 More Joenssu
Nothing at all (no data, no average) for 1995.333333 and 1995.75, so if they are referring back, why not in these two cases?
1995.333333 NA NA NA
1995.416667 NA 16.9 16.9
1995.5 NA 14.8 14.8
1995.583333 NA 15.1 15.1
1995.666667 NA 9.7 9.7
1995.75 NA NA NA
Probably a dumb question but would the differences in latitude influence his approach in any way?
Once you have the deltas you don’t have to do any fancy steps or rounding to get the combined result. Simply do the following for all the data sets.
1. Convert all values to Kelvin by adding 273.2 to each value
2. Apply deltas with a precision of 1/100 degree.
3. Average each monthly total and round to nearest 1/10 degree
4. Convert the averages to Celsius by subtracting 273.2
The following deltas work on the first series in each data set
Praha: Any value between 0.06 and 0.10
Joenssu: Any value between 0.05 and 0.09
Gassim: Any value between 0.15 and 0.19
Now if only I could find a way to calculate the deltas…..
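Those four steps, as a sketch (the delta is a free parameter here, which is exactly the remaining problem):

```python
# Steps 1-4 above: shift to Kelvin, apply a delta at 1/100 precision to
# the first series, average, round to 1/10, shift back to Celsius.

def combine_kelvin(s0, s1, delta):
    out = []
    for a, b in zip(s0, s1):
        vals = []
        if a is not None:
            vals.append(a + 273.2 + delta)    # steps 1-2
        if b is not None:
            vals.append(b + 273.2)            # step 1
        if not vals:
            out.append(None)
            continue
        avg_k = round(sum(vals) / len(vals), 1)   # step 3
        out.append(round(avg_k - 273.2, 1))       # step 4
    return out
```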
The correct way to average two integers with proper rounding is:
(x+y+1)/2
no? For example,
x = 3 (0.3), y = 3 (0.3), x+y+1 = 7, 7/2 = 3 (0.3) – 0.30 rounded down
x = 3 (0.3), y = 4 (0.4), x+y+1 = 8, 8/2 = 4 (0.4) – 0.35 rounded up
x = 3 (0.3), y = 5 (0.5), x+y+1 = 9, 9/2 = 4 (0.4) – 0.40 rounded down
It’s pretty marginal – it isn’t completely unreasonable to round 0.35 down as it’s as close to 0.3 as it is to 0.4 – but common practice is to round it up, so that’s what I’d be tempted to do.
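In integer tenths, the round-half-up average is one line; note that floor division makes the half case go toward +infinity for negative values as well:

```python
# Average two temperatures stored as integer tenths of a degree,
# rounding the .05 case upward as discussed above.

def avg_tenths(x, y):
    return (x + y + 1) // 2   # floor division: the half always rounds up
```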
The first thing I learned in college was KISS.
Praha Libus first recorded per plot 7.5 C versus 7.8 old (divide difference by two and round) +0.1
Joensu first recorded plot 2.9 versus 2.7 old (divide difference by two and round) 0.1
Gassim first recorded plot 25.0 versus 25.5 (divide difference by two and round) +0.2
Apply correction to previous data following new instrumentation.
Locate grad students that had typos and give them a C grade.
#62. Of course, what Hansen did is goofy since there is no problem. That’s understood. What we’re trying to figure out is what he did.
# 63 in # 62 you see the goofiness. I would not do that, but that was what was done to 80% confidence. (Give or take a few percent).
Sorry that is 70 referencing 62.. heck ya’ll figure it out
Now that I have some clue what to reference. 70, new instrumentation has to be calibrated against old to maintain continuity. Unfortunately, the new is not always as accurate as the old. The methods Hansen used, provided the idiot had published the reason, have some validity. Since he was probably jet setting with Al, he may have forgotten to publish what the heck was going on. What I have been saying is the data you need is not there without digging and assuming. Not a very scientific combination.
I was looking at this data:
http://data.giss.nasa.gov/work/gistemp/STATIONS//tmp.222304690002.1.1/station.txt
I noticed that seasonal averages are calculated even when only two of the three months are present. I also noticed that yearly averages are calculated even when only three of four months are present.
For example, note that 2002 has a yearly average calculated even though only seven of the twelve months are present. (Note the use of Dec to Nov for a year.) The theoretical lower limit appears to be 6 months if distributed correctly.
It’s easy enough to calculate the values that were substituted for missing months and years. For example, 0.7 for Apr 02, 3.6 for Apr 95, 30.5 for winter 02.
The thing I notice is that the values used don’t seem unreasonable, but they’re also not constants. I’m guessing they come from some numerical model (a climate model?)
This data is apparently calculated just for display on this page and the associated graph. I’m not saying anything is suspicious with this at all.
But I wonder if a similar step might be taken when considering missing data in these records under discussion. If they have that data at hand for these graphs, they have it for examining missing months at bias time.
It is clear that no method of averaging known to this man can get the applied offset in Joenssu, so the input for the missing value well may be external. Unfortunately if that’s the case, it will severely confound further attempts at replication.
#74
should read “missing months and seasons”
That’s Kalakan, for those who don’t recognize the data on sight.
Dammit!
I noticed that seasonal averages are calculated even when only two of the three months are present. I also noticed that yearly averages are calculated even when only three of four seasons are present.
I’ll stop messin’ up your board now Steve.
This is moving from mathematics to philosophy.
I started with 8 bit machine language in 1969 and can relate to rounding errors etc, though I have forgotten most of it as no longer needed. I have enormous respect for Steve for voluntarily and doggedly finding problems and seeking correct solutions. The frustration at having to second guess code is huge. In future, the mathematics at other climate stations or regions will no doubt reveal more crosswords, but surely there is a point where the crosswords have to stop and the correct answers get recalculated, legitimately, uniformly and officially.
Philosophically, I have reservations about filling in missing data by using some formula to get a value. No matter what is done, unless the data have no noise at all, the result will be a guess. This applies to interpolations as well as to extrapolations of data beyond the ends of the sets and to the merging of data sets. It is not uncommon to find that a value is missing because there was a non-ordinary event – sometimes these can be pivotal.
Philosophically, the result that is finally sought is a correct representation of the state of past events. For example, how much of a temperature change trend is real and how much is false, induced by methodology. IMO Steve has provided more than enough examples of doubt to easily justify a complete reworking, not with tricky little dodges, but with (for example) proper physics that works on accepted units like degrees absolute, has correct error estimates and satisfies peer review in its proper sense. Wipe the whiteboard clean and start again.
Philosophically, unless this is done soon, we will lose data forever. People will die without having disclosed important information. Machines will be incapable of reading old data storage devices, there will be no more people able to convert Fortran correctly to what is current, etc.
Steve, do you think that you have now proven enough to allow an end to the crosswords, or is more pressure still needed?
I have to correct the name – Joenssu. It’s Joensuu. It is located in the eastern part of Finland.
I suspect that an algorithm devised to handle missing months when combining nearby stations is being misused to combine different scribal versions. I suspect the algorithm substitutes plausible values for missing months for some purposes, one of which is doing the bias calculation. This necessarily introduces spurious trends in these situations because the estimated temperature will almost never match the one being compared to.
This algorithm may very well be appropriate when the missing month is truly unknown, as for nearby stations. For scribal versions, a method appropriate for the situation at hand should be devised.
Steve
Dumb question here but could a Freedom of Information Act request for this data not be filed? It is stated in all NASA contracts that data of this type is property of the USG and that the USG has a government purpose license for the data which allows anyone to request it.
It’s all nice and dandy, you guys trying to figure out Hansen’s adjustment formula, but why on Earth is it applied to stations where no moves were made and the versions are quite obviously from the same measurement set? Moreover, for a quite recent station, where the Institute keeping it can easily be asked for the original dataset. In the month where the Praha Libus data are missing, there were already hourly readings! I don’t think that all the people were hit by a black plague epidemic.
I can imagine that checking versions of several thousand stations and guessing the reasons for gaps in the data and the suitable method for remedying these might be a bit boring, but sometimes data collection is like that.
The answer is, insert the following value in the second series for each station:
Praha — Dec86 = 3
Joensuu — Jun88 = 19
Gassim — Jun88 = 39
calculate the bias
add it to the first series
combine
You should get Hansen’s results.
And here’s where the AlGoreithm comes in. Hansen got these values from him.
I would like to second #78. Rather than spending more time on the crossword, if this group would agree a sound method and then use that to recalculate each station’s trend as correctly as possible, then compare that to Hansen, I think you would have a more productive process.
My guess, a systematic 0.1 degree C warming bias from faulty methodology (whether data interpolation, or algorithm, or SW), and still no proper UHI adjustment. Fully adjusted, a 0.2 degree C warming bias from all defects for the period 1980 to 2005, after which correction will put the surface and satellite temperatures in pretty good correspondence. Murray
Okay, the kids are screaming because daddy is ignoring them, so it is time to give up. The crossword puzzle was a fun exercise but ultimately it appears unsolvable, as key information seems to be missing.
Following up on the comments of 78 and 84, it makes sense to determine if the identified flaws in the algorithm thus far impart any systematic bias to the overall trends. The two flaws that I see are:
1) the correction is applied to the wrong series (or looking at it another way, its sign is wrong)
2) the correction itself is spurious as a missing data point in the overlap period “corrects” a series that shouldn’t be corrected. The correction looks inappropriate for reconciling data from different scribal sources.
It is possible that these flaws balance out over the GHCN and don’t have much effect on the overall trend. However, if the missing data points tend to be more clustered in summer or winter, an overall bias in the corrected series is probable. Since the corrected series looks like it’s always the earlier series, this could be significant in temperature trends over time.
Follow up on 85,
The correction methodology does look appropriate for reconciling data from different stations, but the sign error would double the difference, not remove it. Again, does it balance out over all the stations or is the error one-sided? I’ll try to find some examples from GISS.
#84…I think that was the purpose of the crossword puzzle, to find a sound method that works for all three stations.
#83 KDT…I think you may be on to something. You are suggesting Hansen first fills in the missing values for one series using the algorithm that can produce a seasonal average from just two months, or an annual average from just three seasons.
Again using integer math…
For Praha, the series 1 DJF 1987 average is 37, but D is missing. Solving for D I get 36. Putting this into the second series and now averaging across both series with the same number of data points, I get an average of 92 for series 0 and 91 for series 1. The dT is 1. This value is then subtracted from series 1 per the bias method.
I like this method because if it works it does not require us to assume one or more programming errors exist in the code.
I have not yet tried it on the other two…I still wonder how we rationalize the biasing done on both Gassim series during the period of overlap.
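If the seasonal mean really is an integer-tenths average of the three months, the missing month can be back-solved. A sketch; the two known months in the test are made-up illustrations, and rounding or truncation of the seasonal mean actually makes the answer a small range rather than a point:

```python
# Back-solve a missing month from a 3-month seasonal mean, working in
# integer tenths as in the comment above. Assumes the mean is an exact
# integer average; with rounding/truncation the result is a range.

def solve_missing_month(season_mean_tenths, known_tenths):
    return 3 * season_mean_tenths - sum(known_tenths)
```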
#83 KDT…Joensuu would work with 18.0 if using integers, but the simple math I do for the missing value says 14.9 is the appropriate number for that month. If I use that value then the bias is 0.
However, I don’t want to lose this train of thought…I think you are onto something.
John Goetz #87
No. I think the missing values come from an outside source.
In the output I linked above, there are 22 missing months inserted to estimate seasonal and yearly averages, and 2 missing seasons are estimated. It is clear there is a handy source for estimates for any missing month at any location. It is equally clear that these estimates are not calculated from the station data alone.
So I am suggesting that Hansen fills in missing values for his bias calculation from this same source.
#88
Joensuu works fine with any estimate between 18.6 and 20.5. There is no need to invoke negative adjustments. The first station will be cooler, and will be correctly adjusted upwards.
That is how Hansen got his result, by inserting an estimate in that range. Look at the entries for June here:
http://data.giss.nasa.gov/work/gistemp/STATIONS//tmp.614029290001.1.1/station.txt
An estimate of around 19 was not derived from previous years. I don’t think it was derived from station data at all.
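One way to see which fill-in values are compatible with an observed bias is a brute-force scan over candidate estimates, in the spirit of “any estimate between 18.6 and 20.5 works”. The series and the bias rule below are illustrative stand-ins, not the actual Joensuu data:

```python
# Scan candidate values for a missing month and keep those that
# reproduce a target bias, where the bias is taken (illustratively) as
# round(mean(s1) - mean(s0), 1) over the overlap.

def estimates_matching_bias(s0, s1, missing_idx, target, lo, hi):
    hits = []
    for t in range(int(round(lo * 10)), int(round(hi * 10)) + 1):
        v = t / 10.0                      # step in exact tenths
        trial = list(s1)
        trial[missing_idx] = v
        bias = round(sum(trial) / len(trial) - sum(s0) / len(s0), 1)
        if bias == target:
            hits.append(v)
    return hits
```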
These adjustments between data sets at a single site can be far from trivial in their impact.
I am looking at one of the two Prague stations, Praha/Ruzyne as opposed to Praha/Libus. Praha/Ruzyne has what appears to be a more or less continuous record from 1881. It is/was/has been located approximately 15 KM from the center of Prague and is now next to or on the Airport.
Here are the adjustments from 1881 by the time period they were applied:
1st Col:Adjustment from original record to give Consolidated record:
2nd Col:Years with standard adjustment
3rd Col: From
4th Col: To
5th Col: Comments
– 2.6 28 1881 1909
– 2.7 9 1909 1918
– 2.6 4 1918 1922
– 2.5 3 1922 1925
– 2.4 4 1925 1929
– 2.3 3 1929 1932
– 2.2 4 1932 1936
– 2.1 3 1936 1939+
– 2.0 0 DK DK WWII
– 1.9 0 DK DK WWII & Russian Occupation
– 1.8 0 1949 1949
– 1.7 4 1949 1953 Occasional months with nonstandard adjustments
– 1.6 3 1953 1956 Occasional months with nonstandard adjustments
– 1.5 3 1956 1959 Occasional months with nonstandard adjustments
– 1.4 4 1959 1963 Occasional months with nonstandard adjustments
– 1.3 3 1963 1966 Occasional months with nonstandard adjustments
– 1.2 4 1966 1970 Occasional months with nonstandard adjustments
– 1.1 3 1970 1973 Occasional months with nonstandard adjustments
– 1.0 3 1973 1976 Occasional months with nonstandard adjustments
– 0.9 4 1976 1980 Occasional months with nonstandard adjustments
– 0.8 3 1980 1983 Occasional months with nonstandard adjustments
– 0.7 3 1983 1986 Occasional months with nonstandard adjustments
– 0.6 1 1986 1987 Occasional months with nonstandard adjustments
– 0.5 3 1987 1990 Occasional months with nonstandard adjustments
– 0.4 4 1990 1994
– 0.3 3 1994 1997
– 0.2 3 1997 2000
– 0.1 4 2000 2004
0.0 3 2004 2007
Note: Almost without exception, changes in adjustments take place in December, mirroring the seasonal definition of the year, i.e., DJF, MAM, JJA, SON. The 3- and 4-year periodicity suggests some kind of leap year adjustment.
Clearly for Praha/Ruzyne the adjustments have served to add a significant warming trend from the original data.
Note If this station is used to adjust other stations per HL87 then this trend will have been imparted to all adjusted stations.
This set of adjustments seems to be driven by a need to correct for the obvious discontinuity depicted here. The earlier data clearly has a trend that needs to be checked to see whether it amounts to a warming trend of approximately 0.3C per decade.
Given the above, I suspect that there may be no single algorithm that accounts for the adjustment in the multiple records for a single site, though there may be some standard decision rules based upon the source of the records and the nature of the differences.
#89, 90 KDT…I think Hansen has an algorithm that looks backward in a station’s history to fill in missing months and missing seasons “where possible”, analogous I think to a “reverse version” of his quality control algorithm. I don’t think he gets the data from a third source.
The thought process your previous posts led me into was that Hansen might not be looking at month-by-month overlap. He might be looking at season-by-season overlap, or annual overlap.
When I grab the data from GISS rather than looking at the datasets Steve posted (which do not have the seasonal or annual averages), and I look at the seasonal overlap periods and use those, I get the bias value for Joensuu, but for Praha I get 0 (rounded). So something is still not right, but again I believe it is related to filling in missing data.
#92 followup
I had an error in that I was not using all seasons that overlapped in Praha. Now I get the correct bias for both Praha and Joensuu if I take the average of the overlapping seasons and round the delta to one decimal place. I will try Gassim.
And it works!!!
For all three stations, if I take the annual averages (not seasonal), find the delta between the two and round to one decimal place, I get the following biases:
Praha: 0.1
Joensuu: 0.1
Gassim: 0.2
These biases are then subtracted from the first station. No programming error required.
It also explains the behavior #99 bernie noted:
The seasonal averages begin on a December 1 boundary.
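The recipe in #94 reduces to a few lines. A sketch (my reading, not the actual code): the bias is the mean of station 0’s GISS annual values minus station 1’s over the overlap years, rounded to one decimal and subtracted from station 0. The test uses the Praha annual values quoted in this thread:

```python
# #94's method: difference the mean GISS *annual* values over the years
# of overlap, round to one decimal; the result is subtracted from
# station 0.

def annual_bias(annual0, annual1):
    """annual0/annual1: dict year -> GISS annual mean (999.9 years excluded)."""
    years = sorted(set(annual0) & set(annual1))
    m0 = sum(annual0[y] for y in years) / len(years)
    m1 = sum(annual1[y] for y in years) / len(years)
    return round(m0 - m1, 1)
```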
John:
(a) When I looked at the Bagdarin data I noticed the same thing. I think however that the adjustment process for 12-month periods that have missing data goes something like: estimate the year, use that to estimate the season with the missing record, use that to estimate the missing month. That leaves open how they estimate the 12-month period.
(b) Will your approach work for Praha/Ruzyne? I suspect there are multiple adjustments and yours is one for “missing data”. There may be others for gross errors and discontinuities.
Looks like I threw in the towel too soon.
John – can you summarize the correction in a few sentences? I’m getting lost trying to understand it from the comments. Thanks.
#91
Once again, just like Central Park, the correction is the signal. Maybe that should be the Team’s motto?
At this point I will point out that my idea has this advantage: It is a straightforward implementation of the adjustment as described by Hansen.
How to estimate the missing month is just a detail.
But rest assured, Hansen estimated the missing month.
We’re trying to find out how Hansen got his results. There are certainly many convoluted paths to get from Raw Town to Combined City. But all roads go through Bias Junction.
This old man is standing there waving to you that he went thatta way.
To state this as plainly as I can:
If you wish to replicate the combination of these three series you simply implement Hansen’s method as described.
However, you will not get meaningful results unless you deal with the missing month. Obviously it was not excluded so it must have been estimated.
So, to replicate Hansen’s method you must also estimate the missing month. How? Your guess is as good as mine.
But if your guess is as good as Hansen’s, you’ll get his result.
#91
I wonder where the continuous record for Ruzyne since 1881 comes from. The airport was founded in 1937. Record keeping was renewed in 1945, as it was forbidden during the war. A new station was built in 1976, together with new airport terminals. The first automated system near the runway came in 1984, a more modern one in 1990.
The old GHCN name for a station with the same No. as Ruzyne is “Smetschna”, which is the German name of Smecno, a small town ca. 25 km west of Ruzyne. I can’t find anything about meteo records taken in Smecno and how it relates to Ruzyne airport records.
There’s another meteostation in Ruzyne, recording since 1953. The station belongs to the Research Institute of Agriculture, is very close to the airport, and until 1961 it was also a part of the Czech Hydrometeorological Institute network of stations, like Ruzyne Airport. Since 1961 it is an agrometeorological station only. I don’t know if the data from 1953–1961 may be part of a Ruzyne dataset or not.
Its GPS is: 50° 05.165′ N, 14° 17.901′ E
#94 Candy for John! So the delta is calculated from the _annual_ values for those years common for the two series. Notice the caption for figure 5 in HL(1987) (my bold):
It seems that Hansen is simply substituting something like the mean of available values for the missing months when calculating the average.
#94. John, did you take annual averages for the entire record or only during the overlap period?
#102: Steve, years of overlap seemed to work for me.
EW:
Again interesting metadata. Even more so if Praha_Ruzyne is used as a long-record station per HL87.
JeanS:
I am not sure. The values for the missing months are derived from the seasonal data. The question is how this seasonal estimate is obtained.
#103. The odd month in Praha is Dec 1986. What years did you include in the calculation? Did you use a Dec–Nov year?
WALDO found in UK climate trends ?
http://www.metoffice.gov.uk/climate/uk/about/UK_climate_trends.pdf
The actual data sets are here:
http://www.metoffice.gov.uk/research/hadleycentre/obsdata/ukcip/index.html
Enjoy !
The UK MET office proudly say they were technical advisors to Live Earth so an AlGorerithm is possible
#101 Jean S.
When I read HL87, I didn’t really read the caption under Figure 5 that you quoted. What I read in the body of the paper was
and I interpreted “period” as months. However, you rightly point out that the figure caption indicates years, one interpretation of which is full years.
#102 Steve McIntyre
I used the annual average only from the overlap period. Note that these are the averages GISS calculates and stores in the station data files. Using the seasonal averages is insufficient as can be seen with Gassim. Both stations are missing the same three seasonal averages (1988 JJA, 1990 SON, and 1991 DJF), but all three years have annual averages (which of course, differ). I tried averaging the existing seasons – all of which overlap – and came up with a bias of 0.1. Using the annual averages, however, I get the desired bias of 0.2.
Murray Duffin, September 3rd, 2007 at 7:26 am:
I said a similar thing a while back.
Calculate a result. Show your data. Show your work. Make the “Climate Scientists” critique the work.
To do that they would have to come into the open.
#96 Jeff C.
The summary is as follows:
Using the data downloaded from GISS, I looked at the overlap period where GISS provided an annual average. For this period I calculated an average for both stations using the annualized averages, and calculated delta T from these averages.
This is a piece of HL87 that was open to interpretation, apparently.
There are still two problems I see:
1) HL87 says he orders the stations from longest record to shortest. Right now it appears it is mostrecent to oldest. I am not sure how ties are broken.
2) Using annualized GISS data means there is an undocumented step in the process: the creation of the annual and seasonal averages when one of the data points is missing. This undocumented step creates estimated data points for missing data, presumably to lengthen the overlap period. The estimate does not seem to be terribly robust. Look at Praha. During the period of overlap, all months in which both stations have values are identical. Station 1 is missing a data point for Dec 1986. The method to estimate a seasonal average for DJF 1987 yields 3.7, whereas station 0 has an average of 2. The annual average for 1987 is 7.5 (station 1) and 7.93 (station 0). Since the period of overlap is just four years, this is enough to bias the entire record of station 0 downward 0.1 degrees.
The really interesting thing is that the shorter the overlap, the greater the influence this methodology has on station records!
#106 Steve McIntyre
Here is what I used:
Year     St 0     St 1
1986     8.93     999.9  (can't use)
1987     7.93     7.5
1988     9.4      9.4
1989     9.98     9.98
1990     10.15    10.15
1991     8.73     8.77
Average  9.238    9.16
DeltaT   0.078
Round    0.1
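The table above can be reproduced with a few lines of Python (a sketch; the variable names are mine, and `round` stands in for whatever rounding Hansen actually uses):

```python
# Annual averages (C) from the overlap table above; 999.9 marks a
# missing value, so 1986 drops out of the comparison.
MISSING = 999.9
st0 = [8.93, 7.93, 9.40, 9.98, 10.15, 8.73]     # station 0, 1986-1991
st1 = [MISSING, 7.50, 9.40, 9.98, 10.15, 8.77]  # station 1

# Keep only years where both versions report an annual average.
pairs = [(a, b) for a, b in zip(st0, st1) if a != MISSING and b != MISSING]
mean0 = sum(a for a, _ in pairs) / len(pairs)   # 9.238
mean1 = sum(b for _, b in pairs) / len(pairs)   # 9.16
delta_t = mean0 - mean1                          # 0.078
bias = round(delta_t, 1)                         # rounds to 0.1
```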
#107
The UK Waldo looks interesting. Bearing in mind that the Armagh, Northern Ireland measuring site has remained truly rural for well over 100 yrs, that the English measuring sites are entirely the opposite, and that the prevailing wind is from west to east, I would expect everything they say below. I wonder if anyone has done an analysis of the Armagh data against the English data, perhaps using the Armagh set as the "adjustment" reference.
Quote from the UK Met paper:
"Map 1 (see Appendix 1) shows the gridded differences of mean temperature between the 1961-1990 average and the 1991-2004 average for each season. The whole of the UK has seen increased mean temperatures in each season, except for a few grid cells on Scottish mountains which had a slight decrease in the autumn. The lowest increase occurred in southwest England, while inland parts of East Anglia experienced the greatest warming, especially in spring and summer.
Figure 3 shows that there have been two main periods of increasing temperatures: 1914-1950 and 1970 onwards. The most rapid warming has taken place since about 1985. It can be seen from Figure 3 that the western side of the UK has experienced less rapid temperature rises during this period: Northern Ireland, southwest England and south Wales, and north Scotland have increased at a slower rate than other districts in the last 30 years."

John Goetz #110
I believe you are using the result of Hansen’s calculations as an input to yours. It is not surprising that you could find a relationship. However, it seems unlikely that Hansen used this method.
http://www.climateaudit.org/?p=2018#comment133599
My post #99 stands without comment. I cannot state my case any better than that.
Don’t ignore it, tell me why I’m wrong.
#110: John, for missing points he seems to use some (rounded) version of the average of all available values for that month. From which we come to the most disturbing thing in this algorithm:
Hansen seems to be calculating a lot of intermediate results. Also, I think the algorithm is actually working with integers (as has been suspected here), and rounding seems to be ALWAYS downwards. This is especially disturbing in his step 3) (see my #24). Apparently the same combining method is used (with weighting) for combining different stations into an areal average. So one can think that there would be 2-3 records per station and tens of stations behind a single value, with rounding downwards at each step. This may have a substantial effect on the final result.
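Purely as an illustration of why always-downward rounding of intermediate results matters (a toy example of my own, not Hansen's code):

```python
import math

def floor_tenth(x):
    # Round toward minus infinity at 0.1 C resolution (the suspected
    # always-downward rounding); the inner round() just clears float fuzz.
    return math.floor(round(x * 10, 6)) / 10

# Three chained averaging steps, each intermediate result re-rounded
# downward. Mathematically, round-to-nearest (half up) would give 7.2
# at every step; the downward rule gives 7.1, shedding 0.1 C.
a, b, c, d = 7.15, 7.15, 7.15, 7.15
step1 = floor_tenth((a + b) / 2)        # 7.15  -> 7.1
step2 = floor_tenth((step1 + c) / 2)    # 7.125 -> 7.1
step3 = floor_tenth((step2 + d) / 2)    # 7.125 -> 7.1
```

With more stations and more combining levels in the chain, the shortfall can only accumulate.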
#113/#114: That's a side track. Hansen calculates a lot of intermediate results, and the annual average is one of them. John noticed that using those he obtained the solution to the puzzle. How Hansen calculates the averages is a different question, and it seems to me that he simply substitutes the average over available records for the missing values. So don't complain about being ignored (we didn't ignore you); try to figure out the exact method.
#114 KDT
I am using Hansen’s guess. I am using his annual guess. Remember, the goal here was to take the individual station records and figure out how Hansen combined them into a single record. I did that by using the annual averages he stores in the individual station records rather than the monthly or seasonal averages. How Hansen comes up with the annual record when one or more months are missing was not part of the problem and is not mentioned anywhere in HL87. I am extremely curious as to how he does it, but it is a separate problem.
John, you are essentially saying that Hansen calculates his annual averages and stores them in a file. Then, he uses that data to calculate his annual averages. Now I really want to see that code!
You are, in fact, calculating the bias by examining the output. You could do that by eye.
Of course now the next crossword puzzle truly is figuring out how he calculates seasonal and annual averages when data points are missing, because to extend this algorithm to three or more station records requires that the intermediate reference station have its annual average calculated as well. Without the secret algorithm, we can’t do it.
#118 KDT
Not at all. I am saying that a station such as Gassim has two or more scribal versions of the temperature record. Hansen ingests these into his software independently of one another, and that software calculates seasonal and annual averages. The software then spits out a new file for each scribal version of the station, containing these seasonal and annual averages. He has simply summarized part of the data and appended it to the existing data. This summary involves some as-yet-undiscovered algorithm that creates an average when a data point is missing. To this point, however, the scribal versions have been processed independently.
Hansen then goes on to the next step which is the crux of our problem here, and that is to take the multiple scribal versions for the station and combine them into a single record. To do this he apparently takes the annual data he appended to the scribal versions and uses that “summary” to calculate the bias. He does this instead of using the monthly data. This would be fine if the monthly data used to generate the annual summaries were complete, but it is not.
I think you are confusing the individual scribal records with the combined record. I am using the “output” of the scribal records to generate the combined record, but I am not using the output of the combined record to generate the combined record. That would be silly.
KDT:
I agree with you and John; the basis for calculating the seasonal and/or annual average is the essential next step. I just ran a set of monthly, seasonal and annual correlations for the consolidated Prague stations and they are all above 0.94, with the overlapping, uncorrected correlations somewhat lower but still all above 0.88. These are high enough that I could see using one station to estimate the other for purposes of generating a complete record.
John, could it be that the averages are already calculated in the GHCN raw data?
John #120 you are correct in that I was confused. Criticism withdrawn.
Yes, the algorithm provides an estimate of a given missing month. Do you see any reason why Hanson would jump through any hoops at all, if he could simply use the result of that algorithm to fill in the missing month we’re talking about here? The rest of the bias calculation is straightforward if he does this, and a bit odd if he does it your way.
#123 KDT
If the annual averages are precalculated and stored in a file, and that file is a matrix like what we see when dumping out the data as a text file, then extracting the annual averages and determining overlap is actually easier than scanning for overlap months. The file formats I tend to see are in a matrix, so this would not be surprising. Remember, it was written in the mid (maybe early) 1980s, probably in Fortran.
For example, I updated my vBasic program for applying the bias method, and I did so by deleting lines. The program became simpler.
#99 KDT says: “However, you will not get meaningful results unless you deal with the missing month.”
I am afraid I am not following you. I was able to exactly duplicate the combined results using the same algorithm for all three stations, as explained in my post #58. I did not have to interpolate any data. The only thing I did was exclude Dec 1991 for Joensuu in Series 1 for averaging purposes, to obtain the adjustment only. The data point remains in the final calculation and an exact result is obtained.
I have used the same algorithm to try to replicate Cara, a much longer data series with differing values for almost all months in each column. So far, I have come close to replicating the combined column, but not exactly. Of the 622 points in the combined data series, I have been able to replicate 454 data points in the combined column exactly and 167 within 1 tenth of a degree using Hansen’s method. Only one replicated value is off by more than one tenth of a degree: October 1965, which is off by 1.2 degrees.
It is possible to get a more exact fit for Cara by simply subtracting anywhere from 1.6 to 1.9 degrees from every value in column one except the most recent four and then taking a simple average of each column rounded to the nearest tenth of a degree. That yields only four values off by 1 tenth and one value off by 2 tenths out of 622. However, this last method does not seem to agree with the methodology described by Hansen.
bernie #121
I count 4 Praha/Ruzyne records. Is this the station you are talking about?
KDT:
Yes. Have fun!! See post #91 if you missed it.
#115 Jean S.
I think you are right. Integers are being used. When I updated and ran my vBasic program (which uses floats) I had some spurious 0.1 values in the overlap period. I will create a version that uses integers.
Also, I don’t see the averages for month or greater in the GHCN data, just averages for the daily data.
Phil #125
If I average two series of different length, what would be the point? The result would be wrong. Where there are mismatches, you have to exclude one or estimate the other. Otherwise you are simply performing a calculation that will not give you the relationship you seek. That would be embarrassing.
Hansen handled the situations by estimating the missing monthly data. Seems reasonable to me. How would you handle it?
#129 KDT: From my post #58:
A. Take the average of overlapping data points in series 1 and round it to the nearest tenth of a degree:
Praha: 9.3
Joensuu: 3.4 (leave out Dec 1991)
Gassim: 24.6
B. Take the average of overlapping data points in series 0 and round it to the nearest tenth of a degree:
Praha: 9.2
Joensuu: 3.5
Gassim: 24.8
ONLY the overlapping data points are averaged to compute the adjustment, so both series would be of the same length and composed of data points in the same months. The adjustments are then applied as explained in post #58. The replication is exact. Obviously, without the source code one cannot know if this was exactly the way Hansen, et. al. did this calculation. However, I attempted to use a calculation method as he described it, and was able to exactly replicate ALL the data points in the combined column for all three series: Praha, Gassim and Joensuu as previously explained. I’ll be glad to go into more detail if you would like to replicate my results.
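Steps A and B might be sketched as follows (Python, with toy numbers rather than the real station data; the helper name is mine):

```python
def overlap_means(s0, s1, missing=999.9):
    # Means over months where BOTH scribal versions report a value,
    # each rounded to the nearest tenth as in steps A and B.
    # s0, s1 are parallel monthly lists; `missing` marks absent data.
    pairs = [(a, b) for a, b in zip(s0, s1) if a != missing and b != missing]
    m0 = round(sum(a for a, _ in pairs) / len(pairs), 1)
    m1 = round(sum(b for _, b in pairs) / len(pairs), 1)
    return m0, m1

# Toy data (not the real Praha series): series 1 runs 0.1 C warm,
# and series 0 is missing one month, which is simply skipped.
s0 = [9.0, 9.4, 999.9, 8.8]
s1 = [9.1, 9.5, 9.0, 8.9]
m0, m1 = overlap_means(s0, s1)
adjustment = round(m1 - m0, 1)   # series-1 bias relative to series 0
```

Because only overlapping months enter the sums, both averages are over the same set of months, which is the whole point of the method.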
John G, that's pretty good. I'm going to add some comments on "Hansen averaging" which pull together some threads posted here, but it's a little different from anything anyone's posted.
#130 I don’t mean to imply that this is the scientifically correct way to combine these data series. I am merely replying to Steve’s challenge to find an algorithm that will replicate the calculations in the data sets from Praha, Gassim and Joensuu that he posted. I believe I have found one, although there may be others.
Phil #132 Understood. I also didn’t mean to imply that you were averaging incorrectly. I was expanding on the statement that you were having trouble following.
The challenge was to “find any algorithm, however improbable.”
I am beginning to suspect that my analysis is rejected because it isn’t improbable enough. The rules to this game are confusing.
#122 That is exactly what I noticed. The annual plots for each combination show a difference in the first year plotted, and then they all match. The first year must have been used to calibrate (combine) the two series; then a portion of the calibration (half the difference between the two instruments) is applied to the older series. The new series requires no adjustment. The raw data for each series does not show the first year raw but the first year adjusted, or there would have been more variation between the two series.
It is a BS way of doing business IMO, but that appears to be what happened. There must be a site upgrade procedure published somewhere that explains the situation.
Re: #60
Just one response on realclimate.org so far:
(Are you known as "Lex" to the realclimate.org crowd, Steve?)
#135 Paul….what is the thread URL?
I have posted 2 messages at RealClimate challenging their members about the inappropriateness of applying certain algorithms to climate modeling, but they were not published at all. I believe I had put in questions beyond the comprehension of Gavin Schmidt and his team, on topics such as climate sensitivity in feedback control theory. It is ingenious: they only allow messages to be posted if they themselves can answer them.
#133. I’ll take a look some time; I’ve been busy today (with grandchildren. :)
#131 Steve McIntyre…Hopefully I claim the prize. I will admit that I still get the spurious "1" differences between my method and Hansen's in the period of overlap for Praha, but everywhere else I am equivalent. (I converted my code to run in integer arithmetic, so everything is multiplied by 10.) I believe this is due to rounding when doing the averaging, but at this point my mind is too fried to figure it out. I do round the dT before using it…maybe I should not?
Right now I have faith in my method of using the yearly averages because it does not rely on some bizarre contortion of the published algorithm, nor does it rely on multiple programming errors. It simply uses the data that is there…or should I say, added to the station record. The only discrepancy between it and HL87 is the station ordering. But it is easy to see how that error could be injected.
So perhaps someone can apply my method and find the rounding or truncation problem?
And then the real problem: How are seasonal and annual means generated when data points are missing?
#139 I spoke too soon. If I round only the final result and not all the intermediate steps in the bias method, I match Praha exactly. Whew!
re 137. Falafulu.
I have seen your frustration with Gavin and others. You have acquitted yourself well.
I would not have your patience.
OK, for you Excel people out there, here is the VBasic routine I wrote (hacked) to combine data. It assumes you have preloaded a workbook with the combined and scribed station data, and multiplied all values by 10 to make them integers – except that the annual average still is a decimal to one place.
This script works perfectly for Praha and Joensuu. For Gassim I get spurious differences of 0.1 in the non-overlap region. However, if I round dT before using it on Gassim, I match exactly. Unfortunately this makes Joensuu not match exactly. Thus, I think there is a rounding issue between VBasic and whatever Hansen is using.
I apologize in advance if WordPress does something funny with the formatting.
Sub BiasAnnualInt()
Attribute BiasAnnualInt.VB_ProcData.VB_Invoke_Func = "B\n14"
'
' Assumes the following sheets exist:
' "Combined" - contains the combined station data from GISS. Column 1 has the year, row 1 has months, seasonal and annual means
' "St0" through "Stn" have the individual station records. If there are six records, the sheets are numbered "St0" through "St5"
' Stn sheets have exactly the same number of rows and columns as "Combined". If temperature data is missing for a given
' month or year, fill it with 999.9. All years found in the combined record must be accounted for in the Stn sheets.
' "Order" contains the order the record combination should be done. Enter the order in column 1. If there are three stations and
' you want to order them 1, 0, 2, then A1=1, B1=0, and C1=2
' "Result" will contain the combined data produced by this program. It needs to be present, but can be blank or have anything
' you want in it.
'
Dim x As Long
Dim y As Long
Dim i As Long
Dim StOrder() As String
Dim ResultSheet As Worksheet
Dim OrderSheet As Worksheet
Dim CombinedSheet As Worksheet
Set ResultSheet = Worksheets("Result")
Set OrderSheet = Worksheets("Order")
Set CombinedSheet = Worksheets("Combined")
'
' Read station order from Order sheet
'
OrderSheet.Activate
Stn = ActiveCell.SpecialCells(xlLastCell).Row
ReDim StOrder(1 To Stn) As String
For i = 1 To Stn
StOrder(i) = "St" & ActiveSheet.Cells(i, 1)
Next i
'
' Erase the contents of the result sheet
' Copy the first station listed in Order over to the result sheet to represent T1,1
' The Result sheet will contain the current "reference station", which is the biased combination
' of the previous stations that have been processed in Order
'
ResultSheet.Cells.ClearContents
'
' First copy the complete set of row and column headers from "Combined"
' For columns, January is column 1, December is column 13, averages are recorded through column 18
' xlLastCell will tell us how many years there are in the combined record
'
Sheets("Combined").Activate
RCount = ActiveCell.SpecialCells(xlLastCell).Row
'
' Now copy the first record's data
'
Sheets(StOrder(1)).Activate
Worksheets(StOrder(1)).Range(Cells(1, 1), Cells(RCount, 18)).Copy Destination:=ResultSheet.Range("A1")
'
' Find periods of annual overlap between station Tn and station T1,n-1 (which is on ResultSheet)
' Calculate the means of the two overlap periods
' Calculate dT from the two means
' W values represent the accumulated "W" (weight) used in Hansen's paper.
' - Wn is always 1 and W1n is accumulated. After the reference station is initialized, W1n is 1
' - W1n1 is the previous value of W1n. After the reference station is initialized, W1n1 is 0
'
Wn = 1
W1n = 1
W1n1 = 0
'
' Loop through and combine the remaining stations
'
For CurStn = 2 To Stn
'
' Reset accumulators to zero
' TnSum is the sum of overlapped temperatures from the station to be combined
' RefSum is the sum of overlapped temperatures from the already combined record
' OverlapCount tells us how many records overlap
'
TnSum = 0
RefSum = 0
OverlapCount = 0
'
' Activate the current record's sheet so we can work directly from it
'
Sheets(StOrder(CurStn)).Activate
'
' Determine the amount of annual overlap and accumulate overlap temperature totals
'
For i = 2 To RCount
TnTemp = ActiveSheet.Cells(i, 18)
RefTemp = ResultSheet.Cells(i, 18)
If WorksheetFunction.IsNumber(TnTemp) And WorksheetFunction.IsNumber(RefTemp) Then
If (TnTemp
Hmmm…before I try again, maybe there is a limit to how long the comment can be? The code isn't that long.
#135 – Steve Bloom named Lex Luthor along with Steve McIntyre as being "ringleaders of this little audit charade" in another RealClimate comments thread. It amazes me how they can maintain that level of arrogance even with the revelation that the team can't correctly combine two series. It can't lead to a healthy scientific atmosphere to be so belittling of criticism.
#139. John G, you said:
I'm pretty sure that I've figured out what's going on with Hansen averaging (building on earlier comments) and varying slightly. Here's the evidence to consider:
1) if there is only one available value from the two-column matrix after adjustments, the adjusted value is taken whether it comes from column 1 (adjusted) or column 2 (unadjusted). No problem here. So we only need to consider averaging where both columns have values.
2) If the sum of the two columns (in tenths) is even, then the combined value is the average – no problem.
3) for Praha, if the sum of the two columns (in tenths) is odd, then the combined value is averaged up. However, for Joensuu, if the sum of the two columns (in tenths) is odd, then the combined value is averaged down. For Gassim, there is only one case where the sum of the two columns (in tenths) is odd – May 1990 and in this case, the combined value is averaged down.
In the case of Praha, column 1 is adjusted down, while in the other two cases, column 1 is adjusted up. It appears that the decision to round up or round down depends on the sign of the adjustment. If there is a negative adjustment to column 1, it’s averaged up; if there’s a positive adjustment, it’s averaged down.
So here's an algorithm for the Hansen average which covers all of these cases exactly:
1) if there’s only one available value, use it.
2) if the sum of the two values is even, take a simple average;
3) if the sum of the two values is odd, add 1 if the delta is positive (i.e. subtraction in the first column) and subtract 1 if the delta is negative (i.e. addition in the first column). Then take an average using integer arithmetic in tenths.
This does not preclude some other implementation, but this appears to work in all the cases that I’ve examined given the deltas.
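As a sketch, the three rules above in integer tenths (Python; `hansen_average` is my name for it, and this is an inferred rule, not Hansen's actual code):

```python
def hansen_average(x, y, delta):
    # x, y: the two column values for one month, in integer tenths of a
    # degree (None = missing); delta: the adjustment applied to column 1,
    # also in tenths, carrying its sign.
    if x is None:          # rule 1: only one value available
        return y
    if y is None:
        return x
    s = x + y
    if s % 2 != 0:
        # rule 3: odd sum - nudge up when delta is positive (column 1
        # adjusted down), nudge down when delta is negative.
        s += 1 if delta > 0 else -1
    return s // 2          # rule 2: even sum halves exactly
```

After the plus/minus 1 nudge the sum is even, so the integer halving is exact for negative values as well.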
#143. You probably ran afoul of the “less than” symbol which often gets interpreted in WordPress.
#145 Steve … High 5!
Steve:
Look at Praha/Ruzyne (#91 above). You may be right, but this series plus the Bagdarin series suggest something additional is going on, not to mention the values for the missing months of data. Moreover, what is done for Praha/Ruzyne looks like a human-driven adjustment rather than a generalized algorithm, or at least an algorithm reflecting some idiosyncratic reverse engineering, e.g., remove the discontinuity in the raw Praha/Ruzyne data. If this is the case here, the other adjustments may be equally arbitrary without a generalized algorithm. Very strange, but doable for 2500 series in less than a person-year with some clearly stated decision rules.
I've checked John G's calculations using annual temperatures scraped from GISS and confirm that these yield the implicit deltas. As John observes, this creates a new puzzle as to how GISS annual averages are created, and why the adjustments calculated using the annual averages available at GISS sometimes differ sharply from the adjustments calculated with the monthly data.
Well, I’m not sure that it’s the “real” problem. But it is definitely “a” problem. I looked at the quarterly averages and it is not easy to see what Hansen did here. And unfortunately, it appears that we have to be able to calculate these numbers in order to calculate the Hansen annual averages.
Steve, here's an algorithm that is a very good fit for Cara, Siberia (combining 3 series), based on my post #58 above. I was able to replicate 615 out of 622 months exactly; 3 months had an error within 2 tenths of a degree and 4 within 1 tenth.
First, the columns as downloaded need to be reordered. As downloaded, the 1st column is labeled "222303720002", the 2nd and 3rd columns are labeled "NA" and the 4th column is labeled "combine." When coming up with the algorithm that yielded an exact match for Praha, Gassim and Joensuu, the first column for each of these is labeled "…0000" and the second column is labeled "…0001".
Accordingly, I reordered the columns for Cara so that "222303720002" was the 4th column (C4); the 2nd column (with initial values -13.3, -15.1, -13.8 and -7.2) was labeled "222303720001" and therefore left as the 2nd column (C2). The 3rd column was labeled "222303720000" and placed as the 1st column (C1). This is important because calculation proceeds from left to right in pairs, with the right column in each pair being the "baseline" column and the left column being the one adjusted.
The column labeled "combine" is placed as the 6th column (C6). Column 3 (C3) is for the combined values of C1 and C2. Column 5 (C5) is for the combined values of C3 and C4 and is the replicated "combine".
A. Take the average of the overlapping data points (610 between C1 and C2) in C2. Do NOT round it.
Ave(C2): -4633/610 = -7.595
B. Take the average over the same 610 overlapping points for C1 and do NOT round it.
Ave(C1): -4668.3/610 = -7.653
C. Ave(C1) - Ave(C2) = a1:
(-7.653) - (-7.595) = -0.058, which rounded to the nearest tenth: -0.1
This is the adjustment for the first pair of columns, which I shall call a1.
(You can't simply take the difference between the overall averages for C1 and C2, because they are -7.57 and -7.55, respectively, thus yielding a rounded adjustment of 0, which won't work.)
D. For all 610 overlapping (x) in C1 and C2,
C3(x) = [C1(x) + C2(x)]/2 - [a1/2]. Do NOT round.
E. For all non-overlapping (x) in C1 and C2,
If C1(x) = NA, then C3(x) = C2(x),
If C2(x) = NA, then C3(x) = C1(x) - (a1/2)
(For Praha, Gassim and Joensuu, C3(x) = C1(x) - a1, but this was a better fit.)
F. For all 614 overlapping (x) in C3 and C4:
Ave(C4): -4713.1/614 = -7.68
Ave(C3): -4598.7/614 = -7.49
(-7.49) - (-7.68) = 0.187, when rounded = 0.2 = a2
G. For all 614 overlapping (x) in C3 and C4,
C5(x) = [C3(x)*2 + C4(x)]/3 - (a2/2). Round to the nearest tenth.
H. For all non-overlapping (x) in C3 and C4,
If C3(x) = NA, then C5(x) = C4(x),
If C4(x) = NA, then C5(x) = C3(x) - (a2/2)
Again, round all C5(x) to the nearest tenth.
The following months yielded errors (month, actual “combine”, replicated “combine”, difference):
FEB 1941: 34, 33.9, 0.1
AUG 1941: 14.2, 14.3, 0.1
OCT 1941: 3.9, 3.8, 0.1
JAN 1943: 34.4, 34.2, 0.2
NOV 1945: 20.9, 20.7, 0.2
APR 1946: 3.3, 3.6, 0.1
DEC 1948: 28.3, 28.1, 0.2.
As before, I do NOT imply that this is a scientifically valid way of averaging these series. I am merely trying to find an algorithm that will fit the data while bearing some resemblance to the Hansen, et. al. description. There may be others that fit better.
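Phil's steps A-H can be sketched as a single pair-combining function (Python; toy data rather than the actual Cara columns, and the function name and weighting parameter are my own framing of his steps):

```python
def combine_pair(left, right, weight_left=1):
    # One left/right pair per Phil's steps A-H. `right` is the baseline
    # column; `left` gets the adjustment. weight_left is 1 for the first
    # pair (step D) and 2 for the second (step G), where the
    # already-combined column counts double. None marks a missing month.
    pairs = [(l, r) for l, r in zip(left, right)
             if l is not None and r is not None]
    # Steps A-C: unrounded overlap means, then the delta rounded to a tenth.
    mean_left = sum(l for l, _ in pairs) / len(pairs)
    mean_right = sum(r for _, r in pairs) / len(pairs)
    a = round(mean_left - mean_right, 1)
    out = []
    for l, r in zip(left, right):
        if l is None:                       # steps E/H: baseline only
            out.append(r)
        elif r is None:                     # steps E/H: left column only
            out.append(l - a / 2)
        else:                               # steps D/G: weighted mean, half-delta removed
            out.append((weight_left * l + r) / (weight_left + 1) - a / 2)
    return out                              # unrounded; round once at the end

# Toy two-column example: the left column runs 0.1 C warm.
left = [10.0, 10.4, None]
right = [9.9, 10.3, 10.0]
combined = [round(v, 1) for v in combine_pair(left, right)]
```

Chaining it as `combine_pair(combine_pair(C1, C2), C4, weight_left=2)` would mirror his C3-then-C5 construction.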
#151. Phil, I’ll take a look at this. I wanted to be exact on the 2column version before getting to other ones. Take a look at the “Hansen averaging” described in 145. I’m sure that this method will carry over somehow.
#152 Steve, thanks. I will look at 145.
#145 #151 Steve. My method yields exact answers in the case of the overlapping months for Praha, if the averaged values are positive. If the values are negative, however, then my method will be off by a tenth. Clearly, an artifact of rounding. With my method the values are all right on the cusp. That is, I am adding an adjustment equal to one half of 0.1. When the result is rounded, whether I am off by a tenth or not depends purely on the rounding. I subtract 0.05. If the values are positive, then take say November 1986: the values are 5 and 5. My method averages 5 and 5 and subtracts 0.05, which = 4.95. This rounds up to 5, so my method gives an exact result.
However, for January 1987, the values are -6.7 and -6.7, which average to -6.7; then 0.05 is subtracted, yielding -6.75. This rounds to -6.8, whereas the combined was actually -6.7, an error of 0.1 absolute. Again, I am just using the rounding function built in to the spreadsheet.
Steve,
I'm pretty sure he's creating the anomaly from the months in each series. What he doesn't seem to be doing is sticking strictly to common months with values when getting the mean for each series. Look at Joensuu. All the monthly values during the overlap period are identical except for one missing month in the 2nd version. That is the only month that has an adjustment in the combined file.
What he did is divide both the 47-month and the 48-month records by 48 to get their means. That gave him an anomaly. There shouldn't be any anomaly with all values the same. On top of that, he's making the full value of the anomaly propagate all the way back to the beginning of the record. That's nuts.
I think the way it should be done is: get the means for the common months with observations in both files, calculate the anomaly, if any, and divide it by two; then take the common months, add them, divide by two, and then add the anomaly. That avoids applying the entire anomaly when there isn't a second month of data.
I would suggest that the last digit of the file number corresponds to the order used to combine multiple files. So far it looks like it.
By “means for the common months” I mean separate series means using only months with observations in each file.
#145
Steve, you don’t need anything that complicated to get the same results as Hansen.
Your first step must be to convert both series into Kelvin by adding 273.2.
Secondly, the adjustment should be in 100ths of a degree, not 10ths.
Thirdly, a simple average will produce the combined column once you have rounded to the nearest 10th and then converted back to Celsius (subtract 273.2). You must round before you convert back.
The adjustments for the different data sets are:
Praha: any value between 0.06 and 0.10
Joensuu: any value between 0.05 and 0.09
Gassim: any value between 0.15 and 0.19
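Terry's three steps might look like this in Python (a sketch under my reading of #157; `combine_kelvin` and `round_half_up_tenth` are my names, and `decimal` stands in for spreadsheet-style ROUND):

```python
import decimal

def round_half_up_tenth(x):
    # Spreadsheet-style ROUND to one decimal place on a positive value.
    return float(decimal.Decimal(repr(x)).quantize(
        decimal.Decimal("0.1"), rounding=decimal.ROUND_HALF_UP))

def combine_kelvin(s0, s1, adj, offset=273.2):
    # Terry's recipe as I read it: convert both series to Kelvin, apply
    # the adjustment to series 1, average, round to the nearest tenth
    # while the values are still positive, then convert back to Celsius.
    out = []
    for a, b in zip(s0, s1):
        mean_k = ((a + offset) + (b + adj + offset)) / 2.0
        out.append(round(round_half_up_tenth(mean_k) - offset, 1))
    return out

# Both monthly values are effectively -6.7 C after the -0.1 adjustment;
# rounding on the Kelvin side keeps half-up behavior uniform below zero.
combined = combine_kelvin([-6.7], [-6.6], adj=-0.1)
```

The point of the offset is that "round half up" on a positive Kelvin value always rounds toward warmer, whereas spreadsheet rounding of a negative Celsius value rounds halves away from zero, i.e. toward colder.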
Re #109
"Calculate a result. Show your data. Show your work. Make the Climate Scientists critique the work."
That's good advice. What matters is: is there some temperature trend that can be discerned above the noise and error margins?
I’m afraid though that his “Calculate a result” isn’t trivial, and requires a lot of work, and maybe the support of a $500 million institution.
To get an approximate feeling for trends you could do the following: take a few dozen rural stations, those with the longest records. Show the trends for each station separately; if possible, also show the trends of daily minimums and maximums separately.
The business of calculating a global trend may be an impossible task, as there are too many stations with too many adjustment problems; so many that the error margin would be too high relative to the trend.
#145:
Notice that the above does not work for Pjarnu (delta=0; or, without rounding, assuming the combination order of #155, delta>0). Also notice that the operation defined in #8 is essentially integer division; let's call it div. Combining all of those, the redefined rule might be (assuming both x and y are integers, delta the actual offset used, in integers, and sign(0)=0):
if x+y is even,
then div(x+y,2)
else div(x+y+sign(delta),2)
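Jean S's redefined rule, taking `div` to be floor division (an assumption on my part; truncation toward zero would behave differently for negative odd sums):

```python
def sign(v):
    # sign(0) = 0, as Jean S specifies
    return (v > 0) - (v < 0)

def div(a, b):
    # Assuming the "div" of #8 is floor division.
    return a // b

def combine(x, y, delta):
    # x, y and delta all in integer tenths of a degree
    if (x + y) % 2 == 0:
        return div(x + y, 2)
    return div(x + y + sign(delta), 2)
```

With delta = 0 the odd-sum case floors, i.e. rounds toward minus infinity, which is what distinguishes this formulation from the #145 rule.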
#154 Steve, per the idea in #157, if I convert to Kelvin and then do my math, my method will now avoid the rounding errors due to negative numbers. Then, convert back to Celsius after rounding.
Hansen's bias calculation is BOGUS if there is not a one-to-one correspondence between months. It is supposed to be a like-month comparison (March to March) and makes NO SENSE if there is nothing to compare some months to.
My guess is that Hansen estimated the missing months first. If I do that, I get Hansen’s result.
The preferred method here is that Hansen estimated the missing months WHILE PLAYING A GAME OF TWISTER. Aren’t you guys getting sore yet?
#151 Steve, I found an error in my post. Using adjustments of 0.1 for C1 and C2 and 0.2 for C3 and C4 will still yield the results posted for C5. However, I now calculate the rounded adjustment for C1 and C2 to be 0.2, which throws everything off. Sorry.
#160
Phil, if you use my method in #157 you get the correct results. You can also use the same method with data set 22230372000.dat (Cara, Siberia) with deltas of -0.18 and +0.01 for series 2 and 3 respectively to obtain Hansen's combined value. See post #92 in the 'Waldo "Slices Salami"' article for a full list of working deltas.
The only thing I can't work out is how to derive the deltas from the data.
One more idea, just to throw it out there.
If I wanted to estimate the temperature for some month at some location, I’d get on the internet and find the Surface Air Temperature map for that month, locate the station, and estimate.
A little tedious, but it draws on the best available source for such an estimate.
Now, if the model that generated the SAT map were available to me, that would be so much easier. I could use the results for any date at any place, in an automated fashion.
I think it is safe to assume that Hansen has access to that model.
#163 Terry: Using your deltas of -0.18 and +0.01 in Cara gives me errors in months Oct 1941 (0.2), Feb 1946 (0.1), Mar 1946 (0.1), May 1946 (0.1), Jun 1946 (0.1), Nov 1946 (0.1) and Feb 1978 (0.1). The most telling is Oct 1941, because that is a month that has data only in the second column and yet it received an adjustment of 0.2, so presumably that whole column was adjusted by 0.2; but I can't do that for that column and still find deltas for the other two that will yield the combined. I have been trying the method that seemed to work for Praha, Gassim and Joensuu to tease deltas out of the overlapping months, but I haven't been able to find a consistent way to do it.
#163 #151 Steve, Terry: Success for Cara!!!
However, I have now replicated Cara exactly, as follows: First, convert all values to Kelvin by adding 273.2. Then adjust column 2 by -0.2 (i.e., subtract 0.2). Average the adjusted columns and round to the nearest tenth. THEN, subtract 273.2. Then round to the nearest tenth again. I think the last rounding is necessary due to floating-point error in my spreadsheet/computer. NOW, where does the 0.2 adjustment come from?
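The recipe above, as a hedged Python sketch (the Kelvin conversion and the -0.2 shift to column 2 are as described; the half-up rounding helper is my assumption about what the spreadsheet effectively does once everything is positive):

```python
import math

def round_half_up(x, ndigits=1):
    # Round halves toward +infinity, which is what the Kelvin offset
    # trick effectively produces (all values positive in Kelvin)
    f = 10 ** ndigits
    return math.floor(x * f + 0.5) / f

def combine_cara(celsius_values, col2_adj=-0.2):
    # celsius_values: one month's readings from the three scribal versions
    kelvin = [v + 273.2 for v in celsius_values]
    kelvin[1] += col2_adj                 # shift the warm second version down
    avg = sum(kelvin) / len(kelvin)       # average the adjusted columns
    back = round_half_up(avg) - 273.2     # round to a tenth, back to Celsius
    return round_half_up(back)            # re-round to absorb float error
```

For Feb 1946 (-25.9, -26.2, -26.4) this reproduces Hansen's combined value of -26.2.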
#166 Steve: a clue perhaps? See my #91 in Waldo Slices Salami:
“There are 604 months that overlap all three series and the combined column, which 604 months average:
Col. 1 (Series 2): 7.63
Col. 2 (not named): 7.39
Col. 3 (not named): 7.62
combined: 7.61″
If you round the three averages for the months that overlap all 3 series, you get: Col1: 7.6, Col2: 7.4, Col3: 7.6. Since column 2 is warmer by 0.2, simply subtract 0.2 from column 2 as described in my #166. It looks like that is where the 0.2 adjustment for Cara came from. Please pass the salami.
Terry (#157), converting to Kelvins is immaterial. In fact, you can add any offset (in tenths of a degree) to the temperatures and it does not change the rounding: it gets nullified when calculating the delta, and it appears twice (i.e., the sum is even) when calculating the average.
So your method rests on the fact that the deltas lie on the given 0.05-wide interval (and not on the other 0.05-wide interval rounding to the same tenth of a degree). It's possible, but somehow I'm still voting for #159. We'll need to examine more stations to figure this out; does anyone know a two-series record with only one year of overlap?
#165
Phil, here's my calculation for Feb 1946 for Cara:
1946.08333333333 -25.9 -26.2 -26.4 -26.2 # Starting values
1946.08333333333 247.3 247.0 246.8 # Convert to Kelvin
1946.08333333333 247.3 246.82 246.81 # Add deltas
(247.3 + 246.82 + 246.81) / 3 = 246.9767 # Monthly average
246.9767 ~= 247.0 # Round to nearest 10th
247 - 273.2 = -26.2 # Convert to Celsius
Result: -26.2
This is the same as Hansen's value.
Could you do a step-by-step breakdown of your calculations for my method so we can work out why we are getting different values?
#168
Jean, converting to Kelvin is essential because most programs round negative numbers differently than they do positive ones.
Real Scientists use Kelvin. I had suspected that Hansen didn’t, but now feel better than before wrt his credentials, if not his political bias(es).
#178, oh really, like what? Please go ahead and try your method with any offset, won’t make any difference.
#172
Jean, whilst it's true that adding any offset that makes all values positive would also work, the point of converting to Kelvin is that these are temperature records and Kelvin is a recognised temperature scale.
If you doubt the effect of making all the numbers positive then try the following.
Start with -0.05 and round it to the nearest 10th using your favourite program. You will get -0.1.
Start with -0.05, add 1 and you get 0.95; round it to the nearest 10th using your favourite program and you get 1. Subtract the 1 you added and you get 0.0, which is 0.1 different from the first method.
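The two routes can be illustrated in Python (a hedged sketch, assuming the "favourite program" rounds halves away from zero, Excel-style):

```python
import math

def round_half_away(x, ndigits=1):
    # Excel-style ROUND: halves move away from zero
    f = 10 ** ndigits
    return math.copysign(math.floor(abs(x) * f + 0.5), x) / f

# Direct rounding: -0.05 -> -0.1 (half away from zero)
direct = round_half_away(-0.05)

# Offset trick: make the value positive first, round, shift back
# -0.05 + 1 = 0.95 -> rounds to 1.0 -> minus 1 gives 0.0
shifted = round_half_away(-0.05 + 1) - 1
```

The two answers differ by 0.1, which is exactly the discrepancy being described.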
To put Terry's point another way, negative temperature numbers are artefacts of non-Kelvin temperature scales, and any rounding of temperature numbers should always be in the positive direction.
Re: 172 & 173, good point.
What I found was that Excel does as you indicated with -.05, but Visual Basic rounds it up to 0.
Gavin responded to Vernon on station quality, saying essentially that there's no way to determine whether good sites have good data or bad sites have bad data, that there's no evidence of site-standard "violations" producing faulty data, and that rural data is what Hansen uses. To me it appears he's using the Bart Simpson defense.
http://www.realclimate.org/index.php/archives/2007/08/fridayroundup2/
#28
Alex Trebek was kind enough to let me know that on this board I must rephrase things in the form of a scandal.
Hansen uses estimates in his bias calculation. He does this wherever there are mismatched pairs of readings. I can give you three of them.
Praha Dec86 = 3
Joensuu Jun88 = 19
Gassim Jun88 = 39
Don’t some of those seem a little high to you? Suspicious. And for records with more than one mismatched pair, you can never find out for certain what he guessed. Is this a secret “back door” in the algorithm? I want to know more.
Even better: Without access to Hansen’s “fudge factor” nobody can ever precisely replicate his results. Worse, the algorithm is most certainly not the one from the papers. A crucial step was omitted! How many more are there, I wonder out loud.
If ever there is a better reason to “free the code” I have not heard of it.
One more: The estimates are totally inappropriate for combining scribal versions. They introduce trends where no trend could possibly exist! These guesses will almost never match the recorded temperature. Yet he uses them anyway. Madness.
176, exactly. "You can't prove that the out-of-compliance numbers are wrong". I'd like to see how far he'd get with that in court.
Jean S,
There seem to be two two-series records in GHCN with one year of overlap, and with observations for most months of both series in the year of overlap.
KADUGLI
148628100000 1987 262 299 315 324 313 294 274 273 277 286 294 270
148628100002 1987 239 270 -9999 307 293 276 262 255 258 269 275 -9999
ST JOSEPH/WBO
425724490030 1948 52 22 25 154 179 230 258 253 220 128 66 2
425724490031 1948 -9999 22 25 154 179 230 258 254 220 128 66 2
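For anyone scripting against these records, a hedged parser sketch assuming GHCN v2 conventions (station id, year, then twelve monthly values in tenths of a degree C, with -9999 marking a missing month):

```python
def parse_ghcn_line(line):
    # Split a GHCN v2-style record: id, year, 12 monthly values.
    # Values are assumed to be tenths of a degree C; -9999 = missing.
    parts = line.split()
    station_id, year = parts[0], int(parts[1])
    months = [None if v == "-9999" else int(v) / 10.0 for v in parts[2:14]]
    return station_id, year, months

sid, yr, m = parse_ghcn_line(
    "148628100000 1987 262 299 315 324 313 294 274 273 277 286 294 270")
# m[0] is January (26.2), m[11] is December (27.0)
```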
re 177 Free the code
Free the Code. I think Gavin got sick of me signing off with that on every post.
re 164.
KDT, recall that Hansen started this in '87. He couldn't spell TCP/IP at that stage.
Infilling missing data has impacts on the power of homogeneity adjustments. It is NOT a trivial matter.
re 176
I told Gavin he was a microclimate sceptic.
Gavin has a particular set of triggers. If you don't hit his triggers he ignores you.
#173/#174: Ok, accepted.
#169 Terry: You are correct. I did my calculation in Celsius. When I did it in Kelvin I got the same results you did.
I’ve posted an update to the post, setting out what I believe to be the most plausible answers. Thanks to everyone who participated and particularly to John Goetz for framing the problem and then solving the calculation of the delta.
The next puzzle, as John G articulated, is to figure out how Hansen calculated seasonal and annual averages when there is missing data. However, even without solving this puzzle, the present evidence is sufficient to establish that Hansen’s method of combining different scribal versions is erroneous and has corrupted his entire data set. At this point, we do not know whether the errors introduce a bias into the trend results or whether the corruption is equally distributed up and down. At the same stage of the Hansen Y2K error, there was a similar question: the error was sometimes up and sometimes down, but, overall it introduced a 0.15 increase in U.S. data. This particular Hansen error affects data worldwide, not just in the U.S. My impression is that it will not be random overall but will cause the reported trend to be somewhat higher than the trend calculated without this particular error; however, this is just an impression right now.
Enough… All: please write to your congressmen/senators. Feel free to cut out the following letter.
Dear Senator Kyle,
As you are aware, NASA and NOAA are both predicting that man-made global warming (AGW) will result in a cataclysmic future for the world. You may also be aware that an individual by the name of Steve McIntyre (www.climateaudit.org) has found a significant error in the reporting of temperatures in North America for the past decade, forcing NASA to restate its assessment such that temperatures for the current decade are no higher than they were in the 1930s.
Steve McIntyre has tried fruitlessly to acquire the software used to produce the numbers that NASA and NOAA present for their case. Mr. McIntyre is forced to try to reverse engineer the work of NASA for no sound reason. If that weren't enough, NASA chief scientists have verbally assailed Mr. McIntyre despite the fact that he uncovered this significant flaw.
One could say that global warming is the mother of all public health issues. But unlike any other public health issue, here the researchers are the auditors. Could you imagine Boeing or a pharmaceutical company saying they don't need to prove their work because their drinking buddies (peer review) agreed with it? And unlike other industries, this one is run by our government.
I am asking you to help force NASA to become accountable. They need to be transparent on this issue. There are no state secrets. There are no technologies here that would make the US vulnerable. All that is requested is that the software that was used to come up with the results be provided for public scrutiny.
Your office can help by supporting the withholding of funding from these organizations, by specifically requiring that they make this information available, or by helping me file a Freedom of Information Act request.
Thank you for your time and all your hard work.
For condition #3:
Dire Dawa record 1, 1967
oooh…found THE ONE:
Neghelli, only record, 1955
For this station, the previous year's annual average and the following two years' are not calculated. However, the annual average for 1955 is calculated even though five of the twelve months are missing (OK, four of twelve, or one-third, given the annuals are calculated Dec-Nov). Solve that one and we should be well on our way.
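Purely as a guess at what such an annual calculation might look like: the Dec-Nov window is from the observation above, but the minimum-months tolerance threshold is entirely an assumption on my part.

```python
def annual_mean_dec_nov(dec_prev, jan_to_nov):
    # Dec-Nov "meteorological year" average: December of the previous
    # calendar year plus Jan-Nov of this one. Missing months are None
    # and are skipped; the 8-month minimum is a pure guess, chosen so
    # that a year with 4 of 12 months missing still gets an annual value.
    months = [dec_prev] + list(jan_to_nov)
    present = [m for m in months if m is not None]
    if len(present) < 8:
        return None
    return sum(present) / len(present)
```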
My assumptions:
Probably no programming errors, or at worst one.
The algorithm, once found, is not bizarre (it may not be sound, but it probably is not bizarre).
Rounding issues probably affect the result.
The code was created in the mid-80s or earlier and has not been changed… think like a programmer from back then, probably using Fortran 77.
Re update
Hansen Averaging
The only reason I have the Kelvin conversion is because different programs/computers will round negative values differently. If Hansen was using a program/computer that correctly rounds negative numbers then he would not have to go via Kelvin. To see if your program correctly rounds negative numbers, try rounding -0.05 to the nearest 10th. If you get an answer of 0.0 then your program rounds it correctly and you can completely ignore any conversion to and from Kelvin.
SingleDigit Adjustment
Although all inputs and outputs to Hansen's algorithm are to a 10th of a degree, there is no reason why the internal workings should impose this limit. With delta resolutions to a 100th of a degree (or more) all 3 data sets can be resolved and, in addition, the Cara data set (which merges 3 series) can also be resolved. The Praha and Gassim deltas are within the ranges that work with my method (0.10 and 0.15) and Joensuu's is a 100th of a degree more. Do you have the figures for the deltas to 2 decimal places?
Finally, to paraphrase Occam:
My solution is the simplest since there is no test for odd and even or +ve and -ve. In summary it is: Add delta, average, round. What could be simpler?
Here’s the list of working deltas.
Praha: Any value between 0.06 and 0.10
Joenssu: Any value between 0.05 and 0.09
Gassim: Any value between 0.15 and 0.19
Cara: The following deltas can be applied to the 2nd and 3rd series.
-0.17 and +0.01
-0.18 and +0.01
-0.19 and +0.01
-0.20 and +0.01
-0.18 and +0.02
-0.19 and +0.02
-0.20 and +0.02
-0.19 and +0.03
-0.20 and +0.03
-0.20 and +0.04
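Putting the "add delta, average, round" summary into a hedged Python sketch (the Kelvin offset follows the earlier posts; the exact half-up rounding helper is my assumption), using the Cara deltas -0.18 and +0.01 on series 2 and 3:

```python
import math

def round10(x):
    # Round to the nearest tenth; halves go toward +infinity,
    # matching what the all-positive Kelvin route produces
    return math.floor(x * 10 + 0.5) / 10

def combine(values, deltas):
    # "Add delta, average, round": shift each series by its delta in
    # Kelvin (so everything is positive), average, round to a tenth,
    # then convert back to Celsius and re-round to absorb float error
    kelvin = [v + 273.2 + d for v, d in zip(values, deltas)]
    avg = sum(kelvin) / len(kelvin)
    return round10(round10(avg) - 273.2)

# Feb 1946 at Cara with deltas 0, -0.18, +0.01
feb_1946 = combine([-25.9, -26.2, -26.4], [0.0, -0.18, 0.01])
```

This reproduces Hansen's combined -26.2 for that month.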
Re #100. How many other stations have a similar history? Do we know?
So we have data, all of whose provenance is not certain. To this, we add uncertain statistical techniques. Do I summarise correctly?
If so: this is now 'settled science'? Which we non-scientists are supposed to accept on faith? Vast policy changes are built on this?
The integer rounding you describe might sound complicated, but it is one of the standard ways that computers do integer rounding. It's typically called "rounding toward zero". The other standard technique is to always round down. Now that we have logic gates to spare there may be more complex techniques, but in the 80s these were the only two that I'm aware of.
Which rounding system is used depends on the CPU, not the language, but some high-level languages define which rounding to use, meaning that the compiler writer may have to add extra code when compiling for a CPU that uses the "wrong" rounding method.
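For illustration, both behaviours shown in Python (whose int() conversion truncates toward zero):

```python
import math

x = -7.5
toward_zero = int(x)         # truncation ("round toward zero"): -7
round_down  = math.floor(x)  # always round down (toward minus infinity): -8
```

The two conventions agree for positive values and differ only for negatives, which is exactly why the sign handling matters in this puzzle.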
Rounding Algorithms 101
Sorry for the broken link: http://www.diycalculator.com/popupmround.shtml
As one not participating directly in the puzzle solving, I most appreciated Steve M’s summary of the thread with regards to what was found. I hope this continues with other threads in attempts to decipher GISS.
While I have followed the discussion and can make intuitive sense of it most of the time, I think one has to be directly involved to fully appreciate the “puzzle”. It seems more complicated than it needs to be and too many open options available for the calculations. Should not there be a best or at least most accepted way of doing these calculations?
Actually, it’s both, but that’s due to a variety of reasons, many of them recent developments. Jean’s link explains a cornucopia of methods. Generally speaking, the language MAY have a say in which algorithm is used, but when and why each algorithm is used is answered with “it depends.” Most newer CPUs have various built-in functions for round, floor and ceil (all of which are variants of the standard round we all know). The behavior of round, in general, is definable at the assembly/machine level. However, if the CPU you’re using does not have a specified behavior, it will resort to using the math library (libm in C), which typically has IEEE 754-defined functions for whatever floating-point precision you’re using.
Kenneth, you are correct. For the life of me I cannot understand why they don’t simply do a round to nearest (or round to even) function and be done with it. This is much more complicated than it needs to be. Much more.
Mark
#194. Any pointless and probably incorrect complications are introduced by Hansen and not by us. The difficulty in reverse engineering arises because of Hansen's incomplete and inaccurate description of the methodology. We may be wrong-footed here and there, but again the fault lies in the original articles.
I agree, Steve M. By “they” I meant the original authors. For whatever reason, they simply complicate matters to the point that reproducibility is all but impossible.
Mark
Re: #196
That part I understand. My problem with this case, and others in climate science statistics that I have encountered here at CA, is that I seldom or never see references to the "preferred" analysis or solution. It appears that the arbitrariness of methods allows some climate scientists to use whatever methods they feel fit their needs and then not have to vigorously defend the method. To this layman there appear to be too many alternative ways of statistically analyzing and solving these problems. Can these problems be so unique that statisticians have not analyzed them before and found the preferred methods? The debates seem to devolve into side issues.
198 Kenneth. Side issues, or there are multiple issues being discussed as if they were one.
Rather than bust my brain trying to find out how Hansen estimates the missing months, let me pose two, more general questions:
1) Given a record with missing months, along with other (non-scribal) different records covering the same period, is there a preferred method for estimating the missing months?
2) Given a record with missing months, with no overlapping records covering the same period, is there a preferred method for estimating the missing months?
Now, back to my previously scheduled entertainment … the Federer/Roddick match between two outrageous tennis cyborgs …
w
In terms of rounding, there is a preferred method, round to even or round to odd. Both create zeromean rounding errors. Going out of your way to process data in the manner in which these scientists have is either due to ignorance, or intent.
Mark
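For example, Python 3's built-in round() happens to implement round-half-to-even (banker's rounding), and the rounding errors cancel over symmetric half-way cases:

```python
# round-half-to-even: halves go to the nearest even integer, so over a
# symmetric set of half-way cases the rounding errors sum to zero
vals = [0.5, 1.5, 2.5, 3.5]
rounded = [round(v) for v in vals]                       # [0, 2, 2, 4]
total_error = sum(r - v for r, v in zip(rounded, vals))  # 0.0
```

A round-half-up (or half-away-from-zero) rule, by contrast, would accumulate a net +0.5 bias over the same four values.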
Re:
Federer is in a class by himself. In terms of hurricane categories he is a 6 and Roddick is a 3 or 4.
“cyborg” – what an unfair term. This year, we are seeing the best tennis that I’ve ever seen in my lifetime – by far. Federer is a magician and every match is a must-see. And there are other great players, who would have had their majors without Federer. Sampras had his wins but it was never magical.
Jim Courier made an interesting comment: that they are using new stringing that has been available only for about 4 years and it is the new strings that enable the large racquets to have a feel that was formerly impossible. That’s why guys are able to hit so hard and still get bite on the shots. If you think about it, there were a lot more overhits about 10 years ago than there are today.
Re 203. A lot more overhits? This is surely more due to a change in the method for determining whether a ball is in or out (replacing humans by machines), rounding errors, bias, and corruption of the data.
Re: #203
Federer can win on all surfaces and wins the grand slams regularly. The book may be temporarily out on whether he is the best ever what with all the advances in technology but he is almost there in my book. Who doesn’t want to see the best ever in any sport?
I called them cyborgs because it’s hard to believe that humans can play that well … 2 1/2 sets without a service break. Magic is right.
w.
UPDATE #2
The following R function appears to calculate the Hansen delta between two station versions. X and Y are R data frames in GISS format, with a column named "ANN" corresponding to the 18th column in GISS format.
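A hedged sketch of the same calculation (in Python rather than R; the specifics, in particular rounding to a tenth over the overlapping annual values, are assumptions based on the discussion above, not a transcription of the original function):

```python
import math

def hansen_delta(x_ann, y_ann):
    # Estimate the offset between two station versions as the rounded
    # mean difference of their annual ("ANN") values over the years
    # present in both records. x_ann, y_ann: dicts year -> annual mean.
    overlap = sorted(set(x_ann) & set(y_ann))
    if not overlap:
        return None
    diff = sum(x_ann[y] - y_ann[y] for y in overlap) / len(overlap)
    return math.floor(diff * 10 + 0.5) / 10  # Hansen-style tenth rounding
```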
UPDATE #3 (Sept 8, 2007)
Hansen released source code yesterday. In a quick look at the code for this step, the combination of station information is done in Step 1 using the program comb_records.py which unfortunately, like Hansen’s other programs, and for that matter, Juckes’ Python programs, lacks comments. However, with the work already done, we can still navigate through the shoals.
The delta calculation described above appears to be done in the routine get_longest_overlap, where a value diff is calculated. Squinting at the code, there appear to be references to the annual column, confirming the surmise made by CA readers. It doesn’t look like there is rounding at this stage – so maybe the rounding is just occurring at the output stage, contrary to some of the above surmise. I still think that there’s some rounding somewhere in the system, but I can’t see where right now and this could be an incorrect surmise.
After calculating the delta (“diff”), this appears to be applied to the data in the routine combine which in turn uses the routine add (in which “diff” is an argument).
There is an interesting routine called get_best which ranks records in the order: MCDW, USHCN, SUMOFDAY and UNKNOWN and appears to use them in that order. I don’t recall seeing that mentioned in the documentation. I’m going to look for the location of this information in the new data as I don’t recall any prior availability of this information.
I’ve been trying to figure out in what order Hansen’s code combines records. HL87 says from longest available to shortest. The work on CA was indicating that it was most recent to oldest. The newly posted comb_records.py in STEP1 has a routine near the end called get_ids that seems to sort the records by name. Not knowing Python, I can’t figure out if this is sorting up or down. If sorting down, then the last scribal version would be combined with the next-to-last, and so forth. This would match our observation that the newest record is combined first.
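A hedged guess at the get_best ranking described above: the source-type order MCDW > USHCN > SUMOFDAY > UNKNOWN is from the code, but the length tie-break and the data shapes are my assumptions, not Hansen's actual implementation.

```python
# Lower rank = preferred source type; unrecognised types fall to the bottom
RANK = {"MCDW": 0, "USHCN": 1, "SUMOFDAY": 2, "UNKNOWN": 3}

def get_best(records):
    # records: list of (source_type, n_months, station_id) tuples.
    # Pick the preferred source type; among equals, the longest record.
    return min(records, key=lambda r: (RANK.get(r[0], 3), -r[1]))
```

Usage: given an UNKNOWN record of 900 months, a SUMOFDAY of 400 and an MCDW of 300, this picks the MCDW record despite its shorter length, which is the behaviour the ranking seems to imply.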
Lubos:
In reference to your comments on my #91 on this thread — the WWII & Russian Occupation comment was simply my editorializing and was more to note additional discrepancies in the record that could not be readily accounted for. The Russian Occupation comment assumed that the equipment had “disappeared” and was predicated on the proclivity of the Soviet occupying forces to take whatever was movable and had any value whatsoever. The 2.7C magnitude of the Praha Ruzyne adjustment struck me as being orders of magnitude greater than anything identified elsewhere – suggesting that there may well be others out there that are not due to any conceivable rounding or missing-data-correcting algorithm. I anticipate some minor disconnects between the output of the code and the consolidated records. I take your point on the “leap year” – I was grasping at straws.