Juckes and the West Greenland Stack

Last fall, I discussed information sources on West Greenland ice core series, noting that the West Greenland version attributed by Juckes to Jones et al 1998 was a version that I’d not seen before. While I was looking at the proxy decisions in Juckes et al, I noticed the following intriguing rationalization:

The Greenland stack data used by MBH1999 is a composite of data analysed by Fisher et al. (1996), but the precise nature of the composite is not described by Fisher et al. (1996). … The Crete ice core series [of Jones et al 1998] is preferred to the “stack” series used by MBH1999 because it is better documented.

So one asks: exactly what is the “better documentation” preferred by Juckes and the Euro Team? Or is this “better documentation” a figment of their imaginations?

Jones et al 1998 and MBH98-99 both cite Fisher et al 1996 as a reference. As it happens, Fisher et al 1996 does not appear to describe either series (it only illustrates rather short versions, although it describes Fisher’s stacking methodology). So on this reference, at least, the documentation question is a draw.

Fisher (2002) Table 1 reports the use of a version identified as DELNORM6.CWG (with that exact nomenclature), “CWG” standing for Central West Greenland; the file itself was ASCII. A couple of years ago, Fisher sent me a CD with a variety of series on it. Experimenting a little, I determined that the series DELNORM6.CWG, included on the CD, matched the MBH99 version 100%. The DELNORM6.CWG file listed its components for each step, e.g. the following components in one of the steps. So any person who received the DELNORM6.CWG version from Fisher would have received a relatively adequate recipe for how it was constructed. In Mann’s MBH98 data archive, the email transmitting this series to Mann (from Frank Keimig of the University of Massachusetts) is preserved, for some reason.

A CT85A-1Y.CRT 1983 AD 362 212 144
B CT84B-1Y.CRT 1982 267 211 50
D CT84D-1Y.CRT 1982 217 211 0
CRETE CT74-1YS.CRT 1973 922 202 714
MILCT MC73-1Y.MIL 1966 791 195 590
GISP2 GISP1YR.GSP 1985 718 214 498 DEUTERIUM
GRIP SIG#1YR1.GRIP 1979 917 208 703
GRIP SIG#2YR1.GRP 1979 917 208 703
GRIP SIG#3YR1.GRP 1979 917 208 703
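The “100% match” determination described above amounts to aligning two annual series and confirming they agree value for value. A minimal sketch of that check, in Python with made-up illustrative numbers (not the actual DELNORM6.CWG or MBH99 data):

```python
def series_match(a, b, tol=1e-6):
    """Return True if two equal-length annual series agree within tol at every year."""
    if len(a) != len(b):
        return False
    return max(abs(x - y) for x, y in zip(a, b)) <= tol

# Illustrative dO18-like values only; the real comparison would load the two files
mbh99_version = [-28.1, -27.9, -28.4, -28.0]
delnorm6_cwg  = [-28.1, -27.9, -28.4, -28.0]

print(series_match(mbh99_version, delnorm6_cwg))  # prints True: an exact match
```

A tolerance is used rather than strict equality because archived versions are often rounded to different numbers of decimal places.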

I’ve attempted to locate the Jones et al version among the many variations in Fisher’s files, but so far have been unable to find an exact match. There are multiple series from Crete, including many of the series in the Fisher stack described above (Crete, A, B, C, D, H). The Jones version is not the same as any of the potential components. The Jones version goes from 1000 to 1983; the only version going back to 1000 (actually to 553) ends in 1973; there are a number of potential components ending in 1983, but all start after 1000. So the Jones version appears, with a high degree of probability, to also be a CWG stack – except in this case, unlike DELNORM6.CWG, we don’t know the precise components of the stack, although the Jones version is obviously closely related to these components.

So back to the question: is the “better documentation” of the Jones version a figment of the Euro Team’s imagination – the full team including Briffa, Osborn, Moberg, Allen, Hegerl, Weber and Esper? It sure seems that way to me. Perhaps Juckes can identify some such documentation, but he didn’t do so in the article itself. I guess Goosse and the referees didn’t ask him about it.

In addition, given Juckes’ professed concern about ensuring that all series versions have properly received permissions from their originators, it’s interesting to contemplate what permission (if any) Juckes has for archiving the Jones et al version of the Fisher data (or for that matter the MBH version.) I am unaware of any public archiving of either version by Fisher himself.

I presume that Frank Keimig got the DELNORM6.CWG version from Fisher either directly or indirectly. In the email at Mann’s FTP site, Keimig mentions re-formatting the series – in which process the documentation in the Fisher version was removed. Mann got this version from Keimig; Juckes presumably got it from Mann, either directly or from Mann’s archive. Did anyone in this chain ever get permission from Fisher to archive the data publicly? I doubt it. I don’t suppose that Fisher has any objections to people using his data provided they acknowledge him – why would he? But within Moberg’s definition, Fisher himself, to my knowledge, never publicly archived the version now disseminated by Juckes.

I believe that the Jones version is an earlier version of the DELNORM6.CWG series (or some close variation). Jones may have got his version directly from Fisher, who’s very cooperative, or indirectly from Ed Cook or someone like that. Presumably Juckes got the Jones version from Briffa or Jones. (BTW, Jones refused to provide the versions that he used when I requested them.) Moberg expressed concern about the possibility of Sidorova issuing a new version, saying that he did not want to preempt any adjustments by Sidorova to her series. It seems inescapable that the DELNORM6.CWG version (which is used in Fisher 2002) is a version that Fisher prefers to the version used in Jones et al 1998. Yet Juckes seems to have had no compunction about archiving the version of the Fisher data used in Jones et al 1998 – a version that had never previously been posted digitally on the internet – and about using a version seemingly not preferred by the originator of the data, all under the pretext of “better documentation”, though no evidence of such “better documentation” was provided anywhere.

References:
Fisher, D. A., 2002. High-resolution multiproxy climatic records from ice cores, tree-rings, corals and documentary sources using eigenvector techniques and maps: assessment of recovered signal and errors. The Holocene 12(4), 401–419.
Fisher, D. A., Koerner, R. M., Kuivinen, K., Clausen, H. B., Johnsen, S. J., Steffensen, J. P., Gundestrup, N., and Hammer, C. U.: Inter-comparison of ice core and precipitation records from sites in Canada and Greenland over the last 3500 years and over the last few centuries in detail using EOF techniques, NATO ASI Ser, Ser. I, vol. 41, edited by: Jones, P. D., Bradley, R. S., and Jouzel, J., pp. 297–328, Springer-Verlag, New York, 1996.

43 Comments

  1. Pat Frank
    Posted Oct 10, 2007 at 6:53 PM | Permalink

    I don’t suppose that Fisher has any objections to people using his data provided they acknowledge him.

    That’s the usual way of things in science, in my experience. This whole business of restricting data access, which seems so prevalent in proxy climatology, rings very false to me.

    Maybe Jones’ data are “better documented” for Juckes because it was Phil Jones that used them in a document. I.e., for Juckes, better=Jones.

    What an advert goldmine. [snip] ‘That’s not just a better butter biscuit! That’s jonesian!’

  2. Steve McIntyre
    Posted Oct 10, 2007 at 6:57 PM | Permalink

    #1.

    Maybe Jones’ data are “better documented” for Juckes because it was Phil Jones that used them in a document. I.e., for Juckes, better=Jones.

    Are you suggesting that Juckes has made a subtle dig against Michael Mann? A little subtle for our particular prankster, I think.

  3. Larry
    Posted Oct 10, 2007 at 7:30 PM | Permalink

    Jonesing for the Jonesian?

  4. Larry
    Posted Oct 10, 2007 at 7:42 PM | Permalink

    Aside from the obvious question of what does “better documented” mean, is that (whatever it is) the proper criterion for including and excluding data? This is getting surreal.

  5. Pat Frank
    Posted Oct 10, 2007 at 8:29 PM | Permalink

    #2 — Let’s face it, like you and I, Mann is a mere colonial whereas Jones is English. That carries weight among the Sceptered Isle stalwarts.

    Here’s an illustrative story. A few years ago, friends who work at the U. of Saskatchewan reported the structure resulting from ionic mercury and the amino acid cysteine. They proposed that this was a basis of the toxic effect exerted by mercury, e.g., from fish. The media was interested and reported on their work, but apparently stories in British newspapers said, “Canadian scientists say. . . British experts warn. . . .” No missing that intentional bias.

    A double irony is that both scientists reporting the work were English immigrants from the UK. But the prejudice of origins will have its way.

  6. Gary
    Posted Oct 10, 2007 at 9:38 PM | Permalink

    Steve, if you get a chance can you elaborate on the stacking methodology and how valid it is statistically? The wiggles in multiple ice and sediment core curves are harder to line up than annual tree rings because at deeper levels the dating gets less precise.

  7. Steve McIntyre
    Posted Oct 10, 2007 at 9:53 PM | Permalink

    No idea about this. Note that Fisher’s stacking at least raises the issue. In many ice core situations, there isn’t much replication.

  8. Willis Eschenbach
    Posted Oct 10, 2007 at 10:20 PM | Permalink

    Steve M., what is a “CWG stack” when it’s at home?

    w.

  9. Steve McIntyre
    Posted Oct 10, 2007 at 10:34 PM | Permalink

    “CWG” standing for Central West Greenland – the file itself was ASCII. The “stack” is simply a set of time series of dO18 measurements (or accumulation measurements.) It looks like Fisher does a stepwise PC analysis using available data retaining the PC1, but I haven’t tried to replicate his stack from original data yet. In such circumstances, the PC1 is going to be pretty similar to an average or to an average of scaled versions (CVM).
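Steve’s remark above, that a PC1 of the available records ends up close to an average of scaled versions (CVM), can be illustrated with a minimal scale-then-average composite. The data below are synthetic and this is a sketch of the general idea, not Fisher’s actual stepwise procedure:

```python
from statistics import mean, pstdev

def standardize(series):
    """Scale a series to zero mean and unit standard deviation."""
    m, s = mean(series), pstdev(series)
    return [(x - m) / s for x in series]

def cvm_composite(series_list):
    """Average the standardized series year by year (all equal length here)."""
    scaled = [standardize(s) for s in series_list]
    return [mean(vals) for vals in zip(*scaled)]

# Three short synthetic "cores" sharing a common up-down pattern
cores = [[1.0, 2.0, 1.5, 0.5],
         [2.0, 3.1, 2.4, 1.1],
         [0.2, 1.3, 0.8, -0.4]]
composite = cvm_composite(cores)
```

Because each input is rescaled to unit variance before averaging, no single high-variance core dominates the composite, which is the main practical difference from a raw average.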

  10. Hans Erren
    Posted Oct 11, 2007 at 1:16 AM | Permalink

    In seismic data processing a “stack” is the average of a set of independent measurements which enhances the signal to noise ratio.

  11. Philip Mulholland
    Posted Oct 11, 2007 at 3:52 AM | Permalink

    Ref 10

    A stack takes into account the different geometries of the seismic ray paths which together image a common reflection point in the subsurface. The measurements used to produce a stack are therefore dependent on the geometry of the array and are not independent; if they were independent, they would not sum together and so could not enhance the coherent signal.

  12. Gary
    Posted Oct 11, 2007 at 6:42 AM | Permalink

    Here’s a reference to a recent (2005) effort to stack deep sea sediment dO18 records. The method uses both algorithms and manual tweaking to align the wiggles. The adjusting is based on some rational criteria and is done to account for apparent sedimentation rate changes. It’s good to see a clear explanation of what and why.
    http://www.maureenraymo.com/2005_Lisiecki+Raymo.pdf

  13. Gunnar
    Posted Oct 11, 2007 at 8:16 AM | Permalink

    >> I am unaware of any public archiving of either version by Fisher himself.

    From a logic point of view, you shouldn’t consider “public archiving” synonymous with permission to use. The public archiving presumably implies “public domain”, but one can place something in the public domain, and not submit to the public archive. What’s more, one can give permission A to person X, without placing it in the public domain.
    [snip]

    Steve: Gunnar, give it a rest please; you’re getting into Methane Mike territory on this.

  14. Larry
    Posted Oct 11, 2007 at 8:32 AM | Permalink

    10, 11, so this is some sort of sonic in-situ measurement?

  15. Gunnar
    Posted Oct 11, 2007 at 8:36 AM | Permalink

    Steve, ok. Thank you for snipping what I was responding to.

  16. Steve McIntyre
    Posted Oct 11, 2007 at 8:38 AM | Permalink

    #14. No, think of it just as averaging. He aligns the records so that the maximum cross-correlation occurs at 0-lag. It’s described in Fisher et al 1996 (which is not easy to locate other than at good university libraries). I’ll try to summarize further on another occasion, but it’s not material to Juckes’ imagination.
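The alignment step Steve describes, shifting each record so that its cross-correlation with a reference peaks at lag zero, can be sketched as follows. The series are synthetic and this is a simplification, not the actual Fisher et al 1996 procedure:

```python
from statistics import mean

def correlation(a, b):
    """Pearson correlation of two equal-length series."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def best_lag(reference, record, max_lag=5):
    """Lag (in years) at which the record correlates best with the reference."""
    best = None
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:   # record leads: drop the front of the reference
            a, b = reference[lag:], record[:len(record) - lag]
        else:          # record lags: drop the front of the record
            a, b = reference[:lag], record[-lag:]
        r = correlation(a, b)
        if best is None or r > best[1]:
            best = (lag, r)
    return best[0]

reference = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
record = [0.0, 0.0] + reference[:-2]   # the same signal arriving two "years" later

print(best_lag(reference, record))  # prints -2: the record lags the reference by two years
```

Once the best lag is found, each record would be shifted by that amount before averaging, so the correlation with the reference is maximized at lag zero.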

  17. Larry Sheldon
    Posted Oct 11, 2007 at 8:47 AM | Permalink

    “because it is better documented”

    Anybody else think of the story of the drunk looking for his lost car keys under the street light, because the light was better there than where his car was parked?

  18. steven mosher
    Posted Oct 11, 2007 at 9:09 AM | Permalink

    Juckes counted the words of documentation. The shortest document won in his mind.

    hmmm. best if i stop right here.

  19. Larry
    Posted Oct 11, 2007 at 9:25 AM | Permalink

    So the seismic ray part has to do with the origins of the method, but the method itself is just a numerical procedure?

  20. phil
    Posted Oct 11, 2007 at 1:29 PM | Permalink

    #17 (Larry Sheldon says on October 11th, 2007 at 8:47 am)

    “because it is better documented”

    Larry, this shows up elsewhere too:

    From page 17 of NDP019:

    To be included in the U.S. HCN, a station had to be active (in 1987), have at least 80 years of mean monthly temperature and total monthly precipitation data, and have experienced few station changes (see Appendix A for a complete listing of the stations in the U.S. HCN). An additional criterion that was used in selecting the 1221 U.S. HCN stations, which sometimes took precedence over the preceding criteria, was prompted by the desire to have a uniform distribution of stations across the United States.

    The issue of representativeness of data sets is valid when they are not chosen at random. There is a difficulty, of course, when dealing with historical data, because the experiment cannot simply be repeated. There is an understandable desire to use those data sets which have the highest data quality, but one should not, IMHO, ignore the uncertainty associated with the possibility that the particular data sets may not be representative of what you are trying to measure.

  21. Willis Eschenbach
    Posted Oct 11, 2007 at 1:40 PM | Permalink

    Steve M, thanks for the info on the stacking procedure. I just took a look at the cross-correlation of some of the data used in the stack you describe above, viz.

    A CT85A-1Y.CRT 1983 AD 362 212 144
    B CT84B-1Y.CRT 1982 267 211 50
    D CT84D-1Y.CRT 1982 217 211 0
    CRETE CT74-1YS.CRT 1973 922 202 714
    MILCT MC73-1Y.MIL 1966 791 195 590
    GISP2 GISP1YR.GSP 1985 718 214 498 DEUTERIUM
    GRIP SIG#1YR1.GRIP 1979 917 208 703
    GRIP SIG#2YR1.GRP 1979 917 208 703
    GRIP SIG#3YR1.GRP 1979 917 208 703

    This data is all available here. I took a look at the cross-correlations of the first four ice cores on the list, and found them to be fairly low, ranging from about 0.40 to 0.50.

    In doing so, I got to thinking about whether any statistical method exists that can say whether or not a large group of individual datasets actually contains a common signal or not. It seems to me that if we average together some small number of individual red-noise datasets (nine in this case), we’ll get some kind of signal out of them. But how do we determine whether this signal actually means anything, or whether it is just random?

    Now in theory, at least, the noise has a higher frequency than the signal of interest. The only method that occurs to me is to average the individual signals temporally (gaussian, butterworth, etc.) first to remove the high-frequency noise, and then take the cross-correlations. If there is a common signal, the improvement in correlation should be greater than the improvement that one would expect just from the temporal averaging itself … but how much improvement in correlation would one expect from the temporal averaging itself, and how much better than that would the correlations have to be in order to say that there is actually a signal present?

    Comments from statistical folks gladly accepted, I suspect that this is covered in some elementary signal processing book somewheres …

    w.
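One crude way to probe Willis’s question above is to average several independent AR(1) (“red noise”) series containing no common signal and see how much variance survives. With truly independent noise the composite standard deviation shrinks roughly as 1/sqrt(N), so low-frequency structure that survives averaging much better than that is the candidate for a real shared signal. A toy sketch with seeded synthetic data (not the ice core series):

```python
import random
from statistics import pstdev

def ar1_series(n, phi=0.7, rng=None):
    """Generate an AR(1) ("red noise") series: x[t] = phi * x[t-1] + white noise."""
    rng = rng or random.Random()
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

rng = random.Random(42)  # fixed seed so the sketch is repeatable
cores = [ar1_series(900, rng=rng) for _ in range(9)]          # 9 independent "cores"
composite = [sum(vals) / len(vals) for vals in zip(*cores)]   # simple year-by-year average

individual_sd = sum(pstdev(s) for s in cores) / len(cores)
composite_sd = pstdev(composite)
# With no common signal, composite_sd should be near individual_sd / 3 (1/sqrt(9))
```

This doesn’t answer the harder question of testing significance against the temporal-smoothing effect Willis raises, but it gives the null benchmark: how much "signal" pure red noise leaves behind after stacking.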

  22. Larry
    Posted Oct 11, 2007 at 2:05 PM | Permalink

    Willis, this isn’t completely obvious to me, but are you saying that the noise would have to be of the order of 1 yr(-1) or its harmonics (because the data is annual)? If so, wouldn’t the way to approach it be to construct a filter with a sharp rolloff at about 2 yr(-1)? Averaging tends not to have the most ideal frequency response properties. Or am I out to lunch?

  23. Steve McIntyre
    Posted Oct 11, 2007 at 2:08 PM | Permalink

    #22. I would say that a cross-correlation of 0.4 to 0.5 is actually pretty good. Fisher himself has some sensible comments about averaging, observing that the noise in the accumulation series is much higher frequency than the dO18 series (which he attributes to greater diffusion of dO18) and he thinks that an accumulation signal emerges more quickly.

    In the case of Quelccaya, where there are 4 series (two dO18 and two accumulation), Mann treated each of the series as being different antennae teleconnecting with climate fields. Juckes takes one. As for Juckes, even if there are only two Quelccaya dO18 series, why wouldn’t he average them? Instead, he picks one. Ah, the Team – they are always full of merriment.

  24. Steve McIntyre
    Posted Oct 11, 2007 at 2:12 PM | Permalink

    I’ve uploaded the DELNORM.CVG file here http://www.climateaudit.org/data/ice/DELNORM6.CVG so you can see the files used by Fisher in his stack. It opens in Notepad fine.

  25. Jens
    Posted Oct 11, 2007 at 2:35 PM | Permalink

    #25

    The link is incorrect. The correct one is http://data.climateaudit.org/data/ice/DELNORM6.CWG
    It’s the file ending that is incorrect.

    Steve and others, keep up the great work! 🙂

  26. Earle Williams
    Posted Oct 11, 2007 at 3:56 PM | Permalink

    There are two different stacking concepts mentioned in the seismic posts above. Hans in #10 and Philip Mulholland in #11 both describe stacking as used in seismic processing but they are two distinct processes.

    The first as described by Hans is essentially just combining multiple measurements of the same configuration of transmitter and receiver. I would call this oversampling but would probably be misusing the term. All it does is sum over multiple samples and increases the signal to noise ratio by averaging out temporal random noise.

    Geometric stacking techniques involve summing sample traces from different configurations of transmitter and receiver based on particular strategies, such as stacking based on a common midpoint, common depth point, etc. The geometric stacking techniques improve signal coherence in portions of the record by averaging out spatial random noise.

    The stacking techniques are not exclusive. Bear in mind this is worse than a cliff notes version of seismic data processing. In simple terms, stacking is just adding together the squiggles from different time series.

  27. Philip Mulholland
    Posted Oct 11, 2007 at 4:38 PM | Permalink

    Ref 27

    Thanks Earle, I guess Steve would like us to leave the geophysics and get back to the geochronology. Gary’s superb link Ref 12 should keep us all on track.

  28. Larry
    Posted Oct 11, 2007 at 5:10 PM | Permalink

    27, that’s a good explanation. Very roughly like CT scanning, where you’re able to improve the S/N ratio by having multiple angles on the same thing.

  29. Geoff Sherrington
    Posted Oct 11, 2007 at 8:03 PM | Permalink

    Re noise reduction

    Analysis works around concepts of accuracy (how close the result is to the best accepted value) and precision (how repeatable the measurement is, irrespective of whether it is accurate). Multiple measurements of the same variable tend to improve precision, but are less useful to improve accuracy. Accuracy is observed through use of different instruments, variations of technique like pre-concentration, different operators, etc.

    CT scanning does not improve accuracy. It increases information content by showing different geometric views. To improve CT precision, you have to resort to stronger x-ray sources, larger detectors, longer counting times etc, all matters that have been in the radiometric literature for decades. To improve CT accuracy, you need a method to establish what the best answer is to the problem you study. Then you compare the absolute values of your method with the best method. I do not know if the concept of accuracy can be applied to CT scans, unless it is accuracy of position of imaged parts or accuracy of density measurement of parts, established separately. (Resolution is a different concept).

    In work like counting oxygen isotopes in ice cores, the accuracy depends in part on how good the observational method/instrument is. Isotope instruments can be calibrated absolutely against synthetic mixtures if one has first obtained separated pure isotopes. If one just uses mass spectrometry of standard samples all the time, this says nothing much about accuracy and you tend to deal in circular logic. Precision can be improved by replicate measurements and averaging. The odd inaccurate outlier can be picked up by replication analysis of different pieces of the sample. That is why we do analysis on half drill cores, or quartered, or eighths or whatever.

    (There are different approaches to signal:noise possible in some forms of rhythmic data, such as phase-lock amplifiers. Not really applicable here.)

    Few conventional, routine analytical instruments are capable of giving over three significant figures of accuracy. When an oxygen isotope is a small fraction of another and a ratio of them is used, and when the values vary monotonously from one sample to the next, the instruments have a difficulty in providing reliable data. If oxygen isotope ratios are compared left side of core versus right side, I would expect that correlations would be very poor because measurement errors are so large – and especially if you add in foundation assumptions about loss or not of material over time, etc.. I could see circumstances where the multiplication of left side data by small random numbers (like adding white noise), point by point, to produce a synthetic comparison with right side, would give another correlation not much different from the original. Both very poor. Excellent numbers for proving whatever your particular evangelical message is that day.

    I keep emphasising that unusual results should be studied before being discarded. They are capable of containing rare and insightful information that helps theoretical evolution. Please don’t smooth out novel physical mechanisms.

    Different topic, still climatic science: I have spent hours trying to find (a) a definition of the tropopause and (b) why the temperature gradient inverts near the tropopause. Even in literature as late as 2007, I could not find the answers. Fertile ground for more numbers to suit the message of the day. Yet the IPCC is so, so sure….. How can you be so sure when fundamental questions are still being investigated? How can scientists state that the science is “settled”? Do we now tell these current researchers to close up shop, it’s all settled, and we can revert to alchemy and horoscopes again?

  30. Hans Erren
    Posted Oct 12, 2007 at 12:37 AM | Permalink

    re 27 I was also referring to common midpoint stacking. My second post got eaten.

  31. Willis Eschenbach
    Posted Oct 12, 2007 at 5:51 AM | Permalink

    Geoff, thank you for your interesting quote:

    If oxygen isotope ratios are compared left side of core versus right side, I would expect that correlations would be very poor because measurement errors are so large – and especially if you add in foundation assumptions about loss or not of material over time, etc.

    Do you have any hard data about that possibility?

    Also, I’m still very curious about the “stacking” procedure. The stacks seem to be made out of a very small number of datasets. How can we tell if what we are getting after the average is meaningful or not?

    Finally, Steve M., I’m trying to duplicate the DELNORM.CWG dataset you posted. I cannot find the component dataset listed as “CT74-1YS.CRT” in the archive. The documentation says:

    INTERVAL [1062-553 AD] N=510 YRS

    SITE | FILE NAME | FIRST YEAR | TOTAL YRS | #DROPPED AT FRONT | #DROPPED AT END

    CRETE CT74-1YS.CRT 1973 1421 911 0

    In other words, this is the sole dataset that is used for the period from 553 – 1062 AD.

    I do, however, find a dataset called “CT74-1Y.CRT” in the archive, which covers the same period (553 – 1973). It is also the only series in the group that extends back to 553 AD. Now, here’s where the problem comes in. The correlation between the DELNORM.CWG dataset and the CT74-1Y.CRT dataset for the 553 – 1062 AD period is only 0.04 … I haven’t a clue what to make of that.

    Man, I love climate science … there’s more skeletons than there are closets to put them in.

    w.

  32. Willis Eschenbach
    Posted Oct 12, 2007 at 6:09 AM | Permalink

    Here’s the difference between the DELNORM and the CT74-1Y.CRT dataset during the period in question:

    Since CT74-1Y.CRT is the sole and only dataset used during the period in question, I don’t understand the reason for the difference.

    w.

  33. Dave Dardinger
    Posted Oct 12, 2007 at 9:37 AM | Permalink

    re: #33 Willis

    Well obviously the blue line is the same as the red line but shifted about 40 years forward. I suppose it has something to do with some assumption about where to fit the data with respect to some reference date. But given the title of this thread it should be easy to stack them.

  34. Steve McIntyre
    Posted Oct 12, 2007 at 11:38 AM | Permalink

    #32. Willis, I’ll post up the components from the Fisher CD.

  35. Mark T.
    Posted Oct 12, 2007 at 1:29 PM | Permalink

    27, that’s a good explanation. Very roughly like CT scanning, where you’re able to improve the S/N ratio by having multiple angles on the same thing.

    The same thing is done in radar to smooth the fluctuation of radar cross section (i.e. how “big” the target looks) which most targets exhibit.

    Mark

  36. Steve McIntyre
    Posted Oct 12, 2007 at 4:45 PM | Permalink

    #33. Willis, I’ve posted up the Fisher versions of the components. Maybe they’re different from the archived versions. http://www.climateaudit.org/data/ice/fisher

  37. Willis Eschenbach
    Posted Oct 12, 2007 at 5:46 PM | Permalink

    Steve, thanks for posting up the data. It is quite different from the archived version. The main difference is that the Fisher version only contains 922 years of data, in contrast to the 1421 years of data listed in the DELNORM documentation I listed above. The documentation in the Fisher version says:

    ANNUAL DELTAS USING MASK FILE L74-12.CRT & DATA FILE CT74-12.CRT
    CRETE 1974 DELTA-18. 1ST YR IS 1973AD . LAMBDA=28.7 CM/YR ICE
    922 1 1

    and I have confirmed that the “922” is the actual years of data in the dataset.

    This, of course, means that the first year in the dataset is 1052 AD, while the DELNORM dataset claims that it covers the period from 1062 to 553 AD.

    I looked up the “MASK FILE” listed above, L74-12.CRT, and the “DATA FILE”, CT74-12.CRT, in the ice core documentation. They are listed as:

    L74-12.CRT
    CRETE 1974 CORE DELTA-18. ACCUMULATION . STARTS 1974-553 AD. LAMBDA=28.7 CM/ ICE EQUIV. BASED ON DETAILED DEL CURVE. CM/YR ICE. (CRETE74/12ACC) FIRST VALUE IS WINTER TO SUMMER 1975 (ROUGHLY 1/2 YEAR)

    and

    CT74-12 .CRT
    CRETE 1974 DELTA-18. 12/ YR . STARTS 1974-553 AD . LAMBDA=28.7 CM/YR ICE THE BIG ONE EH. %. 2.391666 CM AVES . (CRETE74/12) DRILLED IN SPRING OF 1975. APPROXIMATELY 12 SAMPLES PER YEAR, BUT VARIABLE. 2ND VALUE IS WINTER 74/75.

    Both of these go back to 553 AD.

    Stranger and stranger … all clarifications gladly accepted.

    w.

  38. Steve McIntyre
    Posted Oct 12, 2007 at 6:29 PM | Permalink

    Willis, I added CT74-1Y.CRT from the Fisher directory. Does that match?

  39. Willis Eschenbach
    Posted Oct 12, 2007 at 6:39 PM | Permalink

    Dave D., good call. For unknown reasons, there were gaps in the DELNORM dataset that Steve M. posted, that needed to be removed to bring it back into line.

    w.

  40. Steve McIntyre
    Posted Oct 12, 2007 at 6:43 PM | Permalink

    Willis, I got a 5 9s correlation for the first 510 years between DELNORM6 and CT74-1Y. I don’t think that there’s a problem. To read DELNORM6, I used the following script:

    #url="d:/climate/data/FISHER/core-interEOF"
    url="http://data.climateaudit.org/data/ice/fisher"
    loc=file.path(url,"DELNORM6.CWG")
    fred=readLines(loc)
    write.table(fred[4:148],"temp.dat",quote=FALSE,row.names=FALSE,col.names=FALSE)
    test=scan("temp.dat")
    delnorm6=ts(rev(test),start=553)

  41. steven mosher
    Posted Oct 12, 2007 at 7:59 PM | Permalink

    RE 36

    Do you want to go to jail?

  42. Larry
    Posted Oct 12, 2007 at 8:10 PM | Permalink

    42, I think it’s in wikipedia. That’s STAP, right? No big secret.

  43. Geoff Sherrington
    Posted Oct 13, 2007 at 6:52 AM | Permalink

    re #32 Willis

    My comment arises from exercises with drill core in rock, not from hands-on with ice cores, and from having once owned a lab where various instruments were used, including isotope counters of several types. That is the basis for my general statement that few routine analytical instruments will return accuracy better than 3 significant figures. Ratio this and the errors compound.

    On this blog there is a lot of discussion about “adjustments” to climate data. You will know of course that prior to that stage, there are often laboratory and other “adjustments” made to the data being adjusted for climate factors. There is no intrinsic reason why the many errors in climate records needing adjustment do not likewise happen in the pre-climate adjustments. Smoothing temp data with sensors of different response times comes to mind. When climatology studies address land albedo and need multi-element soil analysis, the ICP and XRF type instruments that could well be used have rather similar multivariate, sometimes non-linear, regression matrices applied before the data even reach the climatologists. Is this another whole field of investigation, or do we take it that analyses are free of subjective correctional inputs?

    There should be readers of this blog who can compare left to right side oxygen isotope results from ice cores. It’s a fundamental part of the quality control process, if done properly.
