Ammann’s April Fool’s Joke Part 2

About 2 weeks ago, I observed that the PR (Paleoclimate Reconstruction) Challengers, stung by various challenges to the non-transparency of their data and methods, had promised that, by April 2009, they would, among other things, make a website containing:

– Collection of reconstruction codes, documentation, and related data.

Interestingly, on April Fool’s Day itself, NOAA released a webpage, which I presume is supposed to be the April 2009 deliverable.

At first blush, it looks like something new.

However, many temperature reconstructions had already been archived. That’s how you make the spaghetti graphs. The PR issue has always been the availability of underlying data and reconstruction codes – the things that the PR Challengers promised to deliver by April 2009.

The “new” data is simply a collation of the reconstructions already listed on the NOAA webpage. The various text files here contain references to the prior WDCP page. For example, the new page ftp://ftp.ncdc.noaa.gov/pub/data/paleo/reconstructions/pcn/textfiles/biondi1999.txt cross-references ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/northamerica/usa/idaho/idaho-temperature.txt.

Bizarrely, NOAA itself had already catalogued temperature reconstructions, listed on its webpage here (predecessor page here).

So the deliverable is nothing except a collation of temperature reconstructions already listed on NOAA’s reconstruction webpage, together with some elementary metadata.

No proxy data. No source code. No documentation beyond the most trivial metadata.

For me, the collation has no utility. I already have read scripts for about 50% of the reconstructions, with annotations that serve as more helpful reminders of what’s in the data than the trivial metadata presented here.
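
For reference, a minimal read script looks something like the R sketch below. It assumes the archived file is a plain-text header followed by two whitespace-separated columns (year, then anomaly in standard deviation units); the year-matching pattern is just a convenience for skipping the header, not part of any NOAA file specification.

# Minimal sketch of a read script for one reconstruction file from the new page.
# Assumed layout: free-text header, then two columns (year, anomaly in SD units).
url  <- "ftp://ftp.ncdc.noaa.gov/pub/data/paleo/reconstructions/pcn/textfiles/biondi1999.txt"
raw  <- readLines(url)
rows <- grep("^\\s*[12][0-9]{3}\\s", raw)        # data lines start with a 4-digit year
biondi <- read.table(text = raw[rows], col.names = c("year", "anomaly"))
plot(biondi$year, biondi$anomaly, type = "l",
     xlab = "Year", ylab = "Anomaly (SD units)")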

I’m not sure who’s going to use this collation or what PR problem it’s supposed to solve. I wonder how much NOAA paid for this (without even considering staff time). I sure would have liked the opportunity to bid on the contract.

27 Comments

  1. curious
    Posted Apr 12, 2009 at 10:46 AM | Permalink

    The EU has public tender and FOI processes for checking out the details of government contracts – I guess there is a US equivalent?:

    http://www.businesslink.gov.uk/bdotg/action/detail?r.l1=1073861169&r.l3=1074033478&r.lc=en&r.t=RESOURCES&type=RESOURCES&itemId=1073792570&r.i=1073792571&r.l2=1073858827&r.s=sc

    (PDF) awareness_guidance_5_annexe_-_public_sector_contracts.pdf

  2. Bill Jamison
    Posted Apr 12, 2009 at 11:44 AM | Permalink

    Maybe that’s the way to finally get access to the data – bid on a contract to create/manage the website!

  3. Gerald Machnee
    Posted Apr 12, 2009 at 12:31 PM | Permalink

    We will have to send your name in. Then the work could actually get done. And you should get paid for your contributions. How much do these guys (or gals) get paid for submitting the same old spin in a different color???

  4. Peter D. Tillman
    Posted Apr 12, 2009 at 1:32 PM | Permalink

    I imagine you’ve already browsed through this page:
    http://www.ncdc.noaa.gov/paleo/data.html

    I did notice at http://www.ncdc.noaa.gov/paleo/indextree.html and http://hurricane.ncdc.noaa.gov/pls/paleo/contribseries.search
    archived data from Ababneh (no, not the one we want), Briffa, Hughes, and many more unfamiliar names.

    Also possibly pertinent is Esper et al.’s Morocco Millennial Palmer Drought Severity Index
    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/africa/morocco-pdsi2007.txt

    And http://www.ncdc.noaa.gov/paleo/pubs/trouet2009/trouet2009.html
    says it links to data from Trouet’s North Atlantic Oscillation Reconstruction.

    It may also be worthwhile to browse the Recent Contributions:
    http://hurricane.ncdc.noaa.gov/pls/paleo/contribseries.recent

    Apologies if these are old news.

    Best, Pete Tillman

    Steve: I am very familiar with this data, to say the least, and have downloaded various data sets from WDCP on many occasions. The Esper data set was uploaded while I was away for the weekend and I will note that in the relevant thread.

  5. Tim
    Posted Apr 12, 2009 at 1:36 PM | Permalink

    Who can use this? Why, people who can prove all your claims of data being unavailable are false, hello!

    • Posted Apr 12, 2009 at 10:31 PM | Permalink

      Re: Tim (#5),

      Baby steps. That’s all, just baby steps.

    • Larry Huldén
      Posted Apr 13, 2009 at 11:40 AM | Permalink

      Re: Tim (#5),
      Data and methods are still not available, only metadata and results. The latter were already presented in the publications, so we did not get anything new.

  6. Jared
    Posted Apr 12, 2009 at 1:58 PM | Permalink

    Is that data even real?

    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/reconstructions/pcn/textfiles/biondi1999.txt

    Year – Anomaly
    1136 – .108
    1138 – .108
    1152 – .108
    1180 – .108
    1187 – .108
    1199 – .108
    1201 – .108
    1213 – .108
    1217 – .108

    Is this data real? It was also .108 in 1992, 1979, 1977, 1953, 1952, etc. You get the point. How real is this data?

    • John A
      Posted Apr 12, 2009 at 4:54 PM | Permalink

      Re: Jared (#6),

      More importantly, the ring widths are measured to the nearest thousandth of a millimeter.

      • Pat Frank
        Posted Apr 12, 2009 at 8:36 PM | Permalink

        Re: John A (#7), False precision: the bread-and-butter of AGW climate science.

    • oakgeo
      Posted Apr 12, 2009 at 10:26 PM | Permalink

      Re: Jared (#6)

      In the text file there are 858 July anomaly measurements recorded at 0.001 precision spanning +2.869 to -2.930. Within that span are 5800 possible unique data measurements, yet there are only 43 unique values recorded. False precision indeed.
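
      A check along these lines takes a few lines of R (a sketch only, using the biondi series from the read script sketched in the post above):

      # Distinct values actually archived versus the number the stated
      # precision and range would allow.
      anom <- biondi$anomaly            # from the read sketch in the post above
      length(anom)                      # observations (858 reported above)
      length(unique(anom))              # distinct values present (43 reported above)
      diff(range(anom)) / 0.001 + 1     # values possible at 0.001 resolution (about 5800)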

      • RomanM
        Posted Apr 13, 2009 at 9:49 AM | Permalink

        Re: oakgeo (#9),

        Not only are there relatively few values, but the spacing between those limited values, when ordered, is (with one rounding exception) exactly the same. What they calculated was a set of discrete values (for all practical purposes, integers), which were then scaled to mean zero and sd one.

        What reconstruction method produces values of that sort?
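
        The spacing observation can be checked the same way (again just a sketch, with anom as read above):

        # If the inputs were discrete before standardization, the ordered
        # distinct values should sit on a near-constant grid.
        u <- sort(unique(anom))
        summary(diff(u))                # gaps are (multiples of) one small constant step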

        • Posted Apr 13, 2009 at 2:11 PM | Permalink

          Re: RomanM (#13),

          Yep, add 0.03 to the series and multiply the result by 7.24, for example. Some rounding somewhere, maybe. In the paper, a prediction error of 0.97 C is mentioned; the regression model is not specified. But I’m a bit afraid it’s done the wrong way 😉
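
          A sketch of UC’s back-transformation, where 0.03 and 7.24 are his trial constants rather than documented values:

          # Rescale the archived SD units; if the rounding guess is right, the
          # result should land close to a regular grid with spacing of about 1.
          back <- (anom + 0.03) * 7.24
          head(sort(unique(round(back, 3))))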

        • RomanM
          Posted Apr 13, 2009 at 2:59 PM | Permalink

          Re: UC (#15),

          Given their description (or lack thereof), my guess would be that they calculated temperature or anomaly estimates, rounded them all to one decimal place and THEN calculated the mean and standard deviation on the rounded data and converted to SDUs. It checks out in R exactly that way.

          Not the best way of doing things…
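
          A toy version of that sequence, with entirely made-up numbers, just to show the round-then-standardize signature:

          # Round first, standardize second: the output is a few dozen evenly
          # spaced values in SD units, much like the archived series.
          set.seed(1)
          est     <- rnorm(858, mean = 0, sd = 0.7)   # stand-in reconstruction estimates
          rounded <- round(est, 1)                    # one decimal place, per RomanM's guess
          sdu     <- (rounded - mean(rounded)) / sd(rounded)
          length(unique(sdu))                         # a few dozen distinct values
          range(diff(sort(unique(sdu))))              # gaps are multiples of one constant step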

        • Willis Eschenbach
          Posted Apr 14, 2009 at 12:03 AM | Permalink

          Re: RomanM (#16), and UC (#15), well, maybe that explains the lack of unique values … but what explains the lack of anything with a standard deviation greater than 3 in either direction?

        • Posted Apr 14, 2009 at 3:56 AM | Permalink

          Re: Willis Eschenbach (#18),

          but what explains the lack of anything with a standard deviation greater than 3 in either direction?

          Do we need to have an explanation for that ?

        • mondo
          Posted Apr 14, 2009 at 1:35 PM | Permalink

          Re: UC (#19),

          Is that a serious question? Consideration of elementary stats and definitions of Standard Deviation will get you there if it is.

        • Posted Apr 15, 2009 at 1:12 PM | Permalink

          Re: mondo (#22),

          Yes, serious question.

          (normcdf(3,0,1)-normcdf(-3,0,1))^858

          ans =

          0.0983170715220416

          Quite rare, OK, but after standardizing a heavily correlated series the number might be 0.2 or something like that, right?
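
          UC’s caveat about correlation can be illustrated with a small simulation (a sketch only; the AR(1) coefficient of 0.9 is an arbitrary choice, not an estimate for this series):

          # For an autocorrelated series standardized to SD units, the chance that
          # none of 858 values exceeds 3 SD comes out noticeably above the
          # white-noise figure of about 0.098.
          set.seed(42)
          none_over_3 <- replicate(2000, {
            x <- arima.sim(list(ar = 0.9), n = 858)   # strongly autocorrelated Gaussian series
            z <- (x - mean(x)) / sd(x)                # standardize to SD units
            all(abs(z) < 3)
          })
          mean(none_over_3)                           # exact figure depends on the assumed correlation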

        • mondo
          Posted Apr 15, 2009 at 2:39 PM | Permalink

          Re: UC (#23),

          Whoa UC. You’re out of my league already.

          My only point was that the definition of standard deviation says that plus or minus 1 SD contains roughly 68% of the values, 2 SD contains 95% of the values and 3 SD contains 99% of the values.

          What I understood Willis to be saying was that there are 858 data points, therefore 1% of them, or roughly 9, should be outside 3 SD of the mean, whereas he identified that no points were, thus throwing into doubt the accuracy of the calculations.

          But maybe I have it wrong……….. Apologies to you in advance if I have.

        • Mark T
          Posted Apr 16, 2009 at 12:42 AM | Permalink

          Re: mondo (#24), What UC just showed you is that the likelihood of no 3 sigma crossings out of 858 normally distributed values is almost 10%. I wouldn’t have even called that quite rare. 3 sigma is 99.7%, btw, not 99%, which makes a huge difference in the calculation.

          He did apply elementary stats, too, by simply calculating the area under a Gaussian curve between +/- 3 sigma, then raising it to the 858th power. The probability of any one point falling within 3 sigma of the mean is that area, or 0.997…; assuming the points are independent, the probability of all of them falling within 3 sigma is therefore 0.997^858.

          The 3 sigma probability, btw, holds for normal distributions and could be quite different for another distribution.

          Re: Willis Eschenbach (#25), Maybe, maybe not. Without some other information, you really can’t make that assumption. The rounding mechanism could easily explain this, btw, since some otherwise 3 sigma values may have been rounded down. Dunno…

          Mark
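
          For anyone following along in R rather than Matlab, the same number drops out of pnorm:

          # P(all 858 independent N(0,1) values stay within +/- 3 SD)
          (pnorm(3) - pnorm(-3))^858   # about 0.0983, matching UC's normcdf result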

        • Willis Eschenbach
          Posted Apr 15, 2009 at 11:16 PM | Permalink

          Re: UC (#23), I just found it unusual that with N=858 there would be nothing with an SD greater than 3. Not impossible, just unlikely. To me it indicated the possibility that the results had been smoothed or changed in some unknown fashion …

          My best to all,

          w.

        • Curt
          Posted Apr 16, 2009 at 3:07 PM | Permalink

          Re: Willis Eschenbach (#25), Who says the distribution has to be Gaussian, or anywhere near Gaussian?

          I just checked out the distribution of a series of throws for the sum of two dice, which has a triangular distribution. The mean is 7 and the standard deviation is 2.45. No points are more than 2.02 SDs away from the mean.

          Lots of Wall Street types have gotten in trouble lately assuming Gaussian distributions of events that are nothing like Gaussian.
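
          Curt’s example can also be worked out exactly rather than from a series of throws (a short sketch):

          # Exact distribution of the sum of two fair dice: even the most extreme
          # outcome (2 or 12) is only about 2.07 SDs from the mean, so a 3-SD
          # exceedance is impossible no matter how many throws are made.
          p     <- table(outer(1:6, 1:6, "+")) / 36   # probabilities of sums 2..12
          vals  <- as.numeric(names(p))
          mu    <- sum(vals * p)                      # 7
          sigma <- sqrt(sum((vals - mu)^2 * p))       # about 2.42
          max(abs(vals - mu)) / sigma                 # about 2.07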

    • Willis Eschenbach
      Posted Apr 12, 2009 at 11:59 PM | Permalink

      Re: Jared (#6), that particular data is undoubtedly strange. It is listed as being in “standard deviation units”, but out of the 858 years, not one of them is more than three standard deviations from the mean. People truly don’t look at their data; the Mark One Eyeball Test doesn’t seem to happen … but we digress.

      Steve, I see nothing but reconstruction results on the page. I hope it’s not the promised page. They said they would provide data and code. What they have provided is metadata and results. Metadata & results != data & code.

      Be interesting to see how they play this one. Probably ignore it, claim they’ve done it, and move on. On the other hand, if they were smart, they’d notice that they have until the end of April to make good on their word.

      Then they should post a notice on the web page saying “data and code on the way, it’s still April, we’re working on it, don’t bug us, it’s the beta page”. If they wanted extra bonus points they could acknowledge CA for noticing what they hadn’t noticed. Then pressure Esper the Archiver and Thompson of the Frozen Data and all the rest to cough up.

      Then, before April 30 they should put whatever they have collected of the data out there, plus whatever code they have, and say “Ta da!”. Get at least part of the issue off the table.

      Right.

  7. Posted Apr 13, 2009 at 4:23 PM | Permalink

    Now wait! Have the proper authorities been informed of this data+code problem? Remember the last time the Climate Audit community found a data problem? It was Super Bowl weekend.

    Gavin was outraged that Steve M. had not informed the proper authorities in the proper manner within a proper time interval.

    We don’t want to outrage Gavin again.

  8. PaulM
    Posted Apr 14, 2009 at 7:47 AM | Permalink

    Some of that data was already available at Tim Osborn’s site
    http://www.cru.uea.ac.uk/~timo/datapages/ipccar4.htm
    (it was probably Steve who pointed this out some time ago).
    Looking at the data, it is interesting to see that many of the reconstructions that go into the 1990s have negative anomalies there.
    briffa2001a.txt:
    1992 -1.92
    1991 -0.25
    1990 -0.53
    briffa2001d.txt:
    1994 -0.39
    1993 -0.32
    1992 -1.92
    1991 -0.41
    1990 -0.61

  9. Howard S.
    Posted Apr 14, 2009 at 9:10 AM | Permalink

    This is what you get when the new head of NOAA is a person who takes liberty with the truth.

    http://oregonstate.edu/admissions/blog/2008/12/18/osu-professor-to-head-noaa/

    President-elect Barack Obama has tapped Oregon State University professor Jane Lubchenco, one of the nation’s most prominent marine biologists, to head the National Oceanic and Atmospheric Administration.

    Who is Jane?
    http://www.sciencedaily.com/releases/2008/02/080214144547.htm

    A review of all available ocean data records concludes that the low-oxygen events which have plagued the Pacific Northwest coast since 2002 are unprecedented in the five decades prior to that, and may well be linked to the stronger, persistent winds that are expected to occur with global warming.
    In a new study in the journal Science, researchers from Oregon State University outline a “potential for rapid reorganization” in basic marine ecosystems and the climatic forces that drive them, and suggest that these low-oxygen, or “hypoxic” events are now more likely to be the rule rather than the exception.

    “In this part of the marine environment, we may have crossed a tipping point,” said Jane Lubchenco, the Wayne and Gladys Valley Professor of Marine Biology at OSU, and the lead scientist for PISCO, the Partnership for Interdisciplinary Studies of Coastal Oceans.