On Oct 20, 2008, I sent Santer the following request:
Dear Dr Santer,
Could you please provide me either with the monthly model data (49 series) used for statistical analysis in Santer et al 2008 or a link to a URL. I understand that your version has been collated from PCMDI ; my interest is in a file of the data as you used it (I presume that the monthly data used for statistics is about 1-2 MB). Thank you for your attention, Steve McIntyre
I received an automated response saying that Santer was at a workshop (Chapman Water Vapor) and would be returning on Oct 30. On Nov 7, 2008, not having received a reply, I sent Santer a reminder:
Could you please reply to the request below, Regards, Steve McIntyre
I once again received an automated reply, this time that he was at a meeting of the Hadley Centre Science Review Group in Exeter and that he would be back in his office on Nov. 10th.
I received the following discourteous reply today:
Dear Mr. McIntyre,
I gather that your intent is to “audit” the findings of our recently-published paper in the International Journal of Climatology (IJoC). You are of course free to do so. I note that both the gridded model and observational datasets used in our IJoC paper are freely available to researchers. You should have no problem in accessing exactly the same model and observational datasets that we employed. You will need to do a little work in order to calculate synthetic Microwave Sounding Unit (MSU) temperatures from climate model atmospheric temperature information. This should not pose any difficulties for you. Algorithms for calculating synthetic MSU temperatures have been published by ourselves and others in the peer-reviewed literature. You will also need to calculate spatially-averaged temperature changes from the gridded model and observational data. Again, that should not be too taxing.
In summary, you have access to all the raw information that you require in order to determine whether the conclusions reached in our IJoC paper are sound or unsound. I see no reason why I should do your work for you, and provide you with derived quantities (zonal means, synthetic MSU temperatures, etc.) which you can easily compute yourself.
I am copying this email to all co-authors of the 2008 Santer et al. IJoC paper, as well as to Professor Glenn McGregor at IJoC.
I gather that you have appointed yourself as an independent arbiter of the appropriate use of statistical tools in climate research. Rather than “auditing” our paper, you should be directing your attention to the 2007 IJoC paper published by David Douglass et al., which contains an egregious statistical error.
Please do not communicate with me in the future.
Ben Santer
I sent the following FOI request today to NOAA:
National Oceanic and Atmospheric Administration
Public Reference Facility (OFA56)
Attn: NOAA FOIA Officer
1315 East West Highway (SSMC3)
Room 10730
Silver Spring, Maryland 20910

Re: Freedom of Information Act Request
Dear NOAA FOIA Officer:
This is a request under the Freedom of Information Act.

Santer et al, Consistency of modelled and observed temperature trends in the tropical troposphere, (Int J Climatology, 2008), of which NOAA employees J. R. Lanzante, S. Solomon, M. Free and T. R. Karl were co-authors, reported on a statistical analysis of the output of 47 runs of climate models that had been collated into monthly time series by Benjamin Santer and associates.
I request that a copy of the following NOAA records be provided to me: (1) any monthly time series of output from any of the 47 climate models sent by Santer and/or other coauthors of Santer et al 2008 to NOAA employees between 2006 and October 2008; (2) any correspondence concerning these monthly time series between Santer and/or other coauthors of Santer et al 2008 and NOAA employees between 2006 and October 2008.
The primary sources for NOAA records are J. R. Lanzante, S. Solomon, M. Free and T. R. Karl.
In order to help to determine my status for purposes of determining the applicability of any fees, you should know that I have 5 peer-reviewed publications on paleoclimate; that I was a reviewer for WG1; and that I made an invited presentation in 2006 to the National Research Council Panel on Surface Temperature Reconstructions and two presentations to the Oversight and Investigations Subcommittee of the House Energy and Commerce Committee.
In addition, a previous FOI request was discussed by the NOAA Science Advisory Board’s Data Archiving and Access Requirements Working Group (DAARWG). http://www.joss.ucar.edu/daarwg/may07/presentations/KarL_DAARWG_NOAAArchivepolify-v0514.pdf.
I believe a fee waiver is appropriate since the purpose of the request is academic research, the information exists in digital format and the information should be easily located by the primary sources.
I also include a telephone number (___) at which I can be contacted between 9 am and 7 pm Eastern Daylight Time, if necessary, to discuss any aspect of my request.
Thank you for your consideration of this request.
I ask that the FOI request be processed promptly as NOAA failed to send me a response to the FOI request referred to above, for which Dr Karl apologized as follows: “due to a miscommunication between our office and our headquarters, the response was not submitted to you. I deeply apologize for this oversight, and we have taken measures to ensure this does not happen in the future.”
Stephen McIntyre
I sent the following letter to the editor of the journal:
Dear Dr McGregor,
I am writing to you in your capacity as editor of International Journal of Climatology.
Recently Santer et al published a statistical analysis of monthly time series that they had collated from 47 climate models. I recently requested a copy of this data from Dr Santer and received the attached discourteous refusal.
I was unable to locate any information on the data policies of your journal and would appreciate a copy of such policies.
The form of my request was well within the scope of the data policies of most academic journals and I presume that this is also the case in respect to the policies of your journal. If this is the case, I would also appreciate it if you required the authors to provide the requested collation in the form used for their statistical analysis. While the authors argue that the monthly series could be collated from PCMDI data, my interest lies with the statistical properties of the time series, rather than with the collation of the data.
Regards, Steve McIntyre
251 Comments
It seems so petty of these folks to patronize you instead of just sending the damned data. How hard can it be to just get it over with?
Simply amazing. Is there not a climate scientist out there with a spine to stand up to these people?
Scenario 1: Papers are critiqued by doing a different analysis and claim it is better.
Alarmist Response: Insist that the different analysis is not correct for some reason.
Scenario 2: Exactly replicate the analysis and demonstrate why it is flawed.
Alarmist Response: Refuse to provide the information required to replicate the analysis.
One can get very dizzy trying to keep up….
So much for openness in research. All you asked for was either a copy of a file small enough to be emailed or a location where you could download it yourself. I guess Santer is not having a good holiday season.
I know I’m not allowed to vent, but &*(%@(##*!
I think there is another scenario worth considering:
Santer may be perfectly well aware of the criticisms already made by Steve on this site
Santer is annoyed that you “appointed” yourself a climate auditor. He has forgotten that anyone in the world is allowed to comment on or check the work of any scientist. And I mean anyone–I have never had a journal check if I have a Ph.D. nor is it required. It is not a closed club, as much as these people would like it to be.
Re: Craig Loehle (#7):”It is not a closed club, as much as these people would like it to be.”
I believe you have put your finger on it: Climatology meets one of the sociological definitions of Ghetto – an insular ethnic enclave which is resistant, and at times hostile, to outside ideas or influence.
Re: Brian (#19):”Considering the tone and maturity of this comment thread … I’m neither defending nor criticizing anybody for any specific actions/inactions … just an observation … ”
Of course you’re not; that would require some, um, specific examples, and the only ones I’ve seen are in Santer’s reply. Steve’s characterization of it as discourteous was charitable; juvenilistically petulant would have been closer to the mark.
———————
The Editor of a major refereed journal doesn’t know the data disclosure policies of his journal? That inspires no end of confidence.
———————-
Re: Eric (#77): “. . . properly documenting my procedures.” Isn’t that exactly what Steve’s asking for?
There’s a certain irony here – Santer et al 2008 was itself an “audit” of Douglass et al 2007. Did any third parties “appoint” Santer to carry out this “audit”?
And for the record, I haven’t granted myself any grandiose titles or appointments.
Re: Steve McIntyre (#8), It is further ironic that the long long discussions of both the original Douglass paper and the new Santer paper covered many points of view stats-wise and showed many analyses and simulations. I would not even say that it was agreed on CA that Douglass et al did it right.
You seem to have the same reputation in the Climate Science world as an “Internal Affairs” division has in a police department!
I also think the term ‘chutzpah’ applies here.
There is so much irony in climate science the corrosion will eventually cause a fire and melt it!!
Sorry Rosie, you are sooo wrong also.
Does your date range “between 2006 and Oct 2008” allow wriggle room to exclude any details from 2006? Is the word “inclusive” automatically applied?
Naive newbie question: Is this sentence true?
Or is there a thread I fail to see?
Re: Urederra (#13),
Yes, all the data are already available, so I don’t see where the problem is: you just have to download the files and compute some averages etc. Nothing too difficult.
Some comments on this:
http://sciencepolicy.colorado.edu/prometheus/im-a-mac-and-im-a-pc-climate-science-edition-4711
It would be interesting to know whether Santer requested any data from Douglass or his co-authors, when he set about re-appraising the statistical work of 2007 Douglass, et al.
So how did you wind up on Santer’s “naughty list” this year?
I’ll bet you won’t even get a lump of coal in your stocking, either…
Re: henry (#16), Not even coal, it emits greenhouse gases….
Looks like “egregious” is the new favourite team adjective. All our methods are rigorous and conservative, all your errors are egregious.
Considering the tone and maturity of this comment thread … is it that surprising the researcher’s response was less-than-enthusiastic? I’m neither defending nor criticizing anybody for any specific actions/inactions … just an observation … carry on.
Re: Brian (#19),
Posts like this one never cease to provide a chuckle. There was never a doubt that scientists can be every bit as petty, ill tempered and emotional as the rest of the population – and even when responding to a legitimate and business-like request like the one Steve M presented here. What amuses me is that anyone would bother to make the excuse of provocation for someone acting the part of a complete as—– whether they be a scientist or among the unwashed masses. That is usually an instinct that needs no provocation.
Re: Kenneth Fritsch (#21),
On the other hand, no-one should expect the condemned to fix the guillotine that will execute him. I applaud Steve’s auditing efforts and deplore the Mannian and Hansenian contortions, but it is not surprising to me that Santer does not want to help his nemesis.
Re: Pat Keating (#44),
Steve is only a nemesis if Santer’s goal isn’t good science.
Remember the “PR Challenge”:
#22. I drew no such conclusion. However, I replicated Santer’s Table III and showed that this calculation used the standard error of the mean of the models, notwithstanding various fulminations by beaker and Gavin Schmidt against this procedure. I also was able to show that Santer’s Table III test fails for the H2 hypothesis when 2006, 2007 or 2008 data is included – a result that Santer either omitted to calculate or failed to report.
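For readers trying to follow the statistics, the quantity at issue has roughly the following form (a sketch in my own notation, not necessarily Santer’s exact equations):

$$
d^{*} \;=\; \frac{\hat{b}_{o} - \langle \hat{b}_{m} \rangle}{\sqrt{\; \dfrac{s_{b_m}^{2}}{M} \;+\; s_{\hat{b}_o}^{2} \;}}
$$

where $\hat{b}_{o}$ is the observed trend with standard error $s_{\hat{b}_o}$, $\langle \hat{b}_{m} \rangle$ is the multi-model mean trend, $s_{b_m}$ is the inter-model standard deviation of the trends, and $M$ is the number of models; the first term under the root is the “standard error of the mean of the models” referred to above.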
The IJC editor replied as follows:
The publishers are the Royal Meteorological Society.
Re: Steve McIntyre (#24),
I will be surprised if Steve gets a satisfactory reply from the publishers. May get part of a reply.
Re NOAA – Someone is working a late shift to see how to get around it this time.
The PR Circus – We will hear very little. If we do it will be a self serving analysis showing how correct the Team has been.
There was once a commenter here who said that Steve would disappear in a year or so.
Anyway, all this shows that Steve is having an effect – some are improving their work and some are trying to hide it better.
Re: Gerald Machnee (#26),
And I’ll wager that the effect is FAR larger than most folks think. I’ll even bet that most climate scientists read this blog faithfully.
Re: Steve McIntyre (#24),
Steve M, the reply from the IJC editor was posted on Nov 10, 2008 and indicated a response would be submitted in a few days. I searched for a response on this thread but could not find one (the pink background for the Steve M replies was helpful). It has been 12 days and I was wondering about the status of the reply.
While Santer’s reply reminded many of us that scientists too can become so POed as to lose control, my interests are more inclined toward the measured responses that come back after a cooling off period. They can be both informative and entertaining.
I am so bewildered as to how Santer can sleep at night. To be so vain, vile and very anti-scientific in his reply. I don’t lose much sleep over this, because it drives me to drinking and I sleep just fine.
According to Roger, Santer is “climate scientist who is employed at Lawrence Livermore National Laboratory, a U.S. government lab. Santer thus works for the public.” It seems to me this guy thinks this is a game, and he can pick up his stuff and go home now. He certainly doesn’t treat his work with any seriousness. If he did, he would be throwing his data and calcs in Steve’s face and saying, ‘I told ya so!!’.
Is this the policy of the entire LLNL? If not, why not? Is Santer the head of his department? Could you also write to the head of the lab and the grant/funding administrator perhaps?
What a sad day for all of us.
#19: That’s your excuse? That the mere anticipation of criticism for concealing one’s data fully justifies concealing one’s data? Hmmm, let’s apply that excuse more broadly. Your honor, I knew the busybody police officer was going to hassle me for driving drunk, so it’s no wonder I got plastered before driving home.
This thread is a response to Santer’s decision, not vice versa. In any case, it matters not a whit whether the comments on a blog seem a little peremptory to Santer. Does he want people to accept his results or not? If yes, he should release his data, period. If he wants to keep his data to himself he should keep his results to himself too.
Sigh. I think these guys are their own worst enemies. By reacting with a knee jerk instead of taking a moment to engage their brains, they come out of this looking pretty poorly. With even a little bit of forethought, Santer would have known that this would be very public. If he put on a huge false smile and said “Here kid, knock yourself out”, at worst he would end up in an indirect discussion about statistical arcana that probably would not have gone further than the blogosphere. As it stands, he gives the appearance of withholding data and scientific malfeasance.
Smart money says NOAA decides to hand over the data in short order, as there is little upside to withholding it and large political drawbacks. When the next congressional review takes place, if called to give testimony in front of a congressman looking for an “AGW conspiracy”, I would rather put them to sleep with statistics theory than answer questions about why I denied data to someone who was questioning my work.
I’d be willing to bet that 99.9% of the public has no idea, nor cares, about these issues (egregious, IMO). Terms like “very public,” it seems, really are code words for “CA audience.” If, just once, someone like NBC (yeah right), ABC, CBS, CNN, the AP, or Reuters would actually run a story on something like this, then we might see some backlash. Till then, we are alone.
Mark
It might be an idea for someone to actually do what Santer says. Get the data (up to date), analyse it and publish the analysis. The trouble is, people are arguing over the question “Who got their methods wrong, Santer or Douglass?” when the far more important question is “How well do the models reflect reality?”. Regardless of whether Santer et al is right or wrong, by truncating the data in 1999 it is now completely out of date. Someone, please, do an up-to-date analysis that tries to avoid the pitfalls of the earlier papers.
Steve: See http://www.climateaudit.org/?p=4180 for a replication of Santer’s analysis using up to date information – use of data up to 2006, 2007 or 2008 reverses Santer’s H2 conclusion as presented in his Table III. Although Santer reported the effect of updated information on his H1 hypothesis, he either neglected to carry out or omitted reporting the results of updated information on the H2 hypothesis. Under the circumstances, one feels that slightly more circumspect language from Santer would be well advised.
It’s real simple: if you publish your results then you must provide all your data and computer code upon request, including intermediate results. This goes double if you are a public employee. If you can’t accept that responsibility then don’t publish. Surely Santer understands this?
I would almost hope that an employee of LLNL doing non-classified work of this sort would be required to provide a complete copy of all computer media upon request. I wonder what kind of FOIA response such a request would generate. If their computer use policy is anything like those of other government agencies I’ve seen, employees have absolutely no expectation of privacy, even if a few things that probably should be private sneak in.
Re: Soronel Haetir (#33), Sorry, but I wrote dozens of papers at Savannah River (where I only needed to get the content cleared for security) and Argonne lab (where no one ever asked to see my work or archive my code). Obviously I was not working on nuclear or secret stuff. There is no archiving requirement at the labs.
I anticipate this response will be right up there with Santer’s and Brian’s on the chuckle-o-meter. I suspect Santer knows he is on safe grounds in his giving Steve M the scientific equivalent of the finger.
Well, Santer is afraid of his work being scrutinized because he knows it’s barely good enough for government work.
Why else hide your work Santer?
Steve- Possible typo. I presume your original request was sent to Santer in 2008 (not 2007). Thanks. best regards, -chris
Steve – fixed.
Santer is employed by Lawrence Livermore (not NOAA) and, as far as I know, Livermore is not subject to FOI. Four of his coauthors are employed by NOAA which is subject to FOI. It’s quite possible that none of his coauthors ever saw Santer’s data or discussed it with him – this too would be interesting.
Update – This impression may not be right. It looks like Livermore is a DOE agency and may well be subject to FOI.
I think Santer’s paper is fatally flawed, due to truncation, and I think he knows it. Hence, the anger. Same old game that we’ve seen over and over here. If the most recent data doesn’t fit the “hypothesis,” just ignore it. What a science travesty we are witnessing on this blog!
Steve, Craig,
Okay, I thought the various labs like LL, Sandia, INEEL etc were subject to archiving and FOIA. Sad to find out I was mistaken.
Re: Soronel Haetir (#40),
More travesty, since I’m paying for all of this. They are run by “private corporations,” which are 100% funded by public money, so they can get away with this crapola. I would like to mention the political implications of this, but it’s a sure snip. (probably a snip, anyway).
So Steve. That’s the Santer and the clause……. Eh? 😉
Quite a few terms come to mind to describe someone who closes an email like this:
These do not include the terms “great” or “scientist.”
He certainly has a very high opinion of himself and his work.
It must be awfully frustrating; in the private sector this kind of bloodhounding would be putting people in jail; here, in the public sector, the dog gets beaten.
================================================
There is an interesting archive of e-mails building here showing that some in the climate sciences are either unsure that their results are accurate, or aware that they aren’t accurate. In this instance Santer’s suppressed anger is apparent throughout the e-mail; he is clearly gritting his teeth as he writes it and would much prefer to tell Steve to F-off, but in its place uses what can only be described as an attempt at heavy sarcasm. He clearly feels the “audit” will show up defects in the code and methodology, else he would simply say:
“Here are the data, here is the code, here are the methodologies, go do your worst.”
Instead he appears to have finished by metaphorically sticking his fingers in his ears and going “La la lala la.”
“Please do not communicate with me in the future.” Mmnn.
This is really funny. It also becomes funnier every year the temperature refuses to obey the IPCC climate model predictions.
And every year this continues further discredits the Santer paper, not merely its methodology but more importantly its conclusion.
For your next official information request, you should ask for the Santer data to be updated to all available latest measurements since you wish to publish an updated analysis and version.
No doubt that was why he doesn’t want to talk to you anymore!
According to newspapers here, under the Freedom of Information Act the US Army had to reveal to the BBC its records of a nuclear bomb lost into Greenland’s glaciers in the 1960s after a B-52 crash. Why didn’t the Army just respond to the BBC: “Go and look at the glaciers yourself, the glaciers are there, publicly available”?
It’s obvious that this isn’t merely a coincidence. If I wanted to, what would I need to do in order to statistically prove that such a coincidence is impossible?
From Santer’s reply:
If he’s so concerned about the “egregious statistical error” in the Douglass paper, then why is there no concern about statistical errors in other Team papers?
Re: henry (#52),
That Santer quote overlooks the most egregious statistical error of all which is the presumption that an arbitrary collection of model runs can tell you the confidence limits of the model comprising the average of those runs.
Just absurd.
I read with interest that it was Santer who was accused of having “adjusted the wording” of IPCC SAR to make it more alarmist.
Link
Hoisted by his own petard in the end….
With the passing of Michael Crichton, who besides being a famous author was a Harvard medical graduate and academic, I suggest those who genuinely believe in the scientific method read his Cal Tech lecture from 2003. It expresses most chillingly the sorry mess that science has become, corrupted as it is by politics and advocacy groups:
http://www.michaelcrichton.net/speech-alienscauseglobalwarming.html
Having been in litigation, one word that will be latched onto here is “sent”. If someone makes data available on a shared drive, FTP site, etc., then it could be argued that this does not fall within the strict meaning of “sent”: A made it available and B then took it; A did not “send” it to B.
I would suggest that “sent” be replaced with the phrase “sent or otherwise made available by any and all mechanisms including but not limited to email, FTP, shared drives …”
Phil Jones copied me on the following reply to Santer’s email:
Re: Steve McIntyre (#59), This is disingenuous and they know it. The raw data is not sufficient to reproduce their work. It is only sufficient to start from scratch and do your own analysis, which is what Phil is suggesting. No concept here on their part that REPLICATION is a crucial and accepted part of the scientific process.
Replication is starting from scratch and doing your own analysis.
It should be on your own data.
When you are forced by law to audit, you pay for it and hate it. When someone within the company (or government) finds and publicizes malfeasance, it is called whistleblowing, and we know what happens to them. Steve is like a whistleblower, but they can’t fire him. How annoying that must be!
To use very improper English, [snip]! I see nothing snarky in Steve’s request and don’t understand the combative tone of Santer’s response. If the information is so readily available as he claims, then just provide the URL for the location and poof, it’s done. As for Phil Jones, he is just touchy because his work doesn’t hold up when subjected to scientific scrutiny.
Methinks they do protest too much!
I don’t get it. Of what value to Steve is Santer’s processed data? If you want to replicate the work and verify the result you need to start with the monthly-mean model output and replicate the analysis. That’s exactly what Santer suggested. Go to the PCMDI site, download the model runs and do the analysis. If Santer were to give Steve all of the processed data and/or his codes, Steve is guaranteed to simply get the same result. Then what has he learned or verified – that he can generate the same plot from the same processed data? I do not see the value in that. Santer’s response was obviously rude and inappropriate in tone, but the essence of his suggestion is perfectly appropriate. Get the data and try to replicate the analysis. Only if there is insufficient documentation in the paper to do so, or an attempt to replicate the analysis yields a different result, is there something for Santer et al. to answer for.
Steve: As I said in my request, my interest is in the statistical analysis of the 47 time series, assuming that Santer has collated the information correctly. That’s what Santer et al 2008 is about. I’ve already determined that his Table III doesn’t hold up for his H2 hypothesis; now I want to look at his H1 hypothesis. The collation is irrelevant to that exercise. Re-collating the data contains a risk of introducing an irrelevant discrepancy.
Re: Eric (#64),
Before you can decide if something was done correctly, you need to determine exactly what the authors did. Having the intermediate results can greatly reduce the work and can help find deviations from the descriptions given. If you have the raw data and the final results, it can be very time consuming and sometimes virtually impossible to decode the intervening steps. Look at the Mann 2008 reconstruction. Try to go from the raw proxies to the final reconstructions. Follow the paper and the SI. Oops, some steps were described incompletely, some steps weren’t followed as described and some were simply omitted. However, with the intermediate results, you have signposts which can help you find those places where such inconsistencies occurred. Someone who is confident that their scientific work is properly done would not object to such scrutiny by creating impediments through their lack of cooperation. After all, if a team of 17 authors can’t get it right, who can? 😉
Re: Eric (#64),
Eric, I, as a layperson, did some of my own statistical analyses of the RSS and UAH tropical troposphere temperature data up to the present and found that the trend differences between the GISS land and ocean in the tropics and the troposphere were negative (troposphere – surface) for the UAH data and showed no trend with the RSS data. I could only compare those results to those reported by Santer et al. (2008) for the climate models up to 1999, where the difference in surface to troposphere trends was positive. It would be great to obtain the latter data from Santer so that the model comparison could be brought up to date.
Why would any scientists be so mean-spirited that they would not provide that data for an easy comparison? Or why would not that data already be available? Science is supposed to be about seeking the truth and not playing games as Santer and now Jones are doing.
Hey Phil if you announce the world is going to end unless everyone does as you want them to you are bound to attract a few people who’ll ask you how you came to that conclusion. If you produce the entrails of a bird they’re going to ask you what about the entrails you see as giving rise to the forecast. Some of them might even ask you how you got the bird and what the juxtaposition of the entrails meant to you and why. If you respond by saying, “The entrails are there, see for yourself,” you might just lose a bit of credibility. The entrails aren’t the issue here, it’s how you interpreted them to come to the conclusions you have. Same with Santer and every other scientist who makes predictions.
Artifex is right – “..Sigh. I think these guys are their own worst enemies. By reacting with a knee jerk instead of taking a moment to engage their brains, they come out of this looking pretty poorly…”
But they’re also brave/foolhardy. Courageous in the Sir Humphrey sense. I would only have the guts to respond this way to an auditor if I was DEAD sure that my work contained NO errors. If there is ANY kind of issue with Santer’s work, this response will be blown up out of all recognition.
Why does he not play the usual team game of agreeing politely and then referring Steve to a web site where only half the required information is stored? Is this a clever double-take, and has he already secretly submitted his work to three independent sets of auditors…?
I think there is a practical issue here, which is that we are all really busy. I don’t think it was very tactful of Santer to say “I shouldn’t have to do your work for you”, but there is a kernel of truth in it. He is a busy guy. As long as the unprocessed data is available and there is sufficient documentation for an independent person or group to replicate the work, then his responsibility as a scientist is met. Certainly he should have found a more cordial way to say it. As I noted: if there is insufficient documentation for independent verification, then Santer et al. and the IJoC have something to answer for. I don’t know if that is the case here, as perhaps it was with the Mann et al. paper.
Re: Eric (#68),
Hogwash! The “work” is already done. All he has been asked to do is send it. This is a case of pure-and-simple obstructionism based on spite. It only demonstrates that we are not dealing with a class act here.
In my view, this refusal is grounds to ignore any findings by this scientist.
Re: Mike C (#71),
This is completely ridiculous. If you uncover a flaw in the analysis by replicating the study and arriving at a different result, then you should not ignore the findings, but rather submit a comment to IJoC for peer review and thereby correct the scientific record. That is how science is done. As far as I can see, nobody is preventing you from doing so. That said, no scientist is under any obligation to help you refute their work beyond providing a full documentation of the methods and providing any data that cannot reasonably be collected by an independent experiment.
I see this as being rude, but not necessarily playing games. Piecing together all of the output from intermediate steps of a complex analysis takes time, particularly if many people were involved in the analysis and you are being asked to do so in a fashion that others can easily interpret. Presumably they have already taken a lot of time to document their work (I am assuming here, because I haven’t read the paper very carefully). That is their obligation – no more. Any verification or replication should be done independently anyhow. If there is any UAH or RSS data that is not publicly available, as Kenneth asserts, then it should be made public. That’s the end of the story!
Re: Eric (#72),
Do you honestly think that the main author would not have all of the work done on this paper on his own computer… before the paper was even submitted???
I agree 100%. However, “independently” as used here does not mean “without looking at the original work while you do it”. It means that the review is supposed to be done by parties who played no role in coming up with the original results. Not providing the requested material indicates a distinct aversion to such independent verification.
Re: RomanM (#73),
Frankly, that really wouldn’t surprise me.
Here, I think you and I just have a fundamental difference of opinion. I think “independently” means that the experiment is done entirely independently.
Re: Eric (#72),
Eric, the needed data to which I have referred is the updated computer model results from Santer et al. and not that from UAH and RSS.
I find your rationalizations for withholding data by Santer worth a chuckle or two. It is just plain silly to think that that data is not in a computer somewhere that could be easily sent by that person who read and replied to Steve M’s initial email for Santer.
I suppose rationalizations for withholding information do have some value these days with all the bailout money going to private companies. I somehow think that your current one, without a lot of rework, would get smacked down almost immediately.
Re: Kenneth Fritsch (#76),
I would find the notion that scientists should provide every code and the output of every intermediate step of an analysis worthy of a chuckle if there weren’t the threat that the work of scientists like Ben Santer and Michael Mann could grind to a halt if they were required to do so. If you want to replicate a result, then go for it. If that requires building a GCM or collecting tree-ring data yourself then do it. This may sound absurd, but it happens every day in the scientific community. That’s part of the reason there are so many climate modeling labs.
Re: Eric (#78),
Hmmph. The topic of the paper was rigorous statistical comparison, not GCM building.
But it is good that the Santer17 discussion continues. My dof question is still open ( http://www.climateaudit.org/?p=4185#comment-308277 ); I’d like to derive Santer’s Eq. (13) next. It looks a bit like the two-sample t-test with unequal variances at http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm (I don’t have von Storch and Zwiers, 1999).
Re: UC (#100),
When the variances can’t be assumed equal you have the Behrens-Fisher problem. The approximate t-test method goes back to Welch:
“The significance of the difference between two means when the population variances are unequal” B. L. Welch, 1938, ‘Biometrika’ 29, pp. 350–62.
I still think all this is irrelevant because any sensible analysis would have used maximum likelihood to fit the trends and the AR1 at the same time rather than using a bunch of ad-hoc adjustments because climate scientists can’t use arima() in R.
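A minimal sketch of that approach in R, using simulated data in place of a real temperature series (the variable names are mine):

# Minimal sketch: fit a linear trend and AR(1) errors simultaneously by
# maximum likelihood, instead of OLS followed by an ad hoc adjustment
# for serial correlation. A simulated series stands in for real data.
set.seed(1)
n  <- 240                                   # 20 years of monthly values
tt <- seq_len(n)
y  <- 0.002 * tt + arima.sim(list(ar = 0.8), n = n, sd = 0.1)

fit <- arima(y, order = c(1, 0, 0), xreg = cbind(trend = tt), method = "ML")
b   <- fit$coef["trend"]                        # ML trend estimate per step
se  <- sqrt(fit$var.coef["trend", "trend"])     # its standard error
c(trend = unname(b), se = unname(se))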
Re: Andrew T (#105),
Yes, and the approximate degrees-of-freedom formula there equals the Welch–Satterthwaite equation. Santer’s Eq. (13) seems to be a modified version of this, as we are dealing with trends instead of means. Now give me 49 simulations of 2009–2100 temps and I’ll give you a prediction interval for the trend to be observed.
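For reference, the standard Welch–Satterthwaite approximation for the effective degrees of freedom (stated here for means; a trend version would presumably swap in the trend variances):

$$
\nu \;\approx\; \frac{\left( \dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2} \right)^{2}}
{\dfrac{\left( s_1^2/n_1 \right)^{2}}{n_1 - 1} + \dfrac{\left( s_2^2/n_2 \right)^{2}}{n_2 - 1}}
$$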
Re: Eric (#78),
I just got to this post of yours and my first reaction was “you have to be kidding me”, and then I chuckled. If Santer (and Mann) put all the data online from the get go I do not see how their work would grind to a halt. We suspect that much of this grunt work is that of graduate students anyway and therefore the great minds could go on contemplating.
The longer and deeper this conversation goes, the more outlandish your rationalizations for excusing these scientists’ behavior become. Why would they not want to make their data available to one and all, particularly if we think of them as people who are using taxpayer money, directly or indirectly, and as a result are public servants? I hardly get that impression from these replies.
It is almost as though the argument for withholding data is tied to a requirement for these authors to keep their creative juices for their scientific inclinations going, no matter that the classical inclination of a scientist is to share his results and data in the pursuit of truth. Then there is the argument that these scientists’ time is so valuable that even handing the duties off to scientifically lesser people is a major impediment to their great works and discoveries. All this seems more in line with a training program to develop prima donnas.
#72. If papers are presented as being policy-relevant, then scientists had better get used to the idea that people other than their pals are entitled to examine the data. If you’re suggesting that documentation in these fields is done in a “fashion that others can easily interpret”, you’ve obviously not read this blog. Methods are routinely mis-described.
And make no mistake, it’s not simply being rude. Phil Jones said on another occasion – “We have 25 years invested in this. Why should we let you see our data when your only objective is to find something wrong with it?”
In my own case, when I first looked at the Mann corpus, I had no expectation of finding anything wrong with it and approached such a conclusion very gingerly; however, when Mann said that he had “forgotten” where his data was located and his associate said that it wasn’t in any one place, it was clear that no one had ever checked his work.
Policies at econometrics journals require authors to archive working code and data as a condition of submission. So we’re not talking about a policy that is not already “best practices”.
Re: Steve McIntyre (#74),
I am not interested in defending Phil Jones or Michael Mann. If methods are not properly documented or raw data is being shared only with “friends” then those are problems that need to be exposed and corrected. But do not confuse a lack of willingness to aid a refutation with a lack of willingness to be refuted. I can only say this so many times: replication of an experiment for the purpose of verification should be done independently. As a practicing scientist, I am under no obligation to help you replicate my experiment beyond properly documenting my procedures. Nor would such help be desirable if the goal is to independently verify a scientific result.
Re: Eric (#77), Perhaps you misunderstand what the data are? The data are the output of a bunch of climate models. There is no experiment and these models are mostly not available to be run by anyone. It is like saying go run your own atom smasher when you think the detection of some particle may be spurious (and it has happened).
Re: Craig Loehle (#84),
I know precisely what we are talking about. In fact, we shouldn’t be talking about “data” because output from a climate model is not “data”. My understanding is that the model runs cited by the IPCC are archived by LLNL in the PCMDI database; however, I do not know what their policy is for updating it, so perhaps the runs do not extend as far as those used by Santer et al., which seems to be what Kenneth is asserting. If that is the case, then Santer et al. would be best served by coughing them up. Also note that many climate model codes are publicly available. I know for certain that NCAR CAM, NASA GISS, and NOAA GFDL codes are all available for download from the web. But all of this misses the point I was trying to make. Indeed, if you do want to independently verify a particle maybe you should build your own atom smasher. I recognize that this may take decades and hundreds of millions of dollars, but new accelerators do get built every now and then. And presumably the predictions of the standard model, as well as the results of past accelerator experiments, are evaluated against the results from newer accelerators. All I am suggesting is that you view climate science in the same way. I don’t think it is really necessary in this case, since the modeling groups have decided to make their output publicly available. But if you don’t think that is sufficient, then download the source codes and run them yourself. If you don’t think that the community’s models are right, or sufficiently independent from one another, then build your own.
Re: Eric (#89), I’m probably missing something here. Suppose X’s atom smasher produces some data. X’s algorithm, which he does not reveal, produces a conclusion based on that data. Are you saying that if Y legitimately suspects that the mathematical method applied to the data has problems, he has to go build another smasher, run it, produce data (that may or may not be the same), then guess what X’s math was, and run that… before he is allowed to critique it intelligently? That the proper scientific approach would not be for X to simply tell Y, or the world, what the mathematical method was and let Y, and the world, check it?
Re: PhilH (#92),
I would expect that the mathematical basis for X’s algorithm would be described in X’s paper.
Re: Eric (#94), you say:
Heck, so would we all, that’s fundamental to science. However, even the most cursory reading of this blog shows that in climate science this is the exception rather than the rule.
Which in turn reveals to us that you haven’t done your homework by reading the blog a bit before posting …
w.
Re: Willis Eschenbach (#97)
In my experience most scientific papers don’t clearly describe their algorithms; at best they describe the situation well enough that with a bit of persistence you can work out roughly what they did. The only way to find out what they really did is to get hold of code, sample inputs, and so on. Generally my competitors are willing to provide this if I ask nicely. Most of the journals I publish in require me to do this on request, though there is a sort of gentleman’s agreement not to take advantage of this to acquire unpublished insights.
To repeat what has been said many times before: climate science is being used to justify multi-billion (multi-trillion?) dollar expenditures. This is not like most science, and we really can (and should) demand higher standards.
Re: Eric (#94),
The precise method for determining the trends in the lower troposphere based on model data is not described in the Santer paper. The current paper describes what they did by pointing to a five page article in Science. I have not downloaded that article, but Science articles tend toward, shall we say, brevity?
In his response to Steve, Santer suggests that algorithms (plural) to figure out the trends are in other papers.
As there are multiple algorithms, and they can be tweaked, it will be difficult to reverse engineer what Santer did unless Santer explains further. Providing his code or post-processed data would accomplish this.
The situation that has arisen is not uncommon in research. This is why, ordinarily, scientists do grant requests for information or codes that support the major results they present in peer reviewed articles. I don’t know if they are required to do so, but many grant these willingly.
Re: Eric (#89),
Eric, the calculation complexity comes from converting model output to give the integrated areas of the troposphere that RSS and UAH measure and report as T2 and T2LT. You could pick that up from Santer’s nasty and mocking reply.
By the way, Eric, what would you, as a scientist, do in a case like this if the data could be emailed with little or no effort? Just like to know what kind of person I am talking to, you know.
Re: Eric (#77),
As a scientist, then you understand that you have the obligation to provide EVERYTHING that is necessary for another person to repeat your experiment. In that case, that has to include (working) computer code, all the data that were used, and any assumptions that were made. Documenting procedures is not enough, when you are using data from somewhere else in forming your conclusions.
Re: jae (#85),
I guess I’ll just say it again. It is my opinion that for the purposes of routine validation of a scientific result, I am not obligated to make my computer codes or the results of intermediate analysis steps available to anyone who asks. If you want to verify my results, then read the paper to learn the procedure, then acquire the data and do the calculation yourself. My codes and intermediate steps are not required for you to reproduce the result. Again, this assumes that the paper sufficiently documents the procedure. I do not know if that is the case. If it is not, then there is a problem with the paper.
Re: Eric (#90),
There is always a problem with the paper. I have yet to come across a published paper that completely documents the steps used. Publishing space constraints are such that the technical details are the first to be cut. Instead, in the absence of such information, one must try a process of trial and error to work out what was done – and if there are more than a few details missing it becomes near on impossible to work out exactly what was done. Different data vintages can affect results in non-trivial ways – even the ‘raw’ data keeps changing in some areas.
I expect that the standards should be the equivalent to these:
Re: Eric (#90), Perhaps you could look around this site. You would find that there has been hardly a single case where the documented procedures in a paper have been sufficient to replicate, verify, audit or test a result. In the case of the GISS code (official NASA data for US and world temperatures – you can’t go “build your own”) there were errors, and when the code was coughed up by Hansen, no one has been able to get it to run. I would remind you that in fields like biochemistry or tricky surgery techniques, it is common to go visit a lab that innovated a technique to see how it was done.
Re: Eric (#77),
You write:
Where do you get this nonsense? Is this what they are teaching in schools these days?
Perhaps we need to define terms here. Replication can mean different things to different people. In some cases, replication may require obtaining new samples from the field. That seems to be what you have in view. That is not what Steve is requesting. He did call on people to update certain tree ring series relating to the Divergence Problem. When they didn’t, he went into the field and did it himself. The main issue is that these people have not “properly documented procedures” in the first place. The NSF requires researchers to archive their source code, but not everyone does. In this case, you are required to turn over your methodological source code whether you think you should have to or not.
Re: Ron Cram (#224), The point that Eric is missing is that with the studies like these, properly documenting methods is not sufficient for replication. The input data needs to be made available, too, else replication is not possible.
Mark
Re: Mark T. (#227),
I do think it is theoretically possible to properly document methods and archive all your code prior to publication, but it is not common practice. Even if someone attempted it, he would probably leave out one or two important elements unintentionally. That is the point I was trying to make. At that point, the researcher cannot say “Take my word for it!” He has to cooperate with replication attempts or he is not a scientist.
Re: Ron Cram (#229), What I was getting at is that some experiments do not require external data but others do. For example, when I describe a method for detecting multipath signals in noise, anyone who is knowledgeable w.r.t. communication theory can easily generate their own data to test my theory. Santer’s specific example, however, cannot be replicated without access to the actual data used. Simply saying “do the runs yourself” is not sufficient as it would generate different results (at least, that’s my understanding of this issue), so documenting the methods is not sufficient.
Mark
Re: Mark T. (#231),
Now I understand you and agree completely.
I’ve spent most of my life in business where due diligence in the form of audits, feasibility studies, etc. is routine and part of life. In my opinion, if climate science is being used to inform policy, then the public can reasonably expect climate scientists to do more in the way of disclosure and due diligence than may be expected of an organic chemist.
Eric, you express your opinions on what obligations should be. However, in many cases, journal policies require data like this to be made available. The US Global Climate Change Program has established policies that, if implemented, would require disclosure of this data. Santer is a federal employee and the data may be governed by FOI. So whatever your opinions may be on Santer’s obligations, if Santer is intent on testing the system, we’ll see whether the data is producible under one of these sets of duties. Perhaps Santer has communicated the data to someone else and the communication is producible under FOI. Maybe one of Santer’s UK coauthors received data and their communication is producible under UK FOI. And if the institutions have failed to implement the US GCCP policies, that too is worth knowing. Maybe none of Santer’s coauthors ever looked at the data; that too would be interesting.
In terms of creating work for himself, Santer might consider the possibility that it will take him more work to justify the refusal than simply to provide the data.
Re: Steve McIntyre (#80),
On that point we are in complete agreement.
Re: Eric (#83), It is of course speculation how much effort would go into releasing the data. It is further speculated (for example on Lucia’s blog here) that it could be a lot of work.
We likely will never know for sure, as the principals involved are not expected to say. In any event, it’s probably not worth the effort that might be needed to find out, and to some extent the issue is moot.
However, if someone did want to take this further, or to make some judgment in the absence of this information about how much effort was required (now that the data has been released), they probably would first start with the Santer 2008 paper itself which says:
In the “Fact Sheet” on the paper issued by the authors on Santer 2008, they say their paper is based on
(emph.added, available here).
In Santer 2005 (Science, abstract here), Santer et.al. describe their dataset of realizations as follows:
Going back to Santer (2008), they further describe their procedure as follows:
(emph. added)
This suggests to me this dataset was created for Santer (2005), and was simply re-utilized by Santer (2008). If that’s so, giving Steve McIntyre access to the requested “monthly model data (49 series) used for statistical analysis in Santer et al 2008” may indeed have required no more than a few mouse clicks (and perhaps blowing off some accumulated dust).
Eric,
Producing a new GCM does nothing to verify or refute the existing GCMs, it merely provides another model. Given that this very topic started from trying to either verify or refute existing GCMs I don’t see how producing another GCM helps in that quest.
I agree that the Santer statistics must be audited. However, by looking too closely at the trees (statistics) we don’t see the forest (the premise). Santer et al set out to show that the observed temperature data is within the confidence intervals of the model with the implication that CO2 forcings drive temperatures. That is indeed a necessary condition but it is insufficient. They must also show that the model predicts a statistically significant difference when including and excluding CO2 forcings. And that has NOT been done.
The idea that you can deny access to important intermediate calculations of a peer reviewed paper makes a mockery of science.
If that were really a justifiable notion, you could have two separate studies on the same raw data doing in general the same thing but coming up with contradictory results, with both sides saying “the raw data is there, if you filter out the ‘correct’ items and work it out properly for yourselves your results would comply with ours”. How does one compare the two papers for validity? By doing your own study and coming up with a third result (of which you will withhold the intermediate steps again)?
What an embarrassment for climate science.
I seem to remember in the past the results were at times not the same because either the methods were not fully described, they were wrong, or there were extra/missing bits.
Parts of this discussion remind me of discussion about the source code for temperature analysis. How many years did that go on before finding out that it wasn’t all there?
Oh, and about LLNL
Regardless, is there anyone involved who really doesn’t understand the difference between a) recreating a process from the start or some point during it, and b) checking an end result?
One is making a pie from a recipe to see how yours comes out, either from scratch or using pre-made crust and/or filling, etc. The other is eating a pie somebody else made and seeing if it tastes like the recipe says it should.
“Dang, this cherry pie has apples in it!”
What does the raw data have to do with conclusions?
Raw information lets you recreate something to see what your conclusions are, and if they match the conclusions of others.
Conclusions are checked by checking the conclusions.
What does being able to compute (easily or not) have to do with not providing the existing supporting data for comparison?
Sorry, Steve, Dr. Benjamin D. Santer (http://www.nersc.gov/news/annual_reports/annrep03/advances/5.1.fingerprints.html) isn’t going to let you copy his homework out on the playground.
I’ve now submitted a FOI request to DOE, to which Livermore Lab belongs.
Re #82
I think you’ve got it exactly backwards. In the topic under discussion, it is precisely the nature of the trees that needs to be examined. Santer, by accumulating 16 other authors to bear witness to his premises has, in effect, created a forest to obscure the tree that is a product of his own construction. And to extend the metaphor a bit further, he has also attracted additional supple saplings [accent on the first syllable] who are willing to bend over backwards and beyond to rationalize and justify his…dare I say…”egregious” behavior.
Mr. McIntyre:
If you need legal help, and don’t already have a team, with FOI enforcement, I offer my assistance.
It’s like math class; show your work to get credit for the solution to the problem.
Re: Eric (#94),
Even if a paper describes the mathematical basis for an algorithm in the first place, it may or may not detail it well enough to be replicated: the same steps and methods leading to the same results. But that’s a non-issue; it’s not the subject here.
We are not discussing replicating or investigating a study, but rather the examination of the results.
We take a break here for a moment from this thread, just a short break. Steve M. and Ross M. have been abased by Mann for not being “climate scientists”. (Snide aside: Is that term an oxymoron?) This gives me the opportunity to pass on something I once heard privately that deserves to be communicated publicly, and to honor its source, one Harold Hay. Many years ago after one of my chemistry lectures I was speaking with this man and at one point addressed him as “Dr. Hay”. He responded to me, “I’m not DR. Hay. I don’t have a PhD.” (He was the holder of many patents in chemistry and other sciences.) He then continued: “If you have what it takes, then you don’t need a PhD; and if you DON’T have what it takes, then you DO need a PhD.” May he rest in peace; and the rest of us living, take heart!
Re: Bill Larson (#102),
On the same (OT nitpicking) theme, Steve wrote “Dr Karl” but it should be “Mr Karl” since NOAA’s Karl has no scientific PhD (only an honorary doctorate).
Hay now (pun intended), there’s nothing wrong with a PhD per se, just how you use it (or don’t).
Mark
I strongly support your pursuit of requested ‘data’ under the FOI Act. The behavior and arrogance of these people are appalling and indeed seems to be worsening as evidence at odds with the IPCC central position accumulates.
Meanwhile I am scratching my head over a simple conceptual problem I have with the Santer et al approach. As I understand it, the authors defined a class of models which have somewhat differing treatments of the climate system, including presumably differing climate sensitivities. This class of models produced an ensemble of runs for the historical period in question. A statistical analysis of this ensemble is presented that purports to show that the ensemble is indistinguishable from the actual historical record, when ‘weather noise’ is included. Therefore, the conclusion we are presented with is that the class of models as a whole is ‘consistent’ with observed trends. And by implication, attacks on the models per se are ill founded.
If I understand this correctly, it seems strange to define a class of differing predictive methods (models) with the purpose of determining that the class as a whole is consistent with observation. One might argue that it is more logical to consider the predictions of specific models, one model at a time, and statistically rule in or rule out these specific models based on how close their predictions come to observation. I suspect that if you do this, you will find that those with lower computed climate sensitivity to greenhouse gases come closer to the observed record, providing evidence that the actual sensitivity is lower than the bulk of the models.
In practice, different models are probably not completely independent of each other, so the actual number of ‘degrees of freedom’ probably lies somewhere between one and the number of models. Still, it seems that the scientific objective ought to be to determine what the results of different models, when projected against the historical record, can tell us about the climate sensitivity, not whether models as a methodology class are ok or not.
Comments welcome.
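For readers who want to see the mechanics of the “one model at a time” alternative described above, here is a minimal sketch in Python. The trend values, standard errors and the plain two-sided z-test are all invented for illustration; this is not Santer et al’s actual data or method.

```python
import numpy as np

# Illustrative only: rule each model in or out against the observed trend,
# one at a time, instead of pooling all runs into a single ensemble test.
obs_trend = 0.12   # observed tropical trend, K/decade (made-up number)
obs_se = 0.04      # standard error of the observed trend (made-up)

# Made-up per-model trends and standard errors for five hypothetical models
model_trends = np.array([0.10, 0.18, 0.25, 0.14, 0.30])
model_ses = np.array([0.03, 0.05, 0.04, 0.06, 0.05])

# Simple two-sided z-test per model, assuming independent normal errors
z = (model_trends - obs_trend) / np.sqrt(model_ses**2 + obs_se**2)
for i, zi in enumerate(z):
    verdict = "consistent" if abs(zi) < 1.96 else "ruled out at 5%"
    print(f"model {i}: trend={model_trends[i]:.2f} K/decade, z={zi:+.2f} -> {verdict}")
```

On this per-model view, a high-sensitivity model whose trend sits far from the observed value gets rejected individually even if a pooled ensemble test passes.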
Re: Noblesse Oblige (#107),
FWIW, I completely agree. I never could see any kind of reality in an “ensemble of models.” I even think the whole idea is funny. If you have enough models, you get enough variation to cover any historical record. And I’ll bet that the models are so inter-bred that the df are quite close to 1.
Re: Noblesse Oblige (#107), “the scientific objective ought to be to determine what the results of different models, when projected against the historical record, can tell us about the climate sensitivity”
It would be wonderful if Bishop Hill (or someone) would consider writing up the Santer – Douglass & Christy story as you did with the Hockey Stick. It seems that the sensitivity issue, and the nonexistent tropospheric hot spot, form an important link in the serial auditing of climate science. A write-up would greatly help those like myself who care deeply about what is happening in science at present, as Michael Crichton did, but who may have difficulty with scientific/mathematical language at times and cannot follow every blog here, yet want to grasp the science and the story correctly and adequately.
Re: Noblesse Oblige (#107),
I agree and have made these points more than once. Also, until climate modellers can show proof that their model predictions are convergent rather than divergent, both individually and as a group, it seems invalid to use them to predict anything at all, and certainly not for any runs that lie outside the experimental confidence limits of the observed data.
Re: Noblesse Oblige (#107), A number of posts have raised exactly the same point. It seems sensible to deal with an ensemble generated by a single model perturbed with different initial conditions, stochastic blah blah. BUT an ensemble of different models? I don’t think so.
In response to Eric (#68)
It’s no small thing to consider spending hundreds of billions of dollars and changing the economies of the world based on climatology research. If it’s too “big of a deal” for a climatologist to share his “homework” on how he got an answer that influences this discussion, then his results should be discarded or suspect. You may remember Fermat’s Last Theorem. It looked like it made sense but was considered conjecture by the scientific community until it was proved 357 years later by Andrew Wiles. Until Santer feels willing to share his homework, I guess we’ll all have to wait a few hundred years to truly see if his “conjecture” proves true.
One point that I’d like to remind readers of is that climate authors seldom use “standard” methods described in statistical texts. The articles usually studied here are seldom “methodological” or “applied” – but a quirky mix of the two in which the authors are seeking to establish a result on applied data, but generally do something a bit novel in their data handling.
I note the 20CEN model results used in Santer et al 2008 includes runs from the Hadley Centre (both HadCM and HadGEM by the looks of it). According to the PCMDI website, the Hadley Centre imposes the following licence restriction on the use of its data:
Assuming the petulant Phil Jones’ e-mail is representative of their views, I guess the Hadley Centre are unlikely to hold Santer to their licence agreement. I guess arbitrarily long delays for non-commercial reasons are considered fair game…
#114. That’s a nice observation. Santer has described a procedure for converting model outputs into synthetic MSU series, through “static weighting functions” – the synthetic PCMDI runs would appear to be within the scope of this licence. Maybe someone could send a request to Santer specifically requesting the synthetic T2LT series obtained from Hadley Centre data, referring to the Hadley licence.
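For anyone wondering what a “static weighting function” does mechanically, a toy sketch follows. The pressure levels, weights and temperature profile below are invented for illustration; the published T2LT weights differ.

```python
import numpy as np

# Illustrative only: a "static weighting function" collapses a model
# temperature profile T(p) into one synthetic MSU brightness temperature
# via a fixed weighted vertical average. Weights below are invented.
p_levels = np.array([1000., 925., 850., 700., 600., 500., 400., 300.])  # hPa
weights  = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05])   # made up

def synthetic_msu(temp_profile):
    """Weighted vertical average; temp_profile is T at p_levels, in K."""
    return np.sum(weights * temp_profile) / np.sum(weights)

# One made-up monthly tropical-mean profile:
profile = np.array([299., 295., 290., 281., 275., 267., 257., 243.])
print(f"synthetic MSU temperature: {synthetic_msu(profile):.1f} K")
```

Applied gridcell by gridcell and month by month, this is the kind of fixed-weight averaging that turns model temperature fields into the synthetic MSU series at issue.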
Do they make ethics classes mandatory in science degrees nowadays? I certainly don’t recall any.
Santer works for PCMDI at Livermore Lab. Their mission statement says:
Aside from any general obligations in terms of publication, the PCMDI mission is to “support model intercomparison”. Insolent refusals to provide data to facilitate such intercomparisons are hardly consistent with this mission statement.
As a government (DOD) scientist who has been FOIAed, I can assure you that Ben won’t enjoy the process. Where he used to be in control, he will quickly find that a FOIA lawyer at NNSA is actually in charge. And the FOIA attorney works for the public, not for him. Imagine his surprise when computer tapes are made, emails are collected and in general some attorney makes his life uncomfortable until the letter and spirit of the request are satisfied. It is much better to give it up willingly and quickly – unless you are a masochist. And the ultimate criminal act is to delete files. Do that and go to jail, or hope that POTUS will pardon you.
Are there other data sets or studies that should also be subject to additional Freedom of Information Act requests? What about Hansen’s work?
Re: Lucy Skywalker (#120). I think the Bishop would do a great job of telling this story. In fact someday, someone will write a definitive scholarly history of the whole global warming business. This and the hockey stick will be part of it. Bishop writes beautifully about PROCESS, while Lindzen is masterful at narrating the SCIENCE and science politics. Some combination of the two capabilities would be commanding.
It is too early to write the entire history because we don’t yet know the ending, but Volume 1 would be welcome.
OK, look. Were such a request to come my way, I’m sure I would honor it. That said, I simply think that the greater value is in clearly explaining what was done, rather than making obtuse codes public. Once the methods are clearly documented in the literature, anyone can evaluate them at any time. Now I am well aware that many papers do not adequately document their methods. This is inexcusable. If indeed the methods used by Santer et al. for simulating the temperature sounder data from GCM output are not adequately documented, then they should be taken to task for that. Actually, they should have been made to do so at the peer review stage before their paper was published. By all means, it is reasonable to press them on the methods they employed if they are not clearly documented. As Lucia notes, customarily this is done on an informal basis. But in my experience it is typically an exposition of the methods used that is shared rather than a set of codes. In this fashion the methods can be independently evaluated. Under this model, competing groups work independently, but hopefully with a clear understanding of each other’s methods.
Re: Eric (#125),
You said it very well. In reality, writing up an accurate description of a complex analysis is harder than simply supplying the code. And yes, much more helpful.
Since we’ve yet (ever?) to see descriptions sufficient for replication, requests for code and data are fallback options… in the interests of not wanting to press upon the valuable time of the scientists involved.
I don’t think anyone here would argue with your position of wanting to see replicable descriptions. We just haven’t seen it done.
This whole thing is a truly sorry mess.
Re: Eric (#125),
Thank you, Eric. I will judge you by this reply and not by whom you defend.
When a real science, such as mathematics, uses algorithms as part of a proof, the algorithms are subject to rigorous review. For instance, the 4-color map theorem.
For a scientist to say, “I have the proof. It is proven. I have proved it. Proven, it is. Work out the details yourself”, is hardly science.
I am appreciating and enjoying this commentary, as always. But today I am in an especially jovial mood and want to contribute the following, inspired by Steve M.’s comment on a previous post about Santer not being able to respond because he was at a workshop, noting as Steve did that it IS getting close to Christmas: I saw a Marx brothers movie some time ago (I forget which one), in which in one scene Groucho and some other man are discussing a certain contract. At one point the (straight) man says, “What about the Sanity Clause?” Groucho replies, “Sanity Clause? There is no Sanity Clause.” I will only add that this blog does an excellent job of revealing that there is no Sanity Clause in climate science publishing either. I submit that Santer’s behavior here is more evidence of that.
Eric,
I would argue that the code should be the ultimate authority on what the method actually is. It might not be pretty, but working code provides an unambiguous definition of whatever matter is under discussion. It is much harder to provide that level of clarity with other forms of language.
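A trivial illustration of the point about code as the ultimate authority: the one-sentence prose description “we computed the linear trend of the monthly series” admits at least two defensible computations, while either code version below is unambiguous. The data here are synthetic.

```python
import numpy as np

# Two defensible readings of the prose "we computed the linear trend of
# the monthly series": fit to monthly values, or fit to annual means.
# The prose is ambiguous; the code is not.
rng = np.random.default_rng(42)
months = np.arange(120)                          # ten years of monthly data
series = 0.01 * months + rng.normal(0, 0.5, 120)

slope_monthly = np.polyfit(months, series, 1)[0] * 120       # per decade
annual = series.reshape(10, 12).mean(axis=1)
slope_annual = np.polyfit(np.arange(10), annual, 1)[0] * 10  # per decade

print(f"monthly fit: {slope_monthly:.3f}/decade; annual fit: {slope_annual:.3f}/decade")
```

The two numbers are close but not identical, and with noisier or gappier data the difference grows; only the code settles which one was actually computed.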
Eric–
Some groups share code. Some just share exposition. It all depends. But in this case, even the values for the 47 trends in TTT based on 47 model runs have been refused. That isn’t even code. It’s a list of values that would ordinarily be included in an internal report or master’s thesis. It is understandable that these are omitted from peer reviewed papers – the motive is brevity. But it is peculiar that they aren’t shared when a request is made.
Eric, once again, I refer you to the mission statement of PCMDI where Santer is employed. It states that “PCMDI’s mission demands that we work on both scientific projects and infrastructural tasks. … Examples of ongoing infrastructural tasks include … the assembly/organization of observational data sets for model validation.”
Much of the effort in Santer et al appears to have been the assembly/organization of data sets, which is what PCMDI is funded to do. The employees have been compensated for the assembly/organization of the data sets used in Santer et al 2008 and, given this mission statement – which is over and above any general obligations as a scientist – it is absurd for him to say that I should do the infrastructural work myself and the managers of the laboratory have an obligation to deal with this refusal to adhere to the approved PCMDI mission.
I am waiting for the climate scientist who provides his paper, data, and methods to Steve prior to publication. With a note something like: “I want to make sure I am not doing something wrong and stupid that I am going to have to defend.”
It seems to me that the only downside in a request like that is that our mythical climate scientist would have to be more committed to science and truth than his political/religious biases.
I shall not hold my breath.
Re: Thom Scrutchin (#131),
Go check out Craig Loehle’s post back on November 15th of last year and the follow-on discussion (www.climateaudit.org/?p=2380). As far as I know it’s about as close as anyone has come to your suggestion so far. It’s a beautiful example of putting the science ahead of all other considerations.
Joe
From Phil Jones’ reply:
There are scientists who have yet to release ANY data, or even write a paper based on their research.
One scientist has been advised by legal to not release data.
And people thought that the Great Wall of China is the only stone wall that can be seen from space…
Re: Eric (#68),
But there isn’t sufficient documentation to replicate the work. That would have been crystal clear if you had read the threads on this blog.
There’s something interesting, if not important, in Phil Jones’ note. He starts with:
“Ben,
Your response is already up on the Climate Audit site, and when I looked there were over 50 comments.”
Read it carefully and it becomes obvious that this isn’t the first e-mail on the subject. “Your response is already up on the Climate Audit site…” Response to what? We have clearly been included in an already-started e-mail conversation; it looks to me like Dr. Santer has been looking to Prof Phil for advice on how to stonewall the requests. Important? Maybe not, but interesting nonetheless. Could we not put in an FOI request for the rest of this correspondence, now that they’ve copied Steve on it?
Gerry, that’d probably be his (Ben’s) reply to SM. As in: ‘The email you sent in response to SM is already up at his site.’
So no, there’s no need for earlier correspondence between the two, and speculation about a conspiracy between them would be more damaging than helpful. And probably flat-out wrong.
Was it The Firm that told the story of an innocent who found himself in a firm of corrupt lawyers, realized he’d been framed and compromised so that he could not come out with the issue directly, but collected (audited) hundreds of chits showing their record of serial falsification, each one tiny by itself but together massive?
TSH, you’re probably right; however, “The email you sent in response to SM is already up at his site” implies some previous correspondence. Although that in itself doesn’t imply it was conspiratorial. But interesting nonetheless. Let’s leave it there; I don’t think I’m adding anything to the debate anyway.
I’ve arrived late at this debate, but the comment by Eric #77 strikes me as bizarre, namely “As a practicing scientist, I am under no obligation to help you replicate my experiment beyond properly documenting my procedures. Nor would such help be desirable if the goal is to independently verify a scientific result.”
If one were to have discovered a method of cold fusion, what would be the advantage of saying to other researchers “Yah boo, sucks to you! I discovered cold fusion, but I’m not going to let you know how to do it”? That’s my reading of your comment, and others have also noted some strangeness in your way of thinking. Now, if there were to be some huge financial reward at the end of your research, naturally it would make sense to protect your investment in your efforts, but as far as I can tell, that’s not the issue here. You just seem to think it’s a good thing to obstruct others from verifying any of your work, which begs the question “How good is your work?”.
You may not like my opinion, but your attitude is not nice and, so far, your more recent comments have failed to change my mind. You are trying to defend the indefensible and I would suggest you remember the adage, “When you are in a hole, the first thing to do is STOP digging”. BTW, stop practicing to be a (pompous) scientist, just be one!
Good eh?
Thom Scrutchin@#131: “I am waiting for the climate scientist who provides his paper, data, and methods to Steve prior to publication. With a note something like: “I want to make sure I am not doing something wrong and stupid that I am going to have to defend.””
Amen
Re: Pompous Git (#142),
Actually, Int J Climatology authors are given an opportunity to suggest a name as a referee. I wonder how many Team members suggest Steve McIntyre.
Comments at #107, #110 and #132 mention the assumption that climate model ensemble runs can indicate the range of climate sensitivity.
The assumption is false because ‘average wrong is wrong’.
The only consideration worthy of debate concerns which – if any – of the climate models provides accurate, precise and reliable indications.
The Earth has only one global climate. But each of the models provides an indication of global climate change that differs from the indication of all the others. This demonstrates that at most only one of the climate models provides accurate, precise and reliable indications of global climate change.
So, in the absence of evidence that any one of the climate models is right, it has to be assumed that all of them are wrong. At issue is how wrong they are.
The assumption of ensemble tests is that reality is encompassed within the range of outputs of the models; i.e. it is implicitly assumed that one of the climate models does provide accurate, precise and reliable indications.
But that assumption cannot enable ensemble runs to indicate the range of possible climate sensitivity in reality. It only indicates the range of virtual climate sensitivities output by the totality of models in the ensemble: this is because we know the models’ indications are wrong but we do not know how wrong they are.
Simply, there is no reason to suppose that the output of an ensemble of the models is more trustworthy than the output of any one of them.
Richard
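Richard’s “average wrong is wrong” point can be made numerically in a few lines; all of the numbers below are invented for illustration.

```python
import numpy as np

# Toy illustration: if every model is biased, the ensemble mean is still
# biased, and the ensemble spread measures inter-model disagreement,
# not distance from reality. All numbers invented.
truth = 1.0                                             # the one real value
model_estimates = np.array([1.8, 2.4, 3.0, 3.4, 4.1])  # five wrong models

ens_mean = model_estimates.mean()
ens_spread = model_estimates.std(ddof=1)

print(f"truth: {truth}")
print(f"ensemble mean: {ens_mean:.2f} (bias {ens_mean - truth:+.2f})")
print(f"ensemble spread: {ens_spread:.2f}  # says nothing about the bias")
```

Here the truth lies entirely outside the ensemble’s range, yet nothing computed from the ensemble alone reveals that.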
Providing the requested information would probably entail no more than a handful of mouse clicks.
That is surely less ‘work’ than Santer’s lengthy response entailed. So Santer is hiding something. We can only speculate what it is. But if his work was straightforward and honest, Santer would have no reason to stonewall like this.
Re: Smokey (#144), Smokey writes:
“Providing the requested information would probably entail no more than a handful of mouse clicks.
That is surely less ‘work’ than Santer’s lengthy response entailed. So Santer is hiding something. We can only speculate what it is. But if his work was straightforward and honest, Santer would have no reason to stonewall like this.”
I am not so sure he is hiding something. It could just be fear of being “audited” by a perceived skeptic and exposed for it.
A groupthink mentality has been apparent for some time now: a decided reluctance to share data with others who are clearly interested in the research. It could be self-preservation.
We read of Phil Jones telling Santer (post #59) about Steve’s effort to get data. Did you note the exasperation? The same Dr. Jones who refused to share data with Warwick Hughes. As McIntyre points out in post #74:
“And make no mistake, it’s not simply being rude. Phil Jones said on another occasion – “We have 25 years invested in this. Why should we let you see our data when your only objective is to find something wrong with it?””
When I read such comments from scientists about honest requests for data, I think they are afraid that their entire belief system will come crashing down.
It is fear that prevents them from doing the obvious: sharing the data.
I would like to point to a scientist giving a proper response. In the following paper (of interest to readers here by the way):
Springer, G.S., H.D. Rowe, B. Hardt, R.L. Edwards, and H. Cheng. 2008. Solar forcing of Holocene droughts in a stalagmite record from West Virginia in east-central North America. Geophysical Research Letters 35, L17703, doi:10.1029/2008GL034971.
The data is available from the author (and he sent it right away). I found a discrepancy between the Fig. 1 graph and the data (Ha! beat you to it Steve!) and notified the author. In a few hours he replied that I was right and that he would send a notice to the journal. Quick, painless, honest. Wow. Good job Greg!
It’s obvious that if he had nothing to hide then he would have no problem sharing the data. The fact that he’s afraid to share the data is de facto proof he’s hiding something.
It’s got nothing to do with having something to hide. It has everything to do with Steve being their pariah. Steve has made life hard for the Team, or at least harder than it otherwise would have been, not through malfeasance on his part, but simple attention to details that they would have preferred went unnoticed. Santer’s reply, as well as much of the stonewalling Steve runs into (though not all), is payback.
Mark
Re: Mark T. (#150),
AFAIC, Steve’s just as much “a climatologist” as they are, if not more so. But since he’s not on “The Team and Associates, Inc”, well.
Who is this brash interloper!
Re: Robinedwards (#157),
Exactly so. If some work is done on data that is “unique” and then conclusions are reached using that data, the only way to validate/verify/recreate the conclusions is to have the unique data.
Unless the process is explained well enough to replicate and the contention is that the results give the same basic answer every time.
I don’t know how these sorts of issues can be determined and then worked out without cooperation.
I would add that even if a model gets it “right”, there’s absolutely no way anyone can know if it got it “right” for the right reasons. Maybe the replication of climate patterns happened by accident, maybe it didn’t. Unless you know absolutely 100% everything there is to know about global climate and all its internal and external inputs, you simply can’t know if you got it “right”.
Please consider forwarding the details of your battle for information to Congress. I believe one of the Senators from Oklahoma would be very interested in government employees refusing to provide data that was paid for by taxpayers and which they must legally provide under FOIA. Good luck and keep up the fight.
data –> (tp1) processing –> (tp2) stat analysis –> (tp3) conclusion
Tp = testpoint.
1. Do we want to have robust conclusions?
2. The more testing, the more robust the conclusions.
3. A conclusion that gives access to test data is easier to test than a conclusion where only input data is given.
4. The more important the conclusion is, the more important the testing becomes.
5. If lots of work is required to test, then less testing will be done. (A minimal sketch of such a pipeline follows below.)
Eric, do you disagree on any of the points above?
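A minimal sketch of the testpoint pipeline above, with each stage archiving its output so a third party can test at tp1, tp2 or tp3 rather than only at the raw-data end. The stages and file names are invented for illustration.

```python
import json

def processing(raw):                 # raw -> tp1 (stand-in adjustment step)
    return [x - min(raw) for x in raw]

def stat_analysis(processed):        # tp1 -> tp2
    return {"mean": sum(processed) / len(processed)}

def conclusion(stats):               # tp2 -> tp3
    return "trend detected" if stats["mean"] > 1.0 else "no trend"

raw = [3.0, 4.0, 6.0, 7.0]
tp1 = processing(raw)
tp2 = stat_analysis(tp1)
tp3 = conclusion(tp2)

# Archive every testpoint so each stage can be checked independently
for name, obj in [("tp1", tp1), ("tp2", tp2), ("tp3", tp3)]:
    with open(f"{name}.json", "w") as f:
        json.dump(obj, f)
```

The cost of exposing the testpoints is a handful of lines, which is the force of point 5: when intermediate outputs are archived as a matter of routine, testing a conclusion no longer requires redoing the whole pipeline.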
Oh Dear!
I remember an old analysis of Santer’s earlier work by the late John Daly.
http://www.john-daly.com/sonde.htm
That is why he doesn’t like people to analyse his work. Some even called his handling of data “to santerize data”.
So now you can “santerize” data and “mann handle” data.
Re: Michael Sirks (#155),
LOL. Deja Vu all over again. Looks like truncation of datasets is established methodology among some climate scientists.
Re: Michael Sirks (#155),
A damning link, Michael. Santer obviously has a track record as do Nature’s climate science referees.
Re: Michael Sirks (#155),
Chris Knappenberger was here at CA several days ago pointing to the problem that Ben Santer has with short time series. Santer et al (2008) was evidently not the first instance. But Santer’s time periods used, for whatever reason, are trumped by the longer time periods — so let us move on.
As a layman from a science standpoint, yet as a lawyer who is deeply concerned with the impact of so-called scientific “findings” upon public policy in connection with climate change, I see the debate generated in the posts about Ben Santer’s refusal to provide Steve with the requested data as just another example of how science in this country and around the world has become overly politicized. The fact that “Eric” just doesn’t seem to get what the true problem is constitutes further proof.
If you want to publish findings that reach a certain conclusion, and you are so confident about your conclusion, what is the problem with producing the data? What do you have to hide? And what is the problem with someone who you think is “trying to find something wrong with your work”? I have to deal with that all of the time in my own work, but I don’t worry about it, because I know my work will withstand scrutiny. But withholding data seems to be the way of government scientists today, along with those other “scientists” in the alarmist camp. Let’s see the evidence, let’s have the debate.
Steve, keep up the good work. You need to continue to question as much as you can, because the road to hell is pretty greased these days.
This is quite a hot thread, and very interesting.
My thought about the model runs that Santer made is that it might well be impossible to replicate them, as he suggests, simply because they are model runs, in which I would guess that some sort of random element is an integral part of the procedure. These might be manually input starting parameters, values chosen by some randomisation procedure, numerical limits on the number of replications of each model, or choice of criteria for declaring that a run has reached its endpoint. (I am presuming that they did actually do “repeats”.) No doubt many other such elements that go into producing output from climate models will occur to those skilled in the art.
Are these thoughts silly or uninformed or possibly even reasonable?
Robin
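Robin’s guess about random elements is easy to make concrete: a toy “run” with an unrecorded random element cannot be replicated, while archiving the seed (and hence the perturbed initial condition) makes the same run exactly reproducible. The model below is an invented two-line caricature, not any GCM.

```python
import numpy as np

# A toy "model run": randomly perturbed initial state plus "weather noise".
# Without the seed the run is irreproducible; with it, bit-for-bit identical.
def toy_run(seed, steps=5):
    rng = np.random.default_rng(seed)
    state = 15.0 + rng.normal(0, 0.1)        # perturbed initial condition
    for _ in range(steps):
        state += 0.02 + rng.normal(0, 0.05)  # drift plus noise
    return state

print(toy_run(seed=1234))   # anyone with the seed gets this exact value
print(toy_run(seed=1234))   # identical
print(toy_run(seed=5678))   # a different "ensemble member"
```

Which is one more argument for archiving the run outputs themselves: absent the recorded seeds and initial conditions, the outputs are the only record of what the runs actually were.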
Perhaps this Data Quality Act may be of assistance … http://en.wikipedia.org/wiki/Data_Quality_Act
It applies to all government sources, government grants and government contractors.
The major reason to pursue such Climate Auditing of Santer’s work is that the OECD and IEA (2008) posit that collectively “We the People” should invest some $45 trillion by 2050 to arrest global warming. Where else can we go for a professional “second opinion”? For perspective, that $45 trillion is equivalent to spending the total US 2008 financial bailout EVERY YEAR for four decades.
See: Energy Technology Perspectives, 2008, Blue Scenario p3, OECD/IEA
http://www.iea.org/Textbase/techno/etp/index.asp
Science is all about replication. If you can’t replicate it, it isn’t Science. We have a word for that: Art.
Santer is an Artist as his work cannot at present be replicated.
Here’s what Karl Popper had to say about the proper attitude of a scientist toward criticism:
“If you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.” (Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Routledge & Kegan Paul, 2007 (1963), p. 35).
Clearly, neither Santer et al. nor Gavin Schmidt share this scientific attitude. But then, as I’ve discovered in more than a year of research for a book on climate science claims, very little of “consensus” climate science follows Popper’s rules for good science (prediction, falsifiable empirical testing of the prediction). The result, predictably and sadly, is bad science.
Interesting reviewer comment on Santer in NAS 2005 Review of Ch 2, page 18:
Review of the U.S. Climate Change Science Program’s Synthesis and Assessment Product on Temperature Trends in the Lower Atmosphere (2005), p 15 THE NATIONAL ACADEMIES PRESS
http://books.nap.edu/openbook.php?isbn=030909674X&page=31
Well, yes, but if Steve was producing results that agreed with TTAAI, he’d have been given a job offer, or at least an invite to consult with them. He’d also then be included in all the secret society meetings in which data and methods are shared. He’d probably know the secret handshake, too.
Mark
Hello,
I wonder if, apart from Eric, people here have actually worked in labs. As a former graduate Physics student, and son of a research director, I have some small experience of laboratory research and the use of data.
It’s easy to understand why scientists are not all keen on delivering all their intermediate results. After all, no company would be keen on delivering the intermediate results that lead to its final products, and not only because of fear of competition, but mainly because, contrary to what has been said several times, most of the intermediate results are uninteresting and therefore discarded. Sometimes they are not very pretty either, as is most of the source code that is produced daily everywhere. Actually, everyone who has done it before knows that presenting intermediate results in a complete and easily reproducible manner is very time consuming, while not adding much value.
Besides, producing all the intermediate results would mean keeping gigabytes of simulation data, which is impractical and wasteful. It would also mean that one has to document details like all the software packages used (and sometimes there are many, with a lot of ugly data transfer between them), their configuration, etc., which no scientist has time, not to mention patience, for.
The article is supposed to provide the mathematical tools and assumptions that guided the data analysis, which is a much better way to convey the relevant information needed to replicate the analysis with your own tools.
All this being said, it is good that people try to independently examine these results, and the tone of Dr Santer certainly was unwelcome even though understandable.
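On the configuration-documentation point above, capturing the software environment alongside a result can be cheaper than it sounds; here is a minimal sketch. The manifest fields are an invented example, not any lab’s actual standard.

```python
import json, platform, sys
import numpy as np

# Write a small manifest of the environment next to the analysis output,
# so a later reader knows which software versions produced the result.
manifest = {
    "python": sys.version,
    "platform": platform.platform(),
    "numpy": np.__version__,
    "script": __file__ if "__file__" in globals() else "interactive",
}
with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```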
#172. Aside from any other obligations, there is Santer’s employer’s mission statement, quoted above, and they store terabytes of data.
Santer’s just being obstructive and his tone is not acceptable if he expects people to use his results for policy purposes. The US Global Climate Change Research program has policies which, in my opinion, are designed to prevent prima donna behavior like this.
But if Santer wants to try this sort of stunt, as I said above, I’ve submitted FOI requests and we’ll see what they turn up. We’ll see what the journal policies require. I’ll also see what DOE and PCMDI administrators have to say. We’ll see if any of Santer’s buddies are obligated to produce the data. We’ll see if Santer ever sent the data to any of his buddies.
The fact that they store terabytes of data doesn’t mean that they store intermediate results, but only meaningful ones, along with the datasets that (hopefully) allow one to reconstruct them and, as per your quote, the datasets that are used for model validation.
Again, I wish good luck with your request.
The chorus of folks saying that scientists should just supply their raw data, intermediate data, final data and all programs used in the analysis to anyone who requests it indicates to me that most people on this forum have never done serious scientific work. This sort of wholesale handover just doesn’t happen and no scientist I know would expect it from others, except perhaps from colleagues with whom they are engaged in a collaborative research project. Phil Jones’ comment “…the sooner they get their hands dirty with the sorts of analyses we/you’ve done…” is, to my mind, intended to convey the notion that going from raw data to final results is often a complex and tedious process, and that if skeptics were to undertake such a task, they would gain a much better appreciation for the efforts of those that perform it on a regular basis as part of their research.
Re: Andy (#177), Andy, there is a difference between general purpose research and safety-critical or regulatory research. If you do a study that is a clinical trial of a new drug, you have to use standard protocols and cough up your data. Same with a nuclear plant safety analysis or toxicity test for an industrial chemical. Compared to those, the economic and regulatory implications of the GISS/CRU data or the hockey stick or the climate models is much much greater, and yet they resent any oversight or auditing.
Re: Andy (#177),
You mean like when Steve M. and MrPete went up to the Almagre and collected their own tree-rings and had time to mull about in the local Starbucks? Before criticizing, perhaps you should read around at the site a bit. Many in here are scientists of some sort or another and fully understand what is implied and when it is appropriate to ask for data.
Mark
Re: Andy (#177),
Large numbers of scientists, and the universities and institutes that fund and employ them, file for patent protection for their discoveries. A requirement for patent protection is full disclosure. Even if they have a patent granted, it may be rendered unenforceable later if it is shown that they failed to provide full disclosure.
What is the difference here?
Re: Andy (#177),
Not to pile on (well, actually, yes, to pile on), this comment is just wrong. In the finance / economics field, this is accepted practice. The mighty AER has even codified this in their submissions policy – check
This even applies to general business journals. When my group published an article in the Harvard Business Review (which is hardly a hard quant journal), we had to provide all source data and all computer code that produced the output. I really don’t see why climate scientists shouldn’t be held to the same standard.
#177. First of all, I happen to know what’s involved in collecting data from remote locations as I have many years of experience in mining exploration stocks.
Second, thousands of businesses have to deal with auditors every year. I have experience with this as well. You can’t just tell an auditor – go do your own accounts. I realize that this example is foreign to scientists, but the link comes through public policy and dealing with the public. If scientists expect their results to be applied in public policy, then the public has a right to expect that scientists properly document and archive their results and methods so that third parties can examine them.
Journal peer review is not an audit process; it’s a cursory screening, and all too often it’s concerned more with defending a POV than ensuring replicability of results. When, as a reviewer for Climatic Change, I asked for supporting data and code, the editor said that I was the first reviewer in the 28-year history of the journal to do so and he refused to require the author to comply.
Craig: “If you do a study that is a clinical trial of a new drug, you have to use standard protocols and cough up your data. Same with a nuclear plant safety analysis or toxicity test for an industrial chemical.” – perhaps because you can kill people if you’re not careful. I’m not so sure climate science is in the same category. But perhaps not doing anything about climate change is??
Mark: Maybe I should’ve said “many” rather than “most”. But I still wouldn’t be surprised if it’s more than 50%, which would qualify as “most”. And I still think it is a highly unusual practice and expectation in this field and most other scientific research endeavors. Take, for example, someone who wants to map an ice shelf. They sail to the Antarctic for the summer field season, set up a GPS station, travel all over the ice shelf making differential measurements, download from the data logger, get back to the lab in their home country, reduce the data, grid and fit various surfaces, construct a final DEM and publish a paper. Someone else says “give me your raw data, intermediate data, final data and programs”. I’d consider this an unreasonable request, unless there was an offer of collaborative work – e.g. “I want to validate my satellite altimeter derived map of the ice shelf using your data. How about we write a joint paper on it?”.
Re: Andy (#181), a) Steve’s already pointed out his experience as well as requirements and b) it is only “highly unusual” when public policy is not concerned. Sorry, but that’s the simple truth. Your anecdote is just that, an anecdote. We aren’t talking about sailing to the Antarctic; we’re talking about data acquired through public funding that is legally required to be available to the public at large.
Furthermore, whether “most” of us are not scientists is a red herring as many of us are, and we are fully aware of these issues as well as requirements.
Mark
Steve: “First of all, I happen to know what’s involved in collecting data from remote locations as I have many years of experience in mining exploration stocks.”
OK. I didn’t say “all”… …out of interest, would you have just handed those data over to whoever asked for them?
“If scientists expect their results to be applied in public policy, then the public has a right to expect that scientists properly document and archive their results and methods so that third parties can examine them.”
First off, I think that it’s up to politicians to decide whether or not the standards of documentation and archiving of results are sufficient to base policy decisions on. If they don’t want to make an unpopular decision (“say goodbye to your Hummers” or somesuch) then they can say “you didn’t do a good enough job convincing me”. Most of the time the public is outraged when politicians refuse to act on evidence of increased risk to life and health (Katrina being a recent case-in-point). It seems to me that the right balance is struck when those that have to make the policy decisions define the standards of documentation and archive of the information on which they have to act.
“Journal peer review is not an audit process; it’s a cursory screening, and all too often it’s concerned more with defending a POV than ensuring replicability of results. When, as a reviewer for Climatic Change, I asked for supporting data and code, the editor said that I was the first reviewer in the 28-year history of the journal to do so and he refused to require the author to comply.”
It seems we are somewhat in agreement on this point. I have also experienced the occasional problem with reproducibility of results from the information given in journal papers. But subsequent contact with the lead author has often led to fruitful collaboration. My biggest problem with the peer-review process in journals is that editors seldom do their job these days. The reviewers are supposed to assist the editor in making his decision but the latter often abdicates his responsibility, essentially saying “unless you can satisfy the reviewer, the paper won’t be published”. Perhaps it’s because there are so many papers these days that editors just don’t have the time. A review is certainly not an audit but “cursory screening” belittles the work that many reviewers do. With reading the manuscript, making sure one understands the main concepts, checking the key references, examining the plots, maybe testing a few cases for oneself and ensuring that the conclusions do not overreach the data, I’d say a good two days’ (unpaid) effort is the norm for a first round review.
Re: Andy (#184),
Should we rely on Peer Review or do we need an Audit?
Should we trust “Policy Makers” to get things right? Shouldn’t we always question them, since they can and do get it wrong, just like anybody else?
In my view too few questions are asked; we wouldn’t be in this mess if more people questioned the consensus, as Steve does.
Or do you just want people not to question your scientific consensus?
Re: Andy (#184),
So what’s your favorite game, softball? Steve is going to have fun knocking that one out of the park.
You’re still missing the point that the authors Steve is pressuring are obliged by law and contract to provide their data. Certainly they can expect to limit this to legitimate requests, but Steve’s requests are legitimate.
Your bit about the politicians would be great if they actually cared. I’m not sure if you’ve noticed, but they don’t. They simply need answers. Contrary to your opinion, I’d suspect that most of the time the public does not know that their politicians are not acting on evidence properly, your Katrina anecdote notwithstanding. That’s a big one that gets lots of coverage and details are readily disseminated to the public. You can’t claim the same is true for a paper authored by Santer that maybe a few thousand people even know about, can you?
Mark
#184. As a start, there is a very clear policy statement requiring federal recipients to archive data after a limited period of exclusive use. If NSF enforced these policies on climate scientists that I deal with, that would go a long way to resolving some of my issues. Or if journals enforced their stated policies. To a considerable degree, NSF has become a cheerleader and has abdicated its compliance responsibilities. This happens from time to time in business situations and it seldom is to the benefit of shareholders and the public.
If you notice, I spend far more time and attention trying to get journals and agencies to administer their stated policies. I actually take more exception to mealymouthed answers from bureaucrats and editors than to obstruction by climate scientists.
But if a scientist like Santer chooses to make a spectacle of himself as in the present case, I’ll see what happens under FOI, under journal policies and under agency policies.
Mark: “…we’re talking about data acquired through public funding that is legally required to be available to the public at large.”
Most research of the kind I outlined is publicly funded. As far as I understand it, newly collected ‘raw’ data are often required to be made available, after calibration and validation. However, this is generally secondary to the requirement to publish the findings in a peer-reviewed manner (whether a professional journal or agency report) and often slips under the radar. Maybe there is a decent scientific career to be had in demanding other folks’ hard-won data. Won’t win you many friends, tho’, especially if your intent is merely to find fault with their work.
“Your anecdote is just that, an anecdote. We aren’t talking about sailing to the Antarctic…”
Do you think that, in the anecdote (which I consider to be quite typical of research data and a lot more relevant than you imply), the “give me all your data and programs” request is unreasonable? I can see that making the DEM available is reasonable (and usually desirable for the author). Raw ‘calibrated’ data maybe (but not popular!). Intermediate data and programs – just can’t see it happening.
Re: Andy (#190),
Andy, I am very surprised at several of your comments. You write:
The intent is to get the science right. If you find mistakes, it improves the science. Are you really arguing that Steve should not spend his time auditing climate papers? You admit the failure of peer-review. People do not replicate the research. They may spend a few hours getting caught up on their reading to find out if the paper is in line with received wisdom of the field, but they do not attempt to replicate. At least not often enough in climate science. I remember this line from a textbook:
How often does that happen in climate science? People are jumping to accept alarmist findings without any replication. Steve is doing the hard work of making certain the science is correct. He should be applauded, praised and rewarded for his work (have you seen the tip jar in the upper left?), not denigrated.
If you want to be constructive, find errors in Steve’s work. He makes all of his data acquisition, methods and code freely available. He will thank you if you find a mistake, because it improves the science.
#184. In mining stocks, you are legally required to make timely disclosure through press releases reporting your drill results. These press releases are filed with the relevant securities commissions and typically maintained on company websites. So yes, I would refer the questioner to the relevant information.
As a related issue, stock watchers know that if results are a little bit late, they are seldom good news. This seems to apply to climate science as well, though lateness here can stretch to years, instead of days.
I applied this principle to making two successful predictions in respect of suspiciously delayed programs. Lonnie Thompson’s Bona Churchill ice core still hasn’t been published. In 2005, I predicted these results would be “bad”. An online workshop proceeding turned up a while ago showing that O18 values in the 20th century were not at record levels, but down from 19th century values. Bona Churchill data is still unarchived and unpublished. This would be illegal in a mining speculation.
I also saw references to new work in 2002 at Sheep Mountain, the most important Mann site. Noting that this hadn’t been published, I predicted that this would not be “good” news using the same stockwatch principle.
In 2006, Linah Ababneh, a student of Malcolm Hughes at U of Arizona, reported “bad” results in her PhD thesis. The University of Arizona actually went to the trouble of blocking my IP address from access to their website (where this was online) so it was a while before I noticed this. Mann et al 2008 continued to use the old Sheep Mt version. This would be illegal for a mining promoter.
Sorry for the bad formatting – I clearly haven’t mastered the “Link” function.
gens: “Not to pile on (well, actually, yes, to pile on), this comment is just wrong. In the finance / economics field, this is accepted practice. The mighty AER has even codified this in their submissions policy – check”
Umm… …don’t know what to say really. Do you have any physical science journal examples? Maybe it should be common practice in this field but it isn’t. Apologies for not checking an economics journal. Quite impressive set of guidelines from AER. Maybe people aren’t exploiting this but it could really be very difficult to maintain one’s intellectual property if you have to hand over data and programs. Perhaps people will end up just being more guarded in what they publish…?
Re: Andy (#194),
Well, in Organic Chemistry, back when I was studying it in the 1960s, a very important reference was Beilstein’s handbooks, which were basically recipes for producing particular organic chemicals. In addition to detailed instructions for how to produce them, they also listed the expected yields, somewhat analogous to giving confidence intervals. I wish I could remember for sure if each synthesis required replication. I think it did, but certainly it was considered an honor to get your synthesis published in Beilstein’s. Maybe something like that is needed in Climate Science.
What do you think, Steve M? McIntyre’s Handbooks of Climate Science. Any article submitted with sufficient data and methods that it can be audited gets published (in a stylized format) so that people later interested can rely on the data.
Re: Dave Dardinger (#199),
Dave,
I think you might be remembering Organic Syntheses. A requirement was that a “referee” actually repeat the work. They might have actually been called “checkers”.
As I recall, detailed footnotes were part of the published procedure, which often included comments by the reviewers pointing out what the submitters may have missed in the procedure or not clearly described.
Re: John M (#207),
That may be, though I’m sure Beilstein had a lot of info on organic chemistry. I suppose I ought to go look for info online….
Of course back when I was going to college it wasn’t available (or at least not readily) in English, and to complete the ACS course you had to have two years of German.
But you are right: Beilstein specializes in structure, while it’s Organic Syntheses I was thinking of, which gave preparation methods. But the main point is the same; it’s a way to reward important work which otherwise is ignored, to the detriment of the advance of science. Something like that needs to be done for Climate Science.
Re: Andy (#194),
You ask for physical science journal examples of a data archiving policy. Have you heard of AGU? They have a very good data policy. I am not certain they enforce it when it comes to climate science, but the policy is good.
Re: Andy (#194), from the policies at Nature:
Not absolutely explicit, but the general tone is clear. Read the page to get a better feel.
The American Economic Review example was specifically cited in McIntyre and McKitrick 2005b http://data.climateaudit.org/pdf/mcintyre.ee.2005.pdf -see the closing section. So this example is hardly novel to this site.
And for what it’s worth, for the most part, we practiced what we preached – archiving our code (which was consulted by our critics). For GRL, we did so at the journal for MM2005a. For EE, we did so at a website, though this sort of grey archiving is something that I discourage for others. Maybe I’ll suggest to WDCP that they set up a code archive or maybe I’ll place it in a public wiki. Unfortunately, I didn’t archive code for our Reply to Huybers at the time, though I eventually placed it at a former website (and it’s now in the CA archives).
Much of the Mann et al 2008 source code has been archived, but it’s still incomplete, e.g. confidence interval estimation. I asked one of the named reviewers, Gerry North, how the confidence intervals were calculated and, like an Irish cop, he told me to “move along”, but didn’t answer the question. I don’t suppose that he has a clue how it was done.
Replication has quite a long history in economics – and the AER policies arise in part from past problems. See e.g. this interesting paper by Daniel Hamermesh – a well respected US labor economist.
Click to access dp2760.pdf
Economics shares with paleoclimate studies the problem of limited data sources and being for the most part non-experimental. When it is possible to repeat experiments, simple replication of others’ results is not necessary because you can just do your own experiment. But in non-experimental fields replication is often the only way of ensuring that results are indeed robust, and of exploring why the results came out the way they did. For example, it was only because of Steve’s attempt to replicate the MBH results that it emerged that the method Mann et al described as principal components was anything but standard principal components analysis.
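The decentering point in that last sentence can be compressed into a short simulation: running “principal components” on trendless red noise centered over only the final subperiod, instead of the full record, tends to produce hockey-stick shaped PC1s. The series counts, lengths and AR(1) coefficient below are arbitrary choices for the sketch, not the MBH98 settings.

```python
import numpy as np

rng = np.random.default_rng(7)
n_series, n_time, late, trials = 50, 400, 80, 100  # late = final subperiod

def pc1(data, center_slice):
    # Center on the chosen subperiod, then take the leading SVD component
    centered = data - data[center_slice].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

def hockey_index(series):
    # |final-subperiod mean minus full mean| in full-period std units
    return abs(series[-late:].mean() - series.mean()) / series.std()

scores = {"full": [], "late-only": []}
for _ in range(trials):
    X = np.zeros((n_time, n_series))
    for t in range(1, n_time):                 # persistent, trendless AR(1)
        X[t] = 0.9 * X[t - 1] + rng.normal(0, 1, n_series)
    scores["full"].append(hockey_index(pc1(X, slice(None))))
    scores["late-only"].append(hockey_index(pc1(X, slice(-late, None))))

for name, vals in scores.items():
    print(f"{name:9s} centering: mean hockey-stick index = {np.mean(vals):.2f}")
```

Under full-record centering the PC1 of pure noise has no preferred shape; under late-subperiod centering the index is typically inflated, because series whose final-window mean happens to differ from their long-run mean get preferentially loaded onto PC1.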
IMO there are a LOT of similarities between the statistical problems of econometrics and paleoclimate. Economists and econometricians readily understand statistical issues affecting paleoclimate analyses where most climate scientists can’t even see that there’s an issue. It’s quite bizarre. It’s a noticeable cultural divide at this blog, where we have many highly knowledgeable statisticians and economists who simply do not accept the prevarications of climate scientists, who all too often come here as though they are visiting royalty, then leave in a temper tantrum without ever dealing with the statistical issues. In such exchanges, they typically blame too much roughhousing, but more often they simply don’t give a good account of themselves.
Re: Steve McIntyre (#197),
Hey, but sometimes there are nice surprises like Geert Jan, no?
An interesting powerpoint presentation by one of the editors of the AER is here
www7.nationalacademies.org/data/Data_Moffitt.ppt
And just for fun, when I googled “replication economics” I came across this exercise for students……
Click to access replication_exercise.pdf
OK I’ve had a skim-read of the Ababneh thesis and what I could find on the Bona Churchill ice core. I would’ve expected a well-oiled (a.k.a. well-funded) machine like Thompson’s to turn out the results quite quickly if they had been easy to understand (i.e. fitted the original hypothesis). So it looks as tho’ they have a lot of thinking to do. In fact both cases indicate that local climate is a significant issue when interpreting proxy data. I guess that, in turn, means there’s always likely to be pickings for anyone of a skeptical disposition…
Back to the original point. It seems that Santer et al. are, umm, disinclined to cooperate. It seems that they are being defensive and this is perhaps somewhat understandable. I’m not sure how to overcome the divide but I don’t think it’s too hard to undertake the analysis steps that Santer refers to, if you’re concerned that there’s something rotten in the state of CA. That could legitimately lead to a “comments on” paper in IJoC, assuming you find something wrong. But I s’pose there’s also the (not widely accepted, in this field at least) principle of making datasets available. It may be worth Santer’s (or maybe NOAA’s) while to give you the data since, as #28 points out, the data availability issue is potentially more harmful anyway.
Re: Andy (#200),
You talk about Santer et al being “umm, disinclined to cooperate.” Is that acceptable behavior in your mind?
I found an online reference on the Importance of Replication. Quote:
If “researchers are typically willing to provide details” and journals and funding sources require it, how or why should climate science be exempt?
To my mind, the claim of an exemption to the standards of science brings dishonor on the researcher, the journal and his place of employment. I am sorry, but I do get a little worked up over this issue. Let me say it plainly. Any researcher who refuses to provide the information necessary to replicate is not a scientist. He is a pseudoscientist and his work should be treated as such.
Re: Ron Cram (#221), Tell us how you really feel, Ron…
Re: Craig Loehle (#222),
Craig, I know, I know. It is the Cram temper coming out. I hope none are offended except those who need offending.
Dave: I was really expecting a physical science journal – Beilstein is a reference compilation. I don’t think the original authors were expected to verify the procedures. Some might have done so, of course. The Handbook authors obviously had an interest in replicating the experiments they had gleaned from the literature prior to inclusion – very rigorous (one might say Germanic) of them. But if you really want to audit, for example, the Unified Model, there’s an awful lot of code and documentation to review. Are you serious…?
Re: Andy (#201),
Are we talking a Climate Model or a GUT / TOE here? In any case, why shouldn’t the data, code and run outputs be available for such things? Obviously they aren’t in the papers themselves, but the SIs should have them.
Your whole position seems to be, “We’ve never done it that way before.” Not exactly a confidence-building position.
Isn’t the global climate supposed to be indicative of a sort of average of all the local climates anyway? 🙂
I don’t see how Santer’s refusal, particularly phrased as it was, is understandable. Steve’s request was reasonable, Santer’s reply was not. Perhaps you are trying too hard to polish the turd?
Mark
Andy,
Good on you for engaging here at CA. You can expect to be treated cordially, and your points addressed with respect. I have two reactions in response to your points.
The Australian JORC Code provides quite stringent requirements on mining and exploration companies to properly disclose their results. At the risk of repeating material that has already been posted, here are the principles under which the JORC Code is framed:
“4. The main principles governing the operation and application of the JORC Code are transparency, materiality and competence.
• Transparency requires that the reader of a Public Report is provided with sufficient information, the presentation of which is clear and unambiguous, to understand the report and is not misled.
• Materiality requires that a Public Report contains all the relevant information which investors and their professional advisers would reasonably require, and reasonably expect to find in the report, for the purpose of making a reasoned and balanced judgement regarding the Exploration Results, Mineral Resources or Ore Reserves being reported.
• Competence requires that the Public Report be based on work that is the responsibility of suitably qualified and experienced persons who are subject to an enforceable professional code of ethics.”
The JORC Code is enshrined in Australian Corporations Law and also Australian Stock Exchange Listing Rules, and it is mandatory that all listed mining and exploration companies comply. These rules have been developed over the past 20-25 years, and have become accepted good practice that all serious industry participants accept as being fair and reasonable. The outcome is that mining and exploration companies, especially the smaller ones for whom exploration results are decidedly material, report their exploration results in abundant detail. The JORC Code provides quite detailed guidance on the level of disclosure expected, and practice has improved markedly over the years.
The other major mining countries have adopted similar codes that are either based on the JORC Code, or on similar practice adopted for the same reasons.
The other point I want to make is that those who are concerned about man’s CO2 emissions, and consider that those CO2 emissions will cause catastrophic global warming, are in effect asking the world population to spend trillions of dollars to address the problem. And those trillions will affect the hip pocket of many millions of people through tax imposts and higher charges for energy. Whole industries will be shut down, and hundreds of thousands of people will be either dislocated or put out of work, or both. Now that is not to say that this isn’t a reasonable request of the world population if, as you so clearly believe, anthropogenic CO2 is the worst problem facing mankind.
I hope that you will forgive the more sceptical of us for asking what seem to me to be reasonable questions before we simply agree to your requests. It is surely reasonable for us to ask whether the temperature record that you are using really can be relied upon – have the urban heat island effects been properly adjusted for? Are the temperature readings reliable? Has the changing list of temperature stations been properly accounted for? And surely it is reasonable for us to ask whether the proposition that man’s CO2 emissions will lead to a doubling of atmospheric CO2 levels is really understood and substantiated. Do we really understand the CO2 cycle and processes on the planet? And why shouldn’t we ask how it is that you are so confident that a doubling of atmospheric CO2 levels (if it happens) will lead to an increase in Global Mean Temperature of 3 degrees C (the latest figure that James Hansen seems to be using), when the science indicates that the sensitivity is more like 1 deg C per doubling of CO2, with the extra 2 degrees coming from assumptions about feedback processes that, it seems, could just as well be negative (thus reducing the 1 deg C impact) as positive?
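For what it’s worth, that 1 deg C figure is just textbook radiative arithmetic that anyone can check. A back-of-envelope sketch (my own illustration, using the standard logarithmic forcing approximation and the Planck response at the ~255 K effective emission temperature):

import math

# No-feedback sensitivity check (illustrative only).
# Forcing for a CO2 doubling, standard logarithmic approximation (Myhre et al. 1998):
delta_F = 5.35 * math.log(2.0)       # ~3.7 W/m^2

# Planck response from Stefan-Boltzmann at the ~255 K effective emission temperature:
sigma = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
lambda_0 = 4.0 * sigma * 255.0 ** 3  # ~3.8 W m^-2 K^-1

print(delta_F / lambda_0)            # ~1 deg C per doubling, before any feedbacks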
It seems to me that the real climate scientists have not covered themselves in glory. They fail to provide clear evidence in support of their propositions, fail to address questions, fail to provide the data, processes, and programs that would support their conclusions, and generally seek to confuse and obfuscate the issues.
As well, you have Stephen Schneider and Al Gore both admitting that they consider it OK to “exaggerate” the concerns in order to raise public concern to the point where action is taken to address the problem. In their minds the end justifies the means.
It seems to me that it would be helpful if the real climate scientists would comply with the terms of their publicly funded grants in relation to reporting results and supporting information. It would be helpful if they would comply with the policies of the journals publishing their “peer reviewed” papers, which require disclosure of supporting information. It would be helpful if they would seriously address the questions directed to them. And it would be helpful if they desisted from the emotional ad hominem comments that seem to characterise the debate.
None of this is unusual or unreasonable. It is normal accepted practice in most other fields. Why should climate science be immune from requirements to adhere to sound practice?
trevor: OK, Steve’s already done the mining code of practice thing. I guess there’s a reason it had to be made mandatory.
Personally, I think that the big models are our best estimate of what will happen with the Earth’s climate over the next 100 years or so. I think the details are pretty poor. The issues with parameterization of sub-grid-scale processes are significant, and biological feedbacks are accounted for in a pretty rudimentary fashion at present. I think much more work is needed in order to be sure.

Climate simulations under different scenarios (maybe perturbation of key parameters within current knowledge uncertainties in an ensemble run) are often referred to as experiments. This is one way we can “tweak the knob” to see what might happen. Another way is to perform the experiment empirically; the problem with that approach is that we live in the experimental apparatus. I don’t have a problem with emphasizing the 10% probability scenario if it means that 2 m of sea level rise is the result, as long as the confidence levels are stated as well. If there’s a 10% chance I’ll be hit by a car when I cross the road, I’d want to know.

Modern AOGCMs don’t operate on a simplistic “the feedback is 2 degrees/degree” method. The amplification factor, while a useful means of explaining the role of feedback processes within the climate system, is not a parameter that’s hard-wired into models. The change of snow cover with air temperature, or sea-ice cover with SST, is parameterized based on relationships derived at least partly from empirical observation, but this is not the same thing as a simple f-factor. So I don’t consider it reasonable to say “feedback processes… …could just as well be negative… …as it could be positive”, especially about the ice-albedo feedback.
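To make that last point concrete, here is a toy contrast (purely illustrative – the numbers and functional forms are invented, and nothing here comes from any actual AOGCM):

def hardwired_response(delta_t_planck, f=2.0):
    # What models do NOT do: multiply the no-feedback warming by a fixed factor.
    return f * delta_t_planck

def snow_cover_fraction(t_surface_k):
    # What models do instead (schematically): a state variable such as snow
    # cover is parameterized as a function of temperature, and the albedo
    # feedback emerges from how that state responds. It is not a dial.
    return max(0.0, min(1.0, (280.0 - t_surface_k) / 20.0))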
Some “real climate scientists” may indeed not have “covered themselves in glory”, and/or may “seek to confuse and obfuscate the issues”. The many that I know personally are diligent scientists intent on improving understanding of our climate system, with no intent to “confuse and obfuscate”. It’s possible, maybe even likely, that people who have staked a career on a particular finding will stubbornly cling to it. It seems to me, at least, that that’s quite often true of mavericks in any field as well…
So, in summary, I’m not requesting you to do anything. I’m not sure that a higher-CO2 world would necessarily be a worse place, although vulnerable populations might find it hard to adapt to certain scenarios. I think it would be warmer, and the current range of predictions seems feasible. Of course the range of uncertainty is significant, so I include that in my assessment. The current models do a much better job of replicating the effects of aerosols, etc., than even a decade ago. I just don’t see any credible alternative for predicting the next 100 years or so. And legislation is not necessarily a bad thing. We have seat belts, air bags and catalytic converters in our cars – all basically legislated, and none of which have killed the auto industry (as GM predicted the seat belt would in the late 1950s). If anything, such legislation may stimulate the [global] economy somewhat, as long as it’s thought through carefully and everyone signs up to it.
Dave: OK, I mean the UKMO Unified Model, a version of which has been ported to Linux and used to be available (was last time I checked) but now looks as tho’ it requires a specific agreement with the UK Met Office. Still, there’s the Community Climate Model, which is available at:
http://www.earthsystemgrid.org/browse/browse.htm?uri=http://datagrid.ucar.edu/metadata/cgd/ccsm/thredds/ccsm.thredds
after a simple registration and a wait of, well, I’ll find out (I just registered, but it is Christmas, so…).
As I understand it (having typed “PCMDI” into Google a minute ago and clicked on the second-from-top link), output data from a number of models are available for download after registration.
“…and a wait of, well, I’ll find out…”
OK, it took 6 minutes for my registration to be approved before I could access the site.
#210. My request to Santer was for the data as used by him. It’s easy to register at PCMDI, but you won’t find the data as used by Santer there. I’m handy at extracting data from archives and don’t mind working at it, and the public record is pretty clear on that – so that’s not the issue. However, if you are able to replicate Santer’s data from the information in the article (the synthetic-MSU step is sketched after this comment), then bully for you. Please send it to me.
But y’know, I’m not going to argue about this. There are policies governing this. Let’s see what happens. I’ve sent FOI requests to the Department of Energy and NOAA; I’ve sent a request to the journal under their policies; I’ve sent a request to DOE under their mission statement. It seems to me that it would have been simpler for Santer to send me the data, but, as I say, if he wants to be a small center of attention in this corner of the blogosphere, I’ll oblige him.
I’ve had some success with this approach with other obstructionist climate scientists. We’ve been getting information from Jones and Briffa inch by inch under the British FOI legislation. Science magazine hated the bad press about Esper and eventually made him provide most of his data (though Thompson is too big for them to deal with).
If the climate science community chooses to cheer for obstructionism, as Phil Jones cheered Santer on, this is not a strategy that I would recommend to them. IF they are confident in their results, then they should provide data instantly. Take this sort of issue off the table.
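For anyone inclined to take up that challenge: in the simplest published variant, a “synthetic MSU” temperature is a static weighted vertical average of model-layer temperatures. A minimal sketch (my illustration only – the weights below are placeholders, not the actual published weighting functions, which would have to be taken from the literature):

import numpy as np

# Static weighting-function approach, schematically: a synthetic MSU
# brightness temperature is a weighted vertical average of model-layer
# temperatures. The weights here are placeholders for illustration.

def synthetic_msu(layer_temps_k, weights):
    temps = np.asarray(layer_temps_k, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * temps) / np.sum(w))

# Invented five-level temperature profile, surface to upper troposphere:
print(synthetic_msu([288.0, 275.0, 260.0, 240.0, 220.0],
                    [0.05, 0.20, 0.40, 0.25, 0.10]))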
#208. Sure, there’s a reason why the codes had to be made mandatory. There are a lot of tricky people in the world. But you’ve totally missed the point.
I sometimes discuss the laws and regulations governing how mining promoters deal with the public because such rules exist and there’s quite a bit of experience with them, so there are precedents and examples.
In the absence of any comparable code of practice for climate scientists, I’ve applied codes that apply to mining promoters as a minimum standard and, from time to time, assessed disclosure and due diligence by climate scientists against those standards.
Nobody has provided a rational argument that climate scientists should have a lower standard than mining promoters, though that seems to be what you’re suggesting. If you can make this case, I’m all ears.
It is my understanding that if you publish a paper describing a protein sequence/structure, you must publish your data to back up the claim. Andy seems to dismiss the argument that safety-critical studies, like those upon which drug acceptance and toxicology standards are based, are held to high standards, and that the same should apply to climate science. The legislation for GHG regulation makes regulating a single industrial chemical or pesticide seem totally trivial in terms of its potential economic impact. If this diverts money from health care, from research, from education, then yes, it can kill. It seems odd to me to act as though the people doing key research here have no responsibility to the public, when many of them in fact work in government labs and produce official data sets like GISS or CRU.
The logo at Santer’s employer rather caught my eye:
Steve,
I was wondering if you had seen this “opportunity to comment.”
Steve,
I decided to place my own comment on AGU’s Position on Human Impacts on Climate: Comments Invited. I do not know if the AGU will post my comment or not, but I would like it to be seen, and I do mention you and your work. This is my comment:
I agree with Fred Singer that the AGU has two choices – to follow the IPCC or to examine the scientific evidence independently. There are good reasons to examine the evidence independently.
1. A flurry of papers published in 2007 and 2008 have changed the scientific climate change landscape. These papers were published after the cutoff date for the IPCC’s Fourth Assessment Report. I will mention just a few important papers, including two on climate sensitivity: one by Stephen Schwartz of Brookhaven National Lab and one by Petr Chylek of Los Alamos National Lab. Chylek and Schwartz approached the question differently. Chylek had new data on aerosols and arrived at an estimate at the low end of the IPCC range. Schwartz took a new and compelling approach and arrived at an estimate even lower (a back-of-envelope version of Schwartz’s calculation is sketched after this comment). Another key paper, by Roy Spencer and co-authors, examined a newly discovered negative feedback over the tropics. They identified this negative feedback as possibly being the “Infrared Iris” effect hypothesized by Richard Lindzen. If confirmed, this discovery may explain why the Earth has not warmed as much as AGW theory projected over the last 30 years.
2. An independent review is also warranted because the IPCC put key authors in charge of reviewing their own papers. The authors are in a position to keep alternative viewpoints from being presented. This is not an independent review. Roger Pielke of CIRES has pointed out the problems with this approach and detailed the neglect of key research findings when they were contrary to the conclusions of the IPCC authors/reviewers.
* http://climatesci.org/2007/09/01/the-2007-ipcc-assessment-process-its-obvious-conflict-of-interest/
* http://climatesci.org/2007/11/30/climate-metric-reality-check-1-the-sum-of-climate-forcings-and-feedbacks-is-less-than-the-2007-ipcc-best-estimate-of-human-climate-forcings/
* http://climatesci.org/2007/06/20/documentation-of-ipcc-wg1-bias-by-roger-a-pielke-sr-and-dallas-staley-part-i/
* http://climatesci.org/2007/07/20/documentation-of-ipcc-wg1-bias-by-roger-a-pielke-sr-and-dallas-staley-part-ii/
3. The IPCC has not spent any time or effort in validating the General Circulation Models used to project future climate. The IPCC has discussed ocean heat content but has not used this important metric in model validation. Nor has the IPCC spent any time or effort in researching or using the principles of scientific forecasting. At least three peer-reviewed journals are dedicated to scientific forecasting, but the IPCC seems to be completely unaware of this literature. We now have enough data to conduct computer model validity tests. Some of these should be done using ocean heat content. Any resulting projections should meet the principles involved in scientific forecasting. Here are links to initial work in these areas.
* http://sciencepolicy.colorado.edu/prometheus/archives/prediction_and_forecasting/001376letter_to_nature_geo.html
* http://climatesci.org/2008/05/21/can-the-ipcc-model-projections-of-global-warming-be-evaluated-from-just-several-years-of-data/
* http://forecastingprinciples.com/Public_Policy/WarmAudit31.pdf
4. The IPCC continues to promote “hockey stick” graphs of paleoclimate reconstructions. The IPCC claims these reconstructions are independent confirmations of the original MBH reconstructions debunked by Steve McIntyre and Ross McKitrick. However, the National Academy of Sciences investigated and agreed with McIntyre and McKitrick that strip bark trees such as bristlecone pines are not temperature proxies and should not be used in reconstructions. The new, supposedly “independent,” reconstructions promoted by the IPCC all use strip bark trees or other non-temperature proxies. It is time for an independent review of this controversy as well.
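To make the Schwartz approach mentioned in point 1 concrete, here is a back-of-envelope sketch (mine, not Schwartz’s code; the numbers are approximate values from his 2007 paper):

import math

# Schwartz (2007, JGR) treats the climate system as a single heat reservoir,
# so equilibrium sensitivity is roughly time constant / effective heat capacity.
tau = 5.0                    # years: relaxation time from GMST autocorrelation (approx.)
C = 17.0                     # W yr m^-2 K^-1: effective heat capacity (approx.)
F2x = 5.35 * math.log(2.0)   # W m^-2: forcing for doubled CO2

sensitivity = tau / C        # ~0.3 K per (W m^-2)
print(sensitivity * F2x)     # ~1.1 K per CO2 doubling, below the IPCC range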
#224. The NSF does NOT require paleoclimate scientists to archive source code. They don’t even require them to archive data. They have abdicated their responsibilities, saying it’s up to the journals. IPCC has likewise abdicated their responsibilities to the journals – both somewhat ironic in that the editor of the journal that published Santer didn’t even know whether his journal had a policy. What a comedy.
Re: Steve McIntyre (#225),
I am not so sure. If true, it is probably changing. NSF policy for Social, Behavioral and Economic Sciences requires researchers to archive or share mathematical or computer models they have developed. I would be surprised if NSF would not like to see a uniform policy on these questions.
Strangely, Geosciences has three different data policies. The Atmospheric Sciences division’s policy is still under development. The most promising is from Ocean Sciences.
The Division of Ocean Sciences says it “expects investigators to share with other researchers, at no more than incremental costs and within a reasonable time, the data, samples, physical collections and other supporting materials created or gathered in the course of the work. It also encourages grantees to share software and inventions or otherwise act to make the innovations they embody widely useful and usable.”
If paleoclimate scientists get any funding from Division of Ocean Sciences, they would be bound to share software.
The Division of Earth Sciences requires “researchers and organizations to make results, data, derived data products, and collections available to the research community in a timely manner and at a reasonable cost.” A good lawyer might argue that source code created to analyze data is a “derived data product” and should be archived. But you might have a fight on that.
And of course, you always have the possibility of non-compliance even when the policy is very clear.
Re: Steve McIntyre (#225),
I have communicated with climate scientists who published in AGU publications and did not know about AGU data archiving policies, but I really am surprised that Santer’s editor did not know whether his journal had a policy or not. What a comedy indeed.
Ron, the Atmospheric Sciences division is the one that is relevant to what I’ve been doing. In one instance, NSF actually intervened to tell Mann that he didn’t have to provide data – even before Mann had actually refused (which he was going to do, but hadn’t yet done for that particular request.) I was new at this then and was shocked by overt interference by an NSF employee. The Atmospheric Sciences policies have been “under development” for years now, without anything being done.
Re: Steve McIntyre (#232),
Wow. What a story. Perhaps it is time for some congressmen to put pressure on Atmospheric Sciences to bring their policies into the 21st century. If they matched the Ocean Sciences policy I would be satisfied – that is, as long as they enforced them.
Keep up the good work, Steve.
Re: Ron Cram (#233), I’m curious which US congressman you think cares about issues such as these? Certainly none of those that run Congress now. Not to be “chippy” (as Steve puts it), but the odds aren’t in favor of science for this particular topic. Pick another topic and you’ll have more support. 😉
Mark
Re: Mark T (#239),
You are correct that Republicans are out of power on Capitol Hill and do not have the power to hold investigative hearings as they once did. But this should not be a partisan issue at all. Actually, after some consideration, I think the best person for the job is not a congressman but Senator John McCain. He already has a pro-global-warming pedigree. Putting a data policy in place for NSF Atmospheric Sciences is the right thing for science.
Ron: Just a quick comment on the AGU data policy – it actually looks very reasonable to me too. As I submit to and review for AGU journals, I pretty much have to sign up to it (not sure if I’d read it before, tho’…).
Apologies but Christmas is here and I have snatched a couple of minutes to check back – my wife is already asking if I’m “busy”…
Re: Andy (#234),
The line in the AGU policy that I think is interesting is this:
Although the policy does not spell this out in great detail, it seems clear they want new techniques for data handling and analysis to be archived and widely disseminated. When Michael Mann devises a new statistical method for analyzing data, he should archive the source code for his new technique. Without doubt, science is better served when innovative techniques like Mann’s are made available so other scientists can test them and use them. By extension, all data-analysis methods and code should be explained, and that is best done by archiving the code. Science is supposed to be open. Only pseudoscientists would claim special knowledge or privilege or try to hide their methodologies. Scientific papers cannot be accepted as science until they have been tested and confirmed. It is the purpose of data policies like AGU’s to make that process as simple as possible.
BTW, I am glad you got a chance to read the policy. After re-reading it myself, it could be clearer in places. I would like to see it more like the NSF Ocean Sciences data policy, which expects software developed on the government’s dime to be shared with other researchers. I see no reason AGU should not embrace that.
Steve: Actually, NSF Earth Sciences is the one that best meets your requirement. Santer’s monthly synthetic MSU data would be a derived data product. But I don’t see software being a requirement. Ocean Sciences’ policy is essentially about submitting data (and metadata) collected during cruises to an appropriate NDC within a reasonable time-frame.
Christmas call again…
Ron: I don’t see the requirement for making source code available in public archive in the AGU data policy. The sentence you have highlighted refers to AGU publications and the role they play in disseminating new techniques to the community. A new technique can be described in a paper without having to provide the source code. Their data policy really deals with just two issues. Firstly, AGU spell out their requirement for referencing a dataset, i.e., if a dataset is referenced in a paper, it must be publicly available. If it is not publicly available then it cannot be referenced explicitly in the text and must be treated in a similar manner to that of a personal communication in the manuscript. Secondly, if a paper essentially describes the creation of a new dataset (what AGU refers to as a “data paper”) and not with scientific analysis, etc., then the dataset which is being described must be made publicly available.
As far as I can see, NSF Ocean Sciences does not require that software be made available, although they do encourage it. It is my recollection that NASA expects this but I looked at their policy on STI and couldn’t find anything specific (I may have missed something tho’).
Re: Andy (#238),
I misread that line. I see you are correct. The AGU policy requires data archiving but not really anything on source code. You are right again that NSF Ocean Sciences “encourages” researchers to make software available, but does not use the word “expects.” I am not sure why the difference in wording. If the government paid for the development of the source code, it can surely require that the code be shared with other researchers for the purpose of replicating published science papers.
It shouldn’t be, but sadly, it is a very partisan issue.
Mark
While I suspect the issues brought to the fore by Andy here are not all that unusual in other fields, as noted in the excerpt and link below, I do think it would be helpful if Andy could say what he would do in a case like Santer’s, and give some reasoning for his potential actions, specifically noting any negative effects on the furtherance of science.
I know that we see publishing scientists and others who come to CA to provide rationalizations for scientists’ reactions, like those of Santer in this case, which in the end seem to me to have more to do with covering for a particular scientist’s fragile psyche than with looking directly at how science is affected. Sometimes it seems as though a scientist can excuse the actions of another scientist and yet state that he would not have reacted in a like manner.
That a scientist can “get away” with some rather “prima donna-ish” reactions is not in doubt, as Santer and others noted here at CA have in the past. I personally doubt that Steve M’s actions will cause the “stubborn/resisting” scientist to conform, but I do see where Steve’s actions, and the reactions from scientists such as Andy in this case and other defenders, can show laypersons and the less directly involved how the system works, so they can judge it accordingly.
I think what Andy describes about the situation could certainly lead one to conclude that a consensus on a particular topic might be just that much more difficult to legitimately break through.
I would guess that “hard science” work is much easier to document and duplicate than softer sciences like climate science. The climate science that we analyze here also has the weakness of scientists taking raw data from other scientists in the field and then using statistical methods to draw conclusions that might weigh on public policy making. Those intermediate scientists, who may or may not be the same as those obtaining the raw data, certainly should be looking to those outside their fields for help, if for nothing more than to confirm the correctness of what they have done with the data.
It would appear from what I hear that some scientists in these situations have an ownership issue, arising from all the effort they put into manipulating and/or collecting the data, and are likely to give it up only to those with whom they have close relationships – kind of like the inbred circles of influence that Wegman noted.
http://overlawyered.com/
Steve Mc,
Speaking of the AGU and requirements for source code documentation, perhaps you have a soulmate in the AGU.
I wonder how his talk was received.
link
Looks like somewhere between the preview and the comment, the link was lost. After looking closely, it looks like there is some redirecting going on that the blog software doesn’t like. I think this will work.
http://www.agu.org/cgi-bin/wais?mm=IN11C-1044
GRL and JGR do not require paleoclimate authors to observe AGU policies. I wrote one GRL editor about this a couple of years ago and he said that they were already asking reviewers to do enough without putting them to the additional burden of asking them to ensure that authors archived their data.
During the IPCC AR4 review, D’Arrigo et al was then pending at JGR. I asked IPCC for the supporting data and they blew me off, saying that they would not do “secretarial” duties for me, even though I had contacted a “secretariat”. They told me that archiving was the responsibility of the journals. I wrote to the journal (JGR) asking for the data and additionally asking them to ensure that the data was archived in accordance with AGU policies.
Instead of doing that, they complained back to IPCC. IPCC said that I only knew about this article through IPCC and this knowledge was subject to confidentiality and that requesting data was a breach of confidentiality and that, if it happened again, I would be expelled as a reviewer. I still don’t have the data.
Re: Steve McIntyre (#246),
Stories like these… wow. Some of them I have read before but am still shocked. I think I will become a member of AGU so I can speak out on this.
BTW, as I mentioned above, EOS was requesting comments on whether AGU should update its 2003 statement on “Human Impacts on Climate.” I left my comment but learned it will probably not be posted on the website because I am not an AGU member. There’s another reason to join.
Also, I just wrote an article for Citizendium on “Ocean heat content.” Citizendium was founded by Larry Sanger, one of the founders of Wikipedia who left because he wanted to improve the quality of the articles. I hope readers will find it interesting.
Just observed the following comment on RealClimate; I thought I’d record it here:
In:
http://www.realclimate.org/index.php/archives/2010/02/close-encounters-of-the-absurd-kind/
Comment: 72
Pasteur01 says:
25 February 2010 at 8:01 AM
Dr. Santer,
Did your immediate supervisor decide to release the data before you did?
“…preparation of the datasets and documentation for them began before your FOIA request was received by us.” Dr. David Bader, January 2009
“A little over a month after receiving Mr. McIntyre’s Freedom of Information Act requests, I decided to release all of the intermediate calculations I had performed for our International Journal of Climatology paper.” Dr. Ben Santer, February 2010
Re: ZT (Feb 25 15:29),
Ya, but he sure whined about it in the emails. I think he even threatened to quit LLNL, if I recall correctly.
Also posted on RC…
Further to my earlier post, I created the following timeline from the leaked emails and from CA posts.
On November 10, 2008 McIntyre receives refusal to release intermediate data from Dr. Santer.
McIntyre files FOIA request for the intermediate data and related emails on the same day, November 10, 2008.
On November 10 or 11, 2008, Thomas Karl informs Dr. Santer of Mr. McIntyre’s request.
Dr. Santer cc’s his immediate superior, Dr. David Bader, among others, in a response to Thomas Karl in an email dated November 11, 2008.
Thereafter, but before December 16, 2008, Dr. Santer sends an email to his co-authors indicating that he learned of Mr. McIntyre’s FOIA request “earlier this morning.” In that same email Dr. Santer indicates that he had been discussing the “FOIA Issue” with others including Dr. Bader for “several weeks.”
On January 30, 2009 Dr. Bader sends an email to Mr. McIntyre indicating that the decision to release the intermediate data was made before the FOIA request was received.
In his recent statement on this blog Dr. Santer indicated that the decision to release the intermediate data was made after receiving the FOIA request.
Re: Pasteur01 (Feb 26 14:38),
I got a little confused. Perhaps Dr. Santer needs to perform a time series reconstruction.
My favorite email from that time period is where Gavin Cawley thinks he found a Santer boo boo (Wigley calms the team down). Published at RealClimate: group @ 12 December 2007
“Once more unto the breach, dear friends, once more!” (King Henry, no less)
From: Tom Wigley
To: Ben Santer , Phil Jones
Subject: [Fwd: Re: Possible error in recent IJC paper]
Date: Sat, 01 Nov 2008 18:50:12 -0600
Hi Ben & Phil, No need to push this further, and you probably realize this anyhow, but the
RealClimate criticism of Doug et al. is simply wrong.
Re: Pasteur01 (Feb 26 14:38),
OK, here’s a timeline:
1224005421.txt
From: Ben Santer
To: David Douglass
Subject: Response
Date: Tue, 14 Oct 2008 13:30:21 -0700
Prof. Douglass,
You have access to EXACTLY THE SAME radiosonde data that we used in our
recently-published paper in the International Journal of Climatology
(IJoC). You are perfectly within your rights to verify the calculations
we performed with those radiosonde data. You are welcome to do so…
—–
1225140121.txt
From: Phil Jones
To: santer1@llnl.gov
Subject: Re: End of the road…
Date: Mon Oct 27 16:42:01 2008
Ben,
It seems that Climate Audit has been discussing the paper. I had
a look whilst I was in Iceland as I had nothing better to do a few times.
It was cold and snowy outside, there was internet…..
Seems as though they are making some poor assumptions; someone
is trying to defend us, but gets rounded upon and one of the co-authors
on the paper is in touch with McIntyre.
As it isn’t me, and I can rule out a number of the others, my list of who
it might be isn’t that long….
Looking forward to next week !!
Cheers
Phil
—–
This one is the interesting one:
1225412081.txt
From: Ben Santer
To: “‘Philip D. Jones'”
Subject: [Fwd: Re: [Fwd: Typo in equation 12 Santer.]]
Date: Thu, 30 Oct 2008 20:14:41 -0700
Dear Phil,
I thought you’d be interested in my reply to Gavin (see forwarded email).
Cheers,
Ben
Dear Gavin,
There is no typo in equation 12. The first term under the square root in
equation 12 is a standard estimate of the variance of a sample mean
(see, e.g., “Statistical Analysis in Climate Research”, Zwiers and
Storch, their equation 5.24, page 86). The second term under the square
root sign is a very different beast – an estimate of the variance of the
observed trend….
—–
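An aside on the statistics, since equation 12 keeps coming up in these exchanges: the two terms Santer describes combine the uncertainty of the multi-model mean trend with the uncertainty of the observed trend. A minimal sketch of that kind of two-term test statistic – my reconstruction from the description, assuming OLS trends with a lag-1 autocorrelation adjustment, not Santer’s actual code:

import numpy as np

# Two-term "difference of trends" statistic, schematically: the denominator
# combines (i) the variance of the multi-model mean trend (a standard
# variance-of-a-sample-mean estimate) and (ii) the variance of the observed
# trend, estimated from regression residuals with an effective-sample-size
# adjustment for lag-1 autocorrelation.

def trend_and_se(y):
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (intercept + slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = len(y) * (1.0 - r1) / (1.0 + r1)      # effective sample size
    s2 = np.sum(resid ** 2) / (n_eff - 2.0)       # adjusted residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return slope, se

def d_statistic(model_trends, obs_trend, obs_trend_se):
    m = np.asarray(model_trends, dtype=float)
    var_model_mean = m.var(ddof=1) / len(m)       # variance of a sample mean
    return (obs_trend - m.mean()) / np.sqrt(var_model_mean + obs_trend_se ** 2)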
1225462391.txt
From: Ben Santer
To: “Thorne, Peter” , Peter.Thorne@noaa.gov, Leopold Haimberger , Karl Taylor , Tom Wigley , John Lanzante , Susan.Solomon@noaa.gov, Melissa Free , peter gleckler , “‘Philip D. Jones'” , Thomas R Karl , Steve Klein , carl mears , Doug Nychka , Gavin Schmidt , Steven Sherwood , Frank Wentz
Subject: [Fwd: Santer et al 2008]
Date: Fri, 31 Oct 2008 10:13:11 -0700
Dear folks, While on travel in Hawaii, I received a request from Steven McIntyre for all of the model data used in our IJoC paper (see forwarded email). After some conversation with my PCMDI colleagues, I have decided not to respond to McIntyre’s request…
—–
[The other Gavin, but in the timeline: 1225465306.txt
From: “Cawley Gavin Dr \(CMP\)”
To:
Subject: RE: Possible error in recent IJC paper
Date: Fri, 31 Oct 2008 11:01:46 -0000
Cc: “Jones Philip Prof \(ENV\)” , “Gavin Schmidt” , “Thorne, Peter” , “Tom Wigley”
Dear Ben,
many thanks for the full response to my query. I think my confusion arose from the discussion on RealClimate (which prompted our earlier communication on this topic), which clearly suggested that the observed trend should be expected to lie within the spread of the models,…]
—–
1226337052.txt
From: Ben Santer
To: Steve McIntyre
Subject: Re: FW: Santer et al 2008
Date: Mon, 10 Nov 2008 12:10:52 -0800
Dear Mr. McIntyre,
I gather that your intent is to “audit” the findings of our
recently-published paper in the International Journal of Climatology
(IJoC)…I gather that you have appointed yourself as an independent arbiter of
the appropriate use of statistical tools in climate research. Rather
that “auditing” our paper, you should be directing your attention to the
2007 IJoC paper published by David Douglass et al., which contains an
egregious statistical error….
—–
Then,
1226451442.txt
From: Ben Santer
To: “Thomas.R.Karl”
Subject: Re: [Fwd: FOI Request]
Date: Tue, 11 Nov 2008 19:57:22 -0800
…Providing Mr. McIntyre with the quantities that I derived from the raw
model data (spatially-averaged time series of surface temperatures and
synthetic Microwave Sounding Unit [MSU] temperatures) would defeat the
very purpose of an audit…McIntyre’s request (2) demands “any correspondence concerning these
monthly time series between Santer and/or other coauthors of Santer et
al 2008 and NOAA employees between 2006 and October 2008”. I do not know
how you intend to respond this second request. You and three other NOAA
co-authors on our paper (Susan Solomon, Melissa Free, and John Lanzante)…
—–
Then, from Wigley,
1226456830.txt
From: Tom Wigley
To: santer1@llnl.gov
Subject: Re: [Fwd: FOI Request]
Date: Tue, 11 Nov 2008 21:27:10 -0700
…Hmmm. I note the following…
“at which I can be contacted between 9 and 7 pm Eastern Daylight Time”
Is this a 22 hour, or, for people with time machine, a negative 2 hour
window?
Joking aside, it seems as a matter of principle (albeit a principle yet
to be set by the courts) that provision of primary data sources that are
sufficient to reproduce the results of a scientific analysis is all that
is necessary under FOI.
It also seems that judgment of what correspondence is central to the
analysis can only be made by the persons involved…
—–
Then, of course, Phil is involved:
1226500291.txt
From: Phil Jones
To: santer1@llnl.gov
Subject: Re: [Fwd: FOI Request]
Date: Wed Nov 12 09:31:31 2008
Ben,
Another point to discuss when you have your conference call – is
why don’t they ask Douglass for all his data…
—–
It goes on into December…
1228249747.txt
From: wigley@ucar.edu
To: santer1@llnl.gov
Subject: Re: Further fallout from our IJoC paper
Date: Tue, 2 Dec 2008 15:29:07 -0700 (MST)
Cc: santer1@llnl.gov, “Thorne, Peter” , peter.thorne@noaa.gov, “Leopold Haimberger” , “Karl Taylor” , “Tom Wigley” , “John Lanzante” , susan.solomon@noaa.gov, “Melissa Free” , “peter gleckler” , “‘Philip D. Jones'” , “Thomas R Karl” , “Steve Klein” , “carl mears” , “Doug Nychka” , “Gavin Schmidt” , “Steven Sherwood” , “Frank Wentz”
Ben,
I support you on this. However, there is more to be said than
what you give below. For instance, it would be useful to note
that, in principle, an audit scheme could be a good thing if done
properly. But an audit must start at square one (your point). So,
one can appear to applaud McIntyre at first, but then go on to
note that his modus operandi seems to be flawed…
—–
Here’s Schmidt with political advice
1228258714.txt
From: Gavin Schmidt
To: santer1@llnl.gov
Subject: Re: Further fallout from our IJoC paper
Date: 02 Dec 2008 17:58:34 -0500
Cc: “Thorne, Peter” , Peter.Thorne@noaa.gov, Leopold Haimberger , Karl Taylor , Tom Wigley , John Lanzante , Susan.Solomon@noaa.gov, Melissa Free , peter gleckler , “‘Philip D. Jones'” , Thomas R Karl , Steve Klein , carl mears , Doug Nychka , Steve Sherwood , Frank Wentz
Ben, there are two very different things going on here. One is technical
and related to the actual science and the actual statistics, the second
is political, and is much more concerned with how incidents like this
can be portrayed. The second is the issue here…
—–
1228330629.txt
From: Phil Jones
To: santer1@llnl.gov, Tom Wigley
Subject: Re: Schles suggestion
Date: Wed Dec 3 13:57:09 2008
Cc: mann , Gavin Schmidt , Karl Taylor , peter gleckler
Ben,
When the FOI requests began here, the FOI person said we had to abide
by the requests. It took a couple of half-hour sessions – one at a screen – to convince them
otherwise, showing them what CA was all about. Once they became aware of the types of people we were
dealing with, everyone at UEA (in the registry and in the Environmental Sciences school
– the head of school and a few others) became very supportive. I’ve got to know the FOI
person quite well and the Chief Librarian – who deals with appeals. The VC is also
aware of what is going on – at least for one of the requests, but probably doesn’t know
the number we’re dealing with. We are in double figures.
One issue is that these requests aren’t that widely known within the School. So
I don’t know who else at UEA may be getting them. CRU is moving up the ladder of
requests at UEA though – we’re way behind computing though. We’re aware of
requests going to others in the UK – MOHC, Reading, DEFRA and Imperial College.
So spelling out all the detail to the LLNL management should be the first thing
you do. I hope that Dave is being supportive at PCMDI.
The inadvertent email I sent last month has led to a Data Protection Act request sent by
a certain Canadian, saying that the email maligned his scientific credibility with his
peers!
This one is interesting:
1233326033.txt
From: Ben Santer
To: Smithg
Subject: Re: data request
Date: Fri, 30 Jan 2009 09:33:53 -0800
Dear Mr. Smith,
Please do not lecture me on “good science and replicability”. Mr.
McIntyre had access to all of the primary model and observational data
necessary to replicate our results. Full replication of our results
would have required Mr. McIntyre to invest time and effort. He was
unwilling to do that…
—–
Also,
1233586975.txt
From: Ben Santer
To: P.Jones@uea.ac.uk
Subject: Re: [Fwd: data availability]
Date: Mon, 02 Feb 2009 10:02:55 -0800
Reply-to: santer1@llnl.gov
Dear Phil,
Yes, this is the same Geoff Smith who wrote to me. Do you know who he
is? From his comments about the RMS, he seems to be a Brit…
—–
Love the hit parade.
As for my posts: I finally understand that the two FOIA requests, NOAA and DOE, found their way to Dr. Santer at different times.
But I still don’t understand why Dr. Bader found it necessary to insist for the record that they decided to post the data before receiving Mr. McIntyre’s FOIA request. I mean they already knew he was trying to get it through NOAA employees. And very obviously (even without the benefit of the leaked emails) they were preparing and posting the data in response to that request.
Steve: It’s sometimes hard to figure out why they say what they do.
Re: Pasteur01 (Feb 27 11:18),
I don’t think the “hit parade” is even complete (just based on my search algorithm), but it gives context. I think the Bader statements are sort of administratively political, trying to hold off a flood of “harassing” FOIA requests – a claim that Jones and Santer frequently made. There were so many times when they could simply have followed standard practice and published what they were supposed to… At a minimum, it made it look like there was something rotten, some “dirty laundry”, perhaps.
13 Trackbacks
[…] What, pray tell, is wrong with “many” requesting results of post-processed data used in peer reviewed papers? (I assume here Gavin’s use of pronoun “many” means “one”. Steve McIntyre recently requested data from Benjamin Santer. When Ben refused to provide data, Steve filed a request under the FOI.) […]
[…] The rest is here: Santer Refuses Data Request « Climate Audit […]
[…] More here. […]
[…] “scientists” are really scientists in the classical sense. Today’s evidence is this letter from Dr. Benjamin Santer of the Lawrence Livermore Laboratory to McIntyre’s request for data […]
[…] You can read about how “scientists” in this case repeatedly rebuffed his efforts to get the source data behind their claims here. […]
[…] by the way, Santer refused to release the data for the “rebuttal” he claims to Steve McIntyre – I wonder why? Maybe Santer thought it […]
[…] “Santer et al 2008“, 16 October 2008. He then sent the following request (see here for full details of this correspondence): Dear Dr Santer, Could you please provide me either with […]
[…] that, SteveM’s data request October, 2008, might have prodded Santer to inform his boss Bader that underlying data had been requested. In […]
[…] Santer Refuses Data Request, Climate Audit, Steve McIntyre, 10 November 2008 — Excerpt: Email from McIntyre dated 20 […]
[…] me of these plans when I originally requested the data in October 2009. Instead, as reported at CA here, Santer not only made no mention of Livermore’s data release plans, instead repudiating my […]
[…] me of these plans when I originally requested the data in October 2008. Instead, as reported at CA here, Santer not only made no mention of Livermore’s data release plans, instead repudiating my […]
[…] a nuisance. How could you presume to check the validity of our work, you unworthy *%$&#!?’: Santer Refuses Data Request « Climate Audit. This ClimateAudit post refers to a different paper, of course, but the pattern is the same. […]
[…] what data? Scientist refuses to share data with a skeptic, making everyone wonder what he’s hiding. Nice play, […]