The NAS Panel is scheduled to issue its report, "Surface Temperature Reconstructions for the Last 2,000 Years" at 11 a.m. on Thursday. I suspect that many people would expect me to be worried about what the panel will say.
Actually, I’m not worried in the slightest.
Based on presentations to the panel, NAS is in an extremely awkward position if its original intent was to whitewash the situation. If they touch the key questions at all, they have little wiggle room in which to avoid some pretty adverse findings. If they avoid or fail to answer the key questions – some of which simply require reporting on factual situations – then the House committees are going to be pretty mad at them for wasting their time.
Here are some of the key questions where I’ll be looking to see if the NAS Panel provided answers or played dodgeball.
House Committee Questions
The first thing to look for is simply: did the NAS panel answer (a) the questions sent to NAS by the House Science Committee (the Boehlert questions) and/or (b) the questions that had been asked of Mann et al by the House Energy and Commerce Committee (the Barton questions).
Looking for answers to these questions is distinct from checking against the panel terms of reference (which also should be done). In setting up the panel, NAS administrators, presumably up to and including Ralph Cicerone, played a very tricky game by issuing terms of reference for the panel that were incomplete relative to both the Boehlert and Barton questions. I thought that this was pretty cute on NAS’ part, but "cute" has a habit of backfiring, and that could easily happen in this case. For reference, here are excerpts from the two question sets:
I am writing to ask you to empanel a balanced group of scientists to provide Congress with expert guidance on the current scientific consensus on the paleoclimate record and particularly on the work of Drs. Michael Mann, Raymond Bradley and Malcolm Hughes (the so-called "hockey stick" thesis). The group should, in a clear and concise report issued in a relatively short period of time, answer the following questions:
1. …What are the main areas of uncertainty [regarding the temperature record of the last 1000 to 2000 years] and how significant are they?
2. … What are the principal scientific criticisms of their [Mann, Bradley and Hughes] work and how significant are they? Has the information needed to replicate their work been available? Have other scientists been able to replicate their work?
5. According to The Wall Street Journal, you have declined to release the exact computer code you used to generate your results. (a) Is this correct? (b) What policy on sharing research and methods do you follow? (c) What is the source of that policy? (d) Provide this exact computer code used to generate your results.
7. The authors McIntyre and McKitrick (Energy & Environment, Vol. 16, No. 1, 2005) report a number of errors and omissions in Mann et al., 1998. Provide a detailed narrative explanation of these alleged errors and how these may affect the underlying conclusions of the work, including, but not limited to, answers to the following questions:
a. Did you run calculations without the bristlecone pine series referenced in the article and, if so, what was the result?
b. Did you or your co-authors calculate temperature reconstructions using the referenced “archived Gaspé tree ring data,” and what were the results?
c. Did you calculate the R2 statistic for the temperature reconstruction, particularly for the 15th Century proxy record calculations and what were the results?
d. What validation statistics did you calculate for the reconstruction prior to 1820, and what were the results?
e. How did you choose particular proxies and proxy series?
This cuteness led to a couple of interesting events in the NAS panel process. Partway through his presentation, Von Storch put up a slide referring to the Boehlert questions. The panel, including the chairman, was nonplussed, as they had never seen the Boehlert questions. There was some discussion as to whether von Storch would even be allowed to present his answers to these questions. The panel indicated little appetite for wading into detailed issues such as data availability, seemingly wanting to spend its time on “big picture” issues. At the end of the first day, a representative of the House Science Committee told them in no uncertain terms that the House Committee wanted answers to specific questions so that these questions could be taken off the table one way or the other; there would be many more opportunities for big picture reports. Later (March 30) the panel’s terms of reference were amended to require it to comment on data availability, but this was long after presentation day.
If it wanted to, the panel could easily have construed its terms of reference broadly enough so that it actually answered both the Boehlert questions and the Barton questions. I’d be dumbfounded if they gave comprehensive answers to these questions, which will leave the two committees with some interesting choices, if they are still interested in the issues.
Next I’ll do a quick review of "battleground issues" to look for in the NAS panel report.
1) Data Availability
Data availability was one of the original Boehlert questions that was not represented in the NAS terms of reference and, as noted above, was re-added during the process on March 30. In my opinion, the panel does not appear to have discharged this aspect of its mandate very effectively, and I’d be astonished if they produce anything other than generalities on this topic. Answering questions about data availability requires detailed work and cannot be done by the usual academic fallback technique of a literature review. This is essentially an accounting question. Had actual accountants or business consultants been charged with producing an answer, they would have had some staff working on it; the staff would almost certainly have re-interviewed me about data issues, as I’m knowledgeable about them. This didn’t happen, so my surmise is that the panel has simply not done any investigation of data availability and will render mere generalities back to the House committees.
One particularly colorful incident in the presentations was when Von Storch repeated the famous Phil Jones quote (which he might have picked up from climateaudit):

"We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it." (Phil Jones)
Von Storch condemned this attitude of Jones in the strongest possible terms, stating to the committee that "relevant data and details of algorithms need to be made public even to 'adversaries'". Von Storch also advised the committee, in answer to one of the Boehlert questions, that "the [MBH] information required for replication was not made available in a suitable manner".
2) The "Divergence Problem."
The Divergence Problem falls into the category of a “main area of uncertainty” and is something that I’ll be looking for. This issue was not really on the table when presentation day started, but really grew legs during the hearing, long before we got on the stage. It got going when Cuffey noticed that the D’Arrigo et al 2006 proxy reconstruction went down after 1985, despite rising temperatures. He asked D’Arrigo about the discrepancy; she said, “That’s the ‘Divergence Problem’”, and thought that she’d answered the question.
The "Divergence Problem" arises because the majority of temperature-sensitive tree ring width "site chronologies" go down in the last half of the 20th century. These ring width chronologies are the “active ingredients” in nearly all 1000-year temperature reconstructions – corals and the like are just window-dressing. Yet tree ring widths are not increasing with warm late 20th century temperatures; they are declining. To his credit, Cuffey followed up with the $64 question to D’Arrigo: if tree rings are not picking up late 20th century warmth, how can you be certain that they did not respond the same way to a comparably warm period in the past (e.g. the Medieval Warm Period)? The responses of D’Arrigo and others at the panel were, to say the least, unsatisfactory. In our PPT presentation, we included a graphic from Briffa et al reinforcing the issue. Cuffey must have emerged very dissatisfied with placing much weight on such reconstructions. It really is an important battleground issue for people seeking to rely on Hockey Team 1000-year "reconstructions". It wasn’t a "battleground issue" when the presentations began, but it became one. IPCC TAR totally dodged the issue. This is an Ohio or Florida – it could go either way. It’s a good one to watch.
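Cuffey's $64 question can be made concrete with a toy model. This is a minimal synthetic sketch (entirely hypothetical numbers and a hypothetical response function, not any actual tree ring model): suppose ring widths track temperature only up to some saturation cap. Then the proxy goes flat during modern warming – and would have gone equally flat during a comparably warm medieval period, so the reconstruction cannot rule out comparable past warmth.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical nonlinear response: ring width tracks temperature up to a
# cap, then flattens -- one conceivable mechanism behind the divergence.
def ring_width(temp, cap=0.5):
    return np.minimum(temp, cap) + 0.05 * rng.standard_normal(temp.shape)

medieval = np.full(50, 0.8)           # a hypothetical warm period
modern = np.linspace(0.0, 0.9, 50)    # late-20th-century warming

# The proxy "diverges" from modern warming once temps exceed the cap...
print(ring_width(modern)[-5:])        # flat near 0.5, not near 0.9

# ...and would have recorded an equally warm medieval period the same way.
print(ring_width(medieval)[:5])       # also flat near 0.5
```

The point is purely logical: if the proxy does not register the one warm period we can check instrumentally, its silence about past warm periods carries little evidential weight.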
3) Verification Statistics
One of the Barton questions shown above is the question to Mann about his verification r2 results. Arguably, this is the question that launched the NAS panel, as, in response to the Barton Committee, NAS wrote to the Committee specifically mentioning this question as one more suitably answered by the formation of an "independent expert panel (according to our standard rigorous study process) to assess the state of scientific knowledge in this area". Mann’s answers to the Barton Committee on this and the related questions were evasive at best.
During Mann’s presentation, a NAS panelist asked Mann the same question as the Barton Committee – did you calculate the verification r2 statistic and what was the result? Mann replied: "We did not calculate the verification r2 statistic – that would be a foolish and incorrect thing to do". None of the statisticians on the panel tried to pin Mann down on this – a passivity that I found very strange and unsatisfactory. Afterwards, panelist Nychka of UCAR told me that, just because they didn’t say anything, didn’t mean they didn’t notice.
This answer by Mann puts the NAS Panel in a real quandary. The previous day, we had discussed this very question and provided conclusive evidence to the committee that Mann did calculate the verification r2 statistic. See the relevant section of our PDF and PPT.
The panel had all this evidence in their hands prior to Mann’s presentation. So Mann’s bald-faced denial that he had ever calculated the verification r2 is not going to help his credibility with the committee very much, although it would be surprising if they called a spade a spade. After presentation day, matters got even worse, because Wahl and Ammann’s revision became available. It grudgingly conceded that Mann’s verification r2 was ~0.
The entire issue of verification statistics was a tar baby for them to start with, much complicated by "that would be a foolish and incorrect thing to do". There’s simply nothing that they can truthfully say on the matter that does Mann any good. Will they dodge the question totally?
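To see why the verification r2 matters, here is a minimal synthetic sketch (invented numbers, not MBH data or code) of why r2 and the RE (reduction of error) statistic commonly used in this literature can tell very different stories: a "reconstruction" that gets the verification-period mean roughly right but has no year-to-year skill can post a strongly positive RE while its verification r2 sits near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 50-year verification window. The instrumental record sits
# about 0.4 deg C below the calibration-period mean; the "reconstruction"
# tracks that level shift but has no interannual skill (independent noise).
n = 50
obs = -0.4 + 0.1 * rng.standard_normal(n)   # observed anomalies vs calibration mean
rec = -0.4 + 0.1 * rng.standard_normal(n)   # right level, zero correlation

# Verification r2: squared Pearson correlation over the verification period.
r2 = np.corrcoef(rec, obs)[0, 1] ** 2

# RE: skill relative to predicting the calibration-period mean (zero here).
re = 1.0 - np.sum((obs - rec) ** 2) / np.sum(obs ** 2)

print(f"verification r2 = {r2:.3f}")   # near zero: no year-to-year skill
print(f"RE              = {re:.3f}")   # strongly positive: level shift captured
```

A reconstruction like this "passes" RE while failing r2, which is why reporting only one of the two statistics conceals information.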
4) Confidence Intervals
Confidence intervals came up in two different ways. Cuffey asked every presenter whether they could estimate the temperature 1000 years ago to within half a degree. Other than Mann, they all said no. Whether by coincidence or intention, half a degree was the 95% confidence interval of IPCC TAR. I’d be surprised if the NAS panel made the comparison, but others (including me) will.
Confidence intervals also came up in a more technical context. We strongly criticized the Hockey Team methodology of estimating confidence intervals based on calibration period residuals rather than verification period residuals, and asked the panel to declare against using calibration period residuals. Because Mann’s verification r2 is ~0, the verification period residuals are large – hence the negligible r2 statistic – and confidence intervals based on them would be very wide, probably from the "floor to the ceiling" in Hegerl’s phrase. So there are two aspects of confidence intervals to keep an eye on.
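The effect of the residual choice can be illustrated with a toy overfit regression. This is a hedged sketch with made-up numbers, not the Hockey Team's actual method: regressing temperature on many predictors that are in truth pure noise, the in-sample (calibration) residuals shrink because the fit absorbs noise, while residuals on held-out (verification) data reveal the true, wider uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: regress temperature on many noise "proxies",
# mimicking an overfit multiproxy calibration over a short period.
n_cal, n_ver, k = 79, 50, 30
X_cal = rng.standard_normal((n_cal, k))
temp_cal = 0.25 * rng.standard_normal(n_cal)

coef, *_ = np.linalg.lstsq(X_cal, temp_cal, rcond=None)

# Calibration residuals look reassuringly small (the fit absorbs noise)...
sd_cal = np.std(temp_cal - X_cal @ coef)

# ...but residuals on held-out verification data tell the real story.
X_ver = rng.standard_normal((n_ver, k))
temp_ver = 0.25 * rng.standard_normal(n_ver)
sd_ver = np.std(temp_ver - X_ver @ coef)

print(f"2-sigma band from calibration residuals:  +/- {2 * sd_cal:.2f}")
print(f"2-sigma band from verification residuals: +/- {2 * sd_ver:.2f}")
```

Confidence intervals built from the first band understate the error; the second band is the honest one, and when verification skill is near zero it is very wide.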
5) Bringing the proxies up to date
A relatively non-contentious recommendation could arise here. One of the crazy aspects of proxy reconstructions is the use of so many proxy series ending by 1980, before recent warming (although the “Divergence Problem” might be a factor affecting proxy selection). Alley made interesting observations about the problems of relying on academic institutions and doctoral/post-doc programs for what are essentially “routine” updates. This could catch the Panel’s attention such that they make recommendations to NSF.
6) Proxy Selection

"You need to pick cherries to make cherry pie”, D’Arrigo told an astonished panel, which was undoubtedly expecting to hear about more sophisticated selection protocols. Ralph Cicerone of NAS must have winced. Procedures for selecting proxies are one of the big issues in multiproxy studies, as current methods seem arbitrary at best and biased at worst. (The use of the HS-shaped Yamal chronology instead of the Polar Urals update, with its high MWP, is an example publicized on this site.) The Barton Committee asked Mann et al to describe their selection procedures, but received an unhelpful answer.
In a follow-up to our presentation, we submitted a graph with a high MWP from picking apples instead of cherries – merely to illustrate the impact of arbitrary selections. Just for fun, we included some of Cuffey’s data in the reconstruction.
Ideally one would hope that the panel will turn its attention to proxy selection protocols, but I’m not hopeful of this.
7) δ18O in Ice Cores
Another major uncertainty in proxy interpretation is that some key recent studies (Hoffmann et al., 2003; Vuille and Werner, 2005; Vuille et al., 2005) have attributed δ18O changes in tropical ice cores primarily to changes in precipitation amount, although these series have been interpreted and presented as temperature proxies. The uncertainty of this interpretation needs to be squarely addressed by the panel.
8) Other Uncertainties
It would be nice if the panel listed all the various identified uncertainties with proxies: non-normality (especially with Moberg) and its impacts; for tree rings, altitude changes, "modern sample bias", etc.
So I’ll be looking for all these issues and more. What if the NAS Panel avoids all or most of the contentious issues and simply produces IPCC Lite? Or if it produces a "two-handed report" – on the one hand, … on the other hand,… (as rumors suggest)? This would re-open the door for the House committees and be rather an embarrassment for the Boehlert committee, which sponsored the NAS panel, and also raise questions about how NAS selected specialties to be represented on the panel.
If this happens, then NAS itself should answer some questions about panel composition. We pointed out the absence of replication specialists and the absence of statisticians with exactly appropriate sub-specialty expertise (the two statistical panelists being more frequency-domain types).
WSJ article, including the wonderful Mann quotation: "Giving them the algorithm would be giving in to the intimidation tactics that these people are engaged in," he says.
Cicerone, R., 2005. Letter to House Energy and Commerce Committee, July 15, 2005.
NAS Project http://www8.nationalacademies.org/cp/projectview.aspx?key=BASC-U-06-01-A
M&M Presentation to NAS Panel PPT
M&M Followup to NAS Panel