Back to reporting on our presentation to the NAS panel, after which I’ll report on Mann. We presented last in the day, immediately following von Storch. Hughes and Mann presented on Friday morning. We gave them a long written presentation, and touched the high points in our PPT, also providing them with a CD of our papers. Our PPT presentation is cited in the NAS "other materials"; our "handout" is mentioned but not cited.
In deciding what to present, we decided to focus primarily on our own published work (i.e. our MBH critiques). Like von Storch, we gave some answers to the Boehlert questions, not realizing that the panel seemed to be distinguishing its task from answering the Boehlert questions. Because the Divergence Problem grew legs in the early afternoon session, we added a couple of slides to our PPT presentation on the run, showing that it was not just a “few” series and that Briffa et al had not “explained” the problem, as well as adding a few slides from my AGU presentation showing statistical problems with the multiproxy studies as a group. Much was left unsaid.
The approach in our presentations will be pretty familiar to readers of our material. However, we varied the exposition a little from previous expositions to reflect both the audience and the situation. We placed a lot more emphasis on MBH describing itself as a “new statistical approach”. The discrepancy between authors proposing a "new statistical approach" and MBH's reluctance to disclose details of that approach had been raised in sharp terms by third-party econometricians in Anderson et al. MBH98 had said that they took a "new statistical approach" because they found "conventional approaches" to be "relatively ineffective."
If you have a set of proxies that are supposedly temperature proxies, then the most common "old statistical approach" would be to standardize the proxies and take an average. We showed the difference between the MBH reconstruction and a simple mean in the figure shown below. So the "new statistical approach" applied to MBH98 proxies obviously yields quite different answers than a conventional approach.
Figure 1. Top – mean of 415 proxies after standardization; bottom – MBH98.
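For readers who want to see what the conventional "old statistical approach" amounts to in practice, here is a minimal sketch: standardize each proxy series to zero mean and unit variance, then take the simple mean across proxies. This is a generic illustration on synthetic data, not the actual MBH98 proxy network or any specific published method.

```python
import numpy as np

def simple_mean_reconstruction(proxies):
    """Standardize each proxy series (zero mean, unit variance)
    and average across proxies -- the conventional alternative
    to a regression-based multivariate method."""
    proxies = np.asarray(proxies, dtype=float)   # shape: (n_proxies, n_years)
    means = proxies.mean(axis=1, keepdims=True)
    stds = proxies.std(axis=1, ddof=1, keepdims=True)
    standardized = (proxies - means) / stds
    return standardized.mean(axis=0)             # simple mean across proxies

# Synthetic example: 5 hypothetical proxies over 10 "years"
rng = np.random.default_rng(0)
recon = simple_mean_reconstruction(rng.normal(size=(5, 10)))
```

With equal weights and no regression step, no single proxy group (e.g. bristlecones) can dominate the composite, which is why the contrast with the MBH98 result in Figure 1 is informative.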
We classified the principal aspects of the “new statistical approach” as being:
1) temperature PCs reified as “climate fields”;
2) Mannian PCs applied to tree ring networks;
3) a sui generis* multivariate methodology in the regression step applied to the post-tree ring PC proxies.
We placed more emphasis on (3) than in prior expositions, because the full measure of MBH data mining is not the difference between the no-PC MBH result (which concedes the sui generis multivariate method) and MBH98, but between the mean of all the proxies and MBH98. I’ve become increasingly frustrated at the ad hoc use by climate scientists of sui generis multivariate methods with no citation of third-party statistical literature on confidence intervals or other statistical properties. (Hegerl et al, submitted, presented at the NAS panel, is merely another example.)
We also drew attention to three main MBH claims that, in our opinion, led to widespread acceptance of MBH in the field: (1) statistical skill in RE, r and r2 statistics; (2) robustness to the presence/absence of dendro indicators; (3) confidence intervals. I don’t remember many questions from the presentation, but here I remember Cuffey asking me on what basis we were making these claims: as sociologists? I didn’t have a very snappy answer for this; I just said that we were simply trying to put the matter in context for this presentation. With a little time to think about it, I’d say this: I have extensive experience in examining securities offering documents for promotional aspects, and considerable practical experience with disclosure requirements in securities offerings, including practical knowledge of what constitutes a representation in such a document. That experience informed my judgement that these representations in MBH98 amounted to warranties.
We then provided a detailed discussion of these representations, including exact quotations of the original representations and what we had determined by attempting to verify these claims. We asserted that "replication" of MBH98 required not just approximate representation of a squiggle, but replication of claimed skill and robustness. MBH claims of statistical skill and robustness to the presence/absence of all dendroclimatic indicators stand in stark contrast to actual results. The contrast is not very subtle. We didn’t have the Ammann and Wahl results at the time – they were released almost immediately after the NAS panel session ended. However, we do now and we plan to send them in to the panel, showing that even Ammann and Wahl have (however unwillingly) confirmed our claims of MBH statistical failure. We showed through the CENSORED files that the lack of robustness to bristlecones was known at the time.
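The skill claims at issue above turn on standard verification statistics. As a reference point, the RE (Reduction of Error) statistic can be sketched as follows; this is the generic textbook definition used in dendroclimatology, not a reproduction of MBH98's exact implementation:

```python
import numpy as np

def reduction_of_error(obs_verif, est_verif, calib_mean):
    """Reduction of Error (RE) skill statistic: 1 minus the ratio of
    the reconstruction's squared error over the verification period
    to that of a 'null' reconstruction equal to the calibration-period
    mean. RE > 0 is conventionally read as evidence of skill."""
    obs = np.asarray(obs_verif, dtype=float)
    est = np.asarray(est_verif, dtype=float)
    sse = np.sum((obs - est) ** 2)
    sse_null = np.sum((obs - calib_mean) ** 2)
    return 1.0 - sse / sse_null
```

A reconstruction that merely reproduces the calibration mean scores RE = 0, and a perfect reconstruction scores RE = 1; much of the dispute concerned whether RE alone, without r and r2, suffices to establish skill.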
We skipped over replication issues in our oral presentation, but there is lots of information in our written material.
We added in some slides from my AGU presentation to show a common pattern over all multiproxy studies: a failed Durbin-Watson statistic in the calibration period, together with a catastrophic reduction of the verification r2 statistic in the verification period. I was asked about the periods for these calculations — they were all done with common Mannian periods. One of the panelists asked what a Durbin-Watson statistic was.
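For the panelist's question: the Durbin-Watson statistic tests regression residuals for first-order autocorrelation, and the verification r2 is simply the squared correlation between observed and reconstructed values in the holdout period. A minimal sketch of both (generic definitions, not the exact AGU calculations):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic on regression residuals.
    Values near 2 indicate little first-order autocorrelation;
    values well below 2 indicate positive autocorrelation --
    a 'failed' DW in a calibration fit."""
    r = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)

def verification_r2(observed, predicted):
    """Squared Pearson correlation between observed and
    reconstructed values over a verification period."""
    c = np.corrcoef(observed, predicted)[0, 1]
    return c ** 2

# Alternating residuals are strongly negatively autocorrelated,
# pushing DW well above 2 (toward 4)
dw = durbin_watson([1., -1., 1., -1.])
```

A failed DW in calibration means the fit's error structure is misspecified, so calibration-period fit statistics overstate skill; a collapsing verification r2 then shows the reconstruction does not track observed variance out of sample.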
We made a few comments about the other presenters, noting that both Hegerl and D’Arrigo had refused to provide data during the IPCC process. We proposed to the panel that they not rely on any studies where the data had not been archived (this would include Luterbacher). This led to some discussion afterwards in which Hegerl, in particular, argued that their paper had been submitted to Nature and was under embargo – as though that was a sufficient explanation of why they could not provide supporting data (or even identify the 12 sites) to IPCC. My conclusion was the opposite – if they were unable to supply supporting data for IPCC review, then they should have withdrawn the paper from the IPCC process (or IPCC should have deleted references to it).
We showed a couple of slides on the Divergence Problem — a graphic from Briffa showing declining ring widths and density in the second half of the 20th century and that wonderful cargo cult quote from Briffa. We also showed a nice tripartite slide showing Site Spaghetti — the difference between Briffa’s old Urals series, the update and the Yamal substitution.
What other questions can I remember? I don’t remember very many, as I was pretty focussed on matters at hand. One of the panelists noticed a difference between the Bürger and Cubasch figure in our PPT and what seemed to be a similar figure in B and C itself; that was because we used a figure from the B and C supplementary information for our PPT. Otto-Bliesner asked me what we would do if we were trying to make our own reconstruction. I said that we had no views on the matter at present. With the benefit of Hughes’ presentation, I think that I could reasonably say that I would recommend a “Schweingruber scheme” over a “Fritts scheme”, if it’s not too unfair to call Mannian data manipulation a “Fritts scheme”.
Afterwards, there was some discussion of investigators retaining privileged access to their data. North said that he thought that this was necessary simply because the investigator had obtained the data. I opined that exploration geologists go to just as remote locales as paleoclimate people and it would never occur to them that they “owned” their data. Turekian said that exploration geologists made much more money than academics; I contested that, saying that many exploration geologists live contract-to-contract and scuffle. I don’t get the impression that hard-rock exploration geologists, the ones that I know, make more money than academics. In addition, the U.S. government has already set standards limiting exclusive access to two years to cover this. The problem is that NSF doesn’t enforce it. Hence ridiculous situations like no archived information on the 1989 Dunde ice core until Thompson grudgingly archived the plot points of a Climatic Change figure in 2004 (in response to my complaint) – still not an adequate archive.
There must be some other questions that are slipping my mind at present; I’ll add them in as I or Ross (or Ned or others) remember.
How did we do on balance? I’m sure that there were lots of aspects of the presentation that could be improved. This was only my second presentation to an academic audience (the other being at AGU last December) — so I’m not tremendously polished at this. My main concern was that someone on the panel would ask a question that undermined what we were saying. Nothing like that happened. (In fairness, the panel is pretty new to the game and maybe they will have more probing questions after a little more time on the file.) I’m prone to go into what Ross views as excessive detail; on the other hand, I think that it’s important to convey to the panel that we have control over all the details and that this is one of our biggest strengths. Anyway, no one remotely challenged anything that we said.
One remaining point about von Storch that I didn’t mention before. In addition to his vivid criticisms of Hockey Team matters, he showed two charts with MBH confidence intervals on them and said that he could not replicate them and thought that they were wrong. I mention this because we also discussed confidence intervals, and it will be interesting to observe how the panel dealt with this issue, raised by both of us, when Mann’s turn came. The entire topic of confidence intervals is extremely important.
There was a pleasant cocktail party after the presentations and I met a number of panellists and several presenters. (Mann and Hughes did not attend, as I mentioned before.)
So what was the score after the first day of play? None of the presenters would endorse a claim that climate of 1000 years ago could be reconstructed within half a degree. Yet the error bars in the famous MBH99 hockey stick graphic used in IPCC TAR were (slightly) less than half a degree.
Alley and Schrag back-pedalled away from millennial reconstructions, with Alley making some quite extraordinary comments that the academic community could do better if it was a “priority”. (Would I ever be irritated if I were a policy-maker funding these guys.)
D’Arrigo explained that you needed to cherry pick to make cherry pie. The “Divergence Problem” reared its ugly head; D’Arrigo said that Briffa had the answers, but we showed that Briffa’s answer was simply cargo cult science.
Hegerl talked about confidence intervals from the floor to the ceiling for low-correlation reconstructions.
In addition to dumping on the HS, von Storch dumped all over the non-replicability of multiproxy studies. All this was before we even went on.
We made severe criticisms of MBH (and other studies) and no one on the panel challenged these criticisms. I’ve posted up on Hughes already and I don’t think that he said anything that would have re-assured the panel.
So by the time that Mann started his presentation at mid-morning Friday, objectively, I would say that Sir Humphrey should have been pretty concerned about how things were going — Statement of Task or no Statement of Task. Sir Humphrey needed a knock-out blow from Mann. Did he deliver? More on this tomorrow.
*The Latin term “sui generis” means, in a legal context, “unique”.