This is the title of a famous Sherlock Holmes story and not intended as a slight to any individual.
Take a look at the Review Comments for AR4 Second Draft Chapter 6 online here.
While I was reviewing these comments, I noticed that there are no reported comments on chapter 6 from Caspar Ammann, one of the major participants in the paleoclimate debate. In the Chapter 6 Review Comments, there are many comments about his article, but none by him. In the IPCC roster, he is shown as a Contributing Author but not a reviewer [Note – this has been modified in light of #7 below].
I’ve just gone through the exercise of searching all the First Draft and Second Draft Review Comments for any on-the-record comments by Ammann and found none. Perhaps the online Review Comments do not include comments by reviewers if they are Contributing Authors to another chapter. If anyone feels like perusing the Review Comments for examples, I’d be interested. This is a bit surprising, since Susan Solomon’s review comments were submitted through the IPCC review process and are on the record, so why wouldn’t Caspar Ammann’s comments on chapter 6 also be on the record?
Terminology from a then-unsubmitted paper turns up in the Chapter Author replies to Review Comments. Compare the terminology in the answer to Review Comment 6-735 (which would have been final around Aug 4, 2006) to the language in Ammann and Wahl, which, as we recently learned, was submitted on August 22, 2006. Some points of language track exactly and do not occur elsewhere in the literature. So there is no doubt that Ammann made written comments to the Chapter Authors of Chapter 6, but these are not on the record, despite IPCC policy, which states:
All written expert and government review comments will be made available to reviewers on request during the review process and will be retained in an open archive in a location determined by the IPCC Secretariat on completion of the Report for a period of at least five years.
Update: Here is an amazing parallel between the Replies to Review Comments for chapter 6 and Ammann and Wahl 2007. The Reply to Review Comment 6-735 stated:
Rejected – the text is based on the authors’ interpretation of the current literature (and all papers cited are within current IPCC publication deadline rules). The text gives a balanced view.
Please note the following –
The MM05d benchmarking method is based on an entirely different analytical framework than that used by MBH98.
MBH used the standard method in climatology of making a random time series based on the low-order AR characteristics of the target time series during the calibration period, here the N. Hemisphere mean. This random process is repeated in Monte Carlo fashion and its skill in replicating the actual target time series is evaluated according to any measure of merit in which the investigator is interested.
MM’s method instead uses the full-order AR characteristics of one of the proxies used in the reconstruction to create pseudoproxies in a Monte Carlo framework. These are then input into the reconstruction algorithm along with white noise pseudoproxies for all the n-1 remaining proxies. This is, in theory, a statistically meaningful procedure, which asks what kind of apparent skill is available in the reconstruction simply from one proxy’s noise. However, this procedure is not general and would need to be repeated for each proxy set to be examined. Also, it would need the subjective choice of which single proxy should be modelled according to its red noise characteristics each time.
Finally, it does not take into account that some of the verifications seen as “skillful” are associated with very poor/exceedingly poor calibrations, which would be rejected on first principles in real world reconstruction applications. This consideration indicates that the 0.51 threshold cited by MM is actually, at least somewhat, overstated.
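For concreteness, here is a minimal sketch in Python of the first of the two frameworks described in this reply, the target-based red-noise benchmark (the MM-style proxy-based benchmark would additionally require the reconstruction algorithm itself, so it is not sketched here). This is my own illustration, not code from MBH98, MM05, or the Chapter Authors: the AR(1) persistence model, the synthetic stand-in for the N. Hemisphere mean, and all parameter choices are assumptions. RE is computed in the usual way as 1 − SSE/SSM, with SSM taken relative to the calibration-period mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def re_score(obs, est, cal_mean):
    # Reduction of Error: 1 - SSE/SSM, with SSM measured against the
    # calibration-period mean of the observations.
    sse = np.sum((obs - est) ** 2)
    ssm = np.sum((obs - cal_mean) ** 2)
    return 1.0 - sse / ssm

def ar1_series(rho, sigma, n, rng):
    # Random series with lag-1 autocorrelation rho and marginal s.d. sigma.
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    innov_sd = sigma * np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0.0, innov_sd)
    return x

# Placeholder "target": stands in for the N. Hemisphere mean temperature,
# split into calibration and verification periods (lengths are arbitrary).
n_cal, n_ver = 100, 50
target = ar1_series(0.3, 1.0, n_cal + n_ver, rng)
cal, ver = target[:n_cal], target[n_cal:]

# Low-order (here AR(1)) persistence fitted to the calibration period.
rho_hat = np.corrcoef(cal[:-1], cal[1:])[0, 1]
sigma_hat = cal.std()

# Monte Carlo: how much verification-period RE does persistence alone produce?
scores = []
for _ in range(1000):
    sim = ar1_series(rho_hat, sigma_hat, n_cal + n_ver, rng)
    # Treat the random series itself as a candidate "reconstruction".
    scores.append(re_score(ver, sim[n_cal:], cal.mean()))

print(f"fitted rho = {rho_hat:.2f}; "
      f"99th-percentile RE benchmark = {np.quantile(scores, 0.99):.3f}")
```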
I defy anyone to show me any support for these comments in the peer-reviewed literature as of August 4, 2006. But here are some quotes from Ammann and Wahl 2007:
Standard practice in climatology uses the red-noise persistence of the target series (here hemispheric temperature) in the calibration period to establish a null-model threshold for reconstruction skill in the independent verification period, which is the methodology used by MBH in a Monte Carlo framework to establish a verification RE threshold of zero at the >99% significance level.
Rather than examining a null model based on hemispheric temperatures, MM05a,c report a Monte Carlo RE threshold analysis that employs random red-noise series modeled on the persistence structure present in the proxy data (note, noise here is meant in the sense of the ‘signal’ itself, rather than as an addition to the signal). … RE performance thresholds established using this proxy-based approach have the disadvantage of not being uniformly applicable; rather, they need to be established individually for each proxy network.
Furthermore, the MM05c proxy-based threshold analysis only evaluates the verification-period RE scores, ignoring the associated calibration-period performance. However, any successful real-world verification should always be based on the presumption that the associated calibration has been meaningful as well …
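The calibration-screening point in that last quoted paragraph can also be made concrete. Continuing the sketch above (and reusing its `re_score` and `ar1_series` definitions; the median-based screen below is an arbitrary stand-in, not a threshold from either paper), one records both calibration- and verification-period RE for each trial and recomputes the benchmark only over trials whose calibration is not among the poorest:

```python
# Record calibration- and verification-period RE for each red-noise trial,
# then compare the verification benchmark with and without a calibration
# screen.  Continues the previous sketch (same rng, cal, ver, fits).
cal_re = np.empty(1000)
ver_re = np.empty(1000)
for i in range(1000):
    sim = ar1_series(rho_hat, sigma_hat, n_cal + n_ver, rng)
    cal_re[i] = re_score(cal, sim[:n_cal], cal.mean())
    ver_re[i] = re_score(ver, sim[n_cal:], cal.mean())

keep = cal_re > np.median(cal_re)  # drop trials with the poorest calibrations
print(f"99% verification RE, all trials:      {np.quantile(ver_re, 0.99):.3f}")
print(f"99% verification RE, screened trials: {np.quantile(ver_re[keep], 0.99):.3f}")
```

Whether the screened benchmark comes out lower, as the Reply and Ammann and Wahl argue the 0.51 figure would, depends on the data and on the screen chosen; the sketch only shows where the screening step enters the calculation.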
Interestingly, the first paragraph cited above caught my eye in Ammann and Wahl, and I asked him for a supporting reference justifying this argument. To which Ammann, a federal employee, answered:
why would I even bother answering your questions, isn’t that just lost time?
Update Jan 2010: This particular mystery was resolved by the Climategate Letters, which contain off-the-record correspondence between Wahl and Briffa.