Lee, a new poster, complained that my attempt to guess the series in the CH-blend was insulting to Hegerl et al. [SM note – This sentence was added on Friday evening. I had a sentence somewhat like this in the original post. Lee said that the sentence misrepresented his viewpoints – see one of his posts below – and so I removed it. He then complained that I had removed the sentence – see another of his posts. I didn't save the sentence, so I've restored the point from memory, but it's undoubtedly somewhat different from the original reading.]
Here’s my first attempt at replicating the CH blend without knowing what proxies were used:
Figure: black – CH "long"; blue – emulation of CH "long".
I emulated the CH-long blend using the predictions in my earlier post as follows. All 12 of my predicted series are in the 14-series Osborn and Briffa data set. I removed 2 series from the smoothed Osborn and Briffa data set (the Foxtail series and the Chesapeake Mg-Ca series), took the average of the 10 series available in 1251 (that's one more than CH, so there's an adjustment to come) and then scaled the average to the CH-long blend. I've obviously been able to replicate the CH-blend pretty accurately without them even having to say what proxies they used. Their weighting methodology is evidently not a simple unweighted average of the proxies, so it's hard to tell whether the remaining differences relate to weighting systems or to different proxies. There's at least one proxy that I've not matched. Also, I'd be surprised if Hegerl used the Alberta version from Luckman and Wilson – they probably used an older version. I'll try that.
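The emulation procedure above – drop two series, average the ones available in 1251, rescale to the target blend – can be sketched as follows. This is a minimal illustration, not the actual calculation: the data frames, the column names (`foxtail`, `chesapeake_mgca`, etc.) and the rescaling to matching mean and standard deviation are all my assumptions for the sketch.

```python
import numpy as np
import pandas as pd

def emulate_ch_long(ob: pd.DataFrame, ch_long: pd.Series) -> pd.Series:
    """Emulate the CH 'long' blend from a smoothed Osborn & Briffa-style
    proxy matrix (years x series). All names here are hypothetical."""
    # Drop the two series predicted NOT to be in the CH blend
    subset = ob.drop(columns=["foxtail", "chesapeake_mgca"])
    # Keep only the series with data back to AD 1251
    avail = [c for c in subset.columns if not np.isnan(subset[c].loc[1251])]
    avg = subset[avail].mean(axis=1)  # simple unweighted average
    # Rescale the average to the mean and variance of the CH blend
    common = avg.index.intersection(ch_long.index)
    a, c = avg.loc[common], ch_long.loc[common]
    return (avg - a.mean()) / a.std() * c.std() + c.mean()

# Synthetic demonstration: 14 series over AD 1000-1990, two of which
# (p0, p1) only begin after 1251, leaving 10 available series.
rng = np.random.default_rng(0)
years = pd.Index(range(1000, 1991))
cols = ["foxtail", "chesapeake_mgca"] + [f"p{i}" for i in range(12)]
ob = pd.DataFrame(rng.normal(size=(len(years), 14)), index=years, columns=cols)
ob.loc[:1400, ["p0", "p1"]] = np.nan
ch_long = pd.Series(rng.normal(size=len(years)), index=years)

emulation = emulate_ch_long(ob, ch_long)
```

Of course the real test is graphical – overlaying the rescaled average on the digitized CH blend, as in the figure above – and a simple mean/variance rescaling is only one of several plausible scaling conventions.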
Does this "prove" anything? Right now I’m just amusing myself by trying to guess what’s in the CH-blend. I think that this should prove to Lee that I’m probably going to be substantially right in my predictions. I’d be amazed if I don’t have at least 5 guesses right.
For the purposes of their Nature study, if Hegerl et al had written the same study and, wherever they used CH-blend, had substituted OB-blend, would the Nature referees have cared? Off the top of my head, I can’t see why they would. Varying this slightly, suppose they had listed the items in the CH-blend and an alert Nature referee noticed that they overlapped the OB-blend, would he have required some discussion of this? Who knows. If the CH-blend is pretty much the same as the OB-blend, should Journal of Climate referees care? Is it something that should be discussed in the submitted paper?
For our purposes, if the overlap is substantial, we can safely say that the two studies are not "independent" in their proxy selection. Does this show that their selection protocols are biased? Maybe they independently arrived at the same proxy choices because these are what result from objective protocols? If so, I’d like to see an objective statement of the selection protocols and how they were applied. Otherwise, there is certainly the appearance of cherrypicking. It’s hard to go much beyond this until the JClim paper comes out.