Yang et al. [2003] #2

I’ve been revisiting the various multiproxy studies with respect to scale and variability. In addition, as you know, one of my interests in these multiproxy studies is the non-robustness of MWP-modern relationships to a very few non-independent proxies used in multiple studies – in particular, bristlecones, Polar Urals and Thompson’s Himalayan dO18 series – each of which has problems as a temperature proxy for different reasons. Obviously, a supposedly "robust" conclusion in multiple multiproxy studies should be robust to the presence or absence of a few potentially problematic series. Here’s an examination of Yang et al. [2003] from that point of view. The composite series from Yang et al. [2002] is the most heavily weighted contributor to Mann and Jones [2003] and Moberg et al. [2005]. The Yang composite and the North American PC1 (bristlecones) dominate the Mann and Jones [2003] reconstruction, making the other series essentially irrelevant. Its contribution to Moberg is less marked, but it is one of only 3-4 series there that provide a strong 20th century.

I posted once earlier on Yang et al. [2003] here, raising some questions about some of the proxies. The Yang composite is NOT independent of the stereotype subset, since it contains both the Thompson Guliya and Dunde series. The figure below plots all the series in the Yang composite (in s.d. units). The bottom right panel (red) shows the unweighted average (Yang also calculates a weighted average of 7 mainland China series and a "high-res" average excluding lake sediment and peat series 6-8). The blue series in the bottom right panel is the same average without the two Thompson series, showing a very different MWP-modern relationship: without the Thompson series, MWP values exceed 20th century values. I will be using the blue series in some experiments.

Figure 1. Individual series from Yang et al [2003] together with composite (red) and composite without Thompson series (blue).
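For concreteness, the composite calculation described above is simple enough to sketch. The snippet below uses synthetic stand-in data (the actual Yang series are not reproduced here, and the positions assigned to the Guliya and Dunde series are placeholders), but the standardize-then-average arithmetic is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the nine Yang et al. series (2000 "years" each).
n_series, n_years = 9, 2000
series = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

# Standardize each series to zero mean and unit standard deviation
# (the "s.d. units" of the figure), then take the unweighted average.
sd_units = (series - series.mean(axis=1, keepdims=True)) / series.std(axis=1, keepdims=True)

composite_all = sd_units.mean(axis=0)          # red curve: all series
thompson_idx = [0, 1]                          # placeholder positions for Guliya and Dunde
keep = [i for i in range(n_series) if i not in thompson_idx]
composite_no_thompson = sd_units[keep].mean(axis=0)   # blue curve: Thompson series removed
```

With only nine contributors, removing two series can obviously shift the composite substantially, which is the robustness point at issue.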


  1. Dave Dardinger
    Posted Nov 30, 2005 at 4:01 PM | Permalink

    Just glancing at the various series, it looks like the Dulan series does the opposite of the Thompson one; i.e. produces much of the high temperatures in the 800’s and 900’s. What would Yang look like without it? I suspect it’d be rather hockeystickish.

    Not that this makes Yang robust.

  2. TCO
    Posted Nov 30, 2005 at 5:02 PM | Permalink

    Steve, Why don’t you take some work that you’ve already done? Like the misdated Polar Urals. That was a fun detective story.

  3. Pat Frank
    Posted Nov 30, 2005 at 8:54 PM | Permalink

    What impresses me is that no two of the series look alike. The top left (guliya? I can’t quite read it) goes off-scale in the Roman period and shows neither a MWP nor a LIA. Jiaming and Japan show what look like early Medieval *Cold* Periods, and if we’re to believe Jinchuan, we’re near a 2000 year minimum in temperature. If the various proxies are supposed to reflect local conditions superimposed on general global climate swings, it looks to me like those proxies reflect different planets.

    Given what I’ve read here over the last many months concerning the unknowns entering into proxies, such as O-18 rainout and aeolian tree-ring fertilization, I wonder how anyone has the temerity to suppose that proxies have reached a useful state of understanding as regards past-climate reconstruction.

  4. Steve McIntyre
    Posted Nov 30, 2005 at 10:50 PM | Permalink

    I agree that the cheekiness is pretty amazing.

  5. Brooks Hurd
    Posted Dec 1, 2005 at 12:34 AM | Permalink


    It is actually very easy for the AGW proponents. First they begin with the preconceived notion that GW is 100% AGW, then they find some studies which support that view.

    Of course they need to ignore the fact that the authors cherry picked specific proxies which have the characteristics that the authors wanted to emphasize.

    They must assume that these proxies are really excellent temperature indicators.

    They must ignore the fact that the authors will release neither their data nor their methodologies. They thus must conclude that truly independent verification of these studies is unnecessary.

    They must ignore the fact that the authors used some odd statistics to get their results.

    They do not question whether instrument data was Tmax or Tmin. Since the diurnal temperature variation has been decreasing over the past century, using Tmin will exaggerate the rate of warming. But oh, never mind.

    They must assume that all ice cores trap gas exclusively as gas bubbles (with no dissolved gas or chemically bound gas molecules). Never mind that the deep ice cores have no visible bubbles. They also must assume that there is no possibility of contamination of the ice cores by drilling fluids or ambient air. They also must assume that ice cores may be stored for years prior to analysis without any change in gas concentrations. These assumptions allow AGW proponents to believe that all gas from ice cores represents actual paleo-atmospheric samples. Of course they accept these assumptions without any verification.

    If anyone has the audacity to question any of the assumptions, missing data, or math oddities, they simply ignore the questions. If this strategy fails, then they use a litany of logical fallacies to silence the questioners.

  6. Ross McKitrick
    Posted Dec 2, 2005 at 8:39 AM | Permalink

    TCO, the story of the Polar Ural trees (how about “Three Stiffs Who Couldn’t Get a Date”) is written up and out the door. That puts it under something of an embargo for the time being, but rest assured the paper is in play.

  7. Peter Hearnden
    Posted Dec 2, 2005 at 10:23 AM | Permalink

    Re #5
    “It is actually very easy for the AGW proponents. First they begin with the preconceived notion that GW is 100% AGW, then they find some studies which support that view.”

    Like who? Which scientist?

  8. Pat Frank
    Posted Dec 5, 2005 at 4:55 PM | Permalink

    Thinking about this proxy business a bit more, it seems to me that adding proxies looking for a global climate signal is rather like signal-averaging. The assumption in the latter method, which is usually good, is that random noise averages out while a weak signal under the noise adds linearly, and eventually emerges.

    If this is carried over to proxies, climatologists must be assuming that each proxy consists of high-frequency local climate jitter superimposed on a low-frequency global climate signal. Then high-frequency local climate fluctuations in proxy signals are treated as random noise that can average away. But local climate fluctuations are chaotic, not randomly noisy. That means they probably won’t average out, explaining why the high-frequencies in proxy-compilations appear to remain very intense; at least when compared to noise reduction in a signal-averaged spectrum.

    I work with x-ray absorption spectra, of which single scans can be very noisy. If I averaged 159 very noisy scans — the number of proxies Michael Mann used (supposedly) for his millennial plot — I’d get a nearly noise-free spectrum. But Mann didn’t. That seems to indicate that the high-frequency components of local climate proxies do not behave as noise, and don’t average out.
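    The signal-averaging arithmetic in this comment can be checked with a quick simulation (purely synthetic numbers, nothing to do with the actual proxy data): independent noise on 159 scans averages down by roughly sqrt(159), while a fluctuation that is shared across the scans survives averaging essentially intact.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_scans, n_points = 159, 512

    # Case 1: independent random noise on each scan averages down as 1/sqrt(N).
    noise_scans = rng.normal(0.0, 1.0, size=(n_scans, n_points))
    avg_noise_sd = noise_scans.mean(axis=0).std()   # close to 1/sqrt(159), i.e. about 0.08

    # Case 2: a fluctuation shared across scans (standing in for a common,
    # non-random local-climate swing) does NOT average out.
    shared = np.sin(np.linspace(0, 8 * np.pi, n_points))
    correlated_scans = shared + rng.normal(0.0, 1.0, size=(n_scans, n_points))
    avg_corr_sd = correlated_scans.mean(axis=0).std()   # stays near the shared signal's s.d.
    ```

    This is only the textbook behavior of averaging, of course; whether proxy "jitter" behaves like the independent-noise case is exactly the question being raised.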

    So then the question is, when proxies are added, how does one decide that the low-frequency oscillations that emerge, and that are interpreted as global climate swings, are not just artifacts of uncompensated local climate signals in the individual added proxies, that just happen to add up positively (or negatively)?

    Do people ever test this possibility by taking two smaller sub-sets of proxies, taken from adjacent locales, and making two separate compilations to see whether the same global average emerges?

    In x-ray spectra, we often remove high-frequency noise using a Fourier transform filtering process. That leads to an ancillary question for anyone here. If there is some low-frequency global climate signal beneath the high-frequency local climate signals in proxies, would Fourier-filtering out the high-frequency component reveal the low-frequency global variation? If not, why not? Shouldn’t it be possible to represent any proxy plot as a set of added sinusoids?
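    The Fourier-filtering question can at least be posed concretely. Below is a minimal low-pass sketch on a synthetic "proxy" (a slow sinusoid plus jitter; cutoff period and noise level are arbitrary choices, not anything from the literature): zeroing the high-frequency FFT bins recovers the slow component when, and only when, the jitter really occupies the high frequencies.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    t = np.arange(n)

    # Synthetic proxy: slow "global" sinusoid plus fast local jitter.
    slow = np.sin(2 * np.pi * t / 1000.0)          # one cycle per 1000 "years"
    proxy = slow + rng.normal(0.0, 0.5, size=n)    # jitter standing in for local climate

    # Low-pass filter via FFT: zero out all frequencies above a cutoff.
    spectrum = np.fft.rfft(proxy)
    freqs = np.fft.rfftfreq(n, d=1.0)              # cycles per "year"
    cutoff = 1.0 / 100.0                           # keep periods longer than ~100 years
    spectrum[freqs > cutoff] = 0.0
    filtered = np.fft.irfft(spectrum, n)

    # How closely does the filtered series track the slow component?
    rms_err = np.sqrt(np.mean((filtered - slow) ** 2))
    ```

    The catch, per the comment above, is that if local climate fluctuations have power at low frequencies too, the filter passes them along with any global signal.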

    Doesn’t the whole ‘add-these-proxies-and-see-the-Earth’ business assume that there is a pseudo-oscillatory global climate signal invisibly submerged beneath the jitter in every single individual proxy data-set?

  9. per
    Posted Dec 5, 2005 at 6:10 PM | Permalink

    hi pat
    there are a couple of issues that seem to come up with your insightful analogy with signal processing.

    Straight off the bat, there is an issue about the selection of proxies. Which proxies do you use, and how do you select them? If we take the analogy of x-ray spectra, it’s like chucking out a subset of your measurements in a biased way: if you see absorption at 3 Å, chuck out the spectrum! This will obviously lead to a seriously biased result.

    You can also do experiments with x-ray spectra to find out how reliable the method is; for example, you can say that 159 scans gives you a good signal to noise. In reconstructing past temperatures, we really have minimal clue how reliable the method is, nor how many proxies you would need before it became reliable. Proxies that don’t relate to temperature are obviously an important problem.
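    The selection-bias point lends itself to a simple numerical sketch (entirely hypothetical, not modeled on any particular study's screening rule): generate pure random walks containing no temperature signal at all, keep only those whose final stretch correlates with a rising trend, and the average of the survivors shows a spurious late uptick.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_series, n_years = 1000, 200

    # Pure random walks: no temperature signal anywhere.
    walks = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

    # "Screen" the series: keep only those whose last 50 steps correlate
    # with a rising trend (a stand-in for correlation with instrumental data).
    trend = np.arange(50)
    tails = walks[:, -50:]
    corr = np.array([np.corrcoef(tail, trend)[0, 1] for tail in tails])
    selected = walks[corr > 0.5]

    # The composite of the screened series rises at the end, even though
    # no individual series contains any real signal.
    composite = selected.mean(axis=0)
    late_rise = composite[-50:].mean() - composite[:150].mean()
    ```

    The screening step, not any signal in the data, manufactures the late rise; that is the sense in which biased selection gives a "seriously biased result".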

