I’ve just been informed that a new paper is ‘in press’, snappily called "Testing the Fidelity of Methods used in Proxy-based Reconstructions of Past Climate", by Mann, M.E., Rutherford, S., Wahl, E., and Ammann, C. It is due to be published in the Journal of Climate. If anyone can get a copy of this paper, let me know via e-mail (climateaudit AT gmail.com) or by posting a link in the comments. Thanks.
11 Comments
John, this is likely what the paper is about:
Click to access KN2_Mann.pdf
The climate of the last 2000 years from multi-proxy reconstruction
Michael E. Mann
Department of Environmental Sciences, Clark Hall
University of Virginia
Charlottesville, VA 22903
Both reconstructions from climate ‘proxy’ data (e.g. tree rings, ice cores, corals) and climate
model simulations, suggest that late 20th century warmth is anomalous in the context of the
past 1000-2000 years. Various alternative reconstructions differ in their details however.
Many of these differences appear to be related to issues of seasonality and spatial
representativeness. Statistical methodologies for reconstructing past large-scale temperatures
from proxy data have now been tested using a long forced simulation of the NCAR CSM 1.4
coupled model. Analyses of synthetic ‘proxy’ networks produced from the model suggest that
existing proxy-based climate reconstructions are likely to yield reliable estimates of past
temperature variations within estimated uncertainties. Important differences between
estimates of extratropical and full (combined tropical and extratropical) hemispheric mean
temperature changes in past centuries appear consistent with seasonal and spatially-specific
responses to climate forcing. Forced changes in large-scale atmospheric circulation such as
the NAO, and internal dynamics related to El Nino, may play an important role in explaining
regional patterns of variability and change in past centuries.
References:
Jones, P.D., Mann, M.E., Climate Over Past Millennia, Reviews of Geophysics, 42, RG2002,
doi:10.1029/2003RG000143, 2004.
Mann, M.E., Cane, M.A., Zebiak, S.E., Clement, A. Volcanic and Solar Forcing of the
Tropical Pacific Over the Past 1000 Years, Journal of Climate, 18, 447-456, 2005a.
Mann, M.E., Rutherford, S., Wahl, E. and Ammann, C., Testing the Fidelity of Methods
Used in Proxy-Based Reconstructions of Past Climate, Journal of Climate (in press),
2005b.
Rutherford, S., Mann, M.E., Osborn, T.J., Bradley, R.S., Briffa, K.R., Hughes, M.K., Jones,
P.D., Proxy-based Northern Hemisphere Surface Temperature Reconstructions:
Sensitivity to Methodology, Predictor Network, Target Season and Target Domain,
Journal of Climate (in press), 2005.
Shindell, D.T., Schmidt, G.A., Miller, R., Mann, M.E., Volcanic and Solar forcing of Climate
Change During the Pre-Industrial era, Journal of Climate, 16, 4094-4107, 2003.
Using climateaudit.org for my own test purposes here, but that’s OK since the website’s funding is ‘suspicious’.
R² — type ‘R’, then ALT+0178
14°C — type ’14’, then ALT+0176
Apologies
Michael,
Am I missing something or does this sound like circular reasoning to you? “Yes we plugged in our assumptions as to how proxies respond to temperature and got back what our assumptions implied.”
This may test self-consistency, but I think more is needed to verify that the initial assumptions are correct. OTOH, I haven’t read the paper yet, so maybe something was done to sort out the assumptions from the results.
I think it’s interesting that all of the citations point to the same restricted number of people. It’s like one big happy family.
This may also be relevant:
“Real world and simulated proxy series: Teleconnection fidelity”
at http://www.assessment.ucar.edu/paleo/past_stationarity.html
Team/Collaborators: E. Wahl, C. Ammann (NCAR), N. Graham (Scripps and HRC), D. Nychka (NCAR), M.E. Mann (University of Virginia)
The work reported “…has been directed to understanding proxy reconstructions of the ENSO climate mode, which are understood less than global and hemispheric climate reconstructions. The PaleoCSM has shown good capability to reasonably represent ENSO behavior, both in the ocean component and in surface teleconnections, which makes it an appropriate experimental vehicle for this work.”
After the justifications for using RE to test significance in MBH, what is interesting here is that the statistical measure of significance reported in the diagrams is r²!
Mike, why don’t you write UCAR and ask them to post up this information at their website, pointing out this article? Cheers, Steve. P.S. Ask me about this again in a couple of months.
But wait – there’s more Steve.
“On the variability of ENSO over the past six centuries”
Rosanne D’Arrigo, Edward R. Cook, Rob J.Wilson, Rob Allan, and Michael E. Mann
found at http://www.nersc.no/MACESIZ/Papers/2004GL022055.pdf
In this paper you will find a diagram showing Pearson r, RE and sign test results and in the text the following:
“The final reconstruction was developed by averaging early and late calibration reconstructions within each nest, and splicing these series together after their variance and mean had been adjusted to that of the 1709–1978 nest. The fidelity of the signal decreases back in time (Figure 1). The r2 values range from 52% to 43% between the most and least-replicated models, similar to skill levels established in the shorter ST98 and MBH00 reconstructions.”
It looks like r² has been rehabilitated as a valid measure of significance!
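For readers keeping score on the RE-versus-r² argument, here is a minimal sketch of how the two verification statistics are typically computed (the function name and the demo series are illustrative, not taken from any of the papers discussed). RE scores a reconstruction against a "no-knowledge" prediction of the calibration-period mean, so it penalizes bias and scale errors; r², the squared Pearson correlation, ignores both.

```python
import numpy as np

def verification_stats(obs, recon, calib_mean):
    """Compute RE and r^2 over a verification period.

    RE = 1 - SSE(recon) / SSE(calibration-mean baseline);
    r^2 is the squared Pearson correlation of obs and recon.
    """
    obs = np.asarray(obs, float)
    recon = np.asarray(recon, float)
    sse = np.sum((obs - recon) ** 2)
    sse_ref = np.sum((obs - calib_mean) ** 2)
    re = 1.0 - sse / sse_ref
    r = np.corrcoef(obs, recon)[0, 1]
    return re, r ** 2

# A biased but perfectly correlated "reconstruction":
obs = np.sin(np.linspace(0, 6, 50))
recon = obs + 0.8
re, r2 = verification_stats(obs, recon, calib_mean=0.0)
# r^2 is ~1 even though RE is negative (the bias destroys RE skill)
```

The example illustrates why the choice of statistic matters: the same series can pass one test and fail the other.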
coupla months later…
This is now available here.
Look at the verification statistics that are employed.
Look at the models they’ve validated against as well! It must help their method tremendously to be trying to extract a virtually straight-line signal from noise, instead of a signal with a fair bit of low-frequency information, as von Storch did.
I’d really like to know how & why the estimates of past solar variability have been so drastically scaled back, which is what gives the models they used such a distinctive shape.
Judith Lean’s NASA chart here shows a change in irradiance of 2–3 W/m² over the past 500 years. Mann describes the centennial changes of circa 1 W/m² used by von Storch as “much larger than the most recent estimates”, which are apparently now down to 0.15 W/m². That is a reduction in estimates of past solar irradiance changes by a factor of up to 20 since 2002. What’s going on?
I noticed what John D did too. I also noticed that my #3 comment above doesn’t need to be changed. It still looks like they were just proving X in, X out.
I admit I didn’t quite get everything on a first read-through, but I have a couple of things that bother me. The first is the obvious one: while they’re willing to cite von Storch, they’re not willing to cite M&M, even though von Storch was responding to M&M and, AFAIK, M&M are the ones who introduced pseudoproxies (albeit of a somewhat different sort) into the discussion. I don’t know if Mann doesn’t realize, or just doesn’t care, how petty and unprofessional this looks.
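The pseudoproxy idea itself is simple, whoever introduced it: degrade a model temperature series with noise so it resembles a real proxy, then see whether the reconstruction method recovers the known model "truth". A minimal sketch of that degradation step, assuming white noise and an illustrative signal-to-noise ratio (the papers at issue use various noise levels and structures, including red noise):

```python
import numpy as np

def make_pseudoproxy(signal, snr, rng):
    """Turn a model temperature series into a synthetic 'proxy' by
    adding white noise whose std is signal.std() / snr."""
    signal = np.asarray(signal, float)
    noise = rng.normal(0.0, signal.std() / snr, signal.size)
    return signal + noise

rng = np.random.default_rng(0)
temps = np.sin(np.linspace(0, 20, 500))         # stand-in for a model temperature series
proxy = make_pseudoproxy(temps, snr=0.5, rng=rng)  # heavily degraded pseudoproxy
```

At snr = 0.5 the noise variance is four times the signal variance, so the pseudoproxy correlates only weakly with the underlying "temperature" — which is the whole point of the exercise.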
The second thing I’m wondering about is their remarks about volcanoes:
Later we have:
So what does this prove? It seems to me that they must be entering explicit forcings into their models and are pleased to see that this produces the temperature changes they’ve tuned the system to produce. Are we supposed to imagine that they’ve never before run such an obvious check that their models are self-consistent? This isn’t the problem skeptics have with the models. We’d assumed that they’d pick up point forcings. The question is what assumptions were being made to make the model outcomes match 1. the instrumental record and 2. the available proxies.
I.e., we know the multiproxy reconstructions to date rely on weighting the proxies, directly or indirectly (via PCs), to reproduce the instrumental record. But what do the models use? We’re told there are parameterizations, but without knowing the parameters and the values chosen or derived, how can we know to what extent the same process isn’t, under a different name, being used to get desired results? This paper, obviously, does nothing to answer this basic question.