Another excellent post by David Stockwell here. Everyone having fun?
8 Comments
I still can’t understand the validity of the concept that you can cherry-pick series based on their positive correlation with temperature, amalgamate those series, and then say, “Yep, the reconstruction correlates with temperature!” How can they do this with a straight face?
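The selection effect the comment objects to can be sketched in a few lines (all numbers and thresholds here are illustrative, not from the post): generate pure random walks, keep only those that happen to correlate with a target “temperature” series, and average them — the screened average then correlates with the target by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years = 1000, 150

# stand-in "temperature" target: itself just a scaled random walk
temp = np.cumsum(rng.normal(size=n_years)) * 0.05

# pseudo-proxies: random walks carrying no climate signal at all
proxies = np.cumsum(rng.normal(size=(n_series, n_years)), axis=1)

# "select and average": keep only series positively correlated with temp
r = np.array([np.corrcoef(p, temp)[0, 1] for p in proxies])
selected = proxies[r > 0.3]
recon = selected.mean(axis=0)

print(len(selected), round(float(np.corrcoef(recon, temp)[0, 1]), 2))
```

Although no individual series contains any signal, the screened average still “verifies” against the target — which is exactly the circularity being questioned.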
David, re: your Figure 1, what line would you superimpose on this graph as an indication of skill? [Somewhat subjective, I know, but I’d be interested to see it.]
#2 Sorry, I don’t understand the question. The lines on the graph show only the one- and two-sigma estimates of the mean for the random series. I hope the discussion of skill tests in the posts above is helpful.
http://www.climateaudit.org/?p=110 is another take on this, that might be of interest.
Thanks Steve. I am just applying the same red-noise technique to the ‘select and average’ method, in use prior to the MBH98 PC method, that you have examined in detail.
Looking at it, there might be a valid approach to finding a signal: take the expectation of the shape of the random reconstructions and test for deviations from it. E.g., the reconstruction on my site suggests there might be a significant broad dip in temperatures from 1600 to 1700 (the LIA). This, however, is the only significant signal I would expect you to find.
Before doing any signal analysis, one should determine the detection limits of the methodology. Just looking at it, I would think 0.5 degrees would be a good guess – so maybe the consensus of the panel could be right. Just my ideas for a couple of constructive quantifications.
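The “expectation of the shape of the random reconstructions” idea can be sketched as a pointwise null envelope (a sketch only — the AR(1) coefficient, ensemble size, and dip magnitude are all my illustrative choices): build an ensemble of red-noise reconstructions, take its ~2-sigma band, and flag where a candidate series leaves the band.

```python
import numpy as np

rng = np.random.default_rng(4)
n_recon, n, phi = 1000, 200, 0.9

# null ensemble: AR(1) red-noise "reconstructions" (phi is an assumption)
null = np.zeros((n_recon, n))
eps = rng.normal(size=(n_recon, n))
for t in range(1, n):
    null[:, t] = phi * null[:, t - 1] + eps[:, t]

# pointwise ~2-sigma envelope of the null ensemble
lo, hi = np.percentile(null, [2.5, 97.5], axis=0)

# candidate: red noise plus an injected broad dip (an LIA-like stand-in)
cand = null[0].copy()
cand[80:120] -= 8.0

dip_detected = bool(np.any(cand[80:120] < lo[80:120]))
print(dip_detected)
```

Excursions outside the envelope are the kind of deviation from the null expectation that the comment suggests testing for; the envelope width itself gives a rough detection limit for the method.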
David, something else about PCs that I noticed that’s a little odd, but that I’ve never seen written up. If you do ordinary PCs on red noise, the PC1 ends up with more low-frequency content in it than the native series. It’s absolutely consistent.
Odd? PCs just orient coordinates along maximum variance (sum of squares), don’t they? And a red-noise spectrum has more power (variance) at the low-frequency end.
re #10 Ahh, that comment helps me understand the old PC1 vs PC4 thing. From the POV of the short, off-centered period, the blade of the hockeystick is really all there is so it’s seen as low frequency with respect to that. But if you use a long period covering all the proxies then the blade is a high-frequency blip and therefore it’s not surprising a proxy/component explaining it is demoted to PC4.
So then, insofar as R2 is high-frequency sensitive and is calculated over longer and longer periods, one blip in the 20th century becomes less and less important and consequently R2 should drop.
What isn’t intuitive to me yet is what this means in terms of what multi-proxy reconstructions should show to be useful.
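The R2 point in the comment above can be sketched with a toy example (blip size and window lengths are arbitrary choices of mine): two series that share only a late “blip” verify well over a short window around the blip, and worse as the verification window lengthens and uncorrelated noise dominates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
blip = np.zeros(n)
blip[-30:] = 2.0  # a late, "20th-century" excursion

# target and proxy share ONLY the blip; their noise is independent
target = blip + rng.normal(size=n)
proxy = blip + rng.normal(size=n)

def r2(a, b):
    return float(np.corrcoef(a, b)[0, 1] ** 2)

short = r2(target[-60:], proxy[-60:])  # verification window around the blip
long_ = r2(target, proxy)              # full-period verification
print(round(short, 2), round(long_, 2))
```

As the comment predicts, R2 drops as the period lengthens, because the one shared blip accounts for a shrinking share of the total variance.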
8 Trackbacks
[…] To recap previous posts (http://www.climateaudit.org/?p=566): I replicated the cross-validation procedure used in MBH98 for reconstruction skill of randomly generated series on raw and filtered CRU temperatures. The RE statistic correctly indicated no skill for the reconstruction in both the raw and filtered temperature data. The R2 statistic indicated no skill on the raw temperature data and skill at predicting the filtered temperature data. The importance of these ‘tests’ is that they are the basis for accepting or rejecting a reconstruction. The question addressed is, are the tests using RE and R2 capable of discriminating between meaningful proxy data and a reconstruction developed using random data? […]
[…] Today I am reporting more results of reconstructing past climates with randomly generated sequences (http://www.climateaudit.org/?p=566). Here are a few experiments to identify the critical components of the dendroclimatology methodology. I record the skill of reconstruction with: different types of series (i.i.d., alternating means and fractional differencing), and dropping each component of the methodology in turn (positive slope, positive correlation, calibration with inverse linear model). […]
[…] Here is the ‘spaghetti graph’ of a number of prominent reconstructions, with one and two-sigma confidence intervals. The CRU calibration temperatures are the solid black line. Can you find the random reconstruction? (http://www.climateaudit.org/?p=566) […]
[…] ClimateAudit […]
[…] At ClimateAudit the cheerful Steve McIntyre quipped: Another excellent post by David Stockwell here. Everyone having fun? […]
[…] Here is the ‘spaghetti graph’ of a number of prominent reconstructions, with two-sigma confidence interval. The CRU calibration temperatures are the solid black line. Can you find the random reconstruction? (Thanks to Steve McIntyre at http://www.climateaudit.org/?p=566 for recon data.) […]
[…] To recap previous posts (http://www.climateaudit.org/?p=566), about replicating the cross-validation procedure used in MBH98 for reconstruction skill of randomly generated series on raw and filtered CRU temperatures. The RE statistic correctly indicated no skill for the reconstruction in both the raw and filtered temperature data. The R2 statistic indicated no skill on the raw temperature data and skill at predicting the filtered temperature data. The importance of these ‘tests’ is that they are the basis for accepting or rejecting a reconstruction. The question addressed is, are the tests using RE and R2 capable of discriminating between meaningful proxy data and a reconstruction developed using random data? […]
[…] More experiments with random series Filed under: Random climate — davids @ 8:01 pm Today I am reporting more results of reconstructing past climates with randomly generated sequences (http://www.climateaudit.org/?p=566). Here are a few experiments to identify the critical components of the dendroclimatology methodology. I record the skill of reconstruction with: different types of series (i.i.d., alternating means and fractional differencing), and dropping each component of the methodology in turn (positive slope, positive correlation, calibration with inverse linear model). […]
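For reference, the RE statistic the trackbacks rely on has a standard definition (a sketch; the function and variable names are mine): RE = 1 − SSE(prediction)/SSE(calibration-mean benchmark), so RE > 0 means the reconstruction beats simply predicting the calibration-period mean.

```python
import numpy as np

def reduction_of_error(obs, pred, cal_mean):
    """RE = 1 - SSE(prediction) / SSE(calibration-mean benchmark).

    RE > 0 is conventionally read as evidence of skill: the
    reconstruction beats simply predicting the calibration mean.
    """
    obs = np.asarray(obs, dtype=float)
    sse = np.sum((obs - np.asarray(pred, dtype=float)) ** 2)
    sse_clim = np.sum((obs - cal_mean) ** 2)
    return 1.0 - sse / sse_clim

obs = np.array([0.1, -0.2, 0.3, 0.0])
print(reduction_of_error(obs, obs, cal_mean=0.05))      # perfect prediction -> 1.0
print(reduction_of_error(obs, np.full(4, 0.05), 0.05))  # benchmark itself -> 0.0
```

The experiments summarized above turn on whether this statistic (and R2) can stay below its skill threshold when fed reconstructions built from random data.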