Your Comments on Juckes Omnibus

In order to reduce noise levels, I am going to act as a type of chairman of the Juckes Omnibus thread. If you wish to comment on that thread, please do so here. If there’s something that I feel should be transferred to the Juckes Omnibus Thread for Juckes to reply to, I’ll do so. We ourselves can chat about that thread here, but let’s leave that thread for Martin Juckes to respond to, if he so chooses.

93 Comments

  1. Steve McIntyre
    Posted Nov 7, 2006 at 7:25 AM | Permalink

    Comment by Eduardo Zorita

    I do not have the Indigirka data, so I am just guessing here. The decline in the last, say, 20 to 30 years in the series looks strange, considering that the station data in this area indicate either flat or clearly increasing annual temperatures in the last decades. Does anybody know for which region this series should be representative, or whether it is a proxy for summer temperature?

    posted 6 November 2006 @ 5:16 pm

    Comment by bender

    This is why auditing will only get you so far in this game. You can lead a dendroclimatologist to Indigirka River, but you can’t make him drink.

    posted 6 November 2006 @ 5:47 pm

    Comment by Steve McIntyre

    Eduardo, if the hypothesis is that Siberian tree ring chronologies are a proxy for summer temperature, then a new series is a good test of that hypothesis. Sidorova et al stated:

    Current warming started at the beginning of the XIX-th century and presently does not exceed the amplitude of the medieval warming. The tree ring chronologies do not indicate unusually abrupt temperature rise during the last century.

    posted 6 November 2006 @ 5:55 pm

    Comment by Pat Frank

    #7: “The decline in the last, say, 20 to 30 years in the series looks strange, considering that the station data in this area indicate either flat or clearly increasing annual temperatures in the last decades.”

    On the other hand the line shape of the last 20-30 years of the series is not noticeably different from the various local maxima in the rest of the series, representing earlier times. Is it not possible that there are maxima in the series representing tree rings produced during past times when the annual temperature trends were flat or increasing?

    One doesn’t want to flog a dead horse here, but in the absence of a quantitative theory relating tree rings and temperature (and other variables) isn’t it a bit presumptuous to be extracting quantitative results (explicit temperatures) from them?

    Following Steve M.’s comment in #9, if the hypothesis is tree rings represent summer temperatures, then perhaps the last 20-30 years of data falsify that hypothesis.

    posted 6 November 2006 @ 8:30 pm

    Comment by Willis Eschenbach

    Here’s the Indigirka summer temperature …

    For what it’s worth, both series peak in 1937.

    w.

    posted 6 November 2006 @ 10:34 pm

    Comment by John A

    Re #4

    The x-axis is "Year (CE)" and ends in 1975
    The y-axis is "Reconstructed sea-water temp"

    So Juckes removed Sargasso because it was a long, methodologically sound 2000-year proxy that was just 5 years short. And not because it showed the Medieval Warm Period.

    By the way, the Sargasso sea proxy points were averages of 50 year “bins”, so watch out anyone who tries to use smoothing below that limit.

    posted 7 November 2006 @ 3:53 am

    Comment by Willis Eschenbach

    Re #12, while the x-axis ends in 1975, the data are centered 50-year averages. So the data actually extend to ~2000, and the last data point is the 1950-2000 average.

    w.

    posted 7 November 2006 @ 4:09 am

    Comment by bender

    bin it,
    pin it,
    spin it.

    and if all else fails, snip it.

    posted 7 November 2006 @ 5:05 am

  2. Paul Penrose
    Posted Nov 7, 2006 at 8:49 AM | Permalink

    I don’t see any way that you can produce figure 1 from Juckes et al 2006 using the proxy series they claim to have used no matter how much you massage the data. I don’t know if this is the result of simple error or not, but with Martin being so coy about how figure 1 was produced, I have to wonder. Somehow I don’t think we’ll be seeing any code from him or a better explanation on how it was done. I’d love to be proven wrong and see Dr. Juckes fully disclose all the data and code (at least for figure 1), but I just don’t see it happening.

  3. eduardo zorita
    Posted Nov 7, 2006 at 8:53 AM | Permalink

    #Indigirka summer temp

    Thanks, Willis. The summer temps seem to display a slight positive trend since 1970 or so. This is what struck me, since in the Indigirka series, at least by eye, the trend is clearly negative.
    But perhaps it is the smoothing in the proxy series plot that makes me think so.
    If you have the data “unofficially” you could check this.

  4. Steve McIntyre
    Posted Nov 7, 2006 at 9:40 AM | Permalink

    #2. The way that Figure 1 works is by reducing the number of series in the AD1400 network so that the proportion of bristlecones is increased, which affects the covariance PC1. Juckes’ “reason” for reducing the number of series as expressed in the living room is that it avoids the need for extending series to 1980 for the PC analysis. (Of course he could have used Mann’s method of stepwise analysis and ended in 1971). While this sort of “reason” sounds plausible, it’s impossible to disentangle it from data snooping, because the reasoning is done after the fact. How do we know that Juckes wasn’t using Esper’s methodology:

    this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology. That said, it begs the question: how low can we go?

    Esper’s question remains unanswered in Hockey Team articles.

  5. Steve Sadlov
    Posted Nov 7, 2006 at 10:01 AM | Permalink

    RE: Indigirka – I seem to recall from my studies that the Tsungska (sp?) bolide strike was what, about 1905?

  6. MarkR
    Posted Nov 7, 2006 at 10:29 AM | Permalink

    Some background on Dr Juckes.

  7. Posted Nov 7, 2006 at 10:40 AM | Permalink

    The ‘Union’ reconstruction is quite accurate. 0.3 C two-sigma. That is way better than MBH99 0.5 C two-sigma. The simpler the better?

    Taking \sigma =0.15 K, the root-mean-square residual in the calibration period, 1990 is the first year when the reconstructed pre-industrial maximum was exceeded by 2 \sigma .

    It seems to me that there is a combination of CVM and ‘2-sigmas from calibration residuals’. Is that true?

  8. eduardo zorita
    Posted Nov 7, 2006 at 10:57 AM | Permalink

    For those interested in a theory of tree-ring growth

    Click to access Evans_etal2006.pdf

  9. KevinUK
    Posted Nov 7, 2006 at 10:58 AM | Permalink

    #7, Mark R

    Thanks for the link. Martin Juckes certainly has been busy writing letters to the newspapers over the last few years on a variety of different issues.

    Now why am I not surprised that it turns out that he is a member of the Green Party? And what a paradox it is that he works next door to the Harwell (former Atomic Energy Research Establishment) site, yet is clearly anti-nuclear power. He must find it very hard living next to all the low-level waste stored in drums in the car park there. I presume he doesn’t dare venture into the site canteen or the on-site social club for fear of being contaminated by the UKAEA/AEA Technology workers there.

    KevinUK

  10. Spence_UK
    Posted Nov 7, 2006 at 11:33 AM | Permalink

    #7, #9

    Hmm, I did think to myself there was a distinct lack of objectivity in Dr Juckes’ approach. If these are one and the same (how many Martin Juckes are there in this part of the world?) then I hold out little hope of constructive scientific dialogue.

    Being a member of the green party wouldn’t make his scientific analysis wrong, but it certainly adds an explanation for his behaviour here that leaves an unpleasant taste in the mouth.

    Cowley

    Shah Jahan Khan Liberal Democrat 627 Elected
    Mumtaz Fareed The Labour Party Candidate 534
    Martin Nicholas Juckes Green Party 294
    Philippa Martha Whittaker Respect 213

  11. IL
    Posted Nov 7, 2006 at 12:25 PM | Permalink

    The posts on Martin Juckes’ politics, personal life etc are completely irrelevant to his published science, which is the only thing that should concern us. If you want to drive people away from engaging in constructive dialogue, this is exactly the way to do it. Concentrate on the pea under the thimble in Steve’s words, not whether the person is wearing a funny hat.

    On Esper’s question – there is no problem in accepting some proxies and rejecting others as long as you have good, independent prior grounds for doing so! (which is what seems to be missing here in the rationales given). Suppose a speleothem or other proxy was found to perfectly reproduce the external temperature then you would need no other. (This presupposes rigorous measurements over time of temperature against proxy parameter which is what really seems to be missing in many proxy studies). This one proxy would be so superior to other proxies that introducing other proxies would only introduce noise. Thus Esper’s question is perfectly reasonable – to me he is asking, can you find a perfect proxy? (That you have compelling independent reasons for believing is a perfect proxy 🙂 ).

  12. Steve Sadlov
    Posted Nov 7, 2006 at 12:43 PM | Permalink

    RE: #11 – Well, when so-called scientists use their station as a lever to foist radical and extremist visions into the realm of policy, they open themselves up to it. History teaches us that those who stand by when this happens may live to regret it later.

  13. Spence_UK
    Posted Nov 7, 2006 at 12:58 PM | Permalink

    Re #11 – this is exactly why I included the caveat that it doesn’t affect the science, which should stand and fall on its own merit. However, Martin has behaved most ungraciously here on a number of issues, clearly treading a line of being as little help as possible without being “unhelpful”. Common courtesy is not a scientific issue and it is difficult not to assume his political bias is influencing his behaviour in this way.

    As for the election results I posted up above, I would argue Martin has done exceptionally well to win that many votes for a minority party. That should be taken as a compliment.

    However, if Steve feels these comments may further prejudice his analysis of Martin’s paper, I have no objection to my comments being removed.

  14. Armand MacMurray
    Posted Nov 7, 2006 at 12:59 PM | Permalink

    Re:#11,12
    Sorry, Steve, but IL is right here. This is not a policy forum, and Dr. Juckes has not been “foisting” any policy visions here. There are plenty of other sites/forums to challenge/discuss Dr. Juckes policy views.

  15. mikep
    Posted Nov 7, 2006 at 1:02 PM | Permalink

    But IL’s substantive point remains. If there was a single perfect proxy it would make sense to use it alone. The real problem seems to be that the proxies are chosen not on the basis of prior knowledge about how they respond to temperature. Instead they are chosen on the basis of tight in-sample fit without regard to problems of spurious correlation. The out-of sample breakdown of such relationships is a classic sign that the original relationship was likely spurious.

  16. Armand MacMurray
    Posted Nov 7, 2006 at 1:06 PM | Permalink

    Re: #13
    Sadly, there are plenty of examples of ungraciousness in science (as elsewhere in life); no need to go looking at political views for an explanation, as evidenced by the many who are gracious despite political differences.

  17. Steve McIntyre
    Posted Nov 7, 2006 at 1:28 PM | Permalink

    #15. I agree with the points about proxy quality – the difficulty is disentangling the selection protocols. In some leading medical journals, I understand, they make people file the protocols before the study as a condition of publication. It would be nice to get that here. As to Esper, he de-selected individual cores in some of the chronologies in Esper et al 2002 and has refused to explain the reason for doing so (other than citing Esper et al 2003), which gives the quote cited here and doesn’t illuminate any objective reasons or methodology.

  18. jae
    Posted Nov 7, 2006 at 5:22 PM | Permalink

    11: You say:

    On Esper’s question – there is no problem in accepting some proxies and rejecting others as long as you have good, independent prior grounds for doing so! (which is what seems to be missing here in the rationales given).

    Exactly so. But I have not seen any indication of what the grounds for doing so are, or any indication that the selection criteria were developed prior to the actual selection. It looks to me like the selection criteria are simply ad hoc, based on whether the data fits some preconceived notion (i.e., if it “shows warming” in the last century). Shouldn’t these criteria be explained in their papers?

  19. Willis Eschenbach
    Posted Nov 7, 2006 at 5:23 PM | Permalink

    UC, in #7 you say:

    The ‘Union’ reconstruction is quite accurate. 0.3 C two-sigma. That is way better than MBH99 0.5 C two-sigma. The simpler the better?

    Taking \sigma =0.15 K, the root-mean-square residual in the calibration period, 1990 is the first year when the reconstructed pre-industrial maximum was exceeded by 2 \sigma .

    It seems to me that there is a combination of CVM and ‘2-sigmas from calibration residuals’. Is that true?

    While this RMS error sounds “quite accurate”, it is not. First, the 95% confidence interval of the 0.15°K rms error is ± 0.09°K.

    Second, the rms residual of the HadCRUT3 Northern Hemisphere data with respect to a straight trend line is \sigma = 0.157°K ± 0.09°K (2 sd). In other words, the Union reconstruction doesn’t perform better than a straight line, which doesn’t impress me at all.

    An average of random red-noise will give something approximating a straight line. Thus, a group of proxies has to beat a straight line to make me want to sit up and take notice. The Union reconstruction does not do so.

    Third, the correlation is neither particularly good, nor is it significant. The R^2 of the Union reconstruction with respect to NH temperatures 1850-1980 is 0.49, with a “p” value of p = 0.08 (not significant). They claim it is significant, but they use a very odd and un-tested “Monte Carlo” method for estimating significance in the presence of autocorrelation. I have used the standard method (Quenouille, M.H., Associated Measurements, Butterworth Scientific Publications, London, 1952).

    Finally, the true error of the method needs to be determined by bootstrap methods. Juckes et al. have cherry-picked their proxies and gotten a passable fit, but that proves nothing about the inherent error of the method.

    In short, they have presented cherry picked proxies that do not outperform a straight line, and do not have a significant correlation with the instrumental record … nor is their error statistically different from MBH98.

    w.

    PS – There are a couple of other problems with their claim that 1990 exceeds the pre-industrial mean by 2 sigma.

    First, and most important, is that you can’t use i.i.d. statistical methods in the presence of trends. Suppose, for example, that they were doing this same exercise in the middle of the Little Ice Age. Would the fact that temperatures in the early 1800s exceeded temperatures in the Little Ice Age mean anything? All it means is that temperatures contain trends on temporal scales from diurnal to millennial … which we knew already.

    Second is the assumption that the proxies are as accurate in the year 1080 as in the year 1980, i.e., that no dating errors have been made.

    Third is the fact that averaging different proxies will introduce “beat frequencies” of unknown size and period. The “CVM” method, while minimizing the effect of these during the calibration period, will unavoidably increase the effects of these outside the calibration period.

    Fourth is that the trends are different in the calibration period (Union, +0.38°C/century; instrumental, +0.28°C/century). If this difference is maintained throughout the record, the early reconstruction will be low by a full degree.

    Thus, their claim is statistically meaningless.
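
    As an aside on the “beat a straight line” benchmark above, here is a minimal synthetic sketch (made-up AR(1) series with an assumed coefficient of 0.5, not the actual proxies or HadCRUT data) of why an average of many independent red-noise series collapses toward a flat line:

    ```python
    import numpy as np

    # Synthetic illustration only: average many independent AR(1) ("red noise")
    # series and compare the spread of the composite with that of a single series.
    rng = np.random.default_rng(0)
    n_years, n_series, phi = 131, 20, 0.5   # phi = 0.5 is an assumed AR(1) coefficient

    def ar1_series(n, phi, rng):
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + rng.standard_normal()
        return x

    series = np.array([ar1_series(n_years, phi, rng) for _ in range(n_series)])
    composite = series.mean(axis=0)

    print("typical std of one series:", round(float(series.std(axis=1).mean()), 2))
    print("std of the composite     :", round(float(composite.std()), 2))
    # The composite's spread shrinks roughly as 1/sqrt(n_series), so the average
    # of independent red noise looks close to a flat line.
    ```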

  20. jae
    Posted Nov 7, 2006 at 5:24 PM | Permalink

    oops, #11, maybe i’m just saying the same thing you are.

  21. Hans Erren
    Posted Nov 7, 2006 at 6:13 PM | Permalink

    re http://www.climateaudit.org/?p=886#comment-63212

    Comment by Martin Juckes

    #71: Hans, looks like a typing error (by me) entering the Tornetraesk position, it should be 68N, 20E, which hopefully will keep it out of the Baltic. Thanks for pointing that out.

    Can you please elaborate on your selection of Torneträsk and does it differ from Torneträsk by Schweingruber as archived in the databank?
    http://home.casema.nl/errenwijlens/co2/swedenmap.htm

    See also:

    http://www.climateaudit.org/?p=877

    Tornetrask
    Team Table 1 lists 4 different versions of Tornetrask under different alter egos. The following 4 series all include the same locations:

    #11 “Northern Norway” of Hegerl et al, ascribed lat-long of 65N, 15E is actually Tornetrask !?!.
    #6 “Tornetraesk (Sweden)” of Moberg ascribed lat-long of 58N, 21E is Tornetrask
    #17 “Tornetraesk Sweden” of Esper also ascribed lat-long of 58N, 21E is Tornetrask. This version is used in the All-Star reconstruction.
    #19 “Fennoscandia” of Jones et al 1998 and MBH, ascribed lat-long of 68N, 23E is also Tornetrask. This near-duplicate version is also used in the All-Star reconstruction.

    Thus, we have a range of estimates for the location of Tornetrask going from 58 to 68N and from 15E to 23E. The “oldest” version of these is the version in MBH/Jones et al 1998. But in this case they additionally use the Esper version, making two versions used from this site. The Moberg version appears to be the Briffa 2000 version. These are supposed to be “independent” series.

  22. Posted Nov 8, 2006 at 2:44 AM | Permalink

    #19

    I was hoping that authors would answer to our questions, maybe there is something that we don’t understand. But let me elaborate on this CVM + calibration residuals issue meanwhile. Let t_i be the temperature vector and r_i the corresponding reconstruction vector. i=1..N is the calibration period. In addition, let’s assume that t and r are centered in that period (so we can avoid the rms-std discussion).

    If I got it right, CVM forces

    r^Tr=t^Tt

    i.e. sample variances match exactly after CVM. Calibration residuals are e=r-t, and we are interested in rms (or sample std, whatever) of the residuals

    \sigma _e=\sqrt{\frac{1}{N}\sum _i (e_i)^2}=\sqrt{\frac{1}{N}e^Te}

    term e^Te is the interesting one, let’s open it

    e^Te=(r-t)^T(r-t)=r^Tr-2r^Tt+t^Tt

    and using the first equation we’ll obtain

    e^Te=2t^Tt-2r^Tt

    r^Tt is dot product, i.e. r^Tt=\|r\| \|t\| \cos \theta . Thus,

    e^Te=2 \|t\| \|t\| (1-\cos \theta)

    Going back to std, we’ll obtain

    \sigma _e=\sqrt{\frac{1}{N}2 \|t\| \|t\| (1-\cos \theta)}

    i.e.

    \sigma _e=\sqrt{2} \sigma _t  \sqrt{1-\cos \theta}

    So, there is an upper limit for errors:

    \sigma _e \leq 2 \sigma _t

    This limit is a function of the standard deviation of the calibration temperature. For the commonly used 1901-1980 calibration temps this limit is 0.4 C. If we use CVM and calibration residuals, we’ll never get a 2-sigma larger than 0.8 C. Whatever data we use.
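
    A quick numerical check of this bound (a minimal sketch with synthetic vectors standing in for the calibration temperature and the candidate composite, not the actual series):

    ```python
    import numpy as np

    # Synthetic check of the result above: after centring and variance matching
    # (CVM), the calibration residual std can never exceed 2 * sigma_t.
    rng = np.random.default_rng(1)
    N = 80                                  # e.g. a 1901-1980 calibration window
    t = rng.standard_normal(N)              # stand-in for calibration temperature
    r = rng.standard_normal(N)              # stand-in for any candidate composite

    t -= t.mean(); r -= r.mean()            # centre both over the calibration period
    r *= np.linalg.norm(t) / np.linalg.norm(r)   # CVM: force r'r = t't

    sigma_t = np.sqrt(t @ t / N)
    sigma_e = np.sqrt((r - t) @ (r - t) / N)
    cos_theta = (r @ t) / (np.linalg.norm(r) * np.linalg.norm(t))

    print("sigma_e                     :", round(float(sigma_e), 3))
    print("sqrt(2)*sigma_t*sqrt(1-cos) :", round(float(np.sqrt(2) * sigma_t * np.sqrt(1 - cos_theta)), 3))
    print("upper bound 2*sigma_t       :", round(float(2 * sigma_t), 3))
    ```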

  23. Willis Eschenbach
    Posted Nov 8, 2006 at 3:14 AM | Permalink

    UC, an excellent analysis. Unfortunately, I got lost in the last step. Should the “2” in the last step be \sqrt{2} , or (more likely) am I just not understanding it?

    w.

  24. Posted Nov 8, 2006 at 3:31 AM | Permalink

    Re: #5

    RE: Indigirka – I seem to recall from my studies that the Tsungska (sp?) bolide strike was what, about 1905?

    1908

  25. Posted Nov 8, 2006 at 3:44 AM | Permalink

    #23

    If r=-t, the cosine will be -1 and we’ll get \sqrt{2} \sigma _t  \sqrt{2}=2 \sigma _t . That is the maximum.

  26. Hans Erren
    Posted Nov 8, 2006 at 4:13 AM | Permalink

    for your convenience I mapped all locations from table 1 from Juckes et al on an interactive zoomable map

    http://home.casema.nl/errenwijlens/co2/juckesmap.htm

    screenshot:

  27. Willis Eschenbach
    Posted Nov 8, 2006 at 4:32 AM | Permalink

    Thanks much, UC, I knew I was missing something. I was thinking about cosine going from 0 to 1 … late night brain gap …

    w.

  28. Posted Nov 8, 2006 at 5:12 AM | Permalink

    #27

    -1 is quite an extreme example; I think in practice the worst you can do is to reconstruct a series that is orthogonal (uncorrelated, in stats terms) to the temperature vector. Then you’ll get a maximum of \sqrt{2} \sigma _t , and 0.57 C for max 2-sigma.

  29. Jean S
    Posted Nov 8, 2006 at 5:23 AM | Permalink

    re#22: Nice analysis, I never thought about it that way! Steve, I think UC’s analysis deserves a highlight!

    For those not so much into math, here’s an explanation of UC’s derivation in simpler(?) terms:

    Suppose you have the instrumental temperature series and your (somehow obtained) temperature reconstruction. Now if you “standardize” your reconstruction by setting its mean and standard deviation in the calibration period to equal those of the instrumental series, your Mannian “uncertainty levels” (MBH98) are upper bounded by twice the standard deviation of the instrumental series in the calibration period, irrespective of your reconstruction (it could be pure noise)!

  30. Steve McIntyre
    Posted Nov 8, 2006 at 8:41 AM | Permalink

    #29. Will do. This is an excellent point which I’d pondered in the past and UC has given a very nice demonstration. As I recall, the reported Mannian uncertainty levels are about 90-95% of the instrumental 2-sig; I’ll check this as well.

  31. Ross McKitrick
    Posted Nov 8, 2006 at 9:29 AM | Permalink

    It would be a nice check on UC’s result to replace the reconstruction vector r(t) with (a) white noise, (b) nonsense data like, say, S&P500 or US real interest rates; and see if the 2-sigma bounds are better, worse or the same than using the tree ring proxies.
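
    One way to run the check Ross suggests, sketched here with white-noise pseudo-reconstructions only (purely synthetic; an S&P500 or interest-rate series would be slotted in the same way): variance-match each noise series to a target and record the calibration-residual 2-sigma.

    ```python
    import numpy as np

    # Monte Carlo sketch of the suggested check (synthetic target, white-noise
    # "reconstructions"): how big is the CVM calibration-residual 2-sigma when
    # the proxy is pure noise?
    rng = np.random.default_rng(2)
    N, n_trials = 80, 10000
    t = rng.standard_normal(N)              # stand-in for the calibration temperature
    t -= t.mean()
    sigma_t = t.std()

    two_sigma = np.empty(n_trials)
    for k in range(n_trials):
        r = rng.standard_normal(N)          # white-noise pseudo-reconstruction
        r -= r.mean()
        r *= np.linalg.norm(t) / np.linalg.norm(r)   # CVM scaling
        two_sigma[k] = 2 * (r - t).std()

    print("mean 2-sigma from pure noise:", round(float(two_sigma.mean()), 2))
    print("spread (std) across trials  :", round(float(two_sigma.std()), 2))
    print("2*sqrt(2)*sigma_t           :", round(float(2 * np.sqrt(2) * sigma_t), 2))
    # Even a meaningless series yields a finite "uncertainty" close to
    # 2*sqrt(2)*sigma_t under this recipe, which is UC's point about the upper limit.
    ```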

  32. Posted Nov 8, 2006 at 11:10 AM | Permalink

    #30 soon we’ll face the big problem

    MBH99 1000-1400, \sigma _r is 0.11. Calibration temperature \sigma _t is 0.20. MBH99 \sigma _e  for 1000-1400 is 0.25. Has anyone ever seen the reconstruction with only the 12 proxies for post-1400? What would it look like? I have a theory how it is possible to obtain those values, but that’s a stupid one.

  33. Steve McIntyre
    Posted Nov 8, 2006 at 11:17 AM | Permalink

    32. No one has ever seen the unspliced MBH reconstructions for the AD1400 step or, for that matter, the AD1000 step. I’ve emulated the calculation and will post it up – remind me if I don’t within a week or so.

    We’ve tried every which way to get Mann to produce the unspliced AD1400 step, but he’s refused. The NSF refused. Nature refused. Even the House Energy and Commerce Committee didn’t get it. Ralph Cicerone refused to ask. Gerry North asked but got nowhere. It is so frigging pathetic.

  34. Steve McIntyre
    Posted Nov 8, 2006 at 4:39 PM | Permalink

    Comment by bender

    But if the dendroclimatologists are obliged to provide replicable methods and access to data, they will lose their one major asset: their monopoly on the ability to package selected truths as the whole truth. We can’t have that.

    posted 8 November 2006 @ 2:55 pm

  35. Willis Eschenbach
    Posted Nov 8, 2006 at 6:07 PM | Permalink

    Re #31, Ross, thank you for your comment. You say:

    It would be a nice check on UC’s result to replace the reconstruction vector r(t) with (a) white noise, (b) nonsense data like, say, S&P500 or US real interest rates; and see if the 2-sigma bounds are better, worse or the same than using the tree ring proxies.

    Since reading UC’s fascinating post, I’ve been doing some preliminary work on this. First, the raw data. The average of the (normalized Union – normalized instrumental)^2 values is 0.59 standard deviations. This agrees with the Juckes calculations. The standard error of the mean is 0.11 sd. This means that the rms error is sqrt(0.59) = 0.77 sd, and the standard error of the rms error is sqrt(0.11) = 0.33 sd.

    Next, white noise. The rms error of white noise vs instrumental data is 1.41 ± 0.08 sd.

    Finally, red noise. I constructed red noise series using the autocorrelation of the Union reconstruction. The rms error was 0.99 ± 0.09 sd.

    Now, is the Union reconstruction significantly different from the red or white noise pseudo-proxies? To be different at the 95% confidence level, assuming that the errors are independent, the 1.66 * (standard error of the mean) error bars must not overlap. For the Union reconstruction, this is 0.77 + 0.33*1.66 = 1.31 sd. For the white noise, this is 1.41-0.08*1.66 = 1.31 sd.

    And for the red noise, this is 0.99 – 0.09*1.66 = 0.84 sd.

    Thus, while you could make the case that the Union reconstruction does significantly better than white noise, it does not do significantly better than red noise.

    At least that’s how I figure it … I’m sure UC can tell me if I’ve made any errors.

    w.
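
    For anyone wanting to reproduce this kind of surrogate test, a minimal sketch (assuming first-order Markov / AR(1) noise and a made-up template series, not the Union reconstruction) of generating red noise whose lag-1 autocorrelation and variance are taken from a given series:

    ```python
    import numpy as np

    # Sketch: build an AR(1) surrogate whose lag-1 autocorrelation and variance
    # are estimated from a template series (here a made-up smooth series).
    def lag1_autocorr(x):
        x = x - x.mean()
        return float((x[:-1] @ x[1:]) / (x @ x))

    def matched_red_noise(template, rng):
        phi = lag1_autocorr(template)
        innov_sd = template.std() * np.sqrt(1.0 - phi ** 2)   # keeps the variance comparable
        out = np.zeros(len(template))
        for i in range(1, len(out)):
            out[i] = phi * out[i - 1] + innov_sd * rng.standard_normal()
        return out

    rng = np.random.default_rng(3)
    template = np.cumsum(rng.standard_normal(131)) * 0.1   # stand-in for a smooth reconstruction
    surrogate = matched_red_noise(template, rng)

    print("template lag-1 autocorr :", round(lag1_autocorr(template), 2))
    print("surrogate lag-1 autocorr:", round(lag1_autocorr(surrogate), 2))
    ```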

  36. Posted Nov 9, 2006 at 5:51 AM | Permalink

    Re #35: This should is close to what we did for the figures presented in table 3 of the manuscript. There we looked at the R values, but this should not affect the significance estimates because (normalized Union – normalized instrumental)^2 = 2 – 2R. We looked at the R values obtained both using simple red noise (i.e. first order Markov, for those who want the details) and using random sequences which reproduce the same auto-correlation structure as the reconstructions and N. Hemisphere temperature. In the case of the Union reconstruction, the correlation time scale for the reconstruction is significantly longer than that for the observed temperature (Figure 6), so we looked at the R values obtained from correlating random series based on the statistics of the Union with random series based on the statistics of the temperature. In both cases (using red noise and the more detailed statistical structure) we obtained significances over 99% (based on a sample of 10000).
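
    For readers checking the identity quoted here: writing u for the normalized reconstruction and t for the normalized instrumental series (both zero mean and unit variance over the calibration window), it follows in one line, since

    \frac{1}{N}\sum_i (u_i - t_i)^2 = \frac{1}{N}\sum_i u_i^2 - \frac{2}{N}\sum_i u_i t_i + \frac{1}{N}\sum_i t_i^2 = 1 - 2R + 1 = 2 - 2R ,

    so ranking candidate reconstructions by R or by the normalized mean squared difference gives the same ordering.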

  37. Posted Nov 9, 2006 at 6:29 AM | Permalink

    #35, 36

    I think CVM alone cannot bring up significant spurious correlation with red noise (and Table 3 confirms that). Did you try the same with inverse regression? I think it is more susceptible to spurious correlation.

    #33

    We’ve tried every which way to get Mann to produce the unspliced AD1400 step, but he’s refused. The NSF refused. Nature refused. Even the House Energy and Commerce Committee didn’t get it. Ralph Cicerone refused to ask. Gerry North asked but got nowhere. It is so frigging pathetic.

    Then we’ll have to keep on guessing. Tried this combination: inverse regression and CVM after that. (12 proxies from Juckes et al supplement). Got quite close to MBH99 reconstruction for 1000-1400 (0.8 correlation, mean(MY-MBH99) 0.03 C, std(MY-MBH99) 0.09). Calibration residual std before CVM matches quite well with IGNORE THESE COLUMNS column 1. (Now, we can keep guessing until we figure it out. But that’s not how science should work..)

  38. Steve McIntyre
    Posted Nov 9, 2006 at 8:23 AM | Permalink

    #37. MBH99 used 14 proxies – they also used Quelccaya 1 dO18 and accumulation, making 4 series from 1 site. Juckes without commenting on it reduced this to two.

    I can reconcile exactly to Wahl and Ammann’s emulation of MBH. I’ll post up my digital version of the MBH99 step – remind me if I don’t do it within a few days.

  39. Posted Nov 9, 2006 at 9:04 AM | Permalink

    #38

    Thks for the info. It will be interesting. (And it is also interesting that with CVM+INV combination we have a close match. But this is spurious, of course. Nobody would overfit the shape with INV and then overfit the scale by CVM)

    Are those 2 proxies archived somewhere?

  40. Steve McIntyre
    Posted Nov 9, 2006 at 12:02 PM | Permalink

    Juckes has re-issued his archive of reconstructions following some commentary here without issuing any notice so far at Climate of the Past or at his website. There are 64 series in the new archive as compared to 68 in the previous set. Not included in the new set appear to be the following:

    27 mr_uhi_1000_cvm_nht_01.02.001
    28 mr_uhi_1000_invr_nht_01.02.001
    29 mr_ulo_1000_cvm_nht_01.02.001
    30 mr_ulo_1000_invr_nht_01.02.001

    The id codes of the other 64 series match. I wonder what the differences are.

  41. bender
    Posted Nov 9, 2006 at 12:23 PM | Permalink

    Steve M, he’s “moved on”. What’s your hangup?
    🙂

  42. Steve McIntyre
    Posted Nov 9, 2006 at 12:29 PM | Permalink

    I wonder what he did. I’m looking forward to ultimately looking at his statistical ventures, which look pretty hair-raising.

  43. Barney Frank
    Posted Nov 9, 2006 at 12:38 PM | Permalink

    I note a depressing trend at CA. Climate scientists come here with an apparently professional attitude and desire to discuss issues. Then they get asked some tough questions and usually supply less than adequate answers which results in more tough questions. Then they start getting testy or playing the victim and come up with a reason to leave. This behavior, even more than the substandard science it produces, is pretty disheartening and to a non scientist really gets me wondering as to just how much I can trust from any scientific discipline.
    I remember reading a quote by a giant in the medical research field who said that 90% of published medical research was useless junk. Starting to wonder if that is not pretty darn accurate for most science.

  44. Steve McIntyre
    Posted Nov 9, 2006 at 1:15 PM | Permalink

    #43. I don’t think that they are used to any sort of analysis of their data and results. They seem to be pretty fragile flowers.

    #40. I’ve re-collated this and after re-sorting the matches, I can’t find any differences in the two data sets other than removing the 4 series mentioned above. The order of the series in the nc and csv versions seems to be different. Reconstruction #9 in the old csv version is reconstruction #28 in the new nc version. So although Juckes said it was meaningless, there don’t seem to be any changes in the archive #2.

  45. Jean S
    Posted Nov 9, 2006 at 2:23 PM | Permalink

    re #33: I don’t know if you noticed, but we now have the exact residuals (or the reconstruction for the period 1902-1980) for the AD1000 step (MBH99). [Add to my “scanned residuals” the “sparse instrumental” values to obtain the recon for AD1000. Further subtract the “dense instrumental” to obtain the “true residuals”].

  46. bender
    Posted Nov 9, 2006 at 2:56 PM | Permalink

    Re #42: #41 was tongue in cheek.

    They seem to be pretty fragile flowers

    Funny, so many of them claim to be “thick-skinned”. Reality is they’re such flowers that I don’t even think they know what thick-skinned means.

  47. Steve Sadlov
    Posted Nov 9, 2006 at 3:21 PM | Permalink

    RE: #43 – RE: The behavior you noted.

    It is certainly indicative of a lack of character, and may be indicative of outright fabrication and lying.

  48. Barney Frank
    Posted Nov 9, 2006 at 4:36 PM | Permalink

    #44

    How the hell do people get to this level in any scientific discipline without becoming used to analysis of their data or results? I believe you may very well be correct Steve, but if you are, it is a pretty pathetic state of affairs.

    #46

    I recall the repeated, unprompted references to how thick one person claimed her skin to be and assumed at the time that the opposite would probably prove to be the case. I nearly commented on it a couple of times but didn’t want to disturb the “collegiality” that was going on.

  49. bender
    Posted Nov 9, 2006 at 6:31 PM | Permalink

    Re #48
    Juckes at some point must have gone to the same college to develop his equally thick skin.

  50. Pat Frank
    Posted Nov 9, 2006 at 6:44 PM | Permalink

    #43, 48 — None of that is typical of the chemistry with which I’m very familiar. I’ve gone through several wringers, and have turned that crank a few times on others. There are instances when someone gets a soft touch review. This is mostly restricted to the more famous people, where a little flattery seems irresistible to some people. Maybe there’s a natal disposition to sycophancy. But in any case, I’ve never seen the kind of shabby methodology, conclusion-mongery, and data-snooping as Steve M. has exposed in dendroclimatology, anywhere in my experience of science. If 90% of science was that kind of trash, we’d not have anywhere near the technology we do now. Most science is really the incremental bits of knowledge that only near colleagues in the field notice. I put the fault in climate science on the seductive call of political druthers. They had an opportunity to bend things to suit their prejudices, and to their everlasting shame, they took it. Egged on by the loud applause of the bloomer gallery. Science is objective demonstration, though, and it may be that their piper may soon demand payment.

    And isn’t it time, Steve M., Bender, Willis, Ferdinand, Jean S., Francois O., and David Stockwell, that you published more of what you’ve produced here? Get on with it, damn it. I want to read it all in one place, and I want the piper to have his invoice. In public.

  51. Willis Eschenbach
    Posted Nov 9, 2006 at 7:18 PM | Permalink

    Martin, thank you for your comment in #36. You say:

    Re #35: This should is close to what we did for the figures presented in table 3 of the manuscript. There we looked at the R values, but this should not affect the significance estimates because (normalized Union – normalized instrumental)^2 = 2 – 2R. We looked at the R values obtained both using simple red noise (i.e. first order Markov, for those who want the details) and using random sequences which reproduce the same auto-correlation structure as the reconstructions and N. Hemisphere temperature. In the case of the Union reconstruction, the correlation time scale for the reconstruction is significantly longer than that for the observed temperature (Figure 6), so we looked at the R values obtained from correlating random series based on the statistics of the Union with random series based on the statistics of the temperature. In both cases (using red noise and the more detailed statistical structure) we obtained significances over 99% (based on a sample of 10000).

    Not sure whether you mean “should be close” or “is close” at the start of your post, but our conclusions are quite different. For the normalized data series, 1850-1980, I find the following:

    Mean of (instrumental – Union proxy)^2 = 0.59 sd.

    Standard deviation of (instrumental – Union proxy)^2 = 1.22 sd.

    Standard error of mean = 1.22/sqrt(131) = 0.11 sd.

    RMS error = sqrt(0.59) = 0.77 sd.

    Standard error of RMS = sqrt(0.11) = 0.33 sd.

    Thus, the 95% confidence interval for the RMS error is 0.77 ± 0.66 sd. This easily encompasses my results for the red noise, given in post #35 above. Thus, my calculations say that the Union reconstruction is not significant, and you say it is.

    What are your corresponding figures for the mean, std. dev., etc, of instrumental vs Union, and instrumental vs red noise?

    The difference in our conclusions may come from a couple of possibilities. First, did you account for autocorrelation in calculating the standard deviation of the residuals (instrumental – Union)^2? This affects the standard error of the mean.

    Second, according to your post you have not analysed the problem at hand. You say you analysed the

    values obtained from correlating random series based on the statistics of the Union with random series based on the statistics of the temperature.

    But this is not what we want to know. We want to know whether the Union reconstruction does better than chance at predicting the instrumental record, not better than chance at predicting a random series based on that record. My results showed that the rms error for a red noise series based on the statistics of the Union, compared with the instrumental record, were well within the 95% confidence interval of the Union/instrumental figures given above.

    Your comments greatly appreciated,

    w.

  52. welikerocks
    Posted Nov 9, 2006 at 8:26 PM | Permalink

    And isn’t it time, Steve M., Bender, Willis, Ferdinand, Jean S., Francois O., and David Stockwell, that you published more of what you’ve produced here? Get on with it, damn it. I want to read it all in one place, and I want the piper to have his invoice. In public.

    We 2nd the motion! Because “What You Need to Know about Global Warming with Tom Brokaw” is once again [tonight/right now] airing on the “Science Channel”.

  53. Posted Nov 10, 2006 at 6:19 AM | Permalink

    When it comes to math, everybody has a break-point, and maybe I’m about to reach mine, but let’s try:

    #31, #35

    Let’s assume we have an independent reference vector (temperature) and reconstruction vector. Then they are orthogonal and, on average, 2*RMSE will be 2\sqrt{2} \sigma _t . That is 0.57 C in the NHTemp case. I don’t think that the redness of the reconstruction vector matters (in the CVM case). A short Monte Carlo shows that the standard deviation around this 0.57 C value is 0.1 C. I think this agrees with Table 3. Not sure how INV behaves, need to think about it. But the combination of INV and CVM is a complete no-no, it draws spurious matches from any data.

    But I need to emphasize: ‘CVM calibration residual 2-sigma’ as an error measure won’t do, as shown in #22.

    #45

    Very sparse instrumental as reference in 1000-1400 construction CIs, but calibrated with full instrumental, that would solve the #32 problem. But if that is true… Well, can’t imagine.

  54. Willis Eschenbach
    Posted Nov 10, 2006 at 6:22 AM | Permalink

    Thank you for the vote of confidence, ‘rocks. However, there are a few problems with that.

    1) Journals are very, very reluctant to admit that they have made a mistake. Their point of view is that scientists might make mistakes, but they’re caught by the bullet-proof peer review system. Thus, it’s hard to write a paper about “the mistakes in paper X” and get it published.

    2) Journals, by and large, have a very strong AGW bias. Look at how easily the AGW papers slide through the “good old boys” peer review system. Nature and Science, in particular, are very reluctant to publish anything questioning the “consensus”.

    3) I’m just a working stiff. I don’t have 36 co-authors and graduate students to do the dirty work, I have to do it all … plus make a living. Time.

    4) I am self-educated, without formal accreditation or an institution to go after my name. You can imagine how that plays with the journals …

    5) Journals are looking for new, fresh results, because they are businesses and that’s what sells. They are not looking for problems in old results.

    So, while it’s good in theory, it’s quite difficult in practice. I persevere, however, and am currently finishing a paper on smoothing the endpoints of temperature trend series for re-submission to GRL. They said the first one was too mean to Michael Mann … imagine that. I also have a paper in with E&E, which may be published.

    w.

  55. BradH
    Posted Nov 10, 2006 at 7:13 AM | Permalink

    Re: #51

    You’re so polite, Willis. Kudos to you. A better man than I.

  56. welikerocks
    Posted Nov 10, 2006 at 7:46 AM | Permalink

    Re: #51

    We hear you Willis, but one can hope. 🙂 We sure appreciate all of your input along with the others.

    I personally keep thinking at some point, [maybe when the policies are in place for a while as a result of this junk science] when regular people start hurting or struggling - somebody, a TV reporter or leader, or scientist with some integrity will introduce all the truth to the Main Stream Media outlets - or perhaps Old Mother Earth will surprise us and speak for herself at some point. We don’t know anybody - and we have a cornucopia of friends and family, co-workers around the world and the USA - who feels this GW stuff is good science.

    Re: #50 The behavior of some of the scientists whose work is in question and who have commented on this blog is just freaky. It is like reading a script from a Twilight Zone episode where the story takes place in a world where right means wrong and wrong means right.

  57. bender
    Posted Nov 10, 2006 at 9:43 AM | Permalink

    Re #54
    Exactly.

  58. Francois Ouellette
    Posted Nov 10, 2006 at 12:14 PM | Permalink

    #54 Willis, here are my personal pieces of advice:

    1) Journals are very, very reluctant to admit that they have made a mistake. Their point of view is that scientists might make mistakes, but they’re caught by the bullet-proof peer review system. Thus, it’s hard to write a paper about “the mistakes in paper X” and get it published.

    IMO, journals don’t “make mistakes”. They publish papers with mistakes in them. Honest editors will not claim that peer review is bullet proof. It’s a rudimentary filter that has its advantages and its drawbacks. But you’re right that it’s hard to just publish about “the mistakes in paper X” other than as a comment with a reply. But the truth is you don’t have to do that, or formulate it like that. Just publish your own version of somebody else’s work, with your own better methodology. And then sneak in little sentences such as: “our results point in a different direction than the work of such and such” with a passing reference. Too much of what is done here at CA is what might be called “negative” science: just claiming that this or that paper isn’t good enough. I can understand the emotional reaction by the authors: if you think you’re so good, why don’t you publish your OWN results! And I think you, and many other posters here (not me by any standards…) could, and should do that. Believe me, the moment you have a couple of papers, your work will be taken much more seriously. You can claim all you want that they don’t want to play fair, but if you don’t play by their rules either, they can dismiss your work as much as they like.

    2) Journals, by and large, have a very strong AGW bias. Look at how easily the AGW papers slide through the “good old boys” peer review system. Nature and Science, in particular, are very reluctant to publish anything questioning the “consensus”.

    Not ALL journals have a strong AGW bias. I’ve said it here before: forget Nature and Science. For heaven’s sake, I’ve never published there, and that didn’t stop me from pursuing a scientific career! GRL sure doesn’t seem to have such a strong bias.

    3) I’m just a working stiff. I don’t have 36 co-authors and graduate students to do the dirty work, I have to do it all … plus make a living. Time.

    Focus on one thing at a time. I’m sure you can write a bloody good paper on any of the subjects you talk about here. Pick the most interesting or the most relevant one. Just do it!

    4) I am self-educated, without formal accreditation or an institution to go after my name. You can imagine how that plays with the journals …

    Agreed. That’s the most difficult part. But apart from Nature or Science, where the Editors do a pre-filtering, other journals typically consider all papers submitted. So it’s up to the reviewers. But reviewers, if they are honest, and if the paper is well written, will give you a chance. If they don’t, persevere, make your point with the Editor, not acrimoniously but politely (and God knows YOU are polite!). The reviewer’s comments have to be precise and convincing as well if the paper is to be rejected. As a reviewer, I always spent a lot more time to reject a paper than to accept it. If the review is just vague, ask the Editor for more specific comments, and reply to them accordingly. If one reviewer is obviously dishonest, ask for a third (or fourth) opinion. Believe me, that is every author’s burden. You couldn’t believe how hard it was to publish some of my papers!

    5) Journals are looking for new, fresh results, because they are businesses and that’s what sells. They are not looking for problems in old results.

    Again, forget Nature and Science. A lot of journals are not-for-profit and run by scientific societies. But you must publish original and relevant stuff. If you revisit an old problem with a fresh point of view, it’s very much all right.

    I persevere, however, and am currently finishing a paper on smoothing the endpoints of temperature trend series for re-submission to GRL. They said the first one was too mean to Michael Mann … imagine that. I also have a paper in with E&E, which may be published.

    Why don’t you send them to some of us here who have experience with the publishing system. We can give you our own comments. I would be pleased to do this for you.

  59. Pat Frank
    Posted Nov 10, 2006 at 12:29 PM | Permalink

    #58 — Right on, Francois! 🙂 If you get your analyses published in specialist journals, Willis, and end up over-turning the standard litany, you’ll be famous. Science and Nature will end up begging you for a review of your work. Go for it!

  60. Willis Eschenbach
    Posted Nov 11, 2006 at 5:55 AM | Permalink

    Francois, Pat Frank, ’rocks, thank you for your comments and your encouragement. As I said before … I persevere. Heck, I’m still persevering in trying to understand Juckes’ methods … and still waiting for his answer to my questions about his Monte Carlo methods …

    w.

  61. Boris
    Posted Nov 11, 2006 at 1:01 PM | Permalink

    “It is certainly indicative of a lack of character, and may be indicative of outright fabrication and lying.”

    Ah, the true motivation:

    1. Invite climate scientist.
    2. Politely question climate scientist.
    3. Question climate scientist’s answers.
    4. Question climate scientist’s further answers with insinuations of them lying or manipulating.
    5. Demand data.
    6. Accuse climate scientist of hiding something when they shut off dialogue.
    7. Go to Step 1.

  62. Francois Ouellette
    Posted Nov 11, 2006 at 1:45 PM | Permalink

    How about:

    1. Climate scientists are always welcome
    2. Climate scientists come here complaining about the noise, and the lack of respect for their work
    3. Climate scientists are politely asked questions
    4. Climate scientists don’t answer but pretend there is no problem
    5. Problems are pointed out
    6. Climate scientists still don’t answer and complain about “stooges” with an arrogant tone
    7. More questions are asked
    8. Climate scientists still refuse to answer, pretend this was an interesting experiment, and leave.
    9. Go to step 1

  63. fFreddy
    Posted Nov 11, 2006 at 1:45 PM | Permalink

    Re #61, Boris
    Ah, the true believer:

    1. Believes that catastrophic global warming is scientific
    2. Doesn’t understand the questions
    3. Doesn’t understand the answers
    4. Go to Step 1.

  64. Dave Dardinger
    Posted Nov 11, 2006 at 1:46 PM | Permalink

    Hey, Boris, we’re used to trolls here. Your hectoring just isn’t going to get much play.

    The fact is that you can’t just say “here’s the answer” when it’s not and get much respect. If you, Boris, want to stick up for some particular climate scientist, then provide evidence that what people claimed wasn’t true was in fact true (or vice versa). In this case the site is an open book and you shouldn’t have much trouble supporting your favorite if he or she is worthy of supporting.

  65. jaye
    Posted Nov 11, 2006 at 1:54 PM | Permalink

    RE: #61

    Surely you are aware of how blogs work, being an open forum, etc.? Most of the “technical” posters here don’t resort to hints and allegations of impropriety; that mostly comes from those that are observing the interchange. And I have to say that given the preponderance of evidence presented in this series of threads, it seems to me that either there is a communication issue or one of “professionalism”.

    I’ve seen reasonable evidence that supports apparent subterfuge regarding the true nature of data (mislabeled, hidden, or self serving selection/filtering) and usage of methods that are curiously favorable to the preconceptions of the authors. Frankly, the behavior of these guys is shameful but hardly surprising. The green side of the aisle believes in the essential correctness of their position, therefore by “any means necessary” is justified by providence.

  66. Boris
    Posted Nov 11, 2006 at 10:05 PM | Permalink

    fFreddy,

    I’d love to read your research. Please point me to the appropriate journal(s).

    Dave,

    Brief example. Steve marked Juckes’ explanation of the length of the Sargasso sea proxy data as “of course, false.” Do I need to spell out the implication of such a statement?

    jaye says:
    “I’ve seen reasonable evidence that supports apparent subterfuge regarding the true nature of data (mislabeled, hidden, or self serving selection/filtering) and usage of methods that are curiously favorable to the preconceptions of the authors. Frankly, the behavior of these guys is shameful but hardly surprising. The green side of the aisle believes in the essential correctness of their position, therefore by “any means necessary” is justified by providence.”

    Auditors, audit thyselves!

  67. Steve McIntyre
    Posted Nov 11, 2006 at 10:27 PM | Permalink

    Transfer:

    Comment by Paul Penrose

    I have noticed over time that Dr. Juckes’ comments have become less and less informative and more insulting, culminating in his claim that Steve McIntyre is “constantly quoting out of context and spreading false information”. Now that’s an insult – Dr. Juckes, take notice! In my mind his claim is without merit, and I challenge him to prove it, if he can.

    It’s really too bad that it’s come to this because I really wanted to see an honest debate on some of these issues with one of these paleoclimatologists in an open forum. Sadly it appears this will not happen with Dr. Juckes. What is it with these people? Do they have their egos so invested in their work that they can’t tolerate any criticism or questioning of it?

    posted 9 November 2006 @ 4:50 pm

    Comment by Boris

    And this statement:

    “Martin’s statement is, of course, false. Here ‘s a plot of the Sargasso Sea series taken from Moberg’s Supplementary Information.”

    is not insulting?

    And this:

    “Dear Martin,

    Also, please quit saying you’ve answered questions when you haven’t. I said:

    Finally, you have not explained the omission of the Indigirka series.

    You said:

    The answer to your question about selection is also elsewhere on this blog, in a contribution from Stephen …

    NO. IT. IS. NOT. ANSWERED. ELSEWHERE PLEASE. ANSWER. THE. QUESTION. ABOUT. INDIGIRKA.

    I really hate to be petty about this, but handwaving and saying “I answered that elsewhere” WHEN YOU HAVEN’T just doesn’t cut it.

    What about Indigirka? What about Sargasso? Both of them fit your criteria, and you didn’t use them. Why not? ”

    is simply annoying. Climate scientists are going to stop coming here to play. But I’m wondering if that is the true goal.

    posted 11 November 2006 @ 12:34 pm

    Comment by Steve McIntyre

    #24. Boris, asking for an explanation of the exclusion of Indigirka (or Sargasso Sea) is a pretty simple question and both are pretty fundamental questions. Unless one understands selection protocols, it’s really hard to proceed with any analysis. These were questions that were on my mind regardless of whether Juckes showed up here. When he showed up, we asked him about it (And not just me). He either didn’t provide answers or gave answers that didn’t make sense. So people asked again.

    If he doesn’t want to answer, we can’t make him; but don’t pretend that he’s answered the questions if he hasn’t.

    posted 11 November 2006 @ 1:47 pm

    Comment by Willis Eschenbach

    #24, Boris, I wrote PLEASE ANSWER THE QUESTION as you quoted above. Why? Because that was the fourth or fifth time I had asked it without getting an answer. It is not a trivial question. Dr. Juckes, to his credit, finally answered the question … but only after repeated prompting.

    There are more polite ways to ask somebody to answer a question that has been ignored … and that’s what I tried the first four times. When that method failed, I was forced to try another.

    The parts of the equation that you seem to be missing are:

    1) Everyone, on either side of the aisle, gets attacked here at some point or another. Happens to me, bender, Steve M., Steve B., everyone. Since people are free to post here, it can’t be avoided.

    2) If you come here to defend your paper, you are generally welcome and treated with respect. If you start tapdancing around the questions, making vague claims and not backing them up, saying you’ve answered questions that you haven’t answered, eventually people’s patience will wear thin.

    w.

    posted 11 November 2006 @ 3:49 pm

    Comment by Reid

    Re #19: Martin Juckes says: “I’ve got some other work to do for a few days, I’ll check your site again sometime next week.”

    My instinct tells me the Hockey Team pays close attention to Climate Audit.

    And Martin Juckes will be reading this today, not next week as he claims. He just won’t be commenting again until sometime next week.

    posted 11 November 2006 @ 4:09 pm

    Comment by Boris

    There’s a difference between asking for an explanation or clarification and saying someone is “of course, false.” I don’t think it’s unreasonable to see this as an insult.

    posted 11 November 2006 @ 10:01 pm

    Comment by Dave Dardinger

    Steve, I think you should ban Boris from this particular thread and move his rather trollish remarks to the comment thread. He’s obviously trying to poison the well with respect to any future discussions with Dr. Juckes.

    posted 11 November 2006 @ 10:24 pm

  68. Will J. Richardson
    Posted Nov 15, 2006 at 5:15 PM | Permalink

    Re: Juckes Omnibus Comment #23 by Juckes

    It looks like Mr. Juckes will not engage on the issues raised in the thread. His comments are dismissive of the collective statistical expertise represented on Climate Audit, and smug. His air of assumed paleoclimatological omniscience indicates a closed mind blind to the fundamental flaws in his analysis. I doubt that keeping the Juckes Omnibus thread open is worth the trouble. He will not answer your questions and has no intention of taking any of the comments here seriously.

  69. Reid
    Posted Nov 15, 2006 at 5:47 PM | Permalink

    Re #68: “His air of assumed paleoclimatological omniscience indicates a closed mind blind to the fundamental flaws in his analysis.”

    I believe Juckes and most of the Hockey Team players know by now their work is deeply flawed. The tactics used by the Hockey Team remind me of the way the Vatican’s Office of the Inquisition acted when the church’s cosmology was no longer scientifically tenable during the Renaissance. Don’t acknowledge the obvious and vilify those that do.

  70. bruce
    Posted Nov 16, 2006 at 2:22 PM | Permalink

    Given Dr Juckes’ reluctance to explain his paper or to respond to valid questions relating to his work, and given the course of the discussion here on these issues, most observers will have formed their own view as to whether Dr Juckes is conforming to the scientific method in a spirit of genuine enquiry as advocated by Richard Feynman and others.

    I thought it would be useful to poll observers as to whether they think that Dr Juckes is demonstrating compliance with the scientific method and respect for established procedures in his work. I for one think that Dr Juckes, like Michael Mann and Phil Jones before him, has so far failed to demonstrate compliance with scientific method.

    Such a poll would certainly demonstrate whether there is a “consensus” and what that “consensus” is. I think that the Mannians might be surprised at the conclusion.

    I notice that blogs on other topics have systems for polling contributors. Perhaps CA could look into that. In the absence of such software, posters could perhaps just state their view on this matter. I am not a scientist, so before we initiate such an exercise it would perhaps be preferable if others had a go at framing the questions.

  71. jae
    Posted Nov 16, 2006 at 2:49 PM | Permalink

    70: Bruce: They fall FAR short of following the scientific method, as I understand the method. But I’m an old guy; maybe the modern generation has redefined the method to be something like the following:

    1. Form a belief (but call it a hypothesis).
    2. Design experiments to support the belief. It’s OK to engage in ad hoc cherrypicking, data snooping, and bogus statistical procedures (don’t have your work reviewed by statisticians, however). The end justifies the means.
    3. Do not cooperate with anyone who does not support your belief (hide data; evade questions; engage in ad hom arguments, etc.)
    4. Claim that you are part of a scientific consensus and “the science is settled.”
    5. Issue press releases that exaggerate the dire consequences predicted by your work.
    6. NEVER admit to a mistake, no matter how minor.
    7. Select journals for your publications that share your beliefs and will gladly let your cronies serve as reviewers.

    I guess I gotta go back to school and relearn science.

  72. interested observer
    Posted Nov 21, 2006 at 4:20 PM | Permalink

    The Climate of the Past interactive discussion now appears to contain an explanation from Mr. Juckes for not using the Indigirka proxy: "The Indigirka proxy is not available without restrictions (if we had used we would not have been able to place all the data used in the supplementary materials)".

  73. interested observer
    Posted Nov 21, 2006 at 4:25 PM | Permalink

    correction: should read “The Indigirka proxy is not available _for use_ without restrictions…”

  74. MarkR
    Posted Nov 21, 2006 at 4:33 PM | Permalink

    #72

    if we had used we would not have been able to place all the data used in the supplementary materials

    I wonder if he applied the same rule to all the other proxies he did include?

  75. Willis Eschenbach
    Posted Nov 22, 2006 at 5:47 PM | Permalink

    Test to see if I can post … sorry.

    w.

  76. Willis Eschenbach
    Posted Nov 22, 2006 at 9:03 PM | Permalink

    Well, I started to see what the actual Juckes data holds, and I immediately ran into two problems.

    One is that not all of the 18 records used fit their criteria. Their “criteria” are as follows:

    These series have been chosen on the basis that they extend to 1980 (the HCA composites and the French tree ring series end earlier), the southern hemisphere series have been omitted apart from the Quelcaya glacier data, Peru, which are included to ensure adequate representation of tropical temperatures. The MBH1999 North American PCs have been omitted in favour of individual series used in other studies. Finally, the Polar Urals data of ECS2002, MBH1999 and the Tornetraesk data of MSH2005 have been omitted in favour of data from the same sites used by JBB1998 and ECS2002, respectively (i.e. taking the first used series in each case).

    I put “criteria” in quotes because “if you have two series, pick the earlier one” is not really a criterion; it is an ad-hoc rule. Why is it not a criterion? Because it doesn’t depend on the proxy itself; it depends on an unrelated fact applied to two proxies. Another example of an ad-hoc rule would be “if you have two series, one by Mann and one by McIntyre, pick the one by Mann.” As you can see, neither “criterion” has anything to do with the proxies themselves.

    In any case, the proxy that doesn’t fit their criteria is Methuselah Walk, which only extends to 1979. I suppose they might have meant that the rule is that the proxies extend to the start of 1980, in which case it would fit their criteria. It has other problems, however, such as being a bristlecone series from a lower stand border, which should disqualify it, but it’s used despite all of that. Coincidence? You can decide.

    The other problem is more serious, and it comes when we go to “normalize” the various proxies. This is done by subtracting the mean, and then dividing by the standard deviation. The problem is that these series are autocorrelated, and we must take that into account when we are normalizing them. Autocorrelation increases the standard deviation.

    The problem is that in three of the series, the autocorrelation is extremely high (> 0.995). The usual way to adjust for autocorrelation is to calculate an effective number of data points “Ne”, which is smaller than “N”, the actual number of data points. If the autocorrelation is too high, however, Ne goes to zero, and no standard deviation can be calculated.
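
    To make that concrete, here is a minimal sketch (in Python) of the adjustment just described, assuming the common first-order formula Ne = N(1 - r)/(1 + r), where r is the lag-1 autocorrelation; the random-walk series at the end is purely illustrative and is not one of the Juckes proxies.

        import numpy as np

        def lag1_autocorr(x):
            # sample lag-1 autocorrelation of a 1-D series
            x = np.asarray(x, dtype=float) - np.mean(x)
            return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

        def adjusted_std(x):
            # standard deviation using the effective number of data points
            # Ne = N * (1 - r) / (1 + r); returns NaN when Ne is too small
            x = np.asarray(x, dtype=float)
            n = len(x)
            r = lag1_autocorr(x)
            ne = n * (1.0 - r) / (1.0 + r)
            if ne <= 1.0:
                return float("nan")
            return np.sqrt(np.sum((x - np.mean(x)) ** 2) / (ne - 1.0))

        # illustrative only: a random walk has lag-1 autocorrelation near 1
        rng = np.random.default_rng(0)
        walk = np.cumsum(rng.standard_normal(981))
        print(lag1_autocorr(walk), adjusted_std(walk))

    As r approaches 1, Ne collapses and the adjusted standard deviation becomes meaningless, which is the situation described above.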

    The three series which have this problem are the Arabian Sea globigerina proxy, the China Combined proxy, and the GRIP borehole temperature proxy.

    The problem with the China and GRIP proxies is that they are not at annual resolution. In fact, over the 981 years of the record, the GRIP borehole proxy only has 39 data points … say what? The China proxy has only 176 data points. I had hoped to be able to get an adjusted standard deviation from the reduced datasets (after removing duplicates), but the autocorrelation was still too high.

    The authors note that the two series which have the greatest impact on the reconstruction are the GRIP and the Arabian Sea proxies. I suspect this may be because they have not adjusted for autocorrelation.

    So, I’m out of ideas here … can’t get an adjusted standard deviation, so I can’t normalize the data, so I can’t see what they’ve done … I’ll continue with their erroneous method, using the unadjusted standard deviation, and see what else I might find.

  77. Willis Eschenbach
    Posted Nov 23, 2006 at 4:51 AM | Permalink

    Well, further news from the Juckes Union reconstruction. Being unable to do things the right way (using adjusted standard deviation) because of the high autocorrelation of the proxies, I did it their way and normalized the proxies using the i.i.d. standard deviation. Meaningless, I know, but what can I do?
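
    For anyone replicating this, the “their way” normalization is just an ordinary z-score, and a composite is then a simple mean of the normalized proxies. A minimal sketch (the proxy values below are placeholders, not the actual archived series):

        import numpy as np

        def iid_normalize(x):
            # "their way": subtract the mean and divide by the ordinary
            # (i.i.d.) standard deviation, ignoring autocorrelation
            x = np.asarray(x, dtype=float)
            return (x - np.mean(x)) / np.std(x, ddof=1)

        # placeholder proxies; in practice each is a 981-year annual series
        proxies = {"proxy_a": np.random.randn(981),
                   "proxy_b": np.random.randn(981)}
        normalized = {name: iid_normalize(vals) for name, vals in proxies.items()}

        # a composite is then just the mean of the normalized members
        composite = np.mean(list(normalized.values()), axis=0)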

    In any case, I thought I’d take a look at the components that make up the reconstruction, to try to see if any of them made sense. I split the proxies into three groups: tree rings, ice cores, and other (stalactites, globigerina, Chesapeake Mg/Ca, Chinese composite). Here are the results:

    Now, consider the three groups.

    The ice cores say it was much warmer around the year 1000, cooled to 1475, warmed to 1675, cooled to 1800, warmed to 1940, and cooled to 1980.

    The tree rings say temperature was steady to about 1575, cooled to 1625, warmed to 1775, cooled to 1800, warmed to 1940, and cooled to 1980.

    The “other” group says it cooled to 1400, dropped precipitously over about 50 years, and has warmed steadily since then.

    What can we make of this?

    1. Tree rings and ice cores basically agree from 1800 to 1980. Before that, on the other hand, they disagree completely in both the size and timing of changes. This points out how unbelievably foolish it is to use a ~one century calibration period on a ~thousand year data set.

    2. The hockey stick shape is due to the “other” proxies, which increase almost linearly from 1500 to 1980.

    3. The Ice cores show a warmer MWP. The tree rings show a slightly cooler MWP. The “other” proxies show a MWP that is way, way, way cooler than the present.

    4. I don’t trust any of them.

    5. Anybody who thinks we can take the average of those three lines and get a reasonable sense of the “Climate of the Past” needs a serious dose of intravenous science to cool their fevered brows ….

    And finally, I have to come back to my oft-asked question … how can these guys get away with this?

    How can anyone look at those three records shown above and seriously claim that the average of them represents past temperature? It makes absolutely no sense at all to me.

    w.

  78. Dave Dardinger
    Posted Nov 23, 2006 at 7:33 AM | Permalink

    re: #77

    Well, Willis, if these were randomly selected proxies I think it would be fair to say that it was relatively warm in the MWP, cool in the LIA, and warm now, probably more so in the MWP than at present. The problem is that the proxies are/were originally selected to show warming in the instrumental period, and consequently everything after 1850-1900 should be ignored as far as proving anything goes. So we’re basically left with warm, cool, warming….

  79. Dave Dardinger
    Posted Nov 23, 2006 at 11:40 AM | Permalink

    In the Juckes Omnibus thread Dr. Juckes said:

    If you accept that all proxies have problems, why are you singling out the bristlecone pines?

    in response to the request to show that the other proxies weren’t window-dressing. I think he’s confused about what “window-dressing” is. Something that is window dressing is something that is there just for looks and not necessarily for sale as it were. So in this case the meaning is that the other proxies didn’t contribute to the reconstruction appreciably, not that there was anything wrong with them as proxies. Indeed it might be that they’re better temperature proxies than the bristlecone pines.

  80. Willis Eschenbach
    Posted Nov 23, 2006 at 7:20 PM | Permalink

    To illustrate more clearly the problem with the use of a variety of proxies with a very short calibration period, I have extended the research described above. I took the three groups of proxies (tree rings, ice cores, and “other”), and variance matched them to the 1850-1980 HadCRUT3 annual Northern Hemisphere temperature series. Here are the results:

    Now, anybody could be forgiven for thinking “hey, we have some really good proxies here”. I mean, all of them are very close to the instrumental record. I haven’t calculated the statistics, but by eyeball, it’s clear that these three groups are excellent proxies for the calibration period.

    But when we expand the scale, and look at the full millennial record, things don’t look so good … in fact, they look downright ugly:

    As you can see, the method is exquisitely sensitive to the exact fit of the proxies to the temperature in the most recent period. The three groups of proxies vary widely in the past. From these proxies, what can we say about the temperature in the year 1000, or 1500?

    We can say nothing. They are worthless without some method to separate the wheat from the chaff.
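
    For clarity, “variance matching” as used above means rescaling each group composite so that, over the 1850-1980 calibration window, it has the same mean and standard deviation as the instrumental series. A minimal sketch of that reading (the array names and data are placeholders, not the actual series):

        import numpy as np

        def variance_match(proxy, target, calib):
            # rescale `proxy` so that over the calibration slice `calib`
            # it has the same mean and standard deviation as `target`
            p, t = proxy[calib], target[calib]
            scaled = (proxy - np.mean(p)) / np.std(p, ddof=1)
            return scaled * np.std(t, ddof=1) + np.mean(t)

        years = np.arange(1000, 1981)               # placeholder time axis
        calib = (years >= 1850) & (years <= 1980)   # 1850-1980 calibration window
        # `group_composite` and `hadcrut3_nh` stand in for the real series
        group_composite = np.random.randn(years.size)
        hadcrut3_nh = np.random.randn(years.size)
        matched = variance_match(group_composite, hadcrut3_nh, calib)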

    One of the fundamental assumptions about the multi-proxy approach to the reconstruction of past temperatures is that the averaging will get rid of the noise while preserving the signal. As Juckes et al. put it,

    The composite is intended not only to average out regional anomalies but also to average out errors which might be associated with particular proxies or sets of proxies. It is clear that the proxies are affected by factors other than temperature which are not fully understood. We are carrying out a uni-variate analysis which, by construction, treats all factors other than the one predicted variable as noise.

    While the assumption that averaging will improve things is very tempting, without data to back it up it remains just an assumption. However, as the graphs above show, there is more than noise in the differences between sets of proxies. The three groups are obviously measuring (when they measure anything at all) very different things. There is absolutely no a priori reason to believe that an average of these three sets will give a better answer than one of the individual sets. Before we believe the claim that averaging is the way to treat all of these proxies, we should have at least a bit of evidence to support the claim. Juckes et al. have not provided any such evidence; they have just stated the claim without support.

    Are there good proxies for the temperature of the past millennium? I suspect there are. However, it is not at all clear how to distinguish those that are good proxies from those that are bad, or useless. The three sets shown above are indistinguishable on the basis of the calibration period. If one set of these Union reconstruction proxies is right, the other sets are very, very wrong … but which set, if any, is right?

    w.

  81. Willis Eschenbach
    Posted Nov 23, 2006 at 9:05 PM | Permalink

    Questions about China

    Well, another mystery. The Juckes et al. paper says their “China: composite (degC)” is from “General characteristics of temperature variation in China during the last two millennia, Yang et al. Geophys. Res. Lett., 29(9), 1324, doi:10.1029/2001GL014485, 2002.” Information about that paper is available here, and the data is available from the WDC Paleo Archive.

    The mystery is that the data archived by Juckes is reduced in accuracy from the data at the WDC, because it is the WDC data divided by 100 and rounded to 3 decimals. For example, in the year 1970 AD, the WDC data is 0.78677, and the Juckes data is 0.008 … why on earth would you want to reduce the accuracy of your dataset? The original data has six significant digits … the Union Reconstruction version has one!

    The other difficulty I have with this series is that it is the only composite series in the record. None of the other composite records were used (Moberg et al., Hegerl et al., MBH9X, etc.). Each of the other proxies in the Union reconstruction is a single proxy, not a composite. Dr. Juckes has stated (but not followed) his rules for the Union composite proxy selection … but he has failed to include a rule about including or omitting other composites from his “Union” composite.

    This also makes meaningless his claim that he has archived the individual proxies used in the Union reconstruction. He has archived a version of the China composite (with greatly reduced accuracy), but that composite is made up of 20 separate proxies which he has not archived.

    Nor has he followed his own rules on proxy selection, as more than half (12) of these unarchived proxies have no data from 1000-1100. Thus, the claim that only proxies extending from 1000-1979 were used in the Union reconstruction is absolutely false. Of the 37 individual proxies used in the Union reconstruction, about a third do not meet his proxy selection criteria.

    In addition, while Dr. Yang has kindly provided Steve McIntyre with some of the proxy data from the China composite, much is still missing. There is no archive of the “documentary data” used for the E. China series, and there is no archive of 12 of the tree ring series. Also, there are discrepancies between the Dunde Ice Core used by Yang and the version archived (after several requests from Steve M.) by Thompson. Finally, the Guliya data is not archived.

    In short:

    1) There is no a priori proxy selection rule which allows the inclusion of the China composite. It was included for unknown reasons, without justification.

    2) A third of the 37 proxies in the Union composite do not meet the a priori proxy selection rules.

    3) Juckes has not archived over half (20 of the 37) of the proxies used, and there is no known archive for at least 15 of these proxies.

    4) It’s dangerous to flip over rocks in the Juckes paper … no telling what will crawl out.

    w.

  82. jae
    Posted Nov 23, 2006 at 10:29 PM | Permalink

    Willis: great posts!

  83. jae
    Posted Nov 23, 2006 at 10:40 PM | Permalink

    It is getting pretty clear that there are no real selection criteriA, just one selection criterioN: “How the hell can I get a hockey stick?”

  84. Steve McIntyre
    Posted Nov 23, 2006 at 11:10 PM | Permalink

    Willis, I don’t understand your post here. The Yang composite doesn’t have 20 proxies. It has 9 proxies: 1) Guliya; 2) Dunde – in both cases, Yang uses a very smooth version inconsistent with the most recent versions archived in 2004 by Thompson; 3) an unarchived Dulan tree ring series; 4) an unarchived S Tibet tree ring series; 5) an unarchived E CHIN documentary series; 6) an unarchived Great Ghost Lake, Taiwan sediment series; 7) an unarchived Jiaming sediment series; 8) an unarchived Jinchuan sediment series; 9) an unarchived Japan tree ring series (dC13 as I recall). If you take out the Thompson Dunde and Guliya series – which should be done if these are to be cited as “independent evidence” – then the Yang series has a very different look. BTW if anyone has access to the publications of the Taiwan sediment series – Science in China D, I think – I’d appreciate a pdf. U of Toronto doesn’t carry it. Also all 9 of the proxies have values in 1100. However only 5 of the proxies have values in 1980.

    As to archiving, Briffa has calculated a completely different chronology from Yamal than the one archived by Hantemirov and Shiyatov. Juckes has cited Hantemirov and Shiyatov as an authority, but didn’t use the version that they archived. Instead he used Briffa’s completely different version without any disclosure. Briffa has refused to archive measurement data for Yamal so that observers can check for themselves. Likewise Briffa has refused to archive measurement data for the Tornetrask update and Taimyr. Given these flat-out refusals, I’m not sure what Juckes’ criteria really are.

  85. MarkR
    Posted Nov 23, 2006 at 11:36 PM | Permalink

    Re#80 Thank you for the graphic which clarifies to me what the core of the problem is with all the Hockey Team papers up to and including Juckes.

    They make the assumption that proxies that correlate well with recent instrumental data are good proxies for temperature. Your graph clearly shows that is not the case, as the proxies differ widely in the non instrumental period, so as you say, they can’t all be correct. I suppose the deviations of the proxies from each other in the non instrumental period must have been measured somewhere along the line? In fact in Juckes’ Table 1 only 5 of the 34 proxy series shown have an R higher than 0.5 when compared to the Northern Hemisphere temperature record, so even the claim that the Proxies correlate well with recent temperature is suspect.

    From Juckes Paper

    MM2005c also suggest that the MBH1998 “North American proxy PC1” is a statistical outlier as far as its correlation to Northern Hemispheric temperature, Tnh, is concerned. Table 1 shows, however, that other tree ring series and other proxies have higher anomaly correlations with Tnh. Pages 1019 (foot), 1020 (top)

    What does this mean? Does it mean that Mann’s PC1 (R = 0.49) is OK because other tree ring proxies have a higher R?

    Where is the logic in this?

    Lastly, a more general question. If a sample is taken, it can be measured and used to evaluate the characteristics of the population as a whole. The larger the sample, the more confidence can be placed in its characteristics being similar to the global population.

    I would have thought that the sample data collected to form the basis of these proxy data sets is so small compared with the overall population of each group that the amount of confidence that can be placed on them accurately representing the population as a whole is very low. In addition, aside from the issue of cherry picking data, as Bender has pointed out (survivor bias), it is virtually impossible for any sampling of trees surviving today to be in any meaningful way representative of the tree population in the time period being measured.

  86. Willis Eschenbach
    Posted Nov 24, 2006 at 2:22 AM | Permalink

    Steve, thanks for your comment above. You say:

    Willis, I don’t understand your post here. The Yang composite doesn’t have 20 proxies. It has 9 proxies: 1) Guliya; 2) Dunde – in both cases, Yang uses a very smooth version inconsistent with the most recent versions archived in 2004 by Thompson; 3) an unarchived Dulan tree ring series; 4) an unarchived S Tibet tree ring series; 5) an unarchived E CHIN documentary series; 6) an unarchived Great Ghost Lake, Taiwan sediment series; 7) an unarchived Jiaming sediment series; 8) an unarchived Jinchuan sediment series; 9) an unarchived Japan tree ring series (dC13 as I recall).

    However, according to Yang’s paper, the “unarchived S Tibet tree ring series” is not actually a single series, but an average of 12 individual tree ring proxies in Southern Tibet. According to Yang,

    Averaging 12 temperature-sensitive tree-ring
    series from various parts of Tibet (r = 0.52 to 0.79, p

    Thus, rather than nine individual series, Yang contains 8 individual series, plus an average of 12 other individual series. Of the total of 20 individual proxies, 12 of them (the group from Tibet) don’t have data from 600 AD to 1100AD.

    w.

  87. Willis Eschenbach
    Posted Nov 24, 2006 at 2:24 AM | Permalink

    Nuts, munched by the dang < symbol again. The previous post should have finished:

    Averaging 12 temperature-sensitive tree-ring
    series from various parts of Tibet (r = 0.52 to 0.79, p < 0.01), Wu
    and Lin [1981] established a reconstruction of yearly average
    temperatures anomalies. However, there is a data gap from the
    7th to 11th century in this series (Figure 2).

    Thus, rather than nine individual series, Yang contains 8 individual series, plus an average of 12 other individual series. Of the total of 20 individual proxies, 12 of them (the group from Tibet) don’t have data from 600 AD to 1100AD.

    w.

  88. Willis Eschenbach
    Posted Nov 26, 2006 at 5:18 AM | Permalink

    Steve M., in your post above you say that 4 of the proxies in the Yang composite have ending dates before 1980. Which four are they, and what date do they end? I’d look it up myself, but your link to the Yang data here is broken …

    I’m compiling a list of the issues raised here, for eventual posting on the CoP discussion site. I’ll post it here first and ask for comments …

    w.

  89. Steve McIntyre
    Posted Nov 26, 2006 at 7:38 AM | Permalink

    It’s annoying that the data directory here is not searchable. I’ll move the data files here somewhere else. The link is http://data.climateaudit.org/data/Yangbao.data.txt .

  90. Willis Eschenbach
    Posted Nov 26, 2006 at 4:31 PM | Permalink

    Thanks, Steve. Here’s my list of the proxies which have problems:

    Series             Type   Series Start   Series End   Archived
    -----------------  -----  -------------  -----------  -----------------------------
    Guliya             IC     OK             OK           Differs from Archived Version
    Dunde              IC     OK             OK           Differs from Archived Version
    Dulan              TR     OK             OK           Unarchived
    S Tibet 1          TR     1100           1950         Unarchived
    S Tibet 2          TR     1100           1950         Unarchived
    S Tibet 3          TR     1100           1950         Unarchived
    S Tibet 4          TR     1100           1950         Unarchived
    S Tibet 5          TR     1100           1950         Unarchived
    S Tibet 6          TR     1100           1950         Unarchived
    S Tibet 7          TR     1100           1950         Unarchived
    S Tibet 8          TR     1100           1950         Unarchived
    S Tibet 9          TR     1100           1950         Unarchived
    S Tibet 10         TR     1100           1950         Unarchived
    S Tibet 11         TR     1100           1950         Unarchived
    S Tibet 12         TR     1100           1950         Unarchived
    East China         DOC    OK             OK           Unarchived
    Great Ghost Lake   SED    OK             OK           Unarchived
    Jiaming            SED    OK             1960         Unarchived
    Jinchuan           SED    OK             1950         Unarchived
    Japan              TR     OK             1950         Unarchived
    Tornetrask         TR     OK             OK           Differs from Archived Version
    Yamal              TR     OK             OK           Differs from Archived Version
    Tornetrask         TR     OK             OK           Differs from Archived Version
    Taimyr             TR     OK             OK           Differs from Archived Version
    Methuselah Walk    TR     OK             1979         OK

    IC: ice core; TR: tree ring; DOC: documentary; SED: sediment.

    I invite anyone who knows of problems with any of the proxy series (timing, location, duplication, start/end dates, etc.) to add to this list. Also, if anything in this list is incorrect, please post a comment to that effect.

    Finally, I am looking to collect concise statements of any other problems with the Juckes et al. “Millennial temperature reconstruction intercomparison and evaluation” study. These include theoretical, practical, and ethical problems. When the collection is complete, I will post them on the “Climate of the Past” online review web site. The discussion there is open until 21 December 2006, so I propose 7 December 2006 as a closing date for these comments on this thread.

    Likely this should be a new thread.

    My best to everyone,

    w.

    PS – Is there a way to post a table, such as the one clumsily implemented above, to this site?

  91. Willis Eschenbach
    Posted Nov 27, 2006 at 1:54 AM | Permalink

    More oddities … I got to thinking about the inability of the AR(1) Monte Carlo simulation to do better than the Union reconstruction at emulating the NH instrumental data … and the comment by several people including Steve M. that all this proves is that the Union reconstruction is not an AR(1) process. So I decided to take a look at how the autocorrelation varied over the Union reconstruction. Since I was using 1850-1980 as the calibration period, I looked at the lag-1 autocorrelation of all the possible 131-year periods in the Union reconstruction. Here they are:

    Not quite sure what to make of this. Lag-1 autocorrelation in the calibration period 1850-1980 is 0.85. Maximum 131-year lag-1 autocorrelation is 0.87, minimum is 0.35, average is 0.66. Lag-1 autocorrelation for the entire 981-year dataset is 0.84.

    I see no easy way to generate “red noise” with anything like that correlation structure. Again, I have to ask, why not use Quenouille’s formula for the significance of the correlation of two trends, as discussed here? I still haven’t gotten an answer as to why this would not be appropriate.
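
    For anyone who wants to reproduce the windowed calculation, here is a minimal sketch of the rolling lag-1 autocorrelation, together with one commonly quoted lag-1 approximation of Quenouille’s effective sample size for the correlation of two autocorrelated series, Neff = N(1 - r1*r2)/(1 + r1*r2). Whether that approximation is adequate here is exactly the open question; the series below are placeholders, not the Union reconstruction or the instrumental data.

        import numpy as np

        def lag1_autocorr(x):
            x = np.asarray(x, dtype=float) - np.mean(x)
            return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

        def rolling_lag1(x, window=131):
            # lag-1 autocorrelation of every `window`-year slice of x
            x = np.asarray(x, dtype=float)
            return np.array([lag1_autocorr(x[i:i + window])
                             for i in range(len(x) - window + 1)])

        def quenouille_neff(x, y):
            # lag-1 approximation of the effective sample size for the
            # correlation of two autocorrelated series:
            #   Neff = N * (1 - r1*r2) / (1 + r1*r2)
            r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
            return len(x) * (1.0 - r1 * r2) / (1.0 + r1 * r2)

        # placeholder series standing in for the reconstruction and instrumental record
        recon = np.cumsum(np.random.default_rng(1).standard_normal(981)) * 0.05
        instr = np.random.default_rng(2).standard_normal(131)

        ac = rolling_lag1(recon, window=131)
        print(ac.max(), ac.min(), ac.mean())
        print(quenouille_neff(recon[-131:], instr))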

    Best to you all,

    w.

  92. bender
    Posted Nov 27, 2006 at 7:05 AM | Permalink

    Re #91

    1. In all stochastic time-series analysis we must remember that the sample autocorrelation function is not the same as the population (ensemble-generating) autocorrelation function. Even if the population acf is stationary, a sample acf is expected to dip up and down randomly as in this graph of ac(1). Of course, no one is saying the population acf is stationary either. It may well change as the climate system goes through its machinations. I believe I’ve made this point before about the danger of making inferences about acfs based on sample acfs from short* series. (*Where “shortness” is relative to the shape of the acf.)

    2. I do not have the technical competence required to determine if Quenouille’s method as you propose is statistically valid. But intuitively it makes sense to me. I saw your question the first time around in another thread and thought: “I better not reply; I’m no expert”. But I agree; your suggestion may have merit. One must keep in mind that the sample ac coefficients used in Quenouille’s method are going to bounce around through time as in your graphic. It’s anybody’s guess as to whether this variation is meaningful. But it means that any analysis is context-dependent: you will not get the same correlations if you work from the 18th, 19th, 20th c., etc. Climatologists must be wary of the contextual nature of all time-series analysis. Framing is everything.
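
    To put numbers on the first point above, one can simulate a single stationary AR(1) process with a fixed coefficient and watch the sample lag-1 autocorrelation wander across successive 131-year windows; the coefficient 0.85 below is just an illustrative choice, not an estimate from the data.

        import numpy as np

        rng = np.random.default_rng(42)
        phi, n, window = 0.85, 981, 131   # illustrative AR(1) coefficient and lengths

        # simulate one realisation of a stationary AR(1) process
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()

        def lag1_autocorr(v):
            v = v - v.mean()
            return np.sum(v[:-1] * v[1:]) / np.sum(v * v)

        # sample ac(1) in each 131-year window: it wanders even though
        # the population autocorrelation is constant at 0.85
        ac = [lag1_autocorr(x[i:i + window]) for i in range(n - window + 1)]
        print(min(ac), max(ac))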

  93. MarkR
    Posted Dec 19, 2006 at 10:32 PM | Permalink

    I think it’s good to know where people are coming from:

    Looking at how the insurance market works for the science journal Nature, Myles Allen, an Oxford University physicist, thinks that problem is now largely solved. All you have to do, he says, is work out “a ‘mean likelihood-weighted liability’ by averaging over all possibilities consistent with currently available information”. Unpacked, it means that if past greenhouse gas emissions have increased flood risk tenfold, 90 per cent of the damage caused by a flood can be attributed to past emissions. Because carbon dioxide mixes itself in the global commons of the atmosphere, “an equitable settlement would apportion liability according to emissions”, argues Allen.
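
    The arithmetic behind “tenfold … 90 per cent” appears to be the standard fraction-of-attributable-risk calculation, FAR = 1 - 1/RR, where RR is the risk ratio; a one-line check:

        # fraction of attributable risk for a tenfold increase in risk (RR = 10)
        rr = 10.0
        far = 1.0 - 1.0 / rr
        print(far)  # 0.9, i.e. 90 per cent of the damage attributed to past emissions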

    Link