Something New in the Loehle Network

In the construction of his network, Loehle has done something very simple and very sensible that, amazingly, has never been done in a complete network in any previous temperature reconstruction. It's something that neither bender nor JEG noticed; in fact, even Loehle himself didn't notice. Try to guess before you look at the answer.

To my knowledge, Loehle’s network is the first to be constructed from series in which every proxy input to the network has already been calibrated to temperature in a peer-reviewed article. This is pretty amazing when you think about it. It’s actually breathtaking. Every prior network has included some, if not a majority, of uncalibrated proxies.

Here’s the distinction: proxy results from an original author sometimes come as temperature reconstructions, but more often in native units, such as dO18, tree ring chronology units, pct G. Bulloides, etc. Ocean sediments make up a proportionally greater part of the Loehle network than of predecessor networks; here Stott and others calculate estimated SST in deg C, often from Mg-Ca ratios and sometimes by other methods, and it is these SST estimates that Loehle uses. By contrast, a Graybill tree ring chronology (for example) comes in dimensionless units, and some ice core and coral studies are denominated by their original authors only in dO18 units. The distinction is not necessarily between ocean sediments and tree rings. Sometimes tree rings are used to make a temperature reconstruction (e.g. Tornetrask), but many important tree ring chronologies (and derivatives such as Mann’s PC1) have never been reduced to a temperature reconstruction. This can be important: there’s a reason why you’ve never seen a temperature reconstruction from Mann’s PC1 by itself – it would be too cold throughout its history, a point made long ago in MM2005 (EE), but insufficiently noted. In some networks, the uncalibrated proxies were not necessarily even temperature proxies. MBH98 included instrumental precipitation series as “temperature” proxies, presuming that there would be some covariance somewhere with something.

I’ve done a mental survey of all the canonical predecessor networks (MBH, Jones et al 1998, Crowley and Lowery 2000, Briffa 2000, Briffa et al 2001, Esper et al 2002, Mann and Jones 2003, Moberg et al 2005, Rutherford et al 2005, Hegerl et al 2006, D’Arrigo et al 2006, Juckes et al 2007) and can assure you that each one of them had a number of series that were in native currency, so to speak, i.e. in dO18 units, tree ring chronology units, etc. The only predecessor network that even had a high proportion of calibrated reconstructions was the low-frequency portion of Moberg (which heavily overlaps the Loehle network).

This selection criterion had an interesting knock-on effect for Loehle’s methodology, which, as a result, struck JEG as far too simplistic. In a CPS approach, popular with Team authors, the series are standardized to unit standard deviation and averaged, and the composite is then re-scaled so that its standard deviation matches the standard deviation of the instrumental target in the calibration period. (This latter step can be expressed as a constrained regression.) Because the series are already in deg C, Loehle did not carry out either of the two re-scaling steps. Although JEG went ballistic about this, is there anything wrong with what Loehle did, given his network?
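
For concreteness, here is a minimal sketch of the two re-scaling steps (illustrative Python only, with a toy interface of my own devising; it is not anyone’s published code):

    import numpy as np

    def cps(proxies, target, calib):
        """Composite-plus-scale sketch.
        proxies : (n_series, n_years) array, aligned in time
        target  : (n_years,) instrumental series
        calib   : boolean mask marking the calibration years
        """
        # Step 1: put each series in unit-standard-deviation form
        z = (proxies - proxies.mean(axis=1, keepdims=True)) \
            / proxies.std(axis=1, keepdims=True)
        composite = z.mean(axis=0)
        # Step 2: re-scale the composite so its calibration-period mean and
        # standard deviation match those of the instrumental target
        c, t = composite[calib], target[calib]
        return (composite - c.mean()) / c.std() * t.std() + t.mean()

    # Loehle's series are already in deg C, so both steps collapse to:
    # reconstruction = proxies.mean(axis=0)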

In practical terms, I doubt that the difference between the two methods amounts to a hill of beans. My guess is that the “topography” of Loehle’s reconstruction under CPS will be virtually identical to the series already calculated. Simply as a matter of prudence, I see no harm in doing a CPS calculation on Loehle’s network – the calculation is trivial – and I think it should be done, if only to show the lack of difference, because such CPS calculations are lingua franca in the trade. But I doubt that it will make a material difference to the result.

JEG vehemently criticized Loehle on this point and, in a way, it’s more interesting to try to understand exactly what underpins JEG’s vehemence. JEG:

This is a very different situation from usual multiproxy studies which use sophisticated methods to ensure that a proxy’s weight in the final result reflects its ability to record some variance in the temperature field (whether local or not). While there is merit in exploring a bare bones approach (arithmetic mean), it then becomes indispensable to demonstrate that each proxy is : a) a temperature proxy (not a salinity one…). b) a good one at that.

… Once again, such care would not be required when a climate-field-reconstruction or “composite-plus-scale” approach is employed, as the proxy’s ability to record temperature is implicit in the calibration therein. Since the author effectively treats the proxies as perfect thermometers (which is conceptually acceptable as long as it is explicitly justified), the lack of this discussion is unforgivable, and in my book, constitutes grounds for rejection any day of the week.

Or elsewhere JEG here :

either you care about LOCAL temperatures and you use something known as a Composite Plus Scale approach. Or you only care about the ability of a proxy series to record *some* climate information via teleconnections : that is the heart of climate-field reconstruction techniques, like MBH98 or the more sophisticated RegEM-based versions. … In either case you explicitly account for how proxies describe variance in the mean temperature ; either by regressing them against the mean, or against local temperature (and then average them), or against a subset of principal components that describe the large-scale features of the temperature field, then use that to reconstruct this linear subspace of the T field and then average it globally (the Mannian approach).

I think that JEG has got things backward here and that Loehle is actually on stronger ground than the Team on this particular issue.

The calibration of individual proxies to temperature and the demonstration of their validity is a critical issue in this area. Indeed, one of the persistent themes of this blog is precisely this: the failure by the Team to demonstrate the validity of key proxies like Graybill bristlecones or Arabian Sea G Bulloides. Because the Team has relied to a considerable extent on proxies that have not been individually calibrated in peer reviewed literature, the demonstration of the validity of any uncalibrated proxies should, in my opinion, be an item in the presentation of a multiproxy reconstruction; I think that it’s reasonable not to require this demonstration if the calibration has already been done in peer reviewed literature.

Emile-Geay argues that this demonstration of validity is unnecessary in Team articles because the “proxy’s ability to record temperature is implicit in the calibration”. This is simply untrue and completely neglects the pitfalls of spurious correlation and spurious covariance, pitfalls that can easily become worse in a data mining operation. I doubt that many MBH readers realized the degree to which they were relying on calibrations that had never been individually peer reviewed, and the degree to which the reconstruction relied on calibrations generated in a home made algorithm remote from scrutiny by peer reviewers. This problem extends to Mann et al 2007, where once again the same uncalibrated series are being calibrated in an algorithm whose workings are poorly understood, with the individual calibrations being remote from scrutiny by peer reviewers.
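
The data-mining pitfall is easy to exhibit with synthetic data (a hedged sketch; every parameter below is arbitrary): screen enough red-noise “proxies” against a trending calibration target and a few will “calibrate” impressively while containing no temperature information whatsoever.

    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_proxies, phi = 100, 50, 0.9

    # "Instrumental" target: a warming trend plus weather noise
    target = np.linspace(0.0, 1.0, n_years) + 0.2 * rng.standard_normal(n_years)

    # Red-noise pseudo-proxies carrying no temperature signal at all
    proxies = np.zeros((n_proxies, n_years))
    for t in range(1, n_years):
        proxies[:, t] = phi * proxies[:, t - 1] + rng.standard_normal(n_proxies)

    r = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
    print(f"best |r| among {n_proxies} pure-noise series: {np.abs(r).max():.2f}")
    # Screening on calibration r then retains the luckiest noise; the
    # resulting "calibration" reflects selection, not temperature.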

In contrast to the Team’s reliance on home-made calibrations remote from scrutiny by peer reviewers, Loehle relied exclusively on prior calibrations done in the sunlight of peer reviewed articles where the calibration could be individually scrutinized. JEG views this as a defect. I disagree – it’s an advantage.

193 Comments

  1. MattN
    Posted Nov 20, 2007 at 11:59 AM | Permalink

    It’s a major advantage. All the data has already been reviewed, critiqued, and approved for general consumption. I suspect hard statistical analysis will reveal how robust the reconstruction is.

    I’d like to see a South American and/or Antarctic proxy, just so it covers even more of the globe and is, therefore, more global.

  2. Al
    Posted Nov 20, 2007 at 12:09 PM | Permalink

    Another advantage would come out in the propagation of errors. Everything is already in C, so accumulating the errors depends solely on the original work providing an error for that individual series.

    (And a weighted average would also not be extremely difficult – instead of PCA etc., you can do a straight weighting based on each series’ individual error.)
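
    A minimal sketch of that weighting (inverse-variance, with made-up per-series errors; the numbers are purely hypothetical):

        import numpy as np

        # Hypothetical: three reconstructions, three years, all in deg C,
        # each reporting its own 1-sigma calibration error
        recon = np.array([[0.1, 0.3, -0.2],
                          [0.2, 0.4, -0.1],
                          [0.0, 0.2, -0.3]])
        sigma = np.array([0.2, 0.5, 0.3])

        w = 1.0 / sigma**2                               # inverse-variance weights
        mean = (w[:, None] * recon).sum(axis=0) / w.sum()
        err = np.sqrt(1.0 / w.sum())                     # error of the weighted mean
        print(mean, err)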

  3. bender
    Posted Nov 20, 2007 at 12:17 PM | Permalink

    Steve M, your knowledge of the proxies is deep. I knew you could dig out something that we would miss! At least this substantiates my remark that “the work is novel”. I did not realize how novel.

  4. jae
    Posted Nov 20, 2007 at 12:40 PM | Permalink

    WOW! This was probably one of Craig’s selection criteria?

  5. John A
    Posted Nov 20, 2007 at 12:54 PM | Permalink

    MattN:

    I’d like to see a South American and/or Antarctic proxy, just so it covers even more of the globe and is, therefore, more global.

    Yes, but what would that mean? Yes, more and more proxies could be added, but the Earth’s climate is not linear and cannot meaningfully be reduced to a single index.

    What we can say is that some non-tree, temperature-calibrated proxies produce similar variations over time, which we interpret as meaning that many different places experienced relative periods of warmth and cooling in approximately the same time periods, the ones we label “Little Ice Age”, “Medieval Warm Period”, “Dark Ages Cold Period” and “Roman Warm Period”.

    Even the term “global” could be misleading. The Northern and Southern Hemispheres have markedly different climate histories even in the recent past, with the NH showing warming in the satellite record and the SH showing no significant warming or cooling.

  6. Bill F
    Posted Nov 20, 2007 at 12:57 PM | Permalink

    So this is why Craig keeps telling JEG that the calibration has already been done and refers to the cited sources for his proxies? I think JEG took those answers as another way of saying RTFR, but Craig really did mean that the work had already been done and he was using the proxy temperature values as they were published, not through some calibration of the native units. Interesting indeed…

  7. richardT
    Posted Nov 20, 2007 at 1:26 PM | Permalink

    Perhaps Loehle would have had a better chance of publishing this paper in a decent journal if this had been the claim to novelty, rather than the “look, no tree rings” line.

  8. Paul29
    Posted Nov 20, 2007 at 1:41 PM | Permalink

    How many other datasets meet these criteria? Did Craig use a small set of what’s available? Why not throw in everything that’s been peer-reviewed and temperature-calibrated? Then the selection criteria would be clean and obvious.

  9. Pat Keating
    Posted Nov 20, 2007 at 1:52 PM | Permalink

    7 richard T
    It might have cut Team anger by 50%, but they would still have had a problem with the MWP coming back to life….

  10. RomanM
    Posted Nov 20, 2007 at 1:56 PM | Permalink

    IMO, I would choose only those which have been calibrated to local temperatures and avoid the ones which use specious “global” calibration.

  11. Steve McIntyre
    Posted Nov 20, 2007 at 2:01 PM | Permalink

    Craig says that he looked through the archives for data sets. It takes a lot of time to wade through the literature and Craig has asked for suggestions. Last year I set up a thread on high-resolution ocean sediments. I think that I’ll convert this to a “Page” and invite people to make suggestions – but not for commentary.

  12. bender
    Posted Nov 20, 2007 at 2:11 PM | Permalink

    Actually, thinking it over, I think I had simply assumed that all the proxies had been calibrated. I can’t imagine claiming something to be a proxy for pre-instrumental data if it’s never been calibrated against instrumental data. If this is novel, that seems rather shameful.

  13. Posted Nov 20, 2007 at 2:11 PM | Permalink

    You know, I think he left out the stock market proxy.

    There has to be temperature data in there somewhere. Why not use it? After all, the numbers are probably accurate to better than 1 part in 1,000. All you need is the right adjustment tool to make it work.

    the proxy’s ability to record temperature is implicit in the calibration therein.

    See? JEG agrees with me.

    Maybe the stock market is too broad. Perhaps wheat prices. Or hog futures. Or cattle prices.

    the proxy’s ability to record temperature is implicit in the calibration therein.

    See? JEG agrees with me.

    I’m probably having too much fun. However, if Steve doesn’t mind, perhaps other proxy suggestions are in order since the proxy’s ability to record temperature is implicit in the calibration therein.

    Why even bother with thermometers? They are expensive and prone to breakage. Why not use yardsticks to measure temperature since the proxy’s ability to record temperature is implicit in the calibration therein.

    It’s not science, it’s Climate Science.

  14. Michael Smith
    Posted Nov 20, 2007 at 2:36 PM | Permalink

    Let me see if I understand this. The hockey stick consists of an instrument record spliced onto the end of a proxy record that includes series that have never been calibrated against any sort of instrument?

  15. David Ermer
    Posted Nov 20, 2007 at 2:42 PM | Permalink

    Let me see if I understand this. The hockey stick consists of an instrument record spliced onto the end of a proxy record that includes series that have never been calibrated against any sort of instrument?

    Is this just now sinking in? And although I’m glad Steve pointed it out in this thread, Loehle stated unequivocally more than once in the original thread that the proxies had been calibrated with local temperatures in the original papers.

  16. Sam Urbinto
    Posted Nov 20, 2007 at 2:43 PM | Permalink

    No wonder I was confused about what issue anyone had with combining all the proxy papers. Wasn’t it obvious that all the data Craig used was from peer-reviewed publications rather than new work? I didn’t even bother thinking about it. That it was all peer-reviewed papers and that the work had already been done was plainly stated at the get-go on this. I kept thinking “but these are from finalized peer-reviewed papers on temperature reconstructions”. I look forward to seeing the CPS version!

    As far as the argument, there are two cases, I would think:
    1. All these peer-reviewed papers were done properly and need no further work. The information can be used as showing valid temperature reconstructions as is.
    2. One or more of these peer-reviewed papers wasn’t done properly and needs further processing. Those papers do not show valid temperature reconstructions as is.

  17. windansea
    Posted Nov 20, 2007 at 2:43 PM | Permalink

    Craig’s paper is getting some media attention

    The Trees were kidding 🙂

    http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/the_trees_were_kidding_the_world_was_warmer_just_a_few_centuries_ago/

  18. bender
    Posted Nov 20, 2007 at 2:49 PM | Permalink

    Re #17
    Bolt should show the two series overlaid, the way UC did. This is the other substantive reason for not divorcing discussion of Loehle (2007) from that of MBH9x.

  19. Jack Linard
    Posted Nov 20, 2007 at 2:53 PM | Permalink

    Loehle, L-O-E-H-L-E, Loehle
    La la la la Loehle

  20. Andy
    Posted Nov 20, 2007 at 3:13 PM | Permalink

    Brilliant in its simplicity.

    Seems like the exact opposite of, say, magic BCPs teleconnected to global temperatures.

  21. Duane Johnson
    Posted Nov 20, 2007 at 3:19 PM | Permalink

    Isn’t there an analogy in Craig’s approach with so-called meta-analyses that are commonly used in epidemiology and other fields, where several independent studies are combined? Perhaps some of the same pros and cons apply.

  22. Posted Nov 20, 2007 at 3:24 PM | Permalink

    As is often the case, a very simple, well-thought-out exercise, attacking something which should have been done years ago, produces outstanding results.
    Clearly Craig has appreciated the comments made here at CA and he is to be applauded for being prepared to accept the criticism with an open mind and address the issues raised. It is what I had always thought good science was about. Are you listening, Al?

  23. Posted Nov 20, 2007 at 3:26 PM | Permalink

    I’m sorry my fginers are ton wkring well tadoy!!

  24. Jean S
    Posted Nov 20, 2007 at 3:36 PM | Permalink

    Hmmm, this was the essence of my first two comments about the Loehle reconstruction. Rob already proposed testing the “CPS approach”, which I’ve been waiting for him to do. Seems like he’s busy with other things, so maybe Steve could perform it.

  25. Joe Black
    Posted Nov 20, 2007 at 3:39 PM | Permalink

    Maybe it’s time to overlay historical CO2, CH4, aerosols, albedo, solar outputs, etc. on top of Loehle?

  26. Posted Nov 20, 2007 at 3:56 PM | Permalink

    brilliant how many positive things this audit is bringing up in the Loehle paper!

    apart from minor points that “could have been added” or “perhaps should be checked”, nothing negative so far. but quite some positive stuff.

    either Craig is a genius and “the team” made up of idiots, or this auditing is slightly biased. your decision.

  27. Stirner
    Posted Nov 20, 2007 at 4:00 PM | Permalink

    It might also be interesting to overlay the Wahl and Ammann Scenario 6 (No Bristlecones) graph.
    http://www.climateaudit.org/?p=933

  28. John A
    Posted Nov 20, 2007 at 4:01 PM | Permalink

    I’ve no idea what you mean by “nothing negative” but the questions of robustness and the significance of the results have yet to be addressed.

  29. steven mosher
    Posted Nov 20, 2007 at 4:10 PM | Permalink

    RE 26. I don’t know, sod, you’ve raised several devastating points.

    However, Jamie Curtis has something to say to you:

  30. captdallas2
    Posted Nov 20, 2007 at 4:36 PM | Permalink

    Ref 28: The results are just as robust as the peer-reviewed and temperature-calibrated selected series. The unselected series need to justify their robustness.

  31. Steve Moore
    Posted Nov 20, 2007 at 4:46 PM | Permalink

    RE #29:

    What?
    Aristotle WASN’T Belgian?

    Mosher, you really know how to pull the rug out, don’t you?

  32. Larry
    Posted Nov 20, 2007 at 4:51 PM | Permalink

    Mosh, they need that Utube on unthreaded. There are a couple there who fit that perfectly.

  33. Larry
    Posted Nov 20, 2007 at 4:52 PM | Permalink

    I won’t say who he is, but she is Lucia. Except Lucia is way more polite and diplomatic. But probably just as exasperated.

  34. Kenneth Fritsch
    Posted Nov 20, 2007 at 4:53 PM | Permalink

    either Craig is a genius and “the team” made up of idiots, or this auditing is slightly biased. your decision.

    I have decided that your view of things might also be biased, but none of these real or suspected biases will stop the progress of the analysis that I see playing out here. I am awaiting the analysis of the underlying proxies to the Loehle reconstruction and the attempts to put some error bars on the overall results.

    Steven Mosher, that version of Jamie Curtis could criticize, ye insult, me all night long.

  35. steven mosher
    Posted Nov 20, 2007 at 4:59 PM | Permalink

    RE 32. that’s initially who I intended it for but…

  36. Philip_B
    Posted Nov 20, 2007 at 5:07 PM | Permalink

    There seems to be a mismatch between the historical record which shows major cooling around 540AD and the proxy reconstruction which seems to put the same event at least 100 years earlier. BTW, this isn’t a criticism of Craig’s excellent study. Just an observation of interest.

  37. Anthony Watts
    Posted Nov 20, 2007 at 5:14 PM | Permalink

    Calibration, what a concept. Now if we could just get a calibrated and bias free surface temperature record, we’d have a two-fer.

    See my latest survey of Klamath Falls, OR and imagine how to remove the bias from this site.

    http://wattsupwiththat.wordpress.com/2007/11/19/how-not-to-measure-temperature-part-34/

  38. Larry
    Posted Nov 20, 2007 at 5:18 PM | Permalink

    37, not to mention the fact that if they use an electronic instrument that’s not well grounded/shielded, the electrical interference can really screw it up.

  39. Anthony Watts
    Posted Nov 20, 2007 at 5:39 PM | Permalink

    RE38 when I was there doing the survey, the hum from the power transformer was deafening. If I’d had an EMF meter with me, I’ll bet it would have pegged. Even the chain link fence vibrated at 60 hertz.

  40. steven mosher
    Posted Nov 20, 2007 at 5:51 PM | Permalink

    RE 39. Do you have a schematic on the MMTS?

  41. Anthony Watts
    Posted Nov 20, 2007 at 5:54 PM | Permalink

    Let’s move the discussion to unthreaded – sorry Steve, didn’t mean to hijack the thread. Mosh, no schematic I can find other than this test plug:

  42. Posted Nov 20, 2007 at 5:56 PM | Permalink

    Hi all,

    I must break my vow of silence, because my words are being (as usual on CA) grossly distorted by Steve M. He won at this petty game for today : i needed a break from computing Monte Carlo verification statistics anyway. (seriously!)

    In contrast to the Team’s reliance on home-made calibrations remote from scrutiny by peer reviewers, Loehle relied exclusively on prior calibrations done in the sunlight of peer reviewed articles where the calibration could be individually scrutinized. JEG views this as a defect. I disagree – it’s an advantage.

    Please let me re-state the angle of my review. It is : does this paper meet basic criteria for publication in a climate journal, say any AGU or AMS publication ? Some experienced authors may want to correct me here, but my criteria have always been :
    1) is the approach novel ?
    2) is the methodology described with enough detail ?
    3) are all important choices justified ?
    4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?

    In the case of Loehle’s paper :
    1) yes
    2) no
    3) maybe
    4) absolutely not
    5) absolutely not

    They do not all carry equal weight in my book : 4 and 5, in particular, are crucial. No one expects a paper to hold the answers to all the mysteries of Creation. But these are pretty basic steps that ensure that we are talking science. Of course, there is some amount of subjectivity in applying such criteria ; hence the need for more than one reviewer. And yes, i would argue that previous Team published reconstructions meet these criteria for the most part.

    Enough distortion : I never said that Loehle’s method was unacceptably simple. I said that given its simplicity, it required additional assumptions, which needed to be explicitly stated. I actually think it would be a great idea to assemble T-proxies only, but as i said in my review : to be convincing, the paper would have to demonstrate that each proxy is :
    a) a temperature one
    b) a good one at that.

    In the case of the Holmgren record, this is dubious. There are other proxies in the list which, at first glance, were hydrological rather than thermal, precluding their use in this barebones approach. This would make any reviewer fidget in his/her seat upon reading and ask questions. Thus, i am genuinely happy that Steve is doing the homework neglected by Craig Loehle and that i have no time to do now (sorry, i’m not retired yet). It is useful work, and this is real progress. And i mean it.

    The next step is to assemble a table of proxy calibration residuals and see whether they are “good”, i.e. fit some pre-defined criteria of resolved T variance. This is easier said than done, because as Craig complained somewhere, this information is not readily available from all cited publications. That does not mean it is impossible, and let’s trust the shrewdness and perseverance of our friend McIntyre to get to it in little time. If worse comes to worst, additional assumptions would have to be injected, and their impact on the result checked for self-consistency.

    Am i being too picky ? I think not. My argument that “an error bar is better than none” was rejected on the grounds that the error bars were meaningless. Similarly, just because one writes that a proxy is calibrated against temperature, it does not mean the job is done. What if the proxy in question only describes 10% of temperature variance, with 35% due to hydrological changes ? Its inclusion would introduce large, non-temperature effects in the reconstruction, and this would have nefarious consequences on the mean. If enough (say 6 out of 18) have this problem, you could be in deep trouble. It requires at least a comprehensive discussion – absent from Loehle’s paper at this stage.

    Similarly, just because a proxy is uncalibrated against local T, it does not mean it does not reflect temperature variations in some other parts of the globe. This is something CAers consistently fail to understand, without ceasing to squeak about it. Take Palmyra coral d18O : it correlates better with remote (ENSO) than local sea-surface temperatures (numerical exercise left to the reader. actually it is anti-correlated). Why ? Because the isotopic signal is simultaneously affected by temperature and rainfall, both of which in this example are dominated by an ENSO signal on interannual timescales. It is an example of constructive interference, and calibrating coral d18O only against local T would hence discard precious climatic information.

    Magic ? No, dynamics.
    It is clear that many of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved. So while i respect and acknowledge the comments of skilled statisticians on this board, it would seem that to solve a climate problem they should do their basic homework and take a climate 101 class. And please stop being just as snobbish and arrogant as they claim we are.
    Then we might be able to have a fruitful discussion.

    To be fair, climate scientists could make an effort to explain these concepts more clearly to non-specialists. I’m sure we could work something out. I’m willing to help provided certain die-hard all-round-skeptics are willing to learn anything about climate.

    In summary, “calibrated is better than uncalibrated” is a profoundly simple-minded and Manichean statement.

    No one doubts that if we were in possession of a sufficiently high number of decently-calibrated temperature proxies throughout the globe, it would obviate the need for physics and fancy statistics. Until we have such a network, it is unproductive to endlessly denigrate things you do not begin to understand, Steve.
    Instead, I suggest you continue the intense scrutiny of the data because, unlike uninformed criticisms, it is leading us somewhere.

    Best,
    Julien

    PS : BTW, i would appreciate it if people stopped calling me “Dr”. That is exactly the type of intellectual snobbery i want to avoid when i come here. Especially if ‘Dr’ is a preface to a cheap ad-hominem attack. Ciao.

  43. Barclay E. MacDonald
    Posted Nov 20, 2007 at 6:09 PM | Permalink

    The surface temperature record’s ability to record temperature is implicit in the calibration therein.

  44. steven mosher
    Posted Nov 20, 2007 at 6:26 PM | Permalink

    Anthony, agreed. Move to unthreaded. With Juliens return, I’m going to shut up and watch.

  45. captdallas2
    Posted Nov 20, 2007 at 6:27 PM | Permalink

    Dr.-excluded JEG,

    Interesting comment. From a lay person, mind you: what do you think of Tsonis’s synchronized chaos math? I feel it may be a good way to validate teleconnections. Or is that overly simplistic?

  46. bender
    Posted Nov 20, 2007 at 6:35 PM | Permalink

    JEG,
    For the record, I agree with your clarified review here:

    1) is the approach novel ?
    2) is the methodology described with enough detail ?
    3) are all important choices justified ?
    4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?

    1) yes
    2) no
    3) maybe
    4) absolutely not
    5) absolutely not

    Loehle (2007) “as is” is simply too raw. But because it is salvageable, it is very far from “pseudo-science” (whatever that means). I’m glad you clarified your stance.

    i needed a break from computing Monte Carlo verification statistics anyway

    Good show! That’s the ticket! Can’t wait to see the results.

    I assume you know what we mean around here about avoiding “active ingredient” bcp?

  47. Clayton B.
    Posted Nov 20, 2007 at 6:37 PM | Permalink

    Similarly, just because one writes that a proxy is calibrated against temperature, it does not mean the job is done. What if the proxy in question only describes 10% of temperature variance, with 35% due to hydrological changes ?

    Then it’s not calibrated to temperature?

  48. bender
    Posted Nov 20, 2007 at 6:40 PM | Permalink

    His point is that there’s value in weighting the proxies according to the amount of variation they explain in calibration.

  49. jeez
    Posted Nov 20, 2007 at 6:51 PM | Permalink

    My brother’s an MD, PhD, FACP, and FACMG and I call him Dr. Certainly no disrespect when I use the term.

  50. Jan Pompe
    Posted Nov 20, 2007 at 6:56 PM | Permalink

    Thank you for this post, Steve. Like bender, I had simply assumed that Craig’s proxies were calibrated to temperature, mainly because his network was made up, as far as I could tell from the list, of already peer-reviewed studies (metastudy??). It didn’t occur to me that it was unusual and I couldn’t quite understand what the argument was about. I have been seeing the Soon and Baliunas paper, which reaped such heavy criticism from the Team, in a similar light. I also hadn’t realised that the Team relied on “home-made calibrations remote from scrutiny by peer reviewers”.

    A penny has dropped.

  51. fFreddy
    Posted Nov 20, 2007 at 6:58 PM | Permalink

    Re #42, JEG

    it is unproductive to endlessly denigrate things you do not begin to understand, Steve.

    Oh, dear.

  52. Yorick
    Posted Nov 20, 2007 at 7:00 PM | Permalink

    Why is it that when I hear “learn about the climate”, I hear “drink some Kool-Aid”? Loehle’s curve is utterly unsurprising to anybody who looks at non-tree-ring paleoclimate proxies. It is the stick that is wrong. It does not help that those pushing the stick also make noise about saving the planet, as if this should not give anybody pause as to the reliability of the data. When one feels the planet is at risk if people are not sufficiently frightened, what is a little fudging of the data, and refusal to recognize problems with methods?

  53. boris
    Posted Nov 20, 2007 at 7:01 PM | Permalink

    Similarly, just because a proxy is uncalibrated against local T, it does not mean it does not reflect temperature variations in some other parts of the globe.

    Seems untrustworthy. Sort of like disallowing hearsay as testimony.

  54. Tom C
    Posted Nov 20, 2007 at 7:03 PM | Permalink

    #42 JEG

    I think that CA readers understand the concept of teleconnection much better than you realize. In fact, we understand it so well that it causes discomfort for some.

    Please tell: what physical basis is there to suggest that bristlecone pines teleconnect to global temperature? And while you’re at it, what physical basis is there to expect the teleconnection to show up in the fourth PC of an average of uncalibrated proxies?

  55. Bernie
    Posted Nov 20, 2007 at 7:04 PM | Permalink

    I think harping on local calibration is now somewhat beating a dead horse. Steve McI’s point is that the minimum requirements that Julien raised are not frequently met. Now the teleconnection issue suggests a seminal point, at least as Julien frames it. I get the possibility of teleconnection or remote T calibration, but what teleconnection seems to require is a specified physics-based model as to why (a) the local calibration does not work and (b) a remote calibration does work – otherwise we are into glorified and, even worse, teleological cherry picking.

    So, a request to Julien – references that explain and demonstrate remote calibration as a methodologically sound approach – rather than raw positivism.

  56. Posted Nov 20, 2007 at 7:11 PM | Permalink

    I dislike arrogance in anyone, but particularly in those with knowledge, intelligence and talent. It sort of turns me off from what they are really saying, because I’m trying to understand why they are so arrogant. Perhaps it’s just the knowledge that it is they alone who care about the planet and our children’s future and they alone who can save us from our own stupidity.

    The one great thing I admire about Steve M is his ability to admit when he is wrong, to admit that he might be wrong and to admit that others might be right. That, I believe, is humility. A thing much absent today.

  57. Susann
    Posted Nov 20, 2007 at 7:19 PM | Permalink

    I started a post requesting more info on the concept of “teleconnection” from Julien, but stopped, not wanting to belabor anything that was already known by everyone else. However, upon seeing the attitude people have here towards the concept, I would appreciate it if Julien could provide a link or reference to a good source of info on it from a climate science perspective. I will read the archives here that discuss teleconnection with avid interest to see the CA take on it.

  58. Posted Nov 20, 2007 at 7:20 PM | Permalink

    Stirner wrote:

    It might also be interesting to overlay the Wahl and Ammann Scenario 6 (No Bristlecones) graph.

    How about overlaying the schematic from the original 1990 IPCC report (figure 7.1c)?

  59. Jack Linard
    Posted Nov 20, 2007 at 7:26 PM | Permalink

    I apologize for dumb comment (#19).

    Whenever I have tried to post previously, I have been rejected because of some apparent evil lurking in my computer.

    I believe I have something to add to these discussions and will try to be more serious in future if I am allowed in.

  60. Bernie
    Posted Nov 20, 2007 at 7:39 PM | Permalink

    I don’t think the teleconnections issue is technically difficult to understand – though Lubos is more than welcome to chime in: the issue is proof that, as a general methodology, it is viable. The proof will be demonstrated by well-specified models that are compelling. For example, Julien’s example above will be compelling to the extent that there are multiple proxies that use the same or very similar models. It is too easy to come up with ad hoc models that correlate one thing with another. The ball is in Julien’s court.

  61. Sam Urbinto
    Posted Nov 20, 2007 at 7:42 PM | Permalink

    In the case of the Holmgren record, this is dubious. There are other proxies in the list which, at first glance, were hydrological rather than thermal, precluding their use in this barebones approach. This would make any reviewer fidget in his/her seat upon reading and ask questions. Thus, i am genuinely happy that Steve is doing the homework neglected by Craig Loehle

    If the goal here is to be productive in what we’re trying to accomplish, this statement is really confusing me. JEG, you seem to be saying two things here. I don’t know and wouldn’t dream of attributing motives to you. Regardless of what you’re trying to do, here’s what I see you doing: making statements that may not seem confrontational to you but that I would categorize as being so. The effect of that cause (the root cause itself not being important) is that rather than discussing the issues, it’s spiraling into a circular, meaningless debate from something other than a common plane of reference.

    Perhaps you need to stop using phrases like ‘the Opposition’, ‘illustrious Hockey Stick graph’, ‘distinguished scientists like… Michael Mann’, ‘obscurantists’, ‘poorly-written piece of pseudo-scientific gibberish’, ‘sadistic pleasure in lacerating that article’ and ‘too eager to make a splash on CA’. But I also congratulate you on such things as ‘i could be more constructive’ and ‘so far i am pleased to see he is not giving Loehle a free ride’. But maybe look a little more critically at past work, regardless of the interconnections, that does the same things you take issue with in this one. Maybe you need more rum, pork loin and jamming, or tea, suntanning and yoga.

    So there’s a couple of issues here:

    1) If the original papers – what I suppose would have been temperature reconstructions using proxies – are not valid, what does that say about peer review, and about equating the state of climate in the past to that of the present?
    2) If the original papers are valid temperature reconstructions using proxies, what else needs to be done to them, then?
    3) How far removed can you be from the process while being involved in it? Or in other words, can you pick how you’d like your future behavior to be, or will you just switch between extremes?
    4) Why isn’t everyone providing the raw data and methods for processing it so it can be replicated?
    5) Everyone should be held to the same standards. This does not seem to be the case.
    6) I have estimated that if all remains the same, the forcing of going from 400 to 800 ppmv of CO2 will take about 35 years and result in about another 2.5 C of increase in the global mean anomaly. What say you?
    7) What’s your issue with Craig? I am wondering why you can’t see that if there’s so much overlap with Moberg et al, complaining about him and not them is rather disingenuous. List for us which of the 17 Craig used you think are not valid temperature reconstructions. While you’re at it, you can tell us why Hansen’s behavior on his past and present work seems so bizarre.

    BTW, I’d appreciate answers, and not hand-waving, calls to RTFR or any other similar behaviors, like pushing buttons on purpose and being transparent about that fact. It’s all we can judge from: your behavior and tone. Oh, walking in here acting like you know everything just because you have a PhD, then complaining of snobbery, seems rather odd. I’m not quite clear on that point; perhaps you can clarify it some?

  62. bender
    Posted Nov 20, 2007 at 7:51 PM | Permalink

    The teleconnection issue is, I believe, an issue of the temporal and spatial scaling of proxy (i.e. tree) responses to environment.

    Although trees only respond to changes in their immediate surroundings, it is possible to characterize these instantaneous responses over integrated time steps. (In the same way that climate is a human invention: the time-integral of weather.) If temperature and precipitation are independent random variables then you get the same response function whether you are looking at short or long time scales. But over very long time scales seasonally integrated responses to T and P are not in fact independent; they covary. (That is why circulatory modes such as ENSO are hybrid P/T signals.) Moreover, these are red noise processes. Consequently, the “uniformitarian principle” goes out the door, and what happens is that your proxy starts appearing to respond to large-scale phenomena, as opposed to local-scale phenomena. This is because you fundamentally have a mis-specified response model. The net result of the covarying precip/temp input is that the proxy appears to be responding to regional-scale climatic processes, when what is really happening is that the structure of your model error is changing over time in harmony with the hybrid P/T input signal. We interpret this spatial scaling mismatch as a “teleconnection”. The proxy at point A appears to be responding to an anomaly sequence centred on point B, i.e. it is at B where the covariance in P and T is strongest. (And a sample drawn from that point might prove that responses there are stronger than anywhere else. Unfortunately bcps don’t grow in the middle of the Pacific ocean where anomalies are generated.)

    Appearance is the operative phrase.

    Now, JEG may have a better explanation, and when he gives it I would listen. My offering here is speculation. I don’t know what the literature says about the subject. I’m only trying to make sense of what appears to be a simple, solvable paradox, by reasoning from basic plant biology. I will think about it more tonight.
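
    A toy numerical version of this (my own sketch, with hypothetical coefficients, in the spirit of JEG’s Palmyra example): let a remote index E drive both the T and P components that the proxy integrates, and give the local thermometer extra variability the proxy never sees. The proxy then correlates better with the remote index than with local temperature, with no magic involved.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        E = rng.standard_normal(n)                   # remote index (ENSO-like)

        sT = 0.6 * E + 0.8 * rng.standard_normal(n)  # E-forced local T component
        sP = 0.6 * E + 0.8 * rng.standard_normal(n)  # E-forced local P component
        proxy = sT + sP                              # constructive interference
        T_local = sT + rng.standard_normal(n)        # thermometer carries extra
                                                     # variance the proxy never sees

        r = lambda a, b: np.corrcoef(a, b)[0, 1]
        print("corr(proxy, remote E):", round(r(proxy, E), 2))        # ~0.73
        print("corr(proxy, local T): ", round(r(proxy, T_local), 2))  # ~0.58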

  63. Posted Nov 20, 2007 at 7:52 PM | Permalink

    Admittedly speaking from a point of ignorance on the finer details of this kind of thing, it seems to me that the best way to do a paleoclimate reconstruction is as Loehle 2007 did: form a set of “virtual 2000-year thermometers” from locally calibrated proxies, and then use those to generate a global mean temperature record (though the averaging should have been done with area weighting, something Loehle 2007 did not do).

    When you allow a bunch of proxies to calibrate to any temperature signal anywhere, don’t you run a big chance of finding a correlation simply through using too many proxies? After all, I can make the right linear combination of 20 random vectors fit a parabola, if my parabola is only sampled at 20 locations.
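
    (That last claim is a one-line exercise in linear algebra; a quick sketch:)

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(-1.0, 1.0, 20)
        parabola = x ** 2                      # target sampled at 20 points
        V = rng.standard_normal((20, 20))      # 20 random vectors (rows)

        # A random 20x20 matrix is almost surely invertible, so an exact
        # linear combination of the noise vectors reproduces the parabola
        coeffs = np.linalg.solve(V.T, parabola)
        print(np.allclose(V.T @ coeffs, parabola))  # True: perfect fit, zero skill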

    Again, naively, I would agree with many here that you have to show a physical mechanism for a “teleconnection” before accepting what the linear algebra is telling you. If a proxy is correlating well with the temperature record from a distant location, you have to assume that it’s by random chance unless a physical mechanism can be shown.

  64. cbone
    Posted Nov 20, 2007 at 8:05 PM | Permalink

    It is clear that many of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection.

    I think this classic cartoon illustrates the concept that JEG is implying in #42

  65. Aaron Wells
    Posted Nov 20, 2007 at 8:10 PM | Permalink

    CBone,

    I love it! I have always thought of that cartoon when the issue of teleconnections arises. It’s a perfect analogy. It’s a “Far Side” cartoon, is it not? Thanks for dredging it up!

  66. cbone
    Posted Nov 20, 2007 at 8:14 PM | Permalink

    Re: 65

    It is by a cartoonist whose pen name is S. Harris. His website is here: http://www.sciencecartoonsplus.com/

  67. Steve McIntyre
    Posted Nov 20, 2007 at 8:14 PM | Permalink

    At the NAS presentation day, Malcolm Hughes made a distinction between two approaches to proxy reconstructions that I thought was quite thoughtful (as I observed in my notes at the time).

    One approach was what he called the Schweingruber approach in which one identified sites that according to geographic and botanical information were expected to be temperature limited and then did simple statistics on a large network.

    He called the other approach the “Fritts approach” – what we would now call a Mannian approach, in which you took a total grab bag of proxies making no effort to ensure that they responded to local temperature and then relied on software to extract a signal.

    My entire instinct tells me that the Schweingruber approach is what’s needed. If I had my druthers right now, I’d like to see a lot of detailed ocean sediment analyses like David Black’s at Cariaco and Nancy Richey’s at Pigmy Basin. I don’t for one minute denigrate them.

    Do I believe that any statistical significance can be ascribed to the covariance between say the 6th principal component of Mann’s Stahle SWM network and the 11th temperature principal component – or the equivalent in RegEM: no.

  68. jae
    Posted Nov 20, 2007 at 8:42 PM | Permalink

    The whole problem with teleconnections is that you can invoke this “explanation” for whatever you want to prove. I’m sure that there are some teleconnections, but this idea is greatly overused by the Team. And make no mistake, JEG is a solid team member. Just look at his course outline. We are going slowly here, but we are getting to the bottom of this morass.

  69. bender
    Posted Nov 20, 2007 at 8:45 PM | Permalink

    Yes, what I am arguing is that the Schweingruber approach is intrinsically better (though more expensive) and that the Fritts approach leaves you vulnerable to potentially spurious correlations resulting from putative low-frequency (i.e. spatially teleconnected) signal responses that do not exist. In other words, the miracle step in the cartoon may be a product of temporally covarying, hybrid P/T signals whose synergistic effects are not captured in a linear univariate additive response model. This would destroy Fritts’s approach.

  70. Jonathan Baxter
    Posted Nov 20, 2007 at 8:46 PM | Permalink

    It is clear that many of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved.

    As one who has done graduate-level study in both physics and statistics, I find this to be a curious statement.

    Experimental physics relies on statistics in a fundamental way. Eg, when physicists say “The Higgs boson mass has an upper bound of 144 GeV at the 95% confidence level” they’re making a statistical statement. Same goes for “teleconnection”: it’s either a real physical process (or processes) and hence can be measured and subjected to statistical analysis, or it’s not. Physics and statistics go hand-in-hand.

  71. boris
    Posted Nov 20, 2007 at 8:46 PM | Permalink

    It seems reasonable to be open about methods for finding “the signal”. Hearsay is used during investigation. When it comes to policy debate a more restricted standard for data should apply. Hearsay is not used at trial.

    The science will take care of itself. It always does.

  72. bender
    Posted Nov 20, 2007 at 8:55 PM | Permalink

    jae, you will appreciate this one. The problem is not just one of spurious correlation, but of exaggerated strength of correlation. Bumping your proxy up from r=0.15 to r=0.45 makes a huge difference in its apparent credibility. The problem is those pesky degrees of freedom you ought to subtract as you hunt through endless space seeking an overfit “teleconnected” match. Steve M can confirm, but I doubt very much that the degrees of freedom are discounted in the Mannomatic as it hunts around for pattern matches. The redder the signal, the more punishing those lost degrees of freedom would be.
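
    One standard back-of-envelope discount (a sketch; the lag-one autocorrelations below are hypothetical) is the effective sample size often attributed to Bretherton et al. (1999): the redder the two series, the fewer independent points you really have, and the larger the |r| needed for significance.

        import numpy as np

        def n_eff(n, r1, r2):
            # effective sample size for correlating two AR(1)-like series
            # with lag-one autocorrelations r1 and r2
            return n * (1 - r1 * r2) / (1 + r1 * r2)

        n = 100
        for rho in (0.0, 0.5, 0.9):          # hypothetical redness of both series
            ne = n_eff(n, rho, rho)
            r_crit = 1.96 / np.sqrt(ne)      # rough two-sided 95% cutoff on r
            print(f"lag-1 r={rho}: N_eff={ne:5.1f}, |r| needed ~{r_crit:.2f}")
        # At rho=0.9, N_eff is about 10 and even r=0.45 fails the cutoff,
        # before any penalty for hunting through many candidate series.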

  73. bender
    Posted Nov 20, 2007 at 8:57 PM | Permalink

    The science will take care of itself. It always does.

    Although perhaps not at a time scale fast enough to prevent serious policy damage.

  74. Mark T
    Posted Nov 20, 2007 at 9:14 PM | Permalink

    We understand teleconnection quite well, maybe even better than JEG. Teleconnection makes sense when you consider that perhaps temperature in one area of the globe affects some other climate factor in another. I.e. high temperatures in Asia cause a drought in the Rocky Mountains. However, and this is a big however, tree-rings are supposedly measuring temperature, not some other factor teleconnected to temperature. If high temps in Asia cause an extreme drought in the Rockies, and tree-rings respond, they are no longer temperature proxies but precipitation proxies. How on earth is it possible to tell historically when this “switch” occurs? Indeed, what is the conversion factor for switching from temperature to precipitation? Certainly the ring widths no longer have the same scale? Or better, what happens when you select trees based on their resistance to precipitation changes but then decide that “well, back then it was precipitation acting as a proxy for temperature in Asia.”

    Utter nonsense.

    Btw, JEG, your inability to understand why linear multivariate methods designed to operate on uncorrelated inputs break down here (um, temperature and CO2, both forcers of tree growth, are correlated by hypothesis), while at the same time chastising CA posters for not understanding the issues, is exactly the kind of intellectual snobbery that YOU seem wont to partake in. Amazing how little you understand your own hypocrisy.

    Mark

  75. Carl Gullans
    Posted Nov 20, 2007 at 9:40 PM | Permalink

    JEG: So increases in tree ring growth are X% correlated with higher average temperatures everywhere around the globe… except near my proxy, where local conditions are affected by other processes that are Y% correlated with changes in *average temperature around the globe*, processes which are then in turn Z% correlated with tree ring growth? I don’t mean to be annoying, but are you serious? Can you not take a step back and see how ridiculous this idea is as fully explaining the “divergence” (a.k.a. zero predictive skill) problem? Furthermore, you are asserting then that local temperatures have less of an effect on tree ring growth than do non-temperature climate alterations from warming elsewhere (e.g. CO2, precip)? Where is the proof for this? Lack of proof should be rejection of this fanciful idea, not acceptance.

    BTW, Don’t GCMs, at the moment, predict future droughts (decreased tree ring growth) precisely due to an increase in temperature and within the region of nearly all tree-core samples? I’m not suggesting that the GCMs do or do not have any accuracy here, but shouldn’t this be problematic?

  76. Pat Keating
    Posted Nov 20, 2007 at 9:42 PM | Permalink

    70

    the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved.

    I’m a physicist but the first time I ran across ‘teleconnection’, I thought “BS?”. It sounds a hell of a lot more like signal-processing and accidental ‘correlations’ than physics. But I’m just a physicist, not a climatologist.

  77. Susann
    Posted Nov 20, 2007 at 9:44 PM | Permalink

    Although perhaps not at a time scale fast enough to prevent serious policy damage.

    Or climate damage.

    The analogy isn’t perfect, admittedly, but while we don’t understand the ins and outs of breast cancer, if diagnosed with it, I wouldn’t want to wait until we do to treat it. Even if the treatment isn’t foolproof, and even if there are side-effects, not treating while the tumor is growing, in order to wait until science is absolutely certain, is suicide.

  78. bender
    Posted Nov 20, 2007 at 9:51 PM | Permalink

    That’s the traditional understanding of correlation via teleconnection: it’s the product of two non-causal processes being causally correlated with a third. You have data for #1 (proxy) and #2 (instrumental at some distance away), but not #3 (instrumental at proxy), which would be the smoking gun if you had it.

    correl(#1,#2) > 0 => “teleconnection” between #2 and #3.

    Science on the fly – always a risky venture. What I’m suggesting is that proxy response to P+T+P*T is not just non-uniform (as Mark T supposes), but appears strongest at time-scales over which P*T covariance dominates. So your teleconnection is neither through precip, nor temp, but some hybrid drought signal. But without the interaction term in the calibration model you are unlikely to locate the true source of the teleconnection. (And that is assuming the source does not shift around over time, another questionable approximation.)

    I think there is nothing wrong with the teleconnection concept. It’s how you make use of it that is dangerous.

  79. Posted Nov 20, 2007 at 9:59 PM | Permalink

    @Pat Keating,
    I’m an engineer; I’ve never heard of teleconnections. I propose we share some popcorn and discuss the ontology of teleconnections. 🙂

  80. Pat Keating
    Posted Nov 20, 2007 at 10:11 PM | Permalink

    78 bender
    That’s helpful. Maybe I can pick your brain some more on this tomorrow. How do you exclude accidental ‘correlations’?

    79 lucia
    Sounds good, but I can’t do it this week. I have to contemplate my navel tomorrow, and count angels on a pinhead on Friday. Freddy Nietzsche is probably coming over on Saturday.

  81. Kenneth Fritsch
    Posted Nov 20, 2007 at 10:13 PM | Permalink

    Similarly, just because a proxy is uncalibrated against local T, it does not mean it does not reflect temperature variations in some other parts of the globe. This is something CAers consistently fail to understand, without ceasing to squeak about it. Take Palmyra coral d18O : it correlates better with remote (ENSO) than local sea-surface temperatures (numerical exercise left to the reader. actually it is anti-correlated). Why ? Because the isotopic signal is simultaneously affected by temperature and rainfall, both of which in this example are dominated by an ENSO signal on interannual timescales. It is an example of constructive interference, and calibrating coral d18O only against local T would hence discard precious climatic information.

    Magic ? No, dynamics.
    It is clear that many of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved. So while i respect and acknowledge the comments of skilled statisticians on this board, it would seem that to solve a climate problem they should do their basic homework and take a climate 101 class. And please stop being just as snobbish and arrogant as they claim we are.
    Then we might be able to have a fruitful discussion.

    The problem with the concept of teleconnections in temperature reconstructions, in my mind, is that without a good and reasonable a priori rationale and/or physical model, teleconnecting after the fact has to be suspect as a spurious correlation.

    You have made a rather harsh indictment of the (mis)understanding of teleconnections by participants here at CA, and in what I can only call a rather snotty tone. I can certainly bear your manners (or even become impervious to them) much more readily if you would educate some of us here by showing us a teleconnected temperature correlation used in a temperature proxy from peer-reviewed and linked papers, and how the author(s) used a priori rationales and/or physical models to make the connections.

    We have had Rob Wilson posting here and more or less tacitly approving of searching out trees with correlations to temperatures in the calibration stage and discarding those that do not correlate, without a lot of apparent effort, to me anyway, to present and discuss a priori or even after-the-fact criteria for doing so.

  82. Carl Gullans
    Posted Nov 20, 2007 at 10:28 PM | Permalink

    #78: Rhetorical question; would you believe a thermometer in NYC if it was correlated with temps elsewhere in the U.S. but not in NYC? The answer is hopefully no, because we know that a thermometer is nearly exactly correlated with local (in this case, within a few mm) temperatures. We know this through repeated experimentation. We know that there are relatively few confounding factors, and so would not assume the existence of teleconnection in the above thermometer… we would discard it as defective.

    What is the basis of the belief that bristlecone pine tree ring widths are correlated with temperature? How far away were the thermometers (in the temp record) that were used to calculate this correlation for any of the studies demonstrating tree ring temperature signals? Were any of them halfway around the world? If not, why is there even consideration of the assumption that in effect means that such a thing occurs?

    We know that trees carry a temperature signal because they are correlated with local temperatures. Suggesting that trees correlated with non-local temperatures but not with local temperatures are also temperature proxies is a bit ridiculous. Linearity and lack of interactions (or of other variables at all) are also poor assumptions. While teleconnection is a concept that makes sense, assuming its validity in the tree ring case is foolhardy and just plain stupid.

    Essentially, I agree with you, I was venting in response to #42 a bit more.

  83. bender
    Posted Nov 20, 2007 at 10:29 PM | Permalink

    -I can’t see any reason why proxy at A would correlate better with a low-frequency drought signal at B than A, other than due to random chance alone.
    -I can see why proxy at A might correlate better with a low-frequency drought signal at B than a high-frequency temperature signal at A.
    -I can see how model mis-specification error could muck up any search for meaningful correlations, and hence lead you to erroneously infer “teleconnections” that do not exist.

    A concrete example of teleconnection is the fact that during solar cycle peaks, high pressure air masses that form over NA are correlated with high pressure air masses that form over Siberia. High pressure blocks precip, leading to drought. Of course solar is not the only forcing, so the connection is weak and ephemeral.

    Out on a limb here.

  84. Carl Gullans
    Posted Nov 20, 2007 at 10:37 PM | Permalink

    #83: By the way, ask anybody who works in any other field whether they use multivariate linear regressions anymore. Most abandoned that practice at least ten years ago, since almost nothing in the real world has a linear relationship with anything else… or at the very least, almost nothing that could not be better modeled/predicted by a neural network or SVM.

    This could actually be done if we had accurate measurements of all confounding variables during some calibration period (e.g. local precip, temperature, CO2, acidity of rainfall, whatever) to predict proxy performance. Even then, if such a model produced a ~0 validation statistic, it would clearly be an overfit piece of garbage. You’d get fired for presenting something like that as useful in any business.

    Another point people forget is that “it’s the best that could be done with the available data” is NOT acceptable. If the best you can do is garbage, then you pack your bags and go do something else. If you can’t reconstruct the climate without non-robust and questionable assumptions, then I guess you won’t know past temperatures… doing the best you can do is the wrong choice here.

  85. bender
    Posted Nov 20, 2007 at 10:41 PM | Permalink

    I better just let JEG do the talking. I am having a pretty hard time defending the idea. A correlation is grounds for suspicion of direct or indirect causation, but it is not a proof. Hunt around in space after the fact and you’re bound to find lots of spuriously high correlations. No question.
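
    To put a number on “bound to find lots of spuriously high correlations”, here is a minimal Monte Carlo sketch (Python; pure noise by construction, nothing from any real proxy network). Screen one target series against a few hundred independent noise series and the best match looks impressive even though no relationship exists:

        import numpy as np

        rng = np.random.default_rng(1)
        n, sites = 100, 500                  # 100 "years", 500 candidate sites
        target = rng.normal(size=n)          # the series we hunt correlations for
        field = rng.normal(size=(sites, n))  # pure noise at every site
        r = np.array([np.corrcoef(target, field[i])[0, 1] for i in range(sites)])
        print(np.abs(r).max())               # typically ~0.3, despite zero true correlation
        print((np.abs(r) > 0.2).sum())       # a couple dozen sites pass a nominal 5% test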

  86. bender
    Posted Nov 20, 2007 at 10:45 PM | Permalink

    Re #86
    Neural networks have been used to “model” tree ring growth. Unfortunately these models are uninterpretable black boxes with no direct or obvious physiological interpretation.

  87. Posted Nov 20, 2007 at 10:47 PM | Permalink

    The Air Force Combat Climate Center on teleconnections:

    http://www.stormingmedia.us/88/8814/A881404.html

  88. Barclay E. MacDonald
    Posted Nov 20, 2007 at 11:00 PM | Permalink

    “Similarly, just because a proxy is uncalibrated against local T, it does not mean it does not reflect temperature variations in some other parts of the globe. This is something CAers consistently fail to understand, without ceasing to squeek about. ”

    When it comes to teleconnections, I am skeptical, and seriously concerned about these steps:

    “4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?”

    squeek! squeek!

  89. Carl Gullans
    Posted Nov 20, 2007 at 11:36 PM | Permalink

    #87: Absolutely agreed.
    #88: I was not aware of this, probably because they have not ever been discussed here, but a simple google search revealed plenty (including, at first glance, what might be the study that #42 references as an example of teleconnection!) that I’ll be looking into tonight. As a general statement you are correct about neural networks, but there are valid counterarguments. What is preferable: a linear model with a .3 R2 that has easily understandable relationships, or a NN model with a .6 R2 that is very difficult to understand? I would take the model with the better predictions on the validation data. If no true relation is being captured in a model, the model won’t validate well. You can almost completely ignore overfitting so long as the training and validation statistics are about the same!

    I would be very concerned, however, with training a NN (or any model) on 50 years of data, validating it on 30 or so, and then having it predict backwards 1000 years. Confounding variables may have existed back then that aren’t here now, and vice versa (as has been discussed here). Randomizing the training and validation periods rather than taking continuous year blocks would help, but not eliminate, this problem.

    NNs are not truly black box models either, although they are a complete bitch to get information out of. You might not know exactly what happens in there even if you’re looking at the formula, but one can change a single input variable (e.g. temperature) to the completed model (holding all others constant) and graph the changes in output for various conditions (high precip, low precip, medium precip + high Co2, etc). It’s annoying, but it can be done, and probably can be done better than I just described.
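
    In case it helps, here is a tiny sketch of that probing procedure (Python; the “model” below is a hypothetical stand-in for a trained network, invented for illustration — not any real fit to tree rings):

        import numpy as np

        # hypothetical stand-in for a trained model's predict() function
        def model(temp, precip, co2):
            return np.tanh(0.8 * temp) * (1.0 + 0.3 * precip) + 0.1 * co2

        temps = np.linspace(-3, 3, 13)       # sweep one input...
        for precip in (-1.0, 0.0, 1.0):      # ...holding the others at fixed levels
            print(precip, np.round(model(temps, precip, co2=0.0), 2))

    Each printed row is the model’s response to temperature under one fixed setting of the other inputs — exactly the graphs I described.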

    #89: Decision trees completely axe any concerns about normality or the shape of the distribution, which is a good thing, but the results don’t make much sense here (albeit without having access to the full text). Classification trees are not a good idea to use for continuous data as they appear to be doing, and if you did use them you couldn’t then apply a normalized STDev (which I assume, again without seeing the full text, is what they did since trees do not have confidence limits). Comparing r^2 to correctly classifying into n-bins also makes no sense if n is very small… need the full text.

  90. dover_beach
    Posted Nov 20, 2007 at 11:57 PM | Permalink

    The idea of teleconnections is not as silly as I first thought; nevertheless, whatever a tree is responding to, the response must have something to do with how these teleconnections affect local conditions. I mean the argument can’t be that a tree is responding to non-local conditions. It is the local temperature, precipitation, etc. that the tree is responding to and which the tree-ring width reflects, surely?

  91. bender
    Posted Nov 21, 2007 at 12:02 AM | Permalink

    #90 surely

  92. Ron
    Posted Nov 21, 2007 at 12:04 AM | Permalink

    Sorry people but a lot of you may be too young to appreciate this. Many years ago (1935) Gene Autry and the Melody Ranch Boys did battle with the Underground Kingdom. If memory serves, it was the underground nasties who used a sci-fi contraption that let them see Gene and the boys anywhere they wanted to look—vision at a distance. Wow. It was all great fun at those rip roaring Saturday Westerns when we were kids and not troubled with the difficulty of a TV screen without a camera at the other end. I guess we also could have accepted a tree growing thicker rings anticipating the temperature it might get because it sees the temperature somewhere over the mountains. Of course today we’d know it was all done with wormholes.

  93. Posted Nov 21, 2007 at 12:31 AM | Permalink

    How cool is this? Some guy in Colorado had a really healthy dinner tonight. I feel great because now, through nutritional teleconnections, I know I am also absorbing some of those minerals and vitamins.

    Is there actual PROOF (independently replicated) that this works with tree temp proxies, or is this just a fancy way to force correlation / causation where there actually is none?

  94. Steve McIntyre
    Posted Nov 21, 2007 at 12:39 AM | Permalink

    JEG writes:

    Please let me re-state the angle of my review. It is : does this paper meet basic criteria for publication in a climate journal, say any AGU or AMS publication ? Some experienced authors may want to correct me here, but my criteria have always been :
    1) is the approach novel ?
    2) is the methodology described with enough detail ?
    3) are all important choices justified ?
    4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?

    In the case of Loehle’s paper :
    1) yes
    2) no
    3) maybe
    4) absolutely not
    5) absolutely not

    Let’s discuss academic reviewing and maybe I’ll start a thread on this. Let me start by saying that I feel like an anthropologist when I see the form and style of journal peer reviews because I didn’t grow up in that system. I never wrote an academic article until I was over 55 and my first encounters with academic peer review came after I had had lengthy experience with due diligence on prospectuses, financial statements, business feasibility studies – all done differently than journal peer review.

    I don’t view journal peer review as a bad thing, only as very casual. I’ve reviewed a couple of papers at Climatic Change and, as a result, Climatic Change changed their policies and now require authors to make data available. I asked to see supporting data and source code in one review (which authors are required to submit in econometrics); Stephen Schneider said that no reviewer had ever asked for those things in 28 years and refused to ask the authors for code. In one case, the authors actually withdrew the paper rather than provide supporting data.

    In business, there are a lot of important documents that are not “novel” in an academic sense. My own sense of the present situation in climate science is that the demand for novelty is not necessarily relevant. For example, let’s consider the Almagre core that we’ve collected. The principal interest is really just the data. We can comment on the data, but I don’t think that it would behoove us to try to do anything “novel”. That would introduce an irrelevant variation.

    Your next criteria:

    2) is the methodology described with enough detail ?
    3) are all important choices justified ?

    I completely agree that these are important issues. Obviously proxy climate science has a disastrous history in this respect. Methodologies are consistently described not merely with inadequate detail, but even inaccurately. (This is one of the reasons that I want to see source code – contrary to perception, it’s not to nitpick, it’s to find out what the authors actually did, as opposed to what they say they did.) I probably have more experience than anyone in assessing whether a methodology is described in enough detail to permit replication. How does Loehle compare with the predecessor articles? Relatively well. I’ve come close to replicating his results, though the methods need to be clarified further. But I’ve been completely unable to decode MBH99 confidence interval calculations or how Mann and Jones 2003 works. So on issue (2) – having actually dug into it, I think that Loehle could readily amend his article to meet the standards that you and I seem to share.

    As to the justification of choices, I’m not sure how you differentiate this from methods. Loehle described his selection criteria (something that I construe as part of methods). Is this what you have in mind?

    Are the uncertainties appropriately discussed?

    This is a real conundrum because, in my opinion, uncertainties are not appropriately discussed in any proxy reconstruction article. MBH98 reports uncertainties but its methodology is incorrect and thus not “appropriate”. MBH99 reports uncertainties but no one knows how the uncertainties were calculated. I’m not sure that anyone really knows how to calculate uncertainties for these reconstructions right now.

    This puts a fair reviewer in a bit of a conundrum. Can a reviewer use a defect that affects an entire discipline to prevent publication of an article that in this respect is no better and no worse than other entrants? Here the reviewer should be conscious of whether his views are being affected by adverse personal interests. For example, JEG is expressing a sensible objective here, but is he using this ostensible objective primarily as a trade restraint, i.e. to discourage publication of an article primarily because it is presenting an MWP? (I’ve got an idea here. If JEG can explain to us how MBH99 confidence intervals are calculated, maybe we can get this methodology applied to Loehle.)

    are the conclusions warranted by the analysis ?

    I don’t think that Loehle can, with any confidence, claim that the MWP was warmer than the 20th century any more than Mann can claim that the 1990s were the warmest decade of the millennium. I would certainly have made a more nuanced conclusion.

    It seems to me that JEG’s detection of an unsupported conclusion is much more acute in an adverse article. Unsupported conclusions occur elsewhere without JEG being troubled. (That is not an argument against supporting the conclusions, BTW – just an aside.)

    JEG studiously avoids the consideration of precedents – something that is very important in legal decisions where judgement is also involved. If journals have consistently approved articles in a field that do not meet the standards that JEG and I aspire to, at what point would we as reviewers have the right to unilaterally raise the hurdles for the entire field? In such circumstances, I think that a reviewer can fairly do this only to articles that he does not oppose. So for example, if JEG were reviewing an article by Mann or Ammann and he wanted to use that occasion to draw a line in the sand and get them to meet the above standards, no one would argue about it. But if he’s doing it to an article adverse in interest and his new righteousness has the effect of suppressing an article that shows an elevated MWP while otherwise being indistinguishable from something by Hegerl or Esper etc, then it’s probably not fair.

    Has JEG supported his allegation of “pseudo science”? It doesn’t seem to me that he has.

  95. bender
    Posted Nov 21, 2007 at 12:41 AM | Permalink

    Ah, here’s the bit I was missing in my reasoning.

    Suppose proxy A is “teleconnected” with climate centre B. The climate signal at B (think central Pacific ocean) may be dominated by low-frequency variability as compared to the climate signal at A (think California). If the response of proxy A is integrated over time, it will be a low-frequency response and therefore its signal more correlated with climate signal at B than A. i.e. Climate signal A is contaminated with noise to which the proxy appears not to be responding. Of course it IS responding; it’s just that the models are mis-specified and the data poorly resolved due to sampling error, so you will never pick out that high-frequency response. So … climate station B appears more causal than A. Twilight zone teleconnection.

    Yes, I think that’s it. I was forgetting: the two stations need not be homogeneous in the frequency domain. A powerful node in the global circulation is going to have lower-frequency variability than some satellite location.

    You need to dissect out all the causal layers to understand how this teleconnection effect can happen.

    The reason teleconnection is important is that when r(proxy A, climate station A) = 0 you might be inclined to reject a causal relationship, but r(proxy A, climate station B) >> 0 may give you pause. When generating a hypothesis you don’t want to be overly dismissive. (But by the same token, when implementing a trillion-dollar policy you don’t want to be working with mere working hypotheses! You want some assurance that the science is further along than that.)
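
    To check my own reasoning, here is a toy version of the mechanism (Python; all series synthetic, numbers invented): give site A the same slow signal as centre B plus heavy local weather noise, let the proxy integrate a couple of decades of local climate, and the proxy indeed correlates better with B than with its own local record.

        import numpy as np

        rng = np.random.default_rng(2)
        n, k = 500, 25
        B = np.cumsum(rng.normal(size=n))      # slow signal at remote centre B
        B = (B - B.mean()) / B.std()
        local = B + 3.0 * rng.normal(size=n)   # site A: same signal + weather noise
        proxy = np.convolve(local, np.ones(k) / k, mode="valid")  # ~25-yr integration
        print(np.corrcoef(proxy, B[k - 1:])[0, 1])      # higher: the "teleconnection"
        print(np.corrcoef(proxy, local[k - 1:])[0, 1])  # lower: local link masked by noise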

  96. Posted Nov 21, 2007 at 12:48 AM | Permalink

    If teleconnections really were as important as JEG says, then they would be explained in the Team’s papers and we would know all about them, wouldn’t we? Or those papers wouldn’t meet JEG’s standards for publication, would they? Or is this all just a diversion?

  97. bender
    Posted Nov 21, 2007 at 12:49 AM | Permalink

    #95 Glad to see you are not letting the “pseudo-science” claim go unchallenged. I think that was a very unfortunate rookie mistake that he should retract and apologize for.

    Apologies for the teleconnections distraction if it’s OT for this thread.

  98. Steve McIntyre
    Posted Nov 21, 2007 at 12:58 AM | Permalink

    Maybe the correct course of action is to try to find proxies that are actually responding to local temperature, rather than teleconnecting. For example, we don’t estimate the temperature in Chicago by measuring rainfall in Thailand, even though there may be some teleconnection.

    Let’s grant JEG the premise that (say) a coral dO18 somewhere is teleconnected to ENSO. If that relationship is established in a third party article and the calibration to some intermediate variable can be sensibly used in a model, then maybe it would have some utility in a properly constructed model. But the third party calibration should be a datum to the reconstruction rather than one of ten thousand coefficients calculated from a Mannian multivariate calculation.

    My guess is that the attempt to utilize confounded proxies is somewhat of a blind alley and the most pressing need is to develop better temperature proxies – perhaps Mg-Ca is a step in that direction.

  99. bender
    Posted Nov 21, 2007 at 12:59 AM | Permalink

    Yes, sonicfrog, but there are no bcps in, say, the Pacific ocean to register the effect locally.

  100. bender
    Posted Nov 21, 2007 at 1:04 AM | Permalink

    #99 I agree completely. Just because the concept isn’t loonie doesn’t mean it hasn’t been abused in practice.

  101. Posted Nov 21, 2007 at 1:08 AM | Permalink

    #42

    My argument that “an error bar is better than none” was rejected on the grounds that the error bars were meaningless.

    We have a trade-off here: no error bars or wrong error bars. Not a good situation – no indication of accuracy, or a false indication of accuracy (which leads to discussion about reliability, or consistency in some fields). Here are the MBH98-style CIs for Loehle’s reconstruction that Jean sent me:

    It is clear that much of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved. So while i respect and acknowledge the comments of skilled statisticians on this board, it would seem that to solve a climate problem they should do their basic homework and take a climate 101 class.

    I think statisticians agree that physics must be involved, for example

    A statistical relationship, however strong and suggestive, can never establish a causal connection: our ideas on causation must come from outside statistics, ultimately from some theory or other.

    (Kendall’s ATS)

    I’d like to take that climate 101 that explains physics of all teleconnections in MBH98.

  102. bender
    Posted Nov 21, 2007 at 1:18 AM | Permalink

    third party calibration should be a datum to the reconstruction rather than one of ten thousand coefficients calculated from a Mannian multivariate calculation

    When doing statistical analysis I distinguish between exploratory methods designed to generate working hypotheses vs. inferential statistics designed to test an a priori hypothesis. Mannomatic pattern matching and post-hoc invoking of teleconnection is exploratory analysis of the worst kind. That half-cooked stuff should never have been fast-tracked up to the level of global policy. (Yes, Susann, the precautionary principle.) Novel, yes. Correct, no. Pseudoscience? Ask JEG.

  103. bender
    Posted Nov 21, 2007 at 1:21 AM | Permalink

    Re #102
    Physics? Of teleconnections? Oh dear, maybe I missed something?

  104. dover_beach
    Posted Nov 21, 2007 at 1:31 AM | Permalink

    Bender, my remark at #90 was a cry in the wilderness. Some of the practices Steve has reported on recently have simply been breathtaking. I’ve been following the debate daily for almost a year now and the more I do the more this field looks like a house of cards. And yet I continue to believe there must be a method in their madness; the curse of innocence.

  105. jeez
    Posted Nov 21, 2007 at 3:18 AM | Permalink

    RE: 92

    In 1935 I doubt you kids in the theater had any mental picture of television with or without cameras at all. Commercial television was launched at the 1939 World’s Fair.

  106. Paul
    Posted Nov 21, 2007 at 4:04 AM | Permalink

    Well, proof indeed that nobody reads my posts.

    My comments from the original Loehle thread:

    The overall thrust appears to be a complaint that Craig has not mirrored the standard team approach (call it global calibration, or regression of regional/local/sourced PC/proxies against estimates of global mean temp.).

    I can’t buy into that. This represents a truly independent (the first such???) method of addressing the same essential question:

    My next gem, which might also get ignored, is: aren’t we looking at 2SLS (two-stage least squares – equivalently, the Instrumental Variables approach) here? At least in terms of approach, if not in actual application.

  107. EW
    Posted Nov 21, 2007 at 4:14 AM | Permalink

    He called the other approach the “Fritts approach” – what we would now call a Mannian approach, in which you took a total grab bag of proxies making no effort to ensure that they responded to local temperature and then relied on software to extract a signal.

    relying on software for extracting signal – reminds me again of DNA phylogeny studies. Much time and effort was spent to design programs that take into account this or that kind of nucleotide substitution and distill a phylogenetic tree out of a sequence dataset. But deep in their souls, everybody knows that if the phylogenetic signal isn’t already visible when you put the sequences together for the first time, then no amount of calculation, bootstrapping or MCMC makes it better.

  108. Don Keiller
    Posted Nov 21, 2007 at 4:55 AM | Permalink

    Re #41 “It is clear that much of the bullets shot at “uncalibrated proxies” by Steve M (and much of the CA crowd) are a consequence of their profound misunderstanding of the very concept of teleconnection. This is where statistics fail to tell us about all the mysteries of the World : unfortunately, Physics gets involved. So while i respect and acknowledge the comments of skilled statisticians on this board, it would seem that to solve a climate problem they should do their basic homework and take a climate 101 class. And please stop being just as snobbish and arrogant as they claim we are.”

    This is like no physics that I have ever heard of – at least operating at the macroscale. My understanding is that quantum entanglement is a phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated. Without additional explanation from JEG, I guess this is the “Team” explanation for “Teleconnections”. OK, so we have a hypothesised quantum mechanical explanation for why a BCP on Sheep Mountain can respond to the temporal variations in the large-scale patterns of the (climatic) spatial field.
    It is all very well proposing a hypothesis, but science progresses by the testing of such.
    Go on JEG/Team construct your null hypothesis and go right ahead.
    In the meantime, just in case this does not prove possible, I would be very happy for you to sign up for a second year module that I teach called “Organisms in their Environment”. In summary it shows, with reference to the peer-reviewed literature, how a variety of organisms adapt to their particular ecological niches. My specialist input is in environmental plant physiology. Strangely, I have not yet found the need (or indeed the papers) to include plant responses to non-local environments.

  109. Gerald Machnee
    Posted Nov 21, 2007 at 5:39 AM | Permalink

    Teleconnection – looks like JEG is following Mann’s footsteps – it works for the BCP but not the MWP.

  110. bender
    Posted Nov 21, 2007 at 6:55 AM | Permalink

    #107
    Truly independent? Hardly. Not in a functional/statistical sense. Read Loehle & Moberg. The overlap between the two networks is large. Independent in terms of Wegman’s social network, well, yes.

  111. Bernie
    Posted Nov 21, 2007 at 7:14 AM | Permalink

    #99
    Steve:
    Occam’s razor says you are right. However, paleos have these neat potential proxies, e.g., BCPs, that will have to be ignored unless there is some way to do a T calibration. You correctly argue, IMO, that they should simply drop them. If they persist in using proxies with no meaningful relationship to local T, then it is incumbent upon them to specify a compelling and generalizable physical model – otherwise they are simply reinventing the high correlation between the number of parsons and the consumption of rum in 18th Century Boston. It is all spurious. Julien and others need to formalize and validate the remote T calibration methodology.

  112. jae
    Posted Nov 21, 2007 at 7:55 AM | Permalink

    JEG seems to have raised his own bar, regarding what is required for publishing. His original post (143 on Loehle Proxy thread) says:

    As a climate scientist, and one currently working on climate reconstructions (with M.Mann on top of that !), one may be surprised to read that i welcome this study as a useful “What if ?” experiment. I therefore agree with bender (#16) that “the approach is novel and the results worthy of publication”. Unfortunately, this is the only positive thing that can be said about it.

    But now:

    Please let me re-state the angle of my review. It is : does this paper meet basic criteria for publication in a climate journal, say any AGU or AMS publication ? Some experienced authors may want to correct me here, but my criteria have always been :
    1) is the approach novel ?
    2) is the methodology described with enough detail ?
    3) are all important choices justified ?
    4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?

    In the case of Loehle’s paper :
    1) yes
    2) no
    3) maybe
    4) absolutely not
    5) absolutely not

    I think I agreed with him more before. While the best studies would, perhaps, meet all his criteria, and there are some flaws here, I am sure glad the work was published so all could see it. I think we are all better off because of it, despite its flaws.

  113. steven mosher
    Posted Nov 21, 2007 at 8:25 AM | Permalink

    I said a while back that folks should read JEG’s ENSO papers. Here’s an abstract.
    http://digitalcommons.libraries.columbia.edu/dissertations/AAI3249076/

    re the teleconnection issues

  114. Bob Meyer
    Posted Nov 21, 2007 at 8:30 AM | Permalink

    Re #95

    bender said:

    “Climate signal A is contaminated with noise to which the proxy appears not to be responding. Of course it IS responding; it’s just that the models are mis-specified and the data poorly resolved due to sampling error, so you will never pick out that high-frequency response. So … climate station B appears more causal than A. Twilight zone teleconnection.”

    If I understand this, then would it be possible to first filter out the high frequency components from your “climate signal A” and then look at the correlation? By lowering the variance due to high frequency components you should improve the correlation, no?

    But is the low frequency response that you describe real or coincidental? And is this what is meant by “teleconnection”? I think I understand where you are coming from but JEG’s explanation is either beyond me, or beyond reality.

    I am somewhat handicapped here because my statistical toolbox is geared towards signal processing and like the old saw “when your only tool is a hammer, every problem looks like a nail”. So any help here would be appreciated.
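
    To make my question concrete, here is the kind of thing I have in mind, as a toy sketch (Python; synthetic series, and I may be off base): low-pass both series first and see whether the correlation recovers once the high-frequency variance is removed.

        import numpy as np

        rng = np.random.default_rng(3)
        n, k = 500, 25
        signal = np.cumsum(rng.normal(size=n))        # shared low-frequency signal
        signal = (signal - signal.mean()) / signal.std()
        A = signal + 3.0 * rng.normal(size=n)         # "climate signal A" + HF noise
        proxy = signal + 0.5 * rng.normal(size=n)     # proxy tracking the LF signal

        def lp(x):                                    # crude low-pass: running mean
            return np.convolve(x, np.ones(k) / k, mode="valid")

        print(np.corrcoef(proxy, A)[0, 1])            # raw: modest (~0.3)
        print(np.corrcoef(lp(proxy), lp(A))[0, 1])    # low-passed: much higher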

  115. steven mosher
    Posted Nov 21, 2007 at 8:32 AM | Permalink

    here is another JEG study.

    Click to access Emile-Geay_etal2007inpress.pdf

    For those seeking a physical mechanism of teleconnection, Rossby-wave-like teleconnections are cited in this paper.

    FWIW

  116. windansea
    Posted Nov 21, 2007 at 8:33 AM | Permalink

    JEG writes:

    And yes, i would argue that previous Team published reconstructions meet these criteria for the most part.

    to be convincing, the paper would have to demonstrate that each proxy is :
    a) a temperature one
    b) a good one at that.

    fair enough — could you explain why you think the use of BCP tree ring proxies in Team reconstructions to this day is justified?

  117. Lawrence Hickey
    Posted Nov 21, 2007 at 8:36 AM | Permalink

    Maybe Loehle should develop a video game, where the menu of proxies is presented with a provenance for each and a discussion of why it is a temperature proxy (some calibrated, some not, etc.). Then the user selects the statistical methodology consistent with the proxy set chosen. (Maybe Mann’s bristlecone-heavy, statistically faulty choice can be one of the options.) Then the reconstruction generator makes a temperature reconstruction. With a feeling for the number of degrees of freedom inherent in the reconstruction process, the entire enterprise would have as its goal the elucidation of the whole reconstruction game. The player can construct an AGW result with ease, but in the ensemble of possible reconstructions, it will be evident that it is a sorry dogleg, constructed to return a preconceived result. The uncertainties and the conclusions, points 4) and 5) of JEG, would be addressed to his satisfaction, because the conclusion is that the technique is burdened by so many degrees of freedom that it is inherently uncertain. I am not entirely being sarcastic. If such a workbench were extensible, with user R or Java extensions, it might tap into a creative well that would explore even more of the vast dimensionality of the problem.

  118. Bob Meyer
    Posted Nov 21, 2007 at 8:51 AM | Permalink

    Re 77

    Susann said:

    [T]he analogy isn’t perfect admittedly, but while we don’t understand the ins and outs of breast cancer, if diagnosed with it, I don’t want to wait until then to treat it. Even if the treatment isn’t foolproof, and even if there are side-effects, not treating when the tumor is growing in order to wait until science is absolutely certain is suicide.

    The analogy is better than you think. Imagine the horror of a woman treated for breast cancer who doesn’t have it. Vomiting, hair loss, fuzzy headedness, lack of energy, etc. Now imagine this not for a short treatment followed by cure and happiness, but a life long (and life shortening) treatment with no prospect of a return to the pre-diagnosis condition.

    The precautionary principle applies to taking action in the face of uncertainty as well as not taking action in the face of uncertainty. That’s why being accurate in the diagnosis is so important – lives hang in the balance.

  119. bender
    Posted Nov 21, 2007 at 8:59 AM | Permalink

    Re #116

    Thirdly, we address similar questions for the Holocene, and explore how solar and orbital forcing could have produced centennial- to millennial-scale variability in the ENSO system. Using the same model, forced by our best estimates of solar irradiance, we conduct ensemble simulations perturbed by realistic amounts of stochastic wind. We show that solar irradiance can plausibly generate millennial-scale ENSO variance above the model’s level of internal variability, in spite of the noise. We then explore teleconnections to the North Atlantic and North Pacific, Southeast Asia and Central Andes regions, and offer a mechanism explaining the major paleoclimate records of solar-induced variability over the Holocene.

    More or less my explanation of “teleconnectivity”, albeit with greater precision of wording. Which you’d expect for a PhD thesis.

    Thanks for the link, mosh.

  120. Posted Nov 21, 2007 at 9:55 AM | Permalink

    Re #57

    Susann says:
    November 20th, 2007 at 7:19 pm

    I started a post requesting more info on the concept of “teleconnection” from Julien, but stopped, not wanting to belabor anything that was already known by everyone else. However, upon seeing the attitude people have here towards the concept, I would appreciate if Julien could provide a link to or reference to a good source of info on it from a climate science perspective. I will read the archives here that discuss teleconnection with avid interest to see the CA take on it.

    For teleconnections to the tropical Pacific (which is what i know about), the best reference that springs to my mind is :

    Trenberth, K. E., G. W. Branstator, D. Karoly, A. Kumar, N.-C. Lau, and C. Ropelewski (1998), Progress during TOGA in understanding and modeling global teleconnections associated with tropical sea surface temperatures, J. Geophys. Res., 103, 14,291–14,324, doi:10.1029/97JC01444.

    JGR is an expensive journal, though, and K.Trenberth did not make it available through his website (unlike Mann). But perhaps your home institution’s library has a subscription ? Sorry i can’t help more : if i start sending articles to everyone it may open a big can of worms. But requesting the article from the author usually works.

    On the other hand, it may be a little too technical for your taste.

    I will ask our own teleconnection expert Peter Webster for more accessible references.

    Cheers
    J

  121. Lance
    Posted Nov 21, 2007 at 10:09 AM | Permalink

    In the absence of a null hypothesis, any supposed “teleconnected effect”, data-mined from proxy studies, should be considered ad hoc at best.

  122. jae
    Posted Nov 21, 2007 at 10:12 AM | Permalink

    Please let me re-state the angle of my review. It is : does this paper meet basic criteria for publication in a climate journal, say any AGU or AMS publication ? Some experienced authors may want to correct me here, but my criteria have always been :
    1) is the approach novel ?
    2) is the methodology described with enough detail ?
    3) are all important choices justified ?
    4) are the uncertainties appropriately discussed ?
    5) are the conclusions warranted by the analysis ?

    In the case of Loehle’s paper :
    1) yes
    2) no
    3) maybe
    4) absolutely not
    5) absolutely not

    I submit that none of the peer-reviewed climate modeling articles that I have seen in the prestigious journals meet 2, 4, and 5. Of course, I have not read many, but from the comments I’ve seen here, none of them seem to meet 4.

  123. Larry
    Posted Nov 21, 2007 at 10:14 AM | Permalink

    122, indeed, this looks like pure conjecture. If it isn’t, I don’t think they’ve made the case.

  124. bender
    Posted Nov 21, 2007 at 10:21 AM | Permalink

    #122/124 Agreed. (I’m explaining the concept, not arguing its legitimacy in quantitative temperature reconstruction.)

  125. Posted Nov 21, 2007 at 10:22 AM | Permalink

    If teleconnections really were as important as JEG says, then they would be explained in the Team’s papers and we would know all about them, wouldn’t we? Or those papers wouldn’t meet JEG’s standards for publication, would they? Or is this all just a diversion?

    The concept is considered ‘standard knowledge’ in climatology since the 1980’s.
    Do you ever expect to see a proof of the central limit theorem in every statistical article that uses the normal distribution ? I would venture to say ‘no’. Does it mean i should arrogantly confront the editors of, say, the Journal of the Royal Statistical Society for ‘obfuscating’ the issue ? No, it means i should humbly go do my homework. I certainly have a lot to learn in statistics : i think i’ve been pretty humble about it.

    I’m sure you’ve heard more than enough times the phrase “standing on the shoulder of giants”.
    Statisticians stand on Gauss, Bayes, Kendall, Pearson and others (forgive my ignorance).
    We stand on Horel, Wallace, Gutzler, Webster, Hoskins, Karoly, Trenberth, and people like that.

    The main issue here is twofold :
    1) language differences between the two fields are a hurdle to common understanding.
    2) climatology is a vastly more immature science than statistics, having only taken off the ground in the 1950s or 1960’s (some people may want to correct me here ; the point is : it wasn’t 1750). As a result, it is hard to find a clear undergraduate-level account of even some basic concepts. It doesn’t mean it’s bul***, though.

    I’ll do what i can to help on that front… and welcome any statistician to contact me for collaboration (or just basic education) if they have bright ideas to share. Humility goes both ways, you know.

    Cordially,
    Julien

    Steve: A habit that I’ve noticed among visitors to this blog engaged in a debate as JEG is here is the tendency to be more responsive to perhaps less-focused observations by non-active posters. I don’t expect visitors to necessarily know who is an active poster and who isn’t, but the pattern has occurred before and, all too often, it then means that they don’t respond to the more active posters. The post to which JEG responded here is Eric’s 2nd post at this blog, his last one being 6 months ago. Just because someone posts here doesn’t mean that he has statistical expertise. It wasn’t bender, Jean S, UC, myself or other active posters who asked this question and none of us would have posed the question as it was above. All of us are prepared to accept that teleconnections in climate exist and a condescending response is unhelpful. In JEG’s own specialty, Gilbert Walker did seminal work in both statistics and ENSO (and there’s a nice article on this.) That doesn’t mean that Mannian-type algorithms make any sense.

  126. steven mosher
    Posted Nov 21, 2007 at 10:42 AM | Permalink

    RE 120: Yes, bender, that was one of the quotes from JEG that resonated with your explanation.

    Note that JEG also makes predictions about tree rings in South America. Interesting, no?

  127. steven mosher
    Posted Nov 21, 2007 at 10:57 AM | Permalink

    I think JEG goes a bit beyond mere hand waving about teleconnections, at least in his work. JEG, correct me if I’m wrong, but you wrote:

    “Consequently, if solar-induced ENSO variability did actually occur, it should appear in Southern Hemisphere climate records, e.g. tree-ring records of drought-sensitive regions of South America. This is in contrast to the predictions of water-hosing experiments in the North Atlantic [Zhang and Delworth, 2005], which have asymmetric responses about the equator in the Pacific. It is hoped that high-resolution proxy data from the Southern Hemisphere will soon enable us to distinguish between these competing paradigms of global climate change.”

    Essentially, you are making a prediction of sorts about high res proxy data that you have yet to
    examine. If true, that puts your position in a different light than folks here are used to seeing.

    There is also a tangent that ties into bringing proxies up to date.

  128. RomanM
    Posted Nov 21, 2007 at 11:03 AM | Permalink

    #126 JEG

    Do you ever expect to see a proof of the central limit theorem in every statistical article that uses the normal distribution ?

    No, I wouldn’t. However, I WOULD expect that the author making such a reference would clearly indicate the set of variables under consideration and why the assumptions that are needed for the application of the CLT apply in that particular situation. The devil is in the details. Most of the points that need making with regard to teleconnection have been pretty well presented by bender, Bernie, MarkT and others.

    However, anyone intending to include teleconnection effects needs to specify how they are supposed to work in their setting. The statistical model should explain how the temperature “signal” on, say, a one-dimensional tree ring response, gets replaced (sometimes? always?) by the precipitation, CO2 fertilization, drought or other effect controlled by the distant climate. It should explain why the effect is always in the same direction and of the same magnitude (otherwise the “linear” response may no longer be linear) as the temperature response. The model should be able to distinguish between effects without other information, since that information will not be available outside the period of temperature record. It should also contain factors which will account for the fact that some of the proxies at the site respond to local temperatures and others only to global. Of course, some justification will be given for the stationarity of the teleconnection over the period being studied, since you say in the summary of your thesis: “Lastly, we investigate how ENSO teleconnections might have differed during the Last Glacial Maximum (LGM).”

    No, you can’t apply the CLT if you don’t have any specific variables or structures to apply it to.

  129. Posted Nov 21, 2007 at 11:17 AM | Permalink

    Dear Steve,

    I completely agree that it is an advantage to have pre-calibrated proxies. But in fact, I am surprised that you agree with me. 😉 If we prefer pre-calibrated proxies, it means that we tend to trust physical, theoretical arguments more than statistical arguments. It’s because these proxies must be justified to be good before their correlation with instrumentally measured temperatures is compared with the analogous quality of other proxies.

    On the other hand, the statistical MBH-like (and MM-like?) approach would be to choose the proxies dynamically by their agreement with known measured temperatures. I would say that if such a statistical test had a lot of nontrivial information in it and the good proxies were highly correlated with the measured temperature in the relevant time interval, the statistical dynamically chosen approach would be superior.

    However, I think that the amount of information in the 20th century temperature is poor and the accuracy with which typical proxies reconstruct it is poor, too. So we face a dangerous situation in which very bad proxies might get into the ensemble by pure chance (correlation doesn’t imply causation) and totally contaminate it. That’s why I would prefer non-statistical, independent arguments to get the right candidate proxies and their correct normalization. If you tell me that Loehle is the first one who does it in this old-fashioned way, I say Good for him. But there may be other criteria to look at his work, too.

    Best wishes
    Lubos

  130. bender
    Posted Nov 21, 2007 at 11:18 AM | Permalink

    Thanks for the reply in #126 Steve M. I get the sense I’m being dodged, not ignored. And gosh I hate that. I really don’t want to fill up all the threads with stuff about JEG – but he refuses to face the music on this “pseudo-science” allegation and what a level application of that standard would mean for IPCC AR4.

    Last post for the day.

  131. Posted Nov 21, 2007 at 11:19 AM | Permalink

    Roman, that was my point. I don’t deny or dismiss teleconnections, but they shouldn’t be raised as post hoc support for work (as opposed to conclusions) already done. And if they are raised to support the conclusions, then more explanation is needed. Sorry if my post was confusing or disrespectful.

  132. Steve McIntyre
    Posted Nov 21, 2007 at 11:20 AM | Permalink

    Let me try again on the concept of using (say) ENSO indicators in a temperature reconstruction.

    Let’s step back from proxies for a moment and think about instrumental measurement. When Jones or Hansen is making an estimate of global mean temperature, do they give a weighting to ENSO indicators in the calculation? To my understanding, they don’t.

    Let’s suppose that you only have 18 temperature instrumental measurements in the entire world and you have an ENSO measurement as well. It seems possible to me that you might be able to use the ENSO measurement to improve your estimate of global temperature, but isn’t this something that should be demonstrated empirically in a calibration period? Perhaps it’s already been done by one of the “giants” – if so, I’d be interested in the article. But it also seems possible to me that you’re better off just using the sample of temperature measurements that you started with and leaving the non-temperature series off to the side.
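
    To make the empirical test concrete, here is a toy sketch of the sort of calibration-period demonstration that I have in mind (Python; every number is invented for illustration, and whether any real ENSO index helps this much is exactly the question):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 150
        global_t = rng.normal(size=n)                         # "true" global anomaly
        stations = global_t + 2.0 * rng.normal(size=(18, n))  # 18 noisy thermometers
        enso = 0.8 * global_t + 0.4 * rng.normal(size=n)      # index tied to global T
        est_plain = stations.mean(axis=0)                     # plain station average
        X = np.column_stack([est_plain, enso])
        beta, *_ = np.linalg.lstsq(X[:75], global_t[:75], rcond=None)  # calibrate on half
        est_combo = X[75:] @ beta                             # verify on the other half
        print(np.sqrt(np.mean((est_plain[75:] - global_t[75:]) ** 2)))  # ~0.47
        print(np.sqrt(np.mean((est_combo - global_t[75:]) ** 2)))       # ~0.35

    In this made-up world the ENSO covariate earns its keep in the verification period; with a weaker index or more stations it would not, and that is precisely what would have to be demonstrated rather than assumed.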

  133. Posted Nov 21, 2007 at 11:24 AM | Permalink

    Bernie says (#55)

    I get the possibility of teleconnection or remote T calibration, but what teleconnection seems to require is a specified physics-based model as to why (a) the local calibration does not work and (b) why a remote calibration does work – otherwise we are into glorified and, even worse, teleological cherry picking.

    So, a request to Julien – references that explain and demonstrate remote calibration as a methodologically sound approach – rather than raw positivism.

    Ahhh, how nice it is to talk with reasonable people !
    I will do my very best to explain this convincingly in my next article. Meanwhile, i cannot be 100% sure that it hasn’t been done elsewhere in the literature.

    For now, 2 cents :
    The basic reasoning is 2 steps removed from a local calibration. If you believe that :

    1) geological proxies (like tree-ring widths in the Sierra Nevada) are one (noisy) measure of local climate (chiefly, temperature and precipitation)
    and
    2) that climate in, say, California, is affected by conditions in the Tropical Pacific (which few californians would care to dispute after the 97/98 El Niño).
    3) you have a physical theory for why this is so (Rossby waves and mid-latitude wave/mean-flow interactions)

    then you have, arguably, a sound theoretical basis for a remote calibration of tropical SSTs from tree-ring widths in the Sierra Nevada. Does this answer your question ?

    Because the signal tends to be diluted by distance at long ranges, you need many locations to do it – but it is doable with a decent signal/noise ratio. Some people here at GaTech have a paper coming up on this, but i highly doubt they’d want to join this fray – the downside of CA’s rhetorics of intimidation. But it has many upsides, as you guys continually prove.
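
    For concreteness, a toy sketch of that pooling (Python; the loadings and noise levels are invented, and in practice the weights would themselves have to be estimated in a calibration period, which costs degrees of freedom):

        import numpy as np

        rng = np.random.default_rng(5)
        n, sites = 120, 40
        sst = rng.normal(size=n)                          # remote tropical index
        loading = rng.uniform(0.1, 0.4, size=(sites, 1))  # weak, diluted influence
        rings = loading * sst + rng.normal(size=(sites, n))  # faint signal + local noise
        recon = (loading * rings).sum(axis=0) / (loading ** 2).sum()  # pooled estimate
        print(np.corrcoef(recon, sst)[0, 1])              # ~0.85: many weak signals add up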

    I now have a moral obligation to provide you with a more detailed explanation of the physical theory of teleconnections. Working on it. Please check my blog in a few days.

    JEG

    Steve: You are free to do what you want, but please note that neither I nor bender nor Jean S nor UC have asked you to provide a more detailed explanation of the physical theory of teleconnections and do not view you as having any moral or other obligation on this topic. What we asked you to do is to obtain an explanation of MBH99 confidence interval calculation. I would much prefer that you spend your scarce time on this topic, which pertains immediately to matters at hand. As to teleconnections, speaking personally, I studied algebraic topology at university and have an excellent intuitive understanding of constraints on geometrical patterns on a sphere. I’ll read your post, but view it as entirely tangential to any issues raised here by me.

  134. Posted Nov 21, 2007 at 11:39 AM | Permalink

    Dear Steve,

    I think that the very existence of the influence of ENSO on global temperature is well-demonstrated and arguably pretty well quantified. For example, if you believe that the nice-looking reconstruction of temperature anomaly by Svensmark and Friis-Christensen

    http://motls.blogspot.com/2007/10/svensmark-and-friis-christensen-reply.html

    is more than a coincidence, it is useful to know that their explanation relies on a 0.14K/decade trend, galactic cosmic rays, big volcano discounts, and compensations for ENSO. So I would say that the coefficient encoding how much ENSO influences the global mean temperature is known.

    In this sense, all papers that mainly care about “dangerous” and “unnatural” and “man-made” contributions to climate change shouldn’t look at the global mean temperature but at the global mean temperature with the ENSO effect removed. That would eliminate some wiggles. ENSO is a part of the story that is moderately understood. Also, I think that the coefficient for how ENSO influences temperatures should ideally be calculated from all available data in the world and used universally throughout all reconstructions, instead of individual reconstructions dynamically calculating this coefficient from their often incomplete datasets.
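
    As a toy illustration of such a universal correction (Python; the trend and the 0.1 K-per-unit coefficient are invented, and a real analysis would use a lagged, physically motivated index):

        import numpy as np

        rng = np.random.default_rng(6)
        n = 60                                           # years
        trend = 0.014 * np.arange(n)                     # a 0.14 K/decade trend
        enso = rng.normal(size=n)                        # stand-in ENSO index
        temp = trend + 0.1 * enso + 0.05 * rng.normal(size=n)
        X = np.column_stack([enso, np.arange(n), np.ones(n)])
        coef, *_ = np.linalg.lstsq(X, temp, rcond=None)  # fit index + trend + intercept
        adjusted = temp - coef[0] * enso                 # ENSO wiggles subtracted
        print(round(float(coef[0]), 3))                  # recovers the ~0.1 coefficient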

    Concerning your question, I don’t think that ENSO can improve your temperature reconstruction; ENSO can only improve the theoretical explanation of the reconstructed trends (especially if you defined ENSO indicators as purely relative numbers, i.e. “differences” of temperatures). But what one must be careful about is whether a calculated reconstructed temperature is the real “straightforward” global mean or the global mean with the conventionally expected effect of ENSO removed.

    It is also a somewhat philosophical and subtle question which of these two temperatures is one that should control the intensity of the global warming hysteria. It is clear what an alarmist would say right now: the corrected mean temperature with ENSO removed should control the hysteria because we just have a La Nina. So it may look like it is cold but the mean temperature minus the ENSO correction is much higher, bringing us closer to the doom. Needless to say, during El Nino, they choose the opposite strategy and tend to forget about ENSO and derive the hysteria purely from the naive global mean.

    I am sure that a mature and honest scientist like yourself doesn’t make these biased things. Nevertheless, it is still a question which of these two reconstructions should be more interesting and which of them should occur in papers. As a reductionist, I would tend to prefer the corrected temperature from which a maximum number of more or less understood influences were removed. That would include not only ENSO but also possibly PDO, volcanos, and maybe even the cosmic rays. With these subtractions, one could get simpler curves, after all, that would look more meaningful than a piece of spaghetti.

    Best
    Lubos

  135. Kenneth Fritsch
    Posted Nov 21, 2007 at 11:46 AM | Permalink

    Re: #121

    The concept is considered ’standard knowledge’ in climatology since the 1980’s.
    Do you ever expect to see a proof of the central limit theorem in every statistical article that uses the normal distribution ? I would venture to say ‘no’. Does it mean i should arrogantly confront the editors of, say, the Journal of the Royal Statistical Society for ‘obfuscating’ the issue ? No, it means i should humbly go do my homework. I certainly have a lot to learn in statistics : i think i’ve been pretty humble about it.

    I’m sure you’ve heard more than enough times the phrase “standing on the shoulder of giants”

    This is all very confusing to me, JEG, as to when one needs to explain oneself in more detail and when one can simply declare that the questioner should go look at the literature.

    I judge that, if you could fulfill my previous request to provide an example of a teleconnection used to explain a temperature correlation in a linked paper and detail how the connection was (or was not) made a priori, I would feel less like your comments were simply being used as a put-off. Surely the concept of teleconnections is not what is being generally queried here, but instead the application of that concept specifically to a temperature proxy/reconstruction and, more specifically, how and when the teleconnection is applied – even though you may have to step down from those large shoulders to do it.

  136. Posted Nov 21, 2007 at 11:59 AM | Permalink

    Re # 94.

    Steve,
    i agree with a lot of what you said.

    JEG studiously avoids the consideration of precedents – something that is very important in legal decisions where judgement is also involved. If journals have consistently approved articles in a field that do not meet the standards that JEG and I aspire to, at what point would we as reviewers have the right to unilaterally raise the hurdles for the entire field? In such circumstances, I think that a reviewer can fairly do this only to articles that he does not oppose. So for example, if JEG were reviewing an article by Mann or Ammann and he wanted to use that occasion to draw a line in the sand and get them to meet the above standards, no one would argue about it. But if he’s doing it to an article adverse in interest and his new righteousness has the effect of suppressing an article that shows an elevated MWP while otherwise being indistinguishable from something by Hegerl or Esper etc, then it’s probably not fair.

    I fully agree. Do you have any evidence that i have been more lax with other work ? Where are the precedents you are judging me on ? May i see the precedents that ornament your long and flawless public career ? You can see that this is going nowhere but into a black hole of ad-hominem attacks, so let’s stop here, shall we ?

    My (limited) experience with publishing has been very positive. On the only two papers that i have published so far, each reviewer has been incredibly competent, critical, insightful and constructive.
    It led to drastic improvements in the manuscript and left me with a feeling of deep respect for peer-review.

    I agree with you that providing data and code should be a sine qua non condition of submission. For the rest, it remains to be seen whether econometricians or statisticians are the guardians of a wonderful land of ironclad publication standards. If you have any data supporting your claim that climate reviews are vastly more biased than in other fields, i’d love to see them. Drawing a network of Mann-connectedness is cute : would it be any different in any other field of the same size as ours ? In particular, are Bayesians and Frequentists devoid of bias with each other ? I won’t shy away from the hard truth if you demonstrate it.

    (I suspect the scale issue is 90% of the answer.)

    all right ; last post of the day too. I’m sorry i can’t reply to everyone’s satisfaction. I can’t make CA my full-time job. I’m not blowing people off ; the more thoughtful posts (bender) often demand time for thoughtful answers. As i said before, you are perfectly free to dismiss all my words if you so choose (which you’d do anyway…)

    Steve: You should really chill out some of your snippy remarks, which spoil some of these efforts. I’ve treated your comments quite seriously and spent time on them, so why would you say: “you are perfectly free to dismiss all my words if you so choose (which you’d do anyway…)”? It sounds petulant, when your presentation should be upbeat. Your wandering off into rhetoric is also petulant. “Guardians of a wonderful land of ironclad publication standards” – the reason for the development of these standards in econometrics was precisely the same problem as is experienced in climate science. I mention the precedent not to be invidious but to dispel arguments that such a standard is impossible, just as collecting bristlecones proves that it is possible to update the proxies. Why would you interject an allegation that I’ve claimed that “climate reviews are vastly more biased than any other field”? I’ve never made that claim. I probably have about the same amount of experience with academic review as you do and would not generalize from that limited experience to other academic disciplines. However, as compared to the due diligence in business where I do have experience, I found that the due diligence in journal climate science was very casual, but that may be true in other disciplines as well. I had nothing to do with any Mann-connectedness drawing and thought that the concept was poorly conceived. If I were looking for connections, I would begin with Bradley and Jones rather than Mann and I would have focused on the connectedness of the supposedly “independent” studies – which might be relevant.

  137. David Ermer
    Posted Nov 21, 2007 at 12:03 PM | Permalink

    RE 134

    Let me know if I understand the argument correctly:

    B is correlated to A (teleconnection between ? and local temperature)
    C is not correlated to B (local tree growth and local temperature)
    therefore:
    C is correlated to A (2nd teleconnection unrelated to 1st teleconnection?)

    What exactly is the physical causal connection between the global variable A and the local rate of tree growth, why does it overwhelm the local variable B, and why does this provide information about the global temperature? Without this clarification, invoking teleconnections seems to be nothing more than hand waving.
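    To see why this syllogism carries no force on its own, here is a minimal numpy sketch (all numbers invented; nothing here is anyone’s actual proxy data). Pairwise correlations do not chain: with corr(A,B) high and corr(B,C) near zero, corr(A,C) can be essentially zero or quite substantial, depending entirely on structure the correlations themselves cannot see – which is why a physical mechanism has to be supplied.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100000
        A = rng.normal(size=n)                 # "global" variable (e.g. a remote index)

        # Case 1: local temperature B tracks A; tree growth C is unrelated noise.
        B1 = A + 0.5 * rng.normal(size=n)
        C1 = rng.normal(size=n)

        # Case 2: C is constructed so that corr(B,C) ~ 0 yet corr(A,C) is substantial.
        zb = rng.normal(size=n)
        B2 = A + 0.5 * zb
        C2 = A - 2.0 * zb + 0.5 * rng.normal(size=n)   # cov(B2, C2) = 1 - 1 = 0

        r = lambda x, y: np.corrcoef(x, y)[0, 1]
        for tag, B, C in (("case 1", B1, C1), ("case 2", B2, C2)):
            print(tag, "corr(A,B)=%.2f corr(B,C)=%.2f corr(A,C)=%.2f"
                  % (r(A, B), r(B, C), r(A, C)))

    Both cases satisfy the first two premises; they differ only in machinery that the pairwise correlations cannot distinguish.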

  138. Susann
    Posted Nov 21, 2007 at 12:22 PM | Permalink

    For teleconnections to the tropical Pacific (which is what I know about), the best reference that springs to my mind is:

    Trenberth, K. E., G. W. Branstator, D. Karoly, A. Kumar, N.-C. Lau, and C. Ropelewski (1998), Progress during TOGA in understanding and modeling global teleconnections associated with tropical sea surface temperatures, J. Geophys. Res., 103, 14,291–14,324, doi:10.1029/97JC01444.

    JGR is an expensive journal, though, and K. Trenberth did not make it available through his website (unlike Mann). But perhaps your home institution’s library has a subscription? Sorry I can’t help more: if I start sending articles to everyone it may open a big can of worms. But requesting the article from the author usually works.

    On the other hand, it may be a little too technical for your taste.

    Thank you for the reference — I’ll do my best to sort through it and find what I can myself. I have very generous access to journals through work and school, so we shall see if I can access the article.

  139. Posted Nov 21, 2007 at 12:28 PM | Permalink

    Steve:

    A habit that I’ve noticed among visitors to this blog who, like JEG here, are engaged in a debate is the tendency to be more responsive to the perhaps less-focused observations of occasional posters.

    Yes, I’ve noticed that and consequently I will no longer ask questions of informed people who visit and are generally dismissive of this site and its work. JEG was asked many direct and pointed questions over the past few days. He frequently chose to ignore what appeared to be the most difficult ones. To be fair, given his isolated status, he had more questions thrown at him than he could conceivably answer without having to spend most of his time here. However, he dodged the tough ones, many of them posed by you and Bender.

    I hope that JEG continues to regularly visit this site and that the unschooled (moi) or partially-schooled in these matters allow those with the background and experience on these boards to engage him. Much good could come of it.

  140. Steve McIntyre
    Posted Nov 21, 2007 at 12:43 PM | Permalink

    #140. The pattern is odd. It would be like me going over to RC (hypothesizing non-censorship) and, after obtaining a response from Gavin, ignoring it and choosing to engage with Lynn V and then thinking that I’d put in a good day’s work.

  141. boris
    Posted Nov 21, 2007 at 1:06 PM | Permalink

    (Policy vs science) I have no problem with linkage of the sort:

    Precip at location X is affected by ([inv] proportional to) temp at location Y;
    Proxy at location X reflects precip at location X but not temp at location X;
    Therefore proxy at location X reflects temp at location Y.

    That’s fine for science. I just draw a distinction between science and policy. If the indirect linkage is unknown or highly speculative then it might not be appropriate for making policy.

  142. bender
    Posted Nov 21, 2007 at 1:55 PM | Permalink

    If the indirect linkage is unknown or highly speculative then it might not be appropriate for making policy.

    Yes, and worse – it puts the policy-maker’s credibility at risk. That is precisely why it is a huge mistake to ignore hard scientific uncertainty. The careful scientist protects the policy-maker by carefully considering the probability that he is wrong. Then, if so desired, a “precautionary principle” can be invoked by the policy-maker. But the policy-maker must know that unless people are truly alarmed, expensive precautions will be a very tough sell.

  143. MPaul
    Posted Nov 21, 2007 at 2:16 PM | Permalink

    So if I’m understanding JEG, teleconnections (low frequency, large scale climate patterns) are a significant assignable cause of variation that can be separated through statistical techniques from random variation in a time series. From that perspective, it would seem reasonable to use remote calibration if these trends are truly global (and I suppose uniformly global(??)) and non-random. But given (a) that they are low frequency and (b) that they are highly attenuated, I would guess that you would need a huge number of samples to coax out such a signal and get to any reasonable level of confidence. I’m not an expert – am I off base here?
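    To put rough numbers on that intuition: a common approximation for correlating two autocorrelated series (e.g. Bretherton et al. 1999) is N_eff = N(1 − r1·r2)/(1 + r1·r2), where r1 and r2 are the lag-1 autocorrelations. The lower the frequency of the signal, the fewer independent observations a record of fixed length contains. A minimal sketch, with invented numbers:

        import numpy as np

        def n_eff(n, r1, r2):
            # effective sample size for correlating two AR(1) series
            return n * (1 - r1 * r2) / (1 + r1 * r2)

        def r_needed(ne, z=1.96):
            # |r| required for nominal 5% significance (Fisher z approximation)
            return np.tanh(z / np.sqrt(ne - 3))

        n = 120  # say, 120 years of overlap with the instrumental record
        for rho in (0.0, 0.5, 0.8, 0.9):   # higher persistence = lower frequency
            ne = n_eff(n, rho, rho)
            print("lag-1 autocorr %.1f -> N_eff ~ %5.1f, need |r| > %.2f"
                  % (rho, ne, r_needed(ne)))

    At a lag-1 autocorrelation of 0.9, 120 years behave like roughly a dozen independent points, and only correlations above about 0.56 clear nominal significance – before any attenuation of the remote signal is even considered.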

  144. beng
    Posted Nov 21, 2007 at 2:18 PM | Permalink

    The idea of “teleconnections” might be applicable to a sediment sample from, say, deposits in the Gulf of Mexico just off the Mississippi delta, to sites where the sediments originated – the Mississippi watershed. Another example: river water levels “teleconnect” upstream to precipitation in their watershed.

    Stretching that idea to trees is absurd.

  145. Peter D. Tillman
    Posted Nov 21, 2007 at 3:15 PM | Permalink

    RE: JEG, #134

    Steve said:

    You are free to do what you want, but please note that neither I nor bender nor Jean S nor UC have asked you to provide a more detailed explanation of the physical theory of teleconnections… What we asked you to do is to obtain an explanation of MBH99 confidence interval calculation. I would much prefer that you spend your scarce time on this topic, which pertains immediately to matters at hand…

    JEG is a postdoc, and Mann is a power in JEG’s chosen field. Moreover, it appears that Mann considers McIntyre his enemy. Is it reasonable to expect JEG to antagonize Mann for CA’s benefit?

    JEG has done prior work on teleconnections and has some interesting things to say. Perhaps this topic needs its own thread, or is better conducted on his own blog. But I don’t think browbeating him about the Hockey Team’s failings is constructive.

    I’ve seen several distasteful “pile-ons” for previous outside-expert visitors to this (otherwise admirable) blog. Steve, it’s your blog, but I think it’s important to make people like JEG feel welcome here.

    Cheers — Pete Tillman

    Steve: I take the point. And I don’t really expect JEG to beard Mann about his method. And I welcome his contributions. But I’m not really interested in a first-year condescending essay on teleconnections as though that were a response to any issues that I had raised. That’s why I was a little curt on the point.

  146. Kenneth Fritsch
    Posted Nov 21, 2007 at 3:21 PM | Permalink

    In #134 JEG said:

    For now, 2 cents:
    The basic reasoning is 2 steps removed from a local calibration. If you believe that:

    1) geological proxies (like tree-ring widths in the Sierra Nevada) are one (noisy) measure of local climate (chiefly, temperature and precipitation)
    and
    2) that climate in, say, California, is affected by conditions in the Tropical Pacific (which few Californians would care to dispute after the 97/98 El Niño).
    3) you have a physical theory for why this is so (Rossby waves and mid-latitude wave/mean-flow interactions)

    then you have, arguably, a sound theoretical basis for a remote calibration of tropical SSTs from tree-ring widths in the Sierra Nevada. Does this answer your question?

    JEG, here is where the confusion leaks in from your example/explanation of teleconnections, in the context of a proxy reacting/correlating to a remote temperature and not correlating with a local one. If trees in CA react to a noisy signal of the local CA climate, and that climate is affected (through teleconnections) by climate conditions in the Tropical Pacific, then, whether that remote effect totally dominates CA climate or merely affects it significantly, how does tree ring growth filter out local temperatures and react more strongly to remote temperatures?

    My only way of intuitively rationalizing your explanation would be for you to identify some frequency difference in tree ring responses to local and remote climates that makes the (lower-frequency) remote climate signal easier to extract, and to show how one avoids the increased chance of making a spurious correlation. Otherwise your explanation falls flat in my view.
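    The objection can be made quantitative. Under the simplest reading of JEG’s chain (remote index forces local climate, local climate forces ring width, with independent noise at each stage), the remote correlation is the product of the two links and so can never exceed the local one. A minimal sketch with invented coefficients:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100000
        remote = rng.normal(size=n)                  # e.g. a tropical Pacific SST index
        local = 0.7 * remote + rng.normal(size=n)    # teleconnected local climate
        rings = 0.6 * local + rng.normal(size=n)     # noisy biological response

        r = lambda x, y: np.corrcoef(x, y)[0, 1]
        r_loc, r_tel = r(rings, local), r(local, remote)
        print("corr(rings, local)  = %.2f" % r_loc)
        print("corr(local, remote) = %.2f" % r_tel)
        print("corr(rings, remote) = %.2f ~= product %.2f" % (r(rings, remote), r_loc * r_tel))

    So if a chronology correlates better with remote than with local temperature, this simple chain cannot be the whole story: either a second channel (e.g. precipitation driven by the remote index) or chance selection is doing the work.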

  147. steven mosher
    Posted Nov 21, 2007 at 3:46 PM | Permalink

    RE 146. Pete, I agree there is nothing to be gained or proved in the scientific area by imposing this litmus test on JEG. It’s a short-term rhetorical gotcha.

    Make the point and move on. Improve Loehle’s reconstruction.

    You will not defeat a zombie theory (AGW) by squeezing a kid’s head in a logical vice. Won’t happen. It’s stupid to try. Score blog points, have a beer, and get back to the science.

  148. steven mosher
    Posted Nov 21, 2007 at 3:59 PM | Permalink

    RE 134. JEG, I imagine when the California meteorological mafia at CA have a read of your stuff we might have a fun thread, or not.

    Steve SADLOV? Are you on the planet today? Or diving in cold water without proper head gear?

  149. Steve McIntyre
    Posted Nov 21, 2007 at 4:09 PM | Permalink

    JEG says:

    The basic reasoning is 2 steps removed from a local calibration. If you believe that:

    1) geological proxies (like tree-ring widths in the Sierra Nevada) are one (noisy) measure of local climate (chiefly, temperature and precipitation)
    and
    2) that climate in, say, California, is affected by conditions in the Tropical Pacific (which few Californians would care to dispute after the 97/98 El Niño).
    3) you have a physical theory for why this is so (Rossby waves and mid-latitude wave/mean-flow interactions)

    then you have, arguably, a sound theoretical basis for a remote calibration of tropical SSTs from tree-ring widths in the Sierra Nevada. Does this answer your question?

    One of the issues then becomes how much weight can you put on some of the older tree ring chronologies. For example, Ababneh’s work, as recently noted, does not confirm Graybill’s chronology at Sheep Mountain. Repeated below:


    Figure 1. Sheep Mountain (Bristlecone) Chronologies. Black – Ababneh 2006; red – Graybill 1987.

    The failure of Hughes to ensure that this data is archived and published promptly should be a serious concern to JEG, who’s spending a lot of time and energy on this topic and is entitled to have access to up-to-date data.

    JEG, as a word of advice, I’d be very wary of the Graybill chronologies. There’s undue reliance on them. The big difference between Graybill and Ababneh should have been investigated by Tucson with a detailed report, rather than being ignored. Disgraceful. And even if your covariance methods work better than I think they do, they are readily compromised by this sort of bad data.

  150. Posted Nov 21, 2007 at 4:27 PM | Permalink

    I think that comes from expert arrogance. Those of us who are not in the main stream commentators group are prone to ask questions that can be sneered at, which makes us easy targets. When the outside expert comes up against you or bender, they better have their act together because you know as much or more than they do and they know they’ll be embarrassed if they try that with you.

    It does strike me that, from what I understand of what JEG is saying about teleconnections, it would run into trouble if the frequency of the local signal were of the same order as that of the remote signal, or if you had one or more signals impinging on the same proxy which could be either global (say ENSO and solar variations) or local (temperature and soil conditions).

  151. aurbo
    Posted Nov 21, 2007 at 6:30 PM | Permalink

    I have had some experience in the use of teleconnections which goes back to about the late 1960s. The original work my colleagues and I did involved investigating correlations between ENSO conditions and coincident, or more often subsequent, weather conditions elsewhere on the planet. When we found high levels of correlation between ENSO and some remote weather events, we characterized the relationship as a teleconnection. The reason we pursued such studies was that ENSO conditions were slow-changing excursions in the Tropical Pacific SSTs and certain surface pressure relationships (originally the SLP difference between Tahiti and Darwin AUS) that, due to teleconnections, appeared to bias…at least statistically…weather conditions elsewhere for an extended period of time, mostly temperatures and precipitation.

    The value of teleconnections was that such biases would generally persist for several months, sometimes more than a year, and allowed us to show some skill in the holy grail of weather prediction…namely long-range forecasting. (Skill can be defined as a prediction that shows improvement over both climatology and persistence.) Considerable experience was developed over time and the relationships served us well…at least early on. Then in the 1975-1977 time frame, the teleconnection statistics changed and even reversed for some parts of the world, and our L/R forecasts busted. It didn’t take us long to figure out that something profound had occurred in the teleconnections associated with ENSO during that time period. Many years later our work led us to believe that the reason for this major climate shift was related to the PDO (Pacific Decadal Oscillation), sometimes described for shorter periods as the PNA, which we viewed as another, perhaps more dominant, teleconnecting system. Other semi-independent large-scale circulation features such as the NAO, AMO, AO, etc. were likely involved as well.

    Most of these teleconnecting features can be related to long-wave pressure patterns of the mid-tropospheric circulation, including Rossby waves, that exhibited a certain amount of stationarity as to the longitude at which their respective amplitudes peaked. We characterized such patterns…frequently hemispheric in scope…as regimes. A regime can be defined as a persistent pattern that may or may not be anomalous and is identified by its attributes (i.e. a warm regime, or a wet regime, etc.). The amplitude of such waves can vary considerably within and during a regime.

    A critical point is that such teleconnection patterns featuring persistence of L/W features exist under certain physical constraints. The principal one is that a hemispheric isopleth of pressure (or height) must meet up with itself as it circles the globe. It’s not an open system. Therefore, well-depicted L/W patterns fall into a limited family of wave numbers, usually between 1 and 7 (1 representing hemispheric zonal flow). The low-number ones are more likely to exhibit some persistence.

    Another consideration of teleconnections due to L/W patterns is that they often represent a quasi-zero-sum game. That is, where an individual pattern may be associated with a positive temperature excursion in one part of the world, it usually creates a negative excursion in some other area, which may or may not be coincident in time, mostly occurring within a matter of days or, at most, a week or two. There are some effects that don’t balance out in the short term, particularly where heat produced by temperature changes is being stored in sinks or absorbed by substances, most commonly H2O, that undergo a change of state where heat changes from sensible to latent or vice versa.

    The implications of this in regard to global climate should be obvious. That is, a snapshot of global surface temperature anomalies at any one time will be altered or reversed when regimes change, and the concept of a global mean temperature is fuzzy indeed.

  152. pjm
    Posted Nov 21, 2007 at 6:43 PM | Permalink

    JEG: I now have a moral obligation to provide you with a more detailed explanation of the physical theory of teleconnections. Working on it. Please check my blog in a few days.

    Thank you. I don’t have the background in this issue that some others have, and I would be interested.

  153. Susann
    Posted Nov 21, 2007 at 6:52 PM | Permalink

    Aurbo, thank you for that post. Fuzzy indeed. I feel confident that the phenomenon of teleconnection is there, but our ability to capture its effects on proxies in a meaningful way is the question.

  154. Pat Keating
    Posted Nov 21, 2007 at 7:11 PM | Permalink

    152 aurbo

    Thank you for your useful mini-essay on teleconnections you have known. Could you provide a link to a review article on such phenomena?

  155. Mike B
    Posted Nov 21, 2007 at 7:14 PM | Permalink

    I feel confident that the phenomenon of teleconnection is there

    What is the basis for your confidence? Just curious.

  156. aurbo
    Posted Nov 21, 2007 at 7:26 PM | Permalink

    Re #156:

    Thank you for your nice comment. I spent much of my career working in the private sector. As a group, essentially all of our work was done in-house and was of a proprietary nature. We were probably giving ourselves more credit for the value of our “cutting edge operational research” than it deserved. The bottom line, which in retrospect I regret, is that in the competitive environment we worked in, we did not publish original papers or review papers on the work we were doing that was ancillary to our principal product, which was operational weather forecasting.

    SAT

  157. aurbo
    Posted Nov 21, 2007 at 7:29 PM | Permalink

    Correction:

    My previous (157) should have been directed to #155, with additional thanks to #154.

  158. Criton
    Posted Nov 21, 2007 at 7:38 PM | Permalink

    The acceptance of, or reliance on, “teleconnections” to construct a global or hemispheric temperature reconstruction is problematic. Presumably your network of thermometers or proxies has sufficient geographic diversity to capture the ultimate statistic you are trying to reconstruct. Now with teleconnections, you have proxies or thermometers that not only capture local effects, but also apparently those occurring thousands of miles away. In theory, these more distant locations are represented by their own proxies and thermometers. Thus, if you are including proxy A from location Z to capture temperature change at location Z, but in reality are actually capturing temperature change at location F, you would be giving undue weight to the proxy that should be recording the temperature at location F and little or no weight to the actual temperature at location Z, unless the two signals can somehow be separated from the proxy. I haven’t seen this done. Ultimately, the teleconnections issue, as presented by Mann, and its effects on specific proxies, has no empirical foundation.
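    The weighting point is easy to demonstrate. In the hypothetical sketch below (invented numbers), the proxy sited at Z in fact records temperature at F; a naive two-site network average then tracks F almost perfectly, misses Z entirely, and only partially recovers the true two-site mean:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000
        temp_F = rng.normal(size=n)
        temp_Z = rng.normal(size=n)
        true_mean = 0.5 * (temp_F + temp_Z)

        proxy_at_F = temp_F + 0.3 * rng.normal(size=n)
        proxy_at_Z = temp_F + 0.3 * rng.normal(size=n)   # "teleconnected": records F, not Z

        network_mean = 0.5 * (proxy_at_F + proxy_at_Z)
        r = lambda x, y: np.corrcoef(x, y)[0, 1]
        print("corr(network mean, true mean) = %.2f" % r(network_mean, true_mean))
        print("corr(network mean, temp at F) = %.2f" % r(network_mean, temp_F))
        print("corr(network mean, temp at Z) = %.2f" % r(network_mean, temp_Z))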

  159. Susann
    Posted Nov 21, 2007 at 7:43 PM | Permalink

    What is the basis for your confidence? Just curious.

    Well, as a former student of ecology, the idea of interconnections between different parts of an ecosystem, and how changes in one part affect others, is not new to me. Granted, ENSO’s effect on the temperature of the Sierra Nevada operates on a rather larger scale than the kind of ecosystems I studied (fescue grasslands), but the basic principle is the same.

  160. olram
    Posted Nov 21, 2007 at 7:52 PM | Permalink

    Sorry, but I am a newbie. Teleconnection through what, exactly? Time or space? Spatial teleconnection seems odd.

  161. aurbo
    Posted Nov 21, 2007 at 10:16 PM | Permalink

    Re #161:

    Teleconnections are principally spatial in nature. The simplest of these depend upon the character of the hemispheric circulation, which includes the planetary wind fields. The winds are driven by atmospheric pressure differentials which, for reasons cited in #152, operate under the constraints of the stability of the long-wave features that mark the global circulation. In a 4-wave configuration, a ridge at the Greenwich Meridian will show similar ridges at 90 degrees East, 180 degrees (the Dateline), and 90 degrees West. Weather associated with ridges will be similar to that observed under equivalent pressure anomalies at the other three L/W ridge positions, subject to differences induced by the various surface features (land, ocean, mountainous terrain, etc.). Thus, events upstream teleconnect with events downstream.

    As to the common El Niño/Southern Oscillation (ENSO) teleconnections, those associated with cold episodes (La Niñas) can be found here, and those with warm episodes (El Niños) are here.

  162. aurbo
    Posted Nov 21, 2007 at 10:37 PM | Permalink

    Correction to #162:

    To find the actual graphics of the aforementioned teleconnections, click on cold episodes for La Niñas and warm episodes for El Niños within the prior links, or else look here and here.

  163. Peter D. Tillman
    Posted Nov 21, 2007 at 11:01 PM | Permalink

    Re 154,155 teleconnections

    Here’s an illustrated primer on what aurbo was outlining:
    http://library.thinkquest.org/20901/teleconnections.htm

    For Rossby waves, see http://en.wikipedia.org/wiki/Rossby_wave

    Heh. I know just enough about this stuff to be dangerous… {G}

    Cheers — Pete Tillman

    “The trouble with predicting the future is that it is very hard.”
    — Yogi Berra

  164. Peter D. Tillman
    Posted Nov 22, 2007 at 1:25 AM | Permalink

    Steve said, #146:

    I take the point. And I don’t really expect JEG to beard Mann about his method. And I welcome his contributions. But I’m not really interested in a first-year condescending essay on teleconnections as though that were a response to any issues that I had raised. That’s why I was a little curt on the point.

    Yeah, he can be pretty obnoxious [1], but seems to know his stuff, and admits when he’s wrong. I hadn’t noticed this before (at http://thatstrangeweather.blogspot.com/):

    JEG, 11-18-07: “McIntyre is well-founded when he says that much of my criticisms would apply to some previous work, none of which have i ever pretended to defend.”

    Very interesting discussion — in fact this past week has seen a whole series of first-rate posts, by yourself, Loehle, JEG and a host of others. Thanks again for the good work — you’re having a real impact on CS.

    Cheers — Pete Tillman

    [1] –kinda reminds me of myself at his age… Ah, youth!

  165. Philip_B
    Posted Nov 22, 2007 at 3:15 AM | Permalink

    Another consideration of teleconnections due to L/W patterns is that they often represent a quasi-zero-sum game.

    That’s the main issue I have with teleconnection (apart from Mann using it to shore up the Hockey Stick, make the divergence problem go away, rescue broken dendro proxies, etc.).

    As a broad rule, when it is cold and wet in Perth, it is warm and dry in Melbourne and Sydney. Say I detect a trend toward more cold, wet weather in Perth: what does that tell me about Melbourne, given that I know the weather/climate is teleconnected between the two places? Does it tell me Melbourne is getting warmer and drier, or does it tell me Melbourne is getting colder and wetter? Well, teleconnection would support both results, depending on whether what you measured in Perth resulted from a regional weather pattern shift (decadal oscillation) or a global cooling trend. Teleconnection appears capable of predicting everything in climate and therefore predicts nothing.
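    The ambiguity can be stated as an identifiability problem: the Perth record alone cannot distinguish a see-saw circulation mode from a common trend, and the two imply opposite behaviour in Melbourne. A toy illustration with invented numbers:

        import numpy as np

        years = np.arange(50)
        trend = -0.02 * years                    # Perth cooling either way

        # Scenario 1: see-saw mode (Perth and Melbourne anti-correlated)
        perth_1, melbourne_1 = trend, -trend
        # Scenario 2: common trend (both cool together)
        perth_2, melbourne_2 = trend, trend

        print("Perth series identical:", np.allclose(perth_1, perth_2))
        print("Melbourne trend, scenario 1: %+.2f per year" % np.polyfit(years, melbourne_1, 1)[0])
        print("Melbourne trend, scenario 2: %+.2f per year" % np.polyfit(years, melbourne_2, 1)[0])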

  166. Dave Dardinger
    Posted Nov 22, 2007 at 6:12 AM | Permalink

    The trouble with the sort of teleconnections we’re discussing here is that there’s no indication that they’re not going to be reflected in the local weather/climate. It’s fine to say that the weather in, say, a California mountain range is reflective of SSTs in the mid-Pacific, but why wouldn’t the average LW portion of the local weather follow this same SST if the tree rings of the area do? To just call such a divergence “teleconnections” doesn’t get us anywhere.

    Now we’ve discussed this before, and it’s been generally agreed that things like precipitation patterns can allow there to be a disconnect between local and distant weather/climate when it comes to tree ring growth, but why not just come out and say so and try to show that this is the correct explanation? But if so, then you have to relate a given place’s LW climate signal to the place that teleconnects with it, not necessarily to a global temperature measure. And this would also imply that proxies like those Loehle used would likely be more indicative of global temperature than tree rings, since they wouldn’t have to be filtered through the teleconnection. IOW, the Loehle multiproxy reconstruction is likely a more accurate reading of long-term global climate than tree rings. Is this what JEG or Mann want to admit?

    Further, why would global temperature readings be the thing to measure and compare with local temperatures if teleconnections are important? Wouldn’t you want to construct a metric which combines temperature and precipitation, and possibly things like cloud cover as well? Now MBH9x had the opportunity to do something like that, but blew it, IMO. The thing is that the desired global climate would be reflected locally as a combination of whatever things can teleconnect. So if we limit ourselves to temperature and precipitation, we have to have a theory of how they combine so that we can see if, within any given grid cell, they do indeed reflect the global values. But while an instrumental record of gridded global surface temperatures can be constructed, I don’t believe that a similar value has been worked out for precipitation, or at any rate was used in the MPRs. Therefore the calibration process is presumably flawed and, as I said at the beginning, I don’t see how simply calling out “teleconnection” gets us anywhere.

  167. pk
    Posted Nov 22, 2007 at 8:04 AM | Permalink

    #150, Steve

    Your chart showing Graybill and Ababneh is not comparing apples to apples. Graybill’s sample was mainly strip bark trees, while Ababneh’s is 25 strip bark and 25 whole bark trees. When you separate out Ababneh’s strip bark trees, they look very similar to Graybill’s.

  168. Steve McIntyre
    Posted Nov 22, 2007 at 8:21 AM | Permalink

    #169. I disagree entirely. Given that the NAS panel said that strip bark trees should be avoided, the most relevant comparison for an end-user such as JEG would be between the Ababneh whole-bark chronology and the Graybill chronology – which would yield an even bigger difference. There is no graphic version of the 1000-year Ababneh whole-bark chronology, so I used what was available. Also – and this is a different point – my take on the Ababneh strip bark is that the 20th century spurt is not as pronounced as in the Graybill. Maybe Hans Erren will do some more digitizing and we can confirm this.

    The main point is that the Ababneh chronology should supersede the Graybill chronology. Perhaps the Ababneh whole-bark chronology should be the superseding chronology, but in its absence, one can start estimating the knock-on effect by using the Ababneh mixed chronology.

  169. pk
    Posted Nov 22, 2007 at 8:33 AM | Permalink

    #170. I agree it’s the most relevant comparison, since NAS said strip barks should be avoided. My comment pertained more to Ababneh not discussing the difference between her series and Graybill’s. I did a graphical overlay of her strip bark vs Graybill’s and they appeared to be very similar.

    Steve:
    Even if they were similar, she should have done the comparison. It’s particularly puzzling because she actually does compare her series to an even older chronology by Lamarche, which is not as extreme as Graybill. (BTW I don’t know where the illustrated Lamarche chronology comes from, as it doesn’t match ca506.)

  170. ttfn
    Posted Nov 22, 2007 at 9:03 AM | Permalink

    The Fritts method: http://ams.allenpress.com/archive/1520-0450/10/5/pdf/i1520-0450-10-5-845.pdf

  171. Bernie
    Posted Nov 22, 2007 at 9:13 AM | Permalink

    I agree with Peter, Julien’s blog (at http://thatstrangeweather.blogspot.com/) is pretty interesting. Kudos to Dr. Curry for grabbing an interesting postdoc.

  172. Marine_Shale
    Posted Nov 22, 2007 at 5:39 PM | Permalink

    Re Susann and others interested in teleconnections between ENSO conditions and regional climate.

    People may find this research paper from Australia interesting. While it is fairly technical (and long), it does discuss the issues that both Mann and JEG are currently engaged with. Also, the fact that it talks mainly about Australia shouldn’t put people off, because the same principles apply to other areas of the world, including the United States. The authors mainly discuss the problems with modeling ENSO conditions because of the many non-linear teleconnections.

    I think that it may have implications regarding Mann’s (JEG’s?) selection of a “sweet spot”.
    Suppose you have what is, to your mind, a location that has an unusual but fortuitous linear relationship to ENSO conditions (perhaps some trees in the SW United States). You look at the tree rings from a thousand years ago, detect no signal for particularly warm conditions, teleconnect this back to global temperatures, and goodbye MWP.

    What if, though, all you are seeing (as Steve and numerous other posters have pointed out) is a signal indicating not mild temperature but lack of water?

    This observation from the research paper:

    “An additional reason for non-linearity in some rainfall teleconnections might be that atmospheric circulation anomalies that lead to rainfall declines cannot reduce rainfall below zero. So at some point the rainfall response in some locations will be capped and so further intensification of the relevant ENSO atmospheric circulation teleconnection will not lead to any further decline in rainfall.”

    In other words, the temperature associated with the ENSO conditions can continue to rise substantially with no signal from your proxy being possible.

    Your sweet spot isn’t so sweet anymore.
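    The capped-rainfall mechanism quoted above is, in effect, a censored response, which a toy calculation (invented units) makes plain:

        import numpy as np

        enso = np.linspace(0, 3, 7)              # ENSO forcing intensity (arbitrary units)
        rainfall = np.maximum(0.0, 1.0 - enso)   # rainfall declines with forcing, floored at zero
        proxy = 2.0 * rainfall                   # the proxy responds to rainfall only

        for f, p in zip(enso, proxy):
            print("forcing %.1f -> proxy %.1f" % (f, p))
        # Beyond forcing = 1.0 the proxy is flat at zero: warm extremes become invisible to it.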

  173. Marine_Shale
    Posted Nov 22, 2007 at 5:54 PM | Permalink

    Sorry, here is the link.

    Click to access RR113.pdf

  174. Alex Curylo
    Posted Nov 24, 2007 at 12:50 PM | Permalink

    [118]

    Maybe Loehle should develop a video game…

    You appear to be mostly jesting here, but I’ve actually been thinking for a while, lurking around here, that it’s an idea worth spending some time on. See, I’m an applications programmer, and a notably well-received part of my oeuvre has been the Mac ports of the Paradox Interactive EUII-engine historical simulation series. The historical “science” underlying their various games strikes me as being not completely unlike the current state of climate science as the various discussions here lead one to suspect it actually is: a model of individual nations’ AI is pretty much made up of whole cloth and poked and tweaked until it gives results which generally conform to the model developers’ expectations. The only real difference is that their black-box model of a poorly understood dynamic process has to produce results, over millions of replays, that conform to the users’ expectations of how history “ought to” develop properly enough to be enjoyable, rather than just producing a graph for a journal publication which supports the author’s intended result.

    And because of that difference, people spend an insane (Seriously. Just trust me here. You heard about World of Warcraft addiction? That’s _nothin’_ compared to the model wonks that sink their creative energy into tweaking Paradox games.) amount of time questioning those historical/economic models, tweaking their assumptions, constructing completely alternative models, etc. There’s literally thousands of people out there who seem to have no better thing to spend their lives on than making Paradox’s historical models more “correct”, on scales ranging from tactical deployment of German armour during World War II up to the European social pressures that created the Crusades.

    So where I’m going with this, in case you’re wondering: if a game was designed that was actually fun – at least in the sense that the Paradox games are “fun” – to play, and if it more or less accurately simulated the debatable value judgments that go into selecting data sets and creating models, and let you tweak those judgments in the same way that Paradox games keep their models in text files that anyone can edit; well, then, I think you have a good chance of motivating a massive army of bright people – to whom it would never, ever occur to read a single scholarly article or ClimateAudit thread on their own – to turn into experts of encyclopedic degree on the uncertainties of climate science. Every possible combination and weighting of every known data set would be tested for correlation, and those of you who actually do science would, hopefully, be provided with some rough guidance as to the results you could expect if you did a proper job of investigating a particular thesis.

    If anyone thinks this is an intriguing idea and has some suggestions for design, feel free to contact me, alex at alexcurylo dot com. You needn’t restrict your suggestions to individual-level implementation scale either, I can bring a game design to people who have financed entire studios from scratch on the strength of a good idea. Here in Vancouver, see, “sink high six/low seven figures into a good game idea, have it be a hit, then sell out immediately to EA” is a perfectly valid and not all that uncommon business strategy. The trick, of course, is the “have it be a hit” part…

  175. Pat Keating
    Posted Nov 24, 2007 at 1:17 PM | Permalink

    In 42 JEG states:

    Because the [d18O] isotopic signal is simultaneously affected by temperature and rainfall….

    Does this mean that the d18O “measurement” of past temperature from ice-cores is also confounded by rainfall?

  176. bender
    Posted Nov 24, 2007 at 8:45 PM | Permalink

    #176 That’s the implication.

  177. Pat Keating
    Posted Nov 24, 2007 at 11:04 PM | Permalink

    Do you know if the ice-core data has ever been questioned on that basis?

  178. kim
    Posted Nov 25, 2007 at 6:32 AM | Permalink

    Alex, ‘it will be a hit’ if we’re entering a cooling phase.
    ==================================

  179. kim
    Posted Nov 25, 2007 at 6:52 AM | Permalink

    All the world is divided into two classes: those who don’t know which way temperature is going, and those who don’t know they don’t know which way temperature is going.
    =============================================

  180. Paul
    Posted Nov 25, 2007 at 6:59 AM | Permalink

    Temperature, rainfall, CO2 fertilization, plus any other factors all cause annual relative tree ring growth.

    And here are all the TR proxy studies regressing tree rings against temp. Is there not an “identification problem” here?

    Just like a failing undergrad econometrician regressing output on prices, they will end up with garbage, even before you get to the nuances of endogeneity, stationarity, serial correlation, heteroskedasticity, etc.

    Don’t all these studies require, at a bare minimum, to be estimated under a multiple-equation approach (has anybody tried to implement a BVAR)?
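    The identification problem described here is classic omitted-variable bias, and a toy simulation (all coefficients invented) makes the danger concrete: generate ring widths from temperature and rainfall jointly, then “calibrate” on temperature alone.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        temp = rng.normal(size=n)
        rain = -0.6 * temp + rng.normal(size=n)          # warm years tend to be dry here
        rings = 0.5 * temp + 0.8 * rain + rng.normal(size=n)

        # Single-equation "calibration": regress rings on temperature only.
        b_uni = np.polyfit(temp, rings, 1)[0]

        # Joint fit recovers both drivers.
        X = np.column_stack([temp, rain, np.ones(n)])
        b_temp, b_rain, _ = np.linalg.lstsq(X, rings, rcond=None)[0]

        print("true temperature effect:  0.50")
        print("rings ~ temp alone:      %5.2f  (omitted rainfall wipes it out)" % b_uni)
        print("rings ~ temp + rain:     %5.2f, %4.2f" % (b_temp, b_rain))

    With these made-up numbers the univariate slope is essentially zero even though temperature genuinely drives growth, which is the sense in which single-equation calibration can return garbage before the time-series nuances even enter.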

  181. Marine_Shale
    Posted Nov 25, 2007 at 5:14 PM | Permalink

    aurbo made a comment in a previous post (152)

    The value of teleconnections was that such biases would generally persist for several months, sometimes more than a year, and allowed us to show some skill in the holy grail of weather prediction…namely long-range forecasting. (Skill can be defined as a prediction that shows improvement over both climatology and persistence.) Considerable experience was developed over time and the relationships served us well…at least early on. Then in the 1975-1977 time frame, the teleconnection statistics changed and even reversed for some parts of the world, and our L/R forecasts busted. It didn’t take us long to figure out that something profound had occurred in the teleconnections associated with ENSO during that time period. Many years later our work led us to believe that the reason for this major climate shift was related to the PDO (Pacific Decadal Oscillation), sometimes described for shorter periods as the PNA, which we viewed as another, perhaps more dominant, teleconnecting system. Other semi-independent large-scale circulation features such as the NAO, AMO, AO, etc. were likely involved as well.

    The “Holy Grail” indeed. This is what Mann (and apparently JEG) are after.

    JEG says:

    In “Emile-Geay and Verification r2 Statistics”, comment no. 49:

    The day will soon come when I publish my own multiproxy NINO3 reconstruction (with Drs Mann and Cobb, sorry), and I had long decided to simultaneously release all my code, and as much data as I can. CA, I promise, will be the first to know.

    The point of the research paper from Australia (linked in my previous post) was that, with all the work that has been done in Australia and internationally on ENSO, they are still unable to understand the complexities of ENSO events when relating them to the recorded weather data in Australian history, let alone model them and predict future weather or climate patterns with any skill.

    Mann, though, has now got to the point where he is suggesting that he has reconstructed an NH ENSO history for the last thousand years or more, and can even begin to detect the impact of AGW on various current ENSO indices (he must be a lot smarter than those silly Australian scientists). The problem, of course, with Mann’s ENSO reconstruction is that it is based (ultimately) on just a few trees (PC1).

    All his papers have been heading in the direction of the “Grail” for fifteen years (yes, I have read them all) and they are all based on his proxy and pseudoproxy networks that Steve M has so clearly invalidated.
    If the work of Craig Loehle and others is accepted, then it destroys not only Mann’s temperature reconstructions but every other bit of his fifteen years of work on ENSO reconstruction.
    It would also destroy his hope that he would be the one to come up with the greatest predictive model of all time, which could never be invalidated because any errors would be blamed on the curious effects of AGW on various ENSO factors.

    How can he ever agree that his data and methodology are flawed when the rest of his life’s work would be washed away by a tsunami of his own creation?

  182. Susann
    Posted Nov 25, 2007 at 5:33 PM | Permalink

    All his papers have been heading in the direction of the “Grail” for fifteen years (yes, I have read them all) and they are all based on his proxy and pseudoproxy networks that Steve M has so clearly invalidated.
    If the work of Craig Loehle and others is accepted, then it destroys not only Mann’s temperature reconstructions but every other bit of his fifteen years of work on ENSO reconstruction.
    It would also destroy his hope that he would be the one to come up with the greatest predictive model of all time, which could never be invalidated because any errors would be blamed on the curious effects of AGW on various ENSO factors.

    How can he ever agree that his data and methodology are flawed when the rest of his life’s work would be washed away by a tsunami of his own creation?

    It’s tempting to ponder personal motivation, and certainly I’ve wondered about that myself. However, I don’t think it does the reputation of this blog any good to have so much of a focus on personality, esp. Mann and other members of the “Team”. It takes things to the personal level in a way that detracts from the mandate of auditing and descends into attack and gloating.

  183. Michael Jankowski
    Posted Nov 25, 2007 at 5:52 PM | Permalink

    Re #165, he made that remark about other reconstructions – including Mann’s works – while on this site, too (although his terminology in that case was that he had “never endorsed” them).

    I am still having a hard time grasping how he can make such statements in one breath while discussing future plans to co-author a reconstruction paper with Mann in his next breath. Is that not the ultimate form of endorsement?

  184. steve mosher
    Posted Nov 25, 2007 at 6:00 PM | Permalink

    RE 183. Susann, for the most part we eschew motive hunting. Sometimes, however, abductive reasoning seems warranted.

  185. Susann
    Posted Nov 25, 2007 at 6:40 PM | Permalink

    Steve, when I first visited here, I was a bit taken aback by the post deletions and unthreading, but now I understand it and appreciate its necessity. Like it or not, this blog and others are now part of the public dialogue on climate change, and it reflects on people and sides. Even though I know there is no official “side” taken here, again, like it or not, it is perceived by many as being on the “skeptic” or “denialist” side. I understand the continued focus on Mann’s work and the MBH papers: it’s the posts that descend to the personal with Mann and JEG that I think should be avoided, because they make this venture look too much like a vendetta. Most of us – scientist and layperson alike – act in our own personal interest in some way, so I think we can assume that motive underlies much of our behavior, Mann and JEG – why, even Steve Mc – included. Pointing it out seems, well, self-evident. If a paper or argument has merit or has flaws, the personal motives of the authors really are irrelevant. You can be the most selfish and biased individual and write a sound research paper.

    I think I can guess at motives: I want explanations based on sound science.

    Steve: thank you for this. 90% of the deletions that I make are people venting too enthusiastically or going a bridge too far, or several bridges too far, in attributing motives or malice to others. It’s time-consuming and annoying for me to have to do this tidying. I’m sorry if it disrupts sequences, but maintaining sequences would be oppressive in terms of my time. Also, people who get cross at sequencing issues: please let off-topic posters know at the time. If people behave better, then the moving won’t be as much of an issue. I deal much more lightly with critics of this site than I do with supporters.

  186. bender
    Posted Nov 25, 2007 at 7:03 PM | Permalink

    vendetta?

    Not sure how anyone could see it that way. What is desired – what still has not been achieved – is accountability and transparency. CA keeps the pressure on individuals and groups in hope of changing climate science reporting practices. Some have changed. Many haven’t. The conflict is not personal, it is institutional.

    Steve: The desire of individual scientists to hoard data is understandable. What isn’t understandable is that NSF fails to implement high-level policies designed to prevent scientists from hoarding relevant climate data. The scientists annoy me, but I blame the institutions.

  187. steve mosher
    Posted Nov 25, 2007 at 7:13 PM | Permalink

    OK, this is OT. I don’t worry, since I am tangent man… (cue the music)

    Now I dissect you. OK?

    “Steve, when I first visited here, I was a bit taken aback by the post deletions and unthreading, but now I understand it and appreciate its necessity.”

    Funny, my reaction to having a post deleted was exactly the opposite. I wasn’t taken aback; I was THANKFUL my stupidness was deleted. When you think what you have to say is important or informed or influential, you are taken aback. Hmmm.

    “Like it or not, this blog and others are now part of the public dialogue on climate change, and it reflects on people and sides.”

    “It reflects.” What you mean to say is this: some people will judge your ideas by the ideas (however unrelated) of your followers. I’m well aware of the PR function. “It reflects” means that most people do not consider the merits of the case. People of color show up at my rally: “it reflects”. I get the code for irrational appeals. I used to teach rhetoric. Can’t tell, I know.

    “Even though I know there is no official “side” taken here, again, like it or not, it is perceived by many as being on the “skeptic” or “denialist” side.”

    Yes. Perceived by many. Makes it true, I suppose.

    “I understand the continued focus on Mann’s work and the MBH papers: it’s the posts that descend to the personal with Mann and JEG that I think should be avoided, because they make this venture look too much like a vendetta.”

    I’m not sure that would help. The rhetorical strategy of the other side is to metaphorically align you with holocaust deniers and to refuse any debate because the matter is settled. When that strategy is employed, then what? Be reasonable? Be reasonable with people who call you nazis when your family died in death camps?

    “Most of us – scientist and layperson alike – act in our own personal interest in some way, so I think we can assume that motive underlies much of our behavior, Mann and JEG – why, even Steve Mc – included.”

    You totally miss the mark here, absolutely. No one denies motive. The question is how you remove motive. How do you remove the motives of Mann? Not by whining about them. Not by pointing them out. You remove the MOTIVE BY REPLICATING THE WORK. I don’t have his motives. If he gives me his data and his methods and I can replicate his work, then his results are independent of motive. But Mann wouldn’t give his data and methods. And to date others follow his lead. Of course motive underlies our behavior. The point of the scientific method is to reduce this.

    “Pointing it out seems, well, self-evident. If a paper or argument has merit or has flaws, the personal motives of the authors really are irrelevant. You can be the most selfish and biased individual and write a sound research paper.”

    Yes, as I have said countless times, “Exxon paid me to say 2+2=4.”

    “I think I can guess at motives: I want explanations based on sound science.”

    As a student of Kuhn, you know exactly what is going on.

  188. Susann
    Posted Nov 25, 2007 at 7:21 PM | Permalink

    Bender, as an outsider, that is how “I” perceived some posts by some members, so while you may not see how that is possible, others do. From my perspective as a new visitor, it appeared as if some were out to get Mann, hoping to see him fall, fail, etc. While some people might have a legitimate reason to feel that way, it’s not good for that to appear to be what motivates people or the blog. Many posters here take a lot of this very personally, and it is apparent in some posts – as if it is Mann, rather than shoddy statistics/lack of transparency, that is the target. Just sayin’. YMMV.

  189. Marine_Shale
    Posted Nov 25, 2007 at 8:36 PM | Permalink

    Susann

    Re #182

    I take your point entirely. To reduce things merely to motivation can be viewed as just another form of ad hominem attack.
    I apologise for this.
    The science is the issue.

    If you ever do wade through this lengthy research paper:
    “Asymmetry in the Australian response to ENSO and the predictability of inter-decadal changes in ENSO teleconnections”

    Click to access RR113.pdf

    you could perhaps understand why I feel that the treatment of this topic by some scientists, and the conclusions they have drawn, lacks a bit of substance.

    I think perhaps it is time to go back to lurking to avoid further embarrassing myself.

    Cheers all,

    Marine_Shale

  190. Susann
    Posted Nov 25, 2007 at 9:15 PM | Permalink

    Marine_Shale, you have likely only said what other people were thinking; it’s just that perhaps it should have stayed there. I am no one here, so by all means listen to Steve McIntyre, not me.

  191. Marine_Shale
    Posted Nov 25, 2007 at 10:28 PM | Permalink

    Steve, susann and others,

    One last thing before I go.

    I did allow subjectivity to colour my previous remarks and for that I do apologise.

    A good friend of mine recently sold his 1000-acre cattle farm, started by his grandfather, here in rural Australia (for a knock-down price) because, in large part, he believed some “climate scientists” when they said there might never be substantial rain again in our area. We have been going through an extended drought in Australia (a long El Niño) and my paddocks have been pretty dry as well, and I had to offload all my sheep. Recently we have transitioned to a La Niña phase (fairly weak at this point), but some good rains have come and the pasture isn’t too bad now where I am.
    The businessman from Melbourne who bought my friend’s farm (and several others in the area) is apparently pretty happy.

    Most of the people that I know just want to be sure that the scary projections we are presented with are based on good science that then flows through to good policy.
    The argument is not merely an academic one, and to prematurely describe the science as “settled”, when patently it is not, will not do any of us much good.

    Thanks again Steve for all your work.

  192. bender
    Posted Nov 25, 2007 at 11:10 PM | Permalink

    Re #191
    Am I the only one who thinks ENSO, PDO, NAO, etc. are post hoc inventions, and that their ephemerality could greatly limit their potential utility in forecasting? Reading the literature, I get the sense that climatologists think there is this immutable pathway out there in the ocean that exhibits a somewhat trustworthy time-series behavior. Or that if the time-series behavior changes, it does so only in response to a forcing by yet a different construct – that equilibrium will return when the forcing abates. Always a new post hoc time-series construct in some mysterious new location whenever an a priori prediction fails. This kind of smells. JEG? Tim Ball? Is there no limit to the circulatory modes to be discovered/inferred through time?

    [Steve M, I realize that circulation is generally OT for CA. Snip if you must.]

  193. Marine_Shale
    Posted Nov 26, 2007 at 12:59 AM | Permalink

    Re 194

    No bender, you are not.

    That was what I was trying to get at:

    The point of the research paper from Australia (linked in my previous post) was that, with all the work that has been done in Australia and internationally on ENSO, they are still unable to understand the complexities of ENSO events when relating them to the recorded weather data in Australian history, let alone model them and predict future weather or climate patterns with any skill.

    Dr Mann is the one maintaining that he can both reconstruct and predict ENSO events.

    By the way, I have always enjoyed your posts.