Craig Loehle Reconstruction #2

Continuation of Craig Loehle Reconstruction


  1. Posted Nov 19, 2007 at 1:58 PM | Permalink

    Very promising paper and it should be scrutinized indeed. I observe that the main proxies are based on Sea Surface Temperature and meteoric d18O. Both have their own peculiarities.

    Apart from the various problems with the various SST methods (metal ion ratios, alkenone, TEX86), are sea surface temps proxies for global temperatures? Increased SST would increase evaporation, causing more local cloud cover and increased albedo, hence lower temperatures, and vice versa.

    Meteoric d18O at lower latitudes does not correlate well with temperature, probably due to monsoon variation. Apart from that, d18O also depends strongly on the seasonality of precipitation: if predominantly summer precipitation changes to predominantly winter precipitation, the resultant d18O accumulation will look very cold without any actual temperature change. Also, aridity decreases the temperature at which condensation takes place (lower dewpoint), delaying condensation to higher, cooler altitudes, which makes the precipitation look cooler as well. More elaboration here.

    Therefore it would require a lot of correlation testing and comparing several more proxies preferably from the same areas to confirm the skill.

    Apart from that I see that the dating of the Medieval Warm Period in Europe matches historic anecdotes quite nicely.

  2. braddles
    Posted Nov 19, 2007 at 4:58 PM | Permalink

    I think Loehle should be congratulated on this paper. For a non-specialist, its greatest value is that it presents real data without first using secret methods to torture it beyond recognition. At a technical level, the lack of statistical analysis may be seen by specialists as a negative, but at least we get to see an unalloyed, unmanipulated result. This is vastly more valid than the Hockey Stick, which was subjected to so many adjustments and manipulations that Steve still hasn’t got to the bottom of what went on.

    Yes, perhaps the Loehle paper should have avoided making conclusions (even though they are heavily qualified) about the late 20th century from this data. However, it seems to me that if it is true that the 11th century was half a degree or more warmer than the 19th, then it is perfectly reasonable to conclude, from other sources, that current temperatures are not unprecedented in the last 1000 years.

  3. steven mosher
    Posted Nov 19, 2007 at 7:11 PM | Permalink

    Susanne, don’t worry about speaking the word that shall not be spoken (cough cough… CO2… cough cough).
    St. Mac is tolerant to a degree, so if we cross the boundaries, we get tossed into the purgatory
    that is THE GREAT UNTHREADED, and then the mohel comes along and snips our stuff. No worries.

    Just don’t pen the great American novel in a post and expect it to survive. WHEW!

    Anywho. you wrote:

    “From a risk management perspective, it makes sense to have plans in place
    to deal with the identified real and potential threats.
    This means planning for both the low-probability high-impact threats
    as well as high-probability low impact threats.
    I don’t think the answers are in yet on what the current warming portends,
    but it would be foolish in the extreme not to plan for all possibilities, however remote they are.”

    Hence my comment about planning for all possibilities. You prolly said this reflexively. And I am a brat.
    But you knew that.

    Pascal’s wager. This particular argument of mine sends secular types into low orbit. I am
    agnostic but love the structure of the argument. We abide by a rule here not to discuss religion.
    The wager is not about religion per se, but rather about calculating odds in a game theory
    or risk assessment situation when the outcomes are really bad or really good.

    It’s a common argument in AGW circles to say the following: IT’S SAFE to assume the truth of AGW, because
    if we are WRONG, the harm is small. IT’S UNSAFE to deny the truth of AGW, because if we are wrong, the harm
    is large.

    Therefore assume AGW is true and act accordingly. Minimize risk.


    Replace “Truth of AGW” with “truth of religion X”

    Replace harm with “chance of burning in hell”

    This is the inverse of Pascal’s wager, but it shows the same point.

    crap.. we are headed to unthreaded… story of my tangents

  4. tetris
    Posted Nov 19, 2007 at 8:55 PM | Permalink

    Everything that has been posted around the Loehle paper has been both generally instructive and more importantly, very revealing.
    More in particular, JEG stepped into the fray with a very French “je vous emmerde” [for those not versed in the language of Voltaire, it translates as “F..Y”] attitude. I recognize it all too well; part of my roots are there. Pascal, Lavoisier, Pasteur and quite a few others would probably agree with me that it unfortunately doesn’t do proper science much good.
    I would hope that JEG realizes that he may have just encountered, on CA, a taste of what proper auditing/scientific scrutiny actually entails [and with all due respect, his departmental chair’s views on the matter notwithstanding, with which, as posted here, I disagree].
    Again, thanks to your politeness serving as a lever, this has once again demonstrated the double standard of what passes for Climate “Science”.

  5. Willis Eschenbach
    Posted Nov 19, 2007 at 11:09 PM | Permalink

    JEG, thank you for all of your contributions to this discussion. It is very refreshing to have someone actually stand up and discuss the issues.

    However, you err badly when you say:

    I acknowledge i had read this part a little quickly, but I still think you should pay closer attention to smoothing potentially non-stationary timeseries.
    (warning : epileptics and people allergic to the name “Mann” should not open this link)

    I’d also like to warn statisticians not to open the link, you might die of laughter … it states that often, the best way to smooth includes running the smooth through the last point of the series (AKA “end-pinning”), and proves that Mann really, truly knew what he was talking about … but only when he said “I am not a statistician” …

    JEG, if you think that Mann’s paper is valid, you desperately need to talk to some real statisticians. Mann’s paper is a sad joke. I tried to point that out to GRL, but they passed my paper on for review to some friend of Mann’s who said I was too hard on him … didn’t find anything wrong with my paper, just thought I was being mean. Ah, well.


  6. Roger Dueck
    Posted Nov 19, 2007 at 11:21 PM | Permalink

    From the comments on the previous thread one would assume that “peer review” ends at publication. I believe Craig’s comment “is the content of the paper and its veracity not more important than the ‘prestige’ of the publishing journal” hits the mark. THIS IS peer review, and it continues unabated in the scientific community. Only the arrogant dismiss it as less!

  7. Craig Loehle
    Posted Nov 20, 2007 at 11:26 AM | Permalink

    For computing confidence intervals, people seem to be assuming that we are dealing with annually resolved data. The data vary from annual to centennial to irregularly spaced. How do you interpolate? What does this do to the errors? Just computing CIs around the points estimated by the papers I cited does not allow you to carry that through the interpolation and smoothing. What does smoothing do to the errors? What about dating error? The reference JEG gave for the “proper” way to do CIs was for tree ring data (annual), with no dating error, no interpolation and no smoothing. I throw down the challenge: how do I carry uncertainty from the raw data through the temperature recon at irregular intervals into the final smoothed curve? Provide a reference please. Maybe I’m not as dumb as I look.

  8. Kristen Byrnes
    Posted Nov 20, 2007 at 11:39 AM | Permalink

    I did not like JEG’s tone at first, although I appreciate his change in tone. I wonder if Judith Curry gave him a good boot to the backside. If so, Georgia Tech gets a few points for good management and moves up a little on my list of colleges. That said, it is a good thing to have more educated and experienced voices in the crowd who do not always agree, and I hope JEG comments on this blog more often.

  9. jae
    Posted Nov 20, 2007 at 11:43 AM | Permalink

    Apart from that I see that the dating of the Medieval Warm Period in Europe matches historic anecdotes quite nicely.

    I think this is an extremely important point. IMO, there is no doubt at all that the MWP occurred. The only question is how hot it was. I think there is a lot of historical evidence that it was hotter than now, and that should be factored into an analysis of the Loehle paper.

  10. Jon
    Posted Nov 20, 2007 at 11:47 AM | Permalink

    JEG, if you think that Mann’s paper is valid, you desperately need to talk to some real statisticians. Mann’s paper is a sad joke. I tried to point that out to GRL, but they passed my paper on for review to some friend of Mann’s who said I was too hard on him … didn’t find anything wrong with my paper, just thought I was being mean. Ah, well.

    The phrase I’d like to hear mentioned more often (coming from an EE background) is “group delay”. This is a basic physical property of filtering and cannot be overcome by fabricating the series after the end point. You can construct a model to extrapolate the signal of course (e.g., by constructing an ARMA model), but that isn’t the same thing as smoothing and should be treated with much more skepticism.
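    To illustrate the group-delay point (a sketch of my own, not anyone's published method): a centered N-point moving average has no defined output for the first and last (N-1)/2 points, so a smooth that reaches the end of the series must fabricate or extrapolate data there.

    ```python
    import numpy as np

    def centered_smooth(x, window):
        """Centered moving average; NaN where the window would extend
        past the ends of the series (the group delay of the filter)."""
        half = window // 2
        out = np.full(len(x), np.nan)
        kernel = np.ones(window) / window
        valid = np.convolve(x, kernel, mode="valid")  # only fully-covered points
        out[half:half + len(valid)] = valid
        return out

    x = np.arange(100, dtype=float)   # a simple trend as stand-in data
    s = centered_smooth(x, 31)        # 31-point window loses 15 points per end
    print(np.isnan(s[:15]).all(), np.isnan(s[-15:]).all())
    ```

    Any value plotted in those 15 end points would have to come from a model of the unobserved future, which, as noted above, deserves much more skepticism than the smooth itself.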

  11. Jonathan Baxter
    Posted Nov 20, 2007 at 12:41 PM | Permalink

    I throw down the challenge: how do I carry uncertainty from the raw data through the temperature recon at irregular intervals into the final smoothed curve?

    Don’t bother. Just calculate the standard deviation at a bunch of representative points on your 18 proxy curves, after all processing, and plot them as error bars. Each of the curves is supposed to be a measurement of temperature corrupted by some unknown (and inhomogeneous) noise process. Since you are just performing a straight average of the curves, plotting standard deviations between the curves is as good a way as any of summarizing the variance (and hence uncertainty) in the predictions.

    Without a lot more information on the provenance and reliability of each proxy, I’d be pretty wary of any other approach.
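    The cross-proxy spread described above can be sketched in a few lines. This uses synthetic stand-in curves, not Loehle's actual 18 series, and assumes all proxies have been interpolated to common representative points.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(16, 2000, 30)   # representative points, ~30-yr spacing
    n_proxies = 18

    # Stand-ins for the processed proxy curves: a shared signal
    # plus independent noise for each proxy.
    signal = 0.3 * np.sin(2 * np.pi * years / 1000)
    proxies = signal + rng.normal(0.0, 0.2, (n_proxies, len(years)))

    mean_curve = proxies.mean(axis=0)             # the straight average
    spread = proxies.std(axis=0, ddof=1)          # between-proxy std dev
    stderr = spread / np.sqrt(n_proxies)          # std error of the mean

    for y, m, e in list(zip(years, mean_curve, 2 * stderr))[:3]:
        print(f"{y}: {m:+.2f} +/- {e:.2f}")
    ```

    Whether to plot the raw between-proxy standard deviation or the standard error of the mean depends on whether the error bars are meant to describe proxy disagreement or uncertainty in the average itself.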

  12. Larry
    Posted Nov 20, 2007 at 12:51 PM | Permalink

    Willis, I’m not a statistician, but the problem with that is obvious to me. It should be obvious to anyone who went farther than arithmetic. I’m not laughing, I’m just bemused, as is so often the case when I see what the team does. Sometimes it’s hard to tell chutzpah from invincible ignorance.

  13. Pat Keating
    Posted Nov 20, 2007 at 1:22 PM | Permalink

    10 Jon
    Yes, group delay is an unfortunate issue. It may be possible to back off somewhat on the smoothing (e.g., go down to 20- or 15-year, or even 10-year averaging) to get closer to the data end-point. You would have to look at the data to see whether that is feasible/sensible.

  14. Sam Urbinto
    Posted Nov 20, 2007 at 5:27 PM | Permalink

    I read JEG’s post over at his blog on this, and it seems he realizes he was a bit too hasty and, um, confrontational at first. I see a lot of folks come here thinking one thing, and I try to explain patiently that just because we come to a different conclusion than somebody else does, it doesn’t mean that conclusion is (or isn’t) wrong. Some ‘get it’ better than others over different time periods.

    It just seems JEG hasn’t been around this long enough, or familiar enough with the issues and side issues of everything that’s happening. Hope he sticks around long enough to become better acquainted with things. He says he’ll be back.

    What it all boils down to is this. No matter how smart you are, no matter your credentials, no matter what models you use, no matter what the substance does on its own alone in a lab, and no matter how scientific the process, if you claim “2X of Y = Z” for something in the real world, I say, prove it with a formula or empirically. If not, it’s a guess. Might be a SWAG, but sorry.

    It’s not up to me to prove your proxy is a good one. It’s your job to prove it is, or provide the data and the methods to replicate it. Craig’s willingness to take criticism, want to make his paper better and more accurate, to have it audited and verified, and to make his methods and sources open and available is how everyone should be behaving, if anyone wants policy makers to take this field of climate and its results seriously.

    Anyway, that’s just in general. I have the feeling this paper, along with any improvements based upon the multi-disciplinary review of it here, is going to change a lot of things about “business as usual”.

    Speaking of that, Hansen is working on a new paper.

    Hansen, J., 2008: Tipping point: Perspective of a climatologist. In The State of the Wild 2008: A Global Portrait of Wildlife, Wildlands, and Oceans. E. Fearn and K.H. Redford, Eds. Wildlife Conservation Society/Island Press, in press.

  15. Peter D. Tillman
    Posted Nov 20, 2007 at 10:45 PM | Permalink

    Sam Urbinto, #14

    I read JEG’s post over at his blog on this…

    Also note this interchange:
    CL: “I don’t have my lifework tied up in this. Maybe we could collaborate on the remake?”

    JEG: “I appreciate the offer. I need to read some of your other work to decide if your work ethics are compatible with mine. Perhaps we can meet ?”

    I think JEG is going to be a real asset here. I can put up with a smartass who really knows his stuff.

    So, bonjour, Julien. Willkommen to CA!

    Cheers — Pete Tillman

    “Fewer scientific problems are so often discussed yet so rarely decided
    by proofs, as whether climatic relations have changed over time.”
    — Joachim von Schouw, 1826.

  16. Jimmy
    Posted Nov 21, 2007 at 8:58 AM | Permalink

    If they do collaborate, I hope that they make a reconstruction with more than 18 datasets.

    And include Antarctic ice cores. And more sediment cores. And more speleothems….

  17. Posted Nov 22, 2007 at 3:09 PM | Permalink

    Where can I find Craig Loehle’s primary data set? I want to construct a sampling variogram and find out where orderliness in his sample space of time dissipates into randomness. It would be nice to define this incredibly important interval of time! Thanks and regards, JanWM


  18. Posted Nov 23, 2007 at 5:28 PM | Permalink

    In the first thread about the Loehle reconstruction, I posted some concerns about Loehle’s method of normalizing the proxies (by subtracting each proxy’s mean from itself):

    For what it’s worth, I re-analyzed the data using what I consider a better method of normalizing. The results are very close to the published Loehle results. I am not posting a new plot because it would not add anything to the discussion.

    If anyone’s interested, I normalized this way:
    1. Linear interpolation between years on all proxies (no extrapolation outside range)
    2. Found all years with data from all 18 proxies (~1300 years)
    3. Normalized proxies to have zero mean over the common years
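    The three steps above can be sketched like this (my own minimal reimplementation under stated assumptions, not John V's actual code; each proxy is taken as a pair of year/value arrays):

    ```python
    import numpy as np

    def normalize_proxies(proxies):
        """proxies: list of (years, values) arrays, irregularly sampled.
        Returns the common years and annual grids with each proxy
        centered to zero mean over those common years."""
        # 1. Linear interpolation onto annual steps, no extrapolation:
        #    each grid spans only that proxy's own date range.
        grids = []
        for years, values in proxies:
            annual = np.arange(int(years.min()), int(years.max()) + 1)
            grids.append((annual, np.interp(annual, years, values)))
        # 2. Years covered by every proxy.
        start = max(g[0].min() for g in grids)
        end = min(g[0].max() for g in grids)
        common = np.arange(start, end + 1)
        # 3. Subtract each proxy's mean over the common years.
        out = []
        for annual, vals in grids:
            mask = (annual >= start) & (annual <= end)
            out.append((annual, vals - vals[mask].mean()))
        return common, out
    ```

    Centering on a common interval (rather than on each proxy's full, differing span) avoids introducing offsets between proxies that merely cover different periods.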

  19. Dave Dardinger
    Posted Nov 23, 2007 at 8:10 PM | Permalink

    re:18 John V,

    Thank you very much for posting a “negative” finding. It’s a sign of a scientific and fair mind to do so.

  20. John A
    Posted Nov 24, 2007 at 11:19 AM | Permalink

    John V:
    I would encourage you to post the plots. I think it is germane to the discussion to show how robust (or not) the Loehle reconstruction is.
