Loehle Correction

Craig Loehle has responded to various criticisms and issued a correction here, which he asked me to note. He states that:

the original and correction are pasted together. It has data, urls, a map, proper confidence intervals and hypothesis tests, and corrections to dating issues Gavin found

He states that “the shape of the curve didn’t change appreciably.”

In contrast, Mann et al 2007 continued to use incorrect locations for their precipitation series (the rain in Maine still continues to fall mainly in the Seine, while the rain in Bombay still continues to cause rainouts for the Phillies), and he continues to use his incorrectly calculated PC series.

It is remarkable to compare the alertness of the climate science community in this instance to the somnolent reviewing at the Journal of Climate for Mann et al 2007 – where the reviewers either didn’t bother checking whether Mann had fixed known errors or didn’t care. It is encouraging that Loehle responded promptly to the criticism by issuing a correction.

I am only posting a notice of the correction at this time, as I have not yet parsed through the details.

356 Comments

  1. John Creighton
    Posted Jan 21, 2008 at 12:58 PM | Permalink

    That kind of kills the hockey stick graph.

  2. MattN
    Posted Jan 21, 2008 at 12:59 PM | Permalink

    So, tell us Steve, how do you really feel about this… 🙂

  3. jae
    Posted Jan 21, 2008 at 1:18 PM | Permalink

    That kind of kills the hockey stick graph.

    I dunno; it appears to be indestructible.

  4. Andrew
    Posted Jan 21, 2008 at 1:29 PM | Permalink

    So, how much more correcting are they going to demand before it’s allowed in the spaghetti graph?

    I’m sure that Steve is just glad to be dealing with someone who isn’t obstructing his analysis of their work.

  5. Bob Koss
    Posted Jan 21, 2008 at 1:56 PM | Permalink

    By addressing concerns raised, Craig is demonstrating how criticism should be handled. Whether other issues develop or not, I certainly respect his efforts.

  6. AJ Abrams
    Posted Jan 21, 2008 at 2:03 PM | Permalink

    How in the world does someone make a correction like that yet still state in their abstract “with little change in the result”? The MWP is now shown to be warmer, yet that isn’t a significant change in the results? The whole premise that current GW is unprecedented, and therefore must be human related, goes up in smoke if data shows a recent event like the MWP was actually warmer than current conditions. That isn’t significant? I’m at a loss for words.

  7. Phil.
    Posted Jan 21, 2008 at 2:03 PM | Permalink

    He still claims that the downtick at the end is due to the cold decades of the 60s and 70s whereas the 60s are the warmest years in his data since 1400!

  8. Andrew
    Posted Jan 21, 2008 at 2:17 PM | Permalink

    AJ Abrams, you’ve got it backwards. Previously, Loehle’s reconstruction showed a very warm MWP, but now it isn’t significantly warmer.

    The warmest tridecade of the MWP was warmer than the most recent tridecade, but not significantly so.

    Phil, what you are saying makes no sense. The corrected version extends to 1935:
    http://www.econ.ohio-state.edu/jhm/AGW/Loehle/LoehleMcC.txt

  9. Posted Jan 21, 2008 at 2:25 PM | Permalink

    That Hockey Stick has scoliosis.

  10. AJ Abrams
    Posted Jan 21, 2008 at 2:25 PM | Permalink

    I had only read the abstract and the results of the correction (I’m at work). Thanks for the correction, it makes a ton more sense now.

  11. Andrew
    Posted Jan 21, 2008 at 2:32 PM | Permalink

    Oh, noting this:

    Note that the use of smoothed data (29-year running mean) and the existence of dating error in the series means that peaks and troughs are damped compared to annual data and are likely even damped compared to the true history (Loehle, 2005). Some of the input data were also integrated values or sampled at wide intervals. Thus it is not possible to compare recent annual data to this figure to ask about anomalous years or decades.

    It obviously would solve all those problems to just apply a running mean to the instrumental data, but I decided to try it anyway. Using HADCRUT:
    http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual
    And UAH MSU, common-zeroing that to HADCRUT in 1979 and common-zeroing both to Loehle in 1850 (on both the reconstruction itself and the confidence intervals), then doing a 29-year moving average, I got this:

    Ready for a barrage of “use R”‘s. It seems that it would match better with reality if something was shifted forward a few years.
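
    For concreteness, the steps just described might look roughly like this in R (a sketch only, with hypothetical vector names; not the actual script used):

      # Sketch: splice UAH MSU onto HadCRUT, re-zero to the reconstruction,
      # then take a centered 29-year running mean. hadcrut/uah/loehle are
      # hypothetical annual anomaly vectors; had_yr/uah_yr/loehle_yr their years.
      run29 <- function(x) as.numeric(stats::filter(x, rep(1/29, 29), sides = 2))

      # common-zero UAH to HadCRUT at 1979
      uah0 <- uah - (uah[uah_yr == 1979] - hadcrut[had_yr == 1979])

      # splice: HadCRUT before 1979, offset UAH from 1979 on
      instr    <- c(hadcrut[had_yr < 1979], uah0)
      instr_yr <- c(had_yr[had_yr < 1979], uah_yr)

      # common-zero the spliced series to Loehle at 1850 (the same offset
      # would also be applied to the confidence interval bounds)
      instr <- instr - (instr[instr_yr == 1850] - loehle[loehle_yr == 1850])

      smoothed <- run29(instr)  # now comparable, in spirit, to the 29-yr recon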

  12. Posted Jan 21, 2008 at 2:41 PM | Permalink

    It is a prompt and diligent response to criticisms indeed. Like Steve, I have not had the time to scrutinize its every nook and cranny, but it looks MUCH more scientific now. I applaud the promptitude of the corrections, and agree that we climate scientists would be well-inspired to respond in this manner.

    Obviously, the main limitation is still that the “global T” timeseries stops in 1935… For all their faults, climate scientists are usually trying to address the question of how warm was the MWP compared to the 1990’s (not the 1930’s), on a global or hemispheric scale. Loehle’s methodology, although considerably improved, fails to bring anything meaningful to the question. Indeed, to substantiate his claim that the MWP was the warmest tridecade ever, he is forced to do a hodge-podge patching to the GISTEMP series for the post-1935 period – a choice which some of the eminent statisticians in the audience might have something to say about.

    In any case, the conclusions are much more tame, thus more reasonable now. Essentially they are saying that the “AGW axiom” is undecidable given his dataset: temperatures of the recent decades and the MWP are within uncertainties of each other. I would agree with that much.

  13. Craig Loehle
    Posted Jan 21, 2008 at 2:43 PM | Permalink

    re: Andrew says: January 21st, 2008 at 2:32 pm
    The data have dating errors, so I do not recommend over-interpreting wiggles in the graph at fine scales such as decades.

  14. Andrew
    Posted Jan 21, 2008 at 2:45 PM | Permalink

    Er, wouldn’t solve them, sorry.

  15. Andrew
    Posted Jan 21, 2008 at 2:46 PM | Permalink

    Thanks Craig, I’ll be more careful next time.

  16. Craig Loehle
    Posted Jan 21, 2008 at 2:49 PM | Permalink

    re: Julien Emile-Geay says: January 21st, 2008 at 2:41 pm
    The AGW question is indeed undecidable from this data, though in a “hodge-podge” sort of way it does seem that one can reject at least that the recent period was warmer than the MWP. The result also suggests that attempts to make the MWP go away are perhaps misguided. If there was a MWP, then the “null model” for climate trends is not flat and featureless.

  17. Andrew
    Posted Jan 21, 2008 at 2:58 PM | Permalink

    I would argue, Julien Emile-Geay, that whether Loehle “brings anything to the table” depends on whether you believe the meal is forthcoming. I don’t think there will ever be enough data to say with confidence exactly how warm the MWP was, because the reconstructions to base these claims on are sparse indeed. We have something like 3000 temperature stations around the world. Loehle, or anyone else for that matter, has a few dozen locations to go on. Not exactly easy to be definitive.

  18. Posted Jan 21, 2008 at 3:02 PM | Permalink

    Andrew–#18, How about hard evidence??–Like organic material underneath retreating glaciers… Not enough???–I guess we should all resign ourselves to proxy guesses from 1 bristlecone pine.

  19. Andrew
    Posted Jan 21, 2008 at 3:10 PM | Permalink

    Gaelan Clark, I agree that such things tell us that the MWP existed, and that, at least in those places, it was warmer than the present. But I mean to say that, quantitatively, the exact degree to which it was warmer or not globally requires a lot more evidence. That’s a quantitative question. Organic material is qualitative evidence. And I’m well aware that proxies from Bristlecones tell us more about soil moisture than temperature.

    Sorry, I seem to have made you angry. No hard feelings, right?

  20. Kenneth Fritsch
    Posted Jan 21, 2008 at 3:14 PM | Permalink

    Re: #12

    For all their faults, climate scientists are usually trying to address the question of how warm was the MWP compared to the 1990’s (not the 1930’s), on a global or hemispheric scale. Loehle’s methodology, although considerably improved, fails to bring anything meaningful to the question. Indeed, to substantiate his claim that the MWP was the warmest tridecade ever, he is forced to do a hodge-podge patching to the GISTEMP series for the post-1935 period – a choice which some of the eminent statisticians in the audience might have something to say about.

    It does not take a statistician, eminent or otherwise, to notice the obvious differences between Loehle’s rendition of a reconstruction and those of Mann et al and their progeny. Forget, for the moment, the later 20th century temperature increases and notice the much larger variations that Loehle’s rendition shows. Mann et al’s original rendition, as it turns out, was built for the argument that non-AGW effects prior to the 20th century caused very little temperature variation and only the late 20th century showed any significant variation (shown more prominently by the instrumental record and particularly beyond the 1980s), when man was spewing CO2 and other GHGs.

    The questions then become why the difference in renditions and whether either or neither is based on validly accurate temperature proxies. Until I see a more detailed analysis of the Loehle proxies I remain skeptical.

  21. Posted Jan 21, 2008 at 3:20 PM | Permalink

    Stats in the absence of causality is not science.

  22. Peter Thompson
    Posted Jan 21, 2008 at 3:34 PM | Permalink

    #12 JEG,

    ” For all their faults, climate scientists are usually trying to address the question of how warm was the MWP compared to the 1990’s (not the 1930’s)”.

    Didn’t our host do a pretty good job demonstrating that the 30’s were at least as warm as the 90’s, at least in the continental US? That always seemed to be Hansen’s “go to” series until that time. I gather he’s gone global since.

  23. Kenneth Fritsch
    Posted Jan 21, 2008 at 3:38 PM | Permalink

    Re: #17

    Francois, I much respect your judgments, but I think I would expect no less from JEG in this case. It gives a great opportunity for this humble layperson to observe those little gray cells operating in the mind of an up and coming young climate scientist and particularly so in reacting to fresh inputs that, at least on first notice, might go against the grain of the established orthodoxy.

  24. Andrew
    Posted Jan 21, 2008 at 3:40 PM | Permalink

    Peter, the temperatures in the US were never impressive. Why would Hansen make that his “go-to” data?

  25. steven mosher
    Posted Jan 21, 2008 at 3:42 PM | Permalink

    I think JEG gets it about right or mostly right.

    1. Craig responded to errors quickly and forthrightly. Everyone can take a lesson there. That JEG praises Craig for this can be taken as a tacit criticism of others.

    2. I think Hu added something of note to the analysis and think he should be singled out by folks for making a contribution.

    3. I think the issue of the MWP versus the 1930s does add something, especially

    A. if all the studies comparing the MWP to the 1990s are flawed and/or infected by bogus proxies
    B. if the instrument record post 1975 is likewise infected.

    Anyway, well done, Craig.

  26. Posted Jan 21, 2008 at 3:57 PM | Permalink

    I’ve set up a short page at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/ with further details of the standard deviation calculation (complete with color spaghettigram!), and an ASCII file with the corrected series, standard errors, and 95% CI bounds, already noted by Andrew above in #8. Note that a single PDF contains both the original Loehle (2007) article and Loehle and McCulloch (2008) correction.
    More anon.

  27. Steve McIntyre
    Posted Jan 21, 2008 at 4:00 PM | Permalink

    In this context, I remind people of the spaghetti graphs that I showed in my 2006 AGU presentation http://data.climateaudit.org/pdf/agu06.ppt in which slight variations of canonical recons yielded different medieval-modern relationships, a theme also raised in our discussions of Juckes.

    These variations take the matter well past the 1930s right up to the present. For Esper 2002, there’s an Esper variation that has a warmer MWP; likewise for Briffa 2000, MBH99 etc etc.

    Most of the “proxies” contain little information, leaving the result imprinted by a few selections – Yamal and bristlecones favoring the modern warm period, Polar Urals update, Indigirka … favoring MWP. Do you prefer cherries or apples, as we asked the NAS panel?

  28. Posted Jan 21, 2008 at 4:03 PM | Permalink

    good.
    i would still change the title, as it implies that you are looking at 2000 years of data…

    who will spread the happy news now among all those who used the former version as proof that the MWP was warmer than “modern” times?

  29. Andrew
    Posted Jan 21, 2008 at 4:14 PM | Permalink

    Sod, nothing has necessarily changed. One could interpret the graph before, extending only to about a decade or two ago, as neither confirming nor denying that the MWP was warmer than today, though giving strong weight towards confirming. The weight has lessened, but the way I see it, the reconstruction still leans that way, if less significantly so.

  30. Pat Keating
    Posted Jan 21, 2008 at 4:24 PM | Permalink

    29 Andrew
    The point is that Craig’s reconstruction shows a warm period in Medieval times and is quite different from Mann’s hockey stick. The argument over whether today’s temperatures are about the same as or slightly less warm than the MWP seems to me rather silly. The point is that they are close, not like Mann’s contrivance.

  31. Mike B
    Posted Jan 21, 2008 at 4:27 PM | Permalink

    #28

    who will spread the happy news now among all those who used the former version as proof that the MWP was warmer than “modern” times?

    Loehle’s result:

    Yes, there was a statistically significant global MWP and

    MWP>=CWP.

    That’s a profound refutation of the Mannian Hooey Stick.

  32. Craig Loehle
    Posted Jan 21, 2008 at 4:28 PM | Permalink

    sod: I could not change the title when it is a “correction”, sorry. Two salient points:
    1) If tree rings give different results than other proxies, then all is not right in Denmark. Further study is needed.
    2) The result bears on the assertion that recent temperatures are “unprecedented”. With these data there is no question that this assertion is rejected (in a statistical sense). Now what is needed is more and better data to tighten up that result (or refute it). It remains with those who claim unprecedentedness to firmly prove their case.

  33. Phil.
    Posted Jan 21, 2008 at 4:30 PM | Permalink

    Re #8

    Phil, what you are saying makes no sense. The corrected version extends to 1935:
    http://www.econ.ohio-state.edu/jhm/AGW/Loehle/LoehleMcC.txt

    What are you talking about? I’m referring to the discussion of fig. 1 (see below), the data for which in the correction runs up to 1980.

    The time series produced by the simple mean of smoothed deviations (Fig. 1) shows quite coherent peaks. Note that the use of smoothed data (30-year running mean) means that peaks and troughs are damped compared to annual data (Loehle, 2005). Some of the input data were also integrated values or sampled at wide intervals. Thus it is not possible to compare recent annual data to this figure to ask about anomalous years or decades. The data show the Medieval Warm Period (MWP) and Little Ice Age (LIA) quite clearly. The series ends with a downtick because the last set of points are averages that include the cool decades of the 1960s and 1970s.

  34. Lance
    Posted Jan 21, 2008 at 4:32 PM | Permalink

    sod,

    “As warm” is damning enough to any claim that current temperatures are “unprecedented” and must be attributed to anthropogenic CO2. If natural variability produced temperatures as high as present in the MWP then there is little evidence to support claims of an impending human caused climate catastrophe.

    P.S. The lower case letter starting the name is so 2002.

  35. Andrew
    Posted Jan 21, 2008 at 4:36 PM | Permalink

    Pat Keating, of course, it has always been my understanding that the point is the similarity, not the exact difference between warm periods (ie, one’s warmer, one’s hotter). But to the layman, the idea that this is the warmest period in a thousand years, regardless of the profile of past climate shifts, is a compelling case for them that current warming=AGW. Which is why Mann got so much attention.

  36. bender
    Posted Jan 21, 2008 at 4:38 PM | Permalink

    who will spread the happy news now among all those who used the former version as proof that the MWP was warmer than “modern” times?

    You may as well ask: who will spread the happy news now among all those who used the hockey stick as proof that the MWP was possibly warmer than modern times? [The HS is dead.]

    Well: all of us, that’s who.

    Thank you Dr. Loehle for implementing ALL of the reviewers’ suggestions. And thanks, Hu, for being pro-active in helping him out. If every author responded as quickly and as vigorously as you did, the job of reviewer and editor would be so much more pleasant.

    Would be great to see an MBH HS and Moberg recon overlaid on top of Loehle’s recon with the suitably massive confidence envelope. Would like to see where and how much the HS falls outside L&M’s confidence envelope. Probably Steve M will get to that at some point.

  37. Andrew
    Posted Jan 21, 2008 at 4:41 PM | Permalink

    Phil,

    The peak value of the MWP is 0.526 Deg C above the mean over the period (again as a 29 year mean, not annual, value). This is 0.412 Deg C above the last reported value at 1935 (which includes data through 1949) of 0.114 Deg C.

    No sixties here.

  38. Andrew
    Posted Jan 21, 2008 at 4:44 PM | Permalink

    Actually, this quote might help Phil too:

    With the corrected dating, the number of series for which data is available drops from 11 to 8 in 1935, so that subsequent values of the reconstruction would be based on less than half the total number of series, and hence would have greatly decreased accuracy. Accordingly, the corrected estimates only run from 16 AD to 1935 AD, rather than to 1980 as in Loehle (2007).

  39. Phil.
    Posted Jan 21, 2008 at 4:45 PM | Permalink

    Re #33

    What are you talking about, I’m referring to the discussion of fig. 1 (see below), the data for which in the correction runs up to 1980.

    OK I see what’s up, the site refers to the correction but that link goes back to the original data.

  40. Craig Loehle
    Posted Jan 21, 2008 at 4:53 PM | Permalink

    Sorry for the confusion, but the new paper is tacked on to the end of the old paper so no one can get just one without the other. If you actually download it, you get both.

  41. Andrew
    Posted Jan 21, 2008 at 5:00 PM | Permalink

    OK I see what’s up, the site refers to the correction but that link goes back to the original data.

    Odd, it looks like the new data to me. My link or someone else’s?

  42. Craig Loehle
    Posted Jan 21, 2008 at 5:17 PM | Permalink

    The link to supplemental info (data for graph) is in the corrected ms in the figure legend.
    http://www.ncasi.org/programs/areas/climate/Loehle_Supplemental_Info.zip

  43. Sam Urbinto
    Posted Jan 21, 2008 at 5:30 PM | Permalink

    So this is rather the same boat as what Drs. North and Wegman said about MBH: it neither proves nor disproves anything; it leaves us in the same state of ignorance we were in before. What’s wrong with not knowing something you can’t know and just dropping it?

    I really wish everyone would stop calling the anomaly trend “the temperature” (and then equating that to energy balance). All we know is the anomaly trend is hanging around .6C right now.

  44. Kenneth Fritsch
    Posted Jan 21, 2008 at 5:39 PM | Permalink

    Re: #27

    Most of the “proxies” contain little information, leaving the result imprinted by a few selections – Yamal and bristlecones favoring the modern warm period, Polar Urals update, Indigirka … favoring MWP. Do you prefer cherries or apples, as we asked the NAS panel?

    Before I bake a cherry or apple pie I have to know whether the apples and/or cherries are “ripe”. At this point I will be content to eat cake.

  45. bender
    Posted Jan 21, 2008 at 5:41 PM | Permalink

    Ok, Sam, I’ll bite. But please post your reply in unthreaded. Why do you expect the temperature trend over time to look any different from the anomaly trend? As long as we’re talking about long-term trends, does your distinction really matter?

  46. Andrew
    Posted Jan 21, 2008 at 7:29 PM | Permalink

    By the way, any word on whether Realclimate is planning a new critique of Loehle? Their old one is now stale.

  47. Craig Loehle
    Posted Jan 21, 2008 at 7:34 PM | Permalink

    I sent the new ms to Gavin–let’s see what he does.

  48. Raven
    Posted Jan 21, 2008 at 7:39 PM | Permalink

    I sent the new ms to Gavin–let’s see what he does.

    Julien Emile-Geay has already told us what the ‘real-climate’ talking points will be: focus on the 1935 limitation and the lower MWP temps and claim the paper is irrelevant.

  49. PhilH
    Posted Jan 21, 2008 at 8:18 PM | Permalink

    Steve:

    This may be non-scientifically off-point, but I have been re-reading “In the Throne Room of the Mountain Gods,” Galen Rowell’s account of the failed 1975 American K2 expedition into the Karakoram Himalayas in which he makes numerous references, on apparent local authority, to medieval trade routes from China through the region before the glaciers expanded and filled the river beds and moraines in the area.

    Steve: I’d be very cautious about relying on local stories. Anything medieval can be well dated archaeologically and that’s the only information that’s worth relying on. Having said that, it looks like there have been major millennium-scale changes in interior Asia; there are old buildings buried in the Taklamakan desert.

  50. jae
    Posted Jan 21, 2008 at 8:36 PM | Permalink

    Julien Emile-Geay has already told us what the ‘real-climate’ talking points will be: focus on the 1935 limitation and the lower MWP temps and claim the paper is irrelevant.

    No, now Gavin HAS to address the fact that modern temperatures are not warmer than the last milllyunnn years.

  51. Phil.
    Posted Jan 21, 2008 at 8:42 PM | Permalink

    Re #32

    2) The result bears on the assertion that recent temperatures are “unprecedented”. With these data there is no question that this assertion is rejected (in a statistical sense). Now what is needed is more and better data to tighten up that result (or refute it). It remains with those who claim unprecedentedness to firmly prove their case.

    This rests on your reading of GISTEMP and your definition of the present as 1992 (even then you say the difference, 0.07º, isn’t significant); GISTEMP shows an increase of ~0.3º since then!

  52. Andrew
    Posted Jan 21, 2008 at 9:01 PM | Permalink

    Phil., Que? No comprende, Señor. The data in question are smoothed, so it’s not really appropriate to do a smoothed/nonsmoothed comparison. Also, no GISS, please. I mean, it’s interesting that Craig chose this, but I wouldn’t have. But then again, I’m a “denier”, aren’t I? Look up at my graph. It seems that, at least the way I did it, there is no “unprecedented” recent warmth.

  53. Phil.
    Posted Jan 21, 2008 at 9:13 PM | Permalink

    Re #52

    Well, Craig chose GISS, so who cares what you would choose? He claims that “With these data there is no question that this assertion is rejected (in a statistical sense)”; well, when the most recent data he can connect to is 1992, in an era of steadily increasing temperature, I’m afraid that assertion doesn’t stand.

  54. Andrew
    Posted Jan 21, 2008 at 9:20 PM | Permalink

    Who cares? Certainly I care! Ask Craig why he did that, not me. Steadily rising, though, depends on your definition. I can find any number of short periods, and one or two medium ones, when temperature wasn’t rising. For the long run, yes, pretty steady rise. And of course I don’t want to get an “IT IS JUST WEATHER” thing, but there hasn’t been much warming in the last ten years, except in GISS (and maybe NCDC or another surface record?) of course.

  55. Phil.
    Posted Jan 21, 2008 at 10:20 PM | Permalink

    Re #54

    It’s you who doesn’t want Craig to use GISS, not me; he chose it, so to be consistent I use it in my comments. He states that the Tmax 16 years ago matched his reconstruction Tmax in the MWP, and I haven’t seen many years since 1992 that weren’t warmer than ’92.

  56. conard
    Posted Jan 21, 2008 at 11:21 PM | Permalink

    Poor Craig. In his first paper he tried to use the most recent data and was taken to the woodshed. Now his study is no good because the last statistically defendable data is too old– and it is not even his data 😉

    Andrew,
    I think Craig should have avoided the temptation to bring in the instrumental record and let his case ride on the proxies. He could not resist so you followed him into the weeds and poor ol’ sod ran ahead and fell off the cliff.

    Phil,
    You seem like a smart guy, why pick the low-hanging fruit? Your comments on the paper’s “main significance”?

  57. Posted Jan 22, 2008 at 2:19 AM | Permalink

    I appreciate your efforts, Craig and Hu! You might get some criticism of error model

    X_{jt} = \mu_t + \varepsilon_{jt}

    as the scale error term

    X_{jt} = \mu_t + \varepsilon_{jt} + \epsilon_j \mu_t

    is missing (whether it is significant or not brings us back to source publications). But anyway, the Team should learn a lot from your clear and sensible statistical error analysis.

    Hu

    I’ve set up a short page at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/ with further details

    SupplementaryInfo

    …The second component would reflect the difference between the local true temperature anomaly and the global anomaly, and would have a pairwise covariance that was some declining function, to be empirically determined, of the great circle angular distance between the pair of proxies in question. Global temperature could then be efficiently estimated by means of Generalized Least Squares (Aitken’s formula), using the complete spatial covariance matrix. Such a strategy would be more efficient than the coarse “gridding” procedure commonly used in climate studies.

    This is a very interesting topic – have you read Shen’s papers (see links in http://www.climateaudit.org/?p=1603#comment-111575)?
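
    For anyone curious what that GLS strategy amounts to, a toy sketch in R (the covariance model, decay rate and data below are invented purely for illustration; this is not the SI code):

      # GLS ("Aitken") estimate of a common global mean from proxies whose
      # error covariance decays with great-circle angular distance.
      gc_angle <- function(lat1, lon1, lat2, lon2) {  # angular distance, radians
        acos(pmax(-1, pmin(1, sin(lat1) * sin(lat2) +
                               cos(lat1) * cos(lat2) * cos(lon1 - lon2))))
      }

      set.seed(1)
      n   <- 8
      lat <- runif(n, -pi/2, pi/2); lon <- runif(n, -pi, pi)  # fake proxy sites
      x   <- rnorm(n, mean = 0.2, sd = 0.3)                   # fake proxy anomalies

      D <- outer(1:n, 1:n, function(i, j) gc_angle(lat[i], lon[i], lat[j], lon[j]))
      S <- 0.09 * exp(-D / 0.5)   # assumed covariance, declining with distance

      w      <- solve(S, rep(1, n))   # S^{-1} 1
      mu_gls <- sum(w * x) / sum(w)   # (1'S^{-1}1)^{-1} 1'S^{-1} x
      se_gls <- sqrt(1 / sum(w))      # its standard error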

  58. Posted Jan 22, 2008 at 4:29 AM | Permalink

    sod: I could not change the title when it is a “correction”, sorry. Two salient points:

    thanks for this reply.

    before i give some answers or make some more smart remarks, i have two more suggestions:

    1. the paper would still benefit massively from a table overview of the proxies: number of data points, first and last dates, type of data, min, max, average temperature, and links to data and source.

    2. i thought that Steve’s graph showing the number of active proxies at the end of your observed period gave good information.

  59. Posted Jan 22, 2008 at 5:10 AM | Permalink

    1) If tree rings give different results than other proxies, then all is not right in Denmark. Further study is needed.

    this is an important point that most people fail to notice.
    to achieve the result of a distinctive MWP, tree ring proxies had to be left out. we should keep that in the back of our minds when evaluating the results.

    2) The result bears on the assertion that recent temperatures are “unprecedented”. With these data there is no question that this assertion is rejected (in a statistical sense). Now what is needed is more and better data to tighten up that result (or refute it). It remains with those who claim unprecedentedness to firmly prove their case.

    i don’t see how you can reject a thesis concerning the late 20th/early 21st century by using data up to 1935.

    and i would LOVE to see a comparison of error ranges between modern measured temperatures and the range given for 18 proxies.

    the short-term medieval warming spike shown in your graph is about 0.3 °C during the 9th century. to me that looks very much like a totally different situation from what happened during the 20th century.

  60. Michael Smith
    Posted Jan 22, 2008 at 5:57 AM | Permalink

    i don’t see how you can reject a thesis concerning the late 20th/early 21st century by using data up to 1935.

    Any claim that current temperatures are unprecedented in the last 1,000 years presupposes some minimum amount of knowledge about temperatures over that period of time. Craig’s data contradicts the tree-ring reconstructions — which means, at a minimum, that our knowledge of past temperatures moves from the category of “known to be fairly flat with little variability” to “unknown because of conflicting data”.

    Until that is resolved, the claim that current temperatures are unprecedented is simply unsupportable.

  61. Peter Thompson
    Posted Jan 22, 2008 at 6:04 AM | Permalink

    sod #60,

    “the short-term medieval warming spike shown in your graph is about 0.3 °C during the 9th century. to me that looks very much like a totally different situation from what happened during the 20th century”.

    Since you want to pick cherries, I’m going to pick some apples.

    There are always two issues at play in these discussions, and the ability to switch back and forth as required has always made the “unprecedented warming” crowd hard to pin down. If one points to evidence that it was likely as warm as now in the recent past, the argument pivots to the rate of warming as opposed to the absolute level of temperatures, and vice versa. Well then, it would appear that insofar as this study is concerned, one cannot make the case that the absolute temperature is unprecedented, and this conclusion gains support from organics being uncovered by the retreating glacier in Greenland which have been dated to approximately 1000 years old.

    On to the rate of warming. sod, look at (eyeball) the period which begins at the year 1200 and runs for 60-70 years. It would appear to warm between 0.50 and 0.60 C over that period. That appears to be a precedent to me.

  62. Posted Jan 22, 2008 at 6:14 AM | Permalink

    There are always two issues at play in these discussions, and the ability to switch back and forth as required has always made the “unprecedented warming” crowd hard to pin down.

    not at all. i like your argument, as it makes an important distinction.

    so the two of us seem to agree:

    the temperature change of the 20th century was unprecedented over the last 2000 years.

  63. Peter Thompson
    Posted Jan 22, 2008 at 6:26 AM | Permalink

    sod #63,

    “so the two of us seem to agree:

    the temperature change of the 20th century was unprecedented over the last 2000 years”.

    If one were a professional hair-splitter, one might make that interpretation of my post. It wouldn’t be any less wrong, however. Please do expand on your post, the agreeing part in particular.

  64. Craig Loehle
    Posted Jan 22, 2008 at 6:28 AM | Permalink

    Phil: before you get all bent out of shape, read it carefully. 1992 is the last smoothed point; it covers a 29-year period up to 2006. You can’t compare annual values to a smoothed curve.

  65. Craig Loehle
    Posted Jan 22, 2008 at 6:32 AM | Permalink

    Another point to keep in mind: in my 2005 paper I showed that if you average series with dating error then the peaks and troughs (if any) are flattened out – just like a Kalman filter, except you are filtering out the signal rather than the noise. So, the true MWP peak and LIA trough are likely more extreme than shown in this graph.
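
    That flattening is easy to reproduce numerically; a toy R simulation (an illustration of the point, not the 2005 paper’s method):

      # Average several copies of a known signal, each with a random dating
      # offset (a crude stand-in for age-model stretching), and compare peaks.
      set.seed(42)
      t      <- 1:2000
      signal <- sin(2 * pi * t / 1000)   # one MWP/LIA-like cycle, amplitude 1

      recons <- sapply(1:10, function(i) {
        shift <- rnorm(1, 0, 150)        # dating error: ~150-year offsets
        approx(t + shift, signal, xout = t, rule = 2)$y
      })

      c(true_peak = max(signal), recon_peak = max(rowMeans(recons)))
      # recon_peak comes out well below 1: the peak is damped, as argued above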

  66. MarkW
    Posted Jan 22, 2008 at 6:33 AM | Permalink

    sod,

    Why wouldn’t one want to leave out proxies that aren’t good temperature indicators, in a study of temperatures?

  67. bender
    Posted Jan 22, 2008 at 7:43 AM | Permalink

    the temperature change of the 20th century was unprecedented over the last 2000 years

    Where on earth do you get this idea, sod? Show the graph.

  68. bender
    Posted Jan 22, 2008 at 7:48 AM | Permalink

    sod, before spouting off these lines of yours, you are obliged to address #62’s:

    sod, look at (eyeball) the period which begins at the year 1200 and runs for 60-70 years. It would appear to warm between 0.50 and 0.60 C over that period. That appears to be a precedent to me.

    Agree or disagree? That is a yes/no question.

  69. Craig Loehle
    Posted Jan 22, 2008 at 8:03 AM | Permalink

    I specifically did not address questions of “rates” of change with my data, because of the dating errors and wide spacing of the data. I do not believe you can answer questions of rates with this data. I am a sceptic even of my own data.

    By the way, to those who doubt that it is even possible to obtain temperatures for the globe for past periods: I do not assert that you necessarily can, but since models are being calibrated based on reconstructions and people are making policy from them, it seems to be critical data for decision making, real or not.

  70. deadwood
    Posted Jan 22, 2008 at 8:11 AM | Permalink

    By the way, to those who doubt that it is even possible to obtain temperatures for the globe for past periods: I do not assert that you necessarily can, but since models are being calibrated based on reconstructions and people are making policy from them, it seems to be critical data for decision making, real or not.

    Indeed! A very important distinction.

  71. bender
    Posted Jan 22, 2008 at 8:15 AM | Permalink

    I do not believe you can answer questions of rates with this data.

    sod thinks he can, and I would like to hear his argument.

  72. Posted Jan 22, 2008 at 8:20 AM | Permalink

    sod thinks he can, and I would like to hear his argument.

    sod tends to agree with Loehle on this case.

    but i notice as well that the temperature increase MEASURED in the 20th century is 2 to 3 times bigger than the one in this MDW.
    the message out in the blogosphere is that the current temperature increase is similar to that in the MWP (and yes, this is in amount and speed!). i have seen little or no evidence that supports this assumption.

    i do notice as well that the amount of applause that the Loehle correction is getting is negatively correlated with the willingness to accept the changed results of the study.

  73. Glacierman
    Posted Jan 22, 2008 at 8:26 AM | Permalink

    Thank you Dr. Loehle. I have watched all of this and I appreciate the way in which you have handled the criticism of the first paper, the way in which you have responded and how you have conducted yourself after the papers were published. You did not hide, or arm wave, or have your buddy publish something saying the criticism of your work had been thoroughly refuted. That will lend itself to advancing our knowledge of the subject matter.

    I am hopeful that this will push the issue of tree ring data as temperature proxies, as they appear to lack sufficient sensitivity to temperature as distinct from other factors. I think that intuitively this is a simple thing to understand; however, it has not gone that way in the literature. I agree that further study is needed and the tree ring proxy folks need to better explain the divergence problem seen in modern records.

  74. bender
    Posted Jan 22, 2008 at 8:26 AM | Permalink

    sod, your conclusion is not supported by the data. That was the whole point of the rewrite – to properly express the amount of uncertainty on the MWP. Look at the width of the envelope. It may have been a much faster rise in temperature than what we are seeing today. As usual, you have missed the point. Stop trying to torque the analysis to fit your preconceived view. Look at the data for what it is.

    Loehle is being congratulated for his rapid response, due diligence and accountability, not because he’s a MWP saviour.

  75. Steve Geiger
    Posted Jan 22, 2008 at 10:09 AM | Permalink

    oops, don’t know if that last note took… but wouldn’t the averaging and sparser data of the past only err on the side of lower dTemp/dt (hence less rapidity of change for the MWP increases)? Or could it err somehow (unintuitively, IMO) to make the curve steeper?

  76. Craig Loehle
    Posted Jan 22, 2008 at 10:16 AM | Permalink

    re: Steve Geiger
    If the averaging of series with dating error flattens out peaks and troughs like I argue in my 2005 paper in Mathematical Geology, then intuitively the rates of warming/cooling MUST decrease on average. However, over short time periods the errors can cause blips up or down that would give the impression of faster rates of warming/cooling.

    see:
    Loehle, C. 2005. Estimating Climatic Timeseries from Multi-Site Data Afflicted with Dating Error. Mathematical Geology 37:127-140

  77. Andrew
    Posted Jan 22, 2008 at 10:17 AM | Permalink

    Gee, conard, now I’m embarrassed. I guess I should have known better, though.
    Craig, I agree: whether we can pin down the exact temperature of the globe in, say, 1492, or not, models and policies will be calibrated to these proxies (I’m sorry to say they probably won’t, in your case).
    Bender, yes, we are all glad that Craig has been so helpful. Well, most of us. Some people decided when they saw the MWP that they didn’t like it. I always look at proxies on their relative merits, and of course my respect for the data habits of the creators, to decide whether they can be trusted. Actually, when I first saw the Hockey Stick, I thought, okay, this looks serious. Then I looked into it, and I was shocked at what I found. If someone produced such a graph to which no obvious objections could be raised, I might take notice. Heck, if Craig had done so, I would have taken notice. I don’t like trees, but I have nothing against other things. But trees can’t be trusted, in my opinion, whether they show a MWP or not.

  78. Carrick
    Posted Jan 22, 2008 at 10:28 AM | Permalink

    SOD:

    but i notice as well that the temperature increase MEASURED in the 20th century is 2 to 3 times bigger than the one in this MDW.

    Not sure we can really say that, not having a comparable set of measurements during the MDW.

    But I’ll point out to you that the models themselves say that anthropogenic warming didn’t initiate until around 1975 (before that, according to the models, the forcing from anthropogenic CO2 was balanced by anthropogenic sulfate emissions).

    So according to the models, more than half of this “unprecedented” warming is entirely natural. Statistically, the slope for the temperature rise in the first half of the 20th century is indistinguishable from the second half, so there’s not much of an argument to be made about “unprecedented”, unless you want to use it as a synonym for “never before recorded”.

  79. Francois Ouellette
    Posted Jan 22, 2008 at 10:38 AM | Permalink

    Craig,

    I haven’t read your paper, but a thought comes to mind. If one assumes that two series should somehow be correlated when there is no dating error, then couldn’t one do a convolution of two series, find the time where the convolution is maximum, and use it to “synchronize” the series (of course also assuming that it’s not a scaling error)? Of course you’d have to stay within the error range of the dating. I realize this may be seen as an ad-hoc procedure, but it would be interesting to see the result. Actually, if one has multiple series, one could compare the convolution results between the various combinations, and see if they match at all, and use this as a criterion to justify the procedure. Have you done anything like that?
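
    In R terms the core of that suggestion might look like the sketch below (a and b are hypothetical smoothed series on a common annual grid; strictly this uses the cross-correlation via ccf rather than a convolution proper):

      # Find the lag (within the plausible dating error) that maximizes the
      # cross-correlation between two series; shift one series by that lag
      # before averaging.
      best_lag <- function(a, b, max_lag = 100) {
        cc <- ccf(a, b, lag.max = max_lag, plot = FALSE)
        cc$lag[which.max(cc$acf)]   # lag, in time steps, of peak correlation
      }
      # e.g. k <- best_lag(a, b); then re-date b by k steps and compare pairings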

  80. Craig Loehle
    Posted Jan 22, 2008 at 10:49 AM | Permalink

    Francois Ouellette says:

    The data consist of samples, some of which have been dated. At each sample that is dated, there is some dating error. Samples in between the dated ones are given a date based on an age model (imputed age with depth in the sediment). The errors make the age model incorrect in subtle ways, like stretching a slinky in some spots and compressing it in others. However, I was able to show in the paper I cite above that if the several series are averaged, the peaks & troughs of the mean will line up pretty well with the true peaks & troughs, but the amplitude will be compressed. This is in fact sort of what you are suggesting. Email me and I’ll send a reprint.

  81. Andrew
    Posted Jan 22, 2008 at 11:11 AM | Permalink

    And, Carrick, if I’m not mistaken they also can’t produce a MWP that is as warm or warmer than the present. So that means that a reliable reconstruction over the last two millennia that showed such a thing would invalidate those same models. But the reliable aspect is the trouble, isn’t it? Because that can be contested.

  82. Keith Herbert
    Posted Jan 22, 2008 at 11:12 AM | Permalink

    sod 59:

    [t]o achieve the result of a distinctive MWP, tree ring proxies had to be left out. we should keep that in the back of our minds when evaluating the results.

    This is backward, isn’t it? It should read, “To eliminate global MWP, tree ring proxies had to be included.”

  83. Andrew
    Posted Jan 22, 2008 at 11:14 AM | Permalink

    Also, Keith, why do you suppose sod loves trees so much? Oh wait. If you’re clever, you already see why that’s a rhetorical question.

  84. Steve McIntyre
    Posted Jan 22, 2008 at 11:36 AM | Permalink

    #59. sod says:

    to achieve the result of a distinctive MWP, tree ring proxies had to be left out

    As I’ve pointed out many times, this is untrue. It depends what you select – if you use Yamal instead of Polar Urals update, you can get Modern higher than MWP and vice versa; similarly with Indigirka versus bristlecones; or depending on Mann PC1 bristlecones versus Ababneh bristlecones.

  85. MarkW
    Posted Jan 22, 2008 at 11:42 AM | Permalink

    But trees can’t be trusted

    Sounds like a personal issue.

  86. Andrew
    Posted Jan 22, 2008 at 12:03 PM | Permalink

    MarkW, it is, which is why I added “in my opinion”.

  87. yorick
    Posted Jan 22, 2008 at 12:12 PM | Permalink

    I wonder if using the median for each time point might be useful? Maybe I will try it. One thing is for certain: a LOT of high-resolution information is lost in borehole data. The values converge on the average over time as the heat spreads in the ice, clipping hot periods and filling cold periods. To make inferences about rate of change from data like this is pointless. It also speaks to the impossibility of claiming “unprecedented” warming, when so much information is known to be lost. It is self-evident, when you think about it, that the GRIP record has smoothed a lot of peaks and valleys out of the data.

  88. MarkW
    Posted Jan 22, 2008 at 1:45 PM | Permalink

    Andrew,

    Do you have trust issues with other plant forms? Are you worried that the ficus is spying on you? ;*)

  90. Posted Jan 22, 2008 at 1:49 PM | Permalink

    Mann’s hockey stick is now officially dead.

    Long live the Loehle ‘plow’ complete with re-established MWP and LIA.

    Regard

    KeinUK

  91. Peter D. Tillman
    Posted Jan 22, 2008 at 2:07 PM | Permalink

    Re: Sod, MDW

    MDW is some variant of MWP?? Mighty Damn Warm?

    Probably not
    MDW Mixed Drink World
    MDW Marktwerking Deregulering Wetgevingskwaliteit ***
    MDW Migrant Domestic Workers ***
    MDW Mass Destruction Weapons **
    MDW macromolecular dry weight **

    Inquiring acronymologists want to know!
    PT

  92. hu.mcculloch
    Posted Jan 22, 2008 at 2:13 PM | Permalink

    Here’s a color version of Fig. 2 of the corrected note (I hope):

  93. Darwin
    Posted Jan 22, 2008 at 2:14 PM | Permalink

    http://www.climateaudit.org/?p=2641#comment-201746
    Sod does not like trees; sod cannot grow under trees; sod needs sunlight. Let it shine in.

  94. yorick
    Posted Jan 22, 2008 at 2:18 PM | Permalink

    Bayesian statisticians argue that even when people have very different prior subjective probabilities, new evidence from repeated observations will tend to bring their posterior subjective probabilities closer together. However, others argue that when people hold widely different prior subjective probabilities their posterior subjective probabilities may never converge even with repeated collection of evidence. These critics argue that worldviews which are completely different initially can remain completely different over time despite a large accumulation of evidence.

    – Wikipedia on Bayesian inference

  95. hu.mcculloch
    Posted Jan 22, 2008 at 3:05 PM | Permalink

    And here are the 18 corrected and smoothed series used:

    Individual series and their errors about the reconstruction are available in the SI on my page at http://www.econ.ohio-state.edu/jhm/AGW/Loehle.

  96. Andrew
    Posted Jan 22, 2008 at 3:26 PM | Permalink

    MarkW, you can never be too careful about those fici. 😉

    Darwin, Peter, you’re both hilarious.

    More seriously…

    Individual proxies show quite a lot of, er, diversity. Hard to tell, but are some the same color?

  97. jae
    Posted Jan 22, 2008 at 3:35 PM | Permalink

    Hu: what is the green one; it’s really wild!

  98. woodentop
    Posted Jan 22, 2008 at 3:53 PM | Permalink

    “Have the green one washed and brought to my tent!”

  99. Andrew
    Posted Jan 22, 2008 at 4:09 PM | Permalink

    Guys, if you look closely, it appears that there is more than one green one. Maybe that’s why?

  100. Posted Jan 22, 2008 at 4:16 PM | Permalink

    Jae #97 writes,

    Hu: what is the green one; it’s really wild!

    Actually, there are 4 or 5 green series. I tried unique colors, but they mostly looked like mud, so I went back to 4 bright colors for this chart. The wildest one is Cronin (-4 at 1500, +3 at 150 and 1900, -3 at 200), but some of them are Holmgren (+3 at 850, +2.5 at 1500). See my SI at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/ for individual graphs.

    As I show in my SI, Cronin (and deMenocal) have such high variance that they actually inflate the variance of the unweighted average. Culling these two out substantially improves the fit, though the published correction sticks with an unweighted mean of all 18 series, as in Craig’s original paper. Weighted LS effectively gives these two series 0 weight without actually throwing them out.
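
    The mechanics of that down-weighting are just inverse-variance weights; schematically, in R (a sketch, not the SI code; X is a hypothetical time-by-series matrix, s2 the estimated error variances):

      w     <- (1 / s2) / sum(1 / s2)  # noisy series get tiny weights automatically
      recon <- drop(X %*% w)           # weighted mean series, one value per year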

  101. Peter Thompson
    Posted Jan 22, 2008 at 4:17 PM | Permalink

    sod,

    Since “unprecedented” seems important to you, I would submit that there is one temperature indication in #92 which is unprecedented, no matter what you claim vis-à-vis the instrumental record for the 21st century, in the last 2000 years, and it occurs c. 1600-1700. All I can say is thank goodness something happened to stop that dramatic trend, unprecedented in recent history, or the few pitiful humans left not dead from exposure or starvation would doubtless be fighting over the scarce resources available at the equator. I must admit, however, that the idea of Toronto being covered by two miles of ice has a certain attraction……

  102. Posted Jan 22, 2008 at 4:37 PM | Permalink

    UC (#57) writes,

    I appreciate your efforts, Craig and Hu! You might get some criticism of error model

    X_{jt} = \mu_t + \varepsilon_{jt}

    as the scale error term

    X_{jt} = \mu_t + \varepsilon_{jt} + \epsilon_j \mu_t

    is missing (whether it is significant or not brings us back to source publications). But anyway, the Team should learn a lot from your clear and sensible statistical error analysis.

    It’s true that one limitation of my SE’s is that I assumed constant variance across time in each series’ errors. The cross sectional variance across series was much more important, so I focussed on it for the present.

    Future efforts might try to quantify the time variation without abandoning the cross sectional heteroskedasticity by going back to the original calibrations and seeing if the authors provided appropriately time-varying se’s (which should ordinarily vary with the value of the proxy and therefore reconstructed Temperature). The authors’ variance is not necessarily the true measurement variance, but the measurement variance could be modeled as some series-specific constant, to be estimated, times or plus the authors’ estimated variance. The total variance would then be this plus a locality term that depended in some way to be determined on the distances from other sites.

    Holmgren does provide time-varying confidence intervals in a graph Craig sent me, though I’m not sure whether this was published or just something she sent him privately. I suspect that most authors provided no se’s at all. Farmer and deMenocal do give measurement standard error estimates in their articles, based on the recommendations of the 1970’s work they base their calibration on, but my estimated standard errors for them are much lower, despite the fact that my errors contain both measurement error and locality error, so it appears that their source (I don’t have their article handy at the moment) was too conservative in its se estimates.

    In fact, “classical” (CCE) calibration estimates are median estimates rather than mean estimates, and their confidence intervals, if computed correctly a la Brown, should ordinarily be asymmetrical, depending on which side of the mean of the calibration period you are on. There is therefore no unambiguous se. However, it is probably adequate when aggregating just to treat the point estimate as a mean-unbiased estimate of the true value, and then to estimate a single standard error by taking the geometric mean of the distances to the two 95% CI limits and dividing by 2.
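
    In code, one reading of that rule of thumb for recovering a single se from an asymmetric 95% CI:

      # geometric mean of the distances from the point estimate to the two
      # 95% CI limits, divided by 2
      se_from_ci <- function(est, lo, hi) sqrt((est - lo) * (hi - est)) / 2
      se_from_ci(0.3, 0.1, 0.7)   # e.g. estimate 0.3, CI (0.1, 0.7): about 0.141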

    I printed out Shen’s article. Thanks!

  103. Craig Loehle
    Posted Jan 22, 2008 at 4:49 PM | Permalink

    I wanted to publicly thank Hu for his help. One of the problems with this type of analysis is that there is no rigorous method for dealing with error. For example, in a sampling experiment in introductory stats there are formally proven methods for proper estimation. Here we have sample error (different with time and for each data set), dating error passed through an age-vs-depth model (usually without any formal stats on it) in many cases, and a regression equation for temperature as a function of the proxy with its own errors. We can have opinions about the right thing to do, and we can do something that seems justified, but a proof that is airtight that we did the right thing is lacking, as far as I know. For this data we now have 3 confidence interval approaches: jackknife, bootstrap (albeit with the old analysis) and now Hu’s contribution. They give similar results, which is encouraging.
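
    For readers unfamiliar with the resampling approaches mentioned, a stripped-down bootstrap of the mean curve might look like this in R (a sketch under simplifying assumptions such as complete data and equal weights, not the published procedure):

      # Resample which proxy series enter the unweighted mean, then read CI
      # bands off the bootstrap distribution at each time step. X is a
      # hypothetical time-by-series matrix of smoothed, calibrated proxies.
      boot_ci <- function(X, n_boot = 1000, probs = c(0.025, 0.975)) {
        k     <- ncol(X)
        boots <- replicate(n_boot,
                           rowMeans(X[, sample(k, k, replace = TRUE), drop = FALSE]))
        t(apply(boots, 1, quantile, probs = probs))  # lower/upper band per year
      }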

  104. SteveSadlov
    Posted Jan 22, 2008 at 5:03 PM | Permalink

    RE: “Peter Thompson says: January 22nd, 2008 at 4:17 pm” – Taking the 30 Years War, Cromwell, the English Civil War, the 7 Years War (including the so-called “French and Indian War”), the French Revolution and the Reign of Terror as leading indicators of that trajectory, I’d have to say I share your view.

  105. woodentop
    Posted Jan 22, 2008 at 5:30 PM | Permalink

    #104 – whoops! The French Revolution and accompanying terror kicked off in 1789 – still cold though. An interesting insight into the cold weather in the UK in the 17th Century is provided (qualitatively) in Samuel Pepys’s Diary, which covers 1660-1669 London. For a taste of the bitter winters in the 18th Century, I’d recommend Boswell’s Edinburgh Diaries.

  106. poid
    Posted Jan 22, 2008 at 6:07 PM | Permalink

    Sod:

    If you wish to make a better comparison, the average anomaly for the last 29 years using monthly GISS data is 0.4. This isn’t directly comparable with the reconstruction, as they are centred over different periods for starters, but it gives you some idea of the magnitude of the current anomaly.

    Do you really think this value is without precedent when you compare it to the reconstruction? If you do, please explain, because you simply cannot make that assertion. Even using the average of the last decade (around 0.5) you cannot make that assertion!

    This is a numerical illustration of what Dr Loehle has been trying to say: the average of a period will tend to flatten out the peaks and troughs, and the more extreme short-lived anomalies for the period go unseen. So smoothed reconstructions should not be compared with annual spikes and troughs.

    Certainly, this reconstruction shows that you simply cannot claim that the modern period is certainly, or even “very likely” or whatever the IPCC would say, the warmest period in the last thousand or miiiiiillyun years. And the confidence intervals betray the “certainty” that the IPCC and RC types have attached to their assertions regarding past temperatures.

  107. Susann
    Posted Jan 22, 2008 at 8:45 PM | Permalink

    First off, kudos to Craig Loehle for listening to criticisms and addressing them in a correction.

    The graph looks pretty and to a layperson, fairly convincing that there was a MWP which was possibly higher than the CWP, at least up to 1935. However, I’m not much of a statistician so I don’t feel confident I can judge what can and can’t be concluded from the graph. I would be grateful if someone with real savvy would explain to me the 95% CI in the graph and what it represents. Does this mean that we are 95% certain that the temperature anomaly in AD 200 was between -0.6 ish and -0.1 ish?

    Could I conclude from that graph that, say, the temp anomaly in 840 AD could have been as low as 0.28 and in 1935 it could have been as high as 0.48 and thus suggest that the CWP could still be higher than the MWP? How do I know which is the most likely case?

    Also, from what I read, the NAS panel concluded that we have more certainty about proxy temperature data in the last 400 years than any time before. IOW, less certainty for the period from 1600 AD back to 1000 AD and very little certainty (or more uncertainty) from 1000 back to 0 AD due to the proxy data being less reliable the farther back in time we go. If this is the case, shouldn’t the uncertainty be higher the farther back we go? Is our ability to compare the two periods suspect as a consequence?

    IOW, the data for temps in the CWP are more “certain” than the data for temps in the MWP. So even if the graph appears to show a MWP with temp anomaly higher than that in the CWP, since there is greater uncertainty for data from that earlier period, are we able to draw any conclusion about the two periods relative to each other? Couldn’t the temp be much higher — or lower — in the MWP than the data suggests due to this higher uncertainty?

    Finally, when thinking about what the existence of a MWP means to our understanding of the CWP, an analogy (probably poor, but so be it): I had a fever back in 1980 of 38.5. I had strep throat. In 1996 I had a fever of 39 C. I had bronchitis. I currently have a fever of 38. Must I conclude it is either strep throat or bronchitis because those were the sources of my previous fevers? Is it not possible that my current fever is caused by a totally unrelated microbe or even something non-microbial? The existence of a MWP only says that there have been periods when the temperature was high relative to the average temperature. It does not say what the cause was of the high temperature nor that the temperature increase today must be “natural”.

    We still have to answer the question of what is causing the current warming. If we rule out known natural forcers (let alone unknown natural forcers), such as solar or volcanoes, we appear to be left with known non-natural forcers — anthropogenic forcers, such as AGHGs, land use, UHIs, etc.

    The paleoclimate proxy temperature reconstructions are very interesting and a lesson in science for us laypeople, but given the uncertainties in the data and science at this stage in its development, I don’t know how much help they give us in answering the most pressing questions about the CWP, its causes and prospects, let alone our responses.

  108. MrPete
    Posted Jan 22, 2008 at 8:50 PM | Permalink

    Susann — AFAIK, the prior NAS statements about uncertainty were based on prior — BCP-based — research. Loehle’s research changes the game a bit, it would seem.

  109. Alan Woods
    Posted Jan 22, 2008 at 9:03 PM | Permalink

    Re: 105

    According to Hubert Lamb in his classic “Climate, history and the modern world” the decade leading up to the French Revolution had been particularly cold and wet, leading to many crop failures. IIRC, the price of bread rose something like 100-fold. Peasants weren’t happy and I think the price of cake was out of their reach too.

  110. jae
    Posted Jan 22, 2008 at 9:23 PM | Permalink

    Susann says:

    The existence of a MWP only says that there have been periods when the temperature was high relative to the average temperature. It does not say what the cause was of the high temperature nor that the temperature increase today must be “natural”.

    That is exactly right, Susann. Now think some more about it.

  111. Craig Loehle
    Posted Jan 22, 2008 at 9:29 PM | Permalink

    Susann: the absence of a MWP was taken to mean that there were no possible natural factors, and therefore that recent warming had to be human-caused. The existence of a MWP means that this chain of reasoning is not valid.

    The error bars are indeed wide, which means it is difficult to pin down exact numbers. This means more data would be helpful (i.e. it is not “settled”).

  112. bender
    Posted Jan 22, 2008 at 9:57 PM | Permalink

    Hi Susann,

    I will try to answer your questions.

    The graph looks pretty and to a layperson, fairly convincing that there was a MWP which was possibly higher than the CWP, at least up to 1935. However, I’m not much of a statistician so I don’t feel confident I can judge what can and can’t be concluded from the graph. I would be grateful if someone with real savvy would explain to me the 95% CI in the graph and what it represents. Does this mean that we are 95% certain that the temperature anomaly in AD 200 was between -0.6 ish and -0.1 ish?

    Yes. IF you were to accept that there is no error in the proxy calibration, which is of course unrealistic. Add in that error and the confidence intervals would grow wider still.

    Could I conclude from that graph that, say, the temp anomaly in 840 AD could have been as low as 0.28 and in 1935 it could have been as high as 0.48 and thus suggest that the CWP could still be higher than the MWP?

    Yes.

    How do I know which is the most likely case?

    Values at the top and bottom of the envelope are equally likely. More data are required to narrow the envelope.

    Also, from what I read, the NAS panel concluded that we have more certainty about proxy temperature data in the last 400 years than any time before. IOW, less certainty for the period from 1600 AD back to 1000 AD and very little certainty (or more uncertainty) from 1000 back to 0 AD due to the proxy data being less reliable the farther back in time we go. If this is the case, shouldn’t the uncertainty be higher the farther back we go? Is our ability to compare the two periods suspect as a consequence?

    Excellent question. Tree-ring based proxies typically have more modern samples than ancient samples, and confidence intervals get smaller the more quality data you have. That is why the NAS panel concluded what they did: because they were considering proxies that included tree-ring based proxies. That is precisely what Loehle eliminated. Therefore his reconstruction does not display that idiosyncratic feature of confidence intervals narrowing over time.

    IOW, the data for temps in the CWP are more “certain” than the data for temps in the MWP. So even if the graph appears to show a MWP with temp anomaly higher than that in the CWP, since there is greater uncertainy for data from that earlier period, are we able to draw any conclusion about the two periods relative to each other?

    You can compare the apples of the past with the oranges of today only if you have a robustly estimated confidence interval. If you accept that Loehle & McCulloch’s confidence interval is robust, then there are insufficient data to reject the null hypothesis that the two periods exhibit equal temperatures. They appear to be equal.

    Couldn’t the temp be much higher — or lower — in the MWP than the data suggests due to this higher uncertainty?

    Could be, yes. Not likely, however.

    We still have to answer the question of what is causing the current warming. If we rule out known natural forcers (let alone unknown natural forcers), such as solar or volcanoes, we appear to be left with known non-natural forcers — anthropgenic forcers, such as AGHGs, land use, UHIs, etc.

    Correct. And don’t forget natural forces that we don’t yet fully understand: the sun, the ocean. That the GCMs cannot predict ENSO, PDO, etc. bothers me. That those circulatory modes are post-hoc inventions, not necessarily persistent through time, also bothers me. It bothers Gavin Schmidt too. But me more than him.

    The paleoclimate proxy temperature reconstructions are very interesting and a lesson in science for us laypeople, but given the uncertainties in the data and science at this stage in its development, I don’t know how much help they give us answering the most pressing questions about the CWP, its causes, prospects, let alone our responses.

    They also help us determine whether or not the current warming is “dangerous”. If we’ve seen it before, it’s less alarming. Polar bears have gotten through it before, for example.

    What these studies do not tell us is whether the warming trend will continue or whether some hitherto latent negative feedback process will kick in to cap the warming. A process such as Christy’s ocean clouds. That’s where the GCMs must come in. (Free the code! Liberate the codewriters! Include the cloudmen!)

    But seriously. Keep asking great questions, Susann.

  113. Curt
    Posted Jan 22, 2008 at 11:43 PM | Permalink

    #104 SteveSadlov:

    Don’t limit yourself to Western events. I’ve seen credible estimates that China lost over 40% of its population in a couple of decades during the coldest periods of the 1600s.

  114. Posted Jan 23, 2008 at 1:52 AM | Permalink

    Hu (#102)

    It’s true that one limitation of my SE’s is that I assumed constant variance across time in each series’ errors. The cross sectional variance across series was much more important, so I focussed on it for the present.

    That’s the right approach; we cannot correct everything at once. I mentioned scaling errors because that’s the topic where the MBH98 error analysis really breaks down. Loehle’s approach is very different (no massive unknown scale matrix B to estimate). In addition, any nonsense proxy can be modeled as scaling error (B = 0, though the estimate of it will not be zero); that’s why you get a high calibration RE in MBH98 even with white-noise input.

    In fact, “classical” (CCE) calibration estimates are median estimates rather than mean estimates, and their confidence intervals, if computed correctly a la Brown, should ordinarily be asymmetrical, depending on which side of the mean of the calibration period you are on.

    The relation between the CCE and the central point of Brown’s confidence region

    (Y'-\hat{\alpha}-\hat{B}X')^TS^{-1}(Y'-\hat{\alpha}-\hat{B}X')/\sigma^2(X')\leq (q/v)F(\gamma)

    is something I did get wrong in here (silly me, but I’m a bloody engineer, not a statistician 🙂 ). That central point is not the ML estimator, and the difference between the CCE and that central point is, as per Brown,

    C^{-1}D ,

    where

    C=\hat{B}S^{-1}\hat{B}^T-(q/v)F(\gamma)(X^TX)^{-1}

    and

    D=\hat{B}S^{-1}(Y'-\hat{\alpha})

    Now let \gamma \rightarrow 1 and then the confidence region degenerates to CCE. The central point of confidence region is dependent on the confidence coefficient. To live is to learn.

    However, it is probably adequate when aggregating just to treat the point estimate as a mean-unbiased estimate of the true value, and then to estimate a single standard error by taking the geometric mean of the distances to the two 95% CI limits and dividing by 2.

    I used this method to propagate errors from TPCs to NH-average in here.
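
    For concreteness, Hu’s rule can be written as a one-liner; the numbers below are purely hypothetical:

        import math

        def se_from_asymmetric_ci(point, lo95, hi95):
            """One SE from an asymmetric 95% CI: the geometric mean of the two
            one-sided distances, divided by 2 (a 95% CI spans ~2 SE per side)."""
            d_lo, d_hi = point - lo95, hi95 - point
            return math.sqrt(d_lo * d_hi) / 2.0

        print(se_from_asymmetric_ci(0.10, -0.45, 0.75))  # ~0.30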

  115. MarkW
    Posted Jan 23, 2008 at 5:44 AM | Permalink

    I think that concentrating on only the MWP may be a mistake. The evidence shows that, further back in time, there have been a number of warm periods, some noticeably warmer than the MWP. Off the top of my head, there was the Minoan Warm Period of around 1000 BC, and the Roman Warm Period of around AD 1. Then the MWP around AD 1000. Interesting periodicity there as well.

  116. kim
    Posted Jan 23, 2008 at 5:52 AM | Permalink

    S, your parenthetical ‘let alone unknown natural forcers’ makes no sense. Prove that you are thinking clearly here. Your intelligence is manifest; what is causing your apparent blindness to the meaning of the existence of the MWP and the LIA?
    ==============================

  117. Andrew
    Posted Jan 23, 2008 at 6:18 AM | Permalink

    MarkW, noting the periodicity, it sounds sort of like Fred Singer’s 1500±500-year cycle. Interesting. But I haven’t actually seen all the evidence for these periods, except a single proxy in an essay by Bob Carter. How extensive is the evidence for them? I sometimes get the impression that we have very few proxies over even the last two millennia. Am I mistaken in either noticing a sparsity, or in extrapolating this to mean there are few which extend back farther?

  118. PaulM
    Posted Jan 23, 2008 at 7:13 AM | Permalink

    Why does the current warming have to have a cause?

    Raven is quite right. This is really important and needs to be said again and again. Complicated nonlinear systems fluctuate in an apparently irregular way, even with constant forcing. We don’t ask what ’caused’ it to rain today or what ’causes’ a gas fire flame to flicker. The time-scale of the fluctuations is longer in the case of climate, simply because the space-scale is larger and the processes (eg variation in ice-cover) are slower. Craig Loehle and others have found all sorts of past wiggles similar in magnitude to the present one, and it would be absurd to attempt to argue a ’cause’ for all of these. It is disappointing that so few people appreciate this simple point.

    And of course the other thing that needs to be said again and again is, looking at the data for the last few years, ‘what current warming?’

  119. Craig Loehle
    Posted Jan 23, 2008 at 7:25 AM | Permalink

    re: andrew
    there are few proxies which go back farther. The really long records (50,000 yrs say) tend to have sparse sampling (every 500 yrs) due to limitations of budgets or due to low sedimentation rates, so you can’t use those for these questions.
    Bender: thanks for answering Susann exactly correctly, IMO.

  120. Francois Ouellette
    Posted Jan 23, 2008 at 7:28 AM | Permalink

    Craig,

    I’ve read your 2005 paper with great interest. A couple of questions:

    The reconstruction of the “true” curve seems to depend on having a “model”. You give a first example with a sine curve, and then, to see if more complex curves can be reconstructed, you go on to a 2-component signal. But an actual temperature (or other) signal is likely to contain many more components, depending of course on the temporal resolution one wants. What is the limitation, in terms of number of components, of the nonlinear analysis tool you are using? And how does the choice of a given model influence the reconstruction if you’ve got the model wrong?

    Also, when trying to reconstruct a global mean using multiple regional series, one cannot assume that all series would have the same shapes (otherwise one would be enough…). So you would have to reconstruct each series individually, it seems.

    The point I was making above was inspired by the use of CDMA in communications, where you can extract an encoded signal by correlating it with the various codes. If the codes are orthogonal, only one code will give a strong correlation. I was just wondering if something similar can be used to compare a given series with the purported global mean, assuming that when there are no errors, the signal will be maximum. Maybe one can devise some algorithm that would maximize the signal by searching through the “error” space.

  121. Craig Loehle
    Posted Jan 23, 2008 at 7:39 AM | Permalink

    Francois: In my 2005 paper I show that it works with a 2-component model. I did not try more. To find the best model, I have used an overdetermined model (too many curves), and the extraneous ones end up with coefficients of 0; or you can do a stage-wise fit. I did not try these methods with this data because I have found in prior papers that reviewers object to the idea of cycles in climate data (even though there are many publications on this).
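
    As a toy illustration of the overdetermined-fit idea (not the paper’s actual code): generate a two-cycle signal, regress it on too many candidate cycles, and the extraneous amplitudes come out near zero:

        import numpy as np

        t = np.arange(2000, dtype=float)
        # a 2-component signal: 1000-yr and 210-yr cycles
        signal = 0.5 * np.sin(2 * np.pi * t / 1000) + 0.2 * np.sin(2 * np.pi * t / 210)

        periods = [1500, 1000, 500, 210, 90]  # candidate cycles; only two are real
        X = np.column_stack([f(2 * np.pi * t / p) for p in periods for f in (np.sin, np.cos)])
        coef, *_ = np.linalg.lstsq(X, signal, rcond=None)

        for p, (a, b) in zip(periods, coef.reshape(-1, 2)):
            print(f"period {p:5d}: amplitude {np.hypot(a, b):.3f}")
        # only the 1000- and 210-year terms come out with non-trivial amplitude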

  122. bender
    Posted Jan 23, 2008 at 8:10 AM | Permalink

    reviewers object to the idea of cycles in climate data (even though there are many publications on this)

    There’s more to the objection than just that.

  123. steven mosher
    Posted Jan 23, 2008 at 9:35 AM | Permalink

    RE 126.

    In communications you KNOW there is a source attempting to communicate with a receiver.

    You can test various approaches. You can verify.

    In the real world mother nature may be babbling with no intention whatsoever.

  124. bender
    Posted Jan 23, 2008 at 9:46 AM | Permalink

    In the real world mother nature may be babbling with no intention whatsoever.

    Nonlinear systems are prone to babbling. Sometimes it goes on and on and on. That’s LTP. Sort of like a troll that won’t zip it.

  125. Francois Ouellette
    Posted Jan 23, 2008 at 9:56 AM | Permalink

    #131 Steven

    In communications you KNOW there is a source attempting to communicate with a receiver.

    I do KNOW that! The idea here is that if you take an average of many series with random dating errors, then maybe it is possible to correct the errors in an individual series by maximizing its correlation with the average, assuming that the average is closer to the “true” signal than the individual series. That wouldn’t work if all series had the same bias, of course. But here, contrary to CDMA (Code Division Multiple Access), the series are not orthogonal, only partially so. I’m just throwing ideas around here. Craig’s 2005 paper is about retrieving the “true” signal in the presence of errors. There is a lot of work in communications (more or less my field) about retrieving signals, and I’m sure some of it is applicable to some extent to climatic time series. Some here are probably more knowledgeable than I am, I admit.
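
    A minimal sketch of the idea, with toy data (a proxy mis-dated by 37 years, recovered by maximizing its correlation with the stack average; real proxies would be far noisier):

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(1000)
        truth = np.sin(2 * np.pi * t / 400)
        stack = truth + rng.normal(0, 0.1, t.size)                # average of many series
        proxy = np.roll(truth, 37) + rng.normal(0, 0.3, t.size)   # mis-dated by 37 yrs

        lags = np.arange(-100, 101)
        scores = [np.corrcoef(np.roll(proxy, -k), stack)[0, 1] for k in lags]
        best = lags[int(np.argmax(scores))]
        print(f"estimated dating error: {best} years")            # ~37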

  126. Phil.
    Posted Jan 23, 2008 at 10:06 AM | Permalink

    Re #130

    Accepting that Craig’s study does show warming at least similar to, if not greater than, the present warming during medieval times, why do you insist that the current warming MUST be due predominantly to CO2?

    Can you not even consider that perhaps at least a significant portion of the recent warming can be explained by natural forces that cannot be easily quantified or predicted?

    You’re begging the question; by definition, if you can’t quantify the ‘natural forces’ or use them for prediction, they can’t explain anything. It’s just handwaving!
    However, the increase in CO2 and other radiatively active gases does change the heat transfer through the atmosphere in quantifiable and predictable ways, so why can’t you accept that ‘a significant portion of the recent warming can be explained by’ that? Why do you feel that it’s necessary to deny that and seek other ‘natural forces’ that you can’t quantify instead?

    Your intransigence is beginning to appear dogmatic.

    This would appear to be a good description of your position.

  127. Larry
    Posted Jan 23, 2008 at 10:13 AM | Permalink

    I think Francois may be on to something, but I suspect that the statisticians have something that’s equivalent. CDMA does provide the ability to divine a signal from below the noise level, provided there’s enough redundancy and bandwidth. That’s how pretty much all spread spectrum (cell phones, wifi, bluetooth, etc.) these days is done. But I don’t think it has any magical properties that aren’t inherent in the law of large numbers.

  128. Steve McIntyre
    Posted Jan 23, 2008 at 11:04 AM | Permalink

    Please limit comments on this thread to things connected to Loehle’s proxies.

  129. bender
    Posted Jan 23, 2008 at 11:05 AM | Permalink

    I live in a world of rock-hard causality, but every now and then I find myself perplexed by something it can’t explain. Like clouds, actually.

    I live in a different world of nonlinear complexity where almost nothing can be explained, everything is perplexing, and you constantly have to battle people who think they see relationships that, upon careful examination, do not exist. In these cases statistics is your best friend. It prevents you from believing things that you wish were true.

    That is why I am pleased with the Loehle correction. It squares up the debate on both sides. There are simply not enough data to decide. Back to the field we go. Sort of like MBH should have been doing in 1998 with a bcp update.

  130. bender
    Posted Jan 23, 2008 at 11:10 AM | Permalink

    #129: crosspost with #128. apologies.

  131. Phil.
    Posted Jan 23, 2008 at 11:12 AM | Permalink

    Re #130

    “However the increase in CO2 and other radiatively active gases does change the heat transfer through the atmosphere in quantifiable and predictable ways so why can’t you accept that ‘a significant portion of the recent warming can be explained by’ that?”

    This is a real question from a layman. From what I have read, it is my understanding that feedback effects are posited to play a major role in AGW warming and that these feedbacks are not well understood. Doesn’t this argue against the current scientific predictability of the effects of any specific change in CO2 concentration?

    What I said was that an increase in GHGs has a predictable effect on heat transfer through the atmosphere, which is correct; the response of the earth has to be to cancel out that change, for want of a better phrase, ‘at the top of the atmosphere’. Exactly how that response is translated to the surface is the more difficult part, involving inter alia the feedbacks you mention.

  132. Andrew
    Posted Jan 23, 2008 at 11:20 AM | Permalink

    Okay, on topic. It seems that in order to compare this with the other reconstructions (like the spaghetti graph, for example), we would need to center them, and the instrumental data, on a different period than 1960–90 (which is the usual, right?) and instead on, say, 1920–30 or something. That might give us a better idea of how Loehle compares to the rest. And it would make for more flavorful spaghetti. 😉

  133. Patrick M.
    Posted Jan 23, 2008 at 11:21 AM | Permalink

    re 112 (bender):

    Bender said:

    Values at the top and bottom of the envelope are equally likely. More data are required to narrow the envelope.

    To clarify your answer a bit for me, (a layman), values dead center in the envelope are most likely and the probability decreases towards the top and bottom of the envelope equally, correct?

    Is there a standard formula for probability based on its position in the envelope?

  134. MrPete
    Posted Jan 23, 2008 at 11:27 AM | Permalink

    Patrick M — good question. If I’ve learned my lessons correctly here, the answer is NO. Values dead center are NOT “most likely.” All we know is to N probability (usually 95%), the correct answer is somewhere — anywhere — inside the envelope. And the rest of the probable “correct” answer lies outside.

    Here’s another question I would like to ask: given that the envelope covers a long time period, can we say anything about probability vs time?

    IOW (In Other Words), how many of the following are valid claims (or even close)?

    * 95% certain that 100% of the real data fits in the envelope
    * 100% certain that 95% of the real data fits in the envelope
    * 95% certain that 95% of the real data fits in the envelope
    * 95% certain that half the data fits inside
    * 95% certain that some (any) data fits inside

    ??
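
    While waiting for an answer, here is a quick Monte Carlo I can reason from (assuming independent normal errors at each year, which is surely too generous):

        import numpy as np

        rng = np.random.default_rng(2)
        n_years, n_trials, se = 100, 10_000, 0.3
        half = 1.96 * se  # pointwise 95% half-width

        inside = np.abs(rng.normal(0, se, (n_trials, n_years))) <= half
        print(f"average fraction of years covered: {inside.mean():.3f}")              # ~0.95
        print(f"trials with every year covered:    {inside.all(axis=1).mean():.3f}")  # ~0.95**100

    On these assumptions the envelope covers each individual year 95% of the time, but it almost never covers every year at once, so the first claim on the list fails badly.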

  135. Phil.
    Posted Jan 23, 2008 at 11:32 AM | Permalink

    Re #60

    Any claim that current temperatures are unprecedented in the last 1,000 years presupposes some minimum amount of knowledge about temperatures over that period of time. Craig’s data contradicts the tree-ring reconstructions — which means, at a minimum, that our knowledge of past temperatures moves from the category of “known to be fairly flat with little variability” to “unknown because of conflicting data”.

    It doesn’t really contradict them, see:

    Several of those reconstructions show a MWP and they still used tree-ring data, a key difference appears to be that Loehle’s shows a significantly earlier (and warmer) MWP. The ‘fairly flat’ was the earliest result (10 yrs old), later results have shown more variability, and this reconstruction isn’t unusual in that regard.

  136. Phil.
    Posted Jan 23, 2008 at 11:35 AM | Permalink

    Re #135

    OK it didn’t like a .png so let’s do it as a link:
    Recons

  137. Alan S. Blue
    Posted Jan 23, 2008 at 11:38 AM | Permalink

    bender@112

    Values at the top and bottom of the envelope are equally likely. More data are required to narrow the envelope.

    The probabilities inside a confidence interval aren’t evenly distributed though. A second factor is that we can directly calculate probabilities of two separate events happening given properly constructed confidence limits.

    Restated: the statement “The year 850 was hotter than 1930” has a probability that is dramatically higher than that of the reverse.

  138. bender
    Posted Jan 23, 2008 at 11:39 AM | Permalink

    To clarify your answer a bit for me, (a layman), values dead center in the envelope are most likely and the probability decreases towards the top and bottom of the envelope equally, correct?

    yes and yes

    Is there a standard formula for probability based on its position in the envelope?

    yes – and if you had the raw data you could do it yourself. But I would simply ask Hu very nicely to plot, say, the 50%, 90% and 99% confidence intervals along with the 95%.
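
    For a normal error model, the extra envelopes are just different quantiles of the same standard error; a sketch with illustrative numbers:

        from scipy.stats import norm

        mean, se = 0.10, 0.15  # hypothetical anomaly and standard error
        for level in (0.50, 0.90, 0.95, 0.99):
            z = norm.ppf(0.5 + level / 2)  # two-sided normal quantile
            print(f"{level:.0%} CI: {mean - z*se:+.3f} to {mean + z*se:+.3f}")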

  139. MrPete
    Posted Jan 23, 2008 at 11:39 AM | Permalink

    Loehle’s also does not show the modern “out of the ballpark” spike.

    Partly that’s because of the poor stats usage in other reconstructions and graphs.

    Without proper uncertainty analysis, the graphs such as Phil has linked to really have very little to say — much less than is suggested by the “certainty” of a nice skinny line (even in a spaghetti set) or the supposed “uncertainty” interval in the original HS graph.

  140. Peter D. Tillman
    Posted Jan 23, 2008 at 11:49 AM | Permalink

    re: Little Ice Age gets BIG , 101,104,105

    Fellow SF fans who missed it might enjoy Larry Niven et al, Fallen Angels http://www.isfdb.org/cgi-bin/title.cgi?561
    –which is a novel on what-if the enviros win, shut down industry & we get a new Ice Age? Pretty entertaining, in an over-the-top way.

    This is not entirely idle speculation (OK, the novel is, but not the idea) — Ruddiman’s Plows, Plagues & Petroleum gives this idea some background & support. Geologically, almost certainly we’re in an interglacial, & 3km of ice over Toronto could be in the not-too-distant future… 🙂

    Happy reading–
    Pete Tillman

  141. bender
    Posted Jan 23, 2008 at 11:53 AM | Permalink

    Craig’s data contradicts the tree-ring reconstructions

    It doesn’t really contradict them

    Either one or both of you is wrong, or the proposition is ill-posed. This is why I asked in #36 that some spaghetti lines be plotted on top of the L&M envelope. Because you may see some of those lines straying outside the envelope. That will provide a firmer basis for starting to decide if the series are compatible.

  142. Peter D. Tillman
    Posted Jan 23, 2008 at 11:57 AM | Permalink

    Re 139, real COLD

    — and, if you’d like a REALLY SERIOUS (possible) unforeseen consequence, look no further. For fans of the Precautionary Principle 😉

    Cheers — Pete Tillman

  143. MrPete
    Posted Jan 23, 2008 at 11:58 AM | Permalink

    Bender, obviously, I learned my lesson wrong. 🙂 Sorry for dumb questions.

    I’m trying to get a gut feel for how these CI curves “work.”

    * 99% CI will be wider than the 95% CI, in a sense to “catch” more possibilities
    * Conversely, 1% CI would be quite narrow.

    Something’s making my head hurt, and hopefully I have a resolution:

    a) A 50% CI can be drawn (at least) two ways…
    — 50% in the middle
    — 50% on both outside edges (out to infinity)
    — or any other combination that covers 50% of the “probability space” (my term)

    b) A 5% CI likewise can be drawn in many places, not least on the *outside* of the 95% curve (out to infinity)

    At first glance, gut feel says the curves can be drawn any number of ways… that there’s no single “correct” curve for 95% CI or any other, other than by convention.

    To resolve: is the lower probability for the outer edge, and higher probability for the middle, due to the larger number of “probable combinations” available to the middle… a bit similar to dice-roll stats?

    If so, I can handle that. (And comparing to dice provides other help to learning common sense. E.g. one would never suggest that when rolling dice it will always come up seven. Nor seven-or-higher.)

  144. Phil.
    Posted Jan 23, 2008 at 12:05 PM | Permalink

    Re #141

    Either one or both of you is wrong, or the proposition is ill-posed. This is why I asked in #36 that some spaghetti lines be plotted on top of the L&M envelope. Because you may see some of those lines straying outside the envelope. That will provide a firmer basis for starting to decide if the series are compatible.

    The full sentence makes it clear what I was responding to (below): it implies that tree-ring reconstructions result in flat, low variability results which isn’t the case (uncertainty bounds notwithstanding).

    Craig’s data contradicts the tree-ring reconstructions — which means, at a minimum, that our knowledge of past temperatures moves from the category of “known to be fairly flat with little variability” to “unknown because of conflicting data”

  145. bender
    Posted Jan 23, 2008 at 12:08 PM | Permalink

    Mr Pete,
    The probability level is the probability that the true GMT lies in the computed, estimated interval. If you want 99% confidence, you need a huge interval to cover a large range of possibilities. If you want to be only 50% certain, you will accept a narrower range. A good question is: what probability level is required to make it such that the AD 850 and AD 1930 confidence intervals do not overlap at all? That will give you some indication of the probability that the two GMT values were different.

    There are all kinds of caveats with that kind of analysis. This is a gross description of how to think about the problem. Wegman would surely have some suggestions for fine-tuning the process.
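
    A cleaner route than eyeballing overlap is to test the difference directly. A rough sketch, assuming independent normal errors; the means and SEs below are made up for illustration:

        from math import sqrt
        from scipy.stats import norm

        m_mwp, se_mwp = 0.25, 0.15  # AD 850 tridecade mean and SE (illustrative)
        m_mod, se_mod = 0.20, 0.14  # AD 1930 tridecade mean and SE (illustrative)

        z = (m_mwp - m_mod) / sqrt(se_mwp**2 + se_mod**2)
        p = 2 * norm.sf(abs(z))
        print(f"z = {z:.2f}, two-sided p = {p:.2f}")  # nowhere near significance

    Requiring the two CIs not to overlap is more conservative than testing the difference itself, so the test above is the fairer comparison.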

  146. bender
    Posted Jan 23, 2008 at 12:12 PM | Permalink

    it implies that tree-ring reconstructions result in flat, low variability results which isn’t the case

    You don’t know that, however. That is precisely the question being asked by some!

    uncertainty bounds notwithstanding

    Uncertainties notwithstanding!? That’s the baby in the bathwater. You can’t throw that out! It’s fully half the analysis!

  147. Raven
    Posted Jan 23, 2008 at 12:18 PM | Permalink

    Phil. says:

    However the increase in CO2 and other radiatively active gases does change the heat transfer through the atmosphere in quantifiable and predictable ways so why can’t you accept that ‘a significant portion of the recent warming can be explained by’ that? Why do you feel that it’s necessary to deny that and seek other ‘natural forces’ that you can’t quantify instead?

    Your argument falls apart because we can’t really quantify the effect that CO2 has either. Many of the estimates of CO2 sensitivity are based on analysis of actual temperature trends with the assumption that CO2 is the primary driver. If natural variations are a significant factor, then any such estimates of CO2 sensitivity must be reduced accordingly.

    Your argument boils down to: ‘we can’t determine whether natural variations are a factor, so we must ignore them’. I don’t find this argument compelling enough to justify the massive social changes demanded by alarmists.

  148. Peter D. Tillman
    Posted Jan 23, 2008 at 12:26 PM | Permalink

    Re 115, other Hot Times

    The one that’s getting a lot of attention among paleoclimatologists lately is the Paleocene–Eocene Thermal Maximum http://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum

    —which truly was unprecedented in “recent” geological time: “Sea surface temperatures rose between 5 and 8°C over a period of a few thousand years, and in the high Arctic, sea surface temperatures rose to a sub-tropical 23°C, or 73°F.”

    This has been interpreted as a “Big Burp” of methane clathrates — it’s tough to explain that sharp of a peak. More recently, Dan Schrag and coworkers have added a Big Slug of CO2 to the mix, by oxidizing a Whole Bunch of organic material. All the explanations are speculative to highly speculative (and on to ridiculously improbable), but this is the natural event that most resembles the CWP. Curiously, it didn’t cause a mass extinction, or at least not a big one. Worthy of Further Study (and $$$!!), fersure.

    Here’s a recent review: Beyond methane: Towards a theory for the Paleocene–Eocene Thermal Maximum, John A. Higgins and Daniel P. Schrag, Earth and Planetary Science Letters Volume 245, Issues 3-4, 30 May 2006. Abstract: doi:10.1016/j.epsl.2006.03.009 at http://dx.doi.org. Full pdf online somewhere.

    Happy reading–
    Peter D. Tillman
    Consulting Geologist, Arizona and New Mexico (USA)

  149. bender
    Posted Jan 23, 2008 at 12:27 PM | Permalink

    “uncertainties notwithstanding” is what drives me nuts about policymakers! They want to dispense with the uncertainty – pretend it doesn’t exist – and then argue to invoke a “precautionary principle”. The degree of uncertainty serves to condition the appropriate level of precaution. That’s why they want to get rid of it.

    To be clear, my argument is not with Phil, it is with the POV of the uncertainty denialist.

  150. MrPete
    Posted Jan 23, 2008 at 12:27 PM | Permalink

    Bender says:

    The probability level is the probability that the true GMT lies in the computed, estimated interval.

    Just dotting my I’s and crossing T’s: I read this to say it is the probability that 100% of the true GMT lies in the computed, estimated interval. And I’m inferring from the rest of your statement that asking the other questions (what is 95% CI for 95% of the interval in question, etc) is far more complex.

    This is getting interesting.

  151. Phil.
    Posted Jan 23, 2008 at 12:27 PM | Permalink

    Re #146

    “it implies that tree-ring reconstructions result in flat, low variability results which isn’t the case”

    You don’t know that, however. That is precisely the question being asked by some!

    Well, if you want to describe Esper et al. as flat because it doesn’t have error bounds, you’re straining credulity.

  152. Steve Hempell
    Posted Jan 23, 2008 at 12:29 PM | Permalink

    Phil says: #126

    “However the increase in CO2 and other radiatively active gases does change the heat transfer through the atmosphere in quantifiable and predictable ways …..”

    Wow, really. Could you go to the Physics phpBB site and, in detail, explain this fully using the 1st principles of physics?

    Thanks.

  153. Steve Hempell
    Posted Jan 23, 2008 at 12:35 PM | Permalink

    Re 151

    Sorry, I meant to say the phpBB site Physics/Empirical Observations of the Greenhouse Effect.

  154. bender
    Posted Jan 23, 2008 at 12:38 PM | Permalink

    #150 Phil, speaking of strawmen, name the logical fallacy you just committed there.

  155. Phil.
    Posted Jan 23, 2008 at 12:58 PM | Permalink

    Re #149

    “uncertainties notwithstanding” is what drives me nuts about policymakers! They want to dispense with the uncertainty – pretend it doesn’t exist – and then argue to invoke a “precautionary principle”. The degree of uncertainty serves to condition the appropriate level of precaution. That’s why they want to get rid of it.

    To be clear, my argument is not with Phil, it is with the POV of the uncertainty denialist.

    I should have expressed myself more clearly; it probably should have been ‘the absence of uncertainty bounds notwithstanding’. I was suggesting that in the absence of a published uncertainty bound we just have to take the mean data at face value. The published data doesn’t show that using tree-rings makes for flat recons, quite the opposite; the error bounds could be so huge as to make the exercise meaningless, but that’s a different argument.

  156. bender
    Posted Jan 23, 2008 at 1:05 PM | Permalink

    if you want to describe Esper et al. as flat because it doesn’t have error bounds, you’re straining credulity

    What the proponent might have argued instead is that tree-ring based reconstructions may produce flattened MWPs.

    This would invalidate the following logic:

    A-Most tree-ring based reconstructions have flattened MWPs
    B-Esper is a tree-ring based reconstruction
    C-Esper has a prominent MWP
    D-Therefore most tree-ring based recons do not produce a flattened MWP

    Facts:
    1-Esper is only one of many recons, each variable
    2-The proposition was applied to the ensemble, not to every member of the set individually
    3-The issue is not whether MWP is flat, but flattened

    Again, degree of flattening is a quantitative issue. Alarmists like to take quantitative assertions and make them winnable yes/no arguments. This is erecting a straw man through a combination of cherry-picking examples and definitions.

    Note I am not saying Phil is an alarmist. In fact, looking carefully at what he wrote, he did not even commit an error. Because he started his statement conditionally, using “IF …”.

    Isn’t that clever?

    So don’t assert what he is asking to be asserted and you will not strain credulity.
    snip

  157. bender
    Posted Jan 23, 2008 at 1:06 PM | Permalink

    we just have to take the mean data at face value

    I disagree strongly with that advice.

  158. Craig Loehle
    Posted Jan 23, 2008 at 1:08 PM | Permalink

    Another problem with the tree-ring studies is that most of them stop at 1000 AD (a nice round number), but the recon must extend to BEFORE the peak to show the shape of the MWP. If you start at 1000 AD, it looks like temperatures have simply been falling slowly from then until 1850 or so, but this is misleading.

  159. bender
    Posted Jan 23, 2008 at 1:11 PM | Permalink

    we just have to take the mean data at face value

    What that policy leads to is alarmists systematically failing to report uncertainty levels and thereby taking control of the agenda. That is non-scientific. I’ll give you a possible example: Judith Curry’s BAMS article that purported to discuss trends in HURDAT hurricane data but did not report confidence intervals on the trend line. Or Gavin Schmidt’s error-free GCM runs (until it suits his purpose to show the errors, which is special pleading).

  160. bender
    Posted Jan 23, 2008 at 1:16 PM | Permalink

    Phil seems to be playing the same game that Bloom used to play, only he’s WAY better at it. Watch the pea under his thimble. Sorry to distract from the thread, Dr Loehle. Carry on.

  161. Posted Jan 23, 2008 at 1:29 PM | Permalink

    MrPete,

    Patrick M — good question. If I’ve learned my lessons correctly here, the answer is NO. Values dead center are NOT “most likely.” All we know is to N probability (usually 95%), the correct answer is somewhere — anywhere — inside the envelope. And the rest of the probable “correct” answer lies outside.

    That’s a safe way to interpret CIs. However, with Hu’s model, I can place a 5 % CI around the dead center, and that will be the shortest 5 % CI.

    Here’s another question I would like to ask: given that the envelope covers a long time period, can we say anything about probability vs time?

    Hmm, if we state that the temp at year T is within those limits (at T), we’ll be correct 95% of the time (however the true temp may vary). But this easily gets too i.i.d.; soon we’d have to switch to stochastic processes and filtering theory. Anyway, I think your second statement is OK (given the model is correct 😉 ).

    a) A 50% CI can be drawn (at least) two ways…
    — 50% in the middle
    — 50% on both outside edges (out to infinity)
    — or any other combination that covers 50% of the “probability space” (my term)

    Try to find the shortest CI. Often the length is random; then use the expected length. And finally, you can seek a uniformly most accurate interval.
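
    A quick check of the “shortest interval” point for a normal density (the central interval wins):

        from scipy.stats import norm

        central = norm.ppf(0.75) - norm.ppf(0.25)     # central 50% interval
        off_centre = norm.ppf(0.90) - norm.ppf(0.40)  # also 50% coverage, shifted off-centre
        print(f"central width:    {central:.3f}")     # ~1.349
        print(f"off-centre width: {off_centre:.3f}")  # ~1.535, longer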

  162. Michael Smith
    Posted Jan 23, 2008 at 1:29 PM | Permalink

    Phil, regarding your comment in 134: the lower confidence interval on L&M’s reconstruction shows a peak anomaly in the MWP of something like 0.25–0.28 deg C, if I read the graph correctly. The reconstructions shown at the link you provided in 135 show a peak anomaly in the MWP of about 0.1 deg C. Since it is possible that confidence intervals on the reconstructions you linked to would overlap L&M’s, I concede that I cannot state unequivocally that L&M contradict the tree-ring reconstructions; we’d need some reasonable CIs on the tree-ring series to assess that.

    However, I don’t think that changes the overall point I was making to sod. Namely, that the effect of L&M is to show that the claim that modern warming is unprecedented is unsupportable in view of the current state of the data about past temperatures. We simply don’t know whether modern warming is unprecedented or not.

  163. Craig Loehle
    Posted Jan 23, 2008 at 1:37 PM | Permalink

    For sod & phil et al: It is asserted that the recent warming is unprecedented. To accept this, one would need to show that recent with CI and MWP with CI do not overlap and that recent was above MWP. With our data, that assertion is rejected. It is not necessary to prove the MWP was warmer than recent in order to reject the claim that recent was warmer than the MWP. To use other reconstructions based on tree rings properly, one would need confidence intervals. Almost invariably, they are plotted (as in the IPCC spaghetti graph or Wikipedia) as simple lines with no CI. From these lines we are asked to accept that recent is warmer, but this is without any statistics at all.

  164. UK John
    Posted Jan 23, 2008 at 1:40 PM | Permalink

    Couldn’t help myself had to join in !!! what fun!!

    Proxies tell you a bit of the tale, human history and archeology tell you another bit, 1000 year old vegetation found under the retreating ice tells you a bit more.

    Anyone who thinks present conditions are unprecedented is a Twonk! but they award Twonks Nobel prizes, I await the call from the Norwegians!

  165. Posted Jan 23, 2008 at 2:02 PM | Permalink

    A couple of points:

    1. If I don’t reply to a question, assume that I am either busy or that I have the feeling that we are moving away from the topic. I am trying not to waste more of Steve’s or my time than absolutely necessary…

    2. I found this part very interesting:

    Just dotting my I’s and crossing T’s: I read this to say it is the probability that 100% of the true GMT lies in the computed, estimated interval.

    So I would like to repeat my question from above:
    Can anyone simply plot modern measured data and a 95% interval into the same graph, so that we can compare?

    The error here seems to be about ±0.3°, and I have repeatedly heard claims that our measured data is off by a bigger margin (e.g., “50% of 20th century warming is UHI…”).

    3. Craig:

    To accept this, one would need to show that recent with CI and MWP with CI do not overlap and that recent was above MWP. With our data, that assertion is rejected

    My problem with this is: the wider the CI of your proxy data is, the more difficult it is for modern temperature to get outside of it.

    And I still haven’t figured out: how do you reject something about modern temperature by an analysis that ends in 1935?

  166. Andrew
    Posted Jan 23, 2008 at 2:07 PM | Permalink

    Sod, see my post at 11. Or you can repeat the analysis yourself, in whatever way best suits your needs.

  167. Craig Loehle
    Posted Jan 23, 2008 at 2:12 PM | Permalink

    sod: I was speaking generally about what criteria one uses to reject or accept something in statistics. There has been an implication that it is necessary to prove the MWP was warmer when the assertion has been that the recent period is warmer. Thus it is necessary to demonstrate that the recent is warmer by showing that it differs statistically. You are right, the wider the ci, the harder to reject. This means that more data and a better proxy model will narrow the ci and make it easier to correctly evaluate differences. I did not pick the ci, it is given from the data. The tendency to plot these reconstructions without ci and say the recent is warmer is misleading.

  168. bender
    Posted Jan 23, 2008 at 2:13 PM | Permalink

    sod, don’t be a dope.

    My problem with this is: the wider the CI of your proxy data is, the more difficult it is for modern temperature to get outside of it.

    sod, that is everyone’s problem. mine. yours. now you are starting to get it. maybe.

    And I still haven’t figured out: how do you reject something about modern temperature by an analysis that ends in 1935?

    You don’t. Unless you are prepared to do another, simultaneous comparison between the 30-year average centred on 1935 and the 30-year average centred on 1993, say. (Pick your years to suit.)

  169. bender
    Posted Jan 23, 2008 at 2:16 PM | Permalink

    Can anyone simply plot modern measured data and a 95% interval into the same graph, so that we can compare

    Compare what? Apples and oranges? (You’re not ignoring that whole thread on inferential splicing, are you?)

  170. Craig Loehle
    Posted Jan 23, 2008 at 2:18 PM | Permalink

    sod: my 1935 value is smoothed with data up to 1949 which encompasses the mid-century warm interval. Then I approximated the rise since then (see the paper) from GISS smoothed with 29 yr running mean and compared that with a 1-tailed test. I can for sure reject the hypothesis that this temperature is warmer than the MWP (if the proxies are any good at all).
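
    In sketch form, the comparison works like this (a toy series and made-up SEs, not the paper’s actual numbers):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        annual = 0.004 * np.arange(150) + rng.normal(0, 0.1, 150)       # toy GISS-like series
        smoothed = np.convolve(annual, np.ones(29) / 29, mode="valid")  # 29-yr running mean

        modern = smoothed[-1]       # last centred 29-yr mean
        mwp_peak, se = 0.25, 0.30   # illustrative MWP tridecade anomaly and SE
        z = (modern - mwp_peak) / se
        print(f"one-tailed p that modern > MWP: {norm.sf(z):.2f}")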

  171. Andrew
    Posted Jan 23, 2008 at 2:37 PM | Permalink

    Okay, I think I see sod’s concern. He is worried that the magnitude of recent warming is less than the magnitude of the confidence intervals. I don’t think this is so. They range from ±0.3556 to ±0.2729, which means they are only about half as large as the warming of the last century, not as large as all of it.

  172. bender
    Posted Jan 23, 2008 at 2:42 PM | Permalink

    Andrew, are you trying to compare an annual observation now to a thirty year average 70 years ago? Please clarify what your comparison is.

  173. Posted Jan 23, 2008 at 2:47 PM | Permalink

    #166

    The tendency to plot these reconstructions without ci and say the recent is warmer is misleading.

    Here’s another misleading way, with CIs:

    1. Use variance matching

    The hemispheric and global composites were standardized to have the same mean and (decadal) standard deviation as the target instrumental hemispheric mean series over the period of common overlap (1856–1980).

    2. Use calibration residuals to obtain CIs

    Rough uncertainty estimates in the hemispheric reconstructions were determined from the magnitude of the unresolved variance during the calibration period, taking into account enhancement of uncertainty at centennial timescales [Mann et al., 1999].

    3. Use hockey stick to verify

    However, previous, more highly (annually) resolved hemispheric temperature reconstructions available back to AD 1000 which have already been successfully cross-validated against the instrumental record [e.g., Mann et al., 1999] provide a means for longer-term cross-validation.

    4. Avoid the issue of annual vs. smoothed comparison (apples to oranges) by inventing a new method for smoothing the end points

    (the constraint employed by the filter preserves the late 20th century trend)

    And here you go 😉 Now you can say that recent is warmer.

    ps. 5. Reveal the method one year after publication, and warn against false conclusions based on unobjective statistical smoothing approaches. 🙂
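
    Step 1 above in code, for anyone playing along at home (toy arrays; a real run would smooth to decadal resolution before matching):

        import numpy as np

        rng = np.random.default_rng(4)
        composite = rng.normal(0, 1, 300)  # proxy composite, arbitrary units
        target = 0.5 * composite[150:] + rng.normal(0, 0.1, 150)  # instrumental overlap

        overlap = composite[150:]
        scaled = (composite - overlap.mean()) * (target.std() / overlap.std()) + target.mean()
        # over the overlap, the rescaled composite now has the target's mean and SD
        print(round(scaled[150:].mean(), 3), round(scaled[150:].std(), 3))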

  174. Phil.
    Posted Jan 23, 2008 at 2:54 PM | Permalink

    Re #168

    “And I still haven’t figured out: how do you reject something about modern temperature by an analysis that ends in 1935?”

    You don’t. Unless you are prepared to do another, simultaneous comparison between the 30-year average centred on 1935 and the 30-year average centred on 1993, say. (Pick your years to suit.)

    Which is what Loehle did in his paper, concluding that the MWP and 1992 were indistinguishable. Of course, that only gets us up to 1992, and there’s been a lot of water under the bridge in the intervening 15 years.

  175. bender
    Posted Jan 23, 2008 at 2:54 PM | Permalink

    I would like to see a grayscale version of L&M08 with all confidence interval lines from 1% to 99% plotted, where the line shade is proportional to the confidence level: pure black for 1%, pure white for 99%.
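
    Something like this matplotlib sketch, with a toy reconstruction standing in for L&M08 (bands rather than lines, lighter gray for higher confidence):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.stats import norm

        years = np.arange(0, 2000)
        mean = 0.3 * np.sin(2 * np.pi * years / 1000)  # toy reconstruction
        se = 0.25                                      # toy pointwise SE

        fig, ax = plt.subplots()
        for level in (0.99, 0.95, 0.90, 0.50):         # widest (lightest) band first
            z = norm.ppf(0.5 + level / 2)
            ax.fill_between(years, mean - z * se, mean + z * se, color=str(level))
        ax.plot(years, mean, color="white", lw=2)      # thick white mean
        ax.set_xlabel("Year AD")
        ax.set_ylabel("Temperature anomaly (deg C)")
        plt.show()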

  176. Craig Loehle
    Posted Jan 23, 2008 at 3:02 PM | Permalink

    Phil: the 1992 value is smoothed GISS data up through 2006 (29 year running mean so at least I’m comparing different brands of orange)

  177. bender
    Posted Jan 23, 2008 at 3:07 PM | Permalink

    #174
    I think this would look neat. Color the space between the lines as opposed to the lines themselves and you can call it a “Loehle uncertainty plot”. You’ll be more famous than you already are!

  178. Andrew
    Posted Jan 23, 2008 at 3:08 PM | Permalink

    Bender, I’m just taking the range of the confidence intervals (i.e., subtracting the reconstruction itself from the bounds, then looking at the most extreme values in question) and comparing it to the amount of warming between 1900 and 2000, I believe (the oft-quoted 0.6). This seemed to me to be sod’s concern: that “of course you can say that the MWP was warmer; the confidence intervals are as large as the recent trend!” In other words, that you could hide the elephant in the weeds. I’m showing that his trunk will stick out.

  179. MrPete
    Posted Jan 23, 2008 at 3:09 PM | Permalink

    #174 / 176 — might also be pretty, and perhaps easier, as a 3-D plot. 🙂

  180. bender
    Posted Jan 23, 2008 at 3:10 PM | Permalink

    Color the mean in thick white, and color the 99%, 95%, 90%, and 50% lines in thin black.

  181. bender
    Posted Jan 23, 2008 at 3:11 PM | Permalink

    #178 2-D is always better for publication. No need for a 3rd dimension.

  182. Posted Jan 23, 2008 at 3:12 PM | Permalink

    Okay, I think I see sod’s concern. He is worried that the magnitude of recent warming is less than the magnitude of the confidence intervals. I don’t think this is so. They range from ±0.3556 to ±0.2729, which means they are only about half as large as the warming of the last century, not as large as all of it.

    Thanks. I couldn’t have put it into such words myself.
    Boy, do I regret that both the numerics and stochastics lectures were always scheduled early Friday, after “party Thursday”…

  183. MrPete
    Posted Jan 23, 2008 at 3:12 PM | Permalink

    Hmmm… probably of no use as a static view, but to do 174/176/178 (grayscale or 3-D CI curves) with separate contributions from each component, and selectable translucency for the different “layers”… now THERE’S an analytical map that could carry real interpretive power.

    (Can’t help myself: I’m a mapping/GIS/visualization guy at heart… 😉 )

  184. Posted Jan 23, 2008 at 3:16 PM | Permalink

    sod: I was speaking generally about what criteria one uses to reject or accept something in statistics. There has been an implication that it is necessary to prove the MWP was warmer when the assertion has been that the recent period is warmer. Thus it is necessary to demonstrate that the recent is warmer by showing that it differs statistically. You are right, the wider the ci, the harder to reject. This means that more data and a better proxy model will narrow the ci and make it easier to correctly evaluate differences. I did not pick the ci, it is given from the data. The tendency to plot these reconstructions without ci and say the recent is warmer is misleading.

    I agree with you on the subject of confidence intervals. I agree with some more points as well, but there are enough supporters around to tell you that. Please understand when I focus on “critical points”.

  185. MrPete
    Posted Jan 23, 2008 at 3:16 PM | Permalink

    Bender — imagine a topo map with contours and 3-D shading. Very nice for publication.

    Perhaps the grayscale produces the same thing. Essentially, a mountain ridge through time, with cliffs or gentle slopes on each side. Is it Berkeley Park (Rainier) or Half Dome (Yosemite)? 🙂

  186. Phil.
    Posted Jan 23, 2008 at 3:17 PM | Permalink

    Re #156

    What the proponent might have argued instead is that tree-ring based reconstructions may produce flattened MWPs.

    This would invalidate the following logic:

    A-Most tree-ring based reconstructions have flattened MWPs
    B-Esper is a tree-ring based reconstruction
    C-Esper has a prominent MWP
    D-Therefore most tree-ring based recons do not produce a flattened MWP

    Facts:
    1-Esper is only one of many recons, each variable
    2-The proposition was applied to the ensemble, not to every member of the set individually
    3-The issue is not whether MWP is flat, but flattened

    Again, degree of flattening is a quantitative issue. Alarmists like to take quantitative assertions and make them winnable yes/no arguments. This is erecting a straw man through a combination of cherry-picking examples and definitions.

    But the point is, bender, that no matter what strawmen you might want to erect in your mind, the poster did not say that; he said that tree-rings gave flat profiles with little variability. No ‘may’ or ‘tend to’ or ‘flattened’ or ‘most’; it was quite explicit. Had he said otherwise, I wouldn’t have replied as I did. The logical sequence (A–D) you propose didn’t occur. The issues you raise might properly be taken up with the original poster, not me.

  187. bender
    Posted Jan 23, 2008 at 3:19 PM | Permalink

    #184: “nice” is bad; parsimony & completeness are good.

  188. Andrew
    Posted Jan 23, 2008 at 3:26 PM | Permalink

    Phil, looking back to the start of your disagreement, it seems to have started with the qualifier “fairly” before flat. No one ever said exactly flat, and likely what they meant was “flat compared to today”. Looking at them, I see that “flat compared to today” seems fair to me:

  189. bender
    Posted Jan 23, 2008 at 3:28 PM | Permalink

    the poster did not say that

    but I did.
    So, you win that battle. And I win the war.

  190. MrPete
    Posted Jan 23, 2008 at 3:29 PM | Permalink

    “color 99/95/90/50% thin black” with gray scale or color elsewhere — you are defining a 3-D shaded topo map the hard way.

    Trust me, the color/grayscale part will not be helpful for understanding unless you use it in the form of a lighting model (at an angle), or an elevation model (with easily distinguished bands). Just as straight overhead lighting is confusing to the eye, so will this be unless used to show the “surface topography” so to speak.

    Contour lines keep parsimony and completeness for visual analysis. Can’t wait to see this one.

  191. bender
    Posted Jan 23, 2008 at 3:32 PM | Permalink

    Do it both ways. Yours will look nicer. Mine will be more acceptable for publication.

  192. Posted Jan 23, 2008 at 3:34 PM | Permalink

    Sod, see my post at 11. Or you can repeat the analysis yourself, in whatever way best suits your needs.

    Nice one. I must have missed this. Good work. (Will post some remarks later.)

    sod: my 1935 value is smoothed with data up to 1949 which encompasses the mid-century warm interval. Then I approximated the rise since then (see the paper) from GISS smoothed with 29 yr running mean and compared that with a 1-tailed test. I can for sure reject the hypothesis that this temperature is warmer than the MWP (if the proxies are any good at all).

    And it contains the cold 20s. It is just a smooth! If we stay reasonable and don’t choose extreme data points, and the graph is more linear than exponential, the actual value will be a pretty good guess for the smoothed one.

    I can’t shake the feeling that you guys are stacking stuff in your favor:

    * a big CI range (yes, it looks like you can’t help that one)
    * HADCRUT data
    * a huge CI on modern temperature (are you serious about this, Andrew?)
    * using smoothed modern data (while keeping the “recent” talk up)

    Keeping this spirit, MY eyeball analysis goes like this:
    I add 0.7° (using GISS) to the 1935 data on the Loehle graph and add in a tiny CI (it shouldn’t be above ±0.03 in comparison; come on, we are comparing measured temperature from 1000 sources to 18 proxies).
    IF there is an overlap, it will be an unlikely one…

  193. Phil.
    Posted Jan 23, 2008 at 3:38 PM | Permalink

    Re #176

    Phil: the 1992 value is smoothed GISS data up through 2006 (29 year running mean so at least I’m comparing different brands of orange)

    Yes, I know, but just because 2006 data is used to calculate the mean doesn’t mean that the 1992 value is somehow representative of today.
    The data you present tell us nothing about the period since ’92, except of course that, since the data you’re smoothing are rising fast, the next 15 years must continue to rise (also, the database you used shows the 5 warmest years on record as being since 1992).

  194. Phil.
    Posted Jan 23, 2008 at 3:46 PM | Permalink

    Re #189

    the poster did not say that

    but I did.
    So, you win that battle. And I win the war.

    Really? The war in your own head? You appear to approach this like the quiz show Jeopardy: you read the answer and work out what question it was the wrong answer to. Rather bizarre! Perhaps before posting we should run any post we wish to reply to through the Bender filter, to find out what the original poster was supposed to say, and then reply to that?

  195. bender
    Posted Jan 23, 2008 at 3:55 PM | Permalink

    Phil, if you think x is y you are straining credulity.

    I’ll remember that one: use it whenever no one said ‘x is y’ but you want to bait them into defending that position. Cute.

  196. Phil.
    Posted Jan 23, 2008 at 4:05 PM | Permalink

    Re #195

    While you’re at it try to remember to include the post you’re referring to rather than your present scattergun approach which makes things difficult to follow.

  197. Dave Dardinger
    Posted Jan 23, 2008 at 4:14 PM | Permalink

    re: #171 sod,

    I can’t shake the feeling that you guys are stacking stuff in your favor

    Yea, sure. Like using BCPs which aren’t temperature proxies in the first place, even if some trees do turn out to work as proxies, isn’t stacking the deck?

    Why do you think Loehle is important? It gets rid of the question of whether trees are decent proxies altogether and uses proxies which have a reasonable chance of being valid temperature proxies. The trouble is that it’s hard to convert such proxies into modern temperatures. It’s a lot easier to compare much larger spans of time and develop rather arbitrary temperature scales. But since Loehle is just adding anomalies rather than trying to set actual temperatures, he doesn’t have to worry about that problem either.

    But when you now want to compare instrumental temperatures to proxy temperatures, the problem comes back. If the instrumental temperatures had no problems, finding an appropriate level to set them at would be easy, but until things like UHI and land use changes are cleared up, it’s apples and oranges all the way down.

  198. Andrew
    Posted Jan 23, 2008 at 4:16 PM | Permalink

    Sod, Craig did the smooth because it would lessen the effects of the dating errors. It wasn't a deck stack. Maybe Climate Audit should have an "assume good faith" rule (with exceptions for those we have come to have little reason to trust, of course).

  199. poid
    Posted Jan 23, 2008 at 4:21 PM | Permalink

    Phil #193:

    You are complaining that the value centred around 1992 is smoothed and doesn't tell us anything about today. I disagree; it tells us that peaks such as the one we see today would appear disguised in a proxy reconstruction of the past due to the effects of smoothing.

    So it tells us that you can't use today's peak points to try and assert that the current period is certainly warmer than the past.

    What you are trying to do is compare apples to watermelons; it just doesn't work.

    BTW, the mean of the last decade is 0.2 degrees greater than the mean centred around 1992 (according to GISS). If you use this figure, does it change the answer? No, it doesn’t. You cannot assert that the current period is certainly warmer than in the past, nor can you assert that the rate of warming is unprecedented. Similarly, you cannot assert that the MWP was definitely warmer than today from the data.

  200. bender
    Posted Jan 23, 2008 at 4:34 PM | Permalink

    #196

    try to remember to include the post

    Will try. And you try not being such a slippery word-parsing lawyer.

    #150

    if you want to describe Esper et al. as flat because it doesn’t have error bounds you’re straining credulity

    If you want to imply that the commenter is straining credulity for thinking something they didn't say, then you are straining credulity. It is possible that, compared to other proxies, tree-ring reconstructions produce flattened MWPs. Why choose one instance, Esper, and pretend that it addresses the general argument? To bias the argument in your favor and against that of the commenter? That would be tendentious.

    My advice is to follow your own advice and not play at mind-reading, erecting straw men, putting words in people's mouths, and so on: the things you accuse other people of doing.

    Are you finding it hard to focus on Loehle’s paper?

  201. bender
    Posted Jan 23, 2008 at 4:36 PM | Permalink

    You cannot assert that the current period is certainly warmer than in the past, nor can you assert that the rate of warming is unprecedented. Similarly, you cannot assert that the MWP was definitely warmer than today from the data.

    If you were to assert that, you would be straining credulity.

  202. Andrew
    Posted Jan 23, 2008 at 4:42 PM | Permalink

    Can you assert that there is at least a slightly higher probability that it (the MWP) was warmer than the present than the opposite (assuming that this reconstruction and its confidence intervals are "correct"), and a good probability that they have been about the same? That's the impression I got.

  203. Craig Loehle
    Posted Jan 23, 2008 at 4:51 PM | Permalink

    Given that averaging proxies with dating error is likely to hide (smooth out) higher peaks, my intuition is that the MWP was warmer than the graph shows. In addition, some of the proxies are probably not so good, which adds noise and reduces the mean further. Thus I think it likely that the MWP was warmer than today even with statistics, but I do not assert that this is proven.
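
    The dating-error effect is easy to demonstrate with a toy R simulation (my own construction, not the proxy data): take one series with a sharp peak, make 18 copies with random date shifts, and average them.

        set.seed(42)
        yrs    <- 1:2000
        signal <- exp(-((yrs - 1000) / 60)^2)   # a sharp "MWP" peak of height 1
        shift_one <- function() {
          d <- round(rnorm(1, 0, 80))           # dating error, sd = 80 years
          approx(yrs + d, signal, xout = yrs, rule = 2)$y
        }
        stack <- replicate(18, shift_one())     # 18 mis-dated copies
        max(signal)                             # true peak: 1
        max(rowMeans(stack))                    # averaged peak: noticeably lower

    The averaged peak comes out well below the true height, so dating error alone flattens the MWP in the mean.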

  204. Michael Smith
    Posted Jan 23, 2008 at 5:06 PM | Permalink

    Regarding 185:

    But the point is bender no matter what strawmen you might want to erect in your mind the poster did not say that, he said that tree-rings gave flat profiles with little variability. No ‘may’ or ‘tend to’ or ‘flattened’ or ‘most’, it was quite explicit.

    Speaking of straw men, omitting the fact that I used the modifier “fairly” ahead of “flat” certainly makes my statement easier to attack.

  205. Phil.
    Posted Jan 23, 2008 at 5:16 PM | Permalink

    Re #199

    You are complaining that the value centred around 1992 is smoothed and doesn't tell us anything about today. I disagree; it tells us that peaks such as the one we see today would appear disguised in a proxy reconstruction of the past due to the effects of smoothing.

    So it tells us that you can't use today's peak points to try and assert that the current period is certainly warmer than the past.

    What you are trying to do is compare apples to watermelons; it just doesn't work.

    Which is why I'm not trying to do so; I'm trying to discourage the extrapolation of Loehle's results beyond what's appropriate, for the very reasons you outline. I understand why he wants to do so, since his proxies die out at the end of the 20th century; one of the criticisms of the previous paper was trying to extend their use beyond their limit. Mann was criticized for tacking an instrumental record onto his reconstruction, as I recall; the same applies here.

  206. Phil.
    Posted Jan 23, 2008 at 5:38 PM | Permalink

    Re #204

    Speaking of straw men, omitting the fact that I used the modifier “fairly” ahead of “flat” certainly makes my statement easier to attack.

    Actually I quoted you verbatim in my response to your statement (not an attack) (see below).

    It doesn’t really contradict them, see:

    Several of those reconstructions show a MWP and they still used tree-ring data; a key difference appears to be that Loehle's shows a significantly earlier (and warmer) MWP. The "fairly flat" was the earliest result (10 yrs old); later results have shown more variability, and this reconstruction isn't unusual in that regard.

    bender took exception to my pointing out that not all tree-ring studies result in such flat curves, saying I "couldn't possibly know that". When I produced an example of a very variable result, he went ballistic with the series of screeds that you see above, including a rewrite of your original post to justify his position.

  207. Phil.
    Posted Jan 23, 2008 at 5:45 PM | Permalink

    Re #203

    Given that averaging proxies with dating error is likely to hide (smooth out) higher peaks, my intuition is that the MWP was warmer than the graph shows. In addition, some of the proxies are probably not so good, which adds noise and reduces the mean further. Thus I think it likely that the MWP was warmer than today even with statistics, but I do not assert that this is proven.

    Why would a bad proxy necessarily reduce the mean? Couldn't it increase it too? Also, I still have some issues with your treatment of the 100-yr frequency data; have you looked at the effect of different methods?

  208. bender
    Posted Jan 23, 2008 at 5:57 PM | Permalink

    Nice piece of historical revisionism. Whatever. Do what you need to win your little battles. They clearly mean a lot to you.

  209. Michael Smith
    Posted Jan 23, 2008 at 6:06 PM | Permalink

    re: 206:

    Actually I quoted you verbatim in my response to your statement (not an attack) (see below).

    Well, yes, you quoted it verbatim — initially. Later, you represented my statement as if it included no such modifier — “….he said that tree-rings gave flat profiles with little variability. No ‘may’ or ‘tend to’ or ‘flattened’ or ‘most’, it was quite explicit.”

    By the way, perhaps you missed it, but I responded to your initial point — that L&M doesn’t really contradict tree-ring reconstructions — in 162.

  210. Craig Loehle
    Posted Jan 23, 2008 at 6:14 PM | Permalink

    If a proxy is no better than noise it will tend to hover around the zero offset and, when averaged in, will bring down any peaks. However, feel free to experiment with it yourself.
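
    For instance, a quick R experiment (a toy, not my proxy data) that mixes pure-noise "proxies" in with one real signal:

        set.seed(1)
        yrs    <- 1:2000
        signal <- exp(-((yrs - 1000) / 60)^2)   # one real peak of height 1
        for (k in c(0, 5, 17)) {
          noise <- matrix(rnorm(2000 * k, 0, 0.1), nrow = 2000)  # zero-mean junk
          avg   <- rowMeans(cbind(signal, noise))
          cat(k, "noise proxies -> mean at the peak year =", round(avg[1000], 3), "\n")
        }

    The value at the peak year drops toward zero as junk series are added: noise proxies hover around zero and drag the averaged peak down.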

  211. bender
    Posted Jan 23, 2008 at 6:18 PM | Permalink

    #209 Michael Smith. Don’t mind him. I intuited what you meant. If Phil chooses to argue with some words he imagined you’d said, well, some people do that. Make up stuff and then argue against what’s in their mind. I’m sure he won’t make a habit of it.

    But anyway, back to Loehle. Before concluding how this recon compares to tree-ring reconstructions, I would like to see them overlaid on one another. I would also like to see the error bars represented slightly differently. That would make it easier to say which periods are warmer or cooler than others, and with what probability.

    Wouldn’t take much to do this.

  212. bender
    Posted Jan 23, 2008 at 6:45 PM | Permalink

    And the folks who say there need not be any “cause” of a warming period? Pardon my incredulity. Might as well give up on science altogether if that’s your view.

    Your incredulity is unpardonable. What is the internal variability of earth's climate system, Susann? Show me the numbers and how you got them.

    Yes, everything has a cause. The issue is external vs internal causes. Not everything has an external cause.

    Take your other junk to unthreaded to avoid it being snipped. I'll address it there.

  213. Steve McIntyre
    Posted Jan 23, 2008 at 6:47 PM | Permalink

    Susann, I asked people to discuss Loehle on this thread.

  214. jae
    Posted Jan 23, 2008 at 6:47 PM | Permalink

    The fact of a natural temperature excursion in the past does not preclude an anthropogenic one today.

    No, but it certainly causes uncertainty.

  215. bender
    Posted Jan 23, 2008 at 6:49 PM | Permalink

    I was not referring to ALL members of the group. I was referring to SOME members of the group. I can amend the statement if you like.

  216. bender
    Posted Jan 23, 2008 at 6:57 PM | Permalink

    #212 and #215 can be deleted. This is a good thread to keep clean.

  217. Raven
    Posted Jan 23, 2008 at 7:12 PM | Permalink

    Susann says:

    And the folks who say there need not be any “cause” of a warming period? Pardon my incredulity. Might as well give up on science altogether if that’s your view.

    Good science looks at all possibilities and does not ignore possible explanations because they are inconvenient. Proxies suggest that warming periods have occurred at approximately 1000-year intervals for the last 4000 years, which means a random or periodic internal variation is a plausible explanation for any warming or cooling trend. Any scientist who disagrees must demonstrate why their explanation is more plausible than the random-variation explanation.

  218. Andrew
    Posted Jan 23, 2008 at 8:02 PM | Permalink

    Craig, which of the proxies used would you regard as noise? I'm curious as to which ones we might want to experiment with.

  219. Patrick M.
    Posted Jan 23, 2008 at 8:08 PM | Permalink

    Again I would like to state my layman’s view to see if I’m getting this.

    Loehle's paper is not as important for what it adds to climate science as for what it removes from climate science. It throws the tree-ring proxy data into question by presenting a collection of reconstructions that have no tree-ring data and arrive at a different history than the tree-ring proxies do.

    Would it be fair to say that the question of comparing Loehle's history to the present is not as important as comparing it to other histories (which may have been compared to the present)?

    In short, Loehle’s paper is not an answer, it’s a reopening of a question?

  220. Craig Loehle
    Posted Jan 23, 2008 at 8:15 PM | Permalink

    Patrick M asks: “In short, Loehle’s paper is not an answer, it’s a reopening of a question?”
    Yes, and also it represents a reconstruction without questionable algorithms and with at least a reasonable attempt at confidence intervals. I would never assert that 18 proxies are enough to settle anything or give a precise answer.

  221. Patrick M.
    Posted Jan 23, 2008 at 8:29 PM | Permalink

    re 220 (Craig Loehle)

    Thanks! I’m just trying to follow the discussion and sometimes it helps me to step back and get the big picture.

    I think you have set an excellent example with your rapid correction and openness to criticism. There are some Nobel Prize winners out there who could learn from your actions.

  222. Andrew
    Posted Jan 23, 2008 at 8:30 PM | Permalink

    Which shows that you are reasonable, Craig. On the other hand, one reconstruction suddenly shows up that finally "gets rid" of the MWP, and it gets lauded as a flawless triumph.

  223. bender
    Posted Jan 23, 2008 at 9:11 PM | Permalink

    May I point out that this is a victory for little ‘s’ science that gets published because it’s necessary, as opposed to Big ‘S’ Science that gets published because it’s novel? Policymakers should be asking for a lot more of the former, and less of the latter.

  224. Phil.
    Posted Jan 23, 2008 at 9:29 PM | Permalink

    Re #162

    Phil, regarding your comment in 134: the lower confidence interval on L&M's reconstruction shows a peak anomaly in the MWP of something like .25-.28 deg C, if I read the graph correctly. The reconstructions shown at the link you provided in 135 show a peak anomaly in the MWP of about .1 deg C. Since it is possible that confidence intervals on the reconstructions you linked to would overlap L&M's, I concede that I cannot state unequivocally that L&M contradict the tree-ring reconstructions; we'd need some reasonable CIs on the tree-ring series to assess that.

    Both are plots of anomalies and they don't have the same basis, so you can't compare the absolute values without allowing for the offset. As I mentioned somewhere else in this thread, there's quite a large difference in the date of the maximum too.

  225. Phil.
    Posted Jan 23, 2008 at 9:36 PM | Permalink

    Re #210

    If a proxy is no better than noise it will tend to hover around the zero offset and, when averaged in, will bring down any peaks. However, feel free to experiment with it yourself.

    That's one way of being a bad proxy, but there's also the possibility that it's biased: for example, it responds to something other than temperature, such as a tree that responds to water as well as temperature.

  226. Andrew
    Posted Jan 23, 2008 at 10:14 PM | Permalink

    Bender, I think in some ways this is "novel". It is restricted to precalibrated proxies, for instance. And the idea of deliberately excluding trees is new (some may have accidentally done that before, but not deliberately). And in a world where we are by now used to proxies that are supposed to kill the MWP, we find one in which it happens to exist. What it isn't is politically correct. But yes, hurray for science.

  227. bender
    Posted Jan 23, 2008 at 11:11 PM | Permalink

    And now, my comments on Loehle & McCulloch (2008) having been delivered, I will go back to lurking. I close by noting that I, then JEG some hours later, submitted scathing reviews of Loehle (2007). Both of us have agreed that the latest paper, though imperfect, is publication worthy, and is modest in its claims. Well done.

    Don’t ever forget your error bars, folks.

    One question you may wish to answer soon is whether tree-ring based recons like MBH98 are better because their error bars are narrower. I am almost certain I will see that one crop up in the blogosphere in the coming week.

    The other major outstanding issue is the canonical studies on which the Loehle proxies are based. They are surely riddled with error that is unaccounted for.

  228. Posted Jan 24, 2008 at 5:52 AM | Permalink

    Sod, Craig did the smooth because it would lessen the effects of the dating errors. It wasn't a deck stack. Maybe Climate Audit should have an "assume good faith" rule (with exceptions for those we have come to have little reason to trust, of course).

    I understand that.
    And he has a dating problem BECAUSE he is not using mainly tree data (which has some advantages as well).
    I assume good faith, up to a certain point. Your "stacking" is slightly beyond that point.

    Can you assert that there is at least a slightly higher probability that it (the MWP) was warmer than the present than the opposite (assuming that this reconstruction and its confidence intervals are "correct"), and a good probability that they have been about the same? That's the impression I got.

    Please, when making such "assertions", could you try to INCLUDE the basis?
    Like using HADCRUT data, a HUGE CI for modern data, and a 30-year smooth?

    I still believe that a minor change in these assumptions (using GISS, no smooth, and a small CI) will give a COMPLETELY different result!

  229. conard
    Posted Jan 24, 2008 at 4:39 PM | Permalink

    Andrew
    No need to be embarrassed. There is nothing wrong with being in the weeds; who knows what you will find. Just don't be surprised when you don't end up where you wanted to be, or when no one follows you.

  230. Andrew
    Posted Jan 24, 2008 at 5:11 PM | Permalink

    Sod, if you really want it apples to oranges, here:

    That’s the same as my analysis above, just without the smooth. Which means, yes, with smaller confidence intervals and without the smooth, and with GISS, you have your “unprecedented” temperatures, but the only thing I needed to take away was the smooth, without doing anything else. Happy now?

  231. Posted Jan 24, 2008 at 5:15 PM | Permalink

    That’s the same as my analysis above, just without the smooth. Which means, yes, with smaller confidence intervals and without the smooth, and with GISS, you have your “unprecedented” temperatures, but the only thing I needed to take away was the smooth, without doing anything else. Happy now?

    Yes. Thanks, Andrew, for providing that graph. (Still working on my R "skills".)

  232. Raven
    Posted Jan 24, 2008 at 5:17 PM | Permalink

    sod says:

    Yes. Thanks, Andrew, for providing that graph. (Still working on my R "skills".)

    You do realize that the comparison is meaningless without the smooth because the smoothing hides peaks in the proxy reconstruction?

  233. bender
    Posted Jan 24, 2008 at 5:20 PM | Permalink

    That graph is very misleading because the color choices make it look like a splice, and that’s exactly the inference sod and others want to make.

  234. conard
    Posted Jan 24, 2008 at 5:22 PM | Permalink

    I spoke too soon. It’s ok to be embarrassed now 😉

  235. Posted Jan 24, 2008 at 5:24 PM | Permalink

    You do realize that the comparison is meaningless without the smooth because the smoothing hides peaks in the proxy reconstruction?

    You do realize that the smooth hides the last 20 years of global warming?

    With several of the proxies having a single data point per century, I would say that peaks in the proxy reconstruction are completely irrelevant.

  236. Andrew
    Posted Jan 24, 2008 at 5:26 PM | Permalink

    Bender, I'm not up on my terms here, but how is tacking the data onto one another not a "splice"? I figured sod wouldn't leave us alone until we coughed up something unprecedented. I threw him a bone. What colors would you suggest? I could repeat it.

    Also, that's not R, just Excel. Broke seventeen-year-old here, who's not allowed to put things on the family computer. (Just turned seventeen a few days ago, in case you remember me saying I was sixteen.)

  237. Posted Jan 24, 2008 at 5:28 PM | Permalink

    That graph is very misleading because the color choices make it look like a splice, and that’s exactly the inference sod and others want to make.

    I am not interested in a splice,
    and I am not going to use Andrew's work to mislead anyone.

  238. Andrew
    Posted Jan 24, 2008 at 5:30 PM | Permalink

    Conard, I would be embarrassed if it had been my idea and not sod's. Sod, the smooth doesn't really "hide" twenty years of warming. Do I have to explain how a smooth works? Additionally, since the recon is smoothed, there could easily be twenty years of "hidden" warming there too.

  239. bender
    Posted Jan 24, 2008 at 5:34 PM | Permalink

    Andrew, some advice, since you are young. All graphs should have a legend. The legend should make it clear what the data are. If the explanation is complicated, put it in a figure caption. Use different line styles for different data types.

    If you do not do this policymakers and turkeys like sod are free to spin the data however they like. Never give people what they want just to make them go away. Give them the facts. Your fiction will come back to haunt you.

    Search CA for “splice” to see what the issue is. Learn what even Mann knows: splicing is a deception.

  240. steven mosher
    Posted Jan 24, 2008 at 5:36 PM | Permalink

    re 236.

    Now, do the same chart with HADCRU. Why HADCRU? HADCRU is the basis of all IPCC modelling.
    Hindcast verifications are done versus HADCRU, not GISS.

    In fact, HADCRU have estimated an anomaly of .37 for 2008, so use that when you smooth the data.

  241. Posted Jan 24, 2008 at 5:38 PM | Permalink

    Bender, I'm not up on my terms here, but how is tacking the data onto one another not a "splice"?

    I can only guess that he expects criticism because the splice idea was massively attacked here in the hockey-stick debate,
    but I am not interested in scoring cheap points.

    Also, that's not R, just Excel. Broke seventeen-year-old here, who's not allowed to put things on the family computer. (Just turned seventeen a few days ago, in case you remember me saying I was sixteen.)

    You are doing nice stuff. My advice to your parents: buy your son a computer!

    Sod, the smooth doesn’t really “hide” twenty years of warming. Do I have to explain how a smooth works?

    I guess I have a vague idea of the process.
    So is the final point in the smooth closer to the 1992 temperature or to the one in 2007?

  242. Posted Jan 24, 2008 at 5:41 PM | Permalink

    If you do not do this policymakers and turkeys like sod are free to spin the data however they like. Never give people what they want just to make them go away. Give them the facts. Your fiction will come back to haunt you.

    It won't in this case.

  243. conard
    Posted Jan 24, 2008 at 5:45 PM | Permalink

    Happy Birthday!

    Now it is time for me to be embarrassed. When I was seventeen I was … well, nowhere near a computer and a lot less careful with my spare time. Kudos to you.

  244. Andrew
    Posted Jan 24, 2008 at 5:47 PM | Permalink

    Okay, if I may attempt to offer an explanation of what I did (although I stated it in the original post WAY up there), just to clarify, quoting myself:

    Using HADCRUT:
    http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual
    And UAH MSU, common-zeroing that to HADCRUT in 1979 and common-zeroing both to Loehle in 1850 (on both the reconstruction itself and the confidence intervals), then doing a 29-year moving average, I got this:

    I'm probably really bad for using 3 different data sets. Anyway, the other graph is identical, but instead of a 29-year smooth I just let all the variation fly; thus, apples to oranges. The recent data are quite literally common-zeroed at their last data point to the same point on Loehle. Given the dating error, this is actually almost definitely a very bad idea. I'm starting to regret that walk in the weeds now. I noted a slight "divergence" in the '20s, but given the dating error this doesn't appear to be meaningful.
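
    In R (which, as noted, I can't install, so treat this as an untested sketch with synthetic stand-ins for the three series) the recipe would be roughly:

        set.seed(2)
        yrs     <- 1850:2007
        loehle  <- cumsum(rnorm(length(yrs), 0, 0.02))        # stand-in only
        hadcrut <- loehle + 0.2 + rnorm(length(yrs), 0, 0.05) # stand-in only
        uah     <- hadcrut - 0.1                              # stand-in only

        # shift x so that it equals ref in the chosen year ("common-zeroing")
        zero_to <- function(x, ref, year) x - x[yrs == year] + ref[yrs == year]

        uah0 <- zero_to(uah, hadcrut, 1979)    # UAH zeroed to HADCRUT in 1979
        had0 <- zero_to(hadcrut, loehle, 1850) # HADCRUT zeroed to Loehle in 1850
        sm29 <- stats::filter(had0, rep(1/29, 29), sides = 2) # 29-yr centered MA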

  245. bender
    Posted Jan 24, 2008 at 7:30 PM | Permalink

    Andrew, your methods have got to be clear in the graphic and in the caption associated with it. It's OK to use different datasets; just make that abundantly clear on the graphic. Know that your audience/client is sometimes going to want to make inferences that they shouldn't. Your job is to make clear in advance the risk they're taking in doing so.

    Why not repost the graphic, with caption, following the various bits of advice in this thread?

    Apologies for sounding paternalistic. But you may as well learn how to do things right if you are an aspiring scientist.

  246. bender
    Posted Jan 24, 2008 at 7:31 PM | Permalink

    And download R. It’s free. It’s good. And sod already knows how to run R scripts.

  247. Rob R
    Posted Jan 24, 2008 at 8:23 PM | Permalink

    Craig

    I have been following the evolution of the argument over your 2007 paper with interest. I think you indicated that there were high-resolution proxy temp series that were close to making the cut but failed due to fewer than 20 dates. Is there any chance that some of those series could be upgraded, for instance by additional 14C (or other) dating? They might then make the grade and be capable of inclusion. Seems to me that, with all the millions of research dollars surging through the climate science community at present, a tiny fraction could be diverted to do such work. This would be a relatively inexpensive way of extending the scope of the type of study that you have carried out. That is of course subject to enough of the original sample still being archived, as would happen with a mineral exploration drill core. Do you have any idea how many additional qualifying proxy series might be generated in this way? Would this potentially improve the global coverage of proxy series and maybe tighten the 95% confidence interval on your global amalgam?

    Cheers

    Rob R

  248. Rob R
    Posted Jan 24, 2008 at 9:24 PM | Permalink

    Craig,

    Further to the previous post, I suspect it would be useful to do an out-of-sample test on the next-best set of proxies: perhaps a non-tree-ring set with 12 to 19 acceptable dates. It does seem to me that the choice of 20+ dates was somewhat arbitrary. Would such a test show a similar result? With a second spin of the wheel you would to some extent get a different set of proxies and a semi-randomised new set of sample locations. You might lose a bit on dating confidence but gain in strength of numbers.

    Rob R

  249. Andrew
    Posted Jan 25, 2008 at 1:56 PM | Permalink

    Bender, I would, but I’m not allowed to put things on the family computer. Generally this means music downloads, but it is a strict general rule. As for reposting with a clearer caption etc., I might later, but I’m not sure how important it is. Have a nice weekend!

  250. MarkW
    Posted Jan 25, 2008 at 2:36 PM | Permalink

    sod,

    The nature of many proxies makes them inherently smoothed.

    Do you not believe that we should be comparing smoothed data to smoothed data?

  251. John Creighton
    Posted Jan 25, 2008 at 6:59 PM | Permalink

    #250, I guess it depends on what kind of point you’re trying to make.

  252. Posted Jan 26, 2008 at 2:22 AM | Permalink

    sod,

    The nature of many proxies makes them inherently smoothed.

    Do you not believe that we should be comparing smoothed data to smoothed data?

    No. We are comparing proxies to measured temperature anyway,
    and we are comparing a middle point to an endpoint.

    You are trying to use an apple peeler on an orange!

    #250, I guess it depends on what kind of point you’re trying to make.

    Sort of true. Modern temperature data is much more accurate than proxy records. Why would we want to force a smooth on it, losing the last 15 years?

  253. John Creighton
    Posted Jan 26, 2008 at 2:32 AM | Permalink

    Well, if we're not interested in the high-frequency signal, then the temperature data could be considered noisy. I suspect you wouldn't throw away the past 15 years of data if you were using low-frequency proxies to estimate the temperature, given that the smoothing process inherent in the proxies is causal.

  254. Geoff Sherrington
    Posted Jan 26, 2008 at 4:36 AM | Permalink

    re #59, sod

    And I would LOVE to see a comparison of error ranges between modern measured temperature and the range given for 18 proxies.

    Where are you going to find a trustworthy set of modern measured temperatures? It’s fashionable to adjust, these days.

  255. MrPete
    Posted Jan 26, 2008 at 7:37 AM | Permalink

    Sod says,

    Why would we want to force a smooth on it, losing the last 15 years?

    How about, so we have a valid comparison?

    Change scales to get an illustration that may aid in understanding.

    Joe brings you monthly rainfall data for the last 1200 months (100 years).
    You already have daily rainfall data for the last thirty days.
    You want to know whether the data you have is normal or abnormal.

    Do you “smooth” the data or not? Why average things out and lose all that nice daily detail?

    AFAIK, the problems are not all that different.

  256. Posted Jan 26, 2008 at 9:49 AM | Permalink

    Where are you going to find a trustworthy set of modern measured temperatures? It’s fashionable to adjust, these days.

    Please tell me:
    when historic data from 18 proxies for 1935 has an error range of ±0.3°C, what error bars do you think modern measured data from 1000 sources has over 130 years?

    I don't think that you can seriously claim that there isn't a factor of 10 between the two!

    Joe brings you monthly rainfall data for the last 1200 months (100 years).
    You already have daily rainfall data for the last thirty days.
    You want to know whether the data you have is normal or abnormal.

    Many problems with this example. Just imagine the data from the last 30 days was measured by specialists in 1000 locations, while the 1200 months of data were extracted from tree-ring proxies…

    Here is my example:

    Imagine you get locked into a hot storage room in a factory at night. During your attempts to open the door, you break a pipe, which is now filling the room with hot steam.
    Several thermometers in the room tell you that the temperature is rising rapidly.
    But you find some old books as well, in which some guys have noted down temperatures that they measured with some antique instrument.

    Would you "smooth" the new data?

  257. Posted Jan 26, 2008 at 10:52 AM | Permalink

    Imagine you get locked into a hot storage room in a factory at night. During your attempts to open the door, you break a pipe, which is now filling the room with hot steam. Several thermometers in the room tell you that the temperature is rising rapidly. But you find some old books as well, in which some guys have noted down temperatures that they measured with some antique instrument.

    🙂 I'd call some statistically illiterate PhD student from Yale and ask him to write a journal paper that shows that those antique instruments accurately indicate that this steam is un·prec·e·dent·ed·ly hot.

  258. bender
    Posted Jan 26, 2008 at 11:10 AM | Permalink

    You see why I told you not to give him the graph, Andrew? He's going to do damn well what he pleases with the data, because that's the way his mind works. Doesn't care a whit whether the intended inference is valid or not. The planet's on fire.

  259. steven mosher
    Posted Jan 26, 2008 at 12:39 PM | Permalink

    re 258. I'm not sure how Andrew adjusted the HADCRU anomaly to "match" the Loehle recon:

    “All data were then converted to anomalies by subtracting the mean of each series from that series. This was done instead of using a standardization date such as 1970 because series date intervals did not all line up or all extend to the same ending date. With only a single date over many decades and dating error, a short interval for determining a zero date for anomaly calculations is not valid. The mean of the eighteen anomaly series was then computed for the period 16 AD to 1980 AD. When missing values were encountered, means were computed for the sites having data. Note that the values do not represent annual values but rather are based on running means.”

    So, how does one splice HADCRU onto this?

  260. Posted Jan 26, 2008 at 2:20 PM | Permalink

    You see why I told you not to give him the graph, Andrew? He's going to do damn well what he pleases with the data, because that's the way his mind works. Doesn't care a whit whether the intended inference is valid or not. The planet's on fire.

    Why not stick to the facts, bender?

    Proxy data and modern temperature data are different. The proxy data is obviously more in need of a 30-year smooth than the thermometer data.
    That is a fact.
    The smooth cuts away 15 years of warming from the endpoint. That is a fact as well.

    You are not trying to get an apples-to-apples comparison, but just to get your desired result.

    So, how does one splice HADCRU onto this?

    Are you asking Craig Loehle?
    Because how, without some sort of splice, can he determine that the MWP was about as warm as modern times?

  261. bender
    Posted Jan 26, 2008 at 2:23 PM | Permalink

    You are not trying to get an apples-to-apples comparison, but just to get your desired result.

    Do you talk to yourself often?

  262. John Creighton
    Posted Jan 26, 2008 at 3:45 PM | Permalink

    #260 I don't like the idea of making blanket statements that proxy data is more in need of smoothing than temperature data. I'm not sure how much noise there is in each data set, or what the nature of the noise is.

    I do think that blind smoothing or filtering can be dangerous. Smoothing introduces correlation and can effectively create what looks like a signal from white noise, when in fact it is just the dynamics of the filter.

    We should distinguish between smoothing and filtering, because generally smoothing is a non-causal filter. Filters are usually causal: they only use data back in time to eliminate noise, not forward in time. Thus for a causal filter it is only the beginning of the data, and not the end, that needs to be removed.

    Similarly, with an anti-causal filter you only need to remove the end of the data and not the beginning. I'm sure that predictor-corrector algorithms could be derived which use model-based filters that would provide effective noise elimination over the entire data set without the need to throw away data at the endpoints. For such algorithms to work it would be necessary to have a model of the signal with good predictive qualities.
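
    The distinction is easy to see with R's filter(): sides = 2 gives the centered (non-causal) smooth and sides = 1 the causal one. A generic illustration, not tied to any dataset in this thread:

        x        <- rnorm(100)
        centered <- stats::filter(x, rep(1/29, 29), sides = 2)  # non-causal
        causal   <- stats::filter(x, rep(1/29, 29), sides = 1)  # causal
        which(is.na(centered))   # 14 NAs at each end
        which(is.na(causal))     # 28 NAs, all at the start

    Both lose 28 points, but the causal filter loses them all at the beginning, which is the point about where data has to be thrown away.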

  263. Posted Jan 26, 2008 at 3:56 PM | Permalink

    Simple question:

    Does anyone have a link to a 30-year smooth applied to any modern data set alone?

    Do you really consider this a standard procedure?

  264. Posted Jan 26, 2008 at 5:52 PM | Permalink

    Steve Mosher (#259) writes,

    So, how does one splice HADCRU onto this?

    One can do as we did in the article, and splice a 29-year centered MA of the series in question (GISS in the article) to our last observation (1935). This effectively splices 1921-49 of the instrumental series onto the corrected reconstruction.

    But of course there is nothing magical about either GISS or 1935. If one wanted to use a longer period and/or a different index, one could, for example, splice the average of our roughly tridecadal 1935, 1905 and 1875 values to the 1860-1950 average of one’s favorite instrumental series.

    Different indices and different splicing intervals will of course give somewhat different results. My impression is that it is a close call whether we are now (over the past 30 years) warmer or cooler than the MWP, but that there is no clear evidence that we are significantly warmer.
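
    In R the splice comes down to a few lines. The series below are illustrative stand-ins: giss would be the actual annual anomalies and recon1935 our last corrected value.

        ma29 <- function(x) stats::filter(x, rep(1/29, 29), sides = 2)

        yrs       <- 1880:2007
        giss      <- cumsum(rnorm(length(yrs), 0.005, 0.1))  # stand-in series
        recon1935 <- 0.05                                    # stand-in value

        sm     <- ma29(giss)
        offset <- recon1935 - sm[yrs == 1935]  # align the smoothed 1935 values
        spliced_tail <- sm + offset            # instrumental MA on the recon basis

    Swapping in a different index or splicing interval just changes which smoothed values enter the offset.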

  265. Craig Loehle
    Posted Jan 26, 2008 at 6:09 PM | Permalink

    Rob R asks whether existing geo data could be enhanced with more samples (denser sampling). Many of these are archived (e.g., ocean sediment cores) and could be revisited. I have neither access nor funding, but it is a good question and the answer is yes.
    On the question of why smoothing was done: 30 yrs is the standard for "climate", and without some smooth on this timescale you can't see a pattern.
    I've been in NC giving a talk on this material and I'm glad I missed some of the bickering. Kudos to Andrew for his efforts at graphing.

  266. steven mosher
    Posted Jan 26, 2008 at 6:34 PM | Permalink

    re 264. Hu, what is your 1935 anomaly centered to? Maybe I should read the article again?

    Say, for example, your 1935 is a +.2C anomaly from X; what is X?

  267. MrPete
    Posted Jan 26, 2008 at 8:49 PM | Permalink

    Sod, sadly you missed the point completely.

    Just imagine the data from the last 30 days was measured by specialists in 1000 locations, while the 1200 months of data were extracted from tree-ring proxies…

    Let’s assume for the moment that ALL the data is exactly precise. Doesn’t change a thing about my example, nor about Craig’s paper. One data set is sparse, the other is fine-grained. Apples and oranges. Monthly temps do not tell you whether there were blistering hot days. So you can’t look at daily temps and say that something special has happened.

    Imagine you get locked into a hot storage room in a factory at night. During your attempts to open the door, you break a pipe, which is now filling the room with hot steam.

    Your outcome is predetermined before you ever measure anything. In your example, science and measurement are immaterial. You broke the pipe, the room's filling with steam, and you know you need to leave.

    And in the case of real-world temperature, you keep harping on the idea that today is warmer than 1934. Yet the (thermometer, not proxy!) measurements don't support you.

    It’s more like: you broke a pipe and broke into a sweat, thinking that steam would come pouring out. But you’re so hot from all the work you’ve been doing, and a little bleary-eyed from sweat, that you can’t really tell what’s going on. Some coworkers are also afraid, but others are relaxed, saying the pipe you broke doesn’t actually connect to anything and there’s no real issue. 🙂

  268. bender
    Posted Jan 26, 2008 at 9:12 PM | Permalink

    You're so patient, MrPete. Good explanation.

  269. _Jim
    Posted Jan 26, 2008 at 10:56 PM | Permalink

    It’s more like: you broke a pipe and broke into a sweat, thinking that steam would come pouring out. But you’re so hot from all the work you’ve been doing, and a little bleary-eyed from sweat, that you can’t really tell what’s going on. Some coworkers are also afraid, but others are relaxed, saying the pipe you broke doesn’t actually connect to anything and there’s no real issue.

    In the interest of light-heartedness and levity this Sat. evening, let me play mosher for just a moment –

    Reading the above scenario I imagined one of those badly written Twilight Zone short stories in which:

    – improbable scripts are acted out using

    – over-played parts by grade-B actors

    – playing on the most basic of human fears by using

    – overwhelming emotion to the exclusion of rationality

    – to (quite frequently) something new.

    They had some measure of appeal in the fifties, when originally written and aired, but fall far short for anyone acquainted with reality today.

  270. John Balsevich
    Posted Jan 26, 2008 at 11:18 PM | Permalink

    I am sorry if this has already been brought up, as I am late to the discussion and have not read all the posts, but:

    It seems to me that tree-ring analyses are not valid for determining past temperatures. For one thing, there is a genetic component to tree growth and wood deposition; e.g., tree growth is developmentally regulated, as well as being seasonal.
    Also, the biosyntheses of lignin, cellulosic materials and secondary metabolites which make up the wood are regulated via a variety of feedback loops that help the tree cope with variability in temperature, available moisture, and sunlight. Furthermore, variable competition (often cyclical in nature) from other plants, insects, and microbial pathogens will certainly impact tree growth. Nutrients and soil composition will also fluctuate over time.
    I note that temperature reconstructions with a heavy emphasis on tree rings afford linear graphs with a slope of zero; this appears to me to be not an argument for previous climate stability, but an argument for the efficacy of biochemical feedback mechanisms and a tree's ability to cope with weather variability.

  271. bender
    Posted Jan 27, 2008 at 12:01 AM | Permalink

    There are dozens of threads devoted to tree rings as temperature proxies. Have a look around.

  272. Posted Jan 27, 2008 at 1:12 PM | Permalink

    Steve Mosher (266) writes,

    re 264. Hu, what is your 1935 anomaly centered to? Maybe I should read the article again?

    Say, for example, your 1935 is a +.2C anomaly from X; what is X?

    As in Loehle (2007), X is the bimillennial average of the reconstruction. I.e., each series had its own mean subtracted out before being averaged together. Since each series may begin or end at a different year, there is a small ambiguity as to exactly which period is relevant, but since a majority of the series run from at least 1-1950 AD (before tridecadal smoothing), it is reasonable to think of that as the reference period.

    In retrospect, we may have slightly understated the MWP in this manner, particularly with regard to Calvo, which ends in 1440 before smoothing. (I see now there is a typo in my SI table, which I'll try to change tomorrow. The graph clearly ends in the early 15th century.) By normalizing Calvo to 1-1440 rather than 1-1950, we left out most of the LIA, and hence had it slightly too low relative to the whole period.

    In any future incarnation, I think now that such a study should first decide on a reconstruction period (e.g., 1-1950 before smoothing in our case, when a majority of proxies were active). Then demean those proxies which cover the full period by their own means. Then take the longest truncated series, compute its mean over its own period, as well as the mean of the first group over that subperiod, and subtract out the difference. In this manner, the longest truncated series will now be zeroed to the mean over the whole period, rather than just its own period. Repeat for the second-longest series, etc., until all are zeroed.
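
    A minimal R sketch of that sequential zeroing (a toy: years in rows, series in columns, NA outside each series' coverage):

        rezero <- function(mat) {
          n_obs <- colSums(!is.na(mat))
          full  <- n_obs == nrow(mat)             # series covering the whole period
          mat[, full] <- scale(mat[, full], center = TRUE, scale = FALSE)
          done <- full
          for (j in order(n_obs, decreasing = TRUE)) {  # longest truncated first
            if (done[j]) next
            span   <- !is.na(mat[, j])
            target <- mean(rowMeans(mat[span, done, drop = FALSE], na.rm = TRUE))
            mat[, j] <- mat[, j] - mean(mat[span, j]) + target
            done[j] <- TRUE
          }
          mat
        }

    Each truncated series ends up matching, over its own span, the mean of the series already zeroed, which is the intended effect.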

    (Somewhere in the Loehle Category, Steve has a thread on Calvo, pointing out that there is unpublished information that permits his series to be continued into the 1900s. Any future study might also consider using this extension, thus minimizing the issue.)

  273. Posted Jan 27, 2008 at 3:12 PM | Permalink

    Let’s assume for the moment that ALL the data is exactly precise. Doesn’t change a thing about my example, nor about Craig’s paper. One data set is sparse, the other is fine-grained. Apples and oranges. Monthly temps do not tell you whether there were blistering hot days. So you can’t look at daily temps and say that something special has happened.

    Your assumption doesn't make any sense.
    Proxy data IS different from modern temperature data.
    My example is valid. You are trying to use the same tool (a smooth) on two entirely different things.

    And in the case of real-world temperature, you keep harping on the idea that today is warmer than 1934. Yet the (thermometer, not proxy!) measurements don't support you.

    Are you saying that temperature today is NOT warmer than 1934? Is that supposed to be a joke?

  274. Raven
    Posted Jan 27, 2008 at 3:23 PM | Permalink

    sod says:

    Are you saying that temperature today is NOT warmer than 1934? Is that supposed to be a joke?

    The US temperature records show the 30s were as warm as the last 10 years. The champion data manipulators at NASA would like us to believe that the US was an aberration and the rest of the world was a lot cooler. Believing such a claim requires one to believe that the temperature records from the rest of the world are reliable and have not been affected by spurious warming in recent years due to urbanization. I don’t feel that such a belief is plausible.

  275. Posted Jan 27, 2008 at 3:37 PM | Permalink

    The US temperature records show the 30s were as warm as the last 10 years. The champion data manipulators at NASA would like us to believe that the US was an aberration and the rest of the world was a lot cooler. Believing such a claim requires one to believe that the temperature records from the rest of the world are reliable and have not been affected by spurious warming in recent years due to urbanization. I don't feel that such a belief is plausible.

    GISS says you are wrong.

    Have you got any data about Arctic sea ice in the 30s? Or is the melt a UHI effect?

  276. bender
    Posted Jan 27, 2008 at 3:44 PM | Permalink

    1998 and 1934 are a tie, no?

  277. Posted Jan 27, 2008 at 3:50 PM | Permalink

    1998 and 1934 are a tie, no?

    Please learn to read, bender.

    The US temperature records show the 30s were as warm as the last 10 years.

  278. Raven
    Posted Jan 27, 2008 at 3:55 PM | Permalink

    See: http://www.climateaudit.org/?p=1880

    Have you got any data about Arctic sea ice in the 30s? Or is the melt a UHI effect?

    The sea ice melted enough to allow navigation of the NW passage in 1944.
    Temperature data from Greenland shows that the 30s were quite warm.

    See: http://www3.interscience.wiley.com/cgi-bin/abstract/101525941/ABSTRACT?CRETRY=1&SRETRY=0

    In contrast to Northern Hemisphere mean temperatures, the 1990s do not contain the warmest years on record in Greenland. The warmest years in Greenland were 1932, 1947, 1960, and 1941.

    That said, the pre-satellite data from the poles is pretty sparse, so ANY claim regarding relative temperatures at the poles is pretty dubious. That means the GISS estimates of global temps from the 30s do not necessarily have any connection to reality.

    The only really reliable records we have are from the continental US, and they show little or no warming, even though UHI probably affects those records as well (i.e., today may actually be cooler than the 30s).

  279. Craig Loehle
    Posted Jan 27, 2008 at 3:56 PM | Permalink

    re: steven mosher January 26th, 2008 at 6:34 pm

    Our anomaly is relative to the long term mean of the entire data set.

  280. Posted Jan 27, 2008 at 4:08 PM | Permalink

    The sea ice melted enough to allow navigation of the NW passage in 1944.
    Temperature data from Greenland shows that the 30s were quite warm.

    1944 is not in the 30s.

    This is beyond absurd. You believe that EVERY modern temperature measurement is WRONG (apart from the US temperature in the 30s), but you think that Loehle got the MWP right via proxies.

    Sorry, but that is simply nuts.

  281. bender
    Posted Jan 27, 2008 at 4:17 PM | Permalink

    Please learn to read, bender.

    sod, you said:

    Are you saying that temperature today is NOT warmer than 1934? Is that supposed to be a joke?

    “Today” is 2008 and it is cold. I picked a warmer year, 1998, to bias the analysis in your favor.

    I think that rather than learn to read, what I need to do is learn to ignore. You.

  282. Raven
    Posted Jan 27, 2008 at 4:18 PM | Permalink

    sod says:

    This is beyond absurd. You believe that EVERY modern temperature measurement is WRONG (apart from the US temperature in the 30s), but you think that Loehle got the MWP right via proxies.

    No. I believe the uncertainties in the data are sufficiently large that one cannot draw any conclusions regarding the difference between the 30s and today. Loehle's proxy analysis simply reinforces this view.

  283. Posted Jan 27, 2008 at 4:24 PM | Permalink

    “Today” is 2008 and it is cold. I picked a warmer year, 1998, to bias the analysis in your favor.

    I think that rather than learn to read, what I need to do is learn to ignore. You.

    Please learn to read. When I said that, I was talking about GLOBAL temperature.

    Then Raven brought up US temperature.

    So you are wrong on all counts. GLOBALLY it's warmer today than in 1934. (2007 was hot!)

    And the last decade was warmer in the US than in the 30s.

  284. Posted Jan 27, 2008 at 4:26 PM | Permalink

    No. I believe the uncertainties in the data are sufficiently large that one cannot draw any conclusions regarding the difference between the 30s and today. Loehle's proxy analysis simply reinforces this view.

    The "uncertainties" in the Loehle reconstruction are infinitely WORSE than those in modern temperature data.
    It is beyond me how it could reinforce your view (apart from fitting into what you want to believe).

  285. bender
    Posted Jan 27, 2008 at 4:28 PM | Permalink

    you’re welcome /ignore

  286. Mike Davis
    Posted Jan 27, 2008 at 4:34 PM | Permalink

    sod: seek help

  287. Posted Jan 27, 2008 at 4:36 PM | Permalink

    Guys, why not simply counter my points?

    Steve: I, for one, have not “accepted” Loehle’s MWP estimates. I’ve tried to consider his recon as follows – is there any objective reason to include Moberg in a spaghetti graph and not Loehle?

  288. Craig Loehle
    Posted Jan 27, 2008 at 6:07 PM | Permalink

    Please note that I do not myself consider my reconstruction the final word on the topic; it only has 18 series. What it shows is that it is not automatically true that the recent period is warmer than the MWP; with these data we would reject that hypothesis. But it is not PROOF that the hypothesis is false.

  289. Raven
    Posted Jan 27, 2008 at 6:33 PM | Permalink

    sod says:

    The "uncertainties" in the Loehle reconstruction are infinitely WORSE than those in modern temperature data. It is beyond me how it could reinforce your view (apart from fitting into what you want to believe).

    The alarmists claim that the current warming is ‘unprecedented’. My view is that this claim is not supported by the data when one considers the uncertainties. That said, the data does not specifically refute the alarmist claim either. The fact that we don't really know whether the current climate is warmer than the past is something that policy makers should take into account when drafting policies.

  290. bender
    Posted Jan 27, 2008 at 7:01 PM | Permalink

    #287

    is there any objective reason to include Moberg in a spaghetti graph and not Loehle?

    That is the question, and the answer is “no”.
    #289

    The alarmists claim that the current warming is ‘unprecedented’

    That was false even before Loehle & McCulloch (2008). L&M just drives the point home.

  291. MrPete
    Posted Jan 27, 2008 at 8:49 PM | Permalink

    Sod,

    I'll leave you to stew on the difference between early-20th-century and recent temps. Search CA for Steve's work that caused a correction in the temperature record, leaving more "top ten" temps in the early 20th century than now. More important for the discussion here: the adjustment was less than 0.2 degrees and caused the leaderboard to be rearranged. It really IS that close. No big difference.

    Beyond that, you said:

    Your assumption doesn't make any sense. Proxy data IS different from modern temperature data. My example is valid. You are trying to use the same tool (a smooth) on two entirely different things.

    Guess what. Loehle used the same tool (smooth) on 18 different proxies, ALL quite different with different measurement spacing.

    I think we may be getting somewhere in this vigorous discussion. Sod, it sounds like you are claiming the purpose of the "smooth" is to hide questions about data-value uncertainty. In reality, the purpose is to properly address data timing and measurement-density issues. Fine-grained data tells little about climate trends, so they smooth even modern data (look at your GISS graph!). And a mixture of coarse and fine data can't be compared, so they smooth. That's why I gave you an example with monthly vs. daily data. Valid, sophisticated smoothing (taking various important factors into account) is how "they" deal with such issues.

    Turn your analysis upside down for a moment. As you said, (smoothed) proxy data IS different from modern (fine grained) temperature data. You can’t put both in the same analysis box, or graph, or sentence, without adjusting one to fit with the other. They’re different beasties.

  292. Phil.
    Posted Jan 27, 2008 at 10:41 PM | Permalink

    Re #287

    Steve: I, for one, have not “accepted” Loehle’s MWP estimates. I’ve tried to consider his recon as follows – is there any objective reason to include Moberg in a spaghetti graph and not Loehle?

    You'd have to reconcile the relative zeros; the main thing that bothers me is the treatment of the 100-yr series.

  293. MarkW
    Posted Jan 28, 2008 at 7:14 AM | Permalink

    sod,

    You claim that you are interested in peaks, now and then.

    As the discussion has shown, averaging your data lowers peaks. The proxy data is inherently averaged.

    Unless you also average the modern data, you are comparing one set of data with its peaks leveled against one whose peaks haven't been, in order to find which has the higher peak.

    Not a valid comparison.

  294. MarkW
    Posted Jan 28, 2008 at 7:15 AM | Permalink

    sod,

    Anthony Watts's work has shown that a significant fraction of temperature stations have error margins of 5°C or greater.

    Your belief in the accuracy of the modern network is quaint, but not warranted.

  295. MarkW
    Posted Jan 28, 2008 at 7:18 AM | Permalink

    bender,

    Yes, he’s already decided in advance what the correct answer is, and the only valid data is the data that agrees with him.

  296. MarkW
    Posted Jan 28, 2008 at 7:19 AM | Permalink

    sod,

    How do you know that the pipes are putting out steam? Perhaps it's just stage smoke.

  297. MarkW
    Posted Jan 28, 2008 at 7:21 AM | Permalink

    Just imagine the data from the last 30 days was measured by specialists in 1000 locations, while the 1200 months of data were extracted from tree-ring proxies…

    If we’re going to start with a fantasy world, why can’t I just assume that the proxy data is perfect, as is?

  298. MarkW
    Posted Jan 28, 2008 at 7:27 AM | Permalink

    sod whines:

    The "uncertainties" in the Loehle reconstruction are infinitely WORSE than those in modern temperature data.

    Yet somehow you are absolutely convinced that today's temperatures are higher than those shown in the proxies, despite the fact that, in your opinion, proxies are infinitely bad.

    How interesting.

  299. steven mosher
    Posted Jan 28, 2008 at 7:31 AM | Permalink

    Thanks, Craig.

  300. MrPete
    Posted Jan 28, 2008 at 10:12 AM | Permalink

    One last, then time for some RW (Real World) work…

    Sod said:

    Have you got any data about Arctic sea ice in the 30s? Or is the melt a UHI effect?

    This thinking suggests to me that sooner or later we need a ClimateAudit wiki (a community-built document set) that summarizes for newbies some of the clearer thinking that's been going on, eliminating various fallacies and spin in analysis and thinking.

    Obviously, to CA regulars, Sod's statement is tossing sticks and stones into the mix, attempting to somehow disparage the simple result that has emerged from L&M 2008's work. I'm just thinking it would be nice to have a place to send people to read up on fallacies, good and bad statistical analysis, strengths and weaknesses of various measurement, proxy and modeling methods, and so on. Better if some of our tax dollars would actually fund this and a few grad students would take it on, but hey, why not some more Citizen Science.

    In this case: sod, the obvious answer is:

    Humanity is not in control of the planet’s heat distribution. Distribution of ice at the poles (and in between for that matter) is rather non-uniform. Models don’t predict the reality well, and reality is not even being measured all that well.

    (I could lob “look at global ice levels — they are above normal” back at you, but my point is more that we really don’t know enough to interpret the data properly, particularly when it comes to sea ice. If the “standard climate smooth” convention is 30 years… we will have our first satellite-measured climate-ice data point in 2009!)

  301. steven mosher
    Posted Jan 28, 2008 at 11:24 AM | Permalink

    Bender: to further some of the work that Andrew was playing with, and to provide some Miracle-Gro for sod, I thought I would take a look at stitching the recon to an instrument series. Craig and Hu made a passing mention of this (WRT GISS) in the paper, toward the end.

    To build my frankenstein graph I used HADCRU. Because HADCRU goes from 1850 on, that gave me 1850-1935 to overlap. I messed about with a couple of approaches to reconciling the HADCRU anomaly with the Loehle anomaly. Do I simply pin 1850 at zero bias, or 1935? In the end I simply used the whole 1850-1935 period and minimized the mean difference to zero. I think the required offset for HADCRU was something like .23C.

    Then I did a 29-year centered MA on the offset HadCRU.

    I wondered about doing the smooth before the offset, but I was just playing around, so I didn’t think
    too deeply about it.

    Then I plotted my Frankenstein. The Loehle anomaly and the HadCRU anomaly lined up quite nicely
    EXCEPT for one thing: an apparent shift in time.

    Earlier we talked about the dating error in reconstructions. If you believe in wiggle matching,
    then the Loehle recon matches up with HadCRU IF you add 15 years to the Loehle dates.

    I’ll let people who know what they are doing attack this little oddity.
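
    For concreteness, the match-and-smooth step amounts to something like the following minimal Python sketch. The year-to-anomaly dicts are hypothetical stand-ins for the real Loehle and HadCRU series, and this is an illustration of the procedure, not my actual script:

        # Hypothetical inputs: recon and instr are dicts of year -> anomaly (deg C),
        # assumed to hold consecutive annual values.
        def offset_to_match(recon, instr, start=1850, end=1935):
            """Offset to add to instr so SUM(recon - instr) over the overlap is zero."""
            years = [y for y in range(start, end + 1) if y in recon and y in instr]
            return sum(recon[y] - instr[y] for y in years) / len(years)

        def centered_ma(series, window=29):
            """Centered moving average; the first and last window // 2 years drop out."""
            years = sorted(series)
            half = window // 2
            return {years[i]: sum(series[y] for y in years[i - half:i + half + 1]) / window
                    for i in range(half, len(years) - half)}

        # Stitch: smooth the instrument series, then shift it onto the recon's level.
        # offset = offset_to_match(recon, instr)          # something like 0.23C in my case
        # stitched = {y: v + offset for y, v in centered_ma(instr).items()}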

  302. MarkW
    Posted Jan 28, 2008 at 11:33 AM | Permalink

    have you got any data about Arctic sea ice in the 30s? Or is the melt an UHI effect?

    Just how does this fact affect the discussion of whether current temperatures are warmer than the MWP???

  303. Craig Loehle
    Posted Jan 28, 2008 at 2:17 PM | Permalink

    Re: Mosher graph
    I would not be surprised by a 15 yr shift given the dating errors. Interesting.

  304. Posted Jan 28, 2008 at 2:25 PM | Permalink

    Re Steve Mosher #301,
    I’m not sure what you’re showing here or why. What is Loehle 1935? Craig’s original 2007 series (which admittedly had a
    50-year timing error on 4 of the 18 series), or the new 2008 correction? In either case, why is shifting it 15 years “time corrected”?

    Are you just saying that it matches HADCRU better with an arbitrary 15 year shift than without?

  305. Posted Jan 28, 2008 at 2:38 PM | Permalink

    Are you just saying that it matches HADCRU better with an arbitrary 15 year shift than without?

    Is there any point in matching to a temperature series which includes the bucket correction? Raw data would be better (unless that’s what you’ve used, of course…)

    JF

  306. Posted Jan 28, 2008 at 2:42 PM | Permalink

    Steve: I, for one, have not “accepted” Loehle’s MWP estimates. I’ve tried to consider his recon as follows – is there any objective reason to include Moberg in a spaghetti graph and not Loehle?

    I can’t think of an important one.
    Most people will ignore it, because of where it was published.

    I don’t have a problem with the part of the Loehle work that is supported by data,
    but I strongly disagree with the part that isn’t.

  307. Posted Jan 28, 2008 at 3:11 PM | Permalink

    As you said, (smoothed) proxy data IS different from modern (fine grained) temperature data. You can’t put both in the same analysis box, or graph, or sentence, without adjusting one to fit with the other. They’re different beasties.

    That is all nice.
    Simple problem: you are losing 15 years of GOOD data! Actually, the BEST data we have.
    I am pretty sure that you can bring the two together without doing that, but it is not in your interest to do so.

    As the discussion has shown, averaging your data lowers peaks. The proxy data is inherently averaged.

    Unless you also average the modern data, you are comparing one set of data with peaks leveled, with one set in which they haven’t, in order to find which has a higher peak.

    Those “peaks” are useless anyway. No one would place huge faith in a short-term (1-2 year) peak in proxy data,
    and very few people believe that we are in a “peak” situation right now.

    Anthony Watts’s work has shown that a significant fraction of temperature stations have error margins of 5°C or greater.

    I’ve asked this question now a hundred times: what sort of an error is that?
    Details, please.

    And I DID notice that none of you dares to answer my simple question above: if the error over 1935 years based on 19 proxies is ±0.3°, then what is the error of measured temperature over 150 years at 1000 stations?!?

    Just how does this fact affect the discussion of whether current temperatures are warmer than the MWP???

    Sea ice does not affect this discussion, unless you believe that the 30s were warmer than the last decade, like Raven does.
    Then you have quite a lot of things to explain…

  308. henry
    Posted Jan 28, 2008 at 3:29 PM | Permalink

    sod said:

    And I DID notice that none of you dares to answer my simple question above: if the error over 1935 years based on 19 proxies is ±0.3°, then what is the error of measured temperature over 150 years at 1000 stations?!?

    And the simple answer is, we don’t know what the “error of measured temperature over 150 years at 1000 stations” is yet. Only 50% of the US stations have been surveyed, and so far about 87% of those have a possible error of 1° or greater (56% greater than 2°; 14% greater than 5°).

    And that’s just the US stations.

  309. Raven
    Posted Jan 28, 2008 at 3:31 PM | Permalink

    sod says:

    Those “peaks” are useless anyway. No one would place huge faith in a short-term (1-2 year) peak in proxy data, and very few people believe that we are in a “peak” situation right now.

    Actually, many people do feel that temps peaked in 1998 and are on their way down as the solar cycle heads into a minimum.

    You *want* to believe that the current data is not a peak. But you cannot *prove* that the current data is not a peak until enough time has passed (29 years, perhaps?).

  310. Posted Jan 28, 2008 at 3:35 PM | Permalink

    And the simple answer is, we don’t know what the “error of measured temperature over 150 years at 1000 stations” is yet. Only 50% of the US stations have been surveyed, and so far about 87% of those have a possible error of 1° or greater (56% greater than 2°; 14% greater than 5°).

    And that’s just the US stations.

    I am here to learn. Please explain to me what “error > 5°C” means.

    In detail, please.

  311. Sam Urbinto
    Posted Jan 28, 2008 at 5:01 PM | Permalink

    This data is no more than what it is. There are real statisticians working on it. It is totally based upon peer-reviewed studies. I fail to see how anyone could consider it less meaningful rather than more.

    Anyway, greater than 5 degrees means the station matches the official description of the margin of error attributed to a station that’s rated CRN 5.

    As far as some of the other discussions, if I get daily t_mean readings at the 5 locations in the same grid:

    min/max
    15/15
    1/30
    -20/50
    -5/35
    10/20

    Last year this was:

    min/max
    15/15
    0/30
    -20/50
    -5/35
    10/20

    So the anomaly is up 0.1. Is this average of t_mean across such wide ranges meaningful after I combine it for the month, and then combine that with all the other grids for the month on a global scale?
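
    Spelling that arithmetic out (a throwaway Python sketch of my made-up numbers above, with t_mean = (min + max) / 2):

        this_year = [(15, 15), (1, 30), (-20, 50), (-5, 35), (10, 20)]
        last_year = [(15, 15), (0, 30), (-20, 50), (-5, 35), (10, 20)]

        def grid_mean(stations):
            """Average of the stations' daily t_mean values for the grid cell."""
            return sum((lo + hi) / 2 for lo, hi in stations) / len(stations)

        print(round(grid_mean(this_year) - grid_mean(last_year), 1))  # 0.1

    One station’s minimum moving by a single degree shifts the whole cell by a tenth, which is exactly the scale the global anomalies are quoted at.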

  312. steven mosher
    Posted Jan 28, 2008 at 5:42 PM | Permalink

    RE 303 and 304. Hu, “time corrected” is the wrong term; “time adjusted” is better.

    I wanted to “stitch” the Loehle series to the HadCRU instrument record.

    How to do that??

    Basically, I wanted to “match” the Loehle series (1850-1935) against the HadCRU series 1850-1935.

    The latter is annual. I matched the series by zeroing SUM(Loehle - HadCRU) from 1850 to 1935.

    Then I smoothed HadCRU with your 29-year centered filter and applied the offset.

    (I think this process could be flawed. I was just winging ***)

    Then I looked at how the smoothed (29-year) HadCRU “matched up” with Loehle over the 1864-1935 period.

    Not bad, BUT!!!! It looked like the match would be better if I shifted Loehle to the right (added years).

    Basically, to “stitch” a recon to an instrument series, you have to make two adjustments:

    one in X, the other in Y.

    When I shifted Craig’s dates 15 years to the right (added 15), the peaks and valleys were happy!
    (Not a statistical test you want to report.)

    So I think there is an interesting test here: to stitch the recon to the instrument series,
    you make two adjustments, one in Y, the other in X.

    My point??? I was just messing about.

    Have a look at the data. Take HadCRU and run your 29-year filter on it.

    Then just plot it on the same chart as Craig’s data, lining up the dates…

    How to wiggle X and Y to sync those two curves?
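
    If anyone wants to make the wiggle mechanical: at each candidate lag, redo the Y offset and score the fit, then keep the lag with the smallest misfit. A throwaway sketch along the same lines as above (the same hypothetical year-to-anomaly dicts, and emphatically not a statistical test you want to report):

        def rms_misfit(recon, instr_smooth, lag):
            """RMS misfit comparing recon year y against instrument year y + lag,
            after re-zeroing the mean difference (the Y adjustment)."""
            pairs = [(recon[y], instr_smooth[y + lag])
                     for y in recon if (y + lag) in instr_smooth]
            if not pairs:
                return float("inf")
            offset = sum(i - r for r, i in pairs) / len(pairs)
            return (sum((r + offset - i) ** 2 for r, i in pairs) / len(pairs)) ** 0.5

        def best_lag(recon, instr_smooth, max_lag=30):
            """Positive lag = shifting the recon dates to the right (adding years)."""
            return min(range(-max_lag, max_lag + 1),
                       key=lambda lag: rms_misfit(recon, instr_smooth, lag))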

  313. steven mosher
    Posted Jan 28, 2008 at 6:09 PM | Permalink

    Sod,

    If you read French you can see exactly what the 5C error means.

    Ask Anthony for the French version of the paper. More later if you like.

  314. jae
    Posted Jan 28, 2008 at 6:32 PM | Permalink

    312: Steve Mosh: Roy Spencer did it somehow. He shows it on that video that was linked around here somewhere.

  315. jae
    Posted Jan 28, 2008 at 6:42 PM | Permalink

    314: The video is linked at the BB.

  316. Posted Jan 28, 2008 at 7:07 PM | Permalink

    RE 310, 313, I actually did a rough translation of Michel Leroy’s Meteo France site classification manual last year, which is the source of the 5 degree error figure for Class 5. The simplified CRN (US Climate Reference Network) ratings used by Anthony Watts on http://www.surfacestations.org/USHCN_stationlist.htm identify Class 5 with an “error >= 5C”. Leroy’s ratings (which are a little more tolerant of offending surfaces to start with) give Class 5 a less precise “error 5 dC or more?”. Anthony is naturally just taking the official US ratings as his norm.

    In a postscript to my translation, I show that the Boulder CO station, which would be a CRN Class 3 (“error 1 C”) because of the bike path that carries climatologists up to NCAR immediately above the station, would rate an MF Class 2 (not ideal, but reasonably accurate) given the small area covered by the bike path and even the nearby road and parking lot.

    I still haven’t heard back from Leroy if it’s OK to release my translation. Perhaps he heard I’ve been keeping bad company…. Anyway, if anyone wants an unofficial copy of one or both, pls e-mail me at mcculloch (dot)2(at)osu(dot)edu.

  317. steven mosher
    Posted Jan 28, 2008 at 7:23 PM | Permalink

    RE 314. JAE? No, I will not make out with you!

    I like recon work. I like the instrument series. I like GCMs. I’m here to learn.

    Go on with the chlorophyll

    Sorry, I love that scene.

  318. steven mosher
    Posted Jan 28, 2008 at 7:40 PM | Permalink

    re 316.. Will do Hu.

  319. Posted Jan 28, 2008 at 8:27 PM | Permalink

    Re # 315, thanks, Jae!
    Roy Spencer of U Ala Huntsville cites Craig’s reconstruction in a talk online at http://video.energypolicytv.com/displaypage.php?vkey=6f9554fe9375772ff966&channel=Climate%20Change.
    He cites the 2007 version, but the 2008 correction doesn’t look all that much different, for his purposes.

  320. Craig Loehle
    Posted Jan 28, 2008 at 8:51 PM | Permalink

    I sent Roy the update.

  321. Posted Jan 29, 2008 at 2:22 PM | Permalink

    Anyway, greater than 5 degrees means the station matches the official description of the margin of error attributed to a station that’s CRN 5

    It would be much better to move this part of the discussion there.

    But the problem is that many people use the term from the classification without the slightest understanding of it.

    It could be an average error. Or we would want to know the time period in which we expect this error to occur (day? hour? year?).

    For climate research this error is only important if it affects certain times or values (min, max, or the reading times) and if it changes systematically over time.

    Sod,

    If you read French you can see exactly what the 5C error means.

    Ask Anthony for the French version of the paper. More later if you like.

    I think I’ve seen the Leroy paper before, and it didn’t include any extra information as far as I remember.

    Just curious: what will happen if you get “error >= 5°” and “error 30%” because of an obstacle problem with solar radiation?

  322. steven mosher
    Posted Jan 29, 2008 at 2:39 PM | Permalink

    Re 321. You haven’t seen the Leroy “paper” unless you got it from him. Explain where you found
    it. Use French. Since Hu did the English translation, this should be a fun test.

    In every UHI study I have looked at, the errors are not skewed toward inhibiting insolation, but rather
    toward increasing heat storage or obstructing reradiation.

    However,
    It’s really all beside the point if you just agree to follow the consensus science on climate siting.

    The NOAA standards and the WMO standards are all clear: to avoid contamination (either cooling or heating),
    sites should follow the guidelines.

    Climate science, consensus climate science, tells us we can estimate the global temp with 60 GOOD sites.

    We’ve got hundreds of bad sites. Rather than exert mental energy and debate on “adjusting” sites
    that don’t meet consensus climate science standards, just drop the bad sites. It’s simple.

    You could drop 1000 sites in the United States and still have 10 times the number of sites used
    in Brazil.

  323. conard
    Posted Jan 29, 2008 at 2:57 PM | Permalink

    steven mosher : re322

    I am glad that you brought up the theorized number of GOOD sites required for GMT estimates. I have seen this discussed before, and it is on my list of things to look up and understand. If time (and good will 🙂 ) permits, will you reply in unthreaded with references?
    Thanks

  324. Posted Jan 29, 2008 at 3:41 PM | Permalink

    re 321. You havent seen the Leroy “paper” unless you got it from him. Explain where you found
    it. Use French. Since Hu did the english translation this should be a fun test.

    I guess we are speaking about this paper.

    My French isn’t perfect, but it is good enough to notice that there is little additional information on my question in it.

    Leroy has a nice new presentation out concerning screens and maintenance.

    And this time the relations are in the right direction….

    PS: dropping bad stations is a good idea!

  325. steven mosher
    Posted Jan 29, 2008 at 6:36 PM | Permalink

    Re 324. Now see if the stuff you linked to was the actual paper delivered at the AMS 10th Symposium
    on Meteorological Observations, 1998.

    Audit, sod. Go dig.

  326. Posted Jan 29, 2008 at 6:58 PM | Permalink

    “Sod” (#324) writes,

    [Quoting Steve Mosher] Re 321. You haven’t seen the Leroy “paper” unless you got it from him. Explain where you found
    it. Use French. Since Hu did the English translation, this should be a fun test. [endquote]

    I guess we are speaking about this paper. [http://meteo.besse83.free.fr/Divers/Classite.htm]

    My French isn’t perfect, but it is good enough to notice that there is little additional information on my question in it.

    The Besse83 page is based on Leroy’s paper, but like CRN omits the question marks after the error estimates that are present in the original. I was originally planning to translate the Besse page, since it is quasi-official and all I could find, but then Leroy sent me his original, which is in fact a little different. Perhaps CRN is based on the Besse version, since it confidently omits the question marks.

    Basically, Leroy is saying that the error in a Class 5 site may be as big as 5 dC or even higher, not that it is certainly at least 5 dC. This is just a judgement call, not a documented measurement.

    Bear in mind that Leroy is only looking at the immediate environment. A typical CRU site like Runway 28L of Port Columbus International Airport (the nearest CRU site to me) might actually be an MF or CRN Class 1, because it is on a grassy midway at least 100km from the nearest taxiway. Nevertheless, it is breathing in the exhaust and/or burning tires of some 200 flights a day, and is surrounded by an UHI of runways, taxiways, parking lots, freight depots, and interstate highways, not to mention being downwind from a metropolitan area of more than 1 million.

    On the other hand, the CRU stations SFO and Reno, being directly on unvegetated artificial surfaces (landfill or compacted clay), are clear-cut 5’s, UHI or no, IMHO. Three other of the carefully selected CRU stations might be especially fun to take a closer look at: on rural Long Island, the village of LaGuardia; in the Garden State, the municipal airstrip for the town of Newark; and in the cornfield state of Illinois, a township named O’Hare.

  327. steven mosher
    Posted Jan 29, 2008 at 7:11 PM | Permalink

    Re 323. The referenced paper (from memory, and I can’t remember when my Alzheimer’s was this bad)
    was Shen???… It’s cited on CA somewhere, probably with my name attached to the post, or maybe Gavin’s.
    I am usually better at recalling things, but when you talk to sod your brain rots.

  328. Posted Jan 29, 2008 at 7:29 PM | Permalink

    Someone (#326) wrote

    at least 100km from the nearest taxiway.

    Umm… make that 100 m.

  329. steven mosher
    Posted Jan 29, 2008 at 9:02 PM | Permalink

    Re 326. The 1998 AMS symposium paper was 4 pages long. Did Leroy send you that?
    I believe it’s the one cited by NOAA.

  330. Posted Jan 29, 2008 at 9:29 PM | Permalink

    What he sent me is Note Technique #35, Classification d’un site, Novembre 1999, put out by the Direction des systemes d’observation of Meteo France, 12 pages long, of which the first 8 pertain to temperature.

  331. conard
    Posted Jan 29, 2008 at 10:28 PM | Permalink

    LOL. Thanks

  332. Posted Jan 30, 2008 at 4:57 AM | Permalink

    Basically, Leroy is saying that the error in a Class 5 site may be as big as 5 dC or even higher, not that it is certainly at least 5 dC. This is just a judgement call, not a documented measurement.

    Thanks for the answer, Hu.

    I had a discussion with a guy from Meteo France some time ago (perhaps even here on CA?!?) who confirmed what you just said.

    The real problem, of course, is that most people who mention the “error > 5°C” thing don’t understand what it means. That is why they can’t answer the question.
    Anyone who thinks about the notation for even one second will notice that “potential error > 5°C” is the only reasonable explanation. One doesn’t need a lot of insight or French papers to come to this conclusion…

  333. Peter Thompson
    Posted Jan 30, 2008 at 6:47 AM | Permalink

    sod 332

    The real problem, of course, is that most people who mention the “error > 5°C” thing don’t understand what it means. That is why they can’t answer the question.
    Anyone who thinks about the notation for even one second will notice that “potential error > 5°C” is the only reasonable explanation. One doesn’t need a lot of insight or French papers to come to this conclusion…

    Very good, sod. Now read up on error and significant digits and what they mean in relation to instrumentation with this potential error. Most people who claim it is “hotter now” based upon temperature anomalies of tenths of one degree C don’t understand that.

  334. MarkW
    Posted Jan 30, 2008 at 7:02 AM | Permalink

    For most sites where the “potential” error is greater than 5 dC, it’s highly unlikely that the actual error will be just a few tenths of a degree C.

  335. MrPete
    Posted Jan 30, 2008 at 7:25 AM | Permalink

    Weighing in briefly (as a rather lightweight wrestler 🙂 )…
    Seems to me, staying on topic here, the question being raised with respect to L&M is: what would be the problem with comparing or analyzing a not-obviously-invalid set of proxies against the modern temp record? Is there something truly different about the data series?
    We’ve already discussed to death issue(s) related to smoothing: the need to address differing data measurement “density” and the need to observe trends at climatological time scales rather than “instantaneous” highs and lows.
    Another aspect of this is what might be called Anthropological Measurement Impact. The more we believe we have a comprehensive understanding of the “sensor” (whether a thermometer, treemometer, siltmometer, icemometer, d’O-teen-mometer or any other crudmometer), the more we tend to feel confident in making various adjustments to the raw measured data.
    The interesting thing is, the final outcome we seek is an anomaly record, not absolute values. And in that sense, the adjustments can themselves make up a significant component of the final data value.
    Typically, the adjustments are not so crude as to be obvious to the naked eye, but they’re in there all the same. Sure as you can find mushrooms and garlic in your Prego spaghetti sauce.
    To me, this is another source of uncertainty in splicing modern temp to historical proxies. Proxies tend to be composed of various “accumulations” in nature. Modern temp is a highly tweaked data series calculated from a set of man-made sensor devices which themselves are tweaked to represent an attribute of the natural world.
    Yes, this is an arm-waving argument in this posting, supported by many analytical results in the CA site. (Note: I’m not saying anything at all about the direction of the tweaks. Just that we’re dealing with older generally less-tweaked data vs modern generally highly-tweaked data. Apples vs Fruit Rollups.)

  336. Andrew
    Posted Jan 30, 2008 at 11:10 PM | Permalink

    steven mosher

    So, how does one splice HadCRU onto this?

    Common “zero” of the earliest year in HadCRUT with the same in Loehle (made them the same value by subtracting the difference), then a 29-year smooth “trendline”. And I was advised against “wiggle fitting” (i.e., arbitrary shifting), so I don’t think you should do that.

    sod, as before, a smooth in the recent data doesn’t “hide” your elephant any more than the whale is hidden in the recon by that smooth. Is the whale hidden in there? If the elephant is in the recent data, then a whale must certainly be in the Medieval data!

    Additionally, if I may weigh in on your little “warmer now or thirties” debate: while I’m sure there are errors in the data such that recent warming is exaggerated, I doubt it isn’t warmer globally now than then. That would be all but physically impossible. But it is an interesting question: why did some geographic areas experience an especially “warm thirties”, and why does this period also happen to be warm globally, but not as warm as now? That’s a big mystery, I think.

    Anyone is welcome to repeat my analysis with whatever data they please, of course!
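
    For what it’s worth, a one-liner version of my anchoring choice (same hypothetical year-to-anomaly dicts as in Mosher’s sketches upthread): pin at a single common year rather than zeroing the mean difference over the whole overlap.

        def pin_at_year(recon, instr, year=1850):
            """Shift instr so the two series coincide at a chosen year."""
            shift = recon[year] - instr[year]
            return {y: v + shift for y, v in instr.items()}

    Pinning at one year makes the splice hostage to that single year’s value; zeroing the mean over the whole overlap is less sensitive to noise in either series.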

  337. steven mosher
    Posted Jan 31, 2008 at 7:35 AM | Permalink

    Re 336. That’s what I did. However, I wiggle matched. Why? Just to see.

  338. Paz
    Posted Feb 1, 2008 at 8:39 PM | Permalink

    Long-time reader, first-time poster here, but to my eyes it’s kinda obvious: if the Loehle study needs a 29-year smooth to compare recent temperatures to earlier ones, then it just can’t say anything valid about the recent warming, and I start to wonder what the fuss is all about.

  339. MrPete
    Posted Feb 1, 2008 at 9:48 PM | Permalink

    Paz,

    The problem is, “recent” warming is essentially about the same temp whether within the Loehle range (1930s) or today. But the *main* thing it is “saying” is not about today. It is speaking volumes about the past.

    And that is what the fuss is about. Is today special or not? If it was quite warm 1000 years ago, then quite possibly we should be more cautious about claiming today is something to fuss about.

  340. Paz
    Posted Feb 2, 2008 at 7:16 AM | Permalink

    “But the *main* thing it is “saying” is not about today.”
    No, that’s not true. The purpose of the study is the relationship between today and the past. To make this comparison you need an accurate enough measure of the past’s and today’s climate. In particular, you need a measure that *can* capture the recent temperature changes, which happened on a timescale of a few years. If the only way the Loehle method allows you to compare the present to the past is by running a 29-year smooth on the recent warming, then this method is, by definition, not accurate enough.

    Craig Loehle said somewhere above:
    “What [the reconstruction] shows is that it is not automatically true that the recent period is warmer than the MWP, and with these data we would reject that hypothesis.”
    Again, that is not a fair characterization. The only conclusion we can draw from the study is: if we use a really crude measure, *then* we can’t find any differences between the past’s and today’s climate. But that conclusion is, of course, trivial. What the study therefore does, ironically, is highlight the need for proxies with good temporal resolution (*cough* tree rings *cough*).

  341. Peter Thompson
    Posted Feb 2, 2008 at 7:54 AM | Permalink

    Paz,

    What good does a proxy with good time resolution do for you if it refuses to tell you anything about temperature consistently? If you are a long-time reader, as I am, then you must have read about divergence, updating of proxies, calibration, and the like. It seems likely to me that trees are a better proxy for moisture than temperature, but that is the key… uncertainty. Loehle reminds us that we can’t find any definitive answers about the past vs. the present, unlike the IPCC, which ignores the uncertainty.

    Since things are uncertain, in my mind that ups the value of anecdotal evidence, or rather makes its relatively low value similar to fuzzy science. There is a glacier receding in Greenland which is uncovering plants scientifically dated (with good temporal resolution) to 1000 years ago. What does that tell us about then and now?

  342. Mike Davis
    Posted Feb 2, 2008 at 8:24 AM | Permalink

    Pete T:
    It tells us that the world goes round and round and that the temperature goes up and down on many time scales.

  343. Craig Loehle
    Posted Feb 2, 2008 at 8:28 AM | Permalink

    Paz: also note that Steve M has shown that you can easily find tree ring series which show a warm MWP.

  344. MrPete
    Posted Feb 2, 2008 at 8:59 AM | Permalink

    Paz,

    Your argument proves too much. You say “…To make this comparison you need an accurate enough measure of the past’s and today’s climate…this method is, by definition, not accurate enough.”

    I don’t think anyone will argue much with that statement. Sure, we’d love even more accurate methods. We’re constantly arguing for better data disclosure, better CI evaluations, etc etc etc. The point is, this analysis method is more accurate and better defined (in CI, etc) than previous proxy studies. It’s one of the best-defined proxy studies available.

    And so, if THIS method is not accurate enough, that speaks volumes about our penchant to be so certain about even less-accurate methods.

    Does that help?

  345. deadwood
    Posted Feb 2, 2008 at 11:58 AM | Permalink

    Paz’s big complaint can be solved the same way Mann and the Hockey team solved it for the tree ring proxies.

    All Loehle needs to do is tack on the instrumental record to the end. But of course that is, as has been repeatedly and correctly pointed out here at CA, bad science.

  346. Craig Loehle
    Posted Feb 2, 2008 at 8:47 PM | Permalink

    There have been clear problems with the tree ring reconstructions. The proxy record I put together clearly is noisy and imperfect as well. How there is any way to spin this and say recent temperatures are with high certainty “unprecedented” simply escapes me.

  347. Posted Feb 2, 2008 at 8:52 PM | Permalink

    The point is, this analysis method is more accurate and better defined (in CI, etc) than previous proxy studies. It’s one of the best-defined proxy studies available.

    This is total nonsense.

    The Loehle study only got published in E&E.
    Even with the corrections, which improved the paper massively, I doubt any real journal would publish it, because Craig keeps making claims about a period not covered by his data.

    The methods used in the paper (yet again improved a lot in the correction) are still simplistic.
    Loehle only used proxies that are normalised into temperature, and he uses extremely simple methods on the data.

    The claim that it is the best paper available is total nonsense.

  348. Posted Feb 2, 2008 at 9:00 PM | Permalink

    There have been clear problems with the tree ring reconstructions. The proxy record I put together clearly is noisy and imperfect as well. How there is any way to spin this and say recent temperatures are with high certainty “unprecedented” simply escapes me.

    Sorry, Craig, but you will get a pretty clear result if you splice unsmoothed GISS data onto your result and give it an error range that is realistic in comparison with the one you are using.

    How you can deny this fact simply escapes me.

  349. bender
    Posted Feb 2, 2008 at 9:08 PM | Permalink

    sod, was it warmer on February 2, 1912 noon than it was from 1850 to 1880?

  350. Posted Feb 2, 2008 at 9:20 PM | Permalink

    sod, was it warmer on February 2, 1912 noon than it was from 1850 to 1880?

    bender, is proxy data slightly less reliable than modern measured data?

  351. bender
    Posted Feb 2, 2008 at 10:08 PM | Permalink

    sod, you continually try to use this uncertainty inequity to your advantage in favor of your pet hypothesis. You do not know, sir, how warm it was in the MWP. Within the bounds of uncertainty, it was as warm then as it is now. I understand your frustration that the historical data are uncertain. I sympathize. But that does not mean you are correct. It means you are incorrect. It is that simple.

  352. Wayne Hamilton
    Posted Feb 2, 2008 at 11:21 PM | Permalink

    One off-the-top thing from someone new to this thread: is anyone aware of the ‘divergence’ problem in deriving proxy temperature from tree rings at high elevation in the Arctic? There was a special session at the Fall AGU two months back. Half the trees show rising temperature and the other half show falling temperature. Maybe it’s something other than temperature doing this in bristlecones(?), and that might obviate the need to make a phase correction in Mosher’s graph above.

  353. Wayne Hamilton
    Posted Feb 2, 2008 at 11:57 PM | Permalink

    Should have mentioned I’m looking at proxy wind using ring-width eccentricity. It works for conifers on the west coast.

  354. bender
    Posted Feb 3, 2008 at 11:19 AM | Permalink

    There are about a dozen threads that discuss the divergence issue. Search on “divergence”, “positive and negative responders”, “Rob Wilson”, “Martin Wilmking”.

  355. Wayne Hamilton
    Posted Feb 3, 2008 at 11:30 AM | Permalink

    Bender. Over on 2506 the last comment on Divergence was on 9 January. Where’s the action? Wilmking found something “odd” with spiral grain, but he hasn’t told me what it was.

  356. Scott Brim
    Posted Apr 8, 2010 at 5:38 PM | Permalink

    I’ve resurrected this thread from January 2008 entitled “Loehle Correction” to ask a question of Pat Frank concerning his opinion that currently there are no valid paleoclimate proxies.

    Within the “Dealing a Mortal Blow to the MWP” thread from April 2010, Pat Frank makes the following comment in the context of discussing a Jonathan Overpeck climategate e-mail:

    [Pat Frank]: “Someone needs to deal a “mortal blow to the misuse of supposed” paleo-temperature proxies. Presently, there aren’t any, and the wholly specious imposition of physical meaning onto numerical constructs is a “travesty.”

    Mr. Frank, may I ask you to offer a viewpoint concerning the validity — or possible lack thereof — of the Loehle-McCulloch 2008 paleotemperature reconstruction?