Mann 2008 MWP Proxies: Punta Laguna

I’m spending a little extra time examining the “new” Mann 2008 MWP non-dendro “proxies”. There are 4 series from varved lake sediments at Punta Laguna, Mexico: two δ18O and two δ13C series, one pair for each of two gastropod species.

Original data is archived at WDCP (links below). I compared this data to Mann’s versions and noticed some puzzling differences. The coronatus data (both δ13C and δ18O) contains only one value in the 20th century. Although the Mann version is consistent with interpolation of the archived data in the pre-modern period, his modern segment contains values that do not exist in the official archive. Where on earth did Mann’s 20th century values come from?
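
As a check that anyone can replicate, here is a minimal R sketch of the comparison (the archive file is linked at the end of the post; the parsing and the Mann-version filename are hypothetical – consult the readme for the actual column layout):

    # compare the WDCP archive to the Mann 2008 version of a series
    # (column names and the Mann file are assumptions for illustration)
    archive <- read.table("puntalaguna.txt", header = TRUE)  # from the NOAA ftp
    mann    <- read.table("mann_382.txt",    header = TRUE)  # hypothetical file

    # how many archived coronatus values fall in the 20th century?
    sum(archive$year >= 1900 & !is.na(archive$coronatus_d18O))

    # which Mann years have values with no counterpart in the archive?
    m <- merge(mann, archive, by = "year", all.x = TRUE)
    m$year[!is.na(m$mann_value) & is.na(m$coronatus_d18O)]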

Mann reports flashy correlations for two of these series (382, 383), with NA values reported for the other two – I’m not sure why. For 382, Mann’s SI shows correlations for 1850-1995 (0.40), 1896-1995 (0.60) and 1850-1949 (0.52); for 383, it shows correlations for 1850-1995 (0.63), 1896-1995 (0.8693) and 1850-1949 (0.7467). In the latter series, the underlying data set has reported values for 1893 and 1993 and none in between, so it would be interesting to know how the calibration was done.
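
Note that with archived values at 1893 and 1993 and none in between, the 1896-1995 window can contain almost no real data, so any reported correlation there must rest largely on interpolated or infilled values. A minimal R sketch of the split-period calibration correlations (the series and temperature names are hypothetical placeholders):

    # correlation of a proxy against a target over a sub-period
    cor_period <- function(year, proxy, temp, start, end) {
      i <- year >= start & year <= end
      cor(proxy[i], temp[i], use = "pairwise.complete.obs")
    }
    cor_period(year, proxy382, temp, 1850, 1995)
    cor_period(year, proxy382, temp, 1896, 1995)
    cor_period(year, proxy382, temp, 1850, 1949)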

The underlying publication discusses only the δ18O series – NOT the δ13C series – and does not interpret them as a temperature proxy. The authors observe that the δ18O variability is much greater than can be accounted for by temperature and hypothesize that changing δ18O values correspond to changing circulation patterns, shifting zones north and south. This theme is picked up in other studies – I’ve mentioned it in connection with Newton et al 2006 on Pacific Warm Pool proxies.

The original publication clearly marks an MWP, though in this record it ends within the 950-1100 period that is often highlighted by other data. I’m not convinced that a lot of weight can be placed on the authors’ dates.
The sediments are dated by 5 radiocarbon dates on terrestrial wood (there is a large reservoir effect in the gastropod radiocarbon dates), with only 2 dates from the last millennium. Ages are interpolated between dated horizons assuming that each date is exact, i.e. allowing sedimentation rates to vary between dates. If a uniform sedimentation rate were assumed instead (and there are only 5 dates), this would reduce the estimated age of sediments in this period by about 90 years and would delay the estimated date of the end of the MWP in this region. (Craig Loehle has always emphasized these sorts of dating issues as a potential source of incoherence.)
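
To see the size of the effect, here is a minimal R sketch contrasting the two age models; the depth/age control points below are made-up stand-ins for the five wood dates, not the published values:

    # five hypothetical depth (cm) / age (cal yr BP) control points
    depth <- c(10, 80, 150, 260, 340)
    age   <- c(150, 900, 1600, 2600, 3400)
    d <- 10:340

    # model 1: each date taken as exact, sedimentation rate varying between dates
    varying <- approx(depth, age, xout = d)$y

    # model 2: one uniform sedimentation rate fitted through all five dates
    uniform <- predict(lm(age ~ depth), data.frame(depth = d))

    # how far apart the two age models get at any given depth
    summary(varying - uniform)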

It’s hard to develop an objective way of handling such things, as efforts to wiggle-match can easily cause one to find what one “wants” to see. For the present, I merely urge people not to place undue weight on the dates of these squiggles, as the dating uncertainties could easily be a century or so.

Reference:
Curtis, J.H., D.A. Hodell, and M. Brenner, 1996, Climate Variability on the Yucatan Peninsula (Mexico) during the last 3500 years, and implications for Maya cultural evolution, Quaternary Research 46: 37-47.

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/paleolimnology/yucatan/readme_curtis1996.txt
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/paleolimnology/yucatan/puntalaguna.txt

45 Comments

  1. Gary
    Posted Sep 3, 2008 at 11:28 AM | Permalink

    Just a side note on ‘wiggle-matching’ in these types of cores. The technique is somewhat controlled by the sedimentation rates, which in this core seem to be fairly consistent between dated points. Obviously sed rates will vary, but over wide segments uniformity would be expected, and bizarrely high or low rates are a red flag calling for more careful examination. Varve-counting, if possible, is another check on the dating.

  2. Jean S
    Posted Sep 3, 2008 at 1:11 PM | Permalink

    Steve:

    Where on earth did Mann’s 20th century values come from?

    He used RegEM. From SI:

    The RegEM algorithm of Schneider (9) was used to estimate missing values for proxy series terminating before the 1995 calibration interval endpoint, based on their mutual covariance with the other available proxy data over the full 1850–1995 calibration interval.

    I haven’t checked, but it even seems that he first cut Briffa’s MXD series, and then extrapolated the amputated series with RegEM to 1995! Again from the SI:

    Because of the evidence for loss of temperature sensitivity after ~1960 (1), MXD data were eliminated for the post-1960 interval.
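
    In case it helps newcomers, here is a much-simplified single-pass R sketch of the idea – regress the short series on the others over the overlap, with ridge regularization, and predict the missing tail. Schneider’s actual RegEM iterates this inside an EM loop; nothing below is Mann’s code, and all names are hypothetical:

      # X: year-by-proxy matrix; column 'target' has a missing modern tail;
      # the other columns are assumed complete over the full interval
      infill_tail <- function(X, target, lambda = 0.1) {
        obs <- !is.na(X[, target])
        A   <- scale(X[, -target, drop = FALSE])   # standardized predictors
        y   <- X[, target]
        Ao  <- A[obs, , drop = FALSE]
        # ridge-regularized least squares fitted on the overlap period
        b <- solve(crossprod(Ao) + lambda * diag(ncol(Ao)),
                   crossprod(Ao, y[obs]))
        yhat <- drop(A %*% b)
        y[!obs] <- yhat[!obs]                      # fill only the missing years
        y
      }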

  3. Steve McIntyre
    Posted Sep 3, 2008 at 1:31 PM | Permalink

    I checked the data and all the MXD series have values up to 1998, though most series end in the 1980s. I don’t recognize the nomenclature – schweingruber_mxdabd_grid11, for example. These aren’t sites, but some sort of Mannian or Briffian calculation. It’s a bottomless pit. 🙂

    Maybe they used Mannian padding as well – so as to leave nothing on the table.

    • Craig Loehle
      Posted Sep 3, 2008 at 2:05 PM | Permalink

      Re: Steve McIntyre (#3),
      Steve: I’m amazed you can still use a smiley face on that post. To say that one has ESTIMATED the data for proxy series over the crucial post-1980s period – precisely the period one wants to compare to the instrumental data, which has itself been padded out into the future by reflection – means that the key endpoints (post-1990s) of both the proxy and instrumental series are artificial (extrapolations). Wow.

      • Jean S
        Posted Sep 3, 2008 at 2:37 PM | Permalink

        Re: Craig Loehle (#6),
        Yes, and in the case of Briffa’s MXD series they actually threw out data in order to replace it with “estimated” values 🙂 And this is before “smoothing” the series (with the nice padding) for the calibration…
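
        For reference, a minimal sketch in R of endpoint padding by reflection before smoothing – one common variant only; Mann’s actual padding rules differ in detail, and ‘x’ here stands for any annual series:

          pad_reflect <- function(x, n) {
            c(rev(x[1:n]), x, rev(x[(length(x) - n + 1):length(x)]))
          }
          # centered 11-year moving average on the padded series,
          # then trim back to the original span
          n  <- 5
          xp <- pad_reflect(x, n)
          sm <- stats::filter(xp, rep(1 / 11, 11))[(n + 1):(n + length(x))]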

  4. DaveM
    Posted Sep 3, 2008 at 1:48 PM | Permalink

    A quick (I hope) and stupid (perhaps) question…

    Is there anything at all produced by Mann that does not require the use of peculiar or otherwise innovative and/or unorthodox “adjustments”?

    Has he used anything other than his own or a collaborator’s research/reviews to verify his papers? Why must he be so innovative?

    Cheers!

    • Jean S
      Posted Sep 3, 2008 at 1:55 PM | Permalink

      Re: DaveM (#4),
      Not to my knowledge. Nope. Don’t know. 🙂

  5. Andy
    Posted Sep 3, 2008 at 2:15 PM | Permalink

    Sorry, you mean he has made up values (evidently in a scientific manner)?

    • Craig Loehle
      Posted Sep 3, 2008 at 2:35 PM | Permalink

      Re: Andy (#7), Andy: I try to be a little more diplomatic than that.

  6. Steve McIntyre
    Posted Sep 3, 2008 at 2:54 PM | Permalink

    Here’s Mann’s version of the Almagre series updated to 1998 🙂 Why go to the trouble and expense of collecting data when you can just make it up?

    Here’s the actual data that we collected (prior data ended in 1984):

  7. Posted Sep 3, 2008 at 2:56 PM | Permalink

    Steve, theoretically 13C values can be an indirect global temperature proxy, as an increase of vegetation depletes the atmosphere of 12C, thus enriching it in 13C. The opposite happens when global vegetation decreases, e.g. as a result of the increase of land ice area/tundra at the cost of forests. This can be seen e.g. in the change of the 13C/12C ratio of ice core CO2 during ice age – interglacial transitions and vice versa.

    The sediment 13C data is of a completely different order, and I suppose it only shows the local ratio between vegetation growth and decay in the lake itself and the supply of decaying organics from the surroundings into the lake via rivers. The link between local (let alone global) temperatures and the 13C values of the sediment seems quite problematic to non-existent to me. This is quite clear from the profile: a nearly continuous decline of 2-3 per mille d13C throughout the whole period, while global atmospheric and oceanic surface d13C values in the period 1350-1850 show no practical decline, holding at -6.3 +/- 0.1 per mille (coarse resolution of about 50-year intervals from ice core CO2). After 1850 one needs to look at the decline of 13C in the atmosphere since the use of fossil fuels, with d13C values decreasing ever faster: from -6.3 per mille in 1850 to -8.0 per mille today. That should show up in the lake sediments, but it doesn’t… Only if one knew the exact influence of increasing local temperatures and globally decreasing atmospheric d13C values on the d13C ratios of local vegetation growth/decay in the lake could one calculate past temperatures…
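
    (For the notation above: δ13C expresses a sample’s 13C/12C ratio relative to the VPDB standard, in per mille: δ13C = ((13C/12C)sample / (13C/12C)VPDB − 1) × 1000, so more negative values mean relatively less 13C.)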

    For a graph of d13C values in the atmosphere (measured in ice cores, firn and recently atmosphere) and shallow ocean waters (measured in coralline sponges calcite) over a period of 600 years, see here. The graph is obtained from:
    “Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges”, Böhm et al., Geochemistry Geophysics Geosystems, 2002.
    http://www.agu.org/pubs/crossref/2002/2001GC000264.shtml

  8. Steve McIntyre
    Posted Sep 3, 2008 at 3:01 PM | Permalink

    Jean S/UC, can you tell if Mann’s archived code includes the steps where the values of the proxies after the end of the series are made up?

  9. Andy
    Posted Sep 3, 2008 at 3:08 PM | Permalink

    Craig #8, I don’t mean to be rude and I apologise if I have been, but… you just cannot “estimate” values (is that OK? ;))!

    • Craig Loehle
      Posted Sep 3, 2008 at 3:17 PM | Permalink

      Re: Andy (#13), Andy: I didn’t say you were wrong…I just try to keep it calm. BUT: I am just dumbfounded that extrapolated data are used AT ALL but especially for the critical 1990s period. Dumbfounded. Stunned. And without any caveats or cautions given in the paper. Not easy to stay calm…take a deep breath…

      • bender
        Posted Sep 3, 2008 at 5:30 PM | Permalink

        Re: Craig Loehle (#14), The team apparently do not understand the difference between interpolation and extrapolation. (Of course they MUST know the difference. I can only assume they know EXACTLY what they are doing, that they are deliberately choosing not to explain what they are doing in those clear and simple terms.)

  10. Steve McIntyre
    Posted Sep 3, 2008 at 3:43 PM | Permalink

    Can you imagine how Gavin Schmidt would have foamed at the mouth if Craig Loehle had made up data for his reconstruction to bring it “up to date” the way that Mann has? And if he’d archived it as actual data with imaginary counts of trees? Loehle would have been run out of town.

    • bender
      Posted Sep 3, 2008 at 5:35 PM | Permalink

      Re: Steve McIntyre (#15),

      Can you imagine how Gavin Schmidt would have foamed at the mouth …

      Yes, I can. My bad. My bad imagination imagining something I’ve seen more than a dozen times already.

  11. Steve McIntyre
    Posted Sep 3, 2008 at 3:59 PM | Permalink

    Can anyone locate where the archived code creates the imaginary proxy “data”? Or did this take place off balance sheet?

  12. MrPete
    Posted Sep 3, 2008 at 4:18 PM | Permalink

    I am having a hard time believing this.

    This sounds like a good time to become VERY calm, cool and collected. No flying off the handle. Basic investigation of the facts. If what seems apparent is actually true, no commentary is necessary… the facts will speak for themselves. Simply publish a comparison of actual data vs provided data.

    May I predict that this will be treated as all a misunderstanding, and certainly within three years the confusion will be resolved?

    Just collect the facts, and publish. Certainly this is worth publishing.

  13. Andy
    Posted Sep 3, 2008 at 4:19 PM | Permalink

    My goodness, most of this goes WAY over my head, but it’s little things like this which make sense… and then people FAR more qualified than I point out the problems…

  14. Alan Bates
    Posted Sep 3, 2008 at 4:29 PM | Permalink

    Steve

    I am out of my depth in the detail of all of this (can’t see the wood for the trees??).

    From your 2 graphs in #10 I draw two conclusions:

    1 Generally there is close agreement between Mann’s co524 and your Almagre bristlecone 2007 over the period 1100 to nearly the end of the 20th century (taking into account the difference in y-axis scale)

    2 There is a BIG difference between the end of the 20th century with (estimated?) co524 staying high (nearly +2 SD units) and the (measured) Almagre data falling back to the zero line.

    Thus, the agreement is close throughout the entire range except for the part that matters, i.e. the present, which (in comparison with the data) has been sharply over-estimated for reasons that are not totally clear. Have I got the drift of this?

    Alan

    • bender
      Posted Sep 3, 2008 at 5:25 PM | Permalink

      Re: Alan Bates (#20), read the various “Almagre” and “bristlecone” and “divergence” threads. Your summary is correct. It is called modern-era divergence. And the alarmists need to get rid of it – the same way they needed to “get rid of” the MWP – through algorithmic and lexical abuses of authority. Creative and utterly opaque.

  15. bender
    Posted Sep 3, 2008 at 5:38 PM | Permalink

    Is it breaking a blog rule if you imagine a motive, but don’t say it aloud?

  16. Posted Sep 3, 2008 at 6:08 PM | Permalink

    I am fairly new to climate science, and I love the commentary. I have to say this is one of the most disturbing things I have seen: professionals in climate science imagining the worst. I can’t imagine this would be done on purpose; it would be caught too quickly. I am still reviewing the Mann paper and it seems that 484 series were kept. Can someone help me catch up? How significant is this series’ weight in the final graph?

  17. Posted Sep 3, 2008 at 7:16 PM | Permalink

    I have been studying the significance criteria for selection of datasets. By eye, I wonder if the original bristlecone dataset would pass the p = 0.1 test for significance.

  18. Posted Sep 3, 2008 at 7:42 PM | Permalink

    You know, I only work in optics. It’s a simpler field, which still doesn’t hold all the answers either. When I see that of 1209 records only 484 pass the p = 0.1 test against measured records, and the measured records themselves have problems, how can we make any real conclusions? I understand the reasoning for an initial filter of the data. I really have a hard time resolving why there is this unreadable algorithm to combine the accepted data. It doesn’t seem kosher and it wouldn’t pass the stink test in my field. I’m pretty sure it couldn’t be published at this point.

    What is happening here? Is it as bad as it looks? Am I missing something?

    I just have to ask: if you put together a bunch of random lines or mildly significant data, selected by the single requirement that they match a rising curve at the end, you will get a flat average with a peak at the end. I really have to question whether the statistical variations in the data which eliminated 60% of the initial sets are misunderstood processes which exist throughout the graphs of even the “significant” data.

    The net result is obvious, a peak with a flat curve before it.
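
    A minimal R sketch of that screening effect, using purely synthetic random walks and arbitrary parameters (not Mann’s actual screening rule):

      set.seed(1)
      n_series <- 1209; n_years <- 600; cal <- 501:600
      temp    <- seq(0, 1, length.out = length(cal))  # rising calibration target
      proxies <- replicate(n_series, cumsum(rnorm(n_years)))
      r    <- apply(proxies[cal, ], 2, cor, y = temp)
      keep <- abs(r) > 0.2                            # crude screening step
      # orient kept series by the sign of their calibration correlation
      comp <- rowMeans(sweep(scale(proxies[, keep]), 2, sign(r[keep]), `*`))
      plot(comp, type = "l", xlab = "year", ylab = "screened composite")
      # the composite is near-flat before the calibration window, then rises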

    I just started coming to this site recently, is this the point? I mean beyond the implications of the curve with the spike above.

  19. Posted Sep 3, 2008 at 8:21 PM | Permalink

    It’s not enough for Mann to have the AGW peak at the end, they need the medieval warming and the Maunder minimum to be resolved. Am I getting this?

    So I’m sorry to fill this post with so many questions – some feedback would be helpful – but unless I am totally off base, one thing we can say for sure is that any “trend” in historic data from these datasets would, to a “high degree of certainty” (IPCC nomenclature), be significantly attenuated relative to actual peak temperature values.

    Is this what the rest of you see?

  20. Craig Loehle
    Posted Sep 3, 2008 at 8:54 PM | Permalink

    If anyone like RomanM, UC, or MrPete would like my help to write up a report on the extrapolation problem, I am quick on the turnaround. Next week my paper on the divergence problem comes out in Climatic Change.

    • chopbox
      Posted Sep 3, 2008 at 9:45 PM | Permalink

      Re: Craig Loehle (#29),
      I’d love to Craig, but I don’t have the expertise. Still, just to experience the flavour of your exercise, I think I’ll grab my shotgun and go visit my fish barrel out back. 🙂

  21. Curt
    Posted Sep 3, 2008 at 11:57 PM | Permalink

    jeff_id (25-28):

    You are going through the same questions that I went through when I found this blog over two years ago. (BTW, I came here with no particular viewpoint, and after reading realclimate.org for a while.) I kept saying to myself, “No, it can’t be that bad!” But I kept finding out that it was that bad, and worse. It kind of reminds me of watching the movie “Dangerous Liaisons”, thinking repeatedly that the characters couldn’t keep stooping lower, but they did.

    I think you will find it worth your while (as for other newbies) to spend a good amount of time methodically reviewing the archives here. Start with Ross McKitrick’s “What is the Hockey Stick Debate About?” Also read the Wegman Report. What struck me about both papers is how quickly they went from the issue of the (in)correctness of the hockey stick (there are lots of mistakes in science) to the bigger issue of how quickly and uncritically it and similar papers were accepted by the climate science community. I agree with them that this is the much more important issue.

    As for the issue of intent, we are discouraged from speculating on motives here. But since you are statistically savvy, try this thought experiment as you go through the archives. Start with the null hypothesis that these are random errors that could go either (any) way in terms of your result. Keep track of which “direction” the errors tweak the data. What p-value do you end up with for your null hypothesis?
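
    A one-line R sketch of that thought experiment: if, say, 9 of 10 independently tallied errors push the result the same way (counts hypothetical), the two-sided p-value under the “random direction” null is

      binom.test(9, 10, p = 0.5)$p.value   # ~ 0.021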

    • bender
      Posted Sep 4, 2008 at 3:55 AM | Permalink

      Re: Curt (#31),

      try this thought experiment as you go through the archives

      The question is very simple. Is there a systemic “confirmation bias” in the review process? That is, when reviewers see an alarming current warming trend, are they less critical than when they see an unalarming trend? When you do the thought experiment, it’s hard to reject the hypothesis that reviewers are less critical when the data fit a preconceived notion of the state of the world. (I suspect I could dredge up a half dozen papers showing that confirmation bias exists in several scientific fields.)

      Mind you, this is not speculating on motive. Confirmation bias is a very passive effect. No motive implied. Other than perhaps the motive of expediency, which, of course, is not inherently a bad thing. Though it can lead to bad results when expediency is consistently favored over correctness. (In which case the “self-correcting” process is a slow process.)

      Several paleoclimatological results have been rushed to publication in order to squeak past IPCC deadlines. Some fail to make it. So expediency is a known problem.

  22. Alan Bates
    Posted Sep 4, 2008 at 3:05 AM | Permalink

    I am aware of divergence but I find it hard to believe that so obvious a problem is allowed to be propagated by the peer reviewer. Also, to start off with a Trojan Number of 1209, then dismiss nearly two thirds because they did not achieve an arbitrary P level, then ignore real data, put in extrapolated values and rely on them to make your point is… words fail me! I used to work as a technical specialist in a nuclear power station. If anyone did that in a Nuclear Safety Case Submission they would at best be re-trained, more likely sacked, and possibly prosecuted.

    • bender
      Posted Sep 4, 2008 at 3:33 AM | Permalink

      Re: Alan Bates (#32),

      I find it hard to believe that so obvious a problem is allowed to be propagated by the peer reviewer.

      You do? Remember that the reviewer has no knowledge of the Almagre update. They likely wouldn’t know about Ababneh’s data either. All they have in front of them is the manuscript, maybe a few references. They’re not doing comparative sleuthing the way Steve M does. Peer review is not audit. I absolutely do not find it hard to believe that such errors are overlooked. Peer review was simply not designed to catch those kinds of errors. Remember: “the science is self-correcting” and “we’ve moved on” – this is the prevailing attitude in this field.

      • Jeff Norman
        Posted Sep 5, 2008 at 1:57 PM | Permalink

        Re: bender (#33),

        Remember that the reviewer has no knowledge of the Almagre update. They likely wouldn’t know about Ababneh’s data either. All they have in front of them is the manuscript, maybe a few references.

        Yes, but surely the reviewer, if they are knowledgeable in this field at all, must have been aware of the problems with the original hockey stick and therefore at the very least had their shields up.

        • bender
          Posted Sep 5, 2008 at 2:00 PM | Permalink

          Re: Jeff Norman (#42), You think the average reviewer will have read CA or even the foundational M&M papers?

  23. bender
    Posted Sep 4, 2008 at 4:07 AM | Permalink

    Here is an example from UC of confirmation bias: two different types of method are used depending on the trend observed, yet no one questions: why the double-standard? Why does no one question? I suppose because no one disputes the result. Yet it is the scientific reviewer’s responsibility to criticize, independent of the result. Why is it left to CA to point out the double-standard?

    But I must apologize to Steve, as the topic here is Punta Laguna. (When do we get “unthreaded” back? Lots of newbies arriving could make good use of it.)

  24. Posted Sep 4, 2008 at 6:00 AM | Permalink

    I’m sorry to push the thread off topic. When I started reviewing AGW science more seriously last year, I didn’t know anything about paleoclimate temperature reconstruction. I had no preconceived notion of who was right and flatly refused to make any conclusions. I hoped it would be a bit easier to sort out, not wanting to become a climatologist after all – no offense. I have read a couple dozen scientific papers on the topic; this is my first reading of Mann’s hockey stick methodology.

    I am surprised to say the least, that this is what you have been fighting with and the same was done with tree ring data. I could barely sleep last night after figuring this out. Instead of resolving anything, I need to delve deeper into the reconstructions. Maybe I’ll need to find the datasets myself and write some of my own code.

    Thanks for the help. I’ll leave you alone now.

    • bender
      Posted Sep 4, 2008 at 10:24 AM | Permalink

      Re: jeff id (#36), There are lots of data sets and R scripts already available on this site. I don’t mean to put you off, jeff id. All I’m saying is when you want to explore an idea on a particular topic, just be sure to comment in an appropriate thread. Steve M likes to keep the paleoclimate threads particularly well organized as it’s his specialty. Realize that this is basically his lab notebook that we’re scribbling in! (Once “unthreaded” is back, there will be more room to “scribble in the margins” as it were.)

  25. Tim Ball
    Posted Sep 4, 2008 at 11:12 AM | Permalink

    One of the major problems with dendroclimate studies discussed at length at CA is the assumption that they reflect temperature when in many cases other factors, especially precipitation, are more important. For example, years ago Parker showed that fall (September, October) precipitation was the most important determinant of spring growth rate in the trees he was studying. Scott showed that winter snow cover was very important in affecting soil temperatures and providing moisture for spring growth for trees in the Churchill, Manitoba region.

    Sediments have the same problem. Why assume they are only a reflection of temperature? They are a function of temperature when the lake is fed by meltwater from glaciers or snowfields. This was the original source of varve studies in climatology, from Scandinavia. (Originally, rhythmites was the collective term for annual or seasonal layers, with varve specifically for glacially related layers.) Even here, however, the amount of runoff is a function of the snowpack formed during the winter months. Witness flooding in mountain regions. However, in all other situations the sediment load is a function of precipitation, through erosion, transportation and deposition of the sediment to the lake or ocean.

  26. KevinUK
    Posted Sep 4, 2008 at 1:30 PM | Permalink

    #36 jeff id

    After you’ve finished catching up on all the proxy reconstruction stuff, don’t forget to have a good look at all the threads on instrumental temperature record adjustments, in particular the ‘bucket adjustments’ and the woefully inadequate GISS UHI adjustments etc. Once you’ve finished that, make sure you then have a good look at the GCMs, and eventually it will all become clear how all this fits together to make up the scam that is man-caused global warming.

    Right now you have twigged that MBH98 and the ‘new’ non-dendro HS are ENTIRELY an artefact of Mann’s cherry picking and ‘novel’ statistical techniques. You’ll soon find out after more digging that the claimed warming trend at the end of the 20th century is also largely an artefact of the ‘adjustments’.

    If you get a chance to see Bob Carter’s lecture on YouTube (see here), you’ll also see that even if you are unwise enough to think that the ‘adjusted’ instrumental temperature record is sound and that the claimed recent warming trend is ‘real’, it is hardly significant and is most definitely NOT ‘unprecedented’ in the last 1000 or 10000 (pick your time span) years. It is but a ‘blip’ when compared to past temperature anomalies viewed on a geological timescale (see Bob’s very witty presentation to see what I mean).

    After you’ve researched how the GCMs are ‘backcasted’ and deliberately tuned (by invocation of aerosols, soot etc.) to fit the ‘adjusted’ 20th century instrumental record, and in particular the recent warming trend, and are therefore claimed to be able to model past climate with some skill, you’ll then grow to appreciate just why it is so important to ‘get rid of the MWP’. Not much CO2 or aerosol existed back then, so forcing the backcasted GCMs to agree with the clearly well-documented, significantly high temperature anomaly of that era (which makes the recent anomaly look like just another natural anomaly) is somewhat inconvenient; AGW protagonists MUST therefore discredit, or at the very least play down, the significance of the MWP.

    You’ll also then see that the ENTIRE AGW protagonist case that ‘we must act now to save the planet’ from imminent catastrophic climate change is based on the deliberate assumption (IMO) built into the GCMs that water vapour (by far the most dominant GHG) is a very significant net POSITIVE feedback. Without this vital assumption (which is largely based on the opinions/papers of two men, Brian Soden and Isaac Held) there is no mechanism for catastrophic climate change, no ‘tipping point’, nada, so absolutely no need to ‘act now to save the planet’.

    Regards

    KevinUK

  27. Posted Sep 4, 2008 at 1:30 PM | Permalink

    Dr Loehle has pointed this discussion out to me. Unless we are misunderstanding something, I am amazed by this “extrapolation” to the crucial decades as much as he is.

    This reminds me of a story from Feynman’s book, “Surely You’re Joking Mr Feynman”.

    http://www.lib.ru/ANEKDOTY/FEINMAN/feinman_engl.txt

    I went to Professor Bacher and told him about our success [with Gell-Mann], and he said, “Yes, you come out and say that the neutron-proton coupling is V [vectorial] instead of T [tensorial]. Everybody used to think it was T. Where is the fundamental experiment that says it’s T? Why don’t you look at the early experiments and find out what was wrong with them?”

    I went out and found the original article on the [hockey-stick-like] experiment that said the neutron-proton coupling is T, and I was shocked by something. I remembered reading that article once before (back in the days when I read every article in the Physical Review — it was small enough). And I remembered, when I saw this article again, looking at that curve and thinking, “That doesn’t prove anything!”

    You see, it depended on one or two points at the very edge of the range of the data, and there’s a principle that a point on the edge of the range of the data — the last point — isn’t very good, because if it was, they’d have another point further along. And I had realized that the whole idea that neutron-proton coupling is T was based on the last point, which wasn’t very good, and therefore it’s not proved. I remember noticing that!

    … Since then I never pay any attention to anything by “experts.”

    That was a story where Feynman and Gell-Mann, two theorists, were completely right and the experimenters were wrong. The particular error of the experimenters was that their whole conclusion depended on data at the edge of the range. That surely bears on Mann et al. 2008, because the extrapolation done here is probably unacceptable; but even if it were acceptable, as in the case of the experiment Feynman referred to, there would be a lot of reasons why the extraction of qualitative facts from the edge of the range – the blade – could mislead us.

    • Schnoerkelman
      Posted Sep 5, 2008 at 4:35 AM | Permalink

      Re: Luboš Motl (#40), I have thought, more than once, that it is very sad that Feynman isn’t around for this circus. His “grasp of the obvious” in the face of what “everyone knows” and his talent for making it simple enough for laymen to understand is sorely needed.

  28. Posted Sep 5, 2008 at 7:27 PM | Permalink

    RE Schnoerkelman #41,

    Re: Luboš Motl (#40), I have thought, more than once, that it is very sad that Feynman isn’t around for this circus. His “grasp of the obvious” in the face of what “everyone knows” and his talent for making it simple enough for laymen to understand is sorely needed.

    This is getting OT, but alas, yes. Now it’s up to the rest of us to carry on!
    — Hu McCulloch, Caltech ’67
    … who attended Feynman’s “Lost Lecture”, but unfortunately took no notes …

    • DeWitt Payne
      Posted Sep 13, 2008 at 12:59 PM | Permalink

      Re: Hu McCulloch (#44),

      … who attended Feynman’s “Lost Lecture”, but unfortunately took no notes …

      Was that the geometric proof of the 1/r^2 dependence of gravity? I have the CD of that.
