Sheep Mountain Update

Several weeks ago, a new open-access article on Sheep Mountain (Salzer et al 2014, Env Res Lett) was published, based on sampling updated to 2009.

One of the longstanding Climate Audit challenges to the paleoclimate community, dating back to the earliest CA posts, was to demonstrate out-of-sample validity of proxy reconstructions by updating inputs subsequent to 1980. Because Graybill’s bristlecone chronologies were so heavily weighted in the Mann reconstruction, demonstrating validity at Sheep Mountain and other key Graybill sites is essential to validating the Mann reconstruction out of sample.

The new information shows a dramatic failure of the Sheep Mountain chronology as an out-of-sample temperature proxy: it diverges sharply from NH temperature after 1980, the endpoint of the Mann et al (and many other) reconstructions. While the issue is most severe for the Mann reconstructions, it affects numerous other reconstructions, including PAGES2K.

Salzer et al 2014 

Salzer et al present eight Sheep Mountain chronologies for 1600-2009 (their Figure 6), covering four elevations (treeline, minus 30 meters, minus 60 meters and minus 90 meters) crossed by exposure (north-facing, south-facing). They report material differences between the chronologies and conclude by recommending that specialists take altitude and exposure into consideration when constructing chronologies. (The failure of dendro specialists to document such information has been a longstanding criticism here.)

In their Figure 5, they zoom in on the two treeline chronologies (north-facing, blue; south-facing, red) in the period 1980-2009, observing a divergence between the two, with the south-facing chronology declining relative to the north-facing chronology. The 1980 startpoint, by coincidence, is the endpoint of the Mann reconstruction – thus there is no overlap between the chronologies shown below and the chronologies used in Mann et al 1998-1999. Note that some very recent SFa values are below the long-term mean (1).

[Figure: salzer-2014_figure-5]

Figure 1. Salzer et al 2014 Figure 5, showing treeline north-facing (NFa, blue) and south-facing (SFa, red) chronologies for 1980-2009. This information was digitized for use in the Figure 2 comparisons.

Comparison to Graybill Chronology

Unfortunately, Salzer et al did not compare their new data to the chronology versions used in Mann et al 1998-99, Mann et al 2009 and many other multiproxy reconstructions.

In the Figure below, I’ve started with the Sheep Mountain chronology as used in Mann et al 1998 (left panel). As CA readers are aware, it has a very dramatic HS-shape and is heavily weighted in the MBH reconstruction. Indeed, without the Graybill bristlecones, the MBH reconstruction is basically noise.  In the middle panel (1902-2009), I’ve added chronology updates (green) and HadCRU NH (red).

For comparison, the HadCRU NH temperature data is scaled here so that its mean and standard deviation match the Graybill Sheep Mountain chronology in the MBH98 1902-1980 calibration period. The original Graybill Sheep Mountain chronology ended in 1987 (rather than 1980) and, though little discussed previously, actually declined quite sharply in the 1980s. The updated Salzer SFa chronology is shown in thin green and, while slightly elevated relative to the thousand-year mean, also shows a dramatic decline from the closing values of the series used in Mann et al 1998. While the Salzer NFa chronology (right panel – blue) is slightly elevated relative to the SFa chronology and to the millennium mean, its values are also much lower than the closing MBH98 values of the Graybill chronology.

Both diverge dramatically from the NH temperature.  To have kept pace, SFa and NFa chronology values ought to have reached nearly 3, while the SFa chronology has almost reverted to the long-term mean, with several recent values actually below the long-term mean. Perhaps this accounted for the interest in looking at north-facing exposure separately.
[Figure: sheep_mountain_update]
Figure 2. Comparison of Sheep Mountain (black-green-blue) and HadCRU NH (red). Left – Sheep Mountain (ca534) as used in Mann et al 1998-99; middle – Sheep Mountain updates, showing both post-1980 Graybill values and the Salzer 2014 SFa values (green); right – with Salzer 2014 NFa (blue). HadCRU NH scale chosen to match the mean and standard deviation of the chronology in the 1902-1980 calibration period.
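
The rescaling described in the caption is simply a shift and stretch of the temperature series so that its 1902-1980 mean and standard deviation match those of the chronology. A minimal sketch of the calculation, using placeholder arrays rather than the actual HadCRU NH or ca534 data:

```python
import numpy as np

# Shift/stretch a temperature series so that, over the 1902-1980 calibration
# window, its mean and standard deviation match those of the chronology.
# (Placeholder series below; not the actual HadCRU NH or ca534 data.)
def rescale_to_calibration(temp, chron, years, cal=(1902, 1980)):
    in_cal = (years >= cal[0]) & (years <= cal[1])
    t, c = temp[in_cal], chron[in_cal]
    return (temp - t.mean()) / t.std() * c.std() + c.mean()

rng = np.random.default_rng(0)
years = np.arange(1902, 2010)
nh_temp = 0.01 * (years - 1902) + rng.normal(scale=0.2, size=years.size)  # stand-in for HadCRU NH
chron = rng.normal(loc=1.0, scale=0.3, size=years.size)                   # stand-in for the chronology
nh_scaled = rescale_to_calibration(nh_temp, chron, years)                 # now on the chronology scale
```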

Discussion

In the financial world, analysts are accustomed to supposed models/systems of the stock market that are highly tuned to historic data and which fail out of sample. With this example very much in mind, one of my very first challenges to the paleoclimate community was to demonstrate out-of-sample validity of the multiproxy reconstructions (mentioning Moberg et al 2005 and Mann et al 1998-99) by bringing their inputs up to date. Because the Mann and other reconstructions ended in 1980, I observed that the records could readily be updated to confirm whether the linear combinations of proxies in the various steps of, for example, the Mann reconstruction were valid measures of temperature out of sample, writing as follows at the time:

One of the first question that occurs to any civilian becoming familiar with these studies (and it was one of my first questions) is: what happens to the proxies after 1980? Given the presumed warmth of the 1990s, and especially 1998 (the “warmest year in the millennium”), you’d think that the proxy values would be off the chart. In effect, the last 25 years have provided an ideal opportunity to validate the usefulness of proxies and, especially the opportunity to test the confidence intervals of these studies, put forward with such assurance by the multiproxy proponents.

Being suspicious of over-tuning and data-snooping, I speculated at the time that the so-called proxies would not work well out of sample:

What would I expect from such studies? Drill programs are usually a surprise and maybe there’s one here. My hunch is that the classic proxies will not show anywhere near as “loud” a signal in the 1990s as is needed to make statements comparing the 1990s to the Medieval Warm Period with any confidence at all.

The new results of Salzer et al 2014 (though the authors are not candid on the topic) fully demonstrate this point with respect to Sheep Mountain. In the warm 1990s and 2000s, the proxy not only fails to respond linearly to higher temperatures, it actually goes the wrong way. This will result in very negative RE values for MBH-style reconstructions from the AD1000 and AD1400 networks when brought up to date, further demonstrating that these networks have no real “skill” out of sample.
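
For reference, the RE (reduction of error) statistic compares the squared errors of a reconstruction over the verification period against those of a naive prediction that simply carries the calibration-period mean forward; strongly negative values mean the reconstruction does worse than that naive benchmark. A minimal sketch of the calculation (my notation, not MBH code):

```python
import numpy as np

# RE = 1 - SSE(reconstruction) / SSE(calibration-period mean).
# Values above zero indicate out-of-sample skill; strongly negative values
# indicate the reconstruction is worse than the naive calibration mean.
def reduction_of_error(obs, recon, cal_mean):
    obs, recon = np.asarray(obs, float), np.asarray(recon, float)
    sse_recon = np.sum((obs - recon) ** 2)      # reconstruction errors
    sse_naive = np.sum((obs - cal_mean) ** 2)   # errors of the calibration mean
    return 1.0 - sse_recon / sse_naive

# Hypothetical example: observed temperatures rise while the proxy index falls.
obs = np.array([0.2, 0.3, 0.45, 0.5, 0.6])
recon = np.array([0.1, 0.0, -0.1, -0.2, -0.3])
print(reduction_of_error(obs, recon, cal_mean=0.0))  # about -0.8
```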

We’ve also heard over and over about how “divergence” is limited to high-latitude tree ring series and about how the Mann reconstruction was supposedly immune from the problem.  However, these claims mostly relied on stripbark chronologies (such as Sheep Mountain) and the validity of such claims is very much in question.

As previously discussed on many occasions, stripbark chronologies have been used over and over in the canonical IPCC reconstructions, with the result that divergence problems at Sheep Mountain and other sites do not merely impact Mann et al 1998-99, but numerous other reconstructions. Even the recent PAGES2K North America reconstruction uses non-updated Graybill stripbark chronologies. It also, ludicrously, ends in 1974. So rather than bringing the Mann et al network up to date, it is even less current.

Nor can the original challenge to demonstrate proxy validity out of sample be met with a new reconstruction using different proxies (such as Kaufman’s muds, upside-down or not). Financial analysts are used to this sort of switch, which was also discussed in an early CA post here on the interaction between data mining/snooping and spurious regression, in which I quoted Ferson et al 2003 (a paper about financial instruments) but with data-snooped paleoclimate reconstructions in mind:

The pattern of evidence in the instruments in the literature is similar to what is expected under a spurious mining process with an underlying persistent expected return. In this case, we would expect instruments to arise, then fail to work out of sample…

With fresh data, new instruments would arise then fail; the dividend yield rose to prominence in the 1980s, but fails to work in post-1990 data. The book-to-market ratio seems to have weakened in recent data. With fresh data, new instruments seem to work. There are two implications. First we should be concerned that these new instruments are likely to fail out of sample. Second, any stylized facts based on empirically motivated instruments and asset pricing tests based on such tests should be viewed with scepticism.

CA readers will also be aware of earlier discussions (see tag) of Ababneh’s Sheep Mountain reconstruction, which had previously failed to replicate the huge HS of the Graybill chronologies.

Alert CA readers will also recall that Jacoby distinguished between north-facing and south-facing chronologies in his original work, but focused mainly on south-facing chronologies. (CA readers are aware that Jacoby selectively reported and archived only the most “temperature sensitive” chronologies.)

Salzer has not yet archived data for this article.  He’s got a pretty good record of archiving and I anticipate that it will be archived, but the unavailability of data at the time of publication is a pernicious practice.

 

148 Comments

  1. Don Keiller
    Posted Dec 4, 2014 at 1:56 PM | Permalink

    Great work, as usual, Steve.

    In any other branch of science, apart from paleoclimatology, these observations would be a show-stopper.

    I guess I shouldn’t get my hopes up, though.

  2. Don Keiller
    Posted Dec 4, 2014 at 2:01 PM | Permalink

    The final sentence in the abstract sums it up nicely.
    “This suggests the possibility that the climate-response of the highest South-facing trees may have changed and that temperature may no longer be the main limiting factor for growth on the South aspect. These results indicate that increasing warmth may lead to a divergence between tree growth and temperature at previously temperature-limited sites.”

    Lovely bit of genuflection to the global warming paradigm.
    In other words “business as usual”.

    • DaveS
      Posted Dec 4, 2014 at 3:57 PM | Permalink

      Indeed. It’s embarrassing to read this kind of tosh. The more obvious explanation is that the climate response remains exactly the same as it always has been and the assumption that temperature was the dominant factor was simply wrong.

  3. Posted Dec 4, 2014 at 2:03 PM | Permalink

    In the middle panel (1902-2009), I’ve added chronology updates (green) and HadCRU NH (red).

    Devastating.

    • Steve McIntyre
      Posted Dec 4, 2014 at 2:49 PM | Permalink

      doublechecked PAGES2K and their Sheep Mountain version ends in 1980 at its maximum of 2.36.

      • Posted Dec 5, 2014 at 5:16 AM | Permalink

        Independent confirmation at its finest.

      • Kim Davies
        Posted Feb 21, 2015 at 2:13 PM | Permalink

        Steve,

        As I understand it, you conducted a tree ring study of your own in the summer of 2007. Has this ever been published?

    • Jimmy Haigh
      Posted Dec 5, 2014 at 5:19 AM | Permalink

      Hey – if they turned it upside down it would work a treat.

    • Anthony Watts
      Posted Dec 5, 2014 at 9:59 AM | Permalink

      This is quite possibly the most definitive thing I’ve ever seen published on tree ring proxies. It’s vindication and falsification all in one. The out-of-sample issue is indeed devastating as Richard Drake points out.

      • Salamano
        Posted Dec 5, 2014 at 11:35 AM | Permalink

        snip – OT

  4. Posted Dec 4, 2014 at 3:24 PM | Permalink

    How long before somebody tries to present this as evidence that the “divergence problem” is in fact now robustly established as a phenomenon affecting all temperature-sensitive tree rings, and so justifying the Briffa Bodge?

    • Posted Dec 4, 2014 at 4:32 PM | Permalink

      sssshhhh – Jonathan don’t give them any tips…

      quite how they know which sort of tree it is (treenometer or not) going back centuries is still beyond me (or anybody else, because it’s nonsense?)

      • Greg
        Posted Dec 4, 2014 at 9:01 PM | Permalink

        I see a 2 pronged strategy- 1) don’t need the proxies any more because we have instrumental measurements now, combined with 2) modern elevated CO2 distorts trees’ temperature sensitivity, so back to point 1)

  5. Steven Mosher
    Posted Dec 4, 2014 at 4:33 PM | Permalink

    steve which tree ring series are

    a) most critical — without them there is no HS
    b) in need of a post 1960 update.

    https://experiment.com/

    • Willis Eschenbach
      Posted Dec 4, 2014 at 11:02 PM | Permalink

      Mosh, not in lieu of but in addition to Steve’s answer, for the Mann2008 analysis I’ve split them up by using cluster analysis, and you can easily see which ones have hockeysticks. See Figure 2.

      w.

    • Curious George
      Posted Dec 6, 2014 at 4:33 PM | Permalink

      Steven – do you still believe that tree rings are good temperature proxies? There are so many factors affecting their width, temperature being only one. Can you define conditions which eliminate all other factors?

      • Posted Dec 7, 2014 at 12:59 PM | Permalink

        Before or After it gets run through the “Best-o-Matic” slicer and dicer??

      • Steven Mosher
        Posted Dec 9, 2014 at 6:36 PM | Permalink

        Of course. How good is the question.

  6. MikeN
    Posted Dec 4, 2014 at 4:34 PM | Permalink

    This merely establishes a nonresponsiveness of trees at Sheep Mountain to temperature starting in 1980. We can assume that this is because of higher CO2 levels, or some other anthropogenic source. The correct way to handle this is to then take the proxies, and use them only up to the point that they stop correlating with temperature. One has the option of then extending the proxies with extrapolated values taken from the proxy to match the trend in the proxy, which would be in line with the actual known temperature trend. This should yield a well performing proxy.

    • Curious George
      Posted Dec 4, 2014 at 4:43 PM | Permalink

      Mike – please establish a responsiveness of trees at Sheep Mountain to temperature prior to 1980. List your assumptions.

    • Howard
      Posted Dec 4, 2014 at 5:34 PM | Permalink

      MikeN, non-responsiveness cannot be dismissed with assumptions that, according to Wikipedia, are merely a collection of:

      “Possible explanations

      The explanation for the divergence problem is still unclear, but is likely to represent the impact of some other climatic variable that is important to modern northern hemisphere forests but not significant before the 1950s. Rosanne D’Arrigo, senior research scientist at the Tree Ring Lab at Columbia University’s Lamont-Doherty Earth Observatory, hypothesises that “beyond a certain threshold level of temperature the trees may become more stressed physiologically, especially if moisture availability does not increase at the same time.” Signs suggestive of such stress are visible from space, where satellite pictures show “evidence of browning in some northern vegetation despite recent warming.” [5]

      Other possible explanations include that the response to recent rapid global warming might be delayed or nonlinear in some fashion. The divergence might represent changes to other climatic variables to which tree rings are sensitive, such as delayed snowmelt and changes in seasonality. Growth rates could depend more on annual maximum or minimum temperatures, especially in temperature limited growth regions (i.e. high latitudes and altitudes). Another possible explanation is global dimming due to atmospheric aerosols.[2]

      In 2012, Brienen et al. proposed that the divergence problem was largely an artefact of sampling large living trees.[6]”

      • Betapug
        Posted Dec 5, 2014 at 7:38 PM | Permalink

        D’Arrigo is speculating that the trees are afraid? Perhaps all the coring is causing them pain and apprehension.

    • Nicholas
      Posted Dec 4, 2014 at 6:30 PM | Permalink

      I’m guessing Mike forgot the /sarc tag.

    • Jeff Alberts
      Posted Dec 4, 2014 at 9:01 PM | Permalink

      It seems pretty obvious that any correlation to only temperature by tree rings is accidental.

    • hunter
      Posted Dec 5, 2014 at 6:08 AM | Permalink

      How to tell if they were ever responsive is the real question. There is no reason to believe they have ever been significantly linked to temperature. And since no mechanism is offered to explain why the changes in CO2 since 1980 turned some switch in tree metabolism we can dismiss the idea as post hoc arm waving.

    • Posted Dec 7, 2014 at 2:13 AM | Permalink

      This merely establishes a nonresponsiveness of trees at Sheep Mountain to temperature starting in 1980.
      ============
      no, what it establishes is that the correlation between trees and temperature was spurious.

      the problem isn’t with the trees, it is with “methods fitting”. similar to “curve fitting”, when you try different statistical methods until you find one that shows correlation, the most likely result is that what you have found is simply a spurious correlation. similar to curve fitting it will show no skill when tested against out of sample data.

    • Kenneth Fritsch
      Posted Dec 10, 2014 at 11:50 AM | Permalink

      When MikeN comments:

      “This merely establishes a nonresponsiveness of trees at Sheep Mountain to temperature starting in 1980. We can assume that this is because of higher CO2 levels, or some other anthropogenic source. The correct way to handle this is to then take the proxies, and use them only up to the point that they stop correlating with temperature. One has the option of then extending the proxies with extrapolated values taken from the proxy to match the trend in the proxy, which would be in line with the actual known temperature trend. This should yield a well performing proxy.

      And Nicholas replies:

      “I’m guessing Mike forgot the /sarc tag.”

      I would like to point out that MikeN is not the first Mike to use that line of rationalization and forget to use the sarc tag. See Mann (2008).

  7. James Smyth
    Posted Dec 4, 2014 at 4:40 PM | Permalink

    Can you plot the full Salzer South/North series in your figure 2? Or did you only pull the 1980+ data from their graph?

    Steve: just pulled post-1980 from their graph. I contacted Salzer for data last week and he said that he’d try to put it online when he got around to it.

  8. Follow the Money
    Posted Dec 4, 2014 at 4:51 PM | Permalink

    http://ca.water.usgs.gov/archive/reports/wsp2370/

    Here, at “hydrologic system pdf”, page 25, col. 2, is a description of three types of precipitation events for the Owens Valley: north Pacific storms, south Pacific storms, and summer storms. Variability in their rates and relationship to the growing season could account for much of the annual and N/S divergences.

    • Posted Dec 4, 2014 at 6:12 PM | Permalink

      So, you think it’s water and CO2 response and not temperature, is that correct?

      • Follow the Money
        Posted Dec 4, 2014 at 6:44 PM | Permalink

        I have never seen any evidence for the hypothesized increased atmospheric CO2 fertilization effect. Interesting idea though. To monetize that would require a counter-IPCC to define increased growth as a problem, one anthropogenically caused, for which human activities must be charged money, whether such payments actually abate AGGrowth or not. To my mind this money-making opportunity cannot happen until the AGW bubble is utterly burst.

        If temperature can be detected in the growth of treeline species, I think it is a correlation to growing season cloudiness (i.e., more or less radiance hitting the trees). Not annual anomalies of .1 degree or whatever. Call it the “sun-fertilization effect.” Might as well!

        My point about water is: it’s not just how much, but when. Ask any farmer. Here the Pacific storms are winter and spring. The summer rain patterns are unrelated. The Pacific precipitation is bigger, but it is not just how much, but when. Maybe seasonal growth responses in bristlecones reflect multiple years of moisture storage. I do not know. But I do know that when I read “cool-season precipitation” I will ask, “ok, where’s the warm?”

        • Bozo
          Posted Dec 5, 2014 at 5:32 AM | Permalink

          Sun fertilisation at the tree line sounds plausible; you could probably demonstrate it by looking at the carbon and oxygen isotopic ratios. I left tree physiology 20+ years ago, but I have always wondered why no one appears to be doing these types of measurements.

          Steve; there are some O18 measurements on bristlecones. There is even an article by Max Berkelhammer on the O18 values of the bristlecones collected in the Climate Audit sample.

        • Steve McIntyre
          Posted Dec 5, 2014 at 9:36 AM | Permalink

          there has been extensive discussion at CA about radial deformation in stripbark bristlecones as a major problem with bristlecone chronologies – this issue was not addressed by Salzer et al

        • Clark
          Posted Dec 8, 2014 at 2:10 PM | Permalink

          I think most people are not familiar with the fact that these strip bark “tree rings” are not rings. They are nothing like the images I show in biology lectures – they are more like a chopped-up Salvador Dali impression of a tree ring.

          What kind of “ring” will these trees give you:

        • Posted Dec 8, 2014 at 8:08 PM | Permalink

          Looks like a scene from Lord of the Rings after Saruman’s done his worst.

        • hswiseman
          Posted Dec 13, 2014 at 9:37 PM | Permalink

          I am Groot.

    • Follow the Money
      Posted Dec 4, 2014 at 6:12 PM | Permalink

      To avoid any confusion, “north” Pacific storms travel west to east in those parts. The paper says White Mountain is in the rain shadow of the Sierra, hence lower Pacific storm precipitation than the western Sierra. (no snow pack, right?) But the paper does not mention summer precipitation (I don’t think that is the same as “subtropical westerlies.”) The paper takes a look at “cool-season” precipitation via CRU records at Figure 3, but those indicate the months of “Jan to Aug” were used. This should be November to May, the Pacific storm season. For all I know these tree ring records correlate well with non-Pacific storm season rainfall records. I would like to see local data applied. The differences between true tree-line and lower specimens are interesting.

      • harkin
        Posted Dec 9, 2014 at 5:36 PM | Permalink

        The White Mountains get snow and sometimes a lot of it but they truly are in the rain shadow of the Sierras, based on my non-scientific experience as a desert rat and skier.

        White Mtn Peak is about the same longitude as Mammoth Mountain and in the early summer White Mtn may have trace or even no snow while Mammoth Mountain (even at 3,200′ lower) is still covered with deep snow.

        • harkin
          Posted Dec 9, 2014 at 8:06 PM | Permalink

          Make that latitude.

  9. KNR
    Posted Dec 4, 2014 at 5:43 PM | Permalink

    Bottom line: one of many problems that dog climate ‘science’ is that the proxies used in it are only just ‘better than nothing’, which is the alternative.

    • David Jay
      Posted Dec 4, 2014 at 10:49 PM | Permalink

      You’ve hit one of my hot buttons. Using something defective because “it’s better than nothing”.

      snip – OT

    • Tom O
      Posted Dec 5, 2014 at 8:21 AM | Permalink

      I’ve always said “proxies are someone’s best guess.” Better than nothing sounds even worse.

      • M Happold
        Posted Dec 5, 2014 at 9:59 AM | Permalink

        I would argue that they are worse than nothing. Nothing leaves you with an open mind as to what might have taken place in the past. Proxies that produce unverifiable estimates about the distant past because validation data only exists for the recent past create a false sense of certainty and only serve to ossify biases that the researchers have. The “results” then get propagated to other, less skeptical people who treat them with even higher levels of certainty.

        • Andrew
          Posted Dec 5, 2014 at 3:18 PM | Permalink

          The Four Laws of Data:

          1. No data is better than bad data

          2. You’ve got to be able to recognize bad data.

          3. Stop collecting data before you get confused and look stupid.

          4. Never presume you have just discovered some heretofore unknown aerothermodynamic phenomenon when really you just failed Rule number 2.

          (HT Jim Grube, Solar Turbines, Inc engineer)

  10. michael hart
    Posted Dec 4, 2014 at 7:42 PM | Permalink

    “Fine-scale spatial sensitivity in climate response near upper treeline is not well understood. How close to treeline do these trees need to be to show a temperature-limited growth pattern?”

    I’m astonished that a question such as that is only being addressed now.

  11. bernie1815
    Posted Dec 4, 2014 at 8:02 PM | Permalink

    Steve:
    At what point can we say that tree-rings, even at the tree-line, are problematic temperature proxies and that significant correlations with local temperatures are more a matter of chance?

  12. Barclay E MacDonald
    Posted Dec 4, 2014 at 10:16 PM | Permalink

    I fail to see the problem. Bristlecones were a perfectly accommodating series all the way to 1980. The answer is simply to lop off the post-1980 portion and splice in the actual temperature record. Not only is there precedent for such splicing, well known to followers of CA, but arguably this result has in fact been accomplished simply through benign neglect. Sarc/off

  13. Posted Dec 5, 2014 at 4:36 AM | Permalink

    One of my biggest regrets is that when I formulated the climategate petition back in 2009, that I wasn’t aware of your work on tree rings.

    One of my biggest concerns about tree-rings is not the lack of response to temperature and the impact of water, but that the whole canopy will adapt to changes. The premise of tree-rings as a proxy is that the spacing between trees remains fixed and that individual trees grow faster or slower. This may be true over the short term, but over the long term in adverse climatic conditions the spacing will alter as weaker trees die, allowing the remaining trees to grow faster.

    So, in effect, tree-ring proxies are highly frequency sensitive, and whilst they may respond to short-term inter-year changes (such as a volcanic eruption), they will have a much diminished response to century-to-century changes.

    And if you want clear evidence that the canopy adapts – just look at the tree spacing in the north and compare that with the tree spacing in warmer climes. If Mann were right, the tree spacing would be identical and only the rate of growth of individual trees would change. Instead it is painfully obvious that as climate deteriorates the spacing increases massively – allowing the remaining trees to grow much faster than a Mann-type reconstruction would assume.

  14. NeilC
    Posted Dec 5, 2014 at 7:29 AM | Permalink

    I have never understood why tree ring growth was used as a proxy for temperature in the first place. There are far too many variables other than temperature to be considered.

    For tree growth to occur, it needs a specific quantity and wavelength of photons, and CO2 to trigger photosynthesis, producing glucose, ATP and NADPH all necessary for metabolic processes.

    Depending on the quantum properties of photons, specific (inverse acting) signalling molecules are released which creates a positive/negative feedback loop for the requirements of essential nutrients and water.

    Whether tree growth occurs or not depends on varying levels of atmospheric gases (mainly CO2/oxygen/hydrogen/water vapour), heat/cold, light/dark (photon quantity/wavelength), wet/dry (atmosphere). Then there is the quality of nutrients (minerals, acids, salts) and water to consider.

    And finally every tree throughout time, has evolved differently (species) to adapt to both differing meteorological and environment conditions in their immediate vicinity.

    Whilst knowledge of historic instrumental temperature records can give a limited indication of tree ring growth, tree ring measurements are by no means an accurate measure for temperature reconstruction.

    A little anecdote explains all this: Imagine a temperature of 25C, sunny, dry and calm; would you like an ice cream?

    Now imagine a temperature of 30C, cloudy, raining and windy; would you like an ice cream now?

    90% of people asked these questions answer yes to the first and no to the second, which proves people eat more ice cream when it’s colder!

  15. NeilC
    Posted Dec 5, 2014 at 7:30 AM | Permalink

    Well done Steve, good science vs bad science, the truth will out, eventually.

  16. Sven
    Posted Dec 5, 2014 at 8:46 AM | Permalink

    Paging Nick…

  17. Steve McIntyre
    Posted Dec 5, 2014 at 9:52 AM | Permalink

    I would appreciate it if readers would stop generalized whinging and editorializing about dendro, as most of these issues have been discussed ad nauseam. The focus of this post was on out-of-sample verification, not whinging about dendro.

    Further, there were caveats from the original authors and the dendro community against the use of stripbark bristlecones as a temperature proxy well prior to Mann et al 1998-99 and the 2006 NAS panel, relying on dendro Biondi, recommended against their use as a temperature proxy.

    In an interesting Climategate exchange (which I’ve been meaning to narrate for a while), Briffa described bristlecones as “Pandora’s box” which they would open “at [their] peril”.

    The main issue, in my opinion, is the use of stripbark chronologies in multiproxy reconstructions, the multiproxy authors’ lack of interest in out-of-sample verification, the acquiescence of the dendro community in the misuse of this data and the failure of IPCC assessments to clearly attach red flags to reconstructions using this data.

    • M Happold
      Posted Dec 5, 2014 at 10:28 AM | Permalink

      Given that there are new out-of-sample data, what would happen in my field (Machine Learning) is that people would evaluate the major methods against these data to determine how well they worked, and these results would get published without political suppression of the papers. In fact, this happens all the time on Challenge sites such as Kaggle. The fact that the Paleos don’t do this is an indication that they have no interest in how good their methods and proxies actually are but are instead interested in maintaining a narrative.

      The question for you Steve is why don’t you expand what you have done in this post to a whole set of methods and publish the results. It may have to go into a publication that specifically handles time series prediction rather than a climate journal, but who cares. Once it is in the literature, they will be forced (eventually) to come to grips with it.

      Steve: there are many, many things that warrant writing up more formally. Some of these points are ones that I feel that the specialists themselves ought to have addressed.

    • MikeN
      Posted Dec 5, 2014 at 1:37 PM | Permalink

      Note that Salzer previously authored a paper that the team used to justify ignoring the recommendation of the NAS Panel against these chronologies.

  18. dimitris poulos
    Posted Dec 5, 2014 at 10:10 AM | Permalink

    did you delete my comment? how’s that?

    Steve: because it was a self-advertisement and had nothing to do with the topic of the thread.

    • dimitris poulos
      Posted Dec 5, 2014 at 11:38 AM | Permalink

      well it was not self-advertisement but an effort to communicate science that has a great deal to do with the selection of proxies

  19. Terry
    Posted Dec 5, 2014 at 1:25 PM | Permalink

    So, the obvious next step is what happens when Mann’s original analyses are rerun with the updated proxies?

    The updated analysis should probably be rerun with the same weightings on the various proxies.

    • Steve McIntyre
      Posted Dec 5, 2014 at 2:24 PM | Permalink

      Here is a map of AD1400 network weights

      Other than bristlecones, the only sites with relevant weights are Cook’s Tasmania ring widths (also used in Gergis et al); Tornetrask; Gaspe; and lesser contributions from Quelccaya and a Greenland series.

      The NOAMER tree ring network has contributions from other sites that have not been updated – so one would have to figure out how to handle them. The PAGES2K Tasmania series goes to 2001; PAGES2K Tornetrask to 2004; Quelccaya is updated; Gaspe to 1991. Greenland only to the early 1990s.

      • Terry
        Posted Dec 5, 2014 at 4:32 PM | Permalink

        If I remember correctly, Marcott et al. simply dropped series from the analysis when they ended and continued on with the remaining series. They didn’t even recenter the remaining series to match up with the old ones, if I remember correctly, so there were jumps in the reconstruction when a series dropped out.

        Using the Marcott approach, since this updated series is the only one from 2005-2009, the reconstruction probably produces a decline in 2005 with low temperatures through 2009.

        Steve: 🙂

  20. Hmmm
    Posted Dec 5, 2014 at 2:04 PM | Permalink

    dumb question: wouldn’t it make more sense to compare the chronologies to the HADCRUT local gridcell which contains said proxy, rather than comparing to the entire HADCRUT Northern Hemisphere average?

    And along with that question, what is standard in these paleo reconstructions? Do they usually look for correlation with local or hemispherical temperatures? I don’t understand why a tree would respond stronger to hemispherical temps rather than local temps, but I remember hearing about “teleconnections” and am not sure what is standard in this field…

    Probably a moot point since it sounds like these proxies no longer correlate to either of them…

    • Steve McIntyre
      Posted Dec 5, 2014 at 2:31 PM | Permalink

      dumb question: wouldn’t it make more sense to compare the chronologies to the HADCRUT local gridcell which contains said proxy, rather than comparing to the entire HADCRUT Northern Hemisphere average?

      One presumes that gridcell temperature is related to hemispheric temperature with some spatial variability. The idea of proxy reconstructions is that they can estimate hemispheric temperature. So even if the tree is responding locally, they assume that there is enough relationship between hemispheric and gridcell temperatures to permit proxy reconstructions to work.

      Separately, if one contests lack of correlation to local gridcell, the response of Mann and defenders is that they do not assert correlation to local gridcells, but to “temperature fields”. Mann has also asserted that bristlecones are in a “sweet spot” location for measuring world temperature.

      • Posted Dec 5, 2014 at 3:01 PM | Permalink

        Proxy reconstructions aim to represent hemispheric or global temperatures. But individual sites don’t, any more than individual thermometer locations do. Trees may respond to the temperature of the environment, but not of the hemisphere.

        Briffa, in his divergence studies, matched the proxy to some corresponding regional temperature, and of appropriate season. In the Nature paper, he divided the NH into regions, and checked regional proxy averages against regional temperatures. In the Roy Soc paper, he used Northern Urals mean summer temp.

        Steve: whatever the merits of your position here, Mann et al believed that tree chronologies were linearly related to “one or more instrumental training patterns”, not “local temperature”. The most heavily weighted of these “instrumental training patterns” (PCs of gridcell temperature) was virtually identical to NH temperature. So your issue on this point is with Mann, not with me, and it would be much more constructive if you argued this point at Real Climate and resolved the issue with them.

        Secondly, the concept of a proxy reconstruction depends on the notion that there is a relationship between hemispheric temperature and gridcell temperatures and that a hemispheric temperature can be plausibly estimated from a relatively limited number of locations. If you dispute this idea, then it would be constructive if you took it up with Phil Jones and reported back to us.
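
        A toy illustration of that relationship, using synthetic gridcells built as a common hemispheric signal plus independent local noise (hypothetical data, not HadCRU), shows why the leading temperature PC tends to track the hemispheric average:

```python
import numpy as np

# Synthetic sketch: each "gridcell" = common hemispheric signal + local noise.
rng = np.random.default_rng(0)
n_years, n_cells = 120, 200
hemi = np.cumsum(rng.normal(size=n_years)) * 0.1               # common signal
grid = hemi[:, None] + rng.normal(size=(n_years, n_cells))     # gridcell temperatures

anom = grid - grid.mean(axis=0)                                # center each cell
_, _, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = anom @ vt[0]                                             # leading PC time series

r = np.corrcoef(pc1, grid.mean(axis=1))[0, 1]
print(f"|correlation of PC1 with gridcell average| = {abs(r):.2f}")  # close to 1 in this setup
```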

        • MikeN
          Posted Dec 5, 2014 at 4:10 PM | Permalink

          There is moving the peas and the thimbles, and then there is the Supreme Master Three Card Monte posting by Nick Stokes.

        • MikeN
          Posted Dec 5, 2014 at 4:13 PM | Permalink

          So then, if there was a study that correlated proxies to a hemispheric temperature, where the proxies are not correlating to local temperature, such a study would be invalid, correct?

        • Carrick
          Posted Dec 6, 2014 at 1:05 PM | Permalink

          Nick:

          Secondly, the concept of a proxy reconstruction depends on the notion that there is a relationship between hemispheric temperature and gridcell temperatures and that a hemispheric temperature can be plausibly estimated from a relatively limited number of locations. If you dispute this idea, then it would be constructive if you took it up with Phil Jones and reported back to us.

          I believe it’s widely accepted by paleoclimatologists that most tree ring proxies don’t simply track temperature. Rather they track multiple climate variables. Indeed, many of Mann’s proxies are actually precipitation proxies, as he himself acknowledges. So Steve McIntyre really doesn’t need to explain any of this to Mann or to Jones; it’s well known and accepted.

          Mann provides a hand-waving argument that temperature can correlate with precipitation, followed by a “correlation = causation” step of screening by correlation over some fixed period. Testing his screening results against out-of-sample data is one way of testing the validity of that screening protocol, which is how we got here right now.

          It’s hardly amazing to me that the Sheep Mountain proxies fail to validate in this way because the inclusion of strip barked proxies has already been heavily criticized, and by people “in the know” discounted as unlikely to be valid temperature proxies.

          Further it doesn’t amaze me that tree ring proxies as a class fail as long-period temperature proxies, even if they are truly exhibiting temperature limited growth: I’ve made this argument before on your blog.

          We both know that tree-rings do not have a linear relationship with temperature (rather it’s a compressive nonlinearity). Since the climate signal shows a 1/f behavior, this means that low frequencies (long periods) will have larger amplitudes and show more compression than higher frequencies (short periods). Ergo “loss of low frequency information”.
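
          A toy numerical illustration of that compression (my own sketch, not code from any reconstruction): fit a single linear gain to a saturating response at different input amplitudes, and the gain recovered for large (i.e. low-frequency) swings is systematically smaller.

```python
import numpy as np

# Compressive (tanh) response: the least-squares linear gain falls as the
# input amplitude grows, so in a 1/f signal the large, low-frequency
# components are recovered with the smallest gain by any single linear fit.
def effective_gain(amplitude, n=10_000):
    x = amplitude * np.sin(np.linspace(0.0, 20.0 * np.pi, n))
    y = np.tanh(x)                  # stand-in for a saturating growth response
    return np.polyfit(x, y, 1)[0]   # slope a linear calibration would recover

for a in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(f"amplitude {a}: effective linear gain {effective_gain(a):.2f}")
```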

          Using the climate signal from trees must involve some method of correcting for loss of low-frequency information. Generally this is done by inclusion of temperature-calibrated proxies like borehole measurements (which are more nearly linear, but suffer from poor temporal resolution). The Moberg2005 method is a good example of this approach (though it has problems of its own).

          The trouble with Mann is frankly he’s just not very good at what he does. He’s apparently never either bothered to learn enough signal processing to help him overcome the issues, relying instead on arrogance, intimidation, dishonesty, bullying and even demonizing of people who disagree with him.

          Which is why I claim that many of Mann’s supporters are genuinely uninterested in the true answer to the question “what does historic climate prior to good surface temperature data look like?”. Otherwise there’d be a whole lot less patience within the climate and activist community with Mann’s tactics than there is already. Mann’s big problem is he has friends who support him because of his mouth, rather than support him because of the quality of what he does.

        • Posted Dec 6, 2014 at 4:11 PM | Permalink

          Carrick,
          The quote you’ve attributed to me is actually Steve’s. My point is simply that you can’t show divergence by comparing one site to the NH average. On that basis, plenty of well-instrumented met stations would “diverge”. It doesn’t show a failure of the proxy.

        • Steve McIntyre
          Posted Dec 6, 2014 at 5:23 PM | Permalink

          Nick says:

          On that basis, plenty of well-instrumented met stations would “diverge” [from the NH average]. It doesn’t show a failure of the proxy.

          Nick, it appears that you’re making stuff up once again. You assert as a fact that “plenty” of ‘well-instrumented met station” show a similar “divergence” over the past 30 years. I don’t believe that you have examined the record to locate even a single example. If I am incorrect in this surmise, please identify the station. If I am correct, I would appreciate it if you would apologize to readers for making a fabricated assertion.

        • Posted Dec 6, 2014 at 6:11 PM | Permalink

          Steve,
          “If I am incorrect in this surmise, please identify the station.”
          Indeed you are. Nearly three years ago I wrote a post here which has a map showing all the GHCN stations shaded according to their trends over various periods, including 1980-2010. You can see that various stations in the world have negative trends. One of them (you can click the map for details) is Yosemite National Park HQ, not so far from Sheep Mtn. Its trend was -0.46°C.

        • Steven Mosher
          Posted Dec 6, 2014 at 6:33 PM | Permalink

          Yosemite park is a really oddball station

          station moves, empirical breaks.

          Personally I would not trust nicks version of GHCN. black box and all

          http://berkeleyearth.lbl.gov/stations/32268

        • Steven Mosher
          Posted Dec 6, 2014 at 6:59 PM | Permalink

          hmm, looking at the grid ( I need better coords ) this is what
          I see

          http://berkeleyearth.lbl.gov/locations/37.78N-117.97W

          Yose, PK headquarters is in the pile

          http://berkeleyearth.lbl.gov/stations/32514

        • Steve McIntyre
          Posted Dec 6, 2014 at 9:21 PM | Permalink

          Mosh, for what it’s worth, as we observed in our 2005 articles, none of the original specialists, including Hughes, believed that bristlecone chronologies correlated to local temperature.

          Hughes inconsistently stated that the bristlecone growth pulse in the late 19th and 20th century was due to something else, but despite his belief that the post-1850 pulse could not be attributed to temperature, it was the dominant contribution to the early steps of the Mann reconstruction. Obviously, none of this makes any sense.

          In any case, the relevant point is that there is no meaningful correlation with local temperature. Not all high-elevation tree-ring records from the West that might reflect temperature show this upward trend. It is only clear in the driest parts (western) of the region (the Great Basin), above about 3150 meters elevation, in trees old enough (>~800 years) to have lost most of their bark – ‘stripbark’ trees. As luck would have it, these are precisely the trees that give the chance to build temperature records for most of the Holocene. I am confident that, before AD1850, they do contain a record of decadal-scale growth season temperature variability. I am equally confident that, after that date, they are recording something else.

        • Posted Dec 6, 2014 at 8:13 PM | Permalink

          Mosh,
          “Yosemite park is a really oddball station”
          OK, close to home. Unimpeachable source.
          San Francisco. Trend since 1990 -0.03°C/Century.

          Steve: again you’re dishonestly moving the pea. The Mann reconstruction ends in 1980. The bristlecone series goes dramatically down in the 1980s. Not just a slightly negative trend from 1990 on. If this is your example, then you did not have one and owe readers an apology. Secondly, I take it that you do not agree with Mann’s claim that this location is a “sweet spot” for world temperature and that it should receive dominant weight in a temperature reconstruction. Please confirm that you do not agree with Mann’s claim. It would be constructive if you also recorded this disagreement at Real Climate and got back to us when you’ve done so.

        • davideisenstadt
          Posted Dec 6, 2014 at 9:01 PM | Permalink

          nick: you’ve commented below that what many feel would disqualify a proxy doesn’t do the trick for you…so…the world waits…just what would disqualify a proxy, in your opinion?
          some objective criteria would please us all.
          it shouldn’t be too hard for you to articulate these criteria for the hoi polloi.

        • Posted Dec 6, 2014 at 9:24 PM | Permalink

          Steve,
          “If this is your example, then you did not have one and owe readers an apology. “
          I gave a whole mapful of examples. Mosh wanted an alternative source – there’s his home town, with BEST, but they give 1990-. The key thing from that BEST page – SF trend from 1990 -0.03 C/cen; Calif 0.83 C/Cen, NH 3.65. You can’t deduce from a “divergence” from NH that the measurement is faulty. Different places.

          More examples – just click the map. LA 1980-2010 -1.65 C/cen. Ukiah, Cal, -1.29 C/cen. Chula Vista -2.4 C/Cen. These are unadjusted GHCN.

          Steve: so presumably it is your opinion that Mann’s claim that this was a “sweet spot” for measuring NH temperature was fabricated?

        • Posted Dec 7, 2014 at 1:37 AM | Permalink

          Steve: “so presumably it is your opinion that Mann’s claim that this was a “sweet spot” for measuring NH temperature was fabricated?”

          No, I do not form opinions on “fabrication” so easily. I don’t know what Mann said or what the context was.

        • John Bils
          Posted Dec 7, 2014 at 6:07 AM | Permalink

          Nick Stokes,

          The sweet spot is mentioned in Manns’s latest book.

        • TAG
          Posted Dec 7, 2014 at 8:01 AM | Permalink

          In note 24 on page 275 of his book “The Hockey Stick and the Climate Wars”, Mann cites a paper by Bradley “Are there optimum sites for global paleotemperature reconstruction?”. He describes the paper as being concerned with determining the “sweet spots”[Mann’s phrase] to estimate the average Northern Hemisphere temperature given only a modest number of sites. In the note, Mann states that the research found North America to be a key region. Mann provides the opinion that the basic result of the study still appears to be valid. I’m a layman at all of this so this is the extent of my knowledge on the issue.

          The paper is paywalled at

          http://link.springer.com/chapter/10.1007/978-3-642-61113-1_29

        • Steve McIntyre
          Posted Dec 7, 2014 at 9:41 AM | Permalink

          he also used the term “sweet spot” in his 2006 NAS workshop presentation – see CA blog article on the presentation.

        • kim
          Posted Dec 7, 2014 at 8:13 AM | Permalink

          Sweet spot is
          The focus of the forces;
          Swing low,
          Sweet chariot.
          =========

        • Carrick
          Posted Dec 7, 2014 at 9:51 AM | Permalink

          Nick, sorry for the misattribution of the quote, but what I said I think still applies.

          Regarding sites that show divergence in recent periods, you’re entirely missing the point. The question isn’t whether specific tree-ring proxies have negative slope, but whether the trend in the tree-ring proxy fails to track the local temperature for that period.

          More specifically, when somebody uses an algorithm and a period, say 1940-1980, to generate a linear relationship between some local or regional temperature and a tree-ring proxy, then for that constructed model to be valid, that relationship must be preserved in out-of-sample data.

          If the model fails out-of-sample, this is evidence of the lack of validity of the model.
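
          A minimal sketch of that exercise with synthetic data (hypothetical names, not any published reconstruction): fit the linear relationship over a calibration window, then check whether it still holds afterwards.

```python
import numpy as np

# Calibrate a linear proxy-temperature model on 1940-1980, then verify it
# against the withheld post-1980 data (all series here are synthetic).
rng = np.random.default_rng(42)
years = np.arange(1900, 2010)
temp = 0.01 * (years - 1900) + rng.normal(scale=0.2, size=years.size)
proxy = 0.8 * temp + rng.normal(scale=0.3, size=years.size)    # a proxy that really tracks temperature

cal = (years >= 1940) & (years <= 1980)
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)        # calibration-period fit
recon = slope * proxy + intercept                              # apply the model everywhere

ver = years > 1980                                             # out-of-sample period
r = np.corrcoef(recon[ver], temp[ver])[0, 1]
print(f"out-of-sample correlation: {r:.2f}")                   # high only if the relationship is real
```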

          Again, nobody disputes that tree-ring proxies are climate indicators of some sort; the question is whether you can use a linear model, correlating tree-ring proxies to temperature, and accurately recreate the long-term temperature series from that simple model.

          The evidence, from multiple lines of argument, is that you cannot, and these models fail.

          In essence, whether a San Francisco station has a negative trend over a period is entirely irrelevant to the question of whether you can build a valid relationship between temperature and the long-term trend.

          Note I have suggested that it may be possible to “anchor” short-period variability of tree-ring proxies to long-term variability in temperature, much like the position on your smart phone is updated from the accurate but 1-second-per-sample GPS time/position with 3-axis acceleration and magnetometer data.

          This is an assumption that would need testing, but it plausibly is more likely to work than the use of a nonlinear, multiple climate parameter proxy to deduce, by itself, a long-duration temperature record.

        • MikeN
          Posted Dec 7, 2014 at 10:01 AM | Permalink

          I would be shocked if there were no stations that diverged from hemispheric trend.

        • Kenneth Fritsch
          Posted Dec 7, 2014 at 11:39 AM | Permalink

          Carrick, as I noted in a post above, in my searches of the literature on the use of spectral analyses in climatology, I found that Michael Mann did some of his earliest work in this area and was looking at natural climate cycles. It would appear that after the hockey stick made him famous he has published little or nothing in this area – which I find curious.

          Mann, M.E., Lees. J., Robust Estimation of Background Noise and Signal Detection in Climatic Time Series, Climatic Change, 33, 409-445, 1996.
          Mann, M.E., Park. J., Greenhouse Warming and Changes in the Seasonal Cycle of Temperature: Model Versus Observation, Geophysical Research Letters, 23, 1111-1114, 1996.

          MTM-SVD References

          Mann, M.E., Park, J., Spatial Correlations of Interdecadal Variation in Global Surface Temperatures,Geophysical Research Letters, 20, 1055-1058, 1993.

          Mann, M.E., Lall, U., Saltzman, B., Decadal-to-century scale climate variability: Insights into the Rise and Fall of the Great Salt Lake, Geophysical Research Letters, 22, 937-940, 1995.

          Mann, M.E., Park, J., Bradley, R.S., Global Interdecadal and Century-Scale Climate Oscillations During the Past Five Centuries, Nature, 378, 266-270, 1995.

          Mann, M.E., Park, J., Greenhouse Warming and Changes in the Seasonal Cycle of Temperature: Model Versus Observations, Geophysical Research Letters, 23, 1111-1114, 1996.

          Koch, D., Mann, M.E., Spatial and Temporal Variability of 7Be Surface Concentrations,Tellus, 48B, 387-396, 1996.

          Mann, M.E., Park, J., Joint Spatio-Temporal Modes of Surface Temperature and Sea Level Pressure Variability in the Northern Hemisphere During the Last Century, Journal of Climate, 9, 2137-2162, 1996.

          Rajagopalan, B., Mann, M.E., and Lall, U., A Multivariate Frequency-Domain Approach to Long-Lead Climatic Forecasting, Weather and Forecasting,, 13, 58-74, 1998.

          Tourre, Y., Rajagopalan, B., and Kushnir, Y., Dominant patterns of climate variability in the Atlantic over the last 136 years, Journal of Climate, 12, 2285-2299, 1998.

          Mann, M.E., Park, J., Oscillatory Spatiotemporal Signal Detection in Climate Studies: A Multiple-Taper Spectral Domain Approach, Advances in Geophysics, 41, 1-131, 1999.

          Delworth, T.L., and Mann, M.E., Observed and Simulated Multidecadal Variability in the Northern Hemisphere, Climate Dynamics, 16, 661-676, 2000.

          Mann, M.E., Bradley, R.S., Hughes, M.K., Long-term variability in the El Nino Southern Oscillation and associated teleconnections, in Diaz, H.F. and Markgraf, V. (eds), El Nino and the Southern Oscillation: Multiscale Variability and its Impacts on Natural Ecosystems and Society, Cambridge University Press, Cambridge, UK, 357-412, 2000.

        • Carrick
          Posted Dec 7, 2014 at 2:40 PM | Permalink

          Kenneth Fritsch, thanks for the reference list. I hadn’t seen all of these.

          I actually think Mann’s early “Robust Estimation…” paper is one of his better and fundamentally more sound ones.

        • Posted Dec 7, 2014 at 3:44 PM | Permalink

          Carrick,
          “you’re entirely missing the point. The question isn’t whether specific tree-ring proxies have negative slope, but whether the trend in the tree-ring proxy fails to track the local temperature for that period.”

          I’m not missing the point. That is my point. The NH average shown in Fig 2 is not the local temperature.

          Salzer et al did it properly. In Fig 3 they correlated with, variously, the Berkeley 1° grid and the CRU 0.5° grid. Local temperatures. And they used values corresponding to the growing season, not whole-year. And they did get useful results on correlation. Trees at the treeline correlated with local temperature rather than precipitation. Away from the treeline (and not so far away) they diverged to the opposite.

        • Steve McIntyre
          Posted Dec 7, 2014 at 4:39 PM | Permalink

          Nick says:

          I’m not missing the point. That is my point. The NH average shown in Fig 2 is not the local temperature.

          No, Nick, you are missing the point. The Mann reconstruction is supposed to be an estimate of NH temperature, not local temperature. As has been said repeatedly, Mann claimed that bristlecones could measure “training patterns”, one of which was equivalent to NH temperature, not local temperature. You’re misrepresenting the situation, as you do far too often.

          The Mann network in its early steps is heavily weighted by bristlecones, so its “skill” depends on the ability of bristlecones to track world temperature. The idea that they were magic thermometers for world temperature has been one that we’ve contested all along, as for example:

          Given the pivotal dependence of MBH98 results on bristlecone pines and Gaspé cedars, one would have thought that there would be copious literature proving the validity of these indicators as temperature proxies. Instead the specialist literature only raises questions about each indicator which need to be resolved prior to using them as temperature proxies at all, let alone considering them as uniquely accurate stenographs of the world’s temperature history.

          It turns out that they are not magic thermometers for world temperature – surprise, surprise.

        • TAG
          Posted Dec 7, 2014 at 4:22 PM | Permalink

          Nick Stokes writes:

          Trees at the treeline correlated with local temperature rather than precipitation. Away from the treeline (and not so far away) they diverged to the opposite.

          And if the temperature increases and the treeline moves up, what happens to the correlation? And if the temperature gets cooler and the treeline moves down, what happens to the correlation? Your observation appears to me to point out a fatal handicap in using the response of these trees as a temperature proxy. If the temperature changes and the treeline moves, then there is no useful correlation. There is an assumption of stationarity of the temperature response for long-term reconstructions and your observation casts this into severe doubt, at least for these trees.

        • Carrick
          Posted Dec 7, 2014 at 6:37 PM | Permalink

          Nick Stokes:

          I’m not missing the point. That is my point. The NH average shown in Fig 2 is not the local temperature.

          Good grief.

          You were the one discussing the local temperature in SF.

          Amnesia much?

        • Carrick
          Posted Dec 7, 2014 at 6:46 PM | Permalink

          Nick Stokes:

          Trees at the treeline correlated with local temperature rather than precipitation. Away from the treeline (and not so far away) they diverged to the opposite.

          Which works as long as the tree line doesn’t shift, which it will as the Earth warms and cools. This illustrates the problem with assuming persistent temperature limited growth.

        • Posted Dec 7, 2014 at 8:55 PM | Permalink

          Carrick,
          “Amnesia much?”
          I don’t know how you can misunderstand that very simple point. This post, with Fig 2, says that the Sheep Mtn proxy is shown to be faulty because it diverges from NH average temperature. I say that it was not expected to follow NH average temperature; as a proxy it is expected to follow local temperature. And I list local Calif instrumental temperatures that similarly diverge from the NH average. I’m not saying that it is following SF temperature either. I’m just saying that a hemisphere recon, whether proxy or instrumental, is made up of a diverse collection, each member of which may individually diverge from the average. That’s why Mann was collecting a whole lot of proxies in the first place. MBH98 says:

          “Although studies have shown that well chosen regional paleoclimate reconstructions can act as surprisingly representative surrogates for large-scale climate, multiproxy networks seem to provide the greatest opportunity for large-scale palaeoclimate reconstruction and climate signal detection.”

        • Steve McIntyre
          Posted Dec 7, 2014 at 10:02 PM | Permalink

          Nick, again, you are making stuff up. As I told you previously, Mann et al 1998-99 did NOT say that “proxies” had a linear relationship to local temperature. They claimed that the bristlecones were linearly related to “instrumental training patterns” (which were PCs of gridcell temperatures and thus the PC1 was almost identical to average temperature). Here’s one of their statements:

          In fact we specified (MBH98) that indicators should be “linearly related to one or more of the instrumental training patterns”, not local temperatures.

          In addition, they weighted the proxies so that the bristlecones ended up being heavily weighted.

          Now due to unavailability of data, I haven’t tried to calculate an out-of-sample MBH reconstruction, but it is evident that the bristlecones are heavily weighted in the AD1400 and AD1000 networks and that the out-of-sample index constructed by applying MBH weights to the out-of-sample proxies will be heavily influenced by the downward bristlecone results and will not track NH temperature.

          But you know this.

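          To make the arithmetic concrete: applying fixed calibration-period weights to updated proxy values is just a weighted sum. A minimal sketch with made-up placeholder numbers (not the actual MBH weights or proxy data):

          import numpy as np

          # Hypothetical inputs: a (years x proxies) matrix of post-1980 proxy values
          # and the fixed weights each proxy received in the calibration period.
          rng = np.random.default_rng(7)
          proxies_post1980 = rng.normal(size=(30, 5))                 # e.g. 1981-2010, 5 proxies
          calibration_weights = np.array([0.6, 0.1, 0.1, 0.1, 0.1])   # one heavily weighted proxy

          # Out-of-sample index: apply the frozen calibration weights to the new data.
          # If a heavily weighted proxy declines, the index declines with it.
          oos_index = proxies_post1980 @ calibration_weights
          print(oos_index.round(2))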

        • Posted Dec 7, 2014 at 11:02 PM | Permalink

          Steve,
          “Nick, again, you are making stuff up.”
          No. There is nothing new in the idea that dendros are seeking a proxy for local temperatures. Briffa, for example, starts his Nature paper with:
          “Tree-ring chronologies that represent annual changes in the density of wood formed during the late summer can provide a proxy for local summertime air temperature.”

          The problem is finding a local temperature to calibrate against. MBH in 1998 could not, like Salzer, pluck highly resolved grid values from KNMI. So Mann did a PCA of spatial/temporal instrumental temperature, to derive EOFs to use in regression. These explicitly provide for spatial variation, and he assumes, for the purposes of his analysis, that they express the proxy behaviour, because globally they express most of the variation. He directly acknowledges that despite this, they may not:
          “Implicit in our approach are at least three fundamental assumptions. (1) The indicators in our multiproxy trainee network are linearly related to one or more of the instrumental training patterns. In the relatively unlikely event that a proxy indicator represents a truly local climate phenomenon which is uncorrelated with larger-scale climate variations, or represents a highly nonlinear response to climate variations, this assumption will not be satisfied.”


          Steve: Nick, I’m not going to waste time with you on this other than to observe that gridcell data was available in 1998 and was used in Mann et al 1998. As you know, the Mann reconstruction purported to reconstruct NH temperature, not Crooked Creek temperature.

        • Carrick
          Posted Dec 8, 2014 at 2:09 AM | Permalink

          Nick, I’m not sure how you can accuse me of missing a point, when what I was addressing from the start was local divergence of the proxy from temperature (which is the critical issue for a local climate proxy). The comment about amnesia showed up when you tried to divert attention to a separate issue raised by Steven McIntyre. That’s not me missing a point; that’s you skirting the one I was actually addressing.

          You pointed out that, yes, local temperature can deviate from the global mean. But this really doesn’t have much significance. We know today that local temperatures are highly spatially correlated (up to microscale issues), so we can average a number of stations and reduce the self-noise associated with individual stations. But we’re not, if we are sensible anyway, going to advocate mixing stations from coastal locations into a series to compare against a tree-line proxy on a mountain.

          The other point is that when you construct a statistical model for a tree-ring proxy using a temperature series, validity testing of that model requires out-of-sample comparison of the model prediction against the temperature series used by the model, not against some random third series. So if Mann uses gridded temperature to construct the model, you need to use gridded temperature in the comparison.
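          As an illustration of that kind of check – calibrate the proxy against the same temperature series on the early period, then verify the prediction on the later period – here is a minimal sketch using synthetic placeholder series (not the MBH data or method):

          import numpy as np

          # Synthetic placeholder series on a common annual axis (not real data).
          rng = np.random.default_rng(0)
          years = np.arange(1902, 2010)
          grid_temp = 0.01 * (years - 1902) + rng.normal(0, 0.2, years.size)   # "local" temperature
          proxy = 0.5 * grid_temp + rng.normal(0, 0.2, years.size)             # proxy tracking it noisily

          cal = (years >= 1902) & (years <= 1980)   # calibration period
          ver = years > 1980                        # verification (out-of-sample) period

          # Fit a simple linear proxy -> temperature model on the calibration period only.
          slope, intercept = np.polyfit(proxy[cal], grid_temp[cal], 1)
          predicted = intercept + slope * proxy[ver]

          # Test the prediction against the SAME temperature series out of sample,
          # here with the RE (reduction of error) verification statistic.
          clim = grid_temp[cal].mean()
          re = 1.0 - ((grid_temp[ver] - predicted) ** 2).sum() / ((grid_temp[ver] - clim) ** 2).sum()
          print("RE =", round(float(re), 3))   # RE > 0 indicates some out-of-sample skill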

          It’s a fair point to raise that if we constructed a more appropriate model, the out-of-sample testing might not fail. So it’s hyperbole to claim, as BishopHill did, that tree-ring proxies are invalidated because a particular statistical model tested against out-of-sample data failed. All that a failed test shows is that the model, and possibly the methodology used to construct it, fails; it does not by itself invalidate the proxy as a temperature proxy.

          IMO the use of strip-bark trees is unlikely ever to work, however, because it simply isn’t plausible that these trees would make good temperature proxies. In addition to gravitating towards flawed proxies that show signals he apparently likes to reproduce, Mann has often used implausible physical models, like the “pick 2” method in Mann 2008, which have little chance of being physically relevant (and hence little chance of yielding correlations that persist outside the training period). It’s not a surprise to me that such poorly conceived models fail to validate, but we shouldn’t get confused about what we’re actually testing here.

          I’m a bit surprised that you now seem to be raising the counterfactual argument that independent temperature series weren’t available in 1998 as Mann’s motivation for the use of PCA. As a cursory scholar.google.com search will confirm, the GHCN and USHCN were available well before this, as was gridded land temperature. And as Steve McIntyre pointed out and many of us were already aware, Mann in fact used gridded temperature data in his reconstruction.

          As far as I can tell, and based on anything Mann said in his 1998 paper and since, the use of PCA had absolutely nothing to do with not being able to “pluck highly resolved grid values” from KNMI or any other source. It was simply an attempt to reduce the noise in the comparison by truncating a noisy data series. There’s actually nothing wrong with the concept, though there are probably better ways of achieving this.
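          For what it’s worth, the noise-reduction idea is easy to sketch: decompose the gridded field into EOFs and keep only the leading patterns. This is a generic illustration on a synthetic field, not a reproduction of Mann’s procedure:

          import numpy as np

          # Placeholder gridded temperature field: rows are years, columns are gridcells.
          rng = np.random.default_rng(1)
          field = rng.normal(size=(80, 200))            # 80 years x 200 gridcells (synthetic)
          anom = field - field.mean(axis=0)             # remove each gridcell's mean

          # EOF decomposition via SVD; retain only the leading k patterns.
          U, s, Vt = np.linalg.svd(anom, full_matrices=False)
          k = 5
          pcs = U[:, :k] * s[:k]                        # time series ("training patterns")
          eofs = Vt[:k, :]                              # spatial patterns

          # Truncated reconstruction: keeps the large-scale structure, discards the
          # higher-order (noisier) modes.
          field_truncated = pcs @ eofs + field.mean(axis=0)
          print(field_truncated.shape)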

        • Posted Dec 8, 2014 at 3:46 AM | Permalink

          Carrick,
          “But we’re not, if we are sensible anyway, going to advocate mixing in stations from coastal locations into a series to compare against a tree-line proxy on a mountain.”
          “I’m a bit surprised that you now seem to be raising the counterfactual argument that independent temperature series weren’t available in 1998 as Mann’s motivation for the use of PCA.”
          I didn’t say that temperature series weren’t available. I said:
          “MBH in 1998 could not, like Salzer, pluck highly resolved grid values from KNMI”
          Salzer used 0.5° and 1° grids. Those can be regarded as local. Mann had 5° grids available – roughly 300 miles across. The grid cell including Sheep Mtn doesn’t quite reach the sea, but includes much of the Central Valley, and also much of Nevada. Mann presumably thought that, as a local measure, that wasn’t what he needed. So he used PCA on the instrumental data to extract EOFs (a different use from the PCA of the proxies).

          “requires out-of-sample comparison of the model prediction against the temperature series that was being used by the model, not against some random third series.”
          I think the NH average qualifies as a random third series. In fact Mann did not use a grid average. He used the gridded data to calculate EOFs, then used EOFs in regression.

          “validity testing of that model requires out-of-sample comparison”
          Somehow Salzer’s paper has been badged elsewhere as out of sample testing of MBH98. It wasn’t. In fact, neither MBH9x nor Graybill were referred to.

        • Steve McIntyre
          Posted Dec 8, 2014 at 9:10 AM | Permalink

          Watch Nick move the pea, while failing to properly cite the post under discussion.

          In my post, I stated that Salzer et al didn’t compare to Graybill versions used by Mann. (To their shame, in my opinion).

          Unfortunately, Salzer et al did not compare their new data to the chronology versions used in Mann et al 1998-99, Mann et al 2009 and many other multiproxy reconstructions.

          Now watch Nick assert (seemingly as an original observation) that Salzer had not compared to MBH or Graybill – without noting that I had already pointed this out. By not doing so, Nick creates a false impression.

          Somehow Salzer’s paper has been badged elsewhere as out of sample testing of MBH98. It wasn’t. In fact, neither MBH9x nor Graybill were referred to.

          And while we may regret that Salzer et al evaded the direct comparison that ought to have been done, nonetheless the updated ring width measurements are entirely relevant to out-of-sample testing, contrary to Nick’s most recent whinge.

        • Steven Mosher
          Posted Dec 8, 2014 at 11:37 AM | Permalink

          “Watch Nick move the pea, while failing to properly cite the post under discussion.”

          it’s getting pathetic.

        • Carrick
          Posted Dec 8, 2014 at 1:30 PM | Permalink

          Nick Stokes:

          I think the NH average qualifies as a random third series. In fact Mann did not use a grid average. He used the gridded data to calculate EOFs, then used EOFs in regression.

          If the purpose of the EOF analysis is to allow you to construct a statistical model relating tree-ring proxies to local temperature, then you should be able to use the out-of-sample local-temperature record to test the tree-ring proxy temperature model. Presumably trees are sensing local temperature rather than some component of an eigenvalue expansion over the regional temperature field. Otherwise there’s something physically wrong with your model assumptions.

          This is a side point though.

          The main question is whether Sheep Mountain tree-ring proxies are good temperature proxies. This is a science question, and I think the appropriate answer is “they are not”. Obviously this has direct implications for MBH or any other study relying on these proxies.

          Out-of-sample testing of an a priori model is one particular statistical method for model invalidation. Recognizing that the proxy fails to be a temperature proxy for other reasons is also a method for model invalidation and I would say it is a more powerful one.

          In other words, I don’t think Salzer has to specifically test Mann’s Graybill chronologies before we can arrive at unambiguous conclusions here.

      • Steven Mosher
        Posted Dec 6, 2014 at 9:16 PM | Permalink

        wow nick is really off his game.

        The location of that station in SF is affected by SST.

        One thing Berkeley doesn’t do very well is stations near the coast, because we haven’t added a term for distance to coast to the regression.

        Now, globally this term doesn’t add much explained variance, but locally the lack of this term WILL give you answers that are not optimal.

        @SteveM if you look at the White Mountain grid… its correlation with global is not magical.

        • Curious George
          Posted Dec 6, 2014 at 9:47 PM | Permalink

          Steven – is there a documentation of BEST adjustments? I believe the code is available, but it is usually a pain in the ass to decipher – will appreciate a link anyway.

        • Steven Mosher
          Posted Dec 6, 2014 at 11:16 PM | Permalink

          “Steven – is there a documentation of BEST adjustments? I believe the code is available, but it is usually a pain in the ass to decipher – will appreciate a link anyway.”

          there aren’t any adjustments.

          Think of it this way: we do a regression against latitude and altitude. That gives you the climate of an area.
          Next, we take the residual, and this is defined as the “weather”.

          The combination of the climate field and the weather field is the temperature.

          Lastly, we compute what amounts to “fitted values” for the combined fields. This is “what would have been measured” under the assumptions of the statistical model.

          The fitted values of a regression differ from the actual data.

          So, we don’t adjust data series and then average them together. We use the data to come up with a prediction or estimation of the field that minimizes the error on a global basis.

          You can think of the fitted values as an “adjusted” series, but there is no explicit human-directed adjusting going on.

          Again, the fitted values represent what we predict would have been measured IF the assumptions of the model hold true.

          Of course we compare that to the explicit adjustments that other folks do, but philosophically and operationally it is an entirely different beast.

          You could say its a “metaphor” to call the series adjusted or “adjusted”

          % In many cases, raw temperature data contains a number of artifacts,
          % caused by issues such as typographical errors, instrumentation changes,
          % station moves, and urban or agricultural development near the station.
          % The Berkeley Earth analysis process attempts to identify and estimate
          % the impact of various kinds of data quality problems by comparing each
          % time series to neighboring series. At the end of the analysis process,
          % the “adjusted” data is created as an estimate of what the weather at
          % this location might have looked like after removing apparent biases.
          % This “adjusted” data will generally to be free from quality control
          % issues and be regionally homogeneous. Some users may find this
          % “adjusted” data that attempts to remove apparent biases more
          % suitable for their needs, while other users may prefer to work
          % with raw values.

          So, when you minimize the error in the local fit there can be a tendency to over-smooth. However, as you test this fitting you find that the global average is insensitive to this. That can put you in a position where you are globally correct and locally wrong. So, if you are interested in a specific small region, then you probably want to take the raw data and do a local regression, where you may be able to add more explanatory variables to the regression; land type, distance from coast, and indices for cold drainage areas are the most important things to add.
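          To illustrate the decomposition described above – climate from a regression on latitude and altitude, weather as the residual, fitted values from a smoothed combination – here is a toy sketch with synthetic stations. It is only a cartoon of the idea, not the Berkeley Earth code:

          import numpy as np
          from scipy.spatial import cKDTree

          # Toy synthetic stations (placeholders, not Berkeley Earth's inputs).
          rng = np.random.default_rng(2)
          n = 500
          lat = rng.uniform(30, 50, n)                       # degrees
          alt = rng.uniform(0, 3000, n)                      # metres
          temp = 30 - 0.6 * lat - 0.0065 * alt + rng.normal(0, 1.5, n)   # observed annual means

          # "Climate": regression of temperature against latitude and altitude.
          X = np.column_stack([np.ones(n), lat, alt])
          coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
          climate = X @ coef

          # "Weather": the residual from the climate fit.
          weather = temp - climate

          # Crude stand-in for an interpolated weather field: replace each station's own
          # residual with the mean residual of its 20 nearest neighbours (lat/alt space).
          coords = np.column_stack([lat, alt / 100.0])       # rough rescaling of altitude
          tree = cKDTree(coords)
          _, idx = tree.query(coords, k=21)
          weather_field = weather[idx[:, 1:]].mean(axis=1)   # drop column 0 (the station itself)

          # "Fitted values": climate plus the regional weather estimate - generally not
          # identical to the raw series, but no explicit per-station adjustment was made.
          fitted = climate + weather_field
          print(round(float(np.corrcoef(fitted, temp)[0, 1]), 3))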

        • Curious George
          Posted Dec 8, 2014 at 2:55 PM | Permalink

          Steven – thank you. I’ll need quite some time to digest your answer. Can you please link to a list of assumptions you refer to in [This is “what would have been measured” under the assumptions of the statistical model]?

        • Steven Mosher
          Posted Dec 9, 2014 at 4:15 PM | Permalink

          You have to read the papers.

      • Steven Mosher
        Posted Dec 7, 2014 at 3:07 PM | Permalink

        Carrick

        “In essence, the question of whether a San Francisco station has a negative trend over a period is entirely irrelevant to the question of whether you can build a valid relationship between temperature and long-term trend.”

        I think Nick raises a good point and an interesting thought experiment, one I suggested long ago on CA.

        question: what would we say if someone took a single station or single area and suggested that it was a proxy for the entire globe or just a hemisphere?

        And how would you react when that station ceased to correlate well with the global record? Would you doubt it?

        In fact it might be a fun game to play

        • Carrick
          Posted Dec 7, 2014 at 7:19 PM | Permalink

          Steven Mosher, interesting question. If you look at the correlation between an individual station and, e.g., the global mean temperature, is there a geographic effect, or is it just random?

          I don’t know that I’ve seen anybody analyze that.
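          If anyone wanted to try it, the exercise is mechanical: correlate each station’s anomaly series with the global mean and then look for a geographic pattern in the correlations. A minimal sketch with synthetic placeholder data (the latitude dependence of the noise is an assumption made purely for illustration):

          import numpy as np

          # Synthetic placeholder anomalies: 300 stations x 60 years, each sharing a
          # common "global" signal plus station noise whose size varies with latitude.
          rng = np.random.default_rng(3)
          years, n = 60, 300
          lat = rng.uniform(-60, 70, n)
          global_signal = np.cumsum(rng.normal(0, 0.1, years))
          noise_sd = 0.3 + 0.01 * np.abs(lat)          # illustrative assumption only
          stations = global_signal[:, None] + rng.normal(0, 1, (years, n)) * noise_sd

          global_mean = stations.mean(axis=1)

          # Correlation of each station with the global mean series.
          r = np.array([np.corrcoef(stations[:, i], global_mean)[0, 1] for i in range(n)])

          # A first look at whether there is a geographic effect.
          print("corr(|lat|, r) =", round(float(np.corrcoef(np.abs(lat), r)[0, 1]), 3))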

        • Pouncer
          Posted Dec 7, 2014 at 8:32 PM | Permalink

          With climate researchers Jimmy Stewart and Jane Wyman, who conflict over preservation of the “magic thermometer” ?

        • Gerald Machnee
          Posted Dec 7, 2014 at 9:38 PM | Permalink

          ***question: what would we say if someone took a single station or single area and suggested that it was a proxy for the entire globe or just a hemisphere?

          And how would you react when that station ceased to correlate well with the global record? Would you doubt it?

          In fact it might be a fun game to play***

          That game was played – with a hockey stick. I think it is over.

  21. Phil B.
    Posted Dec 5, 2014 at 2:37 PM | Permalink

    There are two weather stations, White Mountain RS and Crooked Creek RS, within 15 or so miles of these trees; it seems like the paper should have included the nearby instrumental record. What is interesting is that 1998 was one of the coldest years in a 60-year record in this grid cell and in California.

    Phil B

    • Duke C.
      Posted Dec 5, 2014 at 6:08 PM | Permalink

      Crooked Creek is ~2 miles from Sheep Mt., Barcroft ~4 miles. There is an inversion layer (see LaMarche 1973, pg. 636) that collects colder air around Crooked Creek, which makes interpolation unreliable. I have some temperature data from the new Salzer site (only a few days’ worth); I would estimate that the temps at the upper treeline are 5-8 degrees F warmer than Crooked Creek at night, even though it’s 500 meters higher.

    • Juan Slayton
      Posted Dec 5, 2014 at 9:13 PM | Permalink

      In a special supplement to last Sunday’s Arizona Daily Star (November 30), Dr. Salzer presents his view of tree-ring thermometry. Some interesting information, which I had never seen before:

      “…during the summers of 2013 and 2014 we have deployed hundreds of button-sized temperature recorders on bristlecones in four different mountain ranges in California and Nevada. These devices record temperature readings every hour for up to a year.

      “By combining the temperature data from the recorders with high-precision computer mapping, we found that individual trees growing at least as close as 100 yards can experience large differences in temperature….

      The article is online at
      tucson.com/ua-science-a-place-of-discovery-and-education/image_71e72df6-7a4a-11e4-a914-43d379c3baf6.html

      • John F. Hultquist
        Posted Dec 7, 2014 at 9:59 PM | Permalink

        I’m just catching up and know this is late (and not too relevant), but …
        the “button-sized temperature recorders” or similar such things have been used in high-end vineyards for a number of years. Some people think that keeping track of how well vines grow is really important.

  22. Kenneth Fritsch
    Posted Dec 5, 2014 at 2:50 PM | Permalink

    SteveM, that is an important find for some out-of-sample data on proxy responses. I would expect that is not something to which the climate science community is going to point. I cannot remember ever seeing the critical statistical difference between out-of-sample and in-sample discussed in temperature-related reconstructions, and as a result I do not even have a good feel for how well the community understands the difference. They might be merely ignoring it until they can find some hoped-for evidence that the divergence problem is related to AGW and thus can be dismissed as a problem for proxy responses to temperature back in time.

    Since I have been looking at more proxy data from temperature reconstructions, I have gained an increased appreciation for what SteveM has done in past and more recent posts on these matters. I also appreciate Craig Loehle’s comments from a biological aspect. I have found, as SteveM and others such as Craig Loehle have indicated at this blog, that the model for proxy response can be rather complicated. Those complications have not changed my view of the wrong-headedness of the approach of most of those doing temperature reconstructions in selecting proxies, and, in fact, have made me even more curious about the lack of detailed analyses of specific proxies and proxy types.

    Recently, I have been doing spectral analyses of proxy responses and station data. As an aside, I was rather surprised to find that Michael Mann had started his career in climate science with a good deal of interest in spectral analyses of data representing natural climate cycles; after MBH (1999), I have not seen much published interest by him in this area of investigation. My analyses require more work to provide some quantitative measures of the differences I see in proxy-pair, station-pair and station/proxy comparisons. Generally I have found that near-neighbor station data has much better coherence than near-neighbor proxy data, and that proxy pairs showing reasonably good coherence in the instrumental period can have poor to very poor coherence in historical periods of the same length as the instrumental period. In a few cases I have seen near-neighbor station data with poor coherence, and that needs more investigation. My analysis has allowed me to look at two series and find at what frequencies (periods) the series have higher and lower coherence and by how much they are in or out of phase.

    My literature searches have found spectral analyses used in climatology, but not applied as I am applying them, with a more or less negative hypothesis. Most papers concentrate on finding cycles and secular trends in the data.
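    For readers who want to try the same sort of comparison, the coherence calculation described above can be sketched in a few lines; the series below are synthetic stand-ins, not actual proxy or station data:

    import numpy as np
    from scipy.signal import coherence, csd

    # Two synthetic annual series standing in for a near-neighbor proxy pair:
    # a shared low-frequency signal plus independent noise at each "site".
    rng = np.random.default_rng(4)
    n = 512
    shared = np.cumsum(rng.normal(0, 0.1, n))
    x = shared + rng.normal(0, 0.5, n)
    y = shared + rng.normal(0, 0.5, n)

    # Magnitude-squared coherence by frequency (fs=1 per year, so a frequency
    # of 0.1 corresponds to a 10-year period).
    f, Cxy = coherence(x, y, fs=1.0, nperseg=128)

    # Relative phase at each frequency from the cross-spectrum (0 = in phase).
    _, Pxy = csd(x, y, fs=1.0, nperseg=128)
    phase = np.angle(Pxy)

    for freq, c, p in zip(f[1:6], Cxy[1:6], phase[1:6]):
        print(f"period {1/freq:6.1f} yr   coherence {c:.2f}   phase {p:+.2f} rad")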

  23. Kenneth Fritsch
    Posted Dec 5, 2014 at 5:08 PM | Permalink

    As a practical matter in proxy responses to temperature, and particularly in remote areas of the globe, there may well not be a near-neighbor station with a sufficiently long instrumental record to make a good comparison with the proxy response, and therefore regional data are used. I would suppose that, if one wants to argue that local temperature anomalies can be very different, then even a reasonably close station record might not make a good comparison. Further, if it is the difference between local and regional temperature changes that makes proxies appear to diverge from the instrumental record, then that great variability means that a very large number of well-spaced proxy sites are required to avoid huge confidence intervals in determining a mean regional temperature anomaly.

  24. Posted Dec 5, 2014 at 5:15 PM | Permalink

    Meanwhile, temps continue to rise.

    • Kenneth Fritsch
      Posted Dec 5, 2014 at 5:58 PM | Permalink

      Shelama, and location is?

    • Mooloo
      Posted Dec 6, 2014 at 2:00 AM | Permalink

      Meanwhile, temps continue to rise.

      Yes, we know. From which it follows that the bristlecones of Sheep Mountain are useless as a temperature proxy, because they don’t show it. From which it follows that the Mann temperature “hockey stick” is dealt a massive credibility blow.

      • PhilH
        Posted Dec 7, 2014 at 11:47 AM | Permalink

        That happened a long time ago.

  25. Posted Dec 5, 2014 at 7:30 PM | Permalink

    Jim Bouldin’s posts on tree rings have not been acknowledged by the community. It still bothers me.

    Jim Bouldin seems very persuasive.

    • kim
      Posted Dec 6, 2014 at 12:50 AM | Permalink

      Science is golden. Er, uh, only in it for the silence.
      ===========

    • pdtillman
      Posted Dec 6, 2014 at 2:56 AM | Permalink

      –nor have Our Host’s, and he does good, persuasive work, and actually has a clue about how to handle time series, autocorrelations and what statistical tools to use. Hint: don’t make up your own on the fly…. ;-]

      I get the distinct feel of whistling past the graveyard on the part of the dendro-paleoclimatology community. Time will tell, but the bulk of their work looks pretty useless (and clueless) to me. As Peter Webster once remarked, the death-rattle comes to mind.

      Cheers — Pete Tillman
      Professional geologist, advanced-amateur paleoclimatologist

      • pdtillman
        Posted Dec 6, 2014 at 2:59 AM | Permalink

        Bah, this was intended as a followup @ man in barrel, Dec 5, 2014 at 7:30 PM

        “Jim Bouldin’s posts on tree rings have not been acknowledged by the community. It still bothers me.”

        Me, too.

    • MikeN
      Posted Dec 6, 2014 at 5:12 PM | Permalink

      Instead, he was removed earlier this year from the list of contributors at RealClimate.

      • kch
        Posted Dec 11, 2014 at 8:09 PM | Permalink

        Not just removed from the contributor’s list, but apparently stuffed down the memory hole. In yesterday’s “Ten years of thanks” post at RC, he was not even mentioned as a contributor, let alone an early member of their team. Seems rather petty…

    • RayG
      Posted Dec 6, 2014 at 6:01 PM | Permalink

      Thank you for the Jim Bouldin link. Clear writing and interesting responses to solid questions.

  26. Posted Dec 6, 2014 at 8:45 AM | Permalink

    Reblogged this on I Didn't Ask To Be a Blog and commented:
    Another nail in the coffin…

  27. RalphR
    Posted Dec 6, 2014 at 12:01 PM | Permalink

    Interesting words (to me, anyway) in the Data and Methods section of the article under discussion:
    “Two of the trees at SFa had been previously sampled so we used four cores from two of the trees at SFa for a total of 12 series at this site.”
    Does this have any implication one way or the other in a discussion of SFa divergence and out-of-sample validity?

  28. EdeF
    Posted Dec 6, 2014 at 8:33 PM | Permalink

    The link below shows the average annual temperature at the Bishop, CA airport from 1944 to 2013. If you stand at Sheep Mt. with a 3-iron, face due west and slog your way down the mountain hitting bad golf shots, you will wind up at Bishop A/P. It is the best surrogate for climate in that area going back to WWII.

    The plot of annual temperatures looks absolutely uncontroversial: gradual cooling from the 40s until the 70s, then an increase until about 2000 (1998 being a real hot bugger), and then what looks like gradual cooling since. The trees have it spot on. I have seen this same trend at NAWC, China Lake; Lone Pine; and several other areas, and indeed in most of North America.

    Listen to the trees……….

    http://www.wrcc.dri.edu/cgi-bin/cliMAIN.pl?ca0822

    • kim
      Posted Dec 7, 2014 at 7:51 AM | Permalink

      Climate Scilence.
      ===========

    • EdeF
      Posted Dec 7, 2014 at 10:38 AM | Permalink

      Meant to say: local temps follow the usual trends, but the trees are highly divergent after about 1985, which is telling us something.

    • Duke C.
      Posted Dec 8, 2014 at 12:03 AM | Permalink

      Why Bishop? Barcroft Station is a few miles from Sheep and close to the same elevation. There’s 33 years of data, 1951-2014.

      Here’s daily max temps, starting from ’51:

      • Duke C.
        Posted Dec 8, 2014 at 12:09 AM | Permalink

        sorry, 63 years

    • Phil B.
      Posted Dec 8, 2014 at 12:50 PM | Permalink

      EdeF, I didn’t see any annual data from the Bishop site; did you average the monthly data and then plot? One of the tables showed that 1998 was the coldest year in their record.

      Phil B

  29. HankH
    Posted Dec 6, 2014 at 11:57 PM | Permalink

    Steve, I don’t comment much at all but wanted to stop in and congratulate you on the fine work you do to underscore the divergence and failure of these out-of-sample temperature proxy studies.

  30. john robertson
    Posted Dec 7, 2014 at 12:35 AM | Permalink

    Thanks again Steve McIntyre, your persistence is amazing.
    So, as you expected, the divergence is clear. Good call.
    No gold in the updated samples.
    Just made a small donation. Merry Christmas. I appreciate your work.

  31. Posted Dec 7, 2014 at 6:41 AM | Permalink

    I haven’t read all the way through, so forgive me if someone’s already mentioned it.
    Has anyone considered pollution as a possible cause of retarded growth?
    This, from a study in the Great Smokies around 1984, suggests a link:
    “… growth suppression and increases of iron and other metals were found in rings formed in the past 20 to 25 years, a period when regional fossil fuel combustion emissions increased about 200 percent. Metals concentrations in phloem and cambium are high, but whether they exceed toxic thresholds for these tissues is not known.”

    Click to access science.pdf

    Are there any useful studies of pollution in tree rings for Sheep Mountain ?

    Hate to suggest a get-out, but it seems the warmists will be all over pollution as an explanation.

  32. MikeN
    Posted Dec 7, 2014 at 10:05 AM | Permalink

    For updating proxy reconstructions, is it likely that the act of taking the original sample would affect the growth of that tree?
    It would be nice if you could get updates from the same trees.

    Steve: look at the series of posts on Almagre, where Pete Holtzmann located some of Graybill’s trees (and tags).

  33. mpainter
    Posted Dec 8, 2014 at 12:40 AM | Permalink

    I just have to say that the divergence between the south-facing (SFa) and north-facing (NFa) bristlecone chronologies on Sheep Mtn fascinates me.
    I would guess that it has to do with soil moisture. Why so? Need more data to make a determination.

  34. Posted Dec 8, 2014 at 3:45 AM | Permalink

    Reblogged this on Wolsten and commented:
    The death knell for Mann’s Hockey Stick, the crumbling foundation upon which stands the temple of climate alarmism.

    • Ian H
      Posted Dec 12, 2014 at 3:56 AM | Permalink

      How can this be the death knell? Mann’s hockey stick is long buried. This study merely puts a stake through the heart of the exhumed corpse.

  35. observa
    Posted Dec 8, 2014 at 7:18 AM | Permalink

    ‘Out of sample’? A very polite euphemism indeed Mr McIntyre. Are all statisticians this polite?

    I have this vision of drug companies quoting ‘out of sample’ while their patients are dropping like flies.

  36. EdeF
    Posted Dec 11, 2014 at 7:31 PM | Permalink

    Reports of 139 mph winds at the summit of White Mt this morning about 11:30 am.

  37. AndyL
    Posted Dec 18, 2014 at 6:46 AM | Permalink

    Greg Laden has a post on tree rings which mentions Sheep Mountain. It ends with the amazing finding that “change in regional (and global) temperature is increasingly implicated as the cause of the divergence problem”

    http://www.gaiagazette.com/new-research-on-tree-rings-as-indicators-of-past-climate-greg-ladens-blog/

    So that’s settled. Divergence is caused by changes in temperature.

  38. gregladen
    Posted Dec 18, 2014 at 9:46 AM | Permalink

    A small blog post on the latest paper from the Sheep Mountain people: http://scienceblogs.com/gregladen/2014/12/17/new-research-on-tree-rings-as-indicators-of-past-climate/

    Steve: thank you for drawing my attention to the post. I’ve taken a quick look at the Salzer et al 2013 paper and it doesn’t change my results at all. If one compares the chronology shown in its Figure 3 with the Graybill version, it lacks the dramatic HS used in Mann et al 1998. Nor does anything in the comment refute my observation about out-of-sample performance of the proxy.

    As to the divergence, I’ve been meticulous in my descriptions of the issue. While Mann puffs that his reconstruction to 1980 is not impacted by inconsistency between tree rings and temperature, if one looks at post-1980 performance of the proxies used in Mann et al 1998-99, they do not perform out of sample. Mann et al ended their reconstruction at the absolute top of the market of the Sheep Mountain chronology, following which it had a precipitous decline in the Graybill version. The Salzer version doesn’t have the large chronology uptick of the original Graybill version.

    • Layman Lurker
      Posted Dec 18, 2014 at 12:53 PM | Permalink

      This article is appalling. Absolute dreck. An example:

      …there was a correction of the Bristlecone Pine data for inflated 20th century increase (which was attributed to CO2 fertilization at the time) in MBH99. So we actually applied a downward correction of the trend in those data. McIntyre doesn’t want people to know that. So need to make sure that is crystal clear.

      How many CA blog articles and comment threads discuss the CO2 “correction” in MBH99 Steve?


      Steve: Jean S and I spent quite a bit of time on the Mannkovitch bodge, with Jean S finally figuring it out. The bodge is totally arbitrary and ad hoc.

      Andrew Montford wrote an excellent overview
      http://bishophill.squarespace.com/blog/2010/4/26/a-good-trick-to-create-a-decline.html

      Posts include the following

      The MBH99 CO2 "Adjustment"

      Mann's co2detrend.f Calculation

      Re-scaling the Mann and Jones 2003 PC1

      The Hockey Stick and the Milankovitch Theory

      Moderate Low Weight

      Kevin O’Neill’s “Fraud” Allegations

      Funny, I’d been thinking about doing an update on this in connection with the EPA report.

      Taking a quick look at the Laden post: I’ve been meticulous in distinguishing the decline problem in the Briffa reconstruction after 1960. The point in my post is that the bristlecone ring widths after 1980 don’t perform out of sample as the Mann reconstruction requires. Nothing in the Laden post contradicts this.

    • sue
      Posted Dec 18, 2014 at 2:07 PM | Permalink

      Greg, did you really write that blog post or did Mike Mann hand it to you? Mosher, with your skills, what do you think?

    • Steve McIntyre
      Posted Dec 18, 2014 at 4:31 PM | Permalink

      For all the bad-mouthing from Hughes, he doesn’t actually contradict anything in my post, though the following implies or states that Salzer et al 2013 contradicted me:

      McIntyre shows the same figure I show above (Figure 5 from that paper) and critiques the researchers for failing to integrate that figure or its data with Mann et al’s climate reconstructions. But they shouldn’t have. That is not what the paper is about. Another very recent paper by the same team is in fact a climate reconstruction study (published in Climate Dynamics) but McIntyre manages to ignore that.

      If one compares the Sheep Mountain reconstruction used in Mann et al 1998-1999 to the information shown in Salzer et al 2013, precisely the same out-of-sample failure results, as shown below, where I’ve plotted the Graybill chronology used in MBH98 (red to 1980, green after) on the Salzer 2013 version of the chronology.

      [Figure: salzer_2013_chronology_annotated – the Graybill chronology used in MBH98 (red to 1980, green after) overlaid on the Salzer et al 2013 version of the chronology.]

      The Salzer 2013 chronology has a lesser post-1980 decline, because it doesn’t replicate the huge pre-1980 HS of the Graybill chronology used in Mann et al 1998.
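      For anyone wanting to reproduce this sort of overlay from digitized series, a minimal plotting sketch follows; the arrays are placeholders standing in for the digitized chronologies, not the data behind the figure above:

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(5)
      years_g = np.arange(1900, 1988)                       # Graybill/MBH98 version, ends 1987
      graybill = 1.0 + 0.01 * (years_g - 1900) + rng.normal(0, 0.05, years_g.size)
      years_s = np.arange(1900, 2010)                       # Salzer et al 2013 version
      salzer = 1.0 + 0.003 * (years_s - 1900) + rng.normal(0, 0.05, years_s.size)

      fig, ax = plt.subplots()
      ax.plot(years_s, salzer, color="0.4", lw=1, label="Salzer et al 2013 chronology")
      pre = years_g <= 1980
      ax.plot(years_g[pre], graybill[pre], color="red", lw=1.5, label="Graybill (MBH98) to 1980")
      ax.plot(years_g[~pre], graybill[~pre], color="green", lw=1.5, label="Graybill after 1980")
      ax.axvline(1980, color="k", ls=":", lw=0.8)           # end of the MBH reconstruction
      ax.set_xlabel("Year"); ax.set_ylabel("Ring-width index")
      ax.legend(frameon=False)
      plt.show()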

      Hughes argued that it was not the purpose of the 2014 paper to compare performance of the updated series to the version used in Mann et al 1998-99, but the 2013 Salzer et al paper doesn’t do it either. It doesn’t cite Graybill’s original study nor does it note the importance of Graybill chronologies in the Mann et al reconstructions.

      Hughes also stated:

          “Back in 1999 we (Mann et al) made the best available choices with the information and data we had. Now, more than 15 years later, with a Bristlecone Pine record that extends back 5000 years, the original results hold up remarkably well.”

      Even in 1999, there were serious caveats and questions about the validity of Graybill chronologies as temperature proxies. To say that they were the “best available” choices is absurd. They did not even meet the stated selection criteria of the original study.

      And to say that the Graybill chronologies have held up “remarkably well” as temperature proxies simply shows the erosion of language. The Sheep Mountain ring-width chronologies have not soared with late 20th century temperatures; they have retreated towards the mean. If this counts as good performance in Mannian world, I cannot imagine what bad performance is.

      Nor is the 1999 bodge of the PC1 relevant to this discussion. Hughes claims that I’ve ignored it, but it’s been discussed at length at Climate Audit. I’ll spend some time on it on another occasion.

      • sue
        Posted Dec 18, 2014 at 6:43 PM | Permalink

        Steve, I think those are Greg Laden’s words (possibly Mann’s) not Hughes’ words.

13 Trackbacks

  1. […] well. Look what Steve McIntyre has found. After all those years of sceptics calling for tree-ring series to be updated so as to provide […]

  2. […] […]

  3. […] https://climateaudit.org/2014/12/04/sheep-mountain-update/ […]

  4. […] https://climateaudit.org/2014/12/04/sheep-mountain-update/ […]

  5. […] McIntyre has been asking for an update since 2005. He has the details of the new paper by Salzer, and produces this devastating graph below. The black line is MBH98 – the Michael Mann curve of […]

  6. […] new information shows dramatic failure of the Sheep Mountain chronology as an out-of-sample temperature proxy, as it has a dramatic divergence from NH temperature since […]

  7. […] https://climateaudit.org/2014/12/04/sheep-mountain-update/ […]

  8. […] writes: There’s an excellent post over at Climate Audit on Sheep Mountain which seemed too good not to […]

  9. […] McIntyre, Steve, 2014, “Sheep Mountain Update” blog post […]

  10. […] McIntyre, Steve, 2014, “Sheep Mountain Update” blog post […]

  11. […] McIntyre, Steve, 2014, “Sheep Mountain Update” blog post […]

  12. […] Per la controversia sull’Hockey Stick vedere ad es. Wikipedia, qui e, per un recente sviluppo su nuovi dati, due post di Steve McIntyre su Climate Audit, qui. […]

  13. […] here and the video by Prof. Richard Muller here. Also, see the updated (2009) Sheep Mountain data here. For a less technical review, see […]