Kaufman and Upside-Down Mann

Kaufman et al (2009), published at 2 pm today, is a multiproxy study involving the following regular Team authors: Bradley, Briffa (the AR4 millennial reconstruction lead author), Overpeck, Caspar Ammann, David Schneider (of Steig et al 2009) and Otto-Bliesner (Ammann’s supervisor and conflicted NAS Panel member), together with “JOPL-SI authors” who contributed various sediment series.

One of the few proxy data contributors not listed as a coauthor is Mia Tiljander, whose data was used upside down in Mann et al 2008. Amusingly, the Kaufman Team perpetuates Mann’s upside-down use of the Tiljander proxy, though they at least truncate the huge blade (resulting from modern sediment influx caused by bridge construction and farming).

The graph below shows the original data from Tiljander (oriented so that warm is up.)

Figure 1. Excerpt from Tiljander Boreas 2003 Figure 5 – rotated to warm-is-up orientation. The increased sedimentation in the 19th and 20th centuries is attributed to farming and bridge construction and is not evidence of “cold”.

Mann et al 2008 series #1064 can be seen to be an inverted version of the Tiljander series, as shown by the plot below.


Figure 2. Mann et al 2008 proxy 1064 plotted reverse to Mann orientation (showing that the author’s original orientation is achieved only by inverting the Mann orientation.)

Kaufman et al make decadal averages of their proxies. The graph below shows the Mann 2008 data (Mann orientation), converted to 10-year anomalies, truncated to 1800 and then scaled. Mann orientation is upside-down to the orientation in Figures 1 and 2.


Figure 3. Mann et al Series 1064 (in Mann orientation) converted to 10-year averages, truncated to 1800 and scaled. Mann orientation is upside-down to the orientation in Figures 1 and 2.
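For readers who want to replicate this step, here is a minimal R sketch of the decadal conversion (“tilj” is a hypothetical data frame with columns “year” and “xray”; the details of Kaufman et al’s own procedure may differ):

decade <- 10 * floor(tilj$year / 10)
dec <- tapply(tilj$xray, decade, mean, na.rm = TRUE)   # 10-year averages
dec <- dec[as.numeric(names(dec)) <= 1800]             # drop post-1800 values
dec_scaled <- (dec - mean(dec)) / sd(dec)              # scale to zero mean, unit sd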

Next, here is a plot of Kaufman series #20 (Lake Korttajärvi) from their SI. This was presented in an exceedingly annoying format – it was available only as a photo, so the data was not available in digital form. I transcribed series 20 manually and may have introduced a couple of discrepancies. (I’ve uploaded my transcription.) In addition, data for the decades from 1105 to 1225 was missing from the SI. Unlike Mann et al 2008, Kaufman et al truncated post-1800 data. You can readily see that this closely matches the Mann version and is thus also upside down relative to Tiljander’s intended orientation.


Figure 4. Plot of Manually Transcribed Kaufman series #20.

The continued use of upside-down data by the Team is really quite remarkable. It’s not as though they were unaware of the issue.

The upside-down use of Tiljander data was originally observed at CA (http://www.climateaudit.org/?p=3967). We know that Mann and Schmidt were monitoring CA because changes to Mann’s SI (always without attribution) were made soon after CA posts.

The use of upside-down data in Mann et al 2008 was even published at PNAS earlier this year (McIntyre and McKitrick, PNAS 2009; see here). In their response at PNAS, Mann et al described the claim that they used the data upside down as “bizarre”, notwithstanding the fact that the correctness of the observation can be readily seen merely by plotting Mann’s data (and even in the data plots in the Mann et al 2008 SI).

The Team is exceptionally stubborn about admitting even the least error. We saw an amusing illustration in Mann et al 2007, where the incorrect geographic locations of MBH98 proxies were perpetuated: the rain in Maine still continued to fall mainly in the Seine.

It is even more amusingly illustrated by Kaufman’s perpetuation of Mann’s upside down use of the Tiljander proxy (rather than conceding Mann’s error and using the data in the right orientation.) Also note here that Bradley was involved in both studies.

I’m sure we’ll soon hear that this error doesn’t “matter”. Team errors never seem to. And y’know, it’s probably correct that it doesn’t “matter” whether the truncated Tiljander (and probably a number of other series) are used upside-down or not. The fact that such errors don’t “matter” surely says something not only about the quality of workmanship but of the methodology itself.

[Update Sep 8] – Last week, I notified Kaufman about the use of Upside Down Tiljander, asking in addition for various “publicly available” data sets that do not appear to actually be available anywhere that I know of. He replied yesterday attaching a graph indicating that it doesn’t matter whether Tiljander is used upside down and unresponsively referred me to the decadal values of the data already available.

What does “matter” in these sorts of studies are a few HS-shaped series. Testing MBH without the Graybill bristlecones provoked screams of outrage – these obviously “mattered”. Indeed, in MBH, nothing else much “mattered”. The Yamal HS-shaped series (substituted in Briffa 2000 for the Polar Urals update which had a high MWP) plays a similar role in the few studies that don’t use Graybill bristlecones. The present study doesn’t use bristlecones, but Briffa’s Yamal substitution is predictably on hand. (See the latter part of my 2008 Erice presentation for some discussion of this.)

Further analysis will require examination of the individual proxies. Kaufman et al provide 10-year decadal averages in their photo SI, promising that data will be made available at NCDC, but it wasn’t there as of the time of writing this note. They say that all data is public (other than annual versions of some series that they obtained from original authors), but I could only locate digital versions of some of the series.

The problem with these sorts of studies is that no class of proxy (tree rings, ice core isotopes) is unambiguously correlated to temperature and, over and over again, authors pick proxies that confirm their bias and discard proxies that do not. This problem is exacerbated by author pre-knowledge of what individual proxies look like, leading to the biased selection of certain favored proxies into study after study.

We’ve seen this sort of problem with the Yamal tree ring series (#22), which has been discussed at CA on many occasions. (See for example the discussion in the latter part of https://climateaudit.org/wp-content/uploads/2008/09/mcintyre.2008.erice.pdf .) Briffa originally used the Polar Urals site to represent this region, and this data set was used in MBH98-99 and Jones et al 1998. The data set was updated in the late 1990s, resulting in an elevated Medieval Warm Period. Briffa did not report on the updated data; it has never been published. The data only became available after quasi-litigation with Science in connection with data used in Esper et al 2002. Instead of using the updated Polar Urals version with an elevated MWP, Briffa constructed his own chronology for Yamal, yielding a hockey-stick shaped result. The Yamal substitution has been used in virtually every subsequent study (a point noted by Wegman et al 2006) and is used once again in Kaufman et al 2009. In other studies, a simple replacement of the Yamal version with the updated Polar Urals version impacts the medieval-modern relationship, and this needs to be considered here.

On the other hand, a long Siberian tree ring series known to have an elevated MWP is not used: the Indigirka River (Siberia) tree ring series was used in Moberg et al 2005, but is not used in this study, though it is a long chronology in the same sort of region.

They use Briffa’s version of Tornetrask as a leading component of their Fennoscandia series (#18). Tornetrask is used in virtually every reconstruction, a point made on many occasions at CA (also see Wegman et al 2006). An updated Tornetrask version (Grudd 2008) has an elevated medieval warm period – see the discussion in https://climateaudit.org/wp-content/uploads/2008/09/mcintyre.2008.erice.pdf.

Notable omissions are the Mount Logan ice core and Jellybean Lake sediment series. (See http://www.climateaudit.org/?p=2348 and http://www.climateaudit.org/?p=806 for discussion of the Mount Logan proxies.) The Mount Logan ice core delO18 values decrease in the 20th century, contrary to the presumed increase. Although the Mount Logan isotopes are as well resolved as the ice core isotopes used by Kaufman et al, they are excluded (along with a candidate sediment series) on the basis that the “bad” results for these proxies are due to changes in “moisture source” rather than temperature.

We excluded the isotope-based records from ice cores in the Saint Elias Mountains (S4) and from Jellybean Lake carbonate (S5), both in the Yukon, because the proxies are more strongly controlled by changes in moisture-source and atmospheric moisture transport patterns than by temperature.

The problem with this sort of reasoning is that it cuts both ways: if changes in moisture source can cause isotope values to go down, they can equally cause isotope values to go up – so the retained series are exposed to the same objection.

Worsening this particular situation is the failure of Lonnie Thompson to report “adverse” results at Bona-Churchill (see the CA posts mentioned above.) Bona-Churchill, an ice core site near Mount Logan, was drilled in 2002. The unseemly delay in reporting results led me to speculate several years ago that these results were “bad” for Lonnie Thompson’s advocacy. This prediction was confirmed in a diagram presented in a workshop; the data itself remains unpublished to this day.

I note that the Dye-3 isotopes (#12) have been “corrected” to account for ice flow. In my opinion, the place for such adjustments should be in the original articles and not in multiproxy compilations. This will need to be assessed.

As has been observed on many occasions at CA and on other critical blogs (it’s been independently noted by Jeff Id, David Stockwell and Lubos Motl as well as myself), when data sets are selected ex post according to whether they go up in the 20th century – as opposed to using all the data sets – the results are subject to a very severe HS bias. David Stockwell published this result in 2006 (see here), an article cited in McIntyre and McKitrick PNAS 2009, illustrating it as below (similar illustrations are available at Jeff Id’s and Luboš’):

The most cursory examination of Kaufman et al shows the usual problem of picking proxies ex post: e.g. the exclusion of the Mount Logan ice core and Jellybean Lake sediment series, or the selection of Yamal rather than Polar Urals – a problem made all the more pernicious by the failure to archive “bad” results (e.g. Thompson’s Bona-Churchill or Jacoby’s “we’re paid to tell a story”). Until these problems are rooted out, it’s hard to place much weight on any HS reconstruction.
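To see how severe the ex post screening bias can be, here is a minimal R sketch in the spirit of Stockwell’s illustration (all parameters are merely illustrative):

set.seed(123)
N <- 1000
yrs <- 1000:1999
noise <- replicate(N, cumsum(rnorm(length(yrs))))      # pure red-noise pseudoproxies
modern <- yrs >= 1900
r <- apply(noise[modern, ], 2, function(x) cor(x, yrs[modern]))
picked <- noise[, r > 0.5]                             # keep only series that go up after 1900
composite <- rowMeans(scale(picked))                   # average the survivors
plot(yrs, composite, type = "l")                       # a hockey stick from noise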

Update: Here are interesting layers extracted from Kaufman, showing the contribution of each proxy type clearly rather than as the usual spaghetti graph. This shows nicely that the seven (of 23) ice core proxies make no contribution to the HS-ness of the result and that they do not show 20th century uniqueness. The biggest HS comes from the Briffa tree rings (and I’m sure that the Yamal series contributes the majority of the HS-ness in this composite). The 12 sediment series are intermediate: here we still need to examine the orientation of these series and which sediment series contribute to HS-ness.




Excerpt from Kaufman et al.

Update: As noted by a reader in the Loso thread, compaction is a problem with this sort of data. The Murray Lake data includes density information, which is plotted below. Density stabilizes at a mean of about 1.09, but less compacted recent sediments are less dense.

Reference: Kaufman et al, Science 2009. SI data is supposed to be at http://www.ncdc.noaa.gov/paleo/pubs/kaufman2009 but isn’t there as of Sep 3, 2009 6 pm, when the article was published.
Tiljander, Boreas 2003 https://climateaudit.org/wp-content/uploads/2009/09/tiljanderetal.pdf

292 Comments

  1. Steve McIntyre
    Posted Sep 3, 2009 at 12:12 PM | Permalink

    Google under News for “arctic kaufmann” for predictable articles.

  2. Tim G
    Posted Sep 3, 2009 at 12:22 PM | Permalink

    Open Source Temperature Reconstruction

    I think someone should collect all the proxy data that is available. Convert it to some common format. And then share it as a GIT archive.

    Next, others can create code that filters, massages and joins the data into a temperature reconstruction. That code can augment the original GIT archive and be shared, itself. Anyone who wishes to look at the code can clone the archive. If they find a flaw, they can change it locally. If the owner of the archive wishes, he can push the changes back in. Otherwise the second author can share his archive.

    Finally, using a common output format, results can easily be shared. Different methodologies can be compared. And anyone wishing to investigate/improve the methods can share their ideas and code via their GIT archive.

    Sorry. Just my idea about how science could work.

    –t

  3. Steve McIntyre
    Posted Sep 3, 2009 at 12:27 PM | Permalink

    If authors routinely archived data at the time of publication at NCDC, this would accomplish the most important part.

    And if multiproxy authors routinely provided accurate data citations (in compliance with existing policies at some journals), that would solve another major problem.

    There are a number of problems in converting data sets to common formats. I don’t mind working with native formats. My bigger beef is unavailability of some of the data and lack of clear definition of the version used.

  4. Posted Sep 3, 2009 at 12:31 PM | Permalink

    I’m just curious, what was the ‘point’ of this new article? Was it meant to be just another piece in a long line of AGW evidence?

  5. Steve McIntyre
    Posted Sep 3, 2009 at 12:38 PM | Permalink

    See here for example for what the authors say about their study: http://www.cbc.ca/technology/story/2009/09/03/climate-environment-arctic-change.html

  6. bernie
    Posted Sep 3, 2009 at 12:38 PM | Permalink

    Steve:
    Is Mann an author on this article?
    Did this type of confirmatory “data mining” come up as a general issue at Erice?

  7. Posted Sep 3, 2009 at 12:40 PM | Permalink

    See press release about this 9/4 Science article at http://www.eurekalert.org/pub_releases/2009-09/ncfa-aaw083109.php. Lead author is Darrell Kaufman, with one n according to the release.

  8. Matthew W
    Posted Sep 3, 2009 at 12:50 PM | Permalink

    ARRRRGGGHHH !!!
    Do you have a “Cliff Notes” version so some of us dummies can understand???

  9. Posted Sep 3, 2009 at 12:53 PM | Permalink

    The following picture, based on one in the Science article, is included with the eurekalert press release:

    Caption: New research shows that the Arctic reversed a long-term cooling trend and began warming rapidly in recent decades. The blue line shows estimates of Arctic temperatures over the last 2,000 years, based on proxy records from lake sediments, ice cores and tree rings. The green line shows the long-term cooling trend. The red line shows the recent warming based on actual observations. A 2000-year transient climate simulation with NCAR’s Community Climate System Model shows the same overall temperature decrease as does the proxy temperature reconstruction, which gives scientists confidence that their estimates are accurate. (source of caption)

    • Clark
      Posted Sep 3, 2009 at 1:38 PM | Permalink

      Re: Hu McCulloch (#9),

      It’s scary how much that looks like Jeff Id’s many recreations of the Mann calibration effect on random data. Always a long gradual downward slope followed by the sharp up-tick.

      • Steve McIntyre
        Posted Sep 3, 2009 at 1:42 PM | Permalink

        Re: Clark (#14),

        Well, there’s a reason. The Team selects series that go up in the 20th century and discards ones that don’t. This sort of ex-post correlation picking generates HS’s from random data, a point made not just by Jeff Id (who’s done so very effectively) but is one that’s been made here as well over the past few years and by David Stockwell.

        • Craig Loehle
          Posted Sep 3, 2009 at 6:24 PM | Permalink

          Re: Steve McIntyre (#16), Steve said: “Well, there’s a reason. The Team selects series that go up in the 20th century and discards ones that don’t. This sort of ex-post correlation picking generates HS’s from random data, a point made not just by Jeff Id (who’s done so very effectively) but is one that’s been made here as well over the past few years and by David Stockwell.”

          It is simply stunning how simple this point is…and yet the Team does not seem to understand this concept.

        • Posted Sep 3, 2009 at 6:35 PM | Permalink

          Re: Craig Loehle (#43), or they simply choose to ignore it. Or actively seek it out.

        • Craig Loehle
          Posted Sep 3, 2009 at 6:36 PM | Permalink

          Re: Jeff Alberts (#47), One of those things I think but try not to say publicly.

        • Pat Frank
          Posted Sep 3, 2009 at 7:38 PM | Permalink

          Re: Craig Loehle (#43), Craig, did Kaufman, ea remark upon or even reference your 2007 E&E paper?

        • Craig Loehle
          Posted Sep 4, 2009 at 6:20 AM | Permalink

          Re: Pat Frank (#54), Ha ha! You must be kidding!

    • Posted Sep 3, 2009 at 3:15 PM | Permalink

      Re: Hu McCulloch (#9), That blade doesn’t look right to me either:

      Polyakov, I.V., et al., 2003. Variability and Trends of Air Temperature and Pressure in the Maritime Arctic, 1875-2000. Journal of Climate, 16, 2067-2077.

    • Posted Sep 3, 2009 at 5:44 PM | Permalink

      Re: Hu McCulloch (#9), to me it is outrageous that people can still get away with this graph.

      Over the long term, it is completely at variance with archaeological and historical evidence from Viking-medieval settlement of Greenland, to say nothing of your 2008 study with Craig Loehle.

      On the red “measured temperature” section from around 1850, it is totally at variance with the many temperature records in the Arctic regions that John Daly collected – for example Bodø, Norway.

    • Bob McAlpine
      Posted Feb 6, 2010 at 10:34 PM | Permalink

      Is there any way we can find which thermometers were used for this graph? I seem to recall that they are now using only one Canadian station in the Arctic. I think we have over 50 stations there, and this graph is an interpolation of the temperatures from the nearest southern station.

  10. Kenneth Fritsch
    Posted Sep 3, 2009 at 12:55 PM | Permalink

    With all the proxy data available, I suggest that all proxies be put in a hat (a large one) and withdrawn randomly until a predetermined number is reached that would give reasonable global or regional coverage. Repeat several times and compare the results as a sensitivity test for cherry-picking proxies.
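    A minimal R sketch of such a random-draw test (assuming “proxy” is the 23-column decadal matrix that turns up later in this thread; draw size and repetition count are arbitrary):

    set.seed(42)
    composites <- replicate(100, {
      pick <- sample(ncol(proxy), 12)                # draw 12 of the 23 proxies at random
      rowMeans(scale(proxy[, pick]), na.rm = TRUE)   # simple scaled composite of the draw
    })
    matplot(composites, type = "l", lty = 1, col = "grey")  # spread across random draws
    lines(rowMeans(composites), lwd = 2)                    # overall mean composite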

    The paper’s claim that we were experiencing 2000 years of Arctic cooling before the past 50 years of AGW would indicate to me that we have averted an impending ice age, or the beginnings of one – if we assume that the recent ice ages have been initiated by Northern Hemisphere cooling and albedo feedback effects. Would not that be a beneficial effect of AGW?

  11. Calvin Ball
    Posted Sep 3, 2009 at 12:59 PM | Permalink

    Is it time to roll out the CareerBuilders vid again?

    Steve: I presume that you mean the one below. It’s not quite the same here as in Mann 2008 as they truncated in 1800. So it’s not as spectacular as Mann 2008.

  12. Calvin Ball
    Posted Sep 3, 2009 at 1:13 PM | Permalink

    Upside-Down Mann

  13. MikeN
    Posted Sep 3, 2009 at 1:17 PM | Permalink

    Why does it matter if a proxy is upside-down? Won’t the algorithm just flip it back in the direction of lining up with temperature?

    Steve:
    This study used CPS which assumes that the orientation of the proxies is known a priori.

  14. Posted Sep 3, 2009 at 1:42 PM | Permalink

    How did Kaufman et al calibrate Tiljander to instrumental temperatures if they truncated it to 1800??

    Next, here is a plot of Kaufman series #20 (Lake Korttajärvi) from their SI. This was presented in an exceedingly annoying format – it was available only as a photo, so the data was not available in digital form. I transcribed series 20 manually and may have introduced a couple of discrepancies. (I’ve uploaded my transcription.)

    What is the URL of their SI? I guess the paper won’t exist until tomorrow, but maybe the SI is already linked. Perhaps a reader or readers would proofread your transcription for errors if pointed to the SI.

    Steve: SI is temporarily here.

    They scaled all the proxies on 980-1800, then averaged the scaled proxies, then re-scaled (CPS).
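    A minimal R sketch of that CPS recipe (variable names are hypothetical; the paper’s implementation may differ in detail):

    # 'proxy': decadally averaged matrix (rows = decades, cols = proxies);
    # 'decade': vector of decade dates. Note that orientation is taken as
    # given - CPS does not flip series, so an upside-down proxy stays upside down.
    cal <- decade >= 980 & decade <= 1800
    m <- colMeans(proxy[cal, ], na.rm = TRUE)
    s <- apply(proxy[cal, ], 2, sd, na.rm = TRUE)
    scaled <- sweep(sweep(proxy, 2, m, "-"), 2, s, "/")   # scale each proxy on 980-1800
    composite <- rowMeans(scaled, na.rm = TRUE)           # average the scaled proxies
    recon <- as.vector(scale(composite))                  # re-scale the composite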

  15. Hoi Polloi
    Posted Sep 3, 2009 at 2:03 PM | Permalink

    “Global Warming Could Forestall Ice Age”

    http://www.nytimes.com/2009/09/04/science/earth/04arctic.html?hp

  16. Steve McIntyre
    Posted Sep 3, 2009 at 2:09 PM | Permalink

    I sent the following email to Kaufman:

    Dear Dr Kaufman,

    Mann et al 2008 used the Tiljander series upside down from the orientation in the underlying articles (see McIntyre and McKitrick, PNAS 2009) – a point confirmed with Mia Tiljander (pers comm). I notice that you used this data in the upside-down Mann orientation, though you seem to be aware of the issues surrounding this series, as you truncated it at AD1800. You should report that you used this series upside-down to the orientation recommended by the authors.

    You say that you selected series from those with “publicly available data (8) (table S1) (www.ncdc.noaa.gov/paleo/pubs/jopl2008arctic/jopl2008arctic.html)”. The link only refers to 6 or so of the 23 data sets. I have been unable to locate “publicly available” versions of many of the data sets, including SI Table 1 series 3, 6, 7, 12, 13, 19, 21 (in a few cases, as you note, you used annual versions that are not publicly available). Could you please provide me with the above data that is not publicly available. You may wish to amend your text to be a bit more precise prior to the final print version.

    Regards
    Steve McIntyre

  17. tty
    Posted Sep 3, 2009 at 2:10 PM | Permalink

    Why would you need to correct the Dye 3 data for flow? For this recent interval (2000 years) annual layers are still discernible, so there can’t really be any doubt about the dating.

    Steve: They say:

    The oxygen-isotope values from DYE-3 were corrected to account for the flow of ice from higher elevation. Using an ice-flow model for the area (S34 – N. Reeh et al., Am. Geophys. Union Geophys. Mono. 33, 57 (1985)), we determine the relation between the change in the depositional elevation of the snow and the age of ice (0.035 m yr-1). Combining this with the Greenland isotope-elevation gradient of -0.006‰ m-1 (S35 – W. Dansgaard, Medd. Grønland 165(2) (1961).), a correction of 0.00021‰ yr-1 was derived for the DYE-3 isotope time series.
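    The quoted correction is easy to check; in R:

    elev_rate <- 0.035     # m/yr: change in depositional elevation with age of ice (S34)
    gradient <- -0.006     # permil/m: Greenland isotope-elevation gradient (S35)
    elev_rate * gradient   # -0.00021 permil/yr: magnitude matches the stated correction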

    • tty
      Posted Sep 4, 2009 at 12:49 PM | Permalink

      Re: tty (#20),

      But the formula they use for adjusting the d18O values assumes that the horizontal displacement is directly proportional to the age of the ice, i.e. that the horizontal component of ice movement does not vary with depth. This is a good approximation for temperate glaciers, which are at pressure-melting temperature throughout, but it definitely does not apply to the much colder arctic icecaps. Also, the ice divide has very likely moved a bit over 2000 years, also affecting the speed of movement.

      And if they are doing corrections this finicky (an altitude change of 3.5 cm yr-1), they should also correct for changes in the thickness of the icecap, and for isostatic movement of the bedrock under the ice, which are of the same order of magnitude.

  18. mondo
    Posted Sep 3, 2009 at 2:19 PM | Permalink

    It would seem that Mann et al might have missed their philosophy classes. If they hadn’t, they might have known about the pertinent Friedrich Wilhelm Nietzsche quote:

    “The most perfidious way of harming a cause consists of defending it deliberately with faulty arguments.”

    • jeez
      Posted Sep 3, 2009 at 2:42 PM | Permalink

      Re: mondo (#20),

      Don’t forget the modern corollary from Roger Pielke Jr:

      If Michael Mann did not exist, the skeptics would have to invent him.

    • Calvin Ball
      Posted Sep 3, 2009 at 2:49 PM | Permalink

      Re: mondo (#20), which is why it’s sometimes hard to tell sincerity from satire.

  19. Posted Sep 3, 2009 at 2:51 PM | Permalink

    Well, there is an anthropogenic signal in the data!

    😆

  20. hemst101
    Posted Sep 3, 2009 at 2:53 PM | Permalink

    Hu #9

    I have a few problems with the graph.

    The red line obscures the blue reconstruction. (Wish they wouldn’t do that – just make two charts.) However, from what I can make out, the blue line increases rapidly in the early 1900s.

    According to CAGW proponents, this increase was caused by normal variation, not GHGs or humans. Some people contend that the amount and rate of the 1st and 2nd warmings in the 20th century are statistically indistinguishable.

    Notice the blue line’s second peak ~1995 is quite close to the first ~1940. This agrees with the historical instrumental temperatures as given on a website such as climate4you, i.e. the temperature in the present Arctic is very similar to the Arctic temperature of the 1940s. Also, Syun Akasofu indicates the similarity between now and the 1940s.

    The red line shows a dramatic difference between the arctic conditions of the 1940s and the present. Why??

    The red line when compared with the blue reconstruction shows much more variation. This makes me think that the blue line of the past is not accurate.

    Just knowing some temperature history makes me very skeptical that this graph is accurately portraying the past temperatures of the Arctic.

    The graph just rings alarm bells.

  21. hemst101
    Posted Sep 3, 2009 at 3:15 PM | Permalink

    Hu #9

    I note with interest that the blue reconstruction line in the early 20th century increases rapidly. According to past AGW dogma this was caused by “normal variation”. Has this dogma changed?

    I have read (Michaels; Climate4you) that the increase and rate of increase of early and late 20th century are statistically indistinguishable.

    Also noted is the great difference between the anomalies of ~1940 and the present. Historical temperature records that I have seen indicate that ~1940 and the present are not very different (if at all).

    Being aware of (hopefully) reasonably reliable past Arctic temperature histories makes me very skeptical of the chart.

  22. Posted Sep 3, 2009 at 3:18 PM | Permalink

    ARGABLE!

    That blade looks wrong wrong wrong:

    Polyakov, I.V., et al., 2003. Variability and Trends of Air Temperature and Pressure in the Maritime Arctic, 1875-2000. Journal of Climate, 16, 2067-2077.

  23. Scott Brim
    Posted Sep 3, 2009 at 3:20 PM | Permalink

    OK, why do we observe the reuse of upside-down data in this latest study? Could a process of reverse engineering have been used to remanufacture some previous version of the study’s original design into this new analysis product?

  24. Steve McIntyre
    Posted Sep 3, 2009 at 3:42 PM | Permalink

    I’ve added a couple paragraphs at the end of the post. I’ll add links to the relevant posts at Jeff ID and Lubos if someone sends them to me.

  25. Posted Sep 3, 2009 at 3:53 PM | Permalink

    From the MS SI online at http://data.climateaudit.org/pdf/multiproxy/kaufmann.2009.si.pdf,

    For the t-test of the correlation coefficients reported in the text, we also reduced the number of degrees of freedom, according to the formula:
    n* = (n – 2) [(1 – r1^2) / (1 + r1^2)]
    where n = original sample size; n* = adjusted sample size; r1 = lag-1 correlation coefficient of the two time series being compared. This was then used in a two-tailed t-test and compared with the critical t-value.

    This is interesting! The Bartlett-Quenouille “effective DOF” adjustment, as used by climato-statisticians like Santer and Nychka, is in fact based on r1, rather than r1 squared as here. Furthermore, the “r1” that is used is supposed to be the lag-1 autocorrelation coefficient of the regression residuals, not the correlation between the two series at lag 1. See “Steig 2009’s Non-Correction for Serial Correlation” and comments for details.

    (Bartlett (1935) and Quenouille (1952) in fact correctly interacted their r1 with the lag-1 autocorrelation of the independent variable. However, the latter is unity in large samples with a time trend or strongly trending independent variable, in which case the Santer-Nychka simplification is adequate, for moderate AR(1) serial correlation.)
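    For comparison, a small R sketch of the two adjustments side by side (n and r1 are illustrative):

    n <- 100; r1 <- 0.5
    n_si <- (n - 2) * (1 - r1^2) / (1 + r1^2)   # formula as written in the Kaufman SI
    n_bq <- n * (1 - r1) / (1 + r1)             # standard Bartlett-Quenouille form
    c(n_si, n_bq)                               # 58.8 vs 33.3: squaring r1 retains more
                                                # DOF and so overstates significance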

    RE #16, I see now that the scanned table is too grainy to waste much time on trying to transcribe accurately. When urged, Nature actually made Steig come forward with at least some of the data required to replicate his study, in accord with Nature’s data rules (the AVHRR table). Does Science have a similar policy? If so, this table is unacceptable. Let’s hope the official version that comes out tomorrow is better.

    • Posted Oct 18, 2009 at 1:12 PM | Permalink

      Re: Hu McCulloch (#35),

      The Bartlett-Quenouille “effective DOF” adjustment, as used by climato-statisticians like Santer and Nychka, is in fact based on r1, rather than r1 squared as here

      But in Fig. 3C: “Least squares linear regression yields a cooling trend of –0.22° ± 0.06°C per 1000 years (8) (Fig. 3C).” I get 0.06 by using r1, not r1 squared. The components that make up this trend are quite interesting:

      Last time we had long-term cooling there were some CO2-adjustments involved
      http://www.climateaudit.org/?p=2344#comment-159821

  26. Hoi Polloi
    Posted Sep 3, 2009 at 3:54 PM | Permalink

    […] soot, too). [UPDATE, 4:45 pm: Steve McIntyre, seeing a familiar climate “hockey stick” curve, has weighed in with some complaints about data […]

    Interesting update from Revkin. I did send him a link to this page after posting the above NYT link.

  27. Posted Sep 3, 2009 at 4:26 PM | Permalink

    RE #34,
    The official SI now online at http://www.sciencemag.org/cgi/data/325/5945/1236/DC1/1 gives the same description of the incorrect serial correlation adjustment as cited in #34. However, I see now that shortly before they (wrongly) define r1 to be the “lag-1 correlation coefficient of the two time series being compared,” they instead define it (correctly) as “the lag-1 autocorrelation of the regression residuals.” So it’s not clear which one they used. In any event, r1 shouldn’t be squared before plugging into the Bartlett-Quenouille formula.

    The table of the decadally averaged data used is more legible now than in the draft, but still is not digital. Acrobat’s “select” tool just gives a camera image, rather than text values.

    Table S-2 just gives the values of the 23 data series listed in Table 1. Where is the resulting temperature reconstruction tabulated??

  28. Posted Sep 3, 2009 at 4:49 PM | Permalink

    RE #34 35,

    The official SI now online at http://www.sciencemag.org/cgi/data/325/5945/1236/DC1/1 still gives the incorrect description of the serial correlation adjustment quoted in #34 35 above.

    However, I see now that in addition to the incorrect definition of r1 as the “lag-1 correlation coefficient of the two time series being compared”, shortly above they also correctly define it as the “lag-1 autocorrelation of the regression residuals.” In any event, r1 should not be squared before plugging into the B-Q formula, as they still have it.

    Table S-2 just gives the decadal values of the 23 input series listed in Table S-1. Where is the output temperature reconstruction series tabulated??

    • Ross McKitrick
      Posted Sep 4, 2009 at 12:33 PM | Permalink

      Re: Hu McCulloch (#38), By squaring r they may reduce it quite a bit depending on the estimated value (which they don’t report in the SI). If the error is in the actual procedure, rather than just in the SI write-up, they will have overstated their significance levels.

  29. Feedback
    Posted Sep 3, 2009 at 5:02 PM | Permalink

    Isn’t there a “remarkable similarity” with the original Hockey Stick here (like colors, shape)?

    Personally I am most alarmed by the almost 2000-year downward trend, as I live in Norway… maybe global warming, sorry, climate change, has come to rescue us after all.

    Anyway, from global warming to northern hemispheric warming to arctic warming… what comes next?

  30. artemis
    Posted Sep 3, 2009 at 5:28 PM | Permalink

    In Your Back Yard Warming

  31. Steve McIntyre
    Posted Sep 3, 2009 at 6:07 PM | Permalink

    Series #1 (Blue Lake) had high values (thick varves) from 10 to 730 AD. The original authors deleted them from their reconstruction:

    Results indicate that climate in the Brooks Range from 10 to 730 AD (varve year) was warm with precipitation inferred to be higher than during the twentieth century. The varve-temperature relationship for this period was likely compromised and not used in our temperature reconstruction because the glacier was greatly reduced, or absent, exposing sub-glacial sediments to erosion from enhanced precipitation.

    Values for the period 10-730 AD were likewise excluded from the Kaufman recon.

    • Posted Sep 3, 2009 at 6:31 PM | Permalink

      Re: Steve McIntyre (#42),

      To me it’s more telling that they say the glacier may have been absent. We’re led to believe that glaciers didn’t go away until the last 50 years.

  32. Posted Sep 3, 2009 at 6:25 PM | Permalink

    You can move this to the silly questions bin if you want, Steve:
    I was wondering if you have ever looked at the coral reef based proxy data.
    I was watching a program that interviewed a scientist who used cores from coral reefs as proxy data for past temperatures. The scientist was from your favourite school, Penn State. His temperature data was premised on increased CO2 warming the oceans and on coral thriving in warmer water – hence the “thicker” calcification rings and greater growth showed a warming world.
    Yet every other study, including other Penn State studies, seems to say that the corals are shrinking as a result of AGW/CO2.
    Is this just another case of making data fit the predetermined outcome?

  33. MikeN
    Posted Sep 3, 2009 at 6:30 PM | Permalink

    Hu McCulloch, send your e-mails to Science ASAP.

    >This study used CPS which assumes that the orientation of the proxies is known a priori.

    Is Fig 1 a temperature reconstruction, or the proxy’s values? Is the proxy fed into Mann’s algorithm Fig 2 or Fig 2 upside down?
    Let me see if I understand what is happening. I think the upside-down proxy is OK, because the proxy correlates with temperature, and the reconstruction should show historically cooler temperatures than now in the case of this proxy. You’re saying the CPS algorithm assumes higher numbers mean warmer, so an inverted hockey stick will still be treated as a regular one, because the downslope is correlated with the temperature record?

  34. Posted Sep 3, 2009 at 6:54 PM | Permalink

    Re #11,

    Last year, in all the commotion, I missed the punch line in the Monkey Business video:

    After the Boss Chimp gestures, the Truth Teller mutters “yes, sir” before he joins in the dancing!

  35. Steve McIntyre
    Posted Sep 3, 2009 at 7:02 PM | Permalink

    Hmmm, Tiljander and the Finnish authors appear to argue that thicker varves (other than thickness from human disturbance) signify cooler temperatures, while the North American authors of the Alaska series argue the opposite.

    Kaufman et al do not discuss or reconcile this interesting inconsistency.

  36. Posted Sep 3, 2009 at 7:06 PM | Permalink

    Does anyone else feel that we’ve seen all of this happen before? It’s déjà vu all over again…

  37. Steve McIntyre
    Posted Sep 3, 2009 at 7:24 PM | Permalink

    This one’s a little bit different – as it uses a bunch of varve thicknesses as proxies. We haven’t looked much at these proxies yet.

    The tree ring proxies are familiar – it’s annoying to see the Yamal addiction rear its head once again.

    • Pat Frank
      Posted Sep 3, 2009 at 7:58 PM | Permalink

      Re: Steve McIntyre (#52), “it’s annoying to see the Yamal addiction rear its head once again”

      The only time I can remember any significant self-correction of an error in climate science was John Christy’s participation in the correction of his early satellite temperature record for the error caused by orbital decay. He described that effort as exhausting. Christy’s response to the problem was a model of scientific integrity that is honored primarily in the breach by AGW climate scientists at large.

      Apart from the example of John Christy, I can recall no instance of the acknowledgment or repair of any AGW-positive error by any involved scientist.

      Notable examples of ignored contradictory results include: the completely convincing criticisms of proxy-reconstruction methods by you and Ross, along with those of the North and Wegman committees; the criticism of surface station integrity by Roger Pielke Sr., which was completely blown off by Tom Karl; the recent destruction of the Steig, ea, Antarctic temperature trend by Jeff Id (Jeff, are you writing that up?), Jeff C, RomanM (I think) and you; and Ross’ and Pat Michaels’ very strong case for technological contamination of the surface temperature record. I’m sure there are more examples.

      It would be valuable if someone compiled a referenced list of ignored publications that documented or demonstrated errors in AGW climate science.

      • Pat Frank
        Posted Sep 3, 2009 at 11:49 PM | Permalink

        Re: Pat Frank (#55), Among the contradicting findings ignored by the AGW climate science community, I forgot to mention Demetris Koutsoyiannis’ virtually definitive demonstration of the total unreliability of GCM predictions of regional temperature/precipitation trends.

        Of course, we all know regional unreliability “doesn’t matter” because when one averages all the regional predictions together, the errors fully cancel and the global average trend is dead right on.

  38. Steve McIntyre
    Posted Sep 3, 2009 at 7:52 PM | Permalink

    A CA reader has sent me a PDF splitting the Kaufman spaghetti graph into layers (see the update at the end of the post). The ice core composite is remarkable: no HS whatever. The biggest HS comes from the tree ring series – the Briffa Yamal series will prove to be the largest contributor here. The sediment series are intermediate. Added to orientation problems, they also have dating problems – an issue discussed by Craig Loehle in one of his papers.

  39. Posted Sep 3, 2009 at 8:39 PM | Permalink

    This flawed study is also cited in USA Today:

    http://blogs.usatoday.com/sciencefair/2009/09/arctic-temperatures-hit-2000year-high.html

  40. Steve McIntyre
    Posted Sep 3, 2009 at 8:50 PM | Permalink

    A CA reader has sent in an OCR-digitization of Science’s obstructive supply of data in non-digital form. The digitization is at http://www.climateaudit.org/data/kaufman/SI.xls

    It is really appalling that Science would go along with such petty obstruction.

    • bender
      Posted Sep 3, 2009 at 9:36 PM | Permalink

      Re: Steve McIntyre (#57),
      Amazing OCR worked that well. Which proxy is column #22? The HS shape is robust to the removal of all but that series.

  41. Posted Sep 3, 2009 at 9:51 PM | Permalink

    For those who want a more recent Arctic series:

    http://junkscience.com/MSU_Temps/GISS64N-90N-an.html

    Use your thumb, and block out the last three years. Wham. Suddenly the huge difference between the thirties and today suggested in the graph above is gone! What gives? I smell Rahmsmoothing.

  42. Phillip Bratby
    Posted Sep 3, 2009 at 11:46 PM | Permalink

    It’s prominently cited in the BBC complete with hockey stick.

    We all must thank The Team for preventing descent into the next ice age.

  43. Posted Sep 4, 2009 at 12:09 AM | Permalink

    I can’t believe they used CPS data mashing again. It’s amazing it passed peer review when practically the whole team is on the paper. Who’s left to review it? I’m going to snip myself on the – [snip] there it was.

    What’s more amazing is that CPS probably didn’t matter much on this one. With a low proxy count, it’s all in how the fruit was picked from the proxy tree. The best proxies are the ice cores in my opinion and there’s no hockey signal even after careful, poorly explained and oddly fortuitous pre-selection.

    The last time I had a thread about upside down thermometers, it extended for 300 plus comments because people needed convincing that you can’t simply flip temperature for a result. If a thermometer is that confusing, what hope is there for making sense of flipping a flippin’ proxy.

    What is truly bizarre is that somehow anti-data has the same mass, charge and energy content as normal data.

  44. Posted Sep 4, 2009 at 2:48 AM | Permalink

    What is this wobble the BBC seems so keen to promote?

    Arctic wobbles

    The root cause of the slow cooling was the orbital “wobble” that slowly varies, over thousands of years, the month in which the Earth approaches closest to the Sun.

    This wobble slowly decreased the total amount of solar energy arriving in the Arctic region in summertime, and the temperature responded – until greenhouse warming took over.

    http://news.bbc.co.uk/1/hi/sci/tech/8236797.stm

    thanks

  45. Posted Sep 4, 2009 at 3:59 AM | Permalink

    Is there any explanation as to how the thickness of a varve can be related to temperature when clearly it’s more influenced by precipitation? Anyone?

    • Steve McIntyre
      Posted Sep 4, 2009 at 5:13 AM | Permalink

      Re: John A (#68),

      It seems that some paleos (Tiljander) associate thick varves with cool weather and some paleos (Kaufman) associate thick varves with warm weather. If you want to understand why they think what they do, some of the underlying references are online.

    • Jean S
      Posted Sep 4, 2009 at 6:29 AM | Permalink

      Re: John A (#68), Re: Steve McIntyre (#69),
      Correlations among varve series 1 (Blue Lake), 2 (Hallet Lake), 4 (Iceberg Lake), and 20 (Upside-Down Korttajärvi) are rather interesting.

      If someone wants to check correlations to instrumental data, I think the long, monthly Tornedalen (here) and Greenland (here) series might be useful.

      • Posted Sep 4, 2009 at 7:36 AM | Permalink

        Re: Jean S (#72),
        Well there’s that fundamental problem with proxies involving varves again. What exactly is being measured climatically that corresponds to the thickness of a varve? Looking at Steve’s link to Iceberg Lake, it ain’t temperature.

        BTW I did my proxy calculations using Mathcad, which goes to show how weird I am.

        Re: Steve McIntyre (#69),

        It seems that some paleos (Tiljander) associate thick varves with cool weather and some paleos (Kaufman) associate thick varves with warm weather. If you want to understand why they think what they do, some of the underlying references are online.

        snip

        The only thing I can conceive is that varve thickness will vary strongly with precipitation, which causes me to wonder how they can reject other proxies because they are precipitation proxies (and because they have the bumps and wiggles in the wrong place) yet invite varves.

        snip – (Steve: John A, please dial back the editorializing; the dissection will speak loudly enough.)

      • Steve McIntyre
        Posted Sep 4, 2009 at 9:14 AM | Permalink

        Re: Jean S (#72),

        Yep:

        cor(proxy[, c(1, 2, 4, 20)], use = use0)

                      X1            X2           X4           X20
        X1    1.00000000   0.370965703  -0.30828162   0.002314720
        X2    0.37096570   1.000000000   0.05537678  -0.003178299
        X4   -0.30828162   0.055376779   1.00000000  -0.144586852
        X20   0.00231472  -0.003178299  -0.14458685   1.000000000

        The mean correlation between the 4 series is -0.0045: kind of like a Mannian verification r2 statistic.

        • bender
          Posted Sep 4, 2009 at 9:38 AM | Permalink

          Re: Steve McIntyre (#85),
          Those are about as low as you can go. (Well, I guess there are negative numbers!) The question is whether they are lower than one would expect given a set of 23 “proxies”. This subset of 4×3/2=6 correlations represents a fraction of the full set of 23×22/2=253 correlations. 2.3%, in fact. Is it reasonable that 2.3% of the proxy correlations might average to zero? That would depend on the proxy strength, obviously. Let’s say the mean correlation between proxy and the thing it’s proxying is r=0.5. Let’s say the proxies are independent random samples subject to a certain amount of sampling error. Depending on the size of that sampling error, I’m betting you could get 2.3% of the proxy pairs exhibiting no correlation. Just a guess.
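          Checking that fraction in R:

          choose(4, 2) / choose(23, 2)   # 6/253 = 0.0237, i.e. about 2.3%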

      • Posted Sep 4, 2009 at 7:42 PM | Permalink

        Re: Jean S (#71),

        Proxies 1,2,4 and 20 provide a lovely HockeyStickiness that we all know and love (average of all four is the black line):

    • Andy
      Posted Sep 4, 2009 at 7:30 AM | Permalink

      Re: John A (#68),

      You can search Tiljander’s doctoral thesis for a more detailed description, but basically Tiljander and other Finnish researchers find that, here in the subarctic region, cold winters with heaps of snow generate stronger spring flooding and thicker varves.

      Another issue is why Kaufman selected the X-ray density as the proxy – why not “light sum”, “dark sum” or “total sum”? Or, the more Mannian approach: use them all.

      • Posted Sep 4, 2009 at 7:55 AM | Permalink

        Re: Andy (#76),

        You can search Tiljander’s doctoral thesis for a more detailed description, but basically Tiljander and other Finnish researchers find that, here in the subarctic region, cold winters with heaps of snow generate stronger spring flooding and thicker varves.

        Now this I can follow. Except that the production of snow is itself temperature dependent – the colder it is, the less snow is produced. The greatest amount of snow is generated when the temperature is cold enough to snow and yet warm enough to carry moisture in the first place.

        I could equally argue that the amount of snow is a reflection of the position of the jet stream or the size of the Arctic polar high or the warmth of the North Atlantic or any number of other variables.

        Somebody is guessing.

        • bender
          Posted Sep 4, 2009 at 10:34 AM | Permalink

          Re: John A (#77),

          I could equally argue that the amount of snow is a reflection of the position of the jet stream or the size of the Arctic polar high or the warmth of the North Atlantic or any number of other variables.

          Lamoureux & Bradley (1996):

          Lake C2 record is sensitive to large-scale synoptic changes.

          So there you have perfect agreement, John A.
          .
          Somewhere along the way someone is converting “synoptic conditions” to temperature, and ignoring the confounding role of precipitation implied by these synoptic conditions.

      • bender
        Posted Sep 4, 2009 at 1:17 PM | Permalink

        Re: Andy (#75),
        This is very interesting. Very complex.
        .
        To the extent that global forcings dominate the time-scale of analysis, you will have a cold vs. warm contrast affecting a process whose link to climate is still a matter of debate. If Tiljander is right that snow depth & spring flooding are the dominant force, then it is likely to be a stronger proxy of precipitation, which would fluctuate ambiguously in response to changes in global (Milankovitch) forcings, but may fluctuate unambiguously in response to changes in synoptic conditions. If Bradley is right that summer temperature is the stronger force governing sedimentation rate, then you have a proxy that responds strongly to changes in global forcings, but ambiguously in response to changes in synoptic conditions.
        .
        Does anyone think this sounds like a consensus? I don’t.
        .
        In the Kaufman paper they refer to GCM runs simulating changes in Milankovitch forcings over the last 2000 years. Fine. But how well do those models do at simulating the alternative hypothesis of arctic climate being driven mostly by changes in circumpolar synoptic conditions? If ice cores and varved sediments are largely precipitation proxies responding to changes in synoptic conditions, I wouldn’t want to dismiss that idea too prematurely. Over geological timescales, perhaps. But not over time periods as short as 2000 years, where changes in precip patterns may dominate.

  46. Posted Sep 4, 2009 at 6:22 AM | Permalink

    Can anyone identify proxies 1,2 and 4? These appear also to contribute rather strongly to the 20th Century uptick…not just Yamal.

    Here is a simple average of all the proxies by year. Quite close to Kaufman’s original, apart from the fact that the whole lot appears to be shifted by 0.5 to make it fit the temperature anomaly.

    Here is the comparison between the average of all the proxies and the average without Yamal. Apart from the final two data points, there’s very little difference:

    Here are proxies 1,2 and 4 traced out individually:

    and here is the average of 1, 2 and 4. They give an extra kick in the 20th Century just when it’s most needed.

    • bender
      Posted Sep 4, 2009 at 9:59 AM | Permalink

      Re: John A (#71),
      Yamal #22 & Tiljander #20 have a stronger impact than the ones you mention. Those two drive CWP higher than MWP. The rest is filler. You are over-emphasizing blade shape, de-emphasizing MWP vs. CWP contrast.

      • Posted Sep 4, 2009 at 7:46 PM | Permalink

        Re: bender (#89),

        Yamal #22 & Tiljander #20 have a stronger impact than the ones you mention. Those two drive CWP higher than MWP. The rest is filler. You are over-emphasizing blade shape, de-emphasizing MWP vs. CWP contrast.

        What? Who? Me? 😉

  47. Posted Sep 4, 2009 at 6:38 AM | Permalink

    Time for another short M&M comment like the PNAS one!

    And may I encourage people to post comments on the various news web sites where this comes up.

  48. Steve McIntyre
    Posted Sep 4, 2009 at 6:51 AM | Permalink

    Here is a comparison of the re-scaled underlying Blue Lake data and the truncated version used in Kaufman (red). (The truncation of high first millennium values was advocated in the underlying study because they didn’t make sense to the authors.)

  49. Steve McIntyre
    Posted Sep 4, 2009 at 7:04 AM | Permalink

    Note that we discussed proxy #4 (Iceberg Lake) a couple of years ago at CA here: http://www.climateaudit.org/?p=1473

    That thread had many interesting comments about the connections between varves and sedimentology. I urge readers interested in the Kaufman reconstruction to re-read it.

  50. Jim Turner
    Posted Sep 4, 2009 at 7:56 AM | Permalink

    Methodology aside, the results seem suspicious. Is the Arctic overall really warmer than at the time of the Viking farmers in Greenland (the temp drop from 950-1100 AD looks at most to be ~0.5C, while the apparent 20th C warming is ~1.5C)? AD 950 temps appear to have been restored by the early 20th century; could they farm there again now? Or are they asking us to believe that while the Arctic as a whole is MUCH warmer, sub-Arctic Greenland is not? Do individual proxies point to such large local discrepancies?

  51. stan
    Posted Sep 4, 2009 at 8:01 AM | Permalink

    snip – please do not indulge in this sort of speculation about motives

  52. Posted Sep 4, 2009 at 8:08 AM | Permalink

    On our site we did what some contributors have suggested, we selected tree ring proxies on the basis of a simple unbiased set of criteria.
    * End date toward the end of the 20th century to include the late century warming.
    * Start date before 1600 to include the “little ice age”.
    * Trees from both northern and southern hemispheres.
    You can see the results here.
    http://www.climatedata.info/Proxy/Proxy/treerings_introduction.html

    Steve:
    Please note that I myself did not raise this issue. If authors archive their data when they publish their articles, that’s all that I expect. There are reasons why one data set is structured differently than another. Personally I like to handle original data, data that is not necessarily massaged into one format. Note that Mann also purported to do this sort of collection in MBH98 and Mann 2008.

  53. Steve McIntyre
    Posted Sep 4, 2009 at 8:17 AM | Permalink

    Here is a plot of Loso Core A (log scale). Readers in the previous thread noted that geological factors affected sediment generation – an entirely distinct issue, and one that I urge readers to attend to closely.

    Here’s what Loso says:

    From 1958 to 1998, varve thickness has a positive and marginally significant correlation with May–June temperatures at the nearest coastal measurement stations. Varve sensitivity to temperature has changed over time, however, in response to lake level changes in 1957 and earlier. I compensate for this by log-transforming the varve thickness chronology, and also by using a 400-year-long tree-ring-based temperature proxy to reconstruct melt-season temperatures at Iceberg Lake. Regression against this longer proxy record is statistically weak, but spans the full range of occupied lake levels and varve sensitivities.

    A log-transformation (as shown in the above plot) does nothing to remove an inhomogeneity though it makes it less grotesque.
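    A toy R illustration of the point (purely synthetic numbers):

    set.seed(1)
    x <- c(rnorm(100, mean = 1, sd = 0.2),   # background regime
           rnorm(100, mean = 5, sd = 1))     # post-shift regime (the inhomogeneity)
    par(mfrow = c(1, 2))
    plot(x, type = "l", main = "raw")        # the step change is obvious
    plot(log(x), type = "l", main = "log")   # still there, just less grotesque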

    Squinting at the above graph, there’s a 19th century episode that looks somewhat like the present sedimentation episode in Core A, albeit at a lower intensity. One wonders whether the sedimentation in Core A isn’t going to revert back to a background pattern, along the lines of the earlier reversion.

  54. Steve McIntyre
    Posted Sep 4, 2009 at 9:06 AM | Permalink

    In the Loso thread, a geologist observed that compaction was likely a problem in thickness data sets. Note that Murray Lake ftp://ftp.ncdc.noaa.gov/pub/data/paleo/paleolimnology/northamerica/canada/ellesmere/lower-murray2008.txt has information on not just thickness, but also density and mass. In the past 1200 years, the average density has decreased to 50% of medieval values.

    There is a trend in the period for thickness, but not for mass. This looks like it might be a pretty important issue -check the Loso thread for previous discussion in connection with Iceberg Lake.

  55. Steve McIntyre
    Posted Sep 4, 2009 at 9:10 AM | Permalink

    The decadal data from the SI is now online: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/reconstructions/arctic/kaufman2009arctic.txt . These are the rescaled versions. As noted above, I’ve asked Kaufman for original data for series that are not public, but have not yet received an acknowledgement.

  56. Steve McIntyre
    Posted Sep 4, 2009 at 9:38 AM | Permalink

    As noted by a reader in the Loso thread, compaction is a problem with this sort of data. The Murray Lake data includes density information, which is plotted below. Density stabilizes at a mean of about 1.09, but less compacted recent sediments are less dense. This is NOT a negligible effect: a mass time series has a different medieval-modern relationship than a thickness time series. I am unable to think of a good reason for preferring thickness to mass.
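    A sketch of the thickness-versus-mass comparison in R (the column names here are guesses; check the header of the NCDC file linked above before running):

    murray <- read.table("lower-murray2008.txt", header = TRUE)  # from the ftp link above
    murray$mass <- murray$thickness * murray$density    # mass = thickness x density
    par(mfrow = c(2, 1))
    plot(murray$year, murray$thickness, type = "l", main = "thickness")
    plot(murray$year, murray$mass, type = "l", main = "mass")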

    • Craig Loehle
      Posted Sep 4, 2009 at 9:43 AM | Permalink

      Re: Steve McIntyre (#86), This is why river deltas sink – compaction. Hilariously, if you think thicker varves indicate warming, then this data will show continuous warming for the past 1000 years, all else being equal.

  57. Antonio San
    Posted Sep 4, 2009 at 9:43 AM | Permalink

    Steve, among the mainstream media, the Canadian Press, through Mr. Bob Weber, is commenting on the Kaufman et al 2009 paper, and this has been published by the Globe and Mail. Perhaps you may want to forward your blog post on the subject to Mr. Weber so he can update his article and all Canadians can enjoy real information? Thank you.

  58. bender
    Posted Sep 4, 2009 at 10:02 AM | Permalink

    Plot the mean curve in Excel and selectively remove the columns of your choice, using “Clear Contents” on selected columns. Then use Ctl-Z (“undo”) to put them back, and try a different set. A quick-and-dirty, but effective robustness test.
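    The same robustness test is only a few lines in R (again assuming the 23-column “proxy” matrix and a “decade” date vector):

    base <- rowMeans(proxy, na.rm = TRUE)
    plot(decade, base, type = "l", lwd = 2)                    # full composite
    for (j in seq_len(ncol(proxy))) {                          # leave-one-out versions
      lines(decade, rowMeans(proxy[, -j], na.rm = TRUE), col = "grey")
    }
    # any curve that separates from the pack flags the column doing the work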

  59. bender
    Posted Sep 4, 2009 at 10:07 AM | Permalink

    Look at the new SI. The optical quality of the data table S2 has improved substantially. Last night it was barely legible.

  60. bender
    Posted Sep 4, 2009 at 10:14 AM | Permalink

    Lamoureux & Bradley (proxy #6) wrote this fairly unremarkable paper on Ellesmere lake varves, S19 in the Kaufman SI. Couple this unremarkable proxy #6 with remarkable Tiljander #20 and Yamal #22 and you get a Science paper, rather than a J. Limnol. paper. Any wonder now why people continue to do it? Thus ‘The Team’ ever grows. Wegman’s social network, again.

  61. AJ
    Posted Sep 4, 2009 at 10:34 AM | Permalink

    Hey Steve,

    I’m writing a letter to the editor about the Kaufman article that appeared today on the front page of our local paper.

    Can I refer to you as a Climate Science Scientist? That is, you study climate science, but you don’t necessarily study climate.

    Thanks, AJ

  62. bender
    Posted Sep 4, 2009 at 11:01 AM | Permalink

    Lamoureux & Bradley (1996) refer to:

    A general correspondence between the varve record and other North American proxies for the Little Ice Age period (1400-1900AD)…

    Note, they do not say “strong correlation” and they do not mention time periods outside that bracket, such as MWP or CWP.
    .
    Nowhere do they suggest that the “correspondence” is strong enough that it can be used as a reliable temperature proxy for MWP vs. CWP comparisons. They seem to be quite clear in caveating the use of varved lake sediments as a temperature proxy. The paper spends quite a bit of effort clarifying all the idiosyncrasies and complexities of the sedimentation process.

  63. Posted Sep 4, 2009 at 11:01 AM | Permalink

    RE SM #19, #58, Bender #91,
    The NCDC version of the data is now online via http://www.ncdc.noaa.gov/paleo/pubs/kaufman2009/.

    It looks like this is just the same as the tables in the SI, but now digitized, and now with the average of the 23 series in column 24. Presumably their reconstruction could be “reconstructed” from this average by inverting their regression equation, P = 2.079T + 0.826 (n = 14).

    The calibration temperature series T could perhaps be constructed from the sources given in the paper, but it would have been handy if they had given it as used.
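
    A minimal sketch of that inversion (reading the NCDC file is omitted; the slope and intercept are as reported in the paper):

    # With P = 2.079*T + 0.826 from the paper, the implied reconstruction is
    # T = (P - 0.826) / 2.079, applied to the 23-series average in column 24.
    def temp_from_proxy_avg(P, slope=2.079, intercept=0.826):
        return (P - intercept) / slope

    print(temp_from_proxy_avg(0.826))  # an average of 0.826 implies T = 0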

    Science’s data policy online at http://www.sciencemag.org/help/authors/policies.dtl provides,

    • Any reasonable request for materials, methods, or data necessary to verify the conclusions of the experiments reported must be honored.

    • Before publication, large data sets, including protein or DNA sequences, microarray data, and crystallographic coordinates, must be deposited in an approved database and an accession number provided for inclusion in the published paper, under our database deposition policy.

    Perhaps this policy could be invoked to obtain any unarchived annually resolved numbers that went into the now-archived decadal averages.

  64. Antonio San
    Posted Sep 4, 2009 at 11:04 AM | Permalink

    Steve FYI and snip at will:

    here is the link from the Globe and Mail piece by Mr. Weber: http://www.theglobeandmail.com/news/technology/science/arctic-warmest-its-been-in-2000-years-study-finds/article1275018/

    and here is my complaint to the Canadian Press:

    “By Bob Weber

    The Canadian Press
    Last updated on Thursday, Sep. 03, 2009 05:11PM EDT
    A groundbreaking study that traces Arctic temperatures further back than ever before has shown the region is now warmer than at any time in the past 2,000 years…

    is truly an incomplete description of the state of scientific knowledge in this field. In particular, it completely fails to check the co-authors’ past history of flawed studies or the validity of the proxies, and takes the PR from Science and the lead author at face value, despite the existence of a significant amount of peer-reviewed literature demonstrating the flaws in the previous studies by IPCC co-authors, rehashed in the Kaufman et al. 2009 paper.
    A scientific case is built at
    http://www.climateaudit.org/?p=6932
    where Mr. Weber could find all the information he needs to amend his article and transform a piece of propaganda into a piece of information.”

  65. Posted Sep 4, 2009 at 11:12 AM | Permalink

    RE AJ #93,

    I’m writing a letter to the editor about the Kaufman article that appeared today on the front page of our local paper.

    Can I refer to you as a Climate Science Scientist? That is, you study climate science, but you don’t necessarily study climate.

    It’s good to mention that Steve (as well as Ross) is officially qualified as an IPCC4 WG1 Expert Reviewer. As such, he is an internationally recognized Climate Expert, one of the “2500” such experts who we are told unanimously endorse IPCC4’s findings. (In fact, there were only about 700 lead and contributing authors of WGI, and 650 reviewers, with a lot of overlap, making only about 1000 such experts on the scientific part of the report.)

    • Calvin Ball
      Posted Sep 4, 2009 at 12:57 PM | Permalink

      Re: Hu McCulloch (#98), you mean the 2500 is based on shoddy statistics? How can that be?

  66. bender
    Posted Sep 4, 2009 at 11:38 AM | Permalink

    In the arctic, in the 20th c., I would expect air to be predominantly cold & dry (polar desert) or warm & wet (snowy & thick varves), depending on where it comes from (polar vs. tropical origin). As far as weather goes, anyways. On paleoclimatic time-scales, I’m not sure that association would hold as strongly. If the association were as strong, then the relationship between synoptic conditions and temp and precip should be unambiguous; thick varves should only mean one thing. But if the association is not strong then varve thickness may be an ambiguous indicator. Maybe this is why different groups working in different parts of the world have different interpretations of the same data set & type? In some parts of the world the synoptic flux may be more along the lines of cold & wet (snowy & glacial growth) vs. warm & dry (melting and glacial recession) – in which case varve thickness might have the opposite interpretation vis a vis temperature.
    .
    In the circumpolar region one of the dominant forces would be the shape (and hence strength) of the circumpolar vortex. When it is tight and strong, there should be a sharper delineation between polar and circumpolar climate, with cold/dry at the poles and warm/wet at the circumpolar vortex boundary southward. When it is weak and wide and meandering there should be a lot more mixing of the air types, i.e. an ambiguous climate. In which case it would matter a great deal exactly where the proxy is located – at the pole, at the mean position of the circumpolar vortex, or southern limit of the vortex.
    .
    Are not all of the proxies located along the periphery of the vortex? Is any located at the pole itself?
    .
    Circumpolar vortex (hinted at by John A, where he mentions polar jet stream) is obviously not the only climatic driver. But it is a powerful one among others.
    .
    Musing aloud. Real-time. Always dangerous.

    • bender
      Posted Sep 4, 2009 at 11:51 AM | Permalink

      Re: bender (#99),
      Whoops, heh, heh. Temporarily forgot we were talking lake sediments, not ice.

      • bender
        Posted Sep 4, 2009 at 1:37 PM | Permalink

        Re: bender (#101),
        Forced to re-correct my own self-correction. I had been thinking only in terms of snow depth simply because that is what Andy said Tiljander thought was driving the varving. Reading Bradley made me reconsider the importance of summer temperature. Turns out I was thinking about lake sediments … but only from Tiljander’s perspective.

  67. Posted Sep 4, 2009 at 11:43 AM | Permalink

    Marc Morano says

    Climate data analyst Steve McIntyre, who publishes Climate Audit and is known for his research discrediting Mann’s original “Hockey Stick” temperature graph, weighed in on the new Arctic study. “Amusingly, the [Arctic study’s lead author] Kaufman Team perpetuates Mann’s upside down use of the Tiljander proxy,” McIntyre wrote on September 3, 2009. “You can readily see that this closely matches the Mann version,” McIntyre noted. “The most cursory examination [of the study] shows the usual problem of seemingly biased picking of proxies without any attempt to reconcile proxy conflicts,” McIntyre wrote.

    in his piece “Not Again! Media Promoting Arctic ‘Hockey Stick'” that appeared yesterday.

  68. Posted Sep 4, 2009 at 11:53 AM | Permalink

    RE Steve #85,
    How do lake sediment densities drop below unity? Wouldn’t the lake just invert as all the submerged pith rose to the surface?

  69. deadwood
    Posted Sep 4, 2009 at 11:56 AM | Permalink

    I’m not sure if I got this right. Did Kaufman and the team ignore density as a factor in varve thickness?

    • bender
      Posted Sep 4, 2009 at 12:31 PM | Permalink

      Re: deadwood (#103),
      What they did was ignore the varve thickness record from Tiljander and use the varve density record instead. But how is varve density supposed to relate to temperature? Positively, I presume? I guess I’ll have to look that one up now.

  70. bender
    Posted Sep 4, 2009 at 12:08 PM | Permalink

    I read Lamoureux & Bradley (1996), expecting that it would describe how varve thickness is calibrated to temperature. Instead, this is what I find:

    Process studies at Taconite Inlet have revealed the complex inter-relationships between climate and sediment flux (Hardy, this volume, Hardy et al., 1996). The factors controlling sedimentation and the thickness of a varve are not simple, involving the interaction of both climatic and geomorphic controls, as well as within-lake processes, which may vary in importance over time.

    .
    Eh? It doesn’t even postulate a mechanism, let alone calibrate a relationship!
    .

    So what has changed in 13 years to lead to a sudden unambiguous interpretability of varve thickness, and where is that work cited in Kaufman et al. (2009)?

  71. bender
    Posted Sep 4, 2009 at 12:15 PM | Permalink

    Lamoureux & Bradley (1996):

    … as discharge rates and suspended sediment flux are strongly related to summer temperature (Hardy et al. 1996) it is reasonable to interpret the long-term varve thickness record as a proxy of summer temperature until other factors can be demonstrated as having been of more importance

    warmer temperatures <=> thickly varved sediments

  72. Posted Sep 4, 2009 at 12:25 PM | Permalink

    [duplicate post due to software slowdown]

  73. bender
    Posted Sep 4, 2009 at 12:38 PM | Permalink

    Lamoureux & Bradley (1996) Fig 8A shows proxy #6 varves were thicker (temps supposedly warmer) 2000 years ago than now. What happened to reverse that trend? The series were not flipped. There is a rise in the 20th c., but it’s weak. Much stronger in the Kaufman version.

  74. bender
    Posted Sep 4, 2009 at 12:44 PM | Permalink

    Why does the Kaufman SI state in Table S1 that the S19 proxy #6 extends to 2000 when it actually only goes to 1991? Their own spreadsheet shows no data value for the decade centred on 1995, consistent with paper S19.

  75. tty
    Posted Sep 4, 2009 at 1:15 PM | Permalink

    Using varve thickness as a temperature proxy is indeed vastly complex.

    The area in Finland that Tiljander studied has a climate where a slight warming means that snow will not persist through winter, but will melt periodically and drain slowly with little sediment load. A slight cooling means that snow will accumulate throughout the winter and drain as a strong spring flood with a heavy sediment load. This was confirmed by SEM study that showed that the organic sediment (=summer) was mostly above the mineral component (=spring) in each annual varve.

    I’m not familiar with the sites in Alaska, but if there are glaciers anywhere in the catchment area, then warm summers will indeed result in thicker varves.
    I’m not so sure about the effect of winter temperatures. I should think that in Alaska snow always persists through winter, in which case mild winters might well mean more moisture from the north Pacific, more snow and a stronger summer runoff.

    However, there is a host of other factors that can affect varve thickness: forest fires, logging, grazing and trampling by animals, and extreme rainstorms will all increase erosion and hence varve thickness.

  76. bender
    Posted Sep 4, 2009 at 1:28 PM | Permalink

    Now I see why Mann said the accusation of upside-down use of Tiljander’s data was “bizarre”. He made some irrelevant argument about statistical analysis being insensitive to the sign of the input (which is true, but irrelevant), and left it at that. What he did NOT realize is that Tiljander had a different interpretation from Bradley of how varve thickness and/or density related to climate. No wonder he thought Steve M’s accusation was “bizarre”. What is “bizarre” is how such a festering lack of consensus over a very fundamental proxy relationship could be covered up as though these differences in interpretation were trivial.

  77. bender
    Posted Sep 4, 2009 at 2:04 PM | Permalink

    I just read the Tiljander pdf posted above. It is a must-read. Let me get this straight: these guys Kaufman et al are taking Tiljander’s own data and using an interpretation diametrically opposed to her own to take down her argument that the MWP was relatively warm and snow-free in Finland? What scientific study gives them the authority to overturn her interpretation of this proxy? This is bizarre. She must be beside herself. Kaufman et al simply have to do the right thing and remove the controversial Tiljander series and try to defend what’s left.

    • Steve McIntyre
      Posted Sep 4, 2009 at 2:28 PM | Permalink

      Re: bender (#116),

      Bender, I don’t think that you were hanging out here when Mann et al 2008 was discussed. The upside-down interpretation was discussed in connection with Mann et al 2008 – see the Saturday Night Live post. I corresponded with her at the time to confirm the correctness of my understanding.

      Specialists are highly reluctant to speak out against Mann – I challenged some of the dendros (the Silence of the Lambs) in connection with Mann et al 2008’s using drought-sensitive ring widths upside down. There was no way that they would dream of submitting a comment to PNAS or elsewhere. Same with Tiljander.

      • bender
        Posted Sep 4, 2009 at 2:44 PM | Permalink

        Re: Steve McIntyre (#121),
        I was lurking then, just not commenting. I reviewed that thread before posting here. Tiljander will not call these guys on their, errr, “reworking” of her own data? Is she nuts? This is her big chance to strut her stuff! She doesn’t need to comment at CA! She can send a letter to Science pointing out their misuse of her data.
        .
        Mia: a letter to Science is a publication in Science. Send it today! Don’t be a silent lamb! You are a tiger!

  78. bender
    Posted Sep 4, 2009 at 2:07 PM | Permalink

    Has the Tiljander 2006 paper been discussed yet?

  79. bender
    Posted Sep 4, 2009 at 2:18 PM | Permalink

    the Kaufman Team perpetuates Mann’s upside down use of the Tiljander proxy

    That’s not what’s happening here. Kaufman et al are using an interpretation that is consistent with that of Mann et al and, bizarrely, opposed to that of the originator of the data: Tiljander. The issue is not “perpetuating” the “use” of a particular graphical orientation. That makes it sound like a methodological slip-up; an error carried over from one paper to another. We have a more fundamental disagreement here about how this series should be interpreted vis a vis climate. Highlight the fundamental lack of agreement. That is the issue here.

  80. bender
    Posted Sep 4, 2009 at 2:20 PM | Permalink

    Why is no one else commenting?

    • Calvin Ball
      Posted Sep 4, 2009 at 2:26 PM | Permalink

      Re: bender (#119),

      Why is no one else commenting?

      Overload? This seems like the same hijinks, different day. After a while, it just ceases to be remarkable.

      • bender
        Posted Sep 4, 2009 at 2:48 PM | Permalink

        Re: Calvin Ball (#120),
        That’s what I’m telling you. This is not mere hijinks. This is a fundamental disagreement as to what a specific proxy actually represents. It is a lynchpin in the house of cards. Pull at it. What does 4AR say about varved lake sediments? Who were the editors on that section? What was the chapter review like? Etc.

  81. bender
    Posted Sep 4, 2009 at 2:50 PM | Permalink

    Overload? After a while, it just ceases to be remarkable.

    Take a break, then, and come back to it in a few months. The remarkability of it all will return.

    • Skiphil
      Posted Dec 4, 2012 at 3:21 PM | Permalink

      Ha, I just used the word “remarkable” in my previous comment on another thread….. That’s me trying not to shout “I simply cannot believe what goes on in this field sometimes!”

      It is indeed worth following Bender’s advice….

  82. Carl Gullans
    Posted Sep 4, 2009 at 2:57 PM | Permalink

    It remains remarkable to me… that this ridiculousness is continuing six years after Congress looked into it. That so many simple and easy-to-understand violations of basic statistical laws can remain after that many years is astounding.

    • bender
      Posted Sep 4, 2009 at 3:21 PM | Permalink

      Re: Carl Gullans (#125),
      What NAS says about methodology is irrelevant; no one is listening. What congress says and does is irrelevant; no one is listening. Wegman’s social network will continue to grow as long as the incentives for growth are strongly positive. The only way to impact the social network is to impinge upon the strong forces favouring network growth. Show them their errors. Let them be discussed, openly and behind closed doors. Truth will reign. It will.

  83. bender
    Posted Sep 4, 2009 at 3:09 PM | Permalink

    Remove Yamal #22 and invert Tiljander #20 and you get a sloped HS where temperatures were warmest in the 400s and reached the modern peak in the 1960s. MWP is no different from CWP when sampling error is considered. Let Kaufman et al. defend THAT graphic.

  84. steven mosher
    Posted Sep 4, 2009 at 3:22 PM | Permalink

    Welcome back bender.

  85. bender
    Posted Sep 4, 2009 at 3:28 PM | Permalink

    lucia, talk to Mia will you? Tell her she’s a tiger. Tell her mosh thinks so too.

    • steven mosher
      Posted Sep 5, 2009 at 3:30 AM | Permalink

      Re: bender (#132),

      Can you imagine the fury that would be released if somebody inverted Lucia’s data?

  86. David Brewer
    Posted Sep 4, 2009 at 3:30 PM | Permalink

    Steve – Without prejudice to a possible future letter to Science for publication, suggest you now copy your letter to Kaufman (comment 19) to Science for information. We don’t want another “independent” discovery and (mis)correction of some of these errors, do we?

  87. Posted Sep 4, 2009 at 3:31 PM | Permalink

    Steve I’m always impressed with how well you support your concerns about how proxies are chosen and also the questionable statistical approaches in some of the studies. However even if we assume many of these studies suffer from questionable proxy selections and bad math, isn’t it true that there are hundreds more that are not compromised in that way? How much are you focusing narrowly on a handful of studies vs the much larger body of climate research? I may well be mistaken but I have been under the impression that the folks you call “The Team” are only a handful of the thousands working on these issues.

    • bender
      Posted Sep 4, 2009 at 3:36 PM | Permalink

      Re: Joe Hunkins (#131),

      even if we assume many of these studies suffer from questionable proxy selections and bad math, isn’t it true that there are hundreds more that are not compromised in that way?

      Uh, list them, please. If there were “hundreds more” don’t you think they would be used? Did you ever look at Wegman’s report showing how there are so few proxies that just get recycled over and over in non-independent (i.e. partially overlapping) studies? Please … do a little research, people.

    • bender
      Posted Sep 4, 2009 at 3:42 PM | Permalink

      Re: Joe Hunkins (#131),

      I may well be mistaken but I have been under the impression that the folks you call “The Team” are only a handful of the thousands working on these issues.

      There are perhaps hundreds, not thousands, developing climatic proxies. There are only a handful who publish global-scale “multiproxy” studies. These are essentially super-analyses where the authors do not necessarily understand or care about all the nuances studied by the hundreds that feed them their data. “The Team” is a self-styled term covering the bulk of this handful. Steve M did not make it up.

    • bender
      Posted Sep 4, 2009 at 3:47 PM | Permalink

      Re: Joe Hunkins (#131),
      Follow my instructions in #126 and publish the graph. See for yourself that the choice of proxies in a multiproxy study matters quite a bit.
      .
      Ask yourself this: on what page do you find a leave-some-out robustness test in Kaufman et al. 2009?

      • Frazzled
        Posted Sep 4, 2009 at 3:53 PM | Permalink

        Re: bender (#135),

        Science in action! Love it! Love YOU!! PS Keep it up!

    • Scott Brim
      Posted Sep 4, 2009 at 4:05 PM | Permalink

      Re: Joe Hunkins (#131),

      How much are you focusing narrowly on a handful of studies vs the much larger body of climate research?

      If a Medieval Warm Period occurred worldwide which raised temperatures to roughly the same extent, and at roughly a similar rate, as the warming trend experienced over the last 150-200 years, then Mother Nature is fully capable of doing the job herself without direct assistance from humans in the form of man-made GHGs.
      .
      There thus exists an absolutely critical need on the part of AGW theorists to discount the existence of an MWP. All of the AGW theorists, not just The Team by itself, must buy into what The Team has done with the Hockey Stick, or they cannot credibly defend much of their own research in identifying CO2 as the primary driver of recent global warming.
      .
      The original 1998 Hockey Stick is an analysis product manufactured to specification to fit this critical requirement, and this latest version is simply the latest model in a product line of promotional material that is of fundamental importance in maintaining continued public acceptance of AGW theory.

      • Posted Sep 4, 2009 at 4:55 PM | Permalink

        Re: Scott Brim (#140), This is a good point and I wish more research would address the MWP “controversy”, which seems to have fallen victim to a lack of journalistic interest or understanding of key issues that smart skeptics often want to see addressed by AGW enthusiasts.

    • Gerald Machnee
      Posted Sep 4, 2009 at 4:06 PM | Permalink

      Re: Joe Hunkins (#131),

      However even if we assume many of these studies suffer from questionable proxy selections and bad math, isn’t it true that there are hundreds more that are not compromised in that way?

      The ones that are not compromised do not reach the same conclusions as the “Team”.

  88. Frazzled
    Posted Sep 4, 2009 at 3:34 PM | Permalink

    Hey, lots of insightful comments! Lots of people who have obviously read the paper that they are commenting on! Keep it up!

  89. bender
    Posted Sep 4, 2009 at 3:56 PM | Permalink

    To elevate your work from the mundane world of J. Paleolimnol. to Science you basically need to report on something “unprecedented”. Yamal plus upside-down Tiljander are just enough to elevate CWP temps over MWP temps. Hard to believe, but that is the difference between obscurity and fame in this game.

  90. Posted Sep 4, 2009 at 4:01 PM | Permalink

    Yes, Bender, I’ve read Wegman’s very interesting paper and it leads a clear-thinking person to question how *some* of the research has been compromised by social forces and bad stats. However it also very conspicuously does not suggest MBH98 was “wrong”. Too often here at CA folks appear to think that minor methodological flaws are major ones. This is rarely the case.

    … on what page do you find a leave-some-out robustness test

    This robustness question (rather than “Tiljander data is upside down!”) is a great approach to addressing whether Kaufman’s conclusions are reasonable. However you can’t throw out your own cherry-pick combo to suggest his is cherry-picked. Above, somebody reasonably suggested a great study where proxy selections would be determined at random and a large number of analyses would be done.

    • Frazzled
      Posted Sep 4, 2009 at 4:03 PM | Permalink

      Re: Joe Hunkins (#138),

      Love YOU TOO!!!

    • bender
      Posted Sep 4, 2009 at 4:16 PM | Permalink

      Re: Joe Hunkins (#138),

      you can’t throw out your own cherry-pick combo

      Yamal and Tiljander were not combed out post hoc by me. They were identified a priori by Steve as problematic for some very specific and legitimate reasons. I point to them as a counterpoint to John A – showing what is contributing most to a pattern that is completely contrary to that described by Tiljander: a relatively warm MWP/MCA. I clearly advocated a systematic leave-some-out robustness procedure. To suggest that I advocate reverse cherry picking is disingenuous. You will learn not to do that.

    • bender
      Posted Sep 4, 2009 at 4:20 PM | Permalink

      Re: Joe Hunkins (#138),

      *some* of the research has been compromised by social forces and bad stats

      No. All of the multiproxy studies are compromised because they all include one or two toxic, if you will, ingredients. If you had actually read Wegman’s report (it’s not a paper) you would know that. You are clueless.

    • bender
      Posted Sep 4, 2009 at 4:24 PM | Permalink

      Re: Joe Hunkins (#138),
      Suppose you could decide on a robustness test. If you found that a multiproxy reconstruction was highly dependent on a small fraction of inputs, then what would you do to try to present a less biased reconstruction? What would be most reasonable? You decide that a priori and then we’ll follow your recipe. Deal?

    • bender
      Posted Sep 4, 2009 at 4:53 PM | Permalink

      Re: Joe Hunkins (#138),

      I’ve read Wegman’s very interesting paper and it leads a clear-thinking person to question how *some* of the research has been compromised by social forces and bad stats. However it also very conspicuously does not suggest MBH98 was “wrong”.

      It was NAS who said that (1) climate in the past 400 years was warming but uncertainties were too high to say much about climate 1000 years in the past, and (2) one should not use strip bark bristlecone pines to do climate reconstruction. Meaning that MBH98 was W R O N G. So yes, Wegman didn’t say so. NAS did. You gamer, you. Go get some more game, will you?
      .
      And please stay on-topic. The topic here is Kaufman.

  91. bender
    Posted Sep 4, 2009 at 4:28 PM | Permalink

    All: Notice Joe Hunkins did not answer my question:

    On what page do you find a leave-some-out robustness test in Kaufman et al. 2009?

    Is that because there is no such test? Maybe someone can tell me.
    .
    Joe, ask yourself this question: Why is there no such test in this paper? What is the most reasonable answer? Is it because bender is smart and they are all so dumb? Or is it because they knew better than to tinker with data subsets? They knew what would happen.

  92. Posted Sep 4, 2009 at 4:52 PM | Permalink

    Bender how many cups of coffee did you have today dude? 🙂

    I’d be interested in how you’d answer your own question about testing for robustness wrt proxy selection. I still like the idea suggested above that would work to randomize proxy selection but I’m not at all familiar enough with all the issues here to make a case one way or another.

    • bender
      Posted Sep 4, 2009 at 4:55 PM | Permalink

      Re: Joe Hunkins (#146),
      The problem with sub-setting, Joe, whether systematic or random, is that you don’t have a whole lot of choice. You seem to still be under the misguided view that there are hundreds of series to choose from. It ain’t so, Joe. How many did Wegman list?

  93. Posted Sep 4, 2009 at 5:06 PM | Permalink

    I say again, look at John Daly’s plethora of arctic temperature records, many of which extend back to before 1900, to give the lie, in the simplest and most direct way, to the red-line recent “increases” of this latest HS.

    Thanks Bender for your superb and agile fencing.

    Steve: I haven’t parsed Daly’s information nor do I have time to do so. I urge readers to remain especially cautious of information that they “like” – as the Team should also be, of course.

    • Posted Sep 8, 2009 at 1:18 PM | Permalink

      Re: Lucy Skywalker (#153), Having taken note of Steve’s warning not to “believe” the stuff one likes too readily, I’ve now compiled 22 of Daly’s Arctic graphs (40 records, many start before 1900) on one sheet with direct links to each graph full size. Jeff Id has put it on his blog today. Visually I think the evidence is stunning but I (we) are putting it out for audit, so that corrections can be made if needed. Then it can stand as strong evidence to help assess the worth or otherwise of Kaufman. I’m not a statistician but I’ve done what I can to make the evidence transparent.

  94. John A
    Posted Sep 4, 2009 at 5:26 PM | Permalink

    I could be guessing here, but nothing raises Bender’s blood pressure like the sight of a badly done scientific paper which has disproportionate impact.

    Why is no one else commenting?

    Because they can’t get a word in edgeways? 😉

    • bender
      Posted Sep 4, 2009 at 6:01 PM | Permalink

      Re: John A (#151),
      I’m just jealous. How can I spit in the face of my peers – people like Tiljander – and get away with it? How can I publish my papers without doing robustness tests? I’ll do anything to be that big and powerful and successful …

  95. curious
    Posted Sep 4, 2009 at 5:47 PM | Permalink

    Re: Joe Hunkins (#131), Joe – I’m one of those following along here.

    I would be very grateful if you could list your top 5 climate reconstruction papers dealing with (approx) the last 1000yrs, giving a one line description of it and the reason for your choice. If you think there are more or less than 5 that are especially noteworthy and reliable then please feel free to quote accordingly.

    Also, I read the Wegman report a while ago and following your comment:

    However it also very conspicuously does not suggest MBH98 was “wrong”.

    I had to refresh my memory. I think we have read different reports because, to my mind, they very conspicuously trashed MBH98 on several grounds – incorrectly centred data; a likely inadequate data model; no serious investigation of the underlying process structures; lack of sophistication wrt the “present” instrumental record; a query whether Mann and associates had realised their error at the time of publication; and a lack of full documentation, data and code preventing replication of the results the paper claimed. The Executive Summary ends with the conclusion:

    Overall, our committee believes that Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.

    Please can you check we are referring to the same report and then read pages 2-6 here (if the link doesn’t work it is the bottom item in the “links” list at the top left of this page) and tell me what, if any, support the Committee gave to MBH98? Alternatively, if there is a supportable rejection of the Committee’s findings, please can you supply a reference?

    Steve: In that respect, as I’ve noted on many occasions, it is important that North and Bloomfield, when placed under oath, said that they did not disagree with any of Wegman’s findings, which they described as “more detailed” than their own.

  96. Steve McIntyre
    Posted Sep 4, 2009 at 6:20 PM | Permalink

    A quote from Mia Tiljander (pers comm):

    Varve parameters (thickness, light sum, dark sum or xray thickness) are not temperature proxies. The fact is that in warmer times the light layer is very thin or totally missing in the yearly cycle.

    [my bold]

    • bender
      Posted Sep 4, 2009 at 6:33 PM | Permalink

      Re: Steve McIntyre (#157),

      Varve parameters (thickness, light sum, dark sum or xray thickness) are not temperature proxies.

      Let Dr. Lamoureux argue otherwise, right here, right now. Let’s go, buddy. Me and you.

  97. Steve McIntyre
    Posted Sep 4, 2009 at 6:39 PM | Permalink

    Here’s a quick robustness analysis. First, here is the average of 19 of 23 proxies (excluding 1 – Blue Lake varve thickness, 4 – Iceberg Lake varve thickness chronology, 9 – Big Round Lake varves, and 22 – the Yamal addiction). This includes the three Finnish series in “native” orientation (as opposed to Mannian orientation):

    Here is a plot of the 4 excluded from the first plot. Clearly any modern “edge” over the MWP comes from these four series (especially Yamal).
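
    A sketch of that split in Python (the file name is a placeholder for the transcribed SI table, and the other two Finnish series’ numbers need to be filled in):

    import pandas as pd

    # One proxy per column ("1".."23"), one decade per row.
    proxies = pd.read_csv("kaufman_decadal.csv", index_col="decade")

    excluded = ["1", "4", "9", "22"]  # Blue Lake, Iceberg Lake, Big Round Lake, Yamal
    finnish = ["20"]                  # add the other two Finnish series here

    kept = proxies.drop(columns=excluded)
    kept.loc[:, finnish] *= -1        # flip CPS-scaled anomalies to native orientation

    print(kept.mean(axis=1).tail())               # the 19-proxy average
    print(proxies[excluded].mean(axis=1).tail())  # the 4 excluded series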


  98. Paul Penrose
    Posted Sep 4, 2009 at 6:47 PM | Permalink

    Joe,
    When doing a robustness test it does not matter how you select which data to exclude; if the exclusion changes the results in a significant way then they are not robust. The data selection had already been done by the original authors, so there is no possibility of bias; any of it is valid for selection in a robustness test.

  99. Ian
    Posted Sep 4, 2009 at 6:51 PM | Permalink

    Stumbled across this debate as a follow-up after reading about the original problems with the statistical analysis of the 1998 work – a stunning example of bad science if ever I saw one! One of the key elements of science is the ability and necessity to (a) replicate results/analyses and hence (b) question conclusions that have been made. The fact that Mann, and a respected, high-ranking journal such as Nature, are seemingly unwilling to enter into scientific discussion about this work, or make any sort of admission that the conclusions could be wrong, should be deeply worrying for the science community. How else can science advance if the fundamental aspects of scientific research are being ignored by ‘prominent’ figures like Mann?

    More specifically, on this ridiculous example of what seems to be “making the data fit the wanted conclusions”: surely the key aspects that determine whether the data should, or more importantly can, be used as supporting evidence are relevance and validity across the range chosen. In this case, irrespective of whether the data was used in an upside-down format or not, a little research (in my case a simple, fast Google search) should have told Mann et al. that they could not use the entire data set. It was certainly immediately clear to me, and it shouldn’t matter what significance it has to the conclusion, when the author clearly states in the discussion of her thesis:

    Since the early 18th century, the sedimentation has clearly been affected by increased human impact and therefore [is] not useful for paleoclimate research. (Link to thesis)

    To then go and include said post-1700 data as a proxy is ludicrous. If research scientists had licences, I’d have removed Mann’s solely for this reason. It’s more than an oversight, it is playing the system.

    There is already a major problem with how scientific stories are presented in the mainstream press, as highlighted by the excellent Bad Science Blog of Ben Goldacre. It is certainly not going to be helped by a similar problem with the underlying works and fundamentally flawed research. It seems that peer review can only weed out certain aspects of poor practice. However an alternative, such as insistence that every piece of original research is independently verified before publication, would be over-stifling and ridiculously time-consuming. Unfortunately, it seems that this may be a problem that we cannot avoid, especially if scientists are routinely not ‘understanding’ the basics of maths/physics/statistics/correlation/causation, and then not going to listen to reasoned arguments when they have been ‘found out’…

    • j ferguson
      Posted Sep 4, 2009 at 7:43 PM | Permalink

      Re: Ian (#161),

      The perplexing thing about papers like this is that their credibility fails at the high school level, not through internal inconsistencies, but through fundamental misrepresentation of the data.

      There may be material here for a study or two on the pathology of emulated scientific studies.

  100. j ferguson
    Posted Sep 4, 2009 at 7:45 PM | Permalink

    ” ….. pathology of emulated scientific studies”

    Make that the “Imitation of Science”.

  101. Posted Sep 4, 2009 at 8:05 PM | Permalink

    Thanks, Steve. This was so well done that even my statistics-naive mind could follow it. I do have one question, though, about the sudden, sharp uptick in all of the graphs for the late 20th century – whether they show a MWP or not. That sudden spike just seems questionable to me, out of character, so to speak, for the way the numbers behave over a long period of time. Is there something different about the measurements we are taking in the past several decades? I’m pretty stupid about this stuff (why I never comment), but that spike just makes me scratch my head and wonder if we are doing something wrong or different all of a sudden. (And Bender, you had me laughing with the rat-a-tat-tat comments there!)

    • Posted Sep 4, 2009 at 8:24 PM | Permalink

      Re: Queen1 (#166),

      It appears that the sharp 20th Century uptick is the product of five or six proxies only. One is a well-known and controversial tree ring series used repeatedly in multiproxy reconstructions called Yamal.

      Steve talked about the Yamal series some time ago. See here for links to articles on the Yamal series.

      The other four or five are reconstructions based on the width of varves. As we’ve discussed, the use of varve widths is a novel technique and nobody knows how the width of varves relates to temperature. Certainly in the case of one of them, the 20th Century shows such a sharp deviation that that part of the series is discarded by the authors because it is likely to be caused by modern farming disturbing the sediment rather than from any climatic cause.

      • bender
        Posted Sep 4, 2009 at 9:15 PM | Permalink

        Re: John A (#167),
        The other issue is potential cherry picking of varve width vs varve density. Why one over the other? What do these different things really measure?

        • Nicholas
          Posted Sep 4, 2009 at 10:48 PM | Permalink

          Re: bender (#169),

          I’m just guessing here but I would look at the product of those two terms. In other words, the total amount of material making up the varve before it was compressed (density × thickness, i.e. mass per unit area).

          Re: Joe Hunkins (#168),

          That sure sounds like a polite way of saying “Mann was wrong” to me. Yes, it APPEARED to be compelling evidence until the VALID criticisms came along.

        • Posted Sep 5, 2009 at 4:41 AM | Permalink

          Re: bender (#169),

          The other issue is potential cherry picking of varve width vs varve density. Why one over the other? What do these different things really measure?

          Well if we’re talking about peer reviewers then we’re definitely measuring gullibility.

          Otherwise you’ve got me because I haven’t got a clue.

  102. Posted Sep 4, 2009 at 9:14 PM | Permalink

    Sorry as this got somewhat o/t:

    Curious: Agree the Wegman report is a credible, reasoned critique. Wegman’s statements such as this are what I was talking about in terms of his avoiding stating things like “Mann is wrong”…

    While the work of Michael Mann and colleagues presents what appears to be compelling evidence of global temperature change, the criticisms of McIntyre and McKitrick, as well as those of other authors mentioned are indeed valid.

    Curious RE: Your homework assignment: No. But thanks for asking. I do find Loehle 2007 an impressive proxy study because his approach seems reasonable, he eliminates the “one tree ring to rule them all” problem, and he does not appear to be compromised by the politics that have started to define too much of the science.

    • fFreddy
      Posted Sep 5, 2009 at 12:28 AM | Permalink

      Re: Joe Hunkins (#168),

      Agree the Wegman report is a credible, reasoned critique. Wegman’s statements such as this are what I was talking about in terms of his avoiding stating things like “Mann is wrong”…

      While the work of Michael Mann and colleagues presents what appears to be compelling evidence of global temperature change, the criticisms of McIntyre and McKitrick, as well as those of other authors mentioned are indeed valid.

      What he is saying there is that Steve and Ross are right. What he is saying here …

      Overall, our committee believes that Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.

      … is that Mann is wrong.

    • curious
      Posted Sep 5, 2009 at 4:39 AM | Permalink

      Re: Joe Hunkins (#168), Joe – I don’t think this is OT. Isn’t the very point that, despite clear evidence that the techniques used in many reconstructions are unfit for purpose, their usage persists?

      My reading of the Wegman extract that you selected is that it is actually one of complete dismissal of MBH98 as an illusion – that is why they use the phrasing:

      “While the work … appears … compelling…, the criticisms … are indeed valid.”

      In other words – It (MBH98) doesn’t stand up to scrutiny. As others point out, this view was confirmed by North and yet still you make the assertion that Wegman:

      does not suggest MBH98 was “wrong”.

      As I asked above – what did the Wegman Committee find that was right about it?

      Thanks re: Loehle 2007. My request wasn’t meant to be a homework assignment – it was a response to your assertion:

      However even if we assume many of these studies suffer from questionable proxy selections and bad math, isn’t it true that there are hundreds more that are not compromised in that way?

      I hoped you’d be able to support this with some evidence by giving an off-the-cuff guide to the best of the hundreds of good studies out there. This would be very useful for those of us following at a distance, as drive-by comments without references can be misleading in the same way as claims that Wegman “does not suggest MBH98 was ‘wrong’”.

      Re: bender’s point and “nobody listening” – I am baffled. How can any scientist read the Executive Summary of the Wegman Report and not be dismayed that, 10 years on from MBH98, the same failings are being presented as world-class research worthy of front pages around the world?

      • Calvin Ball
        Posted Sep 6, 2009 at 11:41 AM | Permalink

        Re: curious (#177), I think we may be going OT, but “does not suggest MBH98 was ‘wrong’” is just plain silly. I think he was trying to say “does not make MBH98 ‘wrong’”. That’s a true statement. But when you refute the very basis of something, that most certainly does “suggest”, and very strongly so. This sort of semantic sloppiness is very puzzling.

  103. Calvin Ball
    Posted Sep 4, 2009 at 10:39 PM | Permalink

    Didn’t somebody a few comments up say that density is essentially a function of age? I think a fair question is: isn’t varve mass (per area) really the better indicator?

  104. Posted Sep 5, 2009 at 2:34 AM | Permalink

    And here are the details of NAS panel chairman North’s agreement with Wegman.

    …Barton asked North very precisely whether he disagreed with any of Wegman’s findings and North (under oath) said no, as follows:

    CHAIRMAN BARTON. I understand that. It looks like my time is expired, so I want to ask one more question. Dr. North, do you dispute the conclusions or the methodology of Dr. Wegman’s report?

    DR. NORTH. No, we don’t. We don’t disagree with their criticism. In fact, pretty much the same thing is said in our report…

  105. Frank Lansner
    Posted Sep 5, 2009 at 2:40 AM | Permalink

    A few graphics for the debate; they speak for themselves:

    I think it’s nice to see that temperature data without UHI, including practically all the world’s temperature proxies, the sea levels and solar energy play well together, while corrected GISS temperatures do not have their corrections supported by data from nature. (Yes, the solar curve is Hoyt and Schatten, perhaps not correct, but the solar trend is.)

  106. steven mosher
    Posted Sep 5, 2009 at 3:31 AM | Permalink

    oops, that link has bad words

  107. bender
    Posted Sep 5, 2009 at 6:14 AM | Permalink

    I’m guessing Steve M would prefer to have Wegman discussed in Wegman threads. And generic stuff in “unthreaded”. This here is about the Kaufman paper. It’s an important paper. As John A says: disproportionately influential.

  108. bender
    Posted Sep 5, 2009 at 6:26 AM | Permalink

    In 1996 Bradley stated:

    it is reasonable to interpret the long-term varve thickness record as a proxy of summer temperature until other factors can be demonstrated as having been of more importance

    That “until” lasted until 2003, when Tiljander provided the requested demonstration (that snowpack & flooding were of more importance).
    .
    What gives these guys the authority to interpret Tiljander’s data in the opposite fashion that she herself does? Ignoring her work would be crmnl nglgnc. Inverting it is frdlnc.

  109. bender
    Posted Sep 5, 2009 at 6:34 AM | Permalink

    I insist that one of these authors present himself to this court of public opinion to defend this work. Your court date is Tuesday morning, gentlemen. Have a good weekend, all.

  110. stephan harrison
    Posted Sep 5, 2009 at 7:40 AM | Permalink

    Just a quick comment on varves. Glacier recession produces a pulse of increased sediment supply to valley bottoms (paraglacial response) which may complicate the varve-climate signal (if there is one!).

    • Steve McIntyre
      Posted Sep 5, 2009 at 9:36 AM | Permalink

      Re: stephan harrison (#182),

      one can see how glacier recession would produce increased sediment, but once the glacier had receded, the sediment supply to that particular core would diminish.

      For the Finnish data sets, there are no relevant glaciers. Tiljander relates the varves to prior winter snowfall, connecting thin varves to warmth and thick varves to cold (LIA). Upside-down Mann produces a warm Little Ice Age in Finland.

      I’ve been parsing through some underlying data and thus far, I’ve been unable to exactly reproduce any of the archived 10-year series from original data. I can sort of get similar shapes, but these are baby food calculations and there should be no difficulty in reproducing the rescaled data exactly. In some cases, I’m wondering whether they used different versions than presently archived. Usual Team nonsense. If anybody can replicate any of the Kaufman decadal series from original data anywhere, I’d be much obliged.

  111. stephan harrison
    Posted Sep 5, 2009 at 9:52 AM | Permalink

    Hi Steve
    You said: “one can see how glacier recession would produce increased sediment, but once the glacier had receded, the sediment supply to that particular core would diminish”.

    Yes, but this would take a lot of time and the enhanced sediment supply to the basin would be maintained if the basin is coupled to the glacier (by rivers for instance). In addition, an identifiable paraglacial signal typically lasts centuries after glacier melt, and millennia if rockslopes are included.

  112. pat
    Posted Sep 5, 2009 at 11:44 AM | Permalink

    Re: Ian #161

    Ian, does your last name start with the letter ‘J’?

  113. Steve McIntyre
    Posted Sep 5, 2009 at 1:31 PM | Permalink

    Comparing Kaufman series #1 to original data (using a log-transform of varve thickness – the precise original data is nowhere specified; there are several possibilities and none match exactly), high early values are excluded as shown below.

    This was done in the original article with the following “argument”:

    The period from 10 to 730 AD is excluded from the reconstruction because varve thicknesses during this interval are well beyond the range of the calibration data, and because previously published paleoclimate studies from elsewhere in the region suggest that temperature may not have been the dominant control on varve thickness prior to 730 AD (discussed below).

    We suspect that increased precipitation during this period of glacier recession influenced varve thicknesses at Blue Lake. With the Blue Lake glacier recessed, large amounts of unconsolidated sub-glacial sediment would have been exposed in the upper reaches of the catchment and subsequently available for transport. We suggest that enhanced sediment availability, in combination with increased precipitation, amplified sediment delivery to Blue Lake, resulting in thicker varves that were not directly related to temperature-controlled summer melt. This inference is supported in part by higher sedimentation rates between 10 and 730 AD

    This is precisely the sort of ex post data manipulation that so plagues this field.

    • Posted Sep 5, 2009 at 4:31 PM | Permalink

      Re: Steve McIntyre (#186),

      They might as well have said:

      “The data prior to 730 AD is inconveniently high, blowing our … arguments about temperature sensitivity straight out of the water.

      snip

  114. Michael Smith
    Posted Sep 5, 2009 at 3:26 PM | Permalink

    We suspect that increased precipitation during this period of glacier recession influenced varve thicknesses at Blue Lake.

    I’m confused. I thought “glacier recession” was considered strong evidence of warm temperatures. But here it seems that “glacial recession” indicates nothing about temperatures. Or am I missing something?

    Steve: The original reference is online and suggests that both are at work. However, both are presumably at work all the time and arbitrary deletion of data on this sort of basis is very disquieting.

  115. Jim Jazz
    Posted Sep 5, 2009 at 4:56 PM | Permalink

    snip – you’ve used language that is against blog rules.

    • Posted Sep 5, 2009 at 5:40 PM | Permalink

      Re: Jim Jazz (#189),

      What are you talking about?

      The dissection of Kaufman 2009 is in the early stages, and it could be that somehow, somewhere there are perfectly rational reasons why the climate scientists did what they did. We could be all wrong about Kaufman 2009.

      In any case, you assume far too much about “a letter to Science criticizing the data and enabling a reply by the authors to your criticisms”. There’s no guarantee that Science will print it, nor allow anyone to follow up with rebuttals if and when those scientists deign to reply. There’s equally no guarantee that “a formal complaint to their respective academic institutions” would produce anything other than a closing of the ranks as has happened many times before.

      Repeatedly in the recent past, criticisms have been made of canonical papers in climate science where the criticized scientists make no admission of error other than typographical and the journal concerned has simply shut down all debate after that, regardless of the adequacy of the explanation. This has happened so frequently that it’s not funny.

      This is the perfect forum for arguments such as these because at least everyone can see what is being discussed, what is and is not dubious, and the authors themselves (whom I guarantee are watching) can intervene to explain themselves in an open debate.

    • Jason
      Posted Sep 5, 2009 at 6:08 PM | Permalink

      Re: Jim Jazz (#189),

      Steve would be very quick to disclaim this sort of view. He is simply analyzing the data, and cataloging this paper’s imperfections.

      snip – use of words not allowed here

      On a completely different, and altogether unrelated topic: If Steve found himself on a jury in a murder trial, and the accused was found hovering over the victim with the murder weapon in his hands, Steve would vote to acquit unless indisputable evidence were provided showing that the weapon was in the accused’s hands at the time of the murder.

      Unfortunately, most commentators on this website have yet to achieve Steve’s total mastery of not drawing conclusions based on circumstantial evidence.

      Steve: I’m not sure that this is a very helpful analogy. Despite what occasional readers may think, I’m not the least bit interested in people’s motives. I have no way of determining what their motives are and therefore I don’t worry about them and have established rules asking people not to indulge in such speculations here.

    • curious
      Posted Sep 5, 2009 at 6:45 PM | Permalink

      Re: Jim Jazz (#190), See comment 19 for an invitation to the author(s) to clarify their position.

    • Steve McIntyre
      Posted Sep 5, 2009 at 7:28 PM | Permalink

      Re: Jim Jazz (#190),

      I refrain from speculating about authors’ motives as such speculation is pointless and doesn’t matter. If there are empirical points that you wish to dispute, please do so. On the fact of it, it appears, for example, that Kaufman et al used the Tiljander series upside down from that of the originating authors (following Mann in this respect) and that the Blue Lake authors truncated the high early portion of this series.

      You are welcome to post disagreements. But please do not put words into my mouth or allege that I made accusations that I did not make.

      If the authors disagree with any observations made here, they are welcome to post their comments or to post their own counter-thread without any editorial intervention on my part. Personally I think that this sort of exchange at a blog level would be productive.

  116. Steve McIntyre
    Posted Sep 5, 2009 at 7:40 PM | Permalink

    Proxy #2 is Hallett Lake, Alaska BioSilica. This is one of the few contributors to a HS shape. You’ll notice a pronounced blade on the right-hand side of the graph (at least it’s pronounced after CPS re-scaling). This is the only occurrence of this proxy type in this network – it would be interesting to see what other examples look like. In this case, the original data goes back to the Holocene Optimum and looks like this:

    I guess you have to be on the Team to understand how this series shows that current temperatures are “unprecedented”. Most CA readers will probably be unable to make this deduction from this particular proxy.

  117. Tim Channon
    Posted Sep 5, 2009 at 8:02 PM | Permalink

    I’ve written something to do with Blue Lake which might be of interest here; it uses other data in the published set. Didn’t the authors of the Science paper look?

    Skip the preamble chat to the graph. I don’t mention the earlier varve data suggesting warmer conditions, but it would fit.

    In the unlikely event I am correct, this suggests there was indeed a cooling, but from a much higher level of warmth, so that today perhaps we are starting to come back out. Is this a problem?

    http://ccgi.flute.plus.com/thor/concept/weather/data-analysis/blue-lake-isnt-hockey/

    Steve: A reader posting a link does not mean that I endorse the theory in the link. In this case, I suspect that varves are affected by micro-site factors and do not believe that “answers” or “explanations” are required for changes in local varve thickness that are not replicated at multiple sites.

    • Tim Channon
      Posted Sep 6, 2009 at 4:53 AM | Permalink

      Re: Tim Channon (#196),

      I guess that looked reasonable on posting content, but less so when other posts became visible. No intent to cause upset. (delete it if you wish)

  118. Steve McIntyre
    Posted Sep 5, 2009 at 8:12 PM | Permalink

    Kaufman proxy #4 (Loso’s Iceberg Lake varve thickness) is one of the few contributors to HS-ness. In this case, Loso combined results from several different cores. Cross-dating cores is something that you have to do in making tree ring chronologies – where replication is of the essence. Seems to me like something similar should be a prerequisite for sediment chronologies.

    There are 3 archived Loso cores with values to the present. Loso took an average (removing very high thicknesses). Here’s a plot of the three cores going to the present. Core M has a gap of 400 years or so. I haven’t studied the article so far to determine how they figured out that there was a 400-year hiatus – seems odd.

    In any event, core K does not have a jump in 1957 corresponding to the jump in Core A. The HS-ness in Proxy #4 results from the jump in Core A blended out with series that don’t change levels – disguising the size of the Core A change. It sure looks to me like there is some sort of inhomogeneity in Core A of this data set.

    As noted before, there are only 3-4 proxies that contribute to the HS-ness of this reconstruction, including Yamal (which we’ve discussed at length elsewhere) and now Iceberg Lake – where the contribution comes from only one core, where there seems to be a discontinuity.

  119. Posted Sep 5, 2009 at 10:15 PM | Permalink

    RE Fig. 4 in post and Steve’s

    I transcribed series 20 manually and may have a couple of discrepancies as the data format was very annoying. (I’ve uploaded my transcription) In addition, data was missing in the SI from 1225 to 1105.

    I don’t see that these years are missing for Tiljander (#20) in either the new official SI or the NCDC file, both of which are online now. Were they missing from the draft SI you originally linked? I’ve pitched my copy of it, so I can’t check.

  120. Larry Huldén
    Posted Sep 6, 2009 at 1:50 AM | Permalink

    If the thickness of sediments decreases with time, may we expect that a hockey stick will look similar 1000 years from now, looking back at the most recent 100 years?

  121. Posted Sep 6, 2009 at 2:31 AM | Permalink

    RE Bender #199,

    The SI has changed, as I pointed out earlier.

    In #91, you noted that the official SI (with a legible but still non-digital table) had replaced the draft SI (with a grainy table), but not that the coverage of the numbers had changed. Or are you referring to a different comment?

  122. Posted Sep 6, 2009 at 3:43 AM | Permalink

    Kaufman & Co calibrate their proxy average to “the spatially averaged summer temperature for all land area north of 60° latitude from the CRUTEM3 data series”, but don’t include their computation of this in their SI.

    Is this trivial and unambiguous to calculate from the gridded CRUTEM3 file? If so, can someone please calculate it? (Note that climate people define summer as June July August, or JJA, even though June is mostly in spring.)

    Thanks!

    • romanm
      Posted Sep 6, 2009 at 5:57 AM | Permalink

      Re: Hu McCulloch (#202),

      I was looking into this yesterday. The calculation is not unambiguous, since the data is in 5×5 degree gridded format with grid cells of unequal areas and with varying numbers of missing cells throughout the temperature record.

      From my initial calculations, it seems that what they did was simple unweighted averaging. I will look at the details more in-depth today and post a comment on it later.

      • kim
        Posted Sep 6, 2009 at 7:46 PM | Permalink

        Re: romanm (#204),

        Whaddya bet the method of averaging will introduce a bias, too. Gad, I hate being so cynical. This voyage of discovery is supposed to be a glorious adventure.
        =====================================

    • Kenneth Fritsch
      Posted Sep 6, 2009 at 10:52 AM | Permalink

      Re: Hu McCulloch (#202),

      Kaufman & Co calibrate their proxy average to “the spatially averaged summer temperature for all land area north of 60° latitude from the CRUTEM3 data series”, but don’t include their computation of this in their SI.

      Is this trivial and unambiguous to calculate from the gridded CRUTEM3 file? If so, can someone please calculate it? (Note that climate people define summer as June July August, or JJA, even though June is mostly in spring.)

      I am not certain what your question is, but the CRUTem3 data for any month and latitude and longitude can be easily extracted from the KNMI web page here:

      http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

      One could do a comparison with the GISS 250 km and 1200 km data series also.

  123. Steve McIntyre
    Posted Sep 6, 2009 at 7:11 AM | Permalink

    It appears to me that Kaufman used an obsolete version of Proxy #2. This was one of the first series that I looked at; my attempts to replicate it from archived data were very frustrating. Consulting the notes to the archive:

    #LAST UPDATE: 2/2009. Hallet Lake Biogenic Silica data replaced.
    #Incorrect data file was originally archived 11/2008.

    Does this “matter”? Probably not. Nothing usually “matters” in these sorts of studies except Yamal, bristlecones and one or two other series with problems of their own.

  124. curious
    Posted Sep 6, 2009 at 12:11 PM | Permalink

    Calvin – this is what JH said, trying or otherwise, in 141 (my bold):

    Yes Bender I’ve read Wegman’s very interesting paper and it leads a clear thinking person to question how *some* of the research has been compromised by social forces and bad stats. However it also very conspicuously does not suggest MBH98 was “wrong”. Too often here at CA folks appear to think that minor methodological flaws are major ones. This is rarely the case.

    My thoughts on this are up in 177, which you are responding to – Wegman very conspicuously said it (MBH98) did not stand scrutiny. Sorry for all the bold, but IMO for a scientific paper that does make it wrong, and the claim to the contrary is ridiculous.

    Re: OT – I realise this is an old topic. I’m happy for moderators to snip or move as required.

  125. Terry
    Posted Sep 6, 2009 at 12:28 PM | Permalink

    Steve,

    Have you received any response from your e-mail to Kaufman? Did you copy the other authors? Just curious.

    • curious
      Posted Sep 6, 2009 at 12:58 PM | Permalink

      Re: Terry (#209), Me too! 🙂

    • Steve McIntyre
      Posted Sep 6, 2009 at 1:43 PM | Permalink

      Re: Terry (#209),

      No response or acknowledgement. Nor did I get any from my last couple of emails to Gavin Schmidt etc. If pressed, I’m sure that Kaufman would say that it was a long week-end. On the other hand, it’s equally possible that he’s intentionally ignoring the email. Impossible to say. I’ll send a refresher early next week.

  126. Hu McCulloch
    Posted Sep 6, 2009 at 4:55 PM | Permalink

    Re Ken Fritsch #206,

    I am not certain what your question is, but the CRUTem3 data for any month and latitude and longitude can be easily extracted from the KNMI web page here:
    http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

    Thanks, Ken — I tried selecting 1850-Now CRUTEM3, then 60°N – 90°N and 0°E – 360°E, and left the default 30% under “Demand at least __% valid points in this region.” This gave me a nice graph and ASCII file, but only with 1896-2009 even though I had asked for 1850-Now. (There was also a line for 1895, but only August was non-missing.)

    Kaufman in fact only used 1860 – 2000, so I don’t really need the first decade, but how do I get 1860 – 1895?? Did I set a different % of valid points than Kaufman would have?

    It might be interesting to compare GISS to CRUTEM3 eventually, but first we need to find the CRUTEM3 numbers Kaufman used. Or, if it’s not obvious what he used, to ask him for them.

    • Posted Sep 6, 2009 at 7:33 PM | Permalink

      Re: Hu McCulloch (#213),

      Did I set a different % of valid points than Kaufman would have?

      If I had to guess, I’d say that they didn’t bother to exclude years of sparse data from analysis. 1% ought to do it.

    • RomanM
      Posted Sep 7, 2009 at 6:35 AM | Permalink

      Re: Hu McCulloch (#213),

      The Kaufman paper states that the CRUTEM3 data set was used for temperatures and reference 14 in the paper reads: “Climatic Research Unit CRUTEM3 temperature data are described in (33) and are available at http://www.cru.uea.ac.uk/cru/data/temperature “.

      The CRUTEM3 file on that page is in NetCDF format and can be downloaded and read into R pretty easily, and the Arctic data extracted. This page also refers to an ftp site ( ftp://ftp.cru.uea.ac.uk/data ) for the temperature data. Among the files on that page, there is a zip file crutem3.zip which contains a very large ascii file; reading the data from this file would require considerably more programming.

      #download 18 MB data set
      #crut3.url = "http://hadobs.metoffice.com/crutem3/data/CRUTEM3.nc"
      #download.file(crut3.url, "CRUTEM3.nc", quiet=F, mode="wb")

      #read 5x5 degree gridded data
      #dimensions: lon, lat, time
      #monthly data start time is January 1850
      #need R library ncdf to read the file
      library(ncdf)
      crud = open.ncdf("CRUTEM3.nc")
      crutemp = get.var.ncdf(crud)
      #dim(crutemp) #[1] 72 36 1915

      #extract north of 60
      arctemp = crutemp[,31:36,]

      #count available grid squares
      avail.temp = !is.na(arctemp)
      avail = matrix(NA,1915,6)
      for (i in 1:1915) avail[i,] = colSums(avail.temp[,,i])

      matplot(1850+(0:1914)/12, avail, type="l", xlab="Year", main="Available Grid Squares",
        ylab="Number Available in Latitude Band")
      legend("topleft", legend=c("60-65","65-70","70-75","75-80","80-85","85-90"), lty=1, lwd=2, col=1:6)

      There are no grid values for the 85 to 90 degree band since the data is land temperatures only. The Kaufman paper cuts the temperature data off at 1860. When reading the paper, one should be aware that there is extensive comparative use made of “temperature” data from the ERA40 project, a re-analysis hodgepodge of information from many sources. More information on the project can be found in this document.

  127. Posted Sep 6, 2009 at 11:55 PM | Permalink

    RE Andrew #214,

    If I had to guess, I’d say that they didn’t bother to exclude years of sparse data from analysis. 1% ought to do it.

    Thanks — 1% gave me readings back to 1850, with nothing missing. Of course, Kaufman might have set a threshold somewhere between 1% and the default 30%.

  128. Posted Sep 7, 2009 at 9:40 AM | Permalink

    RE RomanM #218,
    Thanks — I gather then that KNMI’s “% valid points” corresponds to your “available grid squares” as a % of total land grid squares. I was thinking perhaps it related to the number of valid station readings within each grid square. It’s obvious from their graph why Kaufman left out the 1850s.

    Does NetCDF compute areas for you, or is this left to the user? I trust KNMI uses exact areas rather than approximating area with the cosine of the latitude of the midpoint of the grid “square”.

    • romanm
      Posted Sep 7, 2009 at 10:25 AM | Permalink

      Re: Hu McCulloch (#219),

      I would assume the same thing as you as to the meaning of “% valid points” for KNMI. There is nothing in the Crutem3 dataset to indicate which or how many stations go into a grid cell. The 1850s aren’t all that different from the 1860s, but they were already down to very few decadal values as it was for the “scale” part of their procedure.

      The areas are not computed for you. The areas of the grid cells are “spherically rectangular” so the exact calculation looks pretty easy. The area of a band between latitudes a and b is proportional to A(a,b) = abs(sin(a)-sin(b)), the difference of the sines of the two latitudes. Since each band contains 72 identical grid cells, this means you don’t need the actual area and you can use A(a,b) as the relative weight for all grid cells in that band since you normalize the weighted average by dividing by the sum of the relative weights.

      In R (using angles from the pole instead of the equator):

      #weights for area weighting
      wts = -diff(cos( 2*pi*seq(0,30,5)/360))
      wts # [1] 0.003805302 0.011386945 0.018881927 0.026233206 0.033384834 0.040282383

      for 85-90, 80-85 etc.

      The average can be calculated by simple averaging of the cells in each band first and then using the above weights to combine the bands; however, I believe this would not be quite correct, because it does not take into account that some of the grid cells in the band may be water, not land. I simply took all of the grid cells without missing values, weighted them according to their latitude band and then divided the sum by the total of all the weights of the cells used in the sum.
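
      For concreteness, a rough R sketch of that calculation (assuming the arctemp array and wts vector from the comments above, and that the latitude dimension of arctemp runs south to north, i.e. the reverse of the wts ordering):

      #area-weighted mean over all non-missing cells of one monthly 72 x 6 slice
      aw.mean = function(tm, w = rev(wts)) {   #w[j] = weight for band j (60-65N ... 85-90N)
        W = matrix(rep(w, each = 72), 72, 6)   #every cell in a band gets its band weight
        ok = !is.na(tm)                        #use only cells with data this month
        sum(W[ok] * tm[ok]) / sum(W[ok])       #normalize by the weights actually used
      }
      arcavg = apply(arctemp, 3, aw.mean)      #monthly area-weighted Arctic series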

  129. Posted Sep 7, 2009 at 11:58 AM | Permalink

    RE Roman #220,
    Great! Can you post what you got, and upload it to CA? That would be a great help.

    Doesn’t CRU indicate which cells it considers to be land and which water? A table with the fraction of land cells represented would be helpful, but not essential.

    • romanm
      Posted Sep 7, 2009 at 2:43 PM | Permalink

      Re: Hu McCulloch (#221),

      I calculated the unweighted and area weighted monthly series and put them in a text file which I uploaded to the CA site. Is that what you wanted?

  130. Posted Sep 7, 2009 at 10:41 PM | Permalink

    RE RomanM #222,
    Yes, thanks!

    What is the third column? It says it is the area coverage, but often it is negative.

    • RomanM
      Posted Sep 8, 2009 at 4:18 AM | Permalink

      Re: Hu McCulloch (#223),

      Sorry, I should have explained. Column 1 is time. Column 2 is the monthly average temperature anomaly for the north-of-60 Arctic using simple equal-weight averaging of all grid cells having a temperature value. Column 3 is similar to column 2, but uses simple area-weighted averaging (use only non-missing values, but with weight proportional to the area of the grid cell).

      Anything more complicated involving weights would need to determine which grid cells are land and which are water when combining latitude bands. I don’t think that such results would differ much from column three.

      From these sequences, one can calculate June – July – August means and decadal means in a reasonably simple way.
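
      For example, a minimal sketch, assuming the uploaded file has the three columns described above and that the time column encodes months as year + (month - 1)/12:

      mon = read.table("http://www.climateaudit.org/wp-content/monseries.txt")
      yr = floor(mon[, 1])
      mo = round(12 * (mon[, 1] - yr)) + 1                      #month number 1..12
      jja = tapply(mon[mo %in% 6:8, 3], yr[mo %in% 6:8], mean)  #JJA mean for each year
      dec = tapply(jja, 10 * floor(as.numeric(names(jja))/10), mean)  #"0"-decade means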

  131. Steve McIntyre
    Posted Sep 8, 2009 at 7:57 AM | Permalink

    Last week, I notified Kaufman about the use of Upside Down Tiljander, asking in addition for various “publicly available” data sets that do not appear to actually be available anywhere that I know of. He replied yesterday attaching a graph indicating that it doesn’t matter whether Tiljander is used upside down and unresponsively referred me to the decadal values of the data already available.

    I am obviously not surprised that it doesn’t “matter” whether this truncated version of the Tiljander data is used upside down or not. (The huge blade of Mann et al 2008 was truncated in this application.) In my head post, I noted that, in these networks, most proxies don’t “matter”. However, Yamal and a couple of other HS-series (Loso’s Iceberg Lake) do “matter”; indeed they are pretty much all that matters. Here was my prediction of Kaufman’s response:

    I’m sure we’ll soon hear that this error doesn’t “matter”. Team errors never seem to. And y’know, it’s probably correct that it doesn’t “matter” whether the truncated Tiljander (and probably a number of other series) are used upside-down or not. The fact that such errors don’t “matter” surely says something not only about the quality of workmanship but of the methodology itself.

    What does “matter” in these sorts of studies are a few HS-shaped series. Testing MBH without the Graybill bristlecones provoked screams of outrage – these obviously “mattered”. Indeed, in MBH, nothing else much “mattered”. The Yamal HS-shaped series (substituted in Briffa 2000 for the Polar Urals update which had a high MWP) plays a similar role in the few studies that don’t use Graybill bristlecones. The present study doesn’t use bristlecones, but Briffa’s Yamal substitution is predictably on hand.

    • bender
      Posted Sep 8, 2009 at 8:11 AM | Permalink

      Re: Steve McIntyre (#225),
      He’s wrong. Tiljander DOES matter. Removing Yamal alone doesn’t quite tip the scale in the CWP warmer than MWP debate. Removing Yamal AND turning Tiljander upside down is what is needed to make CWP “unprecedented” in 2000 years. IOW, without this operation there is no Science Paper. Let Dr. Kaufman reply to THIS assertion. Today is Tuesday. Let’s hear from the authors.

  132. Posted Sep 8, 2009 at 9:51 AM | Permalink

    RE Roman #224,
    Thanks — Then column 3 of your file at http://www.climateaudit.org/wp-content/monseries.txt is the one that should correspond to the series Kaufman used. I’ll check if it matches the KNMI series with 1% validity when I get a chance.

    Unless I missed something in the paper, I think there’s still some ambiguity whether “decades” start in year 0 or year 1 of the decade in question. The NCDC file “centers” them on year 5, but since decades have an even number of years, they have no central year. I guess we just have to try it both ways and see which replicates their regression, using the proxy average on the NCDC file.

    • Steve McIntyre
      Posted Sep 8, 2009 at 10:12 AM | Permalink

      Re: Hu McCulloch (#227),
      I’m pretty sure that decades start in year 0. I’ve matched a few series from original data on that method. The following are the steps (I’ll post up an organized script later today) to get a Kaufman version from a data frame A for an individual proxy with columns year and proxy (year being perhaps irregular); rescale standardizes on 980-1800.

      kaufman = function(A, year=seq(5,1995,10)) {
        #decadal means for decades 0-1999 (floor(year/10) bins)
        x = round(tapply(A$proxy, factor(floor(A$year/10), levels=0:199), mean), 3)
        h = approxfun(year, x)
        test = ts(h(year), start=5, freq=.1)   #one value per decade, at the midpoints
        y = rescale(test)                      #standardize on 980-1800
        return(y)
      }
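
      For readers who don’t have my script library, a stand-in version of rescale (assuming standardization to zero mean and unit variance over 980-1800, i.e. decade midpoints 985 to 1795):

      rescale = function(x, t0 = 985, t1 = 1795) {  #stand-in for the helper used above
        idx = (time(x) >= t0) & (time(x) <= t1)     #the 980-1800 standardization window
        (x - mean(x[idx], na.rm = TRUE)) / sd(x[idx], na.rm = TRUE)
      }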

  133. Posted Sep 8, 2009 at 10:37 AM | Permalink

    RE Steve #228,

    It’s conceivable that a different convention was used for the proxies than for CRUTEM3, so just because the proxies use year 0 doesn’t necessarily mean the instrumental temperatures do. Lonnie Thompson’s decadal ice core files sometimes say they are using one convention, and sometimes the other. Which year you start on probably won’t affect the results significantly, but if you’re trying to replicate a study, it’s nice to get an exact fit whenever possible.

    RE #225, Science’s Data policy requires,

    Any reasonable request for materials, methods, or data necessary to verify the conclusions of the experiments reported must be honored.

    (See #97 above). I’d send him a new e-mail politely clarifying your request and citing Science’s policy, and if he still refuses, would take it up with the editors.

    RE Tiljander, I have been trying to upload a new post on a problem I see with the calibration procedure used in Kaufman, but WordPress hasn’t been letting me post for some reason. (See e-mail I just sent you).

  134. Posted Sep 8, 2009 at 1:20 PM | Permalink

    sorry, missed the link to Jeff Id “Rewriting Arctic History”.

  135. Wansbeck
    Posted Sep 8, 2009 at 5:43 PM | Permalink

    Amusing exchange on Tamino’s blog:

    “TCO // September 8, 2009 at 10:48 pm | Reply

    So how about the Climate Audit stuff? Why waste time with the least sophisticated stuff? If you’re a math stud, isn’t it more interesting to deal with the smarter critics?

    [Response: I don’t put McIntyre in that category.]”

    I think it was TCO who spotted Tamino’s unannounced correction to his 2 box model after Lucia was banned for daring to ask if Tamino had checked his work.

    Of course the error didn’t matter!

  136. Posted Sep 9, 2009 at 8:09 AM | Permalink

    RE #227 etc,
    I’m getting a substantial difference between KNMI’s version of CRUTEM3 for 60N-90N, versus Roman’s as computed directly from CRU’s file. KNMI reminds us that the CRU site is the “authoritative” version, but they should be on the same page if this is as simple as it sounds:

    Here I’m using column 3 of the file Roman uploaded to CA, which is areally weighted. Just in case KNMI was equally weighting, I tried comparing his column 2 (equal weighted) to KNMI, and the fit was much, much worse.

    The first two graphs look pretty similar, but the difference has a substantial downdrift to it, about .4°C per century by eyeball.

    I terminated both series at the end of 2008, in order to avoid the nuisance of a fractional last year.

    Can anyone reconcile the two sources?

    • RomanM
      Posted Sep 9, 2009 at 12:38 PM | Permalink

      Re: Hu McCulloch (#235),

      I’ll get the KNMI data and look into it.

      By the way, I don’t know if anyone else has noticed, but there is a difference between the ASCII version of the proxy data and the Excel version of the data at the NOAA site. Although the Excel version shows only two decimal places for each value, in fact the data are there in 14-decimal-place format. When I copied the data to the clipboard for transfer to R, the end result was to get the data exactly as visible in the spreadsheet. The same thing occurred when I saved the data in csv format. I finally reformatted the cells to show all of the digits and was then able to take it that way to R.

      Although the difference in the calculation results will not be that substantial between the two sets, it should be easier to get matches to more decimal places when trying to verify the reconstruction of the paper’s conclusions. The extra places will also be helpful when trying to reconcile annual proxy series with the decadal ones used by K et al.

    • RomanM
      Posted Sep 10, 2009 at 7:00 AM | Permalink

      Re: Hu McCulloch (#235),

      From a little experimentation, I have deduced that KNMI does appear to use different weights than I did for calculating averages over grid cells from separate latitude bands. However, the difference seems to go a lot deeper than that.

      I calculated the mean monthly series for all grid cells in the 60N to 65N range for both KNMI and Crutem3. Since all cells would have the same area, no weighting should be necessary. The difference between them was surprising:

      The reason for the differences appears to be explained on the KNMI website. If you go to the Field selection page for Crutem3 T2m anom and look at the Extract timeseries dialogue box, one of the choices is “Noise model – the same in all grid points”. Clicking the “information box” next to it produces the explanation:

      Noise model:

      To compute an area avergae (sic), the average anomaly is computed and this is added to the climatology. This usually gives a better estimate than a straight average in the presence of missing data, especially when there is a gradient in the clmatoloy (sic) over the area (e.g., Niño3). However, this assumption that the noise amplitude is equal in all grid points can give rise to negative numbers for positive-definite quantities such as precipitation. For this variable an error model proportional to the variability is better but this option has not yet been implemented.

      The graph indicates that a variable month-based adjustment has been made to the averages calculated from the Crutem3 data set to compensate for the highly variable number of empty grid cells (see comment 218 above). Without knowing any more details, I would not want to use the KNMI values in any sort of analysis.

  137. dougie
    Posted Sep 9, 2009 at 1:24 PM | Permalink

    Anyone know why TCO seems to have stopped posting/commenting at CA?

    and nice to see bender is back posting, i have missed your quick wit.
    you certainly liven up the posts.

    great blog by the way. thanks Steve & all concerned.

  138. bender
    Posted Sep 9, 2009 at 2:05 PM | Permalink

    Hey, where’d Smokin’ Joe Hunkins go?

  139. dougie
    Posted Sep 9, 2009 at 5:55 PM | Permalink

    O/T bender, who is Smokin’ Joe Hunkins?

  140. Posted Sep 9, 2009 at 7:25 PM | Permalink

    snip – no need to discuss this

  141. Steve McIntyre
    Posted Sep 10, 2009 at 8:07 AM | Permalink

    Roman, I’ve written some R routines to scrape KNMI data (not easy BTW). As I recall, KNMI normalizes over a 2-year period – not the CRU 30-year period, an annoying discrepancy. I think that you can specify the normal period to match CRU, but I don’t have time to look into this for a few days.

  142. Posted Sep 10, 2009 at 8:22 AM | Permalink

    RE RomanM #241,
    Thanks, Roman — I’ll just use your numbers and forget about KNMI.

    Given the convenience and therefore popularity of KNMI, this is too important a point to be buried at comment 241 of a very specialized thread. Can you write up a quick post about KNMI-CRU vs CRU-CRU? Feel free to lift my graphs if they are helpful. Perhaps the KNMI people would then contribute some clarification.

    (I don’t remember what the URLs of my graphs were or even if they are on CA or OSU, but the URL of an image you want to link can usually be found by right clicking on it, and then selecting “print picture”. IE then prints the URL at the bottom of the page. Unfortunately, it does not give you a “print preview” option to save a little ink. This is usually easier than sifting through “View/Source”.)

    PS: The discrepancies look like they are just computing the seasonal adjustment differently — the one perhaps with a constant SA, and the other with an updating SA.

    • romanm
      Posted Sep 10, 2009 at 10:56 AM | Permalink

      Re: Hu McCulloch (#243),

      Any post would have to wait until the weekend. For determining the whereabouts of a graph, I generally right-click the graph and select “Properties”. The url can usually be copied and pasted from the information window.

    • Kenneth Fritsch
      Posted Sep 11, 2009 at 9:53 AM | Permalink

      Re: Hu McCulloch (#243),

      Given the convenience and therefore popularity of KNMI, this is too important a point to be buried at comment 241 of a very specialized thread.

      As a layperson who appreciates the convenience and flexibility of the KNMI climate series, I agree that it would be worthwhile to pin down these differences that we see and determine in more detail how KNMI is treating the data and any unique treatments that Kaufman may have applied to it.

      When I was extracting CRUTEM3 zonal data from KNMI (the tropics) I was wondering whether I might be seeing some effect of the overlap of the zone of interest into the adjacent one. I know that the GISS 250 km and GISS 1200 km data tropical zone series extracted from KNMI give significantly different results. I need to look into this aspect in more detail.

  143. Posted Sep 10, 2009 at 12:17 PM | Permalink

    RE #233, #241,
    I just read somewhere (probably Kaufman) that summer and winter trends have been different in the Arctic. A moving average SA would tend to weaken such a difference, relative to a constant SA.

    • Posted Sep 12, 2009 at 12:15 PM | Permalink

      Re: Hu McCulloch (#245), Ferdinand Engelbeen noted

      that, while yearly average temperatures after 2000 are near equal to the 1930-1950 temperatures, summer temperatures seems to be lower. And Egedesminde and nearby Jacobshavn (Ilulisat), have the same yearly trend, but the summers in Egedesminde are somewhat cooler…

  144. Posted Sep 10, 2009 at 12:48 PM | Permalink

    If univariate calibration were appropriate for the problem at hand, the Kaufman result would fit well with what I’ve learned from the Brown & Sundberg articles:

    ..but if I consider this as a multivariate calibration problem, the inconsistency statistic ( http://www.climateaudit.org/?p=3364 ) takes quite high values. Related to Jean’s observation, I think.

  145. Posted Sep 10, 2009 at 1:32 PM | Permalink

    RE UC #246 —
    I have a forthcoming post, to be entitled “Invalid Calibration in Kaufman 2009,” where we can discuss these issues at length.

    Stay tuned!

    • romanm
      Posted Sep 10, 2009 at 1:59 PM | Permalink

      Re: Hu McCulloch (#247),

      Their method of calibrating the temperature also struck me as strange. I was going to go after that after I had figured out exactly how they got their temperature sequence, but I’ll wait until you do your thing first. 😉

  146. dougie
    Posted Sep 10, 2009 at 6:33 PM | Permalink

    Steve OT – put in unthreaded if best.
    wish i could add to the debate but can’t, not smart enough!
    but have noticed something related, that i think should be
    noted –

    on CA
    Re: Ian #161

    Ian, does your last name start with the letter ‘J’ ?

    over at Tamino

    drawp
    Tamino,
    Some fellow named Ian posted on CA “Kaufman and Up-side Down Mann” and accused Mann of f…. Do you have any idea what is Ian’s last name?

    now that bothers me,
    if Ian would out himself then i can see problems for him.

    Steve: As readers know, such accusations are prohibited under blog policies. I moderate after the fact and am not online 24-7. I deleted the accusation in question as soon as I got up one morning. It seems curious that an accusation posted here against blog policies was almost immediately complained about at Tamino’s; makes one wonder about the motives of the accuser.

    • Michael Jankowski
      Posted Sep 10, 2009 at 7:00 PM | Permalink

      Re: dougie (#249), That’s pretty amusing considering what Tamino accuses Steve of. But I don’t see any reference to “fraud” in Ian’s post #161. I see him refer to “bad science.”

  147. John M
    Posted Sep 10, 2009 at 6:41 PM | Permalink

    Tried posting this several times on unthreaded, but it wouldn’t take. It is related to upside-down Mann though.

    I don’t know how many remember the side-bar discussion going on during the analysis of Mann’s upside down PNAS paper, but as far as I can tell, it was a “Track I” submission, where a good buddy helped grease the skids.

    Science has a blog post reporting that the Academy is changing its policy.

    The Proceedings of the National Academy of Sciences will discontinue a submission option for members that, at its best, repeatedly put prestigious scientists in awkward situations and, at its worst, critics alleged, allowed scientists to ease their way through the peer-review process.

    But there’s still hope for innovative and idiosyncratic statistical treatments!

    …a “determined minority” opposed the move because they felt the option offered a publication route for innovative and idiosyncratic papers. Schekman argues that another mechanism—the ability of authors, when they submit, to suggest who should review their paper as editors—ensures that such work will be judged fairly.

    link

  148. Kenneth Fritsch
    Posted Sep 14, 2009 at 1:53 PM | Permalink

    I do not know how all the information I intend to present here will load, but I thought it important to see all the 23 Kaufman proxies in one place at one time. I included the break point trends in the graphs which will, I hope, be presented below. The R script for the break point calculations and graphs is given below and was copied from the form used originally by Steve M.

    When looking at these proxies in detail, it is difficult for me to see how these, for the most part, very different proxies will add up to an average that reflects anything more than chance. We are looking at proxies with very different break points, with none of them showing the unprecedented break in the past 50 years or so. A segment of minimum length of 5 data points was used in the calculation to allow the short end segment to break.

    The ice core proxies appear to have no break points or one break point per proxy, while the other proxies have a varying number of break points, depending on the individual proxy. Could one conjecture that, since the breaks (and the peaks and valleys of the proxy time series) do not coincide as would be expected if they were climate-influenced, we could be looking at non-climatic influences that are rather unique to the individual proxy?

    The proxy data was extracted from the link here:

    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/reconstructions/arctic/kaufman2009arctic.xls

    A numbered description of the Kaufman proxies along with the break point data for each is shown in the table below.

    library(strucchange)
    x = ts(data=read.table("clipboard"), start=5, deltat=10)  #one decadal Kaufman proxy
    year = c(time(x))
    fm = lm(x ~ year)
    bp = breakpoints(x ~ year, h=5)   #h = minimum segment length (5 points = 50 years)
    bp

    #if bp = 0 do not continue, but use this script:

    plot(x, type="l", xlab="Years", ylab="Standardized Proxy Response", main="Breakpoints: Kaufman Proxy 16")

    make.bp = function(x, nbreaks=6) {
      year = c(time(x))
      bp = breakpoints(x ~ year, breaks=nbreaks, h=5)
      fac0 = breakfactor(bp)                    #segment factor from the fitted breakpoints
      fm0 = lm(x ~ year)                        #single overall trend
      fm1 = update(fm0, x ~ year + fac0*year)   #separate trend within each segment
      list(x=x, bp=bp, fm=fm0, fm1=fm1)
    }
    A = make.bp(x=x, nbreaks=6)
    year = c(time(x))
    plot(year, A$x, type="l", xlab="Years", ylab="Standardized Proxy Response", main="Breakpoints: Kaufman Proxy 23")
    lines(year, fitted(A$fm1), col=2)
    abline(v=year[A$bp$breakpoints], col=2, lty=3)

    • Posted Sep 17, 2009 at 11:06 AM | Permalink

      Re: Kenneth Fritsch (#253),

      Kenneth,

      I’m confused as I haven’t had time to keep up. Can you tell me what the meaning of the break points is?

      • bender
        Posted Sep 17, 2009 at 6:52 PM | Permalink

        Re: Jeff Id (#256),
        FWIW I think these breakpoints may indicate potentially important non-stationarities in proxy responsiveness – not climate signal. So I’m not surprised at all that they don’t correlate well between series. They’re random demonic intrusions.

        • Posted Sep 18, 2009 at 8:11 AM | Permalink

          Re: bender (#257),

          That’s what it looked like to me too so if that’s the source of the break point I was hoping to ask Kenneth for some description of how they were determined. One thing that always bothers the engineer in me is that proxies never look like what you would expect a temp signal to look like. #2 #11 #6 #17 #22 what are those? It’s baffling to think of these scribbles as temp.

          Ah, I see 22 is the infamous Yamal. It looks an awful lot like the final result. Maybe my eyes are tricking me but even some of the smaller squiggles are matching up.

      • Posted Sep 18, 2009 at 8:09 AM | Permalink

        Re: Jeff Id (#256), Thanks to Kenneth for posting all the graphs. It helps to make it clear why #22 (Yamal) is so popular.

        But IMHO the break points have no meaning at all. I always prefer to look at graphs of data without misleading straight lines drawn through it. Given that the break points show no consistency even for nearby sites, I wonder if Kenneth would agree?

      • Kenneth Fritsch
        Posted Sep 18, 2009 at 8:25 PM | Permalink

        Re: Jeff Id (#256),

        Jeff, I was not ignoring you, as I was on deep vacation for a couple of days. Hu M. answered your query much better than I could, and I agree (as a layperson) that the break points could give some valid analytical insights. My point is that I believe the Kaufman authors assume that these proxies are all responding to temperatures above 60N, and that these breakpoints are occurring rather randomly in time in most of the proxies (with few or no break points in the ice core proxies). Now an alternative explanation could be that the climate at these locations is very local and very unique. Unfortunately, in that case one would need a very large number of proxies at widely scattered and representative sites to obtain an “average” temperature for the “arctic”.

        Hu M. makes a cogent point in his comment about finding break points at the end of the time series. Finding shorter-length break points depends on the value, h, used in R, which defines the minimum segment length. I used 5, which corresponded to 5 data points or 50 years. I also used h = 3 in a trial for the average of the 23 proxies and obtained the same break points as for h = 5. But to be honest, I would have to understand the entire procedure better, or test it empirically, to say that my calculations would not have missed an end-of-series break point of 50 years in length.

        I also agree, I think, with the poster PaulM, who suggests that pasting line segments onto a time series might incorrectly imply to the viewer that the series was made up of a number of linear trends. That was not my intent.

        The break point calculation in R is described as:

        All procedures in this package are concerned with testing or assessing deviations from stability in the classical linear regression model

        y_i = x_i’ b + u_i

        In many applications it is reasonable to assume that there are m breakpoints, where the coefficients shift from one stable regression relationship to a different one. Thus, there are m+1 segments in which the regression coefficients are constant, and the model can be rewritten as

        y_i = x_i’ b_j + u_i (i = i_{j-1} + 1, …, i_j, j = 1, …, m+1)

        where j denotes the segment index. In practice the breakpoints i_j are rarely given exogenously, but have to be estimated. breakpoints estimates these breakpoints by minimizing the residual sum of squares (RSS) of the equation above.
        The foundation for estimating breaks in time series regression models was given by Bai (1994) and was extended to multiple breaks by Bai (1997ab) and Bai & Perron (1998). breakpoints implements the algorithm described in Bai & Perron (2003) for simultaneous estimation of multiple breakpoints. The distribution function used for the confidence intervals for the breakpoints is given in Bai (1997b). The ideas behind this implementation are described in Zeileis et al. (2003).

        The algorithm for computing the optimal breakpoints given the number of breaks is based on a dynamic programming approach. The underlying idea is that of the Bellman principle. The main computational effort is to compute a triangular RSS matrix, which gives the residual sum of squares for a segment starting at observation i and ending at i’ with i < i’.

        Given a formula as the first argument, breakpoints computes an object of class “breakpointsfull” which inherits from “breakpoints”. This contains in particular the triangular RSS matrix and functions to extract an optimal segmentation. A summary of this object will give the breakpoints (and associated) breakdates for all segmentations up to the maximal number of breaks together with the associated RSS and BIC. These will be plotted if plot is applied and thus visualize the minimum BIC estimator of the number of breakpoints. From an object of class “breakpointsfull” an arbitrary number of breaks (admissable by the minimum segment size h) can be extracted by another application of breakpoints, returning an object of class “breakpoints”. This contains only the breakpoints for the specified number of breaks and some model properties (number of observations, regressors, time series properties and the associated RSS) but not the triangular RSS matrix and related extractor functions. The set of breakpoints which is associated by default with a “breakpointsfull” object is the minimum BIC partition.

        Breakpoints are the number of observations that are the last in one segment, it is also possible to compute the corresponding breakdates which are the breakpoints on the underlying time scale. The breakdates can be formatted which enhances readability in particular for quarterly or monthly time series. For example the breakdate 2002.75 of a monthly time series will be formatted to “2002(10)”. See breakdates for more details.

        From a “breakpointsfull” object confidence intervals for the breakpoints can be computed using the method of confint. The breakdates corresponding to the breakpoints can again be computed by breakdates. The breakpoints and their confidence intervals can be visualized by lines. Convenience functions are provided for extracting the coefficients and covariance matrix, fitted values and residuals of segmented models.
        The log likelihood as well as some information criteria can be computed using the methods for the logLik and AIC. As for linear models the log likelihood is computed on a normal model and the degrees of freedom are the number of regression coefficients multiplied by the number of segements plus the number of estimated breakpoints plus 1 for the error variance. More details can be found on the help page of the method logLik.breakpoints.

        As the maximum of a sequence of F statistics is equivalent to the minimum OLS estimator of the breakpoint in a 2-segment partition it can be extracted by breakpoints from an object of class “Fstats” as computed by Fstats. However, this cannot be used to extract a larger number of breakpoints.

        For illustration see the commented examples below and Zeileis et al. (2003).

        • bender
          Posted Sep 19, 2009 at 2:50 PM | Permalink

          Re: Kenneth Fritsch (#273),

          My point is that I believe the Kaufman authors assume that these proxies all are responding to temperatures above 60N and that these breakpoints are occurring rather randomly in time in most of the proxies

          I would think so. That is the general idea behind multiproxy reconstruction: gather enough samples and the inhomogeneities – whatever their cause – will average out.

        • bender
          Posted Sep 19, 2009 at 3:30 PM | Permalink

          Re: Kenneth Fritsch (#273),
          Follow-up.
          .
          If proxy response to climate is monotonic over the range of the study (2000 years, supposedly dominated by orbital forcing) then this is the time-scale over which proxy spatial “responsiveness” should be measured. If orbital forcing does in fact dominate over that time period then it is unlikely that proxy correlations with climate will be restricted to the “local” spatial scale. The proxy is, of course, responding to “local” climate. But over that large a time-scale (where global-scale forcings supposedly dominate) the local signal is going to scale up, to be correlated with a much larger-scale climatic signal.
          .
          That is why the GCM is important in this study. It is the GCM output that (they think) allows them to assume that natural (and regional) variability across the arctic is minimal compared to the (global) response to orbital forcing.
          .
          If the breakpoints detected in those “proxies” ARE indications of regionally varying climate (natural background fluctuations) within the arctic circle then the GCM may be under-representing the natural variability. But my bet is that at that time scale the arctic is incredibly well-teleconnected. One caveat however. The authors sample only the peripheral land mass of the region. There is going to be way more climate regionality at the periphery than at the pole. So some of those breakpoints MAY be regional signal variability.
          .
          Group the breakpoints by region. Any pattern?
          .
          The authors only report on PC1 – the cooling trend. The higher PCs may reflect regional departures from the global mean and/or the worst of the breakpoints you identified. How do the higher PCs load spatially?
          .
          I am concerned about the lack of intrinsic background variability in both PC1 and the GCM output. This may be a distortion of reality that was overlooked, or downplayed, by the authors.

  149. Posted Sep 17, 2009 at 8:51 AM | Permalink

    RE #227, 228, 233, 240, etc,
    Using the third (areally weighted) column in the CRUTEM3 60N-90N file that Roman compiled, I get the following decadal averages, using both “0” decades (like 1900-1909) and “1” decades (like 1901-1910):

    After 1940, there are visible differences between the two. Comparing this to the thick black line in Kaufman’s Figure 2, it seems clear that he was using “0” decades, at least to compile his temperature series. This should make replication easier.
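
    For anyone replicating, the two conventions are one line each in R (x and yr being hypothetical names for an annual JJA series and its years):

    dec0 = tapply(x, 10*floor(yr/10), mean)        #"0" decades: 1900-1909, ...
    dec1 = tapply(x, 10*floor((yr - 1)/10), mean)  #"1" decades: 1901-1910, ...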

    • RomanM
      Posted Sep 17, 2009 at 10:52 AM | Permalink

      Re: Hu McCulloch (#254),

      I agree with your assessment that the “0” decade years were used in the Kaufman calculation although there do appear to be one or two minor differences between the decade 0 graph and Kaufman’s Fig. 2 temperature plot.

      By the way, I have looked at the KNMI procedures and been able to establish some facts about what is done in their calculations:

      – The Crutem3 data set used by KNMI is identical to that available from the CRU web site.

      – I have been able to duplicate their procedure for “adjusting” for missing monthly values when averaging gridded anomalies in the same latitude band (e.g. 60N-65N).

      -They seem to use the same relative weights as I do for combining different latitude grid cells within the same longitudinal 5 degree band in the case where there are no missing monthly values.

      However, I am not sure yet how they combine different latitude grid cells when there are missing months in one or more grid cells. I will continue to look into that.

  150. bender
    Posted Sep 17, 2009 at 6:52 PM | Permalink

    But ask an expert – like Tiljander.

  151. Posted Sep 18, 2009 at 7:56 AM | Permalink

    RE Jeff Id #256, commenting on Ken Fritsch, #253,

    I’m confused as I haven’t had time to keep up. Can you tell me what the meaning of the break points is?

    I’d interpret these break points just as red flags that should be checked out to see if the series has some sort of discontinuity in the way it was constructed that should be taken into account. They may be valid indicators of AGW, volcanic eruptions or solar extrema; they may be totally random noise (not necessarily “demonic” per bender #257) that just has to be lived with; or perhaps they indicate a specific problem that should disqualify the series.

    I’m not sure how R’s breakpoint routine works, but it probably does something like a multi-break generalization of the Goldfeld-Quandt breakpoint test for breaks in a straight line trend model. (As I understand it, Goldfeld-Quandt in turn modifies the Chow switching regression critical values to allow a single breakpoint to be dictated by the data.)
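
    For what it's worth, strucchange has both single-break and multi-break tools; a minimal sketch, reusing x and year from Kenneth's script in #253:

    library(strucchange)
    fs = Fstats(x ~ year)              #sup-F (Chow-type) statistics over candidate break dates
    sctest(fs)                         #significance of the best single break
    bp = breakpoints(x ~ year, h = 5)  #Bai-Perron search for multiple breaks
    summary(bp)                        #RSS and BIC for each number of breaks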

    In the case of Loso’s Iceberg Lake (#4), for example, a CA reader has alerted Steve that there was a 26 meter drop in Iceberg Lake circa 1957-58, presumably because of an abrupt break in the ice dam. Ken’s series 4 does show a pronounced “breakpoint” near the end. It falls short of 1960, but perhaps this is only because of the 5-point minimum that Ken imposed in searching for regimes.

    The existence of “breakpoints” should not in itself disqualify a series. However, they may lead to the discovery of special problems like this level drop. If the sudden surge in varve thickness at that time could easily have been caused by the change in lake level rather than by an otherwise unnoticed hike in temperatures at that precise time, the series should be dropped because of the special circumstance, but not directly because of the statistical “breakpoint”.

    It would be interesting to hear from some varvologists, over on the Iceberg Lake thread, as to whether the level change could or could not have caused the sudden change in the series.

    • bender
      Posted Sep 18, 2009 at 8:05 AM | Permalink

      Re: Hu McCulloch (#259),

      series should be dropped because of the special circumstance

      In my parlance “demonic intrusions” are “special circumstances” leading to uninformative (i.e. non-climatic) non-stationarities. That’s the first thing I would investigate, for each and every breakpoint in each and every series.

      they may be totally random noise (not necessarily “demonic” per bender #257)

      Demonic noise may be random, just severely non-normal. Who was it who mentioned Lévy distributions? They’re random, but wild (i.e. demonic).

  152. Posted Sep 18, 2009 at 8:22 AM | Permalink

    Wow I cross posted with a half dozen people. Thanks Hu, I’m unfamiliar with methods of breakpoint analysis in data. It looks like there’s some reading to do.

  153. Posted Sep 18, 2009 at 9:25 AM | Permalink

    RE #253, 256:
    Roman, your area-weighted CRUTEM3 60N-90N is definitely not the same one Kaufman et al used.

    I attempted to replicate their basic calibration equation, in which they regressed the average of the 19 series that extended to 1980 on the temperature series T.

    I get P = 1.8973T + .750, whereas they got P = 2.079T + .826. Comparing the scatterplot below to their figure S3 in the SI (which I can’t get to download at the moment — maybe Science is busy) shows that I have the same “P” series they are using (to within eyeball precision), but definitely not the same “T” series.

    Do you get the same discrepancy?

    Any chance you could do a CRU-CRU vs KNMI-CRU (and perhaps now Kaufman-CRU) post this weekend?

    PS: I am of course looking at the JJA average summer temperatures.
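
    In R the check is a one-liner; a sketch with hypothetical names (P = decadal mean of the 19 long proxies, T = the decadal JJA CRUTEM3 60N-90N series, both over the 1860-1980 calibration decades):

    fit = lm(P ~ T)            #Kaufman-style calibration regression
    coef(fit)                  #compare with their P = 2.079 T + .826
    plot(T, P); abline(fit)    #scatterplot to compare with their Figure S3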

    • RomanM
      Posted Sep 18, 2009 at 1:25 PM | Permalink

      Re: Hu McCulloch (#264),

      I sort of knew that there were differences between K’s temperatures and both of the series that I calculated from CRU, and I intend to go back and look at it further. I should have some time in the next two days.

      Can you give me some details on what you think needs to be included in such a post?

  154. bender
    Posted Sep 18, 2009 at 10:13 AM | Permalink

    Scratch my comment about correlation with PC1. The reason Yamal correlation with PC1 is only 0.22 is simply because PC1 doesn’t extend into the 20th c blade!

  155. bender
    Posted Sep 18, 2009 at 10:36 AM | Permalink

    PC1 is interpreted as the (mostly) orbitally-forced cooling trend 1 AD up to 1900 AD. If variation is attenuated in the proxy recon then the idea that orbital forcing is the dominant source of variation may be incorrect; natural variation could be higher, in which case modern vs. historical comparison is not justified.

  156. bender
    Posted Sep 18, 2009 at 10:39 AM | Permalink

    That the PC1 trend and variation match those of a GCM is not particularly impressive given GCMs’ known problems in simulating natural variability (see lucia’s Blackboard). The autocorrelated noise component (due to ENSOs etc.) may be underestimated in each.

  157. bender
    Posted Sep 18, 2009 at 10:46 AM | Permalink

    Uncertainty that goes into the estimates in Fig 4 data is totally missing from the calculation. So head-to-head comparing an outlying 20th century (a single data point) to an orderly cooling trend over the previous 19 centuries may not be justified. The true uncertainty on that 19-century trend may be quite huge. (Recall there is error in every step of the calculation, from calibration to reconstruction.)

  158. Posted Sep 18, 2009 at 1:39 PM | Permalink

    RE Roman #269,
    I just had in mind an explanation of how you computed your series, a link to it, your graph in #240, plus graphs like mine in 233. (You can just use mine if you like.) I’ve e-mailed Kaufman and co-authors asking if they have any idea why your numbers are giving a different regression line in #264 (and also inviting comments on my “Invalid Calibration” post).

  159. Posted Sep 18, 2009 at 7:10 PM | Permalink

    Here is Kaufman’s Figure S3 for comparison to #264 above:

  160. Posted Sep 20, 2009 at 7:56 AM | Permalink

    RE Paul M, #261,
    The article you cite, “How the IPCC invented a new calculus” , at http://globalwarmingquestions.googlepages.com/howtheipccinventedanewcalculus , is a brilliant condemnation of Fig. 1 of IPCC WG1 FAQ 3.1 on p. 253, worthy of inclusion in Huff’s classic How to Lie with Statistics, if he were still around.

    But unfortunately, it is neither signed nor dated. On the homepage linked to your handle, you indicate that many of the ideas came from CA or Pielke Sr. Can you give us a source on this?

    Thanks!

  161. Posted Sep 20, 2009 at 9:09 AM | Permalink

    RE #264, 272 —
    Roman, the visual match with Kaufman’s Figure S3 is actually almost perfect (apart from a small scale shift) using the KNMI data rather than using the data you computed directly from the CRUTEM3 file:

    However, the regression line still is not quite the same – they got P = 2.079 T + .826, while I’m getting P = 2.235 T + .750 using KNMI. If they had, e.g., changed the base year, that should only affect the intercept, not the slope.

    • romanm
      Posted Sep 20, 2009 at 3:31 PM | Permalink

      Re: Hu McCulloch (#277),

      I think we are getting closer to it. There are several ways to calculate the “average temperature” when there are many missing values.

      The first time I calculated the weighted averages, I calculated an average for each latitude band first and then used weights to combine the bands. This time, I calculated the average of ALL grid cells simultaneously from available cell values for a given time, using weights which are dependent on the latitude of the cell. This does not give the same answer as the previous method (or KNMI, since I did not infer the missing values using “climatology”). The result using this series was pretty good:

      The R-square is just about right on, and the slope and intercept are close as well. I have been using the multidecimal data from the Excel file instead of the rounded values in the text file. The difference is in the fourth decimal place. If you want the temperature series, I can upload it later.
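
      A sketch of the contrast, for one monthly 72 x 6 slice tm with the band weights w as in my earlier comments (the two schemes agree when no cells are missing and diverge as cells drop out):

      #(1) band means first, then combine the bands by weight
      band.first = function(tm, w = rev(wts)) {
        bm = colMeans(tm, na.rm = TRUE)   #mean of the available cells in each band
        ok = !is.na(bm)                   #drop bands with no data at all
        sum(w[ok]*bm[ok]) / sum(w[ok])
      }
      #(2) all available cells at once, each weighted by its band and normalized
      #by the weights actually used - the version behind the fit shown above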

  162. Posted Sep 20, 2009 at 6:20 PM | Permalink

    RE Roman #277,
    Thanks! It sounds like this is plenty close to what Kaufman did. Please do upload the new series.

    Confession: On my first try at replicating the Kaufman regression, I got entirely different results — because I forgot to reverse the time dimension of the proxies, which run from newest to oldest, to match the time dimension of the CRU indices, which run from oldest to newest. Fortunately, I caught this blunder myself before rushing to press!

    How does your new CRU series compare to KNMI?

    • romanm
      Posted Sep 21, 2009 at 4:54 AM | Permalink

      Re: Hu McCulloch (#279),

      I have uploaded a txt file to CA with the new weighted temperature series. It contains two columns, time and temperature (with no header variable names). I think it is pretty close to what they used for the paper. The difference between KNMI and this series looks similar to the graph in comment #240, since no imputation is done for missing grid cell values.

      Yeah, I caught myself having to reverse the series on several occasions as well, but, unlike some climatologists,at least I didn’t flip any of the proxies upside down. 😉

  163. Posted Sep 21, 2009 at 8:38 AM | Permalink

    Re RomanM #278, 280,

    The first time I calculated the weighted averages, I calculated an average for each latitude band first and then used weights to combine the bands. This time, I calculated the average of ALL grid cells simultaneously from available cell values for a given time using weights which are dependant on the latitude of the cell.

    This seems to be what Kaufman did, so let’s go with your new file, http://www.climateaudit.org/wp-content/arctempwght.txt, for JJA CRUTEM3 60N-90N, with “0 decade” averaging.

    It occurs to me now that in fact this will give less weight to isolated stations the farther north they are, since an isolated station represents its entire grid cell, and these get smaller as you move north. Before 1896, fewer than 30% of land cells had any stations at all, to judge from KNMI, so that a large proportion of those with any stations had only one. If there were several stations per grid cell, this wouldn’t be an issue, since then cells will pick up stations in proportion to their areas. If this is exactly what Kaufman et al did, their index in fact underrepresents northern latitudes.

    Be that as it may, I’m just trying to replicate Kaufman, not create an ideal average of CRU data.

  164. MikeN
    Posted Oct 2, 2009 at 9:46 AM | Permalink

    Bender, how did you invert Tiljander? Is it OK to just negate the values in the SI?

  165. Greenfyre
    Posted Oct 10, 2009 at 8:38 AM | Permalink

    RotFL
    Of course the data “flips” when it is plotted as anomalies. The Xray density is scaled from high to low, while the anomaly data is scaled low to high; after conversion and scaling the accurate plotting would naturally lead to a reverse image …. the only way for it to be otherwise is to deliberately reverse the actual values to their opposite.

    McIntyre belabours “Tiljander’s original orientation” as if it means something … as long as the plot is accurate to the axes it doesn’t matter a [snip – language] (this is what, Grade 3 math?), just as Kaufman pointed out to McIntyre.

    and then he cherry-picks 3 sites and pretends it is a representative survey??? who is this supposed to fool? brain damaged gerbils?

    • romanm
      Posted Oct 11, 2009 at 6:34 AM | Permalink

      Re: Greenfyre brain damaged gerbil (#283),

      Of course the data “flips” when it is plotted as anomalies.

      Say what??? …and “the accurate plotting would naturally lead to a reverse image“??? Your post makes no mathematical or logical sense. The orientation is an active choice, not some sort of natural result of simple conversion and scaling.

      McIntyre belabours “Tiljander’s original orientation” as if it means something …

      You would think the people who collect and work with the data might have an inkling of what the “correct” orientation might be.

      Does the orientation matter? In an analysis where the proxies are simply averaged to produce the reconstruction, of course it does.

      Imagine a study where someone is trying to show that a system for weight loss works. They measure people’s weight each week and average these results. Inconveniently, a lot of the people are gaining weight. No problem. Convert the weights to anomalies and plot them “accurately” – the weight gainers will of course produce a “reverse image” and now we can demonstrate just how marvelously our product works. Definitely ROTFL!
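
      To make it concrete, here is a minimal sketch (toy numbers of my own) showing that converting to anomalies subtracts a constant and cannot flip a series; only negating the values does that:

          import numpy as np

          weight = np.array([80.0, 82.0, 85.0, 88.0, 91.0])  # a weight gainer
          anomaly = weight - weight.mean()                   # still trending up
          flipped = -anomaly                                 # an active choice

          assert np.all(np.diff(anomaly) > 0)  # anomalies keep the orientation
          assert np.all(np.diff(flipped) < 0)  # only negation reverses it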

      By the way, it was me who snipped the grade three language in your post.

  166. MikeN
    Posted Oct 20, 2009 at 7:24 PM | Permalink

    Kaufman says he has issued a correction to Science.

  167. Hu McCulloch
    Posted Oct 21, 2009 at 8:41 AM | Permalink

    RE Mike N #287,
    Can you provide a link to where he says this?

  168. Jean S
    Posted Oct 21, 2009 at 10:46 AM | Permalink

    Have I understood correctly that ALL THREE Finnish series (Lake Nautajärvi (Ojala et al), Lake Korttajärvi (Tiljander et al), Lake Lehmilampi (Haltia-Hovi et al)) are upside-down, not only Lake Korttajärvi?

  169. UC
    Posted Oct 21, 2009 at 12:32 PM | Permalink

    Kaufman09:

    For trees, only three records extend back before 720, which is not enough to determine a reliable trend

    Those three tree records have nonsignificant trends, all above the paper’s cooling trend of -0.22 °C per millennium. Would more trees help? Possibly.

    All records are positively correlated with PC1, indicating that the trends are predominantly of the same sign over the 1900-year period

    Correlations with standardization-period PC1:

    0.4640
    0.4925
    -0.2140
    -0.6343
    0.0241
    0.5943
    -0.0562
    -0.3872
    0.2144
    -0.2966
    -0.2189
    -0.1461
    0.1156
    0.2440
    -0.0157
    0.0274
    -0.7111
    0.0809
    -0.0479
    0.7591
    0.5040
    -0.2960
    -0.4190
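
    A quick check of the “positively correlated” claim against the values listed above (plain Python; the numbers are transcribed from this comment):

        corrs = [0.4640, 0.4925, -0.2140, -0.6343, 0.0241, 0.5943, -0.0562,
                 -0.3872, 0.2144, -0.2966, -0.2189, -0.1461, 0.1156, 0.2440,
                 -0.0157, 0.0274, -0.7111, 0.0809, -0.0479, 0.7591, 0.5040,
                 -0.2960, -0.4190]
        print(sum(c < 0 for c in corrs), "of", len(corrs), "are negative")  # 12 of 23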

    • Jean S
      Posted Oct 21, 2009 at 2:09 PM | Permalink

      Re: UC (#290), UC (#291),
      did you try what happens if our dear Finnish climate is not standing on her head? 😉 The Lake Lehmilampi series [OT: who the heck invented these lake names?!!] seems to be one of the fifteen series going into the “PC analysis”.

      It is interesting that it did not occur to Kaufman et al that the Lehmilampi series needs to be flipped, when even the abstract of the reference (Haltia-Hovi, E., Saarinen, T. and Kukkonen, M. 2007. A 2000-year record of solar forcing on varved lake sediment in eastern Finland. Quaternary Science Reviews 26: 678-689) states:

      A high-resolution study was performed on varved sediments from Lake Lehmilampi in eastern Finland. Varve data was collected by digital image analysis using standard 1.8 mm thick samples impregnated in epoxy and X-rayed. Climatic variability is imprinted on varve properties (varve thickness and accumulation of mineral and organic matter) during the last 2000 years. The cumulative counting error of the varve record is estimated as 2.3%. Qualitative comparison of varve parameters and residual Δ14C constructed from tree-rings revealed close correspondence between the two records, suggesting solar forcing on lake sedimentation. Classical climatic periods of the last millennia, Medieval Climate Anomaly (1060–1280 in the varve record) and Little Ice Age (cooler phases culminating in 1340, 1465, 1545, 1680, 1850 and also in 1930 in the varve record) are clearly evident in the varve record. At present the physical link between solar activity levels and lake sedimentation has not been established.

      and the series looks like this (plot from co2science.org, who have added the label “MWP”).

      Maybe, indeed, Medieval Climate Anomaly refers nowadays to Medieval Windy Period 😉

      • UC
        Posted Oct 21, 2009 at 2:50 PM | Permalink

        Re: Jean S (#292),

        did you try what happens if our dear Finnish climate is not standing on her head? 😉 The Lake Lehmilampi series [OT: who the heck invented these lake names?!!] seems to be one of the fifteen series going into the “PC analysis”.

        SI Table S1

        21 Lake Lehmilampi Finland Varves- thickness 0.05

        for our Lake Cowpond would become

        21 Lake Lehmilampi Finland Varves- thickness -0.05

  170. UC
    Posted Oct 21, 2009 at 1:57 PM | Permalink

    Without the Lake Nautajärvi and Lake Korttajärvi proxies, the standardization period extends to 980-1940. And let’s send Yamal to the penalty box for 2 minutes:

    No need to change the title of the paper!

  171. Tony Hansen
    Posted Oct 21, 2009 at 9:15 PM | Permalink

    JeanS re ‘who the heck invented those lake names’.
    Perhaps we should be thankful the lakes are not in Wales.

  172. Dr Michael Koch
    Posted Nov 13, 2009 at 5:18 AM | Permalink

    snip – you’re a new poster and welcome here. However, blog policies request that posters refrain from being angry, don’t “vent” and avoid talking about policy. There are a couple of reasons: it deflects from scientific discussion and it detracts editorially from the content for lurkers. Thanks.

  173. DR_UK
    Posted Dec 7, 2011 at 4:02 PM | Permalink

    Well, Kaufman was warned!…

    http://di2.nu/foia/foia2011/mail/0900.txt

    date: Thu, 28 May 2009 12:22:57 -0700
    from: Jonathan Overpeck
    subject: Re: Your Science manuscript 1173983 at revision
    to: Darrell Kaufman ,[…]

    Hi Darrell et al – got a chance to read the paper and comments enroute to Atlanta. Here’s some feedback.
    General – comments are modest and should be easy to accommodate. That said, I think we have to take the comments of Rev 2 seriously. I’m guessing that its Francis Zwiers and in any case, he knows what he’s talking about regarding stats.

    Also – IMPORTANT – I’d make sure we check and recheck every single calculation and dataset. This paper is going to get the attention of the skeptics and they are going to get all the data and work hard to show were we messed up. We don’t want this – especially you, since it could take way more of your time than you’d like, and it’ll look bad. VERY much worth the effort in advance. […]

    Interesting that ‘the skeptics’ are the impetus for getting the science right – kudos to Steve, Hu, et al. for that. Clearly the climate science community was not expected to check anything.

    [apologies if this is the wrong place to post this, so long after the rest of the thread]

  174. bender
    Posted Sep 3, 2009 at 3:11 PM | Permalink

    Re: Dot Earth Blog – NYTimes.com (#24),

    Steve McIntyre, seeing a familiar climate “hockey stick” curve …

    He didn’t just “see” it. He predicted it would appear.

10 Trackbacks

  1. […] soot, too). [UPDATE, 4:45 pm: Steve McIntyre, seeing a familiar climate “hockey stick” curve, has weighed in with some complaints about data […]

  2. […] Steve McIntyre digs into more proxy hijinx from the usual suspects.  This is a pretty good summary of what he tends to find, time and again in these studies: […]

  3. […] Climate Audit: Kaufman and Upside-Down Mann […]

  4. By Arctic Temperatures: Not So Hot - Wry Heat on Sep 4, 2009 at 10:59 AM

    […] McIntyre of Climate Audit discusses the new study. “The problem with these sorts of studies is that no class of proxy (tree ring, […]


  5. […] HS blade isn’t used, but the Little Ice Age and MWP are flipped over, a point made at CA here Kaufman and Upside Down Mann. Two other Finnish paleolimnology series also appear to have been used upside down by […]

  6. […] by Steven McIntyre (ClimateAudit) The Hockey Team’s upside-down method (ClimateAudit) Study by Lindzen and […]

  7. […] see here. Since they only needed to correct four out of 23 proxies, there is no need to name those who pointed out errors. There is a small improvement over the draft version though; congratulations Hu! We thank H. […]

  8. By The Kaufman Corrigendum « Climate Audit on Aug 2, 2010 at 11:24 PM

    […] et al (PNAS 2009). On the day that Kaufman 2009 was released, its upside down use was again noted here and a note on the matter sent to Kaufman by email. I invited Kaufman to post a thread at CA and […]

  9. […] 5000+ emails lead to an “out of context” interpretation. Apparently over there in upside down Mann-Tiljander world more information results in less […]



  10. […] whose author is Mia Tiljander and which Michael Mann had already used upside down (see McIntyre 2009). When McIntyre pointed out the error, Kaufmann angrily brushed him off, and when he then issued a correction, […]
