Varves: To Log or Not to Log

The majority of Kaufman’s varvochronology proxies are various functions of varve thickness – which, if anything, seem more problematic than sediment BSi.

While Kaufman’s offering memorandum to NSF promised consistency, the handling of varve thicknesses in the various selections seems to be anything but. Kaufman et al 2009 gives no hint of the varied functional forms used to create the varve thickness proxies. While none is quite as exotic as the Hallet Lake cube root of the product of BSi and flux_all, there is an intriguing variety of forms.

1. The Blue Lake temperature reconstruction is a linear function of logged varve thickness (unadjusted for density). “Turbidites” are a problem in these records, and Bird et al 2009 state:

Twenty-one anomalously thick layers, ranging from 1.1 to 18.14 cm thick were also identified in the cores (Fig. 4). Based on sedimentological characteristics (i.e. presence of graded bedding) the layers were identified as turbidites and excluded from the thickness record and varve chronology.

There isn’t any mention of tephra, though tephra are an important feature of some lake sediments. Cross-dating is via Cs137, Pb210 and C14 (per the NSF abstract).

4. Loso’s Iceberg Lake varves are linear in varve thickness (no logging), also unadjusted for density. They removed 82 varve measurements and combined the remaining measurements into the master chronology. (As noted in the Loso post, there is considerable evidence of a 1957 inhomogeneity that was not adjusted for.) Loso:

Scattered among the other well-dated sections are isolated strata that record episodic density flows (turbidites), resuspension of lacustrine sediment by seismic shaking and/or shoreline-lowering events, and dumping of ice-rafted debris. The case for excluding such deposits from climatologically-oriented varve records has been made elsewhere (Hardy et al., 1996), and we accordingly removed measurements of 82 individual laminae from these other sections. Those removed (mostly turbidites) include many of the thickest laminae, but sediment structure (not thickness) was in all cases the defining criterion for exclusion from the master chronology.

Again, no mention of tephra. No mention of Cs137, Pb210 or C14 cross-dating.

Note: Cascade Lake, Alaska, another Kaufman student site, was not used. Kathan stated:

The cores are composed of rhythmically laminated mud, with an average lamination thickness of 0.4 cm; these laminations do not represent annual sedimentation (e.g. varves).

Kathan observed the presence of tephra and used a couple of known tephra for her age-depth relationship.

6. Lake C2, Ellesmere Island. This is an old 1996 data set (not collected in the ARCUS2K program) for which there is no archive. (Bradley of MBH was a co-author; Bradley assured the House Energy and Commerce Committee that he had archived all his data, but I guess that he forgot about Lake C2.) [Update, Sept 23 3:20 pm: Scott Lamoureux of Queen’s University near Toronto has just emailed me the C2 annual data, noting that it had originally been archived with NCDC and been inadvertently removed at some point, a situation he is now redressing. I’ve posted the data up at CA/data/kaufman/C2_annual.dat.] They made a varvochronology explicitly borrowing from tree ring methods, first “standardizing” the data against “expected” values, which, in this case, appears to mean dividing the varve width by a fitted linear trend. Bradley observes that one trend goes up and one goes down – an inconsistency that didn’t seem to bother anyone at the time, nor seemingly Kaufman in its present iteration. (Without the original data, it is of course impossible to determine exactly what they did.) No mention of Cs137, Pb210 or C14 cross-dating. (Lamoureux and Bradley 1996; Hardy et al 1996)
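
As a reader aid, here is a minimal sketch of what “dividing the varve width by the linear trend” might look like. This is only my reading of the description, not Lamoureux and Bradley’s code, and the example numbers are invented:

```python
# Sketch of the tree-ring-style "standardization" described above: divide each
# varve width by a fitted linear trend. My reading of the description only --
# not the authors' code; the example numbers are made up.
import numpy as np

def standardize_against_trend(widths):
    """Return varve widths divided by their fitted linear trend ("expected" values)."""
    widths = np.asarray(widths, dtype=float)
    years = np.arange(len(widths))
    slope, intercept = np.polyfit(years, widths, 1)  # least-squares linear trend
    expected = intercept + slope * years             # "expected" width each year
    return widths / expected                         # index ~1 when on trend

# Two invented cores, one trending up and one trending down, as Bradley describes
up_core   = standardize_against_trend([0.40, 0.50, 0.55, 0.70, 0.80])
down_core = standardize_against_trend([0.90, 0.80, 0.70, 0.55, 0.50])
```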

A major turbidite is mentioned in 1957 (the same year as the major turbidite in Iceberg Lake above, presumably a coincidence). Lamoureux and Bradley 1996 mention both a filtered and an unfiltered version – in the filtered version, identified turbidites are replaced by the series mean. This data set is pretty much inaccessible to any analysis.

8. Lower Murray Lake, Ellesmere. This is a new record from Bradley’s program. The varve chronology is archived, but not the varve measurements. The caption to Besonen et al 2008 Figure 7 states that “13 turbidites were replaced with the period average thickness” in one version of the top panel; it looks like the version after adjustment is what was archived. This article also posits that a “clearly erosive” 1990 turbidite event cleanly erased the previous 20 years from the lake record:

the prominent turbidite event recorded in the 1990 varve was clearly erosive. Thus, assuming the 137Cs peak in sample 3 represents ~1963, if we take the centre of the 0.5 cm zone as 1963, it suggests that the varves from ~1970 to 1989 inclusive were eroded by the 1990 turbidite.

As far as I can tell, no erosive turbidite events are accounted for in any other Kaufman series – surprising that one occurs only at this site. In this case, density values are calculated, yielding a mass_flux estimate. Kaufman uses log(mass_flux) in his composite.
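
For what it’s worth, the proxy form as I read it is simple enough to write down; the sketch below reflects that reading, with hypothetical variable names and units, not the authors’ calculation:

```python
# Sketch of the Lower Murray Lake proxy form as I read it: mass flux is varve
# thickness times dry bulk density, and Kaufman's composite then uses
# log(mass_flux). Names and units are my assumptions, not the authors' code.
import numpy as np

def log_mass_flux(thickness_cm, dry_density_g_cm3):
    flux = np.asarray(thickness_cm) * np.asarray(dry_density_g_cm3)  # g/cm^2 per varve
    return np.log(flux)

# Invented example values
print(log_mass_flux([0.12, 0.35, 0.08], [1.4, 1.3, 1.5]))
```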

C14 cross-dating was carried out on only 2 samples (due to unavailability). The dates were older, or much older, than the presumed varve dates. Limited Cs137 and Pb210 dating was carried out due to “cost constraints” (notwithstanding the commitment to NSF to do this work):

Owing to cost constraints, only the top 2.5cm (five samples) of sediment were analysed.

The Cs137 peak was unexpectedly high in the core, leading to the hypothesis of an erosive 1990 turbidite (as opposed to core loss at the top, a possibility that seems at least plausible but which was not discussed – the team being content with the erosive turbidite theory).

9. Big Round Lake, Baffin Island. This varve thickness record goes back only to 980 AD, prior to which the sediment lacked a varve structure. Sand thicknesses (if present) are reported for each varve layer, and the residual is used by Kaufman; a linear function is used (Thomas and Briner 2009 url). Cross-dating here was done by 239+240Pu; no 137Cs or 14C measurements are reported (and none were carried out for some reason, a point confirmed by the authors).

10. Donard Lake, Baffin Island. This is the last North American varve thickness series in Kaufman 2009. It is an older series (2001), predating the ARCUS2K program. Moore et al 2001 does not discuss the handling of turbidites. No radiogenic cross-dating is reported.

Non-Normality
The need for some sort of handling of non-normality is shown by the following diagram: a histogram of the Loso varves (truncated at 60 mm for visibility here, though there are a number of thicker varves) together with the best fits under several different distributions. As you can see, the distribution is far from normal. The log transformation, which is semi-popular among varvochronologists, doesn’t really do the trick either. An inverse gamma distribution fits better than the log-normal (I looked at this because the fat-tailed Levy distribution is a special case of the inverse gamma distribution).
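
For readers who want to repeat the exercise, a minimal sketch of the distribution comparison is below. The input file name is a placeholder, and fixing the location parameter at zero is my own judgment call; this is not the script behind the figure:

```python
# Fit candidate distributions to a varve thickness series and compare them by
# log-likelihood, as in the discussion above. The file name is a placeholder;
# this is a sketch of the comparison, not the script used for the figure.
import numpy as np
from scipy import stats

thick = np.loadtxt("loso_varve_thickness_mm.txt")  # hypothetical input file
thick = thick[thick > 0]

candidates = {
    "normal":        (stats.norm,     stats.norm.fit(thick)),
    "log-normal":    (stats.lognorm,  stats.lognorm.fit(thick, floc=0)),
    "inverse gamma": (stats.invgamma, stats.invgamma.fit(thick, floc=0)),
}
for name, (dist, params) in candidates.items():
    loglik = np.sum(dist.logpdf(thick, *params))
    print(f"{name:14s} log-likelihood = {loglik:.1f}")
```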

Taking a simple average of these sorts of weird distributions (as is done in most of these recons) doesn’t seem like a very diligent way of handling the non-normality. While the functional form of the Hallet Lake BSi flux series is not very attractive, one can sense the underlying motive for trying to do something to coerce the data into a comparison with temperature.
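
A toy calculation illustrates the point about simple averages: with a heavy-tailed distribution, a handful of very thick varves carry a disproportionate share of the mean. The numbers below are simulated, not Loso’s data:

```python
# Toy illustration: in a skewed (log-normal) sample, the thickest 1% of
# "varves" dominate the simple mean. Simulated values only, not actual data.
import numpy as np

rng = np.random.default_rng(0)
sim = rng.lognormal(mean=1.0, sigma=1.0, size=2000)  # skewed pseudo-thicknesses

thickest_1pct = np.sort(sim)[-20:]
print("mean                 :", round(sim.mean(), 2))
print("median               :", round(np.median(sim), 2))
print("share in thickest 1% :", round(thickest_1pct.sum() / sim.sum(), 3))
```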

I presume that varve thicknesses at the other sites (with far less complete archives than Loso’s) have similarly problematic distributions, and that this is one of the reasons for the quirky handling. All in all, we see at least 4 distinct methods (before turbidite adjustment): linear; log of varve thickness; log of varve mass flux; detrended linear. I suspect that there are further differentiae beneath the surface.

As to the question – to log or not to log – my guess (as of today) is that, if these series are to be used, it would make more sense to do a sort of non-parametric mapping of the distribution (handling nugget effects severely) onto a normal distribution – prior to doing any correlations or reconstructions. Of course, there’s another possibility – these things aren’t temperature proxies and shouldn’t be used in major reconstructions – a possibility that we shall examine more as we get a better understanding of varvochronology.
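
To make the suggestion concrete, one way of doing such a mapping is a rank-based normal-scores transform. The sketch below is only an illustration of the idea (the crude tie-handling stands in for the “nugget” problem), not a recommendation drawn from any of the papers:

```python
# Sketch of a rank-based normal-scores transform: map the empirical distribution
# of thicknesses onto a standard normal before any calibration. Ties (e.g. many
# varves at the measurement floor) are handled crudely here via average ranks.
import numpy as np
from scipy import stats

def normal_scores(x):
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x, method="average")  # tied values share an average rank
    p = ranks / (len(x) + 1.0)                   # plotting positions strictly inside (0, 1)
    return stats.norm.ppf(p)                     # standard-normal quantiles
```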

30 Comments

  1. John A
    Posted Sep 23, 2009 at 12:11 AM | Permalink

    I can’t help but wonder how much of the mathematical treatments used are based on physical theory and how much based on opportunistic guessing based on an expectation that some derived parameter or other is related to mean temperature.

    Looking at the reconstituted formula for Hallet Lake:

    mg_{BSi}= 0.465 \times 1.0000624^{3(0.914T+0.99)}

    …I’d love to know the physics text book that that one came out of!

  2. Andy
    Posted Sep 23, 2009 at 1:37 AM | Permalink

    So it’s really a dog’s dinner?

  3. Geoff Sherrington
    Posted Sep 23, 2009 at 2:11 AM | Permalink

    Steve

    Any chance of a reference to this? I don’t have the original paper.

    Big Round Lake, Baffin Island. This varve thickness record goes back only to 980AD, prior to which the sediment lacked a varve structure. Dating here was done by 239+240Pu; no 137Cs or 14C measurements are reported (and none were carried out for some reason, a point confirmed by the authors.)

    One tends to think of plutonium as post-1950, post-reactor material from the interaction of uranium etc. with synthetic neutrons, a process that can also happen in Nature, but not abundantly. Cs-137 is fission material from bomb tests starting in the 1940s.
    ……………………….
    By coincidence, Sydney Australia had an enormous dust storm today, with airborne particulates 1,500 times normal, its origin tracked hour by hour for 1,500 km or more from the dry red centre to the west. I wonder if such effects are accounted for if they happen above the lakes that are the subject of this thread? Or are such events just one more perturbation assumed not to exist or to be unrelated to temperature?

    Steve: Thomas and Briner 2009 url

  4. bender
    Posted Sep 23, 2009 at 3:39 AM | Permalink

    That inset page is telling. Mysterious demonic intrusions. Doing wacky things like introducing opposing trends. When I look at those individual series, that is exactly what I see. No coherent climate signal. (Has anyone done the full set of pairwise correlations yet? How are they distributed?)

    • Geoff Sherrington
      Posted Sep 23, 2009 at 4:21 AM | Permalink

      Re: bender (#4),

      This is to exorcise the mysterious demonic intrusions and recalibrate your mind to the possible pure starting point. You are not alone in missing its elegance lately.

  5. Geoff Sherrington
    Posted Sep 23, 2009 at 3:42 AM | Permalink

    I’ve not been field active in geological sciences for a few years and it might have moved on. However, when I see “turbidite” I think “Sedimentology of Some Flysch Deposits: A Graphic Approach to Facies Interpretation”, the thesis of Bouma in 1962, wherein a sequence of features such as particle size is described for a complete turbidite. Particles that finally make a turbidite are carried by density flow, in water with suspended solids that make it possible to lift larger particles than purer water can. The other watery forms of transport of particles to sediments are, broadly, tractional and frictional flow, and these have a different final appearance. Again broadly, turbidites can have a graded size sequence with large particles at the base, too large to be transported by purer water.

    Turbidites were most commonly associated with deep water, such as in the delta fans of large rivers and the slopes of continental shelves. Sometimes, the flow leading to turbidites is ascribed to shock such as earthquakes produce.

    Here in Australia we do not have the abundance of high altitude, glacier-related shallow lakes that pepper Canada, for example. So we are less erudite. However, I would be grateful if a contemporary geologist working on lakes like those described above would briefly explain the mechanisms of creation of turbidites and whether the definition of a turbidite sequence has widened since the seminal work of Bouma.

    Although we are getting perilously close to “thixotropy”, I will not invoke it here but will mention “rheopexy”, this being not a French taunt like Monty Python’s, but a Dutch one to recognise Bouma.

  6. curious
    Posted Sep 23, 2009 at 4:36 AM | Permalink

    Nice graphic – but do you have a peer review lit. reference for it? 🙂

  7. MarcH
    Posted Sep 23, 2009 at 4:47 AM | Permalink

    Use of varve thickness as a temperature proxy appears to be highly problematic. Seasonal/annual variation in sediment supply in the source regions feeding lakes is affected by a multitude of factors. Do Kaufman et al explain how they can separate out such minor variations in temperature from all the other influences that affect varve thickness? Any refs on this, anyone?

    Geoff S, turbidites in lakes are not unknown and appear to share similar bed features with the better-studied marine equivalents. However, at a max depth of 45 m, I agree Lower Murray Lake does appear a little shallow for “true” turbidites. Perhaps it’s a slump deposit.

  8. MarcH
    Posted Sep 23, 2009 at 5:05 AM | Permalink

    Limitations of using varves at Lower Murray Lake are indicated here, from Besonen et al 2008. It draws the string to breaking point on the link to TEMPERATURE.

    “However, it is extremely difficult to quantitatively relate varve thickness, or other sedimentary characteristics, directly to meteorological data because the nearest long term weather stations (Eureka and Alert) are 320 and 180 km from the lake, respectively, and the closer station is in a quite different meteorological setting (on the Arctic Ocean coast, where fog and low cloud are common in summer months). Furthermore, sediment erosion beneath the ~1990 turbidite (which we can only estimate as roughly 20 years) greatly limits comparison of the varves with regional meteorological data. We are thus constrained to simply hypothesize that the varve record represents summer temperature conditions in the Murray Lakes watershed, based on our observations and experience in similar settings. We can also assess the climatic implications of the LML varves, by comparing them with other proxy records, for which climate links have been suggested.” “…all proxy records are noisy, and their links to climate are generally poorly defined, in large part because of the severe limitations imposed by extremely sparse meteorological data.”
    DOI: 10.1177/0959683607085607, page 179.

  9. John A
    Posted Sep 23, 2009 at 5:12 AM | Permalink

    Would anyone like to hazard a guess as to the stationarity of the relationship between carve thickness and summer mean temperature?

    • Dave Dardinger
      Posted Sep 23, 2009 at 7:47 AM | Permalink

      Re: John A (#10),

      Would anyone like to hazard a guess as to the stationarity of the relationship between carve thickness and summer mean temperature?

      From what I’ve observed of the carve thickness in late November, it’s more closely related to the speed of movement from bird to plate than the summer mean temperature. Now the plumpness of corn on the cob might relate to summer temperature but here you have to worry about an inverted U relationship since higher temperatures can be related to drought and thus smaller ears.

      • John A
        Posted Sep 23, 2009 at 4:44 PM | Permalink

        Re: Dave Dardinger (#14),

        Memo to self: stop writing comments late at night on your iPhone. That way you won’t look stupid.

        Re: Steve McIntyre (#20),

        I can’t help wondering if Kaufman has missed a trick. If it was me, I’d be looking at calibrating variance with inverted temperature, the higher the variance – the lower the mean temperature. That would at least make meteorological sense since warm periods are generally periods of reduced seasonal variability, especially in the Arctic.

  10. Charlie
    Posted Sep 23, 2009 at 6:20 AM | Permalink

    “Varves: To Log or Not to Log” isn’t the biggest problem.

    The big problem seems to be “Varves: To Archive or Not to Archive”.

    It’s unfortunate that even some of the records that do get archived are modified before archiving.

    Digital storage isn’t all that expensive. I don’t see any reason that the adjusted and prior-to-adjustment data can’t both be archived.

    It’s too bad that the NSF doesn’t take data archiving seriously enough to make that a condition for future grants.

  11. Jeff Id
    Posted Sep 23, 2009 at 7:22 AM | Permalink

    The big problem is varves (like noodles) are not a thermometer. If they’re guessing at how to make it one, the decision is already made.

  12. Steve McIntyre
    Posted Sep 23, 2009 at 7:40 AM | Permalink

    I’ll expand this survey a little. Interesting things like Tiljander need to be mentioned – Tiljander uses varves in an opposite orientation to Kaufman.

  13. Kenneth Fritsch
    Posted Sep 23, 2009 at 8:23 AM | Permalink

    In attempting to keep the big picture in mind, can I assume that these different models for relating varve properties to temperatures are developed with the calibration against the instrumental records in mind?

    I have seen otherwise intelligent people use square and cube roots of stock properties and use the ranks 2-5 eliminating 1 for investment strategies. After the fact, these very intelligent people are good at rationalizing why these strange models “work” and the strange variables make sense.

    They have no concept of the problems arising from initial overfitting of the model with “in-sample” data, or of the truer test of the model being out-of-sample. These strange and different varve models appear to me (without some a priori rationale for them) to be overfitting.

    Would we expect these models, if overfit in-sample to agree with one another out-of-sample? No. And that appears to be the case.

    • bender
      Posted Sep 23, 2009 at 9:46 AM | Permalink

      Re: Kenneth Fritsch (#15),

      They have no concept of the problems arising from initial overfitting of the model with “in-sample” data or the truer test of the model being out-of-sample.

      And do they go broke as often as you’d expect?

  14. bender
    Posted Sep 23, 2009 at 10:27 AM | Permalink

    Reminds me of the Chicago Bears, Ken. Always wanting to believe the next new guy is ready to be the next Jim McMahon. Too ready to demonize the guy going out. That kind of greed and impatience is always justly rewarded. Sometimes you have to go with the bargain known-quantity, and keep an open mind about the future. If you don’t invest in youth you’ll always be harvesting the low-hanging fruit that’s about to rot.

  15. Steve McIntyre
    Posted Sep 23, 2009 at 1:25 PM | Permalink

    Scott Lamoureux of Queen’s University near Toronto has just emailed me the C2 annual data, noting that it had originally been archived with NCDC and been inadvertently removed at some point, a situation he is now redressing. I’ve posted the data up at http://www.climateaudit.org/data/kaufman/C2_annual.dat

  16. Steve McIntyre
    Posted Sep 23, 2009 at 1:41 PM | Permalink

    Lamoureux provided both the filtered and unfiltered versions. Kaufman used a variant of the filtered version as illustrated by the comparison below between a kaufmanization of the original (filtered) version and the archived Kaufman version of this data set. I’m not going to try to match this further – this sort of slight discrepancy might be a transposition by a year or two somewhere.

    I’ll see what the explanation for the “warm” Little Ice Age is.

  17. bender
    Posted Sep 23, 2009 at 1:56 PM | Permalink

    Just invert it!

    • Scott Brim
      Posted Sep 23, 2009 at 6:17 PM | Permalink

      Re: bender (#21)

      Just invert it!

      Here you go:
      It’s upside down cake. Plus, you can have your cake and eat it too.

  18. Kenneth Fritsch
    Posted Sep 23, 2009 at 6:24 PM | Permalink

    They have no concept of the problems arising from initial overfitting of the model with “in-sample” data or the truer test of the model being out-of-sample.

    And do they go broke as often as you’d expect?

    Once the out-of-sample data indicates that the strategy is going south, they move on to a new strategy. Sound familiar. The biggest losses I have seen are when a model with no a priori rationalization makes an initial big out-of-sample splash and then goes south. That out-of-sample success produces some true believers – even without a rational model for support.

    • bender
      Posted Sep 23, 2009 at 10:37 PM | Permalink

      Re: Kenneth Fritsch (#24),

      Sound familiar.

      Lehman Bros, The Bears, or both?

      • John A
        Posted Sep 24, 2009 at 4:45 AM | Permalink

        Re: bender (#25),

        Yes

      • Kenneth Fritsch
        Posted Sep 24, 2009 at 8:46 AM | Permalink

        Re: bender (#25), Re: John A (#26),

        And I thought I was making a point about the need for a physical/rational model even when initial out-of-sample data might support a purely empirical model based on in-sample data.

        Familiar: as in the Team moving on from errors and the great divergence.

        I am grading myself with an F for failure to get my point across. I’ll let Bender and John A grade themselves.

        • bender
          Posted Sep 24, 2009 at 9:22 AM | Permalink

          Re: Kenneth Fritsch (#27),
          I caught your message. Give yourself a B. I kind of disagree. The Kaufman paper is Arctic region, not NH, not global. So I see it as not really a “move on” to a different strategy. Same strategy, different region.

          Because the real problem in Kaufman (aside from questionable varvoclimatology) is series #22, Yamal. They’ve “moved on” from bcps in California, but have NOT moved off of Yamal. They’re not THAT ready to “move on”.

          I think it is important to explain that uptick in Yamal. Hence my renewed interest in wild distributions of ring widths in coniferous species such as bcp. I should review the Yamal threads. Any pictures of those trees, site conditions, natural history of the area? Based on bcps I’m guessing there was some cataclysmic disturbance there followed by a biased sampling. What were the circumstances under which those trees were sampled? Sort of like trying to reconstruct climate in the northeast by sampling right after the great ice storm of ’98. A worse sampling scheme could not be imagined. (One wonders why Graybill chose the trees he did when he did.)

  19. Chas
    Posted Sep 24, 2009 at 2:03 PM | Permalink

    Slightly OT; there is a rather enjoyable bit of software for fitting and comparing many, many pdfs called EasyFit (www.mathwave.com) – it has a 30 day free trial period, which is very handy.
    It can save quite a bit of digging around sometimes.

  20. Johan i Kanada
    Posted Sep 27, 2009 at 12:59 AM | Permalink

    If I understood this correctly, in some cases, the assumed relationship is T = f(X) and in others T = f(log X). And the main/only justification for the choice (log or not log) is what fits best?

    Or is there a hidden physical reason somewhere?

    (I am perhaps “beating a dead horse”, but statistics, however valuable the tools it provides, does not become science without science. And please re-direct me to some other thread/blog if this is OT here.)

    Thanks,
    /Johan