AGU – Day Four

Day Four at AGU didn’t have as much climate stuff as the first three days. Aside from Al Gore, I went to a number of ice core and ocean sediment presentations.

First I heard the end of a presentation on lake sediments, reporting that there were more droughts per century in the past in New England.

Then a modeler discussed a supposed reconciliation between GISS Model E and the Dasuopu results. The GISS unit cell was 4 degrees latitude by 5 degrees longitude, with the relevant gridcell assigned an altitude of 4000 meters. An article by Bradley et al was mentioned, as was Hansen (PNAS, 2006) for a supposed difference in temperature trend between the E and W Pacific. She said that they could not replicate a result of Guiot et al and did not pursue that line further.

Then to D. Schneider from Colorado, who discussed high-resolution Antarctic ice cores over the past two centuries (GRL, Aug 2006), referring to Law Dome, Dronning Maud, ITASE 2001-1, ITASE 2001-5 and Siple Dome, and analyzing dO18 and dD. He did principal components analysis and claimed the ability to reconstruct the Southern Annular Mode. The ice cores showed an anti-correlation of -0.6 in the interior and a correlation of +0.2 on the outside of the annulus. The PC1 had no trend over the 19th and 20th centuries. However, the late 19th century was lower than either the earlier 19th century or the 20th century. This negligible bit of red noise was written up as a trend.
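
Since the claimed SAM reconstruction hinges on the PC1 of a handful of isotope series, here is a minimal sketch of how such a PC1 is typically extracted; the five synthetic series below are placeholders standing in for the Law Dome, Dronning Maud, ITASE and Siple Dome records (which I obviously don't have to hand), not Schneider's actual calculation.

```python
# Minimal sketch: PC1 from a small set of annual ice-core isotope series
# (synthetic data standing in for the real cores).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1800, 2001)                  # two centuries, annual resolution
n_cores = 5                                    # Law Dome, Dronning Maud, 2x ITASE, Siple Dome

# Fabricated dO18 anomalies: a shared "mode" plus core-specific noise
shared_mode = rng.standard_normal(len(years))
cores = np.array([0.6 * shared_mode + rng.standard_normal(len(years))
                  for _ in range(n_cores)]).T  # shape (years, cores)

# Standardize each core, then take PC1 via SVD of the anomaly matrix
z = (cores - cores.mean(axis=0)) / cores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
pc1 = u[:, 0] * s[0]                           # leading principal component series
print(f"PC1 explains {s[0]**2 / (s**2).sum():.0%} of the variance")

# A SAM "reconstruction" would then typically come from regressing an
# instrumental SAM index onto pc1 over the overlap period.
```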

Then to Urman, who discussed tropical ice cores and cyclones. He put up a correlation map between cyclone activity and PAC1 SSTs. More cyclones started further east in Nino years and fewer in Nina years. The Nina typhoons were more zonal, while Nino typhoons often went further north. He asked whether ice cores could clarify the dispute between Webster and Chan (2006) as to whether typhoon data from the 1950s showing elevated typhoon levels was valid. He showed a graph with a strong decadal correlation between Quelccaya dO18 and PAC2 SST (0.893 — definitely a high correlation) and with NW Pacific typhoons (0.58). His legend said that he used centered 5-year averages. SM note: Quelccaya precipitation actually comes from the Atlantic and even from the Amazon basin via the South American monsoon. So there are a couple of legs to this proposed teleconnection.
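
As a side note on the smoothing, here is a sketch of what a "centered 5-year average" correlation amounts to; the two series below are random placeholders, not the actual Quelccaya or SST data. Smoothing both series before correlating inflates the apparent correlation and sharply cuts the effective degrees of freedom, which is worth keeping in mind when reading the 0.893.

```python
# Sketch of a correlation between two annual series after centered
# 5-year smoothing (placeholder data, not the actual Quelccaya/SST series).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.RangeIndex(1950, 2001, name="year")
quelccaya_d18o = pd.Series(rng.standard_normal(len(idx)), index=idx)
pac_sst = pd.Series(0.5 * quelccaya_d18o + rng.standard_normal(len(idx)), index=idx)

def smooth(s: pd.Series) -> pd.Series:
    """Centered 5-year moving average."""
    return s.rolling(window=5, center=True).mean()

r_raw = quelccaya_d18o.corr(pac_sst)
r_smoothed = smooth(quelccaya_d18o).corr(smooth(pac_sst))
print(f"raw r = {r_raw:.3f}, smoothed r = {r_smoothed:.3f}")
```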

Then I went to a session describing an ocean sediment core from Kau Bay, Indonesia, a closed basin about 470 m deep with a sill only 40 m deep. The anoxic depth had been surveyed a few times: anoxic levels began at relatively shallow depths in a 1930 survey after a strong El Nino; only at the base in 1985 after a Nina; and at shallower depths again in 2003 after a Nino. She suggested a steady reduction in Nino frequency since about 700 BP. I left the session early to wait for Al Gore.

After the Gore speech, I went to some ice core posters. Kreutz described a new core from Mount Logan. There are now 4 cores from this area — 3 taken around 2002-3. We’ve discussed Fisher’s core from the Summit, which has a sharp decrease in dO18 from the mid-19th century persisting through the 20th. A shorter core (showing the same phenomenon) had been taken in 1980 — so this has been known to (say) Lonnie Thompson for some time. The new Fisher core was also from the Summit. The Kreutz core (Eclipse) was taken at a lower elevation and showed less centennial-scale variability. Kreutz and Fisher have concluded that the higher-elevation core reflects precipitation from a much greater distance, while the lower cores are more local. He showed a panel of comparanda from Seager et al (a version of the Graham-Hughes presentation discussed previously).

A nearby poster showed temperature estimates from several Canadian Arctic cores — Hudson Strait, Baffin Island, Beaufort Sea. The estimates were done from dinoflagellate cysts; sedimentation rates were 1.7-4 mm/year, sampled every 10 cm, thus about 20 years resolution. He reported that temperatures in the Hudson Strait about 4000 years ago were up to 11 degrees C higher (!), but not much difference at the Beaufort Sea. The explanation for this was that the warm Greenland Current penetrated much further north into Canadian Arctic waters at this time (as seen through North Atlantic flora).
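
As an arithmetic aside, the quoted resolution is just the sample spacing divided by the sedimentation rate, and only the faster-accumulating sites get close to ~20 years — a quick check using only the numbers quoted above:

```python
# Temporal resolution = sample spacing / sedimentation rate
spacing_mm = 100.0                      # samples taken every 10 cm
for rate in (1.7, 4.0):                 # quoted sedimentation rates, mm/yr
    print(f"{rate} mm/yr -> {spacing_mm / rate:.0f} yr per sample")
# ~59 yr at 1.7 mm/yr and 25 yr at 4 mm/yr
```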

Nearby was a poster for a short drill core from a very large glacier in the Pamirs. The reconnaissance was done by a group from the University of Idaho. Their glacier was 1 km thick — pretty remarkable. (The day before, Lonnie Thompson said that the 160-meter-thick Quelccaya glacier was as good as it gets.) They are trying to get NSF funding — I hope that they get it. Maybe they will even archive their sample results before 20 years have passed.

Then to poster 1234 — Neny Fjord, Norway, which had a thick (12 m) Holocene sequence with a basal C14 date of 8060 BP. Poster 1235 — a record from Cabo Frio at the South Atlantic Convergence Zone.

Then to a presentation by Nicola Scafetta on his latest thoughts on solar-climate relationships, building on several earlier articles. He made the obvious point that forcing factors prior to the 20th century were agreed to be solar and volcanic. Thus variation in a reconstruction — whatever the merit of the reconstruction — had to result from these factors. He then took the view that feedback to solar forcing should be deduced from this information rather than on a priori grounds, and used the various temperature and solar reconstructions to put bounds on each. Of the temperature reconstructions, Mann and Jones 2003 was at one end of variability (0.2 deg C) and Moberg at the other (0.8 deg C). (As an aside, in the discussion of reconstructions here, I’ve not really tried to assess the knock-on impact in attribution studies, although I’m obviously aware of the issue; for those who say that these reconstructions don’t “matter”, here’s a case where they are being used for forcing attribution.)

He also noted the big differences in variability between different iterations of the solar reconstructions, with Lean et al 2005 being much less variable than the earlier versions. He also drew attention to the lag factor between forcing and temperature (his tau) resulting from ocean inertia. He mentioned that a tau of 10 was held to be sensible for physical reasons, but that the tau from some solar proxy-temperature proxy combinations was much less. (He noted that differences in solar variability resulted even from different interpretations of ACRIM, where an adjustment in the middle of the record was not without controversy and led to different knock-on results.) The tau of MJ03-Lean 2000 was 0.75; 3.75. Scafetta had an interesting graphic showing the GISS model against actuals — something dear to Willis’ heart. Prior to 1958 in this graphic, the GISS reconstruction was really bad. He mentioned a possible feedback from solar forcing onto cloud cover.
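
To make the tau business concrete, here is a toy single-time-constant response of temperature to a forcing series, i.e. tau*dT/dt = k*F(t) - T(t). Everything in it — the sinusoidal "solar" forcing, the sensitivity k and the tau values — is invented for illustration and is not Scafetta's actual model; the point is simply that a larger tau damps and delays the response relative to the forcing.

```python
# Toy lagged (single time-constant) temperature response to a forcing:
#   tau * dT/dt = k * F(t) - T(t)
# Forcing, k and tau are invented for illustration only.
import numpy as np

years = np.arange(1600, 2001)
forcing = 0.2 * np.sin(2 * np.pi * (years - 1600) / 80.0)   # stand-in "solar" cycle

def lagged_response(F, tau=10.0, k=0.8, dt=1.0):
    """Integrate tau*dT/dt = k*F - T with a simple Euler step."""
    T = np.zeros_like(F)
    for i in range(1, len(F)):
        T[i] = T[i - 1] + (dt / tau) * (k * F[i - 1] - T[i - 1])
    return T

T_fast = lagged_response(forcing, tau=1.0)    # small tau: T tracks F almost instantly
T_slow = lagged_response(forcing, tau=10.0)   # tau = 10 yr: damped, delayed response
print(np.corrcoef(forcing, T_fast)[0, 1], np.corrcoef(forcing, T_slow)[0, 1])
```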

I browsed more ice core and sediment posters for a while. One that caught my eye was from Wyss Yin of Hong Kong (U43B-0858), who had a theory of why CO2 levels rose in interglacials. He was a geologist who had studied dozens of boreholes in the Hong Kong shelf. He pointed out that much loess had been de-calcified. He proposed that dropping ocean levels led to exposed continental shelves, that this exposed marine pyrite, and that the pyrite then oxidized and dissolved carbonates, yielding carbon dioxide. I don’t know whether this can be reconciled with the timing (and, in particular, with the ice core information). However, there are many Pleistocene loess exposures and lots of potential information to be assimilated.
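
For readers wondering what reaction chain is being invoked, my (hedged) reading is the standard pyrite-oxidation pathway sketched below; these equations are my gloss on the mechanism, not anything taken from the poster.

```latex
% Pyrite oxidation on an exposed shelf generates sulfuric acid ...
\mathrm{FeS_2} + \tfrac{15}{4}\,\mathrm{O_2} + \tfrac{7}{2}\,\mathrm{H_2O}
  \;\longrightarrow\; \mathrm{Fe(OH)_3} + 2\,\mathrm{H_2SO_4}
% ... which dissolves carbonate (e.g. in loess) and releases CO2
\mathrm{H_2SO_4} + \mathrm{CaCO_3}
  \;\longrightarrow\; \mathrm{CaSO_4} + \mathrm{H_2O} + \mathrm{CO_2}
```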


16 Comments

  1. Sara Chan
    Posted Dec 17, 2006 at 1:00 PM | Permalink

    [Nicola Scafetta] made the obvious point that forcing factors prior to the 20th century were agreed to be solar and volcanic.

    What about changes in the biosphere due to factors there?—e.g. land-use changes, algae changes.

    Thus variation in a reconstruction — whatever the merit of the reconstruction — had to result from these factors.

    Doesn’t this assume that the reconstructed data are independent of (possibly unavailable) prior data?—i.e. no long-lag internal feedbacks.

  2. Gary
    Posted Dec 17, 2006 at 2:46 PM | Permalink

    Steve, after the AGU meetings are over, are you planning to post some summary conclusions on how well the research community is addressing your concerns about climate data?

  3. Posted Dec 17, 2006 at 5:11 PM | Permalink

    Let me ask a question from relative ignorance.

    Supposing I request funding to go to a remote place to sample a core of say, ice or marine sediment. I take the core back to the lab, analyse it with a battery of different tests and find, well, nothing. The core proxies do not correspond to imputed temperature, or precipitation or wind stress or anything in the verification period.

    I have spent the federal dollars and got a null result. My supposed test for climate variables X,Y or Z yields an insignificant result that is overwhelmed by random noise in the sample.

    The funding agency won’t want to hear that the core expedition was a bust. Neither will my departmental head. Nor my wife.

    Do you think that in those circumstances, where I’ve spent lots of money for an expedition and an expensive analysis, and my department head is expecting a prestigious publication and I a promotion (or even tenure), I might be tempted to reexamine the data to look for trends hidden in the noise that perhaps are on or below the resolution of the sample or my instruments?

    Or perhaps I might be beguiled by this or that statistical technique by an esteemed colleague in the field who seemed to achieve widespread fame for using his unique method.

    What do I do then?

  4. Steve Bloom
    Posted Dec 17, 2006 at 6:29 PM | Permalink

    Re #1: Bill Ruddiman thinks the LIA was a result of CO2 uptake from large-scale forest growth due to depopulation, first in Europe due to the Black Death and a couple of centuries later in the Americas due to assorted diseases passed on by colonists. This issue is by no means settled, so Nicola seems to be getting a little ahead of himself. Regarding longer-term influences, since IIRC it’s known that the Holocene thermal optimum was a consequence of Milankovitch cycles, with a major resultant step-change having occurred just 5,000 years ago (rather recent as these things go), is it entirely clear that Milankovitch cycle influences can be excluded as a factor during more recent times?

  5. glrmc
    Posted Dec 17, 2006 at 8:05 PM | Permalink

    Re 4. Steve, I don’t quite understand your comments.

    First, what is IIRC?

    Second, regarding your question “can Milankovitch cycle influences be excluded as a factor during more recent times?”, surely it is a matter of wavelength (and resolution) rather than “recent time”.

    Milankovitch forcing is operating at the moment as it has throughout our recent Pleistocene past. But the wavelengths of the cycles that it controls are at least 5,000 yrs (half-precession) long.

    Therefore, when we see decadal or multidecadal cyclicity in our instrumental or short-term and recent proxy records, we have to seek causes other than Milankovitch cyclicity. So it’s not that Milankovitch cycles are excluded in recent times, but rather that they are not a control on decadal to centennial cyclicity. Or have I misunderstood you?

    Bob

  6. Armand MacMurray
    Posted Dec 17, 2006 at 8:43 PM | Permalink

    IIRC = If I Remember Correctly

  7. Gary
    Posted Dec 17, 2006 at 10:18 PM | Permalink

    #3 – Federal funding, and probably all serious funding for that matter, requires a highly detailed proposal of what will be done, how it will be done, and what the expected outcome should be. In other words, you better have a pretty good line of evidence for what you eventually expect to “prove” by the research in order to get the money. The old joke is that every grant you get is meant to provide the results you need for your next proposal. The trick then is getting your first grant. That usually happens by doing a good doctoral thesis, getting a good post-doc position, and attaching yourself to an already successful group. Getting grants is highly competitive; I’ve heard numbers like 1-in-4 to 1-in-10 proposals being funded, depending on the research area. With this high stress it’s natural that the data would be milked, often through multiple papers rehashing nearly the same thing. If a project should fail for any reason, there is also an incentive to make anything out of it so as not to break the funding chain.

    OTOH, most people aren’t stupid and hedge their bets by gathering as much data as possible. So much is unknown that speculation isn’t necessarily a bad thing, and why not try some new statistics or extrapolate beyond the data when fresh thought just might be needed?

  8. richardT
    Posted Dec 18, 2006 at 4:27 AM | Permalink

    I’d be slightly skeptical of dinoflagellate cyst-based reconstructions. The dinocyst transfer functions I’ve seen have some undesirable statistical properties, and many of their results strain credibility.

  9. Para Gary
    Posted Dec 18, 2006 at 10:10 AM | Permalink

    To paraphrase Gary in #7–“Hey folks, we’re drowning here! We haven’t got a clue so we gotta fake it and make up the statistics. Too bad for you, cause the big money funders are on our side. So go back to your video games already and leave us to audit ourselves!”

  10. Gary
    Posted Dec 18, 2006 at 11:06 AM | Permalink

    #9 – Hard to see how that’s a paraphrase. My post wasn’t snarky at all. The two dozen climate scientists I’ve encountered DO have a clue, DON’T fake their data, and are pretty careful about the statistics and what they claim as results or offer as hypotheses.

    The funding process is what it is and has feedbacks that reinforce the conventional wisdom. That’s good when it limits the pointless research John A was questioning; it’s bad when it prevents worthwhile projects for political reasons.

    That some researchers have taken the alarmist tack doesn’t condemn them all. You paint with too broad a brush. The last thing a responsible researcher wants is to find out that half of his career was wasted on a false hypothesis. Steve M has been right to counter the alarmism with a call for audited results, but I’m sure he doesn’t think that all federally-funded researchers are frauds, as you imply.

  11. Steve Bloom
    Posted Dec 18, 2006 at 2:32 PM | Permalink

    Re #5: Bob, my point (agreeing with Sara Chan’s comment) was just that if Scafetta wants to say that the only forcings he needs to account for are volcanoes and solar, he needs to make sure about possible other signals (both forcings and feedbacks) that might convolute the solar signal he’s trying to extract. Milankovitch cycles are obviously a forcing, not a feedback as Sara mentioned (and I would appreciate it if she could indicate what she had in mind for that, although maybe it was Ruddiman’s work), but the point is that Scafetta needs to go through the step of accounting for all such factors. My understanding, BTW, is there is presently a slight overall cooling forcing from Milankovitch cycles.

  12. PHE
    Posted Dec 18, 2006 at 3:20 PM | Permalink

    Steve Bloom says (No. 4):

    “This issue is by no means settled”.

    … EXACTLY.

  13. Paul Penrose
    Posted Dec 18, 2006 at 7:17 PM | Permalink

    Gary,
    The big problem is that novel statistical methods are indeed being used in some, if not many, of these temperature reconstructions; however, they are not being vetted by professional statisticians first. This is very risky, since it’s easy for a layman to get this statistical stuff wrong.

  14. Gary
    Posted Dec 18, 2006 at 10:24 PM | Permalink

    #13, agreed. The only time novel techniques should be tried is when they are clearly identified as such. They should then get a double dose of scrutiny.

  15. PHE
    Posted Dec 19, 2006 at 4:44 AM | Permalink

    I’m sure everyone knows the famous quotation:

    “There are three kinds of lies: lies, damned lies, and statistics”.

    Statistics, like computer modelling, is a very useful scientific tool. However, like modelling, it can be used very cunningly to present a pre-conceived outcome, whether subconsciously or consciously.

    As a modeller myself, I never fully trust the outcome of someone else’s model unless I can understand the mathematics and assumptions behind it. It is the duty of a modeller to make these available as a report appendix. I’m sure that statisticians feel the same when they see someone else’s work. Saying “trust me, I’m an expert, I have a degree/PhD, I know what I’m doing” is not enough for your peers. And it’s not about distrusting your peers; it’s in the spirit of openness and acceptance that scientific interpretation and opinion have many facets.

  16. EW
    Posted Dec 19, 2006 at 5:25 AM | Permalink

    #4
    The guy from Ancarett’s blog says that the historians don’t share the view about reforestation and the LIA. Sources are quoted in the article.

    Well, it’s just that most medievalists point to much earlier events than 1347 signalling the start of the Little Ice Age. Glacial advances that were noted in thirteenth century Europe. The two early fourteenth century cold snaps that contributed to the decline of the Greenland colony. The Great Famine of 1315-1317. I could go on and on.

    And while the Black Death was briefly effective at reducing Europe’s population, an agricultural and demographic upswing was well underway by the seventeenth and eighteenth centuries. By 1750, England was home to some 3.7 million people (approximately the same population that had inhabited the country circa 1300) intensively working the land, deforesting the countryside and draining the marshlands to increase the total amount of arable land by some 30% over the previous era.
