Gore Gored: Monckton replies – Round 2

This is a continuation of the original post for comments.

230 Comments

  1. Steve Sadlov
    Posted Nov 28, 2006 at 11:14 AM | Permalink

    RE: #139 – “Do you honestly believe that sea-level scientists have somehow missed the fact that the crust of the Earth moves both due to ice and water loading and tectonic activity?”

    I honestly believe that certain scientists who have an agenda to “prove catastrophic AGW” as a foil to foist extreme ecotheological notions on the world, have taken advantage of the fact that most of the world’s tide gages are located along the passive (tectonically subsiding) margin of the Atlantic in order to depict “average sea level rising disturbingly” etc. I truly believe that to be the case. Prove me wrong.

  2. Lee
    Posted Nov 28, 2006 at 11:15 AM | Permalink

    rocks,

    yes, it was warm and the Norse colonized Greenland, and then it got cold and the colony failed. That is true, and not relevant to this discussion about whether the Norse farm sites are now, as Monckton said, under permafrost.

    That farm was buried under sand, and was revealed when the river uncovered part of that sand. The farm was not rendered useless by climate, it was rendered useless by being covered with sand. This is irrelevant to Monckton’s claim.

    This same farm often gets trotted out when I ask for evidence of farms emerging from under glaciers – it isn’t relevant to that either. Glaciers aren’t sand.

  3. Dave Dardinger
    Posted Nov 28, 2006 at 11:41 AM | Permalink

    re: #150

    I would love to see how one imagines a backloading of CO2 without an accompanying backloading (“therefore of”) the effects of CO2.

    Apples and Oranges. Obviously if there’s more CO2 coming, i.e. backloaded, regardless of what we do then there will be more of whatever effect may be going to be produced by this CO2. But your wording indicated that there would be such inevitable CO2 which I couldn’t see and you didn’t mean as it turns out.

    Words have meanings. Sentences are words arranged in stereotyped ways. Being unhappy that people complain about your structure is equivalent to complaining to your math teacher about marking a derivation wrong when you “obviously meant to give the correct derivation.” Sometimes teachers will overlook small things like understandable misspellings, but there are limits. And for me the limit is when the meaning is both wrong and not obvious. But this is my last reply on this particular subject. If you can’t even be repentant about an obvious error on your part, then that’s your problem. Live with the consequences. Which will include that people will be less likely to want to discuss things with you.

  4. Loki on the run
    Posted Nov 28, 2006 at 11:48 AM | Permalink

    Apples and Oranges. Obviously if there’s more CO2 coming, i.e. backloaded, regardless of what we do then there will be more of whatever effect may be going to be produced by this CO2. But your wording indicated that there would be such inevitable CO2 which I couldn’t see and you didn’t mean as it turns out.

    Actually, it seems to me that once all the outgoing LWIR that can be absorbed by the CO2 in the atmosphere is being absorbed, there can be no more effect.

    Earth’s Annual Mean Global Energy Budget seems to suggest that 50-60 percent (Figure 1) of the available energy that can be absorbed by CO2 is being absorbed. Secondly, they suggest that the response is more nearly logarithmic.
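
On the logarithmic point: a commonly used simplified expression for CO2 forcing is ΔF = 5.35·ln(C/C0) W/m², which makes the diminishing-returns behavior easy to see. A minimal sketch (the coefficient is the standard published simplified one; the concentrations are illustrative):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 radiative forcing in W/m^2
    (coefficient from the widely used simplified expression)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same forcing increment (~3.7 W/m^2), so equal
# additions of CO2 have progressively smaller effect:
first = co2_forcing(560) - co2_forcing(280)
second = co2_forcing(1120) - co2_forcing(560)
```

The point the sketch illustrates is that going from 280 to 560 ppm and from 560 to 1120 ppm produce the same forcing increment, even though the second step adds twice as much CO2.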

  5. bruce
    Posted Nov 28, 2006 at 11:52 AM | Permalink

    The issue of Australian sea level change measurements has probably been addressed on CA before. However, in the context of the above discussion it seems relevant to raise it here.

    http://www.ozestuaries.org/indicators/sea_level_rise.jsp

    “The estimated relative sea level trends for tide gauge locations around Australia which have at least 25 years of hourly data on the National Tidal Facility archive are shown in Table 2. The overall Australian average sea level rise of 0.30 mm per year is substantially lower than the global estimates of IPCC (2001) of 1-2 mm per year over the last 100 years. Table 2 also shows a considerable variation between sites, driven by combinations of the factors outlined above. A good example of this regional variation is the sea level fall of 0.19 mm per year at Port Pirie compared to the >2 mm per year sea level rise at nearby Adelaide.”

    What is interesting about this paper by the way, which was picked up by the press in a sensationalist fashion at the time, is how it fails to address the >2mm per year sea level rise at Port Adelaide, the impact of which was doubled in calculating the average due to the fact that TWO tidal stations are located at Port Adelaide. Port Adelaide – Inner showed sea level rise of 2.06 mm pa, and Port Adelaide – Outer showed sea level rise of 2.08 mm pa.

    A Google search on “Adelaide Subsidence” found http://www.coastal.crc.org.au/coast2coast2002/proceedings/Theme3/Sea-level-change-coastal-stability-SA.pdf which says

    “Port Pirie and Port Adelaide, only 200 km apart (see Figure 1), have tidal records of 63 and 55 years respectively and yet they reveal markedly different mean sea-level trends. At Port Pirie it appears that sea level is actually falling at a rate of -0.19 mm per year whereas at Port Adelaide the sea-level appears to be rising very rapidly at 2.08 mm per year (see Table 1). Clearly this difference is related to local effects which need explanation from the geological record. First, it is necessary to identify reliable sea-level indicators in the modern environment and then relate these to the same indicators in the geological record. In this manner it is possible to decipher past relative sea-level change. This has been done in geological studies at both the Port Adelaide [Belperio, 1993] and the Port Pirie tide gauge sites [Harvey et al., 1999a].”

    And

    “Preliminary estimates for correcting mean sea-level trends from South Australian tide gauge data were first outlined by Harvey and Belperio [1994]. These estimates have recently been refined by Harvey et al. (2002) who produced corrected figures for all South Australian tide gauge sites based on geologic, isostatic and anthropogenic influences. These calculations show that at Port Pirie the relative land/sea movement is a rise of around 0.31 mm/yr, giving an adjusted sea-level rise of +0.14 mm/yr instead of -0.19 mm/yr as indicated by the tide gauge data (Table 1). When the same adjustments are made using Belperio’s data for Port Adelaide it can be seen that the isostatic effects are outweighed by the anthropogenic effects of subsidence, so that the sea-level trend derived from tide-gauge data of 2.08 mm/yr, when corrected for Holocene relative land/sea movement, is actually rising at a much lower rate of around 0.21 mm/yr [Harvey et al., 2002].”

    Another relevant piece found by Google was: http://www.aph.gov.au/house/committee/jsct/kyoto/sub44c.htm.

    This says:

    “The Australian National Tidal Facility (NTF) at Flinders University in Adelaide published a `Mean Sea Level Survey’ in 1998 to establish sea level trends around the Australian coast from tide gauges having more than 23 years of hourly data in their archive [29]. This survey was particularly relevant for global application since Australia is tectonically stable and much less affected by PGR than either Europe, Asia or North America. Since nearly two-thirds of the world’s total oceanic area is in the southern hemisphere, Australia is best placed to monitor southern hemisphere trends and probably best represents the true MSL globally. Also, the Australian coast adjoins the Indian, Pacific, and Southern Oceans, making its data indicative of sea levels in three oceans, not just one.”

    “Eleven of the 27 stations recorded a sea level fall, while the mean rate of sea level rise for all the stations combined is only +0.3 mm/yr, with an average record length of 36.4 years. This is only one sixth of the IPCC figure. There was also no obvious geographical pattern of falls versus rises as both were distributed along all parts of the coast.

    “But there’s more. It was shown earlier that Adelaide was a prime example of local sea level rise due to urban subsidence [3]. Its two stations in the above list are the only ones to record a sea level rise greater than the IPCC estimate. The same NTF survey pointed out the Adelaide anomaly and directly attributed it to local subsidence, not sea level rise, on the grounds that the neighboring stations of Port Lincoln, Port Pirie and Victor Harbour only show a rise of +0.3 mm/yr between them. If we exclude Adelaide from the list, the average sea level rise for the other 25 stations is then only +0.16 mm/yr, or less than one tenth of the IPCC estimate.”

    I’m only a lay person, but these accounts seem to me to illustrate that pronouncements by the IPCC and others can tend to exaggerate the real position as shown by the evidence.
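
The averaging arithmetic in the quoted passage can be checked from the numbers it gives (27 stations averaging +0.30 mm/yr, with the two Port Adelaide gauges at 2.06 and 2.08 mm/yr). This sketch backs out the 25-station average from those stated figures rather than from raw station data, which is not reproduced here:

```python
n_all = 27
mean_all = 0.30            # mm/yr: quoted average over all 27 stations
adelaide = [2.06, 2.08]    # mm/yr: the two Port Adelaide gauges

# Remove the two Adelaide gauges from the stated total and re-average:
mean_rest = (n_all * mean_all - sum(adelaide)) / (n_all - len(adelaide))
# mean_rest comes out ~0.158 mm/yr, consistent with the quoted "+0.16 mm/yr"
```

So the quoted +0.16 mm/yr figure follows directly from the stated 27-station average and the two Adelaide values.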

  6. Loki on the run
    Posted Nov 28, 2006 at 11:58 AM | Permalink

    Dave Dardinger says:

    Anyway, the point is that it’s not the convection per se which causes the heat to escape via radiation. If air is prevented from mixing, say in an inversion situation, the radiation portion of atmospheric heat loss continues to work as usual since the CO2 (& H2O, etc.) pick up the energy needed to radiate from other molecules around them, not by having to physically move up (or down) in the atmosphere.

    Oh well, life’s always more complex than it looks on first impressions 🙂

    So what’s the time constant?

  7. welikerocks
    Posted Nov 28, 2006 at 12:11 PM | Permalink

    Ok this is my last comment on Greenland.
    I’m just going to quote Lee and then quote the article I posted.
    Any reasonable person can see what’s going on.

    Lee says:

    That farm was buried under sand, and was revealed when the river uncovered part of that sand. The farm was not rendered useless by climate , it was rendered useless by being covered with sand. This is irrelevant to Monckton’s claim.

    Right, glacier advances have nothing to do with climate change (but retreats do for GW purposes) and hey no surprise, completely disregards where the sand came from too!

    What does seem to have contributed to the abandonment of the Western Settlements, archaeologists said, is climate change. The onset of a “little ice age” made living halfway up Greenland’s coast untenable in the mid-1300’s, argues Dr. Charles Schweger, an archaeology professor at the University of Alberta, who has studied soils around the Farm Beneath the Sand.
    Dr. Schweger said the Norse were no match for cooling temperatures, which caused a glacier several miles up a valley to expand. As this glacier grew, it also released more water every summer into the valley, causing turbidity in drinking water and raging floods that blanketed meadows with sand and gravel. Today the edge of Greenland’s ice cap is only six miles from the old farm site. But in the mid-14th century, it probably was far closer.

  8. Dave Dardinger
    Posted Nov 28, 2006 at 12:15 PM | Permalink

    re: #154 Loki,

    Actually, it seems to me that once all the outgoing LWIR that can be absorbed by the CO2 in the atmosphere is being absorbed, there can be no more affect.

    No more direct effect, but the warmer position is that there will be positive feedback from the temperature increase caused by this direct effect. And then the failure to observe as much warming as they’d like / predict is blamed on the long time it takes to warm the ocean depths, melt Greenland, etc. This isn’t entirely wrong, but while the models all have the positive feedback built into their innards, it’s by no means clear that the net feedback from all water feedbacks is strongly positive, or even positive at all.

    To explain that last, if the Earth climate system has its “thermostat” set for temperature T then if additional CO2 tried to produce a temperature T + delta T, it might be that clouds or other mechanisms might reduce the actual temperature rise so that it’s less than it would be based simply on the known physical properties of CO2.
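
The “thermostat” argument maps onto the standard feedback-gain algebra: for a no-feedback response ΔT0 and a net feedback factor f, the realized response is ΔT0/(1 − f), and a negative f damps the warming just as described. A sketch with illustrative numbers (the 1.2 K no-feedback figure is the commonly quoted value; the f values are hypothetical):

```python
def realized_warming(dT0, f):
    """Equilibrium warming for no-feedback response dT0 (K) and net
    feedback factor f; valid only for f < 1 (f >= 1 means runaway)."""
    if f >= 1.0:
        raise ValueError("f >= 1 implies runaway; formula does not apply")
    return dT0 / (1.0 - f)

dT0 = 1.2                               # K per CO2 doubling, no feedbacks
amplified = realized_warming(dT0, 0.6)  # net positive feedback -> 3.0 K
damped = realized_warming(dT0, -0.5)    # net negative feedback -> 0.8 K
```

The same direct effect yields 3.0 K or 0.8 K depending only on the sign and size of the net feedback, which is exactly why the sign of the water/cloud feedbacks matters so much to the argument.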

  9. Dave Dardinger
    Posted Nov 28, 2006 at 12:44 PM | Permalink

    re: #156

    So what’s the time constant?

    The collision frequency in air for the standard atmosphere at sea level (which is at 288.15 deg K) is 6.9189 x 10^9 per second which gives a mean free path of 6.6332 x 10^-8 meters. Particle speed is 458.94 m/s. I can’t remember the exact average time to emit an IR photon by CO2 but I remember it’s something like 10,000 or 100,000 times slower. So the question is what % of the time an individual CO2 molecule will have enough internal energy to be able to produce a photon. This will presumably be a pretty low %. I’m not sure if there’s a minimum latency period after obtaining that energy before it’s possible for the molecule to emit, but I expect it’s rather short if there is since we’d be dealing with electronic rearrangements which probably occur at close to light speed.
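
The quoted kinetic figures are mutually consistent (mean free path = mean speed / collision frequency), which a few lines confirm:

```python
collision_freq = 6.9189e9    # collisions per second (sea level, 288.15 K)
mean_speed = 458.94          # m/s, mean particle speed

# Mean free path is the distance travelled between collisions:
mean_free_path = mean_speed / collision_freq   # ~6.633e-8 m, as quoted

# Mean time between collisions, for comparison with radiative lifetimes:
t_collision = 1.0 / collision_freq             # ~1.4e-10 s
```

The ~10^-10 s collision interval is the number to compare against a CO2 radiative lifetime; if emission is indeed 10^4 to 10^5 times slower, an excited molecule undergoes many collisions before it can radiate.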

  10. Posted Nov 28, 2006 at 12:47 PM | Permalink

    Bruce:

    What is interesting about this paper by the way, which was picked up by the press in a sensationalist fashion at the time, is how it fails to address the >2mm per year sea level rise at Port Adelaide, the impact of which was doubled in calculating the average due to the fact that TWO tidal stations are located at Port Adelaide. Port Adelaide – Inner showed sea level rise of 2.06 mm pa, and Port Adelaide – Outer showed sea level rise of 2.08 mm pa.

    That’s easy. Port Adelaide is subsiding.

    See Belperio, A, “Land Subsidence and Sea-level Rise in the Port Adelaide Estuary: Implications for Monitoring the Greenhouse Effect”, Australian Journal of Earth Sciences, v.40, p.359-368, 1993

    Basically groundwater withdrawal and urban buildup are causing the land to subside relative to sea level. Three nearby tide gauges (Port Pirie -0.19mm/year, Port Lincoln +0.63mm/year, Victor Harbour +0.47mm/year) are not showing this dramatic apparent rise.

  11. L Nettles
    Posted Nov 28, 2006 at 3:38 PM | Permalink

    152

    Glaciers aren’t sand.

    Research assignment for today: why do golf courses have sand traps? What geologic feature do traditional golf courses try to emulate?

    Hint: Glaciers play a role.

  12. Jim Barrett
    Posted Nov 28, 2006 at 4:05 PM | Permalink

    I don’t have time to individually answer all the responses on sea level, but here are a few comments:

    Welikerocks (posting 142):

    You say “I could pull up hundreds of papers that show the sea level has been rising in a constant curve since the last ice age”.

    I’m not sure what you mean by “constant” – the slope shown in http://tinyurl.com/yxzupu certainly isn’t constant and certainly decreases towards the present day. It is also clearly RELATIVE sea level and strongly affected by land sinkage. On the other hand, the image in http://tinyurl.com/se8uo looks very much like “absolute” sea level (i.e. that due to a change in volume of the ocean) and shows exactly what I have been saying all along — “absolute” sea level has been rising since the last glaciation (no surprise here – the ice had to go somewhere) but the rate of rise declined to virtually nothing a few thousand years ago – until it increased to the present rate of order 2 mm/year (nothing to do with us, though, I suppose?!).

    Paul Dennis (posting 143):

    I agree with much of what you say. You confirm what I have been saying about the basis of Willis et al.’s statements when you say “they are basing predictions on a simple extrapolation of existing data over a hundred year time scale” and “to this extent their argument is still based on a model, one that does not agree with the projections of the TAR or your analysis of the situation”. Yes, of course Willis et al. rely on a “model”, albeit a very naive one – extrapolation never holds up well against physical understanding. However, you blow it all when you say: “none the less, given our understanding, it is a perfectly valid position to hold”. I cannot understand how it can be a “valid position” for a lay person to ignore the majority of working climate scientists, and instead use one’s own model which is no more than extrapolation – this is exactly “flat Earther” country (“the Earth looks flat from where I am standing therefore it must be flat everywhere”).

    JAE brings out a very old chestnut (posting 144) with “how do you blame the rise at this time on CO2? Where is the CO2-driven acceleration since 1931”. This chestnut assumes that if I say that “A is mainly caused by B”, then this means that “A is completely caused by B” which means that “if there is a single factor in A that cannot be ascribed to B then I am wrong”. Enough said – it’s standard contrarian logic.

    I have to say that I’m missing the point of the argument about Greenland and the Vikings etc. I am sure that there have been some changes in Greenland ice over that past few millennia. I’m sure the odd bit of land has become more or less inhabitable. But does this mean a significant change in Greenland’s ice mass? I think not – Greenland’s ice is typically 2 km thick. If its ice could raise sea level over the whole world by 7 metres, discussions about burial of the odd farm seem pretty irrelevant!

    Steve Sadlov (posting 151) points out that `most of the world’s tide gages are located along the passive (tectonically subsiding) margin of the Atlantic in order to depict “average sea level rising disturbingly” etc.’, without also noting:

    (1) that tide gauge observations are adjusted for vertical land movement in order to infer “absolute” sea-level rise, and

    (2) that satellite altimeters, which have nothing to do with tide-gauge locations, show a global sea-level rise of 3.0 +/- 0.4 mm/year from 1993 to present — this is HIGHER than most estimates of global sea-level rise from tide gauges!

    And Bruce and John A go back over the old Australian sea level story (posting 155). The story covers tidal records that are short and where estimated trends are severely corrupted by ENSO events, and records from tide gauges which are not vertically stable. As has been said many times before, careful analysis of tide-gauge and satellite observations indicate a 20th century rate of global sea-level rise of about 1.7 mm/year – this is a robust conclusion from a number of researchers over a significant period of time.

  13. Lee
    Posted Nov 28, 2006 at 4:53 PM | Permalink

    161
    nettles – there is no evidence that the farm was ever covered by a glacier. It was covered with sand. That sand may have (almost certainly did) come from a glacial event up-valley. This being Greenland, I will wager that throughout the periods we are discussing, including the MWP and the LIA, there was a glacier up-valley. That fact tells us nothing about changes in temperature at the farm. This farm was rendered unfarmable by sand, not by permafrost – and we are discussing permafrost, and Monckton’s claim about present day farms and permafrost. This farm is irrelevant to that question.

    This is also true of the ‘farms emerging from glaciers’ argument. A farm covered with sand from (probably) an up-valley glacial event, is NOT a farm that was covered with a glacier. It is a farm that was unlucky enough to be down-valley from a glacier when there was a discharge of sand that ended up covering the farm. When that sand is eroded and uncovers the farm, that is NOT a farm emerging from under a glacier.

    The fact that glaciers and sand are associated in some ways, does not change one bit of this. Neither does the fact that sand traps contain sand.

    Sheeesh.

  14. Hans Erren
    Posted Nov 28, 2006 at 5:11 PM | Permalink

    The bulk of the emissions in this century arises from the developing world, so even if we closed down the developed world entirely, that would only account for 25% of the emissions in the year 2100; you’d have to prevent China and India from emitting to stop CO2.

    On CO2 backloading, I can’t see much CO2 in the pipeline. The additional effect due to increased temperature is about 10 ppm/K (from the Vostok ice core); that is not enough for a runaway positive feedback.

    For the rest, the sink of CO2 increases in proportion to atmospheric concentration and has been stable for the last 50 years at roughly 1.7% of atmospheric concentration. Which means that as concentration increases, the sink increases as well, counter to the ideas of Joos et al., who envisage a dramatic saturation of the sinks this century.
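
A sink that removes a fixed fraction of the atmospheric excess each year, as described, gives a simple mass-balance model in which concentration approaches a finite equilibrium under constant emissions rather than growing without bound. A sketch (the emission rate is hypothetical, and “excess above 280 ppm” is one reading of the 1.7% figure):

```python
def step(c_ppm, emissions_ppm, sink_frac=0.017, baseline=280.0):
    """One-year update: add emissions, remove sink_frac of the excess
    over a pre-industrial baseline (the ~1.7%/yr sink figure)."""
    return c_ppm + emissions_ppm - sink_frac * (c_ppm - baseline)

# Under constant emissions the concentration converges to the level where
# the sink exactly balances emissions, instead of saturating or running away:
c = 380.0                        # ppm, illustrative starting point
for _ in range(1000):
    c = step(c, emissions_ppm=4.0)
# equilibrium: emissions = sink_frac * (c - baseline), i.e. c ~ 515 ppm here
```

The design point is that a proportional sink is self-limiting: higher concentration means more uptake, which is the behavior the comment contrasts with a saturating-sink scenario.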

    As for the thermal inertia of ice caps, this has a response time of at least 500 years, which means that the current spike in CO2 has little effect. Using ice-age sensitivities is misleading, as the melting ice caps then were at low latitude and had a land-based terminus. All major present-day ice caps (Greenland and Antarctica) are surrounded by water, which means that the melting mechanism is not comparable. I have the dissertation of Hans Oerlemans, which deals with the rapid end-of-ice-age mechanism; he explains it by the ice sliding rapidly to the south into the heat. That rapid mechanism is impossible with sea-bounded ice caps, which also explains the strong resistance of the last remaining ice caps to melting.

    The thermohaline conveyor also has a cycle time of 1000 years, which likewise gives it very strong inertia.

  15. welikerocks
    Posted Nov 28, 2006 at 5:21 PM | Permalink

    Jim Barrett, I am not going to hog the board arguing with you any more. Let’s say we disagree, and you keep believing what you want to. The acceleration you keep referring to is by no means agreed upon by all scientists. The 2 mm measurement is ONLY for RI, where that tidal gauge on the graph is. Data handling is also an issue with sea level measurements, just like the temp proxies; the AGU has pages and papers all about that. Also, a few thousand years ago the earth was going through many different phases, as it always does: warming, cooling, droughts, lakes growing and forming, in other words all kinds of scenarios, if you want to be aware of it. We don’t even understand how it all works exactly. Then there’s the Little Ice Age: that cooling trend began at different times in different parts of the world and often was interrupted by periods of relative warmth. All agree, however, that it lasted for centuries, and that the world began emerging from its grip between 1850 and 1900.

    My husband is in this field and fully in the loop on these matters, and there is no significant sea level rise out of the ordinary; he can see only false alarms being raised for GW belief purposes.

  16. Steve Sadlov
    Posted Nov 28, 2006 at 6:40 PM | Permalink

    RE: #162 – your adjustment is my data manipulation to achieve a predetermined outcome (e.g. AGW hysteria). Those so-called satellite measurements over a decade are highly suspect – what is the resolution of them, do you even know? I’d love to see the measurement system analysis on that one. What, there is none? …… I knew that would be the answer.

  17. bender
    Posted Nov 28, 2006 at 7:18 PM | Permalink

    Re #163
    Lee, are you sure you’re not distorting the observations to favor your interpretation, and to disfavor Monckton’s? The fact is that the area obviously was farmable during the MWP, whereas until quite recently the farm was buried under permafrosted sand – presumably inoperable because of the permafrost, not the sand. Whether the sand arrived before the permafrost (and possible glaciation) is immaterial. It’s the operability of the farm relative to the climate of the time that is the issue.

    Please don’t say “sheesh.” You’re a smart guy, but you’re not smarter than the web.

  18. jae
    Posted Nov 28, 2006 at 7:58 PM | Permalink

    162, Jim Barrett, regarding your response to my 144: Please just answer the question, and skip the illogical logic.

  19. Lee
    Posted Nov 28, 2006 at 8:29 PM | Permalink

    bender, as I point out in the other thread, that farm was discovered in 1991, as unfrozen sand eroded away from on top of it. At that time, 15 years ago, it was not “under permafrost,” it was under thawing or thawed sand.

    It no longer exists, having been washed away down that river, so it is not even a farm today, under permafrost or otherwise. Given that it doesn’t exist today, it is not a viable data point for whether permafrost renders farms inviable “to this day”.

    Much less whether “The Viking agricultural settlements remain under permafrost to this day.”

    The question isn’t whether there was sufficient permafrost in the past to have rendered farms unusable, or even whether there might be a farm somewhere today (for which there has been proffered NO evidence yet) rendered unusable by permafrost. The question is whether “The Viking agricultural settlements remain under permafrost to this day,” as Monckton said. Yes, I’m repeating this – it doesn’t seem to have sunk in with some people. The available and abundant evidence is that there is a quite successful agricultural economy in that same area of Greenland.

  20. Dave Dardinger
    Posted Nov 28, 2006 at 9:33 PM | Permalink

    re: #166,

    You might want to go to John Daly’s old “Still waiting for Greenhouse” site and read the article he produced concerning the satellite sea-level accuracies. Things may have changed since then, but they sure weren’t very convincing when he looked at them. Note especially some of the problems concerning ships in the radar path, etc. Perhaps someone here could tell us if such problems can now be avoided or not.

  21. Mr. welikerocks
    Posted Nov 28, 2006 at 11:52 PM | Permalink

    Lee,

    You say “Given that it doesn’t exist today, it is not a viable data point for whether permafrost renders farms inviable “to this day”.

    Much less whether “The Viking agricultural settlements remain under permafrost to this day.””

    So am I to assume that all geologic/environmental/biological evidence, et al., that has been destroyed by mother nature is no longer viable? What kind of unscientific statement is that?

  22. Lee
    Posted Nov 29, 2006 at 12:00 AM | Permalink

    yes mr rocks, you may assume that a farm that has been washed away and no longer exists today is not a viable farm today, and therefore not a valid data point for whether farms today are “under permafrost”.

    To do otherwise would be to assign scientific validity to nonexistent data. “What kind of scientific statement is that?”

  23. Mr. welikerocks
    Posted Nov 29, 2006 at 12:12 AM | Permalink

    Lee, you’re fired from all international geotechnical/environmental/engineering firms. Nobody would hire you, and any firm would surely fire you, if you really believe that.

  24. Lee
    Posted Nov 29, 2006 at 12:14 AM | Permalink

    mr rocks, I am stunned to hear a self-proclaimed scientist say that you believe in data that does not exist from a farm that does not currently exist.

    Remind me never to hire your firm.

  25. Mr. welikerocks
    Posted Nov 29, 2006 at 12:24 AM | Permalink

    Lee,

    Where do you get your data from? The samples in my industry are routinely destroyed by the lab after 30 to 60 days, depending on the contract with the lab. Geotech samples may be kept slightly longer.

    Seismic data is collected once, sometimes more, for a certain time period and used as a sample for that site over that selected time period. It cannot be recreated. You must obviously have a government job where budget is no issue and you don’t have to worry about hiring the best firm for the money.

  26. MarkR
    Posted Nov 29, 2006 at 12:24 AM | Permalink

    To qualify in Lee’s book for “to this day”, the permafrost farms must be found on the exact date of publication, or reading, or writing of the words “to this day”. If the permafrost farm is discovered because of thawing, then it doesn’t count, because it’s no longer permafrosted. If it’s been washed away, it doesn’t count.

    In fact there is no way that Lee will ever admit to any permafrosted farm, because in doing so he would be admitting that there was a MWP, and a LIA.

    And that would be heresy for Lee.

  27. Lee
    Posted Nov 29, 2006 at 12:42 AM | Permalink

    Mark, stop assigning to me ideas that I’ve repeatedly said I don’t hold. IOW, stop lying about me – it is detestable behavior.

    I’ve said repeatedly here that there was an MWP and an LIA. I’ve said it in these threads – I believe earlier today, even. It was warm and the Norse settled, then it got cold and the settlement failed. I’ve just been saying that the farm was frozen under the sand – and then it thawed. How you get from that to the inane and false accusation that I’m denying there ever was permafrost in that or other farms is simply beyond me.

    See, the thing is, when the LIA ended it got warmer. By definition. The question is whether Monckton’s proffered ‘data’ supports his claim that temps today, after the warming we’ve had so far out of the LIA, are colder than during the MWP.

    Monckton made specific claims that farms today are “under permafrost” TODAY. I’m disputing that – I’m not disputing whether the MWP existed (it did) and I’m not disputing whether the LIA existed (it did). I’m disputing that farms today are under permafrost as Monckton claimed and therefore clearly and decisively colder than during the Norse settlements during the MWP as he claimed. And THAT claim is laughably wrong.

  28. Lee
    Posted Nov 29, 2006 at 12:47 AM | Permalink

    mr rocks,

    let’s say I send you to collect rain data from a rain gauge at a specific location, because a claim has been made that the rain this last month has been at record low levels, and I want to know exactly how much rain we’ve actually had over the last 30 days.

    You get to the location, and the gauge does not exist – it has been destroyed.

    Apparently what you would do is look at data collected in the past, and simply substitute it for the non-existent last-30-days data, and tell me that this is how much rain fell at that non-existent gauge – because of geologic time or something.

  29. MarkR
    Posted Nov 29, 2006 at 12:58 AM | Permalink

    #177 Oh Goody, so what do you say about the Hockey Stick shaped graph.

    No MWP on it, and no LIA either.

  30. Lee
    Posted Nov 29, 2006 at 1:09 AM | Permalink

    nice apology, mark. Nice change of subject, too.

    I could now ask which hockey stick, from which data and analyses, or from regional or global data sets, and then point out that many of them do show an MWP and LIA, so even your premise is false, and then go from there to explain my current position on the dendro reconstructions. But I’ve explained my position on dendro recons, recently and repeatedly in the past – and I’m not going to let you off the topic currently under consideration while you continue to attempt to attack me on something you seem to think I believe.

    So back to the Monckton claim –

    Monckton said:

    “The Viking agricultural settlements remain under permafrost to this day.”

    You have been disputing this with me – is that statement true or false?

  31. Paul Dennis
    Posted Nov 29, 2006 at 1:10 AM | Permalink

    re #162

    Jim, with all due respect, your claim for a 5.8 m or more sea level rise being plausible is not based on a model. I’m not aware of any climate model that incorporates ice sheet dynamics and that has made reliable and robust predictions of such sea level rises. Rather there is a wide spectrum of possible future scenarios. You have chosen one extreme, that which involves significant mass wasting of the polar, continental ice sheets leading to significant sea level rise. There is the opposite extreme, which involves a net positive mass balance of the ice sheets. This could lead to a falling sea level. Then there is the position taken by many that sea levels will rise by several tens of cm as a result of thermal expansion and the addition of a small amount of freshwater from continental ice, whether it be polar or not.

    With respect to:

    “I cannot understand how it can be a “valid position” for a lay person to ignore the majority of working climate scientists, and instead use one’s own model which is no more than extrapolation – this is exactly “flat Earther” country (“the Earth looks flat from where I am standing therefore it must be flat everywhere”).”

    I would suggest that it is your position, and that of Al Gore’s claims for possible sea level rise that is far outside the bounds suggested by most working climate scientists.

    Notwithstanding this, your estimate is interesting and it would be useful to know if there is evidence of such rapid melting of continental ice and sea level rise in either the present interglacial, or previous ones. What signature would it leave etc. Certainly there is some good evidence that temperatures have been higher during the Holocene (Holocene climatic optimum and other periods) and some evidence of slightly higher sea levels, but nothing approaching 5.8m. What other evidence might it leave? Well, dumping of a large amount of freshwater in a comparatively small period of time will significantly modify the salinity of the ocean surface and change its oxygen isotopic composition. Witness the 8.2 ka event. However, I’m not aware of any evidence for such events during the Holocene. One might look for ice core evidence. Again I’m not aware of any but would be excited to be pointed in the direction of some data.

    I’ll nail my colours to the mast. I’m a working geochemist and palaeoclimate scientist (certainly not a flat earther) and I’ll wager that Willis’ estimates are closer to reality than yours or Al Gore’s.

  32. Paul Dennis
    Posted Nov 29, 2006 at 1:14 AM | Permalink

    Before every one jumps on me I’ll clarify my second to last paragraph.

    ‘Witness the 8.2ka event…….I’m not aware of any evidence for such events during the Holocene……’

    I meant to say the latter half of the Holocene! Easy to make a slip even for us palaeoclimate scientists!

  33. MarkR
    Posted Nov 29, 2006 at 1:44 AM | Permalink

    #180 Certainly; the Viking agricultural settlements remained under permafrost until very recently.

    Now then back to the Hockey Stick, as you brought up the MWP, and LIA, surely you don’t mind repeating your thoughts on it for those who were unlucky enough to miss it.

  34. Brooks Hurd
    Posted Nov 29, 2006 at 2:10 AM | Permalink

    Re: 180,
    Lee,
    There may still be more Norse farms to discover. I highly doubt that even a large farm would be the entire Greenland colony. With any remote colony you would need a certain minimum number of people to have a viable colony. If the colony were too small, then it would be difficult to justify supply ships at reasonable intervals (for the colonists).

  35. Jim Barrett
    Posted Nov 29, 2006 at 4:29 AM | Permalink

    JAE (posting 168): you ask me to “answer the question”, which was:

    > How do you blame the rise at this time on CO2? Where is the CO2-driven acceleration since 1931?

    I thought I had answered this in my posting 162, where I indicated the rather obvious fact that an effect can have a multiplicity of causes. Just because we believe that anthropogenic global warming is the major cause of present-day sea-level rise does not mean that every feature of sea-level rise at a given place should have a corresponding feature in the greenhouse forcing. Sea level at a single location (Newport, RI) is also affected by meteorological and solar variability, to name just two examples. Also, given the clear temporal variability in the Rhode Island record, how strong do you think the acceleration would have to be for you to be able to actually detect it? In fact, careful analysis of the global tide gauge network DOES detect an acceleration (Church and White, 2006).

  36. Jim Barrett
    Posted Nov 29, 2006 at 4:31 AM | Permalink

    Steve Sadlov and Dave Dardinger (postings 166 and 170): It now appears to be the turn of the satellite data to be knocked. Firstly, Steve raises the question of whether there is a “measurement system analysis” and then assumes the answer (“there is none”). It is marvellous what one can say on a blog like this from total ignorance. Of course the measurement system has been analysed for uncertainty (I assume Steve means “uncertainty” and not “resolution”) – it might be an idea if he reads the literature. I have to say that this kind of uninformed innuendo only reinforces my growing apprehension that the internet is becoming just a cesspit of disinformation.

    And yes Dave, John Daly also had a go at the satellite data. If you go back and read what he wrote, I think you will find that he didn’t even understand how to combine uncorrelated errors – he assumed they were just added! Enough said.
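
    For readers unfamiliar with the point about combining errors: independent (uncorrelated) error sources add in quadrature (root-sum-square), not linearly. A minimal sketch, with purely illustrative numbers (not the altimeter’s actual error budget):

    ```python
    import math

    def combine_uncorrelated(*sigmas: float) -> float:
        """1-sigma uncertainty of a sum of independent error sources."""
        return math.sqrt(sum(s * s for s in sigmas))

    print(combine_uncorrelated(3.0, 4.0))       # 5.0 (naive linear addition gives 7.0)
    print(combine_uncorrelated(2.0, 3.0, 6.0))  # 7.0 (vs 11.0 if simply added)
    ```

    Adding uncorrelated errors linearly, as described above, systematically overstates the combined uncertainty.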

  37. Steve Bloom
    Posted Nov 29, 2006 at 4:31 AM | Permalink

    Re #181: Paul, various folks including Jim Hansen, Richard Alley and Michael Oppenheimer seem to think a substantial rise due to dynamical melting during the next century is a serious possibility. See the discussion here for the reasons why. There is no model showing this as yet, but the fact that sea level during the Eemian was 4 to 6 meters higher than present with a temperature only 1C higher would seem to be some cause for concern. In any event, these and other leading climate scientists (including every glaciologist I’ve ever seen go on the record) seem rather more in accord with Gore than Willis.

    Re #s 183/4: This paper describes a recent soil survey of much of the Viking settlement area. There was apparently little or no permafrost present at the time of the survey, which seems fairly dispositive of Monckton’s claim. I don’t know whether it would be feasible to establish whether there had been significant permafrost in these areas a few hundred years ago, but in any case a pretty thorough search for information on that turned up nothing.

  38. Jim Barrett
    Posted Nov 29, 2006 at 4:34 AM | Permalink

    Paul Dennis (posting 181): You are quite wrong to say that I “have chosen one extreme”. As I said in posting 140, this was simply “my reading of the TAR” or, to be more specific, Section 11.5.4. I don’t know why you would think I have any reason to “choose one extreme” – if your interpretation of the Section is substantially different, then let’s hear it. This also, I think, answers your claim that “Al Gore’s claims for possible sea level rise ….. is far outside the bounds suggested by most working climate scientists” – it is quite in line with millennial expectations (if we fail to curb greenhouse emissions).

    And thank you Steve B for your supportive comments – of course “leading climate scientists (including every glaciologist I’ve ever seen go on the record) seem rather more in accord with Gore than Willis”! And thank God for that!

  39. Jim Barrett
    Posted Nov 29, 2006 at 4:50 AM | Permalink

    Paul Dennis: here is an addition to my posting 188. From the TAR Section 11.5.4, I derived the following “working values” (you won’t, of course, agree with them exactly as there is huge uncertainty – but I think they are a fair summary of what the TAR is saying):

    During the next 1000 years we should see roughly:

    Thermal expansion: 2 m
    Glaciers and ice caps: 0.5 m
    Greenland: 3 m
    West Antarctic Ice Sheet: 2 m
    East Antarctic Ice Sheet: 0 m
    TOTAL: 7.5 m

    I rounded this to the nearest even number, giving 8m.

  40. Stan Palmer
    Posted Nov 29, 2006 at 6:08 AM | Permalink

    re 18

    Steve Bloom shows a paper that indicates there is no permafrost at some Viking farmsteads. The paper also indicates that the ground at these farmsteads does not thaw until July and that barley does not have time to come into flower. So I suppose that the claim that this paper disposes of Monckton is premature.

  41. Willis Eschenbach
    Posted Nov 29, 2006 at 8:46 AM | Permalink

    Jim, the TAR makes the claims you post based on the assumption that the airborne CO2 will rise at 1% per year for the next 140 years. The TAR describes the two scenarios as

    Figure 11.15: Global average sea level rise from thermal expansion in model experiments with CO2 (a) increasing at 1%/yr for 70 years and then held constant at 2x its initial (preindustrial) concentration; (b) increasing at 1%/yr for 140 years and then held constant at 4x its initial (preindustrial) concentration.

    Of these two scenarios, the one showing thermal expansion of 2m as the average of the models after 1000 years is (b), a 1% annual rise sustained for 140 years.

    Now, given that:

    * the CO2 rate has never risen that fast in modern times (the largest modern increase was only two thirds of that, 0.67% per year, 1986-1987), and given that

    * the average rise over the last 10 years is only half of that amount (0.5%/year), and given that

    * the average rise over the last 20 years is 0.5%/year, and given that

    * the average rise over the last 30 years is 0.5%/year,

    … then perhaps you could tell us why you think an analysis that assumes we will have TWICE THE CURRENT RATE OF INCREASE FOR A CENTURY AND A HALF means anything at all …

    So yes, Jim, if you believe that a never-before-seen exponential rate of CO2 growth will not only suddenly and miraculously start to occur, but will last for a century and a half, you can prove anything … and the IPCC clearly depends on people being foolish enough to believe that kind of nonsense. A 1% CO2 growth rate sustained for over a century might give an 8 m sea level rise … and if pigs could fly, we might see an 8 meter deep layer of flying pigs … but pigs can’t fly, and we’ve never once had a 1%/year increase in CO2, much less one that was sustained for a century and a half.
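
    The compounding arithmetic here is easy to check. A minimal sketch (the rates are the ones quoted above; the 1%/yr-for-140-years case is TAR scenario (b)):

    ```python
    # How a constant annual CO2 growth rate compounds over time.
    def concentration_multiple(rate_per_year: float, years: int) -> float:
        """Factor by which the concentration grows at a fixed annual rate."""
        return (1.0 + rate_per_year) ** years

    # TAR scenario (b): 1%/yr sustained for 140 years -> roughly quadrupled CO2
    print(round(concentration_multiple(0.01, 140), 2))   # 4.03
    # The ~0.5%/yr average observed over recent decades, over the same span
    print(round(concentration_multiple(0.005, 140), 2))  # 2.01
    ```

    So the scenario’s quadrupling really does require the full 1%/yr sustained for the full 140 years; at the observed rate the same span yields only a doubling.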

    You might believe the IPCC’s models, and their claims. You will find, however, that folks around here are a little less credulous: we prefer those old-fashioned things called facts, we’ve actually heard of “garbage in, garbage out” and thought about what that means, and we believe in models … but only after they have been tested and shown to work.

    w.

  42. Lee
    Posted Nov 29, 2006 at 9:19 AM | Permalink

    mark, you did not answer the question. True or false?

  43. MarkR
    Posted Nov 29, 2006 at 9:47 AM | Permalink

    Sure I did Lee, now what about the Hockey Stick?

  44. Michael Jankowski
    Posted Nov 29, 2006 at 9:47 AM | Permalink

    This also, I think, answers your claim that “Al Gore’s claims for possible sea level rise ….. is far outside the bounds suggested by most working climate scientists” – it is quite in line with millennial expectations (if we fail to curb greenhouse emissions).

    Well, when Gore was confronted about this very claim by George Stephanopoulos, his response was “[The scientists] don’t know…they just don’t know.” So Gore himself didn’t refer to the TAR for his defense – he simply implied his guess was as good as the scientific consensus. So if you think the TAR supports his views, how come he didn’t use it as support? How come he instead acknowledged his projections differ from those of “the scientists” if they are in fact “quite in line” with the TAR?

  45. Steve Sadlov
    Posted Nov 29, 2006 at 10:30 AM | Permalink

    RE: #20 – Merely consider the basic facts of the measurement technique. You are using radar, with a certain frequency spectrum. Your transceiver has certain discrimination and tuning capabilities. There are losses and phase shifts (as well as false returns and noise). The electronics themselves have innate limitations in resolution – the sum total of all the 10% tolerance capacitors, 1% tolerance resistors, the jitter specs, the linearity specs of the amps, etc, etc, etc. And we are using this in an attempt *TO MEASURE A FEW MILLIMETERS OF CHANGE IN PATH LENGTH?!!!!*

  46. Steve Sadlov
    Posted Nov 29, 2006 at 10:37 AM | Permalink

    RE: #36 – Your statement reveals much. It is clear that you do not personally understand what a measurement system analysis is (it is not what you stated). Hopefully others in the field do know what it is and are planning to conduct one at some point. Advice to you would be, start out by cracking a basic text regarding Six Sigma.

  47. Steve Sadlov
    Posted Nov 29, 2006 at 10:39 AM | Permalink

    RE: #39 – So you actually believe that Greenland will lose that much ice over the next 1000 years? Fascinating. Malthus smiles. (“We’re doomed, DOOMED!”)

  48. Steve Sadlov
    Posted Nov 29, 2006 at 10:45 AM | Permalink

    RE: #41 – Another key assumption in all that doom and gloom is that there is any sort of causally deterministic outcome which predisposes all that continental ice to melt and the seas to warm that much, based on even the most extreme carbon dioxide increase assumption. Even that particular science is not in. I like Pielke Sr’s blog these days, he has myriad new and challenging climate science ideas and studies popping up almost daily. There is so much more than what the current orthodoxy are portraying. The climate system is only just now beginning to be understood. It’s funny, the AGW dramatists call any who question them “flat earthers” but in reality, they are about at the level of “learned men of the Church” circa 1300AD, regarding their understanding of the climate system! It is THEY who are the flat earthers, fiercely defending their orthodox views!

  49. Michael Jankowski
    Posted Nov 29, 2006 at 12:00 PM | Permalink

    Re #181: Paul, various folks including Jim Hansen, Richard Alley and Michael Oppenheimer seem to think a substantial rise due to dynamical melting during the next century is a serious possibility. See the discussion here for the reasons why. There is no model showing this as yet

    Model superman Jim Hansen thinks it’s a possibility, yet “there is no model showing this?” Fascinating.

    So when it comes to radiative balance, temp change, precip change, etc, we’re supposed to believe the models represent a “serious possibility.” But when the modellers themselves don’t have a model suggesting a serious sea level rise, we’re supposed to believe their intuition and not the models?

  50. bender
    Posted Nov 29, 2006 at 12:07 PM | Permalink

    Fallible human intuition is one of the main reasons we build models. They prevent our imaginations from running wild with improbable scenarios.

  51. Steve Bloom
    Posted Nov 29, 2006 at 4:10 PM | Permalink

    Re #49: You know perfectly well why those concerns are being expressed, Michael. Why pretend not to? The observations that invalidated the prior “slow melt” model assumption are very new, as you know, and until the new model(s) are ready all we have to go on are the observations of present and past climate. As I understand it, constructing such a model is extremely difficult. In the meantime, if you check today’s news you’ll find some related concerns regarding the Ross ice shelf. Naturally, until such time as we can accurately model the circumstances under which it will again abruptly collapse (as it has in the past), you seem to think we should assume it can’t. You are aware that the 4 to 6 meter Eemian sea level highstand seems to have been the result of WAIS collapse following loss of the Ross ice shelf, right?

    Re #50: Yes. Models are good.

  52. Lee
    Posted Nov 29, 2006 at 4:22 PM | Permalink

    so…

    1. Current models calculate melt rates on the assumption that those big blocks of ice sit there and melt away in local temperatures.

    2. Data indicate that the assumption is wrong, that the ice does NOT just sit there, but undergoes dynamics that accelerate melting beyond the previous model predictions (so much for exaggerating those past results, BTW).

    3. Modeler says, we seem to have been wrong; the data is telling us that there are unmodeled effects that cause melting to happen faster than our models say.

    4. Modeler is attacked by people who frequently complain about relying too much on models and not enough on data, for pointing out that the data show things happening faster than his model.

  53. bender
    Posted Nov 29, 2006 at 5:08 PM | Permalink

    Re #52
    You still haven’t answered my question from another thread as to the effectiveness with which moist convection is represented in the GCMs. Is this not a negative feedback which could ultimately cap the rate and level of warming, in spite of all the alarming runaway positive feedbacks?

  54. Lee
    Posted Nov 29, 2006 at 5:16 PM | Permalink

    Bender, I don’t believe there are “alarming runaway positive feedbacks.” Nor do any of the serious climate scientists I’ve read.

    I do believe that there are amplifying feedbacks, most likely leading to a response to 2xCO2 likely to be in the range of 1.5 – 4.5C. This is congruent with the evidence from several lines of analysis. Implicit in this is that there is net positive feedback (with limited gain) – which does not preclude that some feedback effects will be negative.
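
    The distinction being drawn here, between amplifying feedback and runaway feedback, can be made concrete with the standard linear-feedback relation, response = direct / (1 − f). A sketch (the ~1.2 C no-feedback 2xCO2 response is a commonly quoted illustrative value, not a figure from this thread):

    ```python
    def amplified_response(direct_warming_c: float, f: float) -> float:
        """Equilibrium response with net feedback factor f (f < 1: bounded, no runaway)."""
        assert f < 1.0, "f >= 1 would imply runaway feedback"
        return direct_warming_c / (1.0 - f)

    # Net positive feedback amplifies but does not run away while f < 1:
    print(round(amplified_response(1.2, 0.2), 2))   # 1.5 (low end of the quoted range)
    print(round(amplified_response(1.2, 0.73), 1))  # 4.4 (near the high end)
    ```

    A net f between roughly 0.2 and 0.75 reproduces the 1.5 – 4.5C range quoted above, with no runaway implied.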

  55. Paul Dennis
    Posted Nov 29, 2006 at 5:33 PM | Permalink

    re #51, 52

    Steve, Lee, our understanding of ice sheet dynamics and melting/ablation mechanisms is clearly undergoing significant reappraisal. My earlier comments concerning models were a simple statement that we don’t have any. It wasn’t meant as a criticism. Let’s try and get away from the constant antagonism from entrenched positions.

    Clearly we need to develop a better understanding of ice sheet mechanics, rheology, the role of melt water, moulins and basal sliding, the extent to which strain softening is important etc. Such empirical data needs to be incorporated into a phenomenological description of what is happening to the ice sheets (linear viscous behaviour, power law creep etc.) and better still constitutive relationships based on deformation mechanisms for the ice. We need to understand the stress field and the ice response to that. Undoubtedly it is complicated.

    Observations of ice sheet behaviour are limited temporally. There is some data to show that the response to collapse of an ice shelf is a speeding up of the ice flow. However this may be a transient phenomenon as the sheet adjusts to a new stress field and the field subsequently decays. It is rather like doing a stress relaxation experiment. We need more experimental data and field observations over a longer period to answer that.

    In the absence of models it is fine to suggest that a possibility is that on millennial time scales we may observe wide-scale mass loss from Greenland and the WAIS. However, we should be in no doubt that these are speculations and not predictions. We can draw comparisons with previous interglacials and Steve rightly pointed out the Eemian, where, in the period 120-130 ky ago, sea levels were several metres higher than present levels and temperatures were higher than present, perhaps by as much as 3 degrees C. There is still debate on this. However, temperature should not be taken as the only metric that characterises an interglacial. There are important differences between the Eemian, the two interglacials before that, and the present Holocene that may be relevant.

    Thus what we are left with are a series of scenarios based on doubling of CO2 levels (1% per annum growth over a 70 year period then stabilisation), quadrupling of CO2 levels (1% per annum growth of CO2 over a 140 year period then stabilisation).

    However, the crux of the debate with respect to sea level rise and ice sheet melting is whether these are realistic scenarios, and if they are, the effect these are likely to have on temperatures. It comes down to what we think is the climate forcing with respect to radiation imbalance. On this the jury is still very much out, and where you stand on this issue largely puts you in the AGW or the GW camp!

    Personally I tend towards the lower end of the scale, don’t see danger of tipping points and subsequent rapid loss of polar continental ice sheets. I’ve yet to see any experimental data, or natural observations that would put a figure as high as Hansen has suggested. I’m well aware of where his estimates come from but yet to be convinced that we really have a good handle on the climate forcing during the last glacial maximum.

    There is a vast amount of exciting and good science to be done here. None of us are ‘flat earthers’ as has been suggested. We’re all trying to understand the way the earth system works and how climate may change, whether naturally or through anthropogenic forcing.

  56. bender
    Posted Nov 29, 2006 at 5:35 PM | Permalink

    Ok, you disagree with one of my presumptions, or at least my choice of language. Fine. Now what do you think of the question itself?

  57. Lee
    Posted Nov 29, 2006 at 5:40 PM | Permalink

    bender, I thought that was implicit and obvious in my answer.

    There are likely to be individual negative feedbacks (and your proposed mechanisms could well be one such – I don’t know, but it’s plausible) but the net feedback is almost certainly positive and very likely to be in the range I mentioned.

  58. Paul Dennis
    Posted Nov 29, 2006 at 5:49 PM | Permalink

    re #57

    Lee, is there any experimental or observational data that suggests the net feedback is likely to be positive? It seems to me this is crucial to the debate.

  59. Lee
    Posted Nov 29, 2006 at 6:06 PM | Permalink

    re 58:

    Annan and Hargreaves 2006 contains a reasonable overview of the field. Here are some more, quickly acquired from a cursory attempt with google.

    Here are some copied quickly from wikipedia:
    http://en.wikipedia.org/wiki/Climate_sensitivity
    * Quantifying uncertainties in climate system properties with the use of recent observations; Forest, C.E., P.H. Stone, A.P. Sokolov, M.R. Allen, and M.D. Webster, 2002. Science, 295. 24 (preprint)
    * Using multiple observationally-based constraints to estimate climate sensitivity; Annan and Hargreaves, GRL, 2006 [7]
    * Constraining climate forecasts: the role of prior assumptions; Frame, D.J., B.B.B. Booth, J.A. Kettleborough, D.A. Stainforth, J.M. Gregory, M. Collins, And M.R. Allen, 2005. Geophysical Research Letters, 32, L09702, doi:10.1029/2004GL022241. [8]
    * An observationally based estimate of the climate sensitivity; JM Gregory, RJ Stouffer, SCB Raper, PA Stott, NA Rayner; Journal of Climate, 2002 .
    * Objective Estimation of the Probability Distribution for Climate Sensitivity; Andronova, N., and M. E. Schlesinger. 2001., J. Geophys. Res., 106, D19, 22,605-22,612 [9] data: [10]

    Here’s another:

    ScienceWeek

    CLIMATOLOGY: ANCIENT CLIMATE AND FUTURE CLIMATE

    The following points are made by D.P. Schrag and R.B. Alley (Science 2004 306:821):

    1) Humans are changing the amount of carbon dioxide in the atmosphere by burning coal, oil, and gas. The current atmospheric CO2 concentration is higher than it has been for at least the past 430,000 years [1], and perhaps for tens of millions of years [2]. Over the next 100 years, without substantial changes in energy technology or economic development, the atmospheric CO2 concentration will rise to 800 to 1000 ppm [3]. This rise represents a spectacular uncontrolled experiment that humans are performing on Earth. The paleoclimate record may provide the best guess as to what may happen as a result.

    2) One crude measure of how much the climate will warm in response to an increased atmospheric CO2 concentration is the climate sensitivity, often taken as the globally averaged warming expected from doubling the atmospheric CO2 concentration. This sensitivity is usually estimated as between 1.5 deg and 4.5 deg C on the basis of results from a suite of complex climate models and from efforts to explain temperature changes over the past century [4]. However, many uncertainties exist in that estimation, including large gaps in our understanding of water vapor and cloud feedbacks on climate.

    3) The study of past climates provides information about the magnitude of, and causes for, many preinstrumental climate changes, allowing for comparison with climate models and an independent assessment of climate sensitivity. Periodic ice ages over the past 2 million years were paced by Earth’s orbit around the Sun. However, the synchronous and substantial glaciation in both hemispheres requires some additional feedbacks beyond the orbital variations to amplify the climate response and make it uniform in both hemispheres. Changes in the atmospheric CO2 concentration are likely responsible for both [5]. The sea surface temperature in the Western Equatorial Pacific was about 3 deg C colder during the last ice age than it is today. Given that this warm and stable area of the world ocean was relatively unaffected by changes in high-latitude ice cover and in ocean circulation, the cooling must be explained predominantly by radiative effects associated with changes in atmospheric CO2 concentration. This observation yields a climate sensitivity that is on the high end of modern estimates, consistent with model simulations of the ice ages.

    4) Likewise, warm episodes in Earth’s history reveal a similar cautionary lesson. During the Eocene, 50 million years ago, palm trees grew in Wyoming and deep ocean temperatures were more than 10 deg C warmer than present. Because we do not know exactly how high the atmospheric CO2 concentration was at that time, we cannot use it as a direct measure of climate sensitivity. However, the extreme warmth at high latitudes — especially during the winter in continental interiors — cannot be simulated by climate models purely through elevating greenhouse gas concentrations. Special cloud feedbacks must be included that are not present in the models used to predict future climate change. This observation suggests that feedbacks may be missing from current models and that future climate change may be underestimated in these models, particularly at high latitudes.

    References (abridged):

    1. J. R. Petit et al., Nature 399, 429 (1999)

    2. M. Pagani, M. A. Arthur, K. H. Freeman, Paleoceanography 14, 273 (1999)

    3. Intergovernmental Panel on Climate Change, Climate Change 2001: The Science of Climate Change (Cambridge Univ. Press, Cambridge, 2001)

    4. R. A. Kerr, Science 305, 932 (2004)

    5. C. Lorius et al., Nature 347, 139 (1990)

    Science http://www.sciencemag.org

  60. bender
    Posted Nov 29, 2006 at 6:08 PM | Permalink

    I think I understand what you are trying to say, Lee, but “net feedback” is meaningless in a dynamical system. It is not a matter of “summing” up the positives and negatives and looking at the “balance”. The feedbacks operate on different time and space scales with different time lags.

  61. Paul Dennis
    Posted Nov 29, 2006 at 6:17 PM | Permalink

    Lee, thanks for the references. I look forward to some interesting reading and will let you know what I think.

  62. Lee
    Posted Nov 29, 2006 at 6:23 PM | Permalink

    bender, the best evidence we have is that all of those operating together and in concert, end up amplifying the direct effect of forcings, such that 2xCO2 will cause 1.5 – 4.5C warming – with the bottom end of that range much better constrained than the top.

    and note, bender – you are the one trying to get me to look at a single isolated mechanism, out of context with the entire system. If you are saying that one mechanism is going to cause negative feedback, what is that if not an attempt to ‘sum’ a single negative mechanism into the overall system? The ‘net’ feedback I’m referring to is the result of all those working together at different time scales, such as we observe in the real system.

  63. Loki on the run
    Posted Nov 29, 2006 at 6:32 PM | Permalink

    Seeing the wood

    says:

    THE clearing of forests for agriculture or logging is progressing at a worrisome rate around the world. But that is not the whole story. A new study shows that, in richer countries at least, many more trees are springing up than are being felled.

  64. Lee
    Posted Nov 29, 2006 at 6:52 PM | Permalink

    Journal of Climate

    Article: pp. 3117–3121
    An Observationally Based Estimate of the Climate Sensitivity

    J. M. Gregory (a), R. J. Stouffer (b), S. C. B. Raper (c), P. A. Stott (d), and N. A. Rayner (d)

    a. Hadley Centre, Met Office, Bracknell, Berkshire, United Kingdom
    b. Geophysical Fluid Dynamics Laboratory, Princeton, New Jersey
    c. Climatic Research Unit, University of East Anglia, Norwich, Norfolk, United Kingdom, and Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany
    d. Hadley Centre, Met Office, Bracknell, Berkshire, United Kingdom
    ABSTRACT

    A probability distribution for values of the effective climate sensitivity, with a lower bound of 1.6 K (5th percentile), is obtained on the basis of the increase in ocean heat content in recent decades from analyses of observed interior-ocean temperature changes, surface temperature changes measured since 1860, and estimates of anthropogenic and natural radiative forcing of the climate system. Radiative forcing is the greatest source of uncertainty in the calculation; the result also depends somewhat on the rate of ocean heat uptake in the late nineteenth century, for which an assumption is needed as there is no observational estimate. Because the method does not use the climate sensitivity simulated by a general circulation model, it provides an independent observationally based constraint on this important parameter of the climate system.

    Manuscript received May 8, 2002, in final form July 5, 2002

    DOI: 10.1175/1520-0442(2002)015&lt;3117:AOBEOT&gt;2.0.CO;2

  65. 2br02b
    Posted Nov 29, 2006 at 7:09 PM | Permalink

    Re #51:

    In the meantime, if you check today’s news you’ll find some related concerns regarding the Ross ice shelf. Naturally, until such time as we can accurately model the circumstances under which it will again abruptly collapse (as it has in the past), you seem to think we should assume it can’t. You are aware that the 4 to 6 meter Eemian sea level highstand seems to have been the result of WAIS collapse following loss of the Ross ice shelf, right?

    Considering the Ross ice Shelf on its own for a moment:

    (1) The Ross Ice Shelf is basically a gigantic iceberg floating on the ocean.

    (2) Icebergs and ice shelves float because they are less dense than water.

    (3) The proportion of their volume that appears above sea level is a measure of how much less dense than water they are.

    (4) When they melt, they acquire the same density as water–because they have become water.

    (5) Therefore their melted volume is exactly equal to the volume of iceberg or ice shelf that had been hidden below sea level before they melted.

    (6) Therefore the net effect of a melting iceberg–or Ross Ice Shelf– upon world sea level can be stated with total precision.

    (7) It is Zero. Nada. Nix. Nil. Nowt. Zilch. And that is exact, to however many decimal places you fancy.

    And if you think that models that overlook this elementary fact of high school physics are good, well, that says a lot about you. And the models.
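
    The buoyancy argument in points (1)-(7) is just Archimedes’ principle, and can be sketched directly. One caveat worth flagging: the exact cancellation holds for ice floating in water of the same density as its meltwater; because shelf ice melts into denser seawater, the real contribution is slightly above zero, though still tiny compared with grounded-ice loss. A sketch under the fresh-water simplification (the density values are typical assumed figures):

    ```python
    RHO_ICE = 917.0     # kg/m^3, typical glacial ice (assumed value)
    RHO_WATER = 1000.0  # kg/m^3, fresh water, matching the meltwater

    def sea_level_contribution(ice_volume_m3: float) -> float:
        """Net volume a *floating* ice mass adds to the ocean when it melts."""
        submerged = ice_volume_m3 * (RHO_ICE / RHO_WATER)  # volume below the waterline
        meltwater = ice_volume_m3 * (RHO_ICE / RHO_WATER)  # same mass at water density
        return meltwater - submerged                       # identically zero

    print(sea_level_contribution(1e12))  # 0.0
    ```

    Grounded ice such as the WAIS is a different matter: its mass is not already displacing ocean water, so its loss does raise sea level.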

  66. Lee
    Posted Nov 29, 2006 at 7:21 PM | Permalink

    re 65 – did you miss the part about “WAIS collapse following loss of the Ross Ice shelf?”

    What makes you think that this observation comes from the models? What makes you think that the models miss the basic fact that floating ice is in flotational equilibrium?

    The WAIS is largely grounded. That means it is NOT in flotational equilibrium. And that means that collapse of the WAIS will contribute volume to oceanic waters and cause sea levels to rise.

    It seems that loss of bounding ice shelves can increase ice flow in bounded glacial masses. We know that following loss of the Larsen A and B ice shelves, that glaciers feeding into those shelves accelerated. The hypothesis here is that loss of the Ross shelf destabilized the WAIS, and that subsequent loss of some or all of the WAIS caused sea levels to rise. That does not violate any physics, high school or otherwise.

  67. Willis Eschenbach
    Posted Nov 29, 2006 at 7:22 PM | Permalink

    Lee, you cite a paper above saying:

    Over the next 100 years, without substantial changes in energy technology or economic development, the atmospheric CO2 concentration will rise to 800 to 1000 ppm [3]

    The “[3]” refers to the IPCC TAR. However, I cannot find any such claim in the TAR. In fact, none of the scenarios shown in the TAR here even get to 1000 ppm by 2100, and only the most extreme one gets over 800 ppm by 2100. The paper you cite is simply wrong.

    Before you just randomly post “scientific” reports, think about them critically. Not all “science” these days is science. This is climateaudit, and posting studies without thinking about them, no matter where they are published, will not gain you any bonus points. Instead, it just points out that you haven’t done your homework …

    w.

  68. 2br02b
    Posted Nov 29, 2006 at 7:33 PM | Permalink

    re 65 – did you miss the part about “WAIS collapse following loss of the Ross Ice shelf?”

    No.

    But did you miss the bit where I said, “Considering the Ross ice Shelf on its own for a moment:”?

    Try reading what is said before answering. You may look less like an idiot.

  69. bender
    Posted Nov 29, 2006 at 7:35 PM | Permalink

    Re #62
    Lee, I am not trying to twist your words or trip you up. I am not trying to win a semantic debate. I am trying to clarify my question so that you (or someone else) can answer it.

  70. Lee
    Posted Nov 29, 2006 at 7:35 PM | Permalink

    Willis, I clearly said that those cites were “quickly acquired from a cursory attempt with google”. I offered them as possibilities of kinds of approaches being taken, and to put them out there for people to evaluate.

    Dinging me for offering unevaluated citations that I clearly indicated were not evaluated, kind of points to you not having done even so much of your homework as to have bothered to notice what I said in that post.

  71. Lee
    Posted Nov 29, 2006 at 7:39 PM | Permalink

    re 69 – bender, I gave you my best answer. I don’t know, but it is plausible that the mechanism you mention might constitute, in itself, a negative feedback. That doesn’t change the arguments for an overall positive ‘amplifying’ feedback.

  72. Lee
    Posted Nov 29, 2006 at 7:41 PM | Permalink

    68 – No, I did not. Nor did I miss the rest of your post, where you set about showing that loss of the Ross shelf won't increase sea level, as if that were a novel idea, then attributed to the models the idea that melting of floating ice can change sea level, and then effectively called the modelers and boris idiots for thinking that – when in fact no one was attributing sea level rise to loss of floating ice.

  73. Dave Dardinger
    Posted Nov 29, 2006 at 7:43 PM | Permalink

    Lee, usually saying something like, “quickly acquired from a cursory attempt with google” is used as an insult to the person it’s spoken to, meaning “See, you should have found this without half trying.” That you claim to have meant “Well, I didn’t put much effort into this, but that’s all you’re worth!” is a different insult, but still par for the course for you.

  74. Lee
    Posted Nov 29, 2006 at 7:51 PM | Permalink

    Dardinger, please keep this crap to yourself. I was asked, I did a quick search, labeled it for what it was, and put it up in an honest attempt to be responsive to a direct request, and I received a thank you from the person who asked. Your attempts to attribute motivations, and those of Sadlov, who also has been doing this a lot, seem to be a direct violation of Steve’s site rules, and it’d be good if the both of you knocked it off.

  75. Stan Palmer
    Posted Nov 29, 2006 at 8:03 PM | Permalink

    For those who are wondering about the interminable number of posts that have been submitted here about seemingly trivial issues, I refer you to the paper at the link below on the topic of “Motivated Reasoning”. The Internet seems to be an ideal place for people who really believe in something and sincerely want to prove their beliefs correct. Google and the ability to post freely are liberating for such people.

    Click to access Kunda_The%20Case%20for%20Motivated%20Reasoning.pdf

  76. Lee
    Posted Nov 29, 2006 at 8:10 PM | Permalink

    Stan,

    Personally I would attribute it to the fact that JohnA posted several links regarding Monckton’s article, that Monckton’s article is full of multiple false statements relevant to central issues at this blog, and that even some of the more absurd of those are being strongly defended by a number of people here.

    But if you want to attribute a psychological ‘syndrome’ of sorts to those defending absurd falsehoods, be my guest.

  77. Willis Eschenbach
    Posted Nov 29, 2006 at 8:39 PM | Permalink

    Lee, you criticize me above for what I didn’t say:

    Dinging me for offering unevaluated citations that I clearly indicated were not evaluated …

    I’m not dinging you for that. I’m dinging you for being so uninformed as to believe that the IPCC said that business as usual will lead to 800 to 1000 ppmv of CO2. I knew immediately that the number was way too high, and when I researched it, it was way too high. In climate science, riddled as it is with bad procedure, bad data, and inflated claims, posting anything when you are that clueless about the subject is dangerous.

    w.

  78. bender
    Posted Nov 29, 2006 at 8:43 PM | Permalink

    The first aspect of the reasoning process to be poisoned by motivation is the way in which uncertainty is modeled in the mind. If you desperately want a conclusion – either way – you will convince yourself that the uncertainty is negligible. This allows you to use hard logic, as opposed to fuzzy logic, thus leading to a cleaner-looking conclusion. I contend this is a major reason why robust confidence intervals are frequently absent in climate science, from hockey sticks to hurricanes to GCMs. The high levels of uncertainty are damning to the consensus.

  79. Lee
    Posted Nov 29, 2006 at 9:02 PM | Permalink

    willis, I could (and did) say just about the same thing regarding letting a trend analysis of a noisy data set be biased by zeroing to a single year, and letting a single year’s year-to-year variation offset the entire trend. Except that was intentional, and you’ve been defending it.

    I glanced through that paper quickly, that value is high but within some ‘alarmist’ values, I didn’t follow the cite, I was careful to qualify the level of attention I had given those papers. You found that the value cited didn’t match the paper they cited – fair enough. There’s an error, and that impacts (to some extent) the validity of the argument they make. Now get over yourself.

  80. 2br02b
    Posted Nov 30, 2006 at 2:10 AM | Permalink

    Re 72:

    … you … attributed to the models the idea that melting of floating ice can change sea level, and then effectively calling the modelers and boris idiots for thinking that…

    No I didn’t call them idiots. That’s a notion you introduce here for the first time.

    (Still, if you think the cap fits…)

    All I had to say about ‘idiots’ was that you might look less like one if you read properly what you were responding to first. I see that’s a lesson you still have to learn.

  81. Willis Eschenbach
    Posted Nov 30, 2006 at 3:34 AM | Permalink

    Lee, you say:

    willis, I could (and did) say just about the same thing regarding letting a trend analysis of a noisy data set be biased by zeroing to a single year, and letting a single year’s year-to-year variation offset the entire trend. Except that was intentional, and you’ve been defending it.

    Sorry, Lee, while you could say I’m uninformed about the Hansen analysis, you’d be just as wrong there. The Hansen model was run for 100 years to “zero” it to a certain year, 1958, and the “A, B, C” scenarios were started from the end of the run. The model did pretty well, it ended up with a value only 0.1°C cooler than the actual temperature … but remember that according to Hansen, the end point of the model run was only accurate to ±0.22°C (2sd). In other words, they could as easily have ended up above the 1958 value as below.

    Since the obvious intention was to set the model to the 1958 conditions, and since they did pretty well in that they could have ended up slightly above or slightly below the 1958 conditions … give me a good argument why we should not start the model runs at the actual 1958 conditions. After all, that was the intention behind running the models for 100 years at the 1958 conditions, wasn’t it?

    I’m not “zeroing in on a single year” as you say. I didn’t pick 1958 out of a host of possibilities just to make my case, as you imply … Hansen zeroed in on 1958 and ran his model for 100 years to set the start to 1958. But Hansen didn’t stick to the idea. Since the final year of the run was in his favor, he didn’t start them together. Do you think that if the final year had ended up warmer than the instrumental data, that he would have argued that the model runs should have started from there?

    Since we want to compare the model results and instrumental data since 1958, and since the model was run 100 years on 1958 data to get it as close as possible to 1958 temperatures, please give me a good reason to bias the results in favor of the models by starting them anywhere but at the same point, the actual 1958 temperature.

    w.

  82. Jim Barrett
    Posted Nov 30, 2006 at 4:29 PM | Permalink

    Willis (your posting 41) – here are a number of points to consider:

    1. In posting 39, I used terms like “working values”, “huge uncertainty” and “roughly”.

    2. Anyone is free to interpret TAR Section 11.5.4 as they like; this was my open and honest interpretation.

    3. The most important feature of Fig 11.5 of the TAR is the ultimate CO2 concentration (2x or 4x preindustrial), since the resultant sea-level is almost constant by the end of the millennium. So the actual rate at which CO2 gets there (over a century or so) is relatively unimportant. A figure of 2 metres of sea-level rise due to thermal expansion is the mid-range one for reaching 4x CO2.

    4. If you think we are going to safely fall short of 4x CO2 during this millennium, consider the latest emission figures – the rate of growth of emissions rose from 0.8% per year from 1990 to 1999 to 3.2% per year from 2000 to 2005 – a pretty impressive acceleration!

    (see: http://www.physorg.com/news82381987.html
    and http://www.newscientist.com/article/dn10507-carbon-emissions-rising-faster-than-ever.html)
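
    For scale, here is the compound-growth arithmetic those rates imply. This is just back-of-envelope doubling-time math, not a projection from any emissions scenario:

```python
import math

# Doubling times for the growth rates quoted above
# (0.8%/yr for 1990-1999, 3.2%/yr for 2000-2005). Illustrative only.
def doubling_time(annual_rate):
    """Years for a quantity growing at `annual_rate` to double."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.008), 1))  # 87.0 years at the 1990s rate
print(round(doubling_time(0.032), 1))  # 22.0 years at the 2000-2005 rate
```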

  83. bender
    Posted Nov 30, 2006 at 4:39 PM | Permalink

    Jim Barrett, how do you propose reining in emissions in countries other than your own, such as India and China?

  84. Jim Barrett
    Posted Nov 30, 2006 at 4:40 PM | Permalink

    2br02b (posting 65): This is typical of the dismal drivel so often posted here. Firstly, it suggests that climate scientists do not know the “bleeding obvious”. What you say is an APPROXIMATION understood well by anyone who seriously works with sea level. However, it is an APPROXIMATION – to say:

    “Therefore the net effect of a melting iceberg — or Ross Ice Shelf — upon world sea level can be stated with total precision ….. It is Zero. Nada. Nix. Nil. Nowt. Zilch. And that is exact, to however many decimal places you fancy.”

    simply indicates the ignorance of the writer (even ignoring the knock-on effect of ice-shelf melting on the flow of continental ice). The actual result of melting an iceberg on sea-level depends subtly on the equation of state of seawater — the thing we can say with TOTAL confidence is that it is NOT zero!

    The sea contains salt, you know …..

  85. Hans Erren
    Posted Nov 30, 2006 at 4:42 PM | Permalink

    emission rates (from cdiac) compared to sres scenarios

    data:
    observed
    http://cdiac.esd.ornl.gov/trends/emis/tre_glob.htm
    sres projected
    http://www.grida.no/climate/ipcc_tar/wg1/521.htm

    Looks like we are coming out of a recession, but be reassured the next one will come.

  86. Jim Barrett
    Posted Nov 30, 2006 at 4:44 PM | Permalink

    Bender (posting 83): perhaps we should only rein them in when their per-capita emissions reach some small prescribed proportion of those of the bloated Western world? If that turns out to be insufficiently restrictive, we could also consider reining in our own ….

  87. Hans Erren
    Posted Nov 30, 2006 at 4:49 PM | Permalink

    so re 82:

    4. If you think we are going to safely fall short of 4x CO2 during this millennium, consider the latest emission figures – the rate of growth of emissions rose from 0.8% per year from 1990 to 1999 to 3.2% per year from 2000 to 2005 – a pretty impressive acceleration!

    The emission growth rate isn’t unprecedented, in fact it isn’t even a record.

  88. bender
    Posted Nov 30, 2006 at 4:49 PM | Permalink

    perhaps we should only rein them in when …

    I didn’t ask “when?” I asked “how?” i.e. What is your plan to take control of the international agenda?

  89. 2br02b
    Posted Nov 30, 2006 at 5:42 PM | Permalink

    Re 84:

    The actual result of melting an iceberg on sea-level depends subtly on the equation of state of seawater — the thing we can say with TOTAL confidence is that it is NOT zero! The sea contains salt, you know …..

    No, you display your elementary ignorance here.

    The volume of water – whether salt water or not – displaced by a melted iceberg (or ice shelf) is exactly equal to the volume of the iceberg’s submarine ice before it melted. Therefore it is exactly the same volume as the volume previously displaced by the iceberg. It makes no difference whether the iceberg had been floating in salt or fresh water before it melted.

    Therefore the thing we can say with TOTAL confidence is that it is EXACTLY and PRECISELY zero.

    Which part of Archimedes Principle do you not understand?

  90. John Reid
    Posted Nov 30, 2006 at 7:26 PM | Permalink

    Re #59 – Lee

    Thank you for posting the abstract by Schrag and Alley. It provides a succinct summary of the AGW position.

    I am particularly interested in paragraph 3). To paraphrase:

    Fluctuations in climate over the last 2 million years are due to changes in solar heating due, in turn, to changes in the earth’s orbit around the sun according to Milankovitch’s theory.

    However these changes are too small in themselves to account for the temperature fluctuations actually observed. Therefore some form of positive feedback must be invoked.

    Changing atmospheric CO2 concentration actually observed in ice cores provides a suitable feedback mechanism.

    Under these assumptions we calculate climate sensitivity to CO2 changes and it is large.

    I am skeptical about this for the following reasons:

    1) Milankovitch’s theory does not stack up quantitatively but it is the only way we can account for the ice ages. Milankovitch may well be wrong. Invoking positive feedback to account for the quantitative inadequacies in a physical theory is a rather desperate measure. Anyone who has played with positive feedback in computer models or electronic circuits knows the rapid and chaotic instabilities which can occur. The Vostok ice cores do not show such instabilities – the cycle of ice ages and interglacials is remarkably stable and repeats itself quite precisely over the last 4 cycles. It does not look like the outcome of a positive feedback mechanism.

    2) Even if Milankovitch is correct there are other positive feedbacks apart from CO2 which can be invoked, e.g. changing albedo due to increased ice cover, changing atmospheric H2O concentration, changing ocean dynamics. Water vapour is a much more powerful greenhouse gas than is CO2 because it covers much more of the IR spectrum.

    To calculate a climate sensitivity based on a fix-up of Milankovitch’s theory of ice ages and then to apply it to the present day seems to me to be drawing a rather long bow.

    JR

  91. Nicholas
    Posted Nov 30, 2006 at 8:53 PM | Permalink

    Jim Barrett: I have a better idea. How about all the people in the developed world (i.e. us) have 10 children each, until we dilute our per capita emissions sufficiently so that they’re the same as the developing world.

    That would be entirely fair, and should do wonders for the CO2 content of the atmosphere, no?

  92. Lee
    Posted Nov 30, 2006 at 9:09 PM | Permalink

    re 89 – the volume of water once that ice thaws is temperature dependent – so no, unless it is precisely temperature regulated after that, the change will NOT be precisely zero.

  93. Lee
    Posted Nov 30, 2006 at 9:48 PM | Permalink

    re 81 –

    Willis, you misquoted me. I’m not quite sure how, given that you first copied and pasted the paragraph containing that quote, but there it is.

    I said “zeroing to a single year.” You misquoted that as “zeroing IN on a single year,” as if I was complaining about the year selected, not the problematic approach of choosing ANY single year as a common starting value. I am referring to the problems inherent in choosing ANY single year to derive the common starting value.

    The point is that in each of those traces, there is a long-term trend (which we are interested in discerning) and for each year some amount of year-to-year variation superimposed on that (which makes it harder to discern the trend).

    If you pick a single year to create a common starting point (a “zero” – ie, if you “zero to a single year,” as you did), and set that year’s values for each curve as “zero” for the trends you are looking at, then you inevitably add to each curve, for every point on the rest of the curve, an amount equal in magnitude and opposite in sign to the year-to-year variation of that particular year.

    This is why I pointed out that if you take 10 different years, you will get 10 different sets of such offsets – the value of the year-to-year variation you are adding to the rest of the curve varies depending on which year you choose. Using a single year as the common departure point (the “zero”) makes your analysis extremely sensitive to the values of the variation in that one year. And it means that you are including an undetermined and unreported error equal to the single-year value of the year-to-year variation.

    The way to avoid doing that is not to use a single year, no matter how arrived at, to derive the common starting value. You need to average each of the curves over a longer period, as Hansen did, so that the positive and negative year-to-year variations tend to average each other out. This reduces the error introduced by year-to-year variation into your common starting point, and gets you closer to the trend line, which is what you are interested in.
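
    The single-year versus averaged-baseline point can be illustrated with synthetic numbers. This is a toy simulation, not Hansen's actual model output; the noise level is made up:

```python
import random

# Toy illustration: aligning noisy series at a single baseline year
# injects that year's "weather" into every later point, while averaging
# the baseline shrinks that injected offset.
random.seed(0)

# 50 'years' of flat climate with sd-0.1 annual variation around zero
baseline = [random.gauss(0.0, 0.1) for _ in range(50)]

offset_single = -baseline[-1]                 # zero to one baseline year
offset_mean = -sum(baseline) / len(baseline)  # zero to the baseline mean

# offset_single can be as large as a full year's variation (~0.1);
# offset_mean is smaller by roughly sqrt(50) on average.
print(abs(offset_single), abs(offset_mean))
```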

    You mention that Hansen allows that there is an error of +/- 0.22C. Note that he has quantified this and reported it.

    Your method also has an error – for each trace, an error equal in magnitude and opposite in sign to the value of the year-to-year variation of whichever year you choose as a common starting value. Your method doesn’t quantitate that, doesn’t report it, and in fact pretends it does not exist.

    I’ll also point out that your casually uttered charge that Hansen chose to average over many years in order to consciously skew his results, ESPECIALLY when your method ends up with an unquantified and potentially larger error which is exactly what the averaging you decry is designed to avoid, is simply despicable. And yes, I am attempting to be moderate in choosing that word.

  94. Willis Eschenbach
    Posted Nov 30, 2006 at 10:58 PM | Permalink

    Lee, as always, thank you for your posting above.

    I have to say it again. I didn’t “zero on a certain year”. Hansen did. He ran the model for 100 years to get as close as possible to the 1958 data. Given that, why should we use anything but the 1958 data as a starting point?

    You say that:

    If you pick a single year to create a common starting point (a “zero” – ie, if you “zero to a single year,” as you did), and set that year’s values for each curve as “zero” for the trends you are looking at, then you inevitably add to each curve, for every point on the rest of the curve, an amount equal in magnitude and opposite in sign to the year-to-year variation of that particular year.

    Again, I have not picked “a single year to create a common starting point”, Hansen has. As you point out, this adds to each curve a certain amount. But since we are adding it to all of the curves, what’s the difference?

    You say my comment about Hansen was “despicable”. I had asked:

    Do you think that if the final year had ended up warmer than the instrumental data, that he [Hansen] would have argued that the model runs should have started from there?

    Given Hansen’s “smoking gun” paper in which he reported 10 years of very good agreement with the instrumental data, and did not report the previous 40 years of extremely poor agreement with the instrumental data, I think that it is a reasonable question. Your answer to the question may be yes, he would have started the models above the instrumental data. But my question is not “despicable”, it’s a reasonable question given his selective misreporting in the “smoking gun” paper …

    Finally, Lee, I realize in re-reading my postings to you that I have been overly harsh in some of my replies. I am making a serious effort to be more collegial in my discourse, and I hope that you will accept my apologies for any times when I have gone over the boundaries.

    w.

  95. John Reid
    Posted Nov 30, 2006 at 11:23 PM | Permalink

    Further to #90

    Maybe the simplest conclusion is that Milankovitch is just plain wrong about the most recent ice-ages, viz:

    Science 11 July 1997:
    Vol. 277. no. 5323, pp. 215 – 218
    DOI: 10.1126/science.277.5323.215

    Prev | Table of Contents | Next
    Reports

    Glacial Cycles and Astronomical Forcing

    Richard A. Muller, Gordon J. MacDonald

    Narrow spectral features in ocean sediment records offer strong evidence that the cycles of glaciation were driven by astronomical forces. Two million years ago, the cycles match the 41,000-year period of Earth’s obliquity. This supports the Croll/Milankovitch theory, which attributes the cycles to variations in insolation. But for the past million years, the spectrum is dominated by a single 100,000-year feature and is a poor match to the predictions of insolation models. The spectrum can be accounted for by a theory that derives the cycles of glaciation from variations in the inclination of Earth’s orbital plane.

    R. A. Muller, Department of Physics and Lawrence Berkeley Laboratory, University of California, Berkeley, CA 94720, USA.
    G. J. MacDonald, International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria.

    Once Milankovitch is gone there is no further need to invoke crazy feedback mechanisms to make it work. No ice age CO2 feedback means no support for high values of climate sensitivity.

    JR

  96. Lee
    Posted Nov 30, 2006 at 11:37 PM | Permalink

    willis,

    Hansen ran a long series of years under 1958 conditions, to ‘run up’ his models, and to create a baseline. That run was not a straight line, even though the conditions were the same for each ‘1958’ in the runup. There were differences each year. That yearly variation was different for each model, as one would expect – climate is not weather, even in the models. He then altered conditions to simulate his forcing scenarios. We know that.

    Each curve had its own history in that runup and baseline period. The conditions were 1958 conditions in each, the trend was flat, some years were above the trend, and some below – but climatically speaking they were all 1958. Hansen created 100 years of 1958 data, discarded the first 50 as runup and stabilization, and then used the last 50 to create his baseline.

    You chose the last of those years of ‘baseline’ 1958 to align the curves. You could just as logically have chosen the “1958” from the previous ‘year’ – it was the same conditions, it was climatically 1958 just as much as the last of those ‘years.’ Or the one before that. Or the one from 50 ‘years’ before. But in each case, you would get a DIFFERENT initial offset of the curve, because you would be applying a DIFFERENT set of annual variation to the initial point. And in each case, you would offset the initial alignment of the curves by a different and unreported amount.

    The point is that each individual one of those 50 sets of “1958” has an unspecified and potentially large vertical offset, corresponding to that year’s “weather” or annual variation. If scenario B just happens, in the last ‘year’ of the baseline period, to be a quite warm year, and you use that, then you are applying that large single-year positive excursion (in opposite sign) as an error TO EVERY SUBSEQUENT POINT ON THE ANALYSIS. You end up offsetting the curve by an unknown and unreported amount.

    But we aren’t interested in annual variation. We are interested in the trend, so we want to align TO THE TREND. This is what the averaging does. By taking the last 50 ‘years’ of 1958 climate data, and averaging it, Hansen does his best to remove the impact of whatever annual excursion exists in any single year. Your technique doesn’t, it instead applies any annual variation IN THAT ONE YEAR among 50 applicable ‘years’ of 1958 data, as an offset to the entire remainder of the curve. Given 50 years of data for 1958 climate conditions, you use one year, and discard 98% of the data.

    And THIS is what pushes Scenario B low in your analysis. The last year of the Scenario B runup period was warmer than average, so when you apply that one year (instead of averaging the 50 ‘years’ of 1958 data that Hansen used) you push the underlying trend line low by that amount. And then you then announce that Hansen cheated, and you caught him. Nope.

  97. Lee
    Posted Nov 30, 2006 at 11:39 PM | Permalink

    So, John Reid – are you arguing that solar variations DON’T cause climate change?

  98. Willis Eschenbach
    Posted Dec 1, 2006 at 3:13 AM | Permalink

    Lee, thanks for your post above. Let me see if I can find some points of agreement.

    First, what is the point of running the model for a hundred years at the 1958 conditions? Clearly, it is to set the models so that they start at the same point as the instrumental data.

    Now, the reported data for the instrumental period are an anomaly with respect to the 1950-1979 average. Obviously, we cannot use that as the starting point for the models, because we don’t have any data other than the 100-years-of-1958 runup period for the models. We don’t have a 1950-1979 average for the models.

    Thus, there is no inherently equivalent way to compare the two datasets (modeled and instrumental) point by point, as they are being compared to two different things.

    Since we have no agreement on the question of the starting point, let me instead concentrate on something we do agree on. You say that the goal is to compare the trends. You say that the averaging does this … but it does not. However, we can compare the trends as you recommend, that’s easy. We just take the annual trend for each dataset, and multiply it by the number of years in the record (1958-2005). Here are the trends over the period:

    Data_____________________ Trend
    GISTEMP___________________0.6°C
    HadCRUT3__________________0.6°C
    Scenario A________________1.2°C
    Scenario B________________0.8°C
    Scenario C________________0.7°C

    A few things of note:

    1) The two instrumental datasets are equal.

    2) Both of them are below all three scenarios, even Scenario C (which has no rise in CO2 after the year 2000).

    3) Scenario B is 33% too high, Scenario A is 100% too high.

    In short, the “high – medium – low” scenarios in Hansen’s paper missed the boat entirely. This has been my main point all along, that despite Hansen’s claims, the combination of models and scenarios used in the paper do not give believable forecasts, even with unbelievably low CO2 assumptions (in Scenario C). Also, the oft-repeated claim that Scenario “B” is the closest to the instrumental data is not true.

    Now, these are the trends that you said we wanted to compare. They do not depend on where the trends start, we’ve avoided that discussion entirely. Hansen said he picked the three scenarios to “bracket” the actual outcome, and they have not done so. At this point, can we agree that the Hansen model forecasts have not done well?

    w.

    PS – I have looked at the Theil-Sen trend estimator, as well as the OLS estimator of the trend. The TS estimator is more robust and resistant to endpoint choices and outliers. The two are identical to the nearest tenth of a degree (in 48 years), so the figures shown above are for both estimators.
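
    For anyone wanting to reproduce the comparison in the PS, here is a minimal sketch of the two slope estimators on a made-up series (not the actual GISTEMP or scenario data):

```python
import statistics

# Minimal OLS and Theil-Sen slope estimators, applied to synthetic data:
# a 0.01/yr trend plus an alternating wiggle. Numbers are illustrative.
def ols_slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def theil_sen_slope(xs, ys):
    """Median of all pairwise slopes: robust to outliers and endpoints."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))]
    return statistics.median(slopes)

years = list(range(1958, 2006))
temps = [0.01 * (y - 1958) + 0.05 * ((-1) ** y) for y in years]

print(round(ols_slope(years, temps), 4))        # ~0.0099, near the true 0.01
print(round(theil_sen_slope(years, temps), 4))  # ~0.01, agreeing with OLS
```

    On well-behaved series the two agree closely, as the PS reports; Theil-Sen earns its keep when a few endpoint years are unusual.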

  99. Jim Barrett
    Posted Dec 1, 2006 at 5:03 AM | Permalink

    2br02b (posting 89): I can hardly believe I’m having to have this conversation but I’ll persevere as it illustrates the calibre of some of the claims proffered on climateaudit. You ask:

    “Which part of Archimedes Principle do you not understand?”

    Well, firstly I understand that it is not fundamentally due to VOLUME (on which you appear to base your argument), not even to MASS, but rather to WEIGHT. Here is a reasonably common web definition of Archimedes Principle:

    “Any body partially or completely submerged in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the body.”

    (e.g. http://www.onr.navy.mil/focus/blowballast/sub/work2.htm).

    So let’s go through the argument properly, noting the approximations.

    1. The buoyant force depends on the WEIGHT of the displaced fluid, which balances the WEIGHT of the floating body. It is not the MASSES that balance, as “g” as seen by the displaced fluid is different from “g” as seen by the floating body (they are in slightly different places).

    2. Let the volume of displaced fluid be Vd and the volume of the body above the water be Va. Also let the relevant densities be di, ds and dw for ice, seawater and freshwater, respectively.

    (This may be a good time to make sure that you know that icebergs are made almost exclusively of frozen snow (i.e. very fresh water) and the sea is made of salty water. They have DIFFERENT DENSITIES.)

    3. The mass of the iceberg is (Vd+Va)*di = the mass of displaced seawater, Vd*ds, assuming “g” is constant (APPROXIMATION 1).

    4. If we first warm the iceberg to the surface freezing point of seawater (about -1.9 deg C), then this heat will partially come from the sea, causing cooling of the sea and slight LOWERING of sea level. This is related to Lee’s point (posting 92). However, we will ignore this effect (APPROXIMATION 2).

    5. Let us now melt the iceberg but magically keep the melted freshwater in the region of the originally displaced seawater without any mixing. As the density of seawater is greater than that of freshwater, the freshwater will still project above the surface. The volume of freshwater above the surface is Vd*(ds/dw-1). However, to melt the ice we require some latent heat, which again will partially come from the sea, causing cooling of the sea and slight LOWERING of sea level. We will ignore this effect (APPROXIMATION 3).

    6. Now it starts to get a bit harder as we need to know how the total volume of the fluid will change as we mix the fresh water from the iceberg with the seawater. However, if we assume that the equation of state of seawater is linear with respect to temperature and salinity (i.e. the seawater density is a constant plus a part which is proportional to temperature and salinity), then this means that adiabatic mixing conserves volume (I am making a few minor approximations here, but let’s ignore them). Let’s again ignore any nonlinearity in the equation of state (APPROXIMATION 4).

    7. So let’s mix the freshwater with the seawater – THERE IS NO CHANGE OF VOLUME, so we are still left with this extra volume above the sea surface equal to Vd*(ds/dw-1), which will RAISE sea-level when everything is ultimately levelled out. Again, we generally ignore this (positive) contribution to sea-level rise (APPROXIMATION 5).

    So, we have made FIVE approximations (at least) to get to the approximate rule that there is no change in overall sea level when an iceberg melts. Unless you want to look even sillier and claim that ALL THESE APPROXIMATIONS MAGICALLY CANCEL OUT, I think you should concede that it is untrue to say that “therefore the the thing we can say with TOTAL confidence is that it (i.e. the change in volume) is EXACTLY and PRECISELY zero”. In fact it is extraordinarily dumb to claim that any property of a physical system is EXACTLY any specific value!
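The excess-volume term in step 5, Vd*(ds/dw-1), is easy to evaluate numerically. A minimal sketch, using illustrative round densities (g/cm^3) rather than measured values:

```python
# Fraction of the displaced volume Vd that stands above the surface once
# the berg melts (step 5, before any mixing). Densities are illustrative
# round numbers in g/cm^3, not measured values.
d_sea, d_fresh = 1.025, 1.000   # seawater, freshwater

def excess_fraction(d_sea, d_fresh):
    """Excess freshwater volume per unit of displaced seawater: ds/dw - 1."""
    return d_sea / d_fresh - 1.0

f = excess_fraction(d_sea, d_fresh)   # ~0.025, i.e. ~2.5% of Vd
```

About 2.5% of the displaced volume, the same figure Dardinger and Paul Dennis arrive at later in the thread.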

  100. Jim Barrett
    Posted Dec 1, 2006 at 5:08 AM | Permalink

    Hans Erren (posting 87): You say:

    “The emission growth rate isn’t unprecedented, in fact it isn’t even a record.”

    I never said it was unprecedented (or even a record) – I said it was “a pretty impressive acceleration”. Don’t agree that it is?

  101. Steve Bloom
    Posted Dec 1, 2006 at 7:25 AM | Permalink

    Re #98: Lee, the solarphiles are picky. They like insolation changes due to solar variability, but they don’t like Milankovitch cycles. They also don’t seem to like doing their homework. For example, if Reid would have googled that paper before posting about it, he would have found that the authors have abandoned the idea. There’s also the fact that the implications of inclination for climate sensitivity are no different than those of obliquity.

  102. 2br02b
    Posted Dec 1, 2006 at 11:53 AM | Permalink

    Re #100

    What a farrago of nonsense.

    I shall restate the blindingly obvious:

    The weight of an iceberg = the weight of the same iceberg, melted. BY DEFINITION

    The volume of water (salt or otherwise) displaced by an iceberg (fresh water or otherwise) = the volume of water displaced by the melted iceberg. BY DEFINITION

    Therefore the total volume of the ocean (or whatever) the iceberg is floating in before it melts = the total volume of the same ocean, etc., the melted iceberg is in after it melts. BY DEFINITION.

    Therefore sea level does not budge one iota.*

    That is all there is to it. Archimedes grasped this a couple of thousand years ago. Why can’t you now?

    Everything else you say on this is irrelevant or misleading waffle.

    Are you trying to baffle us with b*lls**t, or are you just baffling yourself?

    If it is the first, I suggest you stop treating us like fools; if it is the second, I suggest you sign up for a course in elementary school introductory physics.

    —-
    *Of course, sea level may vary for a large number of extraneous reasons. But they are entirely extraneous to this.

  103. Lee
    Posted Dec 1, 2006 at 11:53 AM | Permalink

    Willis, it looks like here you simply took starting and ending values for 1958 and 2005, and calculated linear trends using those? If you took the average annual change and multiplied by the number of years, that is what you did.

    By including the single year 1958 (and also the single year 2005) as endpoints for calculating a linear trend over that period, you again make your analysis sensitive to the single year variation of the starting point – and now the end point as well.

    Again, Hansen had 50 years of 1958 data to use to determine the 1958 starting point. This isn’t perfect; Hansen quantified and reported the potential error, as you note above. Saying that there is no perfect way to arrive at that starting value, and then substituting a method with a potentially larger and inherently unquantifiable error, is NOT an improvement on what he did. Why are you insisting on throwing out 98% of that data?

  104. Lee
    Posted Dec 1, 2006 at 12:08 PM | Permalink

    2br02b – he’s got you.

    The key issue (ignoring temperature effects) is that the melting dilutes the seawater and therefore reduces its density.

    Before melting, the berg is floating in water of a given density, and therefore displaces a certain amount of water, equal in weight to the berg – just restating the obvious starting point.

    When it melts, the berg adds fresh water to the ocean, and therefore reduces its density. Less dense water requires more displacement to support the same weight, so the effective displacement of the berg, calculated at the density after melting (which is what matters when considering sea level after melting), must be a bit larger.

    A VERY, VERY, VERY slight deviation from exactly zero, but a deviation none the less.

  105. Loki on the run
    Posted Dec 1, 2006 at 12:12 PM | Permalink

    Re: 102

    Re #98: Lee, the solarphiles are picky. They like insolation changes due to solar variability, but they don’t like Milankovitch cycles. They also don’t seem to like doing their homework. For example, if Reid would have googled that paper before posting about it, he would have found that the authors have abandoned the idea. There’s also the fact that the implications of inclination for climate sensitivity are no different than those of obliquity.

    But likewise, the AGWers fail to keep up with discoveries on new mechanisms affecting the atmosphere: Influence of Cosmic Rays on Earth’s Climate and Experimental evidence for the role of ions in particle nucleation under atmospheric conditions (That last one published in a peer reviewed journal par exellence).

    There are more links at: Cosmic Rays and Earth’s Climate at Junk Science

  106. Posted Dec 1, 2006 at 12:21 PM | Permalink

    Lee #98 and Steve B. #102,

    One needs to read what is written. Solarphiles (of which I am a fan) know that Milankovitch cycles are the trigger for the ice ages/interglacials. The discussion is not about the influence of solar (not enough, according to CO2philes), but about what feedback mechanism is needed to make the complete jump from cold to warm or vice versa.

    According to CO2philes, it is necessary to include a (huge) feedback from CO2 changes, which follow the temperature changes, with +/- 600 years for cold-to-warm transitions and many thousands of years for warm-to-cold transitions.

    As there is a huge overlap for temperature and CO2 levels, there is no definitive conclusion possible about how much CO2 helps in the transitions. But there is one exception: the end of the last warm period, the Eemian. When temperature (and methane) levels went down, CO2 levels remained high until after the lowest temperatures had already been reached. The subsequent reduction of 40 ppmv CO2 had no measurable influence on temperatures. This points to a low influence of CO2 on temperature.

    Thus the influence of CO2 is not necessary to explain the ice age/interglacial transitions. And using that to calculate the influence of CO2 in current circumstances is not warranted.

    For the graphs, see here.

  107. Lee
    Posted Dec 1, 2006 at 12:39 PM | Permalink

    107 – Ferdinand –

    In 96, John Reid was apparently arguing that the paper he cited showed that Mil. cycles aren’t the answer, and then said:

    “Once Milankovitch is gone there is no further need to invoke crazy feedback mechanisms to make it work. No ice age CO2 feedback means no support for high values of climate sensitivity.

    JR ”

    I’m not quite sure how you get from a single datum showing that temps dropped while CO2 stayed up for a while, to a statement that CO2 is not necessary to explain the transitions. ‘Sufficient,’ sure, at least during cooling. But ‘necessary?’

    Are you arguing that Mil cycles alone create enough forcing to drive the glacial – interglacial transitions? Remember that mechanisms going up, are not necessarily the same mechanisms going down.

  108. Posted Dec 1, 2006 at 1:11 PM | Permalink

    Re #108,

    Lee, I suppose that John made a mistake, as the paper distinguishes between the 41,000-year obliquity cycle (which was dominant 2 million years ago) and the inclination cycle, which is dominant in the last million years or so. In both cases, it is the change in insolation which triggers the ice ages. But as the paper says, not enough to explain the complete transition.

    Climate models include a huge feedback from CO2 (about half the transition!). Without CO2, the models are not capable of describing the transition. Thus if it is proven (as at the end of the Eemian) that CO2 is not necessary at all to make a transition, then the role of CO2 in climate models is overblown, for ancient and modern times alike. And other feedbacks need to be invoked: e.g. a change in cloud cover as the insolation wanders over different latitudes, and of course an increased response to ice albedo (higher than currently implied in the models). The latter is by far more important for ice ages/interglacials than for current times, where the ice cover is much smaller.

    As far as I know, the same mechanism (and the same feedbacks) are responsible for the warming and cooling episodes. The Milankovitch cycles are the triggers, and feedbacks (especially ice albedo feedback in this case) help to reach the other point of (more or less) equilibrium. The only difference is the time constraint: melting goes quite rapidly (3,000-5,000 years), while freezing goes more slowly (10-20,000 years). It may be a matter of ocean heat accumulation, but I don’t know for sure.

  109. Loki on the run
    Posted Dec 1, 2006 at 2:15 PM | Permalink

    Ferdinand, thanks for the link to the graphs and other info. The pictures were great.

  110. 2br02b
    Posted Dec 1, 2006 at 2:26 PM | Permalink

    Re #105:

    2br02b – he’s got you.

    The key issue (ignoring temperature effects) is that the melting dilutes the seawater and therefore reduces its density.

    Very sorry and all that, but you STILL haven’t ‘got it’.

    It is true that the melted fresh water ice dilutes the seawater reducing its density.

    However at the same time, it is also true that the infusion of salt from the surrounding salt water into the fresh meltwater increases its density.

    The two processes have EXACTLY equal and opposite effects on overall ocean volume, so the net overall impact is zero. Rather obviously, sea level does not budge one iota.

    (What you are really trying to say is that the fresh meltwater and the existing sea water will mix, so let’s consider the effect of the fresh water on the salt water, but lets not bother considering the effect of the salt water on the fresh water. Not very convincing at all, is it?)

    So now we have established that there is not even…

    A VERY, VERY, VERY slight deviation

    …who’s ‘got’ whom?

    So it’s still back to that physics class for seven-year-olds for Jim and yourself.

    (If you can’t get something as simple as this right, why should we believe anything you say when you try to tackle grown-up questions?)

  111. Lee
    Posted Dec 1, 2006 at 2:43 PM | Permalink

    Nope – because when the fresh water becomes salty, the mixture also ends up slightly less salty than the prior seawater condition.

    This corresponds to his point 5 above.

    If you were magically to preserve the “hole” in the ocean created by the berg, and fill it with the fresh water from the melted berg, that fresh water (because it is the same weight but less dense) will have a slightly larger volume than the hole in the ocean.

    You are correct that the salt water will act on the fresh, and the fresh on the salt, to preserve the original state. But the original state being preserved includes this slight volume differential after melting.

  112. Dave Dardinger
    Posted Dec 1, 2006 at 2:53 PM | Permalink

    Hey, all you sea level arguers! Has anyone ever heard of numbers? Equations are fine, Barrett, and words like “VERY, VERY, VERY slight” are fine, 2br02b, but just working through a simple example is a far better way of deciding things. The density of fresh water in the area we’re talking about is just a hair less than 1.000; that of saltwater is about 1.025. Now let’s try an example. Say we have a floating ice shelf 200 km on a side and averaging 100 m thick. On average it will stick out of the ocean about 10 meters. If melted in situ as Jim suggests, its water will stand about 2.5 meters above the old sea surface; not exactly v-v-v-slight, but at least rather less than the previous 10 meters. The area of the world’s oceans is 3.61 x 10^8 km2, and 200 x 200 = 4.0 x 10^4 km2, so we’ll round for ease of calculation. We have 2.5 m / 10^4, or about .25 mm. This super berg, BTW, would contain 400 cubic km of ice. So how long would it take for this to melt, and is .25 mm slight, v-slight, or v-v-v-slight? I don’t know, but at least you now have something concrete to argue about.

    And no, 2b, your argument in the previous message doesn’t matter. As the salt comes from the other water, it will make that water less dense, and the net result will be just the same, or almost so (the density-vs-salinity relation isn’t totally linear).

  113. Dave Dardinger
    Posted Dec 1, 2006 at 2:57 PM | Permalink

    Oops, make that the berg/shelf contains 4000 km3. That’s what happens when you do everything in your head!
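Dardinger’s arithmetic is easy to re-run with explicit numbers. A sketch, assuming his round densities (ice 0.9, fresh water 1.000, seawater 1.025):

```python
# Re-running Dardinger's floating ice-shelf example with explicit numbers.
d_ice, d_sea, d_fresh = 0.9, 1.025, 1.0      # assumed round densities
side_km, thick_m = 200.0, 100.0              # 200 km square shelf, 100 m thick
ocean_area_km2 = 3.61e8                      # area of the world's oceans

draft_m = thick_m * d_ice / d_sea            # submerged depth, ~87.8 m
freeboard_m = thick_m - draft_m              # ~12 m above water ("about 10 m")
melt_column_m = thick_m * d_ice / d_fresh    # 90 m of fresh water after melting
excess_m = melt_column_m - draft_m           # ~2.2 m ("about 2.5 m")

shelf_area_km2 = side_km ** 2                # 4.0e4 km^2
rise_mm = excess_m * 1000.0 * shelf_area_km2 / ocean_area_km2   # ~0.24 mm
ice_volume_km3 = shelf_area_km2 * (thick_m / 1000.0)            # 4000 km^3
```

The result, roughly a quarter of a millimetre of rise, agrees with the .25 mm figure above, and the shelf volume comes out at the corrected 4000 km3.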

  114. Lee
    Posted Dec 1, 2006 at 3:00 PM | Permalink

    Why, thank you, Dardinger.

  115. Steve Sadlov
    Posted Dec 1, 2006 at 3:22 PM | Permalink

    A less saline ocean freezes at a higher temperature. Anyone considered that?

  116. Lee
    Posted Dec 1, 2006 at 3:45 PM | Permalink

    I would also point out that this:

    “So it’s still back to that physics class for seven-year-olds for Jim and yourself.

    (If you can’t get something as simple as this right, why should we believe anything you say when you try to tackle grown-up questions?) ”

    is yet another example of the kind of unremarked-upon warm welcome that one comes to expect in a nearly continual stream, when one is not a subscriber to the orthodoxy at this site.

    It is even funnier when attached to an argument as incorrect as 2b’s is here.

  117. Dave Dardinger
    Posted Dec 1, 2006 at 3:46 PM | Permalink

    re: #116 Steve S,

    I don’t know that that exactly helps the skeptic side of the issue, since the advantage of un-iced oceans is that it’s easier for the ocean to act as a radiator of heat at night (i.e. in winter). If ice freezes at a higher temperature, that cuts down on the amount of heat that can escape before convection and evaporation are cut off and, as the ice accumulates, before the ice surface becomes very cold, reducing the emission of IR.

    OTOH, I found one interesting fact as I was researching my previous post in this thread: salt water doesn’t have the density maximum at 4 deg C that fresh water does. That means that as salt water cools, it can use its increased density to sink without having to produce ice first. This seems to hold all the way down to less than 10 g/l salt. This may be partly why the mixing layer in cold water is so much thicker than the mixing layer in the tropics.

  118. Lee
    Posted Dec 1, 2006 at 3:46 PM | Permalink

    re 116 –

    Why, that sounds like a positive feedback mechanism, Sadlov.

    Grin.

  119. Lee
    Posted Dec 1, 2006 at 3:49 PM | Permalink

    re 119 – p.s. – yes, I know. It’s probably a poor attempt at self-deprecatory humor. Upon consideration, I thought I’d better point that out preemptively.

  120. 2br02b
    Posted Dec 1, 2006 at 3:53 PM | Permalink

    Re #113:

    words like “VERY, VERY, VERY slight” are fine, 2br02b

    That was Lee’s remark, not mine.

    but just working through a simple example is a far better way of deciding things.

    I quite agree. The problem with Lee and especially Jim is that the concept of ‘simple’ appears to be alien to them: they seem to see pointless complexity as a virtue.

    Say we have a floating iceshelf 200 km. on a side and averaging 100m thick On average it will stick out of the ocean about 10 meters. If melted in situ as Jim suggests it will stick out of the water about 2.5 meters

    Do pray tell how a melted iceshelf can stick any distance whatever out of the water?

    As salt comes from other water it will make it less dense and the net result will be just the same or almost so

    That’s just what I said, minus the “almost so”: sea level will not budge.

  121. Lee
    Posted Dec 1, 2006 at 4:03 PM | Permalink

    re 121;

    “Do pray tell how a melted iceshelf can stick any distance whatever out of the water?”

    It won’t, outside of our thought experiments – instead, it will displace a bit of additional seawater beyond what the berg did. Dardinger just showed the calculation for how much more.

  122. Posted Dec 1, 2006 at 4:24 PM | Permalink

    Hi Lee,

    Going back to the sensitivity issue, I was wondering if you could help me solve an old dilemma of mine that I for one haven’t seen properly addressed anywhere yet.

    You’re pretty confident that the “consensus” climate sensitivity (to a doubling of CO2) is in the range 1.5 C – 4.5 C. However, if we take into consideration A) The rest of GHGs we have been emitting to the atmosphere and B) That the effect of the CO2 increase is logarithmic in its property as a GHG, it turns out that the current ~35% increase in CO2 concentration equates, in terms of GHG forcing, to the equivalent of a 70-75% of a CO2 doubling, as explained by Lindzen here.

    In other words, the earth has already experienced an anthropogenic GHG forcing equivalent to 70-75% of a CO2 doubling. And the temperature has only risen ~0.7 C, almost half of which occurred before anthro GHGs played any major role. With only 25-30% left for an effective CO2 doubling equivalent, how do you jump to a temperature increase whose lower bound is about 4 times that observed and whose upper bound is 13 times higher?

    I would expect you to argue that there are various “masking effects” in play (although from a previous post of yours I gather you -like me- don’t believe in the aerosols-mid century cooling connection too much) but do you really think that they can account for such a huge jump?

    Many thanks in advance,

    Mikel
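The CO2-only part of the logarithmic equivalence Mikel invokes can be checked in two lines (the 70-75% figure in the comment also folds in the other greenhouse gases, which this sketch deliberately ignores):

```python
import math

def fraction_of_doubling(concentration_ratio):
    """CO2 forcing scales as log(C/C0); one doubling corresponds to log 2."""
    return math.log(concentration_ratio) / math.log(2.0)

f = fraction_of_doubling(1.35)   # ~0.43: a 35% rise is ~43% of one doubling
```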

  123. Patrick Trombly
    Posted Dec 1, 2006 at 4:26 PM | Permalink

    Re: 2

    It wasn’t a river but a glacier that pushed the sand down onto the farm. Moreover, the diet of the Greenland Vikings changed from 80/20 land/sea to 80/20 sea/land. Also, trade with the continent fell off, as ship traffic in the North Atlantic fell off due to increased drift ice. This started in the 1200s. The Pope wrote about it. But then, maybe he was part of the Vast Right Wing Conspiracy, hundreds of years before the climate became a political issue.

    The Viking settlement is one of many examples of the effects of climate change in the MWP.

    The severe drought in the Southwest US, which is predicted to happen again if it warms another 1-2 degrees F; the plains buffalo migrating about 500 miles northward into land in central Canada that was then lush grassland; the Viking settlement of Greenland; the Vikings’ exploration of harbors and inlets on the Canadian coast that later would be iced over for much of the year; vineyards in England (1000 years ago, with simple hand tools, before man had cultivated hardy varieties that could withstand cold climates, and at a time when a landowner could not afford to devote acreage to a new plant on a whim); fig and olive trees in Germany – these and similar examples of what grew when and where abound from around the world, as do written contemporaneous observations that the changes were taking place due to changes in the climate.

    The most plausible theories about what will result from global warming curiously mirror what happened last time. All these people insisting that it’s irresponsible for “us” not to act pretend that there was no last time. Wouldn’t the responsible approach be to first study what happened last time, to get an idea of what’s likely to happen this time?

    But that would require admitting that there was a last time, which would mean that this time isn’t “unprecedented” or even abnormal, which means that without direct tangible proof that human-generated CO2 is the cause, this cannot be assumed. And it’s more important to have something go wrong and think you have a valid reason to blame activities you’ve sought to curtail for dozens of other reasons for decades than to actually do anything to try to adapt to the problem.

    Honestly, you people remind me of my ex-wife.

  124. Dave B
    Posted Dec 1, 2006 at 5:01 PM | Permalink

    lee, i tried to post this before, but got caught in the filter.

    re: greenland.

    feces contain anaerobic bacteria which will break them down regardless of oxygen. given sufficient cold, however, this process is halted. finding 500-year-old smelly feces is pretty good evidence of a freezing event, rather than a warm “burying in sand” event.

  125. Willis Eschenbach
    Posted Dec 1, 2006 at 5:32 PM | Permalink

    Lee, I appreciate your post above. However, it appears you did not read mine. You say:

    Willis, it looks like here you simply took starting and ending values for 1958 and 2005, and calculated linear trends using that? If you took the average annual change and multiply by the number of years, that is what you did.

    By including the single year 1958 (and also the single year 2005) as endpoints for calculating a linear trend over that period, you again make your analysis sensitive to the single year variation of the starting point – and now the end point as well.

    These are not simple subtraction of endpoints. They are trend analyses, in particular ordinary least squares linear regression (OLS) trends, and Theil-Sen trend analysis (TSA) trends. As I said in the post you are objecting to:

    PS – I have looked at the Theil-Sen trend estimator, as well as the OLS estimator of the trend. The TS estimator is more robust and resistant to endpoint choices and outliers. The two are identical to the nearest tenth of a degree (in 48 years), so the figures shown above are for both estimators.

    You then say:

    Again, Hansen had 50 years of 1958 data to use to determine the 1958 starting point. This isn’t perfect, Hansen quantified and reported the potential error, as you note above. Saying that there is no perfect way to arrive at that starting value, and then substituting a method with a potentially larger and inherently unquantifiable error, is NOT an improvement on what he did. Why are you insisting on throwing out 98% of that data?

    Since you say that the potential error is “quantifiable”, perhaps you could satisfy my curiosity as to what you mean by “error.”

    But that is curiosity only, because regarding the larger issues, why are you back to the question of the starting point? You yourself said the issue is the trend, and I agree. The 50 years of 1958 data contributes nothing to the trend, it does not change the trend at all. Our disagreements about the starting point do not affect the trend. You ask for trend, I calculate the trend, and now you’re back to the starting point question. I don’t follow that.

    In any case, having clarified that the trends are in fact robust trends and not “endpoint-startpoint”, can we now agree that the three scenarios that Hansen thought would bracket the observational data did not do so, that he missed the target entirely, and thus that Hansen’s claims of skill for his model are not justified?

    All the best,

    w.
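Since the disagreement turns on what a “trend” is, the two estimators Willis names can be sketched in a few lines of pure Python (a sketch only; nothing below is Willis’s or Hansen’s actual code, and the data arguments are whatever digitized series the reader has to hand):

```python
# Two trend estimators: ordinary least squares (OLS) and Theil-Sen
# (the median of all pairwise slopes).

def ols_slope(x, y):
    """Least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def theil_sen_slope(x, y):
    """Median of all pairwise slopes; robust to outliers and to the
    choice of start and end points."""
    slopes = sorted((y[j] - y[i]) / (x[j] - x[i])
                    for i in range(len(x)) for j in range(i + 1, len(x)))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
```

On clean linear data the two estimators agree exactly; they diverge only when outliers or endpoint excursions pull the least-squares fit around, which is why Theil-Sen serves as the more robust cross-check.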

  126. Lee
    Posted Dec 1, 2006 at 8:14 PM | Permalink

    Quickly
    123 – Mikel. Unless one assumes an instantaneous response to the final temperature for a given CO2 increase, one would not expect to see all the result yet. There are time lags in the system – warming of the oceans especially – and we are not yet at the equilibrium temperature increase for a given increase in forcing. This is really basic to discussions of finding the sensitivity to a given forcing, and I have seen it widely discussed in a lot of places.

    124 – Patrick. There is no evidence of which I am aware that a glacier “pushed” that sand onto the farm. The sand very likely originated from up-canyon glacial causes. That is a much different thing from being ‘pushed by a glacier onto the farm.’
    And would you PLEASE not assign to me ideas when you have not one damn bit of evidence that I hold them!

    125. Dave, that appears to have been a working farm when it was covered. However, even if it were first abandoned to cold and then covered with sand, that does not change my argument. I am not disputing that there was a MWP or a LIA. I am disputing Monckton’s absurd claim (one of several laughably false claims he made) that “The Viking agricultural settlements remain covered by permafrost to this day.”

  127. bender
    Posted Dec 1, 2006 at 10:35 PM | Permalink

    Re #127 No reply to Willis? How much time does it take to admit a mistake?

  128. Lee
    Posted Dec 1, 2006 at 11:14 PM | Permalink

    Willis, let me remind everyone what the actual data looks like. This is from Pielke’s site. I’m not quite sure what you’ve done to get an annual trend 33% higher in scenario B – and I don’t much care about scenario A, but 100% higher? – but it damn sure doesn’t match what the graph shows.

    bender – ohrr nee.

  129. Dave Dardinger
    Posted Dec 1, 2006 at 11:42 PM | Permalink

    re: # 129 Lee,

    Where’s the problem? Look at where the black and red lines start in 1958: the red is .12-.13 deg lower. If you make them coincide, then in 2004 (which I take to be the comparison year, as the line for 2005 is apparently estimated) you will see that black would be just below .5 deg C while red would be almost to 1.0 deg, about twice what black has. Yes, we know you’ve complained about aligning the starting points, but Willis has explained what he did, and unless you’re willing to do the math yourself, you really can’t use the graph itself to complain.

    BTW, this logic also explains the one third higher in scenario B. Align and look at 2004.

  130. Willis Eschenbach
    Posted Dec 1, 2006 at 11:48 PM | Permalink

    Lee, the trend is 100% higher (0.6° over the period of record, vs 1.2° over the period of record). If you have different results, please post them.

    This is obscured in the graphic you show above, because of the difference in the starting points of the various scenarios (which we have discussed before), and because the trend lines are not shown.

    Finally, I haven’t seen that incarnation of the graph before. It is curious, because Scenario “C” starts in the middle of nowhere in 1990. Something is odd in that graph, because Scenario “C” started in 1958 like all the rest, and began to deviate from Scenario “B” in about 1978.

    w.

  131. Loki on the run
    Posted Dec 2, 2006 at 12:18 AM | Permalink

    Here is the latest version of that graph (from PNAS)

    It looks to me like Hansen zeroed it at 1970 … It is also interesting that the two sets of observations correlate very well.

  132. John Reid
    Posted Dec 2, 2006 at 12:29 AM | Permalink

    Re 98 Lee

    So, John Reid – are you arguing that solar variations DON’T cause climate change?

    No. Why would you think I was?

    Re #102 Steve Bloom

    If Reid would have googled that paper before posting about it, he would have found that the authors have abandoned the idea.

    I didn’t pick up on that. Can you quote a reference?

    My point is not that Muller and MacDonald are correct but that Milankovitch is wrong. This paper shows it to be wrong because it does not even predict the right period.

    Re #102 Steve Bloom amd #109 Ferdinand

    There’s also the fact that the implications of inclination for climate sensitivity are no different than those of obliquity.

    Not so. The idea is that as the earth moves away from the plane of the ecliptic more interplanetary debris is encountered which cuts down the amount of solar radiation. There is some experimental support for this, e.g.

    A 100,000-Year Periodicity in the Accretion Rate of Interplanetary Dust.
    S. J. Kortenkamp and S. F. Dermott (1998)
    Science 280, 874-876

    I am not overly enthusiastic about this theory myself. All I am saying is that the jury is still out on the causes of ice ages. The regularity of the cycle suggests an astronomical mechanism but the sawtooth shape suggests that something else is going on.

    Re #123 Mikel Mariñelarena

    A nice argument. The evidence seems to point to the grey-body Stefan’s Law calculation of about 1 degC for CO2 doubling being about right. Ain’t physics wonderful.

    JR

  133. Lee
    Posted Dec 2, 2006 at 12:47 AM | Permalink

    Dardinger – willis just said he didn’t simply subtract end points. Your explanation brings us right back to the calibration and end point error issue.

    willis, I don’t have the data, or a statistical package installed. I do see two curves that cross each other 17 times in the 56 year length of the record, scattered across the entire length of the series, and including within the first two years and the last two years of the data. I see you reporting results rounded or truncated at 0.1, for data reporting differences of a couple tenths. And I see no statistical analysis of whether the differences you report are significant.

  134. Willis Eschenbach
    Posted Dec 2, 2006 at 3:12 AM | Permalink

    Lee, I’m sure you see two curves that cross each other 17 times in the record. I’m not quite convinced, however, that you see 56 years in a record that goes from 1958 to 2005 … however, your mileage may vary. But I gotta ask, Lee … what on earth does this have to do with the trend? Are you sure that you know what a trend is?

    You say you don’t have the data, or a statistical package installed. I will not comment on that.

    I will say that when I started to analyze Hansen’s results, I didn’t have the data either, so I digitized it from the graph. And in general, I don’t use a statistical package, I use Excel for all my analyses.

    To show that you are competent to comment on these matters, how about you get the data where I got it, you do the analysis you requested (don’t forget to adjust for autocorrelation), and you get back to us with the results.

    Because if you can’t do that … why are you wasting our time talking about how many times two lines cross on a graph when we’re discussing a trend?

    w.

    PS – While you are at it, you might do some research on the concept of “significant digits”. Here is a good place to start.

  135. Hans Erren
    Posted Dec 2, 2006 at 4:08 AM | Permalink

    loki, please limit your pictures to 640 pixels width

  136. Jim Barrett
    Posted Dec 2, 2006 at 4:30 AM | Permalink

    2br02b (posting 100 etc): You respond with

    “What a farrago of nonsense …. I shall restate the blindingly obvious”

    and

    “I suggest you sign up for a course in elementary school introductory physics”

    I will not argue with you any more; you are wrong – full stop; interestingly, no one has come to your defence. However, our little discussion nicely illustrates a common fallacy of postings on this site – that climate science can be argued at the level of “elementary school introductory physics” and a “physics class for seven-year-olds” (your later posting 111). The problem we were discussing is certainly not “simple” – it contains subtleties relating to the variation of “g” with position, the non-linearity of the equation of state of seawater and the thermodynamics of the warming and melting of the iceberg. Just picking up on one point, you make it quite clear that one of the things which you think is “blindingly obvious” is that the mixing of seawater with freshwater occurs with ABSOLUTELY NO CHANGE IN VOLUME – this is quite wrong, as any capable student of oceanography will tell you. And, because it is quite wrong, your argument falls in a heap – and, as I showed, there are several other reasons.

  137. MarkR
    Posted Dec 2, 2006 at 5:13 AM | Permalink

    #132 Loki

    It is also interesting that the two sets of observations correlate very well.

    Also makes suspect the claim that latent global warming is being stored in the oceans. No sign of that here, just a lock step of peaks and troughs. No evidence of feedback.

  138. Willis Eschenbach
    Posted Dec 2, 2006 at 7:13 AM | Permalink

    Re 132, Loki, since the “Station” data is a subset of the “Land-Ocean” data (it is the “Land” part), it would only be interesting if the two sets of observations did not correlate well …

    w.

  139. Paul Dennis
    Posted Dec 2, 2006 at 7:37 AM | Permalink

    re #137 Jim, these are all second order effects and insignificant. 2br02b is wrong on this one. The main reason why sea level rises when an iceberg melts is that the density of freshwater is about 1 and the density of sea water about 1.025.

    I think you probably pointed out a while ago that the volume of sea water displaced by the berg is V(berg) x 0.9/1.025. The volume of water produced by the melted berg is V(berg) x 0.9/1. Simply a 2.5% difference between the two exactly as Dardinger pointed out in his thought experiment. Of course when considering the percentage change in volume of the global ocean the rise is insignificant and probably not measurable when a significant floating ice shelf collapses.
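    The arithmetic above can be checked with a short script (a sketch using the rounded densities quoted in this thread – ice 0.9, freshwater 1.0, seawater 1.025 – not precise oceanographic figures):

    ```python
    # Relative densities as quoted above (freshwater = 1); rounded,
    # illustrative values only.
    RHO_ICE, RHO_FRESH, RHO_SEA = 0.9, 1.0, 1.025

    def net_volume_added(v_berg):
        """Net volume the ocean gains when a floating berg of volume v_berg melts."""
        displaced = v_berg * RHO_ICE / RHO_SEA    # seawater displaced while afloat
        meltwater = v_berg * RHO_ICE / RHO_FRESH  # freshwater produced on melting
        return meltwater - displaced

    # The meltwater volume exceeds the displaced volume by 2.5%:
    print(round(net_volume_added(1000.0), 2))  # 21.95 (m^3, for a 1000 m^3 berg)
    ```

    The 21.95 m³ gain per 1000 m³ of berg is exactly the 2.5% ratio (1.025/1.0) between the two volumes computed above.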

    On your point about climate science being answered using elementary school physics. I think there is ample room for the application of fundamental physical principles, simple considerations of mass and energy balance and fluxes, simple box models etc. These are often very intuitive and give great insight into the likely magnitude of effects.

    One of the problems with current GCMs, coupled atmosphere-ocean models etc. is their very complexity. The ability to constantly refine and tune the models, to scale and rescale processes to fit the grid size etc. means they no longer give any real insight into a problem.

    I’m not arguing that our efforts are wasted, just that there is scope for both approaches. I like to see problems tackled at a multitude of levels but suspect that we will ultimately gain more understanding from the lower level models.

    Well that’s my view.

  140. Lee
    Posted Dec 2, 2006 at 10:19 AM | Permalink

    willis, I know about significant digits. I know what a trend is; even though I have clearly stated that I am not qualified to analyze time series, I do know the basics. I also know what a properly supported statistically significant difference is, and you, who claim expertise, have not reported such. I also know that when you go comparing two results that differ by 0.2, sig to tenths, with values derived from measuring points off a noisy data plot, and without reported CIs, and doing this INSTEAD of reporting a valid statistical analysis of whether those two curves are significantly different, I go ‘hmmmm…’

    You have not reported any analysis showing that those curves are statistically significantly different from each other – no statistically valid comparison of two time series to test for significant differences – and you are the person making the claim.

  141. Lee
    Posted Dec 2, 2006 at 10:30 AM | Permalink

    btw – 47 (56 was an obvious typo) from 1958–2004 (not 2005, as in the first posted graph). Stop being an a****** and present a proper statistical analysis supporting your claim that Hansen’s B is different from observed.

  142. Dave Dardinger
    Posted Dec 2, 2006 at 10:32 AM | Permalink

    re: # 138 MarkR,

    Also makes suspect the claim that latent global warming is being stored in the oceans.

    This is a question I wanted to get around to discussing with Lee, but since you’ve stated it clearly I’ll talk about it here. Two things I’ve noticed when looking at SSTs is that they vary a lot, and over a short time period. Meanwhile the temperatures at depth only change slowly. The main reason is that water only moves to deep water at a few points and then generally in cold areas. Therefore it becomes hard to believe that there will ever be much warming of the ocean depths by mixing. This recent “finding” (I put it in quotes because it’s not yet really verified) that a lot of heat was lost from the oceans recently, presumably into space, only highlights the situation. Can we really hold that there’s a huge amount of latent heating that’s going to ocean depths and thus delaying the total AGW signal or is this just an excuse?

    I’m not making any final decision at this point. I’d like arguments both ways and particularly hard data. That’s why I wanted to run it by Lee and see what can be gleaned from a warmer.

  143. Posted Dec 2, 2006 at 11:35 AM | Permalink

    Re #27

    Hi Lee,

    Thanks for your answer. However, it looks like my dilemma will remain unsolved. You imply that the oceans’ thermal inertia (and some other unspecified “time lags”) will be able to multiply several fold in the future the part of observed warming we can plausibly attribute to anthropogenic GHG increases. Apart from this being intuitively hard to believe, a simple observation at how the IPCC TAR models treat ocean heat uptake shows that it cannot be true. For small transient responses, such as the one we’ve witnessed so far, the final climate sensitivity in “equilibrium” varies also very little: http://www.grida.no/climate/ipcc_tar/wg1/356.htm

    In summary, direct evidence seems to speak very strongly against the climate sensitivity range you support. With so much literature on the subject, as you rightly point out, is it really so difficult to tackle Lindzen’s challenge?

  144. Lee
    Posted Dec 2, 2006 at 11:36 AM | Permalink

    Dardinger, IIRC – I don’t have the paper here any more since my old computer went kaput, and the link I can find is to the abstract only – the recent observed cooling (if it is a real cooling, and not heat transport outside the areas with good data coverage) appears similar to an event in the early 1980s. If so, it demonstrates significant previously unrecognized short-term variability in the ocean heat system, and has two computational consequences – it reduces the rate of observed warming over the last few decades (but still above 0), and by increasing the short-term variability it increases the uncertainty in the long-term rate of heating – with the lower bound still remaining above zero.

    They also said that the cooling event extended down to 700m, and likely deeper – if heat can be lost at this magnitude and rate to at least that deep, to me that implies that heat CAN be moved into and out of the oceans readily. I think I remember that they claimed local heat losses equivalent to a surface flux of about 50 W/m2. Since we have observed long-term warming and increased heat content of the oceans, the oceans are (must be) able to store large amounts of heat – because they are doing so. The rates and ‘equilibrium’ heat content are still quite fuzzy, it appears.
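    That ~50 W/m² figure can be put in rough perspective with a back-of-envelope calculation (a sketch only: it assumes the flux is spread uniformly over a 700 m column, and uses approximate seawater properties of 1025 kg/m³ and 4000 J/kg/K):

    ```python
    # How fast would a 700 m water column cool under a sustained 50 W/m^2
    # surface heat loss? Rough, assumed seawater properties.
    RHO = 1025.0      # kg/m^3, seawater density (approximate)
    CP = 4000.0       # J/(kg K), seawater specific heat (approximate)
    FLUX = 50.0       # W/m^2, the local loss quoted above
    DEPTH = 700.0     # m, depth the cooling reportedly reached
    SECONDS_PER_YEAR = 3.156e7

    cooling_rate = FLUX / (RHO * CP * DEPTH) * SECONDS_PER_YEAR  # K per year
    print(round(cooling_rate, 2))  # 0.55 (K/yr, averaged over the column)
    ```

    Roughly half a degree per year over a 700 m column is a large number, which is consistent with the point that heat can evidently move into and out of the upper ocean quickly.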

    One interesting issue is that the satellite-observed sea level increase continued during this putative cooling event. The loss of heat should have created an observable change in that – implying that the heat was transported and not lost, or that there is some compensating increase in fresh water addition to the oceans that overcame the effect of the cooling event, or that the satellites simply aren’t measuring sea level height.

    The authors also mention that the GRACE gravity mapping satellites will soon have enough data to look for deltas in local anomalies in ocean mass, and that this will help to resolve some of these issues.

  145. Posted Dec 2, 2006 at 12:22 PM | Permalink

    re: 132 The Hansen forecasts

    The three scenarios were based on the amount of greenhouse gases.

    “Scenario A has a fast growth rate for greenhouse gases. Scenarios B and C have a moderate growth rate for greenhouse gases until year 2000, after which greenhouse gases stop increasing in Scenario C…”

    Does anyone know the forecasts vs. the actual for the greenhouse gases? IMO, that is the key question. If the greenhouse gases are close to Scenario A, the temp forecast in Scenario B is irrelevant. Have the Scenario A, B and C growth rates been published?

  146. Lee
    Posted Dec 2, 2006 at 12:47 PM | Permalink

    144 –
    yes, many of the feedbacks are likely not yet realized. Remember that many of the feedbacks are temperature dependent – that is, until enough initial warming is realized, they don’t start to kick in, or only a bit. So if the oceans are warming slowly and soaking up a lot of the increased heat – which they have been doing for a few decades now – then many of the amplifying feedbacks are not yet realized.

    The time constants for some of those basic mechanisms, such as ocean temperature increase, are decades at least. And because they are complex and interacting, the only way to approach them is through the models.

    Respectfully, your argument reduces to “We don’t know the time constants for the approach to ‘equilibrium,’ so I’m going to assume they are small and argue from that assumption.” But observational and modeling approaches both claim pretty good evidence that the time constants for ocean heating (as just one example) after delta forcing are large, on the order of decades (plural), so that assumption is not valid.

  147. charles
    Posted Dec 2, 2006 at 1:09 PM | Permalink

    #147

    lee says “then many of the amplifying feedbacks are not yet realized”.

    what are these feedbacks and what will trigger them?

  148. welikerocks
    Posted Dec 2, 2006 at 1:16 PM | Permalink

    Re:124,127
    It is simply mind-boggling that the links to the correct information for the Viking settlement excavation have been provided to Lee more than once, and he still says this to a new commenter:

    “There is no evidence of which i am aware that a glacier “pushed” that sand onto the farm. The sand very likely originated from up-canyon glacial causes.”

    All this while asserting that his expert knowledge, based on what he doesn’t know, is far more substantial than what the linked articles we are quoting to him contain.

    One more time: a climate change from a warm period to a colder one, a glacier advancing, a Viking settlement in between the glacier and the sea. A tremendous melt and sediment influx in summer from the advancing glacier, bitter cold in winter, and constant wind that carries sand too, equals a coastal settlement that is abandoned and covered with sand and ice.

  149. Lee
    Posted Dec 2, 2006 at 2:52 PM | Permalink

    Rocks, I invite you, if JohnA ever restores the dead thread, to review the corrections to your utter falsehoods about that farm. There were not, ever, as you claimed, 400 buildings on that farm – there was one building of 5 rooms. There was not, ever, as you claimed, 5000 people on that farm.

    There are about 400 known ruins encompassing the entire area of viking settlements across all of southern Greenland. There were perhaps up to 5000 people at peak, across all of the viking settlements. This article said exactly this.

    That farm was not covered by a glacier, nor is there any evidence that anything was “pushed onto” that farm by a glacier. Sand resulting from up-valley glacial events, from a glacier somewhere up that valley, ended up covering the farm for reasons that, based on what I’ve read, are not entirely known. Nor am I aware of evidence that the farm was not already covered at the transition from the warm to the cold period – it may have been covered only then, but it looks more likely that it was an active, functioning and well-managed farm up until very shortly before it was covered. The piles of smelly fecal matter strongly imply so. But even if it WAS a direct glacial ‘push,’ that is irrelevant to Monckton’s claim about the state of “The Viking agricultural settlements… to this day.”

    Now, if you are going to make this kind of accusation, would you at the very least first check that you have some basic understanding of what the articles you cite actually say, or even that you can read simple English sentences such as the ones in that article saying that farm had one house of 5 rooms, or that the 400 ruins and 5000 people were for the entire colony, before doing so.

    Can you get it into your thick skull that I’m not disputing that it was warm when the vikings settled, I’m not disputing that it got cold and this was at least in part responsible for the failure of the colony, I’m not disputing that the farm got covered by sand of glacial origin, and I’m not disputing that the sand froze.

    I am disputing specific absurd claims and the arguments being made from them. Such as Monckton’s inclusive claim that “The Viking agricultural settlements remain under permafrost to this day.” And the claim of some here including you that this farm which became unfarmable by virtue of being buried 500 years ago, and is now washed away and no longer exists, is evidence for that inclusive claim about “The.. settlements” (plural) “to this day.”. And I have to say, your claim in that other thread that this farm had 400 buildings and 5000 people is about as absurd a claim as I have ever come across – for you to hurl accusations about other people’s understanding while displaying this kind of utter ignorance and failure to understand even the most basic facts from your own cited article is “mind boggling.”

  150. Lee
    Posted Dec 2, 2006 at 2:53 PM | Permalink

    148 – charles, do your own damn homework – even the most cursory look at the field will elucidate the feedbacks being considered.

  151. Dave B
    Posted Dec 2, 2006 at 3:02 PM | Permalink

    lee re 127

    “Monckton’s absurd claim (one of several laughably false claims he made) that “The Viking agricultural settlements remain covered by permafrost to this day.”

    ok lee, fair enough. but because at least part of one farm melted out only 10 years ago, Monckton’s claim is “absurd”?

    i must disagree. it is more likely that other farms remain.

  152. Lee
    Posted Dec 2, 2006 at 3:15 PM | Permalink

    Dave, point to them. Show them to me.
    There is ample evidence of a couple hundred Viking farm sites that are now farmable. The pictures show them surrounded by pasture or pasturable land, with green grasses growing upon them. Vikings weren’t crop farmers, they were hay and pasture farmers.

    If your argument is “well, I don’t have any evidence, but there might be a farm somewhere that isn’t farmable,” I have to point out that this is simply a baldly unsupported claim. And that it isn’t even a support for Monckton – Monckton’s wording was inclusive – he said ‘THE Viking…” not “some viking…”
    Let’s remember also that Monckton embedded this claim amongst his claims that there were no Andes glaciers during the MWP – we have ice cores spanning that period – or that we know the arctic was ice free because the Chinese navy sailed through there and saw no ice. Right.

    The fact is, the man was spinning a web of disputed, unsupported and simply false evidence, and this is only one part of it.

  153. Loki on the run
    Posted Dec 2, 2006 at 4:18 PM | Permalink

    Lee in 147 says:

    yes, many of the feedbacks are likely not yet realized. Remember that many of the feedbacks are temperature dependent – that is, until enough initial warming is realized, they don’t start to kick in, or only a bit. So if the oceans are warming slowly and soaking up a lot of the increased heat – which they have been doing for a few decades now – then many of the amplifying feedbacks are not yet realized.

    Can you perhaps suggest some mechanisms for these feedbacks?

    It seems to me that Hansen’s initial claims and models made no allowance for the oceans and now you AGWers are simply trying to shoehorn them in in an ad-hoc way.

  154. welikerocks
    Posted Dec 2, 2006 at 5:21 PM | Permalink

    Lee, what is the name of the farm he is talking about? Do you know? All these settlements have names, and they all had a farm, and they only had each other to depend on! One of them is referred to as the Pompeii Under the Sand; those are the facts of its size and population, and sheesh, I hope that I spelled that right. They seem to know a lot about it in two magazines dedicated to science and history. [You are like talking to a wall, it hurts my head to think straight enough to satisfy your maze of replies.]

    Name of farm?

    I believe that in every subject, someone is asking you to name something specifically.

  155. Willis Eschenbach
    Posted Dec 2, 2006 at 6:02 PM | Permalink

    Lee, I invited you above to do the homework and tell me about the trends. You have responded:

    willis, I know about significant digits. I know what a trend is; even though I have clearly stated that I am not qualified to analyze time series, I do know the basics. I also know what a properly supported statistically significant difference is, and you, who claim expertise, have not reported such. I also know that when you go comparing two results that differ by 0.2, sig to tenths, with values derived from measuring points off a noisy data plot, and without reported CIs, and doing this INSTEAD of reporting a valid statistical analysis of whether those two curves are significantly different, I go ‘hmmmm…’

    You have not reported any analysis showing that those curves are statistically significantly different from each other – no statistically valid comparison of two time series to test for significant differences – and you are the person making the claim.

    Curiously, you follow your post by telling Charles to “do your own damn homework” when he asks you a question, yet you won’t do yours … so I suppose I’ll have to do it for you.

    Now, Hansen has made the claim that his models are skillful at forecasting future temperatures, and he has offered the graph as a “proof” of that. Do you see confidence intervals on his graph? Have you asked Hansen why there are no confidence intervals? If you do, you might find his reply surprising. It seems that modelers believe that models do not have confidence intervals, (presumably because they are not stochastic processes), and that the only way to get a confidence interval is to run the model several times and take the range of the model results as the confidence interval. For example, this is the justification for using the “0.85 W/m2 ±0.15W/m2” as his result in the “smoking gun” paper.

    Is this true? I don’t know. It seems a bit self-serving to me. But if that is the case, then the differences between the data (95% CI on 48 year trend is ±0.06°C) and the model results is significant for the difference between the A and B scenarios and both instrumental datasets, and is significant for the C scenario vs GISS data but not HadCRUT data.

    However, we must bear in mind that the “C” scenario is entirely unrealistic, as it assumes no growth in CO2 after the year 2000. This is why the scenario C trend results are (barely) significant, but are not meaningful. If the CO2 growth rates in the “C” scenario were continued past 2000, the difference in the trends would be significant.

    Next, let’s assume that the model results do have confidence intervals. What do we find in that case?

    We find that the model results are much more auto-correlated than the instrumental data (model average lag-1 autocorrelation 0.93, data average 0.77). Now, this should tell us something about the models and their ability to reproduce the climate metrics … but I digress. The problem is that with the autocorrelation that high, it is not possible to calculate the confidence intervals for the model results, as there are too few degrees of freedom.

    Finally, we can look at whether there is a significant correlation between the data and the model results. The answer is unequivocally no (p = 0.25, 0.21, and 0.14 for the A, B, and C scenarios vs the data respectively, a long way from significant). The model results, in other words, do a very poor job of approximating the real world.

    I’m sorry that there is no definitive answer to your question about confidence intervals. That’s the reason I left the CI’s out of my previous analysis, it’s not an oversight or a devious plot on my part, it is that either the CIs do not exist for the model results, or that they cannot be calculated.

    I had hoped that by encouraging you to do this homework, you would be forced to take a look at some of the intricacies of trend analysis, and would discover these things for yourself. Then, you couldn’t argue about the conclusions.

    The basic fact still stands, however. Hansen picked a high, middle, and low estimate of the future forcings, saying that these would bracket the actual results. They have not done so, and in addition, none of the model forecasts have a significant correlation with the data.
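    For anyone who wants to reproduce this kind of check, here is a minimal sketch (Python/NumPy, not the spreadsheet analysis described above) of a trend fit whose confidence interval is widened by the standard first-order “effective N” correction, n_eff = n(1−r)/(1+r), where r is the lag-1 autocorrelation of the residuals:

    ```python
    import numpy as np

    def lag1_autocorr(x):
        """Lag-1 autocorrelation of a 1-D series."""
        x = np.asarray(x, dtype=float)
        d = x - x.mean()
        return float(d[:-1] @ d[1:] / (d @ d))

    def trend_with_ci(y, z=1.96):
        """OLS trend per time step, with a ~95% CI inflated for lag-1
        autocorrelation in the residuals via n_eff = n*(1-r)/(1+r)."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        t = np.arange(n, dtype=float)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r = lag1_autocorr(resid)
        n_eff = n * (1.0 - r) / (1.0 + r)
        if n_eff < 6:
            # Too few effective degrees of freedom for a reliable interval.
            raise ValueError("effective N below ~6; CI would be unreliable")
        s2 = (resid @ resid) / (n_eff - 2.0)
        half_width = z * np.sqrt(s2 / ((t - t.mean()) @ (t - t.mean())))
        return slope, float(half_width)
    ```

    With a lag-1 autocorrelation around 0.93, as reported for the model series, n_eff falls below 6 and the function refuses to report an interval – which is exactly the too-few-degrees-of-freedom problem described above.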

    w.

  156. charles
    Posted Dec 2, 2006 at 6:25 PM | Permalink

    #151

    Lee why don’t you give me one example of a postulated feedback that has a trigger. That is the feedback has not been in effect in the past but it will be in the future.

    I’m not aware of any. If Lee is unwilling to provide one can anyone? What is Lee referring to?

  157. Loki on the run
    Posted Dec 2, 2006 at 6:36 PM | Permalink

    In Evaluation of clear-sky solar fluxes in GCMs participating in AMIP and IPCC-AR4 from a surface perspective the abstract says, in part:

    Solar fluxes at the Earth’s surface calculated in General Circulation Models (GCMs) contain large uncertainties, not only in the presence of clouds but, as shown here, even under clear-sky (i.e., cloud-free) conditions. Adequate observations to constrain the uncertainties in these clear-sky fluxes have long been missing. The present study provides newly derived observational clear-sky climatologies at worldwide distributed anchor sites with high-accuracy measurements from the Baseline Surface Radiation Network (BSRN) and the Atmospheric Radiation Measurement Program (ARM). These data are used to systematically assess the performance of a total of 36 GCMs with respect to their surface solar clear-sky fluxes. These models represent almost 2 decades of model development, from the atmospheric model intercomparison projects AMIP I and AMIP II to the state of the art models participating in the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4). Results show that earlier model versions tend to largely overestimate the surface insolation under cloud-free conditions. This identifies an overly transparent cloud-free atmosphere as a key error source for the excessive surface insolation in GCMs noted in previous studies. Possible origins are an underestimated water vapor absorption and a lack of adequate aerosol forcing. Similar biases remain in a number of current models with comparatively low atmospheric clear-sky solar absorption (around 60 Wm−2 in the global mean).

    So, it would seem that the models have lots of uncertainty … It goes on to say:

    However, there are now several models participating in IPCC-AR4 with higher atmospheric clear-sky absorption (70 Wm−2 and up, globally averaged) and more realistic aerosol treatment, which are in excellent agreement with the newly derived observational clear-sky climatologies. This underlines the progress made in radiative transfer modeling as well as in the observation and diagnosis of solar radiation under cloudless atmospheres and puts the most likely value of solar radiation absorbed in the cloud-free atmosphere slightly above 70 Wm−2.

    Perhaps someone can find out if Hansen’s models are more realistic, and if so, when the changes was made?

  158. Willis Eschenbach
    Posted Dec 3, 2006 at 5:59 AM | Permalink

    Loki, you ask above whether the GISS models are more realistic with respect to clear-sky absorption. A look at the GISS ModelE analysis paper reveals that no figure is given for clear-sky absorption.

    Whether the clear-sky absorption is correct or not is hard to determine, because the amount of cloud cover is systematically underestimated by the GISS model. The three versions of the model considered (F20, M20, and M23) give an average cloud cover which is 13% too low worldwide. This huge error has to be made up somewhere, but it is not clear where.

    w.

  159. MarkR
    Posted Dec 3, 2006 at 6:36 AM | Permalink

    Lee

    You could look for evidence of amplified feedbacks here.

  160. Loki on the run
    Posted Dec 3, 2006 at 12:54 PM | Permalink

    Re: 159.

    So, given that the models underestimate cloud cover by 13% and they mishandle clear-sky radiative forcing by some 10 Wm-2 and all the feedback mechanisms Lee tells us about, Hansen’s graphs could be off by 50-100% or more.

    Personally, I like this graph:

    The correlations look so much more robust.

  161. Lee
    Posted Dec 10, 2006 at 8:42 PM | Permalink

    re 156:
    Willis – a trend calculated through a noisy time series WILL have a confidence interval. You are claiming that the trend you calculate through the scenario B time series values is different by 33% from the trend calculated through the actual temperature record. This is YOUR claim. The lines you fit to that data DO have confidence intervals, they are relevant to YOUR claim that the two trends differ by enough to invalidate claims about the Hansen model, and you are so far failing to support that claim with any kind of valid statistical comparison of the two trends you report. I’m not asking for confidence intervals for the model results – I’m asking for confidence intervals for the calculation you are making and that you claim shows a difference in the trends.

    THAT claim is subject to statistical evaluation – are the calculated trends of those two lines different from each other? You claim that “B” is 33% greater than observed, based on the data here. That claim is subject to a statistical analysis. It is interesting that you go to so much trouble to avoid doing that analysis.

    You have not previously claimed that there isn’t sufficient data to say that “B” and the observed temps are the same – you claimed that they are 33% different and therefore Hansen was wrong. Your latest reply is a nice attempt to divert the question to whether or not those two time series can be said to be statistically distinguishable – but that is irrelevant to what you claimed. You claimed that your analysis shows that Hansen’s model result was simply wrong – by 33% – and that is a claim which you have NOT supported adequately.

  162. Lee
    Posted Dec 10, 2006 at 8:47 PM | Permalink

    157 – charles, where on earth did I claim that there are feedbacks which have triggers and will only occur in the future?

    The ocean is warming now – that warming will cause further effects as it happens- and the warming will continue for some time to come, simply to come to equilibrium with the increased forcing already in effect. that isn’t a trigger – it is a process with a time constant measured in decades.

  163. charles
    Posted Dec 10, 2006 at 9:56 PM | Permalink

    #163

    lee see 147

    “yes, many fo the feedbacksa are liekly not yeat realized. Remember that many fo the feedbacks are temeprature dependent – taht is until enough initlial warmign is realized, they dotn start to kick in, or only a bit.”

    ???????

  164. charles
    Posted Dec 10, 2006 at 10:12 PM | Permalink

    163

    Lee, are not seasonal ocean temp variations an order of magnitude larger than the rise due to gw? Seems to me we should be able to observe any feedbacks past or future now in the seasonal data.

  165. Willis Eschenbach
    Posted Dec 11, 2006 at 1:19 AM | Permalink

    Steve B., you comment above that:

    Willis – a trend calculated through a noisy time series WILL have a confidence interval.

    This is not true. You have neglected the effect of autocorrelation.

    Autocorrelation is adjusted for by calculating an “effective N”, a reduced count of the dataset which allows for the fact that the data series is autocorrelated. The problem arises when the number of degrees of freedom becomes too small, due to high autocorrelation as in the model results.

    In that situation, the number of degrees of freedom is too small to calculate a confidence interval. To wit:

    Nychka et al.’s Monte Carlo simulations show that this gives reasonably accurate and unbiased uncertainty estimates for trend analyses. They also note that when neff [effective N] is less than 6, estimates of uncertainties and confidence intervals are likely to be unreliable even if equation (22) is used.

    In other words, we can’t reliably calculate the uncertainty of a trend if effective N is too small. This happens when the autocorrelation is too high given the length of the dataset, as in the present case.
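    To see how severe this is with the numbers quoted earlier in the thread (a sketch assuming the simple first-order correction n_eff = n(1−r)/(1+r), with n = 48 annual values and the lag-1 autocorrelations reported above):

    ```python
    # Effective sample size under the first-order autocorrelation correction.
    def n_eff(n, r):
        return n * (1.0 - r) / (1.0 + r)

    print(round(n_eff(48, 0.77), 1))  # instrumental data: 6.2 -- marginal
    print(round(n_eff(48, 0.93), 1))  # model output:      1.7 -- far below 6
    ```

    At r = 0.93, 48 years of model output carry fewer than two effective degrees of freedom, well under the Nychka et al. reliability floor of 6.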

    w.

  166. Lee
    Posted Dec 11, 2006 at 11:23 AM | Permalink

    Willis, that does not say one can’t calculate a confidence interval. It says it is likely to be unreliable – an important but different point.

    You have claimed that the two trends are different by 33%, and therefore that Hansen’s model is wrong. Now you point out that one can’t put a reliable confidence interval on those trends to do a statistical comparison between the two trends. This second point invalidates your first point. If one can’t reliably compare the two trends, then one can’t make claims about whether they are different, as you did.

  167. Lee
    Posted Dec 11, 2006 at 1:08 PM | Permalink

    charles –
    thanks – that was badly worded. Let’s try this.

    Let us assume for the sake of argument that carbon release from thawing permafrost is a positive feedback. Clearly, such carbon release is going to be dependent on temperatures warming enough to thaw the currently-frozen carbon stores.

    But this is not going to be an all-or-none thaw – there might be a ‘trigger’ at any given location as temperatures exceed the threshold for thawing, but the spatial distribution of permafrost means that there is no overall ‘trigger.’ As temperatures warm a little, there will be thawing along the margins, and perhaps slightly deeper summer thaw, causing some slight carbon release. As temperatures warm more, there will be more carbon release.

    If the oceans are currently capturing the increased heat content due to increased forcing, and therefore delaying the eventual “equilibrium” temperature increase due to that forcing, then this kind of feedback will be delayed to an extent determined in part by that lag in warming.
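    The point about spatially distributed thresholds can be illustrated with a toy script (purely illustrative: the threshold distribution is invented for the sketch and this is in no sense a permafrost model). Each cell thaws sharply at its own threshold, yet the aggregate response is smooth:

    ```python
    import numpy as np

    # Invented per-cell thaw thresholds (deg C of warming needed); their
    # spread across space is what smooths the aggregate response.
    rng = np.random.default_rng(0)
    thresholds = rng.normal(loc=2.0, scale=1.0, size=100_000)

    def fraction_thawed(warming):
        """Fraction of cells whose local threshold has been exceeded."""
        return float(np.mean(thresholds <= warming))

    for w in (0.5, 1.0, 2.0, 3.0):
        print(f"{w:.1f} C warming -> {fraction_thawed(w):.0%} of cells thawed")
    ```

    Despite each individual cell having a hard trigger, the printed fractions rise gradually with warming, with no step anywhere in the aggregate curve.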

  168. charles
    Posted Dec 11, 2006 at 1:22 PM | Permalink

    168

    Lee, it seems to me you are saying that we should be able to see any and all feedbacks in the historical record. Thus feedback assumptions going forward should be the same as those observed in the past.

    My understanding is that forward looking models assume positive feedbacks will increase. Do I misunderstand?

  169. Mark T
    Posted Dec 11, 2006 at 2:29 PM | Permalink

    Willis, that does not say one cant calculate a confidence interval. It says it is likely to be unreliable – an important but different point.

    Not a different point at all. Having an unreliable confidence interval is no different than no confidence interval. I.e., certainly you can _attempt_ to calculate a confidence interval in such cases, but the result is not a confidence interval, it is merely a few numbers thrown down on paper no different than a percentile dice throw. That’s exactly what Willis’ statement means, and you trying split hairs over that is rather silly.

    Mark

  170. jae
    Posted Dec 11, 2006 at 2:48 PM | Permalink

    Hope everyone noticed the figure in 161.

  171. Lee
    Posted Dec 11, 2006 at 2:55 PM | Permalink

    MarkT: sure it’s splitting hairs – I thought that was mandatory here.
    Fact is, that was an irrelevant response to willis’ irrelevant remark.

    The important thing is that willis is claiming that there is a difference between those two trends, and he just detailed for us why there is no statistical justification for his claim. He wasn’t splitting hairs – he was making unsupportable claims, and he just told us that he knows why they are unsupportable.

  172. Lee
    Posted Dec 11, 2006 at 2:59 PM | Permalink

    jae – a correlation between the LENGTH of solar magnetic cycle, in years for each given year, and the temperature deviation for that year?

    What on earth does that mean?

  173. Mark T
    Posted Dec 11, 2006 at 3:22 PM | Permalink

    Sorry Lee, but you’re just plain wrong. If the number you calculate is not a confidence interval,
    then it is safe to say you cannot calculate a confidence interval. I.e., your attempt to split
    hairs is incorrect, and testament to a lack of understanding of what Willis’ statement means.

    Mathematically speaking, the statement “you cannot calculate a confidence interval” simply means
    “any attempt at calculating a confidence interval yields bogus numbers.” Get with the program.

    Mark

  174. James Erlandson
    Posted Dec 11, 2006 at 3:27 PM | Permalink

    Re 168 Lee: An interesting logic exercise. You said …

    Let us assume for the sake of argument that carbon release from thawing permafrost is a positive feedback. Clearly, such carbon release is going to be dependent on temperatures warming enough to thaw the currently-frozen carbon stores.

    Let us assume for the sake of argument that heat release to space by tropical storms is a negative feedback. Clearly, such heat release is going to be dependent on sea surface temperatures warming enough to generate more storms.

    But this is not going to be an all-or-none thaw – there might be a ‘trigger’ at any given location as temperatures exceed the threshold for thawing, but the spatial distribution of permafrost means that there is no overall ‘trigger.’ As temperatures warm a little, there will be thawing along the margins, and perhaps slightly deeper summer thaw, causing some slight carbon release. As temperatures warm more, there will be more carbon release.

    But this is not going to be an all-or-none release of heat – there might be a trigger at any given location as sea surface temperatures exceed the threshold for generating tropical storms, but the spatial distribution of sea surface temperatures means that there is no overall trigger. As temperatures warm a little, there will be a few more storms, and perhaps more intense storms, causing some heat release. As temperatures warm more, there will be more heat release.

    If the oceans are currently capturing the increased heat content due to increased forcing, and therefore delaying the eventual “equilibrium” temperature increase due to that forcing, then this kind of feedback will be delayed to an extent determined in part by that lag in warming.

    If the oceans are currently capturing and releasing to space the increased heat content due to increased forcing, and therefore reducing the eventual “equilibrium” temperature increase due to that forcing, then this kind of feedback (melting permafrost) will be delayed forever.

    A physicist, a chemist and an economist are stranded on an island, with nothing to eat. A can of soup washes ashore. The physicist says, “Let’s smash the can open with a rock.” The chemist says, “Let’s build a fire and heat the can first.” The economist says, “Let’s assume that we have a can-opener…”

    Economist Jokes

  175. Lee
    Posted Dec 11, 2006 at 3:35 PM | Permalink

    MarkT – it is interesting that you aren’t taking on willis’ claim about the two trends.

    But back to the admittedly irrelevant issue upon which you choose to focus – look at the quote that willis presented:

    “when neff [effective N] is less than 6, estimates of uncertainties and confidence intervals are likely to be unreliable even if equation (22) is used.”
    First, willis doesn’t report Neff, so even this claim of his is unsupported. The authors say the CIs are likely to be unreliable – not that they are “bogus.” And finally, as I frickin’ said, what it comes down to is that for Neff less than 6 (if that is the value these two calculations yield), by the statement willis quoted, one cannot rely on the CIs – so willis’ claim that the two trends are clearly different would be unsupported.
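
For reference, the “effective N” being argued over here is commonly estimated from the lag-1 autocorrelation of the series. A minimal sketch, assuming the standard AR(1) adjustment (not necessarily the exact method of the paper Willis quoted):

```python
import numpy as np

def effective_n(series):
    """Effective sample size under an AR(1) assumption:
    Neff = N * (1 - r1) / (1 + r1), where r1 is the lag-1
    autocorrelation of the (demeaned) series."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1.0 - r1) / (1.0 + r1)

# A smooth, trend-dominated series of 48 annual values is highly
# autocorrelated, so it carries far fewer than 48 independent
# degrees of freedom -- the situation the "Neff < 6" warning targets.
neff = effective_n(0.008 * np.arange(48.0))
```

A near-linear 48-point series like this yields an Neff well under 6, which is why the reliability of any CI computed from it is in question.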

  176. Lee
    Posted Dec 11, 2006 at 3:39 PM | Permalink

    re 175:

    Sure – another illustration of potential time lags in feedbacks to forcing, which of course apply to both positive and negative feedbacks with any time lag.

    Identification of specific positive and negative feedbacks – or, more relevantly, the integrated result of all such feedbacks – is a different question from the one I think charles was asking.

  177. jae
    Posted Dec 11, 2006 at 4:01 PM | Permalink

    173: Read this article.

  178. jae
    Posted Dec 11, 2006 at 4:10 PM | Permalink

    173: Veizer’s paper provides a very plausible mechanism for temperatures to vary with sunspot activity (solar wind/cosmic rays, etc.), but he didn’t use the best variable for correlating observed temperatures with solar activity. That variable is solar cycle length. It makes sense that the Earth will cool less between cycles that are close together than it will when cycles are long. It seems reasonable that a lot of heat is stored (probably in the oceans) when the sun is most active.

  179. jae
    Posted Dec 11, 2006 at 4:14 PM | Permalink

    Also, I think it’s now possible to predict the next 40 or so years. We just finished Cycle 23. Cycle 24 is expected to be similar to Cycle 23, but Cycle 25 is expected to be weak and longer. Hence, I would hazard to “predict” that we will have a slight cooling for a few years, then more warming, then some serious cooling. Hopefully not another LIA.

  180. Mark T
    Posted Dec 11, 2006 at 4:16 PM | Permalink

    MarkT – it is interesting that you aren’t taking on willis’ claim about the two trends.

    Actually, I was taking on your claim that

    Willis, that does not say one can’t calculate a confidence interval. It says it is likely to be unreliable – an important but different point.

    Which is incorrect on its face, regardless of any claim about the two trends made by Willis.

    Then, after calling it “an important but different point,” you follow it up by stating it is irrelevant. Make up your mind.

    And, now as a second follow-up, you state

    The authors say the CIs are likely to be unreliable – not that they are “bogus.”

    An unreliable confidence interval is bogus, i.e. it has no meaning, no matter how you want to spin it.

    Keep on banging that drum, Lee. You’re not quite as good at disinformation as Steve B., however, I’ll admit.

    Mark

  181. jae
    Posted Dec 11, 2006 at 4:32 PM | Permalink

    175:

    Let us assume for the sake of argument that heat release to space by tropical storms is a negative feedback. Clearly, such heat release is going to be dependent on sea surface temperatures warming enough to generate more storms.

    Very good assumption, IMO.

  182. Lee
    Posted Dec 11, 2006 at 5:02 PM | Permalink

    MarkT – I SAID that the confidence interval is not useful – assuming, as willis implies (but doesn’t support or report), that the Neff is below the cutoff he reports from that author. I said it to willis, and I said it to you. Assuming that the Neff is less than 6, as willis implies, means that calculated CIs are suspect – which is what willis said.

    What is irrelevant is whether the CI is incalculable or unreliable if calculated – either way, it can’t be used to make a statistical claim.

    What IS relevant is that willis is making a claim that those two trends he calculated are different by 33%, and yet he also reports (or at least implies, given that he didn’t report the Neff) that one can’t derive valid statistical uncertainties for those two trends. IOW, willis just explained to us why his claim about the difference between those two curves can’t be supported.

    Unless, of course, the Neff is greater than 6, in which case there is a different set of issues.

  183. Willis Eschenbach
    Posted Dec 11, 2006 at 5:10 PM | Permalink

    Jae, you say:

    Hope everyone noticed the figure in 161.

    I did notice it, but because there was no information as to where it came from, I fear that I ignored it …

    w.

  184. Lee
    Posted Dec 11, 2006 at 5:13 PM | Permalink

    jae, your link in 178 leads to an irrelevant page.

    In 179, the Veizer paper: I would invite you to look at figure 14 on page 10, and especially to note the dramatic breakdown of that correlation between “11 year average temperature” delta Tc and “Solar Cycle Length” (one assumes an 11-year average for this as well) for the most recent years. These most recent years are conspicuously absent from the version of the graph presented in post 161.

  185. Willis Eschenbach
    Posted Dec 11, 2006 at 5:29 PM | Permalink

    It seems that some people have lost sight of the original question, and are getting bogged down in specifics. Let me review the bidding regarding the trends in Hansen’s figure.

    1) Hansen claimed that his figure of the forecasts made in 1988 showed that his model was skillful in forecasting future temperature trends.

    2) I pointed out that the trends for all three of his forecasts (Scenarios A, B, and C) were above the actual trend.

    3) Steve B. asked if the difference in trends was significant.

    4) I replied that this depends on whether the trends of the model results have a confidence interval (CI). The modelers say that they do not, because they are not the result of a stochastic process. If that is the case, then the difference between the actual trend and Scenarios A and B is significant, and the difference regarding Scenario C is not quite significant. I also pointed out that this is because Scenario C used a CO2 trend that went flat in 2000, otherwise that difference would have been significant as well.

    5) I also noted that because of high autocorrelation, if the model results do have a CI, we cannot calculate reliably what it is. To me, this is the same as not having a CI, but the terminology is not material to the question. The point is that if we cannot calculate a CI, then Hansen’s claim is false.

    We are thus left with two possibilities. Either:

    a) Model results do not have a CI, the difference in trends is significant, and Hansen’s claim is false, or

    b) Model results do have a CI, that CI cannot be reliably calculated, and Hansen’s claim is false.

    w.
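
The test described in point 4 can be sketched numerically. The numbers below are synthetic stand-ins, not Hansen’s actual series: fit OLS to the “observed” data, form a 2-sigma interval on its slope, and ask whether a CI-free model trend falls outside it.

```python
import numpy as np

def ols_slope_ci(y, nsig=2.0):
    """OLS slope of a series against time, plus a +/- nsig
    standard-error half-width (naive: no autocorrelation
    adjustment, which is the separate Neff issue)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return slope, nsig * se

# Synthetic example: 48 "observed" values with a true slope of
# 0.006/yr, compared against a hypothetical model trend of 0.008/yr.
rng = np.random.default_rng(0)
obs = 0.006 * np.arange(48) + rng.normal(0.0, 0.02, 48)
slope, half = ols_slope_ci(obs)
model_trend = 0.008
significant = abs(model_trend - slope) > half
```

Under possibility (a), `significant` is the whole test: a model trend with no CI of its own either lands inside the 2-sigma envelope of the data trend or it does not.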

  186. jae
    Posted Dec 11, 2006 at 5:32 PM | Permalink

    178, 184, 185. Sorry for the link goof. Hope this works.

  187. Lee
    Posted Dec 11, 2006 at 5:48 PM | Permalink

    willis –

    the first part is back to your original offsetting of the Hansen graph, in which you use one year of the runup period to align, instead of the average, and thus introduce substantial year-to-year variation into the analysis. I was pressing you on this when you diverted to the trend analysis.

    You then made the simple claim that a trend through one time series was 33% greater than a trend through another time series, and that therefore Hansen was wrong.

    I have now pressed you several times now to show that THE TRENDS YOU CALCULATED THROUGH THE TIMESERIES are statistically distinguishable – IOW, to support YOUR OWN DAMN CLAIM!

    The argument you make here is absurd. Of course one cannot calculate a CI around individual data points in an observation – a CI for a single datum is a null concept. What we have here in the basic time series is the observed output of one run of the model, and the observed real-world temperatures. Those points are what they are – the observed real-world data ALSO has no CI, because each point is a single datum, and it is what it is. But that lack of a CI for each individual datum does NOT apply to the trend calculated through those time series – that DOES have a CI, and a claim that two such trends calculated through the multiple points of those two time series are different is subject to statistical analysis.

    That CI may not be reliable because of insufficient data, or it may be. But the fact is that YOU calculated trends through those time series, that YOU claim they are different by 33% and that therefore Hansen is wrong, and that YOU then claimed that the data on which those trends were calculated – the time series – aren’t sufficient to give reliable CIs. You have now argued that the claim you made – that the trends are different and therefore Hansen is wrong – is not statistically supportable. I have asked you how, then, you can make the claim that those two trends are actually different – and you are avoiding responding to that. Instead you post this stuff implying that because the individual distinct points on those curves have no CIs, the trend line has no CI.

    This is a simple question, willis – are those two trends, the ones you claim differ by 33% – are they statistically distinguishable or not?

  188. Lee
    Posted Dec 11, 2006 at 5:49 PM | Permalink

    btw, that was me, not SteveB. I am not SteveB, SteveB is not me.

  189. Lee
    Posted Dec 11, 2006 at 5:56 PM | Permalink

    btw, before anyone diverts to another meaningless issue: yes, the actual yearly temperatures can have a CI – if one wants to go back and look at measurement errors and sampling from discrete weather stations, and so on. But for this purpose – that time series – there is a single delta-T for each year, and that single value is the one we are working from. Just as, for the model results, each year has a single value output.

    The point is that willis calculated a trend through those reported discrete data, and that trend WILL have a CI, and comparison of two such trends IS subject to statistical analysis. Either they are statistically distinct as willis claimed or they are not – and willis now says the data is insufficient to allow one to say.

  190. Ken Fritsch
    Posted Dec 11, 2006 at 7:01 PM | Permalink

    The point is that willis calculated a trend through those reported discrete data, and that trend WILL have a CI, and comparison of two such trends IS subject to statistical analysis. Either they are statistically distinct as willis claimed or they are not – and willis now says the data is insufficient to allow one to say.

    A great TCO imitation. Best I’ve seen since the real thing (the sober version, that is). Lee, I think most of us appreciate the point that Willis E was making, but, on the other hand, if you have discovered/uncovered some statistical CIs that climate modelers use, please share them with us.

  191. Lee
    Posted Dec 11, 2006 at 7:11 PM | Permalink

    Fritsch, willis claimed that those two trends are different by 33%, and that therefore Hansen must be wrong. He then claimed that the data is insufficient to apply a CI to those trends. Those two statements are not compatible – one or the other must be incorrect. He has not addressed that – instead, he sidetracked to whether there are CIs for data on the time series themselves, which is irrelevant to the claim he made, as he made it.

    So far I see willis throwing away 49 years of calibration data and arbitrarily using only the final year of the 50 years of data. I see him confusing year-to-year ‘weather’ variation with ‘climate’ trends – he did this notably above, where he reported the low correlation of the two time series, when of course one would not expect annual values (weather) to be correlated in a time series where the correlation of interest is the long-term ‘climate.’ And I see him making explicit claims that two derived trend lines from two time series are different, while simultaneously claiming that they are not sufficient for statistical analysis.

    I’m sorry, but given all this, the only point I see willis making is that he is willing to use different rules for his own analyses than he uses when looking at other’s analyses.

  192. Welikerocks
    Posted Dec 11, 2006 at 7:15 PM | Permalink

    How about that report in the news today about cows contributing the most to the greenhouse effect (more than cars do)?
    I thought I’d divert to a meaningless issue – and they are showing this on the evening news tonight for us here in the USA. It’s on the United Nations website, a week or so old, here:

    Cattle-rearing generates more global warming greenhouse gases, as measured in CO2 equivalent, than transportation, and smarter production methods, including improved animal diets to reduce enteric fermentation and consequent methane emissions, are urgently needed, according to a new United Nations report released today.

    Cheers!

  193. Lee
    Posted Dec 11, 2006 at 7:29 PM | Permalink

    rocks – quick: where does all that carbon come from? What is the atmospheric half-life of methane?

  194. Willis Eschenbach
    Posted Dec 11, 2006 at 8:15 PM | Permalink

    Lee, please, please read what I wrote.

    The question is whether Hansen is correct that his models skillfully forecast the future in 1988. Let me attempt to clear up some of your misconceptions surrounding this issue.

    I am not talking about a single data point as you claim. I have patiently explained more than once that I am talking about the trend, I say again, the trend over the 48 year period (1958-2005). This is not the difference between the final data points, or the difference in the change between the start and finish points. As I said in previous posts, this is the annual trend times the length of the period. I even pointed out that I calculated the trend in two ways (OLS and TSA), and that the results were the same, giving me confidence that I had a robust trend estimate … but I guess you didn’t notice that either.

    As to whether the model forecasts have a CI, I’ll go over that again as well. The modelers say no. Therefore, if the forecast trends themselves are outside the 2-sigma CI of the instrumental data trend, they are significantly different. Are you with me so far?

    So we have two choices.

    If the modelers are right, the difference between the trends is significant, and Hansen is wrong because his forecasts are significantly higher than the reality.

    If the modelers are wrong, the CI of the forecasts cannot be reliably calculated, and Hansen is wrong because we can’t conclude anything about the skill of his forecasts.

    Me, I think the modelers are right, the forecasts are not the result of a stochastic process, and thus stand on their own.

    But it doesn’t matter, because either way Hansen’s claim of model skill is incorrect.

    Finally, this has nothing to do with whether I vertically offset the graphic results to start them at the same point, as you seem to think. Adding a constant to a dataset does not change the trend in any way.

    w.
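
The OLS/TSA cross-check mentioned above can be sketched as follows, with Theil-Sen implemented directly as the median of all pairwise slopes (the data here are synthetic, not the actual GISS record):

```python
import numpy as np

def theil_sen_slope(t, y):
    """Theil-Sen trend estimate: the median of all pairwise slopes,
    robust to outliers in a way ordinary least squares is not."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return float(np.median(slopes))

# Synthetic 48-year series with a true slope of 0.006/yr.
rng = np.random.default_rng(1)
t = np.arange(48.0)
y = 0.006 * t + rng.normal(0.0, 0.03, 48)

ols = float(np.polyfit(t, y, 1)[0])   # least-squares slope
tsa = theil_sen_slope(t, y)           # robust slope
```

When the two estimators agree closely, the fitted trend is not being driven by a few outlying years, which is the point of computing both.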

  195. Steve Bloom
    Posted Dec 11, 2006 at 8:31 PM | Permalink

    If Lee is not me
    and I (Steve B) am not he
    then no Willis glee?

  196. John M
    Posted Dec 11, 2006 at 8:32 PM | Permalink

    #194

    Lee, these numbers appear to include significant levels of fossil fuel-derived CO2 for cattle production, as is summarized here.

    “Burning fuel to produce fertiliser to grow feed, to produce meat and to transport it – and clearing vegetation for grazing – produces 9 per cent of all emissions of carbon dioxide, the most common greenhouse gas.”

    The original report goes into great detail about CO2 emissions derived from fossil-fuel based fertilizers and from land-use practices.

    So the methane isn’t simply from a closed carbon cycle, wherein sunlight converts CO2 to animal feed, the animals fart out methane, and the methane gets converted back to CO2. This doesn’t mean we give up meat, but it does mean we need to understand the problem more before we dive into “obvious” solutions for GHGs.

  197. Lee
    Posted Dec 11, 2006 at 8:48 PM | Permalink

    willis, you are confusing two issues. You initially, in your post on this subject, re-aligned the curves to the single value at the end of the 50 years of calibration data. That means that you threw away 49 of the 50 years of that data. This is, as you say, irrelevant to the trend after the initialization – you nonetheless did it, made it part of the basis for that post that SteveM promoted to an article, and argued that therefore the curves didn’t match and therefore Hansen was wrong. That is one of your errors, and you diverted from it way up above in this thread, rather than explain why, for that issue, you feel it is valid to throw away 98% of the initialization data.

    On the (separate issue of) the trend, you should read what I wrote.

    Your calculated trend through the Hansen time series is NOT THE TIME SERIES. No matter what Hansen claims about CIs for the time series itself, that does not change the fact that your calculated trend through that noisy data set has its very own uncertainties, and therefore a calculatable CI. That CI will of course be greater for a noisier data set, and greater for a shorter data series. It is a CI for a trend FITTED TO a finite time series, not for the time series itself. As such, that trend has its very own CI. When you report a trend of “0.8” through that time series, the trend is, MUST BE, 0.8 +/- some value – whether or not you report that value, whether or not the data is sufficient to derive an accurate CI, there is uncertainty in the trend fitted to the time series. You even indicated this when you allowed that the Neff matters – but didn’t report the Neff.

    You claim that the trends of “0.6” and “0.8,” rounded or truncated, with unreported uncertainties are different, with no validation of the claim other than pointing at the numbers and saying that they are different.
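
Lee’s distinction here – that a fitted line has sampling uncertainty even when every data point is exact – can be illustrated with a quick Monte Carlo sketch on synthetic data:

```python
import numpy as np

# Refit the same underlying 0.008/yr trend over many noisy
# realizations. Each individual series is "exact" once generated,
# yet the fitted slope still scatters from run to run -- that
# scatter is the uncertainty of the LINE FIT, not of the points.
rng = np.random.default_rng(2)
t = np.arange(48.0)
slopes = [np.polyfit(t, 0.008 * t + rng.normal(0.0, 0.1, 48), 1)[0]
          for _ in range(500)]
spread = float(np.std(slopes))  # empirical sigma of the fitted slope
```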

  198. Lee
    Posted Dec 11, 2006 at 8:59 PM | Permalink

    197 –

    John, I certainly agree that feedlot livestock production comes with very large environmental costs. But I’d want to see a very carefully detailed accounting before I agree that livestock production produces more greenhouse gases than “all other forms of transport put together.” I suspect that the major driver would be deforestation to create grazing – and assigning causes to such deforestation is very tricky accounting.

    In addition, of course, to properly accounting for carbon cycling rapidly in biologic time – from that article, I can’t tell if they did that or not.

  199. bender
    Posted Dec 11, 2006 at 9:21 PM | Permalink

    Willis,
    Thanks for the periodic summaries such as #195. Your argument is logical, and your summary of the exchange undistorted. Your detractors are inconsistent if not incoherent.

  200. Lee
    Posted Dec 13, 2006 at 5:14 PM | Permalink

    Let’s see if I have this right –

    Because Hansen claims that the output of any single run from the models doesn’t have a CI, our favorite statistician willis can claim that his reported value of a slope of a trend line fit to 48 years of noisy output from a model also doesn’t have a CI.

    willis then further apparently claims that he can therefore simply say that the trends from that fit, with calculated values of 0.6 and 0.8, are different – with NO further statistical analysis reported.

    He can also say that for an Neff less than 6, the CIs aren’t reliable (fair enough) and then, without reporting the Neff (even when asked again), inform us that there aren’t reliable CIs for the trend values of 0.6 and 0.8 he calculates (also fair enough, assuming Neff < 6) – and therefore apparently decide he can simply ignore uncertainty CIs and say the trends are 33% different and therefore Hansen is clearly wrong (uh – what?).

    And then bender chimes in without addressing any of these issues, and declares that willis’ summaries – which detour around all of this – somehow show that willis is right.

    And we’re supposed to take this seriously – why?

  201. Ken Fritsch
    Posted Dec 13, 2006 at 9:44 PM | Permalink

    Lee, I think what you want to see, for some reason, are 2-sigma limits around a trendline (linear??) that has been derived from Hansen’s computer projections using annual results.

    While Willis makes good points, in my mind, about CIs for computer models and the importance of graphic displays showing the trends, I personally think that even more important is the fact that Hansen’s out-of-sample results are much too sparse to draw conclusions from, and he admits that in the link http://pubs.giss.nasa.gov/docs/1998/1998_Hansen_etal_1.pdf .

    My thinking on Hansen is that he is a scientist on one hand and a policy promoter on the other and one must carefully read his publications to distinguish who is doing the writing. He would not be above spinning (but not misrepresenting) data to get the best policy response.
    Roger P Jr made a very valid point in the original Willis E thread http://www.climateaudit.org/?p=796#more-796 about Hansen’s scenarios in that the important facts to know are what went into each of the Hansen’s models — since being right for the wrong reasons is really not better than predicting incorrectly.

    Hansen’s scenarios A, B and C are described in his linked article above, and from it I judge that his predictions are very susceptible to some rather arbitrary assumptions about volcanic activity and magnitude and about carbon-dioxide-equivalency level increases in the atmosphere. It would be better to know how the models react to specific effects and not be given three scenarios that could just about cover all possible temperature outcomes.

    Lee, I think a more productive discussion of Hansen’s models would be to determine in more detail the differences between Hansen’s Scenario s A, B and C. If one looks at Figure 5B in Hansen’s report linked above one sees that the projected climate forcing growth rate and climate forcing as determined by the six major green house gas levels in the atmosphere have not come close to those used in Scenarios A and B and have actually been significantly below Scenario C at the publication date of 1998 and continue on that path into the current year. See the Table 2 in the link here for more current climate forcing rates and totals (relative to the year 1990). Hansen, the scientist, again expresses concern about the uncertainties in projecting GHG levels into the future, never mind the uncertainties involved with out-of-sample testing of the computer model predictions and the readers’ ignorance of exactly what conditions went into the models.

    Obviously Scenario A was a projection of GHG increases from the early 1970s and had an exponential trend, i.e. the real scary scenario that could be quoted in the mainstream media. Scenario B was a linear projection of GHG (CO2 equivalents or climate forcings) from, I assume, 1988, with a very large volcanic eruption included. This was the conservative estimate of climate forcings that turned out to be a large overestimate, which just goes to show how difficult these estimations are even over short periods of time. Scenario C was the Kyoto-in-overdrive teaser to show what regulations and mitigations could accomplish, but it turns out that even this scenario appears to underestimate our current climate forcing. What Hansen, the real scientist, should have done was apply (actually, have applied by an independent body) his computer model (unchanged after 1988) to whatever conditions existed each year and publish those results for comparison to the actual temperatures. I wonder why he did not use this approach.

  202. bender
    Posted Dec 13, 2006 at 10:27 PM | Permalink

    #201
    The problem, Lee, seems to be your superficial understanding of the application of trend statistics to various types of “data” series. When the data are a time-series of observations, you ought to report a confidence interval in order to account for sampling error and noise, the goal being an out-of-sample forecast of future observed values. When the numbers come from a computer model there is no sampling error. The error there stems from the error that accumulates through the GCM calculation. That error is large because all the known parameters are subject to error, many of the parameters are unknown, and many of the processes are only crude approximations. Hansen neglects to consider these errors in his inferences. When Willis claims the modeled trends in the different scenarios are different, this is based on Hansen’s own presumption that the model errors are negligible, or, what is the same thing, not worth discussing.

    You are twisting your logic in a tortured effort to show that Hansen is right and Willis is wrong, when it’s actually the other way around. If you would think objectively about the data and arguments, rather than starting from the position that everything at CA is wrong, you will have an easier time understanding the essence of the argument.

  203. Willis Eschenbach
    Posted Dec 14, 2006 at 3:18 AM | Permalink

    Lee, you keep asking for the Neff results. I’ll give them to you, but first, a question – what will they prove?

    In any case, here they are … big revelation coming up … are you ready?

    For the GISS Temperature Data, HadCRUT3 Temperature Data, Scenario A, Scenario B, and Scenario C, the effective N is respectively …

    … drum roll …

    20, 17, 3, 4, and 5

    Boy, that was exciting.

    Let me go over the logic again, since you don’t seem to have grasped it yet.

    Hansen claimed the results showed that the models were skillful in forecasting the future.

    We have two choices. Either the model results have a CI or they don’t. Hansen and the modelers say no. I agree, because they’re model results, not outcomes from a stochastic process.

    If the model results do not have a CI, then they are significantly different from the data only if their trends are outside the 2-sigma envelope of the data. Since in fact the trends are outside that envelope, we can say that they are significantly different from the data, and Hansen’s claim that the models are skillful is wrong.

    On the other hand, if we assume that the model results do have a CI, we get into trouble when we try to calculate the CI. This is because the model results are too autocorrelated to reliably calculate a CI.

    But if there is no way to reliably calculate a CI, then there’s no way to say if the results are significant or not. And if we can’t say if they are significant or not, then Hansen’s claim that the models are skillful is wrong, because that can’t be determined.

    So either way, Hansen’s claim is wrong.

    w.
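
To put those Neff values in perspective: one common rule of thumb inflates the naive slope standard error by roughly sqrt(N / Neff). This is a sketch of that rule of thumb, not the equation-(22) adjustment from the paper quoted earlier:

```python
import math

def inflation_factor(n, neff):
    """Approximate factor by which autocorrelation widens the naive
    slope standard error: sqrt(N / Neff)."""
    return math.sqrt(n / neff)

# Neff values reported above: 20 and 17 for the temperature series,
# 3, 4, and 5 for Scenarios A, B, and C (48 annual values each).
factors = {neff: inflation_factor(48, neff) for neff in (20, 17, 5, 4, 3)}
```

With Neff = 3, the naive interval would need to be roughly four times wider – which is why the scenario trends, far more than the temperature trends, are the ones whose CIs are suspect.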

  204. Lee
    Posted Dec 14, 2006 at 9:40 AM | Permalink

    willis, one more time:

    The frickin’ trend you calculate from a subset of the data returned from that model run is NOT the result of the model run. When you fit a line to the results of the model run, that LINE FIT has a CI associated with it. This is true whether or not the model results have an associated CI. When comparing two line fits, one must consider the CIs from the line fits. The CIs will be larger for shorter time periods, and for noisier data, whether or not the data itself has a CI. You are conflating the results of the model run (which extends for more than 48 years, and consists of the discrete values, NOT the fitted trend) with the trend you calculate from a part of that model run.

    When you fit a line to noisy data, that line will NOT perfectly reflect the trend in that noisy data. You are conflating the results of the model itself (which is NOT A TREND) with the trend you calculate from fitting a line to part of the output from that model.

    You earlier made (and continue to make) the overt claim that the slope of the trend FIT TO 48 years of the model run is different from the slope FIT TO 48 years of temperature data. You are claiming that your line FIT TO that single model run has no error. You are claiming that the "33%" difference is greater than 2 sigma from the temps, while arguing that you can ignore the CI of YOUR LINE FIT to the model results, without even reporting the error of the temperature fit. And you are doing this for figures reported to ONE SIGNIFICANT FIGURE. Hell, the 'useful' values are not 0.6 and 0.8 – they are the ranges 0.55-0.65 and 0.75-0.85, because of the precision to which you report.

    I don’t remember exactly what Hansen has claimed these data show – I do know that you have been claiming his model failed. I readily admit that there is likely insufficient data to show that those two trends are statistically similar – I haven’t said otherwise. I do know that you have been loudly proclaiming that they are statistically distinct, and YOU DON’T HAVE THE DATA TO MAKE THAT CLAIM.

    Bouncing between the arguments that 'the two trends are statistically different' and 'Hansen is wrong anyway' does not rescue your claim that the two lines are different.
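
    The point at issue here – that a straight line fitted to a finite stretch of noisy output carries its own uncertainty, regardless of how the series was generated – can be sketched with synthetic data. The trend and noise levels below are illustrative stand-ins, not Hansen's actual model output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 48 years of model output: a fixed underlying
# trend of 0.02 C/yr plus "weather" noise (illustrative values only).
years = np.arange(48)
series = 0.02 * years + rng.normal(0.0, 0.1, size=years.size)

# Ordinary least-squares fit; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit(years, series, 1)

# Standard error of the slope, computed from the fit residuals.
resid = series - (slope * years + intercept)
n = years.size
s2 = np.sum(resid**2) / (n - 2)                       # residual variance
se_slope = np.sqrt(s2 / np.sum((years - years.mean())**2))

print(f"fitted slope = {slope:.4f} +/- {2*se_slope:.4f} C/yr (2-sigma)")
```

    Even though the underlying trend is fixed by construction, the fitted slope comes back with a nonzero standard error, and a fit over a different sub-period would return a somewhat different slope.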

  205. Lee
    Posted Dec 14, 2006 at 10:06 AM | Permalink

    Actually, this is what Hansen said in 1999 about those results:

    “Taking account of the fact that the real world volcano occurred in 1991, rather than 1995 as assumed in the model, it is apparent that the model did a good job of predicting global temperature change. But the period of comparison is too short and the climate change too small compared to natural variability for the comparison to provide a meaningful check on the model’s sensitivity to climate forcings. With data from another decade we will be able to make a much clearer evaluation of the model.”

    And this is what he said in the 2006 PNAS update:
    Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used, 4.2C for doubled CO2, is larger than our current estimate for the actual climate sensitivity. … More complete analyses should include other climate forcings and cover longer periods. Nevertheless, it is apparent that the first transient climate simulations proved to be quite accurate, not “wrong by 300%”.


    IOW, willis, he was NOT claiming that the model is skillful. He is pointing out that claims that the model was wildly wrong (similar to, if more exaggerated than, the claims you are making, in fact) are not true. So you also seem to have misrepresented Hansen's claims.
    Again, why on earth should we pay attention to you?
    Again, why on earth should we pay attention to you?

  206. jae
    Posted Dec 14, 2006 at 10:29 AM | Permalink

    Sheesh, Lee, I even understand this.

  207. Lee
    Posted Dec 14, 2006 at 11:58 PM | Permalink

    jae, please explain exactly what you understand?

    Willis was arguing based on ‘claims’ he attributes to Hansen, that Hansen apparently hasn’t made. His ‘hansen is wrong either way’ argument fails on this misrepresentation.

    If one can't make the statistical claim, then willis has no basis on which to say that the two trends are different. And since Hansen seems not to have said what willis claims he said, Willis's argument about Hansen being wrong if one can't make the statistical argument also fails.

    Various people have been arguing that the trends from the Hansen 1988 model run are clearly wrong – Hansen has simply been saying that those arguments are false – he has NOT been arguing that the results verify the model. He has been arguing that the results do not falsify the model, despite many claims that they do.

  208. Posted Dec 15, 2006 at 12:41 AM | Permalink

    He has been arguing that the results do not falsify the model, despite many claims that they do.

    What kind of results would falsify his model?

  209. Earle Williams
    Posted Dec 15, 2006 at 2:25 AM | Permalink

    UC,

    I think Lee means the model can only be wrong when Jim Hansen says it’s wrong. So we should be asking Dr. Hansen that I guess.

  210. bender
    Posted Dec 15, 2006 at 5:14 AM | Permalink

    #205
    1. There is no need to estimate the slope (and CI) of a fitted line to the “noisy” model output; the slope is a deterministic product of the model itself. i.e. A look at the model code will tell you what the model slope is with 100% certainty.
    2. The bottom line: Hansen’s model is not skillful in any meaningful sense of the word.

  211. bender
    Posted Dec 15, 2006 at 5:22 AM | Permalink

    Not relevant to Hansen’s case, but … if two estimated slopes are deemed to be not significantly different as a result of their CIs being very wide … that’s not a very convincing argument that the two slopes are the same. It’s proof that you need either more data to get a more precise estimate of the two slopes, or else some way of extracting some of the noise.

    I guess Lee is giving in on the Neff argument. Not sure that he’s understood it though.
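
    For readers following the Neff argument: the usual AR(1) adjustment shrinks the effective sample size to n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation, which in turn widens any CI computed from the series. A minimal sketch follows; the AR(1) coefficient of 0.6 is an assumed illustration, not a value fitted to any real temperature record:

```python
import numpy as np

def effective_n(series: np.ndarray) -> float:
    """Effective sample size under an AR(1) assumption:
    n_eff = n * (1 - r1) / (1 + r1), where r1 is the lag-1
    autocorrelation of the (demeaned) series."""
    x = series - series.mean()
    r1 = np.sum(x[:-1] * x[1:]) / np.sum(x * x)
    n = series.size
    return n * (1.0 - r1) / (1.0 + r1)

# Illustrative AR(1) series with coefficient 0.6 (assumed).
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()

print(f"n = {x.size}, n_eff ~ {effective_n(x):.0f}")
```

    With a coefficient of 0.6 the 500 nominal samples shrink to roughly 125 effective ones, so a naive CI computed with n = 500 would be far too narrow.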

  212. Willis Eschenbach
    Posted Dec 15, 2006 at 6:01 AM | Permalink

    Lee, Hansen said in the 2006 paper you quote from, and I quote:

    Nevertheless, it is apparent that the first transient climate simulations (12) proved to be quite accurate …

    I said that he claimed that his model was "skillful". He said that the results were "quite accurate". Perhaps you can explain how there's a difference between those two; I see none.

    He also played fast and loose with the global temperature data in the 2006 paper. Rather than showing the data he had used in his 1998 discussion (global data, including both land and ocean), he confuses things by including the land data in addition, and saying that the average of the two is the best metric to compare to the models. Regarding this, he says (omissions are reference numbers only):

    Temperature change from climate models, including that reported in 1988….usually refers to temperature of surface air over both land and ocean. Surface air temperature change in a warming climate is slightly larger than the SST change…., especially in regions of sea ice. Therefore, the best temperature observation for comparison with climate models probably falls between the meteorological station surface air analysis and the land–ocean temperature index.

    Now, the climate models include both the land and the ocean, and are reporting the temperature for both the land and the ocean. Why is it suddenly better to use a higher temperature for comparison, something between the land data and the ocean data? What he suggests using is weird, particularly since the land is only 30% of the globe. Once again, he is comparing apples to oranges.

    Global data is a weighted average of land data (30%) and ocean data (70%). Averaging this with land data (100%) as he proposes results in a weighted average of land data (65%) and ocean data (35%). What on earth is the reasoning behind weighting land data about twice as much as ocean data, when the actual areas are the reverse of that? Yes, "Surface air temperature change in a warming climate is slightly larger than the SST change" as he points out, but the models (if they are skillful, or accurate, as he claims) should reflect that. If they don't, we shouldn't re-weight the observational data to make them look better.
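
    The weighting arithmetic in the paragraph above can be checked directly (a minimal sketch; the 30%/70% land/ocean split is the figure used in the comment):

```python
# Global index weights: 30% land + 70% ocean (figures from the comment above).
global_w = {"land": 0.30, "ocean": 0.70}
# Land-station analysis: 100% land.
land_w = {"land": 1.00, "ocean": 0.00}

# Averaging the two series averages their weights,
# giving ~65% land and ~35% ocean:
avg_w = {k: (global_w[k] + land_w[k]) / 2 for k in global_w}
print(avg_w)  # -> land ~0.65, ocean ~0.35
```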

    Finally, if you haven’t understood yet that the trend line fit to a climate model result does not have a CI, I fear I can’t help you. A trend line from data reflecting a stochastic process has a CI. A trend line from data from a model does not, because it is not the result of a stochastic process. If you don’t get it, write to Hansen, I’m sure he can explain it to you.

    w.

  213. Ken Fritsch
    Posted Dec 15, 2006 at 11:28 AM | Permalink

    So yo also seem to have misrepresented Hansen’s claims.

    Would that be Hansen, the scientist, or Hansen, the policy advocate? Hansen, the policy person, makes slide presentations that predict a probable 50% species extinction within the next 100 years or so. Would that be based on the projections of his computer models and climate forcings that Hansen, the scientist, admits to much uncertainty about?

    I ask again: why does Hansen not use the yearly measured factors and give us a calculated annual temperature to compare to the instrumentally measured temperatures? Otherwise his predictions are based more on predicting those factors than on the capability of his computer model to handle those factors, and it is that capability that is mainly in question here.

    Again, why on earth should we pay attention to you?

    I think by paying attention to Willis E one can learn even when you do not agree completely with his conclusions. It may be subconsciously, but I see the same tendency in your posts that Bender sees, in that you, too often in my view, aim your rebuttals at the capabilities of those to whom you respond and as a result lose sight of the subject at hand.

    To a degree I understand that inclination, since many comment posts and subject posts at CA are aimed at the published works of individuals (mainly but not entirely AGW advocates), and in doing so the capabilities of those people are discussed as they affect their work. Most of the heavy-lifting critiques, however, that I read, and from which I learn the most, at CA concentrate on the subject matter.

  214. Lee
    Posted Dec 15, 2006 at 1:08 PM | Permalink

    willis, this is blatant quote mining – you truncated that sentence. If you include its last 4 words, Hansen was disputing – AS I FREAKING SAID – claims that the model was wildly inaccurate. Hell, if you actually read the PNAS paper (which I assumed you had done, but apparently not) you will see that he attributes the match in part to 'accident', not to 'skill.' You are blatantly misrepresenting what Hansen said.

    People are looking at this very early model result, and saying it is wildly inaccurate, and therefore the models are garbage. Hansen is responding by saying, no, they are not wildly inaccurate, so the conclusion that they are garbage is not correct. In doing so, Hansen also says that the very close agreement is at least in part accidental, and lists issues that make the model inaccurate – and you are interpreting this as a claim that the model is skillful??

    If the models are run multiple times with identical starting conditions, you will get multiple outputs. One can see it even in this data – B and C are identical conditions through 2000. In addition, the model run does not end in 2005 – the line fit is to a subset of the entire noisy curve. The model includes annual variation as well as climate signal – not perfectly, as y'all have been pointing out, but it is there. That in turn means that each run is going to have at least partly different annual variation: different noise, different output – so the output is not strictly deterministic. Also, this is a subset of the model run – fit a line using different time periods in this data, and you will get different slopes – which pretty much puts to rest the claim that the 'trend' willis reports from a line fit to data measured off a graph of a subset of a single run of the model is a deterministic result of the model.

    For some reason, the antis have decided this 20-year-old early model needs to be discredited. Hansen is saying – read his words in the damn papers – that the model is not perfect, that it has acknowledged errors, that the close agreement of the model with the data is at least partly accidental, but that the claims that this early 20-year-old model run is simply wrong are incorrect. And in response, you guys and the broader anti community keep misreporting what he said, removing data from that graph (the scenario C only thing in 1998), resetting the baseline by dropping 98% of the calibration data, comparing slopes from line fits to portions of the data while arguing that no statistical analysis is necessary, and acting bewildered when you are asked to justify your claims.

  215. Willis Eschenbach
    Posted Dec 15, 2006 at 5:06 PM | Permalink

    Lee, you seem to think we’re picking on a “20 year old” model as though that was somehow unfair. One of the unanswered questions of GCMs is how well they perform. The only way to do this is to wait a while, and then compare the forecasts with the actual outcomes. Perhaps you can explain how we can compare oh, say, a 20-year forecast with 20 years of data and not use a 20 year old model ….

    Nor were the “antis” the ones to bring it up. Hansen brought it up, to try to prove that the model worked, that it was “quite accurate”.

    Hansen did not say it was “fairly accurate” or “somewhat accurate” or “reasonably accurate”. Responding to claims that it was 300% in error, he said it was “quite accurate”. It is not. There is no “close agreement of the model with the data”, the results were outside the Goldilocks scenarios (that’s a technical term for a group of three scenarios where one is too high, one is too low, and one is “just right”). He chose the scenarios to bracket the results, and the attempt failed.

    Parts of your message I simply don’t understand, call me “bewildered”. What does “dropping 98% of the calibration data” mean? Which 2% was kept? What does “comparing slopes from line fits to portions of the data” mean? Which portions of what data? Where has anyone argued that “no statistical analysis is necessary”, when a CI is the result of a statistical analysis? What data has been removed from what graph?

    Finally, nobody has acted "bewildered when [we] are asked to justify [our] claims"; we have explained and justified them ad nauseam, in a perhaps vain attempt to offer you a bit of statistical education. We are bewildered by your lack of understanding of simple mathematical concepts.

    You seem to be losing the plot here …

    w.

  216. bender
    Posted Dec 15, 2006 at 5:17 PM | Permalink

    The model may be great. However the parameters of the model are always a concern. That overfitting problem …

  217. Posted Dec 15, 2006 at 5:53 PM | Permalink

    Hello Lee,

    Shall we broaden the debate a little and go back for a while to the climate sensitivity issue?

    In #147 you say that I implicitly assume the time lags to be small but in fact, in my attempt to understand how the orthodox climate sensitivity estimate is worked out, I explicitly mentioned the models the IPCC TAR uses to address this question: http://www.grida.no/climate/ipcc_tar/wg1/356.htm

    Irrespective of the length of the lags, two things stand out in these models:

    1) For small transient responses, such as the one we have witnessed so far, the oceans' thermal lag adds only a small T increment to the final "equilibrium" value.
    2) Even the models with the highest sensitivity only reproduce a roughly 2-fold increase from transient to final response due to thermal inertia. None of them produces a 7-fold jump, as a climate sensitivity of 3C would demand.

    In summary:

    a) The earth has already experienced the equivalent to a 70-75% CO2-doubling GHG forcing.
    b) The observed temperature increase during this period has been roughly 0.7C, but almost half of that warming cannot be attributed to GHGs, as it occurred in the early 20th century, when they had increased only very little.
    c) IPCC models do not project a several-fold increase from the transient to the final temperature response at all.

    With only 20-25% GHGs increase left to effectively reach a CO2 doubling plus the logarithmic effect of this additional increase, how do we possibly jump from ~0.7 C / 2 to ~3 C??
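
    The back-of-envelope arithmetic above can be written out explicitly. This is a minimal sketch using only the comment's own round figures (a ~0.4C GHG-attributable transient, per the author's later follow-up, and the cited 3C IPCC central estimate); these inputs are illustrative and are disputed later in the thread:

```python
# Figures taken from the comment above (illustrative, contested in-thread):
transient = 0.4    # C: rough guess for GHG-attributable warming to date
sensitivity = 3.0  # C per CO2 doubling: the cited IPCC central estimate

# The "jump" the comment refers to is the ratio a 3C equilibrium
# sensitivity would require between final and transient response:
ratio = sensitivity / transient
print(f"required final/transient ratio: {ratio:.1f}x")  # -> 7.5x
```

    Whether that ratio is implausible is exactly what the thermal-lag discussion below is about: it depends on how much of the transient response has already been realized.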

    Thanks,

    Mikel

  218. Posted Dec 15, 2006 at 5:56 PM | Permalink

    With only 20-25% GHGs increase left to effectively reach a CO2 doubling plus the logarithmic effect of this additional increase, how do we possibly jump from ~0.7 C / 2 to ~3 C??

    Only by the magic of positive feedbacks.

  219. Hans Erren
    Posted Dec 15, 2006 at 6:06 PM | Permalink

    … and the extreme aerosol cooling heritage of Rasool and Schneider.

  220. eduardo zorita
    Posted Dec 15, 2006 at 6:11 PM | Permalink

    #218 Mikel,

    you are forgetting here part of the IPCC reasoning: the warming has not been larger so far because of the aerosols. Actually, this would be the weak link in the chain in trying to estimate climate sensitivity from the instrumental record: the large uncertainty in aerosol forcing. For a good (imho) discussion on this, see
    Anderson et al. Science 300, 1103

  221. Lee
    Posted Dec 15, 2006 at 6:19 PM | Permalink

    willis, rather than guessing incorrectly at what I think, why not just respond to the damn issues.

    If one looks at Hansen’s ‘predicted’ period, from 1988 when he presented his scenarios to present, one gets a decadal trend for scenario B of 0.19C/decade. During that same period, the GISS station data yields a decadal trend of 0.19C / decade, and using the observational global land-sea data from Hansen’s PNAS 2006 yields a decadal trend of 0.21C.

    For the last year of the “prediction”, the GISS station data temperature lands slightly above Scenario B temps in 2005, while the land-sea temps are slightly below – both are within a few hundredths.

    This latter point is the one you originally disputed when you posted your CA article, the one where you made the temperature offset relative to the Hansen alignment. To do so, you ignored the 50 years of calibration data that Hansen used to set the baseline, and aligned to a single year, matching El Nino real-world temps (i.e. well above average for that year) with the slightly below-average model results for that year. In doing so, you threw away all but one year of the 50-year calibration period run under initial conditions (i.e. 98% of the calibration data) and introduced a substantial amount of annual variation, all acting in one direction.

    Hansen did not run that 50-year calibration period to 'end up' with the single best year – unforced annual variation means that there IS NO single best year. He did it to get 50 years of data under the same conditions, and then he took the average, so that he could minimize the effect of unforced annual variation. Your approach maximizes the impact of unforced annual variation in the one year you chose. Stop pretending you haven't heard this – it is precisely the point I was making when you dropped that argument and moved to the 'trend' argument.

    Finally, you are continuing to mischaracterize what Hansen said, in ways that do not reflect well on you. Here is an extended quote from the PNAS 2006 paper – you would do well to read it carefully:

    "Because of this chaotic variability, a 17-year period is too brief for precise assessment of model predictions, but distinction among scenarios and comparison with the real world will become clearer within a decade.
    Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used, 4.2C for doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3 +- 1C for doubled CO2, based mainly on paleoclimate data. More complete analyses should include other climate forcings and cover longer periods. Nevertheless, it is apparent that the first transient climate simulations proved to be quite accurate, certainly not “wrong by 300%”.

    That is not a claim of model skill. Simply put, he is saying, "we can't know yet; we know there must be some problems; but so far, despite what some are saying, there is nothing in that data to say the model is wrong by much." That is a substantially different thing from the claim you are putting in Hansen's mouth.

  222. Posted Dec 15, 2006 at 6:29 PM | Permalink

    Hi Eduardo,

    In my first post I did mention the masking effects commonly alleged to explain this conundrum. However, I have another big logical problem with the sulphate aerosols cooling theory. If these aerosols are only short-lived and basically a northern hemisphere phenomenon, why was the mid-20th century cooling global? In fact, the southern hemisphere, where they had little to no effect, cooled more than the NH in that period: http://hadobs.metoffice.com/hadcrut3/diagnostics/hemispheric/difference/

  223. Hans Erren
    Posted Dec 15, 2006 at 6:30 PM | Permalink

    more on aerosols:

    Andronova, N., and M. E. Schlesinger. 2001. Objective Estimation of the Probability Distribution for Climate Sensitivity, J. Geophys. Res., 106, D19, 22,605-22,612. http://crga.atmos.uiuc.edu/publications/Objective_Est_dT2x.pdf data: http://crga.atmos.uiuc.edu/publications/Climate.html

  224. Lee
    Posted Dec 15, 2006 at 6:30 PM | Permalink

    218:

    “2) Even the models with the highest sensitivity only reproduce a roughly 2-fold increase from transient to final response due to thermal inertia. None of them produces a 7-fold jump, as a climate sensitivity of 3C would demand.”

    What on earth does this mean? ‘transient response’ over what time period?

    Temps have increased roughly 0.6C over the last 50 years. When does that stop being 'transient' and become 'final'?

    If you had made this argument after the first 16 years or so of the 50 year period, with a temperature increase of about 0.2C, you would be claiming that we can only get an increase of 0.4C to ‘final’ temperatures.

    Much of the argument is about the rate of increase during the transient period – which in turn means the time it will be going on. You are assuming, without stating so, that the 'transient' is over. The fact is, because we don't know the time constants (although we know that many are on the order of decades or longer, and they are being investigated and modeled), you can NOT make the argument you just made – because you can't know how much of the 'transient' we have observed yet and how much is still to come.

  225. Hans Erren
    Posted Dec 15, 2006 at 6:35 PM | Permalink

    The thermal inertia is on the order of 300-500 years, and the CO2 peak has long subsided due to the ever-increasing absorption of the sinks,
    which means that the inertial system is not set in motion by the short impulse. It's like trying to move a tonne with a hammer blow.

  226. Hans Erren
    Posted Dec 15, 2006 at 6:40 PM | Permalink

    Here are the results of a realistic model (Using an unrealistic SRES A1B scenario) with a climate sensitivity of 1K/2xCO2

    Michael Kliphuis, Frank Selten & Henk Dijkstra Technical Report Dutch Computing Challenge Project Simulation of extreme weather events present and future December 15th 2004

    Click to access Techrep_DCCP.pdf

    Dutch Challenge Project
    http://www.knmi.nl/onderzk/CKO/Challenge/

  227. Ken Fritsch
    Posted Dec 15, 2006 at 8:35 PM | Permalink

    Lee, I’ll ask you directly the questions that arise from Hansen’s three scenarios and the basis of his predictions in general.

    Do the scenarios depend critically on estimating the GHG atmospheric concentrations and, to a lesser extent, on predicting the occurrences and intensities of volcanic eruptions?

    Is the skill in Hansen’s predictions determined exclusively by how accurately the computer can take these variable inputs and predict temperatures or does it include estimating accurately the GHG levels and volcanic eruptions?

    If one agrees that the prediction includes the GHG and volcanic variable estimations, then Hansen has failed: in Scenario A (exponential extrapolation from the 1970s) the GHG climate forcing levels were hugely overestimated and no volcanic eruptions were included; Scenario B (the conservative projection without significant GHG regulation, with a linear extrapolation) overestimates the GHG forcing levels by significant amounts and does include a very large volcanic eruption with an extent not quantified; and Scenario C includes predicting, evidently, some very stringent GHG regulations that have not occurred.

    If one says that the model's skill is relegated first and foremost to how well it uses variable inputs to predict temperatures, then one needs much more information about the inputs Hansen used, and a test that would be run (out-of-sample) in an entirely different manner. It is under these conditions of testing that I believe Willis E's points would be more validly applied.

    The actual way the scenarios were set out (and measured) makes it obvious to me that it was done with policy advocacy more in mind than doing more exact science on testing the skill of 3D climate models. While Hansen did publish these scenarios in 1988, here, it is important in such evaluations to also determine whether other model outputs were published by Hansen and/or his associates around that time, and to compare how well they performed out-of-sample.

  228. Posted Dec 16, 2006 at 7:54 AM | Permalink

    Re #225

    What on earth does this mean? 'Transient response' over what time period?

    Lee: do read the link I provided. You’ll understand how Raper et al define transient climate response in the context of their figure: “at the time of CO2 doubling at year 70, the 20-year average (years 61 to 80) global mean temperature change.”

    In a more general sense, transient response is by definition a response not yet completed at any given time. For example, in 2005 we can postulate that the TCR is the part of ~0.7C attributable to GHG increases. Perhaps 0.4C?

    As for the 7-fold increase I mentioned, 3C/0.4C = 7.5. My apologies if this arbitrary number caused you any confusion.

    Temps have increased roughly 0.6C over the last 50 years. When does that stop being 'transient' and become 'final'?

    I don’t know when the final response to CO2 doubling would be realized. Do you? I’m just trying to understand how you can have such a big confidence in the IPCC sensitivity range. But we’re not making much progress so far.

    If you had made this argument after the first 16 years or so of the 50 year period, with a temperature increase of about 0.2C, you would be claiming that we can only get an increase of 0.4C to “final’ temperatures.

    No. In 1972 we didn't have the equivalent of 3/4 of an effective CO2-doubling, so I couldn't make the argument I am making. OTOH, in 1972 global temps had been decreasing for almost 3 decades, so it would be difficult to make much sense of the 0.2 figure you mention, if available at the time.

    You are assuming without stating so that the 'transient' is over.

    Am I really? But I am saying that we still have 20-25% GHG increase left to reach a CO2-doubling (?).

  229. Posted Dec 16, 2006 at 9:00 AM | Permalink

    Lee and Steve B.

    I saw Al Gore's film last week and followed a debate about it with a housewife here who managed to invite over 1,000 politicians to see the film (yes, we have that many politicians in our federation: 1 king (and a lot of relatives), 3 language areas, 4 governments with nearly a hundred ministers and the like, with overlapping federal and state authority. No worry if you don't understand it, we don't either! But somehow it prevented Balkan-like civil wars…). She was invited afterwards by the federal minister of environment to give the speech for Belgium at the Nairobi Climate Conference. It was all in all a quite emotional letter (and debate). In the debate, I said that I could follow the second part of the film (how to reduce our dependence on fossil fuels), but that I had a lot of problems with the "science" of the first part. The journalist on duty called it "a high Hollywood content", which describes it very well.

    In the film I noticed many untruths, half truths (by omitting relevant information) and exaggerations. To name the most important:

    – Mann’s hockeystick as proof that the MWP was cooler than today.
    I suppose that this needs no more explanation here.

    – The disappearance of the Kilimanjaro glacier.
    The graph of historical temperatures based on the ice core of Kilimanjaro itself shows a warmer 9th and 14th century.

    – The increase of tropical storms (and/or intensity).
    More than enough discussed on these pages. No increase in number of storms, a small increase in the highest categories. Much depends on the accuracy of historical storm intensity. 2006 seems to be a very calm year.

    – The New Orleans disaster.
    The problems with Katrina don't have anything to do with global warming, and everything to do with the fact that the dikes around New Orleans were not high enough to withstand any tropical storm surge (the center of Katrina landed over Mississippi, not over Louisiana).

    – Increased property loss from storm disasters.
    That there is increased damage is clear, because more people now live in vulnerable habitats; just have a look at the number of Florida inhabitants a hundred years ago and now.

    – Pictures of glaciers in the Alps.
    The Alps have been nearly glacier free several times in the past 10,000 years. Several passes which still are closed today were open in Roman times (Col d’HĂ©rens, Sustenpas).

    – Connection between CO2 and temperature in ice cores over the past 600,000 years (a very suggestive performance by Al Gore!).
    The temperature caused the change in CO2, not the reverse. CO2 changes followed temperature changes with hundreds to thousands of years' delay, be it with an overlap. But there is one period where the temperature decreased to a minimum before CO2 levels started to decrease (the end of the Eemian). The subsequent decrease of 40 ppmv CO2 had no measurable effect on temperature. This points to a small effect of CO2 variations on temperature.

    – The melting of Antarctica.
    Some parts (mainly the Peninsula) are melting, other parts are increasing (the east side); in general it is in equilibrium.
    Greenland now melts more at the edges than it gains at the top. Summer temperatures in Greenland nowadays are still lower than in 1930-1945, when the edge melt was faster, before GHGs had much importance.

    – The North Pole melted more during the MWP than it does today. From a recent publication in the Journal of Geophysical Research (14 April 2006):
    “The degree of summer melt was significantly larger during the period 1130-1300 than in the 1990s”.

    What Gore did was very suggestive, but a little "economical with the truth"… And indeed the comparison with a snake oil salesman (and TV preachers) comes to mind…

  230. bender
    Posted Sep 21, 2008 at 10:18 AM | Permalink

    Bump.
    Re: Willis Eschenbach (#203), which calculates Neff for the GISS Temperature Data, HadCRUT3 Temperature Data, Scenario A, Scenario B, and Scenario C.