Hansen Scenarios A and B – Revised

This is a somewhat restated version of an earlier post seeking to understand the differences between Hansen Scenarios A and B. Rather than trying to clarify matters here, Gavin Schmidt posted over at Tim Lambert’s. In this morning’s post, I correctly identified that the difference between Scenarios A and B for periods up to the present pertained primarily to the handling of CFCs and had nothing to do with one being “exponential” and one being “linear”, as Gavin had stated at realclimate. However, I got wrongfooted somewhat in my interpretation of the data as archived at realclimate.

In Hansen et al 1988, they stated that they dealt with other CFCs and trace gases by doubling the effect of CFC11 and CFC12, a point that I noted in my post yesterday. They said:

Potential effects of several other trace gases are approximated by multiplying the CFC11 and CFC12 amounts by 2.

In my post yesterday, I incorrectly surmised that this would not be a substantial effect. When I analysed the data as archived at realclimate, I assumed that the effect of other trace gases would be counted by a separate line item in which they set the impact equal to the combined CFC11 and CFC12 impact. I did not consider the possibility that they would actually insert incorrect CFC11 and CFC12 values into their Schedule A physical inventories, making these values not directly reconcilable to physical measurements. However, this turns out to be what they’ve done. Schmidt snickered at my failing to consider the possibility of them altering their physical inventories to incorrect values. I see now what they’ve done and can follow through their reasoning, but I’d have been inclined to keep the Other Trace Gases as a distinct entry so that people can keep track of it. Climate scientists can be tricky accountants.

The more substantive issue is that this ad hoc handling of Other Trace Gases is what accounts for the near-time differences between Scenarios A and B – a point that I made this morning, even if I didn’t quite connect it to the doubling for OTGs.

Here is a diagram showing CO2 concentrations for Scenarios A, B and C (per the realclimate version), as compared with observations to 2006 (GISS). Obviously, there is no material difference between Scenario A and Scenario B concentrations or, for that matter, between either and observations up to 2006 or so. CO2 concentrations start to diverge between the two scenarios in the next decade, but, for the near-time analysis, calling one graph “exponential” and the other “linear” is not really salient to the quantitative analysis (contra Schmidt’s characterization at realclimate).

cfc_ha45.gif
Figure 1. CO2 concentrations. A,B, C from realclimate; observed from NASA GISS.
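The “exponential versus linear” framing can be checked with a back-of-envelope sketch. The starting level and growth rates below are round illustrative numbers, not Hansen’s actual scenario parameters:

```python
# Illustrative comparison of exponential vs. linear CO2 growth over the
# near term. The 1988 starting level and growth rates are round numbers
# chosen for illustration, not Hansen's scenario parameters.

def exponential_growth(c0, rate, years):
    """CO2 (ppm) compounding at a fixed fractional rate per year."""
    return [c0 * (1 + rate) ** t for t in range(years + 1)]

def linear_growth(c0, increment, years):
    """CO2 (ppm) growing by a fixed ppm increment per year."""
    return [c0 + increment * t for t in range(years + 1)]

c0 = 350.0                                      # ~1988 level, ppm
exp_path = exponential_growth(c0, 0.005, 40)    # 0.5%/yr compound
lin_path = linear_growth(c0, 1.75, 40)          # same initial slope

gap_20yr = exp_path[20] - lin_path[20]          # ~1.7 ppm after 20 years
gap_40yr = exp_path[40] - lin_path[40]          # ~7.3 ppm after 40 years
```

Over the first two decades the two paths differ by under 2 ppm; only after several decades does compounding separate them, which is why the distinction is not salient for the near-time analysis.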

Next methane, the second most important GHG. The left panel shows methane levels in Hansen et al 1988 as compared with present estimates; the right panel shows radiative forcing estimates. There is a slight difference between Scenario A and Scenario B up to 1998, when the Hansen-Michaels dispute occurred, but the difference is insufficient to yield the difference in Scenario A and B reported outcomes. A couple of interesting points though: the levels in Hansen et al 1988 reported for 1987 are noticeably higher than levels presently believed to have existed in 1987. Is this because of changing ways of measuring methane content? At present, I don’t know, but the differences are surprisingly high, as compared to CO2. As noted yesterday, methane concentrations have not increased nearly as much as Hansen projected, following a course parallel to Scenario C at a lower concentration. The forcing estimates obviously parallel the concentration estimates.

 cfc_ha46.gif  cfc_ha54.gif

There is negligible difference between Scenario A and Scenario B N2O.

The main action is in CFCs and, in particular, the Trace Gases other than CFC11 and CFC12. Here’s how Hansen et al 1988 described their CFC scenarios:

In Scenario A, … CCl3F (F-11) and CCl2F2 (F-12) emissions are from reported rates (Chemical Manufacturers Association (CMA) 1982) and assume 3% yr-1 increased emission in the future with atmospheric lifetimes for the gases of 75 and 150 years respectively. … In Scenario B… the annual growth of CFC11 and CFC12 is reduced from 3% yr-1 today to 2% yr-1 in 1990, 1% yr-1 in 2000 and 0 in 2010…. In Scenario C, CFC11 and CFC12 abundances are the same as scenarios A and B until 1990; thereafter CFC11 and CFC12 emissions decrease linearly to zero in 2000.
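The quoted emissions assumptions translate into concentrations through the stated atmospheric lifetimes. A minimal one-box sketch of how a 3% yr-1 emission path and a 75-year lifetime combine (the starting emission, in arbitrary units, is illustrative):

```python
# A minimal one-box model: dC/dt = E(t) - C/lifetime, stepped yearly.
# The 3%/yr emission growth and 75-year CFC11 lifetime are the figures
# quoted above; the starting emission (1.0, arbitrary units) is illustrative.

def concentration_path(e0, growth, lifetime, years, c0=0.0):
    """Concentration under exponentially growing emissions and
    first-order removal, integrated with a 1-year time step."""
    c, path = c0, [c0]
    for t in range(years):
        c = c + e0 * (1 + growth) ** t - c / lifetime
        path.append(c)
    return path

scenario_a_like = concentration_path(e0=1.0, growth=0.03, lifetime=75, years=30)
```

With a 75-year lifetime, only a small fraction decays each year, so concentrations keep climbing as long as emissions do – even a freeze in emissions growth does not flatten the concentration curve.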

As noted above, they also said:

Potential effects of several other trace gases are approximated by multiplying the CFC11 and CFC12 amounts by 2.

The Figure below shows CFC11 and CFC12 concentrations, using only the realclimate data pertaining to CFC11 and CFC12. In this calculation, the arbitrary doubling of CFC11 and CFC12 values in Scenario A (representing OTGs) has been backed out and will be shown separately under OTGs. Scenario A and B projections for CFC11 and CFC12 are identical; thus this is not a reason for Scenario A and B differences. Both scenarios over-estimated actual concentrations, which tracked Scenario C.

 otgsh69.gif  otgsh70.gif
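The “backing out” step can be expressed in a couple of lines. The archived values below are made-up placeholders, not the realclimate data; only the halving logic comes from Hansen’s stated method:

```python
# Sketch of the "backing out": if the archived Scenario A CFC values were
# doubled to stand in for Other Trace Gases, then the physical CFC11+CFC12
# values are half the archived numbers and the OTG share is the other half.
# These archived values are made-up placeholders, not the realclimate data.

archived_scenario_a = [0.50, 0.56, 0.62]        # doubled values, arb. units

physical_cfc = [v / 2.0 for v in archived_scenario_a]
otg_proxy = [v - p for v, p in zip(archived_scenario_a, physical_cfc)]

# By construction the OTG proxy equals the physical CFC contribution,
# which is exactly the "multiply by 2" approximation.
```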

In my first try at this post yesterday, when I noted the doubling as an ad hoc method for Other Trace Gases, I hypothesized that the effect would not be material, but this does not prove to be the case. The figure below shows how the primary near-time difference between Scenarios A and B arises. The solid black line shows the Scenario A OTG radiative forcing (equal by assumption to the combined Scenario A/B forcing from CFC11 and CFC12) – Hansen’s “Business as Usual” case; the dotted line (zero for all years) shows the Scenario B forcing (Hansen’s “more plausible” case). The red points show the radiative forcing for OTGs from realclimate; green lines are a first try at IPCC A1B and A2.

otgsh68.gif

So where does this leave us? To clarify one point, I don’t have any problem with properly articulated scenarios as a way of analyzing projections. How else are you going to do it? With respect to projections of CO2 concentration made in the 1980s, the projections are pretty much on the money. But even if they were off to some degree, all it would do is somewhat defer or accelerate the date at which CO2 doubles. And I agree with concerned climate scientists that doubling CO2 is an unplanned real-life experiment that could have very serious consequences. If the impact is 2.5 deg C, then we should probably be glad that that’s all it is, as the results could have been much worse.

If Hansen’s 1988 model didn’t work very well, then that does not mean for me that some other model mightn’t work, even some other model by Hansen. Many readers here take much more hard-edged positions on these things than I believe to be justified.

Having said that, I’m also trying to figure out how one gets from point A to point B in all of this, a path made more difficult because the field is all too often characterized by sloppy practices.

The different handling of Other Trace Gases was material to Scenarios A and B outcomes, accounting for virtually the entire difference between scenarios. Scenario B hypothesized that there would be no contribution from the Other Trace Gases. Why did Hansen think in 1987 that this was “more plausible” than the doubling in Scenario A? With the benefit of hindsight, it looks like OTG forcing was about halfway between Scenario B and Scenario A (before any counting from tropospheric ozone etc., which would move the results closer to Scenario A).

What then is the basis, if any, for concluding that Scenario B was the more “plausible”? I doubt whether Hansen was really thinking about differences between near-time Other Trace Gas concentrations when he said that Scenario B was “more plausible” in the near-time dispute with Michaels. The 1988 paper strongly suggests that he was thinking about “deep time” differences out past 2025, where exponential versus linear actually matters. However, the term proved “convenient” when Hansen re-visited the matter in his 1998 debate.

However, trying to put oneself in Hansen’s shoes back in 1988, there’s nothing to suggest that he thought it “less realistic” to account for OTGs by doubling than to ignore them (Scenario B). Of course, this ad hoc method appears with hindsight to have over-estimated the impact of OTGs, but equally there’s nothing to suggest that Hansen in 1988 thought it more “realistic” to ignore the contribution of OTGs.

As to how Hansen’s model is faring, I need to do some more analysis. But it looks to me like forcings are coming in below even Scenario B projections. So I agree that it’s unfair for Hansen critics to compare Scenario A temperature results to actual outcomes as a test of the model mechanics. On the other hand, Hansen’s supporters have also been far too quick to claim vindication given the hodgepodge of GHG concentration results. If it’s unfair to blame the model for differences between actual and projected when the GHG projections are wrong, then it is equally unfair to credit the model with “success” if it gets a “right” answer using wrong GHG projections. One would really have to re-run the 1988 model with observed GHG concentrations to make an assessment. Given that GISS have changed their models since 1988, GISS would presumably argue that the run is pointless, but the cost of doing the run doesn’t appear to be large and it seems like a reasonable exercise for someone to do. It would be interesting to obtain a listing of the 1988 model to that end.


52 Comments

  1. H. Patrick Boru
    Posted Jan 18, 2008 at 5:35 PM | Permalink

    It would seem running a model with actual numbers would be useful in testing the model’s assumptions. Some of those assumptions might be applied in newer models.
    What if the model is spot on?
    Wouldn’t that encourage the modellers to incorporate more of the original model’s assumptions and methodology in later models?
    Or at least have a greater level of confidence in those assumptions?

  2. AK
    Posted Jan 18, 2008 at 5:57 PM | Permalink

    When I analysed the data as archived at realclimate, I assumed that the effect of other trace gases would be counted by a separate line item in which they set the impact equal to the combined CFC11 and CFC12 impact. I did not consider the possibility that they would actually insert incorrect CFC11 and CFC12 values into their Schedule A physical inventories, making these values not directly reconcilable to physical measurements. However, this turns out to be what they’ve done. Schmidt snickered at my failing to consider the possibility of them altering their physical inventories to incorrect values. I see now what they’ve done and can follow through their reasoning, but I’d have been inclined to keep the Other Trace Gases as a distinct entry so that people can keep track of it. Climate scientists can be tricky accountants.

    I would guess they had a big program with a lot of pieces that would have had to be re-coded and re-compiled, etc. For a quick&dirty, it’s easier to (mis)use an existing parameter than add one. It certainly wouldn’t have stood up to an accounting audit, AFAIK. (I’ve seen similar things done in a banking environment, but the final data massage was hands-on.)

    Since the purpose of the model is to relate GHG concentrations to climate projections, the entire system should have been set up to support regression testing from the start. I doubt there was any feedback between climate activity and GHG chemistry at that time, so a simple input dataset could have been used. Is the information wrt how the model worked even available? Do we know whether the data you’ve used above was input or intermediate?

  3. John Lang
    Posted Jan 18, 2008 at 6:27 PM | Permalink

    Thanks Steve. I’ve read that paper a dozen times before and come away with a different understanding of the scenarios each time. It is pretty clear on the GHG assumptions now.

  4. Rattus Norvegicus
    Posted Jan 18, 2008 at 7:13 PM | Permalink

Steve, did you read the part in the 1998 paper which states that Model II at that time had a 4.2C value for S? You know that more modern models (specifically Model E) have lower values for S? In the case of Model E I believe that it is 2.5C.

  5. nanny_govt_sucks
    Posted Jan 18, 2008 at 7:25 PM | Permalink

    In Scenario B… the annual growth of CFC11 and CFC12 is reduced from 3% yr-1 today to 2% yr-1 in 1990, 1% yr-1 in 2000 and 0 in 2010….

    So, shouldn’t we see a flat line for CFC11 and CFC12 for Scenario B in your graphics starting around 2010? The Scenario B CFC11 and 12 concentrations apparently continue to rise – where is this “reduction”?

  6. maksimovich
    Posted Jan 18, 2008 at 7:47 PM | Permalink

Global emissions of CFC-11 (88 Gg/yr, where 1 Gg = 10^9 grams), CFC-12 (114 Gg/yr), and CFC-113 (6 Gg/yr) in 2003 were approximately 25%, 25%, and 3% of their maximum values around 1986. Emissions of CFC-11, CFC-12, and CFC-113 have all continued to decrease since 2000.

  7. Shanghai Dan
    Posted Jan 18, 2008 at 11:14 PM | Permalink

    This needs to make it out into the general population a LOT more. As a layperson (but at least an engineer), it seems to me that the graphs above show that CO2 is NOT the problem; in fact, methane and CFCs are the primary forcing gasses of GW.

  8. mccall
    Posted Jan 18, 2008 at 11:34 PM | Permalink

    “If Hansen’s 1988 model didn’t work very well, then that does not mean for me that some other model mightn’t work, even some other model by Hansen. Many readers here take much more hard-edged positions on these things than I believe to be justified.”

As posted elsewhere, Dr Hansen doesn’t make pronouncements without a GCM telling him it’s safe. The GCMs he used in 1988 stunk, er, “didn’t work very well” — the GCMs Hansen used in DEC’06 and JAN’07 worked about the same. Unlike Dr. Jones, who took some lumps, we can be sure Hansen will try to spin this into a silk shirt; but it will only work if one’s lost one’s sense of smell.

  9. mccall
    Posted Jan 18, 2008 at 11:39 PM | Permalink

    I guess that makes me a hard-a__ — hard-edge!

  10. Geoff Sherrington
    Posted Jan 19, 2008 at 2:14 AM | Permalink

    These gas analyses look largely bogus, CO2 (maybe) excepted. One simply cannot measure trace gases to ppb or ppt level and then distribute the gases in the atmosphere according to natural processes to arrive at 2 significant figure accuracy.

Where is the science in lumping other trace gases with CFCs whose atmospheric lifetimes are said to be 75 and 150 years? Some OTGs might have lifetimes of rather different duration. If CFC lifetimes are 75 to 150 years, one cannot realistically get a sudden change in measured concentration, conveniently around the time the paper was printed in 1988, and one should not extrapolate it to OTGs. The credible response curve, both during the term of CFC emission and since cessation, would be rather smeared, could easily be half or twice the concentrations graphed above, and one would not wish to read much into graph shapes over a couple of decades.
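The smearing point follows from first-order decay: a gas removed with lifetime tau retains exp(-t/tau) of its burden t years after emissions cease, so a long-lived gas cannot drop suddenly. A quick sketch with illustrative numbers:

```python
# First-order decay after a complete emissions cutoff: the airborne
# fraction after t years is exp(-t/lifetime). The 75-year lifetime is the
# CFC11 figure quoted above; the 10-year horizon is illustrative.
import math

def remaining_fraction(years, lifetime):
    """Fraction of the burden still airborne `years` after emissions cease."""
    return math.exp(-years / lifetime)

decade_later = remaining_fraction(10, 75)   # ~0.875: barely moved in a decade
```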

  11. VG
    Posted Jan 19, 2008 at 4:34 AM | Permalink

    NCDC
    19th warmest on this page

    http://www.ncdc.noaa.gov/oa/climate/research/2007/dec/global.html#current-month-midtrop

    then go to the graph

    No comment.

  12. peter
    Posted Jan 19, 2008 at 6:12 AM | Permalink

Went over and looked at Deltoid this am, and really, it’s gross. As gross as Eli’s site. These people are simply so rude and so condescending and so self-righteous, and about what? You’re to be congratulated on preserving a thick skin and a mostly even temper. They are doing themselves no good with this stuff. Delete this one if you like, it’s way off topic; just wanted to say it.

  13. Craig Loehle
    Posted Jan 19, 2008 at 8:10 AM | Permalink

    FWIW I would guess that the 1988 model can no longer be run. The compilers and even the language have changed since then. Thus you can’t make a comparison run.

  14. Bruce
    Posted Jan 19, 2008 at 10:11 AM | Permalink

    VG (#11)

    How can the Global Land temp be the 8th warmest, when the NH is the 10th and the SH is the 24th?

    How can .98 NH + .27 SH = .81 Global?

Shouldn’t it be (.98 + .27) / 2 = .625?

    Doesn’t the SH count as half the earth even if there are fewer stations?
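For what it’s worth, the arithmetic can be sketched both ways; the ~68%/32% NH/SH land split below is an approximate illustrative figure, not the weighting NCDC actually uses:

```python
# Simple mean vs. land-area-weighted mean of the hemispheric anomalies
# quoted in the comment. The ~68%/32% NH/SH land split is an approximate
# illustrative figure, not NCDC's actual weighting.

nh_anomaly, sh_anomaly = 0.98, 0.27         # deg C, from the comment

simple_mean = (nh_anomaly + sh_anomaly) / 2
weighted_mean = 0.68 * nh_anomaly + 0.32 * sh_anomaly   # tilts toward the NH
```

A land-area weighting pulls the global land figure well above the simple hemispheric mean, toward the NH value.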

  15. Harry Eagar
    Posted Jan 19, 2008 at 10:29 AM | Permalink

    #12 Geoff, that comment is generalizable to a lot more claims about GW than just trace gases.

  16. Tim Ball
    Posted Jan 19, 2008 at 11:14 AM | Permalink

This and the original Hansen studies are all very interesting, but they are focussing on less than 4 percent of the GHGs. I notice that immediately under Figure 1 you say “Next methane, the second most important GHG.” Actually, it is the third most important and, though argued to be far more effective as a GHG than CO2 and H2O, is only approximately 0.3% of the total. A further measure of the problems is the fact there is considerable disagreement about the effectiveness of H2O as a GHG. Surely as that varies, the variations in quantity of other GHGs become more or less important.

    I would like to know how much the most important GHG (95% by volume) changed over the same period as examined for the gases discussed here. How accurate were the measures and predictions for that GHG? It’s interesting how H2O essentially only gets attention when it is necessary to provide a positive feedback to overcome the upper limits of warming due to CO2 increases. It appears they have estimated the amount of increase of H2O necessary to achieve the scenarios for temperature increase provided by IPCC. What are these numbers and how do they match with actual changes in atmospheric H2O? IMO all these discussions about very minor GHGs are essentially a sidetrack if not redundant unless consideration is given to the most important one.

  17. James Bailey
    Posted Jan 19, 2008 at 11:20 AM | Permalink

Whatever their reasoning for using that as an approximation, they know their approximation was the cause of the difference between case A and case B. Having proven that their approximation caused the difference, and knowing that it was inaccurate, they went ahead and labeled it as business as usual. Case A, with exaggerated warming due to known inaccuracies in input, was labeled business as usual and presented as the scare scenario for doing nothing. Even with mild disclaimers in the text, that is highly unethical. Considering it was intentionally misrepresented in testimony before Congress, it may be worse than simply unethical.
    And they have continued to allow detractors to follow the misrepresentations and then attack the results as being in error.
It is not simply a question of whether this was a good choice. They proved it was a bad choice.
    It is a question of what they did with it knowing the results were in error.

  18. cbmclean
    Posted Jan 19, 2008 at 11:43 AM | Permalink

    #14 Bruce,

When it comes to land area, remember that the SH is not half the world. Most of the land is in the NH.

  19. eric mcfarland
    Posted Jan 19, 2008 at 11:46 AM | Permalink

Hansen had this discussion with Cheney et al. in 2001, when Cheney raised the so-called other greenhouse gases with him. Cheney had seized on the following line from a paper to basically position CO2 as a non-issue: “We argue that rapid warming in recent decades has been driven mainly by non-CO2 greenhouse gases, such as chlorofluorocarbons …[CH4] … [N2O] … and not by the product of fossil fuel burning …” The paper in question is “Global Warming in the Twenty First Century: An Alternative Scenario.” This is all discussed in Bowen’s new book Censoring Science, at 101.

  20. eric mcfarland
    Posted Jan 19, 2008 at 12:04 PM | Permalink

Is it not the longevity of CO2, at least in part, that makes it “the” player?

  21. DocMartyn
    Posted Jan 19, 2008 at 12:50 PM | Permalink

    Nice little paper on the kinetics and steady state levels of hydroxyl radical and methane.

    http://www.igac.noaa.gov/newsletter/igac21/methane_sink.html

It appears that the half-life of methane was miscalculated.

    “The total methane lifetime was estimated to be tCH4=8.4 years. Using this method to estimate methane lifetime implies that any corrections of the methyl chloroform lifetime would lead to changes in the methane lifetime. In fact, from a recent analysis of methyl chloroform observations, Montzka et al. (2000) deduced a reduced atmospheric lifetime of methyl chloroform of 5.2 (+0.2-0.3) years.”

This will make a big difference to the methane steady state, w.r.t. atmospheric input.

Anyone know how the changes in UV solar output are tied to sunspot activity? I am a bit surprised that they do not have the 2xHO –> H2O2 reaction as a hydroxyl radical sink. Atmospheric H2O2 has been measured for some time.
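On the steady-state point: for a constant source S and first-order removal with lifetime tau, the equilibrium burden is simply S times tau, so any revision of the lifetime rescales the steady state proportionally. A trivial sketch with illustrative numbers:

```python
# Steady state for constant source S and first-order removal with
# lifetime tau: C_ss = S * tau. The 8.4-year lifetime is from the quoted
# text; the revised value is purely illustrative, not Montzka's number.

def steady_state(source, lifetime):
    """Equilibrium burden: input balances first-order removal."""
    return source * lifetime

original = steady_state(source=1.0, lifetime=8.4)
revised = steady_state(source=1.0, lifetime=7.0)    # hypothetical shorter tau
```

A shorter lifetime means a proportionally lower steady-state burden for the same emissions, which is why a corrected methyl chloroform lifetime propagates into the methane budget.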

  22. Phil.
    Posted Jan 19, 2008 at 2:15 PM | Permalink

    Re methane, you say:

    the levels in Hansen et al 1988 reported for 1987 are noticeably higher than levels presently believed to have existed in 1987. Is this because of changing ways of measuring methane content? At present, I don’t know, but the differences are surprisingly high, as compared to CO2.

In the paper Hansen states that the CH4 values are based on Lacis et al. [1981], starting at 1.4 ppm in 1958 with various increments over time as specified. As I pointed out yesterday, this might have been based on N hemisphere measurements, which are higher than global. Either way, the reason CH4 is high is because the contemporaneous source he used was high; even with that, his scenario C trajectory for CH4 seems remarkably prescient!

Scenario A and B projections for CFC11 and CFC12 are identical; thus this is not a reason for Scenario A and B differences. Both scenarios over-estimated actual concentrations, which tracked Scenario C.

They’re actually rather good projections; the over-estimate would be expected since the Montreal Protocol phasing out their use had not yet been signed, much less implemented.

The different handling of Other Trace Gases was material to Scenarios A and B outcomes, accounting for virtually the entire difference between scenarios. Scenario B hypothesized that there would be no contribution from the Other Trace Gases. Why did Hansen think in 1987 that this was “more plausible” than the doubling in Scenario A? With the benefit of hindsight, it looks like OTG forcing was about halfway between Scenario B and Scenario A (before any counting from tropospheric ozone etc., which would move the results closer to Scenario A).

In the intro to the paper (section 4, the paragraph before he explicitly explains why he believes scenario B to be the most plausible) Hansen says:

    “The range of climate forcings covered by the three scenarios is further increased by the fact that scenario A includes the effect of several hypothetical or crudely estimated trace gas trends (ozone, stratospheric water vapor, and minor chlorine and fluorine compounds) which are not included in scenarios B and C.”

    This is also explicitly referred to in the paper: “Figure B2 summarizes the estimated decadal increments to global forcing. The forcings shown by dotted lines in Figure B2 are speculative; their effect was included in scenario A but was excluded in scenarios B and C.”

    Also the paragraph specifically describing scenario B says:
    “No increases are included for other chlorofluorocarbons, O3, stratospheric H2O, or any other greenhouse gases.”

    This statement is repeated in the para referring to scenario C.

I don’t know how you could have missed these repeated statements; I would suggest that an ‘auditor’ should read the material more carefully before rushing into print.

Steve: Why do you say that I missed these statements? I saw those statements. But I don’t see how they tie in to my observation. Why is a projection excluding these gases “more plausible” than one with them?

  23. Kevin B
    Posted Jan 19, 2008 at 2:23 PM | Permalink

    Steve, you say that you’re concerned about the ‘unplanned experiment’ of doubling CO2 in the atmosphere. For my part, I doubt that we are capable of doubling CO2 no matter how much fossil fuel we burn.

Consider that 4 billion or so years ago the atmosphere had 20% CO2 and trace amounts of oxygen, but when life discovered the trick of turning water, energy and the CO2 in its environment (together with a few trace elements) into complex carbohydrates with a spare bit of oxygen, the ratios were pretty quickly reversed to the 20% oxygen and trace amounts of CO2 we see today.

Consider also that in a typical greenhouse, plant growth ceases by mid-morning as the CO2 content of the air has fallen to 150 ppm/v or so. Commercial growers combat this by pumping up the level to 1200 ppm, not to increase the greenhouse effect but to feed the plants. As long as the plants have those three basic things – water, energy and CO2 – and enough of the nutrients they need, they will keep growing (and pumping out oxygen).

    It’s worth remembering that 150ppm/v figure when the smart people start talking about seeding the oceans with iron to sink carbon. It’s also worth remembering that at 90 ppm/v photosynthesis stops. No photosynthesis, no oxygen, no life.

    Life, both in diversity and quantity, thrives in a warm wet world with plenty of CO2. Cold and dry with minimum CO2 is not so good.

  24. PaddikJ
    Posted Jan 19, 2008 at 5:49 PM | Permalink

    re: #20; alas, not even the residence time of CO2 in the atmosphere is “settled” (or less charitably, escapes the revisionism of the AGW Crusade). See here. Until the mid ’80s, the bulk of human-emitted CO2 was understood to stay aloft for 5-10 years. But as the AGW bandwagon gathered speed, the scales fell from our eyes, and lo, CO2 was found to persist for centuries.

    I wonder how long it will take before it’s millennia.

  25. Geoff Sherrington
    Posted Jan 19, 2008 at 8:08 PM | Permalink

Further to my #10 above, errors of chemical analysis are a significant complication.

    Example from realclimate, eric, #10, 17 Dec 2004 –

    Most labs can measure 12C/12C ratios to a precision of about 0.005 percent. That’s about thirty time smaller the observed change.

    Read it again.

  26. Phil.
    Posted Jan 19, 2008 at 8:56 PM | Permalink

    Re #22

Steve: Why do you say that I missed these statements? I saw those statements. But I don’t see how they tie in to my observation. Why is a projection excluding these gases “more plausible” than one with them?

    If that were so you would have hardly made some of the statements in your original blog, for example:

    Since I wrote this, I became aware of Hansen data archived, oddly enough at realclimate, which shows that CFC11 and CFC12 concentrations were doubled in Scenario A as a means of modeling Other CFCs and Traces Gases and this accounts for the main near time difference between Scenarios A and B.

    As I have shown in #22 this fact is referred to in Hansen et al. 1988 several times; the paper you say you had read, you must have missed it if you weren’t aware of it until you found the data on RC!

    I think the following statement makes it clear why A would be less plausible than B:

    “The range of climate forcings covered by the three scenarios is further increased by the fact that scenario A includes the effect of several hypothetical or crudely estimated trace gas trends (ozone, stratospheric water vapor, and minor chlorine and fluorine compounds) which are not included in scenarios B and C.”

Steve: Phil, it’s not that I hadn’t noticed this comment. This is evident from the fact that I mentioned it in my post the previous day. However, in that post, I was only at the gas concentration stage and hadn’t yet modeled the RFs. Given the very cursory mention of minor gases and Gavin’s statement that the difference between the scenarios was related to exponential rather than linear models, I presumed (incorrectly) that the difference between the Scenarios lay elsewhere than the minor gases. So I was scratching my head trying to figure out what caused the difference. I also wasn’t sure whether I’d emulated the gas concentrations correctly. When Lucia pointed me to data at RC, I was able to determine that the difference did, after all, lie with the minor gases. I was wrongfooted temporarily by how they accounted for things – as I noted, I can see how they did it, but I think that it’s a very odd way and I didn’t connect the two statements. Yes, the statement is there; I’d seen it and noted it, though people seem to want to take cheap shots at me, but I didn’t connect the dots until the method was clarified.

    Having said that, as I work through it more, even if the method is mentioned in what is the equivalent of a footnote to the statements, there was no clear statement of the impact of minor CFCs on the two Scenarios. I’ve done some more plots and the impact is quite startling. If I didn’t figure everything out all at once, sorry about that, but I’m unaware of anyone else even raising the issue, despite all the ink spilled over Hansen et al 1988 (I noticed that it’s been discussed or referred to in at least 8 different RC posts over the last 4 years.)

  27. Phil.
    Posted Jan 20, 2008 at 1:14 AM | Permalink

    Re #26

Steve: Phil, it’s not that I hadn’t noticed this comment. This is evident from the fact that I mentioned it in my post the previous day. However, in that post, I was only at the gas concentration stage and hadn’t yet modeled the RFs. Given the very cursory mention of minor gases and Gavin’s statement that the difference between the scenarios was related to exponential rather than linear models, I presumed (incorrectly) that the difference between the Scenarios lay elsewhere than the minor gases. So I was scratching my head trying to figure out what caused the difference. I also wasn’t sure whether I’d emulated the gas concentrations correctly. When Lucia pointed me to data at RC, I was able to determine that the difference did, after all, lie with the minor gases. I was wrongfooted temporarily by how they accounted for things – as I noted, I can see how they did it, but I think that it’s a very odd way and I didn’t connect the two statements. Yes, the statement is there; I’d seen it and noted it, though people seem to want to take cheap shots at me, but I didn’t connect the dots until the method was clarified.

I guess that’s the difference in our backgrounds: as a scientist, when someone tells me he simulated the effect of minor species by doubling the major species concentrations, that’s exactly what I’d expect him to do. As for the cheap shots, I’m afraid ‘you reap as you sow’; you’re not exactly shy about doing that yourself, for example “Since I wrote this, I became aware of Hansen data archived, oddly enough at realclimate, which shows that CFC11 and CFC12 concentrations were doubled in Scenario A as a means of modeling Other CFCs and Traces Gases”, and yet you know that information is archived in the original paper, so why the gratuitous shot at RC?
The method was made clear in the paper; that it results in 0.1 W/m2 warming in A as opposed to B over the last 50 years has been shown by your calculations.

  28. Geoff Sherrington
    Posted Jan 20, 2008 at 1:52 AM | Permalink

    Re # 20 eric mcfarland

Is it not the longevity of CO2, at least in part, that makes it “the” player?

    When I try to find the atmospheric residence time or half life of CO2 in the atmosphere, I get a range of results. PaddikJ in # 24 above gives a link to yet another value.

    Eric, is there consensus about CO2 and for that matter, other implicated gases? I’ve seen CFC half lives from 3 years to 150 years, depending on the expressed qualifications. Can you please post a table of “accepted” results?

    Re # 30 Phil, I have no problem jumping to Steve’s defence. I worked extensively with radioisotope decay schemes. If I’d read of a decay curve for an isotope with a note added saying that some other isotopes of different half-life had been lumped in with the species being examined, for convenience, I would probably have missed the meaning of it, because it’s a procedure that careful scientists do not use.

    Why don’t they? Simply look at the confusion this careless (pseudo)science has caused here. It’s wrong science and no semantics excuse it.

    I have deplored for 30 years the standard of “green” science in general and examples like this leave me sadder.

    Further, from above,

    “…scenario A includes the effect of several hypothetical or crudely estimated trace gas trends (ozone, stratospheric water vapor, and minor chlorine and fluorine compounds) which are not included in scenarios B and C.”

    Are you not alarmed that the vague mention of hypothetical gases suggests that the money had already been put on CO2 before 1988, before the full results were in?

  29. pjjaco
    Posted Jan 20, 2008 at 2:56 AM | Permalink

    http://www.climateaudit.org/?p=2630

  30. Phil.
    Posted Jan 20, 2008 at 8:30 AM | Permalink

    Re #31

    Are you not alarmed that the vague mention of hypothetical gases suggests that the money had already been put on CO2 before 1988, before the full results were in?

    The gases weren’t hypothetical; their trends were. And it wasn’t a ‘vague mention’: they were explicitly identified, and the aforementioned trends were shown in figure B2.

  31. Max
    Posted Jan 20, 2008 at 1:35 PM | Permalink

    About the data.
    Coming from an industry that was one of the major users of CFCs, I would have to say that assuming atmospheric concentrations of CFCs would take such a sharp turn at the point of the Montreal Protocol signing is a pipe dream. Vast amounts of CFCs were still in machines and in inventory stockpiles well past 1990. I am also probably not the only technician out there to have had new R-134a, both bottled and in machinery shipped from China, mysteriously burn with a green flame. There are still a lot of leaky old R-11 machines in service in North America, due to the present cost of conversion/replacement. Just having legislation passed doesn’t mean reductions actually happen and targets get met.

  32. eric mcfarland
    Posted Jan 20, 2008 at 11:53 PM | Permalink

    Max:
    There was substantial voluntary phase-out before Montreal. I don’t know if that explains it. In any event, I thought the link below would add some balance to all of this:

    http://www.logicalscience.com/skeptic_arguments/models-dont-work.html

  33. TAC
    Posted Jan 21, 2008 at 6:27 AM | Permalink

    It seems that none of Hansen’s scenarios has actually come to pass, which makes it hard to evaluate his 1988 predictions (though they may be suggestive nonetheless). CO2 seems to be close to “B”; CH4 is well below “C,” as is CFC-11.

    It does seem that “A” is off the table; “A” did not happen. Under the circumstances, to argue that Hansen was “wrong” about “A” seems utterly silly.

    As is often the case in climate science (and elsewhere, btw), there are reasons to be concerned about the accuracy of some of the measurements.

    Nonetheless, I find it refreshing to read discussions about testable predictions. Even though the scenarios were off, Hansen, Roger Pielke, Jr. and SteveM should all be commended for, respectively, having the courage to make “testable” predictions and then trying to evaluate the predictions rigorously. This is a good thing.

    Given the clear and straightforward analyses here, one wonders why the debate — as seen here and elsewhere — seems so heated and, to my ear, mean-spirited.

    Finally — and this may be completely off-the-wall — I am beginning to perceive a convergence of thinking on climate sensitivity: AGW proponents seem comfortable with 1.5-4.5 degrees C; skeptics seem comfortable with 1.0-1.5 degrees C. We’ve already observed about 0.5-0.7 degrees C. It might take a couple of decades (given LTP, volcanoes, etc.), but, given non-overlapping hypotheses, it might eventually be possible to settle the question of CO2 sensitivity.

    Or not. Humans are altering the atmosphere, land and oceans in so many poorly understood ways that it may never be possible to sort it all out.

    Or have I misunderstood completely? ;-)

  34. Max
    Posted Jan 21, 2008 at 7:09 AM | Permalink

    My industry was in the thick of it; I just find that turning point hard to swallow, when it didn’t even become economically viable to recover refrigerants until the late 90’s. When I started in the industry in the early 90’s, most people working in the sector didn’t even own a recovery machine, let alone bother to recover the charge; it was standard practice to just blow it off. In 1995 the selling price of R-22 was around $8/lb, a recovery machine was $1,800.00, and labour was $55-60/hour. With a recovery rate of 4 lbs/hour, plus machine clean-up, the economics of job bidding took over from any protocols.
    Like I said, just because it was put into law doesn’t mean it happened as such. I would also add that R-22 is a much more prolific gas, and I think it would be in the upper atmosphere if one considers the temperature of the liquid phases of both refrigerants. As for voluntary reductions, R-12 was still on the shelves for sale in 1995, as was R-12 equipment.

  35. Michael Smith
    Posted Jan 21, 2008 at 11:18 AM | Permalink

    Hansen et al 1988 states:

    (T)he global warming within the next several years is predicted to reach and maintain a level of at least three standard deviations above the climatology of the 1950’s.

    The standard deviation is then shown to be .13 degC.

    We conclude that, on a time scale of a few decades or less, a warming of about .4 degC is required to be significant at the 3 sigma level (99% confidence interval).

    Then they repeat their prediction:

    The model predicts, however, that within the next several years the global temperature will reach and maintain a 3 sigma level (i.e. a .4 degC increase) of global warming, which is obviously significant.

    Two points to note about this.

    1) Assuming “several years” means something on the order of 3 – 10 years, only scenario A meets this prediction. Presumably, scenario B would also meet it had they not included the assumption of a volcano in 1995 in scenario B. However, scenario B is what it is.

    2) As of the end of December, 2007, the GISTEMP anomaly stands at .39 degC.
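    [The 3-sigma arithmetic in the comment above is easy to check directly; the figures are those quoted from Hansen et al 1988 and GISTEMP in this comment, nothing else is assumed.]

    ```python
    # Figures as quoted in the comment above.
    sigma = 0.13             # std deviation of the 1950s climatology, deg C
    threshold = 3 * sigma    # the paper rounds this to "about .4 degC"

    print(f"3-sigma warming threshold: {threshold:.2f} degC")  # 0.39 degC

    # The December 2007 GISTEMP anomaly cited above sits essentially at that level:
    anomaly_dec_2007 = 0.39
    print(abs(anomaly_dec_2007 - threshold) < 0.005)  # → True
    ```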

  36. Posted Jan 21, 2008 at 12:01 PM | Permalink

    It seems that none of Hansen’s scenarios has actually come to pass, which makes it hard to evaluate his 1988 predictions (though they may be suggestive nonetheless). CO2 seems to be close to “B”; CH4 is well below “C,” as is CFC-11.

    Yup, this looks like the best approach to this: reality was somewhere in between.

    Like I said, just because it was put into law, doesn’t mean it happened as such.

    Hm, the red line seems to be “observed values”. Are you saying the measurement is all wrong?

  37. Andrew
    Posted Jan 21, 2008 at 12:34 PM | Permalink

    TAC, ideally we could settle the question. Trouble is, it’s easy at this point to just handwave and say, “Oh, the effect of aerosols was stronger than we thought, so the lack of observed warming still doesn’t conflict with our high climate sensitivities.” Until we can pin down the effect of aerosols precisely, which looks to be quite a feat given the current error bars, this may well be a thorny question for the ages. To be safe, 1.5 is apparently the least offensive number to everyone. So in a way, that’s the “politically correct” sensitivity. Although the real politically correct sensitivity is something like 8 or 12 or something; that’s what Al Gore seems to suggest with his ice cores.

  38. Peter Thompson
    Posted Jan 21, 2008 at 1:02 PM | Permalink

    I have a simple question. I downloaded the monthly RSS temperature anomalies from 1979 to present from the RSS site and compared Dec 1987 to Dec 2007. I used this because we are talking about Hansen in 1988, and it was the closest 20-year spread; that seems long enough to help with the signal vs. noise issue. There are nine data points per month, 8 of which are colder in 2007 than in 1987, by about -.5 C on average. If Hansen’s model over twenty years shows warming in all scenarios, and the satellite data shows cooling, why would anyone point to it as anything other than utterly wrong?

  39. MattN
    Posted Jan 21, 2008 at 2:06 PM | Permalink

    Easy. They just *ignore* satellite data and continue to use Hansen’s “data” to verify Hansen’s projections.

    So easy a caveman could do it…

  40. Sam Urbinto
    Posted Jan 21, 2008 at 4:21 PM | Permalink

    Um, Phil: Are you blaming Steve for operating under the assumption that when Gavin said the difference was due to linear versus exponential, he meant the difference was due to linear versus exponential? And for Steve not instantaneously realizing, without the original data handy, that what really was meant was that the difference was due to including the other trace gases by doubling the CFCs and not noting it on the chart? Or for making an incorrect assumption, learning that, adapting, and then explaining/correcting it?

    What didn’t you understand about what he said in the other post?

    In my post yesterday, I incorrectly surmised that this would not be a substantial effect. When I analysed the data as archived at realclimate, I assumed that the effect of other trace gases would be counted by a separate line item in which they set the impact equal to the combined CFC11 and CFC12 impact. I did not consider the possibility that they would actually insert incorrect CFC11 and CFC12 values into their Schedule A physical inventories, making these values not directly reconcilable to physical measurements. However, this turns out to be what they’ve done. Schmidt snickered at my failing to consider the possibility of them altering their physical inventories to incorrect values. I see now what they’ve done and can follow through their reasoning, but I’d have been inclined to keep the Other Trace Gases as a distinct entry so that people can keep track of it. Climate scientists can be tricky accountants.

  41. Sam Urbinto
    Posted Jan 21, 2008 at 4:42 PM | Permalink

    As far as projections and assumptions and guesses of SWA nature.

    Back-of-the-envelope calculations for any gas (or specific effect, such as soot on ice) can be put anywhere number-wise, depending on what percentage of whatever temperature change there may be is correlated with the substance (or substances). For example, if the anomaly trend is a side-effect of measurements and there actually is no net increase in temperature (as a proxy for energy level), then the answer is 0.

    Logically, one would think urbanization and fuel use would cause a rise in the energy balance and that sulfur particulates would moderate that etc. The question is how to quantify that. And I don’t think it’s possible. With regards to scenarios, there is so much wiggle room to explain away anything, we really get something that is basically whatever it is we want.

    If the anomaly trend is showing an actual net temperature increase, and assuming that is a proxy for energy balance, and Model E is correct that “atmosphere + CO2 = 100% GE” and “atmosphere – CO2 = 91% GE”, you have to figure out if doubling CO2 changes that 9% in the actual system. And then what it changes the percent to. (Ditto for ozone or whatever). Is “yes” the correct answer and “the percent of the current total that an additional 1.5 w/m2 provides” the correct amount? Maybe. Maybe not.

    Or put another way, if CO2 in the system in reality is a cause of the part of the effect that is temperature = energy balance, you would have to figure out what percentage of the effect CO2 is. I’ve ballparked it at 5-20%. 10% is .06C per 100 ppmv. So does 400 more ppmv equal .24C? Is that what you get from 1.5 w/m2? Then again, it could all be urbanization.
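    [Sam’s ballpark above, made explicit; every number here is his stated assumption, not a measurement.]

    ```python
    # All inputs are ballpark assumptions from the comment above, not data.
    deg_per_100_ppmv = 0.06   # assumed warming per 100 ppmv at a ~10% CO2 share, deg C
    delta_ppmv = 400          # hypothetical additional CO2, ppmv

    delta_t = deg_per_100_ppmv * delta_ppmv / 100
    print(round(delta_t, 2))  # → 0.24
    ```

    [Whether 0.24 C for 400 ppmv is anywhere near right depends entirely on the assumed 10% share, which is the open question in the comment.]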

    So I agree, TAC, “Humans are altering the atmosphere, land and oceans in so many poorly understood ways that it may never be possible to sort it all out.”

    But the point is moot. Even if it is a proxy for energy levels in the first place. You can no more tell me what “the temperature” of water is by measuring the surface as an average over time than you can tell me what “the temperature” of the atmosphere is by sampling air 5 feet up every few kilometers and combining daily readings over a month than you can tell “the temperature” of the Earth by combining those two than you can tell me what “the temperature” of the Empire State Building is by getting the mean of every floor’s mean room temperatures.

    This will all sort itself out in a few years. And we’ll get to see how well the scenario predictions really are!

  42. Phil.
    Posted Jan 21, 2008 at 5:52 PM | Permalink

    Re #40

    Um, Phil: Are you blaming Steve for operating under the assumption that when Gavin said the difference was due to linear versus exponential, he meant the difference was due to linear versus exponential? And for Steve not instantaneously realizing, without the original data handy, that what really was meant was that the difference was due to including the other trace gases by doubling the CFCs and not noting it on the chart? Or for making an incorrect assumption, learning that, adapting, and then explaining/correcting it?

    No just for making assumptions instead of just following the original paper which he said he’d read, where all his questions were explicitly answered as I showed above. The difference is between exponential and linear growth as stated in the paper:

    “Scenario A ….. net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates such that the annual increase of the greenhouse forcing remains approximately constant at the present level.”

    What didn’t you understand about what he said in the other post?

    I understood it fine, I just didn’t understand the necessity for it since what was done originally was clearly spelled out in the paper.
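    [The exponential-versus-linear distinction in the quoted passage can be sketched numerically. The starting forcing and growth rate below are made-up illustrative numbers, not values from the paper; the point is only the shape: Scenario A compounds a percentage growth rate, while Scenario B adds a roughly constant annual increment, so the two start with the same slope and then diverge.]

    ```python
    # Illustrative only: f0 and rate are stand-in numbers, not Hansen's.
    f0 = 2.0           # assumed initial net greenhouse forcing, W/m^2
    rate = 0.015       # assumed ~1.5%/yr compound growth, "exponential" scenario
    step = rate * f0   # "linear" scenario: constant annual increment, matched at t=0

    years = range(51)
    scenario_a = [f0 * (1 + rate) ** t for t in years]  # exponential growth
    scenario_b = [f0 + step * t for t in years]         # constant annual increase

    # Same starting point and initial slope, growing divergence thereafter:
    for t in (0, 10, 50):
        print(t, round(scenario_a[t], 3), round(scenario_b[t], 3))
    ```

    [Near the start of the run the two curves are nearly indistinguishable, which is why, over the period up to the present, the OTG handling rather than the exponential/linear distinction dominates the A-versus-B difference.]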

  43. Sam Urbinto
    Posted Jan 21, 2008 at 7:00 PM | Permalink

    Paper, okay, yes, if one is aware of what the paper said first, versus learning what the spoken claims are first. What do you do when faced with two conflicting explanations? I’d take what was said, especially since the graph gives no indication that it covers CFC11+ or CFC12+. You’re criticising somebody for doing what they did versus how you think they should have done it. At least Steve rectified it, and pretty quickly, unlike what others we could speak about often do: ignore the error and keep making it.

    Ah, well, to each his (or her) own.

  44. Kenneth Fritsch
    Posted Jan 21, 2008 at 7:08 PM | Permalink

    Speaking of trends and extrapolations do climate scientists ever comment on the extrapolated trend noted in the graph below? The data was taken from Steve M’s link here to NOAA’s Greenhouse Gas Index for the years 1979-2006:

    http://www.esrl.noaa.gov/gmd/aggi

  45. Phil.
    Posted Jan 21, 2008 at 7:45 PM | Permalink

    Re #43

    Paper, okay, yes, if one is aware of what the paper said first, versus learning what the spoken claims are first. What do you do when faced with two conflicting explanations? I’d take what was said, especially since the graph gives no indication that it covers CFC11+ or CFC12+. You’re criticising somebody for doing what they did versus how you think they should have done it. At least Steve rectified it, and pretty quickly, unlike what others we could speak about often do: ignore the error and keep making it.

    First of all, as I said above, there is no conflict: A is exponential, B is linear. In answer to my original comments, Steve McI said that he was aware of the paper (in fact, in his original posting he had quoted from it), and Steve was quite indignant at my suggestion that he had missed the statements in the Hansen 88 paper which explicitly laid out the answers to all his questions (#22): “Why do you say that I missed these statements? I saw those statements.”
    So from that the inescapable conclusion is that Steve had read the paper which contained the answers to his original questions; had he just followed the paper rather than making assumptions, it would all have been straightforward. As it was, he became confused and made a mistake; unfortunately he’d already committed his findings to the blog and had to backtrack. Steve has left it at that, so why do you feel it’s necessary to jump up in his defence and suggest that I’m unfairly maligning him when apparently he doesn’t feel that it’s necessary?

    Ah, well, to each his (or her) own.

  46. Steve McIntyre
    Posted Jan 21, 2008 at 8:59 PM | Permalink

    #45. Because people want to discuss things other than proxies, I’m trying to work through areas that I’m not intimately familiar with. It’s impossible to decode Team articles without making occasional mis-steps in interpretation; that’s a reason for source code: to guide one through interpretations. In this case, I wasn’t wrongfooted all that long and became aware of the proper interpretation within a day or so, but because I’m working through things in real time, this creates opportunities for certain people to get all excited about my getting wrongfooted now and then in trying to decode Team methods.

    In cases where I get wrongfooted, while the Team may take that as evidence of singular stupidity, it usually is because of something opaque in the description – even if with hindsight, when everyone knows how the method actually works, the description sort of makes sense.

    The problem with the Hansen description is not that it doesn’t say that Scenario A doubled the CFC11 and CFC12 for minor CFCs, but that it doesn’t set out clearly that this is the most important aspect of the differences between Scenarios A and B. Reading through it the first time, it seemed like an incidental point, just mopping up a minor and insignificant detail with a somewhat arbitrary assumption that was convenient but not material. In fact, as I’ve already observed, it’s the major contribution to the Scenario A and B differences. I’ll show this in a graph in another post. I don’t think that anyone could read Hansen et al 1988 straight through, without the benefit of the commentary here, and emerge with the understanding that the handling of incidental CFCs was the difference between the Business-as-Usual and “most plausible” scenarios. The scenarios look like they’re about something else; that’s all.

  47. S. Hales
    Posted Jan 21, 2008 at 9:14 PM | Permalink

    #44 Kenneth: That is in line with declining growth in energy usage in the industrialized world. It cuts across all sectors, transport to electricity. Declining acceleration in demand growth is one reason why replacing CO2-rich fuels with green fuels or sources will be more and more difficult.

  48. Phil.
    Posted Jan 21, 2008 at 9:49 PM | Permalink

    Re #46

    Actually I find it easier to read the paper than source code, but then I have a background in Physical Chemistry, and I sympathize with your problems when working out of your comfort zone.
    As I recall, your original dilemma was that you didn’t realise that the OTGs were responsible for the difference between A & B in the ‘neartime’ (I believe that was your term). However, a careful reading of the paper (pp 9361-2) shows that it was the only difference at the time of the paper. Of course, subsequently it should become less important because of the reduced growth in CO2, CH4, N2O and Freons relative to A. Perhaps if those differences had been put in a table it might have been clearer; that’s what I did when reading it.

    Steve: “Of course subsequently it should become less important ..” You’d expect that, wouldn’t you? Wait and see.

  49. Kenneth Fritsch
    Posted Jan 22, 2008 at 9:03 AM | Permalink

    Phil., I think we have all gained insights into the Hansen scenarios from these discussions (I have personally found the stated and unstated uncertainties in the construction of these scenarios most revealing) and in the end we get a better picture of what exactly was done in drawing up the scenarios.

    I judge that for those of us who are here to learn, the personality part of the debate, or who said what and when, becomes a distraction from the real issues of climate science and an objective analysis of them.

  50. TAC
    Posted Jan 23, 2008 at 6:28 AM | Permalink

    In re-reading this thread, a thought crosses my mind. When Phil. (#48) writes that

    …if those differences had been put in a table it might have been clearer…

    he touches on a huge issue that, IMHO, has been retarding “auditors’” (e.g. SteveM’s) efforts, and climate science generally, for years (if not decades): data and methods are too often poorly documented, withheld, or presented in a way that is at best ambiguous and at worst uninterpretable. (If you want examples, this blog is full of them [yes, I should produce a list; but I am lazy, so let's "move on" ;-)].)

    Why does this happen? I really don’t know, but I suspect it, too, is just laziness rather than malice. Researchers prefer doing research and presenting lectures to documenting boring details about data and methods, and they have found they can get away with it. Perhaps this is true in all scientific fields, but I am not aware of it elsewhere to the same degree.

    It is, of course, likely that there is “sharing” among an inner circle of colleagues; but, if so, why not include everyone in the conversation? Doing so would help inspire confidence among the broader community, and — you never know — it might help advance the field. What we keep seeing here on CA is that “turning over the stones” reveals many surprises. Surprises lead to progress.

    Should we expect that all science should be instantly reproducible? Of course not. But — and I’m making comparisons with other disciplines, ones that seem to be making progress — it should be sufficiently easy to reproduce cutting-edge work that smart graduate students, under the guidance of advisors, routinely choose to do so. It is through this process that science advances: By rigorously repeating another researcher’s work, students ponder each assumption and decision and develop a deep sense of understanding; they come to appreciate the generality, fragility, dependencies, and applicability of each result (as well as uncovering errors, which — as every researcher knows — are ubiquitous in real research). And, magically, science progresses.

    My real concern about all types of “secrecy” — loosely defined as failure to provide clear and readily available documentation of all algorithms and data — is that it enables laziness and sloppiness and devalues progress and rigor. Is this a cultural failing of climate science? I don’t know. However, I am surprised by the slow rate of progress in climate science. Papers from 30 years ago often seem wiser and more informed than those appearing today. AFAICT, confidence intervals on GCM output have not improved at all. It seems that scientists in past decades — Hansen, for example — were more courageous about making testable predictions than are scientists today. For whatever reason, we now seem to get vague predictions more reminiscent of fortune telling than science.

    Of course, it could be that I am completely wrong. I hope I am. The reason I worry is that on those rare occasions where debate has focused on topics I know something about — when “the curtain” has been pulled back — I have been appalled by the ignorance of the supposed experts.
    ;-)

  51. Tom Still
    Posted Feb 1, 2008 at 8:09 PM | Permalink

    Steve:

    Lay people like me read your blog and the comments. Please define the initialisms, e.g. GHG (greenhouse gases), and please write a one- or two-sentence summary or conclusion that describes the meaning of the post in lay terms.

    Dr. Thomas Still

    PS Many of Benny’s contributors do not provide these aids to understanding, either.

  52. Susann
    Posted Feb 1, 2008 at 8:32 PM | Permalink

    My real concern about all types of “secrecy” — loosely defined as failure to provide clear and readily available documentation of all algorithms and data — is that it enables laziness and sloppiness and devalues progress and rigor. Is this a cultural failing of climate science? I don’t know.

    I think it may be a function of how well you are trained in the use of data. I made a data request yesterday to one of our database people for a policy paper I am writing, and she promptly provided me with the finished tables, the pivot tables, and the raw data, plus documentation. It can be done, but I suspect it depends on the quality of the education you get and your supervision as a student learning how to do data analysis. Perhaps people who train primarily as data analysts are better trained in this respect than those in other sciences who merely use the data to analyse something else. In other words, for the latter the data is not the end, it is the means. When the data is the end for a person (aka a data analyst), perhaps they take more care with its production and documentation than those for whom it is merely a means to another end. YMMV.

One Trackback

  1. […] discussions in posts http://www.climateaudit.org/?p=2630 and […]
