Publishing NASA Data at Realclimate

In 2006, Willis Eschenbach digitized Hansen A, B and C scenarios to 2005 (see here). I used these values together with my own visual digitization extension to 2010 in my recent post. Subsequently Lucia drew my attention to a digital version of the NASA data placed online at realclimate here, referenced in a realclimate post here.

I carried out a routine comparison of the two versions. The realclimate version of Hansen Scenario C was 0.166 deg C warmer in 1994 than Willis’ digitized version. In reviewing the data, I noticed that realclimate Scenario C was higher than realclimate Scenario B at that point, so an error had apparently been introduced somewhere in the process.

Which raises the question: how did this error get introduced into the NASA data published digitally for the first time at realclimate? Did it get introduced in digital copying? Or did NASA itself digitize the Hansen scenarios from print media and introduce the error then? In fact, exactly what is the provenance of the digital version presently archived at realclimate? Gavin did not say in his post. Did Gavin digitize the print media? Did someone else digitize it? Or is it original digital output? [Update: these matters have been resolved]

Steve: For some reason, the 1994 value of Scenario C is higher than the 1994 value of Scenario B in the original graphic, notwithstanding the lower forcing of C versus B. It appears that Willis’ digitization is from a later and muddier version of the graphic and is incorrect at a few points, though the average difference is not material. This leaves the question of why C is higher than B in 1994.

Update Jan 21: Willis digitized an image from 1999. Gavin said in an RC comment on Dec 22, 2007 (but not in the note itself) that the scenario data was digitized (presumably from the Hansen et al 1988 graphic). However, as far as I can presently determine, the other data sets (e.g. radiative forcing in W/m2), also archived by Gavin, do not correspond to a Hansen et al 1988 graphic and therefore could not have been digitized from a Hansen et al 1988 figure. Maybe it’s a digitization from another publication or maybe it was calculated; hard to say at present. I’ll email Schmidt and ask him.

102 Comments

  1. Roger Pielke. Jr.
    Posted Jan 18, 2008 at 8:12 AM | Permalink

    How about we stick to the analyses? Posts on bad behavior at RC could go on forever and are of dubious relevance.

    How about those forcings?

    Steve: I deleted a comment about NASA. However the provenance issue is a real one: I pay attention to this sort of error because transcription errors like this are helpful in determining provenance. Who prepared the RC version and when? Is it contemporary or recent?

    I’m working on reconciling the GHG concentrations as we speak.

  2. Joe Black
    Posted Jan 18, 2008 at 8:46 AM | Permalink

    “reconciling the GHG concentrations”

    Including HOH?

    Is HOH a GHG or not? A significant GHG?

    Definitions are important.

  3. Daryl
    Posted Jan 18, 2008 at 9:00 AM | Permalink

    While I respect Roger Pielke Jr. and all his work, I think Steve is correct to question the provenance of any data posted by a Climatologist on RealClimate or anywhere else. There is no distinction between “peer reviewed” published data and personal blog postings containing the same data when this data is supported by the credentials of a scientist and is directly related to the person’s field of endeavour.

    Considering the way that RealClimate “grades” other web postings, research and articles, should not their own be subject to the same scrutiny?

    While I agree that commenting on their attitude or bad behaviour is not relevant, the provenance of data is entirely relevant due to the fact that journalists and laypeople visit this site for accurate information.

  4. Mike B
    Posted Jan 18, 2008 at 9:17 AM | Permalink

    Steve-

    I just did a comparison of the two versions, and found no significant differences.

    Scenario A avg abs difference 0.00817
    Scenario B avg abs difference 0.01246
    Scenario C avg abs difference 0.01493

    I did find a .166 deg C difference between the two versions of Scenario C in 1994 only.
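
    A comparison like this is quick to script. A minimal R sketch, assuming the two digitizations have already been read into data frames named willis and rc, each with columns year, A, B, C (hypothetical names; the actual files may be laid out differently):

    # Average and maximum absolute difference between two digitized versions.
    # Assumes 'willis' and 'rc' cover the same years with columns year, A, B, C.
    stopifnot(all(willis$year == rc$year))
    for (s in c("A", "B", "C")) {
      d <- rc[[s]] - willis[[s]]
      cat(sprintf("Scenario %s: avg abs diff %.5f, max abs diff %.3f in %g\n",
                  s, mean(abs(d)), max(abs(d)), willis$year[which.max(abs(d))]))
    }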

  5. Mike B
    Posted Jan 18, 2008 at 9:35 AM | Permalink

    Also, FWIW, it looks to me like the RealClimate version of 1994 Scenario C matches the graphics better than Willis’s version.

  6. Posted Jan 18, 2008 at 10:37 AM | Permalink

    @SteveMcIntyre:

    Gavin appears to have posted those sometime between Saturday, Dec. 22 when I asked for information and Monday Dec. 24 at 9:59 pm. when I thanked him. Given the timing, I’m not surprised he didn’t give a full provenance in the inline response to my comment.

    I exchanged a few emails with him earlier this week because I wanted more details on forcings. I sort of wondered if he’d digitized these himself. He told me he didn’t. He’d been given that data.

    So, it appears someone else digitized this. My guess is Gavin read my email on the weekend, emailed people who might happen to have a graph, someone emailed it back, and he posted it. I didn’t really expect him to check it, and it saved me time digitizing.

  7. Steve McIntyre
    Posted Jan 18, 2008 at 10:50 AM | Permalink

    #6. Lucia, I’m not suggesting that Gavin should have checked the data. It’s good that he posted it up; I’m working through some of the data sets now and they definitely clarify some puzzling features of Hansen et al 1988 – I’ll post on this later today.

  8. Gunnar
    Posted Jan 18, 2008 at 10:51 AM | Permalink

    While I respect Roger Pielke Jr. and all his work, I think Steve is correct to focus solely on the mistakes of Hansen and whether RC is being funded by NASA or not. I can think of no other issue more important than this .166 deg C error in a projection by Hansen from 20 years ago.

    I wholeheartedly join with Steve M and Daryl (and I’m sure bender and Larry) in demanding to know who prepared the RC version of a 20 year old AGW scenario and when? And while we’re at it, what does Al Gore know about the Thompson thermometer, and when did he know it?

  9. john
    Posted Jan 18, 2008 at 11:53 AM | Permalink

    Steve,

    Completely off topic, and I am not a scientist.

    Hardly scientific, but a quick look at the lowest recorded temperatures in the United States indicates that 24 of 51 record low temperatures were recorded after 1950 (this is when Global Warming Alarmists feel that industrialization began to impact climate). 16 of 24 of the record low temperatures were logged after 1975. Following the Alarmist logic (or illogic), wouldn’t the records indicate that record lows would be more prevalent in the period before industrialization; in other words, record low temperatures would become a thing of the past? Looking at these temperature records for a 100 year period, the record low temperatures seem to be evenly spread across the century. Wouldn’t a higher percentage of record low temperatures be recorded before industrialization if the Global Alarmist theory were correct?

    Conversely, of the highest recorded temperatures in the United States over roughly 100 years, only 11 of 51 have occurred after 1950 (21%), and only 9 of 51 after 1975 (17%). If increases in Global Temperature were influenced by industrialization, wouldn’t there be a higher percentage of record high temperatures recorded in the latter part of the 20th century?

  10. Ian McLeod
    Posted Jan 18, 2008 at 11:53 AM | Permalink

    Dumb question,

    How does one go about digitizing the data? Do you simply read it off the graph and plug it into Excel? Surely not, anyone?

  11. Mike B
    Posted Jan 18, 2008 at 12:11 PM | Permalink

    How does one go about digitizing the data? Do you simply read it off the graph and plug it into Excel? Surely not, anyone?

    Not far off. With the correct software, point and click on the end points of the vertical and horizontal scales, entering the end point values when prompted by software. Then point and click on the individual data points.

    Old technology actually.
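
    The arithmetic behind those calibration clicks is just a linear map from pixel coordinates to data coordinates. A minimal R sketch of that mapping, with made-up pixel positions and axis values (not taken from any actual figure):

    # Map clicked pixel coordinates to data coordinates via the two axis calibration clicks.
    # All numbers below are invented for illustration.
    px0 <- 62;  px1 <- 510    # pixel x of the two calibration points on the x-axis
    x0  <- 1960; x1 <- 2020   # their data values (years)
    py0 <- 430; py1 <- 55     # pixel y of the two calibration points (screen y grows downward)
    y0  <- 0.0; y1 <- 1.5     # their data values (deg C)

    pixel_to_data <- function(px, py)
      data.frame(year = x0 + (px - px0) * (x1 - x0) / (px1 - px0),
                 anom = y0 + (py - py0) * (y1 - y0) / (py1 - py0))

    pixel_to_data(300, 250)   # a clicked point maps to roughly year 1992, 0.72 deg C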

  12. Larry
    Posted Jan 18, 2008 at 12:13 PM | Permalink

    11, there are digitizer devices. They look like mice, but they have crosshairs, and work with a pad.

  13. Steve McIntyre
    Posted Jan 18, 2008 at 12:14 PM | Permalink

    #11. For pdf documents – not scanned ones like Hansen et al 1988 though – Hans Erren has a way of pulling data (up to scaling) directly from pdf commands.

  14. Alan Bates
    Posted Jan 18, 2008 at 12:19 PM | Permalink

    Re 11

    Sorry to continue a topic off the main thrust but what software? Is it available free? (I am retired from gainful employment)

    Alan

  15. Mike B
    Posted Jan 18, 2008 at 12:19 PM | Permalink

    11, there are digitizer devices. They look like mice, but they have crosshairs, and work with a pad.

    I used a device exactly as you describe in 1981. We used to “digitize our velocities.”

  16. S. Hales
    Posted Jan 18, 2008 at 12:28 PM | Permalink

    To digitize scanned graphs you can use this

  17. Mike B
    Posted Jan 18, 2008 at 12:28 PM | Permalink

    #14 try googling “graph digitizing software”

  18. j bono
    Posted Jan 18, 2008 at 2:05 PM | Permalink

    I use Engauge Digitizer. It is free and works well for graphs and maps. Go to the link.

    http://digitizer.sourceforge.net/

  19. Phil.
    Posted Jan 18, 2008 at 2:18 PM | Permalink

    Re #11

    Not far off. With the correct software, point and click on the end points of the vertical and horizontal scales, entering the end point values when prompted by software. Then point and click on the individual data points.

    Old technology actually.

    Yes, I wrote an application to do that using a Commodore computer and graph plotter in my lab in ~1980; it worked quite well.

  20. Pat Keating
    Posted Jan 18, 2008 at 3:06 PM | Permalink

    9 John
    Interesting post. If the information is correct, it suggests to me that cloud cover has decreased, cooling the nights and warming the days.

  21. Posted Jan 18, 2008 at 3:11 PM | Permalink

    @SteveMcIntyre– I didn’t think you thought Gavin should check it.

    I agree with you that it’s best to know the provenance of data. Because I know more about the series of events leading up to this being posted, I wanted to say what I knew. I just think Gavin posted it as a favor to a nearly anonymous commenter at RC, and did it rather quickly.

    We could ask Gavin for details directly. He’s really the only person who would know for sure who gave him that file.

  22. Ian McLeod
    Posted Jan 18, 2008 at 3:31 PM | Permalink

    Quote is from FAQ page http://digitizer.sourceforge.net/index.php?c=4

    How accurate are the numbers produced by Engauge Digitizer?

    Assuming there are no significant distortions, the numeric resolution is only as good as the pixel resolution of the original image. The resolutions in the horizontal and vertical directions are displayed in the bottom right corner of the status bar once the coordinates are defined.

    The reason I asked the question in #10 (how is the data digitized?) is that I wondered what kind of error is associated with the process. It looks like there is error, but it’s not quantified. If Willis used XYZ digitizer and Gavin used ZYX digitizer (assuming he was not privy to the original Hansen data), what kind of error propagates through the results when drawing comparisons? If the mean is off, this will prejudice the variance and subsequent slope. Or am I splitting hairs?
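
    One way to get a feel for the size of that error is a quick simulation: perturb a known trend by pixel-scale noise and see how much the fitted slope moves. A minimal R sketch, with an assumed resolution of about 0.02 deg C per pixel (an arbitrary illustrative value, not Willis’s or Gavin’s actual resolution):

    # Rough simulation of digitizing error: add pixel-scale noise to a known
    # linear trend and look at the spread of the recovered slope.
    set.seed(1)
    year <- 1958:2019
    truth <- 0.02 * (year - 1958)        # a known trend of 0.02 deg C per year
    res <- 0.02                          # assumed ~1-pixel resolution in deg C
    slopes <- replicate(1000, {
      digitized <- truth + runif(length(year), -res/2, res/2)
      coef(lm(digitized ~ year))[2]
    })
    c(true = 0.02, mean = mean(slopes), sd = sd(slopes))

    With sixty-odd points, noise at that assumed resolution moves the fitted slope by only a few hundred-thousandths of a degree per year, so isolated misread points (like the disputed 1994 value) matter far more than the pixel resolution itself.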

  23. Geoff Sherrington
    Posted Jan 18, 2008 at 5:47 PM | Permalink

    One could write a small book, just a small one, about digitising graphs and the errors therein. Sensitive to software. Willis, if you want to cross-check your method, my email is sherro1@optusnet.com.au All I need is the raw graph at the same pixel resolution you used and a s/sheet of the results obtained. I’ll do it in my “free time”.

  24. Pat Keating
    Posted Jan 18, 2008 at 7:23 PM | Permalink

    16, 18 Hales Bono
    Is there anything that digitizes a graph in .png format?

  25. Steve McIntyre
    Posted Jan 18, 2008 at 8:15 PM | Permalink

    Based on Gavin’s comments at Lambert’s, it appears that Willis’ digitization is incorrect. In the original graphic, the 1994 value of Scenario C is higher than the 1994 value of Scenario B, notwithstanding the lower forcing of C versus B. Why the values reverse is a puzzle – I presume that they’ll attribute it to randomness in the model outputs.

    In a post at Lambert’s, Gavin says that the temperature outputs were digitized:

    I did check the digitised values I posted.

    However, he does not state the provenance of the GHG concentrations.

  26. Posted Jan 18, 2008 at 8:41 PM | Permalink

    Ok– So, it appears he did check it.
    Where are the comments for Lambert so we can read it?

    Honestly, in this situation, I think asking Gavin directly would be the right thing to do. He posted this quickly as a favor to me, a complete stranger. (And one who isn’t exactly consistently nice to him, if only because when I disagree, I say so, and, even I know, I’m not the most gracious person on earth).

    I don’t always agree with Gavin’s posts, but come on. I’m feeling guilty that a lot of this reads as if he’s done something bad in volunteering the best data he had!

  27. Jan Pompe
    Posted Jan 18, 2008 at 8:44 PM | Permalink

    Pat

    Is there anything that digitizes a graph in .png format?

    Engauge will; dunno about GetData – I can’t run that on my Linux box without some serious jigging and poking.

    There is a range of stuff out there like g3data
    and DigitizeIt
    and the Java-based GrabIt

    The last two are shareware.

    Mileage might vary.

  28. bender
    Posted Jan 18, 2008 at 9:13 PM | Permalink

    I’m not the most gracious person on earth.

    Who told you that? Don’t believe him.

    I don’t always agree with Gavin’s posts, but come one. I’m feeling guilty that the way lots of this reads is as if he’s done something bad in volunteering the best data he had!

    Kudos to Gavin. He’s proven himself twice now to be a good auditor when he sets his mind to it. Guilt relieved.

    Now, about those pseudo-confidence intervals on his GCM pseudo-ensembles

  29. Posted Jan 18, 2008 at 9:42 PM | Permalink

    Un-Scan-It works on printed graphs pretty well, although it (and anything) will have trouble if there is more than a single line on a graph.

    You can in principle use any drawing program like Coral Draw. Import the graph as a drawing file, rotate so that the axes are parallel to the horizontal and vertical (that turns out to be the hardest part). Set the zero at the origin, scale the x and y axes appropriately and then move the cursor to the points you want to digitize, recording the x-y coordinates.

    Better than using a vernier caliper, but [snip – this language triggered Spam Karma and I’m not overriding ]

  30. Posted Jan 18, 2008 at 9:44 PM | Permalink

    @bender–
    I’ve actually met some truly gracious people.

    When I worked at PNNL, there was a co-worker who was very nice (but not in that icky way that makes you hate a person because they are really a mealy-mouthed pseudo-nice-guy idiot). When I reviewed poor manuscripts sent to me by editors of journals, I would sometimes ask Janet to help me revise my review to try to take the edge off the prose while still saying the paper was… (well… hmm… how might Janet put this?) ‘not sufficiently interesting to permit me to recommend it for publication?’

    I’m afraid I have a tendency to say: “This is drivel!” Blogging and comments at blogs can bring that out.

    On the pseudo-confidence intervals: Yes. I agree they are pseudo.

    That said, I’m not entirely sure how he can fix that, and, though this probably revolts you, the statistician, I’m happier with pseudo-confidence intervals than nothing. They give you a point to begin discussion of the issue.

    (Hey, I’m an engineer. I’ve seen pseudo-standard techniques for SWAGed-confidence intervals! SWAG== “Scientific Wild A** Guess” )

  31. _Jim
    Posted Jan 18, 2008 at 9:52 PM | Permalink

    11, there are digitizer devices. They look like mice, but they have crosshairs, and work with a pad.

    For anyone who has trained in/worked with AutoCad et al in the preceding couple of decades, digitizing pads were all the rage in converting existing paper drawings into dxf’s …

  32. welikerocks
    Posted Jan 19, 2008 at 8:39 AM | Permalink

    Speaking of gracious people and NASA… my email exchanges with NASA regarding RealClimate.org being linked on the NASA AGW/GW information pages turned quite positive. As of the 9th of Jan. they indicated that they would look into my concerns as time permits. Hopefully they are reading these threads and also reading RC too. I provided links to here and other web blogs, stating that if a private, non-NASA (perhaps even PR-company-owned) internet blog was going to be suggested reading on an official NASA information site (they claimed to me there is no official relationship whatsoever between RC and NASA), then it would be advantageous to all concerned for blogs of all points of view to be suggested, not just RC’s, or for no blogs to be linked at all. I will let you know if I get any conclusion or email as to a solution to what I think are very valid concerns. Cheers!

  33. frost
    Posted Jan 19, 2008 at 9:44 AM | Permalink

    re Pat jan 17 7:28PM:

    If your digitizing software does not recognize .png files simply convert the image to a format
    that the software does recognize. To do the conversion you can use

  34. frost
    Posted Jan 19, 2008 at 9:46 AM | Permalink

    URL didn’t show. Here’s another attempt:

    irfanview

  35. Posted Jan 19, 2008 at 10:05 AM | Permalink

    Eli Rabett January 18th, 2008 at 9:42 pm,

    Plain old Paint works well too. I have used it to get numbers off graphs for nuclear cross sections. Fortunately all I had to do was scale. I can see where rotations would be a pain.

  36. Smokey
    Posted Jan 19, 2008 at 11:39 AM | Permalink

    Steve Mc said:

    …an error has been introduced somewhere in the process.

    Errors are to be expected. But why do NASA errors always seem to go in the direction supporting climate alarmism – and never the other way? If the errors were random, there would tend to be a 50/50 split.

  37. Alan Bates
    Posted Jan 19, 2008 at 2:02 PM | Permalink

    Re: several!

    Thank you for the kind replies re: digitising

  38. Willis Eschenbach
    Posted Jan 20, 2008 at 3:43 PM | Permalink

    Well, as I’m the one who did the digitizing, I should comment.

    There are two larger differences between my version and Gavin’s, both in Scenario C:

    1) 1994 — My value here, 0.305, Gavin’s, 0.471. I’m happy to admit I could be wrong here, the graph is not clear at that point. However, I see no reason to accept Gavin’s value, as it is quite a bit higher than Scenario B, and there is absolutely no indication in the graph of that.

    2) 1981 — My value 0.105, Gavin’s, 0.147. On this one, Gavin is definitely wrong, as he has it again higher than Scenario B. This is clearly contradicted by Hansen’s graph, which shows C as lower than B in 1981.

    Finally, Gavin has 2003 Scenario C as being smaller than 2002, which also does not agree with the graphic.

    Overall? I’d say my digitization and Gavin’s are much closer than, say, the GISS and HadCRUT temperature records. Even including the two major differences, the standard deviation of the difference between my digitization and Gavin’s is only about two hundredths of a degree, and without them, it’s about one hundredth of a degree …

    Best to all,

    w.

    Steve: Willis, it looks like you digitized a later version of the image. Here’s a blown-up version of the image in question excerpted from Hansen et al 1988 Figure 3a, rather than a later rendering. In this figure, Gavin’s value for Scenario C in 1994 looks correct, while yours doesn’t.

  39. Geoff Sherrington
    Posted Jan 20, 2008 at 6:36 PM | Permalink

    Re # 29 Eli Rabett

    Since we are talking about accuracy, whereas you wrote

    You can in principle use any drawing program like Coral Draw

    The correct spelling is CorelDRAW . I’ve used it since about 1994.

    Various algorithms have been used for rotation over the years. Some ask if you want the rotation to maintain original size or to fit the page, some ask if you wish to anti-alias bitmaps. Possible sources of error.

    The obvious solution is to obtain the data in pre-graph form from the author. It’s a cooperation sort of thing.

    Willis # 38, I’m glad you are tracking this down but lament the time it takes for diversions.

  40. Posted Jan 20, 2008 at 8:23 PM | Permalink

    Folks

    Looks like Google is going to provide a free database for scientific knowledge. Maybe we can get Hansen’s real data now!

    http://blog.wired.com/wiredscience/2008/01/google-to-provi.html

  41. Posted Jan 20, 2008 at 8:48 PM | Permalink

    @Geoff Sherrington

    I suspect the data used to draw the original graph no longer exists.

    I was aware Willis had digitized data when I asked Gavin for data. My thought was: Maybe Gavin has access to the original. He provided digitized data, and said it was digitized. He also provided forcings, which I hadn’t asked for, but which turned out to be handy to have.

    Given how willingly he posted the data for me (on Christmas Eve as it happens), I suspect he would have published the original data if that had been available to him.


    Steve:
    At realclimate, there’s no statement or indication that the scenario data was digitized. Also there is another data file there which shows GHG concentrations for the three scenarios. There’s no corresponding graphic for this in Hansen et al 1988. Is it digitized from another publication? It’s nice that he provided data for you, but I’ve obviously had a different experience in trying to get data from the Team (and this happened well before CA made this an issue).

  42. Curtis
    Posted Jan 20, 2008 at 10:24 PM | Permalink

    I would like to agree that this should entirely be about science. However, when the other side of the argument says things like “the science is settled, it’s now time for action” when the science isn’t settled and the actions (if any) aren’t clear… This is especially troubling when the science in question seems to be beyond examination… It’s like **bang** “here is my brilliant analysis of the state of the world. And no, you may not examine my data, nor methodology – just be humble in the presence of my brilliance”…

    I know nobody has actually said that – but from some of their actions you have to wonder if they’re not thinking it…

  43. Willis Eschenbach
    Posted Jan 21, 2008 at 2:44 AM | Permalink

    Steve M., you say:

    Willis, it looks like you digitized a later version of the image. Here’s a blown-up version of the image in question excerpted from Hansen et al 1988 Figure 3a, rather than a later rendering. In this figure, Gavin’s value for Scenario C in 1994 looks correct, while yours doesn’t.

    You are correct, and that’s the reason for the difference between Gavin’s figures and mine. Mine was from the 1999 Hansen paper entitled “The Global Warming Debate”, which I located in the Wayback Machine here. The web page has since been removed.

    w.

  44. TAC
    Posted Jan 21, 2008 at 7:56 AM | Permalink

    Willis,

    You are correct, and that’s the reason for the difference between Gavin’s figures and mine.

    Thanks! It is good to have at least one of the apparent discrepancies explained and resolved.

    Also, I enjoyed reading the Hansen piece that you cite.

  45. Posted Jan 21, 2008 at 8:13 AM | Permalink

    @Steve–
    Actually, there is an indication the data are digitized. The difficulty is that the information is located where only I – the one who requested the data – was likely to find it.

    My request

    # lucia Says:
    22 December 2007 at 9:02 AM

    Gavin– Are the model temperature predictions for scenario’s A, B & C in Hansen et al. 1988 tabulated anywhere? I’d like to run some numbers, but digitizing would introduce the equivalent of “measurement” uncertainty in knowing the model numbers, so I’d rather find a place where they are tabulated. I have the actual GISS temperature data, but not the model values.

    (Sorry for the OT question; I wanted to ask in your may article, but the comments are closed.)

    [Response: I’ve added links to the data from the original post. Note that model data were digitised from the original figures in any case. – gavin]

    The original post was written in May 2007.

    The net effect is some lack of clarity for readers like you, who are looking at it now. Possibly, Gavin could add a note to the May 2007 post stating the ABC data are digitized. I’ll try to make sure I always mention the data were digitized (and go back and make sure I mention they are digitized in past posts). Likely, I’ve been sloppy in referencing and emphasizing this fact at my blog, and that has contributed to this confusion.

  46. Posted Jan 21, 2008 at 8:15 AM | Permalink

    Shoot– I broke the link to my request.
    my request for data in December book review post.

  47. Posted Jan 21, 2008 at 8:59 AM | Permalink

    Just to continue to beat on my favorite dead horse. All this would not have been necessary if qualified Quality Assurance procedures were in place so that independent Verification of calculations/applications could be conducted.

  48. Steve McIntyre
    Posted Jan 21, 2008 at 9:33 AM | Permalink

    #45. Lucia, Gavin says: “Note that model data were digitised from the original figures in any case.”

    We can identify the temperature data as coming from a digitization of Hansen et al 1988 Figure 3a, but can you identify the figure from which the data set for effective forcings
    http://www.realclimate.org/data/H88_scenarios_eff.dat was digitized? I don’t see anything in Hansen et al 1988 that corresponds.

  49. Posted Jan 21, 2008 at 10:21 AM | Permalink

    Steve M.
    I don’t think the forcings are digitized. I’d always taken the “digitization” statement to apply specifically to the data I asked for – that is, the temperature anomalies.

    He proactively provided the forcings, and until Jan. 16, I didn’t worry too much more about those. But, after fiddling a bit, I wanted more detail on forcings. I looked up Gavin’s email and on Jan. 16, I asked him for more information on forcings, and he replied:

    Me: I’m writing hoping you can point me to more forcing
    data. (Example this table: http://www.realclimate.org/data/forcings_obs1880-2003.txt but monthly?)

    Gavin: Forcing data is derived from:
    http://data.giss.nasa.gov/modelforce/

    That page includes a link to a table for the actual forcings:
    http://data.giss.nasa.gov/modelforce/RadF.txt

    You can also find:
    http://data.giss.nasa.gov/modelforce/ghgases/

    I haven’t found any additional information providing precise numerical values for forcings — including the fictional volcanoes– actually used by Hansen 1988 ABC. So, I don’t know the details of precisely where or how Gavin created that.

    Gavin is probably the best source. I’ve asked at RC. He hasn’t answered yet. I could only speculate the reasons, but quite possibly, those numbers are simply not easily accessible to him.

  50. Willis Eschenbach
    Posted Jan 21, 2008 at 9:09 PM | Permalink

    Well, since we have the forcings used in the Hansen 1988 model, and the temperature projections from that model, I thought it would be interesting to compare the two, viz:

    A couple of things of note here:

    1) The temperatures are on average lower vis-a-vis the forcings in the earlier part of the record (prior to 2000), and higher in the latter part of the record. Don’t know why.

    2) The calculated climate sensitivity to give the best fit between the forcing and the response is 0.38°C/W-m2. This is significantly smaller than the current IPCC estimate, which is 2.5° to 4.5°C for a doubling. The sensitivity given by Hansen figures out to be 3.7 * 0.38 = 1.4°C for a doubling of CO2, a bit more than half of the IPCC’s smallest estimate.

    3) Obviously, there are some other forcings at play here, at least volcanic, and perhaps solar.

    Conclusions? Well, temperatures seem to have come in slightly less than Hansen modeled. However, a much larger issue is the sensitivity. If an (approximately) right answer occurs with a climate sensitivity of 0.4, this means that the IPCC mid-range forecast temperature changes are overestimated by 100% or more … shocking, I know …

    Best to everyone,

    w.

    Steve: these are pre-feedback numbers and equate to the (say) 1.2 deg C direct impact of doubled CO2 that one often sees prior to water vapor feedback, etc.

  51. Posted Jan 21, 2008 at 11:56 PM | Permalink

    3) Obviously, there are some other forcings at play here, at least volcanic, and perhaps solar.

    And/or PRNG ?

    See RomanM’s comment

    Where do the “wiggles” in the 1988 projections come from?

  52. Steve McIntyre
    Posted Jan 22, 2008 at 12:04 AM | Permalink

    I got different fits when I did the regressions. Here’s my script:

    ##LOAD RC SCENARIO VERSIONS
    url="http://www.realclimate.org/data/scen_ABC_temp.data"
    scenario=read.table(url,skip=3,header=TRUE) #1958 2019
    #G Schmidt said that he digitized this from Hansen et al 1988 in post at Deltoid

    #LOAD RC EFFECTIVE RF VERSIONS
    url="http://www.realclimate.org/data/H88_scenarios_eff.dat"
    eff=read.table(url,skip=3,header=TRUE) #1958 2050

    #PLOT SIDE BY SIDE
    nf=layout(array(1:2,dim=c(1,2)))
    par(mar=c(3,3,2,1))
    ts.plot(ts(scenario[,2:4],start=1958),xlim=c(1958,2050),col=1:3,lwd=2,ylim=c(0,3))
    title(main="H88 Temperature Change")
    ts.plot(ts(eff[,2:4],start=1958),xlim=c(1958,2050),col=1:3,lwd=2)
    title(main="H88 Forcing")

    #DO REGRESSIONS
    h=function(x,y) paste(x,y,sep="_")
    name2=c(t(outer(c("T","rf"),c("A","B","C"),h)))
    Z=ts.union(ts(scenario,start=1958),ts(eff,start=1958))
    Z=data.frame(Z[,c(1:4,6:8)]);names(Z)=c("year",name2)

    fm1=lm(T_A~rf_A,data=Z);summary(fm1)
    #r2 is 0.966
    coef(fm1)
    #(Intercept) rf_A
    # -0.1013117 0.4166097

    fm2=lm(T_B~rf_B,data=Z);summary(fm2)
    #r2 is 0.9343
    coef(fm2)
    #(Intercept) rf_B
    # -0.1602963 0.4761752

    fm3=lm(T_C~rf_C,data=Z);summary(fm3)
    #r2 is 0.8129
    coef(fm3)
    #(Intercept) rf_C
    # -0.1484006 0.4625182

  53. Mike B
    Posted Jan 22, 2008 at 7:26 AM | Permalink

    #50 Willis

    3) Obviously, there are some other forcings at play here, at least volcanic, and perhaps solar.

    As Roman M alluded to earlier, the key is in Figure 2 of Hansen’s 1988 paper.

    Scenarios B and C included a “simulated” volcano with forcings equivalent to El Chicon in 1995. What I find interesting is that we had Pinatubo in 1992, which actually had larger forcings than El Chicon (GISS forcings), but Pinatubo had a smaller impact on actual global temperatures than the projected impact from the 1995-1998 forcings in the Hansen simulation.

    I’ve provided a link to a pdf of the 1988 paper, so digitizing Figure 2 should be do-able. Not that I’m trying to suggest anything. :-)

  54. Posted Jan 22, 2008 at 9:02 AM | Permalink

    Mike B–
    Digitizing Figure 2c could be useful. But it might also be worth looking at the published optical thicknesses at NASA and/or Gavin’s file at RC.

    I plotted up the historic forcings. The explanation of what I did to Volcano blows.

    You should be able to add the modeled volcanos at the appropriate points to runs B&C if you like.

    Willis– As I’ve been fiddling with simple physical models (like the one Schwartz used in his paper), the problem with doing a linear fit is that the leading order response to a linear increase in forcing is not linear.

    That is:
    If you assume the forcing had been constant until 1958, and then suddenly increased at a rate like

    F~ a* time.

    At small values of time, to leading order, the temperature anomaly for a single time constant world with time constant τ goes

    ΔT(time) ~ b * time^2.

    (If the ramp function continues, then at long times, the temperature anomaly increase would be linear.)

    where “b” is related to both the sensitivity and the time constant of the climate.

    Fitting a line through the quadratic b*time^2 shape at small times would be expected to underestimate the sensitivity. Moreover, if the time constant of the planet is large, it will underestimate very badly.

  55. Mike B
    Posted Jan 22, 2008 at 11:09 AM | Permalink

    Lucia – that’s a good idea, for instance repeating the El Chicon forcing at 1995. But that still leaves the question of any possible differences between the forcings used by Hansen in 1988 and the current forcings file (which I had linked in my prior note). With a digitization of Figure 2, those things can be “kinda sorta” examined.

  56. Willis Eschenbach
    Posted Jan 22, 2008 at 2:12 PM | Permalink

    Steve M. and others, thanks for your comments.

    Steve, you say:

    I got different fits when I did the regressions.

    Yes, there are some oddities in there. Why should the sensitivity depend on the different forcings of the different scenarios? Why would they not all be the same?

    I took the other route, and used the single sensitivity that gave the best overall result when applied to all three scenarios. This was 0.38, as compared to your three answers of 0.42, 0.46, and 0.48. All of these are around half of the IPCC canonical value of 0.8.
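
    For what it’s worth, one way to script that kind of single pooled estimate is to stack the three scenarios and fit one slope. A minimal R sketch using the realclimate files (this simple regression needn’t reproduce Willis’s 0.38 exactly, since his single sensitivity was fitted by a different route):

    # Pooled fit: one sensitivity (deg C per W/m2) across scenarios A, B and C.
    scen <- read.table("http://www.realclimate.org/data/scen_ABC_temp.data",
                       skip=3, header=TRUE)          # temperatures, 1958-2019
    eff <- read.table("http://www.realclimate.org/data/H88_scenarios_eff.dat",
                      skip=3, header=TRUE)           # effective forcings, 1958-2050
    m <- merge(scen, eff, by=1)                      # keep the common years only
    pooled <- data.frame(Temp = unlist(m[, 2:4]), Forc = unlist(m[, 5:7]))
    fit <- lm(Temp ~ Forc, data=pooled)
    coef(fit)[2]               # pooled slope, deg C per W/m2
    3.7 * coef(fit)[2]         # implied warming for doubled CO2 (~3.7 W/m2)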

    Next, Steve, you say:

    these are pre-feedback numbers and equate to the (say) 1.2 deg C direct impact of doubled CO2 that one often sees prior to water vapor feedback, etc.

    Since these are comparing the forcings to the final temperature hindcast by the model, why are they “pre-feedback”? It seems to me that the model would have included water vapor feedback, and thus the hindcast temperature would be post-feedback – it would have taken water vapor into account … what am I missing?

    lucia, I fear I could not follow your math. If a forcing is increasing linearly with time, F~a*time, and the temperature is linearly related to the forcing, T ~ F * sensitivity, then T ~ a * time * sensitivity.

    Taking the discrete differential, I get that ∆T/∆(time) = a * sensitivity.

    What am I not seeing here?

    Best to all,

    w.

  57. Posted Jan 22, 2008 at 2:38 PM | Permalink

    Willis– I’ll show the full math later. But, suppose the climate were a single lump that does this

    dT/dt = -T/τ + α F

    τ is a time constant for the world and α is inversely proportional to the heat capacity of this “climate lump”. (This is as in Schwartz.)

    Next, suppose at time t=0, the world is at T=0, and (for some mysterious reason) at equilibrium.

    Now make F~ a* time

    Then T(t1) will increase as the integral from 0 to t1 of

    exp(-(t1-t)/τ) (a * t) dt

    You can integrate this using the old ∫u dv = uv − ∫v du trick, then expand the result to see the behavior at small time. I’m pretty sure you’ll find the temperature increases quadratically near t1 = 0:

    T(t1) ~ time^2

    for small values of time.
    At ‘large’ values of time, T(t) ~ time ± constant.

    I can show more later. I’m traveling to a military facility tomorrow and I need to get some stuff done.
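
    A quick numerical sketch of that single-lump model may help; the parameter values below are invented purely to show the shape lucia describes (quadratic growth early, linear growth late), not to represent Hansen’s model:

    # Lumped model dTemp/dt = -Temp/tau + alpha*Forc with a ramp forcing Forc = a*t.
    # All parameter values are made up for illustration.
    tau   <- 15                  # time constant, years
    alpha <- 0.05                # inverse heat capacity, deg C per (W/m2 * yr)
    a     <- 0.04                # forcing ramp rate, W/m2 per year
    dt    <- 0.01
    t     <- seq(0, 100, by = dt)
    Temp  <- numeric(length(t))
    for (i in 2:length(t)) {     # simple Euler integration from Temp(0) = 0
      Forc    <- a * t[i - 1]
      Temp[i] <- Temp[i - 1] + dt * (-Temp[i - 1] / tau + alpha * Forc)
    }
    exact <- alpha * a * (tau * t - tau^2 * (1 - exp(-t / tau)))   # closed-form solution
    pick  <- round(c(1, 2, 5, 50, 100) / dt) + 1
    round(cbind(year = t[pick], euler = Temp[pick], exact = exact[pick],
                early_quadratic = alpha * a * t[pick]^2 / 2), 4)

    At small times the exact solution collapses to alpha*a*t^2/2, the quadratic behavior lucia describes; at large times it approaches alpha*a*tau*(t − tau), which is linear in t.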

  58. RomanM
    Posted Jan 22, 2008 at 5:48 PM | Permalink

    UC #51

    I think I may have discovered part of the answer to my own question. In his post on the “Thoughts on Hansen et al 1988” thread, Kenneth Fritsch noted that the 100 year control run also had “wiggles”, making it also look realistic. When I read a little more of the paper to see why, I finally noticed a statement in Appendix A of the Hansen 1988 paper which gives a clue as to how some of these may have gotten into the various graphs:

    In our 100-year control run there is no exchange of heat at the base of the mixed layer. In the experiments with varying atmospheric composition, we mimic, as a diffusion process, the flux of temperature anomalies, from the mixed layer into the thermocline.

    (Bold is mine)

    Now, I am not a climate modeller so I may not know what meaning the words diffusion process may convey to Dr. Hansen, but in my book, they describe a random process. My best guess is that “mimic” is a synonym for “randomly simulate”, so this might indicate that there are random elements in the procedures carried out to draw these graphs. What effect did this have on the result? It is difficult to tell from a sample of n = 1 trial. Also, the implication seems to be that the control run did not use this, so the reason for the wiggles there remains unanswered.

  59. Phil.
    Posted Jan 22, 2008 at 9:20 PM | Permalink

    Re #58

    Your quote from Hansen 88 refers to the modelling of the ocean, about 100m down.

  60. Posted Jan 22, 2008 at 10:13 PM | Permalink

    RomanM–

    Diffusion is a random process at the molecular level, but it can look deterministic at the continuum scale. We also use terms like “diffusion by turbulent motions”. It’s possible Hansen is speaking like someone who does heat/mass/momentum transport, and the ‘diffusion’ ends up having a constitutive equation that would appear to you to look like heat conduction (diffusion of heat) or viscosity (diffusion of momentum).

  61. RomanM
    Posted Jan 23, 2008 at 8:39 AM | Permalink

    #58 Phil.:

    What’s your point? What happens in the ocean 100m down does not have any effect on the overall result? I doubt that very much. Why include that in the model if it doesn’t?

    #60 Lucia:

    Thanks for the insight. If what you say is true and the treatment of the diffusion process is just another deterministic factor in the model, then I am back to square one trying to understand how the wiggles got there. I fail to see how they could originate from the list of equations given or from quasi-periodic effects in the model. The remaining options that occur to me are

    1. Hidden simulated random factors in the model – factors whose effects due to random variation cannot be estimated from a single run of the model.

    2. Imputed variation in the various forcings (on an ad hoc questionable basis as in the postulated volcanic events).

    3. Artifacts of artificial bounds imposed on some of the factors as, e.g., at the end of Appendix A on p. 9360 of the paper in the treatment of ice. Such bounds could conceivably produce “corners” at various places in the resulting graphs. They can also be a way of keeping their results from “drifting” in directions other than the ones where they would like it to go.

    4. Accumulated calculation errors (rounding, etc.). Option 3 above could possibly keep these from overwhelming the model.

    The model is complex enough that it is likely impossible to determine what the real reason may be. Maybe we could ask Dr. Hansen or Dr. Gavin…

  62. Posted Jan 23, 2008 at 9:34 AM | Permalink

    Model runs B and C include simulated volcanic eruptions. See page 9345, first paragraph. You can also see these in figure 2.

    These should cause wiggles.

    I think that’s the main cause of wiggles in those simulations because, if I recall correctly, the paper says that GISS II couldn’t capture El Ninos/ La Ninas yet and so got lower values for inter-annual variability than exist in the real world. (Don’t take this for gospel. Read the paper and check.)

  63. RomanM
    Posted Jan 23, 2008 at 10:05 AM | Permalink

    #62 Lucia

    I am aware of the volcanic interventions. Perhaps I didn’t define “wiggles” particularly well. If you look at Figure 3 in the paper (p. 9347), 3(a) gives the predicted scenario anomalies and 3(b) gives the five year running mean. I would expect a deterministic model to give an initial result (without smoothing) that looks like 3(b), not 3(a) with its irregular zig-zag pattern. I am trying to understand why it doesn’t. I apologize for not reading the entire paper more carefully but unfortunately this is a particularly busy term for me.

  64. Phil.
    Posted Jan 23, 2008 at 10:11 AM | Permalink

    Re #61

    #58 Phil.:

    What’s your point? What happens in the ocean 100m down does not have any effect on the overall result? I doubt that very much. Why include that in the model if it doesn’t?

    That a diffusive process transferring heat to the deep ocean is unlikely to be responsible for short term ‘wiggles’ in the atmosphere.

  65. Posted Jan 23, 2008 at 10:23 AM | Permalink

    Roman M– Some transport models based on deterministic systems do show wiggles. (See Direct Numerical Simulation and Large Eddy Simulation.)

    Based on the brief theory discussions in the papers describing the GCM’s I would expect them to smooth reality. But, if they capture some large scale behavior of the turbulent like fluctuations (tornados, hurricanes, anything that creates a large scale changeable weather pattern) then wiggles aren’t that surprising.

    But yes, you’d have to ask someone like Gavin and read whether the answer made sense to you.

  66. Kenneth Fritsch
    Posted Jan 24, 2008 at 12:33 PM | Permalink

    Re: #58

    Roman M, the wiggles (and perhaps the trends) in temperature curves in the control run without GHG forcings (and one would assume carried into the Scenario runs as well) might well be described in the 1988 Hansen et al paper where they say (under “3. A 100 Year Control Run”):

    Note that the seasonal thermocline (i.e., the water between the base of the seasonal mixed layer and the annual maximum mixed layer depth) can have a different temperature each year; this heat storage and release can affect the interannual variability of surface temperature.

    I am not sure I am describing this correctly, but it appears the phenomena they describe above could be a source of short-term (wiggles) and long-term (trends) autocorrelation.

    They also say in the same section, and without refutation, that:

    The conclusion that unforced (and unpredictable) climate variability may account for a large portion of climate change has been stressed by many researchers: for example, Lorenz [1968], and Robock [1978].

    It surprises me that these self-reported model limitations have not had more discussion here.

  67. Bugs
    Posted Jan 24, 2008 at 11:30 PM | Permalink

    The amazing thing, for me, is that the projections are anything like the measured results. We read so often about how complex the climate system is and how it is impossible to model, yet here we have a measured result that is roughly like the projection. This has been done using a model that is much more primitive than the models that are now being used, on hardware that was much more primitive.

    I’d chalk this achievement up as an amazing win for science.

  68. bender
    Posted Jan 25, 2008 at 1:08 AM | Permalink

    -descent with modification
    -selection of the fittest

    Yes, this yields “amazing” results. Is it a “win for science”, or just the way that life works: objects tend to fit their environment as engineers tweak them endlessly?

  69. bender
    Posted Jan 25, 2008 at 1:16 AM | Permalink

    The conclusion that unforced (and unpredictable) climate variability may account for a large portion of climate change has been stressed by many researchers: for example, Lorenz [1968], and Robock [1978].

    #66 Nice find, KF. I had missed that. One wonders about Robock v Hansen.

  70. bender
    Posted Jan 25, 2008 at 1:22 AM | Permalink

    Internally and externally caused climate change by Alan Robock

    Susann will like this abstract:

    “A numerical climate model is used to simulate climate change forced only by random fluctuations of the atmospheric heat transport. This short-term natural variability of the atmosphere is shown to be a possible “cause” not only of the variability of the annual world average temperature about its mean, but also long-term excursions from the mean.”

    Well, well. But I’m sure the paper must be wrong.

  71. bender
    Posted Jan 25, 2008 at 1:45 AM | Permalink

    Figure 4 in Robock (1978) shows a nice example of atmospheric LTP due to internal weather noise alone. Local eddy flux of sensible heat, integrated globally, produces a rather wide-swinging random walk. And this model does not even include an ocean!

  72. bender
    Posted Jan 25, 2008 at 1:53 AM | Permalink

    The temperature graphs from these runs show that not only is “noise” about a mean produced by the random eddy perturbations but also large excursions of the temperature. Even if the temperature response is scaled down so that the standard deviations are the same as the observations, NH temperature excursions of 0.5K occur in one year. The temperature may remain relatively constant for up to 10 years and then shift to a value 0.5K different and stay at that value for several years. Rapid shifts year after year also occur. Long-term trends are also produced. In fact with no external forcing, internal variations of the observed magnitude produce NH temperature fluctuations as large as those observed for the past 100 years!

    Exactly as I would expect! Jim Hansen, please explain how you have come to reject this hypothesis.

    If this is how Gavin Schmidt’s GCM ensemble runs behave today, then he’s got some statistical splaining to do.

  73. bender
    Posted Jan 25, 2008 at 2:04 AM | Permalink

    Is it possible that the entire IPCC is guilty of the unpardonable offence of comparing an ensemble of GCM runs – which have the internal variability ironed out of it – to a single instance of actual observations – where the internal variability obviously is not excluded? This can’t be?! Can it? I almost hesitate to post this comment, it’s so late at night and I am vulnerable to making a silly error.

    Gamble /on

  74. bender
    Posted Jan 25, 2008 at 2:17 AM | Permalink

    One last bit from Robock (1978):

    The natural variability of the atmosphere, through random short-term variations in the dynamical fluxes, has been shown to produce unpredictable long-term variations in the climate. This result can be considered as a demonstration of the importance of internal causation of climate change. It can also be thought of as a test of the sensitivity of the climate system to baroclinic instability as a forcing mechanism, since this is not explicitly calculated in the unperturbed model.

    1. So there is the name of Susann’s “cause”: baroclinic instability.
    2. This also shows why 1D, 2D models may not be as suitable as 3D GCMs for quantifying forcing effects: the GCMs generate far more internal “weather” noise.

    Gavin may be in for a big surprise in 2015.

  75. bender
    Posted Jan 25, 2008 at 2:19 AM | Permalink

    #66

    It surprises me that these self-reported model limitations have not had more discussion here.

    Very much so. Very much so.

  76. bender
    Posted Jan 25, 2008 at 2:48 AM | Permalink

    I am going OT, but it is probably better to keep this stuff on Robock bundled than to spread it around.

    This is a thread from RC where I asked a half dozen questions about GCMs and internal climate variability:

    von Storch Weighs In on Pielke's Challenge


    They answered some of them, poorly, but mostly blew me off. Curiously, nobody cited Robock (1978). I really wish they had.

    Noteworthy:
    -mike was willing to concede that Arctic warming in the 1930s-40s “could easily have arisen from the intrinsic natural variability of the climate at multidecadal timescales”, and even provided a nice reference (and this thankfully shut up layman Levenson).

    But then -mike stated that “Precisely the same thing could of course be happening today. However, such internal variability (both in this model, and all other current generation coupled climate models) is unable to generate a century-long trend in global mean temperature anywhere close to that observed for the past century.”

    When I asked for proof from the primary literature, none came. I was told to re-read AR4. I am still asking for that proof.

  77. bender
    Posted Jan 25, 2008 at 2:51 AM | Permalink

    Q: Why did no one at RC point me to Robock (1978)?

    • bender
      Posted Oct 7, 2009 at 8:52 AM | Permalink

      Re: bender (#77),
      A: Because they were already aware that Robock’s (1978) hypothesis had been vindicated by Tsonis & Swanson et al. (2006, 2007, 2008, 2009). That they deeply feared this hypothesis is evident in: (1) the RC post by Kyle Swanson this summer (his first move was to allay fears that alarmists were in the wrong), (2) the fact that they did not point me to Tsonis’s recent work, even though they could have.
      .
      What more are they hiding?

      • bernie
        Posted Oct 7, 2009 at 10:17 AM | Permalink

        Re: bender (#97), Bender,
        Now that you have revived a year-old link, can you quickly summarize its relevance to the current Yamal discussion, or are we on a very different track?

        • bender
          Posted Oct 7, 2009 at 11:26 AM | Permalink

          Re: bernie (#100),
          This is RC’s fault. They switched topics yesterday to “warming pause” – possibly to get eyes off the Yamal topic(?). In so doing they brought up Swanson, a seeming hero amongst alarmists for dispelling the idea that natural variability could be the cause of the current warming trend/cooling cycle. Let’s play “spot-the-double-standard”. Swanson (July 12 2009) is their hero when there’s a cooling cycle on; yet they didn’t want to talk about that *at all* until lucia made the lack of trend, and its significance, widely known and understood.
          .
          It is completely unrelated to Yamal. And that’s kinda the point, isn’t it?! 😉

  78. Raven
    Posted Jan 25, 2008 at 2:55 AM | Permalink

    bender says:

    Is it possible that the entire IPCC is guilty of the unpardonable offence of comparing an ensemble of GCM runs – which have the internal variability ironed out of it – to a single instance of actual observations – where the internal variability obviously is not excluded? This can’t be?! Can it? I almost hesitate to post this comment, it’s so late at night and I am vulnerable to making a silly error.

    Look at the graph comparing CO2 and natural forcings in Chapter 9 of AR4.
    The IPCC report is here: http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch09.pdf
    Look for Figure 9.5

    They show the individual model runs so you can see the range produced by the different runs. However, this does not address the issue of random walks because the models appear to be built on the assumption that the climate won’t change without a ‘forcing’ (i.e. the noise in the models is only a result of high frequency daily/seasonal weather noise). We know from other sources that modellers are quick to exclude trends that go out of bounds.

    Demonstrating the potential for long term random variability would require different models. Those papers you referenced would probably be good starting points.

  79. Geoff Sherrington
    Posted Jan 25, 2008 at 5:08 AM | Permalink

    Re # 63 Roman M

    I asked about these a year ago. The closest I can get is that adjustments to some temperature series are ramped stepwise over intervals of several years. Maybe this texture shows through to the subsequent end calculations.

    Otherwise, I would surmise without evidence that they are camouflage deliberately inserted so that past actual rough graphs do not suddenly become smooth as the time axis enters the future, thus revealing the data period at which the graphs were calculated. This would make it easier to do adjustments after the event without being so noticed from the textural change.

  80. Posted Jan 25, 2008 at 5:45 AM | Permalink

    Hmmm. Nature of internal variability. It cannot be random walk (1/f^2) because the system seems to be stable. Not white noise (1/f^0), as the residuals are never white with these models. I’ll choose flicker noise then (1/f^1) (a bit modified at very low and very high freqs to keep the system stable). Not a bad choice: 1/f is found in fluctuations of the Earth’s rate of rotation, undersea currents, traffic on Japanese expressways, even in the minimum and maximum flood levels of the Nile (see [1] and references therein). Mann’s robust red-noise background estimator would cut off that kind of noise, and interpret it as a signal. 1/f noise would result in weather-like prediction problems; given this month’s average I cannot predict next month’s average very well. But 1/f has constant Allan variance, so this would continue at any time-scale; given this 100-year average I cannot predict the next 100-year average very well! Just a thought, sorry about the OT interruption 😉

    [1] R. F. Voss, “1/f (Flicker) Noise: A Brief Review”, 33rd Annual Symposium on Frequency Control, 1979, pp. 40-46.
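
    A rough R sketch of UC’s point (one common recipe for approximate 1/f noise via spectral shaping; sample size and block lengths are arbitrary): shape the spectrum of white noise by 1/sqrt(f), then compare how block averages of white and flicker noise settle down.

    # Approximate 1/f ("flicker") noise by shaping the spectrum of white noise.
    set.seed(42)
    n    <- 2^14
    wht  <- rnorm(n)
    spec <- fft(wht)
    freq <- c(0, 1:(n/2), (n/2 - 1):1)          # |frequency| index of each fft bin
    spec <- spec / sqrt(pmax(freq, 1))          # ~1/f power shaping
    spec[1] <- 0                                # drop the mean (DC) component
    flk  <- Re(fft(spec, inverse = TRUE)) / n
    flk  <- flk / sd(flk)                       # rescale to unit variance

    # Spread of block means: white noise averages down like 1/sqrt(m);
    # flicker noise averages down much more slowly (UC's point about Allan variance).
    blockmean_sd <- function(x, m)
      sd(colMeans(matrix(x[1:(m * (length(x) %/% m))], nrow = m)))
    sapply(c(16, 128, 1024), function(m)
      c(block_length = m, white = blockmean_sd(wht, m), flicker = blockmean_sd(flk, m)))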

  81. MarkW
    Posted Jan 25, 2008 at 5:53 AM | Permalink

    Just fitting a world wide average is the easiest thing for a model to do. Additionally, it is meaningless. We don’t live in an average world. We live in the spot where we are. Until models can accurately predict regional climate, they are worthless.

  82. kim
    Posted Jan 25, 2008 at 6:30 AM | Permalink

    Flickers, I knew it was for the birds. Yes, the stability is a key. That seems to be from Leif’s steady and gradually increasing insolation. So why and wherefore the variances? Protean inputs and processes, yet unguessed.
    ===================================

  83. bender
    Posted Jan 25, 2008 at 7:12 AM | Permalink

    #79 Raven Look at Fig 9.5b. How well is the model performing from 1900 to 1940? i.e. Do the slopes match? That departure is proof they’ve got the wrong error model. In fact there is not one run in the ensemble that exceeds observation! The true internal noise is likely larger than what they think. (It is perhaps what generated the MWP?) And -mike already concedes it probably caused 1930s-40s arctic warming – an explanation that does NOT fit with this 9.5b.

    Unprecedented? Unnatural? GMT rise greater than 1.5K? Skepticism growing.

    UC #81, what if it’s multistable as opposed to “stable”? i.e. It’s not a random walk, but a random walk with jumps. Like electrons in shells.

  84. bender
    Posted Jan 25, 2008 at 7:20 AM | Permalink

    Does AR4 cite Robock (1978)? In what context?

  85. bender
    Posted Jan 25, 2008 at 7:37 AM | Permalink

    No citation of Robock (1978) that I can find. Not in ch. 6,8,9.

    But check this:

    “for each experiment, multiple simulations were performed by some individual models to make it easier to separate climate change signals from internal variability within the climate system”

    (ch. 8, p. 594)

    Read again, slowly. It looks like they are confusing the internal variability within the models for the internal variability within the real-world climate system. Do they get the answers they want because they’ve presumed the consequence?

  86. bender
    Posted Jan 25, 2008 at 7:43 AM | Permalink

    When I read GS at RC I sometimes get the uneasy feeling that he talks about virtual and real world climate systems interchangeably. This is a dangerous habit. It can lead to statements such as #86, where it is not clear that you have not “presumed the consequences” by starting off with a belief that your model is correct.

  87. RomanM
    Posted Jan 25, 2008 at 10:08 AM | Permalink

    bender, your comments in #85 and #86 apply equally well to Phil.’s post #64:

    That a diffusive process transferring heat to the deep ocean is unlikely to be responsible for short term ‘wiggles’ in the atmosphere.

    The “wiggles” he was referring to were the result of a set of equations someone programmed into a computer.

  88. bender
    Posted Jan 25, 2008 at 10:32 AM | Permalink

    #87 RomanM
    If Phil is asserting that “a diffusive process transferring heat to the deep ocean is unlikely to be responsible for short term ‘wiggles’ in the atmosphere”, then I believe he is correct, or at least with the consensus. The short-term (high-frequency) “wiggles” must come from the atmosphere (globally integrated baroclinic instabilities) or, less likely, sea surface. The deep ocean is much, much slower at warming and re-radiating. Maybe the deep ocean causes low-frequency wiggles. I do not know. I wish I knew.

    Phil is usually very, err, careful in his choice of words, esp. when it comes to climatological processes. The word “deep” is performing a major function in his assertion.

  89. RomanM
    Posted Jan 25, 2008 at 11:43 AM | Permalink

    Oops. Again, I don’t seem to have made myself clear. What Phil said may be correct when applied to the real world. My point is that he has tacitly assumed that the model is equivalent to the real world and has accurately represented the real world behaviour in the equations and their relationships. I am not willing to automatically assume without justification that every effect which would not occur in reality would also not manifest itself in the model. Your point was that “he (GS) talks about virtual and real world climate systems interchangeably”. I merely noted that this was implicit in Phil.’s comment as well.

    • bender
      Posted Oct 7, 2009 at 8:56 AM | Permalink

      Re: RomanM (#89),
      I think Phil may want to revisit his confident assertion after reading Tsonis and Swanson’s recent work.

  90. Phil.
    Posted Jan 25, 2008 at 11:51 AM | Permalink

    Re #89

    What Phil said may be correct when applied to the real world. My point is that he has tacitly assumed that the model is equivalent to the real world and has accurately represented the real world behaviour in the equations and their relationships.

    Ok, we have a misunderstanding here: I was referring to a computational procedure only; as I recall it was the use of a diffusion process to model heat flow across the thermocline. Diffusion is a ‘smoothing’ process in both the real world and computationally and as such is unlikely to cause wiggles (quite the reverse). A cause of wiggles could be to arbitrarily change the location of the thermocline from year to year – not that I’m saying that that was done, although I recall some quote about that effect earlier this week, whether in the real or virtual world I don’t know.

  91. bender
    Posted Jan 25, 2008 at 11:52 AM | Permalink

    #89 RomanM,
    Right, I follow now.
    In the modeling world it is convenient shorthand to speak about the model as if it were reality. That’s all Phil is doing here. That’s ok. You can do it. I can do it. The problem comes when the modelers themselves – who talk like this in their daily work – start to believe everything in the model is real. They transpose virtual and real interchangeably. They forget the talk is just shorthand.

    I’m not saying that that has happened with GS – only observing that it is a common phenom.

  92. RomanM
    Posted Jan 25, 2008 at 12:05 PM | Permalink

    #90, #91

    Points also taken and accepted that I misunderstood Phil.’s statement and that the diffusion is not likely the cause of the wiggles.

  93. Posted Jan 25, 2008 at 2:46 PM | Permalink

    Re 80

    even in the minimum and maximum flood levels of Nile

    [1] 1/f (Flicker) Noise: A Brief Review Voss, R.F. 33rd Annual Symposium on Frequency Control. 1979, page(s): 40- 46

    Interesting connection,

    Koutsoyiannis in The Hurst phenomenon and fractional Gaussian noise made easy cites the same study as Voss,

    O. Toussoun. Mémoires de l’Institut d’Égypte 8-10. Cairo (1925).

    And I forgot to mention the Beatles in #80, sorry (excuse: I’m too young, Metallica-generation ) ,

  94. Sam Urbinto
    Posted Jan 25, 2008 at 2:51 PM | Permalink

    “unforced (and unpredictable) climate variability may account for a large portion of climate change”

    Gee, really?

    ‘comparing GCM runs with internal variability not included to actual observations with it included’

    Nah, who’d ever do such a thing?

    “a world wide average is … meaningless. We don’t live in an average world.”

    Exactly. We live in a real one.

    “Unprecedented? Unnatural? GMT rise greater than 1.5K?”

    I remain unconvinced the GMT in the anomaly means anything anyway in the first place. It’s never been over +/- 1 C though, so *shrug*. But your answer is “Never in a million years.”

    “Do they get the answers they want because they’ve presumed the consequence?”

    Oh, that could never happen.

    “virtual and real world climate systems interchangeably.”

    Or derived anomaly and real world energy balance!!! 🙂

  95. Leonard Herchen
    Posted Jul 29, 2008 at 10:39 PM | Permalink

    Steve: I want to add my voice to yours suggesting people tone down any gloating. All we need is a spike like 1998 and you’ll hear the deniers of the deniers screaming from the roof tops. The lack of warming since 1998 (10 years) is interesting, but we need another 10 years before we are able to draw much of a conclusion.

  96. bender
    Posted Jul 29, 2008 at 10:47 PM | Permalink

    Agree with the tone of #95, but this:

    we need another 10 years before we are able to draw much of a conclusion

    Why 10? Why not 20? 30? or 5? On what basis is this parameter derived? Gavin used to say 10. Now he says 30, apparently.
    And if the trend 1998-2008 were upward he would say “the science is settled”? The criterion – whatever it is – needs to be defined a priori.

    • bender
      Posted Oct 7, 2009 at 8:58 AM | Permalink

      Re: bender (#96),
      Gavin now says 30 because Swanson says “multi-decadal”. That was the trigger to switch his rhetoric. And around she goes. So what’s Swanson’s assertion based on?

  97. bernie
    Posted Oct 7, 2009 at 11:47 AM | Permalink

    Gottya!

One Trackback

  1. […] some comment thread at Climate Audit, Willis Eschenbach estimated the sensitivity of Hansen’s GISS II model by fitting a straight line to the data. I […]