Hansen Frees the Code

Hansen has just released what is said to be the source code for their temperature analysis. The release was announced in a shall-we-say ungracious email to his distribution list and a link is now present at the NASA webpage.

Hansen says resentfully that they would have liked a “week or two” to make a “simplified version” of the program and that it is this version that “people interested in science” will want, as opposed to the version that actually generated their results.

Reto Ruedy has organized into a single document, as well as practical on a short time scale, the programs that produce our global temperature analysis from publicly available data streams of temperature measurements. These are a combination of subroutines written over the past few decades by Sergej Lebedeff, Jay Glascoe, and Reto. Because the programs include a variety of
languages and computer unique functions, Reto would have preferred to have a week or two to combine these into a simpler more transparent structure, but because of a recent flood of demands for the programs, they are being made available as is. People interested in science may want to wait a week or two for a simplified version.

In recent posts, I’ve observed that long rural stations in South America and Africa do not show the pronounced ROW trend (Where’s Waldo?) that is so distinct from the U.S. temperature history, and that there is a total lack of long records from Antarctica covering the 1930s. Without mentioning climateaudit.org or myself by name, Hansen addresses the “lack of quality data from South America and Africa, a legitimate concern”, concluding this lack does not “matter” to the results.

Another favorite target of those who would raise doubt about the reality of global warming is the lack of quality data from South America and Africa, a legitimate concern. You will note in our maps of temperature change some blotches in South America and Africa, which are probably due to bad data. Our procedure does not throw out data because it looks unrealistic, as that would be subjective. But what is the global significance of these regions of exceptionally poor data? As shown by Figure 1, omission of South America and Africa has only a tiny effect on the global temperature change. Indeed, the difference that omitting these areas makes is to increase the global temperature change by (an entirely insignificant) 0.01C.

So the United States shows no material change since the 1930s, but this doesn’t matter; South America doesn’t matter; Africa doesn’t matter; and Antarctica has no records relevant to the 1930s. Europe and northern Asia would seem to be plausible candidates for locating Waldo. (BTW we are also told that the Medieval Warm Period was a regional phenomenon confined to Europe and northern Asia – go figure.)

On two separate occasions, Hansen, who two weeks ago contrasted royalty with “court jesters” saying that one does not “joust with jesters”, raised the possibility that the outside community is “wondering” why (using the royal “we”) he (a) “bothers to put up with this hassle and the nasty e-mails that it brings” or (b) “subject ourselves to the shenanigans”.

Actually, it wasn’t something that I, for one, was wondering about at all. In my opinion, questions about how he did his calculations are entirely appropriate and he had an obligation to answer the questions – an obligation that would have continued even if he had flounced off at the mere indignity of having to answer a mildly probing question. Look, ordinary people get asked questions all the time and most of them don’t have the luxury of “not bothering with the hassle” or “not subjecting themselves to the shenanigans”. They just answer the questions the best they can and don’t complain. So should Hansen.

Hansen provides some interesting historical context to his studies, observing that his analysis was the first to include Southern Hemisphere results, which supposedly showed that, contrary to the situation in the Northern Hemisphere, there wasn’t cooling from the 1940s to the 1970s:

The basic GISS temperature analysis scheme was defined in the late 1970s by Jim Hansen when a method of estimating global temperature change was needed for comparison with one-dimensional global climate models. Prior temperature analyses, most notably those of Murray Mitchell, covered only 20-90N latitudes. Our rationale was that the number of Southern Hemisphere stations was sufficient for a meaningful estimate of global temperature change, because temperature anomalies and trends are highly correlated over substantial geographical distances. Our first published results (Hansen et al., Climate impact of increasing atmospheric carbon dioxide, Science 213, 957, 1981) showed that, contrary to impressions from northern latitudes, global cooling after 1940 was small, and there was net global warming of about 0.4C between the 1880s and 1970s.

Earlier in the short essay, Hansen said that “omission of South America and Africa has only a tiny effect on the global temperature change”. However, they would surely have an impact on land temperatures in the Southern Hemisphere? And, as the above paragraph shows, the calculation of SH land temperatures and their integration into global temperatures seems to have been a central theme in Hansen’s own opus. If Hansen says that South America and Africa don’t matter to “global” and thus presumably to Southern Hemisphere temperature change, then it makes one wonder all the more: what does matter?

Personally, as I’ve said on many occasions, I have little doubt that the late 20th century was warmer than the 19th century. At present, I’m intrigued by the question as to how we know that it’s warmer now than in the 1930s. It seems plausible to me that it is. But how do we know that it is? And why should any scientist think that answering such a question is a “hassle”?

In my first post on the matter, I suggested that Hansen’s most appropriate response was to make his code available promptly and cordially. Since a somewhat embarrassing error had already been identified, I thought that it would be difficult for NASA to completely stonewall the matter regardless of Hansen’s own wishes in the matter. (I hadn’t started an FOI but was going to do so.)

Had Hansen done so, if he wished, he could then have included an expression of confidence that the rest of the code did not include material defects. Now he’s had to disclose the code anyway and has done so in a rather graceless way.

385 Comments

  1. DavidH
    Posted Sep 8, 2007 at 5:42 AM | Permalink

    Bravo, Steve! Jones next?

  2. Steve McIntyre
    Posted Sep 8, 2007 at 5:43 AM | Permalink

    In the gistemp.readme, Hans Erren is cited as a data source:

    For Hohenpeissenberg – http://members.lycos.nl/ErrenWijlens/co2/t_hohenpeissenberg_200306.txt
    complete record for this rural station
    (thanks to Hans Erren who reported it to GISS on July 16, 2003)

  3. Frank K.
    Posted Sep 8, 2007 at 5:57 AM | Permalink

    As they say, better late than never! I for one will be poring over the code. Should be a fun adventure…

    BTW, my favorite line in his essay is:

    “Our procedure does not throw out data because it looks unrealistic, as that would be subjective.”

    This, of course, flies in the face of all the previous publications. There was a ** lot ** of subjectivity in processing the historical temperature database!

    Frank K.

  4. Steve McIntyre
    Posted Sep 8, 2007 at 6:03 AM | Permalink

    It looks to me as though the combining of stations – a process that we’ve been discussing – is in Hansen “Step 1”.

  5. Steve McIntyre
    Posted Sep 8, 2007 at 6:13 AM | Permalink

    A number of the readme’s have been written in only the last couple of days. My initial impression is that the code is poorly documented. NASA has very specific standards applicable to software described here. They say:

    Software engineering is a core capability and a key enabling technology necessary for the support of NASA’s Mission Directorates. Ensuring the quality, safety, and reliability of NASA software is of paramount importance in achieving mission success. This chapter describes the requirements to help NASA maintain and advance organizational capability in software engineering practices to effectively meet scientific and technological objectives.

    Among other duties of the NASA Chief Engineer is the following:

    1.2.3 The NASA Chief Engineer shall periodically benchmark each Center’s software engineering capability against its Center Software Engineering Improvement Plan. [SWE-004]

    As I understand it, GISS is part of the Goddard Space Flight Center and is subject to these guidelines. It looks like they apply even to Hansen.

  6. John Lang
    Posted Sep 8, 2007 at 6:14 AM | Permalink

    Congratulations Steve. This is a very important step.

    I note that the email makes it sound like they hurried the release through and would have liked a week or two more to produce a simpler code. If it was so easy to release the code in a short time, they should have done so years ago.

  7. Frank K.
    Posted Sep 8, 2007 at 6:29 AM | Permalink

    The time stamps on nearly all of the Fortran files indicate they were edited within the last few days. Two of the Python scripts were also edited recently (last month).

  8. Stan Palmer
    Posted Sep 8, 2007 at 6:45 AM | Permalink

    I suppose that I am only one member of the thundering herd who thinks that an analysis of the code that actually was used is essential. The process of producing a simplified code would produce a whole new set of bugs that would obscure the set of bugs and undocumented features that affected the published results. One would have a hard time discriminating between the novel bugs that did not affect the published results and the prior bugs that did.

  9. steven mosher
    Posted Sep 8, 2007 at 6:49 AM | Permalink

    Well, “free the code” worked.

    As far as unrealistic data goes, I’m wondering how he justifies the removal of Crater Lake NPS HQ
    and the “cool stations” in Northern CA, as documented in H2001.

    Congrats Steve

  10. Dave Dardinger
    Posted Sep 8, 2007 at 6:49 AM | Permalink

    Congratulations, Steve! But your headline needs an “!” at the end. Steve Sadlov at least deserves it.

  11. bernie
    Posted Sep 8, 2007 at 6:57 AM | Permalink

    Steve:
    If combining stations from different locations is Step 1, are you indicating that the code that has been released does not specify how scribal versions and code from the same site were combined and “adjusted”, i.e., Step 0. If this is true, I can see the “auditors” becoming ferocious about the “attempt to mislead”.

  12. bernie
    Posted Sep 8, 2007 at 7:01 AM | Permalink

    Sorry, that should read “scribal versions and data series”.

  13. Posted Sep 8, 2007 at 7:01 AM | Permalink

    Where do we go to look at the SST (esp SH) history and data processing methodologies?

    If the top ~150 feet of ocean are the primary heat reservoir affecting the biosphere (land and sea), wouldn’t that be where the SH heating/cooling are taking place?

  14. Jaye
    Posted Sep 8, 2007 at 7:03 AM | Permalink

    Mixing shell scripts and Fortran disqualifies these guys from being software engineers, so those standards don’t apply to them.

  15. Fabius Maximus
    Posted Sep 8, 2007 at 7:07 AM | Permalink

    Steve, when was your first post on the code? That is, how quickly was this issue resolved — in the sense of having the information released?

    Might be worth tracking. The “establishment’s” response time to your work is probably decreasing, an indicator of growing openness in Climate Science.

  16. steven mosher
    Posted Sep 8, 2007 at 7:21 AM | Permalink

    Guys, if you get a chance drop over to RC, thank Hansen and Gavin. Let’s show
    some class.

  17. Posted Sep 8, 2007 at 7:33 AM | Permalink

    So many of the files are edited recently? Obviously the first thing to do then is to take the dataset used in a recent global temperature publication and run the code to see if the code produces identical results. Call me paranoid but it’s hard to imagine that they’ve been just frantically adding/editing comments there…

  18. Louis Hissink
    Posted Sep 8, 2007 at 7:34 AM | Permalink

    Steve

    Beware those bearing “gifts” – releasing code is one thing – encapsulating it in mealy mouthed resentment another, so bet you it will be “incomplete”, (the code, that is).

    Good work – we in Oz should make you an honorary JOR committee member 🙂

  19. Steve McIntyre
    Posted Sep 8, 2007 at 7:43 AM | Permalink

    #18. I don’t necessarily expect the code to “work” in a turn key way, although that would be nice. My main interest is in the information that it provides on mysterious steps – which can to a considerable extent be resolved by inspecting code that isn’t operable. Of course, it would be better if it was operable so you could examine intermediates. We’ll see.

    In another case familiar to CA readers, Mann’s code provided to the House Energy and Commerce Committee was incomplete and inoperable, but it contained useful information: e.g. that he calculated the verification r2 that he denied calculating.

  20. Steve McIntyre
    Posted Sep 8, 2007 at 8:16 AM | Permalink

    I’m sure that one other consideration in freeing the code was the CA threads patiently reverse engineering what he did, plus the fact that we were having some success in pinning down the steps. In combination with the “Y2K” publicity, I can’t imagine that NASA was pleased with the prospect of this playing out on the internet over the next few months and simply decided to cut their losses, regardless of Hansen’s views on the matter.

  21. Louis Hissink
    Posted Sep 8, 2007 at 8:17 AM | Permalink

    # 19

    Steve

    Should be interesting dissecting the code – I’ve been reading the posts on this (and other related ones) with interest – the technique of grafting disparate data sets isn’t strange, but the means by which it is done in climate science is (at least from your analysis here).

    I’m almost tempted to subject the climate data you’re using to a standard levelling process we use to normalise geochemical data in mineral exploration, but resist it as climate data are in the time domain while the data I am used to are not. (For those CA readers unfamiliar with geomaths or geostats, geochemical data levelling is about working out the spatial means about which the data vary (fluctuate) and levelling those means by addition or subtraction. But these data are, to all practical purposes, time-invariant, and thus simple. In climate, extra factors operate, principally in the time domain, and unless those are well understood, data levelling becomes somewhat problematical.)

    Interesting none the less.

  22. John Norris
    Posted Sep 8, 2007 at 8:18 AM | Permalink

    Congratulations Steve!

    re #5

    Hansen is somewhat a victim of his own success. Source code for a global warming endeavor was not a very important concern when he started out on this, and probably didn’t merit much attention to quality and documentation. Of course it does now.
    – Lesson number one that has been demonstrated by Hansen, if you are going to start clamoring that you have a new understanding vitally important to the world, you better be prepared for the technical body cavity inspection because it is forthcoming.
    – Lesson number two, also demonstrated by Hansen, don’t document for the world to see your emotional feelings about those evil technical reviewers. Take the high road.

  23. Steve McIntyre
    Posted Sep 8, 2007 at 8:22 AM | Permalink

    #21. Louis, here’s how Hansen’s leveling process would apply to a mineral exploration sample. Let’s suppose that you had two copies of a geochemical survey, one of which had been transcribed and then some pages lost. For the most part the values were identical but occasionally one version would have a missing value. Hansen uses the other information to estimate the missing value; compares this to the version with an actual reading and estimates a bias for the first copy based on the difference and then adjusts all the data in one copy – regardless of whether it is identical.

    It sounds unbelievable but it’s what he did.
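
    A toy numerical sketch of the mechanics as I read them (made-up numbers and a deliberately crude estimator; this is an illustration, not the GISS code):

    # two hand copies of one survey; copy 2 has lost a single reading
    copy1 = [10.0, 12.0, 11.0, 13.0, 12.5]
    copy2 = [10.0, 12.0, None, 13.0, 12.5]

    # estimate copy 2's missing value from its own surviving readings
    present = [v for v in copy2 if v is not None]
    estimate = sum(present) / len(present)        # 11.875

    # the "bias" is the gap between that estimate and copy 1's actual reading
    bias = estimate - copy1[2]                    # 0.875

    # and all of one copy is then shifted by that amount, identical readings included
    copy1_adjusted = [v - bias for v in copy1]
    print(copy1_adjusted)                         # [9.125, 11.125, 10.125, 12.125, 11.625]

    Whatever estimator the real code uses, the point is the same: an artifact of estimating one missing value gets propagated onto data that matched perfectly to begin with.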

  24. BarryW
    Posted Sep 8, 2007 at 8:28 AM | Permalink

    As far as standards go it’s old code and probably grandfathered; you wouldn’t believe what some of the Air Traffic Control code looks like. Don’t assume either that government agencies actually adhere to their own standards. The resentment is from the fact that Hansen knows how bad the code is and is embarrassed by it (and should be). It takes a lot of discipline, money, personnel and time to write production quality code, and I doubt if his shop has any of these, especially the first. Having worked in a shop that did “quick and dirty” analyses I’m pretty sure I know what this is going to look like. The next “crossword” puzzle will be figuring out which piece of code was applied to what file and in what order. There probably are steps manipulating the files using shell commands that aren’t going to be obvious or documented.

  25. Andy
    Posted Sep 8, 2007 at 8:32 AM | Permalink

    Great work Steve!

    With all due respect to Dr. Hansen, “people interested in science” should only be interested in the code that was actually used to produce their reported data. The “simplified version” may be a useful exercise for GISS to make it easier to maintain the code going forward (after doing adequate regression testing against the existing code), but that’s about it.

    This will be the gift that keeps on giving though, as it’ll also be an interesting exercise to compare the “simplified” code to the original.

    As to Dr. Hansen’s statements about the large swaths of the planet that “don’t matter,” one is left with the logical conclusion that the only parts that do matter to him must be the ones which support his hypothesis (cross-reference to “people interested in science…”).

    Now, on to the code to see what my tax dollars have bought me.

  26. Skip
    Posted Sep 8, 2007 at 8:34 AM | Permalink

    As a general rule I’m not fond of heavily documented code because it introduces an additional point of failure. As code is edited, it begins to no longer resemble the comments unless the extra work to maintain the comments is done as well. And in my experience, this is almost never done. So I’d tend to cut Hansen some slack here on the source code.

    Doing some initial digging, the combining step is done via some python scripts. I’m not terribly familiar with python but it looks like it mostly relies on the interpreter to do the right thing with numerics. In other words, integer math stays integers, until it’s automatically widened to floating point when a numeric operation containing a floating point is used. While that’s handy, it could certainly mask bugs.
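
    A tiny illustration of how that masking could play out, assuming the old-style Python division semantics this code appears to rely on (integer / integer truncates; the result only widens once a float is involved):

    sums = 25                  # e.g. an integer sum of tenths-of-a-degree values
    wgt = 10
    print(sums // wgt)         # 2   -- truncating integer division, the .5 is silently lost
    print(float(sums) / wgt)   # 2.5 -- once either operand is a float, the mean survives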

  27. jae
    Posted Sep 8, 2007 at 8:36 AM | Permalink

    Congrat’s, Steve! Considering the apparent urgency with which the code was released, I think Hansen was under a LOT of pressure. I wouldn’t be surprised if it came from Congressmen, not just NASA.

  28. steven mosher
    Posted Sep 8, 2007 at 8:37 AM | Permalink

    re 20.

    Yes the way CA was going at the problem would create weekly or monthly “discoveries”
    or questions ( where’s waldo)

    Death by a thousand cuts. Bad PR.

    Interesting that they did the release on Friday, typical PR move.

    I suspect that they hope releasing the code will keep folks pinned down trying to figure
    it out.

    I noticed it was released without licence, so maybe some smart guy could put the code
    in a wiki where CA readers could add documentation and notes for understanding the code.
    Everybody working at the code alone on their own will be slow. A community effort
    is required.

    kinda like surface stations but with the code

  29. Mike Carney
    Posted Sep 8, 2007 at 8:43 AM | Permalink

    Way to go Steve! Yet more information has been placed in the public domain because of the efforts of you and others at Climate Audit. Finally, sufficient pressure was brought to bear on Hansen to make the information public. It would be nice to think it was Hansen’s scientific peers that brought that pressure on, but Hansen’s message implies otherwise since he wants “real” scientists to wait for a better organized version of the code. In addition, he caps off his message by addressing the scientific community to defend why he is releasing code — as if such a release of methodology is somehow a novel idea! He gives two good reasons for making the information public. Unfortunately those reasons have been valid for many years and don’t explain his change of heart now. So if it was not his peers, who did make him release the code?

  30. Louis Hissink
    Posted Sep 8, 2007 at 8:50 AM | Permalink

    #23

    Steve,

    Say that again?

    We have 2 geochemical surveys, say #A and #B. Survey #A was complete but survey #B, from transcription errors (missing pages etc.), was incomplete.

    Which version would I use? The complete one, hence #A.

    But I do see your point now. Hansen is concatenating two time-series data sets (which climate data have to be) by “biasing” one in terms of the other?

    That seems to be circular reasoning.

    Is that how you understand it?

  31. Louis Hissink
    Posted Sep 8, 2007 at 9:04 AM | Permalink

    #23

    Or is Hansen “manipulating” contemporary data sets?

    And then my comments above become superfluous.

  32. Mike Smith
    Posted Sep 8, 2007 at 9:19 AM | Permalink

    The source code may not reflect anything but the most current version of a given program or subroutine. Unless they are very good at configuration control, recovering the actual source may be at least as hard as the “crossword puzzle” approach.

    The code for H87 would be at least 20 years old! What are the chances that program and support elements remain intact? FWIW, Python’s popularity (it was first released in 1991) is fairly recent compared to say C or Fortran, and yet that’s in the mix.

  33. Steve McIntyre
    Posted Sep 8, 2007 at 9:21 AM | Permalink

    #28. The type of wiki that Mike Cassin has proposed StikiR would be appropriate. He’s trying to set up a site as a focus for that sort of activity and I support the enterprise. Given that there’s already a climateaudit “brand” and a lot of traffic here, I’d like to have a climateaudit wiki of this type and cross-post interesting things to the blog. I’ll mull it over.

  34. Steve McIntyre
    Posted Sep 8, 2007 at 9:26 AM | Permalink

    #30. No, you have two slightly inconsistent copies of one survey. Let’s say that both were copied by hand and that you’ve lost pages 1-3 from one copy and pages 5-8 from the other copy. On page 4, all the values but one are the same, except that one value in copy 2 was left out for some reason. Perhaps because coffee spilled on it early and it wasn’t re-copied.

    Hansen estimates the one missing value and then calculates a bias. You’re misunderstanding things because it is so improbable a method. But the existence of the improbable method is no longer a guess. It’s there in black and white.

  35. Posted Sep 8, 2007 at 9:40 AM | Permalink

    This is good news for anyone who cares about advancing the science studying climate change.

  36. steven mosher
    Posted Sep 8, 2007 at 9:53 AM | Permalink

    re 33.

    Yes I was thinking of Mikes work, but I think the GissTemp Wiki ( or whatever) should
    happen under the CA Brand.

    I think Dan Hughes, Mike C and John V ( if I left any out sorry) might have some good suggestions
    on how to structure something.

  37. Yancey Ward
    Posted Sep 8, 2007 at 10:00 AM | Permalink

    Congratulations are due to all that are working on this.

  38. Steve McIntyre
    Posted Sep 8, 2007 at 10:02 AM | Permalink

    I would LOVE a CA wiki. I think that surfacestations would probably benefit from more of a wiki format as well. I’d like to keep it on the same server to economize on costs. I can’t take time away from all the other things to learn how to do it myself. If someone can turnkey it, I’ll ensure that the blog is coordinated with it.

  39. Skip
    Posted Sep 8, 2007 at 10:02 AM | Permalink

    For people who want to look at the code, the relevant code first combines records that overlap by at least 4 records in:

    STEP1\comb_records.py

    And then combines the non-overlapping records in

    STEP1\comb_pieces.py

    It’s going to be difficult to spot rounding issues caused by integer math because, being Python, it’s not at all clear what data type a variable is at any given time. For example, some averaging code:

    def average(new_sums, new_wgts, new_data, years):
        for m in range(12):
            sums_row = new_sums[m]
            wgts_row = new_wgts[m]
            data_row = new_data[m]
            for n in range(years):
                wgt = wgts_row[n]
                if wgt == 0:
                    assert data_row[n] == BAD
                    continue
                data_row[n] = sums_row[n] / wgt

    As an aside, the code tag performs poorly here, because in python the indent is syntactically significant, and the code tag loses the indent. Is there a better tag to use?

    In any case, the last line there, what data type do we end up with? From inspection, wgt is definitely an integer, and sums_row[n] should be a float, but there’s not anything that ensures this here. If it’s ever an integer we’ll introduce a bias.
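
    One defensive rewrite (my own sketch, not the GISS code; BAD stands in for whatever missing-value sentinel the original defines) would be to force the division into floating point explicitly:

    BAD = 9999  # placeholder for the real code's missing-value sentinel

    def average(new_sums, new_wgts, new_data, years):
        for m in range(12):
            sums_row = new_sums[m]
            wgts_row = new_wgts[m]
            data_row = new_data[m]
            for n in range(years):
                wgt = wgts_row[n]
                if wgt == 0:
                    assert data_row[n] == BAD
                    continue
                # explicit cast: the result is a float even if the sums arrived as integers
                data_row[n] = float(sums_row[n]) / wgt

    If sums_row really is always a float this changes nothing, but it makes the intent unambiguous and rules out the truncation case.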

    I don’t currently have any unix or linux boxen running. I may have to try and get one working to see if I can get the whole process to actually function.

  40. steven mosher
    Posted Sep 8, 2007 at 10:09 AM | Permalink

    re 32

    The Python stuff is all in step 1. Combining records. Everything else is Fortran prolly F77,
    but Model is F90 I think, I’ve seen some references to F90 in the SH files

  41. Tom T
    Posted Sep 8, 2007 at 10:27 AM | Permalink

    Great work Steve

  42. Reid
    Posted Sep 8, 2007 at 10:30 AM | Permalink

    Re #29, Mike Carney asks “So if it was not his peers, who did make him release the code?”

    Perhaps NASA Chief Michael Griffin is getting back at Hansen. Hansen attacked Griffin earlier in the year for expressing skepticism.

  43. Leonard Herchen
    Posted Sep 8, 2007 at 10:31 AM | Permalink

    Great work Steve M and everyone else. The gift of accountability is one of the greatest gifts someone can give the public sphere.

    Hansen’s cover letter is quite arrogant for someone who’s had material errors pointed out already. His position seems to have been, we have to change the way the world works, but I won’t show you how I come up with that conclusion. This was unsustainable.

    Now he is a victim of nasty emails. I’m sure Steve M.’s emails were amongst the nastiest. You know what they say about pride and falls. My prediction: A bias around 1/3 of the total calculated warming will be revealed, but a warming trend will still be present as is consistent with more concrete observations such as glaciers, with more warming in the northern hemisphere.

  44. Molon Labe
    Posted Sep 8, 2007 at 10:31 AM | Permalink

    Re: 39

    You can get a Windows version of Python by installing the Cygwin development environment. Very easy.

    http://www.cygwin.com

    This is basically all the GNU software and other unix utilities ported to the Windows environment.

  45. Larry
    Posted Sep 8, 2007 at 10:47 AM | Permalink

    44, alternatively, you can get a Linux bootable CD, like Knoppix or Ubuntu. They even have versions now that can boot from a thumb drive, and keep all the files on the thumb, so you can work in a 100% unix environment without messing up your windows installation in any way.

  46. JerryB
    Posted Sep 8, 2007 at 10:47 AM | Permalink

    Congratulations Steve! Outstanding!

  47. Steve Moore
    Posted Sep 8, 2007 at 10:49 AM | Permalink

    The release was announced in a shall-we-say ungracious email…

    That’s a very gracious way of putting it, Steve.
    Me, I’d just call it “snotty”.

    GREAT WORK!

    Now, if I can remember where I put the KVM so I can hook up the Red Hat box…

  48. KDT
    Posted Sep 8, 2007 at 10:49 AM | Permalink

    Great work to everyone involved, and congratulations on the best possible outcome of the endeavor.

    PS the input temperature data are integers representing tenths of a degree C.
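
    So a stored value of 123 means 12.3 C, i.e. something like temp_c = raw_value / 10.0 on the way in (my inference, not a quote from the code), which is presumably why the integer-division question above matters.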

  49. JerryB
    Posted Sep 8, 2007 at 10:52 AM | Permalink

    There appear to be versions of Python for MS Windows, as well as other non-unix
    environments. http://www.python.org/download/

  50. Richard deSousa
    Posted Sep 8, 2007 at 10:57 AM | Permalink

    Congratulations, Steve M! Hansen is melting under the pressure and his status as a premier climate scientist is in jeopardy. His statistical methods are truly a comedy of errors.

  51. IanH
    Posted Sep 8, 2007 at 10:57 AM | Permalink

    #39 Python will preserve the type of the parameters passed in the case given, unless you either expressly multiply by 1.0 or cast to float.

    Great work Steve

  52. Kenneth Fritsch
    Posted Sep 8, 2007 at 10:59 AM | Permalink

    This must be a most satisfying success for Steve M. In my next post to the GISS threads I was going to ask whether Steve M was intentionally attempting to shame Hansen and his NASA cohorts into releasing the code. I also was going to comment on the lack of any significant public effort by those who have used the temperature series as part of their published climate analysis. While Hansen can argue that any small part of the puzzle does not in itself significantly affect the cherished global average temperature anomaly, we have a different story when many of these small parts start to crumble. Computer models look increasingly at regional anomalies and at small anomalies, and if there is even some smallish error it could affect the validation of the models.

    When the Hansen email states that, “but because of a recent flood of demands for the programs, they are being made available as is”, I would be guessing that perhaps the climate science community with a vested interest in these temperature series was putting pressure on Hansen. The royal Hansen was in character with, “…People interested in science may want to wait a week or two for a simplified version”, but if it were not for the people interested in science protesting, I would wonder whether Hansen would have reacted.

    When Hansen says “Another favorite target of those who would raise doubt about the reality of global warming is the lack of quality data from South America and Africa, a legitimate concern”, I think he would have preferred to write off all of the concern in these matters as coming from a Steve McIntyre who unrealistically doubts that any warming has occurred, when, of course, Steve has made it perfectly and publicly clear that he is not doubting that warming has occurred and that his interests are in the domains of doing and reporting the science right and puzzle solving. In my mind Steve M’s evenhanded approach did not allow Hansen and his cohorts to change the subject sufficiently to ignore the issue at hand.

    I see no conspiracy with the attempts to not reveal the code, but more likely some embarrassment for the sloppiness of calculations and methods from a procedure that probably grew like Topsy and did so because the political urgency was in getting the information out and correcting errors later. That NASA did not follow its own rules would not be surprising to most of us who have worked in or with the quality control areas of large organizations – public or private. In my estimation the customer pressures in private concerns are more direct and immediate than in public ones and thus provide a more efficient discovery of when an organization is talking the talk and not walking the walk.

  53. KDT
    Posted Sep 8, 2007 at 11:04 AM | Permalink

    Hail to the King
    For doing the right thing
    For later is better than never

    Long live the Jesters!

  54. Pay No Attention To The Man Behind The Curtain
    Posted Sep 8, 2007 at 11:08 AM | Permalink

    What a stunning victory! Congratulations Steve.

    I can see the conversation now …

    Congresscritter (angry): Why isn’t this being released?
    Hansen: Well, um … it’s much too complex … not of interest … bluff … bluster
    Congresscritter (angry): Do you not work for a public agency?
    Hansen (resigned): Sanitized, I mean, simplified version in two weeks?
    Congresscritter (angry): Now. Release it right now.

    Next up, Hansen starts sacrificing his subordinates as the errors are revealed.

  55. John Goetz
    Posted Sep 8, 2007 at 11:13 AM | Permalink

    I like the idea of a Wiki on both this site and surfacestations. In particular, I’d like to see a Wiki dedicated to adding comments to the newly posted code.

  56. Larry
    Posted Sep 8, 2007 at 11:21 AM | Permalink

    Would it be out of scope to have a wiki entry to summarize in very basic outline form what the IPCC reports do and don’t say, and how the basic physics is supposed to work? Over on unthreaded #19, it’s pretty clear that that’s not universally understood. Even if it’s just a link to another site, I think there’s some value in getting everyone on the same page, so we don’t have all of this “greenhouse violates the second law” or “0.3% can’t do anything” kind of talk, which frankly, is embarrassing.

  57. Scott-in-WA
    Posted Sep 8, 2007 at 11:55 AM | Permalink

    Congratuations to Steve!

    However, a cautionary note… From an information management / software configuration management perspective, having the code by itself, or having the data sets by themselves, is not enough.

    You have to know which code was used against which datasets, which means you have to institute an information configuration management program as well as a software configuration management program — one which marries data sets to software and vice-versa, and which identifies both the software and the data sets as “record material in electronic format.”

    Unless NASA is using a disciplined process from start to finish in configuration managing their software, their climate data, and their processed information and reports, essentially as one set of related electronic records for any individual process run, then they are not implementing an appropriate information management philosophy in accordance with current US Government records management policies.

  58. paul graham
    Posted Sep 8, 2007 at 12:08 PM | Permalink

    FREE THE HADLEY THREE!!!!

  59. Posted Sep 8, 2007 at 12:15 PM | Permalink

    The problem Hansen has just admitted to is this: How can it be Global when there is no warming in South America, Africa or the US? If there is only warming in Europe and Asia, then it is regional North Eastern Hemispheric Warming, not Global Warming. If the climatic change is regional, then only regional explanations can be considered, not global ones. Claiming CO2 as a cause of regional warming is automatically discounted. If one were to blame man for a regional climate change, the most reasonable culprit would be land use. There you can have your UHI effect and heat it too!

  60. Robert Wood
    Posted Sep 8, 2007 at 12:25 PM | Permalink

    Hehe, Hansen has a boss – there is a god 🙂

    Can you imagine him being pulled into his boss’s office and told to clear this matter up?

  61. DeWitt Payne
    Posted Sep 8, 2007 at 12:35 PM | Permalink

    57

    Would it be out of scope to have a wiki entry to summarize in very basic outline form what the IPCC reports do and don’t say, and how the basic physics is supposed to work?

    I’ll second that.

  62. Bob Koss
    Posted Sep 8, 2007 at 12:40 PM | Permalink

    While reading gistemp.txt I found that this part suggests a bias in their thought process.

    This derived error bar only addressed the error due to incomplete spatial
    coverage of measurements. As there are other potential sources of error, such
    as urban warming near meteorological stations, etc., many other methods have
    been used to verify the approximate magnitude of inferred global warming.
    These methods include inference of surface temperature change from vertical
    temperature profiles in the ground (bore holes) at many sites around the
    world, rate of glacier retreat at many locations, and studies by several
    groups of the effect of urban and other local human influences on the global
    temperature record. All of these yield consistent estimates of the approximate
    magnitude of global warming, which has now increased to about twice the
    magnitude that we reported in 1981. snip…

    Apparently glacier advance doesn’t figure into their analysis.

  63. Martin Å
    Posted Sep 8, 2007 at 12:50 PM | Permalink

    I started to look into STEP0 in the code. It seems to be Fortran 95 and not 77, since I could compile all the Fortran files there with the GNU Fortran 95 compiler but not the 77 ditto. I have no experience of Fortran though. I also had to install Korn Shell, since the shell scripts are in ksh. I run Linux.

  64. Harry Eagar
    Posted Sep 8, 2007 at 12:51 PM | Permalink

    Hmmm, so it seems that NOTHING can change Hansen’s curves.

    Sorta OT but I’m curious. You are Canadian are you not, Steve? Can furriners file FOI requests?

  65. Posted Sep 8, 2007 at 12:56 PM | Permalink

    Harry Eagar, for what it’s worth, it says here that Canadians can file American FOIA requests (search in page for “Canadian”).

  66. Wayne Holder
    Posted Sep 8, 2007 at 1:14 PM | Permalink

    The issue of best practices for code is a concern to me (#5) and even more so in light of the fact that most of the files seem to have been edited (or, hopefully, only copies edited) in the last few days. I manage a large software project and one of the key standards nearly everyone in the software industry follows is the use of version control to track and control changes to code. I can only hope that this kind of audit trail exists for Hansen’s code, as it would be sad to find out that these recent edits have erased any useful clues as to how his code really works.

  67. D. Patterson
    Posted Sep 8, 2007 at 1:21 PM | Permalink

    Hansen wrote, “People interested in science may want to wait a week or two for a simplified version.” Hansen appears to imply the original and complex version doesn’t contain enough scientific value for “People interested in science….” Was this an unconscious and revealing slip of the tongue?

  68. steven mosher
    Posted Sep 8, 2007 at 1:28 PM | Permalink

    I’ve looked around for a wiki plugin for WordPress. On the forums there are
    requests for such a capability, but it appears that nothing is solid. I have
    one more lead to track down. Failing that, there are these options.

    1. Somebody starts a Wiki. of course anyone is free to do that, but I think we agree it needs
    to fall under the ClimateAudit Brand. We all know what that stands for and it
    should not encourage crackpottery.

    2. There are some threaded comment ( nested comment) plug-ins that might make discussions
    a bit more structured…

  69. K
    Posted Sep 8, 2007 at 1:29 PM | Permalink

    Several comments – from Stan, Scott, et al – illustrate why there may still be difficulties. But this is definite progress.

    The recent edits may have been to remove or revise comments. That may speed or delay comprehension. Removing all comments would certainly be a scorched earth 🙂 policy. But we have no knowledge of intentions and should not infer any.

    Changes, if any, to operational code are also unknown at the moment. So it is vital to see if the now-public code produces exactly what Hansen said it produced years ago. But recreating the testing may not be possible.

    Such testing requires logs, the code, the data used for every run, the original compilers, and perhaps the original operating system (Windows, Unix, ?). It may also require an obsolete version of software; Fortran, SAS, etc. constantly change.

    In theory, a parallel reproduction of Hansen’s results should not depend upon the OS or the language. And the data used should be an exact copy of the original readings from sensors around the world. In practice this may be a brutal trek with no map.

    I don’t understand the complaint that NASA and Hansen’s people haven’t had time.

  70. Posted Sep 8, 2007 at 1:42 PM | Permalink

    This was a team effort, and congratulations to all, especially Steve McIntyre. I agree with Mosher, post thanks on RC.

    Perhaps we can now fully understand why some stations that are in “pristine” condition, such as Walhalla, SC, with no obvious microsite biases, get “adjusted” by Hansen’s techniques. Shouldn’t good data stand on its own? Perhaps Walhalla would be a good first case study.

    http://gallery.surfacestations.org/main.php?g2_itemId=5405

    I got an email from one of the http://www.surfacestation.org volunteers, Chris Dunn, that sums up the problem pretty well:

    I downloaded the raw and adjusted text versions of the GISS data for Walhalla, and did a simple subtraction of annual figures: adjusted minus raw. It’s clear that they created a step-up over time. They started by subtracting 0.3 from the early record, then progressively reduced this amount by 0.1 degree a couple of times until 1990, after which there were no adjustments made. This artificial “stepping down” of the historical temperature record as you go back in time induces a false upward trend to the data where, in my opinion, one shouldn’t be. Consider that this is a rural site and the CRS was unmoved, and in the middle of a large, empty and level field in a relatively static, isolated setting from at least 1916 to 2000. There is just no justification for this whatsoever when looking at the site and the general area.
    Of course, this “step” procedure is what McIntyre et. al. have been documenting over on CA for some time, now, but having visited the Walhalla site personally and seeing how pristine it was during that period, I am just shocked to see how the data have been so clearly & systematically manipulated. It seems if they can’t find an upward trend, they simply create one. It’s an outrage to an average citizen such as myself, especially when I think of the good people (private observers, among others) who dedicated their time every day for so long to create an accurate record. That’s the real rub as I see it – the arrogant disregard of honest people who have put so much of their lives into it. I truly see just how important this work is that is being done by you and the folks over at Climate Audit.

    I’m considering writing my congressmen, but will wait to see what the results are when McIntyre is done.

    Now we’ll have a chance to understand this firsthand instead of having to reverse engineer the method. Perhaps we’ll go down this path and it will all be perfectly valid, in which case we have no argument. But independent verification is one of the basic tenets of science, and this has been long overdue.
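
    For anyone who wants to repeat Chris’s adjusted-minus-raw check on another station, here is a minimal sketch (the file names and the two-column “year value” layout are assumptions for illustration; the actual GISS station pages need their own parsing):

    def read_annual(path):
        """Read 'year value' lines into a dict, skipping anything unparseable."""
        data = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                try:
                    data[int(parts[0])] = float(parts[1])
                except (IndexError, ValueError):
                    continue
        return data

    raw = read_annual("walhalla_raw.txt")            # hypothetical file names
    adjusted = read_annual("walhalla_adjusted.txt")

    for year in sorted(set(raw) & set(adjusted)):
        # any step-like offset between the two versions shows up year by year
        print(year, round(adjusted[year] - raw[year], 2))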

  71. Allen C
    Posted Sep 8, 2007 at 1:45 PM | Permalink

    Of all the AGW websites I’ve kept track of over the years this is clearly the best and most interesting. I’m a retired geology professor who never had a problem with climate change (I worked at a midwestern university in a location that up until around 12,000 years ago was covered in several thousand feet of Pleistocene ice – how could I not believe in climate change!) but never went along with the catastrophic global warming scenario. I told my classes shortly before I retired that I thought global warming would, by 2010, be considered a bad scientific joke with many, many people looking like fools. Nothing has happened to change my assessment. What seems to be happening is that, because of work by people like Steve Mc, the data behind it all is, at best, going to be untrustworthy and suspect (and all work that relied on the data) and, at worst, proven to be simply crap used to push a particular agenda. What I keep trying to imagine is what the headlines will be in the MSM like the NYT when it’s clear that we’ve had a worldwide panic and gigantic scam over scientific nonsense.

  72. Posted Sep 8, 2007 at 1:51 PM | Permalink

    RE70: I’ve equipped the current server hosting CA to be able to run MediaWiki, though I’m hesitant to put all the eggs in one basket on one machine.

    http://www.mediawiki.org/wiki/MediaWiki

    I could configure another easily if there’s enough in the tip jar regularly to handle another monthly fee.

  73. Don Keiller
    Posted Sep 8, 2007 at 1:55 PM | Permalink

    Well done Steve. If it were not for you… people would still believe the Hockey Stick.
    I’m going to just sit back and let those with better knowledge than I deconstruct Hansen’s spaghetti.

  74. Stephen Richards
    Posted Sep 8, 2007 at 1:56 PM | Permalink

    Steve
    As I said before, BRILLIANT!!! BUT having been a software project troubleshooter (consultant type) in the last few years of my working life, I suspect that this is far from settled. Already you have noted the first signs of “the consultant is coming”. Hurried comment entries, offers of a more appropriate system in “a few weeks”, etc. I bet what you will not find is any form of software engineering documentation which was written at the appropriate time. I sense this project is very, very like many I have had to sort. No standards, no controls, no version management, no release documentation etc etc. I bet it was started on the back of an envelope and the software was patched and patched again. Sorry I can’t help with this one but there are many brilliant correspondents on this site who can.

  75. hans kelp
    Posted Sep 8, 2007 at 2:20 PM | Permalink

    I have been watching Climate Audit from nearly the first day it went onto the internet. From the very first read I felt that the way Steve McIntyre ran his site (TCO, stop banging your head into the wall now, will ya!!) would be making a difference. As everyone knows by now, Steve’s work has gained considerable respect throughout the community; just look at the very fine gentlemen contributing to this site, including a lot of respected scientists. It follows that the good work ought to be supported, and I will gladly tip the jar once again to help make it possible to keep up the good process. Thank you. To celebrate another day of excellent work I will go and get me a Tuborg….. (ah, a couple then)

  76. Murray Duffin
    Posted Sep 8, 2007 at 2:29 PM | Permalink

    Re: Anthony, what is the monthly fee? Murray

  77. bernie
    Posted Sep 8, 2007 at 2:32 PM | Permalink

    I just visited RC. Seems that the release of the code is not news fit to print!! I couldn’t find a reference anywhere.

  78. BarryW
    Posted Sep 8, 2007 at 2:40 PM | Permalink

    I think there were code changes that showed up in the “Y2K” fiasco beyond fixing that. Data points previous to 2000 changed slightly as some have noted on CA. So the code is probably a moving target. Not only is it “simplified” but they’re going to play “oh, but we fixed that, we don’t use that code anymore”, when a problem is found.

  79. IL
    Posted Sep 8, 2007 at 2:47 PM | Permalink

    I don’t generally like ‘me too’ posts, but have to add congratulations to Steve, persistence paid off. I’ve been extremely impressed by how far Steve Mc and others managed to get, even though they had to slog through reverse engineering the code. So despite all the difficulties many have mentioned, I am sure much more progress will be made in understanding and checking what has been done.

    On the question of ‘where is Waldo’ in all this – in the images of the Earth shown in the Hansen document referenced in the main post, some things are clear. The warming is almost totally in the Canadian far north and generally at the Arctic Circle. The other main warming is on the Antarctic peninsula. You will note that because of the map projection, these look like huge areas, so the projection really helps the PR. In reality of course, the Antarctic peninsula is not much larger than the UK or Baja California, for example. Note also that Antarctica is blank – no doubt for the excellent reason that what is displayed is 1900–present day anomalies and there are no Antarctic records that go back that far. If more recent climate data were included, however (say the last 50 years, since that is the critical period), there would be a great big blue/purple band at the bottom of the projection which would give a completely different PR impression to the overall global result.

  80. Dave Dardinger
    Posted Sep 8, 2007 at 2:54 PM | Permalink

    re: #80

    It’s not going to be that easy for Hansen et al. to make any substantial changes. They’ve seen how easy it is to be caught out even without the code. To now try making changes after the fact just opens them up to much worse problems. I think they know this and are going to / have only made changes to documentation and the like.

    BTW, how do I, with just a simple windows box, convert the .tar thingee into a simple text thingee I can look at? I think I’ve done it before but I don’t remember how. I’m not interested in actually compiling and running the programs, I just want to look at the text of the programs / shell scripts. Or should I just wait till Steve or someone posts the material as text here?

  81. JSB
    Posted Sep 8, 2007 at 3:07 PM | Permalink

    I’m sure I’m not alone, as there are many here who have been requesting the release of Dr. Hansen’s source code. Although I’ve been overseas much of late, I’ve made a point to email Sen. Inhofe and Marc Morano @ EPW routinely to insist that something be done about Dr. Hansen’s reluctance to make public that which is the property of the public. I’ve included excerpts of this blog demonstrating the utterly fantastic efforts involved in attaining transparency in the science of climate change. Mr. McIntyre and all who are active on this site will no doubt deserve recognition for bringing long needed light to such a world shaping issue.

  82. JerryB
    Posted Sep 8, 2007 at 3:21 PM | Permalink

    Re #82,

    Dave,

    7-zip from http://www.7-zip.org/ handled it fine.

  83. Ryan Roberts
    Posted Sep 8, 2007 at 3:22 PM | Permalink

    Winrar will handle tar archives. Use something like notepad++ to view them, things are much plainer with syntax highlighting. An installation of cygwin might be enough to run this stuff on a windows machine too.
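
    If Python itself is already installed on the Windows box, the standard library will also unpack the archive without any extra tools; a quick sketch, with the archive name assumed:

    import tarfile

    # mode "r" transparently handles plain or gzip-compressed tarballs
    archive = tarfile.open("gistemp_sources.tar.gz", "r")   # hypothetical file name
    archive.extractall("gistemp_src")                       # the Fortran, Python and shell files land as plain text
    archive.close()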

  84. Posted Sep 8, 2007 at 3:42 PM | Permalink

    RE78 Murray it’s about $80 per month, but that is Steve McIntyre’s decision, not mine, to add a server for a Wiki. I don’t handle the cash, just the hardware and software to keep CA and surfacestations.org running.

  85. jcspe
    Posted Sep 8, 2007 at 3:42 PM | Permalink

    These are a combination of subroutines written over the past few decades by Sergej Lebedeff, Jay Glascoe, and Reto.

    Anyone else catch this? A more competent manager would have owned the responsibility for the code no matter who wrote it. He could have accomplished the same thing without appearing to be a weasel by crafting a sentence that gave contact names for questions about the code.

    If his people feel like they are being thrown under the bus they will begin to undermine the “royalty” and his house of cards will not survive long. The saga of Dr. Hansen may be very interesting to watch from the cheap seats soon.

  86. steven mosher
    Posted Sep 8, 2007 at 3:46 PM | Permalink

    something like THIS

    http://climateaudit.wetpaint.com/

  87. Demesure
    Posted Sep 8, 2007 at 3:47 PM | Permalink

    Congrats Steve!
    Just a look in “the source” and I find in the file list.of.stations.someperiod.removed.txt
    some familiar names like Lodi, Marysville, Lake Spaulding.
    Hehe, congrats to Anthony Watts also.

  88. steven mosher
    Posted Sep 8, 2007 at 3:56 PM | Permalink

    RE 68.

    They have just started to practice version control. Understand, this is research code,
    written by key guys. I’m not excusing the practice, just explaining it. One or two guys own
    the code and know it inside and out.

    My heart goes out to Reto. He’s been the stand up guy in all of this.

  89. Douglas Hoyt
    Posted Sep 8, 2007 at 3:57 PM | Permalink

    I haven’t looked at the code, but wonder if he includes a file for the population of the sites. In particular, I wonder if the depopulation error noticed for NYC shows up in other cities?

  90. Not sure
    Posted Sep 8, 2007 at 4:12 PM | Permalink

    From gistemp.txt in the sources:

    They (Hansen & Lebedeff, 87) obtained quantitative estimates of the error in annual and 5-year mean temperature change by sampling at station locations a spatially complete data set of a long run of a global climate model, which was shown to have realistic spatial and temporal variability. This derived error bar only addressed the error due to incomplete spatial coverage of measurements.

    Did I read that right? Does it mean that KDT nailed it when he said:

    The thing I notice is that the values used don’t seem unreasonable, but they’re also not constants. I’m guessing they come from some numerical model (a climate model?)…But I wonder if a similar step might be taken when considering missing data in these records under discussion. If they have that data at hand for these graphs, they have it for examining missing months at bias time.

    Impressive.

  91. steven mosher
    Posted Sep 8, 2007 at 4:17 PM | Permalink

    RE 89.

    In Hansen 2001, Hansen described how he “amended” 5 northern California stations
    because they showed cooling.

    From memory these stations were Willows, Electra, Lake Spaulding, I FORGOT, and Crater
    Lake NPS HQ.

    ok… here is Hansen 2001

    The strong cooling that exists in the unlit station data in the northern California region is not found in either
    the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same
    period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the
    possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this
    region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with
    neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for
    Electra Ph, data prior of 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these
    apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not
    made, the 100-year temperature change in the United States would be reduced by 0.01°C.

    I looked at Lake Spaulding in the period of interest. I would agree with Hansen. The data is messed
    up, to use a technical term. It cools relative to other sites (like Tahoe City) in a nearly
    linear fashion. That is, prior to 1927 or so, Lake Spaulding undergoes a nearly linear (I think my r2
    was above .9) cooling. This might be indicative of an instrument failing. After 1927 or so it tracks
    (correlates) with the nearby stations.

    Crater Lake? I disagree. I think he didn’t like how
    cold it was there. The chart is not weird, it’s just cold. I compared Crater Lake to surrounding
    sites and if you adjust for lapse rates (Crater Lake is high altitude) then the time series of nearby sites
    are in lockstep with Crater Lake.

    Note: the code and the text don’t document this exclusion principle.

    It’s ad hoc. I looked at 2 of the 5 cases. 1 made sense, the other was a Hansenism.
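
    For what it’s worth, the lapse-rate comparison amounts to something like this rough sketch (the ~6.5 C per km figure is the standard environmental lapse rate, and the elevations and temperatures below are placeholders, not the real station values):

    LAPSE_RATE_C_PER_KM = 6.5   # standard environmental lapse rate

    def sea_level_equivalent(temp_c, elevation_m):
        # shift a station mean to a common sea-level reference before comparing neighbours
        return temp_c + LAPSE_RATE_C_PER_KM * (elevation_m / 1000.0)

    # placeholder annual means and elevations, for illustration only
    high_site = sea_level_equivalent(3.5, 1950)    # a high-altitude site like Crater Lake
    low_site  = sea_level_equivalent(11.0, 600)    # a lower-elevation neighbour
    print(high_site, low_site)                     # once altitude is accounted for, the series can be compared directly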

  92. Anthony Watts
    Posted Sep 8, 2007 at 4:17 PM | Permalink

    RE88 Mosh, I hear you, and wetpaint.com offers a free alternative, but just look at the kind of ads on the right that popped up in my first viewing of your Wiki setup there for the first entry you posted:

    Today on Wetpaint.com:
    – A.G.R. Paranormal Investigation Team
    – Mythological Creatures and Beasts Wiki
    – Medieval Crime and Punishment Wiki
    – Witchie-poo: A Wiccan wiki

    We want credibility for a climate Wiki, and I don’t think Wiccan, Mythical Beasts, and Paranormal is going to help that. I doubt we can control the ad content at wetpaint.com, so it is a risk using this service.

  93. scp
    Posted Sep 8, 2007 at 4:29 PM | Permalink

    Does anyone else find this excerpt just a little troubling? From the Hansen e-mail (my italics)

    For example, in 2005 we were the only group initially reporting 2005 as being, on global mean, the warmest year in the record. We would not have obtained that result without our method of extrapolating estimates of anomalies out to distances of 1200 km from the nearest station.

    … and that’s the part where he avoids going into more and more detail about the ranking of individual years. ; -)

  94. John Baltutis
    Posted Sep 8, 2007 at 4:33 PM | Permalink

    Cheers from southern California. Adding to the chorus: congratulations!! Well done. Now, on to the difficult part of dissecting the code.

  95. EW
    Posted Sep 8, 2007 at 4:39 PM | Permalink

    #95

    So what was the purpose of their exercise? To nail 2005 as the warmest year? Before anyone else? I’m rather confused.

  96. Posted Sep 8, 2007 at 4:47 PM | Permalink

    Re: Wiki

    I set up DokuWiki for myself to write documentation and it was easy (in Gentoo Linux anyway). The only thing I don’t know how to do is set it up so that people can register accounts, but that can’t be too hard. I did enable security and create myself an account but I think there’s a way to let people create them via a web/e-mail interface. I think it could be used for something other than documentation. If all you want is for anyone to be able to edit it, setting it up is trivial.

  97. Posted Sep 8, 2007 at 4:49 PM | Permalink

    Oh, I forgot to mention, DokuWiki runs on PHP/Apache. I think that is the same software this blog runs on but I’m not 100% sure. If so, there should be little trouble installing it, it’s just a question of putting the files in the right place and editing some configuration.

  98. Ian B
    Posted Sep 8, 2007 at 4:52 PM | Permalink

    Re #94, #88: now that it exists, the Climate Audit wiki at wetpaint.com will turn up on Google pretty quickly unless somebody deletes it pronto…

  99. D. Patterson
    Posted Sep 8, 2007 at 4:54 PM | Permalink

    Re: #93

    If Hansen et al had bothered to look at the historical record, they would have found that the Spaulding Lake location’s temperatures correlate very closely with the clear cutting of the forest in the bowl shaped valley to make way for the construction of a hydroelectric dam and reservoir. The cooler temperatures earlier in the record represent the cooling effect of the forest canopy. The temperature underwent a linear increase beginning in 1912 as the linear decrease of the forest was accomplished within the basin where the dams, reservoirs, and canals were under construction. By 1927 the construction projects and infilling of the reservoir were being completed, so the temperatures also stabilized in response to the new environment.

  100. Posted Sep 8, 2007 at 5:05 PM | Permalink

    I just set up a wiki on my web server, took about 5 minutes.

    I’ll install a couple of plug-ins, the orphans one is extremely useful.

  101. Martin Å
    Posted Sep 8, 2007 at 5:05 PM | Permalink

    OK, the good old <. Another try:

    I guess this part in STEP1/comb_records.py should clarify some of the things discussed in the crossword entries.
    This function (get_longest_overlap) is called to compare different records from the same station.

    “new_sums”, “new_wgts” and “new_data” contain the data from the data set that was considered best. “begin” is the earliest start year in the data sets and “years” is the total number of years covered by all the data sets. “records” is a data structure containing the rest of the data sets from the station.


    def get_longest_overlap(new_sums, new_wgts, new_data, begin, years, records):
        end = begin + years - 1
        average(new_sums, new_wgts, new_data, years)
        mon = monthlydata.new(new_data, BAD)
        ann_mean, ann_anoms = mon.annual()
        overlap = 0
        length = 0
        for rec_id, record in records.items():
            rec_ann_anoms = record['ann_anoms']
            rec_ann_mean = record['ann_mean']
            rec_years = record['years']
            rec_begin = record['dict']['begin']
            sum = wgt = 0
            for n in range(rec_years):
                rec_anom = rec_ann_anoms[n]
                if abs(rec_anom - BAD) < 0.1:
                    continue
                year = n + rec_begin
                anom = ann_anoms[year - begin]
                if abs(anom - BAD) < 0.1:
                    continue
                wgt = wgt + 1
                sum = sum + (rec_ann_mean + rec_anom) - (ann_mean + anom)
            if wgt < MIN_OVERLAP:
                continue
            if wgt < overlap:
                continue
            overlap = wgt
            diff = sum / wgt
            best_id = rec_id
            best_record = record
        if overlap < MIN_OVERLAP:
            return 0, 0, BAD
        return best_record, best_id, diff

    After line 3 “new_data” should contain the temperature data array. The “average” function is NOT an average over time. Instead, whenever new data is incorporated in the series it is added to the new_sums array and new_wgts is increased by 1 for the corresponding dates. The “average” function then merely produces the “new_data” array by dividing “new_sums” by “new_wgts” for every entry. I guess this is a way to avoid floating point until the last calculation.
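
    A minimal sketch of that bookkeeping, for illustration only (this is not the GISS function, just the sums/weights idea described above):

    BAD = 999.9

    def average(new_sums, new_wgts, new_data, n):
        # Element-wise division of accumulated sums by accumulated weights;
        # slots that never received any data keep the BAD flag.
        for i in range(n):
            if new_wgts[i] > 0:
                new_data[i] = new_sums[i] / new_wgts[i]
            else:
                new_data[i] = BAD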

    The lines:

    if abs(rec_anom - BAD) < 0.1:
        continue
    year = n + rec_begin
    anom = ann_anoms[year - begin]
    if abs(anom - BAD) < 0.1:
        continue
    wgt = wgt + 1
    sum = sum + (rec_ann_mean + rec_anom) - (ann_mean + anom)

    indicate that values marked as bad (BAD = 999.9) in either of the data sets cause that year to be discarded from the calculation of the difference. This seems to be in contrast to what was concluded in earlier threads. But there is a lot more code, of course.
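
    To make the BAD handling concrete, a toy example (made-up anomalies, simplified to compare anomalies only rather than mean plus anomaly as the real code does): years where either record is flagged BAD drop out, and the offset is the mean difference over the remaining overlap.

    BAD = 999.9
    a = [0.1, BAD, 0.3, 0.2]   # annual anomalies of the record being built
    b = [0.4, 0.5, BAD, 0.6]   # annual anomalies of the candidate record

    # Keep only years where neither value carries the BAD flag
    pairs = [(x, y) for x, y in zip(a, b) if abs(x - BAD) >= 0.1 and abs(y - BAD) >= 0.1]
    offset = sum(y - x for x, y in pairs) / len(pairs)
    print(len(pairs), round(offset, 2))  # 2 overlapping years, offset 0.35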

  102. Posted Sep 8, 2007 at 5:08 PM | Permalink

    RE102, I appreciate the interest in Wikis, but this should be a reasoned decision based on goals, methods, and content. And Steve McIntyre should be part of the discussion. He’s offline at the moment so let’s wait to hear from him before anyone fires up any Wikis with the ClimateAudit name attached.

  103. KDT
    Posted Sep 8, 2007 at 5:08 PM | Permalink

    #92 I think they’re talking about foundational support for their method there instead of the method itself. Some of my theories were pretty far off base. But close enough for government work, apparently.

    The key to this outcome is Steve’s years of effort, his supporters, and everyone who has joined his call for openness in climate science. Maybe the dominoes will all fall soon.

    At least in this case, there will no longer be those who vocally cast doubt on the results based on secrecy alone. That’s good for everybody.

  104. Martin Å
    Posted Sep 8, 2007 at 5:11 PM | Permalink

    #103 corrected:

    It seems it is the annual mean and annual anomaly that are compared. These probably exist even though some monthly data is missing in one of the data sets. So a fictive difference is produced, which is not in contrast to earlier findings.

  105. BarryW
    Posted Sep 8, 2007 at 5:23 PM | Permalink

    Re #93

    Except Hansen said in reference to S. America and Africa:

    Our procedure does not throw out data because it looks unrealistic, as that would be subjective.

  106. Armand MacMurray
    Posted Sep 8, 2007 at 5:26 PM | Permalink

    Re:#82 and others wanting to un-archive the download in Windows:
    The excellent WinZip program (freely available as a nagware trial version) uncompresses the .gz without complaint and knows about tar archives.

  107. Posted Sep 8, 2007 at 5:29 PM | Permalink

    Anthony : I set up that Wiki for myself, nothing related to CA, I just thought I’d demonstrate that the one I chose is easy to set up and let people play around with it a bit in case they want to see what it’s like.

    Of course I will wait to see what Mr. McIntyre says. I just thought it’d be nice if we could see some of the options.

  108. Erik Ramberg
    Posted Sep 8, 2007 at 5:37 PM | Permalink

    I’m a working scientist (physicist, not climatologist) and have been following the Climate Audit/Real Climate/Hansen saga for some time now. I must say that I am stunned at the strange view of the current status of climate science that most of you have.

    I think it is fairly safe to say that the majority of posters are global warming skeptics. While being a skeptic is not a bad thing per se, there comes a time when physical evidence paints a forceful picture that has to be faced realistically. There is evidence for global warming from balloon radiosonde data, satellite data, melting sea ice, glacier retreats, sea level rise, animal and plant migrations and from land monitoring stations.

    To cap it off, just by using the known absorption spectrum of carbon dioxide, it is a simple calculation to derive the approximate temperature response of the Earth to the undeniably anthropogenic increase in this gas in the atmosphere. Arrhenius got it roughly correct more than 100 years ago.
    The first level physics is pretty straightforward. Real Climate has the links to the equations, if you are brave enough to read that web site.

    Normally, scientists do not release their software or raw data to the public, for very good reasons. The emotional attitudes of critics displayed in this thread show how bad things can get. I’ve read the email from Hansen. His announcement of the release of his code – a major concession – is relatively gracious. For those of you who don’t allow him to feel irked by his critics, please turn the spotlight on yourself. Let’s see your reaction to this post. Will it be gracious?

    Come on, people. It is apparent you have the brains. I don’t doubt you will find some tenth-of-a-degree discrepancies, due to this or that interpretation of a subset of the data, perhaps two tenths of a degree. I’ll allow that it is important to get this analysis right and you are doing something worthy. However, the “you know what” is going to hit the fan very hard in the next two decades. What are you going to do about it?

  109. Follow the Money
    Posted Sep 8, 2007 at 5:42 PM | Permalink

    scp #95 –

    You ask if we find the following troubling-

    For example, in 2005 we were the only group initially reporting 2005 as being, on global mean, the warmest year in the record. We would not have obtained that result without our method of extrapolating estimates of anomalies out to distances of 1200 km from the nearest station.

    Troubling? It’s hilarious! And he talks about this openly? What in-the-box group blindness. This should be subject to a Congressional investigation.

  110. Posted Sep 8, 2007 at 5:47 PM | Permalink

    “Normally, scientists do not release their software or raw data to the public, for very good reasons.”

    In a form of science which seems to be mostly based on statistics, without the software or raw data, how can the study be independently reproduced? And without it being independently reproduced, how can it be science?

    Is the scientific method, as I was taught in high school, now somehow obsolete?

  111. Posted Sep 8, 2007 at 5:47 PM | Permalink

    101
    D. Patterson says:
    September 8th, 2007 at 4:54 pm

    I tend to trust the “old” data as it is difficult to understand how a sealed-in-glass, liquid thermometer could go south without the operator knowing.

  112. steven mosher
    Posted Sep 8, 2007 at 5:48 PM | Permalink

    RE 71.

    There is a world of difference between research code (which GISTEMP and ModelE are) and
    production code. I’m not saying the difference is justified. There just is.

  113. windansea
    Posted Sep 8, 2007 at 5:52 PM | Permalink

    Erik Ramberg says:

    welcome to the jungle

  114. PeterS
    Posted Sep 8, 2007 at 5:52 PM | Permalink

    @Anthony Watt

    RE102, I appreciate the interest in Wiki’s, but this should be a reasoned decision based on goals, methods, and content. And Steve McIntyre should be part of the discussion. He’s offline at the moment so let’s wait to hear from him before anyone fires up any Wiki’s with the ClimateAudit name attached.

    Absolutely. And no ads, no pop-ups, no ‘sponsored by…’, no ‘visit our other wikis…’, etc. It will be WELL WORTH a small monthly outlay to do it properly. If any professional graphic design is needed, let me know – it’s the least I can contribute (and probably the only thing) to this very vital project. Congrats to all.

  115. bernie
    Posted Sep 8, 2007 at 5:56 PM | Permalink

    I was very polite at RC, but they nixed my message on the Friday Roundup. If the release doesn’t qualify as a “round up” I wonder what does?

  116. steven mosher
    Posted Sep 8, 2007 at 5:59 PM | Permalink

    RE 94.

    YA, I basically wanted to see how the junk worked. I’m not too happy with all the crap
    you pointed out. But It was dead easy to start.

    Principles.

    1. The Wikis should flow from the blog. By that I mean the wikis should be for long term projects
    and dedicated folks..
    2. The wikis should enhance the blog brand ( the wetpaint ads detract)
    3. By invitation only.

    The other packages I looked at were free to start but then $.

    Nevertheless, a project like this, without organization of some sort, will soon be a thread of
    300 comments and harder to follow than the code.

    I’ll have a look at what you suggested..

  117. Posted Sep 8, 2007 at 6:00 PM | Permalink

    Well done, Steve… What could I tell you, but… CONGRATULATIONS!

    BTW, I’m still waiting for information on the liability of Linares MS.

  118. windansea
    Posted Sep 8, 2007 at 6:01 PM | Permalink

    I was very polite at RC, but they nixed my message on the Friday Roundup. If the release doesn’t qualify as a “round up” I wonder what does?

    Perhaps Gavin is pondering a river in Egypt.

  119. KDT
    Posted Sep 8, 2007 at 6:06 PM | Permalink

    #119 Be patient, it takes some time to build up the required spin.

  120. Follow the Money
    Posted Sep 8, 2007 at 6:09 PM | Permalink

    “Let’s see your reaction to this post. Will it be gracious?”

    Tough one. An ungracious reply to your post would be asymmetrical because its tone is not ungracious but merely patronizing. I suppose the symmetrical response would be to affect some comment in the lingo of the youth of today. Instead I will assist you and point out your pro-co2-based-aagw list fails to include “advancing glaciers.” Yes, not just retreating ones but advancing ones also are evidence of aagw. Explains Greenland.

  121. steven mosher
    Posted Sep 8, 2007 at 6:16 PM | Permalink

    RE 100.

    Content is gone.

  122. bernie
    Posted Sep 8, 2007 at 6:16 PM | Permalink

    One final note, then I have to be sociable. Is there a way we can organize to focus on different things? The release of the code is great but it is pushing the work of looking closely for Waldo in particular locations to the back of the queue. If that is OK with SteveMc then fine, but I have no Fortran skills and had not even heard of Python.

  123. MrPete
    Posted Sep 8, 2007 at 6:18 PM | Permalink

    Anthony, Stephen Mosher, et al – re Wikis…

    I have some significant MediaWiki experience (not an official developer, but have seriously hacked at it). I highly recommend that as an engine IF this is likely to become a large Wiki. It has built-in quality and performance tools unlike any other wiki. The public code contains the same tools used to run the many-server WikiPedia clusters.

  124. KDT
    Posted Sep 8, 2007 at 6:19 PM | Permalink

    From gistemp.txt:

    The various sources at a single location are combined into one record, if
    possible, using a method similar to the reference station method. The shift
    is determined in this case on series of estimated annual means.

    A method similar to the reference station method. Similar to the published method. Similar to what you would assume when you RTFR. Similar. Hmmmph.

  125. Steve Moore
    Posted Sep 8, 2007 at 6:21 PM | Permalink

    RE:112

    Putting one’s faith in data that cannot be verified is depending too much on the kindness of strangers for my taste.

  126. steven mosher
    Posted Sep 8, 2007 at 6:40 PM | Permalink

    RE 129.

    The free stuff ( wetpaint) was easy but junky

    I think the Key is getting a volunteer to step up and structure it.
    Murray’s probably gunna kick in some funding.. Right?

  127. Posted Sep 8, 2007 at 6:57 PM | Permalink

    Re 112: Echoing Steve Moore’s comment, the rough estimates are too rough given the situation. Since the early 1900s CO2 has not doubled, but the increase should be sufficient to narrow down the rough estimates if the data is correct and all natural forcing is understood. I just have this strong feeling the science is not settled. Then again, I am not a climatologist. What’s it gonna hurt to verify things?

  128. steven mosher
    Posted Sep 8, 2007 at 6:59 PM | Permalink

    RE 128.

    That’s the point of releasing the code. By yourself you probably have easily 2 months before
    you are up to speed on all the code. A wiki would allow for a division of labor.

    Somebody who knows the stuff needs to step up.

  129. joel
    Posted Sep 8, 2007 at 7:08 PM | Permalink

    Normally, scientists do not release their software or raw data to the public, for very good reasons.

    You are kidding, I hope. Congress just voted about 16 billion (or was it just 6 billion) dollars of new taxes on oil companies because of the global warming scare.

    The corn-ethanol program is vastly expensive and destructive to the environment. And is driving up food prices all over the world. The UN is unable to meet its previous obligations of providing sufficient food aid for people starving (or, those with really small carbon footprints.)

    Shouldn’t the representatives of the public get to see the data and the code? Data and code paid for by the US Govt?

    Dr. Hansen is on a veritable crusade to change US energy policy, and has enlisted powerful political support, aka Al Gore.

    We are way beyond science here.

  130. Greg Murphy
    Posted Sep 8, 2007 at 7:09 PM | Permalink

    Steve, Well Done

    We should up the pressure on Jones to release his code and stations
    Greg

  131. Posted Sep 8, 2007 at 7:18 PM | Permalink

    Re 72:

    I have loaded low res browse Walhalla images onto http://www.surfacestations.com from the USGS Earth Explorer site, for 1948, 1956, 1977, 1994 (higher res digital images could be ordered from USGS EarthExplorer web site) just to demonstrate that archival imagery exists to evaluate how much a site has or has NOT changed. For Walhalla, all the way back to 1948.

    I’ve ordered two archival images for Paso Robles dated 1988 and 1976. They’re $3 each for digitized, $30 each for scanned … being cheap I went with digitized to see if they’re sharp enough. It looks like it takes a couple of weeks to fill the order.

    If they look good I’ll buy some for Walhalla and post them.

  132. Falafulu Fisi
    Posted Sep 8, 2007 at 7:30 PM | Permalink

    I think that Dr. Hansen might be counting down his days at NASA, as perhaps NASA would see him as a liability to the organisation itself, one that might hinder their science funding applications to Congress. Dr. Hansen is no fan of some Congressmen.

  133. Anthony Watts
    Posted Sep 8, 2007 at 7:33 PM | Permalink

    RE134 I’m ready to “step up” for a Wiki and provide a server for it and configure it and get it running (I run about 50 servers now, what’s one more?)

    There are two ways to do it:

    1 – use existing CA server and add MediaWiki for url http://www.climateaudit.org/wiki/

    pros – the CA server is already set up for that in advance; makes for easier all-in-one website organization; no additional fees
    cons – all eggs in one basket; if the server goes down we lose two websites at once; may hit the bandwidth limit on the server, incurring additional fees anyway

    2- create separate Wiki server with address like wiki.climateaudit.org

    pros – separate content on separate server prevents total loss scenario, bandwidth limit likely not an issue
    cons – extra hardware and monthly fees, extra system to manage.

    As I see it, the decision is a function of how much traffic the Wiki is likely to generate; if it is likely a lot, then a separate server from the start is the best choice. Then that begs the question of how many people are willing to step up and regularly contribute 5-10 dollars/month to keep the wiki running on separate server?

    If it’s just a small traffic hit, then both could run on the same machine. The server I built for CA can handle quite a lot more than the old one. But then there’s the crash scenario. I see a Wiki as a greater and more important resource than a blog, so perhaps reliability and separation should be a key consideration.

    Steve McIntyre would of course have final say on which direction, since he’d have to pay, from the tip jar, the colocation bill to place a separate server on the big pipe next to the CA server.

  134. tetris
    Posted Sep 8, 2007 at 7:53 PM | Permalink

    Re: 112
    Erik
    “Arrhenius got it roughly correct…”. “The first level physics is pretty straightforward”.
    Based on best available data and what has been published since the last IPCC epistle, not so.
    Arrhenius got it wrong and the atmospheric physics involved are far from straightforward.
    For obvious reasons RC remains confused about the distinction between correlation and causality.
    CA is all about finding the fatal flaws in the very data held up as evidence for AGW. The “Hockey Stick”, GISS US temp data, “where’s Waldo?” and now the code for all to question.

  135. Posted Sep 8, 2007 at 7:59 PM | Permalink

    “Then that begs the question of how many people are willing to step up and regularly contribute 5-10 dollars/month to keep the wiki running on separate server?”

    Anthony,

    Could you rough out an annual high/low budget so we know what a general target might be? I would be pleased to subscribe for $10 month.

    I’m pretty sure the UN has set its sights at a considerably higher figure wrt my monthly budget.

  136. steven mosher
    Posted Sep 8, 2007 at 8:03 PM | Permalink

    re 139.

    write me in for 200 bucks to get the thing off the ground. When Mc gives his approval
    I’ll shoot you a check

  137. Erik Ramberg
    Posted Sep 8, 2007 at 8:05 PM | Permalink

    Re: #135:

    I absolutely agree with you concerning corn based ethanol production. I bet Hansen does, too. That is a simple matter of understanding the efficiency of various sources of energy.

    And, yes, by the way, I do advocate the open explanation of scientific data and analysis, especially that paid for by the government. It is difficult to do in a way that is accessible to the general public. Kudos to Hansen for spending so much time on it.

    But here is what I’m trying to get at:

    Every one of you is smart. You are on a web based blog, for God’s sake, discussing climate change. You have technology skills. When there are a dozen different avenues of evidence, and one incredibly simple theory to explain all that evidence, why are you spending all this precious time and human capital trying to falsify one aspect of the evidence. As I said in my initial posting, I’m stunned at everyone’s attitude. It seems like such an emotional crusade. Indeed, as you say, we are way beyond science.

    I sincerely wish good luck to everyone in their Fortran coding. It’s my favorite language!

  138. Anthony Watts
    Posted Sep 8, 2007 at 8:10 PM | Permalink

    Re141

    Well at the low end, about $80 per month for the colo fee; if the site generates a lot of traffic, we could see extra bandwidth fees that make the total around $120-150 per month, but I wouldn’t expect that to happen right away…though a couple of interesting discoveries could easily put it over the top fast, as we saw with the 1998-1934 issue.

    For maintenance, figure about $200-400 per year, to replace hard drives as needed.

    Best case would be:

    $80 per month x 12 = $960/yr + $200 maint = $1160/year

    That’s 232 people at $5/year or 116 at $10/year

    Worst case:

    $150 per month x 12 = $1800 +$400 maint = $2200/year

    That’s 440 people at $5/year or 220 at $10/year

  139. Posted Sep 8, 2007 at 8:11 PM | Permalink

    I have been without an internet connection for the last couple of days, so, belatedly, congratulations.

    I just downloaded the archive and expanded it. This will be interesting.

    — Sinan

  140. physicist
    Posted Sep 8, 2007 at 8:11 PM | Permalink

    As another practicing physicist who reads this blog, RC and climatesci, I disagree with #112. It is quite common for simulation groups to make codes available. Indeed, large simulation codes used for climate modeling are publicly available. Hansen’s behavior here is odd and troubling, to say the least. Anyone writing about ‘the end of creation’ should be both willing and anxious to put all information and analysis out on the table so as to convince people of the correctness of their research, and to help others contribute. Why on earth would he wait all these years until he is forced to rush things out? As a rule, the more important the result, the more scrutiny one should expect, and desire.

  141. Steve McIntyre
    Posted Sep 8, 2007 at 8:11 PM | Permalink

    Thanks for the kind words. I’ve had a very pleasant day being a soccer grandpa.

    Since everything is done by volunteers, my preference would be for whatever has the least management requirements and if there’s a volume problem – which I don’t expect – deal with it after the event.

    For wiki, my inclination would be to have restricted posting rights – a wider circle than people authorized to start threads here – but not a free for all either. I also like some of the features of the StikiR wiki and will talk to Mike Cassin about this.

    #112. Providing working code and data as used is part of modern practices in econometrics. I’ve never suggested anything that is not practical. The other aspect in climate science – as opposed to many other academic disciplines – is that the results are being relied on for policy decisions and therefore invite a level of due diligence additional to that of, say, ornithological classification. I’ve seen the level of due diligence involved in very small prospectuses, and I remain shocked at the casualness of the due diligence for something like IPCC, which is, in effect, an international prospectus (using the term as businesses use it, not as academics use it). I’m especially amazed at the sneering and snickering attitude by many climate scientists towards even the suggestion that there be due diligence of the type that exists for prospectuses.

  142. Paul Penrose
    Posted Sep 8, 2007 at 8:14 PM | Permalink

    Am I the only one here that thinks Python is not the best language to use for number crunching?

  143. Posted Sep 8, 2007 at 8:14 PM | Permalink

    Eric, “audit” != “trying to falsify”

    “audit” == “trying to verify”

    If the verification fails, then it is falsified, but that’s not the point. The point is that science is all about scrutiny so that we can be sure that a theory is as solid as it gets. Why object to checking that it’s implemented correctly? There’s no valid reason. If it’s all correct, then the audit will simply verify that, and who could object to such a thing?

  144. John Norris
    Posted Sep 8, 2007 at 8:18 PM | Permalink

    re 142

    Some people collect stamps, some people bird watch, and some people question technical experts when they don’t release all of their technical detail, when the research drives global policy.

  145. John Norris
    Posted Sep 8, 2007 at 8:19 PM | Permalink

    oops, 150 should be re 143, not re 142

  146. Posted Sep 8, 2007 at 8:24 PM | Permalink

    $150 per month x 12 = $1800 +$400 maint = $2000/year

    Thats 400 people at $500/year or 200 at $10/year

    Quick, somebody call an auditor. 🙂

  147. Earle Williams
    Posted Sep 8, 2007 at 8:27 PM | Permalink

    Re #143

    Erik Ramberg,

    Dr. Hansen has done little to merit kudos with respect to the release of this code. I also am a scientist in the employ of the U.S. federal government and I state unequivocally that his behavior regarding the stonewalling and eventual release of this code is shameful. It is not science, it is not befitting of the director of a government institution, and it is contrary to federal laws and policies.

    As far as sharing code, I can speak for myself and tell you that for my graduate school work I included my entire source code and data files as appendices to my master’s thesis. That’s the standard I would expect from anyone publishing today. If there are logistical constraints preventing ready distribution at time of publication, certainly code and data must be supplied to anyone requesting it. Especially when the work is the property of the U.S. government. To not provide it is to engage in marketing and politics, not science.

    Thank you for sharing your viewpoint. Hopefully you also recognize that many people who post here don’t share that same viewpoint. Once you’ve accepted that, it should be fairly straightforward to extrapolate to why they feel this endeavor is worth their time and effort. If you can’t understand that then I doubt any amount of dialog will make it clear.

  148. Posted Sep 8, 2007 at 8:27 PM | Permalink

    RE152, yeah whoops, I’m guilty – my son William (4) kept pulling on Daddy while I was writing, fixing that now

  149. Steve McIntyre
    Posted Sep 8, 2007 at 8:28 PM | Permalink

    #149. Business auditors don’t expect to find problems. Probably more than 99% of all business audits are uneventful. But nobody is calling for the elimination of business audits. When I started looking at Mann’s work, I didn’t expect anyone to be interested in what I thought; I was just surprised that nobody had ever looked at it previously, as evidenced by the fact that Mann had “forgotten” where the data was and Scott Rutherford had to get it together for me. I didn’t expect to find problems.

    Ross McKitrick’s initial thoughts were that looking at data in the way that I had done was too high risk for an academic researcher since the odds of it yielding a publishable paper were slight.

    Bruce McCullough and William Dewalt have argued forcefully that archiving code and data reduces the researcher’s cost and risk in replication. I agree entirely. These ridiculous puzzles over each step of Hansen’s methodology are reduced by having code to consult. They would be reduced much further if the code were properly commented and documented, but, even in the absence of proper software standards, it is still helpful.

  150. mccall
    Posted Sep 8, 2007 at 8:44 PM | Permalink

    re 112: I have much disappointment in reading your post.

    I’ll echo 123 and say that over 50% of my RC posts have been censored; polite as they were/are, they just didn’t agree with, or they questioned, points of AGW dogma. What’s more, RC let stand related absurd posts that agree with or extend AGW dogma.

    BTW, without intervention, what do you think will happen in the next two decades; and are you willing to wager on it?

  151. Posted Sep 8, 2007 at 8:48 PM | Permalink

    RE149 Steve are you speaking of this?

    http://www.stikir.com/

    for example:

    http://www.stikir.com/index.php/GHCN_Climate_Data_Sandbox

    While I like the idea in theory, I see two potential problems:

    1) You have no absolute control over your own destiny with a free hosted package. If they go belly up, have server issues, or kick you off then you are back at square 1

    2) Forcing R on everyone may lead to a lot of hand holding for those trying to make entries. While imperfect, a lot of people like to do Excel, and can create graphs etc from it that are pretty slick. R has a pretty steep learning curve and may limit contributions to the knowledge base whereas if you leave it more “free form” people can use whatever they want to create graphs, charts, tables etc and upload them.

    Certainly integrated R is a more powerful choice, but I think it would limit opportunities for those who don’t have time to learn it and create more newbie-support work for those who do know it.

    MediaWiki plain version (less the StikiR module) is dirt simple to install and manage, and it has wide support plus help forums for it. That’s my vote.

  152. Posted Sep 8, 2007 at 9:04 PM | Permalink

    Anthony, I cannot see how the wiki part would get a heavy server load, as it will mainly be the playground of a limited number of maths and code junkies who get their kicks from digging into the guts of code, and who are dedicated to understanding and refining the nuts and bolts part.

    Most of the Climate Audit traffic is from people who might perhaps take a 5 minute look at the code wiki at best, decide they would not know a python fragment from a fortran routine even if they were looking right at both, and not bother to visit the wiki again.

    Any particularly interesting bits will no doubt get cross-posted on the main site along with some plain English explanations for the less code and maths savvy, being most of us.

    Any heavy traffic outbreaks are going to be as a result of especially interesting findings posted on the main site, not the wiki bits that relate to them.

    As long as you have the system set to back up regularly onto a separate HD somewhere I see no problem in having both on the same server, as I expect downtime will be minimal with the current server farm anyway … especially as you are looking after it!

  153. togger63
    Posted Sep 8, 2007 at 9:14 PM | Permalink

    Re #153. I suspect that you are right about Hansen’s behavior being contrary to federal law and policy in not releasing the code. I am a lawyer who used to counsel in-house at a major research university, and in particular the Office of Research and Project Administration. The NIH grants that our researchers received usually required data archiving and open access to data as conditions of the grant (among other conditions), and researchers had to certify to the NIH that they were complying with the grant conditions, usually on renewal or on an annual basis for multi-year grants. I and my colleagues counseled the researchers (browbeat them really) that they needed to be scrupulous about satisfying all grant conditions or they could not certify their compliance to the NIH. Certifying falsely was potentially actionable under the Federal False Claims Act (the FCA), which is a very big deal.

    Now, I have no idea what conditions are in the grants Hansen relied on to generate his published research, but if the grants had conditions that required data archiving or prescribed open access, he cannot deny access to the code and at the same time certify his compliance with the conditions of the grant. That could create headaches for him and Columbia with the FCA (big headaches if it happened). I don’t know, maybe NIH grant conditions are unique when it comes to archiving and access, but I doubt it. IMHO, it wasn’t his NASA boss that told him to release it, it was the lawyers for Columbia University (his other employer).

  154. togger63
    Posted Sep 8, 2007 at 9:18 PM | Permalink

    Oh, and I’ll kick into the tip jar to get the climate wiki project off the ground. What is the suggested amount?

  155. Chris D
    Posted Sep 8, 2007 at 9:22 PM | Permalink

    re: 137, Leon:

    If you get a high res image of Walhalla, get the 1948 one, as it does include the actual site. The observer is a private observer who explicitly did not want the location clearly identifiable to the public, and I am bound to honor this request. You’ll notice that my Google Earth image only marks the location that is provided in the MMS, but is not the true location of the site. I’ll contact the observer and see if there is an old image of the field and post it up, if possible.

  156. Posted Sep 8, 2007 at 9:23 PM | Permalink

    RE158 Sinan

    This equation for converting Fahrenheit to Centigrade is the first step in data ingest for US temperature data:

    From GISS:

    if(temp.gt.-99.00) itemp(m)=nint( 50.*(temp-32.)/9 ) ! F->.1C

    Wouldn’t that “50.” be a “5.” ?

  157. Erik Ramberg
    Posted Sep 8, 2007 at 9:25 PM | Permalink

    Re: 155

    Steve – I enjoy reading your website. Nice work on the GISS data. Since you bring up the subject, I’m interested in your viewpoint on the results of your critique of Mann’s work. This was certainly an in-depth audit of that work, spanning many years. I would be the first to admit that you approached the critique scientifically.

    My impression, however, is that the National Academy of Sciences has validated the Mann result. To quote: “It can be said with a high level of confidence that global mean surface temperature was higher during the last few decades of the 20th century than during any comparable period during the preceding four centuries. This statement is justified by the consistency of the evidence from a wide variety of geographically diverse proxies.”

    I don’t question your right to critique climate results, an area in which you are clearly a competent leader. But I’m really interested in your personal conclusions: do you agree that significant warming is occurring due to anthropogenic carbon dioxide, or do you feel that the evidence of global warming is due solely to poor statistical analyses? If it is the former, I’m curious as to what comes next. Will you continue auditing no matter what?

    With due respect,
    Erik

    p.s. I’ll shut up after this, and let you guys get back to debugging poorly documented Fortran.

  158. Chris D
    Posted Sep 8, 2007 at 9:28 PM | Permalink

    addendum to 162: the actual site can still be seen in the Google Earth image that I posted, it’s just not marked.

  159. paminator
    Posted Sep 8, 2007 at 9:32 PM | Permalink

    re #112-

    I’m a working scientist (physicist, not climatologist) and have been following the Climate Audit/Real Climate/Hansen saga for some time now. I must say that I am stunned at the strange view of the current status of climate science that most of you have.

    If you are a physicist, and you have followed the discussions on AGW at this site as well as others, then frankly I am stunned at your comments.

    I agree that a simple greybody calculation of the Earth’s climate sensitivity gives 0.25 – 0.3 C/W/m^2, or about 1 degree C for a doubling of CO2, which is not a crisis (particularly since half the warming has already occurred, and particularly when you look at solar forecasts for the next 25 – 50 years). The rest of the climate processes are very much more complex than this. For example, it is not yet clear that even the signs of climate feedbacks such as water vapor (which necessarily involves clouds) and aerosols are known, let alone their magnitudes.
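
    The numbers behind that greybody estimate can be checked in a few lines (a sketch only; the 3.7 W/m^2 forcing for doubled CO2 is the commonly cited figure, not something derived here):

    SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W m^-2 K^-4
    F = 240.0                    # absorbed solar flux per unit area, W/m^2
    T = (F / SIGMA) ** 0.25      # effective emission temperature, about 255 K
    sensitivity = T / (4.0 * F)  # no-feedback sensitivity dT/dF, about 0.27 K per W/m^2
    dF_2xCO2 = 3.7               # commonly cited forcing for doubled CO2, W/m^2

    print(round(T, 1), round(sensitivity, 3), round(sensitivity * dF_2xCO2, 2))
    # roughly 255.1, 0.266 and 0.98: about 1 C per doubling with no feedbacks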

    Real Climate has the links to the equations, if you are brave enough to read that web site.

    No need to advertise RC here. They are linked on the LHS of this site. Nir Shaviv provides the clearest presentation of the “equations” that I have found.

  160. Posted Sep 8, 2007 at 9:38 PM | Permalink

    Re 162

    Hi Chris,

    I’ve found the site on the 1948, 1956 and 1977 low res photos and marked them with an X and uploaded them (had to flip and rotate the photos but they all sync with north up, east right). The same big empty field shows up in all of them, there’s a forest feature to the right that helps identification (a small jut of forest that points to the site).

    I am going to order the $3 versions of digitized images, for Walhalla, will post them when I get them (in a couple of weeks?). Should be good enough to validate this site hasn’t changed much in 60 years, it should be a gold standard for rural sites.

    Hey Steve, is there some way we can get the USGS to provide free access to images for climate research?

    Leon

  161. VirgilM
    Posted Sep 8, 2007 at 9:45 PM | Permalink

    Anthony Re 163:

    If the intention of the programmer is to store the temperature to the nearest tenth of a degree C into an integer variable, then a factor of 10 is needed in the conversion. If that is the intent, then 50 is good.

    Virgil

  162. Chris D
    Posted Sep 8, 2007 at 9:55 PM | Permalink

    re: 167 Leon: I just looked at the newer high-res 1948 shot, and sure enough, I could locate the true site, and discern that it was clearly an open field back then – if anything, fewer trees nearby. Very helpful – thanks.

  163. Posted Sep 8, 2007 at 10:02 PM | Permalink

    RE168, I’d considered that, but it seems an odd way of implementing such a scheme.

  164. Kenneth Fritsch
    Posted Sep 8, 2007 at 10:02 PM | Permalink

    Re: #143

    Every one of you is smart. You are on a web based blog, for God’s sake, discussing climate change. You have technology skills. When there are a dozen different avenues of evidence, and one incredibly simple theory to explain all that evidence, why are you spending all this precious time and human capital trying to falsify one aspect of the evidence. As I said in my initial posting.

    Eric, perhaps you are satisfied with the overall certainty and evidence for the normal range given for AGW into the future and the adverse versus beneficial effects that are predicted, but that is not the case for me. You have apparently generalized an opinion from CA posters just as you generalize the case for AGW, and neither does the subject justice. SteveM’s efforts have not been to prove anything about the magnitude of AGW but to analyze and audit how climate science is being done.

    Staying at CA and discussing specific issues about how climate science is being done would, from your post, not seem worth the time to one so convinced as you of what will happen in the future, but you might want to take a little time to inform us of what you think some scenarios for the future might be, what if anything we should be doing to mitigate it, and (my other problem with AGW) whether attempts at mitigation could create more problems than they solve.

    I’m stunned at everyone’s attitude. It seems like such an emotional crusade. Indeed, as you say, we are way beyond science.

    That is an interesting comment, because I would say the same of Hansen’s public statements. Go read Hansen’s comments linked in “Jesting the Adjusters”

    http://www.climateaudit.org/?p=1980#comments

    and then tell me if that does not sound like an emotional crusade. I know, I know you are going to say he was provoked and he had good cause — unlike us CA posters.

  165. Posted Sep 8, 2007 at 10:04 PM | Permalink

    RE 167,169

    Chris and Leon – what would really be useful for demonstrating land use and vegetation changes over the life of a station location would be a “flip book” animation with dissolves. Once all the photos are collected, let me know, I have software that will do that.

  166. Kenneth Fritsch
    Posted Sep 8, 2007 at 10:12 PM | Permalink

    Re: #16

    Guys, if you get a chance drop over to RC, thank Hansen and Gavin. Let’s show
    some class.

    I have no intentions of posting at RC, but to show my class I will apologize to Hansen here and now — if he is reading at CA: Sorry for thinking you had no shame.

  167. Dave Dardinger
    Posted Sep 8, 2007 at 10:13 PM | Permalink

    re: # 163, #168.

    Yes. It’s explicitly stated in the comment: “! F->.1C”. I.e. it converts deg F to tenths of a deg C.

  168. togger63
    Posted Sep 8, 2007 at 10:14 PM | Permalink

    Re #164: You posit the question a little oddly — either anthropogenic warming due to carbon dioxide or poor statistics, nothing else. Aren’t you leaving out natural variability? Aren’t you leaving out anthropogenic effects from land use changes? Aren’t you leaving out the possibility that all of these contribute in degrees that we do not yet understand?

  169. Jim
    Posted Sep 8, 2007 at 10:18 PM | Permalink

    In reply to Erik Ramberg:

    Erik is obviously of the opinion that GW must be Anthropogenic.
    I am a physicist as well, and the key scientific question
    is

    “To what extent is GW caused by anthropogenic emissions”?

    The primary reason (in my view) for the existence of CA
    is that to some extent the debate is being driven by
    scientists/activists with a specific agenda. So what I
    would describe as proper scientific rectitude has been ditched.
    Given that (and the muttering in the bars
    at conferences), the question naturally arises: what
    can be trusted? Ergo, SM is now looking at the temperature
    record, and finding evidence of some rather shoddy
    work. The concern is that corrections to raw data
    that lead to an increasing 20th cent temp record will
    make it into the analysis easily, while any corrections
    that might result in a decreasing 20th cent temperature
    record will be subjected to much more rigorous scrutiny.
    This easily could lead to a bias. How much? That is
    the question.

    However, finding that the 20th century record is OK would
    still not prove the A in AGW. Looking at the past
    climate record and understanding the drivers is important.

    Jim

    PS, you should be careful with statements like
    “Arrhenius got it roughly correct…”. The current
    climate models predict that increased CO2 absorption
    is only responsible for 30-40% of the measured global
    warming. Basically, most of the MODEL warming comes
    from increased H2O concentrations as a result of
    the slightly warmer temps from CO2. There are major
    uncertainties in our understanding of how H2O concentrations
    affect the radiation balance of the earth.

  170. Scott
    Posted Sep 8, 2007 at 10:23 PM | Permalink

    Here’s an idea of where to put and what to do with the code:

    Google has a hosting service for open source projects. It includes a wiki, a download site, and a versioned repository (with an online browser of the repository). Why not put everything up there? Use the wiki to try to share knowledge. Later, when we understand what it does, we can check instructions, recipes, and comments into the repository in an effort to understand the code and get it working.

    I’d do it, but what is the code’s license? (There’s no copyright notice or license anywhere.)

    To see an example, well, sorry, but if I put in the URL, the spam filter
    eats the comment, but you can do a google search for ‘google code’ and click the first link, then browse for an example.

  171. BarryW
    Posted Sep 8, 2007 at 10:27 PM | Permalink

    Re #170

    That’s what it appears to be doing. The code is converting to Centigrade times ten, so the integer’s least significant digit represents tenths of a degree. I think the comment after the exclamation point means that it’s converting Fahrenheit to tenths of a degree Centigrade.

    The NINT function rounds to the nearest integer, rounding halves away from zero (i.e. upwards for positive arguments whose fractional part is 0.5).
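
    A quick way to check that reading, in Python rather than Fortran (a sketch; the -99.0 test mirrors the Fortran missing-value guard, and note that Fortran’s NINT rounds halves away from zero while Python’s round() uses banker’s rounding):

    def f_to_tenths_c(temp_f, missing=-99.0):
        # degC = (F - 32) * 5/9; storing tenths of a degree multiplies by 10,
        # so the combined factor is 10 * 5/9 = 50/9, hence the "50." in the Fortran.
        if temp_f <= missing:
            return None
        return int(round(50.0 * (temp_f - 32.0) / 9.0))

    print(f_to_tenths_c(32.0))   # 0   -> 0.0 C
    print(f_to_tenths_c(50.0))   # 100 -> 10.0 C
    print(f_to_tenths_c(98.6))   # 370 -> 37.0 C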

  172. Steve McIntyre
    Posted Sep 8, 2007 at 10:29 PM | Permalink

    #164.

    Eduardo Zorita – a rather neutral party – thought that the NAS panel was as critical of Mann as was possible in the context. They did not contradict a single observation that we made. They observed that other studies – on which we had not published – had reported similar results to Mann, but performed no due diligence on these studies and Chairman North said that they just “winged it”. I’ve discussed the NAS panel extensively – see NAS panel in the left-frame Categories.

    As to my personal views: even though I think that many climate scientists are wildly over-promotional, I do not exclude the possibility that there is a valid argument, even if (say) Mann’s arguments are not valid. In this sense, the existence of a valid alternative argument would not prove that Mann’s methodology was “right” any more than valid geological evidence has vindicated the Piltdown Mann.

    I regularly ask readers who are critical of me to provide a citation to a detailed exposition of how doubled CO2 results in 2.5 deg C – in which all arguments and assumptions are pulled together. No one has been able to provide one. I do not suggest that such an exposition is impossible, but the seeming absence of such an exposition really frustrates the debate. Prior to the framing of AR4, I suggested that such an exposition be included but IPCC apparently decided that it was irrelevant.

  173. Brian G
    Posted Sep 8, 2007 at 10:32 PM | Permalink

    re144:

    Yes. Labview anyone?

    Re141: I’ll cut a check upfront for $1000 US, if you decide to go the separate (higher traffic) route. And $50/month after that.

    Just an interested guy in the US, and a big fan of Mc. Been following him since he trashed the hockey stick model.

    Web admin guy/Mc: email me– I am serious on the offer.

  174. Steve McIntyre
    Posted Sep 8, 2007 at 10:48 PM | Permalink

    Every one of you is smart. You are on a web based blog, for God’s sake, discussing climate change. You have technology skills. When there are a dozen different avenues of evidence, and one incredibly simple theory to explain all that evidence, why are you spending all this precious time and human capital trying to falsify one aspect of the evidence. As I said in my initial posting, I’m stunned at everyone’s attitude. It seems like such an emotional crusade. Indeed, as you say, we are way beyond science

    Eric, can you please provide me with a citation or reference which is the best exposition in your opinion of how doubled CO2 leads to 2.5 deg C. I would like something that is about 30-100 pages and is not (1) 1/2 page; (2) a citation of MODTRAN; (3) reporting the results of a GCM run. I want something that clearly explains all the relevant topics and quantifies the feedbacks.

    Others, please do not re-hash arguments like #176. The focus of this site is auditing and verification and I’d rather spend time on mainstream analyses.

  175. Jan Pompe
    Posted Sep 8, 2007 at 10:54 PM | Permalink

    #170 Anthony

    It’s OK if we can be sure that the compiled program does the subtraction first then the multiplication then the division. Otherwise precision is lost and it can become a source of error.
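
    A tiny illustration of the general point, in plain Python doubles (nothing specific to the GISS compiler): floating-point arithmetic is not associative, so regrouping operations can change the low-order digits of a result.

    a, b, c = 0.1, 0.2, 0.3

    left = (a + b) + c    # 0.6000000000000001
    right = a + (b + c)   # 0.6
    print(left == right)  # False: grouping changes how intermediate results are rounded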

  176. Posted Sep 8, 2007 at 10:54 PM | Permalink

    For maintenance, figure about $200-400 per year, to replace hard drives as needed.

    Anthony, what brand of hard drives are you using? That seems like an excessive failure rate.

    I manage servers with a total of about 20 drives and I’ve had one or two failures in the last several years.

    I buy Seagate drives because I’ve tried several brands and they have given me the best experience with reliability.

  177. nrk
    Posted Sep 8, 2007 at 11:18 PM | Permalink

    re: #164 & #179

    I also suggest that Erik read the Wegman Report (under Links on the left side of the page). Erik, if you read the report (and Steve Mc.’s submittal to NAS), you’ll understand the issues about the claims that the 1990’s were the warmest decade in the last 1000 years, and the counter arguments re: the Little Ice Age. P.S. Wegman and his co-writers are statisticians.

  178. Scott
    Posted Sep 8, 2007 at 11:51 PM | Permalink

    Instructions for getting STEP1 to compile and startup (NOTE: I HAVE
    NOT ATTEMPTED TO RUN IT ON REAL DATA) under a linux debian system
    running python 2.4/2.5

    First, in each subdirectory of EXTENSIONS, we must update each of the
    Makefile.pre.in (see diff below). (EDIT one and copy it over the
    others.)

    Some extensions need extra #define lines at the top (see diff below).
    I can invoke all of the toplevel .py files by only editing
    stationstringmodule.c and monthlydatamodule.c. They will compile
    without this fix, but you’ll get ‘undefined symbol: ‘ errors if you
    attempt to run the python scripts.

    Then, in each extension directory, build the extension, at the shell:

    make -f Makefile.pre.in boot
    make

    Result of this should be a .so file.

    Now, to invoke python, we need to tell it where to find the extensions, with a line like (at the shell)

    export PYTHONPATH=EXTENSIONS-mine/stationstring:EXTENSIONS-mine/monthlydata/

    I found that only these two needed to be in the PYTHONPATH to work to
    invoke any of the *.py scripts.

    Now, you should be able to at least invoke all of the *.py scripts.

    python alter_discont.py
    python comb_pieces.py

    With luck, Hansen will have instructions like these in a later release in a couple of weeks. Regardless, please set up a source control repository, like the Google Code suggestion. From there, I could simply commit these changes and put this text in a README.compile file.

    PROBLEMS and SOLUTIONS:

    **
    ImportError: /tmp/GISTEMP_sources/STEP1/EXTENSIONS-mine/monthlydata/monthlydatamodule.so: undefined symbol: Py_Free

    This means that you need to copy&paste the #define lines at the top of the file.

    **
    It complains about ld_aix not existing: You forgot to patch the Makefile.pre.in

    **
    It complains about @DEF@ not existing: You forgot to patch the Makefile.pre.in

    --- EXTENSIONS/stationstring/Makefile.pre.in 1999-01-19 18:54:04.000000000 -0600
    +++ EXTENSIONS-mine/stationstring/Makefile.pre.in 2007-09-08 23:46:13.000000000 -0500
    @@ -119,5 +119,5 @@
    LDFLAGS= @LDFLAGS@
    LDLAST= @LDLAST@
    -DEFS= @DEFS@
    +DEFS=
    LIBS= @LIBS@
    LIBM= @LIBM@
    @@ -138,11 +138,11 @@

    # Uncomment the following two lines for AIX
    -LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC)
    -LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp
    +#LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC)
    +#LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp

    # === Fixed definitions ===

    # Shell used by make (some versions default to the login shell, which is bad)
    -SHELL= /usr/bin/bsh
    +SHELL= /bin/sh

    # Expanded directories

    --- EXTENSIONS/stationstring/stationstringmodule.c 1999-03-04 10:47:48.000000000 -0600
    +++ EXTENSIONS-mine/stationstring/stationstringmodule.c 2007-09-09 00:29:15.000000000 -0500
    @@ -23,4 +23,12 @@
    #include "Python.h"

    +
    +// From http://mail.python.org/pipermail/patches/2000-April/000582.html
    +#define Py_Malloc(n) /* deprecated Py_Malloc */ PyMem_Malloc(n)
    +#define Py_Realloc(p, n) /* deprecated Py_Realloc */ PyMem_Realloc((p), (n))
    +#define Py_Free(p) /* deprecated Py_Free */ PyMem_Free(p)
    +
    +
    +
    #define FEQUALS(x, y) (fabs(x - (y))

  179. DeWitt Payne
    Posted Sep 8, 2007 at 11:57 PM | Permalink

    #181

    I want something that clearly explains all the relevant topics and quantifies the feedbacks.

    I don’t think you can do that in 30 to 100 pages, at least not for a reader who isn’t already well versed in the relevant science and mathematics. Even then it would be mostly equations and not actual calculations. I’m not at all sure that you could do justice to just Physical Meteorology in that little space.

  180. Dennis Wingo
    Posted Sep 9, 2007 at 12:04 AM | Permalink

    As a computer designer I think that it is also important that when Hansen releases algorithms of this type that the type of computer, the exact CPU be released as well. Those of us who design computers for a living understand that there are differences in how different floating point coprocessors (if used) or Arithmetic Logic Units within a CPU function in rounding and in precision in calculations. It was testing of mathematical algorithms in the early 90’s that pinpointed a problem in the FPU of the early Pentium processor from Intel at the time.

  181. Geoff Sherrington
    Posted Sep 9, 2007 at 12:27 AM | Permalink

    Congratulations Steve. I hesitate to type more because your methods have been so successful and I don’t wish to be seen to be disagreeing with your plan for success.

    The aim of this work is presumably to create a data base of global surface temperatures that, by examination and refinement, has been made acceptable to all. It might be that this task takes very little time, but then there might need to be a lot of back-and-forth with the original code writers.

    I’m a bit worried that the whole process could become so complicated that it bogs down before completion. From where I sit on the sidelines, talking as a theoretician, I see a need for a staged reconstruction, instead of people poking at code here and there. Do we need to have some planned objectives, like (a) a complete accepted raw data set (b) an accepted data set based on the first adjustment (c) an accepted data set based on the second, then third adjustment, etc.?

    Question is, what is to be done if examination uncovers an adjustment method that it is agreed ought to be modified or scrapped? It’s one thing for CA to scrap it, it’s another for GISS to scrap it formally. In such a debate, the decision makers and general public need informing of the change and its logic and then the accepted data taken into the next step of reconstruction.

    That’s how I’d approach it, but I don’t know how confident the CA “expert fixers” are that it will be a short and persuasive effort. It’s your call, of course; I’m just making suggestions. Does your gentle hand need to guide the direction of effort? Geoff.

  182. Dennis Wingo
    Posted Sep 9, 2007 at 12:33 AM | Permalink

    I regularly ask readers who are critical of me to provide a citation to a detailed exposition of how doubled CO2 results in 2.5 deg C – in which all arguments and assumptions are pulled together. No one has been able to provide one. I do not suggest that such an exposition is impossible, but the seeming absence of such an exposition really frustrates the debate. Prior to the framing of AR4, I suggested that such an exposition be included but IPCC apparently decided that it was irrelevant.

    Steve

    I am not critical, but I have been doing some research in this area and have found the fundamental equations related to how increased concentrations of CO2 are measured and how the increase influences the emission and absorption of radiation at infrared wavelengths. What I have found is that the two mechanisms most often cited as the drivers of warming (pressure or collision broadening, and Doppler broadening) are temperature dependent (to the square root power). The reference for this is in the following book.

    “The Quantum Theory of Light”, author Rodney Loudon, Clarendon Press, Oxford, 1973. The discussion of Doppler and pressure broadening begins on page 81. It is an interesting read and actually shows graphs of the frequency shift of the CO2 emission lines due to pressure broadening. However, pressure broadening is not dependent on the partial pressure increase in CO2; it is dependent on the total atmospheric pressure. If you really want to get into the deep details, the calculation would break out the pressure dependency of nitrogen, oxygen, and any other atmospheric gas that contributes to the total atmospheric pressure, which it can easily be seen would be different at different altitudes, and would have to include the pressure broadening by CO2 vapor as well.

    Also, if both pressure broadening and Doppler broadening have a temperature dependency, then CO2 becomes a climate feedback, not a pure forcing mechanism.
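
    As a rough illustration of those two dependencies (a generic Python sketch using textbook formulas; the 667 cm^-1 band centre, the reference half-width and the temperature exponent are illustrative placeholders, not values from any particular line database):

    import math

    K_B = 1.380649e-23         # Boltzmann constant, J/K
    C = 2.998e8                # speed of light, m/s
    M_CO2 = 44.0 * 1.6605e-27  # approximate molecular mass of CO2, kg

    def doppler_fwhm(nu0_cm, temp_k, mass_kg):
        # Doppler (thermal) full width at half maximum, cm^-1: scales as sqrt(T)
        return nu0_cm * math.sqrt(8.0 * math.log(2.0) * K_B * temp_k / (mass_kg * C**2))

    def pressure_hwhm(p_atm, temp_k, gamma_ref=0.07, t_ref=296.0, n_exp=0.7):
        # Collision-broadened half width, cm^-1: scales with total pressure and
        # roughly as T^-n (gamma_ref and n_exp are illustrative only)
        return gamma_ref * p_atm * (t_ref / temp_k) ** n_exp

    for p_atm, temp_k in [(1.0, 288.0), (0.26, 223.0)]:   # roughly surface vs ~10 km
        print(p_atm, temp_k,
              doppler_fwhm(667.4, temp_k, M_CO2),
              pressure_hwhm(p_atm, temp_k))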

    I have also found a lot of data from the late 50’s and early 60’s from the B-29 flights that measured absorption spectra at various altitudes. Supposedly this is what fed into HITRAN but it does not seem to validate claims related to how much energy is absorbed by CO2.

    Forgive me if this is something that everyone knows already but I have looked for about two years on the fundamental science papers on this subject and so far I have not been impressed! The absorption and emission of radiation by CO2 is fundamentally a quantum mechanical phenomenon and from what I have read so far in my research this has not been properly accounted for.

  183. D. Patterson
    Posted Sep 9, 2007 at 12:42 AM | Permalink

    Re: #115

    You would think that a glass thermometer is going to be highly reliable. But, like most endeavors in human society, there are always ways to snatch defeat from the jaws of victory and to extract error from the simplest routes to accuracy. For examples, you need look no further than Anthony Watts’ surfacestations.org, which has documented COOP stations that have misapplied glass and mercury min/max thermometer measurements with improper siting and observation methods. Just remember the shouted warning of an irate meteorological observer: “Stand back! I have a mercury thermometer and I know how to use it!” Also, “I don’t need no steenkin’ meniscus.”

  184. D. Patterson
    Posted Sep 9, 2007 at 1:01 AM | Permalink

    Re: #188

    Does the author have anything to say about whether or not the measurements are for a static or dynamic environment with respect to pressure changes in the measured volume due to the effects of convection, vorticity, wave compression, and compression gradients within the vortices?

  185. D. Patterson
    Posted Sep 9, 2007 at 1:12 AM | Permalink

    Re: #187

    OGISS…OpenGISS…OpenSourceGISS? Version number and development branches?

  186. Posted Sep 9, 2007 at 1:23 AM | Permalink

    Hello auditors!

    The comments above make it look like no one has so far been successful in running all the scripts. My recommendation would be that someone who understands shell, Fortran, and Python (at least superficially) create a better version of the same package.

    There are probably various potential problems with directory paths and obsolete versions of the compilers, configuration etc. Most of these problems can probably be fixed without changing the content of the code.

    I think that every piece of code that is edited should be labeled by something like

    # ClimateAudit 2007: Your name: path fixed

    If completely new files or routines are needed, or if files explaining what’s going on in various directories are missing, new files should be created whose names start with CA- (for ClimateAudit). For example, you may want to add a new CA-readme.txt file in each subdirectory.

    If you create a more usable version of the package where no new errors are introduced, you should post it on a website of yours and inform the readers of this website about the URL so that they can stand on your shoulders. I hope that at least someone will find this to be a good rough plan. 😉

    Best
    Luboš

  187. Posted Sep 9, 2007 at 1:43 AM | Permalink

    Incidentally, for casual readers: if you just want to browse through Hansen’s unpacked files with your web browser, see

    http://hetglists.physics.harvard.edu/~motl/giss/

  188. Posted Sep 9, 2007 at 3:49 AM | Permalink

    re 2:
    I don’t know if I should be flattered or worried.
    Ironically, the Hohenpeissenberg data before 1880 is from GISS but was truncated by Hansen in January 2005.

    update of Hohenpeissenberg here with public data from DWD
    http://home.casema.nl/errenwijlens/co2/t_hohenpeissenberg_200512.txt

    My kudos to Werner Schulz and DWD
    http://www.wx-schulz.de/Hohenp-bg/Hpbg-Seiten/HohenpbgZentrale.htm
    http://www.dwd.de/en/FundE/Klima/KLIS/daten/online/nat/ausgabe_monatswerte.htm

  189. Philip Mulholland
    Posted Sep 9, 2007 at 4:47 AM | Permalink

    Re #178 BarryW

    My father always advised me that when rounding .5 to the nearest integer
    the rule to apply, to avoid upward bias, is to adjust to the nearest even value.
    For example 1.5 rounds up to 2 while 2.5 rounds down to 2 etc.
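
    As a quick check of the rule (Python 3’s built-in round() happens to use the same round-half-to-even convention):

    halves = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]

    print([round(x) for x in halves])         # [0, 2, 2, 4, 4, 6]
    print(sum(round(x) for x in halves))      # 18, same as sum(halves): no net bias
    print(sum(int(x + 0.5) for x in halves))  # 21: always rounding 0.5 up adds 3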

    Regards
    Philip

  190. Dave Brewer
    Posted Sep 9, 2007 at 4:57 AM | Permalink

    Erik,

    You say:
    “Every one of you is smart. You are on a web based blog, for God’s sake, discussing climate change. You have technology skills. When there are a dozen different avenues of evidence, and one incredibly simple theory to explain all that evidence, why are you spending all this precious time and human capital trying to falsify one aspect of the evidence. As I said in my initial posting, I’m stunned at everyone’s attitude. It seems like such an emotional crusade. Indeed, as you say, we are way beyond science.”

    I agree with you on one point: lots of people, on both sides of the greenhouse argument, are on an emotional crusade. Gore is quite explicit about his; Hansen is close behind him. On the warmers’ side the emotion comes from the feeling that the earth is in danger. On the sceptics’ side it comes from the feeling that the evidence is rubbish and that the proposed remedies would be a disaster.

    Steve has addressed your point about “one incredibly simple theory”. Where is it? The simple presentations don’t stand up as science. The real theory, with quantities and reasoning attached, does not exist in any coherent and testable form. In practice, the theory being used is the output of climate models, fed with a mixture of data, approximations, and guesses, all of them much simpler than the real world.

    True, there are a dozen different avenues of evidence. But what do they prove? We only have that many lines of evidence for the most recent warming, and together they are strong enough to make us fairly sure temperatures have risen a few tenths of a degree since 1975. That is consistent with greenhouse theory, but also with other possible explanations – natural variation, aerosol warming (see Ramanathan’s recent paper) or solar warming – or with some combination of causes. Auditing the temperature record could have a profound impact on how we sort this out. You concede the audit could change the trend by 0.2 degrees. What if that turned out to be an additional 0.2 degrees cooling between 1940 and 1975, so that the fall in that period was roughly the same as the rise since then? That would make it much more likely that natural variability was the main cause of recent warming – and make disastrous greenhouse warming very unlikely.

    You know, there were a dozen different lines of evidence for eugenics, too. Family histories, the fates of separated twins, IQ studies, cranial measurements, crime statistics, characterological studies etc. etc. There was one incredibly simple theory to explain all that too – the defective gene. The theory had a profound impact on public policy for over 50 years – restrictive immigration laws and sterilization of imbeciles across much of the Western world, to say nothing of Nazi attempts at racial purification. Yet there was a good reason why that theory was so “incredibly” simple. It was bull. Some of us suspect global warming is the same.

  191. Andrey Levin
    Posted Sep 9, 2007 at 5:20 AM | Permalink

    My congratulations and condolences to Steve.

    You will have on your hands a whole mess of Hansen’s motivated adjustments, made year after year, publication after publication, to try to make a bit of sense out of…

  192. Posted Sep 9, 2007 at 5:25 AM | Permalink

    #143 ER

    “Every one of you is smart. You are on a web based blog, for God’s sake, discussing climate change. You have technology skills. When there are a dozen different avenues of evidence, and one incredibly simple theory to explain all that evidence, why are you spending all this precious time and human capital trying to falsify one aspect of the evidence. As I said in my initial posting, I’m stunned at everyone’s attitude. It seems like such an emotional crusade. Indeed, as you say, we are way beyond science.”

    You need to visit this blog more often. This blog is not an emotional crusade. Don’t get me wrong, a lot of passion is expressed from time to time on this blog including by myself. But that is very much understandable given what is at stake.

    So what is at stake? IMO the reputation of science in the eyes of the general public, and in particular respect for the application of the scientific method! What Steve M has consistently shown to date on this blog is that within the climate science community there are a number of eco-theologically, politically inspired so-called scientists who do not want the scientific method to be rigorously applied to the GHG hypothesis. They have made up their minds and are trying to make up our minds for us as well. They are attempting (and arguably succeeding) to force life changes upon our society which are based on poor data, suspect statistical methods and appeals to authority. A great deal of resources are currently being spent on climate change research as a consequence of the actions of these few, and valuable resources are as a direct consequence being diverted from dealing with real world issues such as poverty, famine and disease eradication.

    This blog is about systematically examining the evidence that lies behind the claims of these eco-theologically inspired few that the current warming trend is unprecedented and is directly caused by man and his consumption of fossil fuels. There are few on this blog who question that the earth is warming; there are many who disagree about the cause. There are many who most definitely disagree with the statements of the “few” of catastrophic warming, of tipping points, of species extinctions etc. The vast majority of the people who post on this blog are interested in the proper application of the scientific method. Once this has happened (thanks to people like Steve M, Anthony W etc) then we’ll make up our own minds about the validity of the GHG hypothesis. Until then, as the scientific method and the Popperian requirement for falsifiability dictate, we will remain skeptical.

    KevinUK

  193. GTTofAK
    Posted Sep 9, 2007 at 5:29 AM | Permalink

    I wonder if Gavin has even the faintest clue that Hansen is going to cover his own ass and do as all politicians do in these situations: offer up his subordinates’ heads, chiefly Gavin’s.

  194. Posted Sep 9, 2007 at 6:21 AM | Permalink

    My impression, however, is that the National Academy of Sciences has validated the Mann result. To quote: “It can be said with a high level of confidence that global mean surface temperature was higher during the last few decades of the 20th century than during any comparable period during the preceding four centuries. This statement is justified by the consistency of the evidence from a wide variety of geographically diverse proxies.”

    But the National Academy of Sciences also said:

    * Large-scale surface temperature reconstructions yield a generally consistent picture of temperature trends during the preceding millennium, including relatively warm conditions centered around A.D. 1000 (identified by some as the “Medieval Warm Period”) and a relatively cold period (or “Little Ice Age”) centered around 1700. The existence and extent of a Little Ice Age from roughly 1500 to 1850 is supported by a wide variety of evidence including ice cores, tree rings, borehole temperatures, glacier length records, and historical documents.

    (Bold Added)

    Some calculation:
    Four centuries before now is: 2007 – 400 = 1607. Right in the middle of the Little Ice Age which was from roughly 1500 to 1850.

    It says that we are warming up after the little ice age. (And CO2 naturally increases a few hundred years after warming starts. See: realclimate.org/index.php?p=13)

    Thanks
    JK

  195. Falafulu Fisi
    Posted Sep 9, 2007 at 6:51 AM | Permalink

    Erik Ramberg said…
    I think it is fairly safe to say that the majority of posters are global warming skeptics.

    OK, Erik. It is good that you know that I am one of those posters here, a skeptic.

    While being a skeptic is not a bad thing per se, there comes a time when physical evidence paints a forceful picture that has to be faced realistically.

    My training was in Physics, but I write numerically based software for a living. My issue with AGW is not really the physical evidence that you say exists, since the collected data are poorly observed and not enough has been collected. The heart of the problem lies in the over-reliance on computer projections that are based on simplistic mathematical models of the physics. Climate is a complex system, and research in complex-system theory as applied to climate has only just started to take off; you can’t rely on computer projections that are based on primitive and simplistic physics models. Some of those very difficult issues related to climate modeling were raised in a NASA-sponsored workshop a few years ago (see below) and, to the best of my knowledge, there is still little progress in this area. If you click on the link shown, then you must click refresh again to avoid the text being cluttered to the left hand side.

    “WORKSHOP ON CLIMATE SYSTEM FEEDBACKS”
    http://grp.giss.nasa.gov/reports/feedback.workshop.report.html

    You know, Erik, for most problems in Physics, if you take away the numerical modeling, then any claim about the validity of certain laws of Physics is simply unverifiable. The claim of AGW backed up by numerical modeling is not yet generalizable: it fits the data in a narrow time domain but is completely useless when applied to a different time domain. When physical models behave like this, then it is quite correct to question their validity. It is just the same as black-body radiation: Wien’s law and the Rayleigh–Jeans law work well at certain frequencies but fail to generalize, while Planck’s law generalizes black-body radiation; the same standard applies to climate models. At the current stage, climate models are still more like the Wien or Rayleigh–Jeans description of the physics, i.e., not generalizable.
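
    For readers who want to see the black-body analogy concretely, here is a small sketch (standard textbook formulas, nothing specific to any climate model) comparing the three spectral radiance expressions at a low, an intermediate and a high frequency:

    import math

    H = 6.626e-34    # Planck constant, J s
    K_B = 1.381e-23  # Boltzmann constant, J/K
    C = 2.998e8      # speed of light, m/s

    def planck(nu, T):
        return (2.0 * H * nu**3 / C**2) / (math.exp(H * nu / (K_B * T)) - 1.0)

    def rayleigh_jeans(nu, T):
        return 2.0 * nu**2 * K_B * T / C**2

    def wien(nu, T):
        return (2.0 * H * nu**3 / C**2) * math.exp(-H * nu / (K_B * T))

    T = 300.0
    for nu in (1e11, 1e13, 1e15):   # Hz
        print(nu, planck(nu, T), rayleigh_jeans(nu, T), wien(nu, T))

    # Rayleigh-Jeans tracks Planck only at the low frequency, Wien only at the
    # high frequency; neither approximation generalizes across the whole range.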

  196. steven mosher
    Posted Sep 9, 2007 at 7:13 AM | Permalink

    re 177.

    That sounds cool. If we don’t do something it will be like herding cats, with everybody posting code frags here and there.

  197. Falafulu Fisi
    Posted Sep 9, 2007 at 7:27 AM | Permalink

    Erik Ramberg said…
    Real Climate has the links to the equations, if you are brave enough to read that web site.

    That’s right, Erik. I posted the following climate modeling paper (see the end of this message) at RealClimate, and none of their members or readers over there seem to understand it; perhaps the equations are just too complex for them. I am just waiting to see if any reader or climate scientist from RealClimate brings up the issues related to the paper (see below) which I posted up there. I suspect that no one there has any knowledge of the subject of feedback control, or otherwise they simply ignore it, since bringing it up for discussion would make all the available climate models to date look very simplistic and trivial. So, just because RealClimate has some top-notch climate scientists, it doesn’t follow that they are knowledgeable in every numerical modeling technique known to man. This is not the case.

    The paper was written by another NASA scientist, Dr. William Rossow, who chaired the Workshop on Climate System Feedbacks a few years ago, which I mentioned in my message #203.

    Inferring instantaneous, multivariate and nonlinear sensitivities for the analysis of feedback processes in a dynamical system: Lorenz model case-study

  198. steven mosher
    Posted Sep 9, 2007 at 7:42 AM | Permalink

    Well I have yet to find the code for the urban/rural adjustment.. as described
    in H99. And I see no place where nightlights is used.. maybe an offline process.

    I’ll let you know when I do

  199. Posted Sep 9, 2007 at 7:44 AM | Permalink

    Steve, or someone who has already looked at the data: gistemp.txt explains the discontinuities they have been fixing. St Helena and Hawaii had an extra 0.8 or 1.0 degrees Celsius added to the earlier portions of their data. Can you say whether the shift was correctly calculated? Is there a systematic method to look for suspicious discontinuities more or less objectively in all the data?

  200. steven mosher
    Posted Sep 9, 2007 at 8:03 AM | Permalink

    208.

    Those shifts are documented in H99

    We also modified the records of two stations that had obvious discontinuities. These stations, St. Helena in the tropical Atlantic Ocean and Lihue, Kauai, in Hawaii are both located on islands with few if any neighbors, so they have a noticeable influence on analyzed regional temperature change. The St. Helena station, based on metadata provided with MCDW records, was moved from 604 m to 436 m elevation between August 1976 and September 1976. Therefore assuming a lapse rate of about 6°C/km, we added 1°C to the St. Helena temperatures before September 1976. Lihue had an apparent discontinuity in its temperature record around 1950. On the basis of minimization of the discrepancy with its few neighboring stations, we added 0.8°C to Lihue temperatures prior to 1950.
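
    The St. Helena number can be checked directly from the figures quoted (a tiny sketch, nothing more):

    # St Helena adjustment implied by the H99 text: a 604 m -> 436 m station move
    # with an assumed lapse rate of about 6 C per km.
    lapse_rate_per_m = 6.0 / 1000.0         # deg C per metre
    delta_elev_m = 604.0 - 436.0            # elevation drop of the move, metres
    print(lapse_rate_per_m * delta_elev_m)  # about 1.0 C, the amount added pre-Sept 1976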

  201. Andy
    Posted Sep 9, 2007 at 8:28 AM | Permalink

    Re #177/#205

    I’m uploading the original code to Google Code under the project name “open-gistemp” as I type this post. Feel free to use this as an interim solution for version control while Steve and Anthony work out the long-term solution.

  202. Bruce
    Posted Sep 9, 2007 at 9:03 AM | Permalink

    re 209.

    There are lots of “we added”

    Are there any “we subtracted”?

  203. John Hekman
    Posted Sep 9, 2007 at 9:13 AM | Permalink

    Waldo is now hiding in the SST, Steve. This will be the final battleground.

  204. Alexander
    Posted Sep 9, 2007 at 9:13 AM | Permalink

    As long as the “simplified” code reproduces the results from the original I don’t necessarily see a problem. One of the cornerstones of science is reproducibility. That is all that matters. If the simplified code can take the original data and reproduce the identical output, while being completely transparent, then it is “good”. If not, then God save Hansen from the hordes on this blog.

  205. Alexander
    Posted Sep 9, 2007 at 9:27 AM | Permalink

    Re 177
    Open source is a great idea. One could make a sourceforge.net project out of this. Actually, regardless of Hansen’s code, an open source project to independently process the data might be a good idea.

  206. Posted Sep 9, 2007 at 9:31 AM | Permalink

    #194 There seems to be no code control at all over this code. I suggest some thought about constructing test data first as a reference set that can be used to compare results before and after any changes to the code are made.

    For example, what is the minimal set of random data that can be fed into the processing chain, that gives a result? Eg, create a set of perfect climate stations that are evenly distributed with no trend and no missing values, and run this data set through periodically to make sure changes to the code have not introduced spurious signals.
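
    Something along these lines would do as a starting point (a rough sketch only; the layout below is illustrative and is not the GISS/GHCN v2 record format):

    import random

    def make_synthetic_stations(n_lat=18, n_lon=36, start_year=1900, end_year=2000,
                                base_temp=14.0, noise_sd=0.0, seed=0):
        # Evenly spaced "perfect" stations: complete records, no trend.
        # Any trend appearing after processing such data would be spurious.
        rng = random.Random(seed)
        rows = []
        station_id = 0
        for i in range(n_lat):
            for j in range(n_lon):
                lat = -85.0 + i * 10.0
                lon = -175.0 + j * 10.0
                for year in range(start_year, end_year + 1):
                    monthly = [round(base_temp + rng.gauss(0.0, noise_sd), 2)
                               for _ in range(12)]
                    rows.append((station_id, lat, lon, year, monthly))
                station_id += 1
        return rows

    rows = make_synthetic_stations()
    print(len(rows), rows[0])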

  207. Mark
    Posted Sep 9, 2007 at 9:44 AM | Permalink

    Not sure (#212)

    Hi,

    Sorry about that, my misunderstanding.

    I have now removed all the glibc errors by applying the changes previously suggested and commenting out the following two lines in the function StationString_dealloc of stationstringmodule.c.

    Comment out the following …
    PyMem_Free(self->data_sp);
    at line 188, and
    PyMem_DEL(self);
    at line 191, and the code should run, although there is probably a much more elegant solution.

    I now have STEP1 running to completion.

    STEP2 has no Python scripts and wants to use f77 as the Fortran compiler. I have used gfortran (the GNU Fortran compiler) to compile the code without errors.

    The amount of information supplied on how to run the script(s) at this point is nil, so it’s hard to figure out what’s required. Any information on what GHCN.CL is would be helpful.

    Mark.

  208. H
    Posted Sep 9, 2007 at 9:44 AM | Permalink

    #181

    Steve, I too have looked for a credible explanation of how CO2 warms the atmosphere. I could not find one and decided to calculate it myself. So, I made a simple energy balance calculation based on the international standard atmosphere. I’m sorry that the text is in Finnish, but the formulas are self-explanatory 🙂 Anyway, the last figure shows it all. I studied 4 cases of H2O feedback.

    blue – no feedback
    light green – linear decrease from 100% response at SSL to 0% at 4 km
    green – linear decrease from 100% response at SSL to 0% at 11 km
    red – 100 % through the troposphere

    Click to access Hiilidioksidin_vaikutus_H.pdf

    Only the last case shows a stronger positive feedback. It is the one favoured by the modellers! The other climatologists seem to favour the second. My calculation indicates that the increase of the H2O content of the upper troposphere is crucial to the H2O feedback. Has this been observed? I am sceptical that the big GCMs get it right.

  209. John V.
    Posted Sep 9, 2007 at 9:46 AM | Permalink

    #211:

    There are lots of “we added”
    Are there are any “we subtracted”?

    Most (all?) of the “adds” are to the early part of the record. This is equivalent to subtracting from the latter part of the record; i.e., the changes actually reduce the calculated warming trend.
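
    A toy check of that point (the numbers are made up purely for illustration): adding a constant to the early part of a series can only lower the fitted trend.

    years = list(range(1900, 2001))
    raw = [0.005 * (y - 1900) for y in years]               # synthetic 0.5 C/century series
    adj = [t + 0.8 if y < 1950 else t for y, t in zip(years, raw)]

    def trend_per_century(xs, ys):
        # Ordinary least-squares slope, scaled to degrees per century.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope * 100.0

    print(trend_per_century(years, raw))   # about 0.5 C/century
    print(trend_per_century(years, adj))   # smaller, because the early years were warmed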

  210. Larry
    Posted Sep 9, 2007 at 9:48 AM | Permalink

    219 – however, when Mann takes inflated surface data like that and duct tapes it to his proxies, it makes the hockeystick look worse than it really is.

  211. Posted Sep 9, 2007 at 9:51 AM | Permalink

    Sort of my personal crossword puzzle

    Getting this compilable in a single step would be a great step forward.

  212. steven mosher
    Posted Sep 9, 2007 at 9:52 AM | Permalink

    Ok. RC refused the nice thank-you note I posted yesterday. It was nice. I tried again. Then I reread the 1934 thread; basically, I was looking for the first time I posted “free the code”.

    Then I found comment 211 and I decided to go Mosh pit on them. We will see if this post gets through.
    SUBMITTED AT RC

    RE 620.

    For the record. In Comment 211 Gavin stated:

    Response: As I said above, complex codes can’t be derived directly from the papers and so should be available (for instance ModelE at around 100,000 lines of code – and yes, a few bugs that we’ve found subsequently). The GISTEMP analysis is orders of magnitude less complex and could be emulated satisfactorily in a couple of pages of MatLab. – gavin]

    Now, go download GISTEMP. I did. It looks like more than two pages of code.

    So, Gavin characterized the problem of “duplicating” the analysis as a trivial one. It’s not.

    Why mischaracterize the problem this way?

  213. Posted Sep 9, 2007 at 10:02 AM | Permalink

    Mr. Ramberg, I find it fascinating that you as a scientist on one hand claim there is AGW and that we all should essentially get over it because you have “some” data that suggests you might be right, yet at the same time ignore the other data available that shows otherwise. Have you ever heard of confirmation bias? The fact that alternative data exists, such as 1998 no longer being the warmest year but 1934, should remind you as a scientist to keep an open mind. How about the alternative data from Africa and South America? No warming there either? How can you as a scientist disregard the vast body of data indicating a negative temperature trend in 3 significant parts of the world and yet continue to claim that the warming is global rather than a regional warming in Europe and Asia?

    You reference Mann (hockey stick) as being essentially correct, yet the IPCC has relegated the hockey stick to less than obscure status. What was the failing of the hockey stick by Mann? I’m not a scientist; I’m an engineer by training. What my training tells me is that averages are deceptive and, as has been pointed out on this site, there are many approaches to determining the mean. If Europe and Asia are warming significantly more than the US, and Africa and South America are cooling, what is the result for the global average temperature? Tell us, did Mann ever share his source code?

    What Hansen has done by sharing the code is to avoid the nasty publicity that Mann incurred by his refusal to turn over his code, which destroyed his credibility. Tell us, is anyone seriously waving around the hockey stick anymore, claiming it is proof of AGW? No? At this point Hansen is attempting to delay the inevitable debunking of his numerous and systematic errors that helped support the same erroneous conclusion regarding the US temperature trend. I suspect once Steve M. and others have thoroughly evaluated the code and Anthony Watts has surveyed all the weather stations in the US, we will have a clearly defined cooling trend. Then what will you say? The US doesn’t count because it represents a small percentage of land area, as is now being said? What about Africa and SA? You can dismiss the US, Africa and SA individually; however, together they paint a picture that says something else is occurring. It sounds to me that when someone proclaims they are for AGW and that we should all just believe their assertion, they are in fact engaging in circular reasoning. Scientists don’t engage in circular reasoning; they formulate a hypothesis and then test it against real world observations over and over again. What do you do when your hypothesis doesn’t match the observations? Toss the observations? Or toss the hypothesis?

  214. John V.
    Posted Sep 9, 2007 at 10:04 AM | Permalink

    Reference Data:
    I suggest a snapshot of the GISS data would make the best reference data. Any cleaned-up version of the program must start with dset=0, match dset=1, and finish with dset=2. The GISS data is the only data that will be run through the original GISS program (at NASA), so only the GISS data can be used to validate.
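
    As a sketch of the kind of regression check this implies (the file names and the whitespace-separated numeric layout below are placeholders, not the actual GISS dset file format):

    def load_numbers(path):
        # Read every whitespace-separated numeric token from a text file.
        with open(path) as fh:
            return [float(tok) for line in fh for tok in line.split()]

    def matches_reference(new_path, ref_path, tol=1e-6):
        # True if a re-run's output matches the saved reference snapshot.
        new, ref = load_numbers(new_path), load_numbers(ref_path)
        return len(new) == len(ref) and all(abs(a - b) <= tol for a, b in zip(new, ref))

    # e.g. matches_reference("dset1_rerun.txt", "dset1_reference.txt")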

    Source Code:
    The use of Fortran is expected given the age of the program. However, Fortran programmers are now hard to find. I am considering a port of the program to C (using f2c) and eventually to Java or C# to make it more accessible to modern programmers. (My Fortran is *extremely* rusty). The reference data will be extremely important in this effort.

    Each processing step is done with a separate Fortran (or occasionally Python) program. This is very useful for porting and evaluating algorithm changes. Any of the Fortran programs can be modified or ported individually, as long as they read and write the same data files.

    In line with the recent discussions here, I suggest modifying STEP1 (combining various sources at a single location) to properly deal with scribal variations.

    This is just my 2 cents.
    I don’t know when I will find time to work on this, but I hope I can do it soon.
    On the other hand, it may make more sense to wait for the simplified version from GISS.

  215. Earle Williams
    Posted Sep 9, 2007 at 10:13 AM | Permalink

    Re Copyright and License

    The works by the U.S. government are not subject to copyright. This includes all works created by employees of the U.S. government as part of their duties. This generally applies to works created under contract but may not apply to works created under a grant. Given that this information is distributed by NASA it is a reasonable assumption that this code is a work by the U.S. government and not subject to copyright.

    See http://www.copyright.gov/circs/circ1.html#piu

    Many federal agencies request attribution and I certainly encourage complete recognition and attribution of the work completed by Dr. Hansen and his staff.

  216. Larry
    Posted Sep 9, 2007 at 10:17 AM | Permalink

    Many federal agencies request attribution and I certainly encourage complete recognition and attribution of the work completed by Dr. Hansen and his staff.

    Absolutely. You made your bed, now sleep in it.

  217. steven mosher
    Posted Sep 9, 2007 at 10:22 AM | Permalink

    Re 216.

    My thoughts exactly. It might be instructive to see what the global trend would be if all data sets were filled out with 0 C.

    Also, studies on the predominance and importance of stations with long records.

  218. Erik Ramberg
    Posted Sep 9, 2007 at 10:29 AM | Permalink

    Re: 224

    I find the unprecedented heating of the Arctic regions (>5 sigma deviations from normal) and record loss of sea ice in the polar ice cap (the Northwest Passage now exists) to be valid data points. I seriously doubt that any kind of global cooling trend can accommodate that data, no matter the results of the Hansen audit.

    The same holds true for species migration. I’m not an expert, but I doubt that there is any evidence for migration patterns supporting a cooling trend. But, now that I have mentioned that, it seems just as important to audit the biological evidence. Any takers?

  219. steven mosher
    Posted Sep 9, 2007 at 10:31 AM | Permalink

    RE 217.

    Agree. Essentially I’d like to see the code running in independent hands so that certain decisions can be tested.

    1. The decision to favor the records of “sites with long records”. With Anthony’s work it might be interesting to look at good sites as the anchor.
    2. The 1200 km issue. The correlation study isn’t well documented.. what happens when the 1200 km radius changes? (See the sketch after this list.)
    3. Different treatment of rural/urban.
    4. Different spatial averaging approaches.
    5. Approaches to missing data.
    6. Re-examining the stations that have been excised from the record.
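
    For item 2, here is a rough sketch of Hansen-Lebedeff-style distance weighting with the radius left as a parameter (the weighting form, falling linearly from 1 at the grid point to 0 at the cutoff, follows the published description; the station list is invented for illustration):

    import math

    EARTH_RADIUS_KM = 6371.0

    def great_circle_km(lat1, lon1, lat2, lon2):
        # Haversine distance between two points, in km.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = p2 - p1
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    def station_weight(dist_km, radius_km=1200.0):
        return max(0.0, 1.0 - dist_km / radius_km)

    def weighted_anomaly(grid_lat, grid_lon, stations, radius_km=1200.0):
        # stations: list of (lat, lon, anomaly); returns the weighted mean anomaly.
        num = den = 0.0
        for lat, lon, anom in stations:
            w = station_weight(great_circle_km(grid_lat, grid_lon, lat, lon), radius_km)
            num += w * anom
            den += w
        return num / den if den > 0 else float("nan")

    stations = [(51.0, 0.0, 0.3), (48.0, 11.0, 0.5), (40.0, -3.0, -0.1)]
    print(weighted_anomaly(50.0, 5.0, stations, 1200.0))  # with the standard radius
    print(weighted_anomaly(50.0, 5.0, stations, 500.0))   # same point, tighter radius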

  220. Earle Williams
    Posted Sep 9, 2007 at 10:33 AM | Permalink

    Re #230

    Erik Ramberg,

    I suggest you move this over to the Unthreaded discussion as our host has already indicated that this thread is focused on the GISTEMP code.

    Thanks!

  221. Posted Sep 9, 2007 at 10:35 AM | Permalink

    #230 See Idso’s review of the evidence here http://www.marshall.org/article.php?id=150

    Full Text of “The Specter of Species Extinction: Will Global Warming Decimate Earth’s Biosphere?” (PDF, 190 KB)

    Over the past century and a half of increasing air temperature and CO2 concentration, many species of animals have significantly extended the cold-limited boundaries of their ranges, both poleward in latitude and upward in elevation, while they have maintained the locations of the heat-limited boundaries of their ranges. Consequently, individual animal species, like individual plant species, have measurably increased the areas of the planet’s surface that they occupy, creating more overlapping of ranges, greater local species richness, and an improved ability to avoid extinction.

  222. Larry
    Posted Sep 9, 2007 at 10:36 AM | Permalink

    230 –

    the Northwest Passage now exists

    Tell that to that poor fool stuck in the ice off of Siberia right now.

  223. Steve McIntyre
    Posted Sep 9, 2007 at 10:39 AM | Permalink

    #230. Erik, I notice that you’ve not responded to my request for a clear exposition of how doubled CO2 results in 2.5 deg C. I don’t have strong views one way or the other on this prediction. My own approach is to try to understand things in detail – hence the request for a reference. So I take it that you don’t know of one.

    I’ve spent more time on proxies than on temperature and, in my opinion, there is strong evidence for a warm MWP in the areas where we have the clearest evidence of current warming: northern Asia and Europe. Some of the handling of the proxy data by Briffa, Mann and others is very unsatisfactory, as discussed on many threads here.

    The Arctic warming is not “unprecedented”. It was much warmer in the Pliocene, not to speak of the Eocene. More recently, it was warmer in the previous interglacial (the Eemian) ~110K years BP and in the Holocene Optimum about 8000 BP and perhaps even in the MWP.

  224. Anderson
    Posted Sep 9, 2007 at 10:43 AM | Permalink

    There are many Fortran tutorials available online. For example, here is one originally developed at Stanford University:

    http://www.tat.physik.uni-tuebingen.de/~kley/lehre/ftn77/tutorial/

    Learning Fortran is probably easier than learning C, and considerably easier than learning C++/C# or the extensive set of class libraries that accompany modern programming languages, both compiled and scripted.

    Anyone who has done even a little programming in C or C++ (or several other languages) should not have difficulty learning sufficient Fortran to review this code base.

  225. Earle Williams
    Posted Sep 9, 2007 at 10:44 AM | Permalink

    Re #234

    Larry,

    It is possible to get through, but it’s certainly not the first time. I was in Nome, Alaska, three days ago and a large cruise ship was moored there. Apparently one made it through two years ago as well. A little googling suggests that it happened in 1984, 1985, and 1988.

  226. Murray Duffin
    Posted Sep 9, 2007 at 10:45 AM | Permalink

    Re: 180 OK I am happy to kick in a share also. Happy to split with Brian G. Just tell me how. A brief note on what a wiki is/does would also be helpful. Murray

  227. John V.
    Posted Sep 9, 2007 at 10:51 AM | Permalink

    #236:
    You are right that Fortran is not difficult to learn (it is a very simple language). However, most programmers working since the early 90s are already familiar with the C-syntax shared by C, C++, Java, and C#. It would only be necessary to learn the most basic class libraries to port the GISTEMP code.

    The biggest hurdle with Fortran is the lack of easy-to-obtain-and-use development environments and compilers, particularly for Windows.

  228. Demesure
    Posted Sep 9, 2007 at 11:00 AM | Permalink

    #215 Alexander, I thought about Sourceforge too as a repository for versioning and documenting the source code and the datasets (test vectors and real GISS data), since several mods have already been made to get the project to compile. It would keep useful and concrete steps from being drowned in OT comments. Just my two cents.

  229. Andy
    Posted Sep 9, 2007 at 11:05 AM | Permalink

    Well that’ll teach me to hit “submit” and walk away for a while 🙂 Here’s a message that got eaten by the spam filter, with the URL now sanitized for your safety:

    ###

    Here’s the link to the google code version: code DOT google DOT com/p/open-gistemp/

    I had a couple of fits and starts with the setup and then realized the problem was that I also had the STEP0 input files included in the upload. I untarred a clean copy and reloaded the entire source tree, which is what leads to the base revision number being 7.

    To the extent anyone’s interested in using this, the first thing someone needs to do is run a diff against the GISS sources and validate that it is, in fact, an identical copy.

  230. bernie
    Posted Sep 9, 2007 at 11:19 AM | Permalink

    Erik:
    You said in #230: “I find the unprecedented heating of the Arctic regions (>5 sigma deviations from normal)” – measuring what against what exactly?

  231. Wayne Holder
    Posted Sep 9, 2007 at 11:46 AM | Permalink

    Having spent a bit of time just looking over the code with an eye toward trying to modernize it, I’m struck by the fact that Hansen and his programmers really seem to have gotten themselves stuck in the computing stone age. In particular, I find the reliance on integer-based math and odd rounding techniques to be completely baffling in an age where numerous ways exist to do arbitrary precision math. Honestly, while it might be interesting to try and make this code compile and run (perhaps just to see if it really does reproduce Hansen’s claims), I think it makes far more sense to extract and document the algorithms, then redo the calculations in a modern language, such as Java, using something like this:

    http://java.sun.com/j2se/1.5.0/docs/api/java/math/BigDecimal.html
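
    For what it’s worth, a minimal sketch of the same idea in Python (the decimal module plays the role of BigDecimal here; the readings are invented for illustration):

    from decimal import Decimal, ROUND_HALF_EVEN

    # Decimal values stay exact and rounding is explicit, rather than relying
    # on scaled integers and ad hoc rounding.
    readings = [Decimal("14.05"), Decimal("13.95"), Decimal("14.16")]

    mean = sum(readings) / Decimal(len(readings))
    print(mean)                                             # 14.05333333333333333333333333
    print(mean.quantize(Decimal("0.01"), ROUND_HALF_EVEN))  # 14.05

    # Contrast with binary floats, where 0.1-like values are not exact:
    print(sum([14.05, 13.95, 14.16]) / 3)                   # approximately 14.053333333333333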

    It would then be very interesting to compare the results produced from code that tries to preserve as much precision as possible with the results from Hansen and company’s Fortran, etc. code (assuming someone does get it to compile, run and spit out something close to his published results.) Of course, it might make much more sense to implement Hansen’s calculations in a more math-friendly package such as Mathematica:

    http://en.wikipedia.org/wiki/Mathematica

    Being more conversant in Java I’m inclined to take that approach, but I think redoing Hansen’s work in Mathematica might open it up to participation by a broader audience, particularly people with more experience with formal mathematics than I have.

  232. John V.
    Posted Sep 9, 2007 at 11:48 AM | Permalink

    Erik:
    I understand your frustration — I really do. Overwhelmingly, the evidence supports AGW and the negative consequences of AGW. I live in Alberta, where nobody wants to hear that. I’ve been in arguments where I’ve been left so frustrated that my hands were shaking. I’ve lost sleep over how to convince people that fiction authors and Exxon-funded “studies” do not match up against peer-reviewed science. That small mistakes in one analysis do not make the entire body of climate research come crumbling down.

    I’m not going to convince anybody here. And neither are you.

    I’m here because there are problems in some of the published studies. I’m hoping to help fix some of the problems. I support Steve McIntyre’s efforts to audit the science — a clear and open audit may be the only way to dispel some of the conspiracy theories.

    The best outcome would be that the overwhelming scientific consensus is proven wrong, AGW is not real, and no hard choices need to be made (very unlikely).

    The next best outcome would be that the scientific consensus is right, that the public and our leaders believe it, and that we are all willing to make the necessary hard choices.

    Both of these outcomes are a little bit more likely with an open and accountable audit.

  233. John V.
    Posted Sep 9, 2007 at 11:56 AM | Permalink

    #244 (Wayne Holder):
    I completely agree with you (see my post #225). A modern implementation could:

    – be more readily updated and experimented with;
    – have much better visualization options;
    – include a GUI for those less comfortable with computers;

    My language of choice is C# since it seems to have better client-side support (and is available in Unix with Mono). I’m not convinced that Mathematica is a good idea because it would severely limit the number of eyes on the code.

  234. David
    Posted Sep 9, 2007 at 11:59 AM | Permalink

    #239: I have not used Photran before, but you guys might want to check it out. It is an IDE for Fortran based on Eclipse, which I am very familiar with. Eclipse is very nice and available for many platforms:


    Eclipse is an open source community whose projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle. A large and vibrant ecosystem of major technology vendors, innovative start-ups, universities, research institutions and individuals extend, complement and support the Eclipse platform.

    From: http://www.eclipse.org

  235. Larry
    Posted Sep 9, 2007 at 12:01 PM | Permalink

    John V. says:

    I’m not going to convince anybody here. And neither are you.

    Then stop cluttering the thread.

  236. Anachronda
    Posted Sep 9, 2007 at 12:17 PM | Permalink

    #239:

    The biggest hurdle with Fortran is the lack of easy-to-obtain-and-use development environments and compilers, particularly for Windows.

    There’s always OpenWatcom: http://www.openwatcom.org/ I’m a big fan of the C compiler, but haven’t done anything much with the F77 compiler.

  237. steven mosher
    Posted Sep 9, 2007 at 12:23 PM | Permalink

    Sourceforge does have a CASE tool for reverse-engineering Fortran, John V.

    Back in the early 90s my guys used f2c to do a bunch of conversions. I don’t recall it
    being pretty, but it worked..

    I suspect we will have two or three groups of folks attempting different things.
    All good, to each his own

    1. The rewrite team. Looks like you volunteered for that! (One vote for Java from me.)
    2. The “get it running” team. Lots of folks headed down that path.
    3. The “understand the code” team. I suspect a bunch of folks headed down that path.

  238. Tony Edwards
    Posted Sep 9, 2007 at 12:25 PM | Permalink

    Erik and John V. Given your opinions, might I suggest that you go to the following address and respond to the following challenge, which is posted on the Junkscience.com website:

    http://ultimateglobalwarmingchallenge.com/

    CHALLENGE
    $100,000 will be awarded to the first person to prove, in a scientific manner, that humans are causing harmful global warming. The winning entry will specifically reject both of the following two hypotheses:
    UGWC Hypothesis 1

    Manmade emissions of greenhouse gases do not discernibly, significantly and predictably cause increases in global surface and tropospheric temperatures along with associated stratospheric cooling.
    UGWC Hypothesis 2

    The benefits equal or exceed the costs of any increases in global temperature caused by manmade greenhouse gas emissions between the present time and the year 2100, when all global social, economic and environmental effects are considered.

    This would appear to be a genuine offer, but no one seems to be able to provide the necessary proof. If this cannot be done, then the “consensus” is plain, flat wrong.

  239. tetris
    Posted Sep 9, 2007 at 12:28 PM | Permalink

    Re: 230
    Erik
    Pls consult: Chapman, W.L. and Walsh, J.E. “A synthesis of Antarctic Temperatures”, 2007, Journal of Climate, 20, 4096-4117. Conclusion: A review of all available data shows that with the exception of a minor temp increase on the Ross Peninsula, Antarctica has been cooling.

  240. steven mosher
    Posted Sep 9, 2007 at 12:36 PM | Permalink

    re253.

    Wrong pole.

  241. Posted Sep 9, 2007 at 12:39 PM | Permalink

    Falafulu Fisi September 9th, 2007 at 7:27 am,

    Thanks for that!!

    I have a process with basically one output and multiple inputs. I’m trying to sort out a control process for the system.

    Normally what is done is to have a hierarchy of control loops operating in different time scales with feed forward.

    ===========

    Let me see if I understand what you are trying to get at:

    You are trying to discern the instantaneous values of the transfer co-efficients by looking at how the system responds to disturbances over time.

    Kind of a more sophisticated Ziegler-Nichols.

  242. David
    Posted Sep 9, 2007 at 12:56 PM | Permalink

    #251: I vote for Java too, using the Eclipse IDE. Java has many advantages. The benefit to doing all three is that:

    1. Getting the original code running allows people to examine the behavior of the current code for peculiarities, etc.
    2. Understanding the code is essential, for obvious reasons.
    3. The rewrite team gets it into a form we can live with.

  243. steven mosher
    Posted Sep 9, 2007 at 1:07 PM | Permalink

    re 256.

    What about an independent path to look at the old code and try to create a UML version?

    Does that make any sense?

  244. John V.
    Posted Sep 9, 2007 at 1:10 PM | Permalink

    #258:
    UML is (primarily) for modelling object-oriented programs. I don’t think it would yield much information for an old Fortran program.

  245. tetris
    Posted Sep 9, 2007 at 1:15 PM | Permalink

    Re: 254
    No Steven. Right pole. The AGW hypothesis postulates that BOTH poles should be warming. The Arctic is doing this to some extent and there are several plausible non-AGW explanations for this.
    The penguins in the dark down under, however, haven’t heard about this. Or maybe they’ve been talking to Waldo..

  246. Larry
    Posted Sep 9, 2007 at 1:17 PM | Permalink

    Before we run off converting this rat’s nest of Fortran into C or Java, what exactly is the objective? What are we trying to accomplish here?

  247. Larry
    Posted Sep 9, 2007 at 1:20 PM | Permalink

    260, that’s true that it isn’t global if it isn’t global. But I hope everyone understands that what’s anomalous in the Arctic is primarily driven by water temperatures. This is an oceanic phenomenon, not an atmospheric one.

  248. steven mosher
    Posted Sep 9, 2007 at 1:28 PM | Permalink

    re 261.

    Well Larry, the cats are off in three directions. That’s perfectly fine. That’s what being open results in.

    1. Mark and “not sure” and others are trying to get the code to compile and run. That’s a good thing. They will figure it out between them. IN FACT, SteveMc could start a thread dedicated to the “GET THE CODE RUNNING TEAM”. That way they would not be bothered with chatter about the ICE (not that kind of ICE, guys; the frozen water ICE).

    2. John V has been lobbying to do open climate code. He can take this code and follow that dream.
    Let him run down the path he loves.

    3. Other guys, I expect, are poring over the code looking for little bugs or clues to algorithms.

    So, there are a bunch of groups. Which path do you like? Hook up with like-minded guys and get cracking. Or watch and kibitz; that’s ok too.

  249. Francois Ouellette
    Posted Sep 9, 2007 at 1:36 PM | Permalink

    #198,

    Actually, the theory behind eugenics was mostly OK: it’s the simple Mendelian genetics rules. It’s the numbers that were wrong. If 10% of a population have 2 copies of a bad gene, it makes a lot of sense to sterilize them to get rid of the said gene, since you can get significant results within a few generations. But if it’s only 1% of the population, then it could take thousands of years even if you sterilize them all. It’s just not linear.

    So where the eugenicists got it wrong is in their definition of “feeble-minded”. Anyone who was poor and little educated (and that includes all the freshly arrived migrants who did not speak the native language well) was readily characterized as feeble-minded, and thus a carrier of defective genes. There was of course no scientific basis for such a categorization, as we know NOW. But in the minds of many people, including eminent scientists (and even, I just learned, Charles Lindbergh!), this was unquestionable evidence. YES, Erik, there was pretty much a CONSENSUS, spanning the whole political spectrum from extreme left to extreme right. And many scientists who had doubts would just shut up.

    So THE NUMBERS are important! A few tenths of a degree more or less, a larger or smaller solar influence, or aerosol effect (and God knows we have very little quantitative data on aerosols!), all these seemingly insignificant numbers have in reality a HUGE effect on whether or not the accumulation of CO2 due to fossil fuel burning will end up in a catastrophic warming, or no warming at all. Just take this simple example: say there was 0.5C warming. If it’s only CO2 and we double it, we might be in trouble. Now if it turns out that the Sun is responsible for half of it, and the Sun’s activity now goes down during the next 50 years, we will see no warming at all even if CO2 keeps increasing. The “standard” AGW theory is full of such “if’s”, that all relate to relatively poorly known factors, or poorly obtained data. You can paint a catastrophic picture, just as well as you could paint a benign picture. In short (and I’m saying this as a physicist too, who looked at most of the evidence) the uncertainty is much larger than proclaimed.

    Some say we don’t need proof (Naomi Oreskes, for one). I say, we don’t need absolute proof to take some action (after all, we all have fire insurance on our house), but we still need the best proof we can get, otherwise we will be at the mercy of any doomsayer. Does Hansen sound like a doomsayer? Yes! That makes him all the more suspicious. Just like the eugenicists were doomsayers. It’s only fair that his work be audited. The way the scientific community works right now, you can’t get a proper audit from them. It’s just the sociological reality. We are lucky to have blogs. The peer-reviewed journals have their usefulness, but in certain circumstances, their many flaws are just an impediment to scientific progress. If you read this blog thoroughly, you will find that there is genuine scientific progress made here, in the purest sense of the word.

  250. Larry
    Posted Sep 9, 2007 at 1:44 PM | Permalink

    Quoting from the Hansen memo linked in the body of the post:

    Another favorite target of those who would raise doubt about the reality of global warming is the lack of quality data from South America and Africa, a legitimate concern. You will note in our maps of temperature change some blotches in South America and Africa, which are probably due to bad data. Our procedure does not throw out data because it looks unrealistic, as that would be subjective. But what is the global significance of these regions of exceptionally poor data?

    As shown by Figure 1, omission of South America and Africa has only a tiny effect on the global temperature change. Indeed, the difference that omitting these areas makes is to increase the global temperature change by (an entirely insignificant) 0.01C.

    This is all the time that I intend to give to this subject, but in case you wonder why we subject ourselves to the shenanigans, there are scientific reasons, repeated here from the “history” introduction to the program description.

    Aside from the snotty unprofessional tone (obviously this wasn’t his idea to release this code), does Hansen really not understand the objections to the general quality of the data? I can’t tell if he’s being disingenuous, or if he’s that poor a scientist. I really have a hard time believing that after everything that’s been posted here that he doesn’t understand that the issue isn’t the actual numbers, but the confidence limits.

    Am I missing something?

  251. steven mosher
    Posted Sep 9, 2007 at 1:51 PM | Permalink

    265.

    He does throw data out because it looks bad. Hansen 2001.

    The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C.

  252. scp
    Posted Sep 9, 2007 at 2:00 PM | Permalink

    265 – Nobody has even started criticizing the code yet and he’s already flirting with the “fake but accurate” line of defense. I think maybe it gives a hint as to what the open climate community can expect to find.

    Of course, there’s apparently no consensus about whether the hockey stick was discredited or not, despite the unambiguous nature of the Wegman report. I’m afraid the we-know-better machine may keep right on running no matter what the code reveals. Sadly, “fake but accurate” might actually stand up to scrutiny. It seems to be sustaining belief in the hockey stick.

  253. UK John
    Posted Sep 9, 2007 at 2:32 PM | Permalink

    #260 This is why the Arctic sea ice is melting so fast: it’s called sunshine, for 24 hours a day!

    Quote From NSIDC:- August 2007 Contributing to the loss: sea-level pressure and clear skies

    Why have we seen such a rapid loss of sea ice in the summer of 2007? A major cause is unusually clear-sky conditions in the months of June and July. Figure 3 is a contour map of sea-level pressure averaged for June and July. High pressure dominated the central Arctic Ocean during this period, promoting very sunny conditions just at the time the sun is highest in the sky over the far north. This led to an unusually high amount of solar energy being pumped onto the Arctic ice surface, accelerating the melting process. Satellite data show that skies over the Beaufort Sea were clear or mostly clear for 43 of the 55 days between June 1 and July 23, 2007.

  254. steven mosher
    Posted Sep 9, 2007 at 3:15 PM | Permalink

    RE 272.

    two dogs? is that you? It’s your brother running deer!

  255. tetris
    Posted Sep 9, 2007 at 3:20 PM | Permalink

    Re: 262
    Larry
    That’s what I was alluding to with “several plausible non-AGW explanations”. :]

  256. Pedro S
    Posted Sep 9, 2007 at 3:27 PM | Permalink

    Re:112

    Normally, scientists do not release their software or raw data to the public, for very good reasons.

    Scientists DO release their software as a matter of course, though apparently not in climate science. Just check the huge number of freely available programs (with source code or web servers) in bioinformatics. In quantum chemistry, even commercial programs are usually released as source code, which the user may compile, check, audit, etc., as he/she sees fit. Bioinformatics and computational chemistry journals require new programs to be made available (often as source code), and atomic coordinates of chemical species described in quantum chemistry studies must be provided, so that independent verification may be possible.

    I have published work on these areas, and believe me: we do not think that is any significant burden: On the contrary, the prestige that comes from the fact that many other groups are using one’s software is a better reason to make it available than any possible “good reason” for witholding it than Erik Ramberg (#112) might think.

  257. Larry
    Posted Sep 9, 2007 at 3:32 PM | Permalink

    Ok, 270, why don’t you take that theory over to 911truth.org, where there are people receptive to that sort of thing.

  258. John F. Pittman
    Posted Sep 9, 2007 at 3:38 PM | Permalink

    #252 Tony Edwards
    Everybody: sorry for taking up bandwidth, but it always seems that certain items need repeating. Truthfully, the number 1 reason I respect Steve McI is the way he can keep such a polite and level approach. I swear I would be tempted to tell somebody to “stick it where the sun don’t shine” with just half the provocation I have seen directed at Steve. And so much of it is so dead wrong and easy to see, if someone would only read what he actually said.

    Tony, that is a “stacked deck” bet. One of the de-emphasised points in the executive summary of the IPCC AR4 is that it almost always pays to get rich rather than mitigate. If you do a cost/benefit ratio and get to the actual (or as close as I could figure) numbers (believe me, it took me days to dig through the crap), you will find (depending on how you weigh their sigma… “likely”, as in we likely did not do the math, or if we did, we won’t show you how we did it) that it is at least a 2:1 advantage to beef up your economy by emitting CO2 in order to afford the cost of mitigation (new, expensive, less reliable sources of energy being a given for present alternatives; I don’t count fusion). It was funny that John V. in #245 said:

    I’ve lost sleep over how to convince people that fiction authors and Exxon-funded “studies” do not match up against peer-reviewed science

    I could be wrong, but I would assume this is a dig at Dr. Michael Crichton, who, despite being a popular award-winning author, actually has excellent credentials. The problem that John does not recognize is that if you listen to Dr. Crichton in the NPR debate and go look up what the IPCC has written, he was actually supporting what the IPCC was advocating, though they are almost hiding it. One of the reasons he is correct, as he alluded to, is that we have a good estimate of the advantages a strong economy offers for meeting the challenges of climate change. We do not have a correspondingly good estimate of what benefits we reap from mitigation. This is easily discernible and predictable from the World Bank’s and UN’s work on poverty and its deleterious effects on humans.

    As far as Exxon goes, I have to give them money in order to get to work… I have yet to see them give any of it back (lol). As far as peer review goes, well, NOVA (great story of perception… hint: Spain and Spanish). This site is full of peer-reviewed errors that have been found. “A .05C here and a .05C there, pretty soon it adds up to some real anomalies” (forgive me, Senator Dirksen). But at least they were peer reviewed.

  259. Posted Sep 9, 2007 at 3:44 PM | Permalink

    Here I am, late again…
    It’s been a long haul, but worth it. Well Done!

  260. Hoi Polloi
    Posted Sep 9, 2007 at 3:51 PM | Permalink

    We do not know what motivated Hansen’s campaign to show that man-made CO2 caused global warming.

    Looking at Hansen’s emotional, non-scientific statements one can only conclude it’s tunnel vision and/or self-fulfilling prophecy caused by a sincere belief in AGW. It’s beyond science.

  261. Larry
    Posted Sep 9, 2007 at 3:52 PM | Permalink

    Can someone explain what’s the deal with Exxon? Is that code for Beelzebub, or something?

  262. steven mosher
    Posted Sep 9, 2007 at 3:59 PM | Permalink

    There is an interesting disconnect on the AGW side. SOME of them TEND to commit
    the fallacy of appeal to motives.

    “You said that because exxon paid you. Therefore what you said is untrue.”

    Exxon paid me to say 2+2=4.

    Now, to be sure we need to be attuned to researchers doing their patron’s bidding.
    What is the cure for this?

    Peer review doesn’t cure this.
    Government funding doesn’t cure this.

    What cures this is a cornerstone of the scientific method: independent duplication
    of results, which entails openly sharing the data and methods.

    Exxon may have paid you, but if I duplicate your results the taint of their money
    is washed away. Your buddies may have reviewed your paper, but if I duplicate your
    results the taint of a professional coterie is washed away.

    Independent replication is key because it addresses the doubts about observer bias and motivation.

    Having said that, when results can’t be duplicated, that is the time to wonder about bias and
    motivation.

  263. Alexander
    Posted Sep 9, 2007 at 4:03 PM | Permalink

    From 2001 to 2002 I had a programming job in Austin. The main project that I worked on was a model to try to predict the temperature variation at different depths of a body of water by sending sonar pings out at various angles and recording the time it took to receive each ping back. First I would create data using a known environment. Then I would use MINUIT (chi-squared minimization) – blinded to the environment – to try to figure out what the environment was. I used a hodge-podge of FORTRAN, C and Matlab to do this. I must say that I would be SWEATING BULLETS if I had as many bright people as are currently at work on this site sifting line by line through the code I had written. Ha. I bet more than a few of those NASA guys are checking this thread in real time.

  264. steven mosher
    Posted Sep 9, 2007 at 4:05 PM | Permalink

    SteveMC…

    Could you do a separate thread for the guys trying to compile the code? This has transformed
    into a victory-lap thread (not complaining, just observing… and no use fighting it, since
    even the regulars like me won’t obey the rules 100% of the time). Just a thought.

  265. Alexander
    Posted Sep 9, 2007 at 4:08 PM | Permalink

    Re 225 I completely agree that after getting the original code to compile and hopefully reproduce Hansen’s numbers, the code should be ported to a friendlier, more commonly used language, like C. I don’t think Java is a good idea, being interpreted (and therefore slow). I also cringe at C#, since it is too closely tied to Windows boxes. OOP is overrated anyhow. Function-based programming is fine for me. But if we must, how about just simple C++? It will compile everywhere without a fuss.

  266. Larry
    Posted Sep 9, 2007 at 4:44 PM | Permalink

    283, I’d take that a step further, and see if a unix nerd would want to volunteer to take it upon him(her)self to package it all up with a linux distro, so that it could be easy for the troops to work with. A Red Hat RPM would be lovely, and a windows exe would be even lovelier. But there’s going to be a lot of wasted midnight oil if everyone goes off in his separate direction.

  267. Larry
    Posted Sep 9, 2007 at 4:50 PM | Permalink

    And you wouldn’t want that carbon footprint from all that wasted midnight oil…

  268. Neil Haven
    Posted Sep 9, 2007 at 4:58 PM | Permalink

    A possible elaboration of Moshpit’s three-branch program for improving the State of the Code:

    Branch 0 – Run the code!: Goals of the first layer of documentation:
    Goal 0: Describe how to configure a machine to run the code;
    Goal 1: Describe how to compile the code (as written!). Where the code is interpreted, rather than compiled,
    describe how to configure the interpreter;
    Goal 2: Describe how to acquire a standard input dataset for testing;
    Goal 3: Describe how to run the code using the dataset to produce an output;
    Goal 4: Describe how to verify the output.

    Branch 1 – Understand the code!: Understand that descriptions of the algorithm used will rest on a shaky foundation until Branch 0/ Goal 4 has been achieved. Documentation should be organized according to file, describing, for each file or significant module or function, the preconditions, algorithm in pseudocode where appropriate, and postconditions of the file/module/function.

    Branch 2 – Improve the code! Understand that describing the effects of modifications to the code will be unverifiable until Branch 0/Goal 4 has been achieved. Understand that serious modifications to the code (even to fix obvious errors!), undertaken with appropriate statistical/mathematical/empirical foundation, should be publishable additions to the climate science literature. Document things appropriately and do not treat this lightly.

    Hansen (and Reto?) believe they have extended themselves magnanimously here. Force yourself to accept that attitude, especially if you disagree with it, and work with it by keeping comments professional and courteous. Keep communications with them precise, circumscribed, and extremely rare. Done properly, this can work: most folks enjoy it when other people take an interest in their work, and with time Hansen’s group could come to see the CA effort here as a positive thing.

  269. steven mosher
    Posted Sep 9, 2007 at 5:11 PM | Permalink

    RE 284.

    When I look at all the different formats of input data, and all the disparate data sources
    and metadata, I think it’s begging for OOP, where the temp station is a land object or ocean object
    encapsulated in the globe object and we apply methods to them.

    The current internal data structure is a mistake waiting to happen.

    Look at the code now: you have this six-step chain of file read, process, file write… next step.
    It’s prone to disaster. Just look at how much code is spent checking input constraints.

    Ok… End rant. Whatever modern language John V picks is fine with me.

  270. MarkR
    Posted Sep 9, 2007 at 5:20 PM | Permalink

    I believe that if changes are made to the original code, then that will lead to endless fights with the Warmers about which is the “correct” way to code. I would favour a twin track approach:

    Firstly, if the objective is to have an accurate record of temperature (and surely this is what everyone wants, as all policy decisions flow from that), then:

    1 Audit any record using the existing code to identify any flaws in the calculations and in the logic of the alterations. Any mistakes, ask GISS to correct them.

    2 If the output of the existing code (now explained by being visible, and hopefully annotated), doesn’t match the stated objectives, ask GISS to rectify it.

    3 If the current stated objectives of GISS in altering the raw data in detail are wrong, ask GISS to change them.

    So far GISS have done the right thing (albeit slower than many wanted) in making alterations to the temperature record where justified, and in releasing the code. I hope they will continue to do the right thing.

    Secondly, if waiting for GISS is going to take too long, or they drag their feet, then create a separate model, using the most favoured language, on a Wiki-type platform.

  271. Larry
    Posted Sep 9, 2007 at 5:20 PM | Permalink

    Isn’t there a significant chance of losing something in the translation if it’s rewritten in a modern language? What are we trying to prove? It’s quite possible that language and compiler idiosyncrasies affect the results. It would seem like you’d want to repeat the results with the original verbatim code before trying to improve things.

    There must be a bunch of engineers here (see the following joke: http://www.ncbuy.com/humor/jokes_view.html?jkv=11463 ).

  272. steven mosher
    Posted Sep 9, 2007 at 5:36 PM | Permalink

    RE 289.
    Goals 2 and 3 are set out in Step 0 (where to get the input files; some files are
    included in the distribution).
    RE 289

    I like that. Personally I wasn’t going to try to compile it, because I’d have to resurrect
    an old system. So I was going to focus on understanding the code:

    1. I don’t know Python, so it’s a good excuse to learn it.
    2. Fortran was years ago, but it’s not French to me.
    3. It would help me understand the math.

    P.S. Goals 2 & 3 are pretty much covered in the Step 0 code.

    Goal 4 is the tough one: WHICH Hansen dataset do we compare against?

    H99? H2001? Urban adjusted? Urban unadjusted?

    Given the distribution we got, you can be pretty confident that there isn’t a test set
    (Step 0 reads from URLs). Nothing has been instrumented for unit testing.

    Old school, buddy.

    Still, the cascade of data through the various steps will allow for checking, if Reto
    will answer questions. That is, every step ends with a set of output files picked
    up by the next step, so comparisons should be easy if somebody wants to write a little
    test program.

  273. windansea
    Posted Sep 9, 2007 at 6:06 PM | Permalink

    Steve

    Just want to say thanks, when you broke the hockey stick I woke up.

    Hansen is every bit as odious as Mann. I smell a rat.

  274. MattN
    Posted Sep 9, 2007 at 6:35 PM | Permalink

    Just want to make sure I have this clear.

    The United States is insignificant. And

    omission of South America and Africa has only a tiny effect on the global temperature change.

    OK. Then what does matter? If US, Africa, and South America don’t matter, then what does?

  275. Posted Sep 9, 2007 at 6:36 PM | Permalink

    Where did Hansen make the jester comment? I don’t see a link.

  276. Larry
    Posted Sep 9, 2007 at 6:48 PM | Permalink

    298, search for the “usufruct” thread. Or just google for “usufruct”.

  277. BarryW
    Posted Sep 9, 2007 at 7:10 PM | Permalink

    Re #290

    You forgot the Hansen object that adjusts or throws out temp station objects that don’t fit his agenda while claiming that doing so would be subjective. See #265, #266.

    In choosing a language, one question would be how CPU-intensive are we talking? If it’s not an issue I would vote for Java in an IDE such as Eclipse, since it runs on multiple machines, is well supported with libraries and is easy to maintain (re #244: BigDecimal is a good example). Also, JUnit test cases can be built to allow regression testing. It would be nice to avoid C’s pointer and memory allocation issues. Eclipse has UML, ANT, CVS and SVN support for modeling and version control. MatLab also has a Java interface, although I’m not sure what version it supports. I’ve also seen it tied to Excel for output.

  278. BarryW
    Posted Sep 9, 2007 at 7:20 PM | Permalink

    Re #297

    If you go to the GISS site link and look at the animation’s last frame, you’ll see the largest of the warming appears to be confined to the Arctic and the northern parts of North America and Eurasia, although because of the projection it looks larger than it is.

  279. Buddenbrook
    Posted Sep 9, 2007 at 7:21 PM | Permalink

    Excellent work!

    Now, let’s demand that they reveal the climate model parameters they have used to project catastrophic warming. It would be interesting to see what kind of an emphasis they have put on CO2, and whether they have e.g. ignored land cover change (as Pielke suggests!).

  280. Paul Linsay
    Posted Sep 9, 2007 at 7:42 PM | Permalink

    Look at the code now, you have this 6 step chain of file read, processes, file write.. next step
    It’s prone to disaster. Just look at how much code is spent checking input constraints.

    That pretty much dates the code. It echoes how major computations were done in the days when the IBM 360/370 and especially the CDC 7600 were supercomputers (with 64 K of memory, yes 64,000). Read in the raw data from 6250 bpi 9 track magnetic tape on drive A, do your calculations, write the intermediate results onto a second tape on drive B. When a step was complete the code would stop and send a message to the operator to dismount A and mount a new tape for output, and rewind B, which was now the input. You’d then repeat the cycle as often as needed and the data center and your budget would allow. After a while your office would have piles of 2400 foot reels of tape, not always well labeled and archived, with raw data and the results of data analysis and calculations.

    Good luck to all of you trying to make this work, you’ve got one very dusty deck on your hands.

  281. henry
    Posted Sep 9, 2007 at 7:46 PM | Permalink

    Just one quick question:

    On Hansen’s original chart, there’s a grey area from about +.5 to +1C, listed as the “estimated temperature range” of a couple of historical eras (Altithermal and Eemian times).

    I did google these, and came across at least one paper that lists the temp range for this era at +5C (+/- 2C). Is this a mistake, or did he actually under-estimate this temp range by a factor of 10 (perhaps to make the “projection” sound worse than it was)?

    This may have been covered already, and if so, please point to a link with the answer.

  282. Steve McIntyre
    Posted Sep 9, 2007 at 7:47 PM | Permalink

    Speaking for myself, the reason why I want to see the code is to see how Hansen actually did certain calculations that are either not described or mis-described. For example, it’s one thing to conclude that Hansen calculated one-digit deltas in his station combinations, another thing to confirm this hypothesis in his code.

    The main purpose in running code, in my opinion, would be to create intermediate debugging objects so that perhaps problematic steps can be analyzed.

    I don’t view this code as some sort of holy grail. Provision of code should be routine and normally one would not expect to encounter gross errors. The present case is therefore a little unusual in that, in addition to the Y2K error, there is accumulating evidence of another error in how Hansen combined data versions, an error identified primarily by John Goetz without access to the code, and this error may prove more important than the “Y2K” error.

    None of this shows that any other argument relied on by Hansen is incorrect. As Gavin Schmidt observed, the programming in this matter is relatively trivial. Nonetheless, no one who has examined Hansen’s code in this matter is likely to emerge with enhanced confidence in his ability to develop reliable computer programs for more complicated matters.

  283. Paul Linsay
    Posted Sep 9, 2007 at 7:48 PM | Permalink

    #302, Mark,

    You’ve rediscovered for yourself a famous problem in FORTRAN and the standard fix. Equal isn’t always what you think it is.

  284. David Smith
    Posted Sep 9, 2007 at 7:57 PM | Permalink

    Re #297 A GISS map comparing the 1930s to today is here.

    I’m not certain how GISS handles the gray grid boxes on the map. It seems reasonable to assume that, if there is insufficient data to make a conclusion about temperature trend in a grid box, then that gray grid box should be excluded from the global trend. I assume GISS excluded them.

    My back-of-envelope estimate is that, when the gray boxes are excluded, the ocean coverage rises to 85% of the globe (in this 1930s vs today comparison) instead of the 71% it actually covers. That means that the oceans play an extraordinarily large role and, in a sense, the colored land areas “don’t matter” in the global temperature trend.

    On land it looks like the greatest warming (1930s to today) was in Central Asia and Canada. On the oceans it looks like general warming, with some exceptions.

    Since the oceans are 85% of the global total, that sea of yellow accounts for the global trend.

    I marvel that the map makers exclude Africa, South America and the more-remote regions of Canada and Australia yet are comfortable with accepting a temperature estimate, for example, of the Southern Ocean 1000 miles east of Madagascar.

  285. Posted Sep 9, 2007 at 9:29 PM | Permalink

    Re: 158, 168 and 170

    To recap:

    Here is the kind of thing that bothers me:

    GISTEMP_sources/STEP0/USHCN2v2.f:

    if(temp.gt.-99.00) itemp(m)=nint( 50.*(temp-32.)/9 ) ! F->.1C

    – Sinan

    Now, why does that bother me? There are a number of reasons.

    Let’s say we have 42.0 F. Convert that using the code above, we get 56 tenths C. Now, convert it back to F, what do we get? 42.08 — I am assuming that becomes 42.1 F.

    Further, if I had started out with 42.1 F, I would have gotten the same Celsius value.
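
    For anyone who wants to see the information loss being described here, a minimal Python sketch of that round trip follows. The nint helper is my own emulation of Fortran’s NINT (round to nearest, halves away from zero); the conversion is the one in the USHCN2v2.f line quoted above, so treat this as illustrative rather than as GISS code.

    import math

    def nint(x):
        # emulate Fortran NINT: round to nearest integer, halves away from zero
        return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

    f = 42.0
    tenths_c = nint(50.0 * (f - 32.0) / 9.0)          # 500/9 = 55.55... -> 56 tenths of a degree C
    back_to_f = (tenths_c / 10.0) * 9.0 / 5.0 + 32.0  # 5.6 C -> roughly 42.08 F
    print("%.1f F -> %d tenths C -> %.2f F" % (f, tenths_c, back_to_f))

    # 42.1 F maps to the same 56 tenths C, so the two readings become indistinguishable
    print(nint(50.0 * (42.1 - 32.0) / 9.0))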

    The code (which I have not had time to examine, and I will probably miss out on this for a while as this is an extremely busy period in my personal life and at work) is littered with conversions and rounding in intermediate steps. I don’t see the point of rounding willy-nilly in the intermediate steps.

    It bothers me that they don’t use internal binary representations in the intermediate stages (where it would actually be appropriate) but their gridded products are in binary format.

    In GISTEMP_sources/STEP2/padjust.f

    iadj=nint( (iya-knee)*sl-(iy2a-knee)*sl2 )

    — Sinan

  286. steven mosher
    Posted Sep 9, 2007 at 9:32 PM | Permalink

    RE 302..

    Mark that is the code I’ve been looking at for the past couple hours

    It’s the URBAN/RURAL adjust code. It’s at the heart of some of the issues
    folks have been discussing.

    I’ll have a look and see what’s going on

  287. D. Patterson
    Posted Sep 9, 2007 at 9:39 PM | Permalink

    Re: #296

    Where did Hansen make the jester comment? I don’t see a link.

    The Real Deal: Usufruct & the Gorilla

    Click to access realdeal.16aug20074.pdf

  288. Geoff Sherrington
    Posted Sep 9, 2007 at 9:40 PM | Permalink

    Re # 305 Steve

    The raw data, cleansed of code and scribal errors, is a first step. Then there are two more classes of error to face. First, mechanical errors in making adjustments for the first chosen effect, to see if one can match the GISS adjusted data or not. Second, errors of climate methodology or questionable assumptions.

    Re the second, I am increasingly uncomfortable with the use of a 1200 km linear extrapolation from one site to another. I am already uncomfortable with logic used to reject stations and with methods used for missing data and for UHI adjustment. This is because of the excellent science on CA explaining these matters. While I applaud the effort in cleaning up the code and running it to generate records less prone to error, I suspect that there are some errors of the second class that will impede progress.

    Is it worth trying to make an early list of these class-two climatology “errors” (as opposed to code problems) in preparation for application of realistic future adjustments? Another thread? Geoff.

  289. steven mosher
    Posted Sep 9, 2007 at 9:58 PM | Permalink

    RE 302..

    Ok Mark… That code is important.
    It would be neat to see what station you got the infinite loop on.

    Basically they are looping through the rural stations which they will
    use to adjust the urban stations.

    There are rules for adjusting, combining and trimming

    N3 is the number of good years in a series.
    (N3L-N3F+1) is the total number of years
    XCRIT is defined as 2./3.

    So if the number of good years is less than 2/3 of the total years
    THEN the early years are dropped off the record

    IY1=N3L-(N3-1)/XCRIT
    WRITE(79,'(a3,i9.9,a17,i5,a1,i4)')
    * CC(NURB),IDU(NURB),' drop early years',1+IYOFF,'-',IY1-1+IYOFF

    IF(FLOAT(N3).LT.XCRIT*(N3L-N3F+1.))
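
    For readers who don’t do Fortran, here is my rough Python paraphrase of that rule (illustrative only, not a verified re-implementation; the variable names follow the Fortran):

    XCRIT = 2.0 / 3.0   # required fraction of 'good' years

    def drop_early_years(n3, n3f, n3l):
        # n3: count of good years; n3f, n3l: indices of the first and last good year
        span = n3l - n3f + 1
        if n3 < XCRIT * span:
            # record too sparse: compute the new first year, dropping everything before it
            iy1 = int(n3l - (n3 - 1) / XCRIT)   # Fortran assignment to an INTEGER truncates
            return iy1
        return n3f   # dense enough: keep the whole record

    # e.g. 50 good years scattered over a 100-year span (indices 1..100):
    print(drop_early_years(50, 1, 100))   # 26: years before index 26 are dropped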

  290. steven mosher
    Posted Sep 9, 2007 at 10:06 PM | Permalink

    re 302. Mark

    One more thing: when you finish Step 2 we should have a file that represents
    GISS adjusted.

    Step 1 gives you stations combined.
    Step 2 gives you adjusted.

    At this stage we might do some file compares to see how your results match up.

    I’m not sure if the output files are in final form, though; they do so much fussing with them.

  291. steven mosher
    Posted Sep 9, 2007 at 10:12 PM | Permalink

    311. Geoff

    Here’s a nice thing to consider…

    C**** The combining of rural stations is done as follows:
    C**** Stations within Rngbr km of the urban center U contribute
    C**** to the mean at U with weight 1.- d/Rngbr (d = distance
    C**** between rural and urban station in km). To remove the station
    C**** bias, station data are shifted before combining them with the
    C**** current mean. The shift is such that the means over the time
    C**** period they have in common remains unchanged. If that common
    C**** period is less than 20(NCRIT) years, the station is disregarded.
    C**** To decrease that chance, stations are combined successively in
    C**** order of the length of their time record.

    Rngbr is 1000km
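
    A small Python sketch of that weighting, as I read the comment block (my own paraphrase for illustration, not the GISS code):

    RNGBR_KM = 1000.0   # Rngbr, as noted above

    def rural_weight(distance_km):
        # linearly decaying weight for a rural station at this distance from the
        # urban station; zero at or beyond Rngbr
        if distance_km >= RNGBR_KM:
            return 0.0
        return 1.0 - distance_km / RNGBR_KM

    for d in (0, 250, 500, 750, 999):
        print("%4d km -> weight %.3f" % (d, rural_weight(d)))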

  292. Leonard Herchen
    Posted Sep 9, 2007 at 11:48 PM | Permalink

    195
    Motl:
    I’m thinking of doing some work with the raw station data using some database utility software. When I go to the link you mentioned in 195, I seem to only see the Antarctic data. Am I missing something? Is that all there is, or is there something else? There are some files called antart1.txt etc. that seem to have the raw station data. Can you tell me if you have all the station data available? I may simply be looking in the wrong place.
    Thanks

  293. ural
    Posted Sep 10, 2007 at 12:23 AM | Permalink

    #302 Mark,

    From what I’ve read here – this code was running on an AIX system … 64 bit. Floating point numbers are not the same number on a 32 and a 64 bit machine. Moving to double precision might not help.

  294. Nicholas
    Posted Sep 10, 2007 at 12:46 AM | Permalink

    From what I’ve read here – this code was running on an AIX system … 64 bit. Floating point numbers are not the same number on a 32 and a 64 bit machine. Moving to double precision might not help.

    Actually most x86 32 bit machines have had support for 32/64/80 bit floats for years (as long as I can remember – at least a decade). I think this was introduced at least as early as the 386 (those which had FP co-processors anyway).

    In C, you can use “float”, “double” and “long double” for the 32/64/80 bit floats, if they are supported by your architecture.

    Rounding is a whole other kettle of fish, though.

    The main difference between a 32 and 64 bit machine these days is in integer unit. FP/SIMD have been 64/128+ bits well before the introduction of 64 bit x86-compatible chips.

  295. ural
    Posted Sep 10, 2007 at 2:14 AM | Permalink

    The main difference between a 32 and 64 bit machine these days is in integer unit. FP/SIMD have been 64/128+ bits well before the introduction of 64 bit x86-compatible chips.

    Single precision on a 32 bit machine has a sign bit, 8 bit exponent, and a 23 bit mantissa … on a 64 bit machine it’s a sign bit, 11 and 52. I am saying that you can’t expect the same results, using FP (running single precision), on a 32 and 64 bit machine. It does look like double on a 32 bit is the same as single on a 64.

  296. PaddikJ
    Posted Sep 10, 2007 at 2:16 AM | Permalink

    With all the action of the last month or so, this is getting almost embarrassing, but once again, congrats to Steve Mac.

    I’m starting to wish I was more of a code-head, because you guys look like you’re having way too much fun.

    I did ping a bunch of friends and told them they should visit this thread and watch a bit of history take place right before their eyes; because in 6-12 months, when the mainstream press finally gets a clue, and maybe there are Congressional hearings, they could all smugly say they watched it unfold as it happened.

    BTW Steve,

    The Arctic warming is not “unprecedented”. It was much warmer in the Pliocene, not to speak of the Eocene. More recently, it was warmer in the previous interglacial (the Eemian) ~110K years BP and in the Holocene Optimum about 8000 BP and perhaps even in the MWP.

    If memory serves, the same could also be said for the 20’s & early 30’s. I have a nice little clipping from the Nov 2, 1922 Washington Post which I would insert here if I knew how, but at any rate, the header reads “Arctic Ocean Getting Warm; Seals Vanish And Icebergs Melt”

  297. Phil
    Posted Sep 10, 2007 at 2:19 AM | Permalink

    RE: 163, 168, 170, 174, 178, 182 From:

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.temperature.readme

    …The three raw data files are:
    v2.mean
    v2.max
    v2.min
    …Each line of the data file has:…

    Data:
    12 monthly values each as a 5 digit integer. To convert to degrees Celsius they must be divided by 10. Missing monthly values are given as -9999.
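
    In Python, the conversion rule quoted above amounts to something like the following (a sketch only; the station/year prefix layout is elided in the readme excerpt, so this shows just the value conversion, not a full line parser):

    MISSING = -9999   # sentinel for a missing monthly value, per the readme

    def to_celsius(raw_monthly_values):
        # raw values are integers in tenths of a degree C
        return [None if v == MISSING else v / 10.0 for v in raw_monthly_values]

    print(to_celsius([123, -45, -9999]))   # [12.3, -4.5, None]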

  298. Willis Eschenbach
    Posted Sep 10, 2007 at 3:07 AM | Permalink

    Steven Mosher, thanks for the post where you say (my emphasis):

    311. Geoff

    Here’s a nice thing to consider…

    C**** The combining of rural stations is done as follows:
    C**** Stations within Rngbr km of the urban center U contribute
    C**** to the mean at U with weight 1.- d/Rngbr (d = distance
    C**** between rural and urban station in km). To remove the station
    C**** bias, station data are shifted before combining them with the
    C**** current mean. The shift is such that the means over the time
    C**** period they have in common remains unchanged. If that common
    C**** period is less than 20(NCRIT) years, the station is disregarded.
    C**** To decrease that chance, stations are combined successively in
    C**** order of the length of their time record.

    Rngbr is 1000km

    This seems to me to be a poor algorithm. If NCRIT is 20 years, we could have four records:

    a) 1830 – 1910 (80 years)
    b) 1920 – 2007 (87 years)
    c) 1865 – 1935 (70 years)
    d) 1895 – 1960 (65 years)

    Using the algorithm, the first record selected would be b), because it is the longest. (As an aside, was it ever determined whether the longest or the most recent record is chosen as the starting point?)

    Record b) would then be compared to a), which would be rejected for lack of overlap, and to c), whose 16-year overlap (1920–1935) falls short of NCRIT and which would also be rejected. Only d), with its 41-year overlap (1920–1960), would survive to be combined. In total, 150 years of records would be unnecessarily discarded.
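
    A quick sanity check of this scenario in Python (my own illustrative script, using only the rule quoted from the Fortran comments: stations taken longest-first, NCRIT = 20 years of common period; whether the combined span grows as stations are merged is my assumption):

    NCRIT = 20

    records = {                      # the four hypothetical records above
        "a": (1830, 1910),
        "b": (1920, 2007),
        "c": (1865, 1935),
        "d": (1895, 1960),
    }

    def overlap_years(p, q):
        return max(0, min(p[1], q[1]) - max(p[0], q[0]) + 1)

    # combine longest-first, as the comments describe
    order = sorted(records, key=lambda k: records[k][1] - records[k][0], reverse=True)
    combined = records[order[0]]     # record b) starts the combination
    for key in order[1:]:
        if overlap_years(combined, records[key]) < NCRIT:
            print("%s rejected" % key)
        else:
            print("%s combined" % key)
            combined = (min(combined[0], records[key][0]),
                        max(combined[1], records[key][1]))
    # prints: a rejected, c rejected, d combined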

    w.

  299. Mark
    Posted Sep 10, 2007 at 3:48 AM | Permalink

    Steve Mosher wrote (#312):
    Hi,

    The (first) station that failed was

    348 57101605250003 87BISKRA

    I didn’t check any of the others.

    My simple debug output produced the following.

    101605250003 drop early years 1880-1887
    IY1: 9
    XCRIT*(N3L-N3F+1.)= 62.00000000
    IY1= 9 IYRM= 128
    N3L= 101 N3= 62

    repeating for ever.

    Mark.

  300. Not sure
    Posted Sep 10, 2007 at 3:58 AM | Permalink

    Mark (257) Thanks, that did help.

    It seems that setup.py is the new and improved way of building and distributing Python extensions. It’s remarkably simple and easy to write these scripts (I’m becoming a Python fan). I would share them here, but I think it would be cumbersome. Once you have them, all you have to do is “python setup.py install” in the extension directories and you’re off and running.
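
    In case it helps anyone else, a setup.py for a C extension can be as small as the following. This is a sketch with placeholder module and file names (not the actual STEP1 extensions); once it exists, “python setup.py install” works as described above.

    from distutils.core import setup, Extension

    # placeholder names: substitute the real extension module and C source file
    ext = Extension("example_ext", sources=["example_ext.c"])

    setup(name="example_ext",
          version="0.1",
          description="Illustrative build script for a C extension",
          ext_modules=[ext])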

    I’m using my system’s python, which is not compiled with debug symbols, and this is making things somewhat harder. Check this out for a neat trick for Python extension debugging.

    STEP1 now runs to completion and produces a 30MB text file called Ts.txt for me. On to STEP2.

  301. Not sure
    Posted Sep 10, 2007 at 4:05 AM | Permalink

    Mark, your new patch does not have the C stuff in it. Don’t know if this is intentional.

  302. Mark
    Posted Sep 10, 2007 at 4:31 AM | Permalink

    Not Sure (#325)

    Mark, your new patch does not have the C stuff in it. Don’t know if this is intentional.

    Hi,

    I forgot to extract the extensions tar file when I generated the diff. I’ve updated the patch and put it up at the patch page.

    Thanks for pointing that out.

    Just for comparison, my Ts.txt is 29075384 bytes in size.

    Mark.

  303. bernie
    Posted Sep 10, 2007 at 5:47 AM | Permalink

    For the record, there is still no mention of the release of the code at RC. Seems like CA is rapidly becoming the only game in town for those interested in verifying that the GCMs have a basis in reality.

  304. Jean S
    Posted Sep 10, 2007 at 6:00 AM | Permalink

    Mark, a quick question: are the temperatures represented as integers in the variable old_db (comb_records.py)?

    #299: if you have int/int division, what is the exact convention for the truncation/rounding? Especially, how are negative numbers handled? I.e., what is the result of -2/3 (is the truncation towards zero or towards -infinity)?
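
    For reference, the two conventions in play: Fortran (and C) truncate integer division toward zero, while Python’s // operator floors toward minus infinity, so -2/3 comes out differently. A quick illustration (trunc_div is just my helper to mimic the Fortran behaviour):

    import math

    def trunc_div(a, b):
        # Fortran/C-style integer division: round toward zero
        return math.trunc(a / float(b))

    print(-2 // 3)            # -1: Python floor division rounds toward minus infinity
    print(trunc_div(-2, 3))   #  0: truncation toward zero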

  305. MattN
    Posted Sep 10, 2007 at 6:06 AM | Permalink

    For the record, there is still no mention of the release of the code at RC.

    I think someone here did mention it in the Friday Roundup, and it was promptly deleted. Gavin strikes me as an arrogant sonofabitch when he does stuff like that.

  306. Mark
    Posted Sep 10, 2007 at 6:41 AM | Permalink

    Hi,

    Mark, a quick question: are the temperatures represented as integers in the variable old_db (comb_records.py)?

    The temperature values seem to be stored as integers; a look at the log file shows values like the following –

    30 -2 -11 16 -4 -38 26 27 -8 8 10 -11 -20 20 18 14 104

    Mark.

  307. steven mosher
    Posted Sep 10, 2007 at 7:15 AM | Permalink

    re 322. Thanks Willis.

    If I get time today I’ll try to answer the question about how “longest” is measured. My sense and intuition is that the
    algorithm was defined based on a few cases and then just applied to the whole, since checking
    this kind of thing would be extremely tedious. Now that I have a general sense of Step 2
    I suppose I should start writing things down; without running code and the ability to do
    intermediate dumps it will involve some guesswork (and my Fortran skills are 20 years old, read-only).

  308. Steve McIntyre
    Posted Sep 10, 2007 at 7:20 AM | Permalink

    test

  309. steven mosher
    Posted Sep 10, 2007 at 8:48 AM | Permalink

    335.

    I’ll have a look, after I take the kids to school.

  310. Mark
    Posted Sep 10, 2007 at 8:54 AM | Permalink

    I always knew that there might be problems with the data. I just had an itch to get the code running and found some data that seemed to work.

    If anyone can point to the actual data to use then I will switch to that; it may solve the problems I’m having now.

    Mark.

  311. steven mosher
    Posted Sep 10, 2007 at 9:03 AM | Permalink

    335.

    I assume the EOF error is happening in SREAD?

  312. Mark
    Posted Sep 10, 2007 at 9:19 AM | Permalink

    339.

    Yes the end of file happens in SREAD.

    Mark.

  313. Terry
    Posted Sep 10, 2007 at 9:38 AM | Permalink

    Hi. I’ve been lurking and lovin’ every minute of it. Two points. No response requested. You all concentrate on the code and the data.

    1) If this code has been changed and edited since Hansen submitted his research for publication, then it isn’t really the code we are looking for, is it? Hansen should have archived the original code and given out the original code. Am I the only one who finds this unacceptable and unreasonable given the gravity of the subject matter and the apparent importance of this research? It essentially continues to place his research out of reach and beyond the peer review process. And his attempts to “simplify” the code fly in the face of scientific transparency and full disclosure. He brazenly admits to wanting to change (i.e. simplify) the code *further* for “those interested in science”. He cannot possibly be that stupid, can he? He cannot possibly believe that we are that stupid, can he? The next question is what is NASA going to do with this infernally ascientific toad? Phew, sorry ’bout the rant. His arrogance just p’s me off.

    Having said that, it is still important that the code that was released be carefully vetted, as you are doing, even if it is not the original code.

    2) I await the day when you write up a succinct recap of your current and future findings regarding the code and post it on the blog.

    Keep up the excellent work.

  314. Larry
    Posted Sep 10, 2007 at 10:26 AM | Permalink

    This thread reminds me of this:

  315. SteveSadlov
    Posted Sep 10, 2007 at 10:40 AM | Permalink

    RE: #53 – I find it interesting that the “Killer AGW” lobby take pains to pigeonhole all critics with the “skeptic” label or the even more inflammatory “denialist” label. I am more or less in line with Steve M. I believe that there is an AGW term. My view is, science needs to quantify or understand the boundary values of that term, and meanwhile, better understand overall climate change, in other words, the entire equation including all terms. Why is that such a threat to some?

  316. JerryB
    Posted Sep 10, 2007 at 10:49 AM | Permalink

    FWIW, a pseudo (not genuine, but perhaps somewhat similar) alternative to
    USHCN.v2.mean_noFIL is located here.

    It is simply an extract of USHCN adjusted data taken from the GHCN v2.mean_adj.Z file
    and seems already to be in the necessary format, but while it may be similar
    to USHCN.v2.mean_noFIL, it surely is not an exact match.

  317. SteveSadlov
    Posted Sep 10, 2007 at 10:50 AM | Permalink

    RE: #61 – Or how about this? In earlier, primitive days, it was assumed that GHG forcing would act in a bulk manner, upon “naturally” reradiated IR, across the globe. However, given the imbalances imparted by human albedo modification and human-caused energy flux in developed areas, the GHG forcing term in fact operates overwhelmingly in areas which have incurred either extensive albedo modification of a type that favors efficient IR reradiation, or the development of energy-dissipating equipment and devices, electrical current distribution networks, and high fluxes of EM in certain frequency bands.

  318. Mark T.
    Posted Sep 10, 2007 at 10:53 AM | Permalink

    And his attempts to “simplify” the code fly in the face of scientific transparency and full disclosure. He brazenly admits to wanting to change (i.e. simplify) the code *further* for “those interested in science”.

    For those that believe in him, this is sufficient. I.e., they do not care what he used, only what he thinks should be used. That there may have been errors in the actual code used to generate all of his conclusions is an immaterial point. Those truly interested in the science would like to be able to replicate his results, which means access to the original code. It’s not really replication if you use a “simplified” version, is it?

    He cannot possibly be that stupid, can he?

    No, he’s not, IMO.

    He cannot possibly believe that we are that stupid, can he?

    Maybe.

    Mark

  319. Larry
    Posted Sep 10, 2007 at 11:00 AM | Permalink

    346,

    My view is, science needs to quantify or understand the boundary values of that term, and meanwhile, better understand overall climate change, in other words, the entire equation including all terms. Why is that such a threat to some?

    Because the difference between being an unknown scientist in an esoteric field and being a rock star receiving huge grants from the Tides Foundation is the ability to maintain high drama. Without the sense of impending catastrophe and doom, he’d be a lot poorer and totally anonymous. He’d also be expected to produce quality work.

  320. JerryB
    Posted Sep 10, 2007 at 11:09 AM | Permalink

    With apologies, if you downloaded the file linked in my previous post, and the
    size is not 2755498 bytes, please try again. The first file was not what I
    thought/said it was.

  321. SteveSadlov
    Posted Sep 10, 2007 at 11:16 AM | Permalink

    RE: #230 – Waldo is hiding in the Arctic Ocean, in both liquid and solid phases.

  322. steven mosher
    Posted Sep 10, 2007 at 11:25 AM | Permalink

    340… Ok…

    And the segmentation fault… Are we walking off the end of an array somewhere?

    Maybe the changes made in step two are throwing an index off somewhere.

  323. steven mosher
    Posted Sep 10, 2007 at 11:58 AM | Permalink

    Hey SteveS,

    I left a comment for U over on Watts’. Check out the Bubble study. VERY cool.

    Also.. neat stuff here

    Click to access indexCD.pdf

    Have you read Geiger’s book on microclimate? I recommended it to Anthony and was
    going to read it before this code thing happened.

  324. DKN
    Posted Sep 10, 2007 at 12:02 PM | Permalink

    Steve,

    I’ve just begun following the discussions here, so forgive me if my comment is passé or OT. And I’m not a climatologist, I’m a geographer, so maybe I’m missing a point. (And I’m definitely numerically challenged!)

    But Hansen’s remarks about South America, and in fact the Southern Hemisphere (S.H.), being unimportant ring false to
    me. The reason for this is that little or no warming is occurring in the S.H. (correct?), despite good atmospheric mixing of CO2.
    If so, it would seem that “global” warming is more or less a Northern Hemisphere phenomenon, not global at all. That strikes me as rather important.

    It’s true that the S.H. may just lag the North because of the high percentage of ocean down there, but should that not be considered a negative feedback in the models? Or do they consider it at all?

    Finally, while it is important to understand Hansen’s methods, why not also put together a clean dataset and do a spatio-temporal analysis for trends. Or has that been done?

    Thanks for your good work (and your patience with my ignorance).

    DKN

  325. SteveSadlov
    Posted Sep 10, 2007 at 12:04 PM | Permalink

    More PR:

    http://global-warming.accuweather.com/

    Spreading like wildfire.

  326. Mark
    Posted Sep 10, 2007 at 12:28 PM | Permalink

    #358

    Thanks Steve, it gives me one more place to look.

    In the meantime I looked at the TEST4_5 directory and saw that it has similar contents to the TEST3 directory.

    I need to find the data files used in that phase.

    Mark.

  327. steven mosher
    Posted Sep 10, 2007 at 12:50 PM | Permalink

    362. SST: those are the HadCRU SST (sea surface temperature) files.

    Sources: http://www.hadobs.org HadISST1: 1870-present
    http://ftp.emc.ncep.noaa.gov cmb/sst/oimonth_v2 Reynolds 11/1981-present

    For both sources, we compute the anomalies with respect to 1982-1992, use
    the Hadley data for the period 1880-11/1981 and Reynolds data for 12/1981-present.
    Since these data sets are complete, creating 1982-92 climatologies is simple.
    These data are replicated on the 8000-box equal-area grid and stored in the same way
    as the surface data to be able to use the same utilities for surface and ocean data.

    Areas covered occasionally by sea ice are masked using a time-independent mask.
    The Reynolds climatology is included, since it also may be used to find that
    mask. Programs are included to show how to regrid these anomaly maps:
    do_comb_step4.sh adds a single or several successive months for the same year
    to an existing ocean file SBBX.HadR2; a program to add several years is also
    included.

    Result: update of SBBX.HadR2

    Is that what you are talking about?

  328. Mark
    Posted Sep 10, 2007 at 1:27 PM | Permalink

    #361

    Thanks once again Steve, you are a great help.

    I’m not at my main workstation right now, family stuff. I’ll be looking later though.

    Mark.

  329. steven mosher
    Posted Sep 10, 2007 at 3:17 PM | Permalink

    SteveMC… Can Mark, Not Sure, and others PLEASE get a separate thread to discuss getting the code running?

    I’ll still bop around and answer folks’ questions about Hansen and his methods, but we need a dedicated space
    to talk amongst ourselves.

  330. MarkR
    Posted Sep 10, 2007 at 3:37 PM | Permalink

    #328 Jean. If the work was originally done on the AIX operating system, then the only machine that ran it in the mid to late 80s was the IBM 6150 type:

    Three models were produced, the 6150, 6151, and 6152. The basic types of machines were the tower model (6150), and the desktop model (6151). All these models featured a special board slot for the processor card.
    There were three versions of the 6150/6151 processor card: the standard 032 processor card had a 170ns processor cycle time, 1 MiB standard memory (expandable via 1 MiB, 2 MiB or 4 MiB memory boards) and optional floating point accelerator.
    The Advanced processor card had a 100ns processor cycle and either 4 MiB memory on the processor card, or external 4 MiB ECC memory cards, and featured a built-in 20 MHz Motorola 68881 floating-point processor. The Enhanced Advanced processor card had a cycle time of 80ns, 16 MiB on-board memory, while an enhanced advanced floating point accelerator was standard.

    http://en.wikipedia.org/wiki/IBM_6150_RT

    The 68881 had eight 80-bit data registers. It allowed seven different modes of numeric representation, including single-precision, double-precision, and extended-precision, as defined by the IEEE floating-point standard, or “IEEE 754”. It was designed specifically for floating-point math and was not a general-purpose CPU. For example, when an instruction required any address calculations, the main CPU would handle them before the 68881 took control.
    The CPU/FPU pair were designed such that both could run at the same time. When the CPU encountered a 68881 instruction, it would hand the FPU all operands needed for that instruction, and then the FPU would release the CPU to go on and execute the next instruction.

    http://en.wikipedia.org/wiki/Motorola_68881

    Perhaps it would be as well to ask them exactly what type of machine and operating system they are actually using for their latest results. I doubt they actually use an IBM 6150, as those are probably completely obsolete by now (I used to sell them way back when). Otherwise it’s possible that lots of small differences are going to occur, and no one will be able to pin them down.

  331. steven mosher
    Posted Sep 10, 2007 at 3:39 PM | Permalink

    365..

    Arrrg. I just knew this thing would blow up over input/output crap.

    Of the 9000 LOC, I bet 3000 of it is I/O.

  332. steven mosher
    Posted Sep 10, 2007 at 4:20 PM | Permalink

    RE 367.

    When I heard AIX I shuddered. HP-UX was bad enough.

    When you read the code it is clear that it was written on a very limited
    system relative to today, so a lot of time is spent thrashing the I/O.
    But back in the day you had to do that junk, so I’m not bagging on them.

    BUT

    I don’t understand why Reto didn’t port the junk to his PC just to see
    if he could. Ahhh, whatever.

  333. steven mosher
    Posted Sep 10, 2007 at 4:27 PM | Permalink

    RE 365.

    Bad unit number? That’s like a file system OS error, right?

  334. steven mosher
    Posted Sep 10, 2007 at 4:34 PM | Permalink

    If you are working on the compile, or interested, hit the new
    thread SteveMc hosted for us.

  335. mjrod
    Posted Sep 10, 2007 at 4:55 PM | Permalink

    If I port this to PC, what do I win?

  336. Posted Sep 10, 2007 at 6:15 PM | Permalink

    The new thread for technical matters only was a great idea. Steve needs a non-technical one so we can cheer. “Not sure” is having way too much fun.

  337. Posted Sep 10, 2007 at 6:44 PM | Permalink

    I know this is going to fall on deaf ears, being in the middle of the excitement and everything, but perhaps it’d be worth it to slow down and take some time to set up a proper project.

    A SourceForge-type CVS repository for porting the code to Windows/Linux PC, a Wiki (as has already been suggested) to document it, etc. This will help focus efforts, and more importantly, attract others, because they won’t need to start from scratch in order to help out.

    Just a thought.

  338. bernie
    Posted Sep 10, 2007 at 7:17 PM | Permalink

    Update on RC: despite another excessively polite (and unpublished) comment on the release of Hansen’s code, they are now concerned about Lomborg’s new book, Cool It. They clearly have not realized that if Hansen’s code proves out, Lomborg’s book will be of marginal relevance.

  339. Larry
    Posted Sep 10, 2007 at 7:28 PM | Permalink

    Let me guess: the site that’s supposed to be there to provide “information” on climatology is ripping Lomborg’s book on economics.

  340. steven mosher
    Posted Sep 10, 2007 at 8:45 PM | Permalink

    374 and 377.

    Well guys, that’s the path I thought we should go down, but Not Sure and Mark are
    making headway toward getting the pile to compile, so if you can read code and debug
    from a distance, lend a hand. I figured if they were making headway then I should
    pitch in and help them finish. I liked what both you guys suggested, so start down that
    path, and if we can get the compile job done then I can circle back and contribute to your
    thing.

    The union of people who have read all the papers and who can understand code is very small.

  341. Evan Jones
    Posted Sep 10, 2007 at 10:10 PM | Permalink

    Congrats and kudos to St. Mac! Chalk one up for the scientific method!

    It puts one in mind of Col. Mandrake in Strangelove: “The code, Jim. That’s it. A nice cup of tea, and the code . . .”

    But didn’t H. even provide operating manuals? You gotta go Fortran diving? Sheesh! Wassup wizzat noise? Not that youse tech boyz ain’t up to it, but, like, sheesh! An innocent soul would–almost–think NASA doesn’t LIKE due diligence!

    Unfortunately I haven’t cracked a Fortran book in 30 years, so I’m no help to you.

  342. Geoff Sherrington
    Posted Sep 10, 2007 at 10:32 PM | Permalink

    Re # 295 and 303. Range of influence 1000 km.
    Thanks, guys.

    Some people quote 1200 km, so I took a map of my home country, Australia. I cut a piece of paper 1200 km long (using the scale at the bottom, which is inaccurate with latitude changes, so it was a rough exercise). Moving the strip over the map, there is no place in the interior of mainland Australia more than 1200 km from the seacoast. Roughly, this means that any observing station will be adjusted over an area of about a quarter to a third of the area of the continent, which is just a bit smaller than the contiguous USA.

    I remain to be convinced that there is a valid, useful correlation over this distance anywhere on land on Earth. The seminal papers have special conditions, like near-continuous snow cover.

    What is the general impression of airports as station sites? The Aust Bureau of Met has released maps with chosen sites, which are about 40 airports. Given the population distribution, about half are next to the sea and thus potentially subject to different circumstances from inland ones. Geoff.

  343. harold
    Posted Sep 11, 2007 at 2:59 AM | Permalink

    I am a frequent visitor of this site, even though I am innumerate.

    One of the things that has been bugging me is an “answer” Hansen
    gives in his “flood of demands” defence email.

    He asks: “But what is the global significance of these regions
    of exceptionally poor data?” (the “regions” are the continents
    Africa and South America).
    (And he is willing to omit these regions (because?); they do not
    change the global temperature.)

    Added to the email is an introduction (History) to the program which defines the scientific purposes of the GISS Temperature Analysis:

    “The rationale of the GISS Temperature Analysis was that the number of Southern
    Hemisphere stations was sufficient for a meaningful estimate of global temperature change…”

    Hansen et al. 1981 showed, on the basis of the GISS Temperature Analysis
    (contrary to impressions from northern latitudes):
    – global cooling after 1940 was small
    – a net global warming of about 0.4C between the 1880s and 1970s

    So if it’s heads the objectors lose, and if it’s tails Hansen wins.

    Imo he cannot have it both ways: he has to defend the regions
    (“with exceptionally poor data”), or he has to change the conclusions
    of Hansen et al. 1981, which were based on the GISS Temperature
    Analysis rationale.

    Am I perhaps being simplistic or maybe just plain stupid?

    (Should have typed it here; now it’s a bit of a mess.)

  344. D. Patterson
    Posted Sep 11, 2007 at 5:59 AM | Permalink

    Keep asking questions and seeking empirical evidence for answers and you’ll be qualified to replace the author of the papers claiming a continent several times the size of the United States is without significant climatological impact and significance.

  345. John V.
    Posted Sep 11, 2007 at 8:04 AM | Permalink

    #349:
    Quoting Hansen:

    But what is the global significance of these regions of exceptionally poor data? As
    shown by Figure 1, omission of South America and Africa has only a tiny effect on the global
    temperature change. Indeed, the difference that omitting these areas makes is to increase the
    global temperature change by (an entirely insignificant) 0.01C.

    He is not saying that Africa and South America are insignificant. He is only saying that the difference in the global trend with and without those regions is insignificant.

    Why is the difference insignificant?
    It must be because the trend in Africa and South America is very similar to the trend everywhere else.

  346. Posted Sep 11, 2007 at 8:07 AM | Permalink

    I read all the comments suggesting the use of a wiki, and the possibility of using StikiR’s technology. That would be great as far as I’m concerned.

    If I understand, the various posters are envisioning a separate site under the editorial control of this community. Definitely possible. I’ll wait to hear from Steve on that topic.

    I do invite you to share analytical work amongst yourselves on StikiR now. The content there is mostly of my creation, but it’s not my blog. Please edit or build on what you see. I have or will be extending the same invitation to other organisations like RealClimate, GISS, CRU, etc. So I ask politely, please keep the lively discussion here 😉

    Mike Cassin
    MD, Stikir Ltd

  347. Larry
    Posted Sep 11, 2007 at 8:12 AM | Permalink

    350, that’s a circular argument, in case it’s not obvious. He’s saying that the quality of the data doesn’t matter because the low-quality data agrees with other low-quality data. Just because you can get agreement between summary statistics of junk data sets doesn’t mean that the data isn’t junk.

  348. John V.
    Posted Sep 11, 2007 at 3:29 PM | Permalink

    #361 PabloM:

    And several of the people involved in those corrections (Hansen , Schmidt etc.) are the same people whose “corrections” are showing up as questionable through Climate Audit.

    In the interest of completeness, I have listed the names and institutions for the authors in each of the three papers. Typically authors are listed in relative order of their contribution. Out of 13 institutions and 30 authors, there are 4 authors from NASA/GISS. They all appear in Santer et al, starting at author #17 out of 25 for the paper. (They are highlighted in the lists below).

    Mears et al.
    1. Carl A. Mears: Remote Sensing Systems
    2. Frank J. Wentz: Remote Sensing Systems

    Santer et al.
    1. Benjamin D. Santer: Lawrence Livermore National Laboratory
    2. Tom M. L. Wigley: National Center for Atmospheric Research
    3. Carl Mears: Remote Sensing Systems
    4. Frank J. Wentz: Remote Sensing Systems
    5. Stephen A. Klein: Lawrence Livermore National Laboratory
    6. Dian J. Seidel: NOAA/Air Resources Laboratory
    7. Karl E. Taylor: Lawrence Livermore National Laboratory
    8. Peter W. Thorne: Hadley Centre for Climate Prediction and Research
    9. Michael F. Wehner: Lawrence Berkeley National Laboratory
    10. Peter J. Gleckler: Lawrence Livermore National Laboratory
    11. Jim S. Boyle: Lawrence Livermore National Laboratory
    12. W. D. Collins: National Center for Atmospheric Research
    13. Keith W. Dixon: NOAA/Geophysical Fluid Dynamics Laboratory
    14. Charles Doutriaux: Lawrence Livermore National Laboratory
    15. Melissa Free: NOAA/Air Resources Laboratory
    16. Qiang Fu: University of Washington
    17. Jim E. Hansen: NASA/Goddard Institute for Space Studies
    18. Gareth. S. Jones: Hadley Centre for Climate Prediction and Research
    19. Reto Ruedy: NASA/Goddard Institute for Space Studies
    20. T. R. Karl: NOAA/National Climatic Data Center
    21. John R. Lanzante: NOAA/Geophysical Fluid Dynamics Laboratory
    22. Gerald A. Meehl: National Center for Atmospheric Research
    23. V. Ramaswamy: NOAA/Geophysical Fluid Dynamics Laboratory
    24. Gary Russell: NASA/Goddard Institute for Space Studies
    25. Gavin A. Schmidt: NASA/Goddard Institute for Space Studies

    Sherwood et al.
    1. Steven Sherwood: Yale University
    2. John Lanzante: Princeton University
    3. Cathryn Meyer: Yale University

  349. John V.
    Posted Sep 12, 2007 at 1:19 AM | Permalink

    Um, what happened to our conversation?

    There were a bunch of posts between myself, PabloM, Larry, reid, and steven mosher — the conversation may have been off-topic, but it was a good conversation. No flames.

    Is there a problem with WordPress? What’s the policy on deleting posts around here?

  350. fFreddy
    Posted Sep 12, 2007 at 1:32 AM | Permalink

    Probably moved to Unthreaded on grounds of off-topicality.

  351. fFreddy
    Posted Sep 12, 2007 at 2:31 AM | Permalink

    Re #354, John V

    What’s the policy on deleting posts around here?

    Other than obvious spam, Steve dislikes deletions – it is one of the most irritating habits of advocacy sites like RealClimate that good questions for which they do not have good answers tend not to make it through moderation. Accordingly, Steve tries to avoid hitting the delete button.

    However, he has to balance this against a need to maintain focus in threads. As ever on the internet, conversations tend to spin off in different directions. This can derail useful, on-topic conversations, and reduce the serious work that gets done here.

    Your conversation (satellite/surface temperature discrepancies, wasn't it?) was not offensive, and was on-topic for the site – you will note that there is a category of posts for it in the top left sidebar. However, it was not connected to the Hansen code, and so was off-topic for this thread.

    Accordingly, I think you will find most of it on Unthreaded #19. (Also see Steve’s posts at #659 and #664 there, where he expresses some of what he is looking for.)

    I should add that I don’t speak for Steve, and I am only posting this because it is UK morning and he is probably still asleep. If I have expressed Steve’s views badly, he should feel free to, ahem, delete this post and use his own words.

  352. bernie
    Posted Sep 12, 2007 at 6:39 AM | Permalink

    #350
    John V:
    The handling of S. America and Africa is intriguing for a couple of reasons. First, because of the problematic rationale Hansen et al. offered for minimizing the so-called Y2K errors that Steve brought to their attention, namely that the US record represents but 2% of the surface and 6% of the land records. This prompted the hectic search for Waldo, i.e., for a clear warming trend not driven by UHI and other distinctly local influences. The narrower the area where the trend exists, the more problematic it is to talk about global warming. Second, if the findings for Brazil are generalizable to the rest of S. America, then the fact that dropping the S. America data has no discernible impact on the trend could mean that the trend elsewhere is in fact a UHI-driven trend, as it appeared to be in Brazil. (It might of course not mean this – but that is again the reason for trying to identify where the non-UHI-contaminated trend actually exists.)

  353. Pedro S
    Posted Sep 12, 2007 at 7:05 AM | Permalink

    Isn't it strange that Hansen can still claim to be able to model temperature change in the South Atlantic and Southeastern Pacific even without any data from Africa or South America? Where does he get the data for those areas? I do not think there are many weather stations across the oceans…

  354. bernie
    Posted Sep 12, 2007 at 8:19 AM | Permalink

    #358
    No, he was trying to make the point that even IF he ignored the flawed and limited data from Africa and S. America, it would not change the overall trend. Hansen is keeping all the data – at least until we determine whether or not there are fundamental flaws in the way he consolidates the records.

  355. Pedro S
    Posted Sep 12, 2007 at 3:34 PM | Permalink

    #359

    The point I was trying to make is this:
    If we omit data from Africa and South America, I would expect one not simply to lose the ability to track temperature changes there, but also in the oceans in between.
    I do not understand how Hansen (in figure 2, page 3 of the link provided by Steve, above) can simultaneously show Africa and S. America as grey areas (i.e. assuming no data from these areas) and have information on temperature changes at the South Atlantic (which would probably be in large part derived from the “unused” African and S. American data). Or is Hansen’s method able to derive S. Atlantic data from N. Hemisphere temperatures?

  356. bernie
    Posted Sep 12, 2007 at 6:37 PM | Permalink

    I think Hansen made a poor choice of figures. Logically you are right in that the graph is misleading. He presumably had not intended to throw out the SST record.

  357. BarryW
    Posted Sep 12, 2007 at 6:39 PM | Permalink

    So far, most of what I’ve seen here is related to land temp data. If three continents don’t have much effect on the result then maybe the place to look is in how they’re using the ocean data, and if the land data is being used to “adjust” the oceanic temps or vice versa. I seem to remember in one of the threads something about Hansen arbitrarily changing some island data. Could that be because it was affecting the oceanic temps in a way he didn’t like?

  358. Sam Urbinto
    Posted Sep 13, 2007 at 1:15 PM | Permalink

    Erik Ramberg, on your thoughts about the blog: it seems Steve does this as a hobby, much like a crossword puzzle. Most of us here do this as a thought exercise, I think, and I believe most everyone accepts that some type of warming is going on. I'm also of the mind that most of us believe there are certain things we should do about it regardless of the specifics. But when the people who work for us act like they're trying to hide something, write in a dismissive, juvenile tone, have errors found after claiming everything's wonderful, and talk about their network (whose specifics they seem to know little about) as if it's of high quality when it is not (working under the assumption that high quality = known biases and/or meeting standards)… I don't think you are accurately judging the degree of civility that is actually here, nor the motivations of the bulk of readers/posters.

    And one more thing: what if, say, aerosol effects are masking a much larger warming, and perhaps some of this work will uncover it, proving things are worse than thought and spurring greater and faster action? Wouldn't some notional idea like that be a good thing?

    A couple of people mentioned advancing glaciers. Retreating ones almost equal both advancing and stationary ones put together. (Yes, it's still only 15% retreating, but that beats advancing by 5:1.) But actually, most of the ones we know about (the largest percentage) are doing… nothing.

    It gets trickier if you talk about actual mass balance. We have sampled very few glaciers, but of the ones that have been sampled, the losers far outweigh the gainers in number, and even the gainers are gaining less individually than the losers are losing individually (e.g. if the gain/loss ratio is 20/80, that's 1 gain for every 4 losses, and the one gainer gained 10 while the median loser lost 100). If you look at the mass trends for the ones the WGMS tracks over time, they are almost all on an obvious downward slope. Whether they're indicative (I have no reason to believe they're not, but it is possible), what it means (it could be natural short-term variation, at least on glacier timescales), and what to do about it are all different issues. But the fact is, for what we're looking at of course, mostly they're losing mass – according to the measurements, satellite readings, and models, at least. I'm trying to be fair about this…

    BTW the last post on the (closed) “1934 and all that” is about the release of the code.

  359. bernie
    Posted Sep 13, 2007 at 7:22 PM | Permalink

    Sam:
    What you describe as the motivation for those involved in this site is IMHO spot on: most of us are undoubtedly puzzle fiends and were born in Missouri (for fellow non-Yanks, Missouri is the "Show Me" state). I would add that what is common amongst many here is that we work a lot with data and have an intuitive feel for what I would call orders of effects. Good numbers people, I have found, have a sense of when the numbers do not add up; we home in immediately on what might be missing – though clearly we can get it wrong. The absence of any stated confidence intervals around predictions of global warming, and the relatively small effects being measured by crude devices with significant known errors, were red flags that sparked my curiosity. One look at John Daly's site and I knew there was more to the story than what the media was trumpeting. I think Steve today cited a NOAA scientist who pointed out that the folks at CA, and Steve by name, were pretty smart guys. I have to say that I am immensely impressed with the quality of the logic, the real statistical "skill", and the tremendous work rate of many on this site. The basic civility of everyone is the icing on the cake.

  360. H
    Posted Sep 14, 2007 at 3:07 AM | Permalink

    Steve, I have made my own calculation. Every step and assumption is shown. I posted it earlier on another thread. Copy below, in case you missed it. If not, sorry.

    I think that such fairly simple energy balance models would be very useful when the effects of various feedbacks are studied. I am willing to translate it, if you think it is worth the effort.

    ——————————-
    #181

    Steve, I too have looked for a credible explanation of how CO2 warms the atmosphere. I could not find one and decided to calculate it myself. So I made a simple energy balance calculation based on the international standard atmosphere. I'm sorry that the text is in Finnish, but the formulas are self-explanatory 🙂 Anyway, the last figure shows it all. I studied 4 cases of H2O feedback.

    blue – no feedback
    light green – linear decrease from 100% response at SSL to 0% at 4 km
    green – linear decrease from 100% response at SSL to 0% at 11 km
    red – 100 % through the troposphere

    Click to access Hiilidioksidin_vaikutus_H.pdf

    Only the last case shows a stronger positive feedback, and it is the one favoured by the modellers! The other climatologists seem to favour the second. My calculation indicates that the increase of H2O content in the upper troposphere is crucial to the H2O feedback. Has this been observed? I am sceptical that the big GCMs get it right.
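
    For readers who do not read Finnish: the underlying logic is the standard zero-dimensional feedback arithmetic, sketched below with textbook round numbers just to show the shape (the column calculation in the PDF is more detailed; these values are illustrative, not results from it):

    \Delta F_{2\times CO_2} \approx 5.35 \ln(2) \approx 3.7\ \mathrm{W\,m^{-2}}
    \Delta T_{no\,feedback} = \Delta F / \lambda_0 \approx 3.7 / 3.2 \approx 1.2\ \mathrm{K}
    \Delta T_{with\,feedback} = \Delta T_{no\,feedback} / (1 - f)

    Here \lambda_0 \approx 3.2\ \mathrm{W\,m^{-2}\,K^{-1}} is the no-feedback (Planck) response and f is the aggregate feedback factor; in this simple picture the four H2O cases above amount to different effective values of f.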

  361. Jean S
    Posted Sep 14, 2007 at 3:35 AM | Permalink

    #368 (H): Although I do appreciate your efforts, please try to post under appropriate topics (in this case "Unthreaded"). This thread is about Hansen releasing the code. Off-topic discussions tend to get deleted here.

  362. H
    Posted Sep 14, 2007 at 5:22 AM | Permalink

    Sorry. That was the intention. I must have clicked a wrong tab.

  363. clivere
    Posted Nov 18, 2007 at 4:16 PM | Permalink

    At the time the code was released I had the impression of an organisation under severe pressure trying to buy some time for itself. They appear to have achieved that and this issue has somewhat dropped off the radar.

    With reference to the quote “Reto would have preferred to have a week or two to combine these into a simpler more transparent structure, but because of a recent flood of demands for the programs, they are being made available as is. People interested in science may want to wait a week or two for a simplified version.”

    I have an interest in observing where this is going to lead.

    When people mention simplification with regard to computer programs, it normally means fixing them, because usually you don't fix what ain't broke.

    Any program that deals with complexity will require extensive testing and that may be the reason why it has been a lot longer than a couple of weeks to make it available. Will the simplified version ever be released now the pressure is off?

    As they indicate, scientists will want the code that is actually used, which I assume is still the old code. Has anybody managed to successfully run that code? Do we know if the released code is really the version currently in use?

    If they do make fixes to resolve issues, I assume the simplified version will then become the version actually used. How will they manage the transition to this version, including any restatement of prior results if required?

  364. steven mosher
    Posted Nov 18, 2007 at 4:47 PM | Permalink

    RE 371.

    “At the time the code was released I had the impression of an organisation under severe pressure trying to buy some time for itself.
    They appear to have achieved that and this issue has somewhat dropped off the radar.”

    Yes we predicted this.

    “With reference to the quote “Reto would have preferred to have a week or two to combine these into a simpler
    more transparent structure, but because of a recent flood of demands for the programs, they are being made available as is.
    People interested in science may want to wait a week or two for a simplified version.””

    This promise was never fulfilled. Some believe that OpenTemp is a creation of Hansen's funnelled through JohnV, a former NASA employee. I've seen no direct evidence of this and take JohnV at his word.

    “I have an interest in observing where this is going to lead to.

    When people mention simplification with regard to computer programs it normally means fix them
    because usually you dont fix what ain’t broke.”

    If you read GISTEMP, the needed simplifications would be apparent, since the code carries baggage from the punchcard days. It's pathetic.

    “Any program that deals with complexity will require extensive testing and that may be
    the reason why it has been a lot longer than a couple of weeks to make it available.
    Will the simplified version ever be released now the pressure is off?”

    The testing would be simple and mundane. The math is very simple. The problem is that the code was written over decades and has never had a proper rewrite. There are roughly 10K lines of code. The rewrite job is trivial, if you're getting paid. The simplified version will not be released; the pressure is off.

    “As they indicate scientists will want the code that is used which I assume is still currently the old code.
    Has anybody managed to successfully run that code? Do we know if the released code is really the version currently used?”

    After a few weeks of effort, no one was able to get the code to compile and execute. The issues were OS-related and compiler-related. These are nasty issues. Nobody has the patience to slog through the crap.

  365. bender
    Posted Nov 18, 2007 at 5:03 PM | Permalink

    Nobody has the patience to slog through the crap.

    I wish I had the time now that I had when I was 14 years old. If I were that age again I’d be ripping into these puzzles, 24/7. Oh, the time we had slogging through the crap that the busy adults didn’t have the patience or gumption for.

  366. Larry
    Posted Nov 18, 2007 at 5:06 PM | Permalink

    Anybody with a PDP-8 (or whatever they used) in their basement?

  367. steven mosher
    Posted Nov 18, 2007 at 5:37 PM | Permalink

    RE 374.

    Based on comments from guys who worked on it, we think it was AIX – bastard child of Unix, eclipsed only by HPUX in its stupidity.

    The issues were weird, funky little things.

    The flow is pretty simple: read file, process, write file. Last I looked, all the crashes were stupid file-open/file-overwrite issues, memory-overwrite stuff…

    I can understand why JohnV rewrote from scratch. GIStemp was like the Da Vinci Code written by the short-bus crew.

  368. steven mosher
    Posted Nov 18, 2007 at 5:42 PM | Permalink

    RE 373. Patience.

    Yes. I remember when my parents threw away a TV and I desoldered every component from every board. I had egg cartons of rocks collected, egg cartons of bones collected, egg cartons of electronic components collected. I had patience then.

  369. Larry
    Posted Nov 18, 2007 at 5:46 PM | Permalink

    HPUX? Is that what you hit with a hockey stick?

  370. MarkR
    Posted Nov 18, 2007 at 6:16 PM | Permalink

    Mosh. This is all very disappointing. Why doesn't someone write to Hansen and his boss to follow up on the promise to release the simplified code? As SteveM says so often, delay is a harbinger of bad tidings. The more Hansen delays, the more likely it is that something is seriously wrong.

  371. steven mosher
    Posted Nov 18, 2007 at 6:52 PM | Permalink

    RE 377. HPUX was Hewlett-Packard's abortive attempt to do UNIX.

    After screwing up Unix, they bought Apollo and screwed that up too.

    At one point I had an HP workstation with the Rocky Mountain BASIC OS (which was totally cool), another (an HP Vectra) with a lame, substandard MS-DOS implementation, and a third with HPUX and star graphics.

    Where was I? Ah well. I am going to head over to WeirdStuff and see if they have any AIX crap, just for grins. Failing that, I have some old dogs I can talk to about the project.

  372. steven mosher
    Posted Nov 18, 2007 at 7:06 PM | Permalink

    RE 378. I have a suspicion that the story about the "simplified version" was just a story.

    Looking at the code, it would take somebody more than two weeks to clean the dead code out and make a WELL DESIGNED program out of it. Figure 10K LOC… figure 10 LOC/hr on a rewrite, so about 1,000 hours: a 6 man-month job, with full docs of course. A fast guy could whip it out in 3 months, I figure, if he knew Fortran and if he knew the math in the model. Feed him twinkies, raw meat and strippers… 1–2 months…

  373. Larry
    Posted Nov 18, 2007 at 7:19 PM | Permalink

    381, are you accounting for the fact that Fortran is a totally unstructured, goto-happy language? It's really hard to follow program flow in Fortran even in a well-written program. Such programs are hard to clean up, and even harder to translate into a structured language.

    Need much Fritos and Mountain Dew for that code monkey.

  374. MarkR
    Posted Nov 18, 2007 at 7:25 PM | Permalink

    Mosh. Does anyone know the exact machine model and AIX version they are/were using?

  375. BarryW
    Posted Nov 18, 2007 at 7:37 PM | Permalink

    Re 381

    Oh, I believe that they are making a "simplified" version – just not what you're expecting. From what I've seen, Hansen's crew is a "hobby shop". They write quick-and-dirty code to solve a specific problem, and it keeps getting added to year after year. That's why you can't make the code they released work. There is a set of manual steps that only the gurus know, so even if you get it to compile you'll be forever trying to get the data to run, because there are steps that are done from the command line or through shell scripts. I seem to remember some of the code was in Python. I expect you'll see more of the same, or PHP or Java, because that's what the coder they have working on it will be comfortable with. The programmer is probably doing it as a side job, or is rewriting the I/O to make their production updates easier, but my guess is they won't be touching the main calculations now if ever, unless Hansen has a new algorithm that he wants to use. In the meantime Hansen can say he released the code, and if the Jesters can't make it run it's just because they're not competent.

  376. MarkR
    Posted Nov 18, 2007 at 7:58 PM | Permalink

    Looks like they updated the code again after this thread died. Anyone want to take another look?

    This archive is approximately 2.2 MB. It was updated Sep. 10, 2007, to clarify the procedures of some steps; finally it was updated Oct 10, 2007, to simplify and speed up STEP2 (homogeneization)

    NASA Link

  377. MarkR
    Posted Nov 18, 2007 at 8:07 PM | Permalink

    Page updated: 2007-10-08

    Source code and documentation for GISTEMP software is available here. The programs are intended for installation and use on a computer with a Unix-like operating system.

    NASA Link

    If this explains the steps more clearly, I expect we will hear more……

  378. clivere
    Posted Nov 19, 2007 at 3:19 PM | Permalink

    MarkR – well spotted!

    So they released this version without fanfare or any mention that they had replaced the original released code.

    Is this the final simplified version they referred to or is there more to come?

  379. Sam Urbinto
    Posted Nov 19, 2007 at 4:10 PM | Permalink

    The thing about Linux is that the version numbers of everything, and their dependencies, can also complicate things.

  380. Larry
    Posted Nov 19, 2007 at 7:07 PM | Permalink

    387, amen. Sometimes you can’t have program “a” and program “b” on the same machine at the same time, because they use different versions of library “c”. Very annoying. They should figure out a way to have multiple versions at the same time.

  381. MarkR
    Posted Dec 2, 2007 at 4:50 AM | Permalink

    Is anything happening with the latest HOW-TO Document for the GISS GCM?

    3) System requirements

    The set-up code and automatic compilation rely heavily on gmake (version 3.79 or higher) and perl (version 5.005 or higher). Both of these programs are public domain and available for almost all platforms. Up-to-date versions are required to be able to set up and compile the model.

    The code requires a FORTRAN 90/95 compiler. It has been tested with the SGI, IBM, and COMPAQ/DEC compilers on their respective workstations. For Linux, Macs or PCs, the choice of compiler is wider, and we have not been able to test all possibilities. The Absoft ProFortran compiler works well, as do the Lahey/Fujitsu and Portland Group compilers. We have been unable (as yet) to get the model to compile with the Intel or VAST compilers due to internal errors in those products. Please let us know if you succeed (or more importantly, fail) in compiling the code using any other compiler.

    Note that the parallelization used in the code is based on the OpenMP shared memory architecture…….

  382. scp
    Posted Apr 13, 2008 at 7:39 PM | Permalink

    “Peak Oil” Paper Revised and Temperature Analysis Code

    On or about 7 September 2007, Dr. Hansen said…

    People interested in science may want to wait a week or two for a simplified version.

    I am interested in science and I have now waited a week or two (or somewhat longer) for a simplified version. Can anyone tell me where I might find it?

  383. Mike Rankin
    Posted Aug 13, 2009 at 2:45 PM | Permalink

    E. M. Smith reports success at making the GISS code run. At his blog

    http://chiefio.wordpress.com/

    he relates his observations on the data and how much of the warming seems related to stations entering and leaving the GISS records.

  384. Posted Aug 14, 2009 at 1:15 AM | Permalink

    I have made GIStemp run.

    All you need is a Linux box. (I'm on a Red Hat 7.2 release on an AMD 400 MHz chip with about 128 MB of memory IIRC – you don't need a lot of hardware. It started life as a 486 machine a couple of decades ago; I wanted to be "period correct" 😎 – that's a re-enactor inside joke 😉) The changes needed to the code are not particularly large (though a bit hard to discover), mostly related to not using a non-standard extension that lets you put data initialization into the variable declarations (i.e. using a DATA statement to load initial values instead).
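
    To make the kind of change concrete, here is a made-up fragment (not an actual GIStemp line; TMEAN is just a placeholder name) showing the vendor-extension form and the portable replacement:

    C     --- Non-portable vendor extension (initial value in the declaration):
    C         REAL TMEAN / 0.0 /
    C     --- Portable FORTRAN 77 form: declare, then initialize via DATA
          REAL TMEAN
          DATA TMEAN / 0.0 /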

    That, and some programs need an f77 compiler while others need f90 or better (i.e. g95). I have a makefile that keeps that straight in my build. Oh, and I yanked out the ksh directives and made it sh / bash compatible (mostly adding a couple of strategic “;” characters that are optional with ksh… )

    I used the free g95 compiler for my “f90 or better” and it seems to work well.

    The only "issue" still open for me is that the Hadley SST file is "bigendian" and the PC I'm on is "littleendian", so I can compile, but not run, STEP3 and STEP4_5. I used the stable.91 g95, and the newer release claims to support the "convert=bigendian" flag that would solve this problem.
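
    For what it's worth, gfortran can handle this sort of thing with a (non-standard) CONVERT= specifier on the OPEN of the unformatted file; I have not checked whether that newer g95 uses the same spelling as its "convert=bigendian" flag. Something like the following, where the unit number and file name are placeholders:

    C     Read a big-endian unformatted file on a little-endian box.
    C     CONVERT= is a compiler extension, not standard Fortran.
          OPEN(UNIT=11, FILE='SBBX.HadR2', FORM='UNFORMATTED',
         &     ACCESS='SEQUENTIAL', STATUS='OLD',
         &     CONVERT='BIG_ENDIAN')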

    If anyone wants a "tarball", just leave a note on one of my blog pages or send email to the address in my "about" tab, and I'll put it on your FTP destination. I can also put up a set of "diffs" if folks would rather have that. FWIW, I've also "cleaned things up a bit" by moving to a better directory structure and naming things a bit more cleanly (shell scripts now end with .sh on a reliable basis; binaries are in a "bin" directory and sources in a "src" directory). I also pulled the "inline compile and run" out and made a "makefile". See:

    GIStemp – a cleaner approach

    The directory change is not needed to run; it just makes it a lot easier to keep straight what's what.

    Oh, and if you put this on a bigendian box you ought to be “good to go” through STEP4_5 as is. (Mac PowerPC, SUN Sparc, IBM RS6000, MIPS and more are “bigendian”. Intel x86 are littleendian. HP made a box with selectable endian character at one point).

    Basically, if anyone wants this on their box, I'm available and willing to help. If you are "L.A. close" to San Francisco, I could be talked into assisting (gas money would be nice; "will program for beer" 😎).

    E. M. Smith

  385. Peter O'Neill
    Posted Nov 17, 2009 at 8:21 PM | Permalink

    A STEP6 has now appeared, with other updates to other steps, at ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/GISTEMP_sources/

    These are dated November 14th, and seem to still be work in progress –

    http://data.giss.nasa.gov/gistemp/sources/GISTEMP_sources.tar.gz , linked from the Sources page, seems to have become a broken link for now.

    While http://data.giss.nasa.gov/gistemp/updates/ has been updated to indicate that USHCN version 2 will be used from November 13th, there is as yet no mention of STEP6 there, and it is not mentioned in the revised gistemp.txt file either.

    A quick look at the STEP6 files indicates that it produces “line plots” from ANNZON.Ts.GHCN.CL.PA.1200, ANNZON.Tsho2.GHCN.CL.PA.1200, ZON.Ts.GHCN.CL.PA.1200 and ZON.Tsho2.GHCN.CL.PA.1200.

7 Trackbacks

  1. […] you’re interested in global warming and are familiar with shell scripts and Fortran, you can now access the code written by NASA’s James Hansen on the taxpayers’ […]

  2. […] Software engineering is a core capability and a key enabling technology necessary for the support of NASA’s Mission Directorates. Ensuring the quality, safety, and reliability of NASA software is of paramount importance in achieving … …more […]

  3. […] Software engineering is a core capability and a key enabling technology necessary for the support of NASA’s Mission Directorates. Ensuring the quality, safety, and reliability of NASA software is of paramount importance in achieving … …more […]

  4. […] James Hansen released his computer code for the analysis of climate data after what I see as a serious breach of scientific ethics. For a […]

  5. […] science’ graduate students taking climatology courses going to do now that their days as ‘wannabe programmers’ writing archaic, awful, leak riddled code, patched together from h… are almost […]

  6. […] It is also a pity that the e-mails cover only the period 3 to 15 August. On 8 September 2007, Hansen in fact released the source code of the program with which he calculates the global temperature. His reaction at the time showed that […]

  7. By Y2K Re-Visited « Climate Audit on Oct 29, 2011 at 2:13 PM

    […] Hansen capitulated to pressure to release GISS code, I commented here on what I believed to be the relevant interest in temperature records – a comment that seems […]