Weblog Update: November 2006

I thought I’d give a brief overview of what’s happening with the weblog: the changes that were made, the problems encountered and fixed, and the future of Climate Audit.

Statistics

Figures for the month of November show a big jump from all previous months. The number of hits climbed steeply to around 1.7 million.

[Image: castatsnov2006.JPG (Climate Audit monthly traffic statistics to November 2006)]

Performance problems

As you can probably gather, Climate Audit is no ordinary blog. Because of a notification from the webhost regarding security and possible spam relaying, I had to rapidly update the blog software to the latest version of WordPress (2.0.5), and in the process break a few things, remake a few things and install a new look for the site. This took some effort, as quite a few plug-ins broke (most noticeably the plug-in that keeps the Road Map sticky). I had already done a test update on my own server to make sure nothing major happened to the database, but even so things did break.

The most noticeable problem was that some posts refused to load at all, even though they were in the database. This was traced to the memory allocated to PHP, the scripting language that WordPress is written in. Since CA is hosted on a normal webhosting package for normal weblogs, memory usage isn’t usually a problem. But what I found was that once the number of comments on a post climbed above roughly 175-200, the PHP request was rejected by the webserver and nothing came back.
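For anyone curious about the mechanics, the ceiling involved is PHP’s memory_limit setting. As a rough, hypothetical sketch only (whether it does anything depends entirely on the host; many shared hosts ignore or cap ini_set() for memory_limit, which is why the real answer is to get the host to raise it), something like the following in wp-config.php can sometimes buy a little headroom:

    // Hypothetical lines for wp-config.php (WordPress 2.0.x era).
    // Many shared hosts override or ignore ini_set() for memory_limit,
    // so treat this as a sketch rather than a guaranteed fix.
    if ( function_exists('ini_set') ) {
        // Ask PHP for more memory before WordPress loads a post
        // together with its several hundred comments.
        @ini_set('memory_limit', '32M');
    }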

At the moment the only workaround is to create a second post and edit the MySQL database by hand, moving comments from the original post to the new one. This can only be a temporary measure: there are only so many times this administrator is going to hack databases.
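To make the hack concrete, here is a hedged sketch of the sort of thing involved, using WordPress’s standard wp_comments table and column names; the post IDs, the cutoff comment ID and the database credentials are invented for illustration:

    <?php
    // Illustrative sketch only: shift the tail of an overlong comment
    // thread from the original post to a new continuation post.
    // 123, 456, 201 and the credentials are made-up values; wp_comments,
    // comment_post_ID and comment_ID are WordPress defaults.
    $db = mysql_connect('localhost', 'ca_user', 'secret');
    mysql_select_db('ca_wordpress', $db);

    // Move comments with ID >= 201 from post 123 to continuation post 456.
    $sql = "UPDATE wp_comments
               SET comment_post_ID = 456
             WHERE comment_post_ID = 123
               AND comment_ID >= 201";
    mysql_query($sql, $db) or die(mysql_error());
    mysql_close($db);
    ?>

In practice the same UPDATE can just as easily be run from phpMyAdmin or the mysql command-line client; the PHP wrapper is only there to show where the pieces live.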

I’ve asked Steve to contact the webhost (webserve.ca) to see whether they will triple the amount of memory available to PHP. If the webhost isn’t responsive to the idea, we’ll have to look at other hosting options, or face increasing disruption to performance.

In the interim, I’ve installed WP-Cache, which should reduce the load on PHP and MySQL by caching pages that have not changed.
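For what it’s worth, caching plugins of the WP-Cache family hook in before WordPress touches the database. As a rough sketch of the mechanism (the plugin itself supplies the actual caching file), enabling it comes down to one constant in wp-config.php:

    // Sketch of the mechanism, not the plugin's exact install steps.
    // With WP_CACHE defined, WordPress pulls in the plugin-supplied
    // wp-content/advanced-cache.php early in its start-up, so an
    // unchanged page can be served from a file on disk instead of
    // being rebuilt by PHP and MySQL on every request.
    define('WP_CACHE', true);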

Accessing the blog

I know that some people have had problems accessing the blog from time to time. When I upgraded Bad Behavior, it caused one or two longtime commenters difficulty accessing the site at all, but BB has now settled down.

The exception was an e-mail from SurfControl.com, which runs a filtering proxy service for businesses and was getting a large “Error 400” message. The workaround was to whitelist the IP address that SurfControl uses, on the understanding that they don’t allow spam through their system.

SurfControl expressed gratitude that we responded to their inquiry and enacted the fix; apparently most websites treat it as SurfControl’s problem alone and don’t bother helping to resolve the issue.

The new look and AdSense

The most obvious change this month was moving to a new three-column theme and adding Google AdSense and Google search.

Our original theme was very simple, and I had introduced a lot of CA-specific hacks into it to get various things working. For the update I decided to go with a new, more professional-looking theme, so I used Tiga as the base theme and then made some modifications (a user-supplied header background being top of the list).

Google AdSense was added after a conversation with Steve about the costs of webhosting and how much money Steve hasn’t made in mining stocks since he started working on the Hockey Stick more or less full-time.

Money from AdSense (via the links in the special panel on the RHS and the links in the Google Web Search feature) will go towards hosting the blog in the future. The more people use AdSense, the more money there is to ensure the future hosting of CA; it’s as simple as that. (By the way, I get nothing from this arrangement.)

The future of Climate Audit

Unless Steve starts enacting policies designed to shrink the weblog’s audience, CA will have to move to a different hosting package in the near future. At the moment the next step up would be a Virtual Private Server (VPS), which would mean I could tune the memory for CA’s specific needs and we’d have more disk space (the next thing on my horizon is that CA is going to hit its disk space limit in a few months). Webserve.ca offer a VPS package starting at C$40 per month, which our current AdSense revenue should be able to cover.

CA will be two years old in late January (how time flies!) and has grown beyond my expectations in a very short time. When I offered to help install and manage the blog, I didn’t realise how much work maintaining it would be, because I didn’t foresee how much traffic it would generate (or how many articles Steve manages to conjure up).

It’s sort of a labour of love, only with more brickbats than I expected. I’ve learnt a lot of PHP and MySQL and met some very interesting people that I’m delighted to correspond with.

Who knows what the next year will bring?

62 Comments

  1. Dave Dardinger
    Posted Nov 30, 2006 at 12:18 PM | Permalink

    So how about Steve tests CA’s monthly traffic as a temperature proxy? Of course the times would have to be dilated, but is that any worse than BCP telecommunications?

  2. Posted Nov 30, 2006 at 12:30 PM | Permalink

    I’ve no idea about bristlecones, but I can tell you that CA’s stats are coming dangerously close to those seen in the Holocene Maximum.

  3. bender
    Posted Nov 30, 2006 at 1:45 PM | Permalink

    A big audience for the champagne tonight when the hurricane season closes!

  4. jae
    Posted Nov 30, 2006 at 2:31 PM | Permalink

    Thanks for all your efforts, John A!

  5. Posted Nov 30, 2006 at 3:08 PM | Permalink

    John A:

    Thanks for all your labor of love! The new look is great. I will try to remember to check adsense when looking for a product or service.

    Russ

  6. T J Olson
    Posted Nov 30, 2006 at 3:31 PM | Permalink

    Yeah. A fine new look with increased user-friendliness. Thanks John and founder Steve!

    Will you add a way to send contributions? I know Steve is proudly independent, but doesn’t usage warrant it? Many of the quality contributors’ inputs are worth supporting, somehow.

  7. Brooks Hurd
    Posted Nov 30, 2006 at 3:43 PM | Permalink

    John,

    It looks great.

    It is somewhat of a problem to navigate when a long thread is split into two (or more) threads. Would it be possible to add links from split threads to the other parts? You could put the ending comment number and link it to that page. I realized that the “p=xxx” numbers are not sequential, so that doesn’t help (I understand the reason). I mean this as a suggestion, not a slam. I know how much work you have put into this.

    Great job!

    Thanks for all your efforts.

  8. Reid
    Posted Nov 30, 2006 at 4:15 PM | Permalink

    Thanks Steve and John A.

    I read all your posts and try to read all the comments.

    Is there anyone out there who will apply the Climate Audit blog formula to auditing the GCM’s? They are ripe for deconstruction. GCM’s are modern divination with a mathematical front end.

  9. bender
    Posted Nov 30, 2006 at 4:42 PM | Permalink

    Is there anyone out there who will apply the Climate Audit blog formula to auditing the GCM’s?

    Not to my knowledge, although I’ve previously called for exactly that. But you know how challenging that would be?

  10. cbone
    Posted Nov 30, 2006 at 4:56 PM | Permalink

    “Not to my knowledge, although I’ve previously called for exactly that. But you know how challenging that would be?”

    No more challenging than the verification and auditing done on computer modeling used in other disciplines. Of course, when no one holds you accountable for the results of your models, there is no impetus to perform the audits. I know in Engineering disciplines the models have to undergo rigorous scrutiny before they can be relied upon for design purposes. I wouldn’t risk my license by relying on a GCM in that manner.

  11. Steve McIntyre
    Posted Nov 30, 2006 at 6:08 PM | Permalink

    That’s a lot of traffic in November. It’s hard to think of a reason – it’s not like there’s been any unusual publicity.

  12. bender
    Posted Nov 30, 2006 at 6:16 PM | Permalink

    The new look.

  13. bender
    Posted Nov 30, 2006 at 6:19 PM | Permalink

    Also I wonder about the effect of that Canadian program, Fifth Estate. Maybe much of the new traffic is Canadian?

  14. Steve McIntyre
    Posted Nov 30, 2006 at 6:20 PM | Permalink

    The new look is certainly much more inviting.

  15. Stan Palmer
    Posted Nov 30, 2006 at 6:21 PM | Permalink

    re 13

    The Fifth Estate is not a program with a popularity that would drive a lot of traffic.

  16. bender
    Posted Nov 30, 2006 at 6:42 PM | Permalink

    I think it’s the clouds in the banner. That moist convection. Leonard Cohen would approve.

  17. kim
    Posted Nov 30, 2006 at 6:47 PM | Permalink

    I think I’ve never heard so loud,
    The quiet message of a cloud.
    ====================

  18. Ken Fritsch
    Posted Nov 30, 2006 at 7:00 PM | Permalink

    Speaking of BCP, reminds me that my old eyes are analogous to one of those hypothetical old trees at the tree line that responds in a linear fashion to temperature. These proxy eyes say the new format is hot and positive. No flipping required.

  19. bender
    Posted Nov 30, 2006 at 7:06 PM | Permalink

    “i can’t run no more
    with that lawless crowd
    while the killers in high places
    sing their prayers out loud
    but they’ve summoned, they have
    summoned up a thundercloud
    they’re gonna hear from me”

    leonard cohen

    for poet #17

  20. Greg F
    Posted Nov 30, 2006 at 7:08 PM | Permalink

    That’s a lot of traffic in November. It’s hard to think of a reason – it’s not like there’s been any unusual publicity.

    I get the RSS feed with Thunderbird. Before the site upgrade, if I left the computer on all day, it would get 10 comments and stop (close and restart would get new comments). At one point I even tried a different RSS reader with the same results. After CA went to the new look Thunderbird started retrieving RSS comments at regular intervals. Perhaps this accounts for some of the increase.

  21. John S
    Posted Nov 30, 2006 at 7:11 PM | Permalink

    The general content of the AdSense links is, in the twisted way I see things, amusing. Given the general (mis)perception of this site I’m surprised that ads for Exxon and ‘Anti-Science Republicans’ aren’t featured more prominently. Then people could really go to town over supposed money trails.

  22. Lee
    Posted Nov 30, 2006 at 7:14 PM | Permalink

    “That’s a lot of traffic in November. It’s hard to think of a reason -”

    It’s me, Steve. All me.

    grin.

  23. Armand MacMurray
    Posted Nov 30, 2006 at 8:50 PM | Permalink

    Re:#21
    I found this ad especially amusing:

    Internal Audit: Qualified accounting professionals post job listings & connect today.

    It made me wonder: just who might be a “qualified” climate auditor? Perhaps Steve should start a sideline granting certification? Besides tree-coring and statistics classes, and posting all of one’s work immediately to the internet, I’m sure squash would be a part of the curriculum.

  24. Posted Dec 1, 2006 at 12:16 AM | Permalink

    #11.

    That’s a lot of traffic in November. It’s hard to think of a reason – it’s not like there’s been any unusual publicity.

    Hypothesis: Google bumped up your search rankings because you started serving AdSense.

  25. Vasco
    Posted Dec 1, 2006 at 5:43 AM | Permalink

    climateaudit.org has been referred to in various forums discussing GW lately (also by me). This may explain the surge in numbers.

    Keep up the good work! I hope auditing research before publication will become standard practice.

  26. Posted Dec 2, 2006 at 1:19 AM | Permalink

    #20

    RSS feed

    Another possibility: the RSS feed was broken on the older WordPress version (due to a line end as the first character). Perhaps now it’s not, and you are getting polled by RSS readers.

  27. gb
    Posted Dec 2, 2006 at 4:18 AM | Permalink

    Re # 8:

    Auditing GCM’s. What do you have in mind? Looking through a few thousand lines of Fortran code to check if there are any coding errors? Who has said that the teams developing GCM’s don’t carefully check their codes?

  28. MarkR
    Posted Dec 2, 2006 at 5:25 AM | Permalink

    #11 I think Monckton will have generated a lot of traffic. It’s the first prominent large media piece in the UK that puts the anti-Warmer point of view. You could probably tell by looking at the proportion of UK traffic in November versus previous months.

  29. MarkR
    Posted Dec 2, 2006 at 5:30 AM | Permalink

    #27 I don’t think there has been any independent audit of the theory, input, assumptions, and calculation of the GCMs. Based on the history of proxy studies, I don’t think peer review in the journals can be relied on for GCM review.

  30. Steve McIntyre
    Posted Dec 2, 2006 at 6:39 AM | Permalink

    #27. gb, I think that it would be very healthy to have an independent audit of the “best” GCM by properly funded engineers. I’m sure that it would take a substantial, properly funded team a long time to evaluate a GCM. Engineers do feasibility studies all the time, and feasibility studies can cost millions of dollars and test everything in the proposed design. Journal review of articles about GCMs, by people without the time or funding to do such analysis, can’t accomplish this.

    Most audits of financial statements simply endorse the statements prepared by the company, but audits are done routinely and for good reason.

    No one’s suggesting that GCM designers haven’t tried to build good code. It’s just that, in other walks of life, independent review by professionals is standard procedure. If there were no policy implications, the amateurism of the review and verification process in climate science wouldn’t matter. However, real policy decisions are being made.

  31. gb
    Posted Dec 2, 2006 at 6:42 AM | Permalink

    Re # 29. I have looked only at certain parts of ocean and atmospheric models but in these cases assumptions and parameterizations are reported (in journals). You are thus free to evaluate them. Validations are in fact frequently carried out and reported in the journals.

  32. Chris H
    Posted Dec 2, 2006 at 6:53 AM | Permalink

    #27 gb, when I was working in commercial software development we had a series of mechanisms for ensuring the quality of our code.

    1. Various design reviews where we had to explain and justify our design to senior developers outside of our group.

    2. All code had to go through a line by line review with a senior developer to ensure that it was correctly implemented, well commented and well presented.

    3. We had to write detailed documentation down to the level of the behaviour and valid parameters for every function.

    4. We had to design our software in such a way that it could be tested at a low level and produce a document that defined how it could be tested.

    5. An independent Quality Assurance department would take the documentation from steps 3. and 4. and would write a test bed to ensure that the code we had written conformed to our claims (without looking at the source code).

    6. We had to work closely with technical authors who produced documentation for users.

    7. A second Quality Assurance department would work from the user documentation to produce an integrated set of tests to ensure that the product actually did what we wanted it to do.

    This is just for normal, everyday commercial software. When I worked on projects that did stuff like control a nuclear reactor or keep a helicopter in the air, we were much more careful.

    How does this compare to the verification process for GCMs? I don’t actually know about GCMs but I have had quite a lot to do with software produced by academia and thus have very low expectations.

  33. gb
    Posted Dec 2, 2006 at 7:01 AM | Permalink

    Re # 30. Climate models and parameterizations are frequently improved. Every time a new version is developed it is tested and the sensitivity to (new) parameterizations is evaluated. Perhaps you wish it to be more rigorous, but I can’t call the approach of modellers amateurism.

  34. Willis Eschenbach
    Posted Dec 2, 2006 at 7:09 AM | Permalink

    gb, thanks for your comments above. You say:

    Auditing GCM’s. What do you have in mind? Looking through a few thousand lines of Fortran code to check if there are any coding errors? Who has said that the teams developing GCM’s don’t carefully check their codes?

    I fear that you may not understand what it takes to properly validate a complex computer program. Yes, they “check their codes”, but this is far from validation. There have been several descriptions of this process by software engineers on this blog which you might want to read; I can’t track them down at the moment, but maybe the authors could point them out.

    You say “Validations are in fact frequently carried out and reported in the journals.” I challenge you to supply a citation of a single one. Errors are occasionally reported, but validation does not mean “report the odd error that you find.” It is a much more comprehensive process that involves looking at the assumptions, simplifications, parameters, input data, interim values, end-point and boundary treatments, line-by-line code, error handling, and a host of other aspects of a GCM. There are not “thousands of lines” of code in a GCM; there are hundreds of thousands, usually written by a variety of programmers over a long period of time. Debugging and validating software is not the trivial process you seem to assume.

    w.

  35. gb
    Posted Dec 2, 2006 at 8:10 AM | Permalink

    Re # 34:

    Doney et al. (2004) Evaluating global ocean carbon models. Global Biogeochem. Cycles, vol. 18, 3017.

    Matsumoto et al. (2004) Evaluation of ocean carbon cycle models with data-based metrics. Geophys. Res. Lett. vol 31, 07303.

    Canuto et al. (2004). Latitude-dependent vertical mixing …. Geophys. Res. Letts. vol. 31, 16305.

    Gent et al. (2002) Parameterization improvements in an eddy-permitting ocean model for climate. J. Climate vol 15, 1447.

    Schmidt et al. (2006) Present-day atmospheric simulations using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data. J. Climate., vol. 19, 153.

    to name a few. Can’t you carry out a literature search yourself? And I am doing simulations and code development myself so yes, I know what it takes to validate a program.

  36. MarkR
    Posted Dec 2, 2006 at 8:40 AM | Permalink

    #35 So the independent validation of the NASA GISS Model is carried out by……NASA GISS. I don’t think so.

    Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data
    Gavin A. Schmidt, Reto Ruedy, James E. Hansen, Igor Aleinov, Nadine Bell, Mike Bauer, Susanne Bauer, Brian Cairns, Vittorio Canuto, Ye Cheng, Anthony Del Genio, Greg Faluvegi, Andrew D. Friend, Tim M. Hall, Yongyun Hu, Max Kelley, Nancy Y. Kiang, Dorothy Koch, Andy A. Lacis, Jean Lerner, Ken K. Lo, Ron L. Miller, Larissa Nazarenko, Valdar Oinas, Jan Perlwitz, Judith Perlwitz, David Rind, Anastasia Romanou, Gary L. Russell, Makiko Sato, Drew T. Shindell, Peter H. Stone, Shan Sun, Nick Tausnev, Duane Thresher, and Mao-Sung Yao
    Affiliations: NASA Goddard Institute for Space Studies (variously with Columbia University, SGT Inc., and MIT), New York, New York; LSCE, CEA Saclay, Gif-sur-Yvette, France; Massachusetts Institute of Technology, Cambridge, Massachusetts

  37. Steve McIntyre
    Posted Dec 2, 2006 at 10:21 AM | Permalink

    #33 gb – I didn’t say that the modelers were “amateurish”. I said that the systems for independent verification were “amateurish”. You have unpaid, volunteer and cursory journal review. People say that journal reviewers are unpaid. Quite so. My point is that if you are going to rely on GCMs for policy purposes, policy makers should require a more professional evaluation. I think that this would be healthy for all parties. As I said, my concept of evaluation would be the equivalent of a “red team” or “tiger team” like one would do for aerospace engineering. It would cost in the millions of dollars to do with properly funded engineers, but it would be money well spent – regardless of where you stood on the political spectrum.

  38. David Smith
    Posted Dec 2, 2006 at 11:22 AM | Permalink

    RE #35, Schmidt study: I haven’t found a free copy of this one so I have not read it. (If anyone sees a free copy, please post the link.)

    However, I noticed in the abstract that they compare GCM output with reanalysis data. I hope they were cautious in that comparison, because “reanalysis data” is not always observed data. Sometimes reanalysis data are computer-generated estimates and sometimes they consist of observations which were adjusted by a human, for one reason or another. The data is not necessarily reality, so using reanalysis data to confirm GCM performance has to be done with caution, especially if one is looking for small effects.

  39. Willis Eschenbach
    Posted Dec 2, 2006 at 4:44 PM | Permalink

    gb, you list what you think are “validations” of the climate models above. As an aside, your snide comment,

    Can’t you carry out a literature search yourself?

    is unpleasant, un-needed, and untrue … but I digress.

    According to the titles your list of papers contains “comparisons”, “evaluations”, and “parameterization improvements”, but no code and process validation exercises. The only paper of the bunch that I have read is the Schmidt et al. paper on the GISS model. Read Chris H’s listing of the process for validating ordinary commercial software above, and then read Schmidt’s paper. You will see that as a validation exercise, the process of Schmidt et al. is totally inadequate. It does a good job of listing the errors of the model, but makes no attempt to explain why the errors exist. A related paper by Hansen discusses the Schmidt et al. paper you reference, saying:

    ModelE [2006] compares the atmospheric model climatology with observations. Model shortcomings include:

    • ~25% regional deficiency of summer stratus cloud cover off the west coast of the continents with resulting excessive absorption of solar radiation by as much as 50 W/m2,

    • deficiency in absorbed solar radiation and net radiation over other tropical regions by typically 20 W/m2,

    • sea level pressure too high by 4-8 hPa in the winter in the Arctic and 2-4 hPa too low in all seasons in the tropics,

    • ~20% deficiency of rainfall over the Amazon basin,

    • ~25% deficiency in summer cloud cover in the western United States and central Asia with a corresponding ~5°C excessive summer warmth in these regions.

    • In addition to the inaccuracies in the simulated climatology, another shortcoming of the atmospheric model for climate change studies is the absence of a gravity wave representation, as noted above, which may affect the nature of interactions between the troposphere and stratosphere.

    • The stratospheric variability is less than observed, as shown by analysis of the present 20-layer 4°x5° atmospheric model by J. Perlwitz [personal communication]. In a 50-year control run Perlwitz finds that the interannual variability of seasonal mean temperature in the stratosphere maximizes in the region of the subpolar jet streams at realistic values, but the model produces only six sudden stratospheric warmings (SSWs) in 50 years, compared with about one every two years in the real world.

    Then remember that people are making TRILLION DOLLAR decisions based on GCMs that are attempting to forecast the effects of a ~4 W/m2 change over a century from CO2 doubling, and that one of the best of these GCMs has an error of 20 W/m2 over the tropical regions, and you should take the processes listed by Chris and multiply them by a factor of 10 or so. The level of validation effort required is well described by Steve M. above:

    As I said, my concept of evaluation would be the equivalent of a “red team” or “tiger team” like one would do for aerospace engineering. It would cost in the millions of dollars to do with properly funded engineers, but it would be money well spent – regardless of where you stood on the political spectrum.

    I agree completely. We have embarked on the Kyoto Protocol, which may turn out to be the most costly human project ever, based on climate models which have huge acknowledged errors, and which not only do not agree with the data, but do not agree with each other.

    Given the same data, the various GCMs often give answers which vary by a factor of 5 or more. One model predicts a sea level rise of a foot, and another says 5 feet, with the rest falling in between. The response to this situation has been ludicrous: average the models to get the “most likely” answer, and treat the range of the models as the “uncertainty”. This is scientific nonsense.

    This response is a sad joke. The proper response would be to lock all the modelers in a room and say “figure out where your models are going astray, and don’t bother me again until you know the answer, and your model results actually resemble the data”. At least that would buy us fifty years of silence …

    w.

    PS – David Smith, the Schmidt study is available here.

  40. Reid
    Posted Dec 2, 2006 at 4:59 PM | Permalink

    If Lorenz (1963) is correct then the entire GCM enterprise is a waste of time. So far nobody has proven Lorenz wrong.

    Faster supercomputers with a larger stack of differential equations won’t solve the fundamental problem with GCM’s. They all diverge from the real world as time progresses in the model. If a model is correct, it is only by coincidence and only for a short time period.

  41. Dan Hughes
    Posted Dec 2, 2006 at 5:21 PM | Permalink

    I posted a short discussion of some software Verification and Validation issues on another thread. Here are some additional thoughts.

    I have a few questions for anyone who has answers. I consider these issues to be essentially show-stoppers as far as the use of the results of any of the AOLBCGCM codes, and all supporting codes used in any aspect of climate-change analyses, is concerned, whether for (1) archival peer-reviewed publications, (2) providing true insight into the phenomena and processes that are modeled, or, most importantly, (3) decision-making relative to public policies. Any and all professional software developers would absolutely require that all of the issues mentioned below be sufficiently addressed and documented before using any software for applications in the analysis areas for which it was designed.

    In no particular order, as each of the following is very important, can anyone provide documented information about:

    (1). Audited Software Quality Assurance (SQA) Plans for any of the computer software that is used in all aspects of climate-change analyses.

    (2) Documentation of Maintenance under audited and approved SQA procedures of the ‘frozen’ versions that are used for production-level applications.

    (3) Documentation of the Qualifications of the users of the software to apply the software to the analyses that they perform.

    (4) Documentation of independent Verification that the source coding is correct relative to the code-specification documents.

    (5) Documentation of independent Verification that the equations in the code are solved correctly and the order of convergence of the solutions of the discrete equations to the continuous equations has been determined.

    (6) Sufficient information from which the software and its applications and results can be independently replicated by personnel not associated with the software.

    (7) It is my impression that the use of ensemble averages of several computer calculations that are based on deterministic models and equations is unique to the climate-change community in all of science and engineering. I can be easily corrected on this point if anyone can provide a reference that shows that the procedure is used in any other applications. (The use of Monte Carlo methods to solve the model equations is not the same thing.) The use of ensemble averaging and the resulting graphs of the results makes it very difficult to gain an understanding of the calculated results; rough long-term trends are about all that can be discerned from the plots.

    (8) Documentation that shows that the codes always calculate physically realistic numbers. For example, the time-rate-of-change of temperature, say, is always consistent with the energy equations and is not the results of numerical instabilities or other numerical solution methods problems.

    (9) Documentation in which the mathematical properties (characteristics, proper boundary condition specifications, well- (or ill-) posedness, etc.) of all the continuous equations used in a code have been determined. Do attractors exist, for example.

    (10) Documentation in which it has been shown analytically that the system of continuous equations used in any AOLBCGCM model has the chaotic properties that seem to be invoked by association and not by complete analysis. Strange-looking output from computer codes does not prove that the system of continuous equations possess chaotic characteristics. Output from computer codes might very likely be results of modeling problems, mistakes, solution errors, and/or numerical instabilities.

    Invoking/appealing-to an analogy to the Lorenz continuous equations is not appropriate for any other model systems. The Lorenz model equations are a severely truncated approximation of an already overly simplified model. The wide range of physical time constants and potential phase errors in the numerical solutions almost guarantees that aperiodic behavior will be calculated.

    Especially true considering the next item.

    (11) Documentation in which it has been determined that the discrete equations and numerical solution method are consistent and stable and thus the convergence of the solution of the discrete equations to the continuous equations is assured. Actually I understand that the large AOLBCGCM codes are known to be unable to demonstrate independence of the discrete approximations used in the numerical solution methods. The calculated results are in fact known to be functions of the spatial and temporal representations used in the numerical solutions. This characteristic proves that convergence cannot be demonstrated. Consistency and stability remain open questions.

    (12) Documentation in which it is shown that the models/codes/calculations have been Validated for applications to the analyses for which it has been designed.

    All software, each and every piece, that is used for analyses whose results might influence decisions affecting the health and safety of the public will have addressed all these issues in detail.

    If my understanding of the status of these critical issues is correct I can only conclude:

    (1) The software used in the climate-change community does not meet the most fundamental requirements of software used in almost all other areas of science and engineering. Almost none of the basic elements of accepted software design and applications for production-level software are applied to climate-change software.

    (2) The calculated results cannot be accepted as correct and valid additions to the peer-reviewed literature of technical journals.

    (3) The software should never be used in attempts to predict the effects of changes in public policy (fuel sources for energy production, say) on the climate; neither short- or long-range.

    (4) The calculated results are highly likely not correct relative to physical reality.

    I will say that item (11) is in fact a totally unacceptable characteristic for any engineering and scientific software. The results from any code that has this property would be rejected for publication by many professional engineering organizations. I can be easily corrected if anyone can point me to calculated results from any other area of science and engineering in which the fact that the numerical methods are known not to be converged is accepted as being, well, acceptable practice. Buildings, airplanes, bridges, elevators, flight-control systems: nothing, in fact, is designed under this approach.

    Actually all professional software development projects require far more than the information that I discuss above. Any textbook on software development can be consulted for a more complete listing and detailed discussions. Almost all the large complex AOLBCGCM codes have evolved over decades from software that was significantly simpler than the present versions. These codes have not been designed and built ‘from scratch’ on a ‘clean piece of paper’. Newly built software, designed and constructed under SQA plans and associated procedures, requires very significantly more documentation and independent review, Verification, and Validation than that mentioned above.

    Several have mentioned that source listings for some of the AOLBCGCM codes are available on the Web. This is very true. And it is equally true that in theory the source coding could be used to Verify the coding. However, in order for this to be a useful exercise we need some kind of specification of what was intended to be coded into the code. In the absence of this information we cannot develop objective metrics for judging that the coding is correct.

    The level of detail needed for Verification of the coding is generally many times greater than that typically available for many software products. Because the objective is Verification of the coding, a specification of all the equations in the code is needed. For legacy software that has evolved over decades of time, this information is usually contained in a theory and numerical methods manual in which the continuous equations, the discrete approximations to these, and the numerical solution methods used to solve the discrete equations are described in detail. A computer code manual in which the structure of the code is described in sufficient detail that independent outside interests can understand the source code would also be helpful in any attempts to Verify the coding. I have not been successful in finding such manuals for AOLBCGCM codes.

    As someone mentioned, as taxpayer-funded software, this documentation should in fact be readily available. It is not.

    The level of documentation detail required for an independent V&V and SQA effort is enormous. Chris H above has mentioned the documentation required for development and Verification of the coding. In order to get an even better handle on the exact nature of the models, methods and codes and their applications, other documentation should be available. This documentation includes:

    Volume 1: Model Theory and Solution Methods Manual. A theory manual in which the derivations of each and every equation, continuous and discrete, are given in sufficient detail that the final equations for the models can be obtained by independent interests.

    Volume 2: Computer Code Manual. A computer code manual in which the code is described in sufficient detail that independent outside interests can understand the source code.

    Volume 3: User’s Manual. A user’s manual that describes how to develop the input for the code, perform the calculations and understand the results from the calculations.

    Volume 4: Verification and Validation Manual. A manual or reports in which the verification and validation of the basic functions of the software and example applications are given.

    Volume 5: Qualification Manual. Additional manuals or reports in which the models and methods, software and user are demonstrated to be qualified for application to analyses of the intended application areas.

    Other reports and papers can be used to supplement the above documentation. These then become a part of the official record of the software relative to independent V&V and SQA efforts.

    The coding cannot be Verified given the level of documentation that I have found so far. This was only a first attempt, and by someone not familiar with the codes. However I think it does give a first glimpse into the lack of sufficient documentation for verifying the coding.

  42. Monckton of Brenchley
    Posted Dec 2, 2006 at 5:35 PM | Permalink

    Warmest congratulations to Climate Audit on a balanced and vital service to the truth – M of B

  43. Reid
    Posted Dec 2, 2006 at 6:25 PM | Permalink

    Thank you Monckton of Brenchley for informing the public that there is no consensus. The so-called consensus is, in political parlance, a “manufactured consensus”.

  44. KevinUK
    Posted Dec 2, 2006 at 6:50 PM | Permalink

    #34 gb

    Here’s an example of how you could validate a computer code.

    Scenario: You need to confirm what the level of natural circulation will be in a gas-cooled nuclear reactor in the highly unlikely event that all post-trip gas circulation and decay heat boiler coolant circulation fails. You’ve simulated this scenario and your complex computer model predicts that there is sufficient safety margin. However, your nuclear regulatory body has been under pressure from anti-nuclear lobbyists whose experts are claiming that in this scenario there will be runaway temperature increases which will lead to severe damage to the reactor core, resulting in a catastrophic release of radioactivity to the environment.

    So what do you do? Do you compare the predictions of your model with those of someone else’s model and, on the basis that they agree within an acceptable margin, claim that your model has been validated?

    Well actually no, you don’t, as inter-model comparisons are more or less useless and most certainly do not show that a computer model is valid. Instead what you do is conduct a real experiment on a real nuclear reactor.

    Now the problem is that you obviously can’t perform the actual experiment, as the nuclear regulator won’t let you. So what do you do? Well, you persuade the regulator and the nuclear safety panel that you could conduct a similar, equally real experiment on the real reactor, in which you first cool the reactor to a uniform low temperature prior to switching off all the gas circulators and all the decay heat boilers. Now obviously the regulator wants to see a simulation of this experiment before you will be allowed to plan and conduct such a test, so you simulate the experiment and predict that, provided the reactor is pre-cooled to a low enough temperature, temperatures during the test will rise at an acceptably slow rate and will not approach safety limits by quite some margin at any time within the test. After a number of presentations to the regulator and the safety panel you are given approval to conduct the test.

    You conduct the test and gather as much data as possible on the variation in temperatures and pressures from measurements taken by hundreds of thermocouples within the reactor. Finally you compare these measured temperatures and pressures with the predictions of your computer model. The predictions of your computer model show excellent agreement with the recorded temperatures and pressures, their rates of change and so on, and so you are in a position to claim that the level of natural circulation is sufficient to cool the reactor (remove decay heat) in the event that all circulators fail to operate and there is no post-trip cooling to the decay heat boilers.

    Now, unlike the so-called validation carried out for the GCMs, this is REAL VALIDATION based on conducting a REAL EXPERIMENT on a REAL NUCLEAR REACTOR. This real experiment was conceived, planned and conducted by myself at a nuclear power station in the UK and has been used subsequently to demonstrate the safety margins available in regard to post-trip cooling failure for all the UK’s Advanced Gas-Cooled Reactors.

    KevinUK

  45. MarkR
    Posted Dec 2, 2006 at 11:04 PM | Permalink

    #42 Well done Monckton of Brenchley for the first widely published, properly rounded critique of current climate theories.
    I hope you have more to come.

  46. Steve Bloom
    Posted Dec 3, 2006 at 4:02 AM | Permalink

    Re #41: “The level of documentation detail required for an independent V&V and SQA effort is enormous.” Just so. The mind boggles at the implied employment opportunities for software engineers and such.

    Re #42: I’m sorry to have to inform you that there are some peasants waving pitchforks around over at Wikipedia. Do keep us up to date on how that works out.

    Re #45: MarkR, in future you can just refer to him as “Your Worship” (don’t forget the caps!) and everyone will know who you’re talking about. 🙂 Also, you say widely, but has it been published anywhere else besides the Torygraph?

  47. gb
    Posted Dec 3, 2006 at 4:38 AM | Permalink

    Re 39. My apologies; I didn’t want to offend you. I just wanted to point out that when you carry out a literature search you will find articles on code validation and evaluations of parameterizations. Researchers do investigate whether the outcome of a model is physically realistic. A literature search will take time; it is not always obvious from the title that such a study has been carried out.

    Re 37. I wouldn’t object to that but I would like to point out that funding for the development of better parameterizations or new (satellite) observations is perhaps equally well justified.

    Re 40. The work of Lorenz is correct, but GCM’s are not meant to predict the temperature in London on June 25, 2008, say. In that case you know that the uncertainty of the projection is high. However, if you consider the mean annual temperature of 2008 in the UK, the uncertainty of the projection is much lower, and it becomes lower still if you consider a larger area over a longer period of time (I leave it to other people to make an estimate of the uncertainty). That is what GCM’s are aiming at.

    Re 41. With a nonlinear model (GCM’s) it is not possible to prove convergence (Lorenz, you know). However, if you consider averages (see above) it is possible to investigate if grid refinement leads to the same solution. The same problem I have. I carry out simulations of turbulent flows. Quite nonlinear, so the solution doesn’t converge if the grid is refined, only when I linearise the problem. There is also no analytical solution. But I can check if the mean statistics converge, for example.

    Many people on this site criticise numerical modelling and in particular GCM’s. I think that is not always justified. Numerical modelling plays an important role in modern science. With GCM’s we can obtain insight into climate dynamics which is not possible in other ways. What climate modellers are doing is quite nice work. But there are uncertainties of course. Some physical processes are now well understood, others less so.

  48. KevinUK
    Posted Dec 3, 2006 at 4:54 AM | Permalink

    #46 bloomie

    “Re #42: I’m sorry to have to inform you that there are some peasants waving pitchforks around over at Wikipedia. Do keep us up to date on how that works out.”

    Looks like Jucksies’ squash-playing mate Billy the Wiki has decided to set up an article on Wikipedia to discuss Christopher Monckton’s paper. Good. Let’s hope that he isn’t his usual censoring self. Steve and John A, it looks like you’ll have to get that hosting plan upgraded PDQ, as you’ll be getting even more visitors to this blog over the coming weeks.

    KevinUK

  49. MarkR
    Posted Dec 3, 2006 at 5:40 AM | Permalink

    #46 Hi SteveB

    I think you ought to spend some time with a dictionary.
    Published:
    To prepare and issue (printed material) for public distribution or sale.
    To bring to the public attention; announce. See Synonyms at announce.

    Daily Telegraph circulation 904,660

    http://www.telegraph.co.uk/pressoffice/main.jhtml?xml=/pressoffice/research/rescirc.xml

    Any thoughts about which of the poor, sick, and hungry you wish to take the $28 billion a year from in the UK, as proposed by the Stern Report?

    Talking about worship, have you genuflected to your Hockey Stick lately? I really think you shouldn’t.

  50. Dave Dardinger
    Posted Dec 3, 2006 at 7:50 AM | Permalink

    re: #49

    Talking about worship, have you genuflected to your Hockey Stick lately

    But the HS is the warmer version of “In Hoc Signo Vinces.” I think it came to Michael Mann one night in a dream.

  51. Dan Hughes
    Posted Dec 3, 2006 at 7:51 AM | Permalink

    #47 gb says: “Re 41. With a nonlinear model (GCM’s) it is not possible to prove convergence (Lorenz, you know). However, if you consider averages (see above) it is possible to investigate if grid refinement leads to the same solution. The same problem I have. I carry out simulations of turbulent flows. Quite nonlinear, so the solution doesn’t converge if the grid is refined, only when I linearise the problem. There is also no analytical solution. But I can check if the mean statistics converge, for example.”

    Show me a single textbook in which it is stated that numerical solutions of systems of nonlinear equations cannot be shown to converge to the solution of the continuous equations. There is an entire literature devoted to numerical solutions of nonlinear equations. Almost all mathematical models of real-world physical phenomena and processes involve nonlinear equations. Linear equations are generally the exception; nonlinear equations are the rule.

    Lorenz did not ever say that it is impossible to show convergence of numerical solutions to nonlinear equations. As the Lorenz-type model equations are almost always systems of ODEs, the numerical solutions you see displayed in publications will in fact be the converged solutions. ODE solvers always work that way. Again, point me to any publication in which the Lorenz models are said to demonstrate that converged solutions of nonlinear equations are impossible to obtain.

    The AOLGCM models/codes do not solve equations that correspond to accurate spatial and temporal resolution of the basic Navier-Stokes equations such as are used in DNS of turbulent flows. On the contrary, many of the parameterizations that are used in AOLGCM models are there because the calculations cannot be performed at the spatial resolutions at which very important physical processes occur. The codes solve the fluid motion equations for the mean flow. They have never attempted to solve equations that represent the turbulent flows. All turbulent flow phenomena and processes are represented by algebraic models. Again, point me to publications in which it is stated that the mean flow fields of turbulent flows cannot be converged.

    The analogy of the Lorenz models to turbulent fluid flows is not correct. Additionally, and more importantly, I think it is true that the Lorenz model equations have never been shown to predict any experimental fluid flow data. If there are publications that show otherwise, kindly let me know. In the absence of any predictions of measured fluid flow data, how can the Lorenz models be invoked as analogies for fluid flows; laminar or turbulent, chaotic or non-chaotic.

  52. Dave Dardinger
    Posted Dec 3, 2006 at 8:18 AM | Permalink

    re: #51

    Again, point me to publications in which it is stated that the mean flow fields of turbulent flows cannot be converged.

    I don’t think that’s the problem with GCMs. How the climate changes over time isn’t just a matter of the mean turbulent flow, but how, precisely, the turbulence interacts with larger patterns, say a “local” hurricane ripping up a mangrove thicket, which changes the local albedo, which in turn changes the local climate for a decade or so. In turn this local change affects how winds are deflected by a smallish mountain range 1000 km away, which in turn…. This is the essence of the “butterfly effect”. When you have a complex non-linear system, or more precisely a coupled set of non-linear systems, results are sensitive to small differences in starting conditions, however “starting conditions” are defined in a particular situation.

  53. gb
    Posted Dec 4, 2006 at 11:08 AM | Permalink

    Re # 51. The point you are missing is that not all scales are modelled in GCM’s (and weather forecast models), and PDE’s are solved, not ODE’s. The larger (chaotic) scales are resolved on the numerical grid. Therefore, weather forecast models have a limited predictability. Weather forecast models, GCM’s and DNS codes don’t converge to a steady-state solution (in contrast to commercial CFD codes, where indeed all scales are modelled). All these models diverge after a finite time if a small initial perturbation is imposed.

  54. Steve Bloom
    Posted Dec 4, 2006 at 6:08 PM | Permalink

    Re #49: It was the “widely” claim I was concerned with. Was it published elsewhere? If not, “narrowly” might be a better term to use. In any case, I think Monckton’s family connections have sufficient publication pull only at the Torygraph. Any other media outlet would have asked some tough questions about Monckton’s credentials before even looking at that tissue of fantasies.

    Regarding the hockey stick as an object of veneration, I would point out that I am *not* Canadian.

  55. Steve Bloom
    Posted Dec 4, 2006 at 6:26 PM | Permalink

    Re #53: But gb, we come to this discussion *knowing* the models are wrong. It’s just a matter of finding out why. At the very least we can endlessly discuss why we think that’s the case.

    On another thread some weeks back (probably not now accessible due to the hosting problems), Isaac Held made an appearance here and stated his willingness to explain some things about models. I mentioned to the locals that they would drive Isaac away if they badgered him to justify climate science and modeling from first principles (which they couldn’t resist because they just *know* those first principles are wrong), but sure enough they couldn’t restrain themselves and Isaac soon departed.

    BTW, there’s getting to be a pretty long list of scientists who’ve had similar experiences here. Note that Martin Juckes is being put through that mill just now, although he seems a little more willing than most to dish back.

  56. Gerald Machnee
    Posted Dec 4, 2006 at 10:21 PM | Permalink

    Re #55. **BTW, there’s getting to be a pretty long list of scientists who’ve had similar experiences here.**
    Really? We do not have a list of them. They would not have a problem here if they engage in genuine scientific analysis and commentary.
    **Note that Martin Juckes is being put through that mill just now, although he seems a little more willing than most to dish back.**
    How many of his responses have been to the point? Has he actually done a “new” study or rehashed old work? His “list” of faults is growing too.

  57. John Reid
    Posted Dec 4, 2006 at 11:10 PM | Permalink

    I can no longer access Monbiot vs Monckton round 2.

    Has it had too many posts and used up all the available memory again?

    JR

  58. c newin
    Posted Dec 4, 2006 at 11:11 PM | Permalink

    Re #55. It seems to me that Dr. Juckes is putting himself “through the mill”.

  59. John Reid
    Posted Dec 4, 2006 at 11:24 PM | Permalink

    No, sorry.

    It is Gore Gored Monckton Replies that I can’t get into

    JR

  60. James Lane
    Posted Dec 5, 2006 at 2:13 AM | Permalink

    Lee:

    Note that Martin Juckes is being put through that mill just now, although he seems a little more willing than most to dish back.

    I think Martin’s performance on this blog has been pitiful. Let’s see if the debate on CoP is any better.

  61. Rev Jackson
    Posted Dec 5, 2006 at 4:02 AM | Permalink

    Re #54, 55: The longer I hang around here, Mr Bloom, the more evident it becomes that “by their fruits ye shall know them”.

    That applies to Dr Juckes (I fear a sad disappointment to Concerned of Berkeley), yourself, Lee and some of the other AGW advocates that turn up at this site.

  62. Posted Dec 5, 2006 at 8:44 AM | Permalink

    A somewhat different (but related) question here:

    Next Wednesday there is a discussion here (Flanders, Belgium) about Al Gore’s film “An Inconvenient Truth”. I haven’t seen the film, only a few parts of it. Is there a critique online that gives a short overview of what is said in the film (and what is true or exaggerated)? The US version on DVD has just been released, but the European versions will follow later, too late for the discussion, I suppose.