Earth's climate crashes in 2013

From our friends at climateprediction.net, climate disaster has struck

I regret to announce that we’ve recently discovered a major error in one of the files used by the climate model. The file in question specifies levels of man-made sulphate emissions but due to a problem with the file specification, models have been inputting greatly reduced levels throughout their runtime. The consequence of this is that aerosols responsible for "global dimming" (cooling) are not present in sufficient amounts and models have tended to warm up too quickly. The file specification error is also responsible for causing models to crash in 2013 which is how we originally came across the problem.

Unfortunately, all the data returned to us so far has been affected by this problem. While the data is scientifically very useful, and will certainly form the basis of future research (it allows us to investigate the full effect of greenhouse gas emissions without global dimming), it doesn’t enable us to compare the models’ performance against real world observations of the 20th century since such an important component is missing. In order to do the experiment we intended, we unfortunately have no choice but to start models again from the beginning.

Now I know what you’re thinking – how can they possibly have not put some limit in the software before the Earth heats up too much? But these are the same people who think that the Earth heating up by 11oC in 40 years is a reasonable result. I’d love to know what’s "scientifically useful" about bad data from a known unphysical model.

So all of that time and energy that people have allowed for these models to run has been utterly wasted. We could have told them that.

In the words of The Inquirer

With around 200,000 PCs running the experiment non-stop for two months, it looks very much as if the BBC experiment is making more of a contribution to global warming than scientific knowledge.

263 Comments

  1. ET SidViscous
    Posted Apr 18, 2006 at 9:12 AM | Permalink

    “I’d love to know what’s “scientifically useful” about bad data from a known unphysical model.”

    Helps to quantify the level of dimming of the Sulfate particulates.

    Not that I believe in that, a bit of rubbish with all sorts of numbers pulled from nether regions with a rubber glove and a flashlight if you ask me, but from their perspective.

    On to Spinola’s (Thomas’) comment at the end. Bout a year and half ago someone put up an article on /. about a huge solar/greenhouse power plant they were proposing in Australia, forgot the actual area but then I did the numbers and found a comparison geographically. It was going to be the size of Sarasota Florida, with a roof over it.

    Anyways the plan was to have all this greenhouse, with a chimney in the middle (that was proposed to be the largest structure ever built by man. My point then was.

    Yeah instead of heating the atmosphere indirectly, let’s heat it directly, that should really help the situation.

    Other humorous things at the time where the people who wondered why it was going to cost as much as $50 Million to put a roof over something the size of Sarasota FL, and create the largest tower ever constructed by man. I mean really, that’s not htat big of a civil engineering project [/sarcasm off]

  2. Peter Hearnden
    Posted Apr 18, 2006 at 9:14 AM | Permalink

    Talk about gloating….

  3. John Lish
    Posted Apr 18, 2006 at 9:18 AM | Permalink

    #2 weren’t you running a model Peter, what happened to yours?

  4. Peter Hearnden
    Posted Apr 18, 2006 at 9:37 AM | Permalink

    Re #3, I did 1% and suspended it, I thought the PC’s cooling fan would explode – machine is a bit underpowered…fun to watch the model run though. Also, I think it is actually interesting to watch when experiments go wrong – it’s one way you learn.

  5. TCO
    Posted Apr 18, 2006 at 9:42 AM | Permalink

    I thought the Australian tower thing was kind of cool…

  6. Posted Apr 18, 2006 at 9:43 AM | Permalink

    Lets hope that some of those school children learn the value of modeling from this fiasco. Total value of the whole experiment seems to be approximately nill. I’d be fascinated to hear the ‘climate scientists” view on this one…

  7. fFreddy
    Posted Apr 18, 2006 at 9:48 AM | Permalink

    I’m sure I remember them saying that this was the same program that they run on the super-computer for all their doom and gloom forecasts.
    It rather makes me wonder, how do they know they haven’t got errors in the input data for the main program ?

  8. Tom Brogle
    Posted Apr 18, 2006 at 10:08 AM | Permalink

    Presumably the reason that the model crashes is that it does not duplicate the global temperature of the last century or so.
    However this record is flawed because the it is contaminated by urban heat.
    In a few months I will have worked through enough data to prove this assertion.
    I bet that the only way I will get it published will be on this site.

  9. John A
    Posted Apr 18, 2006 at 10:12 AM | Permalink

    What I’d like to know is, why they didn’t run this thing through once themselves before letting others go at it?

    The BBC’s “Climate Chaos” season is off to a bad start.

  10. Peter Hearnden
    Posted Apr 18, 2006 at 10:14 AM | Permalink

    Re #8, the myths are starting… It DOESN’T crash, it just runs without fully allowing for anthro aerosols. So, it shows more warming, as the planet would have if we’d not shoved all those aerosols up there blocking out some solar radiation.

  11. John A
    Posted Apr 18, 2006 at 10:16 AM | Permalink

    Re: #10

    Peter, they’re just numbers in a climate model. They’re not reality.

  12. ET SidViscous
    Posted Apr 18, 2006 at 10:19 AM | Permalink

    “The file specification error is also responsible for causing models to crash in 2013”

    Nick Faull
    Project Coordinator
    Climateprediction.net
    University of Oxford

  13. Sean
    Posted Apr 18, 2006 at 10:19 AM | Permalink

    Peter,

    The first message linked to by the BBC states:
    The file specification error is also responsible for causing models to crash in 2013 which is how we originally came across the problem.

    Maybe it does run to completion for the majority of seeds. That is almost worse than if it fell over reliably – that way people would have known for certain that they had done nothing useful. What if it had never crashed, and the flaw not been spotted for 20 years?

  14. Posted Apr 18, 2006 at 10:20 AM | Permalink

    Re #6. Gavin said at realclimate that

    With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11°C. This shows that it is possible to construct models with rather extreme behavior — whether these are realistic is another matter.

    http://www.realclimate.org/index.php/archives/2005/01/climatepredictionnet-climate-challenges-and-climate-sensitivity/

    I made a comment here (somewhere) to the effect that the conclusions of the paper trumpeted in Nature et al by Stainforth et al of a 11C output for 2XCO2 sensitivity as a possible reality instead of a possible failing of the model illustrates the lack of critical assessment of climate (and environmental) science today.

    So I think many climate scientists would concerned with the results in climatepredictionnet. But no comments have been published by Nature on the Stainforth paper by climate scientists despite its obvious flaws. Is this because the conclusions are just fine and dandy?

    Taking a page out of Steve’s book though, the Stainforth paper should be looked at again, possibly retracted, and the frequency of this error throughout studies using the Hadley model determined post haste – by climate scientists.

  15. Jean S
    Posted Apr 18, 2006 at 10:34 AM | Permalink

    I think this the best thing that happened to the climate change discussion since Steve’s original publication. Now, it is possible although I’m not counting on it, that many people including some journalists just might ask the question: after all, how reliable are the climate model predictions?

  16. Mark
    Posted Apr 18, 2006 at 10:59 AM | Permalink

    Sooo, what was that about a “myth,” Peter?

    Jean, I really doubt people will ask these questions. The illuminati will “move on,” and choose some new method that supports their preconceived conclusions. It’s laughable to anyone that actually practices real science for a living.

    Mark

  17. Michael Jankowski
    Posted Apr 18, 2006 at 11:11 AM | Permalink

    Check out this spin in Peter’s link:

    “The experiment you have done is still a useful contribution to scientific research, as well as a graphic illustration of how much warmer the world might already be were it not for the global dimming effect.”

    How exactly was it “a useful contribution to scientific research,” other than some debugging? Are the results from those crashed and/or incorrectly input runs actually going to be put to use anywhere other than the recycle bin?

    “But to make the most accurate prediction for the 21st century, we obviously want to use the best possible inputs over the 20th century. So, after much discussion, the scientists have decided the best strategy is to re-run people’s climate models with corrected sulphate data, rather than trying to “fix” the problem after the event.”

    Are we really to believe that required “much discussion?”

  18. John A
    Posted Apr 18, 2006 at 11:31 AM | Permalink

    What if it had never crashed, and the flaw not been spotted for 20 years?

    I think we might have noticed by 2013….even Peter. Tim Lambert would have called on me to apologise to Stainforth for the mismeasurement of the Earth’s climate which made it look so completely unlike the model results.

  19. Steve McIntyre
    Posted Apr 18, 2006 at 11:42 AM | Permalink

    Can someone post up a description of what the “problem with the file specification” was. Is there a URL for the problematic file and for what it “should” be?

  20. Peter Hearnden
    Posted Apr 18, 2006 at 11:42 AM | Permalink

    Re #12, #13 and others, yup, OK it’s crashes at 2013 but it runs OK until then.

    I do think it’s interesting since it gives and idea of the cooling effect of anthro aerosols.

  21. ET SidViscous
    Posted Apr 18, 2006 at 11:44 AM | Permalink

    “OK it’s crashes at 2013 but it runs OK until then”

    MMmmmmmm that’s an interesting concept.

    “My car is fine, it ran until the engine siezed”

    All programs run until they crash.

  22. fFreddy
    Posted Apr 18, 2006 at 11:46 AM | Permalink

    Re #20, Peter

    I do think it’s interesting since it gives and idea of the cooling effect of anthro aerosols.

    No, Peter, it gives an idea of these people’s assumptions about the cooling effect of anthro aerosols.
    Not the same thing.

  23. Peter Hearnden
    Posted Apr 18, 2006 at 11:49 AM | Permalink

    Sid, so you’re saying that because cars fail in the end they’re of no use? Billions would disagree.

  24. Michael Jankowski
    Posted Apr 18, 2006 at 11:56 AM | Permalink

    Sid, so you’re saying that because cars fail in the end they’re of no use? Billions would disagree.

    No, he’s saying that just because a car runs for 8 yrs before it breaks down doesn’t mean it didn’t break down.

    The program “crashed” according to the project coordinator. That’s no “myth.”

    And certainly cars, even those that fail in the end, are of use. But they don’t fail until they produce meaningful results – e.g., they take you to-and-from a number of destinations. Now if a car fails before ever leaving the dealer’s lot, that car is certainly of no use. Or if you think you bought a car and have been driving it around for 6 months only to realize you’ve only been driving it within some virtual reality world and not really going anywhere…well, I’d say you wasted a lot of time.

  25. Spence_UK
    Posted Apr 18, 2006 at 12:01 PM | Permalink

    Isn’t it fortuitous that us humans happen to be spewing out cooling sulphate aerosols (which would plunge the world into an instant ice age) and warming carbon dioxide (which would turn the world into a fiery hell hole) in just the right proportions to leave a sort of random walk of global mean temperature.

    Of course, any suggestion that climate models have turned into a farcical balance between these two parameters must, by definition, be an oil industry shill.

    In the true spirit of global warming, when something unexpected happens it must be (somehow) the fault of unspecified anthropogenic effects, cf. Briffa’s divergence problem. I personally believe that man-made global warming caused errors on the interweb, files became corrupt, resulting in this error. If only we stopped setting fire to fossils, these models would run perfectly every time.

    I apologise for the excessive use of irony in this post. It’s the only way I can follow these debates and retain what little sanity I have left.

    PS. I love the thought “it runs OK until then”, as if “not crashing means it must be fine”. I wish that was the only requirement on the software I produce.

  26. Mark
    Posted Apr 18, 2006 at 12:14 PM | Permalink

    Me too… “gee, the radar tracked the missle great up until a microsecond before it hit us…”

    Mark

  27. John A
    Posted Apr 18, 2006 at 12:15 PM | Permalink

    Re: #25

    Spence,

    Isn’t that the point? Every twist and turn of the climate is due to anthropogenic effects, without which the climate would be stable…erm except when it isn’t.

    Every raindrop that falls is the result of some baleful manmade influence. Every hurricane season we get the same thing. If a glacier retreats its caused by man, if it surges, ditto.

    The only reason that this continues is because of a fascination with computer modelling and an unjustified and irrational authority given to people with PhDs (I think Freeman Dyson has said something similar to this).

    I’d like to be able to have a PhD, but I’d only be expert in the contents of my thesis – and I could still foul up on everything subsequently and still be a PhD.

  28. Michael Jankowski
    Posted Apr 18, 2006 at 12:35 PM | Permalink

    Isn’t it fortuitous that us humans happen to be spewing out cooling sulphate aerosols

    Very fortuitous! It seems that if we weren’t spewing out so much, our climate would crash in 2013.

  29. Dave Dardinger
    Posted Apr 18, 2006 at 12:38 PM | Permalink

    re: #19 So Steve, are you wondering if it was something like imputting degrees instead of radians? I suspect it has to be some sort of decimal slipping mistake

  30. ET SidViscous
    Posted Apr 18, 2006 at 12:51 PM | Permalink

    After the engine seizes it is next to useless, billions would agree.

    But it was a comment, not a direct analogy, t depends on the design life. Cars are designed to last at least 10 years, if the engine seizes in ten months, then yes I would see it as next to useless (though it had use until then, the cost per mile becomes exorbitant). But a closer analogy would be the engine seizing in the showroom, which yes would be useless.

    Whether the model is useful depends on when it was designed to model to. if it was designed to model until 2015, then some value could be seen.

    This of course all falls apart because the crash lead them to a different problem, which was that it was using poor data, and giving bad results, worse than a crash. So to continue the car analogy, if it appeared to work, but didn’t, I think billions would agree that that it is of no use.

    Use a good real world example. Mercedes recently developed a feature into it’s adaptive cruise control. If there is a stopped vehicle in front of you, it would actually stop the car. So a German news outlet came over to film a demonstration. They put cameras in the car, as well as outside. Car took off on a foggy day with a stopped car in front of it. Result? The Mercedes slammed into the other car full speed without so much as a how-do-you-do. Now during regular day to day use you could say the system is working fine, and most people would never know it doesn’t work. But the failure mode is pretty bad, therefore billions would agree it’s useless.

    So since that was what it was supposed to do, and it didn’t do it, with spectacular failure. That portion is yes absolutely useless, billions would agree.

    Since the program was supposed to model past 2013, and it crashed, and before that is was giving bad results. It’s the equivalent of car that does not run coming out of the factory.

  31. Steve Latham
    Posted Apr 18, 2006 at 12:51 PM | Permalink

    John A, I don’t know who you’re citing here regarding 11C within 40 years:

    “Now I know what you’re thinking – how can they possibly have not put some limit in the software before the Earth heats up too much? But these are the same people who think that the Earth heating up by 11oC in 40 years is a reasonable result.”

    I have another question, too: If they put in limits of any kind (Earth ain’t gonna warm slower than x or faster than y), wouldn’t you be howling that the model simply has too much circularity? Isn’t that a common charge here, that inputs are chosen to get the desired result? I’m surprised that you’re arguing against a method that uses ‘primarily’ (I don’t know how the model works) a first principles approach (if that’s indeed the approach used).

    Finally, no comment commending the researchers for being open about their failure? Didn’t you expect that the researchers would fudge their results no matter what they were to support the conspiracy?

  32. Steve McIntyre
    Posted Apr 18, 2006 at 1:04 PM | Permalink

    #29. I’m not trying to guess – it could be anything. I’d just like to know what the error was.

  33. JEM
    Posted Apr 18, 2006 at 1:33 PM | Permalink

    But these are the same people who think that the Earth heating up by 11oC in 40 years is a reasonable result.

    The really interesting question is, suppose the model that produced this result had instead predicted 11 C cooling in 40 years — would the same people have thought that was reasonable?

  34. fFreddy
    Posted Apr 18, 2006 at 1:40 PM | Permalink

    Re #31, Steve Latham

    Finally, no comment commending the researchers for being open about their failure? Didn’t you expect that the researchers would fudge their results no matter what they were to support the conspiracy?

    How do you fudge peoples’ computers crashing ?

  35. Posted Apr 18, 2006 at 1:40 PM | Permalink

    Re #32:

    Steve, the error is more or less described as the last item in the climateprediction.net message board:

    Yes, it was a wrong date in a header file, causing a 70 year offset between parts of the experiment. Or something like that. Which is what triggered crash after crash at “2013”.

    This was about man-made sulfate emissions. As far as I could follow the comments, this happened in two recently started experiments, not in older versions. New versions of the experiment are distributed now.

    That doesn’t mean that the influence of sulfate aerosols (even with the correct data) are as high as implemented in most climate models. See my comment at RealClimate, where there was no reaction by modellers (until now).

  36. Greg F
    Posted Apr 18, 2006 at 2:06 PM | Permalink

    Re:19

    Can someone post up a description of what the “problem with the file specification” was.

    I found this on the message board Steve.

    Les Bayliss – Site Admin
    The mistake was apparently an incorrect date in a header file. Rather obscure taken on its own, and only coming to light in the model year 2013, when the models of the ‘fast crunchers’ started crashing at the same ‘model time’.

    Re:20

    OK it’s crashes at 2013 but it runs OK until then.

    No Peter it wasn’t running okay till then.

    Les Bayliss – Site Admin
    The reason for this is that the scientists at Oxford have discovered that one of the input files to the model hasn’t been increasing the amount of sulphate pollution in the atmosphere (sometimes called the “global dimming” effect) as it should have done. So what you are seeing is the full impact of greenhouse warming not masked as it was in the real world by sulphate pollution.

    The date problem was what allowed them to track down the lack of sulphates. I suspect they knew something was wrong as model results started coming in. They had to realize the model was not tracking historical temperatures very well.

    Re:31

    Finally, no comment commending the researchers for being open about their failure?

    Every model would have crashed in 2013, how do you hide something like that?

    I’m surprised that you’re arguing against a method that uses “primarily’ (I don’t know how the model works) a first principles approach (if that’s indeed the approach used).

    They don’t use “first principles”, they use parametrisations ,the resolution is way to poor.

  37. jae
    Posted Apr 18, 2006 at 2:09 PM | Permalink

    John A

    I’d like to be able to have a PhD, but I’d only be expert in the contents of my thesis – and I could still foul up on everything subsequently and still be a PhD.

    Good statement. There are far too many PhDs that forget that they are really an expert in only one tiny part of the universe.

  38. Greg F
    Posted Apr 18, 2006 at 2:21 PM | Permalink

    Good statement. There are far too many PhDs that forget that they are really an expert in only one tiny part of the universe.

    As a friend of mine said after getting his PhD in chemistry. ‘I now know what PhD stands for, piled higher and deeper’.

  39. Dave Dardinger
    Posted Apr 18, 2006 at 3:09 PM | Permalink

    Greg,

    It’s not quite as bad as you indicate. Or at least it wasn’t back when I was going to grad school. In the biochemistry Phd program at Purdue you had to take various courses, then pass a comprehensive exam which allowed you to go on. Then you had to do a couple of proposals for research projects not in your own little niche. Finally you’d finish your own research and defend your thesis. The comps and the proposals would show you could work in other areas of biochemistry.

    Myself, I was playing too much bridge & chess and wimped out of taking the comps, going for the MS track instead. There is a paper published in Biochemistry, however which is still being cited with my name on it (as well as my Major professor, Larry Butler and my fellow grad student Susan Kelly)

    And the reason the paper is still being published is that this paper presented the details of the assay test for the enzyme activity we were studying. Dr. Butler had originally developed the reagents but there was a bit of a problem with stability of the acidic reagent either dried or in buffer solution so we came up with a method to produce the ammonium salt which is quite stable and can now be ordered from I think Regis Chemicals, at a quite reasonable price. I mention all this to point out that releasing sufficient data to let others replicate your work has the side effect of producing a lasting legacy in the form of citations 30 years later.

  40. Greg F
    Posted Apr 18, 2006 at 3:13 PM | Permalink

    Greg,

    It’s not quite as bad as you indicate.

    Psst … Dave … it was a joke. He did say it but he was only half serious.

  41. Posted Apr 18, 2006 at 3:13 PM | Permalink

    I am not surprised that a climate model was found to have a serious bug. The results from those models have never seemed trustworthy.
    I once downloaded a publically available climate model (not the climateprediction.net one) and started to run it. It crashed after a few decades of model time, but before that I had noticed something strange: The model has no leap years. Every year is 365 days in this model. Although not a serious bug, it certainly does not make me trust the programmers behind the model.
    Re #8: I am interested in your findings, since it seems to be in line with what both I and Warwick Hughes have found. Please contact me.

  42. Pat Frank
    Posted Apr 18, 2006 at 6:07 PM | Permalink

    #14 “With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11°C. This shows that it is possible to construct models with rather extreme behavior — whether these are realistic is another matter.”

    And #20 “OK it’s crashes at 2013 but it runs OK until then. I do think it’s interesting since it gives and idea of the cooling effect of anthro aerosols.”

    And no one publishes the errors propagated through any individual climate run. We get time-wise temperature trend lines with no error margins. We get ensemble averages with no cumulated rms errors. This state of affairs is called the science of climate prediction.

  43. Steve Latham
    Posted Apr 18, 2006 at 6:53 PM | Permalink

    Thank you fFreddy (34) and Greg F (36),

    You both pointed out that it would be difficult to fudge results when folks’ computers crashed. Based on the notion that prelim results had been published and #13s suggestion that most runs didn’t crash, I assumed that select runs could be ditched. But more to the point, I don’t know how half the stuff claimed to be fudging could actually be fudged. Lack of imagination on my part? Maybe, but ask yourselves why they even chose to run the simulations across others’ computers rather than keep the models (and results) in a more controlled environment? John A asks in #9 why they didn’t run it themselves first. If they had run it first, would John A be complaining that they checked it to make sure it gave them the answers they wanted before starting the experiment?

    I haven’t visited here recently, but a common complaint is of this style: if glaciers are receding, it’s evidence of AGW; if they’re growing, it’s evidence of AGW. As critics of this, why not make sure you’re not guilty of the same thing? E.g., a climate model that screws up is evidence that models suck, and climate models that give the AGW proponents’ expected results are evidence that models suck; if modelers constrain results to expectations then they’re biased ideologues and if they don’t, well then they suck. My more important question for John A was not about the fudging of computer-crashed results but about this latter example. You indicated, Greg, that the models are parameterized (resolution too poor? for what?) — okay, are you saying it would be better to have coding that eliminated extreme results so that the output conformed more to expectations? I would disagree with that.

    I thought I’d ‘heard’ (can’t remember where) that the mechanistic physics was applied in most GCMs and that the relevant parameters yielded results consistent with observed climate data; that’s very different from saying that the parameters were derived from climate data. Are we making the same distinction, Greg, and if so can you provide a citation regarding the parameterization? And while we’re discussing citations, can someone provide a link to John’s statement that these are the same people who thought +11C in 40 years is a reasonable result?

  44. Pat Frank
    Posted Apr 18, 2006 at 7:15 PM | Permalink

    #43, Steve L, why aren’t parameter errors propagated through the models during a run and displayed as error-bars on plots of the time-wise projected temperature trend?

  45. Steve Latham
    Posted Apr 18, 2006 at 8:07 PM | Permalink

    Dear Pat Frank,
    That looks like a good question. Have you tried asking someone who might know the answer? (I wouldn’t know.) I suspect, though, that in these simulations, each run occurs without error — the parameters are assumed accurate. I would bet (a small amount of money) that the uncertainty in the parameters is instead represented across runs, such that the probability distributions of the various parameters are resampled for each individual realisation (run). Thus, what I would like to see is perhaps the median run (in terms of final outcome) and then maybe the 5th, 25th, 75th, and 95th percentiles at each time step. (Note: I am neither a climate researcher nor a computer modeler!) Would that suit your purposes? I think that might be good because it would provide a 90% CI (or would that be prediction interval?) for any given time step and also give some idea of the density of the ‘most likely’ 50% of results.

  46. Dave Dardinger
    Posted Apr 18, 2006 at 8:35 PM | Permalink

    re:#43

    I don’t know where you heard that mechanistic physics was used to run models. Well, sure some of the conservation laws, etc. are put in that way, but many of the important items are just too complicated to calculate and are parameterized. Thus the atmosphere is divided into several layers and the cloud % calculated based on certain observations and calculations, but not on the basis of running actual physical laws. I’m sure that temperature and humidity enter into it and all that but it doesn’t take the form of modeling actual clouds and up and down drafts and seeing how they evolve over time. Instead the program might get a value for temperature and humidity and then look at a table for that level and pull out the average cloud cover. This might work, or it might not work, depending on just how the real-world handles such things. But we know from things like jet contrails that the actually amount of cloud cover can be sensitive to rather small pertubations. The question is whether things average out over time or snowball, with no way to know exactly when the snowballing will start. Modelers almost have to assume the former if they want their model to mimic reality. But this is similar to the old theory of uniformitarianism. We skeptics oppose this neouniformitarianism, but as I said once before we have to put up with you antidisestablishneouniformitarianismists.

  47. Pat Frank
    Posted Apr 18, 2006 at 9:13 PM | Permalink

    #45 Thanks for your reply, Steve. I’ve looked in the published literature for error propagation through GCMs, but without luck. One does find proposed methods for estimating error covariances within GCMs, and other numerical schemes to test the reliability of GCM outputs. But these are all complicated assessments.

    But I’m looking for something simple. There are uncertainties in the known energy flows, for example. There are uncertainties in cloud forcing, in aerosol forcing, etc. These uncertainties can be propagated through the physical equations used in GCMs to produce an estimated total uncertainty in projected temperature. One never sees such an estimate; but such an estimate is critical to judging whether the outputs are worth an investment of concern.

    You’re likely right in your bet, that uncertainty is typically represented by comparing runs, or by changing initial conditions, or comparing outputs of various models. The percentiles you mention, though, are statistical uncertainties. But those who actually evaluate models speak of off-setting errors in parameters that essentially disallow tracing output errors back to errors in specific parameterizations. I’ve added an abstract below illustrating one published example.

    What would suit my purposes is a candid presentation of the physical uncertainties. I’d like to see them propagated through the models, and end up as error bars around projected future temperature trends. It should be relatively easy to do that, for a modeler. But I have yet to see a single paper with a candid physical error analysis.

    Here’s the promised example. I have it in pdf, if anyone wants it. The paper outlines the state of the art in the GCMs producing outputs that will go into the IPCC 4AR. The AR that Steve Bloom tells us will say that it’s 99% certain that human-produced CO2 is causing global warming. You decide whether that’s a specious claim, or not.
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    T. J. Phillips, et al. (2004) “Evaluating parameterizations in general circulation models: Climate simulation meets weather prediction” Bull. Amer. Meteorol. Soc. 85, 1903-1915

    Abstract: To further a method for evaluating and analyzing parameterizations in climate general circulation models (GCMs), the US Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). CAPT is not a panacea for improving climate GCM parameterizations at all time scales, but just one choice from a “tool kit” that may also include SCMs, CRMs, and simplified GCMs. Nonetheless, insights obtained from adopting this NWP-inspired methodology are expected to contribute significantly to the general improvement of GCM climate simulation.
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    Further, on page 1904, they write: “[D]ue to sampling limitations, the observed climate statistics only roughly approximate the statistics of the global climate system “¢’‚¬? to greater or lesser degree “¢’‚¬? depending on the process of interest (e.g., Kistler et al. 2001). Moreover, because the GCM climate state reflects compensating errors in the simulation of many nonlinear processes, it is very difficult to attribute these errors to particular parameterization deficiencies. In such a context also, the parameterizations are driven by an unrealistic largescale state, so that it is difficult to evaluate their performance objectively (Schubert and Chang 1996).”

    Seeing a large project in 2004 to reduce large systematic errors in GCMs, along with an acknowledgment that updating GCM calculations with observational inputs is only a “rough” renormalization, plus compensating parameter errors, plus that calculational models end up “driven by an unrealistic largescale state,” doesn’t give me confidence. Instead, it makes me wonder just what it is that leads modelers like Jim Hansen and Tim Barnett to claim any physical accuracy at all to their climate and temperature projections.

    I don’t understand, in short, how anyone is able to use a GCM to support the CO2 “A” in AGW. That’s why I’d like to see the physical uncertainties (not the statistical ones) propagated through the major GCMs. The exercise would bring informed rationality to the debate about global warming.

  48. Greg F
    Posted Apr 18, 2006 at 9:21 PM | Permalink

    Based on the notion that prelim results had been published and #13s suggestion that most runs didn’t crash, I assumed that select runs could be ditched.

    Ummm … no. Most runs didn’t crash because they had not yet made it to 2013. The third or fourth post on their message board was someone who’s model was at 1940. My suspicions that they knew something was wrong (see my comment #36) before the crash has been confirmed on the message board.

    All the trickles uploaded data, and it was seeing this data that showed the researchers that something unexpected was happening.

    The problem would have resulted in models being warmer then the historical data. That would have been unexpected.

    But more to the point, I don’t know how half the stuff claimed to be fudging could actually be fudged.

    I think you need to understand the models better. You expressing a number of beliefs about the models that are quiet common, but wrong. I would suggest this article by a climate modeler. This will also partially answer your question on parametrisations.

    Unfortunately, these models all contain a vast number of parameters which control the model behavior, and which are not directly constrained by theory or observations. Examples might be such things as “the average speed that ice crystals fall” or “droplet size in clouds”. Some of them describe rather abstract concepts and it is not clear how they could be measured even in principle, let alone practice. The range of plausible values often extends through orders of magnitude, and changing the parameters can have a significant effect on the model behavior.

    IOW, there is lots of wiggle room. This is why the IPCC does not call the model output “predictions”. Rather they call them “scenarios”. The distinction is significant. The IPCC even tells you that you cannot assign probabilities to the models output.

    You indicated, Greg, that the models are parameterized (resolution too poor? for what?)

    Clouds for one. Obviously you can’t represent a cloud very well in a grid that is 2.5 degrees. The clouds have to be parameterized. See above article as well as this for starters. Google is your friend, there is a lot of information about the models if your interested.

  49. Greg F
    Posted Apr 18, 2006 at 9:34 PM | Permalink

    Pat,

    I would like a copy if John A would be so kind to forward you my email addy.

  50. fFreddy
    Posted Apr 19, 2006 at 12:12 AM | Permalink

    Re #43, Steve Latham

    Maybe, but ask yourselves why they even chose to run the simulations across others’ computers rather than keep the models (and results) in a more controlled environment?

    I was assuming that the point of this Climateprediction.net exercise is PR rather than any sort of real science. The aim is to get a load more people to buy into AGW, and to feel some degree of ownership because they ran this silly program on their PCs for a while. This will help to increase their resistance to any opposing viewpoint.
    It will end up with BBC Breakfast interviewing the young academic behind it, plus Charlene from Dagenham. Charlene will be the person whose PC came closest to replicating the 1990 temperature record. There will be much oohing and aahing about how it goes on to predict that we are all going to die by 2100, including much reference to cuddly little grandchildren.
    (There will be no reference to infinite monkeys with infinite typewriters, and how a monkey that has written Macbeth will not necessarily go on to write the other works of Shakespeare.)
    Steve Bloom, is this how you would run this PR campaign ?

  51. Bob K
    Posted Apr 19, 2006 at 12:45 AM | Permalink

    re: 50
    LOL fFreddy,

    I think you’re assessment is right on the mark.

  52. Chris Chittleborough
    Posted Apr 19, 2006 at 12:54 AM | Permalink

    When you cannot derive a complete mathematical model from first principles, it is common (and appropriate) to try to derive the “shape” of the formula(e) — eg., f must be of the form aàƒ’€”x³ + bàƒ’€”sin(y) + c, where a, b and c are unknown coefficients. (I assume this is what was meant by parameterization above.)
    If the formula is linear in the unknowns, you can use Multiple Linear Regression to get estimated values and standard deviations for them. (Statistically literate scientists and engineers might use something more appropriate than MLR.)
    If the formula is non-linear, you can try a range of values for each unknown, varying them by hand until you get the best fit. This should also tell you how sensitive your calculated values are to variations in the unknowns; combining this with the error bars for your observed values gives you error bars for your coefficient values.
    In either case, an error analysis based on first principles will produce huge error bars, but the statistical process will produce much smaller error bars. Researchers will generally report the later, and I think they are right to do so.
    However, the preceding assumes a non-iterative mathematical model. In something like climate, you apply the formulae over and over, simulating different locations and different times. Such simulations are inherently non-linear in the unknowns, so the second case applies. First you have to do enormous numbers of simulations against measurements with slight variations in the estimates for the unknowns, and you can’t even estimate your errors until you have all the results so you can look at the sensitivity of the goodness of fit to variations in the unknowns. Combining that with the error bars for the measurements gives you some kind of error estimate for the parameters. You can then see how sensitive predicted values of temperature etc are to variations in the same parameters; adding that it gives you error estimates for your predictions.
    Hence these researchers cannot provide error estimates until they are finished.
    (Aside: I am relying on the reader to translate between “error bar”, “standard deviation”, “estimated error” etc.)
    This is hard work, and not everyone gets it right. Worse still, Numerical Analysis is much harder than most scientists or engineers (or even computer programmers!) realise. A degree of skepticism is advised when reading the results of such studies.

  53. Posted Apr 19, 2006 at 7:18 AM | Permalink

    In the case of the BBC software, the models were tested by 200,000 PC users. It’s not the case of the models that are actually used to determine policies. See more comments here:

    http://motls.blogspot.com/2006/04/bbc-climate-software-confuses-200000.html

  54. Dave Dardinger
    Posted Apr 19, 2006 at 7:37 AM | Permalink

    re #52,

    I don’t think the parameterization problem is quite what you think, but I’m not a modeler, so I’m open to being corrected. As I understand it, the appropriate equations are quite well known, but they can’t be applied because of limited computing ability. Therefore they used simplified equations; perhaps something like your equation, but everyone is aware that much is lost using such an equation so that the results have an inherent error regardless of how successful they are at producing the best set of values they can for the parameters.

    Now I’d think that one of the best things which might have been done before producing such GCMs would be to examing and publish a set of papers which compared the results over time of one small area using the exact equation vs the parameterized equation. Thus examining a from-the-ground-up set of runs on cloud formation at a given altitude vs taking the values from a simple equation containing just the temperature and relative humidity. Perhaps these studies exist, perhaps they don’t. I haven’t seen discussions on such results, but they may still exist in specialist journals which aren’t available on-line to quote from. Anyway, such a study would presumably give you a sensible sort of error bar as an output. Of course, as you indicate, feeding such error numbers into a GCM isn’t a trivial task since the errors could cancel or multiply depending on “the objective circumstances” to borrow the old Soviet term of art.

  55. Posted Apr 19, 2006 at 8:00 AM | Permalink

    The statement by Nick Faull, the project coordinator, whose part has been fortunately reproduced and stored at ClimateAudit, was “superseded” on their website by Mr. Allen who is the principal investigator and is probably receiving the money.

    Mr. Allen openly says that they intend to separate the component, which they routinely do, and continue. I am flabbergasted, and if he really does so, then he is a criminal in my eyes. If you could just separate the effect of aerosols, then it would mean that the aerosols influence the results in a linear or another simple way, and if it were the case, you would surely not need to run 200,000 computers for two months to get the results.

    The story shows the kind of scientific fraud that is routinely done by these people in the full force. I am disgusted by these “colleagues”.

  56. Peter Hearnden
    Posted Apr 19, 2006 at 8:04 AM | Permalink

    Re #55 ‘criminal’ and ‘fraud’ in the same post – crikey!

    I guess that’s what you call ‘substantive’ and ‘addressing the science’?

  57. tom brogle
    Posted Apr 19, 2006 at 8:10 AM | Permalink

    When was Peter Hearnden guilty of addressing the science’?

  58. Posted Apr 19, 2006 at 8:10 AM | Permalink

    The original announcement by Nick Faull is here:

    http://schwinger.harvard.edu/~motl/climate-faull.html

  59. Peter Hearnden
    Posted Apr 19, 2006 at 8:18 AM | Permalink

    Re #57 tu quoque allways was a good defence, eh?

  60. John A
    Posted Apr 19, 2006 at 8:26 AM | Permalink

    Re: #59

    Not unless you too have been compared to Senator Joe McCarthy.

    Besides, Lubos Motl is a scientist and does know trash masquerading as science when he sees it. A badly done experiment is a badly done experiment, and the excuses for this badly done experiment are not impressing anyone but the already fully committed.

  61. ET SidViscous
    Posted Apr 19, 2006 at 8:30 AM | Permalink

    Actually Peter, yes that’s exactly what it is, both substantice, and addresing the science.

    Did you read the full comment on his blog?

  62. Michael Jankowski
    Posted Apr 19, 2006 at 8:33 AM | Permalink

    Didn’t you expect that the researchers would fudge their results no matter what they were to support the conspiracy?

    Apparently, this was given heavy consideration. After all, from the link Peter posted in #10:
    ***So, after much discussion, the scientists have decided the best strategy is to re-run people’s climate models with corrected sulphate data, rather than trying to “fix” the problem after the event.***
    You would think either re-running the models or scrapping their results completely would have been the ONLY options. But no…”much discussion” considered “trying to ‘fix’ the problem after the event.” I’d call that “fudging,” and it was apparently Plan B.

  63. J. Sperry
    Posted Apr 19, 2006 at 8:33 AM | Permalink

    Re #31 & #42 (remark about 11C in 40 yrs):
    How about this, showing preliminary results of the experiment (4 out of 5 phases)? There is some discussion about how the plot includes both stable and unstable models, but I’m pretty sure that what it’s calling unstable are the lines that drop (i.e., cool) too fast. By my reckoning, some “stable” lines go up from 286-287.5K to 296-298K (8.5-12K increase) in 40 years.

  64. Greg F
    Posted Apr 19, 2006 at 8:37 AM | Permalink

    RE:52

    When you cannot derive a complete mathematical model from first principles, it is common (and appropriate) to try to derive the “shape” of the formula(e)

    That is sometimes called a curve fit. Curve fitting is useful as long as you only interpolate. The curve fitting I have done (2 independent variables and one dependent variable) has always resulted in multiple solutions. The problem is if you extrapolate each equation they diverge from one another. If you understand the process you can eliminate the ones that don’t make any physical sense, which helps, but doesn’t solve the extrapolation problem. It appears to me that climate models are just complex curve fits. Some of the process are understood, but many are not. It seems apparent that you should be able to get the models to extrapolate whatever values you deem appropriate. IOW, it provides room for someone’s unintended bias to creep in.

    RE:54

    I don’t think the parameterization problem is quite what you think, but I’m not a modeler, so I’m open to being corrected. As I understand it, the appropriate equations are quite well known, but they can’t be applied because of limited computing ability.

    Dave, I think there are examples of both. The link I provided to the climate modeler gives examples of ones that we just don’t know:

    Unfortunately, these models all contain a vast number of parameters which control the model behaviour, and which are not directly constrained by theory or observations. Examples might be such things as “the average speed that ice crystals fall” or “droplet size in clouds”. Some of them describe rather abstract concepts and it is not clear how they could be measured even in principle, let alone practice. The range of plausible values often extends through orders of magnitude, and changing the parameters can have a significant effect on the model behaviour.

    There are others, such as terrain elevation, that is easy to quantify but dimensionally to small to be represented by the climate models resolution.

  65. John A
    Posted Apr 19, 2006 at 8:41 AM | Permalink

    ou would think either re-running the models or scrapping their results completely would have been the ONLY options. But no…”much discussion” considered “trying to “fix’ the problem after the event.” I’d call that “fudging,” and it was apparently Plan B.

    They’re on a deadline to broadcast the bad news on the BBC. Can you think of another experimental setup where an experiment goes badly haywire and they discuss carrying on? What sort of Mickey Mouse operation is this?

  66. Jean S
    Posted Apr 19, 2006 at 10:12 AM | Permalink

    re 55: So if Allen can separate the effect, it means that sulphates has a significant cooling effect (since their models warmed “too much” with future (lower) sulphate concentrations) despite of other factors. Thus you may ask Allen (the AGW acid test): should we consider increasing the sulphate emissions to combat the global warming? Wasn’t this Myles Allen the same guy who seriously predicted temperature raises of 10+C? Are they using the same (but, of course, slightly “improved”) models for this (BBC sponsored) experiment?

    re 65: Yes, a 70+ years real-life experiment comes to my mind…

  67. jae
    Posted Apr 19, 2006 at 10:31 AM | Permalink

    Very good (and interesting) summary of literature on solar influences at CO2 Science:
    http://www.co2science.org/scripts/CO2ScienceB2C/subject/s/summaries/solariceage.jsp

  68. jae
    Posted Apr 19, 2006 at 11:53 AM | Permalink

    These models remind me somewhat of risk assessment schemes, where cascading layers of “safety factors” (because of uncertainties) are employed to derive some “safe” level of exposure to a chemical. I have always wanted to ask a statistician (Steve?) just what is the probability of arriving at a correct number, when one cascades four or five “just-to-be-safe” factors.

    I’m no modeler, but I can’t help speculating. The models are based on equations that relate various climate variables, most of which are approximations (curve fitting). In such cases, do the uncertainties in the equations tend to cancel each other, or do they cascade to produce rediculous results. I think the latter. Probably the only way to keep these models from crashing is by constantly “tuning” the equations so a crash doesn’t result. I can’t see how anyone can believe that the climate models have predictive power, when they are specifically designed to fit some recent curve (the flawed global average surface temperature “record”). It is complete circular reasoning.

  69. Greg F
    Posted Apr 19, 2006 at 2:52 PM | Permalink

    I can’t see how anyone can believe that the climate models have predictive power, when they are specifically designed to fit some recent curve (the flawed global average surface temperature “record”). It is complete circular reasoning.

    The IPCC doesn’t even claim they predict, that is why they are called “scenarios”. The IPCC called them, if I recall correctly, “story lines” in the 95 report. The failure of the IPCC to correct this widely held belief has resulted in the general populace thinking they are predictions. Perhaps this time around they should name them “fairy tales”.

  70. Dano
    Posted Apr 19, 2006 at 2:55 PM | Permalink

    68:

    I can’t see how anyone can believe that the climate models have predictive power, when they are specifically designed to fit some recent curve

    You can see how you can cure your cascading set of erroneous beliefs by starting your tuning here.

    HTH,

    D

  71. Dano
    Posted Apr 19, 2006 at 2:58 PM | Permalink

    69:

    The IPCC scenarios aren’t for what you have been told – or erroneously believe – they are for. Model output is a component of the scenario analysis and adaptive management response.

    Begin your quest to learn about scenario analysis here.

    HTH,

    D

  72. jae
    Posted Apr 19, 2006 at 3:17 PM | Permalink

    Oh, Danoboy: your normal MO, throw out some references for me to wade through. Back at you:
    http://www.co2science.org/scripts/CO2ScienceB2C/subject/c/subject_c.jsp

  73. Hans Erren
    Posted Apr 19, 2006 at 3:19 PM | Permalink

    The A2 scenario is never going to happen. Oh those evil journalists, spinning honest scientific research.

  74. Hans Erren
    Posted Apr 19, 2006 at 3:21 PM | Permalink

    re 72:

    yup been there, done that:
    http://home.casema.nl/errenwijlens/co2/tcscrichton.htm

  75. Greg F
    Posted Apr 19, 2006 at 3:26 PM | Permalink

    Re:71

    The IPCC scenarios aren’t for what you have been told – or erroneously believe – they are for.

    [satire] Oh my Dano … how would we have known without you? Since you seem to be the expert perhaps you should inform the IPCC. [/satire]
    Sometimes you are a real hoot Dano. LOL

  76. jae
    Posted Apr 19, 2006 at 3:43 PM | Permalink

    Dano: Here is a sample paragraph from the conclusions of the report at your linkie:

    We have worked to synthesize key model and experimental features such as the
    resolution of the ocean and atmospheric component models, the spin-up procedure form
    of flux adjustment (if any), and the basic characteristics of the land and sea-ice models.
    A thorough documentation of any model is an arduous undertaking; describing a suite of
    models is even more challenging. We have provided a collection of references to
    facilitate efforts to acquire more in-depth understanding of particular model features.
    While this information is not comprehensive, we find that it is already proving to be
    valuable.

    Doesn’t tell you much, does it? It says they have provided “a collection of references to facilitate efforts…” and “we find that it is already proving to be valuable.” (whatever that means). Wish I had the time to read the whole thing. Maybe someday.

  77. Dave Dardinger
    Posted Apr 19, 2006 at 4:25 PM | Permalink

    Jae,

    the spin-up procedure form of flux adjustment (if any)

    If only Dr. Emmet Brown were available he could provide them with a Flux Capacitor! Then all they’d need is an automobile capable of getting up to 88 mph. Of course if they did get to the future to measure the flux adjustment they’d probably get arrested for driving a transportation vehicle without a properly executed procedure form.

  78. Willis Eschenbach
    Posted Apr 19, 2006 at 5:01 PM | Permalink

    Re #70, Dano, thanks for the link to the model intercomparison project. It shows that for an 80 year run, the models usually get within +/- 2°C of the mean temperature for most of the globe, but in a number of regions are off by as much as +/- 10°C. In particular, the Antarctic is way wrong.

    Now, remember that we are using these models to tell us something about a ~ 0.4°C temperature change over the eighty year run …

    So Dano, give me your opinion here. You laughed in your posting about someone saying:

    I can’t see how anyone can believe that the climate models have predictive power, when they are specifically designed to fit some recent curve

    Are models whose underlying error is on the order of +/- 2°C, with regional errors of up to 10°C, going to be able to tell us anything about a 0.4° change in temperature? How much predictive power do they have regarding a 0.4°C change?

    If the underlying model error is +/- 2°C and they are often out by up to 10°C, I don’t think they have any useable “predictive power” at all … but your mileage may vary.

    All the best,

    w.

  79. Paul Penrose
    Posted Apr 19, 2006 at 5:28 PM | Permalink

    Come on, I’ve seen some plots from these models. They include some runs with huge temperature excursions in both directions. Now these are thrown out, of course, but then to claim that the remaining runs are somehow meaningful is total folly. If you look at the inputs for one of these runs with a huge output response you can’t really differentiate it from one of the “nominal” runs without looking at the output. That’s to be expected when extrapolating using a tangle of interrelated non-linear equations, but to claim that some results are valid and others are not is just bogus. If the models can produce results which are obviously incorrect even with “valid” looking inputs, then it is not useful for any kind of predection, projections, or planning. If I was one of the authors of a model like this I would restrict the end-date to today’s date so that people don’t fall into the trap of thinking that it can be used to extrapolate future events.

  80. jae
    Posted Apr 19, 2006 at 5:31 PM | Permalink

    I strongly suspect that “parameterization” will soon be revealed as another cherry-picking scheme. In this case, it is cherry-picking equations and constants to force the model to do what the modeler wants it to do. It’s starting to look like the whole AGW theory is rapidly boiling down to an exercise in non-scientific cherry-picking.

  81. Paul Linsay
    Posted Apr 19, 2006 at 6:48 PM | Permalink

    #70, Thank you for the linky Dano. First two sentences of the conclusion are

    Comparison of the CMIP2 control run output with observations of the present day climate reveals improvements in coupled model performance since the IPCC’s mid-1990s assessment (Gates et al. 1996). The most prominent of these is a diminishing need for arbitrary flux adjustments at the air-sea interface.

    So I infer from this that there is still a need for “arbitrary flux adjustments”, whatever they are, to get the right answer?

    By the way, I’m always perplexed as to which global temperature record they use. The satellites, the radio sondes, or the ground based measurements? Probably the ground based measurements? But then which version, the GISS which is the hottest, or one of the British ones which show considerably less warming and are almost in agreement with the satellite?

  82. Chris Chittleborough
    Posted Apr 19, 2006 at 8:19 PM | Permalink

    #54: Dave is quite right, climate models use vastly simplified equations because the correct equations are too hard to compute with (by orders of magnitude!), not unknown.

    #65: John A asks “Can you think of another experimental setup where an experiment goes badly haywire and they discuss carrying on?” That’s something you won’t see too often: John A being naive. It’s all too common when you’ve got limited access to expensive equipment. However, when I did it in the cases with which I am familiar, only research time was at stake, not huge changes in world-wide economics.

  83. Chris Chittleborough
    Posted Apr 19, 2006 at 8:20 PM | Permalink

    Whoops, WordPress filtered out my HTML. “When I did it” should have been struck through.

  84. Posted May 20, 2006 at 5:17 AM | Permalink

    I work on the climateprediction.net experiment, and pardon for my late reply, I’ve just been recently getting into “blogging” and this looks like an interesting “skeptic” site; not totally full of Myron “CO2 Is Life” Ebell types etc! 😉

    Anyway — the specific error was not in the climate model or program — but in an ancillary file that had a bad header. Specifically, there were two date headers in the file, a fixed-length header which had the correct date (the file was supposed to span emissions from 1918 through 2082, i.e. safely cover our 160-year experiment of 1920-2080).

    The scientifist who provided this file at the last minute overlooked a second header in the file which the model actually used, and made the model believe the data was for 1849-2012. So the software behaved normally/properly, it’s just that values where shifted off until people got to 2013, it was the “end-of-file” and crashed.

    Of course, I would have loved to have run a lot of these all the way through, but they take months, and there was time pressure to get this out for the TV documentary etc. And often our beta testers have fast enough machines to be faster than the cluster we have access too (which is how this problem was found out — the vast majority of people restarted were nowhere near 2000 or 2013 etc).

    PS — I find it quite hilarious claims that cp.net is “expensive” or “all climate models are invalidated” from this snafu etc. Mistakes like this crop up, they get corrected, and things get rerun; all throughout science (not just my project! 🙂
    I mean, jeez, it’s not like your crackpot “proofs that AGW is false” are “correct” the first time through, eh? 😉

  85. Posted May 20, 2006 at 5:33 AM | Permalink

    Ooh, we’re crackpots now.

    I like it. Do I have to let my hair grow long and spikey, kind of like the mad scientist in Back to the Future?

  86. John Lish
    Posted May 20, 2006 at 5:56 AM | Permalink

    #84 – I think Carl, you should actually bother to read the site before making sweeping and misplaced comments. Knee jerk reactionism gets everybody nowhere.

  87. Posted May 20, 2006 at 6:55 AM | Permalink

    HAHA, I have looked around the website. Look, I don’t want to start a fight, I just thought it would be a novel concept to have something factual posted here, amidst the spurious statistical analyses, foregone conclusions to support right-wing ideology, pseudo-intellectual claptrap etc. It’s pretty funny you whinge about my single post, yet 98% of this site is ad-hominem attacks at best. Enjoy!

  88. John A
    Posted May 20, 2006 at 7:48 AM | Permalink

    Carl,

    98% of this site is about science, audit and falsifiability. The other 2% is the noise of people making startling claims with no evidence and then requiring the rest of us to disprove them.

    It isn’t for us to prove AGW false, it is for you to prove it true. We’re still waiting for the “smoking gun” that shows us all to be on the edge of “climate chaos”. The last “smoking gun”, which was climateprediction’s “sensitivity experiment” showed us nothing other than cp’s willingness to press the Big Red Panic Button.

    In any case, the climateprediction.net “experiment” has been run with singular ineptitude that should cause anyone to wonder what value the “result” really has. Personally I think the value can be measured not in knowledge of how the climate system works, but in scary headlines, green propaganda and hackneyed cliches about how it will be all “worse than previously thought”.

    That’s my prediction about climateprediction.net

  89. Paul
    Posted May 20, 2006 at 7:50 AM | Permalink

    Carl Christenson – chief software engineer having appraise the suprious statistics on this blog. I do hope you don’t have any say in the model design specification paramatisation or estimation.

    ClimatePrediction.net credibility sinking fast.

  90. Dave Dardinger
    Posted May 20, 2006 at 8:36 AM | Permalink

    Come on guys! Don’t be knee-jerks. Carl made some specific points and he should be made to defend them.

    Another point of fact. A blog is to be judged based on the notes, articles, etc. posted by the Principal(s), not by the comments which can come from anyone and can range from outstanding to abyssmal. I’m sure Carl can find “spurious statistical analyses” in the comments here, but can he do so in the postings of Steve McIntyre, who is the Principal here? I don’t think he can.

    And actually, of course, the general level of comments here, where the comment is intended to be an actually contribution to the topic as opposed to a remark by or to a troll, is quite high.

  91. Steve McIntyre
    Posted May 20, 2006 at 9:46 AM | Permalink

    Carl,

    To the extent that I’m the principal of this blog, I personally never suggested that this particular snafu invalidated climate models in general. In fact, I did not participate in this thread other than to inquire what the actual mistake was. I quite agree with your observation that:

    Mistakes like this crop up, they get corrected, and things get rerun; all throughout science (not just my project!

    I think that’s appropriate. Having said that, in a paper which I was not involved with, Ross McKitrick made a programming error in which he input degrees instead of radians. Like you, he corrected the error and re-ran the results. Instead of taking the responsible position that you advocate above, many people, for example, William Connolley, seized on this error as a reason to disparage every thing that McKitrick had ever written. Indeed, because McKitrick’s coauthor in this paper had a surname beginning with M, some people (including Connolley) went even further, ascribing the error to M&M in an effort to discredit me. So perhaps you should convey to Connolley and people like that they should apply the responsible view that you state above.

    Secondly, I have never presented any “proofs that AGW is false” – crackpot or otherwise. My particular concerns are with the multiproxy studies purporting to estimate temperature for the past millennium. Here my own view is that the studies are insufficient to make pronouncements with the claimed confidence.

    As to your observation that:

    I just thought it would be a novel concept to have something factual posted here, amidst the spurious statistical analyses, foregone conclusions to support right-wing ideology, pseudo-intellectual claptrap etc. It’s pretty funny you whinge about my single post, yet 98% of this site is ad-hominem attacks at best.

    I regard this as itself an ad-hominem attack. I consider my own posts, by and large, to be pretty detailed. My own political views are not right-wing. I really have no foregone even on the major issues. I do not believe that the statistical analyses that I’ve presented here are "spurious" or "claptrap". The knowledge of readers here is very uneven, but quite a few have excellent statistical skills.

    But let’s be specific. There are dozens of statistical analyses on this blog. They are of uneven quality – it’s a blog and I’m not editing and re-editing the material.  Sometimes it’s work in progress or just notes on something I’m reading. You’ve said that it would be "novel" to make a factual statement amidst the "spurious" statistical analyses. I’m not asking that you show that all the analyses are "spurious", but you must have at least one example that caused you to use this term. So pick one and tell me why you think it’s "spurious".

  92. Posted May 20, 2006 at 10:04 AM | Permalink

    It’s quite simple — if you have “disproven AGW” — then please get it published in the relevant, peer-reviewed scientific journals. This site is mostly bitching about everything that comes along in ‘Nature’, ‘Science’, even the BBC etc. I just find it odd that with all the self-proclaimed experts around here, none of whom typically have any education in the field of climate science or atmospheric physics, the actual publications are few and far between and usually end up in “Coal Mine Executive Quarterly” or “Proceedings of the Cato Institute” or whatever.

    It would be of interest if you guys spent 5% of the effort studying the ridiculous claims of the right-wing kooks you love such as the Cato Institute, CEI, etc. Do you really think AGW is some concerted effort by scientists all over the world? For what, to drive worse cars than you sellouts? But this sort of incestual blogger-inanity is something all blogs have at fault. I think a minute wasted on a blog means another year spent in hell in the future! 😉

  93. Ken Robinson
    Posted May 20, 2006 at 10:18 AM | Permalink

    Carl:

    Your post #93 is singularly unhelpful. If you wish to retain some credibility in commenting, it would be useful to respond to the specific points Steve raises in #92. Having followed this blog for over a year, I will say that I’ve found that very few of Steve’s analyses (none I can think of offhand) have ever been demonstrated to be incorrect. Steve isn’t arguing about “climate science” per se. He’s taking issue with the statistical methods used in most proxy reconstructions, and with the lack of access to the data underlying these studies. In both areas, he is clearly very highly qualified.

    By the way, I develop energy efficiency and renewable fuel projects for a living so your assumption that everyone here is a right-wing idealogue is, on its face, incorrect.

    Regards;

  94. Posted May 20, 2006 at 10:34 AM | Permalink

    There isn’t actually any ‘proof’ of AGW in the literature – let’s not confuse ‘proof’ with ‘evidence.’ There is, of course, ‘evidence’ for a human influence on climate, including non-GHG anthropogenic processes e.g.

    Evidence for influence of anthropogenic surface processes on lower tropospheric and surface temperature trends
    A. T. J. De Laat *, A. N. Maurellis
    Netherlands Institute for Space Research (SRON), EOS, Utrecht, Netherlands

    http://www3.interscience.wiley.com/cgi-bin/abstract/112510967/ABSTRACT

    “These findings suggest that over the last two decades non-GHG anthropogenic processes have also contributed significantly to surface temperature changes. We identify one process that potentially could contribute to the observed temperature patterns, although there certainly may be other processes involved. Copyright © 2006 Royal Meteorological Society.”

    Roger Pielke Sr has calculated the fraction of ‘global warming’ that is due to the radiative forcing of increased CO2, with generous reference to the IPCC perspective on climate forcings, to be about 26.5%. It may well be lower.

    Saviv and Veizer suggest two-thirds of ‘global warming’ is due to the effect of cosmic ray flux on low-level cloud cover, modulated by the 11-year solar cycle, and that climate sensitivity to CO2 is 1 to 1.5C.

    Steve McIntyre has made a valuable contribution to estimating the normal range of temperature variabilty, which seems not to have been exceeded during the current warm period.

  95. John A
    Posted May 20, 2006 at 10:47 AM | Permalink

    It’s clear that Carl isn’t going to bother to defend climateprediction.net. Instead he’s just accusing everybody of being "self-proclaimed experts", which is odd because that’s a phrase I’ve never seen anybody use.

  96. nanny_govt_sucks
    Posted May 20, 2006 at 11:17 AM | Permalink

    Anyway “¢’‚¬? the specific error was not in the climate model or program “¢’‚¬? but in an ancillary file that had a bad header.

    I believe that’s a distinction without a difference. And I’ll go ahead and proclaim myself an expert on software errors.

  97. Posted May 20, 2006 at 11:25 AM | Permalink

    Steve claims “some people (including Connolley) went even further, ascribing the error to M&M in an effort to discredit me”.

    That’s rather paranoid of you. M&M is the conventional abbreviation for McKitrick and Michaels.

  98. Steve McIntyre
    Posted May 20, 2006 at 11:33 AM | Permalink

    #96. John A, In terms of climateprediction.net, they made an error of the type that can happen to anyone. Carl’s given a perfectly satisfactory explanation – one that was already posted up earlier in the thread. I don’t see any point in piling on about the error. I simply wish that Carl would hold William Connolley to the same expectations as he reasonably asks from us.

    #93. Carl, I got interested in the area of proxies. I think that my statistical skills compare favorably with climate scientists in the paleoclimatology field.

    You accused me of posting up "spurious" statistical analyses. I asked you to give me an example. You didn’t answer. Again, could you back up your comment with at least one example so that I can consider your criticisms.

  99. Steve McIntyre
    Posted May 20, 2006 at 11:49 AM | Permalink

    #98. Tim, if you look at Connolley’s comnments in the Wikipedia, they are unsavory in this respect.

    Tim, Carl has reasonably taken the position that programming errors happen and that climateprediction.net should not be dismissed simply because they made a programming error. I agree with that. I presume that you do as well. Shouldn’t you apply the same reasoning elsewhere?

  100. Francois Ouellette
    Posted May 20, 2006 at 12:16 PM | Permalink

    Re: #93

    Carl,

    That kind of post is one of the reasons I am a “skeptik”. There is not a bit of science in it, and a lot of politics. You claim that the people who comment here have political motives, but prove to me that the people who promote all the scary predictions about AGW do not have their own political motives. Yes, there is such a thing as climate science, but in this debate it is all too often hidden behind the political agendas of everyone, especially the most vocal advocates on each side.

    I like Steve’s blog because he tries to focus on the science and avoid as much as possible the ideological debate. That’s not always easy because he himself has to confront the accusations of being “right wing” and all.

    Now, I’m not a climate scientist, but I do have a Ph.D. if that’s worth anything (in laser physics if you want to know). But what I do have is 20+ year of practicing science as a profession, publishing papers etc.(look me up in Google Scholar if you want).

    I have done computer modelling from time to time as a lot of scientists do, but of course for much simpler problems than global climate. Now it’s one thing if you want to simulate some experimental results, however complex, because the experiment can always be repeated under different conditions to check the model. In the case of global climate, it is about the most complex physical and mathematical problem that you can find, the boundary and initial conditions have large uncertainties, and we really only have one very limited data set to check the validity of the model. But given the complexity of the model and the number of parameters, you could find an agreement of the model with that one data set that would be accidental, yet you would infer that the model is correct, and use it to predict the future.

    Don’t get me wrong, I have a lot of respect for those who do that kind of work, and I have no doubt that we will eventually have a combination of model sophistication validated by suitably large past data set, that will alllow us to be confident about the model results for the next 100 years. My own personal opinion is that we are probably not there yet, and that model results must be used with caution. Does that make me a right wing kook?

    Now I am also worried that a lot of people involved in climate research do have political opinions, and that that opinion may taint their results and conclusions. When I look at the likes of John Hunter, and yourself, and a lot of the real climate bunch, I am not reassured. Scientists who are condident about their results don’t react with ad hominem attacks about political motives.

  101. Pat Frank
    Posted May 20, 2006 at 12:55 PM | Permalink

    #88 — “…yet 98% of this site is ad-hominem attacks at best.” That’s a serious misapprehension that makes a joke of the claim, “I have looked around the website.” What would such a factually unsustainable but sweeping group-dismissal be, Carl? Perhaps an ‘ad-popula’ attack?

  102. Steve McIntyre
    Posted May 20, 2006 at 1:05 PM | Permalink

    As I mentioned before, I have refrained from making any piling-on comment about the climateprediction.net goof, as I think that this kind of error can be fixed. (This doesn’t mean that just because they can fix this error that their results will automatically be OK). But consider this sort of vituperative comment by an NCAR climate scientist when he had an opportunity to criticize McKitrick and Michaelshere:

    I’ve worked at NCAR doing climate and chemical modeling in FORTRAN….. I do have some small knowledge in this area… .
    Anyone who can’t figure out the difference between radians and degrees deserves to be publically humiliated for the idiocy it takes to do such a thing.

    Unfortunately that type of attitude is far too common in climate science and sets up the gotcha mentality that seems all too common.

    The main criticisms that I have are not so much making mistakes, but the lack of due diligence, poor disclosure in the original articles and especially the withholding of adverse results. These are different issues from simply making mistakes.

  103. Posted May 20, 2006 at 1:14 PM | Permalink

    It’s quite simple, you guys (not just you but all the alleged “scientific blogosphere”) take yourselves way too seriously. I saw the same crap in the early days of blogging when computer geeks thought they were onto something big (just look how “Slashdot” has devolved). There’s a reason why most respectable scientists stay away. As if posting to a bunch of bleating sheep like “nanny_govt_sucks” makes for great discourse! 😉

    Science best advances through peer-reviewed papers, yet what bloggers provide is basically a circular mutual masturbation society. So Steve et al can puff up their egos, with everybody patting each other on the back, and whinge about their inane conspiracies of “why da man (‘Nature’ is a current fave 😉 is keepin’ me down!”

    The proof is in the pudding; publish your reverse-engineered statistical “smoking guns” that you amazingly seem to find in every damn ‘Nature’ and ‘Science’ paper. Blogs are useful for entertainment via the “signal-to-noise”, that’s about it. I think at the top of most blogs, especially this one should be a disclaimer “Warning: Full of lots of sound & fury, signifying nothing!”

    I’m confident we’ll continue to do good science on climateprediction.net via our publications — too bad you guys aren’t up for anything except a web version of the “Rush Limbaugh Hour.”

  104. Posted May 20, 2006 at 1:15 PM | Permalink

    It’s quite simple, you guys (not just you but all the alleged “scientific blogosphere”) take yourselves way too seriously. I saw the same stuff in the early days of blogging when computer geeks thought they were onto something big (just look how “Slashdot” has devolved). There’s a reason why most respectable scientists stay away. As if posting to a bunch of bleating sheep like “nanny_govt_sucks” makes for great discourse! 😉

    Science best advances through peer-reviewed papers, yet what bloggers provide is basically a circular mutual masturbation society. So Steve et al can puff up their egos to a bunch of like-minded dweebs, with everybody patting each other on the back, and whinge about their inane conspiracies of “why da man (‘Nature’ is a current fave 😉 is keepin’ me down!”

    The proof is in the pudding; publish your reverse-engineered statistical “smoking guns” that you amazingly seem to find in every damn ‘Nature’ and ‘Science’ paper. Blogs are useful for entertainment via the “signal-to-noise”, that’s about it. I think at the top of most blogs, especially this one should be a disclaimer “Warning: Full of lots of sound & fury, signifying nothing!”

    I’m confident we’ll continue to do good science on climateprediction.net via our publications — too bad you guys aren’t up for anything except a web version of the “Rush Limbaugh Hour.”

  105. Posted May 20, 2006 at 1:16 PM | Permalink

    It’s quite simple, you guys (not just you but all the alleged “scientific blogosphere”) take yourselves way too seriously. I saw the same stuff in the early days of blogging when computer geeks thought they were onto something big (just look how “Slashdot” has devolved). There’s a reason why most respectable scientists stay away. As if posting to a bunch of bleating sheep like “nanny_govt_sucks” makes for great discourse! 😉

    Science best advances through peer-reviewed papers, yet what bloggers provide is basically a circle-jerk society. So Steve et al can puff up their egos to a bunch of like-minded dweebs, with everybody patting each other on the back, and whinge about their inane conspiracies of “why da man (‘Nature’ is a current fave 😉 is keepin’ me down!”

    The proof is in the pudding; publish your reverse-engineered statistical “smoking guns” that you amazingly seem to find in every damn ‘Nature’ and ‘Science’ paper that comes along. Blogs are useful for entertainment via the “signal-to-noise”, that’s about it. I think at the top of most blogs, especially this one should be a disclaimer “Warning: Full of lots of sound & fury, signifying nothing!”

    I’m confident we’ll continue to do good science on climateprediction.net via our publications — too bad you guys aren’t up for anything except a web version of the “Rush Limbaugh Hour.”

  106. Pat Frank
    Posted May 20, 2006 at 1:31 PM | Permalink

    #93 — “…ridiculous claims of the right-wing kooks”

    Carl, your politically-inspired rants here, in #’s 84, 88, and now 93 — e.g., “crackpot “proofs”,”right-wing ideology, pseudo-intellectual claptrap,” “you sellouts,” etc. — make you look like a green-fringe kook.

    Like Francois Ouellette I have a PhD, but in chemistry, and have published research for more than 20 years. It doesn’t take a professional degree in climate physics to read the work of climate modelers, and to discover that they themselves publish explicit cautions against using GCMs to predict future climate.

    As someone concerned about the environment, and in view of the state of the science, I consider the heated AGW debate to be a large and corrosive distraction — driven mostly by self-indulgent political passions — that results in a waste of resources and attention that could be used to ameliorate real environmental problems.

    And let me ask you: Why has no one ever published a propagation of estimated errors and uncertainties in energy fluxes and parameter values through a climate model calculation in order to determine a true uncertainty range in climate projections?

  107. Posted May 20, 2006 at 1:32 PM | Permalink

    jeez, I wish you pussies weren’t banning my posts, you ask for a response and I keep getting "This post has been detected as spam." HAHAHA what a bunch of wimps!

    Steve: As others will vouch, I don’t censor comments like yours. We have to operate with a spam filter as presently we can get over 100 spams per day. One of the criteria is a sudden pulse of posts from a new poster. You’ve gone from not posting and Spam Karma has trouble distimguishing from a poker site ot that type of thing. If a post gets hung up, email me at smcintyre25 AT yahoo.ca and I’ll attend to it. Please be assured that there’s nothing about your comments so far that I find challenging.

  108. Posted May 20, 2006 at 1:47 PM | Permalink

    Yes, it seems to me that this blog is all about people making mistakes, and

    (a) refusing to admit it
    (b) refusing to correct them
    (c) refusing to take steps to prevent similar mistakes happening in future
    (d) refusing to take steps to make mistakes easier to spot

    , all of which are ridiculous if your goal is actually to perform high-quality science with as few mistakes as possible in the final product.

    Yes, everybody messes up, I do all the time too, but we like to be able to catch & correct our errors before they compound themselves. This is one important reason why the scientific process is SUPPOSED to be so open.

  109. Posted May 20, 2006 at 2:22 PM | Permalink

    as a late reader, the comment numbering doesn’t make sense any more, I don’t know who is referring to who 🙂

  110. TCO
    Posted May 20, 2006 at 2:56 PM | Permalink

    Carl:

    The spam guard tends to spank new, extremely active posters.

    But you are a pussy.

    😉

  111. Posted May 20, 2006 at 3:10 PM | Permalink

    “It’s quite simple, you guys (not just you but all the alleged “scientific blogosphere”) take yourselves way too seriously.”

    Black hole calling the pot black?

  112. Posted May 20, 2006 at 3:17 PM | Permalink

    OK, sorry, that was a cheap shot.

    But seriously, why are “peer-reviewed journals” the be-all and end-all of science?

    It used to be that the only sensible method for distributing scientific works to other scientists was through journals. Just like the only sensible method for getting news was from a newspaper.

    That’s all changed now.

    Any boob, me included, can publish a paper online and, if it has merit, millions can decide to read it. No, it’s not peer reviewed, but I’ve seen sufficient evidence to believe that peer review in at least some journals is all but useless for guaranteeing the quality of the paper. It seems to me what Steve publishes – including in peer reviewed journals – is high quality work relating to statistics. If he’s right, he’s right – it doesn’t matter if he’s writing articles in Nature or on his blog or on a web page at geocities.com

    So instead of using your “appeal from authority” how about you actually point out what’s wrong with his statistical commentary? Just point out a flaw in one of his analyses. That will get you a lot more credit than calling us pussies or any other insult you can come up with.

  113. TCO
    Posted May 20, 2006 at 3:18 PM | Permalink

    I completely agree that Steve should have his stuff peer reviewed and published in archived literature. Not to do so is somehow timid or lazy.

  114. Spence_UK
    Posted May 20, 2006 at 3:21 PM | Permalink

    Re #102

    From this reply it appears that you have no evidence to support your criticism of Steve’s statistical analysis, only ideological disagreement with the overall approach. Fair comment?

    Any chance of turning down the hypocrisy a little while you’re there? Complaining about ad homs and bleating sheep doesn’t sound convincing when it sums up most of your posts.

  115. Posted May 20, 2006 at 4:06 PM | Permalink

    Any schmuck, as this website shows, can reverse-engineer stats to “disprove” someone else’s stats. The fact that Steve “coincidentally” does this on a number of papers from esteemed journals shows that. If he really feels he has found something to disprove the papers, then simply get them published.

    And, no, although you blog-dweebs thing that a bunch of posts on a website consitute a “science publication” — it doesn’t. What you basically get is these self-recycling groups of blowhards who just bitch about the scientists that are getting published.

  116. Posted May 20, 2006 at 4:25 PM | Permalink

    Oh c’mon, the cavalier way you douche-bags trash climate models doesn’t deserve any coherent response. I guess you guys don’t bother listening to the weather forecast, after all they’re just using an ensemble of hi-res, short-range forecasts. What a joke. As I said, enjoy your little blog games, but please don’t get it confused with science. Your phony “contrarian” stance is a hoot; it’s 90% jealousy & 10% schadenfreude.

  117. Posted May 20, 2006 at 5:30 PM | Permalink

    then simply get them published.

    sure, simply…

  118. Ian Castles
    Posted May 20, 2006 at 5:32 PM | Permalink

    According to the press statement issued by the UK Natural Environment Research Council on 27 January 2005, ‘Climateprediction.net is a collaboration between several UK Universities and The Met Office, led by the University of Oxford and funded by the Natural Environment Research Council and the Department of Trade and Industry’s e-Science programme.’ Carl Christensen is embarrassing all of these institutions – especially the University of Oxford which employs him and publishes his personal page.

  119. Posted May 20, 2006 at 5:33 PM | Permalink

    carl, what’s so contrarian about: Show me your data, Explain me your method?

    methinks that was known as the scientific process a few years ago…

  120. Posted May 20, 2006 at 5:35 PM | Permalink

    neat site:
    http://www.hartnell.edu/faculty/cchriste/

  121. Spence_UK
    Posted May 20, 2006 at 5:40 PM | Permalink

    Any schmuck, as this website shows, can reverse-engineer stats to “disprove” someone else’s stats.

    If that was true, you would be able to offer a valid criticism on some of Steve’s statistical analysis. I note with interest that you still haven’t. Lots of talk, no substance.

  122. Posted May 20, 2006 at 6:01 PM | Permalink

    Carl,

    Can you do anything other than swear? Seriously? Do you have a point to make, or are you just here to call us names?

    I don’t take kindly to being called a crackpot pseudo-intellectual douchebag schmuck, thank you very much.

    I’ve written models before. Writing a model is easy. Getting its output to be meaningful is hard. And no, I don’t listen to weather reports, because they’re so often wrong. Why pay attention to a prediction you can’t rely on? Then you go and make wild claims about your model’s predictive power, when you didn’t even get the input data right. And you have the temerity to tell US that we don’t know what we’re doing?

    I hope to see no more comments from you until you have something constructive to add.

  123. Steve McIntyre
    Posted May 20, 2006 at 6:12 PM | Permalink

    Is there something about being a computer programmer that makes people vent? Christensen seems to be a computer programmer rather than a climate scientist. Here’s his not-current homepage and pre-climateprediction.net resume.

    He is described as the chief software engineer for climateprediction.net -see a paper of his here. He’s presumably the person with climateprediction.net who had to carry the can for the software problem. No wonder he’s not very happy. I’d imagine that some of his associates were furious at him.

    We’ve run into climateprediction.net in a couple of other situations recently. Myles Allen of climateprediction.net was the guy interviewed by the BBC about the misleading press release, which even realclimate disowned.

    One of the scientific papers up at their website as a climateprediction.net production is Hegerl et al 2006: G.C. Hegerl, T.J. Crowley, W.T. Hyde and D. J. Frame, Climate sensitivity constrained by temperature reconstructions over the past seven centuries, Nature, 440, p1029-1032, April 2006. This is the paper which appears to have cocked up the confidence interval calculations – the one that upper and lower confidence intervals criss-cross. Ah, but it’s “peer reviewed” and Nature no less. I guess that means that upper and lower confidence intervals really do cross from time to time. It’s just that the unwashed don’t understand this.

    D. A. Stainforth, T. Aina, C. Christensen, M. Collins, N. Faull, D. J. Frame, J. A. Kettleborough, S. Knight, A. Martin, J. M. Murphy, C. Piani, D. Sexton, L. A. Smith, R. A. Spicer, A. J. Thorpe & M. R. Allen, Uncertainty in predictions of the climate response to rising levels of greenhouse gases, Nature, 433, pp.403-406, January 2005.

  124. Posted May 20, 2006 at 6:24 PM | Permalink

    Steve: Well, programming can be/is very frustrating. You have to deal with tools which I find, frankly, to be poorly made and/or inadequate. If you try to fix them you find you could spend your whole life doing just that. You have to deal with problems with operating systems, libraries, compilers, linkers… trust me it can be maddening.

    I do occasionally lash out at people as a way to get it out of my system but I try not to do it in comments on peoples’ blogs. But yes, I can understand why he’s upset. This post is rather critical of his “baby”. (Spend months or years working on something and you’ll feel quite proprietorial over it). But, it wasn’t targetting him directly nor casting aspersions on his abilities (at least, not directly). So, I don’t feel like the way he’s treating you/us is very sporting.

    There’s an old saying which goes “every program has at least one bug”. (It goes on to say that every program can also be reduce by one line of code – and therefore all software can be reduced to 0 lines of code with 1 bug.) So I think he shouldn’t be too surprised and we shouldn’t be too critical. But really, the point of this post I felt was to throw into contrast the wild claims that were made about the results. That is what bothers me, and from what you have said above, you too — not the fact that somebody made a small mistake in the program.

  125. cytochrome sea
    Posted May 20, 2006 at 6:38 PM | Permalink

    Carl Christensen, good day to you. When I first heard about climateprediction.net, I read the stated goals of the project, and was quite interested, and almost DL’d the software to allow my computer to help out. (although my comp. is usually running simulations at night, so don’t know how much it would have helped the project 🙂

    Upon reading this thread (as of now) it appears that at least one of your posts has been temporarily “spanked” by the anti-spam software John A has helped the site with. Don’t worry, I imagine it will be returned soon. (however the numbers of the posts might be changed, so it’ll probably be better to quote arguments in a certain post than refer to the # of said post )

    Regardless, after reading the initial stated goals of the project, (I’m completely guessing here, maybe 2 or so years ago?) I was about to include myself in it, but upon further reading realized the first model to be tested (again, 2 or so years ago, so I’m fuzzy on it) was the HADcm2 or 3?, but a scaled down version it seemed, coupled to a slab ocean. (please correct my fuzziness on it if you will) Anyway, I didn’t end up trying it out.

    Later on, I went back to the website to have a look at some of the early results. One thing that piqued my interest was that some of the early runs were readily? discarded on the basis of the results being “unstable”. Granted, I didn’t scour the site enough to try to identify the criteria for what results were considered to be stable, and what were not,
    but visually it appeared that any runs showing a rapid, or even mild decrease in global average surface temps, were systematically discarded. (and labelled as being unstable)

    Memory still serves me a bit funny, but IMHO it might be more fruitful to include all runs.
    Initially my intuition imagined that the discarded runs would only include very few outliers, yet from some reading (and this might not be very accurate) the discarded results are on the order of 40-43%? If this were to be the case, it seems that this would present an enormous bias in the finalized sums. (TCO, please forgive me for hacking on ya about referring to Feynman so much earlier, as his techniques certainly seem to apply in this case 🙂

    I suppose my real question in this case would be; are there problems the ?pseudorandom? parameterizations incorporated into the model runs? Have your team been able to identify the error in the model/inputs?

    Also, importantly; again, I’m most curious about the criteria for the basis of rejecting runs. Along with that, have any of your higher surface temp. estimate runs been classified as being “unstable” as well? I only ask these questions as the results of the tests have been presented to high profile->press release literature, ( I apologize, as not remembering if it was Nature or Science ) and the press releases and accompanying news articles most likely will not be able to express the value of the research, and also more? importantly the value of the caveats the particular researchers have about their own work.
    Gotta be cautious… 🙂

  126. Doug L
    Posted May 20, 2006 at 6:44 PM | Permalink

    Dear Carl,

    This is not working. If only you could read your comments through the filter of having read this blog over a long period of time. So far, you’ve said almost nothing original.

    I do pay attention to weather forecasts. A bad map is better than no map at all. It’s still a bad map though.

  127. Steve McIntyre
    Posted May 20, 2006 at 6:47 PM | Permalink

    I thought that John A’s post was needlessly triumphal and didn’t like the tone. It sounds too much like Lambert, another computer guy. I disagree with the statement "We could have told them that." I don’t see how we could have told them that. It’s not a view I hold. Having said that, how hard would it be for him to agree that the pillorying of McKitrick on radians-degrees was uncalled for.

  128. Ian Castles
    Posted May 20, 2006 at 7:28 PM | Permalink

    Steve, I suspect that the links in the first para. of your #124 are to a different Carl Christensen. But the ‘C Christensen’ who’s listed among the authors of the Nature article in January 2005 must be the person who posted #116 and #117. The other authors in that list include Myles Allen (Review Editor of the ‘Global Climate Projections’ chapter of AR4) and James Murphy (Lead Author of the same Chapter). They too should be embarrassed that at the tone and (lack of) content of Christensen’s posts.

  129. K Goodman
    Posted May 20, 2006 at 7:36 PM | Permalink

    Linking Climate Change Across Time Scales

    http://www.whoi.edu/mr/pr.do?id=13207

    What do month-to-month changes in temperature have to do with century-to-century changes in temperature? At first it might seem like not much. But in a report published in this week’s Nature, scientists from the Woods Hole Oceanographic Institution (WHOI) have found some unifying themes in the global variations of temperature at time scales ranging from a single season to hundreds of thousands of years. These findings help place climate observed at individual places and times into a larger global and temporal context ….

  130. Posted May 20, 2006 at 9:06 PM | Permalink

    Re: 116,117. Dear Cartman: I don’t recall anyone saying blogs were a replacement for peer-review publishing. Some of us here do publish, some even get ‘contrarian’ papers published sometimes.

  131. Steve McIntyre
    Posted May 20, 2006 at 9:22 PM | Permalink

    #129. No, the links are to the Carl Christensen posting here. At this site, he says:

    Hi, I’m an “ex-pat” American who has been working on and running climateprediction.net at Oxford University. Now I’m happy to be helping out on E@H for a few months at Max-Planck-Institut in Golm, Germany before returning to Oxford! 🙂

    and links to the webpage cited above.

  132. Tom Brogle
    Posted May 20, 2006 at 11:00 PM | Permalink

    The effects of Chaos Theory was first demonstrated when computers were used in an attempt to produce long range weather forecasts.
    It was was found that minor changes in initial conditions produced widely varying outcomes.
    This has been called the Butterfly Effect
    I looks to me that Climate Prediction net is demonstrating the above.
    Hope that this is not repetition.

  133. Chas
    Posted May 21, 2006 at 3:05 AM | Permalink

    Carl/anyone; a general CPDN question:
    When the parameters are tweaked for the CPDN model runs are they tweaked by a scheme that says something like ‘lets tweak them by a fixed percentage (say +/- 20%)’ or they distributed across the model runs according to someone’s opinion as to their likelyhood?
    I.e When we look at the CPDN results are we looking at some sort of Bayesian posterior distribution, or are we looking at something else (what?)?

  134. Posted May 21, 2006 at 3:15 AM | Permalink

    Steve — your ignorance shines clearly through! Again, for the umpteenth time, the problem you cite is not a “software programming” or coding problem, it was with a binary, ancillary file. It had nothing to do with the climate model program, or the BOINC software wrapper etc. This will be my last post on the subject, you dweebs can enjoy your mutual-masturbation again!

  135. John A
    Posted May 21, 2006 at 3:26 AM | Permalink

    I disagree with the statement “We could have told them that.” I don’t see how we could have told them that. It’s not a view I hold.

    Well Steve, if climate models need tens of thousands of runs to produce a probabilistic result whose mean depends on the programmed sensivity to carbon dioxide given by the modellers (as admitted by Myles Allen on the BBC recently) then what good is it?

    What is the utility of this climate modelling exercise at all? If the mean is known and the distribution is wide (floor to ceiling) then what scientific result is being produced?

    Do you think Carl Christensen will bet any money on the predictions against the real world? Or Dave Frame? Or Myles Allen?

    If you or Dave Stockwell can produce just as “good” agreement with real data using random numbers and LTP, then what is the point?

    I would have thought that when a scientific experiment is designed, clear criteria are set for what is being demonstrated in such as way as to allow falsification of a particular hypothesis.

    As far as I can see, the only purpose of cp is to provide scary headlines for the BBC “Climate Chaos” season. Unless of course, you know better…..

  136. cytochrome sea
    Posted May 21, 2006 at 4:21 AM | Permalink

    #135
    John A, I believe this to be a completely coherent rant. (I’m using the term “rant” here in a very mild way) 🙂

  137. Peter Hearnden
    Posted May 21, 2006 at 4:37 AM | Permalink

    Re #135

    Do you think Carl Christensen will bet any money on the predictions against the real world? Or Dave Frame? Or Myles Allen?

    I don’t know. But if you be a betting man (my guess is you’re not) get in contact with James Annan.

  138. Ed Snack
    Posted May 21, 2006 at 5:12 AM | Permalink

    Oh dear dear dear, Carl, please, your are simply exposing your ridiculous self to some well deserved derision. Haven’t laughed so much at a pompous wanker for years !

    ps: I do hope you don’t program as badly as your thought processes would appear to indicate, although it is perhaps as likely that the real Carl Christensen will turn up and disavow this loon.

  139. Spence_UK
    Posted May 21, 2006 at 5:16 AM | Permalink

    This will be my last post on the subject

    OK, you came on here to defend your position on climateprediction.net, which I can understand, although the manner in which you defended it was highly unprofessional.

    But I note you have continued to fail to support your position on statistical analysis with any kind of substance. And now you’re running away.

    My take away from this is:

    – You are a bit part player in the climateprediction.net experiment, which was a screw up on a grand scale, and enjoyed the humiliation of laundering its dirty linen in full public view, but the screw up was not your fault (and you are not senior enough to carry the can anyway).

    – You have an ideological/political disagreement with the views of many people here, but you have an insufficient grasp of statistics to tackle Steve’s arguments on merit. Despite disliking people here for being self-proclaimed experts, you making sweeping statements about a scientific area (statistical analysis) you appear to have little understanding of.

    – Rather than either supporting your comments, or apologising to Steve for making unsupportable claims about his work, you prefer to hypocritically dump ad homs and then run away.

    I think this is a fair summary of your contribution to ClimateAudit. Thanks for coming.

  140. Steve McIntyre
    Posted May 21, 2006 at 6:03 AM | Permalink

    Carl,

    Gerry Browning, who posted up on numerical climate models only a few posts ago, is a distinguished author of many peer-reviewed papers.

    In #135, Carl Christensem said:

    Steve “¢’‚¬? your ignorance shines clearly through! Again, for the umpteenth time, the problem you cite is not a “software programming” or coding problem, it was with a binary, ancillary file. It had nothing to do with the climate model program, or the BOINC software wrapper etc. This will be my last post on the subject, you dweebs can enjoy your mutual-masturbation again!

    I had previously said in #124:

    He is described as the chief software engineer for climateprediction.net -see a paper of his here. He’s presumably the person with climateprediction.net who had to carry the can for the software problem. No wonder he’s not very happy. I’d imagine that some of his associates were furious at him.

    I reviewed my comment and the phrase in quotation “software programming” , which presumably is quoted to make fun of a turn of phrase, did not occur in my comment. I searched the entire thread and, other than Carl, the term does not occur elsewhere.

    The problem does not seem to be with the data in the ancillary file, but how the file was read. This is surely a software problem – one that is easily fixed, but one with embarrassing consequences – hardly unprecedented with software.

    I didn’t suggest that this implied any defects in the BOINC software or that it implied defects in the Hadley Center model. On the other hand, being able to fix this does not imply that the Hadley center model is therefore valid.

    When problems occur – and one sure happened here – then someone has to take responsibility. That doesn’t mean that you fire the person or that the person is incompetent. In this case, the BOINC aspect of the software is undoubtedly a considerable technical accomplishment. But the person who raised money for the project cannot have been very happy about the situation and my experience with organizations makes me believe that the chief software designer would not be exempt from this unhappiness.

  141. kim
    Posted May 21, 2006 at 6:04 AM | Permalink

    By golly, with Carl at the tiller we can’t help but find our way to accurate modeling.
    ===============================================

  142. Steve McIntyre
    Posted May 21, 2006 at 6:25 AM | Permalink

    There’s some useful information about the climateprediction.net model where they inform us that Nature is a “very prestigeous” scientific journal.
    here

    Nature is a very prestigeous scientific journal. CPDN’s has had their paper go through the extensive peer review process necessary for it to be approved for publication. It was published on 27th January 2005. The paper is available here.

    They then proceed to explain the following:

    Most papers begin with an abstract of what they are going to say. In this case:

    That was very, very helpful information. I’m sure that clarified what was otherwise a lingering mystery for many.

  143. Posted May 21, 2006 at 6:41 AM | Permalink

    >The problem does not seem to be with the data in the ancillary file, but how the file was
    >read

    Ummm, wrong, as stated many times, the ancillary data file had an error in a date header. You dopes continue to try to spin it that “the CPDN software is wrong” or even more humorously extrapolate it to “climate models are bad.” A program cannot self-correct itself when given a wrong input (“garbage-in-garbage-out” — although the error in the data file is not wholly a loss).

    Yet you and others cavalierly claiming this invalidates Hadley Centre models, and other climate models, is pretty hilarious. I mean, you guys don’t even know the basics of the CPDN project, i.e. it was around before & will be long after the BBC involvement in one experiment of ours ends.

    Look, you have every right to have this website to tout your bad-science-in-the-cause-of-pseudo-libertarianism-and-corporate-hegemony. But please don’t confuse your own, and your “skeptic junk science” audience of morons (many of whom are quite as happy saying “Creationism” is “real science”, “stem cell research is evil” etc) with actual science in the peer-reviewed literature. Your site & audience is more like when Hitler’s scientists were “contrarian” and “disproved” Einstein. Or sort of like the write Christopher Hitchens, who once thought himself a “contrarian” in the Orwell tradition, decided to sod it all and just be a sell out for the Richard Mellon Scaife/Cato Institute types.

  144. kim
    Posted May 21, 2006 at 6:56 AM | Permalink

    You seem to have a lot of resentment about politics. Look, it’s really just political science. Hook up a million computers and start getting some data. Chaos, schmaos.
    =========================================

  145. Dave Dardinger
    Posted May 21, 2006 at 7:34 AM | Permalink

    re: #144

    You know, it’s bad enough to let TCO post here while drunk. But letting drive-by trolls post while drunk or drugged is a bit much. I vote for letting him talk to Spam Karma till he gets sober.

  146. kim
    Posted May 21, 2006 at 7:39 AM | Permalink

    I think he’s cold sober. And he claims to be an Oxford scientist, too.
    ==========================================

  147. Posted May 21, 2006 at 7:41 AM | Permalink

    well I’m a computer scientist at the very least, which makes me more qualified than 98% of the skeptics (or “septics” as I guess James Annan calls ’em)! 😉

  148. kim
    Posted May 21, 2006 at 7:44 AM | Permalink

    Does your computer science diploma document the 98% figure? It doesn’t? Where did you scientifically come up with that number, then.
    =========================================

  149. Posted May 21, 2006 at 7:47 AM | Permalink

    oh come on, a random sampling of posts here reveals it to be the sort of Yahoo board Republicans, Free Republic readers, CEI/Cato types, etc, it’s really hilarious you dopes are trying to hide this fact underneath statistical pseudo-science.

  150. kim
    Posted May 21, 2006 at 7:52 AM | Permalink

    You’ve not sampled randomly or objectively.

    And you’ve again called the statistics pseudoscience. Do you have proof of that, or are you just conjecturing, or worse, repeating what you don’t understand.
    ======================================

  151. kim
    Posted May 21, 2006 at 7:55 AM | Permalink

    I would feel guilty about wasting Steve’s bandwidth in this fluffy way, but it is amusing watching you degrade Oxford. Oh my, you’ve come a long way, baby.
    =============================================

  152. jae
    Posted May 21, 2006 at 8:37 AM | Permalink

    Carl: You poor bitter, angry person, you are a hoot. Your unbelievably harsh ad-homs are really laughable. LMAO.

  153. Tom Brogle
    Posted May 21, 2006 at 9:01 AM | Permalink

    I resent Carl Chhistiansen telling us that we are all right wingers.
    That arch free marketeer Maggie Thatcher was the first politician to warn of CO2 causing GW .
    The present leader of the Tories in the UK has based his policies on fighting AGW and has gained many votes by doing so.
    The trouble with being a skeptic is that we know that AGW is just Twaddle and given a level playing field we could go a long way to allaying public fears.
    But all we get is rejection and insults
    C C should remember that people far more knowledgeable than he are skeptics and (I dare say) none of them are in for the money. Most incur costs in their skepticism

  154. TCO
    Posted May 21, 2006 at 9:12 AM | Permalink

    My handlers at the Free Republic have authorized me to deal with Chrissie. Let me at him.

  155. Posted May 21, 2006 at 9:16 AM | Permalink

    oooh you’re such a bad boy “TCO”, (emphasis on “boy”). Shouldn’t you be in a mass grave somewhere, tough guy? 😉

  156. Roger Bell
    Posted May 21, 2006 at 10:36 AM | Permalink

    If I could get a word in edgewise, I would really like to recommend that anyone who has not done so should turn to:
    http://www.co2science.org/scripts/CO2ScienceB2C//subject/s/summaries/solariceage.jsp
    Quotes from this follow:
    Periods of higher solar activity result in a lower production of atmospheric 14C. Plants remove carbon from the air and so create a history of solar activity which can be compared with climatic indices. For example, the work of Hong et al using 18O showed the most dramatic cold events of the Littler Ice Age were centered at roughly AD 1550, 1650 and 1750, which gives remarkable agreement with the atmospheric 14C record from tree rings.
    Lean, of the NRL’s Center for Space Researchin Washington, DC, makes the following point about climate models and the sun-climate connection :
    ” a major enigma is that general circulation climate models predict an immutable climate in response to decadal solar variability, whereas surface temperatures, cloud cover, drought,
    rainfall, tropical cyclones and forest fires show a definite correlation with solar activity.”

  157. Posted May 21, 2006 at 10:50 AM | Permalink

    Steve, you are being silly. You should not take it as a personal attack if someone refers to McKitrick and Michaels as M&M.

    McKitrick’s degrees/radians was hardly the only one that I discovered. You can see the list here. McKitrick used cos(abs(lat)) in his model. But that’s the same as cos(lat). Why take abs as well? Probably because he had abs(lat) in there and decided to try cos(abs(lat)) instead. Now there isn’t any good reason to choose one over the other, so the normal thing would be to choose the one that made for the best fit. However, because of the degrees/radians error taking the cos made the fit a lot worse. But it did make the effect of the economic variables much stronger. The sort of one-sided error checking exposed here is a good reason for people not to trust his work.

  158. TCO
    Posted May 21, 2006 at 10:56 AM | Permalink

    Bulls***, Tim. He’s been referred to as that the entire time. And now you come in and do the same with some other group and cite an error and don’t clarify the differnce. You’re full of s***. F***er.

  159. TCO
    Posted May 21, 2006 at 10:58 AM | Permalink

    And when the f*** are you going to write your “Mann screws up again” article for his sin error?

  160. Posted May 21, 2006 at 11:09 AM | Permalink

    TCO, you’re incoherent. Come back when you are sober.

  161. Roger Bell
    Posted May 21, 2006 at 11:56 AM | Permalink

    GENTLEMEN, PLEASE!!!!
    Look, the WEB site I give in post 156 gives pretty damning criticism of some climate “scientists.”

    …a vast repository of empirical findings from an array of scientific disciplines is being ignored by a small coterie of climate scientists who are focussed almost exclusively on developing computer models of how they BELIEVE earth’s climate system operates.

    Lean of the Naval Research Lab has done outstanding work!!!

    Please read it!

  162. TCO
    Posted May 21, 2006 at 12:15 PM | Permalink

    Tim, I’m embaressingly drunk. But not yet incoherent. Please be precise when chiding me.

  163. John Lish
    Posted May 21, 2006 at 12:38 PM | Permalink

    #161 – Interesting article Roger, thanks for posting it. Promotes more questions for me about the relative forcings of CO2 and solar changes re the climate. It would be useful to see a reconciling exercise.

  164. Kenneth Blumenfeld
    Posted May 21, 2006 at 12:42 PM | Permalink

    Carl,

    As someone who probably disagrees with the majority of posters here about climate change/global warming, I understand your frustration with the tone of this site. Yet, as someone who lost high double-digit family members in the Holocaust, I can never understand flippant references to it or any of its major characters (currently #143). Doing so simultaneously over-villianizes the skeptics *and* trivializes the brutality of the Nazis. You may seriously disagree with skeptical, or even mainstream climate science rhetoric, and you may be passionate about your position, but surely you can find more creative (and realistic) comparitive devices.

  165. kim
    Posted May 21, 2006 at 1:21 PM | Permalink

    Nice, Roger; ironic that it’s tree ring data.
    ========================

  166. Paul Linsay
    Posted May 21, 2006 at 1:34 PM | Permalink

    #161. Interesting article with lots of good information on the sun-climate connection. One idea that they don’t touch on is the Svensmark effect During periods of high solar activity the earth is shielded from cosmic rays which seed clouds. So there are fewer clouds, hence more sunlight, and a warmer earth. The reverse happens with low solar activity, more clouds, less sunlight, and a cooler earth. C14 is a good marker for this since it is cosmogenic in origin and its abundance is inversely related to solar activity.

    Total solar irradiance also goes up an down with increases and decreases in solar activity which combines with the changes in cloud cover to create an even bigger effect.

    For those that doubt that cosmic rays can seed clouds here’s a picture from a cloud chamber. This picture won Carl Anderson the Nobel Prize in physics for the discovery of anti-matter.

  167. John A
    Posted May 21, 2006 at 2:40 PM | Permalink

    Re #164

    I agree completely with you on that one. The behavior of propagandists of all kinds to attempt to demonize their political opponents with references to the Nazis and the Holocaust is becoming ever more shrill and it personally disgusts me. I might dislike some people for their behavior of one kind or another, but to compare them with Nazis is to me, to demean the historical events as if these events are some sort of rhetorical game. It’s an obscenity.

  168. Peter Hearnden
    Posted May 21, 2006 at 3:25 PM | Permalink

    What disgusts me (apart from comparing people to the Nazis) is when it’s said a group of people want to so reduce human population as to ‘send us back to the stone age’. Such a comment doesn’t imply the milion of murders the horrid nazis carried out but billions of deaths. Perhaps those who say of people like me that we ‘want to send us back to the stone age’ might think on that as well?

  169. TCO
    Posted May 21, 2006 at 3:26 PM | Permalink

    I like being mean to you. America! F*** yeah! We’re here to save the day…f*** yeah!

  170. MrPete
    Posted May 21, 2006 at 3:40 PM | Permalink

    #143 “…the ancillary data file had an error in a date header. You dopes continue to try to spin it that ‘the CPDN software is wrong’…A program cannot self-correct itself when given a wrong input (‘garbage-in-garbage-out’…)”

    Carl, now I’m ROFLOL! You are truly degrading the cachet of your degree with every supposedly-factual statement you make.

    About the “ancillary” data file. Your software system will not function correctly without it. It is not produced by the system, nor is it a user input. It is a static, constant, set of parameters. It is an integral part of the system as built.

    Is it not a set of data constants? Constants are every bit a part of a “program” whether compiled in, read at run time or whatever.

    If you had an error in your data constants, you had a program bug. Simple as that.

    GIGO is as GIGO does. If there’d been a more robust build management and QA system, the flaw might have been caught.

    I’ve built systems that are robust to any kind of garbage input, where a flaw showed up in only one of a million or so user events… and have fixed said flaws. And then we could find no further errors after many tens of millions of random events.

    Just admit there was a bug, fix it, write more robust build-management and QA code in the future, and move on. Hopefully with a lot more humility in the future.

    PS The fact that your system is capable of producing “unstable” results is also worrisome. How can you be sure your system is processing its data correctly???

  171. John A
    Posted May 21, 2006 at 3:57 PM | Permalink

    What disgusts me (apart from comparing people to the Nazis) is when it’s said a group of people want to so reduce human population as to ‘send us back to the stone age’. Such a comment doesn’t imply the milion of murders the horrid nazis carried out but billions of deaths. Perhaps those who say of people like me that we “want to send us back to the stone age’ might think on that as well?

    I’ve no idea. You’d better ask Chris Rapley who argued that the Earth’s “carrying capacity” was 1-2 billion, or Aubrey Mayer who openly advocates “Convergence and Contraction” with view to reduce man-made carbon dioxide to near zero by “reducing consumption” and economic vices that make the Kyoto Protocol look positively liberal by comparison. I encourage you to read Mayer’s book, which is filled with Zen Buddhist arguments and iconography as well as bizarre pictures that may mean something to Aubrey, but mean nothing to the rest of us.

    Such views beg the question as to where two thirds of the world’s population are meant to go, or who gets to decide who reproduces and how many.

    Back to the Stone Age is exactly right. The question is, is that what you want?

    I’m not accusing Rapley or Mayer of being like Nazis. I’m not accusing them of denying the Holocaust, which is the sort of rhetoric that has persistently come ever more from climate alarmists towards skeptics, in place of or instead of explanations of their experimental methods, data and conclusions.

    The people who use these metaphors instead of debate, like Cris Christensen, have no real idea what the Holocaust was in history, because if they did, they’d never use such comparisons – unless they have no conscience and no shame.

  172. per
    Posted May 21, 2006 at 4:39 PM | Permalink

    Carl christenson wrote:

    oh come on, a random sampling of posts here reveals it to be …

    ah, so you can take a sample of posts and deduce something from that. Good. Let’s try this on carl’s posts:

    Shouldn’t you be in a mass grave somewhere, you dopes, your “skeptic junk science” audience of morons, you dweebs can enjoy your mutual-masturbation, the cavalier way you douche-bags…

    So what we have is lots of what I find to be fairly offensive language; yet when it comes to supporting something substantive, like the phrase, “the spurious statistical analyses”, CC can’t come up with a shred of evidence.
    Computer scientist ?
    yours
    per

  173. cytochrome sea
    Posted May 21, 2006 at 5:56 PM | Permalink

    Kenneth Blumenfeld: Spot on, a breath of fresh air. Upon reading your post, my grievance is there for you and your families’ losses. I only wish it would have never happened. CURSE WORD, that’s my sincere assesment. (and CURSE WORD, CURSE WORD,CURSE WORD, and many more.)

    Carl, I feel obligated to retract my politeness in my previous response. To be blunt, the project is NOT a scientific analysis. Let us be clear on that, it is NOT scientific, unless there are very clear reasons why some data is discarded and some runs are not discarded.

  174. Bob K
    Posted May 21, 2006 at 6:56 PM | Permalink

    Being curious, I browsed climateprediction.net and from what I gather, it sounds like they are trying to use brute force to attempt to find correct values for the parameters. Everyone participating starts with a unique set of parameters for their model. I counted 34 parameters in the list found here. http://www.climateprediction.net/science/parameters.php
    These do not include the external forcings they employ, of which there are several hundred combinations. I’m going to ignore them in my calculations although they would add considerably to the search space. I’m feeling lazy and don’t want to count them up.

    I’m going to guess a little here.
    Since the parameter space explodes in size if more than two values are allowed, I’ll assume the parameters have only 2 different starting values each. That would amount to 2^34 = 17,179,869,184 unique parameter sets. Optimistically guessing that one machine can do one run in 200 hours and uses 1kwh per 4 hours, each run will consume 50kwh. About $7.50 per run @ $.15 per kwh. Total power consumption to complete the parameter space will be 858,993gwh. Heat generation anyone? 🙂

    It’s unlikely, but maybe they’ll only have to search 10% of the parameter space to find what they’re looking for. If so, using a electric cost of about $150,000 per gwh the cost will only be 858,993gwh * $150,000 * .1 = $12,884,895,000 for each 10% evaluated. This will amount to a paltry 39,223,445 years of computer run time.

    Of course this only works out if only two values are possible for each parameter(not likely), and they’ve included all the necessary parameters(not likely), and the correct parameter values are to be found in a particular set. Oh yeah, and there aren’t any programming errors.

    Sounds to me like they’re doing something similar to what can be done if you had all the data from last years Racing Forms. Search the data space making adjustments that evaluate to more winners, until you come up with a formula that shows a profit for last racing season. It’s unlikely to work this year, but you can market it to suckers by showing the great past performance. Some people will buy into anything.

  175. John A
    Posted May 22, 2006 at 2:33 AM | Permalink

    Re #174

    Therefore, in the real world, climateprediction cannot run more than a small fraction of 1% of the restricted parameter space. Maybe they’ll select “a few good men”

  176. Steve McIntyre
    Posted May 22, 2006 at 7:20 AM | Permalink

    I checked the Spam Karma and confirmed that nothing from Carl Christensen has been hung up. As most of you realize, if you intervene to stop the inflow of such abusive and intemperate comments, the person usually runs off and complains that they are being censored, which then trivializes my objections to the censorship policies at realclimate, as Armand has pointed out. So while I may occasionally delete comments from regular readers, I’m extremely reluctant to delete or snip comments from people like Carl, no matter how intemperate. In my view, such comments simply embarrass themselves and their associates.

    In this case, the situation had really gotten out of control both with obscenities and talk of “mass graves”. Yesterday morning, I emailed Dave Frame of climateprediction.net (who had posted here and below in a cordial way) and drew his attention to the postings of his associate. While I have not received any acknowledgement from Frame, Christensen has not made any subsequent posts.

  177. TCO
    Posted May 22, 2006 at 7:51 AM | Permalink

    Steve:

    You are right to allow him to post, but are wrong to report him to his superior. He has been aggressive in discussion and has failed to engage on technical points. But so what. That’s par for the course.

    If he starts posting racist chants or something, then report him. But some skirting of Godwin’s Law is not grounds for stopping the discussion. (And note the accused parties, were very quick to paint themselves as outraged victims. Almost like Steve Kerr doing the “foul flop” in the NBA…)

    Honest, Steve, you are a little “off” in your reaction to this guy. Let him argue his case or just do the internet piss and moan thing. Heck, I AGREE with him on the insular nature of internet blogs and on how you ought to put yourself in combat by writing real journal articles. If you’re scared to put the same analyses that you write here into a journal article, then I can’t take them as seriously as if you had. (And I really think you are way too timid about being caught in a “sin error” or the like.) Less worry about the perfect attack plan and more attacking will help the war more.

  178. Steve McIntyre
    Posted May 22, 2006 at 8:11 AM | Permalink

    TCO, your attitude on obscenities and rambling is different than other readers. If people want to flame each other, they can go to sci.net. I received several serious and reasonable complaints.

    Christensen linked to climateprediction.net in his signature url and additionally involved the organization that way. I reported that I’d contacted the organization so that this was on the record.

    I’m not "scared" of putting stuff into journals. If I was "scared", would I have written what I have? Give me a break. I’m working on a couple of things right now that look interesting.

  179. Posted May 22, 2006 at 8:46 AM | Permalink

    Re 178. I agree with Carl that many of the skeptics arguments and tactics are ineffectual, and why is a worthwhile discussion point. He was embarrasing his employers though with his contemptuous and colorful language, and Steve did them a service. He has the option of posting anonymously, but chose to make his identity known and so spoke for them in a way.

    Re 177. My take on Steve’s approach is this — you do not go after elephants with buckshot. To spray papers everywhere irritates the elephant and wastes your time. To bag the big ones you need to pick your target and hit it hard and accurately then follow through. I may be reading too much into Carls invective, but I actually think that was the essence of his advice.

  180. Peter Hearnden
    Posted May 22, 2006 at 8:48 AM | Permalink

    Re #171

    I’ve no idea. You’d better ask Chris Rapley who argued that the Earth’s “carrying capacity” was 1-2 billion, or Aubrey Mayer who openly advocates “Convergence and Contraction” with view to reduce man-made carbon dioxide to near zero by “reducing consumption” and economic vices that make the Kyoto Protocol look positively liberal by comparison. I encourage you to read Mayer’s book, which is filled with Zen Buddhist arguments and iconography as well as bizarre pictures that may mean something to Aubrey, but mean nothing to the rest of us.

    Such views beg the question as to where two thirds of the world’s population are meant to go, or who gets to decide who reproduces and how many.

    Back to the Stone Age is exactly right. The question is, is that what you want?

    May I be allowed to reply without my comments being deleted?

    John A you are amazing. You, rightly, take offence at being compared to Nazis and then proceed to ‘wonder’ if I’m one who wants billions to die for my ’cause’. Nothing it what I’ve posted anywhere would suggest I want to go back to the stone age. Why you think I’d not take offence at that suggest (even if it be obliquely done), and why you use the same kind of rhetorical device you condemn others for using only you can answer.

    So, your first paragraph is not what I think. I’m not interested in zen buddism, nor have I heard of Aubrey Meyer – perhaps you’re trying for guilt by association?

    But, having got the idea I’m some kind of Zen Buddist, and that I support the killing enforced reduction of population by billions, into the open and into peoples heads you then ask me if I want to go back to the stone age.

    Again, to stress, my answer is: NO I DO NOT. Clear enough?

  181. TCO
    Posted May 22, 2006 at 9:11 AM | Permalink

    The buckshot theory is silly. I’ve written more papers and done more reading on the theory/culture of science publishing than Steve or you have. Steve should push the peanut ahead and should write for the record. Analyses that are not put in the archived records are wastes. In this case, there’s no government funding that is being wasted (like when a grad student drops out and doesn’t publish), but it’s still a waste. Read the classics. Read Wilson. Or just look at this blog and the sheer mass of incomplete, unpublished analyses. It’s appalling.

    He is WAY, WAY too worried about playing the gotcha game and avoiding it against himself (and that’s where the “scared” comment comes in and I stand by it.) And insufficiently worried about advancing science.

    Oh…and even from a purely tactical standpoint, the “buckshot” approach is a WAY, WAY better way to go after the goal. If he had 10 peer-reviewed papers with independant issues to his credit, he would be taken much more seriously. And deservedly so.

  182. Steve McIntyre
    Posted May 22, 2006 at 9:21 AM | Permalink

    #157. Tim Lambert wrote:

    McKitrick used cos(abs(lat)) in his model. But that’s the same as cos(lat). Why take abs as well? Probably because he had abs(lat) in there and decided to try cos(abs(lat)) instead. Now there isn’t any good reason to choose one over the other, so the normal thing would be to choose the one that made for the best fit. However, because of the degrees/radians error taking the cos made the fit a lot worse. But it did make the effect of the economic variables much stronger. The sort of one-sided error checking exposed here is a good reason for people not to trust his work.

    Tim, I haven’t asked Ross about it, but I very much doubt that he’d gone from a calculation with abs(lat) to cos(abs(lat)). He could just as easily have had abs(lat) redundantly in the formula. You surmise otherwise, but it’s not "probably" or then you slide from "probably" to stating that "one-sided error checking" occurred in your last line.

    But let’s apply the standards that you espouse here to a situation where we have proof. We know that Mann calculated the verification r2 because the procedures are in his source code, being calculated at the same time as the RE. His results failed the verification r2 test and Mann did not report this failure and even had the gall to tell the NAS panel that he did not calculate the verification r2 statistic. Or again, Mann did PC calculations without the bristlecones (the "CENSORED" file) and didn’t get a hockey stick. He didn’t report these results and later claimed that the results were robust to the presence/absence of dendroclimatic indicators. There is absolutely no doubt about either point. It’s not surmise. By your standards, surely these are "good reasons for people not to trust [Mann’s] work". It’s taken a while for you to come round on this point, but I’m glad that we’re now on the same page in condemning this sort of behavior.

  183. Francois Ouellette
    Posted May 22, 2006 at 9:36 AM | Permalink

    For those interested, the following comment by Myles Allen (http://www.climateprediction.net/science/pubs/nature_18_9_03.pdf), which appeared in Nature in 2003, describes ClimateNet’s approach to simulation, but also gives a rather good overview of the challenges met by modellers. Upon reading a few of the papers from that group, I got the same feeling that I got from reading the IPCC TAR: there is a huge gap between what is said in the scientific litterature, and what is proclaimed in the media. If that were just the case of radical environmentalists trying to hype an issue, I could understand, but in many cases it is the same scientists who say one thing in a paper, and a different thing in public (e.g. Jim Hansen). Even more troubling is that one can now see a gradual shift even in the scientific litterature, to a more shrill and politically motivated message, especially in a journal like Nature, which seems to enjoy the publicity it gets from that kind of papers. The Hockey stick controversy is but one sad example of this.

    That being said, I wonder if any modellers have ever tried to include solar and/or cosmic rays effects to see if they get a better fit. Anyone knows? (notice the not too subtle way that I’m trying to tone down the debate somewhat… if this blog turns into a shouting match between a couple of idiots, I’ll lose a precious source of information and discussion!…)

  184. John A
    Posted May 22, 2006 at 10:10 AM | Permalink

    [snip]

    Steve: John A, sorry about this. Both you and Peter have been through this before. Let him have the last word. I don’t think that it matters.

  185. Posted May 22, 2006 at 12:37 PM | Permalink

    Steve, so why not ask Ross about it?

  186. Posted May 22, 2006 at 8:56 PM | Permalink

    Steve says: “Tim, I haven’t asked Ross about it, but I very much doubt that he’d gone from a calculation with abs(lat) to cos(abs(lat)).”

    I disagree. How could we resolve this? Oh, I know, why don’t you ask him?

  187. Steve McIntyre
    Posted May 22, 2006 at 9:59 PM | Permalink

    #185. Tim, OK, I’ll ask him.

    But we have the evidence for you to answer my question. It is beyond doubt that Mann witheld adverse verification statistics and misrepresented the lack of robustness to bristlecones. Both of these meet your standards “not to trust” his work. Do you agree – yes or no? If you won’t answer a simple question like that on standards that you yourself advocated, why should we pay any attention to anything that you say?

  188. Posted May 23, 2006 at 12:47 AM | Permalink

    Steve, here’s what MBH98 actually says: “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”. That’s not the same thing as saying that their reconstruction is robust with respect to bristlecones. They are talking about the “long-term trend” and not the reconstruction and they also qualify the word “robust”. Since you have misrepresented what MBH98 says, why should we trust anything you say about Mann’s work?

  189. Posted May 23, 2006 at 1:04 AM | Permalink

    OK Tim, so it’s OK that Mann’s results are not robust, because he put the word “relatively” in front of it?

    Please, talk about clutching at straws. I believe what Steve says because he’s shown, time and time again, that he knows what he is talking about, and so far nobody — especially not you — has shown any flaws in his statistical analyses.

    Just one of the several problems Steve has identified would be enough to convince me Mann does not know what he is doing. Couple that with his obstructionism and I’d say he’s doing a major disservice to science. I wonder why you defend him so vehemently?

  190. Willis Eschenbach
    Posted May 23, 2006 at 1:09 AM | Permalink

    Tim, you only replied to half of Steve’s question, that about robustness. You didn’t touch the radioactive question about withholding adverse verification statistics. Care to comment on that?

    Also, I don’t understand the distinction you are making between the “long-term trend” and the reconstruction. The long term trend they are talking about is the result of the reconstruction. If one is not robust, neither is the other.

    w.

  191. Willis Eschenbach
    Posted May 23, 2006 at 1:14 AM | Permalink

    Tim, upon re-reading, I understand even less.

    If a reconstruction depends on the presence of bristlecones, and doesn’t come out anywhere near the same without bristlecones, that’s not “relatively robust” — that’s not robust at all.

    If you had to take out a quarter of the proxies to get a change, that’s relatively robust. But a result that depends predominantly on the inclusion of a single proxy is not robust at all, and is far from being “relatively” robust. Not only that, but Mann knew that it depended on just that proxy.

    Like Nicholas, I fail to see why you are defending such poor science. You can’t show that Steve’s analysis is wrong, yet you continue to parse the sentences to try to find some shred of support for Mann.

    Why?

    w.

  192. Posted May 23, 2006 at 1:25 AM | Permalink

    Oh for heavens sake. Do you understand what the word “trend” means? Two different reconstructions can have the same long-term trend.

  193. Posted May 23, 2006 at 1:58 AM | Permalink

    Well, I’m only eyeballing it but the trend on that data sure looks different than the trend on the “hockey stick”.

    What am I missing? Where’s the evidence that removing some of the “dendroclimatic indicators” does not change the trend? Because, it sure looks different to me without the “Bristlecones”…

    At this point I think Mann should be the one who has to defend his statement. Let’s see his calculations with and without the dendro. data showing the two similar trends. Since he made that claim, he must have done the calculations, therefore it would be easy for him to demonstrate, right?

  194. Louis Hissink
    Posted May 23, 2006 at 5:58 AM | Permalink

    Re # 191

    The logical fallacy – my cat has 4 legs and my dog also. Hence cat=dog.

    Data set A has trend 1

    Data set G also has trend 1,

    Hence Data set A = Data set G.

  195. kim
    Posted May 23, 2006 at 6:01 AM | Permalink

    What the world needs is more geometry.
    ==========================

  196. Steve McIntyre
    Posted May 23, 2006 at 6:30 AM | Permalink

    Robustness
    Let’s start with the positiin that Mann knew that the reconstruction was not robust to the bristlecones from the calculations in the CENSORED file i.e. that without the bristlecones, you got a high 15th century. The statement from MBH98 that this is "relatively robust" is false. The main result – 20th century uniqueness – is not robust at all. This is now admitted by Mann et al – except they call it "throwing out information" without reconciling their previous misrepresentations.

    “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”

    But it’s not just this statement that you have to contend with. Mann added a great deal to this statement in Mann et al [2000] which is online here. Potential CO2 fertilization was an issue mentioned in IPCC SAR with tree ring-based reconstructions. Mann says here:

    We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.

    He goes on to say:

    MBH98 found through statistical proxy network sensitivity estimates that skillful NH reconstructions were possible without using any dendroclimatic data, with results that were quite similar to those shown by MBH98 based on the full multiproxy network (with dendroclimatic indicators) if no dendroclimatic indicators were used at all. We show this below for annual-mean reconstructions of Northern Hemisphere mean temperatures.

    Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question. This is most probably a result of the combination of our unique reconstruction strategy with the careful selection of the natural archives according to clear a priori criteria

    Verification Statistics
    Again, Tim, answer the question. Even Wahl and Ammann have been forced to agree that Mann’s reconstruction flunks verification r2 and CE tests. Mann said in MBH98 that they used RE, r and r2 tests. In IPCC TAR, he said that their reconstruction had significant skill in cross-validation tests. Now that the failure of the cross-validation tests is becoming more broadly understood, he told the NAS Panel that he never calculated the r2 statistic – that would be "silly and incorrect thing to do". But his source code shows that the verification r2 was calculated at the same time as the RE statistic. However, the results were bad and he didn’t report it.

    Tim, I have mis-described nothing. So I repeat my earlier comment.

    But we have the evidence for you to answer my question. It is beyond doubt that Mann witheld adverse verification statistics and misrepresented the lack of robustness to bristlecones. Both of these meet your standards “not to trust” his work. Do you agree – yes or no? If you won’t answer a simple question like that on standards that you yourself advocated, why should we pay any attention to anything that you say?

  197. Posted May 23, 2006 at 7:49 AM | Permalink

    Oh please. The EI link even has a graph illustrating what they are talking about — the reconstruction with dendro is similar to the one without dendro. Look at the graph. It’s a shame you keep misrepresenting what Mann says — this is not helping your credibility.

  198. dave eaton
    Posted May 23, 2006 at 8:17 AM | Permalink

    Neither is being evasive and dismissive helping yours, Tim.

  199. Steve McIntyre
    Posted May 23, 2006 at 8:27 AM | Permalink

    Mann himself produced a graphic showing the effect of his calculations without the North American dendro network here with a very high 15th century as we reported. Now you can put all the trees back in except the bristlecones and get something that looks like that. So don’t kid yourself, the bristlecones and dendro indicators have a big impact.

    But you fell into a little trap here. Look at what Mann did. He’d done the 15th century calculations with and without the bristlecones – we know from the CENSORED file. That’s not what he showed here. Instead he showed the 1760 proxy network where there are instrumental temperatures. It’s a completely different network. Pretty cute, eh? Would you trust results from someone who was being so cute? Didn’t think so.

    BTW, Tim, are you ever going to answer about verification statistics? You can’t, can you? If you can’t defend it, just say so and let’s get on with things.

  200. Posted May 23, 2006 at 9:47 AM | Permalink

    Yes Steve, it’s the 1760 network and it shows that you get a similar reconstruction with just dendro, no dendro and with everything. Which is what Mann said. You just can’t help misprepresenting him can you? You are not to be trusted.

    As for your hobby horse flashing on r2: I’m not interested. The serious issue you have raised is the charge of a falsification about robustness. But as we have seen, your cdharge there is false.

    How are you doing on the cos(abs(lat)) thing?

  201. Steve McIntyre
    Posted May 23, 2006 at 10:42 AM | Permalink

    Tim, there are two issues here and you can’t pick and choose. But you haven’t dealt with either one.

    The withholding of the failed verification r2 statistic is a “serious issue”. It’s just as serious, if not more serious, as the false claim about robustness. It’s a lot more serious than you abs(lat) issue, which I’ll ask Ross about today (yesterday was a holiday in Canada). Mann said in MBH98 that verification r and r2 tests were used as well as RE and in IPCC TAR claimed significance in cross-validation tests. It was the statistical skill that was referred to in IPCC TAR, not robustness to dendro indicators. So is this “serious”? You bet it is. Plus Mann had the cheek to deny at the NAS Panel that he even calculated the r2 statistic. This is what you “trust”??

    I didn’t misrepresent MAnn. I quoted him. Look at what he said. He knew that his reconstruction was not robust to the presence/absence of dendro indicators, because he’d tested it in the 15th century and found that the bristlecones affected the reconstruction. WE know that he did the test because we can see the CENSORED directory and we know that it affects the result because Mann himself has admitted it (even if you don’t accept our calculations.)

    Mann’s decision to illustrate the 1760 network is a promoter’s trick. It’s completely misleading. But the covering language is not even carefully guarded. Read what he said. He didn’t say the just the 1760 network and later – he claimed it for his entire network including the 15th century. It’s just false. Again, I’m not misrepresenting him. I quoted him.

    So Tim, you’ve evaded the black-and-white verification misrepresentation and you’ve evaded the false claims about robustness. It’s really hard to take you seriously.

  202. Posted May 23, 2006 at 7:48 PM | Permalink

    Steve, in MBH98 they did not say that the reconstruction was robust wrt dendro but that the “long term trend” was “relatively robust”. That’s not the same thing, and it is deceitful for you to repeatedly misrepresent it. The EI paper does not say that they get the same results for the 1400 network with and without dendro. The graph is of the 1760 network — it should have been obvious that that was what they were talking about. If Mann really did say that his reconstruction was robust wrt bristlecones, how come you can’t quote him to that effect and instead have to resort to your fanciful intepretations of his writings?

    And yes, I’m evading your r2 flashing hobby horse, just like you are evading discussion of average temperature. I’m not interested.

  203. Steve McIntyre
    Posted May 23, 2006 at 8:48 PM | Permalink

    Tim, you are pathetic. Mann has talked time and time again about the need to pass verification statistics. He said that his reconstruction passed. It doesn’t. Now he tells the NAS Panel that he never calculated it. You talk about trust. Why do you “trust” him”?

    Secondly, you have not dealt with the false and misleading statements about robustness. The 15th century values without North American dendro (bristlecones) is not “relatively robust”. It’s not “robust” and it’s not “relatively robust”. The difference disentitles him from claiming 20th century uniqueness. It mattered a lot. It;’s not “relatively robust”.

    I’ve quoted Mann, but let me quote him again. You want a statement that the reconstruction is robust to bristlecones. Well, bristlecones are dendro indicators. So if he claims that it’s robust to dendro indicators, he’s claimed that it’s robust to bristlecones. Or don’t they teach that at the University of Manitoba?

    Mann said:
    “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”

    A foritiori, it is relatively robust to the inclusion of bristlecones. It isn’t and Mann knew it.

    Mann said:

    We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.

    That’s false. He tested removing bristlecones from the network and got the opposite result.

    Mann said:

    MBH98 found through statistical proxy network sensitivity estimates that skillful NH reconstructions were possible without using any dendroclimatic data, with results that were quite similar to those shown by MBH98 based on the full multiproxy network (with dendroclimatic indicators) if no dendroclimatic indicators were used at all.

    That’s false. The results without bristlecones are not “quite similar”. They’re very different.

    He says:

    Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question.

    This is a promoter’s trick. Your only excuse is that Mann didn’t lie here about the 1760 period. But he knew that this was false for the earlier periods. In a prospectus, that would be against securities laws. The other false statements would be violations of securities laws in a prospectus. So would the withholding of the adverse verification results.

    You are defending what would be violations of securities laws in a prospectus. You are pathetic.

  204. Posted May 23, 2006 at 9:25 PM | Permalink

    So your argument is that Mann is a liar because what he wrote was true, but some other statement THAT HE DIDN’T MAKE was false? How come you can’t make your case without stuffing words into his mouth?

  205. Steve McIntyre
    Posted May 23, 2006 at 9:55 PM | Permalink

    Tim, he made both false statements and misleading statements. I’m not "stuffing" words into his mouth. I’m quoting him.

    And by the way, it’s not just misrepresentations that are misconduct. Omitting adverse results is scientific misconduct. So citing 1760 results when you know that 15th century results are the opposite is misconduct.

    Mann did the same thing with the verification statistics. He cited the 1820 r2 when it passed and withheld the 15th century r2 statistic.

    And here you are supporting scientific misconduct. You should be ashamed of yourself.

  206. Posted May 23, 2006 at 10:15 PM | Permalink

    Here’s the actual quote from MBH98 that you have repeatedly lied about:

    “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”

    It does not say “reconstruction”.

  207. Armand MacMurray
    Posted May 23, 2006 at 10:32 PM | Permalink

    Re: #205
    Tim, you need to read more carefully. That’s one of the quotes Steve used in #202, and it’s the same in his post and in your post.
    What I think you’re missing is that when Steve says, e.g. “Mann said”, it DOES NOT mean “Mann wrote in MBH98,” but just means that Mann said or wrote it *somewhere*. The “reconstruction” word certainly appears in Mann et al 2000 (see Steve’s post #195). I think any reasonable person would agree that “Mann said” includes what he wrote in other papers than MBH98.

  208. Steve McIntyre
    Posted May 23, 2006 at 10:38 PM | Permalink

    Without the bristlecones, you have high 15th century. So the “trend” isn’t “relatively robust”. Mann knew that and misrepresented it. Is that all you’ve got, Tim? So let’s make a list one more time.

    1. “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”
    FALSE and known to be false from the CENSORED File

    2. “We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.”
    FALSE and known to be false from the CENSORED file

    3. “MBH98 found through statistical proxy network sensitivity estimates that skillful NH reconstructions were possible without using any dendroclimatic data, with results that were quite similar to those shown by MBH98 based on the full multiproxy network (with dendroclimatic indicators) if no dendroclimatic indicators were used at all.”
    FALSE and known to be false from the CENSORED file

    4. “Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question.”
    INTENTIONAL WITHHELD AN ADVERSE RESULT. Mann knew that this claim was false for the earlier periods.

    5. “they [MBH] estimated the Northern Hemisphere mean temperature back to AD 1400, a reconstruction which had significant skill in independent cross-validation tests.” “correlation (r) and squared-correlation (r2) statistics are also determined.”
    The skill claim is FALSE and Mann knew it to be false. Mann withheld the adverse results.

    6. “We did not calculate the r2 statistic, that would be a silly and incorrect thing to do.”
    FALSE and Mann knew it to be false.

    Tim, you’re the one who brought up the issue of “trust”. Yet you endorse scientific misconduct as long as the person supports your goals. Your’e pathetic.

  209. MrPete
    Posted May 23, 2006 at 11:11 PM | Permalink

    #203 Tim: did you read what is written in #202 before writing #203?

    Steve QUOTED Mann. No “stuffing words”.

    Steve has demonstrated that three of Mann’s statements, as quoted, were false.

    Steve has demonstrated that the fourth Mann statement was true-but-highly-misleading (or would you prefer true-but-immaterial to the actual question at hand). The fourth Mann statement is UNtrue for the critical (MWP) period.

    Steve has demonstrated that Mann knowingly withheld adverse results.

    What have you demonstrated that supports a Mannian perspective on the HS? Nothing.

    What have you demonstrated that falsifies Steve’s work? Nothing.

  210. nanny_govt_sucks
    Posted May 23, 2006 at 11:59 PM | Permalink

    #205 Tim, in your opinion, what is Mann talking about when he says “trend in NH”, and “dendroclimatic indicators in the network” if he’s not talking about his reconstruction?

  211. James Lane
    Posted May 24, 2006 at 6:03 AM | Permalink

    Lambert: “And yes, I’m evading your r2 flashing hobby horse, just like you are evading discussion of average temperature. I’m not interested.”

    Sounds like an 8 year old I know. Lambert, like most bullies, is also a coward.

  212. kim
    Posted May 24, 2006 at 6:11 AM | Permalink

    I think his error is to let his politics drag him into lost causes.
    =====================================

  213. Posted May 24, 2006 at 6:32 AM | Permalink

    nanny_govt_sucks: My understanding of what Mr. Lambert is claiming is that, according to Mann, whether or not you include the dendroclimactic indicators, the *trend* of his results is *reasonably* robust. That is, if you take a trend of the results with and without the dendro. data, that trend will be reasonably similar.

    At least, that’s the least strict way I can think of to interpret Mann’s quote from MBH98.

    Only, the problem is, Mann’s own data from the CENSORED directory, which lacked some of the dendro. data, has a distinctly different trend from his “full” reconstruction. It is clearly a decreasing trend overall, whereas the “hockey stick” clearly has an increasing trend.

    So, how he could claim that his study’s “trend” is “relatively robust” to the includion of the “dendroclimatic indicators” with a straight face, I don’t know. Maybe if he could argue he didn’t try a full reconstruction, or that he only tried removing a few of the series, he could get away with that claim. But thanks to the CENSORED directory we know he and/or his team tried removing the “bristlecones” and discovered how different the outcome was without them. With that knowledge, I think he should have removed that claim.

  214. kim
    Posted May 24, 2006 at 6:40 AM | Permalink

    One certainly wonders at the use of the word ‘censored’.
    ====================================

  215. Posted May 24, 2006 at 7:26 AM | Permalink

    >> I think his error is to let his politics drag him into lost causes

    HAHAHAHA, now really, talk about the pot calling the kettle back! I mean, honestly, the whole modus operandi for you dopes & your reverse-engineering of datasets you don’t know how to use is because it goes against your corporate handlers & political ideology. Can you “nanny_govt_sucks” types honestly say it’s not because of your Randroid ideology or whatever?

  216. kim
    Posted May 24, 2006 at 8:21 AM | Permalink

    Any response to Bob K at #174?
    ====================

  217. Posted May 24, 2006 at 8:25 AM | Permalink

    umm, yeah, he totally doesn’t understand the cpdn experiment, as do numerous other posts here both with cpdn, software engineering, etc.

  218. Posted May 24, 2006 at 8:27 AM | Permalink

    as a simple example — do you or Bob K of #174 really think you need to explore every single parameter combination in the 34-dimensional hypercube? 😉

  219. kim
    Posted May 24, 2006 at 9:06 AM | Permalink

    I don’t know enough to think, but I would be interested(to read from you, pardon the snark) in a point by point rresponse to his criticisms.
    =====================================

  220. kim
    Posted May 24, 2006 at 9:08 AM | Permalink

    Or even just an explanation of how your experiment works, in terms that I can understand as easily as I did Bob K’s. Remember, I am not a scientist. OK, You’re on.
    ============================

  221. Posted May 24, 2006 at 9:27 AM | Permalink

    There’s nothing “point by point” to respond to Bob K (174) — he’s got it all abysmally wrong. If you honestly are interested in learning about cpdn, simply check around the website instead of relying on hearsay:

    http://climateprediction.net/project.php
    http://climateprediction.net/science/index.php
    http://climateprediction.net/science/strategy_adv.php
    http://climateprediction.net/science/scientific_papers.php

  222. Posted May 24, 2006 at 9:54 AM | Permalink

    Let’s go through Steve’s quotes yet again:

    1. “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”. This is the only one from MBH98 and it’s true. The trend is a gentle decrease from the 15th to 19th century. Leave out the bristle cones and the trend is a little steeper.

    2. “We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.” From EI. Does not say that the reconstruction is the same with out dendro.

    3. “MBH98 found through statistical proxy network sensitivity estimates that skillful NH reconstructions were possible without using any dendroclimatic data, with results that were quite similar to those shown by MBH98 based on the full multiproxy network (with dendroclimatic indicators) if no dendroclimatic indicators were used at all.” Refers to 1760 network graphed below and is true.

    4. “Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question.” Is true since the period in question is 1760+.

    5,6 r2 Official sanctioned Steve McIntyre responses: hobby horse/flashing/thread hijacking/get a life.

  223. Posted May 25, 2006 at 6:20 AM | Permalink

    Oh Steve, how’s that abs(lat) thing coming along?

  224. Posted May 25, 2006 at 6:34 AM | Permalink

    Mr. Lambert says:

    1. “the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network”. This is the only one from MBH98 and it’s true. The trend is a gentle decrease from the 15th to 19th century. Leave out the bristle cones and the trend is a little steeper.

    So, you admit then, that the long term trend in Northern Hemisphere temperature is downward? That’s a relief.

    No more questions, your honour.

  225. Posted May 25, 2006 at 6:42 AM | Permalink

    And let me add, before I get a snarky response from Mr. Lambert.

    Yes, he said “from the 15th to the 19th century”. However, he’s defending Mann’s quote, which does not mention the 15th nor the 19th century. Mann said that “the trend” – meaning the trend across the entire time span of his study – is robust. Unless, that is, the sentence immediately before the one quoted places qualifications upon what he means by “the trend”. Assuming that is not the case (and I expect someone will let us know if it is), then in order to defend Mann’s statement, he must show that the entire trend is robust.

    I agree, the trend of the data is downward, if you ignore the outliers, which is good statistical practice. That’s not what Mann did in his analysis. So I’m confused how Mr. Lambert is defending what he said in this way. Still, I can’t find any other interpretation of his comments, so I’ll leave it at that…

  226. Steve McIntyre
    Posted May 25, 2006 at 7:11 AM | Permalink

    For others, Ross McKitrick archived code and data for Michaels and McKitrick at the time of publication. Lambert analyzed that code at the time and found that McKitrick input data was latitudes in degrees, while the program expected latitudes in radians. As I noted above, this seems to me to be a highly comparable type of error to the header file error in the climateprediction.net, but on a far smaller scale. I thought Lambert should treat the errors consistently.

    Lambert then alleged that McKitrick’s use of cos(abs(latitude)) indicated that he had withheld adverse results (i.e. that he had also done a run using abs(latitude) and that these had “good” results which were not reported.) When I pointed out that we knew that Mann had withheld adverse verification statistics and that Mann had both withheld and misrepresented adverse results from not using bristlecone pines, Lambert argued that Mann’s comments could be construed as applying ONLY to the 1760 step and did not specifically deny the adverse results in the 1400 step. In fact, I think that Mann’s comments did not limit themselves to the 1760 step (and no securities commission would give a business prospectus the benefit of doubt on a comparable issue). But even if he did, so what? As long as Mann knew of the adverse results for the 1400 step (which he did), reporting the 1760 results in the way he did is actually a badge of deceit. A securities commission would see through a trick like that in a minute.

    Now let’s think about abs(latitude). Lambert’s allegation is that McKitrick knew of adverse results using abs(latitude), failed to report them and therefore says that he doesn’t “trust” McKitrick. On the other hand, he goes through hoops to avoid acknowledging the obvious and proven failures to report by Mann. Tim’s contortion or worthy of a circus acrobat.

    I’ve inquired about abs(latitude).

    First, Ross confirmed that he simply did not consider that cos(Lat) = cos(Abs(Lat)) and that’s why the redundant formula occurs.

    Second, Ross points out that the source code and data were at all times made public. Indeed, that’s how Lambert identified the error in the first place. My own take on Lambert’s diagnosis of the fact that there was a programming error is: rather than permanently discrediting our claims of problems in Mann’s Fortran code, it highlighted that errors can occur and demonstrates the merit of archiving code. However, many climate scientists felt the opposite and seemed to conclude that because McKitrick had made a programming error on one occasion that implied that Mann didn’t.

    Third, Ross says that a run was done originally using abs(latitude) and that a referee asked that it be changed to cosine, which was done. He says that the difference on overall results was not noticeable. He says:

    In any case the accusation that we switched models to juice up the results is false, and makes no sense in the context of a study where I posted the data and the code from the very start, even when the study was in discussion paper form.

    In addition to providing code and data for the original case, Ross promptly acknowledged and reported the cos(latitude) error and providing full code and data for the amendment here.

    http://www.uoguelph.ca/~rmckitri/research/gdptemp.html
    http://www.uoguelph.ca/~rmckitri/research/gdptemp.log.txt

    Ross also sent me contemporary correspondence with Rasmus in which he attempted to resolve concerns raised by Rasmus about spatial autocorrelation.

    All in all, it is remarkable to contrast McKitrick’s documentation of source code and data with Mann’s intransigence.

    It is Lambert’s total hypocrisy that amazes me. How can he froth at the mouth about McKitrick supposedly withholding adverse results using abs(latitude) and then ignore Mann’s withholding of adverse verification statistics and adverse robustness results for the 1400 network?

  227. mtb
    Posted May 25, 2006 at 7:24 AM | Permalink

    A very interesting exchange, and I am sure that any objective, professional, scientific observers will be able to see which side of the discussion has merit, and which doesn’t.

  228. Posted May 25, 2006 at 7:29 AM | Permalink

    translation: “baa baa baa” “rah rah rah — go team!” 😉

  229. Posted May 25, 2006 at 8:21 AM | Permalink

    “For others, Ross McKitrick archived code and data for Michaels and McKitrick at the time of publication. Lambert analyzed that code at the time and found that McKitrick input data was latitudes in degrees, while the program expected latitudes in radians.”

    Wow, and just think, if they hadn’t archived the data and Mr. Lambert hadn’t audited it, the mistake may never have been found.

    Remind me again, is Mr. Lambert in favour of, or in opposition to, the practice of auditing climate studies?

    Thanks to Mr. McKitrick and Mr. Michaels for having the integrity to perform their science openly.

    (I keep thinking I should be calling someone Dr. but I can’t remember who’s Mr. and who’s Dr. Apoligies if I got it wrong).

  230. Dave Dardinger
    Posted May 25, 2006 at 8:25 AM | Permalink

    Notice to Carl:

    This is to notify you that you have managed to make my “ignore anything posted by this creep” list in record time. I hope Peter Hearnden isn’t jealous.

  231. Posted May 25, 2006 at 9:25 AM | Permalink

    So Steve, I was correct when I said that he had gone from abs(lat) to cos(abs(lat)) and you were wrong. It is false to claim that the changing from abs(lat) to cos(abs(lat)) made no noticeable difference to the results. In fact, when correcting the paper he wrote

    Outside the dry/cold regions the measured temperature change is significantly (previous: primarily ) influenced by economic and social variables.

    So let’s see: he writes the paper, sends it out, a referee says he should should use cos(lat) instead of abs(lat), he changes it (making the degrees/radians error), it doesn’t make any noticeable difference, he doesn’t notice that r2 has got worse so he puts the new numbers in to keep the referee happy and publishes…

    No, that doesn’t work, because why would the text say “primarily” when that’s only true with the erroneous cos calculation? So he must have noticed that his result was now much stronger (implying that there has been no global warming) and changed the text.

    According the the Steve McIntyre “disclose adverse results” standard even if he didn’t notice the mistake, shouldn’t he have disclosed that adverse abs(lat) result?

    Here’s how the result was described by his co-author:

    The research showed that somewhere around one-half of the warming in the U.N. surface record was explained by economic factors, which can be changes in land use, quality of instrumentation, or upkeep of records.

    That’s only true with the incorrect cos calculation.

  232. Steve McIntyre
    Posted May 25, 2006 at 11:00 AM | Permalink

    Yes, I think that adverse results should be disclosed – by Michaels and McKitrick or by anyone else. It looks to me like they made a conscientious effort to provide disclosure. I wasn’t an author of the paper and, to the extent that they didn’t, shame on them. Authors on replication (e.g. King; McCullough) emphasize the importance of providing source code and data as a precaution and here their behvious was exemplary.

    By contrast, Tim, your own hypocrisy remains simply stunning and makes it difficult to accept anything that you say with a straight face. By the standards, you espouse here you have no alternative but to condemn Mann, Bradley and Hughes. We’re waiting.

  233. Posted May 25, 2006 at 11:16 AM | Permalink

    Well it looks to me like MBH made a conscientious effort at disclosure of results without dendro. They said that the long-term trend was similar. The way you have repeatedly misrepresented this is disgraceful.

    And you are one of the authors of the EE 2005 paper which brutally rips the quote:

    Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question.

    out of the context where it is clear that the “period in question” is 1760+ and you present it as if they are talking about the 15th century.

    That’s another serious misrepresentation on your part.

    But what can we expect from the man who claimed that MBH98, published in April 98 showed data from the first seven months of 1998?

  234. Steve McIntyre
    Posted May 25, 2006 at 11:34 AM | Permalink

    Tim, it doesn’t matter a damn whether or not their language trickily limited their claim to the 1760 period. (And I don’t concede that it did.) They had knowledge that the claim was not correct for the 15th century, because they tested for the absence of bristlecones. Going from that analysis to presenting the 1760 results is a badge of deceit, which you have bewilderingly decided to endorse.

  235. Pat Frank
    Posted May 25, 2006 at 11:40 AM | Permalink

    #226 — “How can he froth at the mouth about McKitrick supposedly withholding adverse results using abs(latitude) and then ignore Mann’s withholding of adverse verification statistics and adverse robustness results for the 1400 network?

    It’s possible he has an overblown feeling of propriety about the issue. A Flashpoint search of “Lambert T,” or “Lambert Timothy” since 2000 didn’t turn up any publications I could attribute to our Dr. Lambert. Flashpoint searches MathSci Net and SciSearch among other data bases, which should have been appropriate for any published computational work. And so with regard to the critique of Ross’ paper, the feeling of having actually produced something along with the sense of ownership that goes with the feeling, may explain Dr. Lambert’s passionate insistence on Ross’ guilt, while an entirely different passion allows Mann to skate.

    See, Steve, you just don’t understand. There’s no connection at all between Tim’s accusations against Ross and his exculpation of Mann. The first is his and the second is just right. It’s all just a question of the proper psychological compartmentalization. Feel-good is the critical organizing criterion. 🙂

  236. Ross McKitrick
    Posted May 25, 2006 at 1:52 PM | Permalink

    I am not going to waste my time dealing with Lambert’s continued obsessive ranting about my CR paper. The results are robust and transparent, whether he likes them or not. For all the nitpicking about the now-fixed cosine error, we tested a straightforward hypothesis: that the surface data have been “cleaned” of nonclimatic influences. We showed that hypothesis can be rejected, we argued that the nonclimatic effects add up to a net warming bias in the global average, and we called for further efforts to measure it more precisely. It’s a good paper and I’m proud of it.

    But one accusation needs rebutting. The use of ABSLAT was in the earliest version of the study, which is still available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=284175. The paper went through many iterations after that, as I received comments from a lot of readers. The switch to cosablat happened in an early revision. The split of the sample into cold season/warm season and dry/moist subsamples came later in the process after we were already using cosine(lat), incorrectly calculated as it happened. Lambert insinuates without a shred of evidence we did the cos transform after we had done the cold/warm sample split. This is untrue.

    Furthermore, if we did have the 2 sets of results to choose from, we would not have preferred the erroneous cos(lat) ones, since they yield a worse overall fit and much lower cross-validation score, an anomalously low latitute effect and a cold-season pressure effect that didn’t fit our prior expectation. Correcting the cosine error improved the results and increased our confidence in them, it didn’t detract from them.

    Finally, if I were in the business of cheating on my work I wouldn’t publish my data and code, and I wouldn’t be calling for the creation of a federal auditor to enforce disclosure.

  237. Bob K
    Posted May 25, 2006 at 2:53 PM | Permalink

    Carl C,

    Re: #218
    I never claimed it was necessary to do calculations for the entire parameter space. My figures only account for 10% of the parameter space. If this figure is unreasonably high, what percentage would you consider reasonable? I am glad to see you agree it is at least a 34 dimensional space. That’s without forcings, which would add more dimensions.

    Re: 221

    Found here. http://www.climateprediction.net/board/viewtopic.php?t=2022
    Posted by administrator DaveF (Frame?).

    The essence of the climateprediction.net experiment is to vary the physics of a full-scale climate model by making changes to various parts of the model’s code. This involves making changes to some of the many model parameters. These parameters control different parts of how the model attempts to simulate climate. Very often, these parameters are an attempt to represent at large scales the effects of small scale events (such as when we consider the amount of cloud in a grid box, or how quickly cloud moisture converts to rain). In climateprediction.net we vary parameters and examine how these variations affect the modelled climate. Most interesting are the cases in which the model continues to simulate present day (or historical) climate reasonably well, but gives a different response (from the standard model) to the doubling of carbon dioxide (or changes in aerosol loading).

    He says “varying” which is not necessarily the same as “randomly varying” within the physical range of each parameter. If they’re not varying the parameter range randomly they must use a rule set to vary them. I find it hard to believe the rule set covers the entire space of physical possibilities in an objective manner.

    I think my assessment is reasonable. On the conservative side at that. Since I don’t have the time or inclination to learn all the nuances of what is being done, maybe you would be so kind as to point out where my comment #174 is “abysmally wrong”.

    I see they have completed 161,294 hadsm3 runs.(since February?) Have they precalculated the number of runs necessary to get a reliable solution? If not, why not? Or are they at some point going to eyeball what’s been done and assume it’s enough?

    How do they deal with crashed models? Do they simply assume the parameter set is invalid and move on to another set?

    I noticed the home page shows the number of model years completed to three decimals. It seems odd to display years to a less than 9 hour resolution. For those not noticing that it is a decimal and not a comma, it appears that 1000 times more has been accomplished than a closer look reveals. Nice touch. Missed it myself at first glance.

    If your actually here to shine light on the subject, I would think you’d help me out here. I’m trying to learn something. I’ve looked through your site and can’t find the answers. Seems to me the answers would be simple enough for you to state.

  238. Paul Linsay
    Posted May 25, 2006 at 4:54 PM | Permalink

    #236. I don’t understand all the ho-hah about cos(abs(lat)) versus cos(lat). It’s simple trig: cos(x) = cos(-x) so cos(abs(lat)) = cos(lat). Am I missing something here? I can understand mixing up radians and degrees, it’s not always clear what to use for a particular calculational tool, documentation can suck.

  239. Ed Snack
    Posted May 25, 2006 at 5:12 PM | Permalink

    Lambert, publish your “Mann Screws up for the Nth Time” article or crawl bag into your slimebag !

  240. Posted May 26, 2006 at 1:24 AM | Permalink

    237 — *sigh* – the information is on our website from the links I posted, you’re just too lazy! 😉

    In a nutshell — we are attempting to quantify the uncertainity in climate model using the Hadley Centre (UK MetOffice) models. These are typically run on order 10-100 simulations; we are extending it to 1000, 10K, 100K. We explored 34 parameters in the model, identified that 6 parameters had the most effect, and explored them further, but still continue to make workunits exploring all parameters. The more the merrier! 😉

    We are using three values per param to explore (“low”, “medium”, “high” if you will). So they aren’t randomly varied; what would be the point of say putting in hailstoned 30 metres across? The idea is to fill an n-dimensional hypercube of parameter space to explore the model as best we can, get a wide range of sensitivities to explore further in ways such as our current project with the BBC (i.e. spin up an ocean, couple the param space, forecast to 2080). Of course to “fill” the hypercube would require many millions of runs, but we can use various statistical methods to get an idea of what is going on.

    The project has been going on for a few years, we started out with a slab model, did a THC shutdown experiment, did a sulphur cycle experiment, a detection-and-attribution experiment, we have the current coupled model (HadCM3) experiment (the one with the BBC), and will shortly be embedding the UKMO PRECIS regional model into that. So there are a lot of things going on which hopefully you’ll be reading about in the literature (and perhaps which Steve will try to reverse-engineer, with data he doesn’t understand, just to suit his corporate whoremasters ;-).

    Perhaps it would help if you actually read the Nature paper which has a bit at the back about the perturbations etc:

    Click to access nature_first_results.pdf

  241. James Lane
    Posted May 26, 2006 at 3:30 AM | Permalink

    “So there are a lot of things going on which hopefully you’ll be reading about in the literature (and perhaps which Steve will try to reverse-engineer, with data he doesn’t understand, just to suit his corporate whoremasters”

    Charming.

  242. Hans Erren
    Posted May 26, 2006 at 3:44 AM | Permalink

    Click to access model_appraisal.pdf

    Carl how are your clouds performing?
    Cloud cover is still very badly modeled in both polar regions (fig 4.11, page 50)

    How does your ITCZ behave?
    All of the coupled models exhibit a “split” intertropical convergence zone

    How is your arctic modeling fitting observations?
    The discrepancy among models and the Levitus climatolgies are most prevalent in the Arctic Basin.

    please read:
    http://home.casema.nl/errenwijlens/co2/tcscrichton.htm

  243. Bob K
    Posted May 27, 2006 at 4:50 AM | Permalink

    Carl C,

    Thank you. The Nature letter helped.

    If I have this right, what the letter says is roughly this.

    2017 unique simulations(sims) were run. 15 years each for calibration, control, and double CO2.

    Any sims showing obvious computer related bad data are discarded.(i.e. sudden wild value changes)
    This is about 1.6% of sims or roughly 32 sims.(Data quality section)

    At the end of the control phase, any sim showing a mean reduction in temperature of more than -0.02 degrees per year over the last eight years of the control period is discarded.(Data quality section)

    The remaining 1148 sims (57% of initial pop.) are considered as being valid at the end of double CO2 phase. Graph those values. Notice they show a warming trend and make an announcement.

    The results are hardly suprising. Since no sims appear to have been discarded for showing a temperature increase during the control phase, though 800+ showing cooling were discarded. The surprising thing is that some of the sims still managed to show a decrease over the double CO2 phase. Looks like selection bias to me.

    Of the 34 parameters, the focus seems to have been on iterating the three possible values of the six parameters judged to be the most significant. Unfortunately, the iteration of the values of the other 28 parameters appears to be almost non-existent.

    If the effect of the remaining 28 parameters was considered trivial I doubt they would have been included in the sims. I can only assume it will also be necessary to interate a non-trivial percentage of their values to assure reliability.

    Now that I know each parameter has three values, I see the figures in my comment #174 are too low by a factor of roughly 1,000,000. (I originally considered 2 values each)

    Given the size of the parameter space, I suspect direct observations for the next century will be readily available for comparison by the time sufficient sims have been run to consider them statistically reliable for making projections.

    Hope you have a lot of perseverance.

  244. Posted May 27, 2006 at 4:56 AM | Permalink

    wow, it’s really amazing how you misinterpret everything to fit into your own idiotic viewpoint!

  245. Bob K
    Posted May 27, 2006 at 5:02 AM | Permalink

    Thanks for calling my viewpoint idiotic.

    If you actually wanted to be helpful, you would have pointed out what I interpreted incorrectly concerning the Nature letter.

    Or are you just here to troll.

  246. Posted May 27, 2006 at 5:09 AM | Permalink

    That you’re insinuating “selection bias” shows that you didn’t understand the Nature paper or the other links I gave you, because it led you to say such whoppers as “Unfortunately, the iteration of the values of the other 28 parameters appears to be almost non-existent”.

    That you continue to misunderstand the parameter space and our sampling thereof, shows that you just don’t have the background necessary to understand, or you’re just continuing to purposely construct these strawmen. The answers are in our papers, I suggest if you really are interested you take a serious look at them, rather than hanging out on blogs.

  247. Bob K
    Posted May 27, 2006 at 5:44 AM | Permalink

    Carl c,

    Keeping 28 parameters at the same initial value and interating the other six that have three values each comes to 729 simulations. They stated only 2017 unique sims were done. If they iterated an additional parameter through all it’s values, 2187 sims would have been completed. That happens to be more sims than they claimed to have done. This leaves 27 parameters with values left unchanged for the entire 2017 unique sims. If that’s not “almost non-existent” interation considering number of parameters involved, what do you call it? Comprehensive?

    You think cropping 800+ sims that run cold while not cropping the sims that run hot doesn’t induce bias? Surely you jest.

  248. Posted May 27, 2006 at 5:51 AM | Permalink

    Once again, you guys are “reverse engineering” something you know nothing about, i.e. retrofitting your preconceptions & perversions of statistics onto actual science.

    *sigh* if you can’t understand it via the papers I’ve referenced numerous times, then you’ll just have to wait for future papers of ours when we go through the huge dataset further. The ensemble of this Nature paper was just the tip of the iceberg. Your false insistence that we said this 2187 “sims” represented all parameter space is just ludicrous.

  249. Greg F
    Posted May 27, 2006 at 6:01 AM | Permalink

    Bob K,

    The climate models are just curve fit programs. Extrapolating outside the fitted data is political science.

  250. Bob K
    Posted May 27, 2006 at 6:59 AM | Permalink

    Carl said:
    Your false insistence that we said this 2187 “sims” represented all parameter space is just ludicrous.

    I never said that at all. They simply gave no indication whatsoever of the actual size of the parameter space they are dealing with.

    They ran 2017 sims. Threw out 800+ sims that showed cooling. Then wrote a letter to Nature including pretty graphs of the cropped data showing how bad it’s going to get.

    The actual number of runs necessary to uniquely simulate the entire parameter space is 16,677,181,699,666,569 or 1.66771817 àƒÆ’”‚¬” (10^16). They simulated 1/8,268,310,213,022th of the space and published results. It boggles the mind.

    What’s the purpose of claiming 34 parameters with three values each if the intention isn’t to uniquely vary them? Are most of them just for show? Why not just plug in constants for the ones they don’t intend to vary? Not as impressive sounding, maybe?

  251. Posted May 27, 2006 at 8:02 AM | Permalink

    It’s laughable how poorly you guys understand climate modelling. You may as well bitch that the typical smaller ensembles of climate models only represent 1/8,268,310,213,022,000,000th. Oh wait, I’m sure you will!

  252. kim
    Posted May 27, 2006 at 8:11 AM | Permalink

    Bob K is still writing comprehensible English, and you, C, are still referring to distant authorities that I can’t understand. I hope you’ve rehearsed Act II a little better.
    =======================================

  253. TCO
    Posted May 27, 2006 at 8:31 AM | Permalink

    BobK, read a book on DOE. Seriously.

  254. TCO
    Posted May 27, 2006 at 9:08 AM | Permalink

    Bob, please disregard my remark. Was made before several of your thoughtful recent remarks.

  255. Bob K
    Posted May 27, 2006 at 10:53 AM | Permalink

    Carl,

    Being the curious type I located your public account data page here.
    http://climateapps2.oucs.ox.ac.uk/cpdnboinc/show_user.php?userid=1

    Shame on you!

    Following the links on that page I see you haven’t contributed any credits to your team in at least the past 30 days. No host computer is listed, and the .08 figure for your recent average credit score is near the bottom of the barrel. I see a dozen of your team members are between 108 and 1512 for recent average credit.

    Here’s a few more question you’ll probably ad hom instead of answer.

    Have you decided it is non-productive for you to run the simulations?
    Won’t others on your team catch on and follow your example?
    Couldn’t you at least run the simulations while sleeping?
    Did you decide to conserve on your electric?
    Have you no team spirit?

    Why don’t you at least run the simulations while visiting climateaudit? Help the team out.

  256. Posted May 27, 2006 at 11:57 AM | Permalink

    HAHAHAHAHA, wow, your “brilliance” shines through! I’ll give you a little hint, genius, the clue is on this page: “BBC experiment.”

  257. Bob K
    Posted May 27, 2006 at 4:51 PM | Permalink

    More evasion Carl?

    You didn’t bother to provide a URL to refute the link I posted in #256. So I plugged your ‘clue’ words into the climate prediction search engine and the only link retrieved is to inthenews.php. Nothing about you there.

    I notice you didn’t even assert that what I linked is incorrect. So your paucity of simulation time still stands unrefuted.

    Don’t you think it’s kind of shabby of you to have dropped participation in a project you claim is wothwhile?

  258. Posted May 27, 2006 at 5:04 PM | Permalink

    Re: 249.

    The climate models are just curve fit programs.

    And I suspect a better fit could be obtained with least squares regression in most cases, and most of the search of parameter spaces is in fact a very inefficient brute force form of curve fitting.

  259. Steve McIntyre
    Posted May 27, 2006 at 5:20 PM | Permalink

    #259. David, do you recall the posts on Kaufmann, a well known econometrician, who has an article showing that the GCMs do not out-perform a linear model based on input forcing. There were a couple of good posts and Gavin asked that it be taken offline.

    Gavin argued that the point of GCMs was not to predict temperature- that was a “done deal” – but to detemine regional distributions.

    Of course, if you inquire where the deal was done, they’ve moved on,

  260. Greg F
    Posted May 27, 2006 at 5:57 PM | Permalink

    And I suspect a better fit could be obtained with least squares regression in most cases…

    A guy named Jan Janssens did it with Excel.

  261. Posted May 27, 2006 at 11:13 PM | Permalink

    Re. 260. I do recall the post on RC very well, and also that the publication of these embarrassing results was being stymied by a single anonymous reviewer who kept requesting more changes, until finally saying the results were out of date because it didn’t use the latest model, a model not available when the did the study! I hope Kaufmann finally won through for science.

    No doubt there are times when a CGCM might be used over a regression, but you shouldn’t use a hammer on everything. Anyone seen any studies verifying the predictions of temperature in the last IPCC report? Any comparisons I have seen show actual temperatures below the lowest CGCM scenario, but consistent with a 100 year linear trend line. Where are the tests?

  262. Posted May 28, 2006 at 5:58 AM | Permalink

    Ah, I see that the McIntyre-McKitrick definition of "robust" is very elastic. McK’s results are "robust" even though they change when the degree/radians error is corrected, but Mann’s are not because they change if you delete enough of the data.

    Thanks for the link to earlier version of your paper Dr McKitrick. I see it contains a reductio ad absurdum for your thesis — apparently economic effects have a strong influence on satellite measured trends as well. Those urban heat islands must be enormous. It seems that this adverse finding disappeared from the CR version of your paper.

    Steve: My definition is not elastic. What’s sauce for the goose is sauce for the gander. I am not an author of Michaels and McKitrick despite your continued efforts to involve me. If their results are not robust, so be it.

    In Mann’s case, it’s not the non-robustness simpliciter that is the issue, but Mann’s claims about robustness to the presence/absence of all dendroclimatic indicators. If claims by Michaels and McKitrick are inconsistent with their calculations, then shame on them. But you’re running a reverse beauty contest here – whatever standards you are using in this beauty contest, also have to be applied to Mann.

    McKitrick made his code and data available so that everything could be tested. As a result, the radian/degree error was quickly diagnosed and known long before any potential use of these results for policy purposes. In contrast, Mann did the opposite. Thus, this paper was being applied for policy purposes while information was being withheld. In fact, IPCC stated that Mann’s reconstruction had statistical skill in cross-validation tests (which in MBH98 had been said to br RE, r and r2) – an untrue claim made by Mann as an IPCC author. You do yourself no good by saying that my insistence on this issue is a “hobbyhorse” – it’s a fundamental misrepresentation that’s not going to go away whatever you find in McKitrick and Michaels.

  263. cytochrome_sea
    Posted May 29, 2006 at 7:26 AM | Permalink

    Carl, I only PWI on blogs, yet, I’ll retract my NOT-SCIENCE bit, as nobody (yourself included?) seemed to get the reference bit about Feynmann, no biggie but I wish Lubos was here 😉 It was a lame joke about path intregal formulation and the Monte Carlo method. In other words, a dud! 🙂