NCAR and Year 2100

Here’s another model output oddity that I noticed from a plot and confirmed with a direct scrape. In January 2100, the SH value for an NCAR PCM run, as archived at KNMI (expressed here as an anomaly), jumps 2.5 deg C, from 0.3 deg C to 2.81 deg C, before relaxing to lower values over the next few years. The NH has an offsetting problem, with 45-90N dropping over 2 deg C in Jan 2100. Overall it seems to average out. (Did I hear someone say that it therefore “doesn’t matter”?)

The problem occurs exactly in January 2100 and one can hardly help wondering whether something has been spliced somewhere along the way – a model version of the Hansen Y2K problem. Again, I do not know whether this is an artifact of how KNMI handles the data extraction or whether it’s a property of the model.
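
A minimal R sketch of the check (the file name is a placeholder for whatever series you scrape from KNMI, and the 1.5 deg C threshold is an arbitrary choice):

    # Flag suspicious month-to-month jumps in a monthly anomaly series.
    # "pcm_sh_tas.txt" is a placeholder; substitute your own KNMI scrape,
    # assumed here to have two columns: a time label and an anomaly (deg C).
    x <- read.table("pcm_sh_tas.txt", col.names = c("time", "anom"))
    d <- diff(x$anom)                       # month-to-month changes
    jumps <- which(abs(d) > 1.5)            # arbitrary 1.5 deg C threshold
    x[sort(unique(c(jumps, jumps + 1))), ]  # rows on either side of each jump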

SH (0-90) Scrape

45-90N Scrape

55 Comments

  1. Mike Davis
    Posted Jul 5, 2009 at 8:26 AM | Permalink

    That is well within the range of natural variations when you consider alternate realities. Look at Jan 2101 also.

    • RomanM
      Posted Jul 5, 2009 at 9:29 AM | Permalink

      Re: Mike Davis (#1),

      That is well within the range of natural variations when you consider alternate realities. Look at Jan 2101 also.

      … and what kind of reality is that?! These are supposed to represent monthly values for an entire hemisphere. Changes in the “weather” must be pretty abrupt.

      Re: Andrew (#2), stole my line, you did. 🙂

  2. Andrew
    Posted Jul 5, 2009 at 9:05 AM | Permalink

    This would be the Y2.1K problem.

  3. Posted Jul 5, 2009 at 9:13 AM | Permalink

    To this econometrician (25+ years now and counting), those both look like either add-factors or a dummy variable set in the equation to generate those results. If they are the result of your garden-variety stochastic equations (including non-linear ones), then there is something very, very strange with the equations (i.e. you can’t get results like that without intervening in the equations…)

    Any chance of seeing the equations?

    And: of course it matters. Toss in enough of these and you skew model results subtly. Which may have been the point to begin with…

    And: the two do not cancel each other out. The positive blip is both in sum and in average greater than the negative blip…significantly so.

  4. Gary Strand
    Posted Jul 5, 2009 at 9:24 AM | Permalink

    I can address your questions, Steve, since I was the one who did the processing of the PCM data for the IPCC AR4. Can you be more specific as to which set of experiments you’re looking at?

  5. Arn Riewe
    Posted Jul 5, 2009 at 9:46 AM | Permalink

    Steve:

    OT, and probably would make an interesting new thread. Michael Hammer has a guest post over at Jennifer Marohasy’s site on ocean heat content and the discrepancies between the IPCC models and observations. His calculations show discrepancies of a factor of 10. It would be interesting to have this audited, and I wouldn’t be surprised if you could get Loehle and Pielke Sr. to weigh in on this.

    http://jennifermarohasy.com/blog/2009/07/a-climate-change-paradox/#more-5649

    snip – no need to editorialize like this.

  6. Steve McIntyre
    Posted Jul 5, 2009 at 10:09 AM | Permalink

    NCAR PCM sresa1b tas – as shown in the post. Third ensemble in the KNMI output at climexp.nl

  7. Gary Strand
    Posted Jul 5, 2009 at 10:24 AM | Permalink

    What’s the base period for the anomaly?

  8. Clark
    Posted Jul 5, 2009 at 10:56 AM | Permalink

    What’s interesting is the large month-to-month shifts in the anomalies. Do these models often have hemispheric temps jumping > 1 C in a month, and then back again?

    • Posted Jul 5, 2009 at 11:18 AM | Permalink

      Re: Clark (#9),

      The calculated change in the energy content of the atmosphere (and ‘effective’ oceans, if they have been included) on an hourly, daily, and monthly basis would be very interesting to see. The change for a single time step would be even more interesting, imo.

      Then convert these to an equivalent W/m^2 to check whether natural processes (in contrast to numerical artifacts) could possibly effect such rates of change of energy content.

      I estimate that a change of 2 K for the entire atmosphere over a month having 31 days represents about 7.6 W/m^2.

      All corrections appreciated.
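
      The estimate checks out under standard textbook assumptions (the numbers below are mine, not the commenter's):

          # Back-of-envelope check of the ~7.6 W/m^2 figure, assuming standard
          # surface pressure and the specific heat of dry air.
          p0  <- 101325       # Pa, mean surface pressure
          g   <- 9.81         # m/s^2
          cp  <- 1004         # J/(kg K), dry air at constant pressure
          m   <- p0 / g       # ~10,330 kg of air above each m^2
          sec <- 31 * 86400   # seconds in a 31-day month
          m * cp * 2 / sec    # ~7.7 W/m^2 for a 2 K change over the month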

  9. Robert Austin
    Posted Jul 5, 2009 at 10:56 AM | Permalink

    The models can’t be flawed since they are created by the best minds in science as evidenced by lots of peer reviewed papers and Nobel Prizes granted. The only possible explanation is that these abrupt changes are manifestations of the fabulous “tipping points” that lurk just around the climate change corner.

  10. Gary Strand
    Posted Jul 5, 2009 at 11:24 AM | Permalink

    I’ve got the appropriate model output data ready to go, to either verify what Steve noted, or to explain why that specific dataset could be flawed, but I need additional information, i.e., the base period for the anomaly and how the anomaly was calculated.

  11. Steve McIntyre
    Posted Jul 5, 2009 at 11:48 AM | Permalink

    Go to climexp.nl. Login. Select Monthly scenario runs. Click the radio button for UKMO cm3 sresa1b tas. I selected lats of -20 20. Click Make Time Series. Click raw data. Scroll to 2100.

    • Gary Strand
      Posted Jul 5, 2009 at 12:29 PM | Permalink

      Re: Steve McIntyre (#13),

      Go to climexp.nl. Login. Select Monthly scenario runs. Click the radio button for UKMO cm3 sresa1b tas. I selected lats of -20 20. Click Make Time Series. Click raw data. Scroll to 2100.

      Is this addressed to me, Steve?

      • AnonyMoose
        Posted Jul 5, 2009 at 3:42 PM | Permalink

        Re: Gary Strand (#14), it looks like it answers your question. In the spirit of this site’s encouraging people to reexamine climate data, it was probably posted for anyone who wants to see the same results.

  12. Terry
    Posted Jul 5, 2009 at 12:34 PM | Permalink

    Slightly OT, but what possible use is there in calculating and/or storing a predicted anomaly to 1/100,000th of a degree? AFAIK we don’t have thermometers that can measure with anything close to that level of accuracy. Why the unnecessary (and meaningless) precision?

  13. Genghis
    Posted Jul 5, 2009 at 12:47 PM | Permalink

    I have a question too. Is the model only predicting (typically) less than a degree increase in 100 years?

  14. Paul Penrose
    Posted Jul 5, 2009 at 1:00 PM | Permalink

    Terry,
    This is model output, not actual measured values, so the accuracy of current thermometers has nothing to do with it. Generally the amount of precision selected for output is arbitrary; in most cases the model authors pick something that will show significant results. For example, if you are expecting changes in the second decimal place, then you will output to the third or fourth. Whether the data mean anything at those levels of precision depends on how the math and internal storage are handled in the model. But that subject is truly OT.
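
    One way to see the storage side of that point in R: round-trip a value through 4-byte single precision, which carries only about 7 significant digits (whether any given archive actually stores single-precision values is an assumption here):

        # A value carried to 1/100,000 deg survives as a double, but not a
        # round-trip through 4-byte single-precision storage:
        x <- 273.15001
        y <- readBin(writeBin(x, raw(), size = 4), what = "double", size = 4)
        sprintf("%.5f", y)   # the trailing digits come back as noise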

  15. Lars Kamél
    Posted Jul 5, 2009 at 3:38 PM | Permalink

    This reminds me of the time when I downloaded and tried a climate model. I never got an end result, because it eventually crashed. Before it crashed, however, I discovered that the model had no leap years! Every year had 365 days; no year had 366. Such a model must have seasons that move rapidly through the calendar year.

    • Gary Strand
      Posted Jul 5, 2009 at 4:30 PM | Permalink

      Re: Lars Kamél (#18),

        Before it crashed, however, I discovered that the model had no leap years! Every year had 365 days; no year had 366. Such a model must have seasons that move rapidly through the calendar year.

      Many climate models have no leap years – it’s nice to have February always be 28 days. It doesn’t really matter, anyway, as the input forcings are kept consistent.

      • Mike Lorrey
        Posted Jul 10, 2009 at 10:49 AM | Permalink

        Re: Gary Strand (#21), Actually Gary, by leaving out leap days for a century, you are essentially introducing a 25-day drift into the model’s seasonal calendar, equivalent to a Milankovitch precession drift of almost 2000 years. This would clearly cause a major problem in any GCM accurately modelling real-world climate. This omission seems to me to be a VERY serious problem that cannot be hand-waved away.

  16. Alex B
    Posted Jul 5, 2009 at 3:52 PM | Permalink

    Thanks Steve. All this time I thought the problem was that these models hadn’t been validated. Now it looks like it’s one step worse, and the question is: have they even been verified?

    Steve: as always, I urge readers not to go a bridge too far. Little purpose is served by piling on at this time. Let’s see what the explanation is.

  17. Gary Strand
    Posted Jul 5, 2009 at 4:33 PM | Permalink

    I’d still like to know the base period for the calculation of the anomaly. It looks like the mean for each month was subtracted from its respective month in the time series, i.e., the mean Jan for years N to M is subtracted from each Jan of years N to M. I need to know what N and M are – 2000 to 2009, 2000 to 2019, 2000 to 2029, etc…
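
    For reference, the calculation at issue is a plain monthly-climatology subtraction; a minimal R sketch, in which N and M are exactly the unknowns being asked about:

        # tas: monthly means; yr, mo: matching year and month (1-12) labels.
        anomalize <- function(tas, yr, mo, N, M) {
          base <- yr >= N & yr <= M
          clim <- tapply(tas[base], mo[base], mean)  # 12 base-period means
          tas - clim[as.character(mo)]               # subtract month's mean
        }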

  18. Gary Strand
    Posted Jul 5, 2009 at 6:51 PM | Permalink

    A quick perusal shows a discontinuity in the sulfate forcings datasets between 2099 and 2100. If I have the time, I may look at the ozone forcing as well.

    I also want to note that PCM is an older model; CCSM version 3, from NCAR, is much more up-to-date. PCM was submitted to the IPCC AR4 as a backup, in case CCSM3 wasn’t ready in time. I wouldn’t take the results from PCM as state-of-the-art. In the early 2000s it was reasonably current; the runs you’re looking at are now ~7 years old, Steve.

    I don’t think anyone wants to run 7-year-old hardware with 7-year-old software.

    • Geoff Sherrington
      Posted Jul 5, 2009 at 7:08 PM | Permalink

      Re: Gary Strand (#23),

      You are probably in a position to know whether any scenarios were provided after the official IPCC cut-off date. Do you know if any were? One can become confused as to what are hindcasts and what are projections, or a mix.

    • Curt Covey
      Posted Jul 5, 2009 at 7:22 PM | Permalink

      Re: Gary Strand (#23), and Steve’s earlier comment “Again, I do not know whether this is an artifact of how KNMI handles the data extraction or whether it’s a property of the model.” — The best way to track down this glitch would be to go more directly to the source of the climate-model output. Try the archive at www-pcmdi.llnl.gov (the source for KNMI’s numbers) or, better yet, http://www.earthsystemgrid.org (where output from the NCAR family of models is deposited). These two archives are available to anyone who registers on the appropriate Earth System Grid Web portal. I think that a look at them would also answer Geoff Sherrington’s question for Gary.

      • Steve McIntyre
        Posted Jul 5, 2009 at 7:38 PM | Permalink

        Re: Curt Covey (#26),

        I do not have enough interest in the problem to spend time figuring out the glitch. If the people responsible for the data aren’t interested in figuring out whether the glitch is at PCMDI or KNMI, then I think that they should be. If you think that the best way to locate the glitch is to examine PCMDI data, you obviously know how to do so far faster than I can.

        I’m registered at PCMDI. While I’m pretty good at figuring out how to access things, I have so far been unable to decode how to extract information from PCMDI. I would very much like to see a template for how to extract geographically averaged data from PCMDI but thus far have not been successful in obtaining one.

        • Chad
          Posted Jul 5, 2009 at 8:59 PM | Permalink

          Re: Steve McIntyre (#27),

          You can ftp to the server and download the model data. It’s very well organized and easy to use. Better than using the web interface (the downloads are much slower through http!)

        • Steve McIntyre
          Posted Jul 5, 2009 at 9:20 PM | Permalink

          Re: Chad (#34),

          Chad, I can scrape model data from KNMI and have posted up a script to do this.

          I do not want to download huge amounts of model data from PCMDI in order to calculate small things like TRP or SH averages. I have no intention of doing so.

          If you can tell me (in baby steps) how to extract such averages, I’d appreciate it.

          What I’d like to be able to do – and I’ve figured out how to do this by scraping KNMI – is to access the data from R. I’ve sort of figured out how to construct CGI commands to KNMI within R and I’ve got a way of accessing KNMI data that IMO is far superior to any other method. But I haven’t been able to get to a point at PCMDI where I can even make a CGI handshake. Nor have I found the available information helpful enough to get me started.
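
          For what it’s worth, once a tas file has been downloaded, the hemispheric average is a short exercise in R with the ncdf4 package. A sketch under assumed conventions – the file name is a placeholder and the (lon, lat, time) dimension order is the usual convention, not verified against the PCM files:

              # Area-weighted SH monthly mean of 'tas' from a NetCDF file.
              library(ncdf4)
              nc  <- nc_open("tas_pcm_sresa1b.nc")  # placeholder file name
              tas <- ncvar_get(nc, "tas")           # assumed lon x lat x time
              lat <- ncvar_get(nc, "lat")
              nc_close(nc)
              sh <- lat < 0                         # Southern Hemisphere rows
              w  <- cos(lat[sh] * pi / 180)         # area weight by latitude
              zonal <- apply(tas[, sh, ], c(2, 3), mean)        # lat x time
              sh.mean <- apply(zonal, 2, weighted.mean, w = w)  # per month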

  19. Gary Strand
    Posted Jul 5, 2009 at 7:13 PM | Permalink

    I’m quite sure that some modeling groups submitted data after the official cut-off, mainly because the timelines were so incredibly short. It takes time to do the model runs, it takes time to process the data, and it takes time to analyze data and write papers. The first activity and the last were least impacted by the ambitious timeline; the middle step (data processing) was the part that was squeezed, IMHO.

    We at NCAR were very clear as to which runs were which type (20C3M vs. SRES scenarios, plus the other runs), but managing all that was a big job.

  20. Posted Jul 5, 2009 at 8:16 PM | Permalink

    SteveMcIntyre–
    I’ve noticed similar glitches and mentioned them to Geert Jan from time to time. He takes reports like these very seriously and, when alerted, always looks into his scripts to see whether the problem is on his side. I don’t remember the specifics of all our discussions, but I seem to recall that problems like these individual wild hairs tended to originate on the PCMDI side of the KNMI/PCMDI divide. Of course, even if so, PCMDI may not have introduced the error. The problems could originate upstream at the modeling groups.

    I agree with you that it’s not your job (or my job, or any blogger’s) to track down where errors may have entered the model-data stream. But it may be worth letting Geert Jan know when you find specific errors, because he’ll at least check to make sure it’s not on the KNMI side. After that, one might hope PCMDI would shoulder the responsibility of checking their side. But if funding is insufficient, then PCMDI should simply post a disclaimer indicating that they cannot ensure the accuracy of the model data in their archives.

    • Steve McIntyre
      Posted Jul 5, 2009 at 8:26 PM | Permalink

      Re: lucia (#28),

      I’ve passed comments on to Geert before, agree that he is both responsive and cheerful, and have complimented him on this in prior threads. I will pass these two cases on to him as well.

  21. Gary Strand
    Posted Jul 5, 2009 at 8:25 PM | Permalink

    Just FYI – none of the data available from KNMI or PCMDI is available via http://www.earthsystemgrid.org.

    For NCAR’s submission of PCM and CCSM data, we did as much QC as we could given the constraints we were operating under; considering the ~10,000 GB (yes, 10,000 GB) of data we submitted for AR4, we did a pretty good job.

    I doubt PCMDI introduced any errors into the data they host – the responsibility for them lies with the modeling groups that submitted the data.

    • Geoff Sherrington
      Posted Jul 6, 2009 at 2:57 AM | Permalink

      Re: Gary Strand (#29),

      How much of the 10,000 GB you handled for AR4 was surplus to needs? The reason I ask: the types of individual errors being picked up here are like the one bad apple that spoils the barrel. It is far beyond the capacity of the most devoted individual to maintain high quality over 10,000 GB in a couple of years.

      Thus, doubt tends to arise when quantity overtakes quality. Which topics were the main offenders in padding out to that 10,000 GB?

  22. Gary Strand
    Posted Jul 5, 2009 at 8:30 PM | Permalink

    I’m confident KNMI didn’t change any of the PCM data; the “error” appears to lie in the discontinuity of the sulfate aerosol forcings (at least) between 2099 and 2100. Models tend to react to stepwise jumps in the boundary conditions in interesting ways.

    Likewise, I don’t think KNMI or PCMDI are responsible for the UKMO CM3 March 2106 data; I recommend checking with the UKMO – the contacts are provided in the original tas file’s header info.
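
    To illustrate that last point with a toy model (a one-box energy balance sketch, emphatically not PCM physics): a one-month pulse in the forcing, of roughly the size estimated upthread, produces the jump-then-relax shape seen in Steve’s plot. All parameter values are illustrative assumptions:

        # Toy one-box energy balance model: dT/dt = (F - lambda*T) / C.
        C   <- 1.0e7                    # J/(K m^2), atmosphere-only heat capacity
        lam <- 1.2                      # W/(m^2 K), feedback parameter
        dt  <- 2.6e6                    # s, about one month
        F   <- rep(0, 60); F[31] <- 8   # one-month 8 W/m^2 glitch
        Tm  <- numeric(60)
        for (i in 2:60) Tm[i] <- Tm[i-1] + dt * (F[i] - lam * Tm[i-1]) / C
        plot(Tm, type = "l", xlab = "month", ylab = "anomaly (K)")
        # ~2 K jump at month 31, relaxing back over the following months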

    • Steve McIntyre
      Posted Jul 5, 2009 at 8:46 PM | Permalink

      Re: Gary Strand (#31),

      Gary, as I said in the other post, I’m not sufficiently interested in the problem to sort out who’s responsible for what. Now that PCMDI is aware of the problem via Curt Covey, they can either figure it out or not. In my opinion, it’s something that they should be interested in. Again, if they aren’t interested in figuring it out, then I’ll probably comment further on the matter, but it’s someone else’s responsibility to figure out who’s responsible for the defect if there is one.

      Practically, I haven’t had much luck in getting information from the UK Met Office in the past anyway. I haven’t even been able to get station data from them. Someone from PCMDI or KNMI would have far more authority with the Met Office in getting an explanation than I would.

    • Steve McIntyre
      Posted Jul 5, 2009 at 8:54 PM | Permalink

      Re: Gary Strand (#31),

      Models tend to react to stepwise jumps in the boundary conditions in interesting ways.

      Interesting.

      Here we have a 2 degree change in the NH with an opposite change in the SH. If, as you surmise, that is the result of a discontinuity in sulfate aerosols, then it’s an interesting property of the model – one which might go unnoticed in a run that had been sanitized. I wonder if this property has been reported in the peer-reviewed literature.

    • MarkR
      Posted Jul 8, 2009 at 7:17 PM | Permalink

      Re: Gary Strand (#31),

      Models tend to react to stepwise jumps in the boundary conditions in interesting ways

      Gary, have you considered Gerry Browning?

      A small perturbation of the initial conditions for this equation can lead to instantaneous, unbounded growth, and time dependent systems that exhibit this type of behavior are called ill-posed systems. It is quite surprising how often simplifications that have been made in practice have led to this type of problem. Therefore, any simplification of the original continuum equations should be checked to ensure that the simplified system accurately approximates the continuum solution of interest and is properly posed.

      Link
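
      A standard textbook illustration of what Browning means (not his specific example) is the backward heat equation: u_t = -u_xx admits solutions u(x,t) = a e^{k^2 t} sin(kx) for every wavenumber k, so an initial perturbation of amplitude a grows like e^{k^2 t}. Since k can be taken arbitrarily large, there is no bound on the growth rate, which is precisely what makes an initial value problem ill-posed.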

  23. Gary Strand
    Posted Jul 6, 2009 at 7:18 AM | Permalink

    None was surplus – what was wanted was at least several ensemble members for each class of experiment. IPCC AR4 requested 12 basic experiments:

    1 Pre-industrial control (constant year ~1850 forcing)
    2 Present-day control (constant year ~2000 forcing)
    3 1%/year CO2 increase to doubling
    4 1%/year CO2 increase to quadrupling
    5 Slab ocean control
    6 Instantaneous CO2 doubling with slab ocean
    7 AMIP
    8 20th century (~1850 to ~2000)
    9 SRES A1B scenario
    10 SRES B1 scenario
    11 SRES A2 scenario
    12 Climate change “commitment” scenario

    CCSM provided more ensemble members for the relevant experiments than any other modeling group.

    I don’t believe one missing month, or one discontinuity, “spoils the barrel”.


    • Steve McIntyre
      Posted Jul 6, 2009 at 8:45 AM | Permalink

      Re: Gary Strand (#37),

      I don’t believe one missing month, or one discontinuity, “spoils the barrel”.

      Nor have I said that.

      However, in scripts that I write, I take such oddities very seriously as they are 99.99% of the time a symptom of a problem in what I’m doing. Sometimes it’s easy to fix, but not always.

      My own original studies were in math and small “glitches” are sometimes huge problems and are never just passed over. Think of Andrew Wiles’ experience.

      I find the very idea of March 2106 going AWOL mysterious and am very dissatisfied with the “explanation” that these things just happen.

      From a numerical analysis point of view, why does it happen?

      Recall that I’m actually interested in the math and statistics of things and do not view political decisions as the only relevant analysis metric.

      • Gary Strand
        Posted Jul 7, 2009 at 7:30 AM | Permalink

        Re: Steve McIntyre (#39),

        From a numerical analysis point of view, why does it happen?

        I can’t speak for the UKMO folks, but we’ve had to release data with months set to all missing values not for numerical reasons (i.e., the model became unstable and produced unphysical values) but because the data wasn’t recoverable. As I noted, we’ve had data loss because of faulty magnetic tape with no backup copy. Additionally, as hardware and software evolve, models fall by the wayside, as is the case with PCM. That model is no longer supported and cannot be used on currently-available hardware.

        Also, as I said, machine changes, compiler changes, and OS changes can result in the loss of bit-for-bit reproducibility. Given the way we run our climate models, rerunning a simulation can give different answers – not a different climate, but non-identical values over the length of the run.

        • Steve McIntyre
          Posted Jul 7, 2009 at 7:41 AM | Permalink

          Re: Gary Strand (#46),

          OK

        • Posted Jul 8, 2009 at 11:58 AM | Permalink

          Re: Gary Strand (#46),

          This sounds like something you should try to correct. The only way a program with no user input can get different results on different runs is through some sort of pseudo-random input. I’m sure that you do that to simulate noise, but you should be able to reproduce the identical pseudo-random input, if for no other reason than quality control. Otherwise how do you do regression testing?
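
          In R terms, the distinction is between seeded and unseeded noise (a sketch of the principle, not of how GCM perturbations are actually generated):

              # Seeded noise is bit-for-bit repeatable, the basis of regression tests:
              set.seed(42); a <- rnorm(3)
              set.seed(42); b <- rnorm(3)
              identical(a, b)   # TRUE; without the seed, each run would differ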

        • Soronel Haetir
          Posted Jul 8, 2009 at 8:48 PM | Permalink

          Re: Nicolas Nierenberg (#52),

          Nicolas,

          I’ve never written code to run on supercomputers, but it would not surprise me if it were extremely difficult to replicate earlier machine output on later equipment. And reading about GCMs, it appears that is the class of equipment they require.

          I do think it would be interesting to find out what a high-end but still basic modern server could do in this arena – something like an 8-core Xeon, or a recent 16-core AMD (sorry, the name escapes me), with 256 GB+ RAM – if the teams were to move away from antiquated programming methods. It would be a pain to start from scratch, though.

    • Geoff Sherrington
      Posted Jul 6, 2009 at 8:11 PM | Permalink

      Re: Gary Strand (#37),

      Thank you for your openness. I guess that I’m more political than Steve at #39, but I do feel uncomfortable that such a volume of work was distilled into a Summary for Policy Makers before the work had finished; and that the brief summary might not have done justice to the hard work that went into the 10,000 GB.

  24. James P
    Posted Jul 6, 2009 at 8:10 AM | Permalink

    I don’t think anyone wants to run 7-year-old hardware with 7-year-old software.

    My experience of 7-year-old hardware is that 7-year-old software is what it runs best! 🙂

  25. Tim
    Posted Jul 6, 2009 at 10:30 AM | Permalink

    It’s interesting that this is what I logged onto today, since I spent the morning re-surfing some Hansen Y2K info. Primarily I was surfing the global warming pages on Wikipedia and found them to be locked! The global warming controversy page was even more one-sided, and can’t be changed. Even Steve’s Wiki entry stated (in relation to the GISS corrected data) that “the rankings of the globally warmest years remained unchanged”! No source on that one, as you could suspect. I’ve registered with the site to see if it allows me any more freedom to post challenging entries, but wondered if anyone else had experience or advice in this area? I can only imagine it’s uphill, as usual.

    • Soronel Haetir
      Posted Jul 6, 2009 at 9:06 PM | Permalink

      Re: Tim (#40),

      My advice is to not even try adding any balance to WP. Your changes will get reverted so quickly it will make your head spin, and no matter how much effort you put in, it will only end in tears. The entire topic is under the control of True Believer admins.

  26. Shihad
    Posted Jul 6, 2009 at 7:08 PM | Permalink

    Hi Gary,

    I was wondering if I could get your opinion on something to do with the models. It’s not really relevant to this post but is to models in general.

    I was wondering, in your opinion, how effective are the models you work with in replicating things like clouds (more specifically Sc or low-level clouds) and their variations over time?

    Do the models actually preclude low climate sensitivity, i.e. 2xCO2 leading to at most 1 C? Or is it possible to have GCMs that are consistent with a climate system only weakly sensitive to CO2?

    Also, do you personally believe it’s possible that the models could be overly sensitive, or are you relatively happy with the IPCC’s selected model result of 2xCO2 equating to roughly 3 C?

    Thanks
    Steve

  27. Gary Strand
    Posted Jul 6, 2009 at 8:09 PM | Permalink

    Shihad, the questions you have are best answered by the Journal of Climate special issue on CCSM3, at

    http://www.ccsm.ucar.edu/publications/jclim04/Papers_JCL04.html

  28. Shihad
    Posted Jul 7, 2009 at 5:13 AM | Permalink

    Hi Gary,

    Thanks for the link. I’ve had a quick look at the first document from the link.

    It’s interesting that the climate sensitivity from CCSM1 to CCSM2 to CCSM3 increases from 2.0 K to 2.2 K to 2.7 K, respectively.

    This sensitivity change appears to be largely due to changes in the way the model deals with low cloud cover.

    The notable behaviors of CCSM3 seem to be a reduction in low cloud cover over certain regions, and a slower rate of low-cloud formation than in CCSM2.

    The reduction in low cloud cover would be a positive feedback I presume and one of the reasons for the increase in sensitivity. How do we know these kinds of feedbacks actually exist in nature? Do we have the existing empirical evidence over a sufficient period of time to support this?

    I have read a paper by Spencer and Braswell that purports to show the relationship may actually run the other way, i.e. that it is the reduction in low cloud that leads to the warming, not the warming that leads to the reduction in low cloud cover. If, and I say if, this were true, would this not give CCSM3 a positive bias in climate sensitivity?

    Correct me if I’m wrong but another paper by Caldwell and Bretherton found that low cloud liquid water content increased in response to a warming ocean. I would have thought this would be consistent with negative feedbacks not positive feedbacks.

  29. Posted Jul 7, 2009 at 8:09 AM | Permalink

    Re 45. Research on cloud structure continues. See http://www.eol.ucar.edu/projects/vocals/
    Soon we will have an idea about the accuracy of some of the parameters — better late than never.

    JF

  30. Posted Jul 7, 2009 at 9:00 PM | Permalink

    Re 49

    Better _very_ late than never…

    JF

  31. Edward
    Posted Jul 8, 2009 at 9:16 AM | Permalink

    Tim #40
    Any comment that you add to any Wiki topic remotely related to global warming, or even the biographies of those involved in the discussion, will be immediately reverted by Connolley and Petersen. There really is an indoctrination going on, with school children starting in kindergarten being taught about “global warming” and the plight of the polar bears. I don’t see new “skeptics” emerging after 20 years of this brainwashing.
    Thanks
    Ed