GISS Model E Data

Steve Mosher provides the following recipe for getting GISS Model E results:

OK, getting ModelE data.

Start here: http://data.giss.nasa.gov/

See the link for climate simulations of 1880-2003. Click that:

http://data.giss.nasa.gov/modelE/transient/climsim.html

Here you will see the link to the paper and all the readme material I know of.

Now, to get the data, look at Table 1.

See line #4, ALL FORCINGS. These are the simulations that include all forcings
(lines 1-3 contain individual forcings, like GHG only, for example, or volcanic).

On the left-hand side of the table you will see links for the forcings. On the RIGHT
you will see a list of RESPONSES.

Select "lat time":

http://data.giss.nasa.gov/modelE/transient/Rc_jt.1.11.html

Now you will see a pull-down menu.

See the first box, Quantity. Pick surface temp (there are others as well).

Mean Period: pick 1 month to get rid of the running mean.

Time interval: pick what you like.

Base period: I selected 1961-1990 because I wanted to compare ModelE to HadCRUT.

Output: formatted page with download links.

Show the plot and then get the data:

http://data.giss.nasa.gov/work/modelEt/time_series/work/tmp.4_E3Af8aeM20_1_1880_2003_1961_1990-L3AaeoM20D/LTglb.txt
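
If you want to work with the download programmatically, here is a minimal Python sketch. It assumes the file was saved locally as LTglb.txt and that it keeps the two-column layout (fractional year, anomaly in deg C) visible in the comments below; the /work/ URL above is a temporary scratch path, so don't rely on it persisting.

```python
# Minimal sketch: read a GISS "lat time" global-mean export (two columns:
# fractional year, anomaly in deg C) and print annual means.
# Assumes the file was saved locally as "LTglb.txt"; adjust skiprows if your
# download carries header lines.
import numpy as np

data = np.loadtxt("LTglb.txt")
years = np.floor(data[:, 0]).astype(int)
anom = data[:, 1]

for yr in np.unique(years):
    print(yr, round(float(anom[years == yr].mean()), 3))
```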

Update: John Christy has sent in the following plot showing GISS Model E versus UAH, noting his regret that GISS had not reported their run a bit further out.

[Figure: model_31.jpg, GISS Model E versus UAH (John Christy)]

106 Comments

  1. steven mosher
    Posted May 9, 2008 at 10:57 AM | Permalink

    If you want data from all the AR4 simulations and all the models, you have to ask nicely here. You do have to justify your request.

    http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php

  2. Anthony Watts
    Posted May 9, 2008 at 10:57 AM | Permalink

    Hmmm…from Mosh’s output link, 1934 shows all negative anomalies

    1934.041626 -0.3682092950E-01
    1934.125000 -0.4630460963E-01
    1934.208374 -0.1112723723
    1934.291626 -0.1299406737
    1934.375000 -0.9522772580E-01
    1934.458374 -0.1219040602
    1934.541626 -0.1092894450
    1934.625000 -0.9290801734E-01
    1934.708374 -0.1363567114
    1934.791626 -0.1901973784
    1934.875000 -0.9777686745E-01
    1934.958374 -0.1535656303

    1998 shows all positive anomalies:

    1998.041626 0.2108096778
    1998.125000 0.3365091980
    1998.208374 0.3034206033
    1998.291626 0.3975079656
    1998.375000 0.2881736755
    1998.458374 0.2785674930
    1998.541626 0.2753267586
    1998.625000 0.3045601845
    1998.708374 0.2492422760
    1998.791626 0.2611250877
    1998.875000 0.3535256386
    1998.958374 0.3200100660

    Even with the 1961-1990 base period (which trended cool) you’d think that 1934 would show a positive temperature anomaly.

    But just eyeballing the data, it looks to me like the same sort of hinge point that occurs when GISS adjusts a single station: the past cools while the present remains unchanged.

    But maybe I’ve missed something and the model is right and the instrumental surface temperature record for 1934 is totally wrong? 😉
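
    A quick arithmetic check of what those monthly listings imply as annual means (a minimal sketch; the numbers are rounded from the listing above):

    ```python
    # Annual means implied by the monthly anomalies quoted above (rounded values).
    vals_1934 = [-0.03682, -0.04630, -0.11127, -0.12994, -0.09523, -0.12190,
                 -0.10929, -0.09291, -0.13636, -0.19020, -0.09778, -0.15357]
    vals_1998 = [0.21081, 0.33651, 0.30342, 0.39751, 0.28817, 0.27857,
                 0.27533, 0.30456, 0.24924, 0.26113, 0.35353, 0.32001]

    print("1934 mean anomaly: %+.3f" % (sum(vals_1934) / 12))   # about -0.110
    print("1998 mean anomaly: %+.3f" % (sum(vals_1998) / 12))   # about +0.298
    ```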

  3. steven mosher
    Posted May 9, 2008 at 11:21 AM | Permalink

    AW,

    If you look on the Douglass thread you will see where I plotted this versus the observations.
    I'll do more later, but the early 20th century warming is one thing the model misses.

    So, for the first 30 years, 1880-1910, the model shows an uptrend while the observations trend down. From 1910-1940, the model misses the '30s warming. From 1940-1970 the model
    matches the flat trend pretty well, and from 1970-2000 it captures the warming trend.

    So, one way to look at this is that the model misses both the natural cooling periods and the
    natural warming periods.
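
    A sketch of that sub-period comparison in Python; the filenames are hypothetical placeholders for annual-mean model and HadCRUT series on a common calendar (two columns: year, anomaly).

    ```python
    # Compare least-squares trends of model and observations over the sub-periods
    # discussed above. The filenames are hypothetical; each file is assumed to
    # hold two columns: year, annual-mean anomaly (deg C).
    import numpy as np

    model = np.loadtxt("modelE_annual.txt")     # hypothetical local file
    obs = np.loadtxt("hadcrut_annual.txt")      # hypothetical local file

    def trend(series, start, end):
        """Least-squares slope in deg C per decade over [start, end]."""
        yrs, vals = series[:, 0], series[:, 1]
        m = (yrs >= start) & (yrs <= end)
        return 10.0 * np.polyfit(yrs[m], vals[m], 1)[0]

    for start, end in [(1880, 1910), (1910, 1940), (1940, 1970), (1970, 2000)]:
        print("%d-%d  model %+.2f  obs %+.2f C/decade"
              % (start, end, trend(model, start, end), trend(obs, start, end)))
    ```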

  4. Euser
    Posted May 9, 2008 at 11:35 AM | Permalink

    Another way to get model E data is to run it yourself: http://simplex.giss.nasa.gov/gcm/

  5. Sam Urbinto
    Posted May 9, 2008 at 11:44 AM | Permalink

    It’s not warming or cooling, it’s a change in the anomaly direction or anomaly trend direction, and which side of the zero line it’s on.

    But a model (although it seems ensembles are more like reality than individual models) doesn't have to match events, just start and end points. (Or in other words, I don't care exactly how you get from Miami to Boston, just that you get there on time and under budget.)

    I want to see how things go when the base period changes to 1981-2010!

    As I've mentioned elsewhere, the thing I'm curious about is the base period. Rather than giving us a number on the GISTEMP absolute temperature page or in the data itself, the number given is "a best guess estimate" which appears to come from the "most trusted models". And that is 14 C as a global mean.

    Is the base period itself a product of ModelE? Where’s the actual data for it if not?

  6. Wondering Aloud
    Posted May 9, 2008 at 11:49 AM | Permalink

    It looks like nearly every month was a negative anomaly for the first 50 years or so. How much is real and how much is due to the long series of unlikely "adjustments" that are in the data set?

  7. kuhnkat
    Posted May 9, 2008 at 1:07 PM | Permalink

    Sam,

    You don’t care if I get run over and killed on the way as long as my body is delivered to the destination on time and under budget??

    HAHAHAHAHAHAHAHA

    No reason you should care, except, with the models, the idea that it doesn't matter how we get to an intersect at some point in the future is a non sequitur. How we get there is most of it. The fact that we get to ONE particular intersect at a particular time is relatively worthless. If they got most outputs right (multi-elevation temps, north and south ice cover…) at that point in the future, it would be more interesting, but I still claim that HOW they get there is still very important.

    For instance, assume they finally have a single model ensemble that really seems to track past climate and weather. This model is a black box type, all dials, knobs, and switches. It predicts that we will have a large sea level rise in 25 years. WHAT do we try and mitigate??

    In their current models the magnitude of CO2 forcings could in reality be a composite of small magnitudes from a number of other contributors. We mitigate CO2, IT HAS NO EFFECT ON THE SEA LEVEL RISE!!!!

    Knowing how we get there gives us information on whether values and magnitudes are attributed correctly. Still not certainty, but, closer.

    The idea that it all averages out can kill us!!! Saying the stresses in an airframe average out doesn't tell you the wing fell off…!!!

    The ensembles diverge substantially. The AVERAGE looks good. This should tell you that the AVERAGE is NOT useful. The fact that most models are biased, compared to the observations, generally in the same direction, and a small subset keeps them in the game, should again show that they are not useful other than continued research.

  8. Sam Urbinto
    Posted May 9, 2008 at 2:10 PM | Permalink

    kuhnkat: “It predicts that we will have a large sea level rise in 25 years. WHAT do we try and mitigate??”

    Nothing. We take the same model, plug in the conditions 50 years ago and see if it tells us what today is. Then 25 years. Then 100. Whatever.

    Although I'm sure you're aware I don't accept that it's been convincingly shown that the anomaly trend rise is anything but an artifact of how the instruments change over time, or that it's an accurate gauge of the Earth's energy levels. Not that it's not, just that it hasn't been shown. And I'm also rather dubious of the idea of a "global temperature" in the first place. So what the models say is uninteresting to me by and large. 🙂

    I’m saying it’s 1952 and models start a run using known state of X for 1952 and when it finishes shows X+10 at 1994. If the purpose of my model is to get an idea of what X is going to be in 1994, I’m not really concerned what it shows along the way exactly. (Obviously though, if the model says X+20 in 1953 but it’s really X+1 then I am concerned.) I’m talking about reasonably; like a detour to one city instead of another one along the way to Boston, or picking one route versus another; not unreasonable as in I start going towards Mexico. Or that I end up at the destination a corpse. 🙂

    However, you bring up an interesting point. It's 1952 and models start a run using the known state of X in 1910, and when it finishes it shows X+10 at 1994. In the course of building the model, I'd have to check and see if every year (or however long) was reasonably close to the actual state of X during that time period and ended up reasonably close in 1952. Then maybe I can trust it. (What's "reasonable"? That every 1/42nd of the trip I'm around 1/42nd of the way to Boston, after any abnormal events that can be made up for are taken into account.)

    Or in other words, obviously my model won't be able to take into account an extreme event in 1918 that calmed back down to normal by 1925; I have to take that into account.

    But more here. Let's say I'm reasonably close to reality 1910-1952. What else could I do? Shouldn't the model, if I plug in today's known state of X (let's say X+5), be able to do the same thing back to 1910 and end up where it actually had been at X where it started? I don't know if that's unreasonable to expect or not.

    But as we’ve shown elsewhere, the models (at least tropical troposphere) seem to be missing the actual chaotic (random, complex, intricate) nature of weather in them. So take that bias/uncertainty for what it’s worth.

  9. henry
    Posted May 9, 2008 at 3:07 PM | Permalink

    Just curious – instead of seeing what the DIFFERENCES are, can we see what the COMMON is?

    What I mean is, with all the adjustments, it appears that the TREND is being preserved. Whenever the current anomaly is determined, adjustments are made to the past to keep the present trend within the model range.

    I’m just wondering, if a time-line of the charts were made, if the trend stays constant.

    Maybe I’m not making sense (I work with electronics, differential amps, and we can see what’s COMMON to two signals).

  10. steven mosher
    Posted May 9, 2008 at 3:19 PM | Permalink

    RE 5. There is an option to get the mean of the ensemble as opposed to ensemble minus average of base period. I tried it, no joy. As you know, Sam, I think they should just serve
    it up RAW and let others smooth and anomalize as they see fit.

    When it comes to testing, the base period is inconsequential.
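
    Re-baselining raw output is trivial once you have the series, which is one reason the base period doesn't matter for testing; a minimal sketch (array names hypothetical):

    ```python
    import numpy as np

    def anomalize(years, series, base_start=1961, base_end=1990):
        """Return `series` as anomalies relative to its mean over the base period.
        Subtracting a constant shifts the whole curve but leaves every trend,
        difference and correlation unchanged."""
        base = (years >= base_start) & (years <= base_end)
        return series - series[base].mean()

    # e.g. anomalize(yrs, raw_ensemble_mean) with hypothetical numpy arrays
    ```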

  11. steven mosher
    Posted May 9, 2008 at 3:26 PM | Permalink

    RE 9.

    I'm in the process of doing those charts. They are mostly done (when I can sneak the time). THAT SAID, I used an approach that was crude. That will, of course, stimulate others to do something more interesting.

    But let's start with the WHOLE series: ModelE output versus HadCRU data.

  12. steven mosher
    Posted May 9, 2008 at 3:30 PM | Permalink

    RE 11. But the HadCRU observations have error; does ModelE lie within the error bounds
    of the surface observations or outside them? A gross look at that question:
    showing the upper and lower limits of the HadCRU error bands and ModelE.
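
    One crude way to put a number on that question, assuming the model, the observations, and a symmetric observational uncertainty have already been put on a common monthly (or annual) axis; the names are hypothetical:

    ```python
    import numpy as np

    def fraction_inside(model, obs, obs_err):
        """Fraction of time steps at which the model falls within obs +/- obs_err.
        All three arguments are numpy arrays on the same time axis (hypothetical)."""
        inside = (model >= obs - obs_err) & (model <= obs + obs_err)
        return float(inside.mean())
    ```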

  13. Steve McIntyre
    Posted May 9, 2008 at 4:36 PM | Permalink

    John Christy has sent in a graphic showing GISS Model E versus UAH; I've added it to the thread.

  14. Posted May 9, 2008 at 4:44 PM | Permalink

    13 (SteveM): what causes those sharp drops of the model values from time to time? Are they adjustments downwards to make the model fit the observations every so often? If there were no drops, the model would march way up into the sky…

  15. steven mosher
    Posted May 9, 2008 at 5:31 PM | Permalink

    re 14.

    The downward, shot-like noise you see in the model response is most likely a consequence of volcanic forcing.

    That's my interp, based on looking at the various runs.

    If you like, I'll show you how to get their estimation of the temp response
    due to solar forcing only.

  16. Posted May 9, 2008 at 5:39 PM | Permalink

    15 (Mosh): yeah that looks like Krakatoa, Agung and the rest. I should have seen that directly. Now, where is that solar thing…

  17. steven mosher
    Posted May 9, 2008 at 5:49 PM | Permalink

    RE 13.

    Yes! I was meaning to work my way forward to the MSU data, but I wanted to start
    with the surface and see what kind of skill ModelE had with that metric.

    From 1880 to 1940 I think their skill is white-belt skill, not black belt. Hehe.

    From 1940 to present, ModelE surface skill is (subjectively) pretty good.

    Now, skill at T3? That's above my pay grade.

  18. steven mosher
    Posted May 9, 2008 at 5:55 PM | Permalink

    re 16. Leif.

    OK, see my instructions above.

    I'll link you to the RESPONSE for solar forcing only.

    That is, ModelE run with no forcing but solar:

    http://data.giss.nasa.gov/modelE/transient/Rc_jt.1.06.html

    So go to the table and select solar forcing.

    If you follow the instructions you can get the MODEL RESPONSE to solar forcing only

    http://data.giss.nasa.gov/work/modelEt/time_series/work/tmp.4_E3SOaeoM20_1_1880_2003_1951_1980-L3AaeoM20A/LTglb.txt

  19. Greg Meurer
    Posted May 9, 2008 at 6:27 PM | Permalink

    16 (Leif):

    The "solar thing" is Lean 2000, as described in the Hansen et al. 2007 paper on the results of the Model E runs. This is discussed in the paper, although they were aware of later work, including, I believe, Wang, when the model was run.

    Greg

  20. Posted May 9, 2008 at 6:38 PM | Permalink

    19 (Greg): But everyone [including Judith Lean] knows that her TSI reconstruction is not correct, so why is it still being used? Unless it is because without it, there is no explanation for the increase in temperature before the take-off of CO2. But, if that is so, then [snip – Leif: banned word]

  21. steven mosher
    Posted May 9, 2008 at 6:45 PM | Permalink

    UC. Attribution studies. With the ModelE database we have, we cannot simply select a
    simulation run of all forcings except human GHG. Their design of experiments wasn't a full
    factorial WRT forcings. So I can take the temp response of all forcings and subtract out the temp response of GHG only to approximate the response to "non-GHG" forcings. Kinda ugly.

    I wonder if they did it this way?
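
    A sketch of that subtraction, using two "lat time" downloads saved locally (the filenames are hypothetical); as a later comment notes, this only approximates the non-GHG response if the responses add roughly linearly.

    ```python
    import numpy as np

    # Hypothetical local copies of two "lat time" exports, each two columns
    # (fractional year, anomaly): the all-forcings response and the GHG-only response.
    allf = np.loadtxt("LTglb_all_forcings.txt")
    ghg = np.loadtxt("LTglb_ghg_only.txt")

    assert np.allclose(allf[:, 0], ghg[:, 0]), "runs must share the same time axis"
    non_ghg_approx = allf[:, 1] - ghg[:, 1]   # crude, linearity assumed
    ```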

  22. Greg Meurer
    Posted May 9, 2008 at 7:11 PM | Permalink

    20 (leif):

    The discussion of the solar forcing and the reasoning for using Lean 2000 is on page 670 of the paper. The explanation seems somewhat convoluted, and is beyond me.

    I will not speculate on motives, but yes, the clear effect is to explain the relatively cool late 19th and early 20th century temperatures. The effect in the model can be seen in the reference given by Steve Mosher in 18.

    Greg

  23. Frank K.
    Posted May 9, 2008 at 7:11 PM | Permalink

    Ahh ModelE. That great GCM with some of the poorest documentation I’ve ever seen. Let me know when someone finds out what differential equation(s) Gavin’s code is actually solving…

  24. Rattus Norvegicus
    Posted May 9, 2008 at 7:59 PM | Permalink

    Well Frank, you can always read the source code which might answer your questions. Of course if you can’t read FORTRAN, then you probably should not be criticising the model.

  25. Peter Hartley
    Posted May 9, 2008 at 8:10 PM | Permalink

    Re #24 I thought this enterprise was supposed to be a science. If so, this statement

    Of course if you can’t read FORTRAN, then you probably should not be criticising the model.

    is certainly out of place. The test of a theory is supposed to be its correspondence with the evidence. You do not have to be able to read FORTRAN to observe how well it does that any more than one needs to know all the details of how a piece of scientific equipment works in order to do laboratory experiments.

  26. Posted May 9, 2008 at 8:58 PM | Permalink

    #24: I work with Fortran codes all the time, and I am not particularly impressed by this one. The build process is so baroque and awful that I'm forced to wonder if anyone outside the group who developed it has ever actually run it. For instance, it *REQUIRES* that everything be located under /u/cmrun/modelE rather than the current working directory, and you have to modify the source in several separate places to make it run anywhere else. Their OpenMP support using the Portland Group compilers on Linux is almost certainly broken; AFAICT, they use the wrong option to enable it. Making it work with GNU GFortran (a decent if not particularly speedy Fortran 95 compiler with OpenMP support, included with most current Linux distributions) is not exactly straightforward either.

    If one of my users handed me this and asked me to port it to our systems for them, it would take me several days to do it. (To put this in perspective, I can usually port a reasonably well written software package to our systems in a couple hours.)

  27. Frank K.
    Posted May 9, 2008 at 9:03 PM | Permalink

    Re: 24

    I can certainly read FORTRAN (and C, C++, …). Can you divine from the FORTRAN source code what equations are being solved, how they are being solved (i.e. which numerical algorithms have been implemented), how the boundary conditions have been formulated, the source terms, the tracer equations, etc.? I have searched their site and have yet to find anything very comprehensive. Please let me know if you find something…

    For a stark contrast, please observe the outstanding documentation at the NCAR website:

    http://www.ccsm.ucar.edu/models/atm-cam/

    In particular…

    http://www.ccsm.ucar.edu/models/atm-cam/docs/description/

  28. Andrew
    Posted May 9, 2008 at 10:31 PM | Permalink

    Just eyeballing, I can see that the point that Richard Lindzen and others have been going on about, that modeled internal variability is unrealistically small, is absolutely correct. Look at the "wiggles" in the actual data, sharply up and down (and up and down, and up and down), versus the models (pathetically static).

    12 (mosher): Any idea what that sharp downturn circa 1905 is? The models seem to miss it!

  29. Posted May 10, 2008 at 12:57 AM | Permalink

    Steven (21)

    So I can take the temp response of all forcings, and subtract out the temp response of all GHG only to approximate the response to “non ghg” forcings. kinda ugly.

    Hmm, works if the forcing-temperature system is linear. MBH98 Fig. 7 makes that assumption (BTW, http://www.siam.org/journals/categories/02-004.php is still open).

    What if the errors in GCM outputs are additive and Cauchy distributed, the simulations are not ergodic, and the weather noise is 1/f? 😉

  30. Posted May 10, 2008 at 4:05 AM | Permalink

    30 (Andrey): Very nasty, Andrey, very nasty. But it is not about me and my correction of sunspot numbers [which by the way goes the other way: early Wolf numbers are too small]. This graph shows the Lean (2000) reconstruction [brown curve] compared to later reconstructions [Wang et al {incl. Lean}, Krivova et al., Premminger et al.]:

  31. Spence_UK
    Posted May 10, 2008 at 4:16 AM | Permalink

    UC, #29

    Prof. Koutsoyiannis has recently been asking this exact question: how well do the models capture the scaling behaviour of the real climate, assessed via the standard deviation at different scales? (Albeit at a regional, rather than global, level.)

    Assessment of the reliability of climate predictions based on comparisons with historical time series

    (Click on “Presentation” to get through abstract page)
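
    The flavor of that check, i.e. how the standard deviation falls off as you average over longer and longer blocks, can be sketched in a few lines; the white-noise series here is only a stand-in for a model or observed anomaly series.

    ```python
    import numpy as np

    def std_at_scale(series, k):
        """Standard deviation of non-overlapping k-step block averages."""
        n = len(series) // k
        blocks = np.asarray(series[:n * k]).reshape(n, k).mean(axis=1)
        return blocks.std(ddof=1)

    rng = np.random.default_rng(0)
    series = rng.standard_normal(120)          # stand-in for 120 years of anomalies
    for k in (1, 2, 5, 10, 20):
        # white noise falls roughly as 1/sqrt(k); a Hurst-type series falls more slowly
        print(k, round(float(std_at_scale(series, k)), 3))
    ```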

  32. Spence_UK
    Posted May 10, 2008 at 4:21 AM | Permalink

    Postscript added to note on #32

    When I say “answered this exact question”, I mean of course “addressed a slightly different, but closely related question”

    🙂

  33. Andrey Levin
    Posted May 10, 2008 at 5:27 AM | Permalink

    Very nasty, Andrey, very nasty.

    You got it right. Keep proper academic standards, or else…

  34. Posted May 10, 2008 at 6:10 AM | Permalink

    34 (Andrey): And picking a data series that you know is not correct because it makes your model look better is proper academic standard?

  35. steven mosher
    Posted May 10, 2008 at 6:23 AM | Permalink

    26 Troy, have a go at porting GISTEMP. Its source is available on the pages I linked:
    10K lines of Fortran, with some Python thrown in for good measure.

  36. steven mosher
    Posted May 10, 2008 at 6:25 AM | Permalink

    re 28 andrew,

    No idea. The model gets the slope of the first 30 years wrong, missing the downtrend
    from 1880-1910; then it misses the warming from 1910-1940; but from 1940 on it hits
    the mark. Shrugs.

  37. steven mosher
    Posted May 10, 2008 at 6:30 AM | Permalink

    re 29. Ya, I suppose I could take all the individual responses, add them, and see if they
    match the total.

  38. Steve McIntyre
    Posted May 10, 2008 at 6:33 AM | Permalink

    #30. Leif, I recently looked at the solar reconstructions in IPCC 1990 and was struck by the fact that they were very small-amplitude; my impression was that they were along the lines of your position. The very large-amplitude theories seem to have arisen in the mid-1990s. Is this a correct perception?

  39. steven mosher
    Posted May 10, 2008 at 6:41 AM | Permalink

    re 31 Nice paper, I liked the fact he looked at Hurst

  40. Posted May 10, 2008 at 6:42 AM | Permalink

    38 (SteveMc): Basically correct. There are some details:
    1) the solar cycle variation [from min to max for a given cycle] before 1947 is a bit too small
    2) the ‘background’ variation on which the solar cycle rides does not exist, so TSI at each minimum falls to about the same value [very small variance still possible – the Sun is messy]

    It is basically 2) that is the problem.

  41. Posted May 10, 2008 at 6:58 AM | Permalink

    38 (SteveMc): The increasing background was triggered by a 1999 paper by Lockwood et al. in Nature [extending some work I did in the 1970s], where they claimed that the sun’s coronal magnetic field [and by implication the general background field of the sun] had more than doubled over the 20th century. Everybody doing TSI reconstructions then scrambled to adapt their values to be ‘consistent’ with that claim. Mainly due to my own work over the past six years [where I have realized that I was wrong about the doubling that I suggested in 1977], it is now becoming accepted that the doubling did not occur and that the sun’s basal background magnetic field has varied a lot less, if at all. Therefore, the TSI crowd is now scrambling to ‘undo’ the damage done by the perceived doubling [without admitting that they were wrong – tough job, but some people are skilled at this]. As usual, it will take a solar cycle’s worth of time before that filters down to the ‘users’ of TSI, especially if the old, erroneous values ‘fit better’ with someone’s pet theory or model.

  42. Harold Pierce Jr
    Posted May 10, 2008 at 7:12 AM | Permalink

    FWIW

    To determine the effect of weather and any other forcings except sunlight on the variability of surface temperature at the Quatsino (BC) weather station, I computed the mean Tmax, Tmin and Tmean for Sept 21 for the years 1895 to 2007. Presumably, the most important factor affecting daily surface temperature would be clouds, the climatologist's worst nightmare.

    The results are: Tmax=17.0 +/- 3 K, Tmin=8.5 +/- 1.5 K and Tmean=12 +/- 2. The error is the classical average deviation. The means were rounded to 0.0 or 0.5 K since the temperature is reported to +/- 0.5 K.

    If you buy my argument, for the special price of only $9.52 (+GST), that the variability in surface temperature is due mostly to clouds, then we have a fairly good quantitative estimate for the value of this effect, and it is just huge. Most likely, it is just pure luck that Tmax and its AD are exactly twice Tmin and its AD.

    Now suppose that we do this type of analysis for every remote weather station on the planet, use only data for the equinoxes and the solstices (or any other set of four equally spaced days), and obtain the same or very similar results. We would be forced to conclude that the natural variability imposed on surface temperature by local weather is so great that it would be extremely difficult, if not impossible, to detect any other climate forcings unless their effects were quite large and persistent enough to override that natural variability.

    In the following post I shall present data for the years 1895-1907 vs 1990-2007.
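
    For what it's worth, the statistic described above (mean and classical average deviation of one calendar day sampled across many years) is simple to compute; a minimal sketch with made-up inputs:

    ```python
    import numpy as np

    def day_stats(temps, resolution=0.5):
        """Mean and classical average (absolute) deviation of one calendar day's
        temperature sampled across many years, both rounded to the reporting
        resolution (0.5 deg, as above)."""
        t = np.asarray(temps, dtype=float)
        mean = t.mean()
        avg_dev = np.abs(t - mean).mean()
        snap = lambda x: round(x / resolution) * resolution
        return snap(mean), snap(avg_dev)

    # e.g. day_stats([13.0, 14.0, 16.0, 15.5]) for a handful of Sept 21 Tmax values
    ```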

  43. Francois Ouellette
    Posted May 10, 2008 at 7:16 AM | Permalink

    #12 Steven

    One must also remember that the forcing estimates have uncertainty too. So, apart from the model’s apparent “noise”, different runs covering the range of possible forcings should be made. I’m not talking about multiple runs with the same forcings here. The result would be a “band” of temperatures, that could be compared with the band resulting from observations. I wonder if this has ever been done… Considering that the uncertainty on aerosols is gigantic, I suspect that the result, pre-1980, would be that any comparison between model and observations is meaningless: the bands would be so large as to encompass anything.

    In the end, it's easy to see that the models amount to not much: linear GHG forcing with water vapor feedback, volcanic forcing tuned to match the known events, and aerosols added in to get the best match. Not to mention the "noise" supposed to represent "natural" variability. A 20-line program could be just as good. My thought is that the complexity of models really hides quite simplistic, and somewhat unproven (if not unprovable), physical assumptions.

    Don't get me wrong: the models should be a good tool to try to understand major climatic phenomena. As such, they should be used to test and refine physical models, by comparing results with observations. But here they are used to demonstrate one single (and simple) effect: that relating global GHG forcing to global mean temperature. You can easily get the right answer for the wrong reasons. It's in the details that you know if your model is good or not.

    The best example is the natural variability of climate versus the "noise" of the model: clearly we're missing something here. If we can't capture the natural variability, there is something fundamental lacking in our understanding. How do we know that our lack of understanding of natural variability on a short time scale doesn't also mean that we don't understand natural variability on long (multi-decadal) time scales? It seems to me that those are the sort of questions that should be answered before proclaiming that the models can be extrapolated into any sort of parameter space that has never been observed before (like doubling CO2). Extrapolation is the most dangerous thing to do with an imperfect model.

  44. steven mosher
    Posted May 10, 2008 at 7:30 AM | Permalink

    re 43, yes, Willis has a post over on the Douglass thread that details the simple 6-parameter
    model the IPCC uses. As for extrapolation, I agree. Danger, Will Robinson!

  45. Smokey
    Posted May 10, 2008 at 8:55 AM | Permalink

    It appears that someone could easily replicate the rising temps since around 1900 by overlaying with this:

  46. Harold Pierce Jr
    Posted May 10, 2008 at 10:04 AM | Permalink

    Here is the data for 1895-1907 vs 1990-2007.

    Year   Tmax   Tmin      Year   Tmax   Tmin
    1890   00.0   00.0      1990   25.0   10.0
    1891   00.0   00.0      1991   19.5   07.0
    1892   00.0   00.0      1992   18.0   12.5
    1893   00.0   00.0      1993   17.5   06.0
    1894   00.0   00.0      1994   24.5   10.0
    1895   13.0   05.0      1995   21.5   11.5
    1896   14.0   08.0      1996   14.5   06.5
    1897   16.0   10.5      1997   18.5   08.5
    1898   15.5   13.0      1998   20.5   10.5
    1899   15.5   09.5      1999   19.5   08.5
    1900   14.4   12.0      2000   19.5   09.5
    1901   15.0   10.0      2001   14.5   11.5
    1902   15.5   08.0      2002   14.5   10.0
    1903   13.5   05.5      2003   15.0   09.0
    1904   14.0   04.0      2004   15.5   11.5
    1905   14.5   08.5      2005   14.5   10.0
    1906   15.5   05.5      2006   14.0   10.0
    1907   14.0   08.0      2007   18.5   09.0

    I had to post the data as above because I don't know how to paste in a table in text form and then keep it from getting all jumbled up by word wrap. As a matter of fact, I was copying the data right off of notebook paper. ATTN: Steve, if you have the time I would be appreciative if you could put the data into a proper table.

    First you notice that the interval 1990-2000 was hot, hot, hot! But starting in 2001, Tmax dropped an astonishing 6 K from the mean Tmax value of the 1990-2000 interval. Not only did "warming" at this site cease after 2000, but temperatures for this short period were about the same as those for the 1900-1907 interval, and most astoundingly the mean Tmax for both intervals was _exactly_ the same: 14.5 deg C!

    Now listen up everybody. I don't want to hear any squeaks, squeals, squawks, hoots and howls about cherry picking, because that is not what this little exercise is about. It's about what we chemists call analytical methods development. I'm trying to develop a simple method of analysis of temperature records that will yield useful information without squandering megawatts of electricity or wasting a lot of time and human effort.

    For this study I used the late John Daly's criteria for station selection, which are (1) remoteness, (2) a physically well-maintained facility, (3) a long continuous station record, (4) compliance with WMO standards, and (5) meticulous record keeping. I also used Roger Sr's idea that standard temperature metrics should be analyzed separately.

    While I was entering the data, I pointed out to my son that the 1990-2000 decade was really hot. He said, "Maybe that is due to the Gulf War and all the oil wells that were set on fire." Could it be that these burning oil wells produced such an enormous amount of black carbon that it started a global warming? And maybe after a decade all of this black carbon has finally been washed out and the climate is settling back to normal.

    A really important motivation for eventually showing global warming never really occurred is: I don't want to pay any BC carbon taxes!

  47. EW
    Posted May 10, 2008 at 11:04 AM | Permalink

    30 (Leif’s reconstruction)

    It is interesting how the TSI does not correlate with more local temperatures. The Central European temperatures (shown as the Prague Klementinum graph, but Vienna and Bern are similar) show quite hot temperatures at the end of the 18th century, and the cold came almost 20 years after the Dalton minimum, between 1840 and 1860.

  48. Basil
    Posted May 10, 2008 at 11:15 AM | Permalink

    #11, #13 (Steve Mosher)

    It is not enough just to get it inside the error bars. It should go up and down when it is supposed to, and have rates of change that are at least plausible over several years or longer. It doesn't appear very accurate in hindcasting the HadCRUT series prior to 1960:

    It goes up circa 1910 when HadCRUT goes down, and doesn’t go up enough circa 1940. This has the consequence of understating the warming rate of the 1920’s and 1930’s by a considerable margin. Only after 1960 is there a reasonable degree of resemblance between the hindcast and HadCRUTv3.

  49. Posted May 10, 2008 at 11:46 AM | Permalink

    45 (Smokey): yes, and that is why it is important that people stop using the old Lean 2000 TSI reconstruction. The rise from 1900 to 1950 just didn't happen.
    Andrey notwithstanding.

  50. Posted May 10, 2008 at 12:42 PM | Permalink

    47 (EW):

    It is interesting, how the TSI does not correlate with more local temperature

    But the point is precisely that the agreement with global data is spurious because the TSI that people employ is faulty, so it is no wonder that the faulty data does agree with local data.

  51. Posted May 10, 2008 at 1:01 PM | Permalink

    51 Obviously: does not agree.

  52. steven mosher
    Posted May 10, 2008 at 1:07 PM | Permalink

    RE 48. Yes, Basil. The first thing I wanted to check was whether the model, at least, stayed
    inside the boundaries. Beyond that I had not thought about how best to test the skill.
    Like I said, a crude first look.

    Does it get 10-year trends right? 20-year? 30-year?
    Is it biased toward missing cold spells?
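
    A sketch of one way to pose the 10/20/30-year question, again assuming annual model and observation arrays on a common set of years (hypothetical names):

    ```python
    import numpy as np

    def rolling_trends(years, series, window):
        """Least-squares slope (deg C/decade) of every `window`-year segment."""
        return np.array([10.0 * np.polyfit(years[i:i + window],
                                           series[i:i + window], 1)[0]
                         for i in range(len(years) - window + 1)])

    # Example check (hypothetical arrays `years`, `model`, `obs`): fraction of
    # 30-year windows in which model and observed trends agree in sign.
    # m, o = rolling_trends(years, model, 30), rolling_trends(years, obs, 30)
    # print(float((np.sign(m) == np.sign(o)).mean()))
    ```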

  53. EW
    Posted May 10, 2008 at 1:42 PM | Permalink

    51 (Leif)
    But regardless of the reconstruction, there was a Dalton minimum. And the Central European temperature decrease lagged and lingered; that was my point. The old sources (early 20th century) explained it by switching between an Atlantic regime (colder summers, warmer winters) and a continental one (more extremes on both sides). So I wonder what overcame the lowered TSI, and what cooled Central Europe when the TSI business was again as usual.

  54. Posted May 10, 2008 at 2:10 PM | Permalink

    54 (EW):

    what overcame the lowered TSI

    But TSI was not markedly lower during Dalton, only about 0.3 W/m2, which probably is not enough to have any measurable effect [less than 0.02K]:

    The red curve is what you should look at. Even if you go with Wang2005, the effect would only be 0.04K.

  55. kuhnkat
    Posted May 10, 2008 at 2:12 PM | Permalink

    Just a reminder of what smart people think of models. From:

    http://petesplace-peter.blogspot.com/2008/05/predictive-power-of-computer-climate.htm

    The modelers can find a way to show anything. As the famous mathematical physicist von Neumann said, "If you allow me four free parameters I can build a mathematical model that describes exactly everything that an elephant can do. If you allow me a fifth free parameter, the model I build will forecast that the elephant will fly." That is, by the way, why many of us more senior climatologists and meteorologists prefer to work with real data and correlate factors with real data rather than depend on models.

    Of course, I think we were told climate modelers actually use 6 variables????? Maybe their elephant can time travel also!!

  56. Posted May 10, 2008 at 2:17 PM | Permalink

    need an “l” at the end:
    http://petesplace-peter.blogspot.com/2008/05/predictive-power-of-computer-climate.html

  57. Mick
    Posted May 10, 2008 at 2:53 PM | Permalink

    The cold came years after the Dalton minimum because the two are actually correlated. When two things are correlated you expect one consistently followed by the other; the time order is extremely important.

    Now, if the sun changed many years after the temperature, time and time and time again, like CO2, you would have something to question.

  58. MJW
    Posted May 10, 2008 at 5:41 PM | Permalink

    What’s with that wacky spambot that always puts two greater than signs in the heading and ellipses within square brackets in the message? Can’t the spam filter reject comments that follow such an obvious pattern? I once had a perfectly valid comment rejected as spam (fortunately, on second thought, I decided the comment wasn’t all that insightful, so I let the filter have its way).

  59. Sam Urbinto
    Posted May 10, 2008 at 7:23 PM | Permalink

    I have a time travelling elephant. But I lost it at some point in time.

  60. steven mosher
    Posted May 11, 2008 at 6:07 AM | Permalink

    Great work, Lucia. Hey Willis, have a look at how well Lucia's two-parameter model hindcasts.

  61. M. Jeff
    Posted May 11, 2008 at 6:54 AM | Permalink

    re: steven mosher, May 11th, 2008 at 6:07 am who says:

    great work Lucia.

    Which of Lucia’s great analyses are you referring to?

  62. steven mosher
    Posted May 11, 2008 at 7:05 AM | Permalink

    Click on her link above in comment 60 (Blackboard link).

    There you will see her comparison between ModelE's hindcast ability and her two-parameter
    model called (I got to name the baby) Lumpy. Lumpy is the simplest model of the climate
    you can create. Two parameters and the elephant is fit.
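
    Not Lucia's actual code, but the generic shape of a two-parameter "lumped" model is easy to sketch: one sensitivity and one time constant, integrated against a forcing series (the inputs here are hypothetical).

    ```python
    import numpy as np

    def lumped_response(forcing, sensitivity, tau, dt=1.0):
        """Generic one-box model: dT/dt = (sensitivity * F(t) - T) / tau.
        `forcing` in W/m^2 per step, `sensitivity` in K per W/m^2, `tau` in the
        same units as dt (years). A sketch only, not Lucia's Lumpy."""
        T = np.zeros(len(forcing))
        for i in range(1, len(forcing)):
            T[i] = T[i - 1] + dt * (sensitivity * forcing[i - 1] - T[i - 1]) / tau
        return T

    # e.g. T = lumped_response(net_forcing_series, sensitivity=0.5, tau=10.0)
    # with `net_forcing_series` a hypothetical numpy array of annual forcings.
    ```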

  63. cce
    Posted May 11, 2008 at 10:25 AM | Permalink

    #34 Leif, the fact that these runs end in 2003 might give you a clue as to why Lean 2000 was used. The inputs were frozen a long time ago, which is a consequence of the time it took to design the experiment, perform the runs (there were many), and then submit them in time for AR4.

  64. Posted May 11, 2008 at 10:55 AM | Permalink

    64 (cce): Thanks for that tidbit. Maybe the modelers say so somewhere, but it would seem that they should say right up front that they know that the runs are based on an obsolete solar input series, and that therefore the solar forcing is suspect. This should be on page one. Hansen’s paper is from 2007. Haven’t they done anything since 2003?

  65. cce
    Posted May 11, 2008 at 11:32 AM | Permalink

    65 Leif, here is a list of what GISS has been doing since 2003.
    http://pubs.giss.nasa.gov/abstracts/2004/
    http://pubs.giss.nasa.gov/abstracts/2005/
    http://pubs.giss.nasa.gov/abstracts/2006/
    http://pubs.giss.nasa.gov/abstracts/2007/
    http://pubs.giss.nasa.gov/abstracts/2008/

    For the “modelers” specifically, no doubt they have been designing experiments with the model, running the model, evaluating model results, documenting the model, and designing and testing the next model. I count 41 publications co-authored by Gavin between 2004 and 2007. http://pubs.giss.nasa.gov/authors/gschmidt.html

    As to why they don’t warn that a model frozen in 2004 is out of date, I think that goes without saying.

  66. Posted May 11, 2008 at 12:10 PM | Permalink

    Steven– I’m tempted to run a “guess Lumpy’s time constant” poll. . . 🙂

  67. steven mosher
    Posted May 11, 2008 at 2:00 PM | Permalink

    re 67. haha. I bet annan would get it wrong.

  68. steven mosher
    Posted May 11, 2008 at 2:08 PM | Permalink

    RE 66, cce. I see less than a couple hundred pages written. Looks like a monkeys-humping-a-football approach to science.

  69. cce
    Posted May 11, 2008 at 4:25 PM | Permalink

    #60 Well then, based on the number of words written at CA, the laws of physics should have been unified by now.

  70. steven mosher
    Posted May 11, 2008 at 8:14 PM | Permalink

    RE 70. I'm just saying, cce, that cranking out a 200-page report in 6 months is a piece of cake for most of us here. In one three-year period I think I did close to 1000 pages. All classified, sorry. So I'm not impressed by 41 papers, a large whack of which are one-page wonders. That said, I think Gavin is a fine scientist even if he had never published. Do you get that?
    I don't count articles, I don't count pages, I don't care about peer review. He does good work.
    Writes lousy code, but does good work.

  71. cce
    Posted May 11, 2008 at 10:47 PM | Permalink

    #71 So is he a monkey or a “fine scientist”?

    His published work is a pretty good record of what he has been working on, and if Leif wants to know, he can check them out along with the work of all the other modelers working for GISS.

    I also question the ability of a scientist to routinely crank out a thousand pages jam packed with previously undiscovered revelations about the universe. After all, no “football humping” allowed.

  72. rafa
    Posted May 12, 2008 at 12:27 AM | Permalink

    I did follow Steve's instructions, using "cloud cover %" as the input. Sorry for being so dumb, but can someone explain the resulting plot to me? That abrupt drop since the mid-'90s, what does it mean? Thank you.

    best

  73. steven mosher
    Posted May 12, 2008 at 5:51 AM | Permalink

    re 72 fine scientist.

  74. Frank K.
    Posted May 12, 2008 at 6:40 AM | Permalink

    Lucia and Steve Mosher,

    Re: 63 – thanks for the link. Lumpy is brilliant!

    It occurred to me a while ago that someone could probably develop a simple ODE with a suitable forcing function which could do as good a job at hindcasting the “global average temperature” as the highly vaunted GCMs. And you have done it.

    Lumpy Rules!

    Frank

    PS Perhaps you could start a line of Lumpy-ware… you know, Lumpy coffee mugs, Lumpy t-shirts, Lumpy screen savers,… ;^)

  75. Ron Broberg
    Posted May 12, 2008 at 7:42 AM | Permalink

    #26 I work with Fortran codes all the time, and I am not particularly impressed by this one. The build process is so baroque and awful that I’m forced to wonder if anyone outside the group who developed it has ever actually run it.

    Got Model E running on my Linux laptop at home.
    Pentium III mobile, 1.5 GHz
    512 MB RAM
    g95
    NetCDF 4.3 (?)

    Took about 8 hours to get it up and running.
    But then I’m a newb.

  76. steven mosher
    Posted May 12, 2008 at 8:33 AM | Permalink

    Ron B., take a crack at GISTEMP. Links if you need a crutch, but search GISS first.

    Oh crap, why hide the pea:

    http://data.giss.nasa.gov/gistemp/sources/

    If you get GISTEMP running, then you've got some programming stones, mate.

    Getting GISTEMP running is the holy grail right now.

  77. Sam Urbinto
    Posted May 12, 2008 at 9:01 AM | Permalink

    As far as we know nobody here has got it running. One wonders how anyone has gotten it running. I am assuming the folks at GISS have it running. Curious.

  78. Edouard
    Posted May 12, 2008 at 9:21 AM | Permalink

    @Leif Svalgaard

    How can we compare global temperatures of the last 200 years with solar variation, if these temperatures don't exist?

    Where they do exist (in Europe) there seems to be a "very" close correlation with the solar minima. Maybe there is a serious problem with the temperature reconstructions? So the only ones to rely on are those from Europe? This, for instance: http://www.schulphysik.de/hohenpeissenberg.html or this: http://members.lycos.nl/errenwijlens/co2/errenvsluterbacher.htm

    To my understanding, there is no doubt that these correlations are proof that solar variation has a strong influence on weather patterns. The correlation CO2 -> modern warm period looks much, much weaker to me than the solar one.

    If we don't understand these patterns, why don't we try to find out? How could we understand "weather" or "climate" without understanding these mechanisms?

    The other problem for me is the tipping point from ice age to warm period. There seems to be a regulating mechanism preventing the climate from changing extremely all the time? Why not try to find out more about that? And why does such a small signal, like TSI variation, nevertheless disturb this equilibrium?

    And, to come back to the graphic above, why does an El Niño disturb the climate system to such an extent, if global temperatures didn't really change for centuries? There MUST be something wrong with the global temperature reconstructions, in my opinion!

  79. Posted May 12, 2008 at 10:36 AM | Permalink

    79 (Edouard): No doubt that we have good temperature data for Europe, but

    there seems to be a “very” close correlation with the sun minima.

    is not supported by the data. I'm at a loss as to why this is claimed again and again when the data does not show it. There is a very good correlation of geomagnetic activity/aurorae with solar activity. In fact, so good and so accepted that people do not bring that claim up again and again [as they did 100 years ago, before it was accepted].

  80. Edouard
    Posted May 12, 2008 at 1:11 PM | Permalink

    80 (Leif)

    If I compare Hohenpeissenberg, it gives peaks at solar highs and lows at solar lows. We know that the extreme cold in Europe was around 1700, and it became warmer quickly afterwards. If you take the trend solar high = temperature high, you get a much stronger correlation than with CO2, because before 1800 there was no real CO2 fluctuation, but the temperature was changing heavily in Europe. You have only the last 50 years of correlation with CO2, but nearly 250 years of correlation with solar activity.

    No matter if it has a delay sometimes, it is too obvious to me to be ignored. It must have something to do with the sun. Or you could even say, in 200 years, that CO2 has ZERO influence on our weather.

    If a temperature goes up and down over a longer period in correlation with the sun, this means much more than 30 years with an extreme El Niño peak and a flattening curve afterwards because of CO2. This could mean a hundred things. Could be CO2, could be Mars attacks, could be billions growing crops, heating water, producing clouds, or just a higher activity of the sun for years and years… what do we really know?

  81. SteveSadlov
    Posted May 12, 2008 at 1:43 PM | Permalink

    This is a highly disturbing thread. I hope everyone is ready.

  82. Posted May 12, 2008 at 3:32 PM | Permalink

    Re #35,77: I’m going to take a crack at GISTEMP in a couple days. I’ve just finished automating the process of pulling down all the requisite data files…

  83. steven mosher
    Posted May 12, 2008 at 3:51 PM | Permalink

    RE 83. Check the threads here (Hansen Frees the Code) for some previous attempts.

    The existing code is compiled for AIX; as I recall, some guys working in Linux had to muss
    about with compiler flags. I think we got about 80% done, then ADD set in.

  84. jeez
    Posted May 12, 2008 at 3:55 PM | Permalink

    Ah, good ol’ ad…

  85. Steve McIntyre
    Posted May 12, 2008 at 5:15 PM | Permalink

    http://www.climateaudit.org/?p=508 reviews an interesting paper by Holloway discussing the problems of modeling a duck pond from first principles, let alone an ocean.

  86. Ron Broberg
    Posted May 12, 2008 at 11:14 PM | Permalink

    #77,83

    [X] STEP0
    [_] STEP1
    [_] STEP2
    [_] STEP3
    [_] STEP4_5

    I’m willing to bet it gets harder from here. 😉

  87. Nylo
    Posted May 13, 2008 at 5:35 AM | Permalink

    Slightly off-topic, but talking about GISS data, it seems that they are massaging their global data for April a little bit before publication. They usually announce the data by day 10 or 11 of the next month, but it is already May 13th and still nothing…

    Fortunately, we already have UAH data showing April 2008 as the coldest April since 1997, and 2008 as having the coldest average for the first third of the year since 2000. I hope the "warming" doesn't make us freeze.

    http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

  88. Ben Gallagher
    Posted May 13, 2008 at 6:26 AM | Permalink

    So if what Leif is saying is correct, that the variance in TSI is small (and I have no reason to doubt him), then logically the only way the sun could have triggered warming at the start of the 20th century would be if there were some other factor that magnified the effect of the small TSI change (e.g. cosmic rays, Arctic oscillation variations, etc.). That would have big implications for the accuracy of any forecasting model.

    The alternative is that there is a completely different, non-modelled factor that drove the warming (i.e. not solar, volcanic or CO2), which again has big implications for the accuracy of the climate models going forward.

  89. Bob B
    Posted May 13, 2008 at 6:32 AM | Permalink

    Just simply mind-boggling: two decades of cooling consistent with climate models!

    http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001425how_to_make_two_deca.html

  90. Posted May 13, 2008 at 7:46 AM | Permalink

    Re #83: I got as far as STEP1 last night. The way they package their Fortran code and Python extensions is rather sloppy IMNSHO.

  91. steven mosher
    Posted May 13, 2008 at 7:54 AM | Permalink

    re 91. Make sure you see the thread "Hansen Frees the Code" for the troubles ahead;
    minor crap, but annoying nonetheless.

  92. Eggplant fan
    Posted May 13, 2008 at 3:17 PM | Permalink

    It is interesting to see how papers in the peer-reviewed literature can strongly disagree with the Real Climate claim that there is no need for the models to match the actual climate. From the abstract to Wang et al., Journal of Climate, vol. 20, pp 1093-1107 (http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI4043.1) :

    “Reproducing this decadal and longer variability in coupled general circulation models (GCMs) is a critical test for understanding processes in the Arctic climate system and increasing the confidence in the Intergovernmental Panel on Climate Change (IPCC) model projections.”

    But what do they know…

  93. Jaye Bass
    Posted May 13, 2008 at 10:22 PM | Permalink

    RE: 90

    Wow Annan is seriously delusional.

  95. Ron Broberg
    Posted May 15, 2008 at 8:39 AM | Permalink

    [X] STEP0
    [X] STEP1
    [_] STEP2
    [_] STEP3
    [_] STEP4_5

    Looks like I’m the tortoise in this race …
    Python 1.5.2
    Berkeley DB 1.8.5

  96. Ron Broberg
    Posted May 17, 2008 at 7:33 AM | Permalink

    [X] STEP0
    [X] STEP1
    [X] STEP2
    [_] STEP3
    [_] STEP4_5

    So far the only real problems I’ve encountered are
    a. g95 doesn’t like some of the declarations/initializations used in the GISS fortran
    b. the Py_Free issue in stationsstringmodule.f
    c. the infinite loop in PApars.f

    As I go through the steps, I’ve been building simplistic patch scripts so I can reapply and reproduce my fixes.

    Free-time work, so it's been a little slow going.

  97. Ron Broberg
    Posted May 18, 2008 at 12:20 AM | Permalink

    [X] STEP0
    [X] STEP1
    [X] STEP2
    [X] STEP3
    [_] STEP4_5

    Well, step 3 produces the file GLB.Ts.GHCN.CL.PA.txt

    This file has the same format as the following
    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt

    The values in the two tables are not the same.
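
    A rough way to quantify "not the same", assuming both files follow the published GLB.Ts.txt layout (data rows start with a four-digit year followed by twelve monthly values in hundredths of a degree, with occasional non-numeric placeholders) and that local copies of both files exist:

    ```python
    # Compare the ported STEP3 output against the table published by GISS.
    def read_glb(path):
        """Parse a GLB.Ts-style table into {year: [12 monthly values or None]}."""
        table = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 13 and len(parts[0]) == 4 and parts[0].isdigit():
                    months = [int(p) if p.lstrip("-").isdigit() else None
                              for p in parts[1:13]]
                    table[int(parts[0])] = months
        return table

    mine = read_glb("GLB.Ts.GHCN.CL.PA.txt")   # output of the ported STEP3
    nasa = read_glb("GLB.Ts.txt")              # table published by GISS
    common = sorted(set(mine) & set(nasa))
    diffs = [abs(a - b) for yr in common
             for a, b in zip(mine[yr], nasa[yr]) if a is not None and b is not None]
    print("years compared:", len(common))
    print("max monthly difference: %.2f deg C" % (max(diffs) / 100.0))
    ```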

  98. John A
    Posted May 18, 2008 at 5:15 AM | Permalink

    The tension mounts…

  99. steven mosher
    Posted May 18, 2008 at 5:38 AM | Permalink

    re 98. Great work. Did you read the thread "Hansen Frees the Code"? I think we
    addressed the infinite loop question there.

  100. Ron Broberg
    Posted May 18, 2008 at 8:22 AM | Permalink

    @100: naturally.

    But I need to reread the stuff there and poke around the opengistemp site.
    I suddenly remember the mood of the crowd here,
    so let me emphasize that at this stage, problems in the output point towards MY errors in porting.

    Two suspects loom large in my mind:
    – Is there a better solution for Py_Free than just commenting out the calls? 😉
    – Revisit declarations and initializations.
    I have had to make multiple changes to declarations and initializations to satisfy g95.

    Here's my plan for the next few weeks:

    – Capture my code mods for STEP3.
    – Rerun each step multiple times… is the output the same in each run?
    – Recompile the code with gfortran and Intel f95. (I chose g95 because the Model E code has flags for g95 and I was able to compile that without issue.)
    – Start flow-charting the runs.
    – Take a hard look at the Berkeley DB libs. I picked 1.8.5 to try to match the age of the code, but it is very old. Same with Python 1.5.2: old code.
    – Publish my patches and let someone else take a whack at it.

    Looks like I have a summer project.

    Maybe… just maybe… I'll buy the Absoft compiler for my b-day.
    No educational discount though 😦

  101. Posted May 19, 2008 at 1:29 AM | Permalink

    Steps 0 through 5 are all running on OS X Intel. Data not yet verified.

    Instructions and code: http://dev.edgcm.columbia.edu/browser/StationData/

    Next steps: Visualization of steps 0 through 5 via KML, and comparisons to other data sets…

    Please subscribe to RSS feeds of source and/or wiki page updates on that site for further info.

  102. Posted May 19, 2008 at 1:32 AM | Permalink

    Included link to svn repository, forgot link to www page: http://dev.edgcm.columbia.edu/wiki/StationData

  103. Ron Broberg
    Posted May 19, 2008 at 6:56 AM | Permalink

    Thank you, mankoff.
    I figured EdGCM was a good place to post this stuff.

    What Fortran compiler is on OS X?

    FWIW, switching from g95 to gfortran to compile my patched code had no effect on the output.

  104. Posted May 19, 2008 at 7:18 AM | Permalink

    Compiler version and all other software versions here http://dev.edgcm.columbia.edu/browser/StationData/README

  105. steven mosher
    Posted May 19, 2008 at 7:22 AM | Permalink

    SteveMc, can we get a new thread for this? With GISTEMP up and running
    there are a few simple tests we can run.

    Ron and mankoff: how well does your output match that posted by NASA?

  106. Posted May 19, 2008 at 7:39 AM | Permalink

    I need time (considerable time) to compare results, and then even more to find attribution for any differences. I've only glanced at a comparison of the STEP3 output so far (see #98), and there was a difference, but only of +/- 2.5 hundredths of a degree.

2 Trackbacks

  1. By Lumpy vs Model E | The Blackboard on May 11, 2008 at 5:41 AM

    […] at Climate Audit, Steve Mosher has been showing readers how Model E hindcasts HadCrut data. He suggested I show how […]

  2. […] a model including forcings and a time lag. Stephen Mosher points out how to access the NASA data here (with a good discussion), so I went to the NASA site he indicated and got the GISSE results he […]