Lucia on Tamino on Schwartz

Back from a very pleasant Christmas.

A little while ago, I threaded an interesting comment by UC on Tamino’s criticism of Schwartz. As a blog-management aside, I like hosting this kind of thread by others: it was a good comment, based on a careful analysis of third-party literature. I have no problem threading material like this for discussion, as it is relevant and analytic, and the subsequent discussion stayed on a relevant thread. My problem with the “thermo” discussions is mostly that too much of it has too often been unfocused, unrelated to the primary literature, and placed on irrelevant threads or left unthreaded. I wanted to limit that topic until there were thread-quality analyses.

Lucia has written a further analysis on the topic, which she’s posted at her own blog here. I’ve transferred some discussion from Unthreaded to the comments below.

Lucia builds on UC’s earlier analysis by distinguishing between two quite different kinds of error that can affect estimation of the response time from temperature data. One type of error arises from taking time averages of the temperature data; a second type arises from measurement errors in the temperature data.

Lucia observed that the Tamino-RC analysis argued that the Schwartz analysis was confounded by the first type of error (time averaging). Tamino produced a graphic showing that the GISS and simulated data did not match, presenting this as a gotcha against Schwartz. Lucia shows that the effect of this particular class of error does not match the situation: it would yield a positive intercept in a regression of log(autocorrelation) against time, whereas the actual intercept is negative. She observes:

If I were to plot Ln(Rθ) for a physical system with a linear response that has been measured imprecisely, the lack of precision in the measurements results in a negative intercept for the linear regression.

She observes, on the other hand, that the GISS situation nicely matches UC’s plots, thereby suggesting that measurement errors in the temperature data, rather than time averaging bias, accounted for the observed patterns.
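
To make the distinction concrete, here is a minimal simulation of the effect (my own quick sketch with made-up numbers, not Lucia’s calculation): an AR(1) “physical” series observed with added white noise gives a ln(r)-versus-lag line whose slope still reflects the time constant, but whose intercept is pulled negative by the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = 8.0                    # hypothetical "true" time constant, years
phi = np.exp(-1.0 / tau)     # equivalent AR(1) coefficient for annual steps
n = 5000

# AR(1) "physical" temperature series
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# the "record": the same series plus white measurement noise
y = x + 0.5 * x.std() * rng.normal(size=n)

def autocorr(z, k):
    z = z - z.mean()
    return z[:-k] @ z[k:] / (z @ z)

lags = np.arange(1, 6)
r = np.array([autocorr(y, k) for k in lags])

slope, intercept = np.polyfit(lags, np.log(r), 1)
print("tau from slope:", -1.0 / slope)   # still roughly 8 years
print("intercept:     ", intercept)      # negative, set by the noise level
```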

My own feeling (and it’s not more than a feeling at this point) was that you couldn’t lean very heavily on response times calculated from the autocorrelations, regardless of whether one liked or disliked the answer.

Prior references on the topic include: the Schwartz article; Tamino’s guest post at RC; Luboš’s comment; UC’s comment here; and my report on Schwartz at AGU. In addition, Scafetta and West also consider response times from quite a different point of view (and are criticized by Rasmus at RC).

98 Comments

  1. Ron Cram
    Posted Dec 24, 2007 at 8:33 AM | Permalink

    This is the thread where we can request threads on certain papers.

    1. I would like to see a thread on Schwartz’s paper “Heat Capacity, Time Constant, and Sensitivity of Earth’s Climate System.” He used a new approach combining surface temps with ocean heat content to measure climate change and used the Mt. Pinatubo eruption for the climate perturbation. He concluded climate sensitivity to rising CO2 was only 1/3 of the IPCC estimate. For some reason, he is still worried about CO2 and is studying the possibility of using aerosols as an antidote to CO2 warming. I would like the thread to discuss his paper and also see if we can get him to comment on why he is still worried.
    His paper is here.
    Article in North Shore Sun.
    Article in National Post.

    2. I would also like to see a thread on the recent Petr Chylek paper titled “Aerosol Optical Depth, Climate Sensitivity and Global Warming.” Chylek concludes that we have overestimated the cooling impact of aerosols and the warming impact of CO2. An abstract of his paper is here.

    I would also like to discuss how these two papers relate to one another.

    Steve, what do you say?

  2. Larry
    Posted Dec 24, 2007 at 11:57 AM | Permalink

    360, 364, Lucia did an alternative analysis to the Schwartz paper, and came up with a time constant of 18 years. She’s currently soliciting comments on her blog. Sorry, I don’t have the link handy, but it’s linked here somewhere in the past few days.

  3. Posted Dec 24, 2007 at 8:21 PM | Permalink

    @Larry– My article discussing Schwartz’s time constant is “Time constant reanalyzed.”

    I basically look at the same data, but account for measurement imprecision.

  4. Posted Dec 25, 2007 at 7:14 AM | Permalink

    @Raven:

    4) Schwartz is not technically a sceptic but his paper seems to confirm Lindzen’s estimates.
    5) Your analysis suggests that the sensitivity is much higher than what the AGW advocates are saying which implies major warming is ahead.

    The scope of what I posted is limited to the major analysis Schwartz actually did in that paper. This was to suggest a very simple model for the earth’s response time, and to try to extract a time constant from GMST data on that basis.

    Schwartz looked at the data in a way that made it very difficult to distinguish the known uncertainties in the temperature measurements (which are admitted to be at least 0.5K) from the signal he wished to measure: that is the time constant.

    I reanalyzed the data using a more common method that separates this particular “noise” (due to uncertainty) from “signal” — the time constant.

    I get a different time constant– I get 18 years not 5 years.

    After getting his 5 years, Schwartz did a few more minor manipulations to get sensitivity. I didn’t go through to see whether Schwartz made mistakes on the other bits (getting the thermal mass or the forcing). He pulled these numbers from other sources, and I didn’t check them, and I didn’t imitate these bits. I don’t actually know if the values for thermal mass or forcing make sense. For this reason, I don’t calculate a sensitivity.

    But, if Schwartz was correct about the thermal mass of the ocean & the forcing, then the 18 years corresponds to a sensitivity that is slightly higher than expected by GCM’s. In fact, mine is closer to GCM estimates than Schwartz’s was– but mine was a little higher and his was a lot lower.

    I make no claims about the significance of all this. I’m just saying a time constant of 5 years doesn’t match that data set. If the simple lumped-parameter model of the earth applies, and you can get the time constant from that data, the best estimate is 18 years, not 5 years.

    What are you missing? It’s possible Schwartz’ model is useless because it’s too simple. The earth likely doesn’t have just one time constant.

    But that assessment that the model is “too simple” should be based on something other than the answer. Those who loved the model when it said 5 years should, in principle, still love it. Those who hated it when it gave the answer “5 years” should still hate it.

    FWIW– one of the things those writing GCM’s were saying before Schwartz’s paper was that the current over-prediction by GCM’s may be because they underestimate the earth’s climate’s time constant. So, actually, my high time constant would partially explain the current over-prediction by GCM’s.

    So… I’m afraid, on balance, 18 years rather than 5 years is not a good update for skeptics.

  5. Ron Cram
    Posted Dec 25, 2007 at 9:03 AM | Permalink

    lucia,

    re: 368, 374

    Thank you for the link to your analysis of the Schwartz paper. While it appears you have put a lot of work into your blog posting, it is not apparent to me that you were using the same data set as Schwartz. Because of problems with the quality of sea temp data, Schwartz used a combination of ocean heat content and surface temp. The most important of these is the ocean heat content data set, which he identified as the Levitus et al 2005 data set and denoted as L300, L700 and L3000 for the various depths.

    Your blog posting does mention “land ocean data” but that appears to be taken from GISTEMP or HADCRU3. Please correct me if I am wrong here, but this appears to be the biggest reason you arrived at a different result than Schwartz.

  6. Posted Dec 25, 2007 at 7:24 PM | Permalink

    @Raven– I’m not saying that reanalyzing the same data Schwartz used, but accounting for measurement uncertainty, turns the simple model into an accurate one. I’m just saying it gives a different time constant for the data.

    Both 5 and 18 years agree with some other people’s results, but disagree with others’.

  7. Jan Pompe
    Posted Dec 25, 2007 at 8:18 PM | Permalink

    Raven says:
    December 25th, 2007 at 12:42 pm

    and Lucia

    Another comment: the observed temperature response after volcanic eruptions is also consistent with a low time constant.

    I tend to think that this is a good method for sanity checks as we can get a fairly realistic idea of the boundary conditions that way. This is something I see no mention of in Lucia’s analysis so I’m wondering how realistic it is.

  8. Posted Dec 25, 2007 at 8:38 PM | Permalink

    @Ron and Raven…. I found a stupider error on my part! Using Excel, I inserted “log” instead of “ln”.

    With the correction, the time constant is 8 years! Gosh, that was dumb of me. I actually thought I had checked that several times! I adjusted the numbers, and I’ll be rechecking… but log instead of ln!

  9. Pat Keating
    Posted Dec 25, 2007 at 9:43 PM | Permalink

    392 lucia

    Oh, that is embarrassing! It’s a good job you found the error before the IPCC got hold of your analysis…..

  10. Posted Dec 25, 2007 at 9:53 PM | Permalink

    @393–
    Well, if the IPCC got a hold of it, we’d figure out some way to make 18 years stick! But anyway, what’s with you guys? I read you are supposed to find every mistake in favor of AGW instantly, and I had to find this on my own. (blushing anyway.)

    Actually, the way it works out:

    1) The shape of the curve Tamino criticized is explained by the uncertainty in the measurements. (He thought the dip in the tau vs. t curve meant the simple model was bad. The model may well be bad, but the dip in the curve is precisely what is expected if there are measurement uncertainties– and there always are.)

    2) The time constant does shift compared to Schwartz’s value– (but not as much as if you fill down Log() instead of ln()!)

    3) Looking at the data on a ln(r) vs t graph lets you curve fit instead of just eyeballing. (That’s always nice.) and

    4) It suggests the uncertainty in the GISS GMST data might be around 0.1K instead of the 0.05K attributed to station inaccuracy by GISS. (But, there’s a lot of slop in that because detrending alone could screw things up.)

    Anyway, should anyone want to check, it’s done by Excel spreadsheet. (That’s actually the reason it’s a pain to proofread. I always have to click all the boxes to check formulas. I know there are better ways… but….)
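
    For anyone wondering how a LOG/LN slip moves the answer by a factor of two-plus: fitting base-10 logs where natural logs belong shrinks the slope by ln(10), so the apparent time constant is inflated by the same factor. A quick check (just the arithmetic, not the spreadsheet):

    ```python
    import numpy as np

    # ln(r) = -t/tau is the intended fit; filling down LOG10(r) instead shrinks
    # the slope by ln(10), so the apparent tau is inflated by ln(10) ~ 2.30.
    print(np.log(10.0))          # 2.302...
    print(18.0 / np.log(10.0))   # ~7.8 years, consistent with the corrected ~8
    ```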

  11. Ron Cram
    Posted Dec 25, 2007 at 10:23 PM | Permalink

    lucia,

    re:388,

    You may be correct that ocean heat content is used to determine heat capacity and not the climate system time constant. Perhaps I misunderstood this passage in Section 2 (on page 7), where Schwartz writes:

    Here observations of global mean surface temperature over 1880–2004 and ocean heat content over 1956–2003 permit empirical determination of the climate system time constant τ, and effective heat capacity C.

    However, Schwartz does not just use GISS to determine the GMST. He also uses CRU data. Schwartz writes:

    For GMST I use the values tabulated by the Goddard Institute of Space Studies [GISS, NASA, USA; Hansen et al., 1996; updated at http://data.giss.nasa.gov/gistemp/] and the Climatic Research Unit [CRU, University of East Anglia, UK; Jones and Moberg, 2003; updated at http://cdiac.esd.ornl.gov/trends/temp/jonescru/jones.html]. Time series of these quantities are shown in Figure 2.

    Interestingly, Figure 2 shows data from GISS, CRU and Levitus. As I said before, it is not clear to me that you have used the same data set as Schwartz.

    To be honest, I do not understand your statement that adjusting for instrument imprecision would increase the climate system time constant. I could see how it might increase the margin of error, but it does not make sense to me that the time constant itself would more than triple from 5 years to 18 years.

    Based on the work of Anthony Watts (also Ross McKitrick from a different perspective), it appears the warming in the GMST is strongly exaggerated. Perhaps up to half of the 20th century warming is not real but an artifact of poorly sited stations or other non-climatic influences.

    Once Anthony completes his work and we have a more precise temperature record that shows much lower amount of warming, how would that change the time constant of the climate system or the climate sensitivity to rising atmospheric CO2?

  12. Ron Cram
    Posted Dec 25, 2007 at 10:33 PM | Permalink

    lucia,

    re:392 and 394

    I was working on my rather lengthy response above for quite a while today – in between doing other things. I just noticed that you found an error. I’m glad to hear you found it. If ocean heat content is not involved in the calculation of the time constant, I could not see how a slightly different data set (GISS and CRU combined instead of just GISS) could make such a large difference.

    I really think the Schwartz paper deserves a good thread here. Keep up the good work. We may yet get one.

  13. Filippo Turturici
    Posted Dec 26, 2007 at 4:39 AM | Permalink

    Lucia (#394), does GISS really calculate uncertainty that way?!?
    I do not care how many doctorates or published papers they have; they know nothing about uncertainty, and giving nonsense measures is nothing more than a kind of swindle: in any other scientific field they would have been successfully refuted, and their incompetence would have been made known to the whole world.
    Let’s just say that instrumental uncertainty is 0.1-0.2K (better to keep the second figure): where is it? And what about all the other kinds of measurement uncertainty?
    They completely misunderstand the difference between the real and the mathematical world: the real world is uncertain, and that uncertainty cannot be wiped out; all other calculations must depend on it, or be constrained by it. I can compute anything I like, but measuring it is quite a different thing, since in my computation I cannot exclude uncertainty (thinking that even the most complicated algorithm might cancel basic uncertainties, like the instrumental one, is complete nonsense, and I have never seen such a calculation in other fields).
    Anyway, if the station inaccuracy uncertainty is really just 0.05K, or maybe even just 0.1K, I am Mickey Mouse…

    P.S.: what, so global warming will never be proven, with such uncertain measurements… oops!

  14. Raven
    Posted Dec 26, 2007 at 6:27 AM | Permalink

    Lucia,

    With the correction, the time constant is 8 years! Gosh, that was dumb of me. I actually thought I had checked that several times! I adjusted the numbers, and I’ll be rechecking… but log instead of ln!

    I am glad you found the error – if your analysis was correct the implications were huge because it seemed to suggest that climate models were extremely sensitive to measurement errors (i.e. the same model produces opposite conclusions when measurement errors are considered). This is something that would have had profound implications no matter what side of the debate someone was on.

  15. Tony Edwards
    Posted Dec 26, 2007 at 6:54 AM | Permalink

    lucia, in #392, admits to finding an error, as did Ross McKitrick in his recent paper, and they both owned up to it and corrected it. Indeed, in lucia’s case, her result is more than halved, which is truly a major change, so a really big hand for you, lucia, for being brave and honest. People are fallible, finding mistakes is part of life, sometimes you find your own, sometimes someone else does, but either way, having the humility and honesty to admit the mistake and correct it makes for better research and better people.
    So hurray for you both and for anyone else who has done likewise. Would that the same could be said for certain other papers which have been shown clearly and definitively to have errors, sometimes huge, yet whose authors still continue to try and defend them instead of admitting to being wrong. As I see it, auditing the papers that are spewed out in vast numbers is an ever increasing necessity, so, go Steve Mc and your merry band.
    And a happy New Year to all devoted CA readers.
    Off topic, I know, but heartfelt, so please accept my apology, Steve.

  16. Posted Dec 26, 2007 at 7:28 AM | Permalink

    @tony–
    Someone else would have found mine soon enough! Blogging and posting a mistake is embarrassing, but I find if I just scribble at home, I never even look for them.

    I’m not really auditing a paper. I just read Tamino’s criticism of the paper. When he suggested the simple model couldn’t create data that looked like the GISS data, I knew that was wrong. Tamino’s computation ‘mimicked’ the “system” that modeled the climate. Looked at this way, his temperature output was “the real temperature of the earth”. But the GISS measurements aren’t truly “the real temperature”; they are measurements.

    It sounds odd to say measurements and reality are different things, because experimentalists try to make them the same thing. But measurements are never perfect.

    @Ron– I agree Steve [Schwartz] uses both data sets, but he uses them individually. In the paper, he also says he checked Northern hemisphere only/ Southern Hemisphere.

    I wouldn’t expect the choice of data set to make such a big difference.

    @Filippo–
    I don’t consider a 0.1K uncertainty in a GMST to be a large uncertainty for this measurement. Individual airports would be thrilled with this level of uncertainty locally! GISS’s lower bound estimate on uncertainty is 0.05K, and that’s only recently. This analysis uses data back to 1888, when uncertainty is certainly larger than 0.05K. 🙂

    Anyway, there are places where no measurements are made and GISS estimates the temperature. So, there are many sources of uncertainty. The difficulty with regard to estimating the time constant τ is that this uncertainty, combined with autocorrelation, introduces a systematic bias. That means the bias needs to be recognized and accounted for.

    I’m trying to think of an analogy. . . When calculating the standard deviation of a sample, you need to include the square root of (N-1) in the denominator, whereas when computing the value for a full population, you use the square root of (N). The “-1” comes from using the sample mean in place of the true mean, and if you don’t recognize that you need the “-1”, you systematically under-estimate the true variance.

    The instrument uncertainty matters in a similar way here.
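
    A toy version of that analogy (illustration only, with arbitrary numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 5

    # many small samples drawn from a population whose true variance is 1.0
    samples = rng.normal(0.0, 1.0, size=(200_000, N))

    print(samples.var(axis=1, ddof=0).mean())   # ~0.8: divide by N, biased low
    print(samples.var(axis=1, ddof=1).mean())   # ~1.0: the "-1" removes the bias
    ```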

    @Raven– I think the only implication for climate models is that a simple 0-D system suggests the atmosphere responds to forcing rather rapidly. Some climate models evidently don’t. GCM’s contain a lot of parameterizations; that means any number of things might go wrong.

    Whatever one might say about a simple 0-D model, it’s at least simple.

    I happen to like 0-D models. (I’ve said this before with regard to the Gerry North thread.) They can have problems, but the problems are usually easy to diagnose. No one expects them to be perfect.

  17. Steve McIntyre
    Posted Dec 26, 2007 at 7:52 AM | Permalink

    #16. Lucia, I’m with you on 0-D and 1-D models. It seems to me that the GCMs may introduce all kinds of irrelevant complexities into the analysis of the impact of doubled CO2 and that a much more concerted effort to provide intermediate-level complexity analyses using 0-D and 1-D models would have been a good idea for the IPCC types.

  18. Steve McIntyre
    Posted Dec 26, 2007 at 7:54 AM | Permalink

    Lucia, you’ve incorrectly credited me with some analysis that was done by UC.

  19. Posted Dec 26, 2007 at 8:01 AM | Permalink

    Ahhh! You’re right. I didn’t notice that was a guest post! UC is the one who noticed the noise issue. Yes, the “noise” UC introduced would be the analog of what I call “measurement uncertainty.”

  20. Ivan
    Posted Dec 26, 2007 at 8:39 AM | Permalink

    Steve, your link to Lubos’s article is not working. When you click on that link, a window opens with Gavin Schmidt’s and Mann’s comment published in some climate journal.

  21. Kenneth Fritsch
    Posted Dec 26, 2007 at 11:57 AM | Permalink

    I like these kinds of threads, primarily because when I saw Lucia’s post I was hoping for the more detailed discussion that a post receives as a thread topic. I think I learn more from material in that format and that I understand Lucia’s analysis better.

    Kudos to Lucia for quickly admitting a mistake, and a mistake that obviously was not relevant to her basic argument that Schwartz’s and Tamino’s analyses were in error. Quick exchanges at blogs can potentially lead to more mistakes (and subsequent corrections), but I would hope that that condition does not inhibit these types of analyses and exchanges.

    As an aside, I note that many label all skeptics as being more or less one-sided skeptics in their view of the amount of potential climate change that is commonly prescribed by the consensus, when in fact, I think there are a number, such as myself, who are skeptical of the low uncertainty in change that is implied by the consensus. My concern is that the uncertainties are large or difficult to define, and that would apply to both the higher and lower estimates.

  22. Ron Cram
    Posted Dec 26, 2007 at 12:39 PM | Permalink

    lucia,

    Steve was thoughtful enough to provide us with a link to the thread by UC, which I did not get a chance to spend much time on earlier. Jean S had one comment I thought particularly interesting, and I would like to know your thoughts after reading this. I believe it was Comment #49. Jean S wrote:

    Oh dear!!! I came across this paper:

    John Haslett: On the Sample Variogram and the Sample Autocovariance for Non-Stationary Time Series, The Statistician, Vol. 46, No. 4., pp. 475-485, 1997. doi:10.1111/1467-9884.00101

    We consider the estimation of the covariance structure in time series for which the classical conditions of both mean and variance stationary may not be satisfied. It is well known that the classical estimators of the autocovariance are biased even when the process is stationary; even for series of length 100–200 this bias can be surprisingly large. When the process is not mean stationary these estimators become hopelessly biased. When the process is not variance stationary the autocovariance is not even defined. By contrast the variogram is well defined for the much wider class of so-called intrinsic processes, its classical estimator is unbiased when the process is only mean stationary and an alternative but natural estimator has only a small bias even when the process is neither mean nor variance stationary. The basic theory is discussed and simulations presented. The procedures are illustrated in the context of a time series of the temperature of the Earth since the mid-19th century.

    Well, what are the results? He considered various models for fitting the NH temperature series (1854-1994, Jones’ version I think):

    The key interpretations are
    (a) that there is clear evidence of warming, with the 95% confidence interval being (0.28, 0.54) °C per century,
    (b) that the process is entirely consistent with a variance stationary model and
    (c) that the fitted model may be interpreted as the sum of two processes, one of high frequency, representing annual variations and one slowly changing, with drift, representing perhaps the underlying climate.

    Well, that’s Schwartz’s model, with the exception that Haslett uses an ARMA(1,1) model (AR(1)+white noise)! If I’m not completely mistaken, the REML-fitted values (AR coefficient=0.718) mean a decay time (mean lifetime; Schwartz’s \tau) of -1/log(0.718)=3.0185 years!

    Does this comment change your thinking at all?

  23. Phil.
    Posted Dec 26, 2007 at 1:24 PM | Permalink

    Re #22

    -1/log(0.718) = 6.95, does this change your thinking at all?

  24. Pat Keating
    Posted Dec 26, 2007 at 2:18 PM | Permalink

    22 23
    That darned log/ln thing again…
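
    For the record, both numbers come from the same coefficient; only the base of the logarithm differs (a quick check):

    ```python
    import numpy as np

    phi = 0.718                   # the REML-fitted AR(1) coefficient quoted in #22
    print(-1.0 / np.log(phi))     # 3.02 years, using the natural log
    print(-1.0 / np.log10(phi))   # 6.95, using the base-10 log as in #23
    ```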

  25. JS
    Posted Dec 26, 2007 at 2:21 PM | Permalink

    #8 and #10

    Mistakes happen.

    But should anyone say, for example, “You can’t trust X, they don’t know the difference between degrees and radians,” you might be a little more understanding than others, lest you be attacked with, “You can’t trust Lucia, they don’t know the difference between natural and base-10 logs.”

    …and this, at any rate, puts you in good company. A Nobel prize winner I know of had a very famous published paper that included dummy variables for certain effects and, because logs were being used for a number of variables, had the dummies entered as 10 rather than 1. It rather changed the interpretation.

  26. Posted Dec 26, 2007 at 2:24 PM | Permalink

    @Ron Cram– I don’t know. I’d have to do the numbers. But…. 6.95 sounds close to the 7.3 years I get for the land/ocean GISS data. (It’s inside one standard error the way I do the calculation anyway.)

    But, with regard to the bias Jean S mentions, and the one I’m thinking of: I think they have different sources. I think the one Jean mentions would be there even if the temperature measurement were perfect. The one I mention is there as a result of the measurements not being exactly the same as the climate.

    The bias imposed by integrating over a year– which causes the effects Tamino showed– is also different from the one Jean S mentions.

    I’d have to read the paper Jean S mentioned to try to understand how the various biases fit together. (This type of thing actually becomes important when designing lab experiments. Unfortunately, with climate science, we often can’t design them to give us the precision we’d like!)

  27. Posted Dec 26, 2007 at 2:28 PM | Permalink

    JS– it’s hardly the first time I’ve made a mistake in a spreadsheet. Of course… one difficulty with blogging is the tendency to do things more casually. But, yes, mistakes happen.

    On important matters, we do a large number of redundant things to find errors. Discussing the results is actually one of them! (There is nothing like someone saying they don’t believe you to make you go back and check. Well…to make ME go back and check. 🙂 )

  28. Steve McIntyre
    Posted Dec 26, 2007 at 2:30 PM | Permalink

    #24. Somebody at AGU told me that there had been a log/ln error in one of the lines of the main Hadley Center model for about 7 years before it was winkled out.

  29. Larry
    Posted Dec 26, 2007 at 2:51 PM | Permalink

    it’s hardly the first time I’ve made a mistake in a spreadsheet. Of course… one difficulty with blogging is the tendency to do things more casually.

    Basic downside of life in the internet age. You literally don’t have time to do the proper checks, because of the pace of the hockey game. Which is why the team players understand that you never want to let the other side know where the puck is. If the auditors know where the puck is, it’s time to get the puck out of there.

  30. Phil.
    Posted Dec 26, 2007 at 3:02 PM | Permalink

    Re #27

    This discussion reminds me of many years ago as a young post-doc, I’d written a computer program to do a simple analysis. My advisor did a quick check with a small angle approximation and told me I must have an error and sent me off to check it. Finding no errors I went back over his approximation, it turns out he had approximated cos(x) as 1 rather than 1-(x^2)/2!, when we used the latter my program checked out. He actually taught me some very good lessons as we always checked and cross-checked before publication, it drove us nuts at times but numerous errors were found.

  31. Jon
    Posted Dec 26, 2007 at 5:47 PM | Permalink

    #24. Somebody at AGU told me that there had been a log/ln error in one of the lines of the main Hadley Center model for about 7 years before it was winkled out.

    AFAIK, this is also one of the perils of using Excel. Log for base 10, ln for natural log, is something of a grade-school idiom that has gradually infected more and more sources, but especially analytic software for business people.

    In most “scientific” programs (e.g., matlab, mathematica, C, fortran, etc.) the “log” function means natural log. Base-10 log is predominantly log10, log(x…, 10), etc.
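
    For example, in Python/NumPy (one more data point for the tally):

    ```python
    import numpy as np

    print(np.log(np.e))      # 1.0  -> numpy's log() is the natural log
    print(np.log10(100.0))   # 2.0  -> base 10 is spelled out as log10()
    # Excel is the other way around: LOG(x) defaults to base 10, LN(x) is natural.
    ```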

  32. Slevdi
    Posted Dec 26, 2007 at 6:23 PM | Permalink

    No professional computer programmer can take the AGW proposition with its forecasts seriously when these things are based on the most fallible of all man’s creations – the computer program; especially when the underlying programs are coded by non-professional programmers and not independently audited.

    All this discussion on tree rings, SST, Ice Cores, Satellite data etc. is minor in comparison to that simple observation. (It’s still fascinating though).

  33. Posted Dec 27, 2007 at 2:38 AM | Permalink

    #31 jon,

    The ln, log idiom has been around as far as I can remember (40+ years i.e. going back to the slide rule era). This is especially true in EE where ln is used for time constants and log for decibels. I don’t think it is just grade school.

  34. Jean S
    Posted Dec 27, 2007 at 8:51 AM | Permalink

    #23: Why would anyone take a base-10 logarithm in that context? log stands for the natural logarithm for me and I’m not the only one with that opinion.

  35. Mark T.
    Posted Dec 27, 2007 at 9:18 AM | Permalink

    log stands for the natural logarithm for me and I’m not the only one with that opinion.

    Scientific programs (such as MATLAB) use log = natural logarithm, but Excel does not. Excel does have a log10() function, but omitting the base in log() defaults it to base 10 for whatever reason. Maybe because TI calculators use ln() for the natural log? Jean S., you, like me, are probably a MATLAB user, so switching to Excel can cause grief. 🙂

    Mark

  36. Mark T.
    Posted Dec 27, 2007 at 9:23 AM | Permalink

    Ooops, as noted by Jon.

    Mark

  37. Posted Dec 27, 2007 at 10:19 AM | Permalink

    Well… of course, there is always Log Ln.

    I always chuckled when I walked by that street on the way to work.

  38. Jean S
    Posted Dec 27, 2007 at 10:28 AM | Permalink

    #26 (lucia): Haslett’s paper is an excellent read. The bias mentioned in the abstract is the bias of the standard autocovariance estimator. However, this has nothing to do with the 3-year value given in #22: that value is based on the parameters of Haslett’s ML-fitted AR(1)-plus-white-noise model with drift.

  39. Posted Dec 27, 2007 at 11:36 AM | Permalink

    Steve wanted to link to at least one of these two comments of mine:

    http://motls.blogspot.com/2007/08/stephen-schwartz-brookhaven-climate.html
    http://motls.blogspot.com/2007/09/stephen-schwartz-vs-scientific.html

  40. steven mosher
    Posted Dec 27, 2007 at 1:47 PM | Permalink

    Lucia got this for christmas.

  41. Larry
    Posted Dec 27, 2007 at 2:03 PM | Permalink

    40, she didn’t want a log, she wanted log rhythms:

    They’re kinda like AlGoreRhythms, only different.

  42. Phil.
    Posted Dec 27, 2007 at 2:21 PM | Permalink

    Re #39

    Steve wanted to link to at least one of these two comments of mine:

    http://motls.blogspot.com/2007/08/stephen-schwartz-brookhaven-climate.html
    http://motls.blogspot.com/2007/09/stephen-schwartz-vs-scientific.html

    I’m sure you’re right, Mr Motl, though I’m not sure why he would want to do so, since someone who believes that Global Warming isn’t a problem because there are seasonal fluctuations has little credibility on the subject! Also your tendency to play fast and loose with the numbers (always in the direction that favors your argument) hardly fits in at an audit site: e.g. 1.357 ≠ 1.40, ln(1.357)= 0.305, ln(2)= 0.693, this doesn’t agree with your statement that “the effect of most of the doubling has already been made”.

  43. Jon
    Posted Dec 27, 2007 at 2:27 PM | Permalink

    The ln, log idiom has been around as far as I can remember (40+ years i.e. going back to the slide rule era). This is especially true in EE where ln is used for time constants and log for decibels. I don’t think it is just grade school.

    The notation “Log” to mean natural log dates to the early 1600s; I believe it may have been Kepler. Ln is credited to Irving Stringham in 1893. My point about ‘ln’ and grade school is that ‘log’ is by far the predominant notation among mathematicians. So use of ‘ln’ in grade-school ‘math’ classes is idiosyncratic.

    I don’t dispute that it arises frequently in other contexts, but my EE professors used ‘log’ and ‘log10’. In the EE context, the pervasive use of dB tends to help clarify what is meant.

  44. Larry
    Posted Dec 27, 2007 at 2:34 PM | Permalink

    It looks like we have a microcosm of the topic of this thread right here, with someone spoiling for a fight with Dr. Motl, apparently out of some wild ideological hair. It’s certainly not on topic, or even relevant to Dr. Motl’s comment.

  45. Phil.
    Posted Dec 27, 2007 at 2:43 PM | Permalink

    RE #44

    Mr Motl claims that Steve McI intended to link to a comment of his; I’ve commented on the content of that post. How is that off topic (or spoiling for a fight)? By the way, the ‘not equal to’ sign in my post may not render properly in this font; to be explicit, it should read: 1.357 is not equal to 1.40.

  46. steven mosher
    Posted Dec 27, 2007 at 3:50 PM | Permalink

    At least when I go off topic I’m funny.

    Let’s return to Lucia’s treatment of the problem.

  47. Posted Dec 27, 2007 at 4:09 PM | Permalink

    Well… lucia needs to read the Haslett paper but not today. It’s my father-in-law’s 90th birthday! 🙂

    It does look like it’s worth looking at the pileup of systematic errors in calculating the autocorrelation to see how they affect the correct value. Schwartz really didn’t do that. That’s fine– I think the major contribution of that paper is the idea that one can come up with some empirical method to estimate this value.

    But, it is worth trying to figure out what the correct value is if we account for all the systematic biases introduced as a result of a) measurement and b) statistical manipulations.

    I have a much better semi-intuitive idea about (a). After all, my training is engineering, and in particular as an experimentalist doing lab work. So, problems of instrument response are ones I usually sort of “know” without reading a paper.

    My guess is that Jean S’s training is from the opposite end: as a statistician first? (Am I guessing wrong?) Anyway, I need to read that paper, and…. not today. 🙂 I may want to figure out how to email Jean directly.

  48. John M
    Posted Dec 27, 2007 at 4:56 PM | Permalink

    Re #22

    So just to clarify, natural log is the appropriate function, and Jean S’s original calculation (3.0185 yrs) is the consensus result? I’ve gotten a little lost in the banter.

    For what it’s worth, my chemistry and physics textbooks all kept the log-ln distinction (i.e log was to base ten by default). My calculus book uses ln for natural log, but log(subscript b) for log to any other base b, including 10. I’ve also got a shamelessly pristine introductory statistics text (Mendenhall & Beaver) that has at least one instance of ln(x).

    Is this a European-North American thing or a science-engineering thing?

  49. Posted Dec 27, 2007 at 5:06 PM | Permalink

    @John-M: The Log-ln thing is just a mysterious thing. I’m a mechanical engineer. That puts me right smack between disciplines that often use “log” to mean “log10” (EE’s) and applied mathematicians who would generally use “log” = “log_e”.

    I rarely use Log10 for anything, but I have always been aware that you need to check when using software written by others. One also needs to read the text to figure out what others intend. In fact, I’ve told students that you need to check. For that matter, I knew log on Excel is “Log10”, but I screwed up anyway.

    I’ve been known to screw up. People screw up. It happens. When accuracy and precision are required, systems are organized to catch these things.

    Strangely enough: this mistake was found. By me. After a few people questioned me, and I wanted to explain further. So, discussion is one of the ways these things are found.

    I could provide anecdotes not involving me directly (other than as the one telling a grad student– not mine– “don’t worry too much, everyone understands”.)

  50. Larry
    Posted Dec 27, 2007 at 5:17 PM | Permalink

    49, don’t forget that it wasn’t that long ago (before calculators) that common logs were used as a shortcut method to multiply numbers. And engineers and even scientists used to carry around analog computers that multiplied numbers by physically adding their common logs on their scales. Those analog computers were sometimes known as sly drools. I think that was the biggest use for common logs.

    A lot of things that we carry around now really aren’t necessary any more, but were handy in the past, and now we’re stuck with them. Like decibels. And common logs. I’m sure if I sat down, I could probably come up with 100 other examples.

  51. Pat Keating
    Posted Dec 27, 2007 at 6:22 PM | Permalink

    50 Larry
    The use of ln for loge and log for log10 has long been common in Physics.
    ln has important natural and fundamental uses. For example, the fundamental definition of entropy is S = k*ln(N) (statistical thermodynamics).

  52. Larry
    Posted Dec 27, 2007 at 7:07 PM | Permalink

    I know where the natural log comes from (\int\frac{1}{x}dx), but common logs are really nothing more than an arithmetic economization trick.

  53. Larry
    Posted Dec 27, 2007 at 7:08 PM | Permalink

    Ok, what’s the trick to make latex work?

  54. Jan Pompe
    Posted Dec 27, 2007 at 7:18 PM | Permalink

    Pat Keating says:
    December 27th, 2007 at 6:22 pm

    50 Larry
    The use of ln for loge and log for log10 has long been common in Physics.

    While what you say is true, we really can’t expect mathematicians and programmers to adhere to such traditions.

    Octave and I expect Matlab uses log for loge, log2 for log2 and log10 for log10 which of course makes more sense to them.

    We need to be careful when using tools from 3rd parties.

  55. Jan Pompe
    Posted Dec 27, 2007 at 7:20 PM | Permalink

    54 me

    Darn it, the subscripts didn’t work and I’ve created a tautology. Oh well, sometimes some rain must fall.

  56. Larry
    Posted Dec 27, 2007 at 7:23 PM | Permalink

    I forgot about log2. Log2 is also a very useful function in comp sci.

  57. Posted Dec 27, 2007 at 10:08 PM | Permalink

    Larry #53 writes,

    Ok, what’s the trick to make latex work?

    Steve tells me that you precede and follow latex with [ t e x ] … [ / t e x ], with the spaces removed. I’ll try this on your \int\frac{1}{x}dx :
    \int\frac{1}{x}dx  .
    Unfortunately, the Preview panel doesn’t show this as latex.

  58. Jean S
    Posted Dec 28, 2007 at 3:55 AM | Permalink

    #47 lucia: The three-year estimate derived from Haslett’s paper is as good as it gets in terms of “trying to figure out what the correct value is”. It is not based on detrending and a sample autocorrelation calculation, but on a direct REML model fit. So the measurement errors are within the model, and the biases introduced by sample autocorrelation and detrending calculations are avoided. Of course, the real question is whether Schwartz’s original connection between the climatic lag time and the decay of the autocorrelation function is meaningful. The real take-home message from the estimates I derived from Haslett’s and other papers is that Schwartz’s original (rather bad IMHO) calculation of the lag time (5 years) is likely an overestimate introduced by detrending and sample autocorrelation function biases. So RC’s criticism on this matter is IMO completely misguided.

    Yes, my first background is in math (which explains my log use). My e-mail: jean_sbls@yahoo.com (I read it irregularly unless I’m expecting to receive some mail 😉 )

    John M: Yes, the logarithm should be the natural one, and that’s what I used in #22. I think #23 was a deliberate attempt to mislead readers here. The log thing is complete chaos; see the link in #34. I think it would be useful to always mention what base is used. When originally writing #22 I didn’t think about this issue, as I’m used to using log for natural logarithms and the original equations involve the exponential function.

  59. Posted Dec 28, 2007 at 7:02 AM | Permalink

    I have plotted a set of published values of climate sensitivity; there is an interesting frequency dependency:

  60. Posted Dec 28, 2007 at 7:25 AM | Permalink

    Hans,

    I’m just a layman on these matters, but I would think that climate sensitivity would be higher on shorter time scales, i.e. less mass to influence.

    That is, if you are going to heat the whole ocean by 1C it takes more energy than just heating the top 20 m.

    I’d appreciate your critique of the above thoughts.

  61. Pat Keating
    Posted Dec 28, 2007 at 7:38 AM | Permalink

    59 Hans

    I’m not sure what you mean by ‘frequency’ and ‘period’, in connection with climate sensitivity. Could you elaborate?

  62. Posted Dec 28, 2007 at 7:39 AM | Permalink

    The earth behaves as a low-pass filter. It’s like a heavy mass, which is also not sensitive to high-frequency hammering. We had a large-mass seismometer at our university; you could swing it with slow knee bends, not by jumping around.

  63. Posted Dec 28, 2007 at 7:54 AM | Permalink

    re 61:
    All systems with inertia respond differently to signals of different frequencies.
    Consider the playground swing: you need to push with a specific frequency to get the swing swinging. The swing is insensitive to other frequencies. The climate system is likewise. Slow swings (long periods) have a bigger effect than fast ones.
    John Daly has an elegant calculation of the in-sensitivity of the tropics to annual variation in insolation:
    http://www.john-daly.com/miniwarm.htm

  64. Posted Dec 28, 2007 at 8:03 AM | Permalink

    Jean S

    The real take-home message from the estimates I derived from Haslett’s and other papers is that Schwartz’s original (rather bad IMHO) calculation of the lag time (5 years) is likely an overestimate introduced by detrending and sample autocorrelation function biases.

    The detrending in particular bothers me a lot. The quick and dirty way I did it gives an estimate of uncertainty in the autocorrelation due to measurement biases. However, among other difficulties, that “uncertainty” includes whatever is introduced by detrending.
    (I doubt my method does anything to correct for problems with determining the sample autocorrelation!)

    So RC’s criticism on this matter is IMO completely misguided.

    Well… yes… When I read Tamino’s criticism, my thought was– but his reason for saying the 0-D model is unrealistic is so wrong. Schwartz’s simplified model may turn out not to describe the data, but not for Tamino’s or anyone at RC’s reason! Their methods entirely overlook the fact that Schwartz’s model pertains to the behavior of the climate, while the data are measurements that contain uncertainty.

  65. bender
    Posted Dec 28, 2007 at 8:08 AM | Permalink

    Hans, can you clarify the x-axis? Is this a data plot or a schematic diagram?

    Are you suggesting that the longer you observe a system, the closer your estimate of apparent sensitivity starts to approach the true sensitivity?

    The problem is there are no replicate points on that curve because the timing of observation and the duration of observation are confounded; older observations span more time. Consequently your one dashed curve may actually be a composite of a family of curves (which may or may not have that hypothesized shape).

  66. Posted Dec 28, 2007 at 8:42 AM | Permalink

    Re 65:

    Bender the x-axis is climate forcing period in years. The dotted line is indeed an interpretation.

    Black dots are empirical values.
    50 million years for the PETM (ballpark estimate), 100,000 years for ice ages (climate sensitivities of Hansen, Arrhenius and Shaviv).
    The equilibrium sensitivities of the IPCC are based on model studies with runs between 300 and 500 years.
    GCM sensitivities are transient sensitivities (1-3 K/2xCO2), for model runs up to 100 years.

  67. Pat Keating
    Posted Dec 28, 2007 at 8:49 AM | Permalink

    63 Hans

    Thanks, your elaboration was helpful (tho’ the swing example reference to a resonance might be confusing to some, for this case). Yes, it does look like a classic roll-off, such as in an audio amplifier.

    How good are the PETM and Hansen estimates, do you think?

  68. Larry
    Posted Dec 28, 2007 at 10:26 AM | Permalink

    More specifically, the mixed layer model assumes (idealizes) that the top 100m or so of water in the ocean are turbulent and well-mixed, and everything below that moves very slowly, so that wrt the thermal time constant of the mixed layer, the rest of the ocean can be ignored. A well-mixed fluid mass will respond to temperature changes at its surface as a first-order exponential decay (or AR(1)) type of system. The first-order exponential behavior is a direct consequence of the mixed layer model.

    http://en.wikipedia.org/wiki/Mixed_layer
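
    In equation form, the single-box (mixed-layer) energy balance and its response to a step change in forcing is the standard first-order result (my notation):

    C \frac{dT}{dt} = \Delta F - \lambda T \quad \Rightarrow \quad T(t) = \frac{\Delta F}{\lambda}\left(1 - e^{-t/\tau}\right), \qquad \tau = C/\lambda,

    where C is the effective heat capacity of the mixed layer per unit area, \Delta F is the step in forcing, and 1/\lambda is the equilibrium response per unit forcing.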

  69. Posted Dec 29, 2007 at 9:20 AM | Permalink

    How good are the PETM and Hansen estimates, do you think?

    I think they are good as maximum values for climate sensitivity. Personally I think vegetation albedo feedback is underestimated, which reduces CO2 climate sensitivity.

  70. Mike B
    Posted Jan 1, 2008 at 10:10 AM | Permalink

    Steve, the link to Lubos’ comment goes to a paper by Tamino, Annan, Schmidt, and Mann

  71. Ron Cram
    Posted Jan 1, 2008 at 10:34 AM | Permalink

    Jean S,

    Are you going to try to get your three year estimate for the time constant published in JGR as a comment? I think it would certainly add to the discussion. I invited Schwartz to come discuss his paper here but he is spending his time responding to comments in JGR.

  72. Posted Jan 15, 2008 at 1:55 PM | Permalink

    Something Jean S said in a comment made me think of yet another way to use Schwartz’s simple lumped-parameter model.

    I realized that if I had estimates for climate forcing, and estimates for two parameters (the time constant and the effective heat capacity of the climate), I could “predict” the future temperature. Then, if I had a temperature record, I could just guess the values of the two parameters and iterate until I found the values that minimized the sum of the squares of the differences between the model and the temperatures. So… here it is:

    The correlation coefficient is 0.87, which I think is rather good given the likely uncertainty in temperature measurements and historical forcing estimates

    FWIW, I get a climate sensitivity of 1.5 C.

    Since my track record on proofreading for mistakes is not so hot, I’m going to wait before I project temperatures.

  73. Posted Jan 15, 2008 at 1:57 PM | Permalink

    The image didn’t show!

    (Just in case.) http://rankexploits.com/musings/wp-content/uploads/2008/01/climatesens.jpg

  74. steven mosher
    Posted Jan 15, 2008 at 2:36 PM | Permalink

    re 73. Lucia, what data did you use for forcing?

  75. steven mosher
    Posted Jan 15, 2008 at 2:41 PM | Permalink

    re 73, another thought. Most agree that volcanic forcings are non-climatic. Simply put, if you are using the historical data to estimate sensitivity, then it would be best if you could remove the volcanic forcing from the record. Did you do this? Hansen actually supplies the data, so it could be done.

  76. Posted Jan 15, 2008 at 3:08 PM | Permalink

    Steve– I obtained forcings from Real Climate– See this post, then scroll down to forcing. I have no idea if they are correct.

    Obviously, if they are wrong, that would affect my parameters. As would any mistakes I make. 🙂

    The general idea is a not-too-revolutionary extension of what Schwartz did– just try to use real forcing estimates.

    I’m not sure what you mean by “non-climatic”. With regard to the temperature response of the planet, anything that changes the net radiative forcing affects the temperature. I need to include those forcings to estimate the climate response time and heat capacity. Why do you think I should remove this?

  77. Mike B
    Posted Jan 15, 2008 at 3:13 PM | Permalink

    What form does your “best fit” model take?

    Eyeballing the graph, after 25 years of reduced aerosol forcing, we still haven’t quite made it to +0.6 from +0.2, and yet your model is moving it all the way to +1.0 by 2015? Dr. Gladwell, I smell a tipping point.

    If those are going to be your out of sample forecasts, be prepared for some good natured ridicule if you’re wrong. 🙂

  78. Posted Jan 15, 2008 at 3:24 PM | Permalink

    Mike B– I wrote up what I did here:
    Model description.

    I actually integrate a differential equation using two parameters: the time constant (τ) and the heat capacity (α). I guessed both, found the “error” (the difference between the measured data and the model values) and computed the sum of the squares of the errors. Then I iterated to find the values of τ and α that gave the best fit.

    The lines after 2003 are extrapolations. I was fiddling with various changes in forcing to see what I got. That’s what the model predicts if GHG forcings increase at twice the current rate after 2003. (So… yes. That would involve a big increase!)

    I don’t talk about that bit in the blog post. I should go change that extension so as not to mislead.
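
    A minimal sketch of that kind of fit (an illustration with synthetic, made-up forcing and “observations”, not my actual spreadsheet; scipy’s Nelder-Mead stands in for the manual iteration):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Placeholder inputs: in the real calculation these would be the annual
    # forcing estimates (W/m^2) and the GMST anomalies (C).
    years = np.arange(1880, 2004)
    forcing = 0.0075 * (years - 1880)        # made-up, slowly rising forcing
    rng = np.random.default_rng(2)

    def run_model(tau, heat_cap, forcing, dt=1.0):
        """Forward-Euler integration of  C dT/dt = F(t) - (C/tau) T."""
        lam = heat_cap / tau                 # W m^-2 K^-1
        T = np.zeros_like(forcing)
        for i in range(1, len(forcing)):
            T[i] = T[i - 1] + dt * (forcing[i - 1] - lam * T[i - 1]) / heat_cap
        return T

    # Synthetic "observations" from known parameters plus noise, to test the fit.
    T_obs = run_model(8.0, 16.0, forcing) + 0.05 * rng.normal(size=forcing.size)

    def sse(params):
        tau, heat_cap = params
        if tau <= 0.0 or heat_cap <= 0.0:    # keep the search in physical territory
            return 1.0e9
        return float(np.sum((run_model(tau, heat_cap, forcing) - T_obs) ** 2))

    best = minimize(sse, x0=[5.0, 10.0], method="Nelder-Mead")
    print(best.x)    # should land near (8, 16) for this synthetic case
    ```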

  79. Posted Jan 15, 2008 at 3:32 PM | Permalink

    The graph now projects assuming GHG forcings increase at the rate they did over the past 9 years (after the volcanic eruption washed out.)

    If GHG forcings don’t increase and there are no volcano eruptions, the model predicts the temperature anomaly will reach 0.86 C.

  80. Mike B
    Posted Jan 15, 2008 at 3:42 PM | Permalink

    Lucia – a couple of comments…wish I had more time, but I just don’t.

    First, looking at the graph, you can see that your minimization-of-MSE regime de facto requires your estimates (1980-present) and your predictions to way overshoot the forcings, to make up for the 1880-1940 period where the temps undershoot the forcing. The long past is leveraging your predictions way high.

    Second, I suggest you examine some literature on forecasting to compute realistic prediction intervals on your current year plus 10 predictions. You’ll be shocked how wide they are. If I have some time tomorrow, I’ll try to track down some online references.

  81. Posted Jan 15, 2008 at 3:57 PM | Permalink

    Mike B–
    The predictions aren’t way overshooting the forcings. The forcings and predictions have different units. As it happens, I divide the forcings by a constant to keep my scales from running away from me. If the forcings stop increasing and level off, temperatures eventually approach an asymptotic value. It happens that if we “ran” a case where we increased the forcing by a step function going from 0 to 3.75 W/m^2, you’d see the temperature would initially increase almost linearly at a rate of 1.72 C per 14.5 years. After that, it would approach 1.72 C and level off. The shape would look like a decaying exponential. This is a classic shape.

    If you look carefully, you’ll see temperature tends to lag increases in the forcing. That’s the way this model works.

    On the data issue: there does seem to be a problem with the early time period. I suspect that’s because the forcing estimates are poor during the early periods. Possibly, Hansen underestimates the drop in forcing due to those ancient volcanic eruptions. Or there is something else we don’t know.

    However, I think, with this model, the long past doesn’t necessarily leverage predictions high in the current time in the way you suggest. This model is based on an energy balance for a lumped-parameter system.

    The difficulty with the poor agreement in the past is that it may be due to poor estimates of forcing, and so result in poor values for the time constant (τ) and the effective heat capacity of the climate (1/α )

    If you have papers, I’d love to read them though.

  82. Sam Urbinto
    Posted Jan 15, 2008 at 4:13 PM | Permalink

    Glancing over this again, I wanted to comment on something Lucia said that I so totally agree with. Measurements are not temperature. They are measurements.

    Now while I certainly wouldn’t say none of the thermometers or satellite readings used to develop the anomaly are accurate, do we really know how accurate and in what percentage? We know that there are errors and biases, but we haven’t fully quantified it yet.

    Then we hit the other issue, the real problem: Sure, I may be measuring a fine and accurate temperature {points to spot in the air} right here, but even if I am, how do we know what the fact that it’s 20 C right there {points to spot in the air} means if I’m using that sample as a proxy for the temperature of {vaguely gestures at 1 km radius} the area we’re representing, or that 5 feet up is meaningful?

    We don’t have the temperature, we have measurements sampling the air and reading the surface of the water.

    Does that reflect “warming”? If it’s warming, is it understated, overstated or just right? Would it be doing the same thing if no humans were here?

    Or does the number reflect a change in measuring and processing side-effects.

  83. Posted Jan 15, 2008 at 4:33 PM | Permalink

    @Sam– well, hopefully, temperature measurements do approximate the real temperature. 🙂

    My issue with Tamino’s criticism of Schwartz is that he decreed the model was unphysical because he mistook a well-known data artifact, one that arises when measurements contain uncertainty, for a physical process.

    (Now that we are discussing this, I’m going to have to make sure I don’t fall prey to that same data artifact!)

    But, meanwhile, right now I’m hoping:
    a) I can find better information on forcing during the 1880s and
    b) I can learn how to do an error estimate on these parameter values.

    Do I have to run simulations where I introduce white noise into my data and then find a distribution around that? That’s the only thing I can think of!
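
    Sketching what I have in mind (a Monte Carlo loop with a stand-in straight-line fit so it runs on its own; the same loop would wrap around the lumped-parameter fit, and the assumed noise level is the weak link):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in "data" and "fit": a noisy straight line and its least-squares slope.
    t = np.arange(100, dtype=float)
    y = 0.01 * t + 0.1 * rng.normal(size=t.size)

    def fitted_slope(series):
        return np.polyfit(t, series, 1)[0]

    sigma = 0.1          # assumed measurement uncertainty (the weak link)
    perturbed = [fitted_slope(y + sigma * rng.normal(size=y.size))
                 for _ in range(1000)]

    print(fitted_slope(y))      # central estimate
    print(np.std(perturbed))    # spread of re-fitted values ~ parameter uncertainty
    ```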

  84. steven mosher
    Posted Jan 15, 2008 at 5:01 PM | Permalink

    RE 76.

    I probably misunderstood. The volcanic forcing appears in the record like shot noise with some reverb. If you are trying to estimate the sensitivity of the system, I think it might be instructive to eliminate these spikes.

  85. Sam Urbinto
    Posted Jan 15, 2008 at 5:51 PM | Permalink

    Approximates the temperature of the air 5 feet off the ground in the locations it’s sampled and the top of the water surfaces as mean anomaly over time, yes. Otherwise, it’s really an unknown that we think approximates “the temperature of the Earth”.

    I have a problem signing off on the idea there is “a” temperature of the Earth, although it does seem within the bounds of reason the anomaly might approximate something to at least build off of. (And that thing is not CO2=temperature rise) 😀

  86. Posted Jan 15, 2008 at 6:02 PM | Permalink

    @84 stevem. This looks good enough that I am going to look for better forcing data and then redo the analysis using monthly temperature data. When I do that, the volcanic eruptions will help pin down the time constant.

  87. Ron Cram
    Posted Jan 15, 2008 at 9:46 PM | Permalink

    lucia,

    If you want some papers on scientific forecasting, I suggest you start with the website by Scott Armstrong. He is the leading figure in this fairly young field and the author of “Principles of Forecasting.” Armstrong conducts “forecasting audits.” He recently looked into the forecasting done by the IPCC and was more than a little underwhelmed. There are three journals dedicated to the science of forecasting – Journal of Forecasting, International Journal of Forecasting and Foresight. Armstrong’s website is here.

  88. Ron Cram
    Posted Jan 15, 2008 at 9:54 PM | Permalink

    re: 83

    lucia,

    You wrote: “hopefully, temperature measurements do approximate the real temperature.”

    Actually, the difference between the temp record and the real temp can be quite significant. I am certain you know of Anthony Watts, but possibly you have not seen the presentation he gave at UCAR last year about microsite issues. Of course, this only impacts land surface temps but it is still significant. You can find his presentation here.

  89. Posted Jan 16, 2008 at 6:44 AM | Permalink

    Ron– Yes. I’m aware there are systematic errors! Heck, the Crut and GISS met station only data show a systematic deviation over time of about 0.1 C per century. It’s less than the overall rise, but it’s still large enough to make statistical tests of theories about temperature evolution difficult. The answer depends on the data set!

    The HadCrut and GISS LandOcean data track each other better. (That doesn’t mean they are right– but at least they don’t disagree with each other!)

  90. Ron Cram
    Posted Jan 16, 2008 at 8:44 AM | Permalink

    lucia,

    If you have not done so, please take a look at the presentation by Watts. I do not have the math skills you and Steve McIntyre have but just doing some approximations leads me to think that up to half of the observed warming in the 20th century may not be real but an artifact of these microsite issues (which are even more significant outside the US than inside). Without seeing the presentation, I do not think you will understand the significance of the problem.

    The ocean data has similar problems. The recently completed (November 2007) Argo Network is providing very good ocean temp data now. They began to build it about 2000, if I remember correctly. But prior to that there were lots of changes in instrumentation and other factors that look to have biased the sea temp record. Steve McIntyre has written on this some.

  91. steven mosher
    Posted Jan 16, 2008 at 10:12 AM | Permalink

    RE 86 Lucia,

    You have this right?

    http://data.giss.nasa.gov/modelforce/RadF.txt

  92. steven mosher
    Posted Jan 16, 2008 at 10:16 AM | Permalink

    RE 86 Lucia,

    You have this right?

    http://data.giss.nasa.gov/modelforce/ghgases/GCM_2004.html

  93. Posted Jan 16, 2008 at 3:12 PM | Permalink

    stevemosher– Thanks for those links. I also emailed Gavin explaining that I wanted more detailed volcano eruption data and he pointed me to the data at:
    http://data.giss.nasa.gov/modelforce/strataer/

    Now I just need to figure out how to deal with the volcano stuff so I can peg those volcanos more precisely.

    Oh… even though I don’t have the volcanoes predicted, I figured I might as well get in the horserace and prognosticate.

    Now, I better go figure out how to make volcanic eruptions happen when they actually happen instead of smearing their average effect over a year.

  94. steven mosher
    Posted Jan 16, 2008 at 3:52 PM | Permalink

    RE 93.

    If you have a look around Tamino’s site you will find a place where he plays with volcano forcing. I don’t have an exact link, but if you can’t find it in 15 minutes I will go dig it up for you. Probably 2-3 months ago.

    ANYWAY, we figured the effect came into the record with an 8-month lag; that is, 8 months after the eruption you get the cooling. As always, he did not supply the analysis, but maybe if you ask nicely he will tell you.

  95. steven mosher
    Posted Jan 16, 2008 at 4:00 PM | Permalink

    lucia, here is a tamino tidbit on volcano

    http://tamino.wordpress.com/2007/10/16/many-factors/#more-427

    not very helpful, but a clue at least

  96. Sam Urbinto
    Posted Jan 16, 2008 at 4:08 PM | Permalink

    I again point to the discrepancies between the GHCN-ERSST data at the NCDC GCAG page and the GISS GLB.Ts+dSST.txt file.

  97. Posted Jan 16, 2008 at 4:23 PM | Permalink

    stevemosher. Thanks for the link. I was sort of guesstimating that I would need to redo the analysis with temperatures at no more than 1-month increments, particularly around times of volcanic incidents. But I now have “the concept” of how to do this, so it’s a matter of getting the volcano data.

    Because the volcanoes really spike the forcing, I think they have a potential to affect the magnitude of the two parameters quite a bit.

    Sam– Thanks. I’ll end up using all the temperature records after I figure out the volcanic forcing data. All of that is going to be useful for getting ranges out of this simple model.

    It’s already kind of a cool model. Even with the smoothed-out volcano forcing and averaged yearly data, I get correlations of 0.89 between the data and the model.

  98. pat
    Posted Mar 31, 2008 at 5:39 PM | Permalink

    Speaking of errors (log/ln, degrees/radians): nothing can match that of a recent Mars lander mission, which was lost when JPL used pounds for thrust and NASA, joules – or was it the reverse? In any case, what is a very understandable error can have disastrous results.