Demetris Koutsoyannis

I mentioned a few days ago that a serious discussion had threatened to break out at realclimate, where Demetris Koutsoyannis had posted up some astute commentary. He has recently dropped in here as well. I was unfamiliar with his work prior to this recent introduction. He has written extensively on climate, much of it from a statistical viewpoint far more advanced than poor old Rasmus's. I began writing a commentary on his realclimate post, but have instead simply reproduced it below, following some short introductory comments, as it deserves to be read in its entirety.

You can consult a list of his publications here. He emailed me the following two articles not posted on his website, which may interest others. (They are more accessible to non-statisticians than some of the other articles on the website): Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003. Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47(4), 573-595, 2002.

I’ve obviously been intrigued by persistence issues for some time and have briefly discussed Mandelbrot and also Vit Klemeš, who has encouraged Koutsoyannis. In light of Rasmus’ insistence (persistence?) on i.i.d. (independent, identically distributed) behaviour, I repeat the following from Klemeš, which applies nicely to Rasmus and his ilk:

Somehow the operational attitude toward mathematical modeling, the exaggerated strife for mathematical tractability and convenience ("Oh Lord, please keep our world linear and Gaussian") has blurred our sense for reality…

I’ve discussed Rasmus together with Cohn and Lins in a number of recent posts, commencing here. Rasmus argued that consideration of statistical persistence "pitched" statistics against physics, in effect claiming that statistical approaches perhaps made sense at a quantum mechanics level, but not at a terrestrial scale. Doubters should re-read his post at realclimate, but here are a couple of excerpts:

In fact, one may wonder if an underlying assumption of stochastic behaviour is representative, since after all, the laws of physics seem to rule our universe….

On the very microscopical scales, processes obey quantum physics and events are stochastic. Nevertheless, the probability for their position or occurrence is determined by a set of rules (e.g. the Schrödinger equation). Still, on a macroscopic scale, nature follows a set of physical laws, as a consequence of the way the probabilities are determined…

The nature is not trendy in our case, by the way – because of the laws of physics.

If one pursues his references a little further, one finds that MBH99 has become not merely an icon, but may even be one of the “laws of physics” referred to by Rasmus. He relied on the assertion that the "historical climate has been fairly stable", citing here, which, in turn, showed the MBH98-99 hockeystick as "proof". Such tangled webs.

Here’s Demetris’ comment in full, as all of it is worth considering:

1. "Statistical questions demand, essentially, statistical answers". (Here I have quoted Karl Poppers’ second thesis on quantum physics interpretation – from his book "Quantum Theory and the Schism in Physics"). The question whether "The GCMs […] give a good description of our climate’s main features" (quoted from the rasmus’s response) or not is, in my opinion, a statistical question as it implies comparisons of real data with model simulations. A lot of similar questions (e.g., Which of GCMs perform better? Are GCMs future predictions good enough? Do GCM simulations reproduce important natural behaviours?) are all statistical questions. Most of all, the "attribution" questions (to quote again rasmus, "how much of the trend is natural and how much is anthropogenic" and "to which degree are the variations ‘natural’") are statistical questions as they imply statistical testing. And undoubtedly, questions related to the uncertainty of future climate are clearly statistical questions. Even if one believes that the climate system is perfectly understood (which I do not believe, thus not concurring with rasmus), its complex dynamics entail uncertainty (this has been well documented nowadays). Thus, I doubt if one can avoid statistics in climatic research.

2. Correct statistical answers demand correct statistics, appropriate for the statistical behaviours exhibited in the phenomena under study. So, if it is "well known" that there is long-term persistence (I was really happy to read this in rasmus’s response), then the classical statistical methods, which are essentially based on an Independent Identically Distributed (IID) paradigm, are not appropriate. This I regard as a very simple, almost obvious, truth and I wonder why climatic studies are still based on the IID statistical methods. This query, as well as my own answer, which is very similar to Cohn and Lins’, I expressed publicly three years ago (Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003 – http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.48.1.3.43481). In this respect, I am happy for the discussion of Cohn and Lins’ work, hoping that this discussion will lead to more correct statistical methods and more consistent statistical thinking.
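
To make the stakes of point 2 concrete, here is a minimal sketch (my own illustration, not code from any of the cited papers) of how the uncertainty of a long-term mean grows once Hurst-type scaling replaces the IID assumption. The scaling relation Var(mean of n values) = sigma^2 / n^(2-2H) follows the framework of Koutsoyiannis (2003); the numbers below are purely illustrative.

```python
# Standard error of an n-value mean under Hurst-type scaling:
# se = sigma / n**(1 - H); H = 0.5 recovers the classical IID result.

def se_of_mean(sigma: float, n: int, hurst: float = 0.5) -> float:
    """Standard error of the mean of n values with marginal std sigma."""
    return sigma / n ** (1.0 - hurst)

sigma = 0.25   # assumed interannual std of temperature (deg C), illustrative
n = 30         # a 30-year "climate normal"
for h in (0.5, 0.7, 0.9):
    print(f"H = {h}: s.e. of {n}-yr mean = {se_of_mean(sigma, n, h):.3f} C")
# H = 0.9 inflates the standard error to ~3.9 times the IID value, so an
# IID significance test applied to a persistent series overstates trends.
```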

3. Consequently, to incorporate the scaling behaviour in the null hypothesis is not a matter of "circular reasoning". Simply, it is a matter of doing correct statistics. But if one worries too much about "circular reasoning", there is a very simple technique to avoid it, proposed ten years ago in this very important paper: H. von Storch, Misuses of statistical analysis in climate research. In H. von Storch and A. Navarra (eds.): Analysis of Climate Variability: Applications of Statistical Techniques. Springer Verlag, 11-26, 1995 (http://w3g.gkss.de/staff/storch/pdf/misuses.pdf). This technique is to split the available record into two parts and formulate the null hypothesis based on the first part.
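
von Storch’s split-sample device is simple enough to sketch in code. The following is my own illustration of the idea, not code from his paper; the AR(1) null merely stands in for whatever stochastic model one adopts (a long-term-persistence model would be the natural choice here). Because the null hypothesis is formulated on data not used in the test, there is no circularity.

```python
import numpy as np

def split_sample_test(series, num_draws: int = 10_000, seed: int = 0):
    """Calibrate a stochastic null on the first half of a record, then ask
    whether the second half's mean shift is unusual under that null."""
    rng = np.random.default_rng(seed)
    first, second = np.array_split(np.asarray(series, dtype=float), 2)

    # Fit the null (here an AR(1) process) to the first half only.
    x = first - first.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
    innov_sd = x.std(ddof=1) * np.sqrt(max(1e-9, 1.0 - phi**2))

    # Simulate many second-half-length realizations of the fitted null.
    n = len(second)
    sims = np.empty((num_draws, n))
    state = rng.normal(0.0, x.std(ddof=1), num_draws)
    for t in range(n):
        state = phi * state + rng.normal(0.0, innov_sd, num_draws)
        sims[:, t] = state

    stat = second.mean() - first.mean()           # observed "shift"
    p_value = np.mean(np.abs(sims.mean(axis=1)) >= abs(stat))
    return stat, p_value
```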

4. Using probabilistic and statistical methods should not be confused with admitting that things "happen spontaneously and randomly" or "without a cause" (again I quote rasmus’s response here). Rather, it is an efficient way to describe uncertainty and even to make good predictions under uncertainty. Take the simple example of the movement of a die and eventually its outcome. We use probabilistic laws (in this case the Principle of Insufficient Reason or, equivalently, the Principle of Maximum Entropy) to deduce that the probability of a certain outcome is 1/6, because we cannot arrive at a better prediction using a deterministic (causative) model. This is not a denial of causal mechanisms. If we had perfectly measured the position and momentum of the die at a certain moment and the problem at hand was to predict the position one millisecond later, then the causal mechanisms would undoubtedly help us to derive a good prediction. But if the lead time has to be a few seconds rather than one millisecond (i.e. if we are interested in the eventual outcome), then the causal mechanisms do not help and the probabilistic answers become better. May I add here my opinion that the climate system is perhaps more complex than the movement of a die. And may I support this thesis by noting that statistical thermophysics, which is based on probabilistic considerations, is not at all a denial of causative mechanisms. Here, I must admit that I am ignorant of the detailed structure of GCMs, but I cannot imagine that they are not based on statistical thermophysics.
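
For readers who want the maximum-entropy step in point 4 spelled out, here is the standard textbook derivation (my addition, not part of Demetris’ comment): with no information beyond the fact that the die has six faces, entropy maximization forces the uniform distribution.

```latex
\max_{p_1,\dots,p_6}\; H = -\sum_{i=1}^{6} p_i \ln p_i
\qquad \text{subject to} \qquad \sum_{i=1}^{6} p_i = 1 .
```

Setting the derivative of the Lagrangian to zero, -ln p_i - 1 - lambda = 0, gives p_i = e^(-1-lambda), the same for every face, and normalization then yields p_i = 1/6.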

5. I have difficulty understanding rasmus’s point "A change in the global mean temperature is different to, say the flow of the Nile, since the former implies a vast shift in heat (energy), and there has to be physical explanations for this." Is it meant that there need not be physical explanations for the flow of the Nile river? Or is it meant that the changes in this flow do not reflect changes in rainfall or temperature? I used the example of the Nile for three reasons. Firstly, because its basin is huge and its flow manifests an integration of climate over an even more extended area. Secondly, because it is the only case in history in which we have an instrumental record of a length of so many centuries (note that the measurements were taken in a solid construction known as the Nilometer), and the record is also validated by historical evidence, which, for example, witnesses that there were long periods with consecutive (or very frequent) droughts and others with much higher water levels. And thirdly, because this record clearly manifests a natural behaviour (it is totally free of anthropogenic influences because it covers a period starting in the 6th century AD).

6. I hope that my above points will not be given a "political" interpretation. The problem I try to address is not related to the political debate about the reduction of CO2 emissions. I simply believe that scientific views have to be as correct and sincere as possible; I also believe that the more correct and sincere these views are, the more powerful and influential they will be.

This is very clearly put and I would barely disagree on the location of a comma. Happily, Gavin has let Rasmus off the bench to respond. It’s fun seeing Rasmus run amok. You’ll enjoy his reply as well. Among other things, Rasmus tells us: "I am not an ‘anti-statistics’ guy. Statistics is a fascinating field. In fact, most of my current work is heavily embracing statistics." There are a few pearls that are worth incorporating into The Sayings of Rasmus.

Koutsoyannis’ post was followed up by one from Isaac Held, a very eminent climate scientist, who twitted Rasmus for his statistics. I’ll try to post some excerpts from Koutsoyannis’ articles on another occasion.

83 Comments

  1. Willis Eschenbach
    Posted Jan 5, 2006 at 4:38 PM | Permalink

    Well, I went to RealClimate to comment on the Koutsoyannis post, and guess what … they’ve closed the thread. No more posties, sorry.

    Wankers …

    w.

  2. Armand MacMurray
    Posted Jan 5, 2006 at 5:28 PM | Permalink

    Willis, it looks postable to me; perhaps they’ve just locked you out?

  3. Willis Eschenbach
    Posted Jan 5, 2006 at 7:06 PM | Permalink

    Armand, when I call up the page http://www.realclimate.org/index.php?p=228#comment-7096, it only has the “Preview” button, and not the “Post Comment” button. What am I missing?

    w.

  4. Posted Jan 5, 2006 at 9:53 PM | Permalink

    Steve, your article is a great honor for me and very encouraging. Thank you. I am also glad for the comments by David Stockwell, Dave Dardinger and Willis Eschenbach.

  5. Armand MacMurray
    Posted Jan 6, 2006 at 2:11 AM | Permalink

    Willis, just have faith! 🙂
    Press the Preview button, and along with your preview, a “Post” button should appear.

  6. Willis Eschenbach
    Posted Jan 6, 2006 at 3:39 AM | Permalink

    OK, went to look again, and this time the post button was there … go figure. So I posted a comment.

    Demetris, the comment involved constructal theory. It seems to me that statistics for constructal geometries would be the relevant ones, but I can’t find much on the web regarding this question. Do you have any information about the statistics of constructal geometries?

    w.

  7. Posted Jan 6, 2006 at 6:53 AM | Permalink

    I feel compelled to say a few things about Koutsoyannis in a thread bearing his name, but they are relevant to a great number of threads about statistics, attitudes, and more.

    Generally, hydrological statistics asks questions of the type “which?” and avoids questions of the type “why?”. However, the latter would provide an explanation of the appropriateness or inappropriateness of a certain distribution and thus would also help choose the most appropriate distribution.

    Koutsoyiannis, D., Uncertainty, entropy, scaling and hydrological stochastics. 1. Marginal distributional properties of hydrological processes and state scaling.

    His answer to “why” in this paper, which unifies different distributions under a maximum entropy principle, reminds me of some of the deepest arguments from physics. The insistence on exploring the consequences that flow from ditching a preferred time scale reminds me of the origins of relativity: ‘What flows from ditching a preferred reference frame?’ This is not only abstract theory, however. I have counted four different methods to generate long-term persistence behaviour: some that involve long-term memory, the sum of short-term persistence with different time scales, a hierarchical approach, and the chaotic tent map, which is a deterministic way of generating scale-invariant series. These are practical models that could be used in a variety of applications as appropriate. I will definitely be taking another look at spatial autocorrelation in my area as a result of reading this.

    Even though I get the impression that there is a view that the climate is so uncertain at a fundamental level that prediction is impossible, there are surely some results that could be applied to GCMs to at least create behaviours that have more realistic distributions. One that comes to mind is trying to ditch the implied preference for a particular scale inherent in a finite-volume, finite-time-step simulation. I wonder if aggregating simulations conducted simultaneously at multiple scales and time steps has been tried? Koutsoyannis’ results show this could improve their realism, at least in scaling properties.
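
    Of the four routes to long-term persistence David lists, the second (summing short-term persistence acting at different time scales) is especially easy to demonstrate; Koutsoyiannis’ 2002 paper develops a version of this idea. The sketch below is my own illustration, not code from that paper, and the parameter values are arbitrary.

    ```python
    import numpy as np

    def ar1(n: int, phi: float, rng) -> np.ndarray:
        """One AR(1) path with unit marginal variance."""
        x = np.empty(n)
        x[0] = rng.normal()
        innov_sd = np.sqrt(1.0 - phi**2)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + innov_sd * rng.normal()
        return x

    rng = np.random.default_rng(1)
    n = 20_000
    # Three short-memory components, time scales of roughly 2, 20 and 200 steps.
    composite = sum(ar1(n, phi, rng) for phi in (0.6, 0.95, 0.995))

    # The mixture's autocorrelation decays far more slowly than any single
    # component's, the footprint of Hurst-like long-term persistence.
    for k in (1, 10, 100, 500):
        r = np.corrcoef(composite[:-k], composite[k:])[0, 1]
        print(f"lag {k:>3}: r = {r:.2f}")
    ```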

  8. Posted Jan 6, 2006 at 10:50 PM | Permalink

    Re #7.
    David, I am grateful for this. May I use your words after paraphrasing them this way: I feel compelled to say a few things about my works in a thread bearing my name, but they are relevant to a great number of people.

    1. Let me start with a few sad stories, which I had repressed, but this thread (along with my electronic archives and an encouraging email from a knowledgeable colleague) helped me to recall. The first story is about my first paper on Hurst (Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47(4), 573-595, 2002). Initially (November 2000), I submitted it to another journal; I will call it Journal A as it is not important to give the name here. It was rejected outright (no second chance), mainly because an anonymous reviewer did not agree with bringing FGN models back to widespread use in hydrology.

    2. My second paper on Hurst (Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003) I submitted initially to another journal which I will call Journal B. The first submission (May 2001) was rejected based on two anonymous reviews. I tried a second submission to the same journal, after addressing the review comments (October 2001); this was again rejected (finally condemned this time) based on another two reviews.

    3. At about the same time I did a third paper with the title “Are hydrologic processes chaotic?” which I submitted again to Journal B (August 2001). In this I tried to express my opinions that (a) the chaos notion may be good for simple nonlinear physical systems but not good enough for complex geophysical processes (except for understanding or approximating behaviours), and (b) that the old (in some way Aristotelian – according to Karl Popper) probability concept may be more appropriate to describe geophysical processes. I also tried to show (using mathematics) the sources of potential errors that led to claims of low dimensional chaos in hydrological processes. Again I received a rejection accompanied by three reviews and an associate editor assessment report. All four were anonymous (or were sent to me without names) and their total length was about that of my own paper. The rejection was based on disagreement with my interpretation and nobody spotted errors in my mathematics. The associate editor charged me with “limited knowledge” and “myopic perspective”.

    4. God bless Zbyszek Kundzewicz, the editor of Hydrological Sciences Journal, who published my papers 1 and 2, even though he knew that they were both rejected earlier (because I sent him the earlier reviews). I realized that he is a great editor as he has a personal opinion on the papers; he reads them all and makes his own comments and corrections. And Hydrological Sciences Journal, albeit small-scale, is a great journal and the oldest in the field of hydrology (in 2005 it celebrated its 50th birthday – the same age as me). Since then I have published several of my works there and the editor treated them in a manner that was always better than I could expect. I was also honoured to see my name on the editorial board of this journal.

    5. As for paper 3, I was so shocked that I could not work on it at all. I do not know what made me so upset; perhaps it was the length of the reviews. Should a paper that brings about reviews with length equal to its own be rejected? Should a paper be rejected for reviewers’ disagreement with its interpretations (even if they seem wrong)? Why did these people write so many pages to justify rejection of the paper, rather than let it go and publish their own opinions as discussions (in which they could prove publicly my terribly wrong interpretations)? Only two weeks ago (four years after its rejection) was I able to work on this paper again. It happens that I am now visiting Georgia Tech in Atlanta for two months, which have almost passed (normally I work in Athens, Greece), and the problem I am working on here is related to the theme of this paper. I have completed a revision of the manuscript and I have it ready for resubmission (but I do not know where). I hope this public discussion here will not eliminate any possibility for publication.

    7. With all this I do not want to give the impression of being ungrateful to Journals A and B. On the contrary, I am very grateful to both. They provide a great service to the scientific community and both have high impact; so I think that we are all grateful. I am also grateful because they have published other papers of mine and because both have honoured me in one way or another.

    8. At the same time, I do not pretend that I feel comfortable with all this, even though I understand that rejection is inevitable in a system that performs quality control in scientific publishing. Sometimes I feel angry, not at a particular journal, editor or reviewer, but at the entire scientific publishing system that allows or even creates such situations as I have described above. I am sure that other people have experienced problems similar to or even worse than these. Zbyszek Kundzewicz and I tried to analyze some pathologies of the system and think of potential solutions; our editorial on this in Hydrological Sciences Journal is openly available (http://www.cig.ensmp.fr/~iahs/hsj/504/504000.pdf).

    9. From these pathologies I would like to emphasize this one: the climate of secrecy and anonymity that prevails in the peer review system. I really look forward to a climate change from secrecy to openness and from anonymity to eponymity. The web paradigm (including this very site) helps very much toward such a climate change. In the above stories all reviews were (or reached me as) anonymous. What have I done personally about the anonymity problem? First, I do only eponymous reviews. Second, I try to promote this idea (as I am doing here and now). After my 100th review I formulated a motto, something like a stamp, which I put at the beginning of all my reviews. It says: “Reviewer’s assertion: It is my opinion that a shift from anonymous to eponymous (signed) reviewing would help the scientific community to be more cooperative, democratic, equitable, ethical, productive and responsible. Therefore, it is my choice to sign my reviews.”

    10. Obviously, the peer review system (and the scientific publishing procedure in general) is not a scientific issue per se; it is an ideological and ethical issue. I feel sorry about the dominance of the ideology and ethics of secrecy and anonymity. How can the fear of saying things eponymously be consistent with the search for truth, i.e. with science? How can anonymity be so dominant in the scientific community? For another rejected paper of mine, the editor (not a reviewer) who rejected it, and whom I knew, wrote to me something like “how much better it would be if you did not know my name”.

    11. I used the Greek-origin words eponymity, eponymous and eponymously as sort of “scientific” terms because I do not know English words to describe the very meaning I wanted (I have seen the words attributed and signed, but these do not seem concise, according to my poor English). In Greek, eponymous is someone who has a name; it is just the opposite of anonymous, who does not have a name. I see that the Oxford dictionary does not contain the words eponymity and eponymously at all, while for eponymous it gives this definition: “eponymous a. (of a person) giving his or her name to something”. This does not correspond to the Greek meaning of eponymous, but rather to the Greek word “pheronymous” (i.e. bearing the name). Can anyone help me by suggesting the correct English antonyms of anonymity, anonymous and anonymously? (By the way, antonym is another Greek word of the same etymology but with the preposition anti-, i.e. opposite.)

  9. ET SidViscous
    Posted Jan 6, 2006 at 11:07 PM | Permalink

    Well, I ran eponymous through babelfish and came out with eponymous. Rather than saying that’s the proper English word, knowing babelfish, repeating the word usually means “Hell if I know”.

    So I fired up google and got this.

    http://thesaurus.reference.com/search?q=Anonymous

    For which antonyms it gave: identified, known, named, showcased, spotlighted, public, familiar

    From the context I would put my vote on Identified, as in “Reviewer’s assertion: It is my opinion that a shift from anonymous to identified (signed) reviewing would help the scientific community to be more cooperative, democratic, equitable, ethical, productive and responsible. Therefore, it is my choice to sign my reviews.”

    Though public would work as well.

    It is interesting to note that the definitions for anonymous on that site give a much more sinister tone to the word than is usually the case, in line with your thoughts about anonymous reviews, using words like “unacknowledged, incognito, concealed, disguised, hidden”, and so forth, while also using less sinister synonyms. However, in our legal system there is a reason why the accused in a criminal trial has the right to face their accuser.

  10. John S
    Posted Jan 6, 2006 at 11:43 PM | Permalink

    Both anonymous and eponymous reviewing have their problems.

    Anonymous reviewing encourages cabals to reject papers contrary to their positions with little consequence. Good papers are rejected.

    Eponymous reviewing discourages rejection because most authors are also reviewers and the reviewer would like to be treated favourably when their paper is likewise reviewed. I imagine the ultimate outcome would be that bad papers are not rejected.

    The ideal outcome would be somewhere in between. Sadly, this state of academic nirvana may be unattainable. In either case, the ultimate decisions will have to be made by the editors, and a good editor can overcome the inadequacies of either system. So I advocate neither anonymous nor eponymous reviewing – I advocate better editors. (But how on earth do we achieve that?)

  11. Armand MacMurray
    Posted Jan 7, 2006 at 1:55 AM | Permalink

    I think ET’s right on the terminology: identified, named, or signed reviewing.
    John S is exactly right — we need more good editors. I’ve always felt that the peer reviewers serve mainly to enforce “sufficiency of evidence” and (hopefully) spot logical flaws, establishing publishability, while the editor’s job is to use his taste and vision to choose *which* papers to publish.
    As publishing shifts to electronic media effectively without limits as to the number of works that can be published, I believe that the traditional “edited” model is still useful; however, in order to make available work that might otherwise remain unpublished, perhaps journals might be encouraged to also publish the “publishable” papers not selected for the main journal. If such publication were online-only, the only extra resources required would be for archiving.

  12. Louis Hissink
    Posted Jan 7, 2006 at 5:50 AM | Permalink

    Statistics are simply concise descriptions of a sampled “population”.

    Extrapolating from these calculations is nothing more than describing the colour of the robe of one Angel dancing on the proverbial pin-head.

    None hear would be entrusted with a pelikan pick, let alone a CAT-980, or its equivalent, today.

  13. Louis Hissink
    Posted Jan 7, 2006 at 5:51 AM | Permalink

    Hear: phonetic version, lexical “here”/ 🙂

  14. Peter Hearnden
    Posted Jan 7, 2006 at 5:55 AM | Permalink

    Re #1, I’m not sure I’ve seen you referred to in such a derogatory way by anyone, but there we are.

  15. per
    Posted Jan 7, 2006 at 7:17 AM | Permalink

    fascinated by demetris comments on peer-review.

    In defence of peer-review, I would note that you can often appeal to the editor. If the reviewer’s comments really take up so many pages, much of it must be irrelevant. You have to be able to capture the argument, and dispose of the relevant argument. It is entirely possible to get papers into journals even after quite hostile review.

    I also note that publication in peer-reviewed journals is a big thing for scientists. A paper in a prestigious journal can mean job security (or loss), promotion, grants, pay rises; as well as the less tangible rewards, such as esteem. These are big stakes.

    It would be very easy for a rejected author simply to bottle up resentment against anyone who rejects his work, and repay the “favour”. I certainly remember a senior colleague losing his temper whilst describing the unfairness of the refereeing of a paper of his. Since I was the anonymous referee, I found it very difficult to reconcile his description with the standard of evidence in the paper, and I was very glad for anonymity.

    yours
    per

  16. Posted Jan 7, 2006 at 7:34 AM | Permalink

    Re 9-11:

    1. “I would put my vote on Identified”: My Greek lingual instinct is not very much satisfied with this. “Identified reviewing” may perhaps mean that the reviewing (the system) is identified, not that the reviewer’s name is known. I verified myself that babelfish comes out with the English “eponymous” for the Greek eponymous (which I wrote using Greek letters). So, may I keep eponymous, so that I may be understood if I also say “eponymity” and “eponymously”.

    2. “… definitions for anonymous on that site … a much more sinister tone”: This is the case in Greek as well. There may be a reason for this. I am not aware of any instance of anonymity being practiced in the ancient Greek world, except perhaps in voting. In contrast, as far as I know, Greek societies (particularly the Athenian democracy) promoted public, open behaviour. It is interesting that this has been manifested in the English language as well: the word “idiot” (originally idiotes in Greek) is just the opposite of “public”, but it came to have so negative a meaning as to metaphorically denote a stupid person.

    3. “in our legal system there is a reason why the accused in a criminal trial has the right to face their accuser”: This is an excellent analogy.

    4. “Both anonymous and eponymous reviewing have their problems.” Indeed, but this could be generalized to any system or behaviour. For example, democracy also has its problems. But when an issue is ethical, as in my opinion is the case with the reviewing system, then efficiency comes second. And as far as I know, no one has argued that eponymity and anonymity are ethically equivalent.

    5. “Eponymous reviewing discourages rejection because …” Isn’t this a confession that we (acting as reviewers/scientists) are afraid to say things eponymously?

    6. “The ideal outcome would be somewhere in between.” There is nothing in between. Either I know your name or I don’t.

    7. “I advocate better editors”; “John S is exactly right — we need more good editors”: Who doesn’t wish for better editors? But what is the meaning of this? I think it can be interpreted this way: the editors (i.e. a few people) are responsible for all problems; we (all others, the majority, including ourselves) are innocent. I do not agree with this.

  17. Posted Jan 7, 2006 at 7:59 AM | Permalink
  18. Geoff
    Posted Jan 7, 2006 at 8:06 AM | Permalink

    On the topic of peer review, it’s interesting to note that the journal Weather and Forecasting has just discontinued their “trial” policy of double-blind reviews. “Under the leadership of then Chief Editor Dr. Robert Maddox, the editorial staff of Weather and Forecasting began an experiment using a double-blind review process in 2002. Under this experiment the authors’ names were removed from the manuscript title page and the acknowledgments were deleted. Reviewers only officially learned the names of the authors when the manuscript was accepted for publication. Thus, both the authors and reviewers were unaware of each other, in contrast to the traditional single-blind review process in which the reviewers know the names and affiliations of the authors”. This trial policy was not extended to the other AMS (American Meteorological Society) publications (such as the Journal of Climate).

    The editors comment: “the main reason for initiating the double-blind procedure was a belief that the present single-blind review process can lead to reviewer bias. Published studies indicate that reviewers may have biases based upon gender, country of origin, and the number of authors (Tregenza 2002; Wennerås and Wold 1997). While no studies have been done regarding the review processes of AMS journals, as editors we have circumstantial evidence suggesting that reviewers are not always unbiased”.

    Their experiences with the double-blind process were not without difficulties: “There definitely are problems with the double blind. In a scientific discipline as small as meteorology, some reviewers have no trouble identifying at least one of the authors from presentations made at conferences, writing style, or figure style. In addition, references to previous works help to identify the authors in some cases. There are no easy fixes for these problems. However, the driving question to ask is whether or not we think the present peer-review process being used by AMS journals is fair, and if not, then what would be better? While we believe that reviewers usually are fair and helpful, we also believe that biases can be subtle and hard to recognize (even by the reviewer providing the review)”.

    This double-blind review procedure is being given up for the sake of “uniform” treatment with other AMS journals.

    Tregenza, T., 2002: Gender bias in the refereeing process? Trends Ecol. Evol., 17, 349–350.

    Wennerås, C., and A. Wold, 1997: Nepotism and sexism in peer-review. Nature, 387, 341–343.

    (AMS members can view the full comment at doi: 10.1175/WAF9010.1
    Weather and Forecasting: Vol. 20, No. 6, pp. 825–826)

    Geoff

  19. Steve McIntyre
    Posted Jan 7, 2006 at 8:53 AM | Permalink

    One use of the term “eponym” is in the description of the ancient Assyrian method of naming individual years after individuals, for whom modern historians use the term “eponym”. Lists of eponyms are available for the Late Assyrian Empire and are important for dating ancient events. No corresponding lists are available for earlier periods. An anonymous, but perhaps identifiable, website making an effort to improve the dating of eponyms in the reign of the Middle Assyrian king, Shalmaneser I (13th century BC) is here.

  20. Posted Jan 7, 2006 at 9:43 AM | Permalink

    Re #17 and #19:

    The word “eponym” also exists in Greek; it is a noun and the meaning in modern Greek is “surname”.

    I am happy with the lesson on Greek by Luboš. We do not use “onymous” in modern Greek except to synthesize other words (and there are a lot); in this case we use it along with the preposition epi- (eponymous), probably to give more emphasis (?). But in many cases the Greek-origin English words are closer to the ancient Greek meaning than modern Greek words. This is the case, for example, with the word “idiot” that I mentioned in #16.2.

    I saw in the Oxford dictionary that all three words onymous, onymity and onymously exist in English with exactly the meaning I had in mind when I used eponymous, eponymity and eponymously. Here are the definitions of the dictionary:

    onymous /ˈɒnɪməs/ a. rare. [Extracted f. ANONYMOUS.] Having a name; (of a writing) bearing the name of the author; (of an author) giving his or her name. Usu. in collocation w. anonymous.

    onymity n. the condition of being onymous.

    onymously adv. with the writer’s name given.

    So, from now on I will use the words onymous, onymity and onymously (despite the fact that the dictionary characterizes them as rare, and hoping that they will become less rare in the peer review system).

  21. Posted Jan 7, 2006 at 9:44 AM | Permalink

    #15

    Per, you say “It would be very easy for a rejected author simply to bottle up resentment against anyone who rejects his work, and repay the ‘favour’.”

    Not so easy. If the review is “eponymous”, and you think that someone whose paper you have rejected might give you the same treatment, you just have to ask the Editor to choose someone else. Voilà!

    David Stockwell had a good piece on peer review, where I myself commented on how I think that anonymity is bad, and how “eponymity” would help get more honest, and more civil, reviews. The advent of Internet publishing also allows for two classes of papers: the ones that are deemed very good and significant would end up in “official” journals, but the rest could still be published, and therefore read and possibly cited by others. “Journal” papers can get you a better rating for grant applications, but I can imagine that a highly cited “limbo” paper would also have some value. Certainly, it would make the reviewers look stupid.

    There are two main reasons to reject a paper: (1) it’s wrong, and (2) it’s not a “significant” result. Most papers are not wrong, and if one is, and this can be clearly demonstrated, then the author should himself agree and correct it. “Significance” is a very vague concept. Very little of what is actually published is highly significant, yet most of it is useful in some way or another. I can understand that journals want to distinguish themselves by publishing highly significant results. But that’s really for journals like Science and Nature (and I agree with Steve that they have an extra burden if they want to live up to their reputation). My experience is that most journals publish almost anything, and the reason is that there are so many papers being submitted that the Editor and the reviewers, who do this on a voluntary basis, simply don’t have the time to do a good job.

  22. Peter Hartley
    Posted Jan 7, 2006 at 9:45 AM | Permalink

    I suspect many people who have tried to publish in refereed journals have similar “horror stories.” The basic problem with the current system is that referees do not have to take responsibility for their decisions because they are not held accountable. I also think that the system is fundamentally biased. A referee can make two types of mistakes — reject a paper that is good, or accept a paper that is bad. The penalty associated with the latter type of mistake is likely to be higher than the penalty associated with the former. The editor knows who the referee is, and trusts his or her judgement. If the paper later turns out to be bad, the editor may not be kindly disposed toward the referee, which could have subsequent unpleasant consequences. On the other hand, if a good paper is rejected, the editor may not find out that it was published somewhere else, or may not take the time to verify that the published version was not substantively different from the rejected paper. This asymmetry in penalties would make referees inherently conservative. They will tend to accept papers in the current “mainstream” and reject radical departures from the paradigm because that is the safe thing to do. Papers that are radically different are also harder to referee, and since refereeing is usually an unpaid activity, few referees would be willing to take the time to check out a paper that is unfamiliar. It is much easier, and much safer, to find flimsy grounds to reject it and get back to work. This can lead to a “tyranny of the status quo” even if there is no actual conspiracy to promote a certain viewpoint to the exclusion of radical alternatives.

    Another common observation is that most academics can pick up a copy of almost any journal in their field and find at least one article in there that should not have been published. Refereeing is not a good quality control mechanism. The incentives to do a good job are quite weak.

    The problem with onymous refereeing is that it is likely to reinforce the tendency to produce cliques publishing in particular journals. People agree to the publication of an article in return for a similar favor down the road. The group gets control of what is published and alternative viewpoints get shut out even more strongly than under the anonymous protocol.

    One thought that occurred to me as a way of fixing the system is that, when a paper is published, one could also publish at the end of it the names and affiliations of all the reviewers who accepted the paper and all reviewers who had previously rejected it. Academics could then receive recognition and reward for doing a good job as referees, and suffer a penalty for doing a bad job. They would have an incentive to referee conscientiously, and I think we would see the quality of the published articles increase. Publishing all the names along with the article could also limit the problem of a clique getting control of a journal, since then the selection of referees and the behavior of editors and referees are public knowledge. A problem with the proposal, however, is that the final published article could differ substantially from an earlier version that was rejected. The referees who suggested rejecting that article may have been correct, but they would suffer a penalty from being listed as rejectors on the final piece, even though their position was correct and may even have contributed to the quality of the final paper. Ultimately, it may be difficult to improve upon the double-blind refereeing system and rewarding editors who do a good job selecting and rejecting articles for publication.

  23. per
    Posted Jan 7, 2006 at 12:46 PM | Permalink

    I raised the argument that in the case of named refereeing, the problem of people taking revenge for bad reviews is pretty severe.

    Just for example, in the US and UK, there are major government funding committees, where a group of scientists get together and decide who is going to get the dosh (research grants). There is a due process – they have to write out for referees’ opinions. But the academics who give out the money have control over who gets to referee, and over passing judgement on the referees’ opinions. These individual academics get to review a large number of grant applications in the whole of their general area.

    Who would want to be the UK/US academic who rejected one of their papers under a named refereeing scheme?

    As for the idea of double-blind refereeing, it is of course a sham. The references will instantly give away which group it is.

    cheers
    per

  24. Dave Dardinger
    Posted Jan 7, 2006 at 1:09 PM | Permalink

    I was thinking last night about the disadvantages of either signing or not signing reviews and came up with one possibility. How about allowing a person submitting a paper to provide a list of people they would not want submitting an unsigned review? Then if the editor asks someone on the list to review, that notation would be given, and the person asked could decline to review and possibly provide a cautionary remark to the editor. I.e. “Dr. Jones is my superior, so I feel it is unfair for him to place me in a position where it would be difficult for me to review his work without risking my status at the University.”

    Such a dual system would possibly remove the worst abuses of the present system without introducing new problems. At least it would make the Editor aware of possible problems in assigning reviews.

  25. Posted Jan 7, 2006 at 4:24 PM | Permalink

    The discussion about the review system is fascinating. It shows that there exist solutions worth trying. I mentioned before an editorial (Z. Kundzewicz, and D. Koutsoyiannis, The peer-review system: prospects and challenges, Hydrological Sciences Journal, 50(4), 577-590, 2005) which can be found at

    http://www.cig.ensmp.fr/~iahs/hsj/504/504000.pdf

    There we have also tried to imagine some realistic steps for the improvement of the system. In addition, we discuss some interesting objective experiments, especially diagnostic ones, which are not widely known. If anyone is interested in contributing to this, the discussion in the journal is open until 1 February.

  26. Paul Linsay
    Posted Jan 7, 2006 at 5:02 PM | Permalink

    The first time Einstein submitted a paper to a US journal in the 1930s (after fleeing Europe), he was outraged to learn that it would be refereed. It wasn’t done in Europe: you simply submitted the paper and the readers decided if it was worthwhile. This kind of wide-open publication fell by the wayside with increasing numbers of scientists and the rising costs of publishing and distributing a paper journal. There’s no reason wide-open publication can’t return now that we have cheap storage and electronic distribution. A good example is lanl.arXiv.org.

    As far as I can tell, the only value added that Science and Nature provide is to report science news and politics. There is no reason this has to be coupled to publication of science papers. The prestige associated with publishing in them is strictly due to the artificial scarcity of publication space. In the end, who cares where an article is published so long as it’s accessible, which is another major flaw of the current system.

    Let everyone publish and make the information freely available. Over time the science will correct itself, as it always does, by calculations, discussion, and experiment.

  27. Ray Soper
    Posted Jan 7, 2006 at 5:04 PM | Permalink

    Re 26: Well said Paul.

  28. Steve McIntyre
    Posted Jan 7, 2006 at 5:37 PM | Permalink

    The sociology of the prestige of Nature and Science is really quite interesting. In the paleoclimate studies that I’ve specialized in, the Nature and Science articles are often little more than Reader’s Digest articles.

  29. per
    Posted Jan 7, 2006 at 6:50 PM | Permalink

    In the end, who cares where an article is published…

    It is an interesting perspective, and no doubt correct with the value of hindsight. After all, Steve’s publication in Energy & Environment about MBH’98 (published in Nature) had some impact.

    But I should answer that question. Quite simply, publications in high quality journals are worth hundreds of thousands, to millions, of pounds of grant money. If your paper gets into Nature, you are virtually guaranteed your next grant – and grants come in multiples of a quarter million sterling. As well as grants, you get invites to expenses-paid conferences all over the world, and promotion (= cash in your pocket).

    I am fairly clear that the editors of journals such as Science and Nature are flown across continents and lavishly entertained whilst having the virtues of a piece of science, and how Nature/Science would benefit from publishing the work, explained to them. Large research institutes often have surprisingly cordial relationships with such editors; and when you consider that a Nature or Science paper can justify a million pound budget virtually by itself, you can see why.

    I should also add that in the UK, research is ranked in all universities in a big exercise. One of the principal ways of doing this is looking at the impact factor of the journals that academics publish in. On this assessment, the research income of academic departments is set, so you can be talking about a million pounds per annum for ca. 30-40 people. If you don’t get articles in high impact-factor journals, you will get a lower assessment, you will get less money for your university, and the university will slash jobs in that department.

    Perhaps more detail than you wished 🙂
    yours
    per

  30. Posted Jan 8, 2006 at 7:06 AM | Permalink

    Paul #26,

    while I completely agree with you that the refereeing process today is not terribly efficient (it picks a fraction of good papers very similar to what you would have without the process, and all science disciplines could very well follow our example with the preprint server arxiv.org), I think that your history is not quite right.

    The fate of Einstein’s 1905 paper about special relativity was probably decided by the editor – and the editor of Annalen der Physik was Max Planck himself, who fortunately understood relativity right away. There were perhaps no anonymous referees in Europe, but there were always people who could kill a paper. Otherwise it would have been impossible for journals to protect themselves from complete crackpots.

    Einstein himself was also the referee who delayed the publication of Kaluza’s proposal on the fifth dimension by two years. Finally it was published in Sitzungsber.Preuss.Akad.Wiss.Berlin (Math.Phys.), 1921. So I don’t believe that in the 1930s, Einstein encountered the procedures for the first time.

    Onymous referees would have to write even more positive reviews, to keep their friends among their colleagues, so I don’t think that it is a terribly good idea. As I said earlier, it could be a good idea to define completely new scientific jobs of those who would be refereeing and looking for errors everywhere.

    All the best
    Lubos

  31. Posted Jan 8, 2006 at 6:19 PM | Permalink

    Re #30, Lubos

    Onymous referees would have to write even more positive reviews, to keep their friends among their colleagues

    I don’t understand – what kind of friendship is this? Suppose I have to review a bad paper authored by my friend. I put the mask of anonymity on and say “this paper is bad”. Then I take the mask off and say to the author “you are my friend – I know nothing of your rejected paper”. Is this what you mean? No, I would prefer to say “you are my friend but your paper was really bad”. I have done it and received thanks from my friend. But if I do not feel strong enough to do this, I have another option, very simple: I can return the paper to the editor and say “sorry, I cannot do this review”. After all, I am not the only reviewer in the world. I have used this option too.

    it could be a good idea to define completely new scientific jobs of those who would be refereeing and looking for errors everywhere

    Professional reviewers looking for errors everywhere? But if these people can spot these errors, aren’t they capable of doing their own research? Why should we relegate them to looking for other people’s errors? And who will pay for these jobs? Scientific publishing has been a matter of voluntarism, and in my opinion this is good.

    With all this I do not mean that onymous reviewing is without problems. On the other hand, I feel that our hesitation, perhaps phobia, about behaving openly and publicly has amplified these problems. When I have a phobia, I try to find arguments either to justify or to disguise it. But what I have in mind is not a sudden change from anonymity to onymity. That would not be realistic. I envisage a dynamic procedure of encouraging onymity, in which we will learn to live and behave without such hesitations or phobias.

  32. per
    Posted Jan 8, 2006 at 7:48 PM | Permalink

    “I would prefer to say “you are my friend but your paper was really bad”.

    Well, full marks for honesty, but what about your professional colleague who figures: “he says he is my friend, but stabs me in the back. I will have my revenge!”

    There is a maths that you can apply to this: game theory. If you never know who reviews your papers/grants, you will always behave neutrally to your colleagues, because you never know who was responsible.

    However, if you have to tell them your name when you are rejecting a paper, how many of your colleagues need to switch to “retributive mode” to make rejecting a paper/grant a bad move in your game?

    I am sure that your suggestion is worst of all. You eschew anonymity (no loss or gain) to tell your friends when you have rejected their grant. You stand to make a tiny gain by way of increased reputation, and to suffer a major loss if one of your friends rejects your paper/grant as retribution.

    cheers
    per
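
    To make per’s payoff argument in #32 concrete, here is a back-of-envelope inequality (my own framing, not from the thread; the grant figure comes from per’s #29 above). Signing a rejection pays only if

    ```latex
    g > q \, L
    ```

    where g is the reputational gain from a signed review, q the probability that the rejected author later retaliates, and L the cost of one act of retribution. With L on the order of a quarter-million-pound grant, even q = 1% puts the expected loss at 2,500 pounds, which presumably dwarfs g; that is per’s argument in miniature.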

  33. Posted Jan 8, 2006 at 8:15 PM | Permalink

    Per, I gave a second option for a “friend” ready for revenge. And when you apply your game theory, never be sure of anonymity. For example, someone in the editorial office may be my friend. Life plays strange games, so it happens that all of us know who some of the “anonymous” reviewers of our papers are.

  34. per
    Posted Jan 9, 2006 at 8:49 AM | Permalink

    Demetris
    I accept your point that anonymity may not work; but if there is a downside to the anonymity not working (i.e. retribution!), then that is the same downside as for eponymous refereeing.

    I am not saying that eponymous refereeing is impossible, as I know a journal which practices it. I don’t think, however, that it is fair to paint a picture of black vs white, where anonymous refereeing is black and open refereeing is without problems.

    Indeed, I am quite sure that if you waved a magic wand, and implemented open refereeing everywhere, the sky would not fall in, and the system would work. But there would be a very different set of problems to deal with. We would have conversations where people said- you know, all these problems would go away with anonymous refereeing 🙂

    cheers
    per

  35. Posted Jan 9, 2006 at 9:39 AM | Permalink
  36. Posted Jan 9, 2006 at 12:04 PM | Permalink

    Per,

    Just a comment on grant committees vs paper reviewing. It’s a completely different process (I’ve done both). Here in Canada, at least, the grant committee does not choose the external reviewers; we merely make suggestions. The committee I served on had about 12 people. Judging a proposal is different from reviewing a paper: it’s the potential of future work as opposed to the results of the work. There are also strict rules regarding potential conflicts of interest: if you are linked in any way to a researcher, you must leave the room when his/her proposal is discussed. In the end, I can say that there is always a slight bias towards established and successful researchers, but it’s inevitable: they are successful for a reason. It is indeed hard for a newcomer or someone with a highly original and unusual proposal to succeed. But you normally find voices within the committee to take their side. Furthermore, the committee is not anonymous, but of course, you can always give the excuse that you pleaded for someone’s proposal but the rest of the committee rejected it… Anyway, in my case, we were covering a very broad field, so we all had very different interests and backgrounds, so it was far from a “clique”. My main frustration was that we had so little money to allocate! I felt sorry to have to reject some very good proposals because of that.

    On another point: when you say “he says he is my friend, but stabs me in the back. I will have my revenge !”, it’s quite the opposite. “Stabbing in the back” means the person doesn’t see you or know who you are. And I have addressed the “revenge” argument above.

  37. Posted Jan 11, 2006 at 6:37 AM | Permalink

    Hi Demetris
    Like you I have a wonderfully checkered history of experiences with peer review. A few years ago I collected up my best stories and wrote a paper about them (and about other issues like the long lags in the system). I was delighted when it was published in a peer reviewed journal:

    Pannell, D.J. (2002). Prose, psychopaths and persistence: Personal perspectives on publishing, Canadian Journal of Agricultural Economics, 50(2): 101-116.

    If interested, you can see a version of the paper here: http://cyllene.uwa.edu.au/~dpannell/prose.htm

    Since then, it has been by far my most popular paper. I gave a short version of the paper as a talk on national radio here in Australia and got a huge response from frustrated academics from all fields. Based on the feedback I got, it seems to me like the issues are basically the same in all disciplines, pretty much regardless of whether reviewing is double blind, single blind, or not blind at all.

    Finally, to top it all off, Ross McKitrick cited it in a paper of his, recently published in the same journal.

    Anyway, thanks for your excellent contributions here and over at WSOTDS (web site of the dark side).

    Dave

  38. Steve McIntyre
    Posted Jan 11, 2006 at 7:51 AM | Permalink

    I’ve noticed an interesting interaction between editors and peer reviewers. For example, on our first Nature submission, we got favorable reviews from two reviewers. Nature then forced us to cut the article in half and added a new peer reviewer who was very antagonistic. It sure felt like the editors had put their hand on the scales.

    There is also an interesting question about after-market responsibilities of Nature and Science. They are oriented to breaking news, but create institutional barriers to comments and replies. For example, our reviewers pointed to dense and detailed methodological issues, which Nature ultimately decided could not be dealt with within the 500 words that they were prepared to allot to the article. Why a 500-word limit? By this sort of word limit on critical analyses, Nature exports critiques to the specialist literature. It seems to me that, if they published the article, they should bear after-market responsibility. Why should GRL or some other journal have to mop up for Nature?

    I would also submit that Nature shows a lack of continuing responsibility. For example, one of our reviewers was an authority on principal components (a reader has identified a very renowned authority) and he was very critical of MBH. The hatchet-man reviewer added by Nature thought that the issues were too technical to be of interest to Nature’s general readership. All reviewers stated that they could not resolve the dispute without an extraordinary amount of work, which was beyond the scope of a reviewer. See here.

    What were Nature’s responsibilities under the circumstances? Suppose that you were running a company and received expert product liability reviews comparable to our referee reports. You’d have had an obligation to get to the bottom of the matter. Regardless of whether our submission had got things quite right, the referees did not give MBH a clean bill of health – quite the contrary. In similar circumstances, a company with product liability would have done the equivalent of re-checking MBH from the ground up in a more thorough manner than the first time. Did Nature do this? Not at all. In fact, the MBH Corrigendum was not even peer reviewed.

    Nature’s failures went even further. One of our referees explicitly encouraged us to continue our investigations of MBH and other similar studies and stated that our investigations should not be "hampered". In August 2004, we cited this in re-iterating our request to Nature for source code and residual series. Nature refused to provide either, saying that they did not normally require authors to do this here. In this case, it was the editors that over-rode the referee. In addition, the decision on source code and residuals was specifically referred to Philip Campbell, the editor in chief of Nature, so it was not a decision of underlings.

    I’ve posted up our entire correspondence commencing with our Materials Complaint (I hope that the arrow-links work.)

  39. Posted Jan 11, 2006 at 10:27 PM | Permalink

    Re #37
    Dave,
    It is always encouraging to see that other people have similar adventures and similar views about them. And your paper is very optimistic and encouraging, despite the horrible experiences it describes. I can understand why it became so popular.

    I particularly enjoyed your poem. I hope someone sets it to music, so that we can all sing it together. I recognize in your “Referee” some of my referees, and I think many people do the same. It is somewhat more difficult to recognize ourselves in your “Referee” — but authors and referees are the same people in the peer review system.

    I am optimistic too and believe that we (authors and referees) can gradually behave a little better and also improve the entire system, which in turn will help us behave even better. Therefore I strongly support the idea of eponymous reviewing. I hope that this is in accord with your verse:

    I can be a total bastard with
    Complete impunity

    What I am trying to say is that if I am eponymous, there is no complete impunity, so I do not have to be a total bastard.

  40. Posted Jan 11, 2006 at 10:41 PM | Permalink

    Coming back to entropy, may I bring to the attention of interested colleagues the session:

    HS37 Entropy, Information and Scales in Hydrology (co-listed in NP)
    of the EGU General Assembly 2006 (Vienna, Austria, 2-7 April 2006)
    organized by Annalisa Molini from the University of Genoa (with a contribution by me). The session’s subject is in fact broader than hydrology; for example, it includes (quoting from the call for papers)

    Entropy and information concepts at the interface between Hydrology and Atmospheric Physics
    Use of Information Statistics as decision tools in environmental and water resources issues

    The deadline for abstract submission is on 13 January. If you wish to submit an abstract, you can visit

    http://www.cosis.net/members/meetings/programme/view.php?m_id=29&p_id=182,

    find HS37 and click on Abstract submission.

  41. Mark Frank
    Posted Jan 13, 2006 at 5:33 PM | Permalink

    A tale of hypothesis testing — expanding on Hans Von Storch and the Mexican Hat

    Demetris was kind enough to direct our attention to this excellent paper by Hans von Storch.

    Click to access misuses.pdf

    I am a philosophy graduate — so I have taken a philosopher’s approach to expanding on a fascinating example he used.

    First, Hans’ example from the paper — I have reproduced it here; many of you probably know it already:

    In the desert at the border of Utah and Arizona there is a famous combination of vertically aligned stones named the “Mexican Hat” which looks like a human with a Mexican hat. It is a random product of nature and not man-made . . . really? Can we test the null hypothesis “The Mexican Hat is of natural origin”? To do so we need a test statistic for a pile of stones and a probability distribution for this test statistic under the null hypothesis. Let’s take:

    t(p) = 1 if p forms a Mexican Hat, 0 otherwise

    for any pile of stones p. How do we get a probability distribution of t(p) for all piles of stones p not affected by man? We walk through the desert, examine a large number, say n = 10^6, of piles of stones, and count the frequency of t(p) = 0 and of t(p) = 1. Now, the Mexican Hat is famous for good reasons – there is only one p with t(p) = 1, namely the Mexican Hat itself. The other n – 1 = 10^6 – 1 samples go with t(p) = 0. Therefore the probability distribution for p not affected by man is

    prob(t(p) = k) = 10^-6 for k = 1, 1 – 10^-6 for k = 0

    After these preparations everything is ready for the final test. We reject the null hypothesis with a risk of 10^-6 if t(Mexican hat) = 1. This condition is fulfilled and we may conclude: The Mexican Hat is not of natural origin but man-made.

    Obviously, this argument is pretty absurd – but where is the logical error? The fundamental error is that the null hypothesis is not independent of the data which are used to conduct the test. We know a-priori that the Mexican Hat is a rare event, therefore the impossibility of finding such a combination of stones cannot be used as evidence against its natural origin. The same trick can of course be used to prove that any rare event is non-natural, be it a heat wave or a particularly violent storm – the probability of observing a rare event is small.
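
    The same trick is easy to reproduce in code. Here is a minimal toy sketch (my own construction, not from Hans’ paper): draw n random numbers, single out the most extreme one after looking at the data, and then “test” it against the very data that made it famous:

    n = 1e6;                  % piles of stones, so to speak
    x = randn(n, 1);          % all of 'natural origin' by construction
    xmax = max(x);            % pick the 'famous' pile a posteriori
    p = mean(x >= xmax);      % = 1/n, by construction
    fprintf('p = %.1e: reject natural origin?!\n', p)
    % The p-value is tiny for whichever draw we single out, so the 'test'
    % rejects by construction: the null hypothesis was chosen from the data.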

    Now for my expansion.

    Suppose we investigate the detailed history of the area and we learn from other sources that there was a tribe that lived in the vicinity that was in the habit of building hat images out of piles of stones. Now the fact that there are no other natural piles of stones of this shape becomes rather pertinent. We might be tempted to reject the natural hypothesis after all.

    Lesson 1 — the alternative hypotheses matter.

    Now let us add another alternative hypothesis. The Mexican Hat was created by an alien who was trying to leave a message, and messages in the alien language happen to look like Mexican hats. What is the probability of the data given the hypothesis? It is extremely high – maybe even higher than under the man-made hypothesis. But, being sane, we dismiss it out of hand.

    Lesson 2 — the a priori plausibility of the hypotheses matters.

    Now a statistician comes along with yet another hypothesis. They have looked at complex rock formations in the area and tried various statistical explanations for the hat. They have found a formula that relates latitude, longitude and altitude to rock formation complexity (with a small error term). They can show that a rock formation of the complexity of the Mexican Hat is almost exactly what their formula predicts.

    I will let you draw your own lesson.

    There are some complaints that there is something circular about this. Ah — but this statistician has read the rest of Hans’ paper. The answer is simple. We divide the set of rock formations that accord with this formula into two subsets. We use the first subset to formulate the hypothesis, and the second subset (which includes the hat) is the test. The data accord with the hypothesis, and we have an alternative hypothesis that we cannot reject.

    I invite further examples…. 🙂

  42. Maciej Radziejewski
    Posted Feb 5, 2006 at 5:01 PM | Permalink

    Re #41

    Mark,

    I believe the principal problem with the Mexican Hat is that of selecting the pile of rocks to look at and the formation to look for. In other words, the question that our test statistic should answer is really “Is there any pile of rocks on Earth like a Mexican Hat?”. Now, for any single pile this would be unlikely, but it is easy to imagine there should be at least one strange pile among the many. Hence, common sense tells us that this result might be insignificant.

    How to properly assess significance in this case is another problem. Without precise knowledge of the “generating mechanism” for all piles of rocks I would argue it is not possible. This is a situation where the use of simple resampling (i.e. resample all the existing piles with replacement) would be questionable, because the signal (the existence of the Mexican Hat) would, on average, be preserved in the resampled data.
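
    A quick sketch of that last point (a hypothetical toy, with n shrunk so it runs quickly): resampling n piles with replacement keeps the single ‘Hat’ pile in roughly 1 – 1/e, about 63%, of resamples, so a resampling-based null distribution would routinely contain the very signal being tested:

    n = 1000;                          % piles, pile 1 being the 'Hat'
    nrep = 2000; kept = 0;
    for rep = 1:nrep
        idx = ceil(n*rand(n, 1));      % resample piles with replacement
        kept = kept + any(idx == 1);   % did the Hat survive?
    end
    fprintf('Hat present in %.1f%% of resamples (theory %.1f%%)\n', ...
            100*kept/nrep, 100*(1 - (1 - 1/n)^n))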

    Maciej.

  43. Louis Hissink
    Posted Feb 5, 2006 at 6:03 PM | Permalink

    Oh, that’s interesting. Rasmus makes a comparison between the Nile and the GMT of the Earth, implying that a slight change in GMT implies a large change in heat energy.

    Pity they never computed the heat energy content in the first place. Averaging temperatures in grid cells doesn’t achieve that.

  44. Spence_UK
    Posted Feb 14, 2006 at 3:38 AM | Permalink

    Re #41, and the Mexican Hat

    There is another great example of falling into this logical trap of observing an event a priori and then trying to assign a null hypothesis based on this observation. It is the basis of James Lovelock’s “Gaia” hypothesis, currently the headline article over at realclimate.

    The mistake is simple. Lovelock compares the control mechanisms on earth to evolutionary principles, observing that living organisms seem to check and balance one another, maintaining a harmonious relationship without which life could not persist. He observes that whilst there is scope for evolution within species — many competing organisms, some of which will survive, some of which will die out — there is no such scope for planet earth, because there is only one earth.

    Lovelock has fallen into exactly the trap von Storch describes above. We witness a priori that the earth has this balance that supports life. To then use that observation in support of a hypothesis about the origin and nature of the earth is a schoolboy statistical error.

    Here is an example of a better way of viewing this event statistically. There are billions of planets. The probability that at least one of them happens to have a self-controlling environment capable of supporting life in some form or another is probably quite high. (Note that life can survive in a variety of different forms, not just as we view it.) Given that there is a planet that supports life, and that intelligent life forms on that planet, what is the probability that life should have formed on a planet capable of supporting it? The simple answer is, of course, that the probability is one, because it forms part of the question.

    To Lovelock’s credit, he does admit past mistakes. In a recent interview, he acknowledges that in the 1970s he was warning about an imminent ice age. I find his honesty refreshing at least, unlike some scientists who seem determined to revise history and claim no such alarmism was taking place.

  45. John A
    Posted Feb 14, 2006 at 9:36 AM | Permalink

    …unlike some scientists who seem determined to revise history and claim no such alarmism was taking place

    Who then have the brass neck to call us “deniers”

  46. Posted Apr 11, 2006 at 5:10 AM | Permalink

    A late comment just for the completeness of the archive.

    1. In #20 I wrote “So, from now on I will use the words onymous, onymity and onymously (despite the fact that the dictionary characterizes them as rare and hoping that they will become less rare in the peer review system)” trusting that “in many cases the Greek-origin English words are closer to the ancient Greek meaning than modern Greek words.”

    However, later (when I returned to Greece) I investigated it a little more with the help of an ancient Greek dictionary, and I changed my mind. In ancient Greek, as in modern Greek, there is the word “eponymous” but no word “onymous”. Instead, the word “onyma” appears, which however is a noun, just another form of “onoma”, simply meaning “name”. So, to synthesize an adjective meaning “having a name” from the noun “name/onyma”, we certainly need to use a preposition, in this case “epi-”. Thus, I think that my initial suggestion of the words “eponymous”, “eponymity” and “eponymously”, directly analogous to the Greek words “eponymos”, “eponymia” and “eponymws” (here I used “w” for omega), can be a good solution, and thus I will keep using these words. In a recent talk to an international audience I used these words, and my impression was that not only did people understand them but they were happy to learn the Greek antonyms of “anonymous”, “anonymity” and “anonymously”, given that the latter words are also Greek.

    2. In #25 I mentioned an editorial paper about the peer review system published in Hydrological Sciences Journal. This paper motivated several discussion papers, one of which (by Dave Pannell) essentially started from this post (see #37). The discussion, along with a reply from the authors of the original paper, is published in the journal and can be found here:

    http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.51.2.357

    3. In #8.3 and #8.5 I told a story about a rejected paper on chaos. Eventually, I submitted it to Hydrological Sciences Journal (as I did with other rejected papers that I mentioned in #8), and it received two very positive reviews from very knowledgeable eponymous colleagues, so I hope that it will eventually be published.

  47. Posted Jan 27, 2008 at 12:53 PM | Permalink

    I think the 1/f discussion from the thread Publishing NASA Data at Realclimate,

    http://www.climateaudit.org/?p=2620#comment-203347

    http://www.climateaudit.org/?p=2620#comment-203376

    http://www.climateaudit.org/?p=2620#comment-203589

    fits here better. I find the articles of Koutsoyiannis and Voss extremely interesting. With a 1/f or FGN model for climate ‘noise’, how would the IPCC calculate trend uncertainty estimates?

    bender,

    UC #81, what if it’s multistable as opposed to “stable”? i.e. It’s not random walk, but random walking jump. Like electrons in shells.

    Koutsoyiannis talks about a process that forgets its mean, and adds that

    This explanation is consistent with the assertion of the National Research Council (1991) that climate “changes irregularly, for unknown reasons, on all timescales”.

  48. Spence_UK
    Posted Jan 30, 2008 at 8:38 AM | Permalink

    Bump for an interesting discussion from UC here.

    Some time ago, in response to the piece on RealClimate on chaos, I tried messing about with the Lorenz equations and scale averaging, to illustrate the difficulties in detecting external forcings in the presence of self-similar noise while being able to relate it to a cartoon system of equations (i.e. the “noise” was really a complex signal). I keep meaning to resurrect it but never seem to have the time (now there is an age-old story 🙄 ). Perhaps at some point I will start a thread on the forums to discuss it.

  49. Posted Feb 3, 2008 at 12:03 PM | Permalink

    1/f can be thought of as self-similar noise, as its Allan variance is constant over all time-scales. Koutsoyiannis’ article (*) has many examples of how to generate this kind of process; I took the liberty of simplifying a bit:

    cycles = 25;                 % number of dyadic timescales to sum
    outL = 2^18;                 % output length
    out = zeros(outL, 1);
    randn('state', sum(100*clock))
    rand('state', sum(100*clock))
    V = 2/cycles;                % variance of each component
    sV = sqrt(V);
    r = randn(1, cycles)*sV;     % current value of each component
    rphase = floor(rand(1, cycles).*(2.^(1:cycles)));  % random phase offsets
    for i = 1:outL
        r(1) = randn*sV;         % fastest component: new value every step
        for z = 1:cycles
            if rem(i + rphase(z), 2^z) == 0
                r(z) = randn*sV; % component z renewed every 2^z steps
            end
        end
        out(i, 1) = sum(r);      % sum across timescales gives ~1/f noise
    end
    close all
    plot(out)

    It produces figures like this:

    …and there’s still lots of power when resampled from ‘daily’ values to 1000-year averages, just as in the case of Earth temperatures.

    * The Hurst phenomenon and fractional Gaussian noise made easy

    ** One of those plots is actually daily satellite temperature; relevant discussion here

  50. Posted Feb 3, 2008 at 12:08 PM | Permalink

    The Matlab code works better when copy-pasted from here:

    http://signals.auditblogs.com/files/2008/02/noise_example.txt

    (takes some 20 seconds to run)

  51. bender
    Posted Feb 3, 2008 at 12:14 PM | Permalink

    #49 So to summarize for the layperson, this graph shows that you are getting “trends” at all time scales. But in fact they are not real trends, just long-term persistent excursions from the process mean. If this were climate data, for example, the blue line might represent “internal climate variability” in the absence of any external forcing.

    This reminds me of the paper by Robock (1978), which I see Pete Holzman has cited in the Revkin NYT thread.

  52. Posted Feb 3, 2008 at 1:45 PM | Permalink

    re: #48

    Spence_UK, I have the original Lorenz system of 1963, and several other systems, set up for analysis by a number of numerical solution methods. The work is summarized in the posts Chaos and ODEs Part1x: x=a, …, d here. A summary of the systems that I have coded is in this PDF file.

    What do you have in mind to calculate? We can take the discussions over to the BB if that’s a better fit.

    Thanks

  53. Spence_UK
    Posted Feb 3, 2008 at 9:04 PM | Permalink

    …and there’s still lots of power when resampled from ‘daily’ values to 1000-year averages, just as in the case of Earth temperatures.

    To me, this is one of the most important aspects. I find it informative to plot results on different scales with essentially the same number of points along the X-axis, because it gives a nice visual illustration of self-similarity.

    To help emphasise this point, I added this into your code:
    IntegrateXPoints = 1;        % samples averaged per plotted point
    NumXPoints = 360;            % points along the X-axis
    cycles = 25;
    outL = NumXPoints*IntegrateXPoints;
    out = zeros(outL, 1);
    randn('state', sum(100*clock))
    rand('state', sum(100*clock))
    V = 2/cycles;
    sV = sqrt(V);
    r = randn(1, cycles)*sV;
    rphase = floor(rand(1, cycles).*(2.^(1:cycles)));
    for i = 1:outL
        r(1) = randn*sV;
        for z = 1:cycles
            if rem(i + rphase(z), 2^z) == 0
                r(z) = randn*sV;
            end
        end
        out(i, 1) = sum(r);
    end
    figure;
    subplot(2, 1, 1);
    plot(out);                   % the fine-scale ("daily") series
    if IntegrateXPoints > 1      % ('>' restored; WordPress had butchered it)
        % average blocks of IntegrateXPoints samples into one point
        reformed_out = mean(reshape(out, IntegrateXPoints, NumXPoints));
    else
        reformed_out = out;
    end
    subplot(2, 1, 2);
    plot((1:NumXPoints)*IntegrateXPoints, reformed_out);  % averaged series

    This way two plots are generated for each run, the first with the “daily” fluctuations and the second plot on some other scale, as specified by IntegrateXPoints (e.g. 7 gives weekly, 30 gives monthly, 365 gives annual etc). This shows the tendency to generate new “trends” at each scale. I have done some runs, and at a more convenient time will get around to hosting them! I used 360 points on the X axis because this broadly corresponds to the number of months of satellite global mean temp data available (around 30 yrs times 12 mths), but this can be changed also.

    It makes me laugh when Gavin Schmidt frets over the difference between 8-year trends and 30-year trends at RealClimate: a mere factor of 4 difference in the number of points, when these types of noise exhibit self-similar behaviour over so many orders of magnitude. If the trends are meaningless at 8 years, they are most likely meaningless at 30 years as well, and at 800 years, 8000 years, etc.; you have to take the scaling behaviour into account when forming the null hypothesis (and, unlike Gavin’s illustrations, make sure you are not restricting the number of degrees of freedom at longer scales by applying the test to the same length of data!)
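
    To make that concrete, here is a minimal sketch (my own construction, assuming monthly steps and re-using the generator from #49) that compares the spread of OLS trends over 8-year and 30-year windows:

    cycles = 25; sV = sqrt(2/cycles);  % as in the generator of #49
    nrep = 200;
    trend8 = zeros(nrep, 1); trend30 = zeros(nrep, 1);
    for rep = 1:nrep
        r = randn(1, cycles)*sV;
        rphase = floor(rand(1, cycles).*(2.^(1:cycles)));
        out = zeros(360, 1);           % 30 years of monthly values
        for i = 1:360
            r(1) = randn*sV;
            for z = 1:cycles
                if rem(i + rphase(z), 2^z) == 0
                    r(z) = randn*sV;
                end
            end
            out(i) = sum(r);
        end
        p = polyfit((1:96)', out(1:96), 1);  trend8(rep) = p(1);   % 8 yrs
        p = polyfit((1:360)', out, 1);       trend30(rep) = p(1);  % 30 yrs
    end
    fprintf('trend sd: 8-yr %.4f, 30-yr %.4f (per month)\n', ...
            std(trend8), std(trend30))
    % Under white noise the 30-yr trend sd would be smaller than the 8-yr
    % sd by a factor of roughly (360/96)^1.5, about 7; with this
    % self-similar series the reduction is noticeably weaker.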

  54. Spence_UK
    Posted Feb 3, 2008 at 9:31 PM | Permalink

    Re #52

    Dan, I have been watching your site with interest, and your analysis looks to be going into much greater depth than the playing around I did with the Lorenz system.

    My interest stemmed from a search of pro-AGW sites for a good argument as to why chaos doesn’t matter; almost all sites offered very weak explanations, simply hand-waving about how averaging “defeats” the chaos of weather. Most of these articles consistently produce the same fallacious arguments: they often go to great lengths to define climate, but do not define chaos; the only tests applied are eyeballing tests of time series, insisting that the series look “patterned” or “random” and drawing conclusions from these subjective interpretations. (Ironically, I believe Coby Beck and William Connolley use exactly the reverse test as to what chaos should look like!)

    Of course, these tests are garbage; depending on the system and the scale selected, chaotic systems can look both “random” and “patterned”, just as non-chaotic systems can look both “random” and “patterned”; the tests have no power, and often how scale is handled in the plot explains the differences anyway (which ties in with my note above).

    None of them addressed the issue of valid tests for chaotic behaviour, nor the issue of self-similarity and how it can cause havoc with attempts to average out the noise (which, of course, isn’t noise at all but fine scale signal).

    After reviewing a number of sites, the most convincing argument I found on the internet (which I accept may not be the most convincing argument available) was the discussion of the Lorenz system at RealClimate (see here) by Connolley and Annan. Perhaps convincing is the wrong word, but at least there were numbers to get stuck into rather than hand-waving. Just playing with the equations shows the RealClimate article does not really hold water: it uses a special case of a special case of a cartoon chaos example – and even then they have to cheat on their plot to “prove” that averaging helps! The only validity this argument holds is if the “special case” assumptions can be justified, which IMHO are entirely unconvincing.
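
    For anyone who wants to repeat that kind of experiment, here is a minimal sketch of a Lorenz-63 integration with block averaging (the classic parameter values; the step size, run length and block length are my choices, and this is not RealClimate’s code):

    sigma = 10; rho = 28; beta = 8/3;       % classic Lorenz-63 parameters
    f = @(u) [sigma*(u(2) - u(1)); u(1)*(rho - u(3)) - u(2); u(1)*u(2) - beta*u(3)];
    dt = 0.01; nsteps = 2^17; u = [1; 1; 20];
    x = zeros(nsteps, 1);
    for i = 1:nsteps                        % fixed-step RK4 integration
        k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3);
        u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4);
        x(i) = u(1);
    end
    m = 2^10;                               % block length for averaging
    xbar = mean(reshape(x, m, nsteps/m));
    subplot(2, 1, 1), plot(x),    title('x(t)')
    subplot(2, 1, 2), plot(xbar), title('block averages of x(t)')
    % Whether the block averages look 'converged' depends strongly on the
    % block length relative to the regime-switching timescale.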

    My interests, more specifically, are related to:
    * The consequence of scaling behaviour in non-linear coupled systems (both chaotic and non-chaotic)
    * The consequence of external forcings on such systems
    * The limitations of numerical methods used to analyse these systems, particularly with regard to initial conditions

    … and, of course, the consequences of these three points for using statistics to assess the effect of external forces on these systems. This probably needs to be moved somewhere else, just because of the length of waffle I would need to put forward my thinking, although it does tie in well with Prof. Koutsoyiannis’ work. I have my reasoning fairly well thought through, but it would take a long time to write up (partly down to the limited amount of time available to play with this stuff at the moment!)

  55. Posted Feb 4, 2008 at 5:47 AM | Permalink

    I find it informative to plot results on different scales with essentially the same number of points along the X-axis, because it gives a nice visual illustration of self-similarity.

    And you must remember that in climate science it is completely legal to compare different time-scales in the same plot. Here’s a hypothetical example, call it ‘Example Hansen’. First, plot the annual data along with the 1000-year average:

    Then take the last 100 years of the annual series, change the color to match the averaged series, and remember to change the time scale as well:

    Finally, remove the blue series (we can’t observe it anyway) and add lines showing that the recent warming is unprecedented. In small font you can add that ‘Modern data are the 5-year running mean, while the paleoclimate data has a resolution of the order of 1,000 years’:

  56. Spence_UK
    Posted Feb 4, 2008 at 11:32 AM | Permalink

    Re #55 I’m glad Hansen gets such a large amount of press coverage, with thinking like that 😮

    I checked some stuff out and I’ve got a few corrections to make to comment #54. It is a long time since I visited Coby Beck’s unscientific site on global alarmism, so I thought I’d quickly check his comment on weather/climate prediction and chaos. His original article seems to have gone (where he had little pictures of “weather” and “climate” variations) and has been replaced with an argument closer to Connolley’s. It still lacks any substance; it boils down to “we can’t predict the next wave but we can predict the tides; we can’t predict the weather, therefore we can predict climate”. A classic nonsense argument, a faulty syllogism (fallacy of four terms) and, as usual, it fails to discuss the elephant in the room (LTP).

    The RealClimate link that caused me to go and play with the Lorenz equations is missing from #54, can be found here.

  57. Posted Feb 4, 2008 at 11:53 AM | Permalink

    Gavin’s comment on trends at different time scales is quite revealing:

    http://www.realclimate.org/index.php/archives/2008/01/new-rule-for-high-profile-papers/langswitch_lang/bg

    #150

    yet you hold great store by the first period and not the latter
    because you say 15 years is long term climatic variation and 7 years
    is short term weather variation!!!!

    that’s surely silly.

    [Response: No. It’s statistics. – gavin]

    And after that they went about their real business, checking people’s backgrounds on sourcewatch and exxonsecrets… Sad.

  58. Spence_UK
    Posted Feb 4, 2008 at 12:22 PM | Permalink

    I think Gavin just invented LOLStatistics, a development of LOLScience. Well done Gav!

  59. Posted Feb 4, 2008 at 12:55 PM | Permalink

    So, we have Robock

    The natural variability of the atmosphere, through random short-term variations in the dynamical fluxes, has been shown to produce unpredictable long-term variations in the climate.

    and Roy Spencer

    Random daily fluctuations in radiative forcing (e.g., in low cloud cover) cause LONG TERM variability in ocean temperatures….not only year to year, but decade to decade

    and William

    [Response: I think you are missing the point. While the atmos may be chaotic, its long-term averages aren’t – at least not in any meaningful sense – William]

    And there we have the answer: there can’t be 1/f noise in climate. It is possible to have 1/f in fluctuations of the Earth’s rate of rotation, in undersea currents, in traffic on Japanese expressways, in the minimum and maximum flood levels of the Nile – but not in the Earth’s mean temperature.

  60. Spence_UK
    Posted Feb 4, 2008 at 5:47 PM | Permalink

    UC,

    You have a remarkable talent for finding quotes on RealClimate that highlight just how poor the reasoning is on that site.

    Techniques for the advancement of science through valid inference:

    Logicians: Modus Ponens, Modus Tollens

    Mathematicians: Proof by Resolution, Proof by Induction

    RealClimaticians: Proof by Pertinacious Assertion

  61. bender
    Posted Feb 4, 2008 at 7:01 PM | Permalink

    Re #59 I took Connolley to task on that point. He dodged, weaved, and left RC. All I did was ask whether the arithmetic time-series averages are indeed good indicators of the ensemble “central tendencies” in a chaotic system. He just didn’t get it. I think “ergodicity” scared him.

  62. Posted Feb 5, 2008 at 1:59 AM | Permalink

    I think Gavin just invented LOLStatistics, a development of LOLScience. Well done Gav!

    Hmmm, I’d like to find a more polite term; native English speakers should help me out. Keywords: {model-free, hidden assumptions, implicit model} statistics.

    Gavin and Hansen have the same implicit model; their model for temperature is a smooth glacial-to-interglacial signal with ~100-thousand-year cycles, and then low-amplitude variability with annual cycles. Nothing in between, unless A-CO2 comes into play. The low-amplitude variability cancels out with 5-year smoothing, and A-CO2 plus the 100-thousand-year cycle remain. Thus, Hansen can plot everything in the same figure. Same with Gavin, annual variability can cause 7-year variations, but after 15 years the model says that A-CO2 must be the reason. Under this model, time-scale matters a lot for prediction accuracy (unlike in the 1/f case). The Allan variance goes down rapidly with averaging time, as ‘weather noise’ averages out. The remaining part is the A-CO2 effect, and that can be predicted using emission scenarios and an estimate of sensitivity. When the averaging time is ~1000 years, the Allan variance goes up again, as astronomical forcings come into play.

    You can compare this model realization

    ( http://signals.auditblogs.com/files/2008/02/hm1.png )

    with an approximate 1/f realization

    ( http://signals.auditblogs.com/files/2008/02/fm.png ).

    For the former, the Allan deviation goes down clearly:

    ( http://signals.auditblogs.com/files/2008/02/hm1_allan.png )

    and for the latter it is (theoretically) constant.
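
    For anyone who wants to reproduce such curves, here is a minimal sketch of the simple non-overlapping Allan deviation estimator I use (variable names are mine; ‘out’ is a series from the generator in #49):

    taus = 2.^(0:14);                   % averaging times in samples
    adev = zeros(size(taus));
    for k = 1:length(taus)
        m = taus(k);
        nseg = floor(length(out)/m);
        ybar = mean(reshape(out(1:nseg*m), m, nseg), 1);  % segment means
        adev(k) = sqrt(0.5*mean(diff(ybar).^2));          % Allan deviation
    end
    loglog(taus, adev, 'o-')
    xlabel('averaging time (samples)'), ylabel('Allan deviation')
    % Roughly flat for 1/f noise; falls like tau^(-1/2) for white noise.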

    I’m not saying that their model is incorrect, but they should state it explicitly (and not throw arrogant comments at people who are on the 1/f side of the debate…)

  63. MarkW
    Posted Feb 5, 2008 at 5:42 AM | Permalink

    Does Gavin believe that the PDO is a real phenomenon? It appears to have a 50 to 60 year cycle.

  64. Geoff Sherrington
    Posted Feb 5, 2008 at 6:14 AM | Permalink

    Re #43, Louis Hissink

    Pity they never computed the heat energy content in the first place. Averaging temperatures in grid cells doesn’t achieve that.

    I have often wondered about this. Any good references? Geoff.

  65. bender
    Posted Feb 5, 2008 at 9:07 AM | Permalink

    Same with Gavin, annual variability can cause 7-year variations, but after 15 years the model says that A-CO2 must be the reason.

    “It’s statistics.” Err, no. It’s splitting hairs. Post hoc.

    Does Gavin believe that the PDO is a real phenomenum? It appears to have a 50 to 60 year cycle.

    What does “real” mean? I’ve asked at RC about spatial and temporal instabilities in these ocean convection flows, in order to get some sense of their error model. Dodge, bluff, and … crickets.

    Never mind 6-, 60-, 600-, or 6000-year cycles in any of these convectional modes. I am skeptical that these convection “modes” are anything more than a Texas sharpshooter’s fallacy. Allan variance leads to searching for patterns in the clouds. Calculating EOFs to characterize what – a single instance of weather noise?

  66. Spence_UK
    Posted Feb 5, 2008 at 12:18 PM | Permalink

    Re #65, it is remarkable how many 50-60 year cycles we see in data sets which have around 70-80 years’ worth of half-decent data.

    keywords {model-free, hidden assumptions, implicit model} statistics.

    This one is more tricky than it sounds. Hidden assumptions is probably an easy one (opaque?), but model-free / implicit model doesn’t make any words spring to mind. I’ll have a think…

  67. Peter D. Tillman
    Posted Feb 5, 2008 at 12:22 PM | Permalink

    Re von Storch, Misuses of Statistical Analysis

    Since I had a bit of a struggle to find it: this interesting paper is now online at http://coast.gkss.de/staff/storch/pdf/misuses.pdf

    Happy reading–
    Pete Tillman

  68. Sam Urbinto
    Posted Feb 5, 2008 at 7:00 PM | Permalink

    You could always just look at where water vapor is hanging out.

  69. Posted Feb 6, 2008 at 1:35 AM | Permalink

    bender,

    Allan variance leads to searching for patterns in the clouds.

    It’s just a useful tool to check whether there’s energy at different timescales. Almost like Fourier analysis, but it fits better in situations where the timescales include everything from day-to-day to interglacial-to-glacial. The existence of ENSOs and PDOs at different time-scales is IMO an interesting issue on its own; using them in conditional predictions in post-hoc fashion is Team work (*)

    BTW, gavin just wrote

    When comparing climate models to reality the first problem to confront is the ‘weather’, defined loosely as the unforced variability (that exists on multiple timescales).

    …add enough independent processes with different timescales, and you’ll get 1/f. There is some kind of contradiction here: Hansen’s figure is invalid if there is variability on multiple timescales, and the same issue applies to Gavin’s ‘It’s statistics’ statement.

    (*)
    http://www.metoffice.gov.uk/corporate/pressoffice/2007/pr20071213.html

    Professor Phil Jones, Director of UEA’s Climatic Research Unit, said, “The year began with a weak El Niño – the warmer relation of La Niña – and global temperatures well above the long-term average. However, since the end of April the La Niña event has taken some of the heat out of what could have been an even warmer year.”

  70. Steve McIntyre
    Posted Feb 6, 2008 at 8:35 AM | Permalink

    I’m pretty sure that “Allan variance” in this context is the same thing as the wavelet decomposition of variance by scale that I’ve used in several posts, e.g. my discussion of the VZ pseudoproxies.

    I found it a very handy way of thinking about the recovery of low-frequency variance when that was being argued about in the context of the Wahl–Ritson critique of VZ.

  71. bender
    Posted Feb 6, 2008 at 8:50 AM | Permalink

    I am convinced they don’t know what they are doing when it comes to internal climate variability (“weather noise”) and statistical treatment of ensemble GCM runs. They start by assuming the GCMs are valid in terms of the noise structure they generate. Have they ever proven this? I think not.

    When I said 1/f variance leads to searching for patterns in the clouds, I meant that it produces patterns that are tantalizing. Patterns that beg to be post-hoc interpreted.

  72. Ron Cram
    Posted Feb 6, 2008 at 9:15 AM | Permalink

    re: #65
    bender,

    You write:

    I am skeptical that these convection “modes” are anything more than a texas sharpshooter’s fallacy.

    I have noticed this about you and wonder why this is true. Are you just as skeptical about the warming impact of ENSO? Or does your skepticism only apply to PDO?

    If you accept the warming impact of ENSO, a nearly unanimous view, why be skeptical of the impact of PDO? If you are skeptical of ENSO’s warming, what evidence makes you skeptical?

  73. Spence_UK
    Posted Feb 6, 2008 at 11:01 AM | Permalink

    Re #72

    I can’t speak for bender, who may have a different view, but this is my take on it.

    ENSO events are clearly recognisable, measurable events that have an influence on global near-surface temperature by drawing warmer (EN) or cooler (LN) water to the ocean surface. This is well known and understood, and I don’t think anyone is doubting this.

    The issue is the statistical behaviour of these events. Complex non-linear systems tend to exhibit self-similar “noise” (it isn’t really noise – it is all signal). One way of viewing this that may help (or may add to the confusion!) is that such events tend to cluster; you then get clusters-of-clusters, and clusters-of-clusters-of-clusters, etc. There are a number of reasons why this occurs: one example might be chaotic behaviour in the system, a non-chaotic example might be the multiple reservoir analogy.

    This cascade of clustering events is what creates the fractional “noise”. The statistical structure of such noise results in apparently significant trends being visible at all scales. The sharpshooter’s fallacy comes in when these trends (which have small number of degrees of freedom) are associated with other data sets with low degrees of freedom.

    In both of the above examples, the clustering (large-scale variability) follows directly from the fine-scale variability. If the fine-scale variability (i.e., the weather) is an initial value problem with exponential error growth, attempting to predict large-scale behaviour of the system within the natural variability would be an exercise in futility.

  74. Posted Feb 6, 2008 at 11:34 AM | Permalink

    SteveM

    I’m pretty sure that “Allan variance” in this context is the same thing as the wavelet decomposition of variance by scale that I’ve used in several posts, e.g. my discussion of the VZ pseudoproxies.

    Haar wavelet variance with fractional frequency deviates, if I remember correctly 😉
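
    To see that connection concretely, here is a minimal sketch (my names; ‘out’ is a generated series and m the scale in samples) showing that the non-overlapping Allan variance at scale m is just the variance of Haar-type half-difference means at that scale:

    m = 64;                                            % scale in samples
    nseg = floor(length(out)/(2*m));
    x = reshape(out(1:2*m*nseg), 2*m, nseg);
    halfdiff = mean(x(m+1:end, :)) - mean(x(1:m, :));  % Haar-type coefficient
    wvar = 0.5*mean(halfdiff.^2);                      % half-difference variance
    ybar = mean(reshape(out(1:2*m*nseg), m, 2*nseg));  % consecutive m-means
    avar = 0.5*mean(diff(ybar).^2);                    % Allan variance
    fprintf('haar-type: %.4f  allan: %.4f\n', wvar, avar)
    % Both estimate 0.5*E[(ybar2 - ybar1)^2]; the Allan estimate simply
    % also uses the pairs straddling segment boundaries.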

    As per post #62, Gavin and Hansen are doing fine if the global temperature Allan variance looks similar to the last figure in that post. Maybe that’s one reason why Mann hand-crafted astronomical forcing onto the hockey stick. See the Allan variance before and after ‘the fix’:

    ( http://signals.auditblogs.com/files/2008/02/co2fix_allan.png )

  75. bender
    Posted Feb 6, 2008 at 10:05 PM | Permalink

    #73
    This is the first time I can recall wishing I’d had someone else’s words put in my mouth. Well said, Spence_UK.

    All the convection modes (ENSO, PDO, NAO, THC, etc.) are in some sense real. But what are they? You can’t predict the temperature of the flows through those pathways over time (over weeks and months, yes). You can’t say how robust those pathways are spatially, because they haven’t been studied that long and their driving forces are not that well understood. How on earth do you judge whether a GCM’s internal climate variability/weather noise is appropriate when you don’t know how the actual system is behaving? And if you can’t get a fix on the internal noise, how do you get a fix on the external signal?

    The 20th century warming “trend” is actually two warming phases lasting only 30 years each (broken by “aerosol cooling” in the 60s and 70s?). Given the uncertainty in the solar forcing, its effects on dominant pathways such as ENSO, and the process by which surface heat is taken to the ocean depths … given the spectacular failure of the GCMs in the Arctic during both of these warming phases … I can’t help being skeptical of the way “weather noise” (which in my books includes ENSO, PDO, NAO, etc.) is encapsulated in the GCMs.

    The questions I ask at RC on this issue go unanswered. Why? If Spence_UK can understand what I am saying, why can the gatekeepers not? Are they in denial?

    Questions, questions.

  76. Tom Vonk
    Posted Feb 8, 2008 at 6:11 AM | Permalink

    Spence_UK wrote in #73:

    In both of the above examples, the clustering (large-scale variability) follows directly from the fine-scale variability. If the fine-scale variability (i.e., the weather) is an initial value problem with exponential error growth, attempting to predict large-scale behaviour of the system within the natural variability would be an exercise in futility.

    Yes and YES!
    However, why do you write “…IF the weather is an initial value problem…”?
    It is an initial value problem, and I know of nothing that would suggest otherwise.
    From that it follows that the clustering (or self-similarity) forbids imposing cut-offs in the time scales through arbitrary considerations of “randomness below, signal above”.
    L. Motl has challenged a post on RC blathering about “weather vs climate differences” by asking at what time scale climate emerges from the noise, and what physical process governs that transition.
    My take on it is that climate emerges after around 50 years because that is a typical time length of human awareness.
    Shorter we don’t know enough, and longer we have forgotten too much 🙂

  77. Spence_UK
    Posted Feb 8, 2008 at 12:09 PM | Permalink

    Thanks for the kind comments, bender and Tom, much appreciated.

    Tom, I fully agree that there is considerable evidence to support the idea that weather is an initial value problem with exponential error growth. I start from this premise rather than justify it because I believe both AGW supporters and AGW sceptics agree on this point. This makes it a good place to start a debate, in order to find out why we draw different conclusions from the same premise. My use of the word “if” is really habit; it is just my naturally cautious way of expressing the premise of a logical argument. (I blame the modal logic that I had to study as part of my university work!)

    Bender made an excellent point on another thread which ties in with this nicely as well. Dr. Curry argued on the feedback thread that single model runs (what I would refer to as realisations) were not informative, because a single run could look very different due to natural variability. I was still in lurk mode and bit my tongue, but the good Dr. bender made the point very well elsewhere. The real world is also only a single realisation that may also not be typical (whatever typical means in this context). Yet we are busily fitting our models to churn that out as the central result. This is a very important issue, and hopefully someone in the mainstream will wake up and smell the coffee on this point. The presence of fractional variability in climate makes this a fundamental problem.

    PS. Watching closely for Dr. Browning’s review. Should be interesting.

  78. bender
    Posted Feb 8, 2008 at 1:02 PM | Permalink

    The real world is also only a single realisation that may also not be typical

    Exactly. The models should be fit* to Earth’s own ensemble. But all we have is one realization. If you overfit to that realization, you will be wrong about the ensemble. That is a certainty.

    I do not think Gavin Schmidt et al understand this. I hope and pray that I am wrong.

    *And don’t tell me the models are “determined by physics” and not tuned to scenarios. This is a lie that I have heard told by true-believing alarmists on many occasions. There is wide uncertainty on some parameters, and this gives the modelers free rein to overfit those parameters to the single stochastic realization of modern earth climate.

    Does this falsify AGW? No. Does it suggest the estimated sensitivities may be biased low or high? Yes.

  79. Sam Urbinto
    Posted Feb 8, 2008 at 6:15 PM | Permalink

    If anyone can prove to me that 800 ppmv of carbon dioxide will result in any change in the anomaly, I’ll give them $1,000,000

    Of course, proof entails an atmosphere measured at 800 ppmv and proof the anomaly reflects temperature.

    Or as my dad once told me while we were bowling, “Good luck with that, fanboy.”

  80. Tom Vonk
    Posted Feb 11, 2008 at 9:07 AM | Permalink

    Spence_UK # 77

    The real world is also only a single realisation that may also not be typical (whatever typical means in this context). Yet we are busily fitting our models to churn that out as the central result. This is a very important issue …

    It is actually THE single most important issue.
    It is the same question that cosmology asks: “Why is the Universe like it is?”
    Here too we have a single realisation.
    However, the cosmologists are bright enough to have seen that the question of the a priori probability of a realisation is at best undefined and at worst absurd.
    In any case they know that even by using all the natural laws we know, and running numerical models for an eternity, we would not be able to stick a number on this particular Universe we have.

    Actually, it is interesting to look at the “physics” of a well-documented climate model like CAM 3.0.
    Not because of the content, which contrary to a widespread myth is not complex but rather simple and boring.
    It is interesting to see what the underlying assumptions/beliefs/hopes are.
    There is a dynamical core and a sauce.
    Let’s forget the sauce, for every cook has his own recipe and it is anyway irrelevant for what follows.

    The construction of the dynamical core consists of playing with conservation laws, e.g. Navier–Stokes.
    Basically it means changing variables by calling them generalised coordinates and transforming the relevant equations into a more manageable form.
    Anybody with an understanding of analytic geometry and PDE theory can follow very easily what they do – no rocket science involved.
    Once there, some form of discretisation is chosen and the thing is coded on a computer.
    What does the computer produce?
    A climate.
    Certainly it produces a climate that is not absurd. Certainly it is a climate that looks possible.

    However, since Dan Hughes showed me some papers about constructal theory, whose details I had the opportunity to discover, I have seen that with a very small number of very basic principles it is possible to develop a quite impressive “climate model”.
    No computer necessary, only pen and paper, and yet you get all the circulation features right (Hadley cells, polar cells), and the model even predicts their location and dynamics.
    Of course all the “details” were wrong and many assumptions debatable, but the fact stays that even a model that had no continents, no GHG, no clouds, no sun variability, produced something that looked like a climate.
    It also appeared certain that with a little sauce it would easily incorporate more details if somebody wished.

    What I infer from that, and what is also very consistent with chaos theory, is that the sole existence of conservation laws and the permanence of the geometry and of the mass distribution impose on the dynamics of the system such severe constraints that, given a set of initial conditions, ANY model that respects these laws will produce a climate.
    Now the enlightening point in looking at the CAM 3.0 documentation is that the people are convinced that they produce THE climate.
    They even use rather strong statements in the semi-Lagrangian section, talking about high predictive skill despite the use of coarse resolution.
    They certainly believe that by playing with N–S and changing variables, they have actually solved the problem.
    Of course, if they took a step back, they would realise that people have been “playing” with N–S for the last 100+ years and the problem is still not solved at the fundamental level, let alone at the discrete numerical level.

    I am convinced that there is an infinity of solutions to the continuous equations representing the dynamics of the earth/atmosphere system.
    Therefore there can’t be any uniform convergence of a numerical run to THE solution; interestingly, hindcasting doesn’t make any sense either, because there is no reason that a model starting with the correct initial conditions 20,000 years ago NECESSARILY finishes with the observed state of today.
    It follows also that a numerical trajectory cannot have a probability of realisation.
    The only thing that we will be able to say, once we stop wasting money on deterministic schemes that won’t work, is that all the trajectories are located within a strange attractor that may have a large number of attraction basins.
    The Earth during the last 4.5 billion years has wandered through the attractor, whose topology depends on the millions of “details” that the modellers consider irrelevant.
    If we gain some insight into this topology, we will perhaps be able to make modest statements regarding the rates of divergence of trajectories depending on the initial point chosen within the attractor.
    The state the Earth is in today is no more probable than any other, possibly very different, state, as long as it is situated within the attractor.
    And to close the loop – for an initial value problem the time scale is everything, and with increasing time the system may, but need not, rapidly swap from one attraction basin to another.

  81. Spence_UK
    Posted Feb 22, 2008 at 5:06 PM | Permalink

    I’ve been doing some reading up on 1/f noise and “self-organised criticality”. Per Bak, a Danish theoretical physicist, did a considerable amount of work on this topic and wrote an interesting book, “How Nature Works – the science of self-organised criticality”, in which he discusses models of evolution, economics and other natural processes based on 1/f noise.

    There is an interesting review of the book here. Unfortunately the book is a little pricey over here so I might wait until I’m next at a decent scientific library to have a trawl through it! He also has some interesting papers on the topic (here) – these papers were controversial at the time, although carry many citations.

    Following this up led to an interesting bibliography on 1/f noise here.
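
    For anyone who wants to play before getting the book, here is a minimal sketch of the Bak–Tang–Wiesenfeld sandpile, the toy model at the heart of the SOC literature (the toppling threshold of 4 is the standard one; the grid size and run length are my choices):

    N = 30; grains = zeros(N, N);            % the sandpile lattice
    ndrops = 20000; sizes = zeros(ndrops, 1);
    for t = 1:ndrops
        i = ceil(N*rand); j = ceil(N*rand);
        grains(i, j) = grains(i, j) + 1;     % drop one grain at random
        s = 0;                               % avalanche size counter
        while any(grains(:) >= 4)
            [ii, jj] = find(grains >= 4);
            for k = 1:length(ii)
                a = ii(k); b = jj(k);
                grains(a, b) = grains(a, b) - 4;   % topple
                s = s + 1;
                if a > 1, grains(a-1, b) = grains(a-1, b) + 1; end
                if a < N, grains(a+1, b) = grains(a+1, b) + 1; end
                if b > 1, grains(a, b-1) = grains(a, b-1) + 1; end
                if b < N, grains(a, b+1) = grains(a, b+1) + 1; end
                % grains pushed over an edge simply fall off
            end
        end
        sizes(t) = s;
    end
    edges = unique(round(logspace(0, log10(max(sizes) + 1), 20)));
    counts = histc(sizes(sizes > 0), edges);
    loglog(edges, counts, 'o')               % roughly a power law
    xlabel('avalanche size'), ylabel('count')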

  82. DeWitt Payne
    Posted Feb 22, 2008 at 8:00 PM | Permalink

    Radioactive decay is the only thing I can think of that is pure shot noise. The problem is that there is no perfect detector. You always have problems like dead time and pulse pile-up that will limit your precision at long integration times for high count rates.
