A Peek behind the Curtain

On February 26, Garth Paltridge, Albert Arking and Michael Pook’s report on a re-examination of NCEP reanalysis data on upper tropospheric humidity was published online by Theoretical and Applied Climatology. Upper tropospheric humidity is a critical topic in assessing the strength of water vapor feedbacks – knowledge essential to understanding how much temperature increase can be expected from doubled CO2. Paltridge and Arking are both senior climate scientists with lengthy and distinguished publication records. They reported:

The National Centers for Environmental Prediction (NCEP) reanalysis data on tropospheric humidity are examined for the period 1973 to 2007. It is accepted that radiosonde-derived humidity data must be treated with great caution, particularly at altitudes above the 500 hPa pressure level. With that caveat, the face-value 35-year trend in zonal-average annual-average specific humidity q is significantly negative at all altitudes above 850 hPa (roughly the top of the convective boundary layer) in the tropics and southern midlatitudes and at altitudes above 600 hPa in the northern midlatitudes. It is significantly positive below 850 hPa in all three zones, as might be expected in a mixed layer with rising temperatures over a moist surface. The results are qualitatively consistent with trends in NCEP atmospheric temperatures (which must also be treated with great caution) that show an increase in the stability of the convective boundary layer as the global temperature has risen over the period. The upper-level negative trends in q are inconsistent with climate-model calculations and are largely (but not completely) inconsistent with satellite data. Water vapor feedback in climate models is positive mainly because of their roughly constant relative humidity (i.e., increasing q) in the mid-to-upper troposphere as the planet warms. Negative trends in q as found in the NCEP data would imply that long-term water vapor feedback is negative—that it would reduce rather than amplify the response of the climate system to external forcing such as that from increasing atmospheric CO2. In this context, it is important to establish what (if any) aspects of the observed trends survive detailed examination of the impact of past changes of radiosonde instrumentation and protocol within the various international networks.

A few days earlier, on February 20, Dessler and Sherwood published a review article in Science on upper tropospheric humidity. This was accompanied by a podcast and a blog article at Grist here. They reported:

Interestingly, it seems that just about everybody now agrees water vapor provides a robustly strong and positive feedback

They made no mention of the pending Paltridge et al. results.

OK, climate scientists disagree. What else is new. However, today you get a little peek behind the curtains, courtesy of Garth Paltridge who sends in the following account of the handling (and rejection) of their article at Journal of Climate.

Garth Paltridge writes:

LOADED DICE IN THE CLIMATE GAME

Back in March of 2008, three of us sent off a manuscript to the Journal of Climate. It was a straightforward paper reporting the trends of humidity in the middle and upper troposphere as they (the trends) appear at face value in the NCEP monthly-average reanalysis data. NCEP data on atmospheric behaviour over the last 50 years are readily available on the web and are something of a workhorse for much modern research on meteorology and climate.

The paper did two things:

(1) It pointed out that, according to the NCEP data, the zonal-average tropical and mid-latitude humidities have decreased over the last 35 years at altitudes above the 850 mb pressure level – that is, in the middle and upper troposphere, roughly above the top of the convective boundary layer. NCEP humidity information derives ultimately from the international network of balloon-borne radiosondes. And one must say immediately that radiosonde humidity data have more than their fair share of problems. So does the NCEP process of using an operational weather forecasting model to integrate the actual measurements into a meteorologically coherent set of data presented on a regular grid. (A face-value version of the trend calculation is sketched after point (2) below.)

(2) It made the point (not an original point, but on the other hand one that is not widely known even among the cognoscenti) that water vapour feedback in the global warming story is very largely determined by the response of water vapour in the middle and upper troposphere. Total water vapour in the atmosphere may increase as the temperature of the surface rises, but if at the same time the mid- to upper-level concentration decreases then water vapour feedback will be negative. (There are hand-waving physical arguments that might explain how a decoupling such as that could occur).
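
For anyone who wants to look at the face-value numbers themselves, here is a minimal sketch of the trend calculation described in point (1), assuming a local copy of the NCEP/NCAR monthly specific-humidity file (shum.mon.mean.nc, as distributed by NOAA/PSL); the variable and coordinate names follow that file’s conventions and should be checked against an actual download:

```python
import numpy as np
import xarray as xr

# Monthly-mean specific humidity (g/kg) on pressure levels.
ds = xr.open_dataset("shum.mon.mean.nc")
q = ds["shum"].sel(time=slice("1973-01-01", "2007-12-31"))

# Annual-average, zonal-average humidity. NCEP latitudes run 90N..90S,
# so slice(20, -20) selects the tropics (an unweighted mean is adequate
# this close to the equator for a rough look).
q_ann = q.groupby("time.year").mean("time").mean("lon")
tropics = q_ann.sel(lat=slice(20, -20)).mean("lat")

# Least-squares linear trend at each pressure level, in g/kg per decade.
years = tropics["year"].values.astype(float)
for lev in tropics["level"].values:
    slope = np.polyfit(years, tropics.sel(level=lev).values, 1)[0]
    print(f"{lev:6.0f} hPa: {10.0 * slope:+.4f} g/kg per decade")
```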

Climate models (for various obscure reasons) tend to maintain constant relative humidity at each atmospheric level, and therefore have an increasing absolute humidity at each level as the surface and atmospheric temperatures increase. This behaviour in the upper levels of the models produces a positive feedback which more than doubles the temperature rise calculated to be the consequence of increasing atmospheric CO2.
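
The tie between constant relative humidity and increasing absolute humidity is just the Clausius-Clapeyron dependence of saturation vapour pressure on temperature. A toy illustration, using a standard Magnus-type approximation rather than anything taken from the models themselves:

```python
import numpy as np

def q_sat(T_c, p_hpa=500.0):
    """Approximate saturation specific humidity (g/kg) at pressure p_hpa,
    using the Magnus formula for saturation vapour pressure over water."""
    e_s = 6.112 * np.exp(17.67 * T_c / (T_c + 243.5))   # hPa
    return 1000.0 * 0.622 * e_s / (p_hpa - 0.378 * e_s)

# At a fixed 50% relative humidity, a 1 C warming at the 500 hPa level
# raises the specific humidity q by roughly 8 per cent.
for T in (-20.0, -19.0):
    print(f"T = {T:5.1f} C: q at 50% RH = {0.5 * q_sat(T):.3f} g/kg")
```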

The bottom line is that, if (repeat if) one could believe the NCEP data ‘as is’, water vapour feedback over the last 35 years has been negative. And if the pattern were to continue into the future, one would expect water vapour feedback in the climate system to halve rather than double the temperature rise due to increasing CO2.
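
The ‘halve rather than double’ arithmetic is standard linear feedback bookkeeping rather than an equation taken from the paper itself: with a no-feedback response ΔT0 and a total feedback factor f,

```latex
\Delta T = \frac{\Delta T_0}{1 - f},
\qquad f = \tfrac{1}{2} \;\Rightarrow\; \Delta T = 2\,\Delta T_0 \ \text{(doubling)},
\qquad f = -1 \;\Rightarrow\; \Delta T = \tfrac{1}{2}\,\Delta T_0 \ \text{(halving)}.
```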

Satellite data from the HIRS instruments on the NOAA polar orbiting satellites tend (‘sort of’, only in the tropics, and only for part of the time) to support the climate model story. The ‘ifs and buts’ of satellite information about upper tropospheric humidity are of the same order as that from balloon radiosondes.

Anyway, our paper concluded by suggesting that, in view of the extreme significance of upper-level humidity to the climate change story, the international radiosonde data on upper-level humidity should not be ‘written off’ without a serious attempt at abstracting the best possible humidity signal from within the noise of instrumental and operational changes at each of the relevant radiosonde stations. After all, we are not exactly over-endowed with data on the matter. The attempt would be similar in principle to the current efforts at abstracting a believable global warming signal from the networks of surface-temperature observations.
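
As a very rough indication of what abstracting a signal from the noise of instrumental and operational changes might involve at a single station, here is a toy adjustment for one documented break. Real radiosonde homogenization efforts (RAOBCORE, for example) are far more elaborate; the function below is purely illustrative:

```python
import numpy as np

def adjust_for_break(series, reference, break_idx):
    """Shift the pre-break segment of a station series so that the
    station-minus-reference difference has the same mean on both
    sides of a documented instrument change at index break_idx."""
    diff = np.asarray(series, float) - np.asarray(reference, float)
    shift = diff[break_idx:].mean() - diff[:break_idx].mean()
    adjusted = np.asarray(series, float).copy()
    adjusted[:break_idx] += shift
    return adjusted
```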

Suffice it to say that after 3 or 4 months the paper was knocked back. This was largely because of an unbelievably vitriolic, and indeed rather hysterical, review from someone who let slip that

“the only object I can see for this paper is for the authors to get something in the peer-reviewed literature which the ignorant can cite as supporting lower climate sensitivity than the standard IPCC range”.

We argued a bit with the editor about why he took notice of such a review. We are not exactly novices in the research game, and can say with reasonable authority that when faced with such an emotive review the editor should simply have ignored it and sent the paper off to someone else. The argument didn’t get far. In particular we couldn’t get a guarantee that a re-submission would not involve the same reviewer. And in any event the conditions for re-submission effectively amounted to a requirement that we first prove the models and the satellites wrong.

A couple of weeks after the knock-back, and for unrelated reasons, two of us went to a small workshop on water vapour held at LDEO in New York, whereat we told the tale. The audience was split as to whether the existence of the NCEP trends in humidity should be reported in the literature. Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them. Those ‘for’ simply said that scientific reportage shouldn’t be constrained by the politically correct. The matter was dropped. I found after the event that the journal editor had come (I think specifically) to hear the talk. He didn’t bother to introduce himself.

I guess the story doesn’t amount to much. Perhaps it is significant only in that it shows how naïve we were to imagine that climate scientists might welcome the challenge to examine properly and in detail even the smell of a possibility that global warming might not be as bad as it is made out to be. Silly us.

After some kerfuffle, the paper was accepted by “Theoretical and Applied Climatology” and appeared on February 26 on the journal’s web site. (One can if so inclined, and if one has personal or institutional access to the journal, find it here). We presume it will be ignored. Being paranoiac from way back, we wonder at the happy chance by which a one-page general-interest article appeared in ‘Science’ on February 20. With some self-referencing, it extolled the virtue of the latest modelling research, and of new(?) satellite observations of short-term, large amplitude, water vapour variability, which (say the authors) strongly support model predictions of long-term positive water vapour feedback. Well, maybe. It would be easy enough to argue against that conclusion. The paranoia arises because of another issue. We know that at least one of the authors is well aware of the contrary story told by the raw balloon data. But there is no mention of it in their article.

338 Comments

  1. Ed Snack
    Posted Mar 4, 2009 at 1:29 PM | Permalink

    We are surprised because… ?

  2. bender
    Posted Mar 4, 2009 at 1:51 PM | Permalink

    Well, well.

  3. Graeme Rodaughan
    Posted Mar 4, 2009 at 1:58 PM | Permalink

    Firstly: Thanks to Garth Paltridge for coming forward with this story.

    Secondly: And some people are upset that there is scepticism wrt the notion that man made emissions of CO2 will cause global warming…

  4. Peter
    Posted Mar 4, 2009 at 2:01 PM | Permalink

    I am beginning to cringe each time I see the word “robust”. It almost seems it could be defined as “we can’t prove statistical significance, but we really, really believe” we’re right.

  5. Posted Mar 4, 2009 at 2:04 PM | Permalink

    The NCEP Reanalysis forecast model is based largely upon a frozen version of the MRF (currently GFS) forecast model circa 1997-1998. It has very coarse grid spacing and no one would use it to diagnose phenomena less than 200 km in spatial scale. Furthermore, the NCEP Reanalysis and other reanalysis products suffer from horrible model bias and inhomogeneities related to the evolving observing system, mainly the inclusion of new satellite retrievals. For this reason (among many others), reanalysis datasets are not suitable for deducing long-term trends in climate, especially upper-troposphere humidity prior to the satellite era.

    There is only one “unnamed” team that utilized reanalysis data to examine tropopause height and trends in upper-troposphere warming. Roger Pielke Sr. appropriately commented on this Science piece.

    • Craig Loehle
      Posted Mar 4, 2009 at 2:41 PM | Permalink

      Re: Ryan Maue (#5), could you point to the Pielke comment and also clarify your post? Do you mean that the NCEP data are so bad the Paltridge paper is useless? Do you have documentation of this?

      In my forthcoming paper in E&E (yes, that journal again), the January 2009 issue, I show ocean cooling over the past 4.5 years. The paper was rejected in a matter of days (without review) from Science, Nature, and GRL. Wonder what would have happened if I showed rapid warming?

      • Dave Andrews
        Posted Mar 4, 2009 at 2:58 PM | Permalink

        Re: Craig Loehle (#14),

        You would have made the front cover of them all 🙂

      • Posted Mar 4, 2009 at 3:44 PM | Permalink

        Re: Craig Loehle (#14), this link is a good place to start on the tropopause height issues. Science 2004 Pielke Sr.

        There have been three generations of reanalysis products. NCEP Reanalysis would belong to the first generation, ERA-40 to the second, and ERA-interim/JRA-25 as well as NASA’s MERRA in the third generation. 4DVar is utilized in the recent reanalysis projects, whereas NCEP Reanalysis does not employ the latest state-of-the-art data assimilation procedures. Accordingly, it does not advertise to do such. This is 1990s NWP.

        Re: Steve McIntyre (#15),

        Yes, that’s another example. The IPCC AR4 chapter written by Trenberth is an excellent summary of the pitfalls in using reanalysis data for climate trends. It is dangerous territory, indeed.

        I echo Re: Gerald Browning (#21), but simply pointing out the caveats and the potential pitfalls is insufficient in my book. With the ERA-40 being freely downloadable (also the JRA-25) and available to the research community, a cross-comparison would be appropriate and fairly easy to achieve. This would add robustness and perhaps some indication of error in the upper-tropospheric humidity measurements.

        On a separate note, my paper on the collapse in Northern Hemisphere Tropical Cyclone Activity since 2007 was published by GRL today. The review process was very helpful and I did not experience the aforementioned bias or resistance to my manuscript, either scientifically or politically.

        • M. Villeger
          Posted Mar 4, 2009 at 4:02 PM | Permalink

          Re: Ryan Maue (#32),

          Here is a quote from Pielke Sr.: “First, as can be seen from Fig. 1, A and B, ignoring data after 1996, linear trends in globally averaged 1000–300 hPa thickness temperature and 300 hPa over 1979 to 1996 were not significant [–0.045°C/decade (P = 0.55) and –2.59 m/decade (P = 0.33) for 1000–300 hPa thickness temperature and 300 hPa height, respectively]. This indicates that the warming described by Santer et al. (1) resulted from data at the end of the time series and is not the result of a general linear warming trend. The map of thickness temperature spatial trends and their zonal average over 1979–1996 (Fig. 2) shows that warming in the Northern Hemisphere is statistically significant only in several small isolated regions, and does not support a conclusion of general warming of the Northern Hemisphere troposphere— or the latitudinal characteristics reported in (1). Second, because the NCAR/NCEP reanalysis data are tuned by radiosonde data, long-term trends are dependent on the evolution of the radiosonde data over time (6). This connection with observations prevents any model drift in tropospheric heating over time. Third, temperature profiles are integrated upwards (8), so any bias in the stratosphere would not have a direct effect on the computation of underlying tropospheric pressure-surface heights. Thus, the NCAR/NCEP reanalysis is a valuable tool for climate assessments (9–12).”

          Looks like it is a valuable tool in his opinion.

        • Willem Kernkamp
          Posted Mar 4, 2009 at 5:49 PM | Permalink

          Re: Ryan Maue (#32),

          The interesting discussion initiated by Ryan Maue is interspersed with a lot of chatter. Ryan’s comment seems valid:

          With the ERA-40 being freely downloadable (also the JRA-25) and available to the research community, a cross-comparison would be appropriate and fairly easy to achieve. This would add robustness and perhaps some indication of error in the upper-tropospheric humidity measurements.

          Perhaps this could be done.

          Will

        • Bernie
          Posted Mar 4, 2009 at 5:59 PM | Permalink

          Re: Willem Kernkamp (#66), Will:
          Thanks for getting us back on track. However, my sense is that this particular topic constitutes a new area of research and, without a pretty definitive article, is not something that will be pursued here. But perhaps I am wrong.

        • Kenneth Fritsch
          Posted Mar 4, 2009 at 7:20 PM | Permalink

          Re: Ryan Maue (#30),

          There have been three generations of reanalysis products. NCEP Reanalysis would belong to the first generation, ERA-40 to the second, and ERA-interim/JRA-25 as well as NASA’s MERRA in the third generation. 4DVar is utilized in the recent reanalysis projects, whereas NCEP Reanalysis does not employ the latest state-of-the-art data assimilation procedures. Accordingly, it does not advertise to do such. This is 1990s NWP.

          I think from past comments and participation from Ryan here at CA that his judgments on issues like this one deserve respect and attention. I would think, based on the usual handling of issues like this one at CA, that an informative analysis/discussion could ensue. As a layperson I have heard of problems (different ones) for all of these reanalysis procedures.

          I think that, as someone has already mentioned, there are two issues here that are not particularly related. If the reasons given for rejecting the paper are politically/agenda based, that would be an ominous sign in everyone’s book and would not, I would think, have anything to do with the publish-worthiness of the paper. The second issue in my mind is whether the reanalysis method is sufficiently reliable, given all its known limitations, to determine whether the observed results are in disagreement with the modeled results. As Ryan implied, why was the most primitive reanalysis method applied, and why without comparison with the other methods? Are all these reanalysis methods independent of climate model influences?

          Perhaps Ryan could provide some good reference links where these reanalysis methods are discussed. Also, are there other prominent papers that have been published in recent times that have used NCEP reanalysis data?

        • Kenneth Fritsch
          Posted Mar 5, 2009 at 1:32 PM | Permalink

          Re: Ryan Maue (#30),

          Ryan, the link that you gave to the Pielke Sr. article can be summarized, I think, fairly well from the three excerpts below. In them the spurious lower stratosphere cooling in the NCAR/NCEP reanalysis was noted, as was the fact that it had no effect on the subject at hand. As I recall this is a point that Douglass et al. (2008) also made. The independence of the MSU and NCAR/NCEP data was noted, and finally the famous “early” ending of the Santer temperature series is pointed to, along with how that conclusion changes when the series is extended forward in time.

          Outside the spurious lower stratosphere cooling I see no major misgivings about the NCEP reanalysis in this article. Actually Pielke notes that “Thus, the NCAR/NCEP reanalysis is a valuable tool for climate assessments (9–12).”

          Christy et al. (6) and Santer et al. (7) reported on a spurious lower stratospheric cooling in the NCAR/NCEP reanalysis data beginning in 1997.

          Some have questioned the independence of the NCAR/NCEP reanalysis and the MSU lower tropospheric data (13). However, our previous intercomparisons were made specifically with the Christy et al. (9) MSU lower tropospheric data (which was adjusted and merged with the microwave data sets), and not the raw MSU satellite radiances used in the package that generates temperature retrievals for NCEP reanalyses. The Christy et al. data product was not used in the NCAR/NCEP reanalysis (6). Since the MSU lower tropospheric data and radiosonde data are independent sources, the long-term rate of change in the temperature fields that they diagnose are independent of each other. These data can therefore be compared to assess the accuracy of trend and variability in the reanalysis.

          This indicates that the warming described by Santer et al. (1) resulted from data at the end of the time series and is not the result of a general linear warming trend.

          I intend next to review the IPCC AR4 Trenberth chapter.

        • Posted Mar 5, 2009 at 2:52 PM | Permalink

          Re: Kenneth Fritsch (#124), I would agree with that assessment of the comment/reply exchange between Santer and Pielke, as each made legitimate points and in my opinion, arrived at a stalemate.

          Another point: the reanalysis datasets are anchored to the radiosonde network, especially where the density of such observations is high. Radiosonde locations are quickly plotted (from ERA-40 data ingest for January 1, 2001): there are on the order of 700 stations used, with the highest density in the Northern Hemisphere, especially in North America and Central Europe. The ocean coverage in middle latitudes is non-existent. This is a huge problem. The reanalysis model is “tuned” or “trained” or “anchored” to radiosonde profiles where they in fact exist. Thus, a radiosonde profile will get a much larger weighting in the final analysis as opposed to a satellite-derived radiance retrieval. Over the oceans, the “climate” background of the model [with all of its inherent problems/biases] has to do a lot more work and this is where two reanalysis products would differ most, especially when one is using 1990s NWP (NCAR Reanalysis 3DVar) as opposed to the new ERA-interim (4DVar, state of the art ECMWF 2007 era data assimilation). Therefore, where there are few in situ observations (which have well defined and understood error characteristics, arguably radiosondes fit into that category), reanalysis models can act as “glorified” climate models in their interpretation of satellite data.

          Thus, to answer my own questions, I will simply download the data, do the same analysis as Paltridge did for the NCEP, and report back the findings from the other reanalysis datasets (JRA-25, ERA-40, ERA-interim).
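
          In outline it is the same trend loop run over each product (the non-NCEP file and variable names below are placeholders, since each reanalysis ships its humidity under its own conventions):

          ```python
          import numpy as np
          import xarray as xr

          # Placeholder file/variable names for the non-NCEP products.
          products = {
              "NCEP":   ("shum.mon.mean.nc", "shum"),
              "ERA-40": ("era40_q.nc", "q"),   # hypothetical local file
              "JRA-25": ("jra25_q.nc", "q"),   # hypothetical local file
          }

          def upper_trend(path, var):
              ds = xr.open_dataset(path)
              q = ds[var].sel(time=slice("1979-01-01", "2007-12-31"))  # satellite era
              q_ann = q.groupby("time.year").mean("time").mean("lon").mean("lat")
              series = q_ann.sel(level=400, method="nearest")  # upper troposphere
              years = series["year"].values.astype(float)
              return 10.0 * np.polyfit(years, series.values, 1)[0]  # per decade

          for name, (path, var) in products.items():
              print(f"{name:7s}: {upper_trend(path, var):+.4f} near 400 hPa")
          ```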

        • Kenneth Fritsch
          Posted Mar 5, 2009 at 2:54 PM | Permalink

          Re: Ryan Maue (#30),

          The chapter that Ryan referenced in the above post is Chapter 3 of the IPCC AR4, by Kevin Trenberth.

          Click to access AR4WG1_Print_Ch03.pdf

          As an aside, this was the chapter that I believe Chris Landsea was working on before he resigned in a dust-up with Kevin Trenberth over the politicization (claimed by Landsea against Trenberth) of the issue of tropical storm activity and global warming.

          Here is an excerpt from chapter 3 on TC activity and SST:

          Intense tropical cyclone activity has increased since about 1970. Variations in tropical cyclones, hurricanes and typhoons are dominated by ENSO and decadal variability, which result in a redistribution of tropical storm numbers and their tracks, so that increases in one basin are often compensated by decreases over other oceans. Trends are apparent in SSTs and other critical variables that influence tropical thunderstorm and tropical storm development. Globally, estimates of the potential destructiveness of hurricanes show a significant upward trend since the mid-1970s, with a trend towards longer lifetimes and greater storm intensity, and such trends are strongly correlated with tropical SST.

          Getting back to the point of reanalyses, I did a search on NCEP in chapter 3 and came up with following excerpts from the text.

          There were 20 references in the notes for Chapter 3 that mentioned NCEP.

          From these excerpts I get the distinct impression that the author was more concerned about pre-1979 use of the reanalyses but was saying good things about them after that date. The comment on water vapor seems to caution against the use of reanalysis before the 1970s and to suggest that perhaps problems exist after that period as well.

          Being familiar with IPCC reviews, the next step is to look at the references given for that last reservation.

          The post-1979 period allows, for the first time, a global perspective on many fields of variables, such as precipitation, that was not previously available. For instance, the reanalyses of the global atmosphere from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR, referred to as NRA; Kalnay et al., 1996; Kistler et al., 2001) and the European Centre for Medium Range Weather Forecasts (ECMWF, referred to as ERA-40; Uppala et al., 2005) are markedly more reliable after 1979, and spurious discontinuities are present in the analysed record at the end of 1978 (Santer et al., 1999; Bengtsson et al., 2004; Bromwich and Fogt, 2004; Simmons et al., 2004; Trenberth et al., 2005a). Therefore, the availability of high quality data has led to a focus on the post-1978 period, although physically this new regime seems to have begun in 1976/1977.

          An independent check on globally vertically integrated water vapour amounts is whether the change in water vapour mass is reflected in the surface pressure field, as this is the only significant influence on the global atmospheric mass to within measurement accuracies. As Trenberth and Smith (2005) showed, such checks indicate considerable problems prior to 1979 in reanalyses, but results are in better agreement thereafter for ERA-40. Evaluations of column integrated water vapour from the NASA Water Vapor Project (NVAP; Randel et al., 1996), and reanalysis data sets from NRA, NCEP-2 and ERA-15/ERA-40 (see Appendix 3.B.5.4) reveal several deficiencies and spurious trends, which limit their utility for climate monitoring (Zveryaev and Chu, 2003; Trenberth et al., 2005a; Uppala et al., 2005). The spatial distributions, trends and interannual variability of water vapour over the tropical oceans are not always well reproduced by reanalyses, even after the 1970s (Allan et al., 2002, 2004; Trenberth et al., 2005a).

          Using NCEP-2 reanalysis data, Lim and Simmonds (2002) showed that for 1979 to 1999, increasing trends in the annual number of explosively developing (deepening by 1 hPa per hour or more) extratropical cyclones are significant in the SH and over the globe (0.56 and 0.78 more systems per year, respectively), while the positive trend did not achieve significance in the NH. Simmonds and Keay (2002) obtained similar results for the change in the number of cyclones in the decile for deepest cyclones averaged over the North Pacific and over the North Atlantic in winter over the period 1958 to 1997.

    • Steve McIntyre
      Posted Mar 4, 2009 at 2:49 PM | Permalink

      Re: Ryan Maue (#5), Ryan, didn’t Haimberger et al use ERA-40 reanalysis in their radiosonde stuff that was involved in Santer and Schmidt v Douglass? Is it better/worse/similar to NCEP?

    • Steve McIntyre
      Posted Mar 5, 2009 at 8:56 AM | Permalink

      Re: Ryan Maue (#5),

      I was a little surprised to learn how low the NCEP reanalysis is presently esteemed by the climate science community. It’s not a topic that I’ve followed.

      I looked at the Trenberth chapter in AR4 mentioned above by Ryan (chapter 3). If you do a search for NCEP, there are 22 uses, plus 35 uses of the acronym NRA (NCEP/NCAR). Without surveying the matter, it was my impression that the NCEP reanalysis, whatever its warts, appears to have been extensively used in the literature cited by Trenberth/IPCC.

      I have no particular view on whether NCEP reanalysis should be used. But presumably its defects, whatever they are, were well known at the time of AR4. So why was there such extensive use in AR4?

      • Andrew Dessler
        Posted Mar 5, 2009 at 10:35 AM | Permalink

        Re: Steve McIntyre (#94),

        The NCEP/NCAR reanalysis is a well-respected and useful data set. But like any tool, there are some jobs it’s better for than others. You would not use a screwdriver to hammer a nail, and most people would not use a reanalysis for a long-term trend calculation. Thus, while psychoanalyzing peer reviewers may be fun, in this case I’d take the reviewers at their word that they rejected it because they didn’t think this calculation was particularly good (I should note that I was not a reviewer). People here may or may not agree with that conclusion, but I do think it’s what the reviewers actually thought — and not a conspiracy to suppress dissenting views.

        • bender
          Posted Mar 5, 2009 at 10:47 AM | Permalink

          Re: Andrew Dessler (#99),
          1. Paltridge’s quote makes it quite clear that the report was suppressed – in part – because of the political implications of its publication. i.e. It wasn’t just the methodology.
          2. There’s no “conspiracy” theory in play here – just the fact that there seems to be a shared concern that one single publication could rock the IPCC’s scientific foundation. The thing I don’t understand is this: if the foundation is so solid, then what is the basis for this fear? (So you see, it’s not that psychoanalysis is “fun”. It seems to be necessary.)
          3. IPCC has done this to themselves by deciding at the outset not to discuss IN DETAIL greenhouse physics and the derivation (i.e. statistics) of the GHG sensitivity coefficients. An engineering-quality exposition is desperately needed.

          Thank you very much for commenting.

        • Andrew Dessler
          Posted Mar 5, 2009 at 11:12 AM | Permalink

          Re: bender (#100),

          As a former associate editor of JGR, I can assure you that all authors dream up reasons their papers were rejected so they don’t have to confront the reality that their paper just wasn’t very good. I had many conversations with authors that went something like this:

          Author: I can’t believe you rejected my paper. Why did you do it?
          Me: I think there are some real methodological problems, as described in the reviews.
          Author: I know that X was one of the reviewers. X is an idiot and a hack. He has no idea what he’s talking about. And he’s biased against my work because it contradicts his theories.
          (Of course, X was NOT one of the reviewers — authors rarely guess the reviewers)
          Author: Why don’t you get someone like Y to review it. He’s sensible and I’m sure he’d see the merit of the paper.
          (Y, of course, was a reviewer who suggested rejection)
          Me: Take another look at the reviews. They’re actually clear and relevant. It will improve your paper to address those comments.
          Author walks off angrily.

          Thus, I take Paltridge’s story about a conspiracy to suppress his work with a grain of salt. I would really love to see the full reviews and the letter from the editor. I imagine that the reviewers had a boatload of sensible criticisms.

        • Steve McIntyre
          Posted Mar 5, 2009 at 11:36 AM | Permalink

          Re: Andrew Dessler (#101),

          I do not use the word “conspiracy” nor do regular readers of this blog. 99% of all such uses occur in drive-by comments like yours. The purpose of the term seems to be to set up some sort of absurd straw man argument to deflect attention from the incident in question. You invent an allegation of “conspiracy” (where none was alleged) and then say that none existed. So there.

          Paltridge provided us with the following quote from the review:

          the only object I can see for this paper is for the authors to get something in the peer-reviewed literature which the ignorant can cite as supporting lower climate sensitivity than the standard IPCC range

          Regardless of whether the other comments were on point, I do not think the quotation in front of us is a legitimate review comment for a professional journal.

          Can we agree on that?

        • Mark T
          Posted Mar 5, 2009 at 12:00 PM | Permalink

          Re: Steve McIntyre (#105),

          Regardless of whether the other comments were on point, I do not think the quotation in front of us is a legitimate review comment for a professional journal.

          Ultimately this is the reason we are “psychoanalyzing.” Whether Garth Paltridge is dreaming up a reason for being rejected or not, why is this review comment even part of the process? It is obviously devoid of scientific merit and serves only to highlight the reviewer’s bias, which is expected to be at a minimum in the peer review process. The reviewer might as well have said “I don’t like this author’s hair style so we should reject his paper.”

          Mark

        • MC
          Posted Mar 5, 2009 at 12:22 PM | Permalink

          Re: Steve McIntyre (#105), and Re: bender (#100), Yeah this comment seems a little too off topic for a scientific review. Something like ‘no you need to reconsider the uncertainties etc.’ would be proactive. Saying ignorant doesn’t exactly help, as every specialist could be said to be ignorant of the exact field of the specialist next to him. As an aside, it is sometimes interesting to put quotes into their opposite to gain an insight into perspective, i.e.

          the only object I can see for this paper is for the authors to get something in the peer-reviewed literature which the ignorant can cite as supporting lower climate sensitivity than the standard IPCC range

          could be stated in an opposite argument as

          the numerous objects I can see for this paper is for the authors to get something in the peer-reviewed literature which the informed can cite as supporting lower climate sensitivity than the standard IPCC range

          Now which quote doesn’t grate so much?

        • Andrew Dessler
          Posted Mar 5, 2009 at 12:49 PM | Permalink

          Re: Steve McIntyre (#105),

          A few comments:
          OK, I won’t use the word conspiracy anymore.

          Second, I was not at the workshop, so I cannot comment on the veracity of Paltridge’s claim.

          Third, I cannot tell from the quote provided whether I would agree with Serreze et al.’s use of the reanalysis. It may be that’s a trivial part of the paper. If you provide a cite, I’ll take a look and let you know. Nevertheless, you can tell by the dearth of trend papers using reanalysis that this is not something that’s done very often for reasons described above.

        • Steve McIntyre
          Posted Mar 5, 2009 at 3:35 PM | Permalink

          Re: Andrew Dessler (#115),
          Serreze et al (Cryosphere 2009) http://www.the-cryosphere.net/3/11/2009/tc-3-11-2009.pdf

          It took me about 10 seconds to find a paper reporting trends using NCEP. That was one from 2009. My guess is that there are plenty more, but perhaps this was by pure chance. Maybe I’ll start a thread on the topic.

        • Posted Mar 5, 2009 at 3:53 PM | Permalink

          Re: Steve McIntyre (#148), This paper (Serreze et al. 2009) has the distinct advantage of reporting comparisons between NCEP and JRA25 fields for a variety of situations. This is an example of the robustness that the Paltridge et al. paper lacks. Let me clarify: the reanalysis data is a valuable resource for certain applications. Long time-period trend analysis, especially when spanning the pre- and post-satellite eras, is not recommended. Furthermore, I believe this fact is well recognized by those conscious of data assimilation procedures in meteorology and climate science.

        • Andrew Dessler
          Posted Mar 5, 2009 at 4:36 PM | Permalink

          Re: Steve McIntyre (#147),

          That paper you pointed to does indeed do trend calculations with NCEP reanalysis. Everyone has their individual level of skepticism for any given technique, data set, or method, and perhaps for trends from reanalysis my skepticism is higher than others’. I don’t really know since I have not thought much about this. So I’ll cede this point to you.

        • Jeff Alberts
          Posted Mar 5, 2009 at 12:30 PM | Permalink

          Re: Andrew Dessler (#101),

          If review is as stringent as you imply, how does something like MBH98 or any other dendroclimatology paper get through the process when they’re using questionable statistical procedures? Perhaps each journal is different, and perhaps each reviewer has different standards, but it seems like something so heavily cited would have been questioned more thoroughly after the fact by the “community”.

        • bender
          Posted Mar 5, 2009 at 12:49 PM | Permalink

          Re: Andrew Dessler (#101),
          Thank you for your assurances, but I don’t need them as I’ve my own set of comparable experiences to draw upon.

          As I’ve already allowed: I agree that Paltridge *may* fall into your category of The Disgruntled. We’ll just have to suspend judgement until there are more facts. That would be the honest thing to do.

        • BillBodell
          Posted Mar 5, 2009 at 12:51 PM | Permalink

          Re: Andrew Dessler (#101),

          I’m sure you are correct in most circumstances.

          However, this doesn’t account for the reviewer quote:

          “the only object I can see for this paper is for the authors to get something in the peer-reviewed literature which the ignorant can cite as supporting lower climate sensitivity than the standard IPCC range”.

          Unless this quote can be shown to be falsified (and I can’t believe it would have been) or taken out of context (and I can’t imagine a context which would alter the meaning), it has to be considered.

          If the quote had nothing to do with the article’s acceptance, then it’s unfortunate that the reviewer said it. It’s kind of like having an argument with your wife, making a series of reasoned points and then adding that her mother is ugly. The reasoned points will be ignored and the focus will be on the comment about her mother.

        • Mark T
          Posted Mar 5, 2009 at 12:54 PM | Permalink

          Re: BillBodell (#116),

          and then adding that her mother is ugly.

          And you don’t like her hair style. 😉

          Mark

        • Jason
          Posted Mar 5, 2009 at 1:09 PM | Permalink

          Re: Andrew Dessler (#101),

          I agree 100% with your characterization of the behavior of some (many) rejected authors. (Except that Y is not much more likely to be a reviewer than X.)

          But what do you think of the quoted line:

          the only object I can see for this paper is for the authors to get something in the peer-reviewed literature which the ignorant can cite as supporting lower climate sensitivity than the standard IPCC range.

          Isn’t this sort of commentary about the political consequences of acceptance inappropriate in a review?

          Several respected researchers have made comments in this thread implying that, had they been a reviewer, they also would have recommended rejection. Wouldn’t the integrity of the process have been better served if one of them had been invited to replace the quoted reviewer?

          Fairly or unfairly (I’d like to see the full review before I make up my mind about fairness), this incident will be used to further impeach the credibility of the peer review process in climate science. Had an additional review been solicited, the comments would likely have gone unnoticed and unreported.

        • Andrew Dessler
          Posted Mar 5, 2009 at 1:26 PM | Permalink

          Re: Jason (#119),

          When I was an editor, I saw some inappropriate things in reviews — everything from snide comments to outright insults. I always removed them before sending them to the author. The quote above is inappropriate, but not too far out of the ordinary. The anonymity of peer review is similar to the anonymity of a blog — and as anyone who blogs knows, anonymity seems to bring out the worst in some people.

          I would have removed that comment, but would have sent along the rest of the review (and I would have made my decision based on the appropriate comments).

        • bender
          Posted Mar 5, 2009 at 1:56 PM | Permalink

          Re: Andrew Dessler (#123),
          Anonymity freed this reviewer up to say what he thought. And his thoughts are deplorable. (I may be idealistic, but I am not naive.)

        • Mark T
          Posted Mar 5, 2009 at 2:00 PM | Permalink

          Re: bender (#129),

          Anonymity freed this reviewer up to say what he thought. And his thoughts are deplorable.

          I would be less worried about insults than I am about this type of review comment. Such review comments only serve to confirm a suspicion that there is indeed an agenda, and whether the science is right or wrong, we must follow the agenda or be damned.

          Mark

        • Mark T
          Posted Mar 5, 2009 at 1:57 PM | Permalink

          Re: Andrew Dessler (#123),

          I would have removed that comment, but would have sent along the rest of the review (and I would have made my decision based on the appropriate comments).

          Well, assuming the rest of the review was relevant, I’d assume you mean. Since Garth has not proffered the entire review, we can only speculate as to the rest of that review, or any of the others for that matter.

          Mark

        • Hugo M
          Posted Mar 5, 2009 at 2:19 PM | Permalink

          Re: Andrew Dessler (#123),

          I would have removed that comment, but would have sent along the rest of the review (and I would have made my decision based on the appropriate comments).

          Very decent approach, isn’t it? Except that the author would have been left with pure rationalisations in the Freudian sense, and no hint of the dirty reason why his paper really was turned down.

        • Steve McIntyre
          Posted Mar 5, 2009 at 2:31 PM | Permalink

          Re: Andrew Dessler (#122),

          The quote above is inappropriate, but not too far out of the ordinary. The anonymity of peer review is similar to the anonymity of a blog — and as anyone who blogs knows, anonymity seems to bring out the worst in some people.

          If this sort of misconduct is within the norms of climate science review, then the journals need to make a concerted effort to clean this up, rather than just saying, well, everyone else does it.

          I originally had a laissez-faire policy at this blog but eventually established policies prohibiting people from imputing motives; I try to extinguish food fights when they occur.

          If you can do this on a blog, surely AGU editors dealing with professional scientists should be able to rise to this particular challenge. If they don’t, the journals should establish some objective QC standards for review, so that it’s not one editor swimming upstream. If they do, then the editor should be enforcing the policy.

          In either case, it is very unprofessional for this sort of comment to enter into a QC process, regardless of whether it occurs elsewhere in this field.

        • Posted Mar 5, 2009 at 3:54 PM | Permalink

          Re: Andrew Dessler (#122), Andrew, I didn’t see your comment above mine, as I think our comments crossed, but what you say is fair enough. I would add that in my experience the moderator sets the tone, and feedback to people is necessary for them to change. A note to the reviewers that you have snipped them would be needed for them to get the message, though whether you would be able to maintain a pool of reviewers then is questionable.

          Your comment #122 seemed to contradict #99 and previous, which I took to mean that all rejections are justified. Rereading, it seems that you think the rejection was justified by the reviewers on rational grounds in this specific case.

        • Steve McIntyre
          Posted Mar 5, 2009 at 11:48 AM | Permalink

          Re: Andrew Dessler (#99),

          Thank you for your comment. As I noted above, I’m not familiar with the issues surrounding NCEP reanalysis and welcome clarification. You say:

          The NCEP/NCAR reanalysis is a well-respected and useful data set. But like any tool, there are some jobs it’s better for than others. You would not use a screwdriver to hammer a nail, and most people would not use a reanalysis for a long-term trend calculation.

          Serreze et al 2009 say:

          By contrast, Arctic trends from NCEP are most positive at the surface for all seasons but summer, with this surface maximum most pronounced in autumn and (like the pattern in Fig. 1b) strongest at the pole.

          I take it that you do not agree with the use by Serreze et al 2009 of NCEP reanalysis for reporting on Arctic trends. If you do agree with such usage, maybe you could elucidate why it’s OK for Serreze et al to comment on trends derived from NCEP data?

        • KevinUK
          Posted Mar 5, 2009 at 12:15 PM | Permalink

          Re: Andrew Dessler (#99),

          “The NCEP/NCAR reanalysis is a well-respected and useful data set. But like any tool, there are some jobs it’s better for than others.”

          Really? And do you also think the same of bristlecone pines as a temperature proxy? … Sorry, but you’ll have to do better than that. Please explain (as Ryan has at least attempted to do) why you personally think it is not a ‘fit for purpose’ tool in Garth’s case, and provide an example where you think it has appropriately been applied as a fit-for-purpose tool?

          KevinUK

        • Posted Mar 5, 2009 at 1:31 PM | Permalink

          Re: Andrew Dessler (#99), It is easy to ping Paltridge for ‘whining’, and I am certain he was conscious of that perception when he gave Steve the email. It’s possible to take things out of context, and there are contexts where the statement about the reanalysis set makes sense. But would you say you have never seen an irrational review?

          I and others, as reported here, have had similar experiences, and without whining, have tried to report what IS about peer review in this field. Now what is, is, so whether you want to class that as a problem or not is another issue. Steve pegged your comment as ‘drive-by’, common in the blogosphere, of which you are a part too I see, and I would call it a cheap shot. I for one would welcome a patient and extended exchange drawing on your experiences as an editor.

  6. Peter
    Posted Mar 4, 2009 at 2:05 PM | Permalink

    A question for the better informed: If this research is correct, would this fit with the lack of an observed tropical tropospheric “hotspot”?

  7. Posted Mar 4, 2009 at 2:11 PM | Permalink

    Garth, congrats on getting this article published. So the team would have us believe that the radiosonde data is crap but AWS stations used to construct positive temperature trends in Antarctica are reliable and robust… what a joke! Perhaps you should have applied some form of RegEM to the radiosonde results to get the right answer.

  8. Jason
    Posted Mar 4, 2009 at 2:15 PM | Permalink

    Is the data contradicting the “big red dog” as previously discussed here entirely the result of radiosonde data?

    Either way, where can we find a summary of the satellite data that Paltridge characterizes as contradicting the radiosonde data?

    (Is it too much to hope for an even handed comparison of the two?)

  9. D, Cohen
    Posted Mar 4, 2009 at 2:22 PM | Permalink

    I have a question about the positive and negative feedback talked about here. If there is positive feedback, doesn’t that mean that the climate system is inherently unstable? Forget about CO2, if a random increase in water vapor occurs, doesn’t that all by itself increase the greenhouse effect, leading to evaporation of more water, more greenhouse effect, and so on? If this is so, it seems to me that on these grounds alone there has to be some sort of negative feedback to make the climate system stable. We know from archeology and paleontology that in fact the overall temperature of the earth is a relatively stable quantity over very long periods of time.

    • bender
      Posted Mar 4, 2009 at 3:20 PM | Permalink

      Re: D, Cohen (#9),
      I reply in unthreaded.

    • Mark T
      Posted Mar 4, 2009 at 3:21 PM | Permalink

      Re: D, Cohen (#9),

      I have a question about the positive and negative feedback talked about here. If there is positive feedback, doesn’t that mean that the climate system is inherently unstable?

      No. Positive and negative feedback in no way imply stability; the only determining factor is the magnitude of the feedback term, i.e., if it is greater than unity the system is unstable. Of course, the climate realm semi-misuses the feedback terminology, so it is really difficult to compare their use to classical system/control theory.

      Forget about CO2, if a random increase in water vapor occurs, doesn’t that all by itself increase the greenhouse effect, leading to evaporation of more water, more greenhouse effect, and so on? If this is so, it seems to me that on these grounds alone there has to be some sort of negative feedback to make the climate system stable. We know from archeology and paleontology that in fact the overall temperature of the earth is a relatively stable quantity over very long periods of time.

      It is rather easy to prove that the so-called “instability” or “tipping-point” is not physically possible without some hitherto unknown additional source of energy. Also, as long as the feedback term is less than unity, the system is guaranteed stable (aka unconditionally stable) with or without a mythical “balancing negative feedback.”
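
      A toy numeric check of that claim, assuming the classical linear feedback sum dT0*(1 + f + f^2 + …):

      ```python
      # Partial sums of dT0 * (1 + f + f^2 + ...): they converge to
      # dT0 / (1 - f) whenever |f| < 1 and run away only when f >= 1.
      dT0 = 1.0
      for f in (0.5, -0.5, 1.1):          # amplifying, damping, runaway
          total, term = 0.0, dT0
          for _ in range(200):
              total += term
              term *= f
          closed = dT0 / (1.0 - f) if abs(f) < 1.0 else float("inf")
          print(f"f = {f:+.1f}: 200-step sum = {total:.4g}, closed form = {closed:.4g}")
      ```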

      Mark

      • bender
        Posted Mar 4, 2009 at 3:27 PM | Permalink

        Re: Mark T (#24),
        See my comment in unthreaded. Can I ask that we please stay focused on this very important issue of editorial control? Is this the start of the end of climate science?

  10. Posted Mar 4, 2009 at 2:23 PM | Permalink

    This type of study must include a much longer and significantly more detailed discussion on the errors inherent to the reanalysis procedure. The most relevant reference in the Paltridge et al. paper is by Bengtsson et al. (2004). If I were a reviewer of this paper, I would require a cross-comparison with the other available reanalysis products, including the ERA-40 and JRA-25. Without this sensitivity study, all of the model-dependent results have the following glaring caveat: “according to the NCEP Reanalysis”. This does not discount the results, but clearly opens the paper up to significant criticism from anyone familiar with the deficiencies in the NCEP data assimilation procedures.

  11. stan
    Posted Mar 4, 2009 at 2:28 PM | Permalink

    Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them. Those ‘for’ simply said that scientific reportage shouldn’t be constrained by the politically correct.

    And this reflects adherence to the scientific method?!

  12. Posted Mar 4, 2009 at 2:35 PM | Permalink

    Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them.

    Yep, and that’s precisely the problem. Politics should not be driving science, yet there it is. If it goes against your conclusions and against your politics, it’s “iffy” and must be rejected forthwith.

  13. Tim G
    Posted Mar 4, 2009 at 2:36 PM | Permalink

    Is there an original link for the Paltridge comments? Or is this it?

    –t

  14. jack mosevich
    Posted Mar 4, 2009 at 2:50 PM | Permalink

    Craig Loehle (#14): Hey, I will submit it under my name with the word cooling replaced by the word warming and see if it is accepted.

  15. BillBodell
    Posted Mar 4, 2009 at 3:02 PM | Permalink

    D, Cohen,

    That’s the crux of the matter.

    Check out climate-skeptic.com. He’s a “lukewarmer” who (IMO) properly focuses his skepticism on exactly this issue.

  16. Les Johnson
    Posted Mar 4, 2009 at 3:07 PM | Permalink

    Correct me if I am wrong; but if one of the authors of the satellite study knew of Paltridge et al.’s paper, isn’t it malfeasance by omission not to mention it?

  17. Gerald Browning
    Posted Mar 4, 2009 at 3:17 PM | Permalink

    All,

    It is well known that the reanalysis data (a combination of model forecast and observational data) is suspect, especially in the tropics where the forcing (heating and cooling) is dominant and poorly understood, and over the oceans where the in situ data is sparse (satellite data is questionable where there is no surface data to aid in the temperature retrieval). But at least the authors of this article pointed out those caveats up front (and kudos to them for their honesty). If the modelers pointed out all of the flaws in the climate models in the same manner (unrealistic type and size of dissipation, unresolved spectra and incorrect cascade of enstrophy, unphysical tuning, ill posedness of the continuum system, etc.), then no one would believe any of their results.

    Jerry

  18. nanny_govt_sucks
    Posted Mar 4, 2009 at 3:20 PM | Permalink

    Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them.

    But what of “iffy” data that supposedly supports AGW? Aaah… that can be used to smite those horrible people!

  19. Posted Mar 4, 2009 at 3:25 PM | Permalink

    Good of Garth to send you this. Perhaps he will consider speaking at the upcoming Heartland conference.

  20. Posted Mar 4, 2009 at 3:28 PM | Permalink

    Say, Paltridge, is there anything else from that review that you might let us know about? I’m curious about any other non-fact-based arguments against publication.

  21. Mark T
    Posted Mar 4, 2009 at 3:31 PM | Permalink

    Yes, I see. I was just browsing and would have moved my point over there had we not cross-posted and I saw yours first. It’s easy to get side-tracked in this case because our posts are relevant to the E&E article, but the article itself isn’t really relevant to this thread. Feel free to move my comment, Steve, you won’t hurt my feelings.

    Mark

  22. Simon Evans
    Posted Mar 4, 2009 at 3:38 PM | Permalink

    Garth Paltridge has quoted a particular comment from a peer reviewer.

    May I ask him to – snip-
    quote all peer review comments fully, so that readers may make their own assessment of the comments in relation to his paper, rather than being obliged to accept his own assessment of an “unbelievably vitriolic, and indeed rather hysterical, review”?

  23. Dishman
    Posted Mar 4, 2009 at 3:58 PM | Permalink

    “Peer review” is not a Quality Process by modern standards.

  24. Bill Illis
    Posted Mar 4, 2009 at 4:09 PM | Permalink

    Regarding the previous water vapour study by Dessler – the study just examined the change in water vapour from DJF 2007 to DJF of 2008 when the La Nina reduced temperatures by 0.4C.

    The study found there was a 1.5% (percentage points) decline in relative humidity in the very lower levels of the troposphere and a 1.5% increase in relative humidity in the upper layers of the troposphere. The middle layers were constant.

    There are supposed to be subtle changes in relative humidity in different layers and latitudes depending on temperature changes but global warming theory suggests relative humidity should stay more-or-less stable.

    Given there is much more water vapour in the lower levels of the atmosphere, the study really found that there was a decline in overall global relative humidity.

    So, Dessler says relative humidity declines as temperature declines and this is proof of global warming theory. Well, this would actually put us down the road of a runaway ice planet or a runaway greenhouse. Even contradiction is proof nowadays.

  25. Chris H
    Posted Mar 4, 2009 at 4:11 PM | Permalink

    This makes me mad, far madder than all the censorship at RC does (which just mis-informs the public).

    When even genuine climate scientists cannot get a short article published that tries to make other climate scientists aware of data that might have a slightly negative effect on AGW theory (as in, the CO2 warming might not be as bad as predicted by climate models), well, you know for certain that climate science is no longer functioning as a science.

    It’s not a complete surprise (there have been lots of hints that this kind of thing is pervasive), but this certainly clinches it. If this doesn’t make you a skeptic, I don’t know what will.

    Oh, and another recent bit of evidence of this sort of thing from http://www.drroyspencer.com, who on February 21st 2009 said:

    “Is my work published? No…at least not yet…although I have tried. Apparently it disagrees too much with the IPCC party line to be readily acceptable. My finding of negative SW feedback of around 5 W m-2 K-1 from real radiation budget data (the CERES instrument on Aqua) is apparently inadmissible as evidence.”

  26. Reid
    Posted Mar 4, 2009 at 4:21 PM | Permalink

    “When even genuine climate scientists cannot get a short article published that tries to make other climate scientists aware of data that might have a slightly negative effect on AGW theory”

    It is not slightly negative but game over for global warming alarmism if water vapor feedbacks are negative. Correct me if I am wrong but positive water vapor feedbacks are the heart of the alarmist case. If this is true even Steve McIntyre will officially join the skeptic-heretic camp.

  27. Howard S.
    Posted Mar 4, 2009 at 4:23 PM | Permalink

    “we were to imagine that climate scientists might welcome the challenge to examine properly and in detail even the smell of a possibility that global warming might not be as bad as it is made out to be.”
    snip – policy and motives

  28. Bill Illis
    Posted Mar 4, 2009 at 4:29 PM | Permalink

    Regarding the previous water vapour study by Dessler – the study just examined the change in water vapour from DJF 2007 to DJF of 2008 when the La Nina reduced temperatures by 0.4C.

    The study found there was a 1.5% (percentage points) decline in relative humidity in the very lower levels of the troposphere and a 1.5% increase in relative humidity in the upper layers of the troposphere. The middle layers were constant.

    Given there is much more water vapour in the lower levels of the atmosphere, the study really found that there was a decline in overall global relative humidity when global warming theory suggests it should stay more-or-less stable.

    To be fair, the models do produce results which show subtle changes in relative humidity at different layers and latitudes depending on temperature changes.

    But these results (declining relative humidity with declining temperature) just put us down the road of a runaway ice planet or a runaway greenhouse.

    Even contradiction is proof of global warming these days. I think the study just confirmed there are changes in relative humidity that we do not understand yet.
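    A minimal numerical sketch of the weighting argument above, with illustrative round-number layer weights rather than values from the Dessler study: if most of the column water vapour sits in the lowest layers, a 1.5-point decline there outweighs a 1.5-point rise aloft in any vapour-weighted average.

    ```python
    # Vapour-weighted average of the layer-by-layer relative humidity changes
    # described above. The fractions of column water vapour per layer are
    # illustrative assumptions, not values from the Dessler study.
    layers = [
        ("lower",  0.60, -1.5),  # (layer, fraction of column vapour, RH change in points)
        ("middle", 0.30,  0.0),
        ("upper",  0.10, +1.5),
    ]
    overall = sum(frac * change for _, frac, change in layers)
    print(f"vapour-weighted RH change: {overall:+.2f} points")  # -> -0.75 points
    ```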

  29. Bernie
    Posted Mar 4, 2009 at 4:33 PM | Permalink

    Clearly there are two issues here. The first issue concerns the robustness of the findings in the Paltridge et al paper. Ryan Maue’s specific concerns probably deserve a response from Paltridge et al., the more so if Ryan has been a reviewer of the paper at one of the journals to which it was submitted. The second issue is the apparent lack of objectivity and general readiness to censor research that casts doubt on the dominant viewpoint.

    Given that the paper has been accepted, one assumes that Ryan’s concerns were not so critical as to preclude a journal from publishing the paper. That does not mean that Ryan’s concerns can be ignored.

    It does make sense to understand the other reviewers’ comments and to separate out the substantive from the political. Since Garth Paltridge sees at least one review as being

    unbelievably vitriolic, and indeed rather hysterical

    it would be helpful to see more concretely what was said.

    snip

  30. Mark T
    Posted Mar 4, 2009 at 4:52 PM | Permalink

    I agree, Bernie, that there are two issues. Clearly if other reviewers have issues with the robustness of the paper, whether or not it would have been published might not rest solely on the one review. However, given that this one review does exist, why? Have scientists really openly lost their objectivity to the point that anything casting doubt on their so-called “consensus view” must be silenced?

    I wouldn’t really call the snippet vitriolic or hysterical (well, maybe a bit hysterical), but I would call it decidedly un-scientific and very lacking in objectivity, which is supposed to be what you get from independent peer review. The rest of that comment (as well as the others) would be enlightening.

    Mark

  31. Posted Mar 4, 2009 at 4:54 PM | Permalink

    In the interests of open debate, it’s about time that journals made reviews accessible to anyone interested, posting electronic versions along with electronic copies of the final articles! I’d expect this to have a major effect in curtailing emotive responses such as those described by Garth, and in keeping reviewers focused on the job at hand – critical appraisal of the methods, data and results.

  32. Curt Covey
    Posted Mar 4, 2009 at 5:12 PM | Permalink

    Probably all scientists can recall unfair treatment of a paper we submitted to a peer-reviewed journal. Certainly I can. Without seeing the full correspondence between Paltridge and his editor, I will pass on agreeing or disagreeing about fairness. The essential point is that there is always another journal to submit to. Indeed, Paltridge et al. have evidently published now. This is important because IPCC assessments require that significant claims appearing in peer-reviewed journals be discussed, and IPCC reports get reviewed by enough people (including “climate skeptics”) to ensure that all such claims are in fact discussed. So don’t despair.

    • Jason
      Posted Mar 4, 2009 at 6:47 PM | Permalink

      Re: Curt Covey (#54),

      This is important because IPCC assessments require that significant claims appearing in peer-reviewed journals be discussed, and IPCC reports get reviewed by enough people (including “climate skeptics”) to ensure that all such claims are in fact discussed.

      There is a lengthy history (well documented on this site) of the IPCC completely ignoring peer reviewed research that disagrees with its findings. IPCC reviewers objecting to this practice are often dismissed out of hand without any material discussion of the research.

    • Steve McIntyre
      Posted Mar 4, 2009 at 6:56 PM | Permalink

      Re: Curt Covey (#41),

      This is important because IPCC assessments require that significant claims appearing in peer-reviewed journals be discussed, and IPCC reports get reviewed by enough people (including “climate skeptics”) to ensure that all such claims are in fact discussed

      I acted as a peer reviewer and in that capacity drew their attention to “significant claims appearing in peer-reviewed journals”. IPCC refused to discuss, for example, Miller et al 2006; Naurzbaev et al 2004, which estimated MWP temperatures in California and Siberia as notably higher than at present (inconsistent with the key Mann bristlecone PC and Briffa Yamal series).

      When I asked IPCC to provide supporting data for a then unpublished study, they refused to ask the authors. When I asked directly, they threatened to expel me as a reviewer if I asked any more authors of unpublished studies being used in IPCC for data.

      IPCC refused to archive all review comments and asked the UK Met Office not to release the Review Editor comments of John Mitchell. Mitchell attempted to thwart an FOI request by saying first that he had destroyed all his correspondence with IPCC, then that the comments were his personal property.

      This has been documented in excruciating detail elsewhere on this site – so please – tread lightly when lionizing this process here.

    • Ross McKitrick
      Posted Mar 4, 2009 at 8:33 PM | Permalink

      Re: Curt Covey (#41),

      This is important because IPCC assessments require that significant claims appearing in peer-reviewed journals be discussed, and IPCC reports get reviewed by enough people (including “climate skeptics”) to ensure that all such claims are in fact discussed

      That’s the ideal. But I was a reviewer and I categorically assert that it is not the reality. I expect that having a few skeptics in the list of reviewers adds to the appearance of diversity, but in all the areas I reviewed the process carefully insulates the final text from influences that depart in any substantive way from the Lead Author’s prior views, chiefly by putting the text through 3 re-writes after the close of scientific review. These re-writes involve insertions, deletions and edits that are not subject to scientific review.

      I documented a few such problems in a book chapter last year. In the case of the peer-reviewed papers showing that the surface temperature data is contaminated, the IPCC dealt with that by simply fabricating non-existent counter-evidence, in the form of a claim that the effects can be attributed to atmospheric circulation patterns and that, on doing so, the contamination pattern becomes statistically insignificant (AR4 Ch 3 p. 244). I have a paper under review that shows the IPCC claim was not only groundless (which is obvious from the AR4 text itself) but is provably false. The AR4 text was never peer-reviewed because it was inserted after the close of the 2nd review round.

      In the case of Long Term Persistence and the common problem of false significance in climatic trend regressions, some important text was entered into the AR4 2nd draft as a result of peer review, but simply dropped in the published AR4 (see the sequence of reviews and responses here).

      In the case of the hockey stick there was a long battle to deal with grudging, inaccurate and pejorative language (well documented here). In the end the IPCC lacked published, peer-reviewed support for their preferred position so they cited a then-unpublished paper that subsequently missed all the IPCC publication deadlines that were applied to other literature. Indeed its eventual publication timetable was highly irregular and has the appearance of an editor backdating acceptance for IPCC purposes.

  33. jeez
    Posted Mar 4, 2009 at 5:24 PM | Permalink

    OH…THE HUMIDITY!

  34. Mark T
    Posted Mar 4, 2009 at 5:28 PM | Permalink

    Hehe, now your hysterics are becoming apparent. Just admit it: you hate it when the clear double standard that exists becomes obvious.

    None of your retorts are germane, Simon, which is not unusual. Garth did indeed post at least part of the offending reviewer’s comments, which indicate at the very least that they were biased and unscientific. He didn’t accuse the reviewer of anything and merely re-stated his account of how the negotiations went. The insinuation that climate scientists are not open to proper examination of the evidence is, well, as bender put it, backed by a track record that has been well documented here. This is ongoing, and hardly surprising to anyone but you, apparently.

    All of this went down over the course of a few months, and here you are making accusations less than half a day after Garth’s post. Your question is legitimate, but asked in a completely rude way. Save the rudeness until it’s clear he won’t answer. The guy might still be in bed, for chrissakes.

    Mark

    • Mark T
      Posted Mar 4, 2009 at 5:29 PM | Permalink

      Re: Mark T (#58),

      He didn’t accuse the reviewer of anything and merely re-stated his account of how the negotiations went.

      That should be editor.

      Mark

  35. kim
    Posted Mar 4, 2009 at 5:28 PM | Permalink

    Dumb is as dumb does.
    But little is as dumb as
    The games people play.
    =======================

  36. Steven Sherwood
    Posted Mar 4, 2009 at 5:41 PM | Permalink

    As one of the authors of the Science perspective, I thought I’d try to chip in to defend the “AGW” crowd (whatever that is).

    The Science piece was an essay and should be read as such. I was aware of Garth’s work but didn’t know it had been published (also, there are severe limits on the number of citations allowed). Anyway, there is so much documented evidence of spurious shifts and trends in hydrological variables in the reanalyses (and even in temperature and wind) that I would not place any more store in trends from them than I would in a Ouija board. The problem is that humidity and rainfall observations have changed massively over the years and these totally corrupt the products he used. Others have tried to “homogenize” the data (a dubious process that has been discussed before on this site) and have found that the expected humidity trends do then appear in the data. This doesn’t prove much except that we should rely on other evidence. I was not a reviewer of the Paltridge article but (as I told him myself) I’m afraid I would not have been very enthusiastic about it (although I’d never make insinuations about why he was publishing it).

    If it makes anyone feel any better, I am now leading a team to write a review paper on water vapor, and will definitely cite Garth’s paper along with any others (though not without noting key limitations) that anyone sends my way.

    Steve Mc: Thanks for this comment.

    • Mark T
      Posted Mar 4, 2009 at 5:48 PM | Permalink

      Re: Steven Sherwood (#62),

      I was aware of Garth’s work but didn’t know it had been published

      Based on this sentence by Garth:

      We know that at least one of the authors is well aware of the contrary story told by the raw balloon data. But there is no mention of it in their article.

      I’m thinking he didn’t expect you to cite him, but to at least mention the balloon data as being contrary. However, I can’t see any major gripe, given the apparently obvious issues that not only you but others seem to have with this data. It may be a turd not worth mentioning, I don’t know. To me, this point was really tertiary to the central theme of the post.

      Your last bit is certainly what one would expect, however. 😉

      Mark

    • Wondering Aloud
      Posted Mar 5, 2009 at 7:40 AM | Permalink

      Re: Steven Sherwood (#47),

      Thank you so much for this response. The attitude that some seem to have that seemingly contradictory data can just be ignored throws up so many red flags for me that it is good to see a reasoned honest response.

  37. Posted Mar 4, 2009 at 5:58 PM | Permalink

    Well, I already see the main error in the paper:

    We know that at least one of the authors is well aware of the contrary story told by the raw balloon data.

    The data used hadn’t been “adjusted” properly.

  38. Edward
    Posted Mar 4, 2009 at 6:22 PM | Permalink

    I suggest Simon and Bender take any further comments they wish to exchange to:

    http://www.climateaudit.org/?p=4804#comment-330337

    Frankly, it’s distracting from this important discussion.

    • bender
      Posted Mar 4, 2009 at 6:37 PM | Permalink

      Re: Edward (#74),
      Respectfully, the accusation of hypocrisy was in regard to the primary issue here. Let the record show that Paltridge and The Team have very different priors, and so it is fair to treat them differently. If we can agree to that then there will be no further “distraction”.

      • Mark T
        Posted Mar 4, 2009 at 6:41 PM | Permalink

        Re: bender (#57),

        Let the record show that Paltridge and The Team have very different priors, and so it is fair to treat them differently.

        Hehe, the Team has earned a reputation as a “hostile witness” so to speak.

        Mark

  39. Simon Evans
    Posted Mar 4, 2009 at 6:46 PM | Permalink

    It seems my recent comments are being snipped, so I won’t attempt to respond to any more points. I’ll point out though that my last remark was a suggestion that the thread should get onto a more interesting topic than attacking my comment. Good night 🙂

    Steve: I deleted both sides of the food fight. I’m glad you support getting the thread on track.

  40. Steve McIntyre
    Posted Mar 4, 2009 at 6:47 PM | Permalink

    I’ve snipped many food fight comments originating from an attribution of intent that is inconsistent with blog policies. I would prefer that, rather than debating that sort of attribution, posters would notify me of the problem and let me deal with it, rather than get into a food fight.

    On another matter, for an excellent sample of a vitriolic review, please don’t forget this one by the Maestro.

    I’ll look up Ryan’s review on hurricane trends. I guess things at GRL have settled down a little. When Pielke Jr and I submitted a paper in early 2007, we got reviews that were amusingly inconsistent in their vitriol – one accused our statistical analysis of being wrong and even “fraudulent”, while the other stated that all the results were already well known in the literature. There was a “consensus” that it not be published.

    • bender
      Posted Mar 4, 2009 at 8:58 PM | Permalink

      Re: Steve McIntyre (#61),

      I would prefer that, rather than debating that sort of attribution, posters would notify me of the problem and let me deal with it, rather than get into a food fight.

      But are you asking me to let a comment like that stand? – snip –


      Steve:
      I’m not online 24/7. If you object to a comment that breaches blog policies (as the objected-to comment did), please merely state that the comment breaches blog policies rather than argue about it – you know the policies pretty well, even though they tend to be common law rather than Napoleonic Code, if that metaphor is helpful. Let me deal with it when I’m back online. Thx.

    • bernie
      Posted Mar 4, 2009 at 9:21 PM | Permalink

      Re: Steve McIntyre (#61), Steve:
      I apologize for my part in any distractions. I hope Garth provides more details on both the substance and the review process.

  41. Joel
    Posted Mar 4, 2009 at 6:54 PM | Permalink

    Is the job of a reviewer for a peer reviewed publication analogous to a referee in a sports match? If so, why would any reviewer be allowed to continue to judge a paper if his/her comments clearly express a political opinion about the intent of the author rather than addressing the validity of the facts or hypotheses that are being presented?

  42. PhilH
    Posted Mar 4, 2009 at 7:39 PM | Permalink

    “…there are horrible people who might make sinful use of them.” This, folks, is the language of a religion.

  43. Andrew
    Posted Mar 4, 2009 at 7:43 PM | Permalink

    I find the tale that Paltridge tells rather disturbing. From the sound of it, his paper merely highlights the discrepancy between satellite and radiosonde humidity data and what its implications are for the models. And he does note that both the radiosonde and satellite data carry caveats which need to be considered. Maybe, insofar as much of this is already “known”, such a result was not worth publishing, but for reviewers to dismiss it for political reasons is disturbing. GISS’s “evil people” argument is also a cringer. Is it not important to get actual scientific answers to such questions, not ones that further a particular agenda? I believe that most scientists are above this sort of thing, but it always disturbs me to see how many AGWers have ceased to be scientists and become advocates.

  44. Craig Loehle
    Posted Mar 4, 2009 at 7:57 PM | Permalink

    I think the point is NOT whether the Paltridge paper is any good. Ryan has some reasons to doubt the underlying data. I think the point is that papers are being rejected because they might provide ammunition to the wrong side. Same with “debating” stuff in public. This is awful. Is it unprecedented? No.
    -snip
    Sunshine is the best remedy. Kudos to Paltridge for showing the review.

  45. DG
    Posted Mar 4, 2009 at 8:20 PM | Permalink

    As Dessler et al 2008 was brought up, how could the following get accepted by the reviewers, given that there is not one mention of cloud feedback in the article? “Virtually guaranteed”? “Business as usual”?

    The existence of a strong and positive water-vapor feedback means that projected business-as-usual greenhouse gas emissions over the next century are virtually guaranteed to produce warming of several degrees Celsius. The only way that will not happen is if a strong, negative, and currently unknown feedback is discovered somewhere in our climate system.

    How many smoking guns can there be that end up shooting blanks?

    See Niche Modeling where Roy Spencer chimes in.

  46. Curt Covey
    Posted Mar 4, 2009 at 8:24 PM | Permalink

    I don’t know enough to say whether Paltridge was treated fairly or unfairly by the Journal of Climate. Just about any scientist (including me) can tell a story or two about unfair reception of a submitted paper. There is always another journal to submit to. If I read this thread correctly, Paltridge et al. have indeed published in a peer-reviewed journal. This is important because IPCC is required to discuss significant claims that appear in peer-reviewed journals and IPCC report drafts are reviewed by enough people (including “climate change skeptics”) to ensure that such papers are not ignored. So don’t despair.

    Steve: This seems to repeat your earlier post here, including the admonition not to despair. I replied to your earlier post here, as I do not agree with your characterization of IPCC review processes in paleoclimate anyway. In that prior reply, I observed:

    I acted as a peer reviewer and in that capacity drew their attention to “significant claims appearing in peer-reviewed journals”. IPCC refused to discuss, for example, Miller et al 2006; Naurzbaev et al 2004, which estimated MWP temperatures in California and Siberia as notably higher than at present (inconsistent with the key Mann bristlecone PC and Briffa Yamal series).

    When I asked IPCC to provide supporting data for a then unpublished study, they refused to ask the authors. When I asked directly, they threatened to expel me as a reviewer if I asked any more authors of unpublished studies being used in IPCC for data.

    IPCC refused to archive all review comments and asked the UK Met Office not to release the Review Editor comments of John Mitchell. Mitchell attempted to thwart an FOI request by saying first that he had destroyed all his correspondence with IPCC, then that the comments were his personal property.

    This has been documented in excruciating detail elsewhere on this site – so please – tread lightly when lionizing this process here.

    • Curt Covey
      Posted Mar 8, 2009 at 12:49 PM | Permalink

      Re: Curt Covey (#71), sorry about posting the same message twice. Our family computer is getting rusty and so are my computer skills.

      I was minimally involved with the IPCC’s 2007 document (AR4) and was not aware of the specific events described in M and M’s separate replies to my post. Nor have I heard the IPCC’s side of the story. Rather than pursue these particular issues further, I will simply promise to do my part to have all significant claims appearing in peer-reviewed journals appropriately discussed in the AR5.

  47. Gary
    Posted Mar 4, 2009 at 8:31 PM | Permalink

    Coming in late here. But as a physician this all seems strange to me. In medicine if a study disputes the prior theories, as long as the methods are good, it gets published. Everyone has seen the news that one week something is good for you but two weeks later a study says it is bad. Although it is confusing for the public, this is the only way to advance knowledge. If you refuse to allow conflicting studies and views you can never get any closer to the truth.

    • Jeff Alberts
      Posted Mar 4, 2009 at 9:11 PM | Permalink

      Re: Gary (#72),

      The problem with most of those studies is that they’re portrayed as “fact” without being vetted or replicated.

  48. Steve McIntyre
    Posted Mar 4, 2009 at 9:16 PM | Permalink

    Paltridge’s articles on maximum entropy in the 1970s have been very influential and are still cited. I referred to them in passing in this post. There are a number of authors doing interesting work in this area – Ralph Lorenz, Ou, Pujols, …

  49. Steve McIntyre
    Posted Mar 4, 2009 at 9:34 PM | Permalink

    Folks, I realize that this sort of incident is provocative. But please do not use that as an excuse to over-editorialize, pile on or go a bridge too far. If you wish to make a point on a contentious topic, stick to the narrowest comment without generalizing, please.

    • bernie
      Posted Mar 5, 2009 at 7:16 AM | Permalink

      Re: Steve McIntyre (#78), Steve:
      Can you clarify the primary theme of this thread: the sign and size of the water vapor feedback, or the review process? Thanks.

      Steve: I’ve moved a few posts to Unthreaded. Please resist the temptation to debate AGW from first principles in one-paragraph bites on every thread. If you wish to discuss NCEP or Paltridge or Dessler, fine, but please do not use this to introduce other issues.

  50. Posted Mar 4, 2009 at 9:55 PM | Permalink

    Before I made a first read of Paltridge et al I expected to see an uh-oh of a paper. I thought it would be like the 2005 hurricane papers, where the authors made conclusions based on the face value of flawed data.

    After my first read I’ve put my prejudices and biases on hold and need to make a deeper read before forming an opinion.

    Paltridge et al. appears to search for salvageable, useful portions of the reanalysis humidity data which can be culled from the badly-flawed portions. Maybe, maybe not, but they make the attempt.

    It looks to me like a reasonable effort. A lot of science consists of pulling limited information from flawed data. Yes, it’s “ugh” data but, in a topic with very limited data alternatives, it’s worth the try and worth a read.

  51. Posted Mar 4, 2009 at 9:57 PM | Permalink

    In a paper examining trends in cirrus (due to contrails) between 1971 and 1995, Minnis et al. (JC, 2004) showed negative trends in RH at 300 mb over most non-polar areas based on the NCEP reanalysis. Those trends were compared to radiosonde data and found to be similar, except over western Asia and the US. Those results appear to be mixed. However, comparisons with land cirrus observations revealed that annual mean cirrus was highly positively correlated with the NCEP RH trends everywhere but over the US and Western Europe, where air traffic is heavy. A disjoint in the correlation would be expected there because of rising contrail formation due to heavy air traffic. The consistency in the high cloud coverage and the upper tropospheric RH indicates that the trends are more robust than one might expect from the apparently mongrelized assimilation data over the period of record. The results, though representative of 300 hPa only, support what is reported by Paltridge et al.

    Weak negative correlations were found between the mean annual NCEP RH and cirrus over oceans, but again, most of the data over oceans are in the air traffic corridors, where contrail formation and raw aircraft emissions could affect the cirrus trends more than over land because of the greater susceptibility of the more pristine marine air. A comparison of ECMWF and NCEP trends for a common period (1985-1996) yields rather different results, with ECMWF showing wetter areas with increasing RH and drier areas with dropping RH. So it is not clear what would happen with other datasets.

    Wang et al. (GRL, 2002; 29, No. 10, 10.1029/2001GL014264) used SAGE, ERBE, and CERES data to show that the frequency of clouds in the upper troposphere decreased between 1985 and 1998, while the OLR increased. The change in cloud height frequency explained 46% of the OLR increase. These results also indicate decreased humidity in the upper troposphere.

    It is good to see in the Paltridge et al. paper that reanalysis humidity is examined more closely in terms of climate feedback. This is an area needing a lot more study, but without an agenda.

  52. Geoff Sherrington
    Posted Mar 4, 2009 at 10:06 PM | Permalink

    For those who do not know, here is a brief c.v. of Garth Paltridge:

    “Emeritus Professor Garth W. Paltridge BSc Hons (Qld), MSc PhD (Melb), DSc (Qld) has held positions as Chief Research Scientist, CSIRO Division of Atmospheric Research, Director of the Institute of Antarctic and Southern Ocean Research at University of Tasmania and Chief Executive Officer of the Antarctic Cooperative Research Centre.”

    Both his academic qualifications and his experience are substantial and directly germane to Climate Audit.

  53. Paul Penrose
    Posted Mar 5, 2009 at 12:46 AM | Permalink

    Personally I’ve gotten to the point where I don’t put much stock in the outputs of any of these models, including the reanalysis efforts. They are all research codes, not engineering codes, which means that the strongest conclusion you can make using their output is that they support the plausibility of a theory being correct. That’s it. If you want more than that you need to have a quality process in place when designing, writing, and testing the software. Even then you can’t guarantee there won’t be any bugs in it, just that they have been reduced to (possibly) acceptable levels.

    In my opinion as a 25 year software engineering veteran these research models are more likely to have serious bugs in them than not. The people that wrote them just don’t know it because they stopped testing the software when it started producing results that they expected. Talk about confirmation bias!

  54. Ken Gregory
    Posted Mar 5, 2009 at 1:07 AM | Permalink

    I updated the relative humidity graph to 2008. It is worth presenting again here: [graph: relative humidity, updated to 2008]

    The relevant discussion of the water vapour effect from the IPCC Fourth Assessment Report (Chapter 8 page 632):

    The radiative effect of absorption by water vapour is roughly proportional to the logarithm of its concentration, so it is the fractional change in water vapour concentration, not the absolute change, that governs its strength as a feedback mechanism. Calculations with GCMs suggest that water vapour remains at an approximately constant fraction of its saturated value (close to unchanged relative humidity (RH)) under global-scale warming (see Section 8.6.3.1). Under such a response, for uniform warming, the largest fractional change in water vapour, and thus the largest contribution to the feedback, occurs in the upper troposphere.

    This means that changes in specific humidity in the upper troposphere (300 – 700 mb) may be very significant even though the amount of water vapour there is low due to the cold temperatures.

    If relative humidity remains constant, CO2 induced warming would cause increasing specific humidity and a strong positive feedback. But if relative humidity is actually falling (due to water vapour being displaced by CO2 as per Miskolczi) then water vapour may cause a negative feedback. The specific humidity has declined dramatically in 2008 at ALL levels in the troposphere.

    I do not know the accuracy of the NCEP reanalysis data on upper tropospheric humidity, but the direct measurement of humidity by weather balloons seems preferable to the very indirect determination from satellite data.
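    A minimal worked example of the fractional-change point in the IPCC passage quoted above, assuming only standard Clausius-Clapeyron constants: with relative humidity held fixed, the fractional change in saturation specific humidity per kelvin is roughly L/(Rv·T²), which is largest where the air is coldest – i.e. in the upper troposphere. The temperatures below are illustrative round numbers.

    ```python
    # Clausius-Clapeyron scaling of saturation humidity with temperature.
    # d(ln es)/dT = L / (Rv * T^2): the colder the layer, the larger the
    # fractional change per kelvin at fixed relative humidity.
    L = 2.5e6   # latent heat of vaporization of water, J/kg (approximate)
    Rv = 461.5  # specific gas constant for water vapour, J/(kg K)

    for label, T in [("near-surface", 288.0),
                     ("mid-troposphere", 260.0),
                     ("upper troposphere", 230.0)]:
        pct_per_K = 100.0 * L / (Rv * T ** 2)
        print(f"{label:17s} (T = {T:.0f} K): ~{pct_per_K:.1f} % per K")
    ```

    This gives roughly 6.5 % per K near the surface and about 10 % per K in the upper troposphere, which is the sense in which the largest fractional change, and thus the largest feedback contribution, occurs aloft.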

  55. Alan Wilkinson
    Posted Mar 5, 2009 at 3:36 AM | Permalink

    Isn’t it time to do a PCA of “climate science” publications to determine the relative frequencies of papers positive, neutral and negative to IPCC conclusions together with trends over time?

    It could even look at spatial autocorrelation between authors and conclusions.

  56. Stuart Harmon
    Posted Mar 5, 2009 at 3:43 AM | Permalink

    It is better to stir up a question without deciding it, than to decide it without stirring it up. It is better to debate a question without deciding it than to decide it without debating it.

    Quote by Joseph Joubert

  57. Steve Carson
    Posted Mar 5, 2009 at 6:04 AM | Permalink

    I hope it’s not too off-topic to comment on the side-show going on here. The standard media, and especially the “peer-reviewed” media, once had a stranglehold on a given debate. A few key journal editorial boards could “hold the line” on what was published.

    I don’t think they have yet realized that the world has changed and that their gate-keeping now has the opposite effect of what it had 10 years ago. Once it could “inform” the consensus opinion; now it has the potential to hold these same editors up to ridicule over the coming years.

    Regardless of what they personally believe, their censorship of dissenting climate opinions may have a short-term effect of “holding the line” as far as the uninformed public view is concerned. But there are a million-plus watching this on CA and other blogs. If the science of AGW was stronger, we all know we would be seeing a different sideshow played out.

  58. EddieO
    Posted Mar 5, 2009 at 6:10 AM | Permalink

    Of course I was assuming here that the definition of “feedback” used in climate modelling is the same as my understanding of feedback.

  59. KevinUK
    Posted Mar 5, 2009 at 6:28 AM | Permalink

    Steve,

    I’m disappointed in you. You know full well that your job as our resident Toto is not just to ‘peek behind the curtain’ but to pull it back. Shame on you!

    Garth, do you mind if from now on I call you Dorothy?

    KevinUK

  60. Douglas Hoyt
    Posted Mar 5, 2009 at 7:12 AM | Permalink

    Of all the climate scientists I have met over the years, Garth Paltridge has impressed me as the most intelligent and competent of them all.

  61. Douglas Hoyt
    Posted Mar 5, 2009 at 7:54 AM | Permalink

    It is also worth mentioning that, in 1984, Hugh Ellsaesser predicted the upper troposphere would become drier if the atmosphere warmed. He argued that atmospheric humidity is controlled by convection and by the general circulation, not solely by temperature. In contrast, the climate models have temperature as the controlling factor for humidity.

    Ref.: Ellsaesser, H. W., 1984. The climatic effect of CO2: A different view. Atmos. Environ. 18, 431-434; 1495-1496.

  62. Basil
    Posted Mar 5, 2009 at 8:53 AM | Permalink

    The paranoia arises because of another issue. We know that at least one of the authors is well aware of the contrary story told by the raw balloon data. But there is no mention of it in their article.

    After all is said and done, this final sentence is what distresses me the most. In a former time I had some modest role in publishing in peer reviewed journals, and serving as a referee. One of the things we always looked for was whether all of the “relevant literature” was considered. Referees were not tasked with concluding whether the conclusions were right or wrong. We were specifically not to make that judgment. As long as no relevant literature was ignored, and there were no obvious deficiencies in methodology, and the conclusions had some bearing on a relevant research topic (i.e. were not trivial, mundane, or off topic for the journal) our task was done.

    The peer review process is so broken that it has become meaningless. I recently employed a statistical procedure in litigation that has been published in GRL. My opponent tried to discount it as “gray literature.” Now “gray literature” normally refers to non-peer reviewed publication (like conference proceedings). I think what my opponent was getting at is that the editors of GRL decide what to publish without asking for normal or formal peer review. Now I do not know if that is the case or not. It was almost as if my opponent were comparing GRL to E&E in the nature of the review process employed. My response would have been (the matter was settled without going to trial) to point to all of the GRL papers cited by IPCC. But this just goes to show that the peer review process itself has been co-opted into the service of political agendas, and no longer works the way it was intended.

  63. Bob North
    Posted Mar 5, 2009 at 9:06 AM | Permalink

    Several months ago, I did a quick and dirty, back-of-the-envelope type calculation of globally averaged specific humidity trends using the NCEP/NCAR reanalysis data available on the web for the period 1964 to 2007. I got essentially the same results as Paltridge et al., with only the lowest part of the troposphere (1000 and 925 mbar) showing an increasing trend much different from zero. The highest levels, 500 mbar and up, showed an ever so slight decreasing trend. Since so much more water is in the lower troposphere, the overall trend was up. I do think some type of evaluation of the concerns with the NCEP reanalysis data, by looking at other reanalysis products, is clearly warranted.
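    For readers who want to reproduce this kind of back-of-the-envelope check, here is a minimal sketch in Python. It assumes the standard NCEP/NCAR Reanalysis 1 monthly-mean specific humidity file (shum.mon.mean.nc, variable shum, in g/kg) as distributed by NOAA/ESRL; the file and variable names are assumptions that may differ by source, and this is not Bob North’s actual calculation.

    ```python
    # Global, area-weighted specific humidity trend at each pressure level
    # from NCEP/NCAR Reanalysis 1 monthly means (assumed file/variable names).
    import numpy as np
    import xarray as xr

    ds = xr.open_dataset("shum.mon.mean.nc")        # dims: (time, level, lat, lon)
    q = ds["shum"].sel(time=slice("1964", "2007"))  # specific humidity, g/kg

    # Area weighting by cos(latitude) for the global mean at each level.
    weights = np.cos(np.deg2rad(q["lat"]))
    q_global = q.weighted(weights).mean(dim=("lat", "lon"))

    # Ordinary least-squares trend per level, expressed per decade.
    t_dec = (q_global["time"] - q_global["time"][0]) / np.timedelta64(3652, "D")
    for lev in q_global["level"].values:
        slope = np.polyfit(t_dec, q_global.sel(level=lev), 1)[0]
        print(f"{lev:6.0f} hPa: {slope:+.4f} g/kg per decade")
    ```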

  64. David Snyder
    Posted Mar 5, 2009 at 9:30 AM | Permalink

    I think it’s obvious humidity cannot have a positive feedback. If it did, it would go to 100% and stay there. It does not do that. And if it did, it would be cloudy all the time and little sunlight would get through – a negative feedback.


    Steve:
    I don’t think that anything is obvious. Let’s focus on smaller bites: something like, what is the status of the NCEP reanalysis? … where there’s a chance of improving our collective understanding of something finite.

  65. Posted Mar 5, 2009 at 9:34 AM | Permalink

    This is not very scholarly or scientific, but a Google Scholar search for the exact phrase “NCEP reanalysis” gives “about 6,800” hits. The Recent Articles option applied to the complete list gives “about 3,370” hits. These latter include articles published into 2009.

    • Posted Mar 5, 2009 at 11:19 AM | Permalink

      Re: Dan Hughes (#97), A reanalysis dataset is a good tool for the following reasons:

      (1) Consistent model (most important for identifying observing system changes and inhomogeneities).
      (2) Decades long (from 30 to 60 years).
      (3) Optimally combines all available observations, both satellite and in situ (radiosonde, surface obs, ship, aircraft), using advanced data assimilation procedures.

      Thus, many climatologists and meteorologists utilize the NCEP Reanalysis as well as the ERA-40 for a variety of studies. However, as outlined by Bengtsson et al. (2004) and others (see the IPCC AR4 chapter by Trenberth), reanalysis datasets are not typically employed for climate change studies without considerable bias correction, and that is a difficult proposition in itself. Thus, the number of studies out of the thousands that use reanalysis datasets that do this is likely less than 1%.

  66. Posted Mar 5, 2009 at 9:37 AM | Permalink

    Oh, those would all be peer-reviewed publications, of course. I’m not sure of the pedigree of the journals.

  67. Posted Mar 5, 2009 at 11:24 AM | Permalink

    Again, not at all scholarly or scientific, but a Google Scholar search for the exact phrase “long-term trend” ‘with all of the words: NCEP reanalysis’ gives “about 1,710” hits and “about 759” Recent Articles.

  68. Bill Illis
    Posted Mar 5, 2009 at 11:25 AM | Permalink

    Given how important the water vapour question is, how come we do not have good data to rely on?

    There are dozens of satellites capable of tracking water vapour. It has been commonly measured at half the weather stations around the world for over 100 years. The data is built into every single weather forecast and climate model in use today.

    It is surprising, to say the least, that there is not a reliable, publicly available dataset.

  69. Steve McIntyre
    Posted Mar 5, 2009 at 12:22 PM | Permalink

    Paltridge also made the following observation about a workshop:

    The audience was split as to whether the existence of the NCEP trends in humidity should be reported in the literature. Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them. Those ‘for’ simply said that scientific reportage shouldn’t be constrained by the politically correct.

    Maybe Dessler or Sherwood or someone else could comment on whether this is an accurate report of the workshop?

  70. Jason
    Posted Mar 5, 2009 at 12:41 PM | Permalink

    If I understand things correctly, the climate models predict relative humidity in the upper troposphere to be close to 100% at all times.

    If this is correct, shouldn’t it be easy to verify? There is no need to compare measurements taken decades apart. A few simultaneous measurements of humidity and temperature in the upper troposphere should suffice.

  71. jc-at-play
    Posted Mar 5, 2009 at 12:48 PM | Permalink

    Paltridge states that

    A couple of weeks after the knock-back, and for unrelated reasons, two of us went to a small workshop on water vapour held at LDEO in New Jersey, whereat we told the tale. The audience was split as to whether the existence of the NCEP trends in humidity should be reported in the literature. Those ‘against’ (among them a number of people from GISS) simply said that the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate where there are horrible people who might make sinful use of them.

    Are there any CA readers who were present at this workshop? As a confirmed skeptic (about everything), I’d like to hear if there’s another side to the story, before knowing what to make of the incident.

  72. bender
    Posted Mar 5, 2009 at 12:57 PM | Permalink

    Dr. Dessler,
    Show me the review article that definitively explores where this data can and can’t be used. Your opinion is valuable, but not authoritative. You may be right, but on the surface, and at the moment, it appears to be a case of special pleading. Can you show that this is not the case?

    • Andrew Dessler
      Posted Mar 5, 2009 at 1:18 PM | Permalink

      Re: bender (#118),

      Unfortunately, I don’t think such a paper exists. People generally do not write papers saying data set x cannot be used for problem y. The exception would be if someone published a paper using x for problem y; then someone might write a paper pointing out that x cannot be used for y. Perhaps Paltridge’s paper will elicit such a response. I do know that several researchers have worked on getting trends out of the water sonde measurements, and it’s difficult. (I can’t find the cites right now)

      I would also point out that perhaps my claims about trends in reanalysis were overly broad. I’m sure there are some trend analyses that could be done with the NCEP or ERA40 reanalysis, particularly if the data being assimilated was well understood and reliable. But water vapor from sondes does not qualify in my opinion.

      • bender
        Posted Mar 5, 2009 at 2:28 PM | Permalink

        Re: Andrew Dessler (#121),

        Unfortunately, I don’t think such a paper exists.

        If that is the case – and I don’t doubt you – then I have difficulty understanding how Eli Rabett can speak so authoritatively on the subject:

        The problem is that everyone knows that the NCEP reanalysis has significant problems with humidity, and anyone who doesn’t is not clued in.

        He is saying Paltridge is not “clued in”, which strikes me as unlikely. But maybe Eli can clarify how this “everyone” got to be so informed if there is no definitive document on the subject?

  73. bender
    Posted Mar 5, 2009 at 1:11 PM | Permalink

    A scientific paper is to be judged on its scientific merit alone, not on the basis of how it may or not be interpreted or misinterpreted once published. The reviewer’s comment is shocking and disgusting. No scientist I know would agree with the politicization of his field.

  74. Harry Eagar
    Posted Mar 5, 2009 at 1:49 PM | Permalink

    For sure, there must be negative feedbacks somewhere, otherwise we would not be here.

    You’d think there would be a massive research program to determine what they are and to cherish and preserve them.

  75. hswiseman
    Posted Mar 5, 2009 at 2:06 PM | Permalink

    Perhaps the best part of CA is its function as a forum where I get to read Paltridge, Maue, Sherwood, Sherrington, Loehle, Dessler, McIntyre and others kick around an interesting topic. Dessler is not responsible for every use or misuse of the NCEP data and has no real obligation to justify what others have done with it. If someone has a problem with what Dessler actually says, speak up – it appears AD would be happy to have a joust.

    • bender
      Posted Mar 5, 2009 at 2:16 PM | Permalink

      Re: hswiseman (#129),
      I see that as a very different thread. This one being about “the curtain”. I wish the two weren’t intermingled.
      .
      The other question that needs to be asked here: did the J. Climate editor act on the quoted part of the reviewer’s comment, or did he find it to be deplorably non-scientific? Worse: maybe the reviewer offered the comment because he knows the editor appreciates that kind of frank feedback.

  76. Simon Evans
    Posted Mar 5, 2009 at 2:20 PM | Permalink

    FWIW (not a lot, of course, but my comment last night kicked up some reaction), I also think that the quoted reviewer’s comment was improper. However, it does not follow that the basis of that reviewer’s judgment, nor the basis of the editor’s judgment, was the attribution of motive suggested in the quoted comment. That is why, without seeing the rest of the reviewer’s comments, the quotation is out of the context of the judgment. For a hypothetical paper, one might imagine basic errors being criticised, leading to a comment “I don’t know why you’ve submitted this other than wishing to get your name in print”. Such a comment would also be improper, but the judgment would not be.

    Given this site’s pride in its determination to examine evidence fully rather than accepting reported conclusions, I think it is a pity that some have leapt to the judgment that this story is evidence of a paper being rejected on ‘political’ grounds.

    • bender
      Posted Mar 5, 2009 at 2:49 PM | Permalink

      Re: Simon Evans (#133),

      it is a pity that some have leapt to the judgment that this story is evidence of a paper being rejected on ‘political’ grounds.

      snip

      Steve: Simon, I agree with bender’s annoyance at this sort of unreferenced spitball (and without a reference, it is no more than a spitball). Please refer to a specific comment rather than using Gavinesque terms like “Some have argued…”

    • Steve McIntyre
      Posted Mar 5, 2009 at 3:29 PM | Permalink

      Re: Simon Evans (#133),

      I’ve snipped a food fight. For the record, I agree with bender’s annoyance at:

      some have leapt to the judgment that this story is evidence of a paper being rejected on ‘political’ grounds

      Without a reference, this is no more than a Gavinesque spitball. “some have said…”.

      The punch line of the article was not a complaint about the rejection process but the failure of a “review article” to report this line of evidence, even if to dismiss it, while not stinting in self-references.

      We know that at least one of the authors is well aware of the contrary story told by the raw balloon data. But there is no mention of it in their article.

  77. Dave
    Posted Mar 5, 2009 at 2:22 PM | Permalink

    I think this would be a good time for Paltridge to produce the referee’s comments. Call me a cynic, but I’m not inclined to take his interpretation of the facts as being the same thing as the facts.

  78. Bill Illis
    Posted Mar 5, 2009 at 2:30 PM | Permalink

    Regarding the data, though, declines of this magnitude (4 to 10 percentage points) in relative humidity would have produced significant cooling in the middle and upper troposphere. So there probably is reason to question the numbers.

    The models assume very small declines and increases (less than 1 percentage point) in relative humidity at these levels over the same period (depending on height), so the data would be very inconsistent with the models and the theory.

  79. dearieme
    Posted Mar 5, 2009 at 2:32 PM | Permalink

    Can anyone elaborate on the remark “Climate models (for various obscure reasons) tend to maintain constant relative humidity at each atmospheric level”? Does he mean that this condition is imposed on the models, or that it turns out to be what they predict? Why are the reasons obscure? Anyone?

  80. David L. Hagen
    Posted Mar 5, 2009 at 2:58 PM | Permalink

    As an engineering session chairman, I would not tolerate such a reviewer and would find another more objective person if at all possible. I have not had to deal with that degree of political correctness and abuse of science experienced by Paltridge.

    snip- irrelevant

    These differences in humidity suggest major differences in model behaviour above and below the top of the convective boundary layer: e.g., the decreasing trend above suggests a change in the temperature lapse rate, while the increasing humidity trend below might be compensated by an increase in convection and precipitation.

    How can we know what really is occurring without vigorously pursuing ALL models to see which better fit the data and then which can be validated?

    snip – editorializing

  81. Steve Case
    Posted Mar 5, 2009 at 3:33 PM | Permalink

    Climate models …tend to maintain constant relative humidity at each atmospheric level, and therefore have an increasing absolute humidity at each level as the surface and atmospheric temperatures increase. This behaviour in the upper levels of the models produces a positive feedback which more than doubles the temperature rise calculated to be the consequence of increasing atmospheric CO2.

    So that means that of the 0.6°C increase last century CO2 contributed just 0.3°. When you consider that the “Greenhouse” effect is logarithmic, wouldn’t that affect subsequent water vapor feedback values? In other words, doesn’t water vapor follow the same logarithmic rules that CO2 does? Doesn’t it follow that the increased amount of CO2 to achieve the same 0.3° rise will only increase water vapor by the same amount as before? And so, doesn’t the water vapor feedback become less and less as CO2 increases?
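    A small numerical sketch of the logarithmic point raised above, using the common simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998). It shows that each successive doubling adds the same forcing, so the forcing per added ppm declines, while a feedback of fixed strength scales with the forcing rather than with the absolute CO2 amount. The concentrations chosen are illustrative.

    ```python
    # Simplified CO2 radiative forcing: each doubling adds ~3.7 W/m^2,
    # so the forcing per additional ppm shrinks as concentration rises.
    import math

    C0 = 280.0  # pre-industrial CO2, ppm (conventional reference value)
    for C in (280.0, 380.0, 560.0, 1120.0):
        dF = 5.35 * math.log(C / C0)
        print(f"{C:6.0f} ppm: forcing above pre-industrial = {dF:5.2f} W/m^2")
    ```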

  82. bender
    Posted Mar 5, 2009 at 3:39 PM | Permalink

    Maybe Simon Evans can turn his attention to something more substantive, such as #135: the factual basis for Eli Rabett’s contention that “everyone” knows that “NCEP has significant problems with humidity”, with the insinuation that Paltridge must be among those not “clued in”.

  83. Simon Evans
    Posted Mar 5, 2009 at 3:51 PM | Permalink

    Moi: “it is a pity that some have leapt to the judgment that this story is evidence of a paper being rejected on ‘political’ grounds.”

    snip

    Steve: Simon, I agree with bender’s annoyance at this sort of unreferenced spitball (and without a reference, it is no more than a spitball). Please refer to a specific comment rather than using Gavinesque terms like “Some have argued.,..”

    Ok, Steve, here you go:-

    12.: “Politics should not be driving science, yet there it is. If it goes against your conclusions and against your politics, it’s “iffy” and must be rejected forthwith.”

    29: “This illustrates that scientific enquiry is inherently and inescapably political in that it is used to support a political end. Science is about power.”

    34: “This makes me mad – far madder than all the censorship at RC does (which just misinforms the public).

    When even genuine climate scientists cannot get a short article published that tries to make other climate scientists aware of data that might have a slightly negative effect on AGW theory (as in, the CO2 warming might not be as bad as predicted by climate models), well, you know for certain that climate science is no longer functioning as a science.”

    39: “Have scientists really openly lost their objectivity to the point that anything casting doubt on their so-called “consensus view” must be silenced?”

    68: “for reviewers to dismiss it for political reasons is disturbing.”

    69: “I think the point is that papers are being rejected because they might provide ammunition to the wrong side.”

    72: “If you refuse to allow conflicting studies and views you can never get any closer to the truth.”

    86: “their censorship of dissenting climate opinions”

    93: “this just goes to show that the peer review process itself has been co-opted into the service of political agendas, and no longer works the way it was intended.”

    100: “Paltridge’s quote makes it quite clear that the report was suppressed – in part – because of the political implications of its publication.”

    128: “Such review comments only serve to confirm a suspicion that there is indeed an agenda, and whether the science is right or wrong, we must follow the agenda or be damned.”

    132: “the author would have been left with pure rationalisations in the Freudian sense, and no hint to the dirty reason why his paper really was turned down.”

    I trust that is enough references for you, to explain my generalised comment (which I had no wish to direct at any poster in particular). I recognise that many other posters have suggested judgment should be reserved. I maintain my opinion that it’s a pity when people leap to conclusions regarding, in this case, the judgment of the reviewer and editor.

    Steve: Fair enough. I asked people as follows:

    Folks, I realize that this sort of incident is provocative. But please do not use that as an excuse to over-editorialize, pile on or go a bridge too far. If you wish to make a point on a contentious topic, stick to the narrowest comment without generalizing. please.

    I’ve been working on some other matters and there are a lot of points here that breach policies, particularly given my explicit request to dial back this sort of rhetoric. I believe that there is a valid line of criticism here, but it is important not to extrapolate beyond the narrowest point. I concede your point and express my annoyance to the readers who have made the piling-on comments above.

    • bender
      Posted Mar 5, 2009 at 4:13 PM | Permalink

      Re: Simon Evans (#150),
      1. My #100 does not say what Simon says it does. I specifically wrote “in part” so that I would NOT be lumped in with others jumping the gun. Yet here he is misrepresenting what I said by taking it out of context. What’s that called again when you deliberately distort what people say?
      2. Simon asks for the full review to be disclosed. I, in fact, go one further and also ask the Editor to clarify what role the reviewer comment might have played in his decision. At this point I am far more interested in that than in the content of the paper or the quality of the NCEP reanalysis data, because it is such a simple and obvious question with a very simple answer. To focus on NCEP is a total distraction from the issue here. It is against blog rules to speculate why one might want to change the subject, so I won’t.

    • bender
      Posted Mar 6, 2009 at 1:26 AM | Permalink

      Re: Simon Evans (#149),
      My #114 supersedes and clarifies my #100 (and my #2), yet it is curiously not cited in Simon’s list. Selective quoting like this to fabricate a story is a bad habit he may want to consider breaking. This is not a spitball. This is about a guy who refuses to admit that Team work should be subjected to a greater level of scrutiny than work by authors with no such history. It’s “the silence of the lambs”, Steve. This is important, dammit.

  84. M. Villeger
    Posted Mar 5, 2009 at 3:52 PM | Permalink

    Steve, is Garth Paltridge going to provide more material in order to put the reviewer’s quote in perspective with regards to the scientific objections that were raised?

    It seems the Paltridge paper doesn’t aim at providing a definitive answer but at suggesting further investigation paths that the IPCC models so far appear not to have explored. Furthermore, Paltridge’s tone is far from condescending, suggesting a genuine concern to further the debate: isn’t that what science is all about?

  85. Steve McIntyre
    Posted Mar 5, 2009 at 3:56 PM | Permalink

    Perhaps a reasonable case can be made against using NCEP reanalysis for climate trends. But the case against Graybill bristlecone chronologies is beyond dispute.

    Imagine the reaction of Dessler and Sherwood if someone said – well, our model doesn’t work unless we use NCEP reanalysis data. They would laugh uproariously. But when Wahl and Ammann say – our model doesn’t work without Graybill bristlecone chronologies, ergo they contain “valid” information at the “eigenvector level”, no one in the “community” says boo to a goose. Even after the NAS panel said that this data should be “avoided”, PNAS accepted this data one more time in Mann et al 2008.

    The “community” seems to be better at catching errors in one direction but not the other – which contributes to cynicism from third parties.

  86. Jeff Alberts
    Posted Mar 5, 2009 at 4:00 PM | Permalink

    In a food fight, someone always ends up with egg on their face…

  87. Mark T
    Posted Mar 5, 2009 at 4:02 PM | Permalink

    Hysterics, indeed. You hit a nerve, bender.

    Re: Ryan Maue (#152),

    Long time-period trend analysis, especially when spanning the pre- and post-satellite eras, is not recommended.

    This has been said twice, and I do not disbelieve it, but why?
    .
    . (I finally figured out why Ryan O does this!)
    .
    Re: David Stockwell (#153),

    A note to the reviewers that you have snipped them would be needed for them to get the message, though whether you would be able to maintain a pool of reviewers then is questionable.

    One of the problems with allowing “review editing,” as David L. Hagen states in #142, is that the reviewer has shown his lack of objectivity, and his conclusions, while perhaps relevant and even correct, are suspect.

    Mark

    • Posted Mar 5, 2009 at 4:13 PM | Permalink

      Re: Mark T (#155), The best reference for this statement is made by Bengtsson et al. (2004) — and a freely downloadable version is available here. The published JGR version is located here.

      Section 6 Discussion of the PDF is a good/quick primer on the artificial biases/trends resulting from the evolution in the observation system (inclusion of new satellites as they come online).
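
      To make this concrete, here is a minimal numerical sketch (made-up numbers, not anything from Bengtsson et al.): a constant “true” climate observed with a step change in bias at a satellite transition yields a spurious least-squares trend.

        import numpy as np

        # Hypothetical illustration: constant truth, a step change in
        # observing-system bias at 1979, and the spurious trend an OLS fit reports.
        years = np.arange(1958, 2002)
        truth = np.full(years.size, 24.0)          # constant "true" IWV (mm)
        bias = np.where(years < 1979, -0.5, 0.5)   # assumed pre/post-1979 biases (mm)
        observed = truth + bias

        slope_per_decade = np.polyfit(years, observed, 1)[0] * 10
        print(f"spurious trend: {slope_per_decade:+.2f} mm/decade")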

      • Mark T
        Posted Mar 5, 2009 at 5:44 PM | Permalink

        Re: Ryan Maue (#160), Thanks, Ryan.

        ^Steve: sorry, back to back comments I figured would get snipped. I shouldn’t feed the bears.

        Mark

      • Kenneth Fritsch
        Posted Mar 5, 2009 at 7:05 PM | Permalink

        Re: Ryan Maue (#160),

        I excerpted the following from Ryan’s first link in the post above just to give a flavor of what the paper is comparing and analyzing.

        It would appear that the analysis of the claimed deficiencies in the IWV trend as reanalyzed by ERA40 is based on comparing those results with what the author deems as the theoretical values from the Clausius-Clapeyron relation and the model results based on that relation. I did not see any details of how the reanalysis deficiencies arise, but again the pre-1979 results appear to be judged less valid than those from after 1979.

        Click to access max_scirep_351.pdf

        The global trend in IWV for the period 1979-2001 is +0.36 mm per decade.
        This is about twice as high as the trend determined from the Clausius-Clapeyron relation
        assuming conservation of relative humidity. It is also larger than results from free climate
        model integrations driven by the same observed sea surface temperature (SST) as used in
        ERA40. It is suggested that the large trend in IWV does not represent a genuine climate
        trend but an artefact caused by changes in the global observing system such as the use of
        SSM/I and more satellite soundings in later years. Recent results are in good agreement
        with GPS measurements. The IWV trend for the period 1958-2001 is still higher but
        reduced to +0.16 mm per decade when corrected for changes in the observing systems.
        Total kinetic energy shows an increasing global trend. Results from data assimilation
        experiments strongly suggest that this trend is also incorrect and mainly caused by the
        huge changes in the global observing system in 1979. When this is corrected for no
        significant change in global kinetic energy from 1958 onwards can be found.

        The increase of IWV in ECHAM5 broadly follow the Clausius-Clapeyron relation with an increase of 6-7% in water vapor per Kelvin (Trenberth et al., 2003), the increase in ERA40 is almost twice as large. A similar calculation for the period 1958-2001 suggests an even faster increase than in 1979-2001 (Table 2) Again, this could be an artificial increase caused by the changes in the observing system. The IWV in the NOSAT experiment is actually 1.1 mm (4.3%) less than in ERA40 (Table 3). The most likely explanation is that the reduced observing system in NOSAT results in an underestimation of IWV due to the enhanced influence of the model bias (Bengtsson et al., 2004).

        To explore this discrepancy the ERA40 and the NOSAT experiment we compare IWV
        with in situ GPS measurements for the month of July 2000 and January 2001 (Hagemann
        et al., 2003). The results are summarised in Table 4 and Figure 4 and show that in most
        areas ERA40 agrees better with the GPS observations than does the NOSAT experiment.
        Furthermore, the NOSAT experiment has a dry bias in almost all areas. It is also dryer
        than ERA40 except over the central US during winter, where ERA40 has a slight dry bias
        compared to the GPS derived IWV (Hagemann et al., 2003). Because of the heterogeneous distribution of GPS measurements it is not possible to determine a reliable value of the typical global average but the indications are that the NOSAT assimilation underestimates IWV. The IWV of ERA40 may also be underestimated but less than in the NOSAT assimilation. The NCAR/NCEP has also been explored for the IWV trend and shows negative trends for both periods. We believe this is not credible given the overall warming trend both in SST and TLT (Table 1).
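
        As a side check on the “6-7% in water vapor per Kelvin” figure quoted above, here is a minimal sketch using the Magnus approximation for saturation vapor pressure (the formula and the reference temperature are conventional choices of mine, not taken from the paper):

          import numpy as np

          def e_sat(t_c):
              """Magnus approximation to saturation vapor pressure (hPa); t_c in Celsius."""
              return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

          t = 15.0  # assumed reference temperature (deg C)
          pct_per_kelvin = (e_sat(t + 1.0) / e_sat(t) - 1.0) * 100.0
          print(f"~{pct_per_kelvin:.1f}% more saturation vapor pressure per K")  # about 6-7%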

        • Mark T
          Posted Mar 5, 2009 at 9:31 PM | Permalink

          Re: Kenneth Fritsch (#170), Kenneth, from the one quote you posted it says:

          The NCAR/NCEP has also been explored for the IWV trend and shows negative trends for both periods. We believe this is not credible given the overall warming trend both in SST and TLT (Table 1).

          It sounds to me like they’re saying “our theory based on warming in both SST and TLT says the IWV trend should likewise be positive, but since it is negative, it cannot be trusted.” Do you concur? If so, doesn’t this seem rather circular, or am I missing something?

          Mark

  88. Simon Evans
    Posted Mar 5, 2009 at 4:03 PM | Permalink

    By the way, Steve, post 146 above appears to contravene your blog policy.

    Steve: quite so. please draw such things to my attention (and allow for the fact that I’m not an automaton) rather than engaging in a dispute. Others, please do the same.

  89. Mark T
    Posted Mar 5, 2009 at 4:07 PM | Permalink

    snip –

    please stop this sort of food fight. I can’t stand it.

  90. Mark T
    Posted Mar 5, 2009 at 4:08 PM | Permalink

    snip – are you asking to be put on a moderation queue?

  91. Steve McIntyre
    Posted Mar 5, 2009 at 4:38 PM | Permalink

    A workshop proceeding cited at Lucia’s against reanalyses says:

    This study suggest that the future Reanalyses includes a sub-analysis using only the limited well-known, high quality, fixed number stations for GDAS, such that a baseline reference analysis for the full analysis can be established. Meanwhile, it is indispensable to conduct the parallel processes for extended period whenever a new instrument, or processing system, is introduced, such that the impact from the new instrument/process can be understood.

    The idea of focusing on high-quality stations is very much in line with comments made here and at Anthony’s about defective USHCN stations. Inhomogeneities in radiosonde data have also been discussed here in the past: e.g. Leopold in the Sky with Diamonds. The idea that a climate data set is screwed up is not one that should take CA readers by total surprise. As I’ve advised on other occasions – merely because readers “like” one set of results doesn’t mean that they should abandon the sort of criticism that would apply to a USHCN station.

    I mentioned this sort of point recently in connection with Antarctic cooling, where readers need to keep in mind the frailty of early Antarctic data, which might well permit reanalysis – and so-called “cooling” might well depend on frail data. Whether Steig has done a proper reanalysis is a different issue though.

  92. Molon Labe
    Posted Mar 5, 2009 at 4:45 PM | Permalink

    Apparently a “Reanalysis” involves running a climate model informed with “future” observational data. But the results can’t be trusted for evaluating trends or even getting RH correct.

    Yet it’s fine to run the same climate models with no future information and trust the results implicitly.

    Steve: My understanding is that reanalysis humidity depends on radiosondes about which there are many inhomogeneity issues – I don’t know this for sure, but I suspect that placing the blame on GCMs may be unfair in this case.

    • Andrew Dessler
      Posted Mar 5, 2009 at 5:31 PM | Permalink

      Re: Molon Labe (#163),

      A reanalysis system is a climate model, but the model is “nudged” toward observations at every time step, wherever observations are available. Assimilated data can include temperature, humidity, wind speed, etc. One way to think about this is that it is an interpolation system that produces a picture of the atmosphere as consistent as possible with all of the observations going into it.
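
      A toy version of that nudging idea (a bare sketch with invented numbers; operational assimilation is far more elaborate):

        # Toy nudged integration: the model state drifts, and is relaxed toward
        # sparse observations (None = no observation available at that step).
        observations = [10.0, None, None, 10.5, None, 11.0]  # hypothetical values
        x = 9.0       # initial model state
        gain = 0.5    # assumed nudging weight

        for step, y in enumerate(observations):
            x += 0.2                  # model step with an assumed drift
            if y is not None:
                x += gain * (y - x)   # nudge the state toward the observation
            print(f"step {step}: analysis = {x:.2f}")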

      • Molon Labe
        Posted Mar 5, 2009 at 6:17 PM | Permalink

        Re: Andrew Dessler (#164),

        Andrew, that’s my point. Even with nudging and hints the reanalysis is suspect, but GCM modeling of future climate…no problemo.

  93. Papertiger
    Posted Mar 5, 2009 at 6:32 PM | Permalink

    It just occurred to me that the Steig findings (well, not the popular media spin) – but the findings that a gradual continent-wide cooling of Antarctica is robust even to hockey team member machinations – could be interpreted as confirmation of Paltridge’s dropping relative humidity in the upper troposphere. South Pole station’s average barometric pressure is in the 600 – 700 mb range.

  94. Kenneth Fritsch
    Posted Mar 5, 2009 at 7:20 PM | Permalink

    I would be remiss if I did not include the author’s caveats from my previous post above.

    As has been pointed out many times previously, for example, Goody et al. (2001) and
    Trenberth et al. (2002), the present observing system, which was essentially set up to
    support weather forecasting, is not directly suitable for climate monitoring

    I would think this would be evident for those who have looked at similar problems with the surface temperature records that were originally devised to record weather and not climate.

    Because of limited resources this study must be seen as a very preliminary one where only the effect of the major change in the global atmospheric observing system which took place in 1979 is considered. For computational reasons the experimental periods have been limited to three shorter periods.
    The overall finding in this paper is model dependent to some extent and therefore cannot be generalized. Experiments with another assimilation systems will give different results since it depends on model biases. Yet it should be possible to correct for artificial trends in the same way as done in the present study.

    Returning finally to the question in the title of this study a fully affirmative answer
    cannot be given. However, it is believed that there are ways forward as indicated in this
    study which in the longer term are likely to be successful. The study also stresses the
    difficulties in detecting long term trends in the atmosphere and major efforts along the
    lines indicated here are urgently needed.

    My bold above for emphasis.

    • Posted Mar 5, 2009 at 8:30 PM | Permalink

      Re: Kenneth Fritsch (#171), Even simpler from that paper: prior to a given date, you have a certain observing system. Take 1979 as the pre-SSM/I era for instance. The ERA-40 is happily going along with its model integration: the observations are being ingested, weighted, and variational data assimilation is progressing as normal.

      The model has a certain bias associated with the data at hand. Now, all of a sudden, a new data source comes along in 1979. This new satellite data, for instance radiance data (temperature profiles) is now given additional weight in the model since the data is so densely distributed. After a short while, the background model climatology or the model bias changes. This change in model bias is artificial, but can be mistaken for an actual change in the atmosphere/climate. You can be honest and say “caveat” this and “we recognize data problems” but that is misleading in my book.
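
      A hypothetical sketch of that mechanism (invented numbers, not any actual assimilation code): treat the analysis as a weighted blend of model background and observations; when dense satellite data arrives, the observation weight jumps and the analysis shifts even though both inputs are steady.

        # Steady background and observation means, but the observation weight
        # jumps in 1979 when dense satellite data comes online.
        background, obs_mean = 23.0, 24.0          # assumed steady values (mm IWV)
        for year in (1975, 1978, 1979, 1985):
            w_obs = 0.2 if year < 1979 else 0.6    # assumed pre/post-1979 weights
            analysis = (1 - w_obs) * background + w_obs * obs_mean
            print(year, round(analysis, 2))        # the analysis steps up at 1979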

      • Kenneth Fritsch
        Posted Mar 6, 2009 at 10:50 AM | Permalink

        Re: Ryan Maue (#175),

        The model has a certain bias associated with the data at hand. Now, all of a sudden, a new data source comes along in 1979. This new satellite data, for instance radiance data (temperature profiles) is now given additional weight in the model since the data is so densely distributed. After a short while, the background model climatology or the model bias changes. This change in model bias is artificial, but can be mistaken for an actual change in the atmosphere/climate. You can be honest and say “caveat” this and “we recognize data problems” but that is misleading in my book.

        I think this is the essence of what I am personally attempting to understand about the potential errors in the reanalysis. My problem remains in attempting to understand how this baseline change would create a problem if one wanted to use 1979 as a starting point and work forward.

        If the problem were temperature anomalies and I wanted to use the satellite data that started in 1979 I see no reason for the past non-satellite temperature data interfering as long as I confined my studies to the 1979-2008 time period. I must be missing something in relating this to humidity.

        If there are further problems with reanalyses coming forward from 1979, I would like to know specifically what they are. I get frustrated by studies that give only hazy reference (for this layperson) to problems but buttress their contentions on the comparison of the observed results with modelled results.

        Also in the paper on which this thread was initiated: how much of the conclusion (with caveats) is based on the pre-1979 humidity reanalysis data?

        One final point: the changes that you suggest occur in reanalysis would appear to be gradual, and not detectable in the way that surface temperature changes are, where break (change) points in the time series are used for adjusting data. I have read papers by Christy (I think) where the authors used change points to adjust radiosonde data (where they corresponded with instrument changes) and found a better fit with satellite data.

  95. bender
    Posted Mar 5, 2009 at 7:37 PM | Permalink

    It would be interesting to see the review in full. But it would also be interesting to see the other reviews at Theoretical and Applied Climatology. Clearly there is someone in the world who thinks the paper had some merit. Am I the only one interested in those reviews? Apparently.

    • Mark T
      Posted Mar 5, 2009 at 9:25 PM | Permalink

      Re: bender (#172),

      Am I the only one interested in those reviews? Apparently.

      I am, too.

      Mark

  96. Craig Loehle
    Posted Mar 5, 2009 at 8:02 PM | Permalink

    I have probably reviewed 400 papers for journals over the years. No doubt I have sent in mistaken reviews. But I have never sent off an insult, nor imputed motive, nor suggested something was too hot to publish. On the other hand, when dealing with sensitive topics, and only then, I have gotten reviews that are simply rude, extremely brief, and completely fail to justify their rejection (e.g., “this is the worst paper I have ever read” or “you can’t do what the author has set out to do”). I would not call this whining. When I get a paper rejected for random reasons (reviewers or editors missed the point, for example) or because I did not make myself clear, it is simply time for a rewrite. But on routine science I have NEVER gotten a rude comment, while on touchy topics (climate change and spotted owls, to name two) the rudeness and agenda have almost always been in the mix.

    • Pat Keating
      Posted Mar 5, 2009 at 8:46 PM | Permalink

      Re: Craig Loehle (#173),
      I once had a submitted paper reviewed by two referees. One said that my result was completely wrong, the other said it was well-known and understood, and was therefore not novel!

      • Martin Sidey
        Posted Mar 6, 2009 at 11:23 AM | Permalink

        Re: Pat Keating (#177),

        I once had a paper reviewed by conference referees:

        One was very favorable and indicated “best paper” quality.

        The other indicated that it was of such low quality that I should not consider submitting to the conference again.

        This was for a paper which challenged the prevailing orthodoxy. So there are no prizes for guessing the reason for this divergence of views. My impression is that Paltridge received his review for similar reasons.

  97. Bill Illis
    Posted Mar 5, 2009 at 8:15 PM | Permalink

    Over at WUWT, an individual linked to a spreadsheet containing all the data available from NCEP.

    http://members.shaw.ca/sch25/Ken/Optical%20Depth%20Data.xls

    The weighted-average specific humidity (rather than relative humidity) is constant over the 1948 to 2008 period.

    So, there is NO water vapour feedback AT ALL, either positive or negative (or the data is faulty).

    • Jason
      Posted Mar 5, 2009 at 8:28 PM | Permalink

      Re: Bill Illis (#174),

      Maybe I’m not reading it properly, but the spreadsheet appears to show DECLINING specific humidity at 850mb and above and INCREASING specific humidity at 950mb and below.

      Isn’t this exactly the sort of thing that (if real) would result in negative water vapor feedback?

  98. Bill Illis
    Posted Mar 5, 2009 at 8:42 PM | Permalink

    To Jason: the models have different changes in humidity at different levels of the atmosphere, depending on temperature changes.

    Here are the numbers from GISS Model E from 1948 to 2003 for example.

    http://data.giss.nasa.gov/work/modelEt/lat_height/work/tmp.3_E3Af8aeM20_1_0112_1948_2003_1951_1980_-L3AaeoM20D_lin/mean.txt

    Given there are increases and also decreases at different heights, one has to calculate some kind of weighted average to arrive at the total water vapour in the atmosphere. I’m assuming 300 mb has 0.3 times the water vapour of 1000 mb, etc., and I don’t think you could calculate it any other way without having some kind of not-invented-yet instrument. The heat provided by the Sun has to escape into space (within 12-24 hours in reality on average), so it is the total water vapour in the complete-all-the-way-to-the-top atmosphere that is in question here.
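
    One defensible way to form such a weighted average is the mass-weighted column integral (precipitable water, PW = (1/g) ∫ q dp); a minimal sketch with a made-up profile:

      import numpy as np

      g = 9.81                                            # m/s^2
      p = np.array([1000, 850, 700, 500, 300]) * 100.0    # pressure levels (Pa)
      q = np.array([10.0, 7.0, 4.0, 1.5, 0.3]) / 1000.0   # specific humidity (kg/kg), made up

      # Trapezoid rule for (1/g) * integral of q dp; p decreases upward,
      # so p[:-1] - p[1:] gives positive layer thicknesses.
      pw = np.sum(0.5 * (q[:-1] + q[1:]) * (p[:-1] - p[1:])) / g
      print(f"precipitable water ~ {pw:.1f} mm")          # kg/m^2 == mm of water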

  99. Posted Mar 5, 2009 at 9:08 PM | Permalink

    I have a response to this and Anthony Watts’ post at http://chriscolose.wordpress.com/2009/03/05/what-if-relative-humidity-was-not-constant/

    To Andrew Dessler: Thanks for dropping by and providing insight.

    • Posted Mar 5, 2009 at 9:58 PM | Permalink

      Re: Chris Colose (#178), Disregarding the obvious personal editorial comments in your blog, you do make some good points, and I encourage you to snip a few out and provide them here.

      Again, as I get exasperated, if you are going to try and publish a paper that bucks consensus or sheds new light on a very controversial topic, you need to come to the gunfight with a lot more than the NCEP Reanalysis data. The paper is littered with caveats about the poor or questionable quality of the data, yet it is used nevertheless, without any assessment of such deficiencies.

      • Jason
        Posted Mar 6, 2009 at 6:39 AM | Permalink

        Re: Ryan Maue (#182),

        Again, as I get exasperated, if you are going to try and publish a paper that bucks consensus or sheds new light on a very controversial topic, you need to come to the gunfight with a lot more than the NCEP Reanalysis data. The paper is littered with caveats about the poor or questionable quality of the data, yet it is used nevertheless, without any assessment of such deficiencies.

        There is one key piece of data, which Paltridge does not appear to mention, that supports the use of the NCEP humidity data.

        If significant positive water vapor feedback existed, we would expect (according to Gavin and the IPCC) to see a strong pattern of warming in the tropical troposphere. The observed absence of this warming therefore supports Paltridge’s conclusion.

        As you are probably aware, in Santer et al 2008, seventeen of the IPCC’s strongest proponents established a framework for testing whether or not observed tropospheric temperature trends falsify the climate models used by the IPCC. Steve has shown that, applying this framework to currently available data (Santer et al only had time to use the data as of 1999), the climate models are in fact falsified.

        If the NCEP humidity data presented by Paltridge is correct, it would directly explain WHY the climate models were so wrong.

    • Steve McIntyre
      Posted Mar 5, 2009 at 10:27 PM | Permalink

      Re: Chris Colose (#178),

      Watts and McIntyre (who has his own post) make it out to be a bad thing that people are concerned with “iffy” data …

      This is a repugnant and untrue allegation. I’ve made it very clear that readers should not abandon critical perspective just because they happen to “like” an answer. Only 16 posts prior to Colose’s untrue allegation, I stated the exact opposite of Colose’s allegation:

      The idea that a climate data set is screwed up is not one that should take CA readers by total surprise. As I’ve advised on other occasions – merely because readers “like” one set of results doesn’t mean that they should abandon the sort of criticism that would apply to a USHCN station.

      I mentioned this sort of point recently in connection with Antarctic cooling, where readers need to keep in mind the frailty of early Antarctic data, which might well permit reanalysis – and so-called “cooling” might well depend on frail data. Whether Steig has done a proper reanalysis is a different issue though.

      Surely I am entitled to observe the irony of the community’s scruples against the NCEP reanalysis (undoubtedly deserved) while they remain silent on the continued use of the Graybill bristlecone chronologies under far more dubious circumstances.

  100. Mark T
    Posted Mar 5, 2009 at 9:24 PM | Permalink

    Ah yes, Chris “I begin my obviously important analysis with an argumentum ad-hominem so please take me seriously” Colose.

    Mark

  101. Gerald Browning
    Posted Mar 5, 2009 at 10:23 PM | Permalink

    Andrew Dessler (#164) and All,

    A reanalysis system is a climate model, but the model is “nudged” toward observations at every time step, wherever observations are available. Assimilated data can include temperature, humidity, wind speed, etc. One way to think about this is that it is an interpolation system that produces a picture of the atmosphere as consistent as possible with all of the observations going into it.

    Let us discuss a reanalysis system in more detail.

    A reanalysis system is a global large-scale numerical forecast model (currently a numerical approximation of the ill-posed hydrostatic system, e.g. the ECMWF model) that assimilates (inserts or mixes) observational data in with the numerical model forecast in an attempt to obtain a better set of global data. If one looks at the manuscript by Sylvie Gravel on this site, one sees how many assumptions go into the process.

    A mathematical analysis of the process (reference available on request) has shown that the wind data, i.e. the vertical component of vorticity, is the only information necessary to drive this process. Using only the wind data (radiosondes and commercial jet measurements), the short-term large-scale forecast error over the US was just as small as with all additional sources of data combined. (Physically, that is because most of the kinetic energy is in the jet streams.) A similar test with satellite data alone (no radiosonde info or surface data to help with the inversion of the radiance integral into temperatures) was a disaster.

    In 3D assimilation, the observational data is incorporated into the model every 6 hours using a complicated statistical interpolation scheme that is not necessary if only wind data is used (as proved in the mathematical analysis of the process and demonstrated in Sylvie Gravel’s manuscript).

    In 4D assimilation, the data is inserted in both time and space as the data becomes available. Unfortunately this causes discontinuities in time and space and requires unphysically large dissipation to control those discontinuities.

    It has also been proved mathematically that if one uses only the vertical component of vorticity (a simple interpolation in time and space can be used), then the rest of the slowly evolving-in-time solution (i.e., the horizontal divergence, the vertical velocity, the potential temperature, and the pressure) can be determined using solutions of a simple set of elliptic equations (Browning and Kreiss 2002 and Page et al. 2005). The remaining variables can be determined from the parameterizations. Of course this will show up any flaws in those parameterizations.
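
    A minimal spectral sketch of this kind of elliptic inversion (a doubly periodic toy domain with a made-up vorticity field; nothing like an operational system): given the vertical vorticity, solve the Poisson equation for the streamfunction and recover the nondivergent wind.

      import numpy as np

      n, length = 64, 2 * np.pi
      x = np.linspace(0, length, n, endpoint=False)
      X, Y = np.meshgrid(x, x)
      zeta = np.sin(X) * np.cos(Y)                     # made-up vorticity field

      k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
      KX, KY = np.meshgrid(k, k)
      K2 = KX**2 + KY**2
      K2[0, 0] = 1.0                                   # avoid divide-by-zero at the mean mode

      psi_hat = -np.fft.fft2(zeta) / K2                # invert laplacian(psi) = zeta
      psi_hat[0, 0] = 0.0                              # the streamfunction mean is arbitrary

      u = -np.real(np.fft.ifft2(1j * KY * psi_hat))    # u = -dpsi/dy
      v = np.real(np.fft.ifft2(1j * KX * psi_hat))     # v =  dpsi/dx

      # Consistency check: dv/dx - du/dy should reproduce zeta.
      zeta_back = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(v) - 1j * KY * np.fft.fft2(u)))
      print("max inversion error:", np.abs(zeta_back - zeta).max())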

    Andrew Dessler,

    One additional remark. It is the Editor’s responsibility to choose competent reviewers (unless he himself has an agenda). By choosing one or more reviewers that make nonscientific statements, it is a reflection on his biases or incompetence. And even more so when the Editor does not throw out those reviews and allows the authors to see them.

    Jerry

    • bender
      Posted Mar 6, 2009 at 12:46 AM | Permalink

      Re: Gerald Browning (#183),
      Thank you, Jerry, for pointing out the role of the Editor in this process. I want to know if he is in the habit of selecting reviewers who will give him what he wants: an assessment based to some degree on politics.

      As a courtesy to Jerry, here is a copy of Sylvie Gravel’s manuscript to which he refers. His story about the fate of this paper is quite interesting. Search CA for details.

  102. Gerald Browning
    Posted Mar 5, 2009 at 10:32 PM | Permalink

    Ryan Maue #182),

    And is a climate model better than the reanalysis data?

    Spare me.

    Jerry

    • Posted Mar 6, 2009 at 1:47 AM | Permalink

      Re: Gerald Browning (#185),

      And is a climate model better than the reanalysis data?
      Spare me.

      Where does this non-sequitur come from in any of my threads? I haven’t said anything about climate models. Spare me? Moving on…

      From Figure 2b of Paltridge et al. (2009), a clear downward trend in 400 hPa specific humidity is plotted for July/August and then plotted again in Figure 3. I replicated the results for the NCEP/NCAR Reanalysis for the time period 1973-2007, and then compared the JRA-25 from 1979-2007 and ERA-40 from 1973-2002. The results are here: Q. The JRA-25 is clearly a lot different from the other two.

      From a few cursory examinations of geographical differences between the models, much of the difference in monthly-mean specific humidity (and many other variables) between the reanalysis products occurs in the tropics. The Tibetan plateau is also a source of large differences and enters into the “middle-latitude” belt chosen by the authors for their areal-average. It is also not surprising that the Tibetan plateau specific humidity at this pressure level is considerably higher than the zonal mean (3.5-4 times larger during the summer). Here is an example image from NCEP Reanalysis July 2007.

      The authors discuss their choice of latitude bands for averaging in the Introduction of their paper:

      The analysis is restricted to latitudes between 50S and 50N and to altitudes at which the zonal-average specific humidity is greater than 0.5 g/kg. Thus, in the tropics (20S to 20N), data from the NCEP pressure levels up to 300 hPa are included. In the midlatitudes (20 to 50 deg in each hemisphere), data from all levels up to 500 hPa are included, together with the summer season data from 400 hPa. The criterion…is equivalent to a restriction of the analysis to levels where the zonal-average temperature is greater than about -30C in the tropics and -20C in the midlatitudes.

      Let’s suppose that I have a slightly different interpretation of what the midlatitudes are, especially during the summer months. I propose, instead of 20-50N in the Northern Hemisphere, to use 30-55N. A rational argument could be made to define this band as the midlatitudes. The NH specific humidity calculation is repeated: Q-new The trend is gone. A similar result is obtained using a smaller latitudinal band of 30-50N. What does this mean?

      In this instance (NCEP Reanalysis), the dramatic decrease in tropical specific humidity (their Figure 3 center) appears to extend into and dominate the midlatitude changes, as defined by the authors. Thus, even if the NCEP Reanalysis perfectly reflected the actual climatological changes in specific humidity, the authors’ findings (at 400 hPa for NH) are not robust to a simple and appropriate shift in the geographical definition of the midlatitudes.
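
      For anyone repeating this kind of band-sensitivity test, note that the band mean must be area-weighted by cos(latitude); a minimal sketch (random stand-in field, not the actual reanalysis grids):

        import numpy as np

        lats = np.arange(-88.75, 90.0, 2.5)        # hypothetical grid latitudes
        q400 = np.random.rand(lats.size, 144)      # stand-in q(lat, lon) at 400 hPa

        def band_mean(field, lats, lo, hi):
            """Area-weighted mean of a lat-lon field over a latitude band."""
            sel = (lats >= lo) & (lats <= hi)
            weights = np.cos(np.deg2rad(lats[sel]))
            return np.average(field[sel].mean(axis=1), weights=weights)

        print(band_mean(q400, lats, 20, 50))       # the authors' NH midlatitude band
        print(band_mean(q400, lats, 30, 55))       # the alternative definition above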

      • Kenneth Fritsch
        Posted Mar 6, 2009 at 11:47 AM | Permalink

        Re: Ryan Maue (#203),

        Ryan, when you say “..and then compared the JRA-25 from 1979-2007” my version of the graph shows JRA-25 going from 1973-2001.

        • Posted Mar 6, 2009 at 12:52 PM | Permalink

          Re: Kenneth Fritsch (#235), Yes, that is a plotting error on my end. I apparently failed with Excel on that one. The JRA-25 is indeed from 1979-2007. I will update it later this evening.

          There are two additional reanalysis datasets also available: the newest ECMWF ERA-interim reanalysis, which extends from 1989-2009, and the NASA MERRA. Both utilize up-to-date 4D-Var assimilation techniques and concentrate solely on the satellite era. The NASA MERRA is still running and won’t be finished for at least another 6 months (20 of 30 years are completed).

          Having multiple reanalyses, or, in the case of weather forecasting, multiple operational models, is a good thing.

  103. Posted Mar 5, 2009 at 10:48 PM | Permalink

    Steve McIntyre

    I have made a comment on my blog withdrawing the remark. I did not remove it in the context of Anthony Watts’ post, since that is where I received the impression (along with someone else I corresponded with, so I’ll leave it at that). My apologies.

    Steve – thanks. Doing this sort of correct thing in respect to CA is virtually “unprecedented” in my experience with the climate “community”. I disagree with your characterization of Anthony by the way and would prefer that you speak more judiciously.

  104. Posted Mar 5, 2009 at 10:51 PM | Permalink

    By the way, I never made any claim about Bristlecone chronologies, so I’m not sure who this “community” is or why it applied to my remarks. I don’t have much background in that area, but I have seen the issue raised in authoritative sources and I had thought it was well accepted that such a proxy had severe limitations.

    Do you disagree?

    • bender
      Posted Mar 6, 2009 at 12:51 AM | Permalink

      Re: Chris Colose (#187),
      This POV is laughable. It goes to show how many who ought to know better are completely ignorant about the real problems with the paleoclimatic data on which Hansen’s estimate of climate sensitivity is built. It is utterly laughable. Graybill is crack to the team. They can’t get on without it. Don’t you read this blog?

      • freddy
        Posted Mar 6, 2009 at 7:55 AM | Permalink

        Re: bender (#195), Would you care to point to any paper where Hansen has even discussed paleo-climate over the last millennium, let alone relied on bristlecone pines for his estimates of the climate sensitivity? His main slide on this comes from the Last Glacial Maximum – not an era known for its substantive tree ring records….

        • bender
          Posted Mar 6, 2009 at 8:08 PM | Permalink

          Re: freddy (#213),

          Hansen & paleoclimate data:
          Author(s): LORIUS, C; JOUZEL, J; RAYNAUD, D; HANSEN, J; LETREUT, H
          Title: THE ICE-CORE RECORD – CLIMATE SENSITIVITY AND FUTURE GREENHOUSE WARMING
          Source: NATURE, 347 (6289): 139-145 SEP 13 1990

          AFAIK Hansen doesn’t rely on tree rings.

        • bender
          Posted Mar 6, 2009 at 8:15 PM | Permalink

          Re: freddy (#213),
          Something a little more accessible and more recent:
          Target Atmospheric CO2: Where Should Humanity Aim? James Hansen et al.

          “Paleoclimate data show that climate sensitivity is ~3°C for doubled CO2”

        • bender
          Posted Mar 6, 2009 at 8:27 PM | Permalink

          Re: freddy (#213),
          Estimating climate sensitivity from paleo-data.
          Crowley, T. J.; Hegerl, G. C.
          American Geophysical Union, Fall Meeting 2003, abstract #PP22B-08

          For twenty years estimates of climate sensitivity from the instrumental record have been between about 1.5-4.5° C for a doubling of CO2. Various efforts, most notably by J. Hansen, and M. Hoffert and C. Covey, have been made to test this range against paleo-data for the ice age and Cretaceous, yielding approximately the same range with a “best guess” sensitivity of about 2.0-3.0° C. Here we re-examine this issue with new paleo-data and also include information for the time period 1000-present. For this latter interval formal pdfs can for the first time be calculated for paleo data. Regardless of the time interval examined we generally find that paleo-sensitivities still fall within the range of about 1.5-4.5° C. The primary impediments to more precise determinations involve not only uncertainties in forcings but also the paleo reconstructions. Barring a dramatic breakthrough in reconciliation of some long-standing differences in the magnitude of paleotemperature estimates for different proxies, the range of paleo-sensitivities will continue to have this uncertainty. This range can be considered either unsatisfactory or satisfactory. It is unsatisfactory because some may consider it insufficiently precise. It is satisfactory in the sense that the range is both robust and entirely consistent with the range independently estimated from the instrumental record.

        • bender
          Posted Mar 6, 2009 at 9:31 PM | Permalink

          Re: freddy (#213),
          The Hansen paper cited in #278 states:

          Evidence from Earth’s history (3-6) and climate models (7) suggests that climate sensitivity is 0.75 +/- 0.25°C per W/m^2

          I will leave for you to guess what proportion of the 4 papers cited in reference to “Earth’s history” link to other papers by Hansen.
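
          For what it’s worth, the two numbers quoted in this subthread are mutually consistent: taking the standard ~3.7 W/m^2 forcing for doubled CO2, 0.75 °C per W/m^2 works out to 0.75 x 3.7 ≈ 2.8 °C, essentially the “~3°C for doubled CO2” cited above.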

  105. Steve McIntyre
    Posted Mar 5, 2009 at 11:18 PM | Permalink

    I had thought it was well accepted that such a proxy had severe limitations. Do you disagree?

    I disagree that it is “well accepted”. Wahl and Ammann 2007 argued that they added “necessary” skill to the Mann reconstruction. Even though the Wegman panel said that the Wahl and Ammann paper had no “statistical integrity” and the NAS panel said that they should be avoided, and even after Ababneh failed to replicate the Graybill results, IPCC endorsed the nonsensical Wahl and Ammann argument. So I would submit that the opposite is the case – it is apparently the “consensus” of the climate science community that these proxies are just fine.

    Post IPCC, Mann et al PNAS 2008 used these proxies. In our comment (MM09), Ross and I stated:

    Although Mann et al. purport to “follow the suggestions” of [NRC], they employed “strip-bark” dendrochronologies despite the recommendation of [NRC] that these chronologies be “avoided” …

    To which Mann replied:

    Finally, McIntyre and McKitrick misrepresent both the National Research Council report and the issues in that report that we claimed to address (see abstract in ref. 2). They ignore subsequent findings [Wahl and Ammann] concerning “strip bark” records … In summary, their criticisms have no merit.

    If you can point out statements evidencing that it is “well accepted that such a proxy had severe limitations”, I would appreciate it. But I think that you’ll find that the community has foolishly acquiesced in this nonsense.

  106. kim
    Posted Mar 5, 2009 at 11:20 PM | Permalink

    ‘Well-accepted’ that Bristlecone chronologies have severe limitations, and still the hockey stick is shown, to great effect, to the naive, and defended tenaciously and viciously. How about publicly denouncing the hockey stick?
    =======================================================================

  107. Posted Mar 5, 2009 at 11:51 PM | Permalink

    Steve McIntyre,

    I do not agree with your interpretation of the NRC report. Readers can read the full page of http://books.nap.edu/openbook.php?record_id=11676&page=52 and place the sentence “While ‘strip-bark’ samples should be avoided for temperature reconstructions…” in a fuller context (maybe read the conclusions of that chapter). The 2008 paper by Mann and others specifically notes that “Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used.” I also think IPCC gives a fair treatment on pg. 472-73.

    This is OT now so that’s my last word on it…again, it’s beyond my study interests in this area (though I’m reasonably familiar with some of the issues), and it’s not something I originally commented on.

    • Steve McIntyre
      Posted Mar 6, 2009 at 12:06 AM | Permalink

      Re: Chris Colose (#190),

      You said:

      I had thought it was well accepted that such a proxy had severe limitations.

      Do I take it that you withdraw this comment as well? If not, please support it.

      The 2008 paper by Mann and others specifically note that “Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used.”

      The Mann 2008 paper has been discussed here and I do not accept its assertions at face value nor should you. Mann et al 2000 also said that their recon was robust to the presence/absence of all dendro data and that statement proved false. His so-called non-dendro recon laughably uses the Tiljander data set upside-down.

      But please don’t get distracted by trying to re-argue Mann from the ground up. Please stick to the point – please support or withdraw your assertion that

      “it was well accepted that such a proxy had severe limitations.”

  108. Alan Wilkinson
    Posted Mar 6, 2009 at 12:04 AM | Permalink

    snip – piling on

  109. Posted Mar 6, 2009 at 12:26 AM | Permalink

    Steve M,

    I guess I was wrong again about “the final word” since I suppose I will be further pushed into a different topic.

    There’s nothing to retract, and really, I can’t imagine what you disagree with. This has been discussed in IPCC and NRC which represent the “consensus view” much more broadly than Mann et al. or any other individual paper (whether Mann’s conclusions are “correct” or not was not my focus). I simply said I didn’t agree with your NRC interpretation, and I agreed with the Mann reply.

    I didn’t say it was the mainstream view that tree-ring proxies should never be used or avoided-at-all-costs or what have you, only that issues may potentially exist with them which need to be carefully looked at (the limitations I spoke of depending on time period or geographic location).

    For readers who may not know (I’m sure they all do by now), the Divergence problem shows up as some ring-width histories failing to track the size of the most-recent warming even though they tracked the earlier-century warming well. There are many hypotheses– some of them would have no implications for temperature of Medieval Warmth, some would. Again, this comes from discussion in the NRC report and references cited, so there is obviously more than a minor acknowledgment.

    • bender
      Posted Mar 6, 2009 at 12:54 AM | Permalink

      Re: Chris Colose (#193),
      You are dead wrong here. Care to know exactly where and why? Hint: it has to do with confidence intervals under divergence vs. no divergence.

    • Geoff Sherrington
      Posted Mar 6, 2009 at 12:55 AM | Permalink

      Re: Chris Colose (#193),

      It is incredibly naive to support a method whose accuracy is clearly shown to vary for inexplicable reasons over time. I am referring to temperature. It is further naive to support a method whose response varies incomprehensibly even in recent instrumental time. I refer to dendroclimatology.

      As to why tree rings fail to track the size of the most recent warming, it is likely that both the tree ring method and the warming estimate are in error. Therefore, there is no justification for belief in older reconstructions, since neither major factor improves back to obscurity.

      In truth, do you have anything of substance to support your assertions or are you merely going along for the ride?

    • Dave Dardinger
      Posted Mar 6, 2009 at 2:57 AM | Permalink

      Re: Chris Colose (#193),

      You need to do a search on the site for Starbucks and stripbark. The little experiment done to resample some of the Graybill stripbark bristlecone pines showed quite well why stripbark trees are unsuitable for temperature measurements. Basically, stripbarking condenses growth into a small segment of the tree, so it will show larger growth along a radius until the bark is healed over. This means that, given a certain % of the trees being stripbarked – and Graybill and Idso purposely chose stripbarks to measure – you will find these trees showing apparently increased growth rates in recent times. This is the sole reason the Graybill stripbark bristlecone pines are used; they were noticed to show a growth spurt and this fit with the assumption that AGW was a fact. I suggest you try asking yourself why they’ve continued to be used in one guise or another right up to the present. Then you might be ready to look at the other questionable proxies which are used to “substitute” for the Bristlecones and supposedly verify them.

  110. bender
    Posted Mar 6, 2009 at 12:55 AM | Permalink

    Steve M:
    Please, please, let me pile on and say something insulting. Please?

  111. Peter D. Tillman
    Posted Mar 6, 2009 at 12:55 AM | Permalink

    Here’s a link to a copy of the Dessler & Sherwood Science Perspective article (Feb 20, 2009) mentioned several times upthread:

    Click to access dessler09.pdf

    It looks like a pretty straightforward review, though it may gloss over the possibility that the water-vapor feedback could be less than is now thought to be the case. It is good that the authors stress the importance of empirically determining climate sensitivity — a topic that CA returns to often. It’s surprising how little progress has been made in more precisely determining this critical piece of information.

    Cheers — Pete Tillman

  112. bender
    Posted Mar 6, 2009 at 12:59 AM | Permalink

    Steve M, did you catch my reference a couple of weeks ago to that paper showing that most trees in Canada are limited by drought, not temperatures during the growing season? (I think you were playing squash.) Snip if you like. I know it’s OT, but Chris Colose should read that paper.

  113. bender
    Posted Mar 6, 2009 at 1:36 AM | Permalink

    Where’s Ryan Maue’s response to Gerald Browning? If the NCEP reanalysis is crap then the GCMs are crap. Is there a double-standard here or isn’t there?

  114. bender
    Posted Mar 6, 2009 at 2:09 AM | Permalink

    Thanks, Ryan. Hope I wasn’t too provocative.

  115. steven mosher
    Posted Mar 6, 2009 at 2:55 AM | Permalink

    well.

    It’s interesting to see reviewer comments. Reviewer comments and identities ought to be public, a part of the publication record. This whole notion that somehow an opaque process and an anonymous process lead to better science is a crock. My perspective on this whole AGW thing is kinda broad. I happen to believe in AGW (but I’m a lukewarmer) and I think the scientific process and the editorial process I see in this field BLOWS DEAD BEARS, technically speaking. free the data. free the code. Pants the reviewers.

    In the world most of us live in if we had an issue with somebody’s analysis, we put our frickin name to it. thunderdome, now! none of this groveresque puppet show.

    sorry had a moshpit moment

    Apocryphal makes for interesting reading.

    • Craig Loehle
      Posted Mar 6, 2009 at 7:43 AM | Permalink

      Re: steven mosher (#206), Unfortunately, in the real world some authors who get non-anonymous reviews will remember it forever and set out to punish the reviewer. Some of them can deny grants or spread rumors. It can prevent tenure.

      • Mark T
        Posted Mar 6, 2009 at 12:48 PM | Permalink

        Re: Craig Loehle (#212),

        Unfortunately, in the real world some authors who get non-anonomous reviews will remember it forever and set out to punish the reviewer. Some of them can deny grants or spread rumors. It can prevent tenure.

        While I do understand your position here, one thing that would counter this would be the potential embarrassment of being caught “punishing” someone for a past review. In an open process, it would be very hard to pull off without someone noticing. I would think, then, that a quid pro quo would be difficult to sustain. We can only speculate, however, since it is not, nor has it ever been (to my knowledge), this way.

        I do come from an arena in which reviews are open, btw (well, my day job is, I have done some academia related stuff, too, which is not). In fact, “design reviews” are much more critical than any of these reviews, some would say brutal. Even papers that I write are given to people I not only know, but people I expect will criticize me openly, for all to see (a problem I have with my current job is that I have a hard time getting reviews because we are so small, which bothers me). It is humbling, but it also forces me to produce the best work I can consistently. Rarely do I see grudge matches play out over technical reviews, and I’ve been involved in well over 100 (probably over 50 of my own). I embarrass and have been embarrassed, in a room full of 20 other engineers equally if not more capable than I am (my largest review was in front of a room full of contractors, maybe 100 or so, half of which were highly technical, the other half jealously guarding the money we wanted them to spend).

        I have never been pantsed, btw. At least, not as the result of a review.

        Mark

        • Craig Loehle
          Posted Mar 6, 2009 at 2:39 PM | Permalink

          Re: Mark T (#238), Because engineers must suffer real-world consequences for failures, review is understood to be essential. In the ivied tower, it is not. It would be very hard to observe a quid pro quo from irked authors, because it would be remote in time and space from you – when you submit a proposal to NSF, say, or are up for tenure and people ask about you.

        • Ron Cram
          Posted Mar 6, 2009 at 3:11 PM | Permalink

          Re: Craig Loehle (#244),

          I have heard this type of argument time and again and I just do not buy it. I do not believe science should have a lower standard than engineering. People may hold grudges, but those asking can also determine if a reviewing relationship ever existed. Because everything is open, the grudge is seen for what it is.

          Actually, an open system would have many fewer problems because people would not be as inclined to slander in the first place if they knew their comments were not anonymous. Opening the process to the light would be good for the system all around and would force people to look more at the science and less at personalities.

        • Ryan O
          Posted Mar 6, 2009 at 3:22 PM | Permalink

          Re: Ron Cram (#245), Science and engineering are different disciplines. Science, in and of itself, generally has no direct consequences. It is engineering that takes science and applies it. Einstein didn’t build GPS satellites using correction factors from general and special relativity; Rockwell International did.
          .
          Because this particular field of science – climate – has such far-reaching implications, the science itself can have a direct impact on policy decisions. This makes the situation dicier. Even so, I strongly feel that public disclosure of peer reviewers and their comments would have a significant negative impact on the science. Just as the secret ballot system relies on anonymity to prevent individuals from being pressured by others during the act of voting, the peer review system depends on anonymity to prevent reviewers from having to think about how their statements would be viewed by a much larger audience – an audience for which the review is not intended in the first place. An open review system is the surest way of politicizing the science even further.
          .
          Steve – if this is too OT, I apologize in advance.

        • Mark T
          Posted Mar 6, 2009 at 3:31 PM | Permalink

          Re: Ryan O (#248), I think Ron’s legitimate point is that openness might close the gap between the two, i.e., there might be direct consequences (albeit mostly regarding stature in the community). If you write a review that is clearly political and/or shows heavy bias, nobody will respect your reviews in the future. Magazines/journals will refuse to use you because you are publicly voicing your lack of objectivity. Write a review that is easy to see as technically untenable and you look silly, calling into question your authority and ability as a reviewer.

          Mark

        • Ryan O
          Posted Mar 6, 2009 at 3:40 PM | Permalink

          Re: Mark T (#249), I agree that it would stop comments like the one quoted by Paltridge from happening – which would be a benefit. The worry would be that review comments would be used against the reviewers by those outside the scientific community. Bloggers and reporters are very effective at finding the smallest ammunition to use and taking it out of context (not a comment on this blog at all . . . but most blogs show nowhere near the level of honesty or fairness that Steve does). Sometimes the most effective means of censorship is full disclosure.
          .
          It’s a fine line, especially in a field where there is a lot of emotion attached to competing ideas and theories.

        • Mark T
          Posted Mar 6, 2009 at 4:11 PM | Permalink

          Re: Ryan O (#252), Could be. I think you agree with me that we are in speculative territory for sure. Unfortunate that we cannot codify a simple formula to prove one way or another, eh? 🙂

          Mark

        • Martin Sidey
          Posted Mar 6, 2009 at 5:25 PM | Permalink

          Re: Ryan O (#252),

          I think that you are all missing the big question: why is such an important world project being mediated through the journal review process and run by academics with no training in managing large projects?

          Why should contributions be vetted by anonymous gatekeepers? If these gatekeepers can keep an important new idea from being explored then why should they not be required to justify their opinions publicly?

        • Simon Evans
          Posted Mar 6, 2009 at 5:52 PM | Permalink

          Re: Martin Sidey (#258),

          “If these gatekeepers can keep an important new idea from being explored…”

          What do you think is ‘new’ that is important in this paper?

        • Ron Cram
          Posted Mar 6, 2009 at 4:16 PM | Permalink

          Re: Ryan O (#248),

          Actually, openness takes the politics out of the science. The principle of the secret ballot does not apply. If you want to determine the sex of a cat, you do not get a roomful of people to vote on it. Someone has to make observations… not me, I’m allergic. And then others get to replicate the study and see if they reach the same conclusion. If a controversy arises, one will eventually be confirmed and the other embarrassed. Such is science.

          Anonymous reviewers commonly result in poor quality reviews. Let me describe the ideal situation, as I see it. If you want good quality reviews, you pay the reviewers and you put their names on the article when published. (BTW, in my view, reviewers should never get to say if the article should be published or not. The reviewers should point out the strengths and weaknesses of the article, any errors or plagiarism, etc., and then the editor makes a decision. Pushing the decision onto the reviewers is gutless. Many times a paper can correct its deficiencies and then be published and make a valuable contribution to the literature.) If the paper is not published, the author is provided the names and comments of the reviewers. In addition, the editor explains why the paper was rejected. It is understood these comments may become public information at the discretion of the author. If everyone knew their comments might become public, they would be much more circumspect in their comments.

          I agree this is OT. I’m done.

        • Ryan O
          Posted Mar 6, 2009 at 7:47 PM | Permalink

          Re: Ron Cram (#256),

          Actually, openness takes the politics out of the science. The principle of the secret ballot does not apply.

          .
          I’m not sure I follow. Climate science is a highly politically charged arena. Each “side” looks for any opportunity to undercut the credibility of the other (not necessarily referring to the scientists). Keeping the reviews anonymous means that the reviewers neither need to pull punches if the paper is bad (but the author is famous) nor grandstand for the public audience that will eventually see the review. For example, if Santer were to write a review that was critical of an analysis purporting to show concordance with model predictions – and said review was made public – do you honestly think that people would not:
          .
          A. Use this to show that OMG . . . even Santer secretly “knows” observations don’t fit models; and/or,
          .
          B. Use this to show that stop the presses – even Santer himself knows the latest research published by Umptyscrunch is bunk!!!
          .
          Not only would this place the editor in a much more difficult position when deciding whether to publish the article, but do you honestly think that Santer would not be aware of the implications of his review? While I agree that it would prevent ridiculous comments like the one quoted by Paltridge from occurring, I’m not sure how you can conclude that it would de-politicize the review process or make decisions easier for publishers. I rather think it would make the decision much harder (or at least allow additional possibility that non-scientific sentiment could drive the decision to publish/not to publish).
          .
          In addition, if reviews were to be made public, do you think that reviewers would not realize that the act of writing a review is now yet another avenue for them to trumpet their own ideas/work/agenda outside of needing to publish a paper? I would be willing to bet a large sum of money that, if the review process were made public, you would see reviews bloat with irrelevant “explanatory information” and “background”.
          .
          Apologies in advance because I don’t mean to be flippant or blunt . . . but I doubt you will be able to convince me that there is any practical way to prevent these type of things from happening if reviews became public. In my opinion, such a move would serve more to censor the science than advance it. I feel rather strongly about it. While there are disadvantages to an anonymous review process, the cure is worse than the disease.
          .

          Anonymous reviewers commonly result in poor quality reviews.

          .
          If you have evidence backing this up, it would be interesting to see.
          .
          Re: Martin Sidey (#258),
          .

           Why is such an important world project being mediated through the journal review process and run by academics with no training in managing large projects?

          .
          To me, this statement is problematic. Science is not a project. Science is an investigation. Engineering is the project. This is why I strongly disagree with treating climate science as climate engineering. They are separate disciplines with separate needs.
          .
          When someone proposes nuclear power plants, or wind farms, or solar farms, or large-scale use of microhydro, or hydrogen-based fuels, or changes to the building codes to accommodate conservation . . . that’s what needs to be public. Cost-benefit analyses, risk analyses, practicality reports . . . these things are the domain of engineering. These things (when government sponsored) should be 100% public and wholly transparent. These things are projects. As many have mentioned before (and I can attest to as well), the level of detail, rigor, and proof in major engineering projects usually dwarfs what is used for papers – but the vast majority of the work is entirely irrelevant to science.
          .
          Thinking that one can treat science as engineering or vice versa is, in my opinion, naive.
          .
          The scientist is trying to figure out what is going on. He doesn’t care about a cost-benefit analysis. He is conducting an investigation, not building a business model. His effort is in the abstraction of information to a concept or theory or causal relationship or trend. Sometimes this is qualitative. Sometimes this is simply order-of-magnitude. Neither answer is acceptable for engineering, and requiring an engineering level of certainty on science would be stifling. They’re totally different modes of thought and they require different types of rigor. I know; I’ve been part of both communities.
          .
          The fact that some climate scientists do not understand the level of rigor needed for engineering projects may be frustrating, but it in no way validates the idea that science and engineering should be treated the same way. There is no logical connection there.

        • Geoff Sherrington
          Posted Mar 7, 2009 at 3:37 AM | Permalink

          Re: Ryan O (#268),

          You write –

          The scientist is trying to figure out what is going on.

          This is entirely inconsistent with the treatment of Prof Paltridge’s paper. He is trying to advance the knowledge of what is going on. Some people are trying to stop him. Such people do NOT fall into a peer definition of “scientist”.

          Maybe we need a new term like “Climate Jock” or similar. But please understand that the standards of “climate science” as publicised in the last decade or two by many star performers are woefully short of the standards other scientists expect. No wonder engineers complain also.

          Notice that there is scarcely a “hard” scientist commenting on RC any more? They have given it up as beneath their dignity, or been censored.

        • Ryan O
          Posted Mar 7, 2009 at 7:54 AM | Permalink

          Re: Geoff Sherrington (#297), Agree. I’ve said multiple times that (assuming Paltridge’s claim to be true) the handling of the paper seems poor at best and deplorable at worst. I think people are inferring that my position is somehow different.
          .
          My only point is that the proposed cure – an open peer review system – has disadvantages too, and these disadvantages are likely to outweigh the benefits.
          .
          Re: Ron Cram (#289),
          .
          It sounds like you feel this has gotten past the friendly discussion state and become argumentative. I apologize if my statements made you feel that way . . . for me, this has been nothing more than a friendly exchange between two people with different views. Anyway, you have made good points and, although we may still disagree, I respect your views. If you felt this was not the case, please accept my apologies. 🙂

        • Ron Cram
          Posted Mar 7, 2009 at 9:53 AM | Permalink

          Re: Ryan O (#302),

          I just felt like it had reached the point where we had both taken our positions and understood the other side well enough to know neither was going to change. Unfortunately, I get frustrated when a discussion reaches the point where learning has stopped. It is one of my worst faults, and I have many. Please forgive my frustration. I know a great many people I respect who disagree with me on this issue, including Craig Loehle. You are in good company with Craig. But rest assured, if I ever decide to publish my own scientific journal, the reviewers will not be anonymous.

        • Mark T
          Posted Mar 6, 2009 at 3:17 PM | Permalink

          Re: Craig Loehle (#244), Perhaps true, but as I noted, speculative. Obviously something needs to be changed, and the extreme example would be what people like me have to go through.

          Re: Ron Cram (#245),

          Actually, an open system would have many fewer problems because people would not be as inclined to slander in the first place if they knew their comments were not anonymous.

          This really might be the counter-argument to Craig’s assertion. Still speculation, however.

          Mark

    • Ron Cram
      Posted Mar 6, 2009 at 12:20 PM | Permalink

      Re: steven mosher (#206),

      Wow Mosh! I love your comments! I also believe in openness and despise the concept of anonymous reviewers. Your comment about pantsing the reviewers made me spit out my raisins laughing.

  116. Philip Lloyd
    Posted Mar 6, 2009 at 3:36 AM | Permalink

    Re Jason #62, Joel #63, SMcI #64 and RM #73, a problem I have with the IPCC review process (having been a CLA) is that the authors get to know the reviewers – in fact they get quite friendly, having had to sit in numerous darkened rooms for far too long. When I peer review for a professional journal or when I act as an editor and ask someone to review, the link between author and reviewer is broken, and the reviewer can speak his mind. I think it makes a huge difference to the quality of the review.

  117. Bill Illis
    Posted Mar 6, 2009 at 6:46 AM | Permalink

    I’ve charted up the specific humidity data from the NCEP (at all levels available, and assuming the spreadsheet linked above is correct – it seems accurate for what I have been able to check).

    Either the data is wrong or a few textbooks need to be rewritten.

    • Craig Loehle
      Posted Mar 6, 2009 at 7:47 AM | Permalink

      Re: Bill Illis (#211), This graph and the one cited above at
      http://members.shaw.ca/sch25/Ken/Optical%20Depth%20Data.xls seem directly contradictory: one flat, one declining. What gives? Is the NOAA data raw radiosonde #s?

      • Bill Illis
        Posted Mar 6, 2009 at 8:19 AM | Permalink

        Re: Craig Loehle (#213),

        It is the same data. The chart labeled NOAA just includes the 1000 mb and 925 mb levels (while I charted them all) and the scale is tighter. There is increasing specific humidity at the lower levels, some decline in higher levels, but the weighted average is constant.

        Given temps increased 0.53C over the period, specific humidity should have increased something like this chart.
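
        As a sanity check on that expectation, the constant-relative-humidity scaling can be roughed out from the Clausius-Clapeyron relation. A back-of-envelope sketch in R (the Magnus approximation below is standard; the temperatures are illustrative placeholders, not values from the NCEP data):

          # Expected change in specific humidity if relative humidity stays
          # constant while temperature rises. Magnus approximation for
          # saturation vapour pressure (hPa); numbers are illustrative only.
          e_sat <- function(T_c) 6.112 * exp(17.67 * T_c / (T_c + 243.5))

          # Illustrative case: T rises from 14.0 C to 14.53 C (+0.53 C)
          scaling <- e_sat(14.53) / e_sat(14.0)
          cat(sprintf("Constant-RH scaling: %.4f (~%.1f%% more vapour)\n",
                      scaling, 100 * (scaling - 1)))
          # ~3.5% more specific humidity for +0.53 C, i.e. roughly 7% per
          # degree -- the usual Clausius-Clapeyron figure.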

      • MikeU
        Posted Mar 6, 2009 at 8:38 AM | Permalink

        Re: Craig Loehle (#213), “This graph and the one cited above at … seem directly contradictory: one flat one declining. What gives?”

        I believe the declining one is relative humidity, declining as CO2 and temperature go up (instead of remaining constant, as GCMs assume). The flat one is specific humidity, which has apparently remained constant instead of going up as predicted.

      • RomanM
        Posted Mar 6, 2009 at 8:47 AM | Permalink

        Re: Craig Loehle (#213),

        The graph given by Bill Illis (#211) hides the details because of the relatively large spread between humidities at the various pressures. I have graphed the same data from the spreadsheet using R here, with separate scales at each of the eight levels. The “weighted average” of all the levels was omitted because I didn’t know what weights he used.

        I have also added loess curves (locally weighted least squares estimates) to indicate temporal trends.
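
        For anyone wanting to reproduce this kind of plot, a minimal R sketch (the CSV export, file name and column names are hypothetical; the mass weighting at the end is just one plausible guess at a “weighted average” – I don’t know what weights Bill actually used):

          dat    <- read.csv("ncep_q_levels.csv")    # hypothetical export
          levs   <- c(1000, 925, 850, 700, 600, 500, 400, 300)  # hPa
          q_cols <- paste0("q", levs)                # hypothetical names

          par(mfrow = c(4, 2), mar = c(3, 4, 2, 1))
          for (i in seq_along(levs)) {
            y <- dat[[q_cols[i]]]
            plot(dat$year, y, type = "l", col = "grey50",
                 xlab = "", ylab = "q (g/kg)", main = paste(levs[i], "hPa"))
            lines(dat$year, predict(loess(y ~ dat$year, span = 0.5)),
                  col = "red", lwd = 2)              # loess trend curve
          }

          # One plausible weighting: weight each level by the pressure
          # thickness (i.e. mass) of the layer it represents.
          mids  <- (levs[-length(levs)] + levs[-1]) / 2
          edges <- c(levs[1] + (levs[1] - levs[2]) / 2, mids,
                     levs[8] - (levs[7] - levs[8]) / 2)
          w     <- -diff(edges)                      # hPa per layer
          q_bar <- as.matrix(dat[q_cols]) %*% (w / sum(w))
          # q_bar can then be plotted or trended like any single series.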

        • Jason
          Posted Mar 6, 2009 at 10:53 AM | Permalink

          Re: RomanM (#219),

          Would I be wrong to think that the main difference between the flat graphs and the graphs that show a strong trend is the labeling of the vertical axis? 🙂

        • RomanM
          Posted Mar 6, 2009 at 11:13 AM | Permalink

          Re: Jason (#232),

          It is indeed. However, the importance of a particular trend may depend on the value of those labels.

  118. mugwump
    Posted Mar 6, 2009 at 8:17 AM | Permalink

    I’d love to get to the physics. From the OP:

    the face-value 35-year trend in zonal-average annual-average specific humidity q is significantly negative at all altitudes above 850 hPa (roughly the top of the convective boundary layer) in the tropics and southern midlatitudes and at altitudes above 600 hPa in the northern midlatitudes. It is significantly positive below 850 hPa in all three zones, as might be expected in a mixed layer with rising temperatures over a moist surface.

    However:

    Climate models (for various obscure reasons) tend to maintain constant relative humidity at each atmospheric level, and therefore have an increasing absolute humidity at each level as the surface and atmospheric temperatures increase.

    So if (and I understand it is a big if) the humidity trend really is negative at high altitudes, the climate models are missing an important piece of physics. Anyone who understands these things care to give a potted summary of what that physics might be?

    • Jason
      Posted Mar 6, 2009 at 8:39 AM | Permalink

      Re: mugwump (#215),

      It’s really not an accurate representation of climate models to characterize them as being based on physics.

      Outside of a few well known fluid dynamics equations, the climate models are really just algorithms trying to replicate past planetary climate conditions (both from the recent past and the very distant past).

      The climate modelers make several assumptions about recent climate forcings. They include:

      1. Significant negative forcings from aerosols.
      2. Minimal positive forcings from solar activity (This may significantly underestimate the actual situation)
      3. No forcings from natural climate variability (Forcing is defined as an influence from outside the climate system, and natural climate variability is by definition part of the system).
      4. Insignificant forcings from non-atmospheric anthropogenic factors (like the dramatic changes to land use that have occurred over the past 100 years).

      Given these assumptions, it is then very difficult to duplicate recently observed temperatures unless there is a substantial positive feedback.

      Water vapor feedback is the only obvious candidate for providing a feedback of sufficiently positive magnitude.

      Water vapor will only provide the necessary feedback if the upper atmosphere is relatively saturated, and remains so even as temperatures increase.

      Therefore, the models assume constant relative humidity in the upper atmosphere.

      Although physical arguments have been advanced suggesting otherwise, this doesn’t violate any widely accepted physics.

      But it could be wrong.

      And if it were wrong, the entire IPCC consensus could be off by a factor of 2 or more.

      • mugwump
        Posted Mar 6, 2009 at 8:55 AM | Permalink

        Re: Jason (#217), I hear you on the aerosol tweaking – Kiehl established that without doubt – but the models seem to be somewhat more sophisticated than you suggest when it comes to water vapour. From Dessler and Sherwood’s Science Paper:

        The water vapor feedback mainly results from changes in humidity in the tropical upper troposphere (2), where temperatures are far below that of the surface and the vapor is above most of the cloud cover. The distribution of humidity in this region is well reproduced by “large-scale control” models, in which air leaves stormy regions in a saturated condition, but with negligible ice or liquid content. Water vapor is thereafter transported by the large-scale circulation, which conserves the specific humidity (the ratio of the mass of water vapor to the total mass in a unit volume of air), except during subsequent saturation events, when loss of water occurs instantaneously to prevent supersaturation [Ed: that would be a fancy way of saying it rains? 🙂 ].

        Despite the simplicity of this idea, which entirely neglects detailed microphysics and other small-scale processes, such models accurately reproduce the observed water vapor distribution for the mid and upper troposphere (3, 4).

        Now, as a physicist from way back, my gut says that if the humidity trend really does reverse at high altitudes, it is unlikely to be because of some hitherto overlooked microphysical process. It is more likely to be something missing in the large scale model. However, I know very little about atmospheric physics so I’d be very grateful to hear an opinion from someone who does.

  119. Juraj V.
    Posted Mar 6, 2009 at 9:09 AM | Permalink

    So, the lack of a measured troposphere hot spot is because there is no increase in absolute vapor content (which had been expected to cause positive feedback)? If this is a sign of neutral or even negative vapor feedback against increased temperature (and something’s dampening the whole system, since the climate is relatively stable), then with the present slight drop in temperatures the relative humidity should go up again. Pity we have such limited data so far.

  120. Bob North
    Posted Mar 6, 2009 at 9:16 AM | Permalink

    First, a quick thank you to Ryan Maue and Andrew Dessler for the straightforward, non-passionate, non-judgmental presentation of relevant information on the concerns with re-analysis information. If only more discussions were carried out in such a manner (and this applies to protagonists on each “side”), a lot more could be accomplished.

    Now the question. Given that re-analysis products are really just model runs that have been fed the actual data through the course of the model run to, as one person put it, “nudge” the model toward the real data, is there a better way to assimilate and evaluate historic humidity data? I know it is a difficult problem because the data is not synoptic, has various quality/discontinuity issues due to instrumentation, and is widely dispersed both vertically and laterally through the atmosphere, but would something along the lines of what is done with GIStemp or HadCrut for temp, or even, heaven forbid, using RegEM or some other multivariate technique, be a better way to reconstruct the humidity history than using a forecast model?

  121. Posted Mar 6, 2009 at 9:19 AM | Permalink

    Juraj V (#220),

    There are many well-known observational issues concerning the detection of a “hot spot” so these conclusions are not really justified, but you’re correct that theoretically the water vapor and lapse rate feedbacks are not independent of each other. Actually with no “hot spot” you get a bit more of a positive lapse rate feedback (less negative) than in the hotspot case.

    We see the tropical atmospheric temperature profile retain a moist adiabat as a response to ENSO, solar cycles, etc so we don’t have a reason why CO2 should be different. Last I checked convection was pretty good at moving heat around.

  122. Posted Mar 6, 2009 at 9:24 AM | Permalink

    mugwump (#220)

    This is quoted from Dessler and Minschwaner (2007)

    Over the past 10 years, however, an alternative school of thought has emerged: that detailed microphysics need not be included in models in order to accurately simulate tropical tropospheric humidity. The view is based on results of simplified models of the troposphere that advect water passively and contain virtually no microphysics other than the requirement that water vapor is immediately removed so as to prevent the relative humidity (RH) from exceeding 100%. These simple models are sometimes referred to as large-scale control (LSC) models, and, despite their simplicity, they have proven effective in simulating tropical upper tropospheric humidity [Sherwood, 1996; Salathe and Hartmann, 1997, 2000; Pierrehumbert and Roca, 1998; Dessler and Sherwood, 2000; Folkins et al., 2002b; Minschwaner and Dessler, 2004].
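
    To make the LSC idea concrete, here is a toy sketch in R – my own illustration, not any published model, and the saturation profile along the parcel path is invented:

      n_steps <- 50
      q       <- 8.0                       # parcel specific humidity, g/kg
      q_sat   <- 10 - 0.15 * (1:n_steps)   # invented q_sat along the path
      history <- numeric(n_steps)

      for (t in 1:n_steps) {
        # advection conserves q; only saturation events remove water
        q <- min(q, q_sat[t])              # instant rain-out at RH = 100%
        history[t] <- q
      }
      plot(1:n_steps, history, type = "s", xlab = "step", ylab = "q (g/kg)")
      # The parcel ends up carrying the lowest q_sat it encountered -- the
      # essence of the "last saturation" picture behind LSC models.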

    • mugwump
      Posted Mar 6, 2009 at 9:40 AM | Permalink

      Re: Chris Colose (#224), that’s essentially what Dessler is referring to in my quote at #220. But those large scale models must be wrong if the trends measured by Paltridge are correct (or at least of the correct sign).

      So, what physics might be missing from the large scale models that would allow negative humidity feedback at high altitudes while retaining positive feedback at lower altitudes? The only hint we have is from the OP:

      (There are hand-waving physical arguments that might explain how a decoupling such as that could occur).

      I am really curious about those hand-waving arguments. That’s the crux of the issue, because it seems a lot of smart people think the large-scale models currently capture more-or-less everything of relevance.

  123. Posted Mar 6, 2009 at 9:32 AM | Permalink

    I have not followed this thread in detail, but heard some of the discussion on the CCG group list. The whole tropospheric humidity feedback anomaly, which some call the “iris effect”, is being studied using short-period data and may well be a long-period cyclic effect, another kind of El Nino effect. I have not directly connected it, but I have studied a long wave in the 130-year data that I think could reflect it. fyi http://www.synapse9.com/drpage.htm#warming

  124. Posted Mar 6, 2009 at 9:45 AM | Permalink

    mugwump,

    Well, I don’t buy that specific humidity has declined (or any significant decline in RH), since there’s no good evidence to suggest this is the case. But to answer your question, I haven’t seen any real argument for why such a feedback should not exist in theory. I believe Lindzen had an older idea but I’d also like to see anything somewhat modern which has withstood scrutiny.

    • mugwump
      Posted Mar 6, 2009 at 10:40 AM | Permalink

      Re: Chris Colose (#227), Dessler’s Science paper refers to this 1990 paper by Lindzen which contains a discussion on pages 296 and 297 (the paper is scanned, so I can’t cut ‘n paste the text). If I understand him correctly, according to Lindzen the gist of the mechanism for drying of the upper troposphere is thus:

      1) Cumulus convection occurs in rapidly rising air towers (a basic “uniform” model would not capture such a non-uniform phenomenon).

      2) The air in such towers is drier than the surrounding air because the rapidly rising air also cools rapidly, and that cooling causes the moisture in the air to precipitate as rain.

      3) When the air tops out (at up to 16 km altitude) it is very dry, and it convects back down towards the upper troposphere (above 3-5 km), drying it out.

      With warming, the clouds top out at greater altitude and are thus drier, and the convection intensity is increased so the dry air convects back down more effectively.

      That’s all pretty hand-wavy. But Lindzen’s article gives no greater detail. Are those the “hand-waving physical arguments” to which Paltridge was referring?

    • Ron Cram
      Posted Mar 6, 2009 at 12:41 PM | Permalink

      Re: Chris Colose (#227),

      You mention Lindzen’s hypothesis as if it has not withstood scrutiny. Perhaps you have not read Spencer et al 2007 found here. In this paper, Spencer and his co-authors identify a negative feedback over the tropics which they attribute to the Infrared Iris effect hypothesized by Lindzen.

      In my mind, Lindzen is one of the great minds in climate science. Like Einstein, he published his hypothesis and it was later confirmed by observation. If Nobel ever gives a prize for climate, it should go to Lindzen.

  125. M. Villeger
    Posted Mar 6, 2009 at 9:49 AM | Permalink

    Obviously the question is far, far from being settled… 😉
    In the end, what is the big deal about running models with and without constant humidity? That’s what models are for: run them!

  126. mugwump
    Posted Mar 6, 2009 at 10:41 AM | Permalink

    My reply is being eaten by the spam filter 😦

  127. Jason
    Posted Mar 6, 2009 at 10:51 AM | Permalink

    I think that Dessler and Sherwood do a good job of characterizing the argument in favor of the model’s representation of humidity in the upper troposphere.

    They aren’t claiming that physics requires this to be the case, but they feel that observations from satellite based sensors like AMSU-B support their case.

    If I understand things correctly (a VERY questionable proposition), the satellites measure the radiation emitted by water vapor, and segregate it by physical location and vertical velocity.

    Precipitating and highly convective clouds are masked.

    Relative humidity at each combination of location and velocity is calculated by fitting a set of tropical training data to a curve, and then applying that curve to all tropical satellite measurements.

    Velocity data from the reanalyses (NCEP2 and ERA-40 [which has previously been discussed as being more reliable than other variables calculated in the reanalyses]) is then used to reconstruct a vertical profile of humidity.

    None of this seems obviously unreasonable.

    I wonder how much the relative humidity calculation and/or the vertical reconstruction would have to change for both Paltridge’s data and the raw satellite measurements to be correct.

    Would such a result require nonsensical changes to the algorithms used to produce a vertical humidity profile from satellite data?

    Or is the satellite vertical humidity profile highly sensitive to the assumptions used to produce it?

    I see various caveats in the literature suggesting that the pre-reconstruction data is more reliable; but does that simply mean that big error bars must be placed on the reconstruction? Or does it mean that the potential errors are large enough to permit Paltridge’s NCEP data to be a better representation of reality than the layer-separated data derived from the satellites?

    • Kenneth Fritsch
      Posted Mar 6, 2009 at 1:36 PM | Permalink

      Re: Jason (#231),

      I see various caveats in the literature suggesting that the pre-reconstruction data is more reliable; but does that simply mean that big error bars must be placed on the reconstruction? Or does it mean that the potential errors are large enough to permit Paltridge’s NCEP data to be a better representation of reality than the layer-separated data derived from the satellites?

      I am led to believe that most reanalyses in recent times use satellite measurements but handle them differently. Do you have references that you can link that discuss the relative merits of pre- and post-satellite data?

      • Jason
        Posted Mar 6, 2009 at 1:41 PM | Permalink

        Re: Kenneth Fritsch (#241),

        Actually, I didn’t mean data from before and after satellites were available.

        I meant satellite data in which humidity is calculated for each vertical velocity versus reconstructions based on that data in which humidity is calculated for various atmospheric pressures (using velocity information from the reanalyses to make this determination).

  128. Kenneth Fritsch
    Posted Mar 6, 2009 at 1:31 PM | Permalink

    I found the link below explaining that the RSS SSM/I adjusted measurements have been available since 1988. I am led to believe that NCEP, ERA-40 and JRA-25 use the SSM/I data (since 1988) but with different algorithms. The excerpts below point to the authors’ preference for RSS over the other reanalyses. The paper’s analysis used PCA. It was published in 2005, so I do not know whether the comparison would be the same today. Assuming that the conclusions given by Trenberth here are correct, I would be surprised that anyone would use NCEP (or ERA-40) atmospheric moisture reanalysis for any study.

    Ryan, could you extend your analysis to use the RSS SSM/I data from 1988 to near the present?

    Click to access Trenberth2005FasulloSmith.pdf

    • Both NCEP reanalyses are deficient over the oceans in terms of the mean, the variability and trends, and the structures of variability are not very realistic. This stems from the lack of assimilation of water vapor information from satellites into the analyses and model biases. They agree reasonably well with ERA-40 over land where values are constrained by radiosondes, with some discrepancies over Africa.
    • The NVAP dataset suffers from major changes in processing at the beginning of 1993 and 2000 that upsets analysis of trends and variability. Further, there are major problems in mountain areas and in regions where radiosonde data are not prevalent and TOVS data from the oceans are erroneously extended over land.
    • The ERA-40 dataset appears to be quite reliable over land and where radiosondes exist, but suffers from substantial problems over the oceans, especially with values too high for 2 years following the Mount Pinatubo eruption in 1991 and again in 1995–1996, associated with problematic bias corrections of new satellites. The trends are generally not very reliable over the oceans. Allan et al. (2004) drew similar conclusions.
    • The RSS SSM/I dataset appears to be realistic in terms of means, variability and trends over the oceans, although questions remain at high latitudes in areas frequented by sea ice. It is recommended that this dataset should be used for analyses of precipitable water and for model validation over the oceans from 1988 onwards.

    Accordingly, great care should be taken by users of these data to factor in the known shortcomings in any analysis. The problems highlight the need for reprocessing of data, as has been done by RSS, and reanalyses that adequately take account of the changing observing system. This remains a major challenge.

    • Posted Mar 6, 2009 at 1:57 PM | Permalink

      Re: Kenneth Fritsch (#240), I think the Paltridge et al. (2009) paper’s foundation for its conclusions is on life support. Your post succinctly wraps up the data issues from the Trenberth paper.

      Since Trenberth’s paper was published in 2005, it may not address the evolution of the reanalysis technology through the 3rd-generation JRA-25 and ERA-interim. I acquired the latter’s monthly means and plotted up the specific humidity q for 1989-2007, Northern Hemisphere (20-50N) midlatitudes for July/August. Clearly no downward trend.

      I will endeavor to find the RSS data in a quick and useful format.

      I would be surprised that anyone would use NCEP (or ERA-40) atmospheric moisture reanalysis for any study.

      I agree, and this study suffers greatly.

      • Ryan O
        Posted Mar 6, 2009 at 3:11 PM | Permalink

        Re: Ryan Maue (#243), In the same vein as an earlier post (#202) you made, does the existence/nonexistence of the trend depend at all on the definition of “midlatitudes”? I see you chose Paltridge’s original definition for this graph, but I was curious how sensitive it was to the choice of latitudes.
        .
        I also was curious what the remainder of the altitude bands look like . . . but I will stop short of making a request as you have already provided much interesting information.

      • Ron Cram
        Posted Mar 6, 2009 at 3:38 PM | Permalink

        Re: Ryan Maue (#243),

        I do not think you are being fair to Kenneth. He wrote:

        Assuming that the conclusions given by Trenberth here are correct, I would be surprised that anyone would use NCEP (or ERA-40) atmospheric moisture reanalysis for any study.

        Your quote leaves out the assumption. I would guess Paltridge is assuming Trenberth is not correct.

        Your plot is very different from Paltridge’s downtrend. How do you explain the difference?

        • Posted Mar 6, 2009 at 7:30 PM | Permalink

          Re: Ron Cram (#250), I’ve been documenting the problems with the Paltridge et al. (2009) study with respect to the choice of data set. Read above.

          Trenberth is right on this issue. He understands the data sources especially the reanalyses. He is what I would call an “expert” as he has had a huge influence on the development of reanalysis datasets, old and new. He is the lead author of the IPCC AR4 on this topic. As for whether Paltridge agrees with Trenberth’s 2005 paper conclusions, that is a non sequitur since he failed to address them in his paper.

          Paltridge’s paper only exists because of the questionable quality of the NCEP reanalysis.

          My plot utilizes data from the (ERA-interim) reanalysis run with the state-of-the-art ECMWF operational model at high resolution, current as of 2007. The ECMWF operational forecast model is the best on the planet, hands down under a variety of metrics. This data is freely available for research purposes.

          I cannot envision a scenario in which this paper is taken seriously until considerable updating is done, and the analysis is verified with much more appropriate data sources. My cursory analysis of more advanced and current reanalysis data has already shown completely opposite results.

        • Bill Illis
          Posted Mar 6, 2009 at 8:17 PM | Permalink

          Re: Ryan Maue (#266),

          My plot utilizes data from the (ERA-interim) reanalysis run with the state-of-the-art ECMWF operational model at high resolution, current as of 2007. The ECMWF operational forecast model is the best on the planet, hands down under a variety of metrics. This data is freely available for research purposes.

          Your data shows a 0.03 gram per kilogram increase in specific humidity at 400 hPa over 20 years.

          How much global warming can we expect from 0.03 grams of water vapour per kilogram?

        • Kenneth Fritsch
          Posted Mar 6, 2009 at 9:01 PM | Permalink

          Re: Ryan Maue (#266),

          Trenberth is right on this issue. He understands the data sources especially the reanalyses. He is what I would call an “expert” as he has had a huge influence on the development of reanalysis datasets, old and new. He is the lead author of the IPCC AR4 on this topic. As for whether Paltridge agrees with Trenberth’s 2005 paper conclusions, that is a non sequitur since he failed to address them in his paper.

          Paltridge’s paper only exists because of the questionable quality of the NCEP reanalysis.

          Ryan, I think it may be premature to draw conclusions based on data covering 20 years (from SSM/I). If you can point me to the data from your preferred reanalysis I can attempt to look at the CIs involved. The case may be that the error bars are sufficiently large to disallow conclusions either way.

        • Ron Cram
          Posted Mar 6, 2009 at 9:40 PM | Permalink

          Re: Ryan Maue (#266),

          Trenberth may be correct. I do not have an opinion on the matter. But your quote of Kenneth was unfair to Kenneth. Paltridge should have addressed the Trenberth paper directly rather than write vaguely about problems with the dataset. But it is my understanding that others have used the dataset.

          You write:

          My cursory analysis of more advanced and current reanalysis data has already shown completely opposite results.

          I do not believe this is entirely correct. Your plot looks flat, not uptrending. From 1991 to 2007, the numbers look to be the same.

        • bender
          Posted Mar 6, 2009 at 9:45 PM | Permalink

          Re: Ron Cram (#282),
          Ron, respectfully, I would not be so dismissive of what Ryan says. These reanalysis datasets are complex things. He can probably help us learn quite a bit about what goes into them and what their real limitations are. I would try a more inquisitive approach. (My apologies for offering unsolicited advice.)

        • Ron Cram
          Posted Mar 6, 2009 at 10:14 PM | Permalink

          Re: bender (#284),

          I am not saying Ryan’s work may not be valuable. And I have learned from him already. I am only asking that he be fair when he quotes people and in the statements he makes. His plot is inconsistent with Paltridge but it is not “opposite” for a number of reasons. I certainly am not trying to chase Ryan off and I hope he does not take me that way.

          Ryan, forgive me if I come off a bit cantankerous at times. It is not intentional. I only want to see good science result from these discussions. Precision in expression is as important as precision in measurement. Please do continue to contribute.

  129. mugwump
    Posted Mar 6, 2009 at 3:40 PM | Permalink

    Meta discussions…

    All that matters at the end of the day is the physics.

    To follow up on my previous link to Lindzen, this paper has a lot more details. Still nearly 20 years old, so probably won’t satisfy Chris Colose, but very readable nonetheless.

  130. Hemst 101
    Posted Mar 6, 2009 at 4:06 PM | Permalink

    Thank you RomanM

    Finally, someone who does not use Excel straight-line trends!! Now study and try to explain those loess curves.

  131. Kenneth Fritsch
    Posted Mar 6, 2009 at 4:10 PM | Permalink

    Ryan, if you can avail me of your data, I could do what has become standard procedure with time series here at CA, i.e. the Nychka treatment for AR1 as applied in Santer et al. (2008). We need to put some realistic CIs on the slope of these trends before we go much further in these discussions.
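
    For reference, a sketch of that AR1 adjustment as I understand it from Santer et al. (2008): fit an OLS trend, estimate the lag-1 autocorrelation of the residuals, and inflate the standard error via an effective sample size n_eff = n(1 - r1)/(1 + r1). The series below is synthetic, just to show the mechanics:

      trend_ci <- function(y, conf = 0.95) {
        n     <- length(y)
        t_    <- seq_len(n)
        fit   <- lm(y ~ t_)
        res   <- residuals(fit)
        r1    <- cor(res[-1], res[-n])            # lag-1 autocorrelation
        n_eff <- n * (1 - r1) / (1 + r1)          # effective sample size
        se    <- summary(fit)$coefficients[2, 2] * sqrt((n - 2) / (n_eff - 2))
        b     <- unname(coef(fit)[2])
        crit  <- qt(1 - (1 - conf) / 2, df = n_eff - 2)
        c(trend = b, lower = b - crit * se, upper = b + crit * se)
      }

      set.seed(1)
      q_toy <- as.numeric(arima.sim(list(ar = 0.6), n = 240)) * 0.05 + 5
      trend_ci(q_toy)     # the AR1 inflation visibly widens the interval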

  132. Mark T
    Posted Mar 6, 2009 at 4:24 PM | Permalink

    This is not OT at all, Ron. This thread is about political content and/or agenda in reviews (in spite of the technical diversion). When I read a paper I want to know for sure that it was accepted because of its scientific merit, not because its results adhered to dogma. Is this the case here? Hard to tell since we don’t have all the information. Hint, hint, Dr. Paltridge. On the surface, however, there is good evidence that at least one reviewer had a biased opinion (with or without legitimate gripes, his statement clearly indicates a bias), and at least some anecdotal evidence that the editor chose to let this color his decision. The truth… only the shadow knows!

    Mark

  133. Posted Mar 6, 2009 at 5:56 PM | Permalink

    mugwump,

    I don’t know what the authors meant by “hand-waving” argument so you’ll have to ask them. Lindzen’s old water vapor feedback proposal has gotten a lot of study over the years, and does not seem to be credible at this point, and I don’t think he even still defends it. His more recent argument (back in 2001 I think) is the IRIS hypothesis which had more to do with cloud feedbacks than water vapor.

    As I said, there is no reason to doubt the general picture of the water vapor feedback that has evolved over the decades.

    • Jason
      Posted Mar 6, 2009 at 6:27 PM | Permalink

      Re: Chris Colose (#260),

      As I said, there is no reason to doubt the general picture of the water vapor feedback that has evolved over the decades.

      You mean other than the fact that current tropical tropospheric temperature observations are – per the Santer 2008 methodology – inconsistent with this?

    • Greg F
      Posted Mar 6, 2009 at 7:06 PM | Permalink

      Re: Chris Colose (#260),

      As I said, there is no reason to doubt the general picture of the water vapor feedback that has evolved over the decades.

      You wouldn’t happen to know of a source where this general picture is consolidated so one could review it?

    • Ron Cram
      Posted Mar 6, 2009 at 7:13 PM | Permalink

      Re: Chris Colose (#260),

      Chris, this is twice you have made this statement and it is just not true. The Infrared Iris effect Lindzen hypothesized has been observed. See my comment above, Ron Cram (#250).

    • Ron Cram
      Posted Mar 6, 2009 at 7:23 PM | Permalink

      Re: Chris Colose (#260),

      Chris, I’m sorry. I pointed you to the wrong comment. I meant Ron Cram (#237).

    • KevinUK
      Posted Mar 7, 2009 at 4:03 AM | Permalink

      Re: Chris Colose (#260),

      As I said, there is no reason to doubt the general picture of the water vapor feedback that has evolved over the decades.

      Chris, there is every reason to doubt this ‘general picture of water vapour feedback’, particularly the still-to-be-proven and, IMO, wrong assumption by the IPCC that water vapour is a strong positive feedback. This assumption is at the heart of the claims by alarmists like Hansen that we must act now in order to avoid a climate change ‘tipping point’. Without invoking the claim of strong positive feedback from water vapour there is no need to act now in order to ‘save the planet from unprecedented global warming’ due to man’s continued use of fossil fuels. If there isn’t going to be a ‘tipping point’ then how do we justify subsidising renewable energy schemes at the expense of security of supply? IMO, without this alarmist climate change propaganda the average taxpayer would not support the current levels of investment in renewable energy, which is largely justified on the basis of reducing our carbon emissions.

      Steve please don’t snip this post because it sounds like editorialising. It’s important that anyone reading this thread understands the context of Garth’s study and what it has shown. Re-iterating the ‘alarmist message’ is what the BBC is doing all the time now and it’s important that this imbalance is countered whenever possible.

      KevinUK

  134. Bill Illis
    Posted Mar 6, 2009 at 6:23 PM | Permalink

    So, after decades of measuring humidity, we still have no reliable humidity data, or at least we only have data that researchers have lots of reasons to question, and since this data is so questionable, it should not be shown to anyone.

    All we do know is that the climate models are quite accurate and nobody should be trying to question them. And there should be no doubt about the water vapour assumptions built into them.

    Does that sound at all reasonable to everyone? This is not a strawman – this is more-or-less what some are saying.

  135. JamesG
    Posted Mar 6, 2009 at 7:37 PM | Permalink

    I’m still amazed that people try to separate out water vapour and clouds as if they are separate issues. Obviously they couldn’t be more related. Lindzen doesn’t really separate them; he talks about low level cloud feedbacks as being one of the major negative feedbacks that would accrue from increased water vapour, and he is still consistently saying the same thing.
    Here’s Lindzen in 1997:
    http://www.geocraft.com/WVFossils/LIND0710.html
    Here he is in 2007:
    http://physicsworld.com/cws/article/print/26945
    ie no turnabout.

    Have a read, Chris. I get the abiding impression that AGW advocates don’t really read what Lindzen writes; they get their info 3rd hand. Hence the hearsay of skeptical scientists accepting strong water vapour feedback. The real fact is that they never ruled it out; they merely said that the data didn’t support it and that the potential negative feedbacks, including clouds, were not sufficiently considered.

  136. Posted Mar 6, 2009 at 7:54 PM | Permalink

    Ron Cram,

    For one thing, I was originally speaking of a different perspective on water vapor feedback proposed by Lindzen which was independent of IRIS. The topic of IRIS, though, has been explored in later papers questioning the methodology and assumptions, and there has been plenty of scrutiny in the primary literature that should not be discounted. E.g.,

    Click to access IRIS_BAMS.pdf

    http://adsabs.harvard.edu/abs/2002JCli…15.3719C

    Click to access acpd-1-221-2001-print.pdf

    I would write a separate post about Spencer’s paper on my site but it’s not very recent so there’s little point. Spencer’s paper has nothing to do with long-term climate feedbacks, and like many recent things coming from him, its overall importance in blogs is much greater than its importance in the actual scientific arena. All he’s seeing is a propagating wave moving to a different region of the world where the air is drier – this has nothing to do with feedback. You cannot infer feedback from a specific form of variability that’s controlled by other things.

    • Ron Cram
      Posted Mar 6, 2009 at 9:47 PM | Permalink

      Re: Chris Colose (#269),

      Chris, did you really just say that 2007 is not very recent and so there is no point in responding to Spencer’s paper? I knew that science advanced quickly but I am surprised you take this view. I take a different view. I find that most climate modelers have not even bothered to read Spencer’s paper. No climate models have attempted any modeling runs with the new information from his paper and no paper has been published attempting to refute Spencer’s paper. Rather than seeing climate science self-correct, most climate modelers are pretending these observations have not been made. It’s rather sad really.

  137. JamesG
    Posted Mar 6, 2009 at 8:26 PM | Permalink

    Oh you can apparently infer whatever you like in climate science. Witness Gavin’s rejoinder to Lindzen:

    “However, Gavin Schmidt of NASA’s Goddard Institute for Space Studies in New York believes that Lindzen’s estimate of the climate’s sensitivity is wrong. According to Schmidt, Lindzen has not properly taken into account the thermal inertia of the oceans, which means that much of the temperature rise associated with the carbon in the atmosphere today will not appear for about 20 years. He adds that Lindzen has also not accounted for the possible cooling effects of aerosols, which, if ignored, also lead to an underestimate of climate sensitivity. As regards the role of clouds and water vapour, Schmidt claims that Lindzen is unique in his belief that they act as a negative feedback, adding that there are now strong observational data to the contrary.”

    Let me see; a handwave about the heat stored in the ocean, which Pielke Snr says apparently isn’t there and which in any event is conveniently 20 years away, a handwave about the aerosols, which are so uncertain the aerosol experts can’t pin down a number or even a sign, and the “strong observational data”, which is somewhat misleading given the paucity of data and the large variance of the water vapour. Even Dessler thought there was no strong case for it before he did his latest rather iffy, short-trend analysis – which of course ignored clouds too.

    So really you can think that there is a strong positive feedback canceled out by a strong aerosol cooling which will of course disappear in 20 years and then cause rapid warming, OR you might look at the same data and draw the rather simpler conclusion that there is no strong water vapour feedback, no strong (manmade) aerosol cooling, and probably no warming in the pipeline either. It all depends how you look at it, but the pessimistic view relies on pure speculation about contradictory amplifications.

    • bender
      Posted Mar 6, 2009 at 8:31 PM | Permalink

      Re: JamesG (#274),

      Lindzen has also not accounted for the possible cooling effects of aerosols, which, if ignored, also lead to an underestimate of climate sensitivity

      Go to RC and ask Gavin how he “estimated” the cooling effects of aerosols, and then take this protocol (assuming he’ll give it to you) to a real statistician, and get his assessment.

      • JamesG
        Posted Mar 7, 2009 at 5:00 AM | Permalink

        Re: bender (#275),
        I’ve asked Gavin several things and he’s quite honest about using best guesses. To journalists, though, he always implies a certainty that isn’t warranted if you are guessing. It seems that if there is a stated range of possible input values, climate modelers tend to assume a Gaussian and use a value near the middle. But these ranges are really only just two guessed limit values, so that assumption isn’t in the least valid. For aerosols that range is vast, and Schwartz’s papers tell us clearly that every value is as likely as any other and they just don’t know which is correct. Hence, I asked Gavin if they did a real sensitivity analysis, ie testing to the full extent of the input ranges. His reply was that doing that wouldn’t be useful. Yet it’s not meant to be useful – it’s a standard test that provides a proper uncertainty range. It’s not difficult to see that such an uncertainty range would be huge, much larger than the IPCC admits, and would include massive heating and cooling scenarios. Schwartz seems quite panicky, though, about the possibility of a huge masking aerosol effect, which is why he did that work to pin down the CO2 sensitivity.

        Oddly, this Gaussian distribution assumption seems to be used only on sparse, preselected data or model outputs, where it is rarely an appropriate assumption. It never seems to apply to the actual area where it may be a correct assumption – i.e. on large amounts of error-prone data. Instead they like to just ignore or adjust data if they don’t like it, or even look for a new metric that shows what they expect to see.
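
        To illustrate how big that uncertainty range gets, a toy Monte Carlo in R – every number below is a placeholder, not an IPCC or Schwartz value: if aerosol forcing is only known to lie in a wide uniform range, the implied sensitivity range is correspondingly huge.

          set.seed(42)
          n     <- 10000
          dT    <- 0.7                    # observed warming, K (placeholder)
          F_ghg <- 2.6                    # GHG forcing, W/m2 (placeholder)
          F_aer <- runif(n, -2.0, -0.4)   # "every value as likely as any other"
          F_2x  <- 3.7                    # forcing for doubled CO2, W/m2

          S <- dT / (F_ghg + F_aer) * F_2x   # implied sensitivity, K per 2xCO2
          quantile(S, c(0.05, 0.5, 0.95))
          # The 5-95% spread runs from ~1.2 K to nearly 4 K, before ocean
          # heat uptake or other uncertainties are even considered.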

    • bender
      Posted Mar 6, 2009 at 8:34 PM | Permalink

      Re: JamesG (#274),
      Don’t forget the handwave about the “radiative imbalance” that implies the existence of the committed warming “in the pipe”. Ask at RC about that one.

      • bender
        Posted Mar 6, 2009 at 8:40 PM | Permalink

        Re: bender (#277),
        Hansen J, Nazarenko L, Ruedy R, et al. 2005. Earth’s energy imbalance: Confirmation and implications. Science 308: 1431-1435.

  138. Gerald Browning
    Posted Mar 6, 2009 at 9:00 PM | Permalink

    Ryan Maue (#202),

    Where does this non-sequitur come from in any of my threads? I haven’t said anything about climate models. Spare me? Moving on…

    In your earlier comment (#181) you stated

    Again, as I get exasperated, if you are going to try and publish a paper that bucks consensus or sheds new light on a very controversial topic, you need to come to the gunfight with a lot more than the NCEP Reanalysis data. The paper is littered with caveats about the poor or questionable quality of the data, yet it is used nevertheless, without any assessment of such deficiencies.

    My point is that the climate modelers do not state the caveats in the “results” they obtain from their models (and there are many that I have clearly indicated throughout this site). At least Paltridge et al. have the decency to state the caveats in their use of the reanalysis data. Why don’t you see how many other authors are honest enough to state those caveats? And how many of those questionable results are cited in the literature without any indication of the problems?

    Heinz Kreiss and I have come to the “gunfight” with hard mathematics that is ignored by the high quality “scientists” in atmospheric and climate “science”. Could it be that they do not want to know the truth? Have you read any of the results that I have quoted? Evidently not. My point was that you have stated that the reanalysis data is not sufficient; based on continuum mathematics and numerical analysis, neither are climate models. What does that leave to make any rational scientific argument? Not much.

    FYI, I have even seen an Editor refuse to publish an article when a numerical analysis proof could be given to show that the reviewers were incompetent. Do you think the Editor was biased?

    Jerry

  139. bender
    Posted Mar 6, 2009 at 9:40 PM | Permalink

    Ryan, try to be patient with Jerry. (He has an interesting and relevant story.) He’s not singling you out, just saying that there are many in climatology who hold this double standard that the climate models are not as junky as reanalysis data. That’s why his argument seemed like a non sequitur to you. It kinda was. It’s coming in from somewhere else.

    You may (or may not) be interested in the “exponential growth in physical systems” threads #1, #2, #3 that were moderated by Jerry.

  140. Ron Cram
    Posted Mar 6, 2009 at 11:04 PM | Permalink

    snip

    Steve- enough on this issue for now. all the points have been made.

  141. Posted Mar 6, 2009 at 11:50 PM | Permalink

    Garth,
    Your story is all too familiar. I’ve had my share of ridiculous misinterpretations of both my methods and results on a number of things. I sympathize with both you and the reviewers of your paper, though. We need to appreciate what effect it has on science to have this kind of problem thrown at us: a hugely complex problem that 1) doesn’t really respond so well to the old tools, and 2) is of monumental importance to get right, without experiment, while 3) having to share the public sphere with new kinds of very demanding clients, 4) hordes of people speaking excitedly with a range of different skill levels about their own ideas, and 5) a web world of creative thinkers like us too.

    The overload seems to have made the IPCC group much more defensive, for fairly understandable reasons, and that prevents them from taking some of the good input they really need to be benefiting from. I have not read your paper, but your letter raises the question of whether you considered the range of possible meanings of the anomaly you discuss, perhaps leading the reviewer in his haste to give you the vitriolic response you got.

    I find the same apparent omission in the work of a lot of brilliant students of the subject, like the writing of Roger Pielke and his son, and many others, and in various other fields too. There is a lot of new thinking that just does not seem to take into account how one change sets the stage for the next, the way nature actually functions. This is a broad criticism, but I think broadly valid. I’ve been watching it for decades and scratching my head as to what it means. Everyone seems to speak as if the trends they see will go on to change the system they’re in, and not trigger something else in the environment to respond in some other new way. Invariably the response to any trend is to set off a process in the opposite direction.

    The big question is why people so generally do not seem to recognize disturbance as a predictor of response. The way systems with many separate parts operate is for one thing to make one move, and then other things to make other moves. It goes on and on, ending up with systems acting as a whole, but not all at once. So, the question is, do you think your conjectures about the behavior seen in the data equally considered the things that would come next? Maybe if you had, they wouldn’t have responded as if you were overstating the importance of the finding. There are indeed big risks of the IPCC having the theory wrong. So the sharp eyes needed to see the gaps need careful approaches, to get good work through the door.

    Phil

  142. Gerald Browning
    Posted Mar 7, 2009 at 12:14 AM | Permalink

    Kenneth Fritsch (#279),

    The case may be that the error bars are sufficiently large to disallow conclusions either way.

    Well said. One of the problems with this area of “science” is that many of the definitions, e.g. the midlatitudes, are not precise. This can lead to all sorts of nebulous results. I also restate that I have pointed out the importance of the accuracy of the heating and cooling parameterizations in the equatorial region and the fact that those parameterizations are not well understood in that region.

    This is a problem both for data assimilation (where there are few observations) and for climate models where that region is the main source of heating for the earth.

    I also point out that I asked Kevin Trenberth at a seminar at NOAA about some specific details about large scale global weather models used in data assimilation and he was very ignorant about what was going on inside the models.

    Jerry

  143. Posted Mar 7, 2009 at 7:04 AM | Permalink

    I rejected a paper for publication yesterday. It was an unpleasant task, but I can guarantee that the author knew why it was being rejected and who rejected it. It was not a question of the work performed; it was because the author had incorporated a grossly misleading assumption into his data collection scheme.

  144. Steve McIntyre
    Posted Mar 7, 2009 at 10:43 AM | Permalink

    I asked Garth Paltridge if he would be willing to place the review comments online. After discussion with his coauthors, they decided not to – primarily on the basis that this would be perceived as a breach of academic propriety and not all of the authors are in a position to disregard such conventions, should matters blow up.

    However I’ve had the opportunity to read the review comments. The review containing the “cheap shot” also makes valid points. There are comments in both reviews along the lines of Ryan Maue’s comments here. If the other review had been more encouraging, the editor might have been obliged to give the paper an additional opportunity. However, it’s my opinion (and this is based on very limited experience with academic reviews) that a reasonable editor could have managed to separate the cheap shot in the second review from the rest of it and, within the norms of the trade, made a decision to reject.

    I think that there are other issues. It’s not that the reasons for rejecting this particular article are invalid; it’s the seeming failure to be equally diligent in dealing with “iffy” data if it goes the “right” way. Garth observes the workshop view that:

    the radiosonde data were too ‘iffy’ to report the trends publicly in a political climate

    As I’ve said before, the bristlecone data are also “iffy” to say the least, but they support a certain storyline and they continue to be used without apology. We’ve noted wryly in the past that most of the errors that we notice are in one direction – suggesting that people in the field are better at noticing errors that underestimate trend than errors that overestimate trend. It’s hard not to think that there’s a bit of this at work here as well.

    The form of argument against the radiosonde data was also interesting. One of two authorities against the use of the NCEP data was a non-peer-reviewed workshop proceeding http://ams.confex.com/ams/pdfpapers/57676.pdf

    It points out that there have been changes in instrumentation and algorithms over time

    Although one can’t draw any conclusion that this is the factor for the trend, it does show that instrument and algorithm changes do have significant impacts on the data.

    One could say the same thing about data sets that are used without compunction.

    It reports:

    Reporting station increase, figure 5, for the Reanalysis could also be a factor for the humidity trend.

    Could well be. But we’ve observed similar changes in the roster of surface stations – in particular, the decrease in surface stations in GHCN in the early 1990s – and people have speculated that this is a factor in the trend, yet these datasets are used without compunction.

    It observes:

    most of the trends are in the earlier years of the 50-year Reanalysis

    Perhaps so. But this is also the case with Steig et al in Antarctica.

    It observes:

    This study suggest that the future Reanalyses includes a sub-analysis using only the limited well-known, high quality, fixed number stations for GDAS, such that a baseline reference analysis for the full analysis can be established. Meanwhile, it is indispensable to conduct the parallel processes for extended period whenever a new instrument, or processing system, is introduced, such that the impact from the new instrument/process can be understood.

    More or less what Anthony Watts would say about the surface station network.

    I am not at all familiar with the ins and outs of this data set and this controversy and my opinions are expressed with that caveat. It’s not that criticisms of this paper were unjustified – it’s the asymmetry in failing to be equally diligent in reviewing papers that yield answers that people in the field “like” – of which I use the continued use of Graybill bristlecones as a type case. The inconsistency between Ababneh and Graybill at Sheep Mountain, combined with other criticisms, is just as substantive as the Bengtsson et al 2004 case against NCEP reanalysis. And yet one is rigorously applied and not the other.

    • Kenneth Fritsch
      Posted Mar 7, 2009 at 11:46 AM | Permalink

      Re: Steve McIntyre (#308),

      Thanks for the well-timed summary of what the main initial intent of this thread was. While I think that it is important to keep the sub-threads running through it separate, I would hope, however, that we could get a little more in-depth analysis of the work that Trenberth reported in “Trends and variability in column-integrated atmospheric water vapor”.

      Click to access Trenberth2005FasulloSmith.pdf

      I think I see more uncertainty (at this point with my layperson’s comprehension of the material) in the conclusions of that paper than perhaps Ryan Maue does. Certainly the recent time trends are dominated by the El Nino event of 1998 and the integrated water vapor reanalyzed data do have large spatial and annual variations.

      Trenberth appeared as anxious to be able to show a positive moisture trend as perhaps Paltridge was to show a negative one. My question would be: using the various available reanalysis algorithms, can we definitely and statistically say that a trend different from zero exists over the past 20 years, or over the time period when all/most agree that we have a reasonably valid measurement/reanalysis?

    • bender
      Posted Mar 7, 2009 at 4:20 PM | Permalink

      Re: Steve McIntyre (#308),
      “Iffy” data are ok if they support “the consensus”, otherwise they are to be avoided. Into the double-standards database she goes.

  145. Judith Curry
    Posted Mar 7, 2009 at 11:14 AM | Permalink

    Very interesting thread. First, I would like to state that I essentially agree with Ryan Maue’s comments regarding the reanalyses. The issue of tropospheric temperature trends is very difficult and has received a great deal of attention by researchers and also assessment reports; the problems are much worse for humidity and few people have even attempted to do anything with tropospheric humidity trends owing to inaccuracies in the radiosonde humidity measurements and substantial uncertainties in the satellite retrievals. Also, global weather models quickly spin up their own humidity fields and lose the signal from assimilated humidity data.

    Without having actually read Paltridge’s paper, based on what I have read here, I would agree that rejecting it from J. Climate was probably appropriate, unless the paper included a broader discussion of the range of humidity measurements available and looked at other reanalysis products, which apparently the paper did not do. It is arguably time to tackle the tropospheric humidity issue, but this should be done from the perspective of comparing multiple data sources and assessing the uncertainty, before publishing trend analyses in the context of saying something about climate change.

    That said, the comments made by the reviewer of Paltridge’s paper were highly inappropriate, and one would hope that an editor would filter such comments out in any decision, or even reject a review with such comments. Sending such comments to an author is inappropriate in my opinion. Inappropriate comments are not infrequently made by reviewers, and they are surely not made only by defenders of the IPCC, as was the case in the Paltridge review. A paper of mine on hurricanes and global warming received some inappropriate reviews from people who were skeptical of AGW and/or its links with hurricanes, with words like “paranoid”, “bully pulpit”, and “hypocritical” used in direct reference to me personally in the reviews. The paper was eventually published in the Bulletin of the American Meteorological Society (Curry et al. 2006), but this was not an easy or straightforward decision for the editor.

    So what is going on? While such inappropriate review comments are the exception and not the rule, are climate researchers (and hurricane researchers) uniquely unprofessional in science in terms of the review process? I would argue that prior to hurricane Katrina, papers about hurricanes and global warming would not have received such inappropriate reviews. Once the issue became highly policy relevant and politicized and every paper was accompanied by media attention, then emotions flared not just about the science, but also about the policy implications, and these showed up in the review process. I would put forward the hypothesis that such reviews in the climate field probably became more frequent subsequent to the IPCC second assessment report, when the IPCC reports started to have more policy relevance.

    Over the past few years, I have been asked to review many papers on the hurricane/global warming issue, but not very many lately. Lately, I have been asked to review a lot of papers on arctic climate change, especially sea ice. My reaction as a reviewer for a paper on a policy relevant issue, for better or for worse, is frankly different than for a paper that is mainly of academic interest. Based on a quick read of the title and list of authors, I quickly assess whether this is a policy relevant paper, and I identify any “emotional” reaction that I might have based on my own preconceived notions of the subject and what a likely press release might look like. Then I try to put my own emotions on a shelf and ignore them as I consider the arguments made in the paper and how they are presented, hoping to learn something new. After I have done a technical review based on the paper’s content, I then revisit my concerns about the general topic of the paper or the authors that I previously placed “on the shelf”. My reviews are much harder and more inclined to recommend rejection of any policy relevant paper (whether or not it supports my preconceived notions on the subject), since I personally think such papers should be held to a higher standard. I am actually harder on weak papers that support my preconceived notions, since I don’t want any weak papers out there that will reflect poorly on the broader community (including myself) that is working on these topics.

    Is this an appropriate response for a reviewer faced with such a paper? I have NO IDEA. As a human being, I can’t pretend that I don’t understand the policy relevance of these papers, and as a scientist and as a citizen I want to see scientific progress made on topics of societal relevance. Many of these papers seem to me to be prematurely published given their policy relevance. But if faced with the same level of prematurity in a paper without policy relevance, I would say go ahead and publish it, get the ideas out there and circulating.

    So this raises the issue: what should the role of journal peer review be for policy-relevant scientific papers? The outcome of the peer review process can be quite random for such papers, depending on the editor’s selection of reviewers, the reviewers’ responses, and the editor’s decision. A number of journals even require that the authors recommend 5 reviewers; assuming that the editor selects some of these reviewers, how can we expect an unbiased review? Being an editor is hard work, but it is very easy to be a poor editor. Any paper can probably get published somewhere, if the author is sufficiently persistent. So does “peer reviewed” really mean much for policy-relevant papers? Probably not. Professional societies and other groups publishing journals need to establish better policies for editors so that this process makes more sense, especially for policy-relevant papers.

    The blogosphere plays an important role in discussing and reviewing such papers. The media does also. The Hoyos et al. (2006) paper (follow on to Webster et al. 2005) skated through the review process in Science. Because of all the media attention that Webster et al. (2005) received, during the press embargo period, journalists sent the Hoyos et al. paper out for review to apparently quite a large number of climate researchers, mathematicians, and statisticians, a number of whom were quoted in media articles or who emailed us personally with questions. So the media managed a far more thorough “peer review” on this paper than did Science.

    Assessments such as the IPCC, NRC reports, and CCSP Synthesis and Assessment Reports become increasingly important for policy relevant topics; while they only include peer reviewed papers, people with a diversity of perspectives are invited to participate in various aspects of the assessment process.

    Personally, I am now much slower to publish anything of policy relevance, although I frequently present my research in a variety of venues and write papers in the “gray literature”. As a senior scientist, I am not particularly worried about the publication rate that appears on my c.v. or the impact score of the journal. But young scientists are under far more pressure to publish, and are rewarded for publishing in high impact journals.

    • Kenneth Fritsch
      Posted Mar 7, 2009 at 12:04 PM | Permalink

      Re: Judith Curry (#309),

      As a senior scientist, I am not particularly worried about the publication rate that appears on my c.v. or the impact score of the journal. But young scientists are under far more pressure to publish, and are rewarded for publishing in high impact journals.

      Yeah, Judith – who is counting after the first several hundred papers and a few books?

      On the other hand, we have Ryan Maue, who as a graduate student, has several papers already published. That should impress potential employers.

      • Posted Mar 7, 2009 at 8:55 PM | Permalink

        Re: Kenneth Fritsch (#314), appreciate the friendly compliments, as well as those from Judy & bender. Re: Garth Paltridge (#323), throughout your paper there are plenty of cogent points on the limitations of the NCEP Reanalysis. The data assimilation procedures during the past decade have evolved quite a bit, especially since the time of the NCEP Reanalysis. Indeed, a version of the MRF, as it was called in 1997, which is the forecast model that the NCEP Reanalysis was fashioned on, is still being run. Having a frozen model is very beneficial in order to determine the positive or negative effects created when the observing system changes.

        As part of the data assimilation procedure, the data is given a weighting based upon a laborious and loosely arbitrary process of defining its error characteristics. The radiosonde network has been around for over 50 years, and typically most operational forecast NWP models as well as reanalysis models treat the raobs similarly. This is much different from how NCEP (GFS) vs. ECMWF handles satellite radiance temperature retrievals (3D vs 4D Var). Thus, over a long time period, the differences between the two model “analyses” are largest over the oceans and in areas where the in situ/surface and raob networks are sparsely distributed. But where you have a raob, like over Hawaii or the coast of Antarctica, the models tend to agree very closely. The raob gets very large weighting based upon (relatively) well-known characteristics.

        From one of the papers (download) I worked on while interning at NRL Monterey, I computed a simple analysis difference for a variety of forecast models and variables, including thickness and temperature. Here is one of the figures, which on its own is better than reading the text. RMSD is the root-mean-square difference over a 6-month period between analyzed 500 hPa mid-tropospheric temperatures. The model analyses agree best where you have the radiosondes as anchors. In the Southern Ocean, you are at the mercy of satellite retrievals and there are large differences. You can easily pick out the radiosonde network from this type of image. Ditto if you use the reanalysis data.

        As you can see, there is a radius of influence that definitely extends out of the individual grid cell.

        As we discussed in our paper, if one takes out the ‘no radiosonde data’ squares from the output of the NCEP model, the averages of the remaining squares (about 2% of the total) tell much the same story as when one uses all of them. It was because of this that we tend to believe that, if there are spurious trends in the output of the NCEP model, then they most likely derive from the radiosonde input data rather than from the behaviour of the model. But maybe the belief is false. Any comment from someone?

        This is not the typical way of conducting a sensitivity study since the reanalysis models are multivariate spectral models (not grid point / lat/lon) and the radius of influence of data is not limited to individual grid cells. I don’t buy this explanation at all.
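
        As a crude illustration of how observation weighting enters an analysis, here is a scalar optimal-interpolation sketch in R. It is purely illustrative and far simpler than the multivariate variational schemes described above; the numbers are made up.

        # Scalar sketch: blend a background value and an observation by error variance
        analysis <- function(x_bg, y_obs, var_bg, var_obs) {
          w <- var_bg / (var_bg + var_obs)   # weight given to the observation
          x_bg + w * (y_obs - x_bg)          # nudge the background toward the observation
        }

        # A trusted raob (small error variance) dominates; a noisy retrieval barely moves it
        analysis(x_bg = 5.0, y_obs = 6.0, var_bg = 1.0, var_obs = 0.1)   # about 5.91
        analysis(x_bg = 5.0, y_obs = 6.0, var_bg = 1.0, var_obs = 10.0)  # about 5.09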

        • Posted Mar 8, 2009 at 8:07 AM | Permalink

          Re: Ryan Maue (#327),

          This is not the typical way of conducting a sensitivity study since the reanalysis models are multivariate spectral models (not grid point / lat/lon) and the radius of influence of data is not limited to individual grid cells. I don’t buy this explanation at all.

          I’m missing a detail of your argument. You point out that the cells with radiosonde measurements influence the cells around them. This is not only unsurprising, it seems expected. You also point out that the best agreement between models is in the cells where actual measurements took place. I don’t see how that means that looking at the 2% of cells with direct measurements and comparing them to the overall result isn’t worthwhile. On first principles, it seems like a sensible test that the reanalysis didn’t change the direction of the measured trends.

          A different argument which I would understand is that the 2% of cells with actual measurements are too few to infer the values of the other cells. Of course then one might ask what the purpose of doing the reanalysis at all would be.
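
          For concreteness, the check Paltridge describes can be sketched in R with toy data. This is purely illustrative; q and has_raob are made up, not the authors’ data or code.

          # Toy check: trend of the mean over all cells vs. radiosonde cells only
          set.seed(1)
          nlon <- 144; nlat <- 73; nt <- 361
          q <- array(rnorm(nlon * nlat * nt), dim = c(nlon, nlat, nt))   # fake humidity field
          has_raob <- matrix(runif(nlon * nlat) < 0.02, nlon, nlat)      # ~2% of cells have a sonde

          trend <- function(y) unname(coef(lm(y ~ seq_along(y)))[2])     # OLS slope per month

          all_mean  <- apply(q, 3, mean)                                 # mean over all cells
          raob_mean <- apply(q, 3, function(f) mean(f[has_raob]))        # radiosonde cells only

          c(all_cells = trend(all_mean), raob_cells_only = trend(raob_mean))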

    • Willem Kernkamp
      Posted Mar 7, 2009 at 7:58 PM | Permalink

      Re: Judith Curry (#309),

      You bring up a good point Judith:

      So this raises the issue: what should the role of journal peer review be for policy-relevant scientific papers? The outcome of the peer review process can be quite random for such papers, depending on the editor’s selection of reviewers, the reviewers’ responses, and the editor’s decision.

      I think scientific peer review is due for an overhaul. Perhaps there should be a class of “audited” journals where every assertion and calculation is replicated by an independent reviewer prior to publication. As an engineering student, I found a publication that purported to systematically vary the shape of a sail in a wind tunnel using just three independent variables. At first I was very excited, as this would have been very helpful to my work. Unfortunately, I found out that a complete definition required four independent variables. This meant that the shapes were insufficiently defined and essentially useless. So you see, the problem is not limited to climate science.

      There is just a thin veneer of due diligence when underfunded journalists cite under-reviewed scientific studies to write big opinion pieces.

      Thank you for your extensive comments and your many contributions to the discussions on climate science.

    • Geoff Sherrington
      Posted Mar 9, 2009 at 7:12 PM | Permalink

      Re: Judith Curry (#309),

      Judith, in some respects I am a more senior scientist than you, and I predictably have a different reaction from yours. One important seniority difference is that I’m retired now and can say things without fear of jeopardy to my future progress. I am not without grace, and I congratulate you on your participation in seminars and blogs like this. If only more of your colleagues were as enlightened. So, please take the following as an unemotional comment not intended to offend you.

      But, I could NEVER state as you did

      Without having actually read Paltridge’s paper, based on what I have read here, I would agree that rejecting it from J. Climate was probably appropriate, unless the paper included a broader discussion of the range of humidity measurements available and looked at other reanalysis products, which apparently the paper did not do.

      I have read some of Prof Paltridge’s earlier work (but not this paper) and I pay respect to a D.Sc. – they were quite rare degrees and exceptional in our time. Regarding your comment, I feel a lot of sympathy for Garth, if for no other reason than that he might be doing a stepwise reanalysis and has to start somewhere. Has he said that he is not going to proceed with further refinement?

      There are analogies with “global temperature reconstruction” whatever that might be. The science community has now reached the stage where it is desirable to go back to first principles, to the very early data, and rebuild it on as solid a foundation as can be agreed. There have been so many ad hoc adjustments and assumptions, not always well-supported by replicated experiments, that the temperature record is a mess. The very first surface station that I looked at in the KNMI compilation last December had adjustment errors of over 1 deg C in annual averages and the second station I looked at was equally bad.

      You will see that I have written nothing political here. The problem is the poor state of the science that has been done to date. To the extent that Garth was commencing to rectify that situation with relative humidity, he deserves assistance, not rejection.

      Opportunistic science has adopted incorrect basic data and applied various types of “sophisticated” analysis to it. Sorry, good science does not work this way. First, you get your data as good as is economically and feasibly possible. Only then do you start your fancy analysis, using error calculations that are in touch with reality and not just so many sigmas of a data set you happen to have taken on faith. In my first encounter with national/global temperatures, about 1990, the weather stations were selectively chosen, and they have not really been corrected because the dog ate the homework (which some of us just happened to keep).

      Scientists who have worked for many decades seem less prone to dismiss fundamentals than the bright young things who feed on pieces of dubious data. On that I think we can agree and be friends.

      • Steve Koch
        Posted Nov 12, 2010 at 11:58 PM | Permalink

        Wonderful point! All the fancy analysis based on dubious data is a waste of time. The first step is to get the data in good shape. After that has been accomplished, then do the fancy analysis.

        It may be, for example, that the tree rings of the last several decades were actually OK and that they did not match the temp records because the temp records are not OK. We have a huge number of papers based on bad data. IIRC, the Met Office is now laboriously recreating the work that Phil Jones did and “lost”. That seems like a good start, but the process has to be transparent if it is to inspire confidence.

  146. Posted Mar 7, 2009 at 1:24 PM | Permalink

    By the way, I should forward my thanks as well to Ryan Maue and others who have contributed to (in segments) an interesting discussion.

  147. Gerald Browning
    Posted Mar 7, 2009 at 4:57 PM | Permalink

    snip- now, now. Bite your tongue when talking of other posters. I really don’t want any fights when I’m away.

  148. Gerald Browning
    Posted Mar 7, 2009 at 5:13 PM | Permalink

    Steve McIntyre (#308),

    I agree with Kenneth Fritsch’s comment (#311). The double standard in these areas of hand waving is painfully obvious.

    Jerry

  149. Garth Paltridge
    Posted Mar 7, 2009 at 7:32 PM | Permalink

    Let me thank all of those who have contributed to the debate and argument on this matter (these matters?). It has been an eye-opener in many ways, and we have learnt a lot.
    May I pick up on one scientific issue that bothers me and ask for some help. Let us assume for the sake of argument that the observed humidity information fed into a re-analysis model is completely accurate. Again for the sake of argument, let us assume that it has no long-term trend. How might it be that the reanalysis model can introduce a spurious long-term trend if it is being continually ‘nudged’ back towards the observations? I can see that it might happen in those grid squares of the model that in fact have no data input, and in turn this may introduce a trend in the zonal averages. I cannot see (presumably because of my ignorance of the exact operation of the data assimilation process in models) how a trend could emerge in those grid squares that do have actual input of real data.
    As we discussed in our paper, if one takes out the ‘no radiosonde data’ squares from the output of the NCEP model, the averages of the remaining squares (about 2% of the total) tell much the same story as when one uses all of them. It was because of this that we tend to believe that, if there are spurious trends in the output of the NCEP model, then they most likely derive from the radiosonde input data rather than from the behaviour of the model. But maybe the belief is false. Any comment from someone?
    Apropos of which, in the specific context of long-term trends in tropospheric humidity, it seemed to us that an advantage of the NCEP re-analysis over other re-analyses (ERA-40, for instance) is that it does not complicate the issue by introducing various types of satellite humidity data into the record as they become available. It does for temperature, of course.
    Once again, many thanks for everyone’s input.

    • Ron Cram
      Posted Mar 8, 2009 at 12:26 PM | Permalink

      Re: Garth Paltridge (#323),
      Re: Ryan Maue (#327),

      Garth Paltridge raised an interesting question and Ryan Maue provided a thoughtful answer. However, I am not certain I follow.

      Paltridge asked:

      How might it be that the reanalysis model can introduce a spurious long-term trend if it is being continually ‘nudged’ back towards the observations?

      Paltridge further comments:

      I cannot see (presumably because of my ignorance of the exact operation of the data assimilation process in models) how a trend could emerge in those grid squares that do have actual input of real data.

      This seems to me a very fair question. Perhaps Maue answered it brilliantly, but I am not able to follow.

      Maue answers in part:

      As part of the data assimilation procedure, the data is given a weighting based upon a laborious and loosely arbitrary process of defining its error characteristics. The radiosonde network has been around for over 50 years, and typically most operational forecast NWP models as well as reanalysis models treat the raobs similarly. This is much different from how NCEP (GFS) vs. ECMWF handles satellite radiance temperature retrievals (3D vs 4D Var). Thus, over a long time period, the differences between the two model “analyses” are largest over the oceans and in areas where the in situ/surface and raob networks are sparsely distributed. But where you have a raob, like over Hawaii or the coast of Antarctica, the models tend to agree very closely. The raob gets very large weighting based upon (relatively) well-known characteristics.

      From one of the papers (download) I worked on while interning at NRL Monterey, I computed a simple analysis difference for a variety of forecast models and variables, including thickness and temperature. Here is one of the figures, which on its own is better than reading the text. RMSD is the root-mean-square difference over a 6-month period between analyzed 500 hPa mid-tropospheric temperatures. The model analyses agree best where you have the radiosondes as anchors. In the Southern Ocean, you are at the mercy of satellite retrievals and there are large differences. You can easily pick out the radiosonde network from this type of image. Ditto if you use the reanalysis data.

      Here’s a question for Ryan Maue. The phrase “tend to agree very closely” is less than precise. Are you agreeing that a trend emerges in the area where the model is nudged back to the radiosonde data but the trend is less steep than over the oceans? Or are you saying the model holds to the radiosonde data and no spurious trend is introduced?

      • Posted Mar 8, 2009 at 8:27 PM | Permalink

        Re: Ron Cram (#352), I am talking about the model analysis, so the initial fields that go into the forecast NWP model. There are about 700 radiosondes that get gobbled up every 12 hours and these are most densely concentrated in the developed nations of the Northern Hemisphere. The issue of representativeness creeps in with any point observations or profiles. Meshing the radiosondes in a dense data network such as the US is much easier than figuring out the North Pacific without in situ surface obs or radiosondes. This is where differences in data assimilation methodology and implementation create the largest analysis differences or uncertainty — which is related to future forecast error.

        • Ryan O
          Posted Mar 8, 2009 at 9:24 PM | Permalink

          Re: Ryan Maue (#379),
          Forgive me if this sounds dense, but based on your previous explanation (#327), the method of combining the radiosonde and satellite measurements seems less than ideal. I do not understand why weighting would be used at all.
          .
          It would seem to be far more correct to calibrate the satellite information against the radiosonde data. This yields 2 cases:
          .
          A. If a good calibration can be obtained, then the satellite data could (and should) be used in lieu of the radiosonde data. The radiosondes would provide a check for accuracy/drift of the satellite measurements over time and provide a solid overlap when calibrating a new satellite or instrument.
          .
          B. If no good calibration can be obtained, then the satellite data is measuring a different quantity than the radiosonde data and the two data sets should not be merged or spliced by any method. To say otherwise would be analogous to saying that GISTEMP and UAH could be merged for trend analysis, and I imagine very few would call that a good idea.
          .
          I don’t understand why, if a good calibration could be obtained, there would be any weighting of radiosonde data, as this would increase the noise in the data due to biases between the multitude of radiosondes.
          .
          I don’t understand how, if a good calibration cannot be obtained, one can justify merging the data sets by any method – weighting or otherwise.
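          .
          To make the two cases operational, one could regress collocated satellite retrievals on radiosonde values and inspect the slope, offset, and residual scatter. A toy R sketch with made-up numbers follows; no actual retrieval algorithm is implied.

          # Toy calibration check: collocated satellite vs radiosonde humidity (made-up data)
          set.seed(42)
          q_raob <- runif(500, 1, 15)                        # "radiosonde" specific humidity, g/kg
          q_sat  <- 0.9 * q_raob + 0.5 + rnorm(500, sd = 1)  # hypothetical satellite retrieval

          fit <- lm(q_sat ~ q_raob)
          coef(summary(fit))   # slope near 1 and a small offset suggest case A (calibratable)
          sd(residuals(fit))   # large residual scatter relative to the signal suggests case B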

        • Ron Cram
          Posted Mar 8, 2009 at 10:58 PM | Permalink

          Re: Ryan Maue (#379),

          Forgive me, Ryan, but the answer is less clear to me now than before. Perhaps you could start by explaining exactly what the model outputs are. From the way Paltridge worded the question, it seemed as though the models were putting out gridded cell humidity data along with other data. If this is not so, what are the model outputs? Do they give us gridded cell temps? Do they give us ocean surface temps? Information to calculate ocean heat content? Surface area experiencing low cloud cover?

          I would be interested in Curt Covey’s answer to this as well.

  150. Posted Mar 8, 2009 at 9:59 PM | Permalink

    //”It would seem to be far more correct to calibrate the satellite information against the radiosonde data.”//

    It is.

    • Ryan O
      Posted Mar 8, 2009 at 10:12 PM | Permalink

      Re: Chris Colose (#382), Are you saying it is more correct but is not done, or (for the purpose of determining water vapor content) it is correct and it is done?
      .
      If so, why are the radiosondes included at all?

  151. Kenneth Fritsch
    Posted Mar 9, 2009 at 9:04 AM | Permalink

    If you believe the information that is available in those scholarly documents or academic resources is wrong, write your own papers and win your Nobel Prize; don’t attack the messenger. You’ve asked questions and I believe I answered them. Those who get much of their information from blogs like WUWT or “Swindle videos” may not like those answers, but that’s life.

    In terms of a definitive explanation, I am going to say Colose but no cigar.

  152. Kenneth Fritsch
    Posted Mar 9, 2009 at 9:37 AM | Permalink

    Ryan Maue, if we assume that Kevin Trenberth has the seminal paper on atmospheric water vapor products, “Trends and variability in column-integrated atmospheric water vapor”, then I have the distinct view that we only have water vapor data that would pass muster with Trenberth for the period 1988 forward, and only over the oceans, in the form of the RSS SSM/I measurements/reanalysis. We have some radiosonde data that is trusted by Trenberth, but that is only over land. I am not sure where satellite measurements/reanalyses starting in 1979(?) stand.

    I have been reading other studies that take into account all these measurements and not all would conclude what was concluded by Trenberth in the paper noted above. Below I have excerpted some information about SSM/I.

    The RSS SSM/I geophysical dataset consists of data derived from observations collected by SSM/I instruments carried onboard the DMSP series of polar-orbiting satellites. These satellites are numbered:

    F08 SSM/I Jul 1987 to Dec 1991
    F10 SSM/I Dec 1990 to Nov 1997
    F11 SSM/I Dec 1991 to May 2000
    F13 SSM/I May 1995 to present
    F14 SSM/I May 1997 to Aug 2008
    F15 SSM/I Dec 1999 to present (Beacon corrected after Aug 2006)

    There are gaps within these data. If you select a date for which no data is available, either a list of acceptable dates will appear, or a blank map with text stating “Data not available” will be posted.

    http://www.ssmi.com/ssmi/ssmi_description.html#ssmi

    The entire SSM/I ocean data set has been completely reprocessed. As of September 13, 2006, all SSM/I data files have been updated from Version-5 to Version-6. Version-5 SSM/I data have been archived off-line and are no longer available except by special request.

    http://www.ssmi.com/ssmi/ssmi_description.html

    I was most curious whether the complete reprocessed data as noted above would affect any of the Trenberth paper conclusions.

    Also while I could find freely available RSS SSM/I data online it is contained in a huge number of separate files (by satellite). Do you have a link to data that are in a more convenient form?

    In the Trenberth paper the CIs were inferred by Monte Carlo sampling, but the limits seemed rather narrow based on the time series graph I saw for 1988-2003, at least from the perspective of a layperson’s viewing.
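
    For reference, one common Monte Carlo approach to trend CIs under autocorrelation is a residual block bootstrap. A minimal R sketch follows; whether this matches Trenberth’s exact procedure, I cannot say, and the block length is a tuning choice.

    # Residual block bootstrap CI for a trend slope (sketch)
    block_boot_trend <- function(y, block = 24, nboot = 2000) {
      n <- length(y); t <- seq_len(n)
      fit0   <- lm(y ~ t)
      slope0 <- coef(fit0)[2]
      res0   <- residuals(fit0)
      starts <- 1:(n - block + 1)
      slopes <- replicate(nboot, {
        # stitch random blocks of residuals together, re-attach the fitted trend, refit
        idx <- unlist(lapply(sample(starts, ceiling(n / block), replace = TRUE),
                             function(s) s:(s + block - 1)))[1:n]
        yb <- slope0 * t + res0[idx]
        coef(lm(yb ~ t))[2]
      })
      quantile(slopes, c(0.025, 0.975))   # bootstrap 95% CI for the slope
    }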

    • Posted Mar 9, 2009 at 9:57 AM | Permalink

      Re: Kenneth Fritsch (#399), RSS has the orbit files on their website from each individual satellite. There are also monthly means for each month here, with a few integrated moisture products. The data are available and I could easily read them, but I am not sure yet what they really represent…

      • Ryan O
        Posted Mar 9, 2009 at 10:29 AM | Permalink

        Re: Ryan Maue (#400), As popular as this thread is, I’m not sure you noticed my question:
        .
        Ryan O (#380)
        .
        Chris stated they are calibrated to the radiosonde data, but did not elaborate if the calibration for the humidity measurement was good enough to allow satellite data to be used in lieu of radiosonde data. Your explanation from Ryan Maue (#327) would indicate otherwise.

        • Posted Mar 9, 2009 at 11:43 AM | Permalink

          Re: Ryan O (#403), your concerns are well founded. I only discussed the combination of satellite data and radiosondes in the context of the model data assimilation. The complicated variational techniques are multivariate, and since many variables are not sensed directly (e.g. temperatures from satellite radiance profiles), they are produced by the model integration.

          Where there are radiosonde profiles of humidity with height, one must consider representativeness, or how well the point observation represents the surrounding environment. The issue of cross-calibration and verification with satellite data must be the concern of a thousand different papers in the data assimilation field. I would have a lot of reading to do before offering more than speculation.

        • Ryan O
          Posted Mar 9, 2009 at 11:48 AM | Permalink

          Re: Ryan Maue (#407), Thank you. Looks like I have some reading to do myself. I have learned quite a bit from your discussions so far.

      • Kenneth Fritsch
        Posted Mar 9, 2009 at 12:27 PM | Permalink

        Re: Ryan Maue (#400),

        Ryan, your reply forced me to go back over some of the links that I had previously reviewed and I can now see where I can obtain monthly RSS SSM/I data. It’s in binary files — and I need practice using R to download binary files.

        As you have noted in another reply, I am finding that there are numerous and subtle inputs into the reanalysis of atmospheric vapor data. Sources like the Trenberth paper do a decent job, in my estimation, of explaining some of these inputs and the perceived limitations.

        • Posted Mar 9, 2009 at 12:54 PM | Permalink

          Re: Kenneth Fritsch (#409),

          Bill Gray has issued a paper (non-peer-reviewed) for the Heartland Conference in NYC. He has provided a PDF on his website at CSU. His Figure 6 shows the 400 hPa specific humidity from the NCEP Reanalysis from 1948-2008 as evidence of a downward global trend, which is similar to Paltridge et al. (2009).

          One problem with the Gray paper: there are scant references. It is easy to peddle this out to the media, but it is light years away from being publishable.

        • Posted Mar 9, 2009 at 1:02 PM | Permalink

          Re: Kenneth Fritsch (#409), I would be interested in seeing the agreement between the various reanalysis datasets and the RSS data.

        • Ron Cram
          Posted Mar 9, 2009 at 1:33 PM | Permalink

          Re: Ryan Maue (#412),

          Ryan, I asked a few questions back at Ron Cram (#385), which you never addressed. I would find it very helpful and would appreciate it very much if you could find the time to address those questions. In addition, are the outputs from a reanalysis the same as the outputs of a GCM? Or is the reanalysis model different from a common GCM, with different outputs?

        • Posted Mar 9, 2009 at 1:57 PM | Permalink

          Re: Ron Cram (#414). Reanalysis model output is temperature, pressure, wind, and a bunch of other variables, which may or may not have been part of the observations initially ingested during the data assimilation process. The reanalysis updates every 6 hours, so it is re-initialized with new observations each forecast cycle. A climate model does not ingest new observations every 6 hours but may update SST on monthly time scales, to maintain the correct ENSO structure, etc.

          Depending upon the experiment at hand, the same variable fields can be output to files and compared to reality, like a climate model hindcast. These have had mixed success, which is a charitable way of saying little success.

        • Ron Cram
          Posted Mar 9, 2009 at 2:27 PM | Permalink

          Re: Ryan Maue (#415),

          I appreciate your honesty regarding the mixed success. Can you provide me with a textbook or software manual or anything that would describe the totality of the inputs and outputs available? I have played with EdGCM but did not find it particularly useful for what I was looking for.

        • Posted Mar 9, 2009 at 3:05 PM | Permalink

          Re: Ron Cram (#417), a great place to start is the overview reference on the recently completed JRA-25. There are some detailed comparisons with other reanalysis models as well as honest dialogue about the limitations of their model.

  153. Edouard
    Posted Mar 9, 2009 at 10:21 AM | Permalink

    @Chris Colose,

    Thank you for your friendly answer. I really appreciate that! 🙂

    The summer of 2003 ( not 69) 😉

    http://www.sciencemag.org/cgi/content/abstract/303/5663/1499

    “Multiproxy reconstructions of monthly and seasonal surface temperature fields for Europe back to 1500 show that the late 20th- and early 21st-century European climate is very likely (>95% confidence level) warmer than that of any time during the past 500 years.”

    Best regards

    Eddy

  154. Edouard
    Posted Mar 9, 2009 at 10:59 PM | Permalink

    @Chris Colose
    I don’t agree with you about the MWP and the LIA. The scientist Mangini has found them in China and Chile – all over the world. Asked why some scientists disagree, he explains that he thinks they want to save the planet. That is enough explanation for me! No need to tell me the same odd things over and over again for years and years! But at least you are friendly!

    But on to the equilibrium and radiative budget. If we speak about the influence of the activity of the sun, even 50 years is too much delay to be accepted by AGW scientists, even though we know that Dansgaard-Oeschger events, triggered by very small changes in the activity of the sun, have delays of 100 years.

    Warm periods like the Holocene begin with a delay of sometimes more than a thousand years. The warmest period (the optimum) is even 4000 years too late.

    But why should CO2 have a delay? In fact, a doubling of CO2 should give us not even 1°C. Locally the radiative budget changes from day to day; changes of much more than 1° happen every day. Only if the world were heating up without interruption could there be a delay. For years we have had no heating of the oceans. There will be no delay for this heating, because it doesn’t exist!

    At the beginning of the Holocene we have a temperature rise of 5 to 9°C in a few decades, without any rise in greenhouse gases. There is no delay. Except that it began hundreds of years too late.

    The Holocene optimum didn’t make THE big difference. Where is this big delay from the greenhouse gases, if the temperatures rise more than 5° without them? Why should the earth have needed them afterwards? There must be logic and physics not yet known to explain this. If you heat a pot on a stove, nobody would think that the water vapor heated it for the last 2 minutes. It was still the stove.

    Best regards
    Eddy

    • bender
      Posted Mar 9, 2009 at 11:04 PM | Permalink

      Re: Edouard (#341),
      Eddy, this discussion is OT here. It belongs in “unthreaded”. Thanks for your contributions.

    • Jeff Alberts
      Posted Mar 10, 2009 at 9:11 AM | Permalink

      Re: Edouard (#341),

      The Holocene optimum didn’t make THE big difference. Where is this big delay from the greenhouse gases, if the temperatures rise more than 5° without them? Why should the earth have needed them afterwards? There must be logic and physics not yet known to explain this. If you heat a pot on a stove, nobody would think that the water vapor heated it for the last 2 minutes. It was still the stove.

      I don’t believe there are unknown physical laws (at least nothing major), but there are unknown or extremely poorly understood processes involved. Anyone who claims to know what’s going to happen when, with respect to weather and climate (beyond broad generalities, like “it will be warm in the summer”), is deluding themselves pretty severely. For example, you say:

      even though we know that Dansgaard-Oeschger events, triggered by very small changes in the activity of the sun, have delays of 100 years.

      But I doubt there is any real certainty in that statement. There may be a correlation, but is there an explanation for the “delay” that makes any more sense than the “heat in the pipeline” theory?

  155. Edouard
    Posted Mar 10, 2009 at 12:59 AM | Permalink

    @bender
    I “believe” in science. I have always loved it, and I have tried to find answers to my questions for years and years (and years). To express them in English is not easy for me. I was very happy to find German blogs to communicate on, but I got no answers to my questions. I got answers to questions I didn’t ask – “questions” from the “sceptical zoo”, as the German scientist Georg Hoffmann calls them. And not only did he say this; similar odd things happened to me on Stefan Rahmstorf’s blog. I feel like I’m trapped in a very bad B-movie. Should I think that hundreds of scientists are lying to me? I just couldn’t. Or could I? Do I have to?
    I don’t know any more how to get real answers from AGW scientists. I don’t know which part of their answers could be science and which religion. I’m a little bit desperate and a little bit angry.
    In the German language and in German culture I know and understand the little tricks of Georg Hoffmann and Stefan Rahmstorf, and I know that

    I

    never lie. The only reason I could be wrong in this sad game is if I were a moron. Those people would like others to think that I am. They insult me and even insult Mr. Pielke, as I’ve linked above. In my opinion they insult

    the whole world

    .

    One time in my life I must have said this clearly!

    Best regards and thanks for the blog
    Eddy

  156. bender
    Posted Mar 10, 2009 at 9:16 AM | Permalink

    EW, can we move #341 onward to “unthreaded”? Some things worth replying to here.

  157. Kenneth Fritsch
    Posted Mar 10, 2009 at 4:20 PM | Permalink

    I wanted to make three points with this post:

    I agree that the OTs need to move to Unthreaded in order to keep this thread on track – or at least on the track I had in mind, which is looking in more detail at water vapor reanalysis.

    Ryan Maue, I have found that I can, by brute force, get zonal atmospheric water vapor trends using the graphic displays at the RSS website here: http://www.remss.com/idx/ion- The displays at that link can be made to show zonal regions of a selected longitude width and latitude extent, and they give a monthly mean for that area of the globe in the graphic caption.

    Finally, after rereading the Paltridge paper, I can see the point that the authors are making and at the same time see the criticisms that Ryan Maue has lodged. The authors are obviously making the point that the vapor content trend in the atmosphere has a very critical bearing on climate model validation, and that, this being the case, the data must be processed and reanalyzed, if at all possible, into a more reliable, and therefore more acceptable, form. They use the longer time series available with the NCEP water vapor data set with reservations, while attempting to use the part of the data set that other authors have considered more reliable (at higher water vapor concentrations) and the time periods considered more reliable. The authors do allude to the satellite data having problems also.

    Now, if one were in a position to show more reliable reanalysis data for atmospheric water vapor that counters the NCEP data, I can see the concern with publishing a paper that does not directly address that issue. Another potential issue with the paper, I suspect, would be the implication that the atmospheric water vapor trend is negative by way of the generally suspect NCEP water vapor reanalysis. If one looks at that implication as a challenge to others to provide more reliable water vapor data, it appears to this layperson more benign than reading it as evidence contradicting climate model theory based on a questionable data set that ignores perhaps better ones.

    In the end, the questions I would like to see addressed are: what is the best reanalysis product vis-à-vis atmospheric water vapor content trends (and why), and what are reasonable confidence limits for trends from the currently available reanalysis data sets?

  158. Christopher
    Posted Mar 12, 2009 at 11:55 AM | Permalink

    Re: 340 “But, I could NEVER state as you did…”

    I very much agree with this. At a minimum one must read the paper to opine on its usefulness. Apart from this (rather salient) point, I very much see the line of reasoning in J Curry’s post and applaud her overt recognition of biases and at least a procedure for shelving them. Also, G Sherrington mentioned, in 340, the idea of starting somewhere. This comes up again in #346 by K Fritsch. Indeed, you have to start somewhere. The notion that one paper will come up with the definitive treatment of any issue is illusory. Some words here read as if the authors are being damned for work they should have done but which was only made relevant by the gist of the paper. I’m sure many comments here (and in the reviewers’ comments, after the politicizing diatribe was stripped) have provided nudges, and that the original authors had thought a great deal about this issue (you only publish a fraction of your work; so much is sandbox, the equivalent of doodling with a data algorithm). Of course, with this peer-review welcome, who knows if any follow-ups will occur. I’d probably direct my limited research time elsewhere.

  159. Posted Mar 12, 2009 at 12:33 PM | Permalink

    I downloaded the ECMWF re-analysis data and computed the trends. The results can be found here.

    I did find a negative trend for q over the 44 year period in the Northern mid-latitudes. Trends were not significant globally or in the Southern Hemisphere.

    • Posted Mar 12, 2009 at 12:41 PM | Permalink

      Re: Nicolas Nierenberg (#348), Nicholas, while I would have mixed feelings about being promoted to Professor at Florida State, your blog posting significantly overstates my resume. I do not have a PhD (yet).

      On a substantive note, the ECMWF ERA-40 data should be clearly referenced for your study on your blog posting. Another, more recent ERA-interim dataset is available that spans from 1989-2007. I heartily suggest you continue your hypothesis testing with this newer data. It is freely downloadable for research/non-commercial purposes. Now, if you wanted it for commercial purposes, I have an idea that it would cost on the order of millions of dollars for the reanalysis dataset.

      • Posted Mar 12, 2009 at 2:15 PM | Permalink

        Re: ryanm (#349),

        Ryan, sorry about the promotion; I made an assumption without checking. You are absolutely right about the attribution; I had intended to do that and just forgot when I typed it up today. I will update my post.

        I will take a look at the other data set. I’m not sure why you made the remark about commercial purposes, though.

      • Kenneth Fritsch
        Posted Mar 12, 2009 at 4:34 PM | Permalink

        Re: ryanm (#349),

        RyanM, I have attempted to download files from the ERA-40 link in your post and I keep getting the message “Application timeout alarm received”. I have sent an email to the administrator for help. Has anyone else had a similar problem?

        I would also like to look at the JRA-25 reanalysis. What is a good link for downloading data from that data set?

        • Posted Mar 12, 2009 at 8:13 PM | Permalink

          Re: Kenneth Fritsch (#352), JRA-25 is only available to those with access to NCAR/UCAR, or university researchers. The ECMWF data portal is fairly easy to use (ERA-interim), check the link in #349. A similar tool should be available for ERA-40.

          Nice work with the RSS data.

        • Posted Mar 12, 2009 at 11:08 PM | Permalink

          Re: Ryan Maue (#353), Re: Kenneth Fritsch (#352),

          I had the same timeout issue. It occurs after going through the process of selecting the appropriate data and requesting the download. ERA-40 worked like a charm, but for some reason the ERA-Interim isn’t working.

  160. Kenneth Fritsch
    Posted Mar 12, 2009 at 1:36 PM | Permalink

    Trenberth used the Version 5 RSS-SSM/I data set for atmospheric water content, q, to claim that it represented a better choice for observing trends in q when validating climate models. SSM/I satellite measurements are only used to measure water content over the oceans.

    Trenberth, in the paper linked below, compared the RSS-SSM/I data for the period 1988 to 2003 to the NCEP and ERA-40 data sets and claimed that the SSM/I data set alone showed a statistically significant positive trend over the period measured. I had my doubts about the tight confidence limits that Trenberth claimed for the SSM/I trends, given the high autocorrelation one would expect with measurements like these. Trenberth used a Monte Carlo procedure to establish the authors’ CIs.

    Click to access Trenberth2005FasulloSmith.pdf

    I used the latest Version 6 of the RSS-SSM/I data set (see the link below) from 07/1987 to 01/2009, extracting into Excel the mean water content from the graphics generated for a global zone from 50S to 50N. I then loaded that data into R and calculated the trend slope, the standard error of that slope, the acf, and finally the adjusted trend slope CIs using the Nychka procedure from Santer et al. (2008) with the AR1 factor.

    Below are listed the credits for using an RSS product, the F15 disclaimer, the satellite data and R code I used, and graphics of the higher-order autocorrelations of the time series and of the time series itself over the period from 07/1987 to 01/2009.

    As it turns out, the RSS-SSM/I vapor series is highly autocorrelated, as shown in the graphic below. The patterns of higher-order correlations, I suspect, reflect seasonal repetitions and remain high relative to AR1. The time series graphic shows what one might visualize as a step function around the 1997-1998 time period.

    The regression does indeed show a positive trend for the atmospheric water vapor content, but when the AR1 correction is applied to the confidence intervals (CI) of the trend slope (TS) via its standard error (SE), the CIs include zero, and thus we cannot conclude that the trend is statistically different from zero.

    TS = 0.00236; SE = 0.00043; Adjusted CI = -0.000052 to 0.00477

    http://www.ssmi.com/idx/ion-p.exe?page=ssmi_monthly.ion

    I used the individual satellite data listed below from when the satellite started to when it ended. I avoided the use of F15 which had the warning listed below at this link:

    F08: 1987.07.09 – 1991.12.30
    F10: 1990.12.08 – 1997.11.14
    F11: 1991.12.03 – 2000.05.16
    F13: 1995.05.03 – Present

    http://www.remss.com/ssmi/ssmi_browse.html

    The effects are severe enough that we have discontinued F15 data production from 2009-Jan-15 until April, 2009. We will revisit our previous correction and apply a new temperature dependent correction later this year. The missing data will be reprocessed and re-released at that time.

    Due to remaining small effects, please DO NOT use F15 data from 2006-Aug-14 forward for climate research.

    The following was noted for using the RSS SSM/I data even though my use would not constitute research or publication:

    Permission is granted to use these data and images in research and publications when accompanied by the appropriate instrument or product specific statement. Click to obtain the statements or see below:

    SSM/I data are produced by Remote Sensing Systems and sponsored by the NASA Earth Science REASoN DISCOVER Project. Data are available at http://www.remss.com.

    Vapor = read.table("clipboard")        # put data from Excel on the clipboard and read into R
    lmVapor = lm(Vapor[, 3] ~ Vapor[, 2])  # regress vapor content on the month index
    summary(lmVapor)
    acf(residuals(lmVapor))$acf[2]         # lag-1 autocorrelation of the residuals
    plot(Vapor[, 1], Vapor[, 3], type = "b",
         main = "Water Content Global Zone 50S to 50N from 07/1987 to 01/2009",
         xlab = "Months Starting at 07/1987", ylab = "Vapor content", col = "dark red")
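
    For completeness, here is one reading of the AR1 adjustment step described above, continuing from lmVapor. This is a sketch of the standard effective-sample-size correction; it may differ in detail from the exact Santer et al. (2008) implementation.

    # AR1 ("Nychka") adjustment of the trend CI, continuing from lmVapor above
    res  <- residuals(lmVapor)
    n    <- length(res)
    r1   <- acf(res, plot = FALSE)$acf[2]       # lag-1 autocorrelation of the residuals
    neff <- n * (1 - r1) / (1 + r1)             # effective sample size

    slope  <- coef(summary(lmVapor))[2, "Estimate"]
    se     <- coef(summary(lmVapor))[2, "Std. Error"]
    se_adj <- se * sqrt((n - 2) / (neff - 2))   # inflate SE for the reduced degrees of freedom

    slope + c(-2, 2) * se_adj                   # approximate 95% CI for the trend slope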

    • Kenneth Fritsch
      Posted Mar 14, 2009 at 9:30 AM | Permalink

      Re: Kenneth Fritsch (#350),

      I wanted to do another analysis of the atmospheric water content, q, using the RSS SSM/I data set at another latitude band, in order to check the sensitivity of the first result (reported above for the global zone from 50S to 50N) to the latitude zone selection. To that end, I did another analysis for the global zone from 25S to 25N from July 1987 through January 2009 and present the results below. The R code is essentially the same as for the previous analysis and is not shown here.

      There were only slight differences between the trends for the two zones analyzed. Both show high autocorrelation of the regression residuals, even at higher orders, and both time series show a step function around the 1997-1998 time period. Both zones also include zero in the CIs, and thus the trends cannot be shown to be statistically different from zero.

      Trend slope = 0.00289; AR1 Corr = 0.707; SE = 0.000798; Adjusted CI = -0.00096 to 0.00673

      The question remains about the trends found by Trenberth that were claimed to be statistically significant. Trenberth used Version 5 of the RSS SSM/I data set for the 1988 to 2003 time period, while I used the later Version 6 from mid-1987 through January 2009. Trenberth determined limits using a Monte Carlo procedure, and I used the Nychka procedure from Santer et al. (2008) to adjust the CIs. Version 5 of the RSS data set is no longer available to the public and can be accessed only by special permission.

  161. Posted Mar 14, 2009 at 3:55 PM | Permalink

    Yesterday I was able to successfully download the data from ECMWF. Over the 19 years of the ERA Interim reanalysis there are no significant trends in q. In the NH mid latitudes the trends are slightly negative at many altitudes, but again they aren’t significant. Thus in the NCEP, ERA 40, and ERA Interim data sets there is no support for increasing q due to a warming atmosphere. I’m sure this is hardly the last or even best word on the subject.

    My post can be found here.

  162. Bill Illis
    Posted Mar 14, 2009 at 4:40 PM | Permalink

    For those checking the humidity level data, note that the models project different changes in RH at different levels and different latitudes.

    For instance, here are the changes in RH of GISS Model E from 1960 to 2003, by latitude and height.

    Here is the average global change by height over the same period.

    But specific humidity, q, should have increased like this big red blob (in ppmv).

    You can play around with different years and different scenarios here. (increasing temp years versus decreasing temp years for example).

    http://data.giss.nasa.gov/modelE/transient/Rc_pj.1.11.html

    Or look at lots of different aspects of the model here.

    http://data.giss.nasa.gov/modelE/transient/climsim.html

  163. Kenneth Fritsch
    Posted Mar 16, 2009 at 1:41 PM | Permalink

    I believe this analysis with the NCEP 1 data set may have been presented here previously, but I wanted to do a direct comparison with my analyses of the RSS SSM/I data set – and show off my newly acquired skills in downloading and manipulating an ncdf file in R. I used the NCEP data set at the 500 mbar level for the zonal band from 25S to 25N. I was much impressed with how R reduced the data from a relatively large nc file to the essentials required for the analysis.

    The results are shown in the 2 graphs below and the following table. In summary, there is much ARn autocorrelation, as was the case with RSS SSM/I, and the trend is flat and cannot be distinguished from 0. The trend slope was 0.0000128; the standard error was 0.0000931; p = 0.89 and the AR1 correlation = 0.57. The R code is listed below.

    library(ncdf)
    ncep500 = open.ncdf("NCEP.nc")
    ncep500
    [1] "file NCEP.nc has 4 dimensions:"
    [1] "lat Size: 73"
    [1] "level Size: 1"
    [1] "lon Size: 144"
    [1] "time Size: 361"
    [1] "------------------------"
    [1] "file NCEP.nc has 1 variables:"
    [1] "short shum[lon,lat,level,time] Longname:Monthly Mean of Specific Humidity Missval:32766"
    nc500 = get.var.ncdf(ncep500)
    dim(nc500)
    [1] 144 73 361
    # NOTE: mean(c(27:47)) evaluates to 37, a single latitude row (the equator),
    # not the 25S to 25N band; see the corrected band extraction below
    nc25SN = nc500[, mean(c(27:47)), ]
    dim(nc25SN)
    [1] 144 361
    SN25 = colMeans(nc25SN)            # mean over longitude for each month
    x = 1:361
    lmSN25 = lm(SN25 ~ x)
    summary(lmSN25)
    acf(residuals(lmSN25))$acf[2]      # lag-1 autocorrelation of the residuals
    [1] 0.5715495
    plot(x, SN25, type = "b",
         main = "Specific Humidity 25S to 25N from 01/1979 to 01/2009",
         xlab = "Months Starting at 01/1979", ylab = "Specific Humidity", col = "dark red")
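
    As flagged in the comment in the code above, mean(c(27:47)) picks out the single index 37 (the equator row) rather than the 25S to 25N band. Here is a sketch of the full band mean, area-weighted by cos(latitude) since grid cells shrink toward the poles; it assumes the file’s latitudes run 90N to 90S in 2.5-degree steps.

    # Area-weighted 25S-25N band mean from nc500 (144 lon x 73 lat x 361 months)
    lats <- seq(90, -90, by = -2.5)    # assumed NCEP grid: index 27 = 25N, index 47 = 25S
    band <- 27:47
    w    <- cos(lats[band] * pi / 180) # latitude weights

    lat_means <- apply(nc500[, band, ], c(2, 3), mean)  # mean over longitude: 21 x 361
    SN25w <- as.vector(w %*% lat_means) / sum(w)        # weighted band mean for each month

    lmSN25w <- lm(SN25w ~ seq_along(SN25w))             # refit the trend on the band mean
    summary(lmSN25w)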

  164. Posted Jun 5, 2011 at 10:31 PM | Permalink

    Long after everyone else, I have some commentary on water vapor trends (integrated water vapor) in:

    Water Vapor Trends


    and

    Water Vapor Trends – Part Two

    Part Two includes commentary on Paltridge et al 2009 as well as Dessler & Davis 2010.

    Radiosonde analyses show increasing water vapor in the Northern Hemisphere. Two important papers on the subject demonstrated this and concluded there was not enough quality data for the Southern Hemisphere (Ross & Elliot 2001, Durre et al 2009).
    Satellites show increasing water vapor over the oceans (radiosonde data over the oceans is very sparse).
    Other reanalyses show increasing water vapor, e.g. ERA40

    NCEP/NCAR was demonstrated by Trenberth & Smith in 2005 to be worse than ERA40 – via the dry mass of the atmosphere.

    Paltridge, Arking and Pook don’t demonstrate what is wrong with Trenberth & Smith’s analysis, or what is wrong with the radiosonde trend analyses. They don’t demonstrate what is wrong with the satellite data.

    Trenberth, Fasullo & Smith have already shown NCEP/NCAR water vapor trends in their 2005 paper.

    So what have Paltridge, Arking and Pook added to the sum of knowledge?

  165. Tony Mach
    Posted Feb 7, 2012 at 5:37 AM | Permalink

    As nobody has mentioned it: the correct term for this behaviour is “publication bias”. Anything in support of the null hypothesis has a harder time getting published. Regardless of whether the null hypothesis is the correct one or not, research results will always fall along a spectrum, from support for the null hypothesis to support for something else. As before with other climate science problems, I find it helpful to look to the medical sciences, as they deal with matters of “life or death” in their research. There are some remedies, like “pre-registration of protocols” or “registration or networking of data collections within fields”, that could be useful. More remedies are available; others might be specific to climate research. Dealing with this on a “per study” basis is a problem, not a solution.

4 Trackbacks

  1. […] involves a tight thermodynamic coupling of the surface and atmosphere. Watts and McIntyre (who has his own post) make it out to be a bad thing that people are concerned with “iffy” data, which comes […]

  2. […] Climate Audit: ‘A Peek behind the Curtain’ […]

  3. […] conversations at ClimateAudit about the observations of a steady fall in water vapor in the upper atmosphere have been the […]

  4. […] 2010 [–>]. They forget the clouds, and others obtain different results, like Paltridge 2009 [–>]. And it turns into an argument about which data count and which do not. For the alarmists, only […]