Glacier Retreat and Water Availability

This topic has spilled into Unthreaded.

The one comment that I would be inclined to make on this is that, if people are depending on water from glacier retreat in tropical and temperate settings, then it seems to me that their water supply would be equally diminished by glacier stabilization or advance, not just by glacier disappearance. What proportion of the water delivered to (say) India, or to other places, is from the annual precipitation-melt cycle and what proportion from “mined” meltwater from glacier retreat?

During the Laurentide glacier retreat from North America, apparently the Ottawa River was equivalent in size to the Amazon.

IPCC WG1 FAQ

Reader Michael Smith asked about the provenance of Figure 1.1 in the SPM for the AR4 Synthesis Report. While we’ve had some discussions of WG1, we’ve not discussed the Synthesis Report before. While following up the references for this Figure, I encountered the WG1 FAQ – a document which I had previously not noticed.

The FAQ are online here.

I suppose that one of the reasons that I hadn’t noticed the document is that it was never presented to IPCC WG1 reviewers, of which I was one. We were provided copies of the chapters and the Summary for Policy-makers, but the FAQ was not provided to Reviewers. See here for what was given to reviewers. I’ve consulted the WG1 SPM that was approved in Feb 2007 and it contains a reference to the FAQ, so some form existed at the time of the SPM.

It’s a useful and interesting summary. But does anyone know anything about when the FAQ document was released, what caused it to be written, how it was written, who wrote it or what its review process consisted of?

Spencer on Cloud Feedback

Roy Spencer has an interesting post on cloud feedback at Pielke Sr (which doesn’t permit comments). He observes:

On August 8, 2007, I posted here a guest blog entry on the possibility that our observational estimates of feedbacks might be biased in the positive direction. Danny Braswell and I built a simple time-dependent energy balance model to demonstrate the effect and its possible magnitude, and submitted a paper to the Journal of Climate for publication.

The two reviewers of the manuscript (rather uncharacteristically) signed their names to their reviews. To my surprise, both of them (Isaac Held and Piers Forster) agreed that we had raised a legitimate issue. While both reviewers suggested changes in the (conditionally accepted) manuscript, they even took the time to develop their own simple models to demonstrate the effect to themselves.

Of special note is the intellectual honesty shown by Piers Forster. Our paper directly challenges an assumption made by Forster in his 2005 J. Climate paper, which provided a nice theoretical treatment of feedback diagnosis from observational data. Forster admitted in his review that they had erred in this part of their analysis, and encouraged us to get the paper published so that others could be made aware of the issue, too.

And the fundamental issue can be demonstrated with this simple example: When we analyze interannual variations in, say, surface temperature and clouds, and we diagnose what we believe to be a positive feedback (say, low cloud coverage decreasing with increasing surface temperature), we are implicitly assuming that the surface temperature change caused the cloud change — and not the other way around.

This issue is critical because, to the extent that non-feedback sources of cloud variability cause surface temperature change, it will always look like a positive feedback using the conventional diagnostic approach. It is even possible to diagnose a positive feedback when, in fact, a negative feedback really exists.

I hope you can see from this that the separation of cause from effect in the climate system is absolutely critical. The widespread use of seasonally-averaged or yearly-averaged quantities for climate model validation is NOT sufficient to validate model feedbacks! This is because the time averaging actually destroys most, if not all, evidence (e.g. time lags) of what caused the observed relationship in the first place. Since both feedbacks and non-feedback forcings will typically be intermingled in real climate data, it is not a trivial effort to determine the relative sizes of each.

While we used the example of random daily low cloud variations over the ocean in our simple model (which were then combined with specified negative or positive cloud feedbacks), the same issue can be raised about any kind of feedback.

Notice that the potential positive bias in model feedbacks can, in some sense, be attributed to a lack of model “complexity” compared to the real climate system. By “complexity” here I mean cloud variability which is not simply the result of a cloud feedback on surface temperature. This lack of complexity in the model then requires the model to have positive feedback built into it (explicitly or implicitly) in order for the model to agree with what looks like positive feedback in the observations.

Also note that the non-feedback cloud variability can even be caused by…(gasp)…the cloud feedback itself!

Let’s say there is a weak negative cloud feedback in nature. But superimposed upon this feedback is noise. For instance, warm SST pulses cause corresponding increases in low cloud coverage, but superimposed upon those cloud pulses are random cloud noise. That cloud noise will then cause some amount of SST variability that then looks like positive cloud feedback, even though the real cloud feedback is negative.

I don’t think I can over-emphasize the potential importance of this issue. It has been largely ignored, although Bill Rossow has been preaching on this same issue for years, phrasing it in terms of the potential nonlinearity of, and interactions between, feedbacks. Similarly, Stephens’ 2005 J. Climate review paper on cloud feedbacks spent quite a bit of time emphasizing the problems with conventional cloud feedback diagnosis.

I don’t have an answer to the question of how to separate out cause and effect quantitatively from observations. But I do know that any progress will depend on high time resolution data, rather than monthly, seasonal, or annual averaging. (For instance, our August 9, 2007 GRL paper on tropical intraseasonal cloud variability showed a very strong negative cloud “feedback” signal.)

Until that progress is made, I consider the existence of positive cloud feedback in nature to be more a matter of faith than of science.

Since their discussion forum is closed, I’m sure that they won’t mind if we discuss it here.
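
To make Spencer’s mechanism concrete, here is a minimal sketch of the kind of time-dependent energy balance model he describes. All parameter values are illustrative assumptions of mine, not taken from the Spencer and Braswell paper; the point is only that random non-feedback radiative noise, once integrated into temperature, biases the conventional flux-on-temperature regression toward positive feedback.

```python
import numpy as np

# Minimal energy-balance sketch (illustrative parameters, not Spencer
# and Braswell's). Convention: net TOA flux anomaly F = noise - lam*T,
# where lam > 0 denotes a stabilizing (net negative) feedback.
rng = np.random.default_rng(0)

C = 30.0           # assumed mixed-layer heat capacity, W yr m^-2 K^-1
lam = 3.0          # assumed true feedback parameter, W m^-2 K^-1
dt = 1.0 / 365.0   # daily time step, in years
n = 365 * 30       # 30 years of daily data

T = 0.0
temps, fluxes = [], []
for _ in range(n):
    noise = rng.normal(0.0, 2.0)   # random daily cloud forcing, W m^-2
    F = noise - lam * T            # flux responds to noise and feedback
    T += F * dt / C                # temperature integrates the flux
    temps.append(T)
    fluxes.append(F)

temps, fluxes = np.asarray(temps), np.asarray(fluxes)

# Conventional diagnosis: regress flux anomaly on temperature.
# An unbiased estimate would recover a slope of -lam = -3.
slope = np.polyfit(temps, fluxes, 1)[0]
print(f"true slope = {-lam:.2f}, diagnosed slope = {slope:.2f}")
```

With these assumed values, the diagnosed slope comes out far less negative than -3 and can even be positive: the noise makes a stabilizing feedback look like a positive one, which is precisely the effect Spencer describes.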

IPCC Figure SPM.1

Michael Smith asks:

Since we are discussing uncertainty intervals, I have a question — probably a dumb one, but what the heck.

In the Summary for Policy Makers, IPCC 4th AR, there is a graph “Figure SPM.1” on page 3. See here: http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr_spm.pdf

The graph shows “Global average surface temperature” since 1850. It includes individual data points for each year as well as an “uncertainty interval”. However, just looking at the graph, it appears that some 40+ years are outside this “uncertainty interval”. What does that “uncertainty interval” mean when 25% of the actual observations are outside of it?
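
One purely statistical possibility, offered as an illustration rather than as a claim about how the Figure SPM.1 band was actually constructed: if a shaded band represents the uncertainty of a smoothed series rather than of the individual annual values, a large fraction of annual points will routinely fall outside it.

```python
import numpy as np

# Hedged illustration with synthetic data (not the actual record):
# a band sized for an 11-year smoothed mean excludes many annual values.
rng = np.random.default_rng(0)
years = np.arange(1850, 2006)
annual = 0.004 * (years - 1850) + rng.normal(0.0, 0.15, years.size)

kernel = np.ones(11) / 11.0
smoothed = np.convolve(annual, kernel, mode="same")  # 11-yr running mean
band = 2 * 0.15 / np.sqrt(11)   # ~95% band for the smoothed mean (iid assumed)

outside = np.mean(np.abs(annual - smoothed) > band)
print(f"annual values outside the smoothed-mean band: {outside:.0%}")
```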

UPDATE: Here are some notes on the provenance of this figure. Some aspects were previously discussed here and here. Continue reading

Unthreaded #29

Pierrehumbert: Reason for Methodology Used by IPCC is "Illegitimate"

Pierrehumbert recently made the following statement about the truncation of data:

Whatever the source of the purported … data, there is no legitimate reason in a paper published in 2007 for truncating the … record … as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch … occurs.

I totally agree with Pierrehumbert’s condemnation of graphics that are truncated to “conceal” mismatches from a reader. This is a matter that I’ve discussed previously in connection with IPCC and which I would like to review today, in light of Pierrehumbert joining with climateaudit in condemning this practice. Prior discussions of the topic at CA include here, here and here.

IPCC TAR
Let me refresh the discussion by showing how IPCC TAR concealed the mismatch between the post-1960 decline of the Briffa et al 2001 reconstruction and temperatures, simply by deleting the post-1960 values of the Briffa reconstruction. First, here is a graphic from the original Briffa article showing the “divergence problem”: values of the Briffa reconstruction declined sharply in the second half of the 20th century, such that closing values were similar to those in the early 19th century, long before modern warming.

The Briffa et al 2001 reconstruction was one of only three reconstructions in the IPCC TAR spaghetti graph. However, as shown below in the left panel, the graph does not show any mismatch between the Briffa et al reconstruction and the other reconstructions or with temperature, even though late 20th century values of the Briffa recon would be at early 19th century levels, well below the supposed “confidence intervals”. The detail shows why: the Briffa MXD series has been truncated. The Briffa series is in green and ends in 1960, but the truncation is virtually impossible to spot in the spaghetti graph, as the green series seems to merge with another dark-colored series. Without the truncation (as you can estimate by examining Figure 1), the late 20th century values of the Briffa series would fall to roughly early 19th century levels, yielding a glaring mismatch.

Figure 2. IPCC TAR Figure 2-21 with blowup.

This unscrupulous truncation was previously reported at CA.
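
The effect of such a truncation is easy to reproduce. The sketch below uses synthetic data (not the actual Briffa MXD series) to show how cutting a diverging series at 1960 removes the visible mismatch from a spaghetti-style plot.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic illustration of truncation concealing divergence.
years = np.arange(1850, 2000)
temp = 0.004 * (years - 1850) + 0.1 * np.sin(years / 8.0)   # stand-in temperature
recon = temp.copy()
recon[years > 1960] -= 0.01 * (years[years > 1960] - 1960)  # post-1960 divergence

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3), sharey=True)
ax1.plot(years, temp, label="instrumental")
ax1.plot(years, recon, label="reconstruction (full)")
ax1.set_title("Full series: divergence visible")

keep = years <= 1960
ax2.plot(years, temp, label="instrumental")
ax2.plot(years[keep], recon[keep], label="reconstruction (truncated)")
ax2.set_title("Truncated at 1960: mismatch concealed")

for ax in (ax1, ax2):
    ax.legend(fontsize=8)
plt.tight_layout()
plt.show()
```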

IPCC AR4

This was bad enough in TAR; what about IPCC AR4? While the AR4 spaghetti graph included more series, the truncation of the Briffa et al 2001 series was the same as in IPCC TAR, as shown below. The Briffa et al 2001 recon is in light blue; see the 1960 truncation in the detail at right.


Figure 3. IPCC AR4 Box 6.4 with blowup.

Review Comments
In my capacity as an IPCC AR4 reviewer, I noticed that the Briffa et al 2001 reconstruction had once again been truncated in 1960, which had the effect, in Pierrehumbert’s words, of “concealing” the “mismatch” from the reader. I observed in language not dissimilar to Pierrehumbert:

Show the Briffa et al reconstruction through to its end; don’t stop in 1960. Then comment and deal with the “divergence problem” if you need to. Don’t cover up the divergence by truncating this graphic. This was done in IPCC TAR; this was misleading. (Reviewer comment ID #: 309-18)

The IPCC review comments (unavailable at the time of my original post on this, but now online here: go to chapter 6, Final review comments, and then to comment 6-1122) stated:

Rejected: though note ‘divergence’ issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series

So let’s return to Pierrehumbert’s statement:

Whatever the source of the purported … data, there is no legitimate reason in a paper published in 2007 for truncating the … record … as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch … occurs.

While Pierrehumbert made this statement with respect to Courtillot et al 2007, the same principles obviously apply to IPCC AR4. Indeed, the IPCC circumstances are far worse than the Courtillot circumstances. First, Courtillot’s interest was in the earlier period, as he recognized the post-1990 divergence. Second, Courtillot used an obsolete data set resulting in a shortened series, but did not actively truncate the data. Neither of these points justifies the use of obsolete data, which I, like Pierrehumbert, have criticized.

In the IPCC case, there was an active truncation of “inconvenient” data which had the effect of concealing a mismatch from the reader. Worse, the matter was clearly and explicitly brought to IPCC’s attention and they refused to address the concealment.

In Pierrehumbert’s words, there was no “legitimate reason” for what IPCC did, but a “very good illegitimate reason”. It’s gratifying that Pierrehumbert and realclimate are lending their authority to the condemnation of such practices.

NASA Evasion of Quality Control Procedures

It is a red-letter rule in business that transactions between a company and its insiders or employees must be disclosed. Some of the most egregious breaches by Enron were its attempts to avoid disclosure of writeoffs by selling worthless assets to the infamous limited partnerships organized by company insiders, in exchange for equally worthless paper issued by the partnerships. Company insiders cannot evade securities laws by pretending to be acting in a “personal capacity”.

The U.S. federal government has a detailed set of regulations requiring scientific information to be peer reviewed before it is disseminated by the federal government. NASA, which says that it “employs the world’s largest concentration of climate scientists”, has carried out an interesting manoeuvre that has the effect of evading the federal Data Quality Act, OMB Guidelines and NASA’s own stated policies. Once again, the system involves an employee purporting to act in a “personal capacity”. Here’s how it works. Continue reading

Svalgaard #2

Continued from Svalgaard #1 here. Continued at Svalgaard #3.

Weathering and Thermometer Shelters

Former Virginia State Climatologist Patrick J. Michaels wrote an op-ed in today’s American Spectator about the surface temperature record, drawing on his paper with Ross McKitrick of Canada’s University of Guelph. This paragraph really caught my eye: “Weather equipment is very high-maintenance. The standard temperature shelter is painted white. If the paint wears or discolors, the shelter absorbs more of the sun’s heat and the thermometer inside will read artificially high. But keeping temperature stations well painted probably isn’t the highest priority in a poor country.”

The Stevenson Screen experiment that I set up this summer is living proof of this.

Compare the photo of the whitewashed screen on 7/13/07, when it was new, with one taken today, 12/27/07. No wonder the NWS dumped whitewash as the spec in the 1970s in favor of latex paint. Notice that the latex-painted shelter still looks good today while the whitewashed shelter is already deteriorating.

[Image: Whitewashed Screen on 7/13/07]

[Image: Whitewashed Screen on 12/27/07]

The whitewash coating I used was from a formula and method provided to me by a chemist at the US Lime Corporation, who is an expert on whitewash. He said the formula was true to historical records from the time when whitewash was used on the shelters. I was amazed to find that after just a few short months, my whitewash coating had lost about 40-50% of its surface area. Perhaps there was a mistake in the formula, or perhaps whitewash really is this bad at withstanding weathering.

In any event, Patrick Michaels’ statement, “Weather equipment is very high-maintenance. The standard temperature shelter is painted white. If the paint wears or discolors, the shelter absorbs more of the sun’s heat and the thermometer inside will read artificially high,” seems realistic in light of the photos above. The magnitude of the effect in the surface temperature record has yet to be determined, but it seems clear that shelter maintenance, or the lack of it, is a significant micro-site bias factor that has been neither adequately investigated nor accounted for in the historical temperature record.

I’ll have more on this experiment soon including temperature time series graphs showing the difference between bare wood, latex painted, and whitewashed shelters.

The Doctrine of RC Infallibility

Over Christmas, I thought good thoughts.

It is disappointing to be dragged back to reality by another stunt at realclimate. Pierrehumbert made a number of very strong allegations about the integrity of the Courtillot et al analysis and, in particular, contested whether Courtillot had even used Phil Jones’ data in their analysis.

Pierrehumbert said:

there is the Ugly. These papers cross the line from the merely erroneous into the actively deceptive. Papers in this category commit what Damon and Laut judiciously call a “Pattern of strange errors.” Papers in this category often use questionable (and often hidden and undocumented) data manipulations to manufacture correlations where none exist. … We’ll leave it to the reader to decide, after the discussion to follow, whether Courtillot’s paper is merely Bad, or has crossed over into the Ugly.

and

and now for the really ugly part… Bard and Delaygue uncovered a number of errors of a more troubling nature…. The pièce de résistance of Courtillot et al. is the following graph, which purports to show that for almost all of the past century, temperature correlates tightly with solar activity and magnetic field variability. The three curves on the graph are, according to the paper, Phil Jones’ global mean temperature record (Tglobe, in red circles), …

Pierrehumbert observes that the Courtillot curve does not match the most recent Jones global temperature curve (a point that I agree with) and says:

So if Courtillot’s data is not Jones’ global mean temperature, what is it that Courtillot plotted? We may never know. In his response to Bard and Delaygue, Courtillot claims the data came from a file called: monthly_land_and_ocean_90S_90N_df_1901-2001mean_dat.txt. Bard and Delaygue point out, however, that Jones has no record of any such file in his dataset, and does not recognize the purported “Tglobe” curve as any version of a global mean temperature curve his own group has ever produced.

Pierrehumbert goes on to genially observe:

Between the embarrassing showing at the Académie debates and the travesty of science exposed by Bard and Delaygue in the case of the EPSL paper, you’d think that Courtillot would want to find the nearest hole and go hide in it. Far from it: he was recently spotted giving a talk called “What global warming?” at this prestigious event gathering several famous physicists and chemists. Some people have no shame.

Pierrehumbert continues:

In the revised “Response” Courtillot now admits that the temperature record called “Tglobe” is not from any of Phil Jones’ datasets at all. Courtillot now claims that the data came from a study by Briffa et al. (2001), giving the address of a file stored at NCDC.

In a recent post, I reported that the Courtillot graphic could be easily replicated from column 7 of the data at ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/briffa1998/briffa2001jgr3.txt, which is entitled:

Observed temperatures from Jones et al. (1999) Rev Geophys

the latter being the source cited in the original article. In the same post, I observed that Jones was a coauthor of Briffa et al 2001 and that, contrary to Pierrehumbert’s allegations, the series almost certainly derived from Jones’ data, although it appeared to be a 20-90N composite calculated by the authors for Briffa et al 2001 rather than a version published in Jones et al 1999 (as indicated in the Briffa et al 2001 archive). I agreed that it was inappropriate for such an obsolete version to be used in 2007, and noted that the failure of climate scientists to provide accurate data citations (with detailed provenance) contributed to the problem.

On Dec 24, 2007, I sent a short note to realclimate, temperately pointing out to them that some of the statements in the Pierrehumbert post were incorrect, as follows:

You say:” So if Courtillot’s data is not Jones’ global mean temperature, what is it that Courtillot plotted? We may never know.”

It is actually very easy to determine what Courtillot plotted. The Courtillot Tglobe plot can be replicated by using the column entitled “Observed temperatures from Jones et al. (1999) Rev Geophys” from the data archive for Briffa, Jones et al 2001 located at NCDC at ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/briffa1998/briffa2001jgr3.txt, and by carrying out the following operations: filtering with an 11-year running mean without end-period padding, then normalizing on 1900-1990. This is shown at http://www.climateaudit.org/?p=2522.

Even though Briffa, Jones et al 2001 was published in 2001, it only contained temperature data to 1997 – something that should have been picked up by reviewers at the time. Authors in 2007 should obviously not be using this sort of vintage data version, as modern versions are readily available, as others have observed. However, I note that this is far from the only instance where climate scientists have used obsolete data versions and other cases have typically not drawn similar opprobrium.

As others have observed, it appears that the data is a 20-90N composite. The description in the Briffa, Jones et al 2001 archive is not as precise as one might like, as it only says that the series is “Observed temperatures from Jones et al. (1999) Rev Geophys”. Jones et al 1999 only illustrated GLB, NH and SH indexes. The archived version for Briffa, Jones et al 2001 differs from vintage versions of these three series, being most similar to the NH version. The most plausible interpretation of the archive is that it is a 20-90N composite calculated in the course of Briffa, Jones et al 2001 (rather than one of the series from Jones et al 1999 itself.)

Given that Jones is a coauthor of Briffa, Jones et al 2001 and the data in Briffa, Jones et al 2001 used data from Jones et al 1999, it is incorrect for Dr Pierrehumbert to say that the Courtillot temperature record is not “from any of Phil Jones’ datasets” regardless of Jones’ unhelpful and inaccurate communication on the matter. In my opinion, these allegations in Dr Pierrehumbert’s post should be withdrawn.
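
For readers who want to try the replication themselves, here is a minimal sketch of the two operations described in the note above: an 11-year centered running mean with no end-period padding, followed by normalization over 1900-1990. The placeholder series is hypothetical; the actual input would be the observed-temperature column of the briffa2001jgr3.txt archive cited above, whose layout would need to be checked.

```python
import numpy as np

def running_mean_no_pad(x, window=11):
    """Centered running mean; ends, where the window is incomplete, are NaN."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = np.full(x.shape, np.nan)
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

def normalize(years, x, lo=1900, hi=1990):
    """Subtract the mean over the reference period (anomalies on lo-hi)."""
    mask = (years >= lo) & (years <= hi)
    return x - np.nanmean(x[mask])

# Hypothetical usage, standing in for the observed-temperature column
# of the NCDC archive (the real series runs only to 1997):
years = np.arange(1871, 1998)
temp = np.random.default_rng(1).normal(size=years.size)  # placeholder data
tglobe_style = normalize(years, running_mean_no_pad(temp))
```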

Between 5 pm Dec 24 and now, despite Christmas, a number of batches of RC posts have cleared, including (at least) some on Dec 24 and some already today. Nine posts have been cleared on the Pierrehumbert thread, including, most recently, a discussion of the date of the Council of Nicaea, which observed:

date of the (first) Council of Nicaea … was 325. Among its decisions on dogma was that angels are non-physical beings, hence unsexed. Sneers at what appear, taken out of their cultural context, to be absurd beliefs or disputes, are tokens of ignorance rather than sophistication.

It is perhaps appropriate that RC have turned their attention to theological matters, as realclimate censored my post, which pointed out an incident of realclimate fallibility, rather than admitting and correcting the part of their post that was in error.

UPDATE: After this post was placed online, Gavin cleared my post with two deletions. The first deletion was the link to CA showing that the Courtillot figure could be obtained from the Briffa et al 2001 dataset, namely the following sentence:

This is shown at http://www.climateaudit.org/?p=2522.

The second deletion was the following sentence observing that Courtillot et al were not the only climate scientists to use obsolete data:

However, I note that this is far from the only instance where climate scientists have used obsolete data versions and other cases have typically not drawn similar opprobrium.

Indeed, I’ll probably post up some quotes sometime showing how Mann and Juckes have almost gloried in the use of obsolete data.

UPDATE #2: In the same RC thread, NASA employee and spokesman Gavin Schmidt published a defamatory statement by Eli Rabett, in which Rabett simply invented an account of the original submission by M&M to Nature, and then censored my reply to the defamatory statement. My reply stated:

Re #40. Eli Rabett has, as all too often, simply fabricated calumnies against us when he stated here:

The Mc’s submitted a garbage can full of issues and the editors at Nature worked to define what was an error, what was a controversy, and what was just silly, all requiring a voluminous correspondence. I guess that the editors told the Mcs that some of their issues were better dealt with in a submitted paper.

All these statements are untrue.

We submitted a short and clearly written article to Nature, online here, which did not require a “voluminous” correspondence, or indeed any correspondence, to review. The initial submission received favorable reviews, also online here, including the following:

I find merit in the arguments of both protagonists, though Mann et al. (MBH) is much more difficult to read than McIntyre & McKitrick (MM). Their explanations are (at least superficially) less clear and they cram too many things onto the same diagram, so I find it harder to judge whether I agree with them.
and
In general terms I found the criticisms raised by McIntyre and McKritik worth of being taken seriously. They have made an in depth analysis of the MBH reconstructions and they have found several technical errors that are only partially addressed in the reply by Mann et al.

and subsequently:
I am particularly unimpressed by the MBH style of ‘shouting louder and longer so they must be right’.

Rather than simply spreading calumnies, I would urge Eli to investigate the facts; if he is not sufficiently interested in a matter to ascertain the facts, there is always the option of saying nothing on the matter.

And BTW, Gavin, if you’re going to allow Eli to post false speculations on realclimate, please have the simple courtesy to allow me to respond.

If you’re going to permit adverse comments about people on a blog, then simple fairness requires that you permit them to respond. On the few occasions that Schmidt has posted here, I haven’t touched a comma of his postings.

Schmidt’s dishonest application of his posting policy probably does his cause more harm than good.