Boundary Layer Clouds: IPCC Bowdlerizes Bony

As we’ve discussed before (and is well known), clouds are the greatest source of uncertainty in climate sensitivity. Low-level (“boundary layer”) tropical clouds have been shown to be the largest source of inter-model difference among GCMs. Clouds have been known to be problematic for GCMs since at least the Charney Report in 1979. Given the importance of the topic for GCMs, one would have thought that AR4 would have devoted at least a chapter to the single issue of clouds, with perhaps one-third of that chapter devoted to the apparently thorny issue of boundary layer tropical clouds.

This is what an engineering study would do – identify the most critical areas of uncertainty and closely examine all the issues related to the critical uncertainty. Unfortunately, that’s not how IPCC does things. Instead, clouds are treated in one subsection of chapter 8 and boundary layer clouds in one paragraph.

Interestingly, the language in IPCC AR4 is (using the terminology of climate science) “remarkably similar” to Bony et al (J Clim 2006), with the differences as interesting as the similarities. It seems to me that each language change from Bony to IPCC had the effect of papering over or softening the appearance of problems or contradictions, rather than clearly drawing the issues to the attention of the public. (Note – Bony was a lead author of the chapter – another instance of IPCC authors reviewing their own work.)

AR4

Boundary-layer clouds have a strong impact on the net radiation budget (e.g., Harrison et al., 1990; Hartmann et al., 1992) and cover a large fraction of the global ocean (e.g., Norris, 1998a,b). Understanding how they may change in a perturbed climate is thus a vital part of the cloud feedback problem. The observed relationship between low-level cloud amount and a particular measure of lower tropospheric stability (Klein and Hartmann, 1993), which has been used in some simple climate models and in some GCMs’ parametrizations of boundary layer cloud amount (e.g., CCSM3, FGOALS), led to the suggestion that a global climate warming might be associated with an increased low-level cloud cover, which would produce a negative cloud feedback (e.g., Miller, 1997; Zhang, 2004). However, variants of the lower-tropospheric stability measure, which may predict boundary-layer cloud amount as well as the Klein and Hartmann (1993) measure, would not necessarily predict an increase in low-level clouds in a warmer climate (e.g., Williams et al., 2006). Moreover, observations indicate that in regions covered by low-level clouds, the cloud optical depth decreases and the SW CRF weakens as temperature rises (Tselioudis and Rossow, 1994; Greenwald et al., 1995; Bony et al., 1997; Del Genio and Wolf, 2000; Bony and Dufresne, 2005), but the different factors that may explain these observations are not well established. Therefore, understanding of the physical processes that control the response of boundary-layer clouds and their radiative properties to a change in climate remains very limited.

Bony et al 2006

Boundary layer clouds have a strongly negative CRF (Harrison et al. 1990; Hartmann et al. 1992) and cover a very large fraction of the area of the Tropics (e.g., Norris 1998b). Understanding how they may change in a perturbed climate therefore constitutes a vital part of the cloud feedback problem. Unfortunately, our understanding of the physical processes that control boundary layer clouds and their radiative properties is currently very limited.

It has been argued based on the Clausius–Clapeyron formula that in a warmer climate, water clouds of a given thickness would hold more water and have a higher albedo (Somerville and Remer 1984; Betts and Harshvardhan 1987). But the analysis of satellite observations show evidence of decreasing cloud optical depth and liquid water path with temperature in low latitude boundary layer clouds (Tselioudis and Rossow 1994; Greenwald et al. 1995; Bony et al. 1997). This may be due to the confounding effect of many physical processes, such as increases with temperature in precipitation efficiency or decreases with temperature in cloud physical extent (Tselioudis et al. 1998; Del Genio and Wolf 2000).

Klein and Hartmann (1993) showed an empirical correlation between mean boundary layer cloud cover and lower-tropospheric stability (defined in their study as the difference of 700-hPa and near-surface potential temperature). When imposed in simple two-box models of the tropical climate (Miller 1997; Clement and Seager 1999; Larson et al. 1999) or into some GCMs’ parameterizations of boundary layer cloud amount [e.g., in the National Center for Atmospheric Research (NCAR) Community Climate System Model version 3 (CCSM3)], this empirical correlation leads to a substantial increase in low cloud cover in a warmer climate driven by the larger stratification of warmer moist adiabats across the Tropics, and produces a strong negative feedback. However, variants of lower-tropospheric stability that may predict boundary layer cloud cover just as well as the Klein and Hartmann (1993) parameterization, would not necessarily predict an increase in boundary layer cloud in a warmer climate (e.g., Williams et al. 2006 – Clim Dyn; Wood and Bretherton 2006 – J Clim).

The boundary layer cloud amount is strongly related to the cloud types present, which depend on many synoptic-and planetary-scale factors (Klein 1997; Norris 1998a; Norris and Klein 2000). Factors such as changes in the vigor of shallow convection, possible precipitation processes, and changes in capping inversion height and cloud thickness can outweigh the effect of static stability. These factors depend on local physical processes but also on remote influences, such as the effect of changing deep convective activity on the free tropospheric humidity of subsidence regions (Miller 1997; Larson et al. 1999; Kelly and Randall 2001). Evidence from observations, large-eddy simulation models, or climate models for the role of these different factors in cloud feedbacks is currently very limited.

The similarities are self evident. Now let’s look at the differences.

Bony et al said that boundary layer clouds had “strongly negative CRF” (Cloud Radiative Forcing), which IPCC watered down to “strong impact”. I guess that the idea of “strongly negative” feedback was too salacious for the IPCC audience.

Bony et al 2006: Boundary layer clouds have a strongly negative CRF (Harrison et al. 1990; Hartmann et al. 1992) and cover a very large fraction of the area of the Tropics (e.g., Norris 1998b).

AR4: Boundary-layer clouds have a strong impact on the net radiation budget (e.g., Harrison et al., 1990; Hartmann et al., 1992) and cover a large fraction of the global ocean (e.g., Norris, 1998a,b).

The next sentence was identical other than trivial wordsmithing. Bony et al 2006 had stated that the “empirical” Klein and Hartmann (1993) correlation “leads” to a substantial increase in low cloud cover, which resulted in a “strong negative” cloud feedback. Again IPCC watered this down: “leads to” became a “suggestion” that it “might be” associated with a “negative cloud feedback” – the term “strong” being dropped by IPCC.

Bony et al 2006: Klein and Hartmann (1993) showed an empirical correlation between mean boundary layer cloud cover and lower-tropospheric stability (defined in their study as the difference of 700-hPa and near-surface potential temperature). When imposed in simple two-box models of the tropical climate (Miller 1997; Clement and Seager 1999; Larson et al. 1999) or into some GCMs’ parameterizations of boundary layer cloud amount [e.g., in the National Center for Atmospheric Research (NCAR) Community Climate System Model version 3 (CCSM3)], this empirical correlation leads to a substantial increase in low cloud cover in a warmer climate driven by the larger stratification of warmer moist adiabats across the Tropics, and produces a strong negative feedback.

AR4: The observed relationship between low-level cloud amount and a particular measure of lower tropospheric stability (Klein and Hartmann, 1993), which has been used in some simple climate models and in some GCMs’ parametrizations of boundary layer cloud amount (e.g., CCSM3, FGOALS), led to the suggestion that a global climate warming might be associated with an increased low-level cloud cover, which would produce a negative cloud feedback (e.g., Miller, 1997; Zhang, 2004).

The sentence starting “variants of the lower-tropospheric stability measure…” is identical in both versions.

Bony et al raised an argument about increasing albedo in clouds (dating back to the 1980s), noting three articles opposing this argument. IPCC deleted the mention of the arguments in favor of a higher albedo, while keeping the three references to the opposing articles. The deleted sentence:

It has been argued based on the Clausius–Clapeyron formula that in a warmer climate, water clouds of a given thickness would hold more water and have a higher albedo (Somerville and Remer 1984; Betts and Harshvardhan 1987).

Bony et al 2006: But the analysis of satellite observations show evidence of decreasing cloud optical depth and liquid water path with temperature in low latitude boundary layer clouds (Tselioudis and Rossow 1994; Greenwald et al. 1995; Bony et al. 1997).

AR4: Moreover, observations indicate that in regions covered by low-level clouds, the cloud optical depth decreases and the SW CRF weakens as temperature rises (Tselioudis and Rossow, 1994; Greenwald et al., 1995; Bony et al., 1997; Del Genio and Wolf, 2000; Bony and Dufresne, 2005),

Bony et al concluded their paragraphs by reporting a very limited understanding of the physical processes controlling boundary layer clouds, a sentence that was substantially repeated by IPCC, which qualified the admission of lack of understanding by saying that the understanding was limited in respect of the response to “a change in climate”.

Bony et al 2006: Unfortunately, our understanding of the physical processes that control boundary layer clouds and their radiative properties is currently very limited.

AR4: Therefore, understanding of the physical processes that control the response of boundary-layer clouds and their radiative properties to a change in climate remains very limited.

A third party reader might also assume that the section on boundary layer clouds would have benefited from comments from stadiums of IPCC reviewers. In fact, the version as published is almost word for word identical to the version in the First Order Draft. A few comments from reviewers were peremptorily dismissed by the chapter authors.

However, unlike the Hockey Stick section, there were virtually no comments whatsoever on this section, and the few that were made were dismissed fairly summarily.

Reviewer Richard Allan observed:

8-586 A 47:54 48:5 It should also be noted that the cooling effect of clouds is primarily felt at the surface during the daytime, while the greenhouse effect of cloud generally heats the atmosphere. [Richard Allan (Reviewer’s comment ID #: 3-83)]

IPCC Authors:

Rejected due to space restrictions (this addition would not be fundamental for the following discussion).

My two cents worth as an interested non-specialist reader: Allan’s comment here seems interesting – it was something that I wasn’t aware of.

Next, Allan suggested a seemingly interesting and on-point addition to the text.

8-589 A 48:30 48:46 A suggested addition to the discussion of cloud altitude feedbacks: “Cess et al. (2001) [The influence of the 1998 El Nino upon cloud radiative forcing over the Pacific warm pool. J. Climate, 14, 2129–2137] suggested a strong influence of ENSO on cloud altitude and hence the balance between longwave heating and shortwave cooling. It is likely that this is partly a regional effect relating to changes in the vertical motion fields (Allan et al. 2002 [Influence of Dynamics on the Changes in Tropical Cloud Radiative Forcing during the 1998 El Nino J. Climate, 15, 1979-1986]) that may also be linked with decadal fluctuations in cloud properties (Wielicki et al. 2002 [Evidence for large decadal variability in the tropical mean radiative energy budget. Science, 295, 841–844.]) and is unlikely to be related to cloud feedback.” [Richard Allan (Reviewer’s comment ID #: 3-84)]

Again this suggestion was refused by the Chapter Authors.

Rejected. We do not review all the cloud feedback studies published, but assess the main progress that has been done since the TAR in understanding climate change cloud feedbacks. Therefore we do not discuss processes that are unlikely to be involved in climate change cloud feedbacks (e.g. the dynamically-driven change in clouds associated with El-Nino).

This latter excuse raises another interesting aspect of the paragraph on boundary layer clouds. Given the importance of the topic, a third party would assume that AR4 would include many references to a wide variety of studies since the TAR examining every conceivable aspect of marine boundary layer clouds. The authors rebuffed Allan’s suggestion on the basis that they were assessing “progress since the TAR”. However, no fewer than ten of the 13 references are pre-TAR (five pre-SAR) – there are only three references to post-TAR literature. Whatever the reason for excluding the Allan comment, it wasn’t because they were already chock-a-block with post-TAR literature.

As noted above, given the importance of clouds in climate sensitivity, and of boundary layer clouds in particular, a third party reader would have expected a comprehensive discussion of all the issues and, in particular, what steps they recommended for the reduction of uncertainties in this area, both of which were conspicuously absent.

June 2009 and the Big Red Spot

NOAA is the first of the three main indices off the mark with June 2009, at 0.617 deg C, bouncing off the relatively low values of 2008. Given that the data is essentially common to HadISST, this is unsurprising.

The difference between RSS and NOAA/HadCRU values is interesting in terms of the Big Red Spot (enhanced tropical troposphere temperatures: RSS should be going up faster than NOAA/HadCRU, not the opposite.) Here is raw RSS and NOAA data: note that the reference periods are different! (I left the original reference periods to separate the lines a bit better.) You can see how NOAA surface is gaining on RSS T2LT rather than the opposite.


Figure 1. TRP temperatures to June 2009. NOAA (1961-1990) vs RSS (1979-1998).

I’ve done quite a bit of experimenting recently with an interesting R package called strucchange (Achim Zeileis). I originally experimented with this package in connection with hurricane data: to test the supposed regimes of Holland and Webster. I looked at it again in connection with the new USHCN algorithm (in case the presently secret USHCN adjustment program was ever disclosed – it uses breakpoint methods as well.)

More recently, I applied it to various crosscuts of the TRP satellite and surface data: RSS vs UAH, Land vs Ocean and various cross-profiles. Many interesting results that I’m assimilating.

Today, I’ll show one such crosscut: Tropical RSS vs NOAA, crosscutting Land, Ocean and All.

As you can see, this particular algorithm finds “significant” breakpoints in these crosscuts. Aside from what the algorithm finds, visually there’s a big difference between the Land and Ocean patterns that seems like it would be hard to justify in climate terms. (The same thing happens with UAH vs NOAA, it just looks different.) There are also significant breakpoints between UAH and RSS.

In most cases, the breakpoint location can be plausibly associated with either the start of one satellite or the end of another. The tricky thing about this association is that there are a lot of stitches in the satellite record and the mind is prone to finding associations. More on this below. For now, take a look at the graphic.


Figure 2. TRP NOAA minus RSS. Difference series centered on zero.
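
The breakpoint search that strucchange performs can be illustrated with a stripped-down single-break version: fit a separate mean to each candidate segmentation of a difference series and keep the split that minimizes the residual sum of squares. The series below is synthetic – a level shift injected into white noise to mimic an inter-satellite step – not the actual NOAA-minus-RSS data.

```python
import numpy as np

def best_breakpoint(y, min_seg=5):
    """Find the single breakpoint minimizing the residual sum of squares
    when a separate mean is fit to each segment - a one-break analogue
    of what strucchange's breakpoints() does for multiple breaks."""
    n = len(y)
    best_k, best_rss = None, np.inf
    for k in range(min_seg, n - min_seg):
        rss = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k, best_rss

# Synthetic difference series with a level shift at index 120,
# mimicking an inter-satellite step in a surface-minus-satellite series.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 0.1, 120), rng.normal(0.3, 0.1, 80)])
k, _ = best_breakpoint(y)
```

strucchange’s breakpoints() generalizes this to multiple breaks via dynamic programming, chooses the number of breaks by an information criterion, and adds confidence intervals for the break dates.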

But in this case, when one examines the literature on satellite adjustments, I think that there’s pretty good reason to anticipate that breakpoints could occur at satellite switches. Complicating matters is that the literature also reports issues with “drift”.

Also the most cursory examination of the satellite literature shows that it is highly statistical in concept. In some cases, there is limited ground truthing between satellites, so they end up having to estimate the adjustment – a statistical operation.

My take on this is that there are going to be at least 6 adjustments that need to be estimated. In some cases, there is both a step adjustment and a drift adjustment. If one admits the possibility of statistical error into the procedure, then you no longer have an AR1 error model in trend estimations (something that I’ll show in another post.) It looks to me like there are 5 or 6 or more step adjustments, which generate highly significant AR1 coefficients, but the underlying process is different and more complicated. This would be a big and interesting project.
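
The point about step adjustments corrupting an AR1 error model can be illustrated numerically: a few level shifts superimposed on white noise produce a large lag-one autocorrelation even though no genuine AR1 process is present. The step dates and sizes below are invented for illustration, not taken from any actual satellite record.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

n = 360  # 30 years of monthly data
noise = rng.normal(0.0, 0.1, n)

# Add five step offsets at hypothetical satellite-switch dates:
# the steps masquerade as strong AR1 persistence.
stepped = noise.copy()
for t, d in [(60, 0.2), (120, -0.25), (180, 0.3), (240, -0.2), (300, 0.25)]:
    stepped[t:] += d

r_noise = lag1_autocorr(noise)      # near zero for pure white noise
r_stepped = lag1_autocorr(stepped)  # large, despite no true AR1 process
```

A trend estimator that treats `r_stepped` as evidence of an AR1 error process is mis-specifying the model: the persistence comes from a handful of step offsets, not from autoregressive noise.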

In the bottom panel of the above graphic, the increase of NOAA relative to RSS T2LT over land in the past 10 years is particularly consistent. Again, Big Red Spot Theory predicts the opposite. At this point, I’m not inclined to view any of this as “falsifying” Big Red Spot theory, but, more likely, as evidence of “drift” or “bias” in both surface and satellite records. There is certainly food here for people who think that the surface land record is affected by measurement bias.

However, I’m far from convinced that the satellite records are revealed truth. It seems quite possible to me that quite different satellite trends could emerge if there were a couple of inter-satellite adjustment errors. I don’t know right now how one would estimate the potential magnitude of the adjustment errors, but, as soon as one introduces potential step adjustment errors, it becomes pretty hard to estimate trends. More on this on another occasion.

HadISST- June 2009 Values

Lucia reported on HadSST June 2009 values, which are out quickly and are surprisingly high given seemingly low RSS values. Lucia:

it looks like the HadSST’s temperature anomalies may finally break their all time high temperature anomalies. The June anomaly of 0.50 C is a big jump up from 0.355C for May:

The preliminary June 2009 HadISST results are a bit of a surprise given that prior RSS results showed a relatively cool June 2009.

I was hoping to do a quick examination of the data, but, as so often in climate science, the comparison took far longer than it should have because of a strange screwup in how the UK Met Office archived the HadISST data. (They double-counted one month in one of their archives – unfortunately the one that I downloaded. I wasted a LOT of time trying to figure out what was going on until I identified the problem. You’d think that some climate scientist somewhere would have noticed that one month occurred twice in the data, but I guess not. I’ll post a bit on this in a comment below. )
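
A duplicated month is trivial to detect programmatically, which makes it all the more surprising that it went unnoticed. A sketch of the check (the dates and values below are made up for illustration, not the actual HadISST archive format):

```python
from collections import Counter

# Hypothetical monthly archive with one month accidentally repeated.
months = ["2009-01", "2009-02", "2009-03", "2009-03", "2009-04", "2009-05"]

def duplicated_months(dates):
    """Return any month labels that occur more than once."""
    return [m for m, count in Counter(dates).items() if count > 1]

dups = duplicated_months(months)
```

A one-line check like this, run on every download, flags the problem immediately instead of after hours of head-scratching.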

I examined HadISST and RSS for the tropics (20S-20N) – partly to simplify things, partly because I’ve been following tropical temperatures. By accident, my first plot was of the HadISST June 2009 preliminary in K. Generally one sees plots in anomaly deg C. However, these are derived from absolute temperatures and it never does any harm to squint at data from different directions. So today we’re going to plot in K rather than anomaly deg C.

First, here is a plot of HadISST for June 2009 (deg K).

For comparison, here is a similar HadISST plot for June 2005 (the big hurricane year):

Here is a plot of the difference – there is obviously a striking asymmetry between SH and NH with 2009 being colder in the NH and warmer in the SH (particularly in the upwelling zones). Squinting back at the 2009 SST plot, it seems relatively cool in the Atlantic hurricane development zone, indicating that 2009 is not likely to have a bumper crop of Atlantic hurricanes.

I did similar plots for RSS TLT and will show a couple. Here is RSS TLT for June 2009 – again in K rather than anomaly deg C. In the K plot, the Sahara is obviously a remarkable anomaly that doesn’t stick out in the anomaly deg C plots.

The June 2009 to June 2005 difference for RSS TLT is far more muted than the corresponding HadISST difference, especially at the edges of the southern tropics. There is an interesting increase in east Africa.


Finally here is a plot of June 2009 lapse rate (over the ocean) which again has an interesting pattern. There is a very low lapse rate at the upwelling zone on the west coast of Africa – presumably related to the warm TLT temperatures over the Sahara.

What does this mean, if anything? Dunno.

The SH-ness of the HadISST increase is interesting because it’s definitely been a cool North American summer. We haven’t used our air conditioner once this year and most days, I’ve worn a light sweater. (I’m a Canadian – I like this sort of weather.) Has it been a warm Australian winter?

We’ve been pretty quick to notice temperature declines in other series and need to be equally attentive to temperature increases. In my opinion, as I’ve observed many times, the pure time series properties of the various temperature series do not permit elaborate conclusions one way or the other on the future course of temperatures – either on the part of Rahmstorf and similar data torturers or on the part of people deriving a moral from relatively little temperature increase over the past decade.

Sea Ice At Lowest Level In 800 Years

A few days ago, Jeff Id drew attention to a recent study profiled in sciencedaily which stated:

Sea Ice At Lowest Level In 800 Years Near Greenland
ScienceDaily (July 2, 2009) — New research, which reconstructs the extent of ice in the sea between Greenland and Svalbard from the 13th century to the present indicates that there has never been so little sea ice as there is now.

The new study, Macias Fauria et al (Clim Dyn 2009), was coauthored by John Moore and Aslak Grinsted, whom we’ve encountered recently in connection with Rahmstorf smoothing, and Mauri Timonen, with whom we had very cordial correspondence in connection with our profiling of the long Finnish tree ring chronology.

Co-author John Moore had said in commentary here:

I strongly believe in making data available and codes available when the results are published – sometimes we work on data that others control who may not, unfortunately, want it distributed.

I wrote to lead author Macias Fauria requesting the proxy and reconstruction data for Macias Fauria et al 2009 – the sea ice reconstruction used two proxy series, an RCS version of the Finnish tree ring chronology and Svalbard ice core data. Macias Fauria replied that he could not provide the latter data as co-author Isaksson refused to make it public. He did provide me with the tree ring version that they used, the target sea ice series and the reconstruction. Obviously with half the proxy data missing, any statistical analysis of their methodology is thwarted.

The Macias chronology was not the same as the chronologies that Timonen had previously provided to me – Macias used RCS chronology, Timonen didn’t. This was quickly clarified in correspondence with Timonen and Macias Fauria. I did an RCS chronology calculation (using my RCS algorithm – to my knowledge, Briffa and cohorts have maintained secrecy on their code) on the ADV7638 measurement data that Phil Trans B required Briffa to archive last year (after prolonged obstruction by Briffa.)
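
For readers unfamiliar with the method: RCS (regional curve standardization) pools all trees to estimate a single expected-growth curve as a function of ring (cambial) age, divides each measured ring width by the expected width at that age, and averages the resulting indices by calendar year. The sketch below is my own minimal arrangement of those steps – not Briffa’s or Macias Fauria’s code, which remain unavailable.

```python
import numpy as np

def rcs_chronology(series):
    """series: list of (first_year, ring_widths) per tree.
    Returns {calendar_year: mean_index} after regional curve standardization."""
    # 1. Regional curve: mean ring width at each cambial age, pooled over trees.
    by_age = {}
    for _, widths in series:
        for age, w in enumerate(widths):
            by_age.setdefault(age, []).append(w)
    curve = {age: np.mean(ws) for age, ws in by_age.items()}

    # 2. Index each ring by expected growth at its age; pool by calendar year.
    by_year = {}
    for first_year, widths in series:
        for age, w in enumerate(widths):
            by_year.setdefault(first_year + age, []).append(w / curve[age])

    # 3. Chronology: mean index per calendar year.
    return {yr: float(np.mean(ix)) for yr, ix in sorted(by_year.items())}

# Two toy trees sharing the same negative-exponential age trend:
# after RCS the age trend divides out and the chronology is flat.
tree1 = (1900, [2.0, 1.5, 1.2, 1.0, 0.9])
tree2 = (1902, [2.0, 1.5, 1.2, 1.0, 0.9])
chron = rcs_chronology([tree1, tree2])
```

Decisions like excluding snags or trimming recent series (as in the Macias Fauria correspondence quoted below) change the pooled regional curve, and therefore every index downstream – one route by which two RCS implementations on “the same” data can diverge.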

I got a series that is recognizably related to the Macias chronology, but also critically different.

First, the Macias version has a somewhat different texture. It looks like it’s been somewhat smoothed – perhaps using Rahmstorf smoothing, perhaps not. (Note: A reader notes below that the text says that the smoothing is a 5-year cubic spline.) Secondly, the Macias version has a pronounced trend relative to my emulation from first principles from the 19th through the 20th century. Third, my emulation showed the frequently seen “divergence” between ring widths and temperature, while the Macias version did not. Macias Fauria explained to Timonen that he modified the Advance10K data set by:

1. Not include the snags.
2. Remove series belonging to the last period in order to have a more evenly distributed sample depth along all the reconstructed period (no age selection was performed), as seen in Fig. 2 of the paper (otherwise most of the data was from the last two centuries).

Perhaps these changes to the data set account for the difference, perhaps it’s a difference in methodology.


Figure 1. Scaled Finnish tree ring chronologies. Top – emulated from Finnish ADV7638 archived by Briffa; middle – Macias version; bottom – difference.

Reconciling the difference is impossible right now because the data is incomplete and the originating code is thus far unavailable. I, on the other hand, for a mere blog post have placed the code that generates this graphic in the first comment and it should be turnkey with data and functions that I’ve placed online. Wouldn’t it be nice if the author Moore

strongly believed in making data available and codes available when the results are published.

While the ice core data is not archived, it is represented in small squiggles in the dead-tree literature, which give some indication of the shape. It was discussed here, where the following image, showing very elevated values of the washout proxy discussed in Grinsted et al (JGR 2006), was shown:

These seemingly high 12th century values do not enter into the sea ice reconstruction of Macias Fauria et al 2009, because it begins in 1200. I’m sure that there is an excellent reason for not including 12th century data.

Sea Ice – June 2009

June 2009 monthly sea ice data is now out for NH and SH. (Continuing prior sea ice post here.)

The global sea ice anomaly in June 2009 remained positive. Over the 1979-2009 period, there is zero trend in global sea ice anomaly, with a SH increasing trend offsetting a NH decreasing trend. June 2009 NH anomaly was not remarkable.

Daily sea ice anomalies through July 9, 2009 are running at about the median of the past 7 years, about half a million sq km behind 2006-2007 but slightly ahead of 2008.

RSS June – "Worse Than We Thought"

Lucia was quick off the mark with RSS June results. RSS June was 0.075 deg C (reference 1979-1998). The graph shows somewhat of a decline from earlier in the year.

In a joint statement, realclimate authors Gavin Schmidt, Michael Mann, Stefan Rahmstorf and Eric Steig noted their disappointment with market performance. However, Rahmstorf observed that, if these results were embedded in a 15-dimensional manifold, the results were still “worse than we thought”. Michael Mann said that the decline in June RSS values was disinformation from fossil fuel interests and issued a fatwa on those responsible. [Note to realclimate readers – this is a satirical comment; they did not really make the above statements.]

Rahmstorf Rejects IPCC Procedure

Over the past few days, we’ve discussed many peculiar aspects of Rahmstorf smoothing and centering: incorrect disclosure; seeming unawareness of what the smoothing did; unattractive properties of the triangular filter; the enhancement of “successful” prediction; opportunistic policy changes.

It’s not as though IPCC hadn’t turned its mind to smoothing. IPCC AR4 enunciated a sensible policy on smoothing in AR4 chapter 3, “Observations: Surface and Atmospheric Climate Change”, Appendix A. In that chapter, they condemned Rahmstorf procedures and, unlike Rahmstorf, described their filter in unambiguous terms – no cat-and-mouse. They stated:

In order to highlight decadal and longer time-scale variations and trends, it is often desirable to apply some kind of low-pass filter to the monthly, seasonal or annual data. In the literature cited for the many indices used in this chapter, a wide variety of schemes was employed. In this chapter, the same filter was used wherever it was reasonable to do so. The desirable characteristics of such filters are 1) they should be easily understood and transparent; 2) they should avoid introducing spurious effects such as ripples and ringing (Duchon, 1979); 3) they should remove the high frequencies; and 4) they should involve as few weighting coefficients as possible, in order to minimise end effects. The classic low-pass filters widely used have been the binomial set of coefficients that remove 2Δt fluctuations, where Δt is the sampling interval. However, combinations of binomial filters are usually more efficient, and those have been chosen for use here, for their simplicity and ease of use

These are sensible policies. “Easily understood and transparent” clearly excludes Rahmstorf’s Copenhagen description of a triangular filter of length 29 as “smoothed over 15 years”. Criterion 2 – excluding ripples and ringing – excludes Rahmstorf’s triangular filter on other grounds. Criterion 4 – “as few weighting coefficients as possible” – also precludes Rahmstorf’s filter.
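
The ripple criterion can be checked directly. The gain of a symmetric (2m+1)-term filter at period P is the sum of w_k·cos(2πk/P) over the weights. A binomial filter’s gain decays monotonically toward short periods, while a 29-term triangular filter (the “smoothed over 15 years” filter) has zero gain at a 15-year period but rises again at shorter periods – exactly the ripples the IPCC criterion warns about. A quick check, my own illustration rather than any published code:

```python
import numpy as np
from math import comb

def gain(weights, period):
    """Gain of a symmetric odd-length filter at a given period (in samples)."""
    m = len(weights) // 2
    k = np.arange(-m, m + 1)
    return float(np.sum(weights * np.cos(2 * np.pi * k / period)))

# 29-point triangular filter ("smoothed over 15 years", M = 15).
tri = np.concatenate([np.arange(1, 16), np.arange(14, 0, -1)]).astype(float)
tri /= tri.sum()

# 29-point binomial filter: coefficients of (1+1)^28, normalized.
bino = np.array([comb(28, i) for i in range(29)], dtype=float)
bino /= bino.sum()

# Gains across short periods: the triangular filter's gain hits zero at
# a 15-year period then rises again (ripple); the binomial gain decays
# monotonically as the period shortens.
periods = np.arange(2.0, 29.0, 0.25)
tri_gain = np.array([gain(tri, p) for p in periods])
bino_gain = np.array([gain(bino, p) for p in periods])
```

The triangular filter’s gain at a 10-year period is about 0.05 even though it is zero at 15 years – a sidelobe, i.e. a ripple; the binomial gain simply increases with period, with no sidelobes.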

IPCC went so far as to provide a standard filter to “remove fluctuations on less than decadal time scales” for chapter 3, described in unequivocal terms as follows:

The second filter used in conjunction with annual values (Δt =1) or for comparisons of multiple curves (e.g., Figure 3.8) is designed to remove fluctuations on less than decadal time scales. It has 13 weights 1/576 [1-6-19-42-71-96-106-96-71-42-19-6-1]. Its response function is 0.0 at 2, 3 and 4Δt, 0.06 at 6Δt, 0.24 at 8Δt, 0.41 at 10Δt, 0.54 at 12Δt, 0.71 at 16Δt, 0.81 at 20Δt, and 1 for zero frequency, so for yearly data the half-amplitude point is about a 12-year period, and the half-power point is 16 years. This filter has a very similar response function to the 21-term binomial filter used in the TAR.
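
The quoted numbers check out: the 13 weights sum to 576 (so the filter is correctly normalized), the weights are symmetric, and evaluating the gain Σ w_k·cos(2πk/P) reproduces the stated response values. A quick verification in numpy:

```python
import numpy as np

# The 13 weights quoted above, before the 1/576 normalization.
raw = np.array([1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1], dtype=float)
w = raw / 576.0

def gain(weights, period):
    """Gain of a symmetric odd-length filter at a given period (units of dt)."""
    m = len(weights) // 2
    k = np.arange(-m, m + 1)
    return float(np.sum(weights * np.cos(2 * np.pi * k / period)))

# Stated response values from the AR4 appendix text.
checks = {2: 0.0, 3: 0.0, 4: 0.0, 6: 0.06, 8: 0.24, 10: 0.41, 12: 0.54,
          16: 0.71, 20: 0.81}
```

Every stated response value is reproduced to within rounding – one benefit of a filter described “in unequivocal terms”: anyone can verify it.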

Instead of simply complying with standard IPCC procedures, Rahmstorf used a filter procedure described only in the AGU newspaper – the triangular filter properties of which were not described in the original article; indeed, the authors say that they were unaware of this defect at the time.

As so often in climate science, Rahmstorf changed smoothing policy not just once, but twice. First, in Rahmstorf 2007, he abandoned IPCC policy in favor of an article in the AGU newspaper; then he changed accounting parameters in the Copenhagen Report – all without explicitly stating that he had changed policy from the IPCC report, or accompanying the change with an explicit accounting of its impact.

Here’s what happens to Rahmstorf’s results if IPCC filter procedures are followed. Rahmstorf can no longer assert that observations are in the “upper” part of the models, with the implication that things are “worse than we thought”. R07 is looking shakier and shakier.

Your Portfolio is "Better than You Thought"

Pension funds all over the world – even university pension funds – are clamoring for the services of Rahmstorf and Associates as pension fund manager. No more Fidelity or Berkshire Hathaway. Yesterday’s men.

Confused by the market? Worried about your investments? Stop your worrying. Rahmstorf and Associates’ portfolio managers will separate signal from noise using a proprietary smoothing method invented by two Arctic scientists.

No more inconvenient portfolio valuations. No more adding up the value of your portfolio at the close of the month. That’s so one-dimensional. Today’s sophisticated portfolio manager at Rahmstorf and Associates will use an embedding dimension of 15 years to value your portfolio. Your portfolio value is “better than you thought” ( (c) The Team).

Rahmstorf and Associates: Consistently outperforming the market since 1990. Remarkably consistent market outperformance since 1990.


Figure 1. S&P 500 with Rahmstorf smoothing. Thanks to Ross McKitrick for data and idea.

Rahm-Centering: Enhancing "Successful" Prediction

I only have time to post a quick note on this interesting aspect of Rahmstorf’s diagram – how the centering of Rahmstorf et al 2007 interacts with Rahm-smoothing (now conceded by everyone except Rahmstorf to be a simple triangular filter of 2M-1 years) to enhance “successful” prediction.

I noticed this effect when I did a plot using a standard reference period of 1961-1990 (as opposed to Rahmstorf’s unusual selection of 1990). Rahmstorf had centered on 1990 (using Rahm-smoothed values). Centering on a single year is a procedure that was severely criticized in the blogosphere a couple of years ago, and it was odd to see Rahmstorf also center on a single year, even if on a smoothed version. But it made me think about the impact of centering on Rahm-smoothed 1990 values, and the results were interesting.

I had collated A1B model information from KNMI (a large 57-run subset of the 81 PCMDI runs, presumably representative). I converted all models to 1961-1990 anomalies to match HadCRU and did an unsmoothed comparison of the model ensemble average and observations, showing 1-sigma limits as in the original. Unlike the original diagram, observations are not in the “upper part” of the models. Indeed, they weren’t even in the “upper part” when Rahmstorf et al 2007 was written.
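For readers who want to replicate this step: converting a series to 1961-1990 anomalies just means subtracting its own mean over that window from every value. A minimal sketch with made-up data (the series here is a toy, not the KNMI or HadCRU data):

```python
import numpy as np

# Toy absolute-temperature series, deg C (synthetic, for illustration only)
years = np.arange(1900, 2007)
temps = 14.0 + 0.007 * (years - 1900)

# Anomaly conversion: subtract the mean over the 1961-1990 base period
base = (years >= 1961) & (years <= 1990)
anomaly = temps - temps[base].mean()
```

Applying the same base period to every model run and to the observations puts them all on a common footing before comparison.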

An important difference was that 1990 model values were noticeably above observed 1990 values using a standard 1961-1990 reference period, whereas Rahmstorf centered both the model ensemble and observations on their Rahm-smoothed values in 1990. Think about the implications, now that we’ve confirmed that Rahm-smoothing is a triangular filter spanning 2M-1 years.


Figure 1. Unsmoothed comparison, 1961-1990 reference period. 1-sigma limits as in the original.

Rahmstorf’s triangular filter places a lot of weight on the edges relative to a more common Gaussian filter. For M=15, in the calculation of the 1990 value, a 2M-1 triangular (Rahmstorf) filter places as much weight on values from 2000-2004 as on 1990 itself! Let’s stipulate for a moment that the models were designed with an eye on history up to the early 1990s (the model response to Pinatubo is overwhelming evidence that they were) – nothing wrong with that. Actually, AR4 models probably had their eye on history even later than that.
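The weight claim is easy to check numerically. A minimal sketch, assuming the now-conceded 2M-1 point triangular form with weights declining linearly from the center:

```python
import numpy as np

def triangular_weights(M):
    """Normalized weights of a 2M-1 point triangular filter."""
    k = np.arange(-(M - 1), M)            # lags -14..+14 for M=15
    w = (M - np.abs(k)).astype(float)     # peak weight M at lag 0, falling to 1
    return k, w / w.sum()                 # normalize to sum to 1

k, w = triangular_weights(15)
center = w[k == 0][0]                     # weight on 1990 itself
tail = w[k >= 10].sum()                   # total weight on 2000-2004 (lags +10..+14)
```

With M=15, `center` and `tail` come out identical (both 15/225): the filter's 1990 value leans on 2000-2004 exactly as hard as on 1990.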

But let’s take a best case: suppose that peeking was cut off after Pinatubo and everything after that was untuned modeling. Let’s further suppose for the simplicity of illustration that models tracked observations exactly up to Pinatubo and then models ran x% hotter than observations and consider the implications under Rahm-smoothing and Rahm-centering.

In the calculation of the Rahm-smoothed 1990 values, Rahmstorf looks forward. Under the above assumptions (i.e., models running hotter than observations), the Rahm-smoothed 1990 model value will be raised relative to the Rahm-smoothed observations. The use of the triangular filter causes a noticeably larger effect than a Gaussian filter would.

Rahmstorf then centers both models and observations using these values. The centering step lowers the models relative to the observations, in effect offsetting part of the divergence. The effect is large enough to alter the rhetorical impact of the Rahmstorf diagram.
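The mechanism can be illustrated with the stipulated best case. In this sketch the series, break year, and excess-warming rate are all made up for illustration; only the filter shape and the centering-on-smoothed-1990 step follow the description above:

```python
import numpy as np

# Hypothetical setup: models track observations exactly through 1991,
# then run 0.02 deg C/yr hotter (synthetic numbers, not actual data).
years = np.arange(1976, 2007)
obs = 0.015 * (years - 1976)                                   # toy warming trend
models = obs + np.where(years > 1991, 0.02 * (years - 1991), 0.0)

def tri_smooth_at(series, years, target, M=15):
    """Value of the 2M-1 point triangular filter centered on `target`."""
    w = np.clip(M - np.abs(years - target).astype(float), 0, None)
    return np.sum(w * series) / np.sum(w)

# Rahm-centering: subtract each series' own smoothed 1990 value
obs_c = obs - tri_smooth_at(obs, years, 1990)
mod_c = models - tri_smooth_at(models, years, 1990)

gap_raw = models[-1] - obs[-1]        # true model-obs divergence by 2006
gap_centered = mod_c[-1] - obs_c[-1]  # apparent divergence after centering
```

Because the smoothed 1990 model value looks forward into the hotter post-1991 years, centering on it hands part of the divergence back: `gap_centered` is smaller than `gap_raw`.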

To be clear, I am not claiming that Rahmstorf et al did this intentionally. Indeed, there is considerable evidence that they had negligible understanding of the properties of their smooth and perhaps even misunderstood what it was doing. However, scientists and reviewers need to be wary of confirmation bias and, once again, this seems to have interfered with the identification of the problem.

UPDATE: This is what the Rahm graphic would look like using IPCC smoothing.

NCAR and Year 2100

Here’s another model output oddity that I noticed from a plot and confirmed with a direct screensave. In January 2100, the SH values for an NCAR PCM run, as archived at KNMI (expressed as an anomaly here), jump about 2.5 deg C, from 0.3 deg C to 2.81 deg C, before relaxing to lower values over the next few years. The NH has an offsetting problem, with 45-90N dropping over 2 deg C in Jan 2100. Overall it seems to average out. (Did I hear someone say that it therefore “doesn’t matter”?)

The problem occurs exactly in January 2100 and one can hardly help wondering whether something has been spliced somewhere along the way – a model version of the Hansen Y2K problem. Again, I do not know whether this is an artifact of how KNMI handles the data extraction or whether it’s a property of the model.

SH (0-90) Scrape

45-90N Scrape
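Discontinuities of this kind are straightforward to screen for mechanically, by flagging month-to-month first differences that exceed a threshold. A hedged sketch on synthetic data (the series, injected jump, and threshold are all made up, not the KNMI archive):

```python
import numpy as np

# Toy monthly series with a slow trend and an injected splice-like jump
months = np.arange(120)                    # ten years of monthly values
series = 0.3 + 0.001 * months
series[60] += 2.5                          # inject a 2.5 deg C step at one month

# Flag months where the value changes by more than 1 deg C from the prior month
diffs = np.diff(series)
suspect = np.where(np.abs(diffs) > 1.0)[0] + 1
```

A step up followed by relaxation shows as a pair of flagged months (the jump in and the drop out), which is one way to distinguish a splice artifact from an ordinary trend.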