In our recent discussion of Dessler v Spencer, UC raised monthly centering as an issue in respect to the regressions of TOA flux against temperature. Monthly centering is standard practice in this branch of climate science (e.g. Forster and Gregory 2006, Dessler 2010), where it is done without any commentary or justification. But such centering is not something that is lightly done in time series statistics. (Statisticians try to delay or avoid this sort of operation as much as possible.)

When you think about it, it’s not at all obvious that the data should be centered on each month. I agree with the direction that UC is pointing to – a proper statistical analysis should show the data and results without monthly centering, either to verify that the operation of monthly centering doesn’t affect the results or to show that its impact on the results has a physical explanation (as opposed to being an artifact of the monthly centering operation).
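For concreteness, “monthly centering” means subtracting each calendar month’s long-term mean from the series – 12 fitted parameters per series. A minimal sketch of the operation (in Python for illustration; the function name and conventions are mine, not from any of the cited scripts):

```python
import numpy as np

def monthly_center(x, months):
    """Subtract each calendar month's long-term mean ("monthly centering").

    x      : 1-D array of monthly values
    months : 1-D array of month numbers 1..12, same length as x
    Returns the monthly anomalies; note this fits 12 means per series.
    """
    x = np.asarray(x, dtype=float)
    months = np.asarray(months)
    anom = np.empty_like(x)
    for m in range(1, 13):
        sel = months == m
        anom[sel] = x[sel] - x[sel].mean()
    return anom
```

Applying this to both flux and temperature before regressing is the “monthly anomaly” step at issue in the discussion below.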

In order to carry out the exercise, I’ve used AMSU data because it is expressed in absolute temperatures. I’ve experimented with AMSU data at several levels, but will first show the results from channel 4 (600 mb) because they seem quite striking to me and because troposphere temperatures seem like a sensible index of temperature for comparing to TOA flux (since much TOA flux originates from the atmosphere rather than the surface.)

In the graphic below, the left panel plots the CERES TOA Net flux (EBAF monthly version) against monthly AMSU channel 4 temperatures. (Monthly averages are my calculation.) The right panel shows the same data plotted as monthly anomalies. (HadCRU, used in some of the regression studies, uses monthly anomalies.) The red dotted line shows the slope of the regression of flux~temperature, while the black dotted line shows a line with a slope of 3.3 (chosen to show a relationship of 3.3 wm-2/K). Take a look – more comments below.

Figure 1. CERES TOA Net Upward Flux (EBAF) vs AMSU Channel 4 (600 mb) Temperature. Left – Absolute; right – monthly anomaly.

The differences between regressions before and after monthly centering are dramatic, to say the least.

Consider the absolute values (left) first. Unlike the Mannian r2 of 0.018 of Dessler 2010, the relationship between TOA flux and 600 mb temperature is very strong (r2 of 0.79). TOA flux is net downward when 600 mb temperature is at its minimum (Jan – northern winter/southern summer) and net upward in northern summer (July), when global 600 mb temperature is at its maximum.

The slope of the regression line is 7.7 wm-2/K (slopes greater than 3.3 wm-2/K are said to indicate negative feedback.) There is an interesting figure-eight shape as a secondary but significant feature. This residual has 4 zeros during the year – which suggests to me that it is related to the tropics (where incoming solar radiation has a 6-month cycle maxing at the equinoxes, with the spring equinox stronger than the fall equinox.)

In “ordinary” statistics, statisticians try to fit things with as few parameters as possible. In this case, a linear regression gives an excellent fit and, with a little experimenting, a linear regression plus a cyclical term with a 6-month period would give an even better fit. There doesn’t seem to be any statistical “need” to apply monthly centering in order to get a useful statistical model.
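The line-plus-semiannual-harmonic model mentioned above needs only four parameters and can be fitted by ordinary least squares. A sketch on synthetic data (in Python for illustration; the numbers are invented and are not the CERES/AMSU values):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120) / 12.0                  # ten years of monthly time steps
temp = 253 + 1.5 * np.sin(2 * np.pi * t)   # synthetic 600 mb temperature, K
# synthetic flux: linear in temperature plus a semiannual residual cycle
flux = 7.7 * (temp - 253) + 2.0 * np.sin(4 * np.pi * t) + rng.normal(0, 0.3, t.size)

# design matrix: intercept, temperature, and one 6-month harmonic (4 parameters)
X = np.column_stack([np.ones_like(t), temp,
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
resid = flux - X @ beta
print(round(beta[1], 2))                   # recovered slope, close to 7.7
```

Four parameters recover the slope and leave small residuals, as against the 24 monthly means implied by centering both series.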

Now let’s look at the regression after monthly centering – shown on the same scale. Visually, it appears that the operation of monthly centering has damaged the statistical relationship. The r2 has dropped to 0.41 – still much higher than the r2 of Dessler 2010. (The relationship between TOA flux and 600 mb temperatures appears to be stronger than the corresponding relationship with surface temperatures, especially HadCRU.)

Interestingly, the slope of the regression line is now 2.6 wm-2/K i.e. showing positive feedback.
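Such a difference between pre- and post-centering slopes is easy to reproduce on synthetic data: if the shared seasonal cycle relates the two series with one coefficient while the month-to-month anomalies are related with another, the two regressions estimate different things. A toy demonstration (illustration only; the 7.7 and 2.6 are borrowed from the figure as labels, not as claims about the actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 240
t = np.arange(n) / 12.0
month = np.arange(n) % 12

seasonal = 1.5 * np.sin(2 * np.pi * t)     # annual cycle shared by both series
anom_T = rng.normal(0, 0.3, n)             # month-to-month temperature variability
temp = 253 + seasonal + anom_T
flux = 7.7 * seasonal + 2.6 * anom_T + rng.normal(0, 0.2, n)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

def center(x):
    # subtract each calendar month's mean ("monthly centering")
    out = x.astype(float).copy()
    for m in range(12):
        out[month == m] -= x[month == m].mean()
    return out

s_raw = slope(temp, flux)                  # dominated by the shared seasonal cycle
s_anom = slope(center(temp), center(flux)) # recovers the anomaly coefficient
print(round(s_raw, 1), round(s_anom, 1))
```

Which of the two coefficients is the physically relevant one is, of course, exactly the question at issue.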

I’ve done experiments comparing AMSU 600 mb to AMSU SST and both to HadCRU. The results are interesting and will be covered on another occasion.

In the meantime, the marked difference between regression results before and after monthly centering surely warrants reflection.

I try to avoid speculations on physics since I’ve not parsed the relevant original materials, but, suspending this policy momentarily, I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperatures. Yes, the relationship between absolute quantities still leaves residuals with a seasonal cycle, but it would be much preferable in statistical terms (and presumably physical terms) to explain the seasonal cycle in residuals in some sort of physical way, rather than by monthly centering of both quantities (24 parameters!).

If there is a “good” reason for monthly centering, the reasons should be stated explicitly and justified in the academic articles (Forster and Gregory 2006, Dessler 2010) rather than being merely assumed – as appears to have happened here. Perhaps there is a “good” reason and we’ll all learn something.

In the meantime, I think that we can reasonably add monthly centering to the list of questions surrounding the validity of statistical analyses purporting to show positive feedbacks from the relationship of TOA flux to temperatures. (Other issues include the replacement of CERES clear sky with ERA clear sky and the effect of leads/lags on Dessler-style regressions.)

I suspect that it may be more important than the other two issues. We’ll see.

PS – there are many interesting aspects to the annual story in the figure shown above. The maximum annual inbound flux is in the northern winter (Jan) and the minimum is in the northern summer – the difference is over 20 wm-2, large enough to be interesting. The annual cycle of outbound flux and GLB temperature reaches a maximum in the opposite season to the one that one would expect from the annual cycle of inbound flux. I presume that this is because of the greater proportion of land in the NH, as Troy observed. In effect, energy accumulates in the SH summer and dissipates in the NH summer. An interesting asymmetry.

**Note**: AMSU daily information is at http://discover.itsc.uah.edu/amsutemps/. I’ve uploaded scripts that scrape the daily information from this site for all levels and collate into monthly averages. See script below. My collation of monthly data is also uploaded.

source("http://www.climateaudit.info/scripts/satellite/amsu.txt")
amsu=makef()
amsum=make_monthly(amsu)
# alternatively, download the collated monthly averages:
download.file("http://www.climateaudit.info/data/satellite/amsu_monthly.tab","temp",mode="wb")
load("temp")
amsum=amsu_monthly

The CERES data used here is the EBAF version, downloaded from http://ceres-tool.larc.nasa.gov/ord-tool/jsp/EBAFSelection.jsp as an ncdf file containing the TOA parameters. Collated into a time series and placed at CA:

download.file("http://www.climateaudit.info/data/ceres/ebaf.tab","temp",mode="wb")
load("temp"); tsp(ebaf)

The graphic is produced by:

A=ts.union( ceres=-ebaf[,"net_all"],amsu=amsum[,"600"], ceresn=anom(-ebaf[,"net_all"]),amsun=anom(amsum[,"600"])) # sign reversed by convention
month= factor( round(1/24+time(A)%%1,2))
A=data.frame(A)
A$month=month
nx= data.frame(sapply(A[,1:2], function(x) tapply(x,month,mean,na.rm=T) ) ) # monthly means
# if (tag) png(file="d:/climate/images/2011/spencer/ceres_v_amsu600.png",h=480,w=600)
layout(array(1:2,dim=c(1,2) ) )

# left panel: absolute values
plot(ceres~amsu,A ,xlab="AMSU 600 mb deg C",ylab="Flux Out wm-2",ylim=c(-10,10),xlim=c(251.5,255),xaxs="i",yaxs="i")
lines(A$amsu,A$ceres )
points(nx$amsu,nx$ceres,pch=19,col=2)
title("Before Monthly Normal")
for(i in 1:11) arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[i+1],y1=nx$ceres[i+1], lwd=2,length=.1,col=2)
i=12; arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[1],y1=nx$ceres[1], lwd=2,length=.1,col=2)
text( nx$amsu[1],nx$ceres[1],font=2,col=2, "Jan",pos=2)
text( nx$amsu[7],nx$ceres[7],font=2,col=2, "Jul",pos=4)
abline(h=0,lty=3)
fm=lm(ceres~amsu,A); summary(fm)
round(fm$coef,3)
a=c(250,255);b=fm$coef
lines(a,fm$coef[1]+a*fm$coef[2],col=2,lty=3,lwd=2)
abline( 3.3* b[1]/b[2], 3.3,lty=3)
text(251.5,9,paste("Slope:", round(fm$coef[2],2)),pos=4,col=2,font=2)

# right panel: monthly anomalies
plot(ceresn~amsun,A ,xlab="AMSU 600 mb deg C",ylab="Flux Out wm-2",ylim=c(-10,10),xlim=c(-1.75,1.75),xaxs="i",yaxs="i")
lines(A$amsun,A$ceresn )
title("After Monthly Normal")
abline(h=0,lty=3)
fmn=lm(ceresn~amsun,A); summary(fmn)
round(fmn$coef,3) # 2.607
a=c(-2,2);b=fmn$coef
lines(a,b[1]+a*b[2],col=2,lty=3,lwd=2)
abline(0,3.3,lty=2)
text(-1.5,9,paste("Slope:", round(fmn$coef[2],2)),pos=4,col=2,font=2)

## 227 Comments

One advantage of working with seasonally detrended data must be the reduced risk of spurious correlations as many variables have annual cycles, and so will likely appear to correlate even if there is no causal relationship.
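The commenter’s point is easily demonstrated: two series that share nothing but an annual cycle correlate strongly until the cycle is removed (synthetic data, illustration only; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 240
month = np.arange(n) % 12
cycle = np.sin(2 * np.pi * month / 12)

# two series that share nothing but an annual cycle
a = 5 * cycle + rng.normal(0, 1, n)
b = 3 * cycle + rng.normal(0, 1, n)

def deseasonalize(x):
    # subtract each calendar month's mean
    out = x.copy()
    for m in range(12):
        out[month == m] -= x[month == m].mean()
    return out

r_raw = np.corrcoef(a, b)[0, 1]
r_anom = np.corrcoef(deseasonalize(a), deseasonalize(b))[0, 1]
print(round(r_raw, 2), round(r_anom, 2))   # the apparent correlation vanishes
```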

Does it “matter” when one “gets” a result that happens to side with a particular POV?

Of course it matters when a result agrees with one’s point of view. The result is “successful.” Further investigation goes in that direction. Then someone comes along and says, for example, “Why did you do that? That’s dumb! You cannot do that. You must do it ‘this’ way.” Then…

Why is the left hand graph reminding me of that ‘Analemma’ thing on my globe?

(except upside down and tipped a bit)…

RR

Because the sun drives climate?

/yeah, cheap shot, I confess.

RuhRoh, exactly the first thing that came to my mind. Could be a pure coincidence, but then again . . .

Thanks, Steve, for the excellent work and explanation. This is significant.

That’s just fun. Thanks.

I have a sense of déjà vu about these numbers. Is this topic related in some way to a previous topic, Some Simple Questions?

Steve – yes.

I’d love to take credit for this idea (assuming I’m the Troy referenced here), but I don’t recall saying it (probably one of the more physics-oriented commenters). Someone else should come forward with a bow for this one.

I’m guessing that the AMSU 600 mb showed better correlation than either SST or HadCRUT? We found that there was a lag between TLT and SST temperatures, and that since the bulk of the Planck response (80-85%) is coming from the atmosphere rather than the surface, it only makes sense that you would get a better correlation when regressing the TOA fluxes against atmospheric rather than surface temperatures. I raised this issue also in a comment at SoD, because I’m curious why both the Spencer and Forster camps seem to agree that feedbacks occur instantaneously with surface temperatures, when the bulk of the OLR change is expected to come from atmospheric temperature changes occurring months later.

Steve – someone wrote about this at one of the technical blogs. I thought it was you, but I guess not. I’ll re-check sometime.

The first person to say that the phase lag between insolation and global temperature should be about 180 degrees in the “simple questions” thread was, it would appear, me:

http://climateaudit.org/2011/09/13/some-simple-questions/#comment-303124

Others seconded this, and David Smith seems to have illustrated that indeed, the global temperature cycle for the atmosphere follows what essentially amounts to insolation weighted more heavily toward the Northern Hemisphere, rather than the whole Earth:

http://lukewarmplanet.wordpress.com/2011/09/12/tropospheric-temperature-data-from-aqua-channel-5/

I assume that is the technical blog Steve is talking about.

Re the influence of land on the 600 mb (AQUA ch 5) annual cycle:

http://lukewarmplanet.wordpress.com/2011/09/12/tropospheric-temperature-data-from-aqua-channel-5/

Nature, unfortunately, doesn’t always follow the Julian calendar. The daily data, in my opinion, can be handled in ways which hint at interesting intramonthly and intraseasonal features which are poorly-visible in monthly averaging. An example is here:

http://lukewarmplanet.files.wordpress.com/2011/09/0929115.jpg

The plot shows anomaly oscillations, possibly MJO-related, which get watered down in any monthly averaging. There are also variations in amplitude. The reduced amplitude in the fall of 2007, for instance, preceded a tropospheric temperature drop associated with a La Nina. Was the reduced amplitude an indication of the global atmosphere “depressing a clutch” to disengage from one state to another? A similar change in amplitude was associated with the 2010 La Nina.

It’s a silly analogy, I realize, and climate science likely has the answer, but it’s nevertheless an intriguing data-wiggle to a non-meteorologist, one that gets lost in monthly averaging.

Erl Happ thinks along similar lines.

erl happ September 29, 2011 at 12:44 am

Read here – but often wondered about before that.

The great strangeness of climate is that such basic questions have been drowned out by ‘the science is settled’. But that’s not science, this is.

Your improved regression reflects, of course, the common seasonal influence. But that’s not of interest here.

There’s no point in being purist about the time component of averaging. Your absolute temps, varying from 251K to 255K, are far more significantly averaged in space, from pole to equator. This averaging also masks most of the local seasonal variation.

The reason for going to anomalies is that they are looking to a regression to relate changes. ΔR vs ΔT. And there’s actually not much point in trying to localise those effects in time and space. The reason is that we don’t expect to find them linked on that local scale. People here have been looking at lags of months to years. That means looking for some effect that survives a huge amount of mixing, in both space and time. That comes back to conserved quantities, particularly heat, for which T is a proxy.

On your last point about the 20 W/m2 difference – isn’t this comparable to the orbital eccentricity differential?

why?

The seasons carry out a natural experiment in which the incoming flux varies by more than 20 wm-2.

why wouldn’t that be of interest?

“why wouldn’t that be of interest?”

It could be if you were looking at the response of temperature to flux. But the papers recently discussed have been looking at the response of flux to temperature. And while the seasonal oscillation of temperature might seem to be worth investigating for its effect on flux, it is too confounded with the other annual effects (like TSI) for attribution.

What a strange comment. Isn’t the response of temperature to flux a pretty important issue in respect to doubled CO2?

They’ve been discussing feedback which involves both elements. Not that the scope of the papers precludes the discussion of related topics.

Nick, you’re just arm-waving again. Unless you’ve done statistical analysis on this precise point, you don’t know. yes, there are annual effects, but that doesn’t in itself preclude analysis.

It seems to me that the approach of the Dessler-type articles – with their Mannian r2 of 0.018 – indicates that effects are being confounded in the academic litchurchur (or else they’d get a more impressive statistical relationship.)

Of course there is.

There may be valid reasons for monthly centering, but merely complaining about purism is not one of them.

But this does not necessarily entail monthly centering. One could define an anomaly relative to an annual mean. Some commenters have. I have no particular views on the matter as it’s a new topic for me. The regressions in the litchurchur appear to be premised on the idea of rather rapid adjustment. I see no logical inconsistency in examining the implications of both alternatives.

Yes. It is the orbital eccentricity difference. It’s a large number compared to doubled CO2. If feedbacks operate rapidly, then why wouldn’t this potentially yield interesting information. Monthly centering erases much of this information.

My point here is that monthly centering needs to be parsed and not merely assumed as Forster and Gregory and their successors have done. Please note that I am open to reasons on this, but they need to be more than simple armwaving of the type that you’ve offered so far.

“One could define an anomaly relative to an annual mean.”

But if you did, using the analyses of SB/LC/D or even the systems analysis methods discussed at CA, you would then be attributing the orbital cycle of 20 W/m2 to temperature changes.

Steve: defining an anomaly with respect to an annual mean is purely a definition and implies NOTHING about attribution of the orbital cycle. Why do you say such things?

“defining an anomaly with respect to an annual mean is purely a definition”

It may seem so, in the sense that since the annual mean is a single global number, it’s really just using a different temperature zero point. But generally anomaly calculation is more significant. It’s like postulating a linear model and calculating residuals. You make some prediction of the temperature based on that model, and investigate the deviation after you’ve allowed for that.

So if you take an anomaly relative to an annual mean, you do not take into account annual cycles. They become part of the data you try to interpret in terms of the hypothesis you are investigating. In this case, it is that flux changes can be attributed to surface temperature changes. So when you see an annual flux cycle which is largely due to orbital eccentricity, your model will attribute that to temperature change, with bad effects on the relationship that you deduce.

If you take the anomaly relative to a monthly mean, you take out (most of) all annually periodic cycles. That may include some genuine effects of temperature on flux. But the information that remains, though reduced, can be more revealing about the actual relation between temperature and flux. Masking effects have been removed.

I find it difficult to understand the defence of the general use of anomalies on the basis that they “take account of” a particular phenomenon in a model. This is an assumption that is empirically testable (unless we truly are just normalising a dimension). So, even if the data set being used is highly preprocessed, the test should be done, shouldn’t it? (And if the preprocessing is such that it makes absolute measures dodgy, then the same applies to the anomaly.)

This is post-normal science. Get with the program.

Actually, deseasonalizing data has a much longer history in economic statistics.

Where considerable effort goes into testing the assumption that one is dealing with a structural time series, and that the assumptions used in the detrending methodology are not being violated.

Actually considering that CO2 concentration itself changes by 10ppm over a year, and does not change equally among the hemispheres, I would think that matching effects of TSI & CO2 versus flux would be an important factor in all modeling of CO2 effects on temperature when working under the presumption that CO2 drives climate change. The idea that this isn’t considered in models is fantasy, right?

“Yes. It is the orbital eccentricity difference. It’s a large number compared to doubled CO2. If feedbacks operate rapidly, then why wouldn’t this potentially yield interesting information. Monthly centering erases much of this information.” – Steve

Why is the number 20 watts/m2? Is this some averaging at work? I design and monitor the performance of spacecraft power systems and the difference from January 3rd to July 3rd (perihelion to aphelion) is about 7%, or over 50 watts/m2.

This has always bothered me in climate science.

To prove my point, here is the power profile of the SMART-1 spacecraft, launched in 2003 to the Moon from a GEO transfer orbit. Unfortunately, the graph that I got from ESA obscured the dates a bit, but the point is there. You can even see the variation in spacecraft power due to the Moon’s varying distance from the Sun as it goes around the Earth.

What am I missing here?

http://www.panoramio.com/photo/60028512

“What am I missing here?”

Geometry. The solar flux is being added to the outgoing IR etc, which is measured in W/m2 of Earth’s surface. So it’s divided by 4 – 341 W/m2. 7% of that is about 24 W/m2. But then part of that is directly reflected.
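Nick’s arithmetic can also be checked directly from the orbital eccentricity (a back-of-envelope sketch; the mean TSI figure is the conventional modern value of about 1361 W/m2):

```python
e = 0.0167                        # Earth's orbital eccentricity
S = 1361.0                        # W/m2, TSI at the mean Sun-Earth distance
S_perihelion = S / (1 - e) ** 2   # flux goes as 1/r^2; r = a(1 - e) in early Jan
S_aphelion = S / (1 + e) ** 2     # r = a(1 + e) in early Jul
swing_toa = S_perihelion - S_aphelion   # ~91 W/m2 at the top of the atmosphere
swing_surface = swing_toa / 4           # ~23 W/m2 averaged over the sphere
print(round(swing_toa), round(swing_surface))
```

The factor of 4 is the ratio of the Earth’s surface area to its cross-section facing the Sun; albedo would reduce the absorbed portion further.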

“Geometry. The solar flux is being added to the outgoing IR etc, which is measured in W/m2 of Earth’s surface. So it’s divided by 4 – 341 W/m2. 7% of that is about 24 W/m2. But then part of that is directly reflected.”

Nick,

Thanks but there is still a bit of confusion here. The SMART-1 spacecraft was not in Low Earth Orbit (LEO). I am familiar with the added outgoing IR as I have built spacecraft that take advantage of the extra flux when at altitudes of a few hundred km.

SMART-1 began in a 400 x 43,000 km orbit at an inclination of 7 degrees from the equator. The added IR is not added to the solar flux at these altitudes, and indeed by the second peak the spacecraft would have been almost out to lunar distance, as it went into lunar orbit shortly thereafter; you can see the flux variation in the SMART-1 EPS data.

In the spacecraft power systems design world we use an incoming average flux of 1358 w/m2, with a figure of 1390 w/m2 in early January and 1328 w/m2 on July 3rd. This is a substantial variation in solar flux. This average flux is stated in the engineering world as AM0, or Air Mass 0 – zero meaning no atmosphere – at Earth’s average distance from the Sun.

In the solar energy world, we use a standard of Air Mass 1, which at sea level is stated at 1,000 w/m2. The rest is reflected, refracted, and/or absorbed. I build solar power systems for use on the Earth as well, and I just installed a system at Yellowstone National Park. This system has a nameplate power of 8960 watts, meaning that at AM-1 conditions the panels should put out that much power +/- 5%. However, at the place where our system is installed – Bechler station, at 8200 ft altitude – the output of the system, as I measured it just two weeks ago, hit almost 11,000 watts. We have seen this before at high altitude sites with our hardware. That is a 22.7% increase in power. If you use the worst case of 5% over nameplate power and add to that the flux at that altitude, you get about 16% greater sunlight at this altitude, which roughly corresponds to the reduced atmospheric pressure at that altitude.

I have never YET seen these insolation variations discussed in the literature when looking at GCM models or even estimates regarding absorption/emission of visible radiation.

It is obvious to me as an engineer with an engineering physics degree that we are not doing a very good job with our computer models or data taking/manipulation to deal with the real world as it presents itself out there. This goes directly (to keep Steve happy) to the climate sensitivity issue. If the climate was as sensitive to the 0.012% variation in the concentration of a trace gas as the modelers tell us it is, then this should show up in much more obvious ways than it does, especially on a monthly basis. We should be able to see some pretty big differences between high altitude locations in the southern hemisphere in the summer vs high altitude locations in the summer in the northern hemisphere.

In the Aqua data that has been presented here by some, you can easily see the variation in flux at altitudes above 68,000 ft, as it directly matches the periodicity of the variation of the distance of the Earth from the sun. Why in the world would the large flux variations (well over 50-60 w/m2) not show up as a variation in the climate between similar northern vs southern hemisphere high altitude locations?

Steve I hope that I am not off topic here but it seems that if the climate was as sensitive as we are led to believe, that this would be a slam dunk to show up in the data sets.

Dennis,

Thanks for your views.

Are there similar plots available for the Aqua and Terra satellites (operational characteristics for the MODIS program) or other polar-orbiting sats?

“Are there similar plots available for the Aqua and Terra satellites (operational characteristics for the MODIS program) or other polar-orbiting sats?”

Available, or available publicly?

There is data available but you would have to get it out of the engineering teams. I design power systems for space as well as solar power systems on the ground so I get information like this. A better source that would be less influenced by the Earth’s IR would be from a GEO comsat. That data is generally protected by NDA’s but it could probably be obtained for scientific purposes from the right operator.

I am an amateur but:

Nick says: “The reason for going to anomalies is that they are looking to a regression to relate changes. ΔR vs ΔT.”

Seems to me the left graph “before” gives the feedback signal, and the right graph “after” gives the monthly noise. Totally meaningless to regress it, is it not?

“Seems to me the left graph ‘before’ gives the feedback signal”

It gives a strong “signal”, but it isn’t feedback. The 20 W/m2 flux variation is mostly due to the Earth’s orbit. And the variation in T is the remains of the seasonal variation after adding NH and SH – it represents only the difference, produced largely by the disparity in land mass. It’s hard to see any feedback there.

After monthly anomalies, obviously the variation is much reduced, and noise is a problem. But that’s where you have to look.

Nick, if anomalies after monthly centering are the “right” way to analyse data, why aren’t they used in GCM parameterizations? Just asking.

Steve,

I’m not sure what parameterizations you have in mind. here is CAM3.0 parameterizing aerosols, and they use monthly-mean climatology.

Are you being intentionally obtuse? GCM output, as I’m sure you are well aware, is denominated in degrees Kelvin, not monthly anomalies. If deg K is as uninteresting as you say here, then why don’t GCMs operate in monthly anomalies?

He’s just making cherry pie. And moving on.

Using anomalies does help because it eliminates the systematic bias shown in the output from different GCMs.

“Are you being intentionally obtuse?”

No, you’re being inexact. Solving for field variables is not parameterization. But yes, of course the solutions are in absolute temp, velocity etc. There is an issue about discretization and sub-grid averaging. A very big issue in fluid mechanics.

It lies behind my earlier comments about purism in time averaging. In CFD, you pay a lot of attention to the relation between time interval and space interval. Not only is it a waste to be “purist” but it often causes instability.

So GCMs work in intervals of 30 min or so, and space grids of about 100 km. Below that scale they implicitly average. As always, the ability to meaningfully average is based on conservation laws.

OK, Nick, then maybe you (or your colleagues) should do the necessary experiments, in a reproducible manner, that demonstrate this – and/or show me the studies that do so – because I am not aware of any showing that the ability to meaningfully average, as it has been done in paleoclimatology, is based upon conservation laws. TIA

PS: It has been many years since I took physical chemistry, but I’d be willing to do some review in order to get up to speed on this.

CDQ, Nick is dissembling again. There are a couple of really good threads on CA archives with Dr Browning on his and Dr Kreiss and Gavel’s work.

What Browning and Kreiss show mathematically is that it is physically impossible to extend the time one of these models is run before the exponential increase in errors swamps the matrix. The amount of time depends on the grid size and step size. Whereas Nick calls it “purist”, it is a physical limitation inherent in the too-large grid size and too-large time step.

In order to get around this, modellers use hyperviscosity and adiabatic adjustment – according to Dr. Gavin or his source – to prevent negative mass and/or energy in a grid cell. This makes the models engineering applications more so than physical (physics) applications. However, note that there is just one run of the independent actual response instead of the tens of thousands that engineering models require to validate their usefulness. Also note that extrapolation, which is what all the GCMs do, is known to be not necessarily correct, and requires a separate validation. In other words, as two modellers (Tebaldi and Knutti) stated in the peer reviewed literature, it will take 130 or more years to show that a 100 year model was correct.

PS I am bad at spelling, my apologies if I have erred.
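The error-growth point above is generic to chaotic dynamics and can be illustrated with a toy system – the logistic map, which is emphatically not a GCM; the parameter choices are mine, for illustration only:

```python
# Two runs of the logistic map differing by 1e-12 in the initial state; in the
# chaotic regime the difference grows roughly exponentially until it is O(1).
r = 3.9                            # parameter value in the chaotic regime
x, y = 0.4, 0.4 + 1e-12
steps_to_diverge = None
for step in range(1, 201):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if steps_to_diverge is None and abs(x - y) > 0.1:
        steps_to_diverge = step
print(steps_to_diverge)            # the perturbation becomes O(1) within ~50 steps
```

Beyond that horizon the two runs are effectively unrelated – the point Browning and Kreiss make rigorously for the continuum equations.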

Well, John, thanks for replying. I was being a bit facetious as well as serious. I should not have left off the ;) in my post. I am aware of Dr. Browning’s posts.

I also have a few points of contention with respect to the premises and axioms underlying these papers which seem to be in fundamental disagreement with the real system. Earth has a weather system. Our climate is a statistical summary of previously realized weather. How the statistical model gets specified is important.

We also live in and adapt to the atmospheric surface boundary layer and that’s where the weather action is. These papers being discussed mostly don’t seem to take that into account. Is it really being too purist to expect the ‘experts’ to have done the foundational work showing that their procedures are justified? Just who’s fooling whom here :), really?

I did not mind the missing smiley. I like to point out that there are real reasons to take a model with a big grain of salt. Even more so for those that use a methodological approach incompletely. And those that extrapolate even with proven methodology are suspect, much less those that extrapolate without one. So if it is overkill, I am a bit sorry, but not that much ;) .

Not sure that the two plots capture the same trend. The problem is with the left plot. Part of the observed steeper trend will come from the dissimilar land areas of the Northern and Southern Hemispheres. The only way to eliminate that effect is to construct trend lines for the same month of many years (or combine such trend lines as in the right hand plot by combining monthly anomalies). In fact, you can rediscover the trend from the right hand plot in the cloud of summer data points in the left hand plot. This is, in essence, the same point that Nick Stokes was making.

The data is still very interesting because the annual cycle provides a natural experiment. In particular when we also include the annual CO2 fluctuation in the picture. For example, it would be interesting to make comparisons between Northern and Southern Hemisphere positions taken at a six month offset from each other that have nearly identical conditions except that the CO2 content is different due to the annual cycle. If perfectly executed this would yield a net outgoing flux as a function of atmospheric temperature and CO2 fraction (limited to the specific type of location selected).

Here’s a plot in the style of the figure in the head post relating AMSU 600 mb to AMSU SST temperatures – left absolute, right anomaly. In the anomaly version, the annual cycle does not exist.

I realize that Nick Stokes thinks that the annual cycle is uninteresting, but I find it very interesting and potentially relevant to the regressions proposed in the academic literature.

For example, it seems interesting to me that

1) GLB surface temperatures rise about 0.5 deg C in the southern summer (Jan) while GLB 600 mb temperatures don’t rise very much;

2) during the northern spring (Mar-Jun), GLB surface temperatures decline (~.4 deg C) while 600 mb temperatures rise quite sharply (~1.6 deg C) and then

3) 600 mb decline by about 1.6 deg C with a small (~0.1 deg C) decrease in GLB surface (AMSU).

The diagrams presented here do not themselves constitute exhaustive analyses. They suggest various analyses e.g. NH, SH, tropics, for example.

“I realize that Nick Stokes thinks that the annual cycle is uninteresting”

Well, I said “not of interest here”. And the reason is that, while the annual variation of surface temperature may cause variation in the flux, and vice versa, the confounding effects mean that you can’t make the necessary attribution. That’s not a consequence of sophisticated statistical analysis – it’s a practical issue. How could you do it? Flux, for example, has a more or less known cycle of 20 W/m2 from orbital TSI. There’s another big effect of NH/SH annual insolation variation, each hemisphere having a different albedo. There are the marked annual patterns of monsoons (clouds etc).

The different altitude effects are of independent interest, but likely have more to do with the fact that the main heating of the atmosphere is at the surface, in latitudes where the sun is seasonally reasonably high in the sky, and mixing is more effective at higher altitudes. The former would cause a lag, and the latter an attenuation of the cycle, and these seem to be reflected in your plots.

Those two sentences are awkwardly confusing to me. Can you please re-state?

Nick,

Does this 20Wm-2 delta refer just to the instantaneous TSI at perihelion and aphelion? It seems to me that there is also an issue of accounting for those deltas over the amount of time that they are in effect (the orbit time spent near perihelion (in days or months) is less than the same number of days/months near aphelion). Since perihelion and aphelion happen to occur in Jan and Jul, respectively, I’d expect those time-integrations to make noticeable impacts on temperatures seen in the NH and SH at those points, separate from just a difference in TSI itself. Are these being accounted for?

OUH,

I haven’t looked much at the details of the orbit effects. They are removed by removing annual cycles generally, not by direct quantification of the effect.

In other words, we average them out so we don’t have to worry about how to explain them.
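As an aside, the size of that orbital cycle can be sanity-checked from the Earth's orbital eccentricity alone. A minimal sketch (the eccentricity and solar-constant values are assumed round numbers, not taken from the thread):

```python
# Peak-to-peak annual swing in global-mean insolation from orbital
# eccentricity alone: flux scales as 1/r^2, and global-mean insolation
# is S/4 (sphere surface area vs intercepted disc area).
e = 0.0167           # Earth's orbital eccentricity (assumed)
S0 = 1361.0          # solar constant at 1 AU in W/m2 (assumed)
perihelion = (S0 / 4) * (1.0 / (1.0 - e)) ** 2
aphelion = (S0 / 4) * (1.0 / (1.0 + e)) ** 2
swing = perihelion - aphelion
print(round(swing, 1))  # roughly 23 W/m2, in line with the ~20 W/m2 cycle quoted above
```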

Input signal with large variation (due to the elliptic orbit) is modulated by the clouds and enters the thermal energy storages of Earth. In addition, the angle of arrival changes all the time, and you need the Nautical Almanac to track the GP, the geographical position of the Sun. Motion of the GP causes daily cycles, changes in hours of daylight, and seasons. It is too difficult to handle all this; it is much easier to downsample the observed data to monthly averages (matrix D), then left-multiply it by M (as in the above code) and take a gridded average of the result (matrix G). All you need to do is make sure that the linear transformation G*M*D will not affect your results (physical or statistical). For purely statistical arguments it is quite easy to show the effects, as the transformation is linear (something I’ve been working on). Steve’s result is more on the physical side.

Now that we have the argument ‘Natural factors cannot explain G*M*D*observations’, it is interesting to see what we can say about the actual observations. G*M*D is not invertible, but some statistical statements *) can possibly be backtracked into the observation domain.

*) such as one in IPCC AR4WG1, “The Durbin Watson D-statistic (not shown) for the residuals, after allowing for first-order serial correlation, never indicates significant positive serial correlation.”

… transform true anomaly to mean anomaly to have a parameter that does vary linearly in time… By re-reading the lecture notes and with a little programming I got one example out:

Computed insolation per month / m2 for Trondheim, Norway, shifted one month (ad hoc, some delay is ok I guess) and then LS-fitted to observed temperature averages (Jones data). I did the same for circular orbit, and it seems that summer is too warm in that case:

The eccentricity effect is very weak, axial tilt of course dominates. And these are not additive but multiplicative factors, so it is not easy to extract the global eccentricity effect out.

updated this a bit in here, http://uc00.wordpress.com/2014/03/19/insolation-vs-temperature/ , no need for ad hoc delay in this model

(sorry for commenting old post, coding is slow process as I do this only on long-haul flights)

Steve, could you clarify how the anomalies are calculated here. Are you referring to deseasonalised data rather than what is usually called an anomaly, i.e. deviation from the mean over some arbitrary period?

Steve: I’ll post up code later today. In this case, they are deviations from the monthly mean over the period.

Physics and time lag:

Today physicists use hPa (and not mb), W (and not w), °C (and not deg C).

I can’t find the graph (I saw it 3 months ago), but plots by month of global radiation at different altitudes, surface to tropopause, show different patterns at low altitude than at high. The NH summer rise becomes less prominent at higher altitudes, so comparison of the 600 mb pressure level with the surface perhaps needs some more physical qualification before the math. At even higher altitudes, IIRC, outgoing flux by month is about horizontal. (Disregard if the graph used anomaly data.)

Your first graphic is very interesting but you are still seeing some regression attenuation. Though the improved correlation shows you have better S/N, this raises the question: what signal?

The trivial model for lambda has _random_ rad and non-rad terms. It does not have a cyclic term. (This is an oversimplification that I think it is essential to address before drawing any conclusions from it.)

It would be instructive as you say to add two cyclic terms to your regression model here that represent the NH and SH annual variation with amplitudes that would be determined by the regression.

My guess is that when you have removed the significant error-in-X that this contributes, your linear estimator will be a bit higher (let me guess it will be nearer to 9.2 ;) ).

This would then be a third close result by different methods arriving at very similar figures.

It seems that there is a strong component that can be modelled as linear. It now becomes necessary to ask what this linear relation represents in climate.

It seems that there is a very strong short term negative feedback. The idea of removing the seasonal component is an attempt to eliminate these significant cyclic terms in order to get closer to a situation where the trivial model can be applied to infer lambda. (It is, of course, perfectly right and proper to ask if this is being done correctly or distorting the result.)

The lower corr. of the deseasonalised data is not a surprising result and does not in itself suggest this process is bad. What it does underline, and what is STILL not being looked at, is the effect of regression attenuation due to noise in x. This is much more pronounced in such a case because of the reduced S/N.

This is the ONLY reason why you are getting a “slope” of 2.61.

May I take this opportunity to suggest referring to this number as the “regression estimator” or similar. A fuzzy mess like that does not have a “slope”; calling it that invites the instant, subconscious and false inference that this value represents the linear relationship.

BTW . I have not had time to decorticate LC11 but I suspect their careful selection of periods with good S/N is effectively doing something similar. I think this raises the same questions about what the resulting linear regression represents. I think their method has merit but am unsure about the physical interpretation.

This is starting to get somewhere. :)

As I’ve remarked elsewhere, looking at the residual trend produced by R’s stl() there are oscillations with something like 3 and 5 year periods. (Possibly artefacts, but I don’t think so.)

The larger part of these swings are a close match to ENSO variations. This requires a term reflecting this if the simple model is to be used to infer lambda for the climate system. ( This would also address one of the key issues raised by Dr. D. )

This is why I don’t give too much weight to my 9.2 from fitting the simple model to satellite data. I am probably having to exaggerate some parameters to make the random terms partially simulate the missing cyclic term.

Equally the 4.88 year time constant, that Bart and I both got by independent means, may be more to do with period of the cyclic forcing than exponential response of a model lacking this term.

This would be in agreement with the divergence of Bart’s Bode plots for real data and the model, which kicks in quite strongly around 0.2 per year (cf. 5 y).

I have also noted that Spencer’s lag regression plot of real data crosses two points at approx +/- 18 months that are not modelled by either the supercomputer models or the simple model. This shows they both fail to capture a significant feature of the data.

This would be an expected result of the three year oscillation shown in the trend.

Accounting for these two significant, non random forcings in the equation should lead to a situation where the estimated lambda gets closer to the physical meaning being sought.

It may well also remove a significant amount of the noise in x problem that is confounding the use of OLS regression.

P. Solar:

I run processes that cannot be simplified in the anomaly monthly-average manner. Using such averaged data sets to look at residence times or control parameters is a very crude approximation at best. Worse, it can lead to wrong beliefs about a system. I wonder if you can comment on the F(t)=(F1+F2+F3..)dT assumptions that to me contraindicate the use of anomalies and averages. I am used to systems that do have some stiff components, such that small errors can cause the estimated system response to go from 200s to 2000s. But it seems to me in general that approaches that do not take such into account have assumed they do not exist. I do not see that this has been shown, just that it has been stipulated.

John, I just sent a lengthy reply; maybe it’s held up in moderation, but it’s not showing.

In short I think it’s valid for linear processes.

John, I did comment on this to Steve on an earlier thread. I think this kind of split requires the quantities concerned to be linear.

Temp and heat content are linear quantities as is radiation flux. Radiation is integrated over time to give a temperature increase. A time lag is a linear translation.

Means, linear translations and integration are linear transformations. Eg. the mean of a sum is equal to the sum of the means. It seems that this is what is behind this kind of approach. It sounds like this is not applicable in your field.

I don’t recall this issue having been addressed in the litchurchur. As Steve points out this seems to be assumed to be valid rather than being a stated assumption with justifications.

Maybe this is obvious to those in the field and does not need to be restated in every paper.

Some other factors may be more complex but it would seem that this is at least a fair first order approximation.

Hmm, I’m not so sure linear translation is linear in this sense but that is not pertinent to the question of seasonal decomposition and the point John raised.

P Solar, the reason I asked is that I have a stiff situation with a simple forcing of a fuel, water, air system. Temperature, I do not think, can be related to the linear system of forcings as stated. This is because there is an assumption of the mean temperature and mean energy being linear. But the water, air, heat system is in phase space. In other words, the claim is made that the average state of the system IS found based on temperature, but state is defined by H, T, wv%, P, not averages of T and wv. I do not consider it a bad assumption at the earth boundary, though not strictly true due to evapotranspiration, nor a bad assumption at the TOA. But I find it a bit questionable to be looking, say, at a water feedback, and not express its state(s) as they should be stated, as a function of the phase space. I believe what they use is a pseudo-equilibrium assumption that is questionable in a control volume that has phase change from water vapor to water condensate, and does not include enthalpy. An assumption of a constant adiabatic response I believe is also one of the assumptions.

Since this is going to thermo and the host has asked not to let threads get sidetracked on this issue, perhaps we should discontinue. But I think those two assumptions above are part of the not-stated-in-every-paper background.

I think you are misunderstanding the use of linear here. I’m not saying total energy has a linear relation to temperature.

A physical quantity is said to be linear if its changes are additive. For example, supposing surface temp rises 0.5K due to SW and 0.1K due to LW, the two incremental changes can be added to find the change due to total irradiance. Unlike air drag, where you cannot add the drag at 20 mph to the drag at 10 mph to find the drag at 30 mph.

the F(t)=(F1+F2+F3..)dT you refer to is an assumption that the physical quantities are additive.

Yours is a very pertinent comment to the question of using deseasonalised data. I apologise that my reply was not clearer on this use of the term linear.

A better drag example would have been to say you cannot add the drag due to the forward motion of a vehicle to the drag caused by a head-on wind to find the total drag.

I agree about drag. But that is also true of the phase envelope of water in an air-water-watervapor system. In fact, what is worse is that defining the system as a heat engine at TOA with boundary conditions means that it is actually an entropy engine. This definitely means it is like your drag example. This system is defined wrt entropy, not temperature. Temperature at T^4 is the defining boundary condition, but as you say, I think it does highlight problems with “deseasonalized” data. To me a problem with deseasonalizing is that it is detrending. And several threads/papers have pointed out the problems with trending after detrending. If we were not trying to determine relationships except at TOA, I would agree it would not matter. However, that is not the case with feedback, which occurs throughout the control volume.

A minor point:

“Eg. the mean of a sum is equal to the sum of the means.”

P Solar, are you saying that ( 2+2+2 + 3+3+3+3 ) /7 is equal to { (2+2+2)/3 + (3+3+3+3)/4 } /2 ?

Eduardo

Eduardo Coasta

If the variable X is red2, blue2, green2 and the variable Y is red3, blue3, green3, yellow3

the variable X+Y is the 12 combinations of one of the 2s multiplied with one of the 3s

Thus E(X+Y)=E(X)+E(Y)=5

Sorry: Eduardo COSTA

and it should be “added”, not multiplied, which gives 12 sums of (2+3) each

No, sorry, it would have been better to write it mathematically:

E(x)+E(y)=E(x+y) ; where E() is the expectation value aka mean.

in your example 2+3=5

From Nick Stokes:

“There’s no point in being purist about the time component of averaging.”

What a profoundly weird comment. Is it a more correct or less correct method? Does it aid understanding or inhibit understanding? Does it lead to spurious results or robust results? Maybe these are the key “points”.

In regard to anomalies, I think the widespread use of these in climate science is a systemic problem. It is like plugging an anomaly into the ideal gas law and expecting useful information.

I found it extremely revealing but I guess I’ve been studying climate science too long to find it weird. Nick’s point was that other, prior averaging of temperature meant that it made no sense to be ‘purist’ about taking monthly anomalies. In for a penny of impurity, in for a pound of it – climate science in a nutshell. David Wojick made the wider point well I thought on Climate Etc. in February.

But I would agree. Nick is right that there’s no point in being “purist” … about anything.


But the pea in the thimble here is that Steve’s concern is purely *pragmatic*, not *purist*. What’s the effect of taking one piece of signal and folding it up into some other signal of interest, basically pretending it doesn’t exist? This is not purism, it’s pragmatism. This is where Nick mis-diagnoses Steve’s line of inquiry.


The fact is there are those that will argue that a model is inadequate just because it can be made more detailed. Steve’s inquiry is not of that nature. Purism is not the issue.

But Purism is an interesting word, both the linguistic variety, which most sociolinguists decry, and the offshoot of Cubism. Thanks to Steve, bender and Google for jogging my memory on that.

“What’s the effect of taking one piece of signal and folding it up into some other signal of interest”

If, for example, you want to unfold the annual variation of temperature and take it as representing the seasonal effect, you’ll be wrong. That has already been swamped in the spatial averaging, which adds together NH summer and SH winter. All that remains is the difference based largely on the NH having more land.

I say there’s no point in being purist because averaging means you are only going to get information that survives mixing in time and space. That reflects conserved quantities like heat. If you want to refine the scale to try to incorporate more physics, you have to look at it systematically. You can’t recover by tinkering with time averaging what you have lost in space averaging.

I think this is another of the sort of off-the-top-of-your-head remarks on your part that Steve complains about. How about trying to demonstrate that mathematically? I get the feeling that if you’re correct, the whole of the paleoclimate project would collapse.

My example was straightforward. Most of the seasonal information was lost in combining NH and SH, spatially. A shadow remains, because NH has more land. But if you really want to find an effect based on seasonal temperature variations, you have to recover the space resolution. Otherwise it’s lost, and can’t be recovered by modifying how you look at the space averages in time.

Rather than refuting it, here you are *making* my argument, but now in the spatial domain. I like this.


For the record, I did not advocate *any* approach, despite your invitations to do so. So I’m glad you prefaced your supposition of wrongness with “if”.

Given that long-wave radiation varies with the 4th power of temperature, one really needs to work with absolute temperature. Either taking a monthly mean or an anomaly as described is not valid in any sense because it violates radiative physics. IMHO.

T^4 can be approximated as kT for small deviations about a large value. It’s a bit like sin(x) ≈ x for small values around zero.

300K is large compared to the variations in question. I too thought this was bad until I actually tested the error as a percentage. Maybe you should have tried that too ;)

Since T^4 is about 10^10 and kT is about 10^(-21) with different units (if k is the Boltzmann constant (or even the Stefan-Boltzmann constant)), you lost me this time. But I would agree that T^4 is more or less a straight line around 300K for the anomalies used.

My k was an arbitrary const not the S-B const. So we are in agreement. I was simply stating the linear approximation.

Then you might want to add a second constant: k1*T+k2 (k2 negative) otherwise if T^4=kT (follows k=T^3) the derivatives would be 4*T^3 and just T^3

The reason why that works is that T1^4-T2^4 factors to (T1^2+T2^2)(T1^2-T2^2), which further factors to (T1^2+T2^2)(T1+T2)(T1-T2). When T1 and T2 are close, the (T1^2+T2^2)(T1+T2) part is essentially constant, and it behaves like (T1-T2). Remember, in radiative heat transfer, both temperatures matter.
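The size of the error in that linear approximation is easy to check numerically. A minimal sketch (300 K and a 1 K step are illustrative values, not from the thread):

```python
# Exact change in T^4 vs the linearization 4*T0^3*dT
T0 = 300.0   # reference temperature in K (illustrative)
dT = 1.0     # perturbation in K (illustrative)
exact = (T0 + dT) ** 4 - T0 ** 4
linear = 4.0 * T0 ** 3 * dT
rel_err = abs(exact - linear) / exact
print(rel_err)  # about 0.5% for a 1 K step at 300 K
```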

Isn’t the problem the fact that T itself is an average of temperatures that do have significant variation?

Where I live, a hot summer day can be 300 K, and a cold winter night 265

265^4 is only 61% of 300^4

I respectfully suggest that integration is required, to avoid losing information that may prove to be significant.
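The point about integration can be illustrated directly: applying T^4 to an averaged temperature is not the same as averaging T^4 pointwise. A minimal sketch with a made-up sinusoidal cycle spanning roughly 265 to 300 K:

```python
import numpy as np

# Temperature swinging between 265 K and 300 K (illustrative values)
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
T = 282.5 + 17.5 * np.sin(t)
flux_of_mean = T.mean() ** 4     # T^4 applied to the average temperature
mean_of_flux = (T ** 4).mean()   # T^4 averaged over the cycle
print(mean_of_flux / flux_of_mean)  # > 1: averaging T first understates the flux
```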

I don’t think I have posted here before because I don’t have the maths to properly understand much of the discussion, but this has now touched on something that has bothered me for a long time.

The way I understand it, the whole point of using an anomaly (regardless of what it is based on: daily, monthly, yearly) is to try and remove the effects of yearly changes due to orbital issues so that other effects can be observed. Do I have that right?

However, by removing the absolute values and the changes in those values we are losing sight of the enormous energy transfer during this yearly period. For example, as Steve has noted, the NH summer has a much higher atmospheric temp than the SH summer: OK, this is explained as the effect of NH land mass being bigger than SH, but considering the fact that the incoming energy is virtually constant,

[Steve: actually the annual variation is over 20 wm-2 due to orbital eccentricity and the greatest incoming flux is in the SH summer.]

this difference in atmospheric temperature represents a serious flow of energy – presumably into and out of the upper oceans. With such a large flow of energy in each direction, twice a year, the minor fluctuations in the “anomalies” are almost meaningless and – very probably – well below the sensitivities of our instruments. Hence the pathetic r^2 values when we try and plot these.

[Steve: I don’t think that it’s an instrumental problem. If there’s an issue, it’s a conceptual and methodological one.]

Maybe I am stating the obvious here and people already just “know” this stuff, but I really do think we are struggling to look past the log in our own eye to find the speck of dust in someone else’s.

An early (1978) paper by Ellis, Vonder Haar, Levitus and Oort comparing the annual cycles of net radiation flux and of ocean heat storage may be of interest: ‘The annual variation in the global heat balance of the earth’, JGR 83, pp. 1958-1962.

Interesting reference. The structure of the annual cycle described in it seems to stand up with the better recent data. It would be interesting to compare what early ideas of the parameters were to more recent ones.

I have not followed these discussions in detail and my stats are very weak. So, maybe these comments are way off base and topic. Denizens of CA are always on top of the situation and will let me know if that is the case.

The Earth’s systems have never been and will never be in radiative-energy transport equilibrium. The TOA radiative-energy imbalance will always wiggle and it’s not clear that it wiggles about some kind of roughly-constant average energy level. There are no driving potentials to obtain that kind of wiggling. What happens to the energy after it enters the systems is always changing and thus affecting the radiative-energy transport states within the systems and thus the emitted radiative energy. The fraction that makes it to the surface is always changing, too. The lack of equilibrium, and the consequent constantly-changing states within the system, contribute to both temporal and spatial heterogeneities.

The radiative-equilibrium concept, so far as I know, has never been quantified relative to the time scale over which the concept is assumed to be valid. The spatial averaging scale is very roughly taken to be some limited aspects of the contents of the entire Earth systems. The papers generally report yearly-average values of the temperature of the atmosphere near the surface (10 m, I think).

What I have not yet seen discussed are the effects of all the heterogeneities, both temporal and spatial, when the above hypotheses are introduced. Somehow, it seems that it is again assumed that these real-world effects cannot be sufficiently large to invalidate the averaging. There are, however, physical situations for which temporal and spatial heterogeneities sufficiently dominate that averaging which does not account for them can never produce estimates of the states of the system with sufficient fidelity to be called predictions.

I’ve often wondered if that is not the case for the Earth’s climate systems.

Application of concepts such as equilibrium sensitivity to the Earth’s systems might work out if the systems were approaching an actual equilibrium state after perturbations of, say, CO2 content in the atmosphere. If the systems were in fact returning to an equilibrium state, and the present state was way way out on the long tail of that approach, the concept might be valid. None of this obtains for the Earth’s systems. That’s not a realistic concept.

Neither is the concept of a transient sensitivity because of the constantly changing nature of all aspects of the systems of interest. There might be some time period over which the system responses are sufficiently more-or-less repetitive that would allow for a rough estimate. However, that approach does not address the effects of spatial heterogeneities. The far southern latitudes, the tropics, and the far northern latitudes are all different from each other.

If anyone can point to any IPCC-cited literature that squarely addresses Dan Hughes’ concern, it would be appreciated. Failing that, where would you expect this topic to be covered in the ARs?


Topic probably not appropriate for this thread. Still.

What happens if one substitutes weekly centering instead of monthly centering?

You’ll get interesting figures. This http://climateaudit.org/2011/09/13/some-simple-questions/#comment-303279 seems to hold all the time (I don’t really have a Dirichlet-style perfectly rigorous proof). 180 ‘month’ anomaly for UAH:

UC – can you explain these figures some more? I think I know what you mean, but your comments are pretty oracular so far.

Hah, I thought I was the only one who didn’t understand them.

==========

You could think of this presentation as somewhat more “oracular” than the UC notes:

http://www.newton.ac.uk/programmes/CLP/seminars/090614002.pdf

It is about “Climate Sensitivity” — really. See the work in the area of page 30.

Just don’t skip any slides on the way there. Then let me know if the work here is more useful, or less useful.

For any (n by m) matrix [tex]x_{ij}[/tex], where i is the month and j the year (number of points = n*m), such that the mean over each month is zero (i.e. [tex]\sum^m_{j=1} x_{ij} = 0[/tex] for all i), the sum of the annual OLS trends [tex]C=\sum^m_{j=1} a_j[/tex] is zero, where [tex]a_j = (n\sum^n_{i=1} i\,x_{ij} - \sum^n_{i=1} x_{ij} \sum^n_{i=1} i)/(n \sum^n_{i=1} i^2 - (\sum^n_{i=1} i)^2 )[/tex]. The denominator is the same for all j, so C depends only on the double sum over i and j of the numerator. The first term is zero (swapping the summation order) because the time series for each month sums to zero, and the second term is zero because the overall sum must also be zero. Thus C=0.

can you fix the latex? Obviously got the short code syntax wrong…
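The C=0 claim above can also be checked numerically. A minimal sketch with random data (no particular dataset implied):

```python
import numpy as np

# x[i, j]: month i (of 12) in year j; center each month across the years
rng = np.random.default_rng(0)
mons, yrs = 12, 30
raw = rng.normal(size=(mons, yrs))
anom = raw - raw.mean(axis=1, keepdims=True)   # monthly centering

# OLS trend within each year, then summed over the years
months = np.arange(1, mons + 1)
annual_trends = [np.polyfit(months, anom[:, j], 1)[0] for j in range(yrs)]
print(abs(sum(annual_trends)))  # zero to machine precision
```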

% 12-month anomaly:
yrs=30;
mons=12;
tot=yrs*mons;
Xa=eye(mons);
Xaa=repmat(Xa,yrs,1);
M=eye(tot)-Xaa*pinv(Xaa);
Anomaly=M*Raw;
% for 180-’month’ anomaly use:
% yrs=2;
% mons=180;
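For readers who don’t use Matlab, the same anomaly operator can be sketched in Python (the random `raw` series here is just a stand-in for `Raw`):

```python
import numpy as np

yrs, mons = 30, 12
tot = yrs * mons
# Stacked monthly dummy matrix, one identity block per year
Xaa = np.tile(np.eye(mons), (yrs, 1))
# M projects out the monthly means: Anomaly = M @ Raw
M = np.eye(tot) - Xaa @ np.linalg.pinv(Xaa)

rng = np.random.default_rng(1)
raw = rng.normal(size=tot)
anom = M @ raw
# Same as subtracting each calendar month's mean directly
direct = raw - np.tile(raw.reshape(yrs, mons).mean(axis=0), yrs)
print(np.allclose(anom, direct))  # True
```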

Now you’re speaking my language.

% Sum of annual trends:
x=[ones(mons,1) (1:mons)'];
% blkdiag(x,x,x, …):
xo=zeros(mons,2);
X=[x repmat(xo,1,(yrs-1))];
X=[X; xo x repmat(xo,1,(yrs-2))];
for i=2:(yrs-1)
X=[X;repmat(xo,1,i) x repmat(xo,1,(yrs-1)-i)];
end
pX=pinv(X);
ave=zeros(1,((yrs)*2));
ave(2:2:end)=1;
max(abs(ave*pX*M))
% ans = 8.6086e-017
% ave*pX*M seems to be a vector of zeros. Sum of annual trends ave*pX*M*Raw is then zero,
% whatever the input series Raw.

Note that Dessler adds a trend to the data:

so my trend-analysis is not completely OT. One could ask whether he inserted a trend or a staircase function (the result of adding the trend before the anomaly operation). Interestingly, it doesn’t matter for his result (±0.18 W/m2/K). M is symmetric and idempotent, so one gets the same slope in both cases. Furthermore, there is no need to deseasonalize DRcloud at all to get the result 0.54 for the cloud feedback.
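The symmetry/idempotency claim and its consequence for the slope can be checked with a short sketch (random series, not Dessler's actual data; the through-origin slope stands in for the regression):

```python
import numpy as np

yrs, mons = 10, 12
tot = yrs * mons
Xaa = np.tile(np.eye(mons), (yrs, 1))
M = np.eye(tot) - Xaa @ np.linalg.pinv(Xaa)  # monthly-anomaly operator
print(np.allclose(M, M.T), np.allclose(M @ M, M))  # symmetric and idempotent

rng = np.random.default_rng(2)
x, y = rng.normal(size=tot), rng.normal(size=tot)
trend = 0.01 * np.arange(tot)

def slope(a, b):
    # OLS slope of b on a through the origin (a is already centered by M)
    return (a @ b) / (a @ a)

# Trend inserted before the anomaly operation (giving a staircase) vs after
before = slope(M @ x, M @ (y + trend))
after = slope(M @ x, (M @ y) + trend)
print(np.isclose(before, after))  # the slope is the same either way
```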

Steve

I would be very interested to see what Ross thinks about these regressions. It looks like an identification problem, where a one-variable model is inadequate.

This is simply a first cut at the data to examine the issue of monthly centering. It looks like the figure-eight could be modeled relatively easily.

Yes, it looks like a Lissajous figure. Try

t<-0:12*(pi/6)
y<- 8*sin(t) + 2.7*sin(2*t)
x<-253+sin(t-0.22)
plot(x,y,type="l")

Ah, at last someone who knows what a phase plot represents. What is the “slope”? ;) LOL

Seriously, that looks very close. Can you express that as an R formula? That is one area of R I’m having trouble mastering.

This is quite a good illustration of the regression attenuation problem. If we set the phase lag of 0.22 to zero, lm(y ~ x) returns a slope of 8.0 despite the presence of the second sine.

This is because there is zero correlation between sin(t) and sin(2t) [over an integer number of cycles]

With the phase shift, lm gives a regression estimator of 7.75 (cf. Steve’s 7.7 on the real data).

The data is starting to decorrelate due to the lag, and a simple regression starts to deviate; this increases with the lag. Note that the figure does not “tip”, it gets “fatter”. The “slope” of the figure remains the same but the regression estimator gets lower and lower.
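The attenuation-with-lag effect can be reproduced explicitly from the two-harmonic example above. A minimal sketch on a fine grid (the numbers differ slightly from the 13-point version):

```python
import numpy as np

def ols_slope(lag):
    # Two-harmonic curve from the earlier comment, with x lagged by `lag`
    t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    y = 8.0 * np.sin(t) + 2.7 * np.sin(2.0 * t)
    x = 253.0 + np.sin(t - lag)
    xc = x - x.mean()
    return (xc @ y) / (xc @ xc)  # OLS slope of y on x

print(ols_slope(0.0))   # 8.0: the second harmonic is orthogonal to x
print(ols_slope(0.22))  # about 7.8 = 8*cos(0.22): attenuation from the lag alone
```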

That is a pretty clear demonstration of why Andy Dessler is fooling himself (and anyone else who gives any credence to his results) about the value of climate feedback.

This, gentlemen, is the sorrowful bottom line of where the “science” of positive climate feedback comes from.

“This, gentlemen, is the sorrowful bottom line of where the “science” of positive climate feedback comes from.”

No, this is Steve’s graph. Climate scientists would use monthly anomalies to avoid this situation where orbital variations are regressed against seasonal NH/SH temp differences.

The phase plot earlier was in R code. It should run as pasted.

Nick, I realise this is usually done with deseasonalised data and have said in several places why I think that may be more useful in searching for lambda.

Here, I am using the formula you derived as an abstract demonstration of the problem of doing naive OLS regression on data where it is not appropriate. The real data used to estimate feedback has a whole world of other uncorrelated junk and different lags. The phase plot is not a pretty Lissajous figure, it’s a regurgitated hairball full of puke. ;)

That is why Dr D. is seeing such awful R2 values.

Even with this nice clean example we see the effect of regression attenuation. It’s a nice clear demonstration because we can play with the lag and see correlation drop but the “slope” of the figure stays the same.

I do see this issue being dealt with or even acknowledged ANYWHERE in the litchuchur.

Are you able to see how this may be a problem?

“The not I take it is implied?”

I’m glad to see someone is paying attention ;)

I think the reason D sees low r2 values is just that there isn’t a strong relationship. That’s what he says, anyway. He’s arguing against LC and SBCM, who say that there is.

As to the graph in the post, if you do a regression you actually get a slope which is not meaningless, but doesn’t capture the oscillatory structure. That should show up as non-random residuals. But there’s no indication that such structures are a problem in the deseasonalized analysis.

I did not say the regression result was “meaningless”. I said it was suffering from regression attenuation caused by the lag that produces an artificially low result.

Saying “I think…” in the face of such a clear demonstration is rather weak.

Are you seriously suggesting that the data D. is working with does not have any oscillations, lags, correlated or uncorrelated noise or errors-in-x-variable that could cause a similar artificial reduction in the regression estimator?

Please try to answer yes or no rather than diverging elsewhere.

No, D’s data is similar to the kinds of data on which millions of regressions are performed, in many fields (eg econ). An error range is quoted which is meant to embrace these issues.

But D’s claim has been repeatedly misrepresented here. He isn’t claiming to have established any particular trend. He isn’t even claiming to have shown that it is positive. He is just showing that it is unlikely that there is a large negative trend.

He claims that in the media and Trenberth claims it too.

When you were making similar claims on an earlier thread that D was not “claiming to have established any particular trend. He isn’t even claiming to have shown that it is positive.” I pointed out D10 concludes: “My analysis suggests that the short-term cloud feedback is likely positive …. “.

So what you should have said is: “Unfortunately D has claimed more than he showed saying ‘that the short-term cloud feedback is likely positive’, he simply showed that his analysis was unable to detect any significant feedback”.

Well, he says “My analysis suggests that the short-term cloud feedback is likely positive” and goes on to say:

“However, owing to the apparent time-scale dependence of the cloud feedback and the uncertainty in the observed short-term cloud feedback, we cannot use this analysis to reduce the present range of equilibrium climate sensitivity of 2.0 to 4.5 K”

Probably time to give this a rest, but good to see you no longer feel it is a misrepresentation of D to suggest he claimed a (likely) positive trend. (My understanding is that in IPCC speak this amounts to > 66% probability.)

Getting an acknowledgement from you that he was over egging the pudding in this regard is probably a bridge too far.

A graph of the Lissajous phase plot is here.

My question was how you got the coeffs: hand-rolled or regression? If you have a regression method, would you like to share it?

Hand-rolled. It’s a sequential process. The shape of the curve says you’re looking to plot the first two harmonics. Start with the first harmonic without phase lag. That’s a line segment, and regression could be used, though I did it by eye. Then tweak the phase shift to get an ellipse that looks about wide enough. Then introduce the second harmonic and tweak the coefficient until the crossover is about right.

I’m sure it could be done much more scientifically.

No opinion, I haven’t looked at any of the papers or data or debates. Offhand I would think a VAR model is the way to deal with bi-directional causality, but I don’t have any time myself to try it out.

“I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature.”

I think that this is a more profound statement than you realize. When I first started reading blogs about climate science I was struck by this and couldn’t understand why all the detrending and anomalies rather than using the physical quantities. After all, a physical theory should predict physical quantities.

My suspicion is that it was originally done to hide bad behavior by the models and then became common practice. If you look at the temperature maps produced by different models, they can differ among themselves by 5C or more. An anomaly makes this problem go away. The same is true for such nonsense as the global average temperature, which has no physical meaning, and its anomaly. You can hide a lot of dirt under that rug. After a while this propagated to all kinds of analysis.

My conspiracy theory of the day.

A slightly different type of Monthly Centering:

http://www.visitationnorth.org/index.php?option=com_content&view=article&id=380:monthly-centering-prayer&catid=43:fall-2011-offerings&Itemid=34

Bidirectional causality at its best.

I find these results absolutely fascinating. Here we have evidence that feedbacks seem to exist on a very strong negative scale. Since UC’s comment, I’ve wondered what would result from this sort of plot and now I wonder just why such strong evidence of a negative feedback isn’t important. The anomaly approach made no sense once I thought about his comment. We have gigantic annual forcings and the response by outward radiation exceeds anything climate science would expect according to atmospheric theory.

I wonder where the experts are in the comment thread. Certainly, Gavin could shoot this down in a second. Why isn’t it an important result? Certainly, cloud feedback should occur on a 3-plus-month scale and were it positive, Steve should have a different result. Since the slope is so much more positive than 3.3, the feedback has certainly occurred and it is negative.

I have to be missing something; please tell me.

See my two main comments early in this thread. This is the response to a strong cyclic driving force. I don’t think it can be attributed to what is being called lambda in a model without any cyclic term.

I don’t think one feedback factor is any more realistic than a one slab ocean model. Shallow waters could react much faster and provide a stronger but less sustained feedback.

If we are seeking the long term feedback response I’m not sure month to month variations are where we need to look.

PS, I’m not saying one slab of mixed layer is necessarily bad for decadal time scale.

The thing I would warn about is that feedback must be determined by the regression of (N – TOA Flux) against T, where N is the forcing, not simply TOA flux against T. This is because the TOA flux observed is a combination of the forcing + feedback (which goes to Spencer’s point about unknown N corrupting our feedback estimates), so we must remove the forcing to isolate the feedback parameter (lambda). If we are using the absolute flux measurements, not monthly anomalies, we have a significant solar forcing (as you say, “gigantic annual forcings”) to take into account before regressing the TOA flux against T. This is why I don’t think that the 7.7 W/m^2/K slope necessarily reflects the climate feedback parameter, unless the Y axis is actually removing the different solar forcing associated with each month.

Here is a plot of CERES.net vs hadCRUT global showing just the long-term trend extracted.

http://tinypic.com/view.php?pic=2mrblvo&s=7

This confirms your point in a way, but on the other end: even when the monthly variations are removed, there is a very strong long-term component (circa 3 yr) that is part of “N”.

As the caption indicates, that last plot was UAH; here’s hadCRUT:

http://tinypic.com/view.php?pic=2u44s5l&s=7

Very similar but a clearer oscillation. (Recall this is a lag plot but the lag response of a sine is also a sine)

P. Solar, I’m having a hard time understanding what those charts are showing. Merely regressing the flux against temperature won’t tell you much about feedback unless the forcing (N) is small relative to the feedback*T term, correct?

Troy: Your comment appears to be correct, but regressing against N – TOA flux assumes that there is no lag between the forcing N and the resulting change in TOA flux. I suspect the forcing must be integrated over time before it becomes a temperature anomaly (not necessarily a surface temperature anomaly). Further time may pass before the temperature anomaly dissipates into space as a TOA flux anomaly. There is an annual forcing (forcing anomaly?) associated with the eccentricity of the earth’s orbit. That appears to produce immediate (maximal in January) large temperature anomalies in the stratosphere (low heat capacity?) and delayed (maximal in April) smaller anomalies at the surface. The troposphere, which doesn’t absorb nearly as much solar radiation as the surface or the stratosphere, lags behind the surface (maximal in July, when the earth is furthest from the sun).

Superimposed on this annual cycle due to eccentricity may be smaller effects due to seasonality. In temperate zones, the warmest temperatures over the land occur about 1 month after the longest day, while SSTs (and some coastal temperatures) lag about three months. The Northern Hemisphere has much more land area than the Southern, producing asymmetry.

Frank,

While I agree that it takes time after the forcing for the temperature to increase and yield a feedback, what I’m saying is that the forcing itself is contained within the TOA flux anomaly: that is, TOA flux = forcing – feedback * T. This is what I mean by needing to remove the forcing from the TOA flux anomaly before regressing to get the feedback term. Merely regressing TOA flux against T leaves the forcing term in there (and your resulting estimate won’t be lambda), and that forcing term has a large seasonal component.
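That algebra can be checked with a small synthetic example (my sketch, with made-up numbers chosen only for illustration; Python rather than the thread’s R): if the observed flux is F = N – lambda*T, regressing F on T mixes the seasonal forcing into the slope, while regressing (N – F) on T recovers lambda exactly.

```python
# Synthetic check (illustrative numbers only): with F = N - lam*T, the
# slope of F on T is contaminated by the forcing N; the slope of
# (N - F) on T returns lam exactly.
import math, random

random.seed(0)
lam = 3.3                                      # assumed feedback, W/m2/K
months = range(240)                            # 20 "years" of monthly data
N = [8.0 * math.cos(2 * math.pi * m / 12) for m in months]   # seasonal forcing
T = [0.8 * math.cos(2 * math.pi * m / 12 - 0.5)              # lagged response
     + 0.05 * random.gauss(0, 1) for m in months]
F = [n - lam * t for n, t in zip(N, T)]                      # "observed" flux

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

naive = slope(T, F)                            # forcing leaks into this
corrected = slope(T, [n - f for n, f in zip(N, F)])
print(naive, corrected)                        # corrected == 3.3
```

The naive slope lands far from 3.3 because cov(T, N)/var(T) is folded in; removing N first isolates the feedback term.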

Steve says: “This residual has 4 zeros during the year – which suggests to me that it is related to the tropics (where incoming solar radiation has a 6-month cycle maxing at the equinoxes, with the spring equinox stronger than the fall equinox.) ”

Canadian Spring equinox being late SH summer, benefiting from the orbital eccentricity.

This cycle presumably looks like a half-wave rectified version of the cycle seen from higher latitudes that we are more familiar with. One side will have a somewhat larger magnitude due to the hemisphere differences already noted, and as witnessed in your plot.

Does it bear any resemblance to the seasonal component extracted from hadSST by R.stl ?

http://oi55.tinypic.com/2hd1u6f.jpg

Estimating the cyclic contribution in the X data to be about half the linear component, and 1/5 in Y, would suggest a rough-and-ready correction for regression attenuation of 7.7*sqrt((1+1/2)*(1+1/5)) = 10.33.

I’ll make a less clunky estimate once I code it up.

I’d be very interested to see the slope estimate from a regression model including a cyclic term. This is approaching LC11 numbers, which is perhaps not entirely surprising. I think it is measuring a similar situation.

Could you throw a bone occasionally to those of us who don’t eat and drink climate acronyms and data sets? Would it kill you to give the URLs and column numbers of the data you’re using once in a while?

I’m sure the cognoscenti here all know the URL and column number for “AMSU 600 mb deg c” by heart, but after more than an hour the best I could come up with was http://vortex.nsstc.uah.edu/data/msu/t2/tmtday_5.4, which doesn’t appear to give absolute temperature, and, in any event, the related readme is opaque as to what’s in the various data columns. Why present your readers with such barriers?

I’m positive there are a lot of folks out there who could contribute mightily to analyzing these issues, but they don’t, because they find the effort of obtaining the data and decoding the jargon just too frustrating. My experience, after dealing with a wide range of technologies over several decades, is that the biggest impediments to understanding usually are not the technical concepts themselves but rather the jargon and poor exposition in which they’re cloaked.

And, this site, I’m sorry to say, is consistent with that experience at least as far as the jargon and exposition go.

Steve: In this case, I didn’t post up a turnkey script. However, I’ve provided dozens if not hundreds of turnkey scripts and have provided many materials for interested parties to examine, including a few posts ago on Dessler v Spencer. Yes, you’re entitled to ask, but I think that whining is unwarranted.

I agree that clear description of source data sets is important and in my scripts I try to carefully document exact provenance – something that is seldom done in the peer reviewed litchurchur.

Joe,

I think the data referenced is at the UAH Discover website:

http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps

Choose channel 5, which refers to the 600 mb pressure layer, and then you can go to Show Data As Text to get the actual daily temperature values, separated into columns by year.

Oops, chop off that end part:

http://discover.itsc.uah.edu/amsutemps/

troyca,

Bless you.

Steve,

Yes, in a moment of weakness I was churlish, and I apologize for the tone.

But I believe–no, I know–that your work’s influence, great as it is, would be many times greater if you would not write as if you expected everyone to have read and internalized all your previous posts for the past five years. Sure, a neophyte can’t expect to understand everything instantly without effort. But an occasional review of the bidding, e.g., reviewing exactly what a “chronology” is, or including a link to an explanation, would draw in many readers who, rightly or wrongly, are not otherwise going to do the research.

And, speaking (as I believe you did) of scripts, your commendable practice of providing so many is largely compromised by your failing to repeat often enough where to find them. It was a long time before I was aware of http://www.climateaudit.info/scripts/. Clicking on the “Steve’s Public Data Archive” link was no help in finding it. Maybe a “Scripts” hyperlink on your page would be of value?

People may find what you write on your blog plausible–I certainly do–but what they’re really persuaded by is what they can work out for themselves. And the number of people who will indeed work it out for themselves decreases exponentially with the number of hoops they must jump through before they can start their analyses.

Many of us who are less adept at the mathematics sympathize with your point, Joe. However Steve is best at exploring the details. It takes special skills and adequate time to synthesize, condense, and translate to a different audience. The job is waiting for the right candidate to apply.

It’s a tough standard to live up to 100% of the time.

For acronyms, please see

http://climateaudit101.wikispot.org/Glossary_of_Acronyms

(on the sidebar)

– and please add new ones you find!

Thanks, Pete Tillman

These 2 figures illustrate what I have mentioned on the other thread.

Most people think that using monthly anomalies removes the yearly periodicity (e.g. the signals with a period of 1 year due to the Earth’s orbit).

It does much more.

It removes all signals with periods of 12, 6, 4, 3, 2.4 and 2 months.

As signals with periods of 3 and 6 months (seasonal effects) are important for the system, a big part of the real correlation is due to signals having these periods.

Once one has removed all signals with the above-mentioned periods (which is what happens in the right panel of Figure 1), the correlations are destroyed and the correlation coefficient dramatically decreases.

This could be easily seen if a power spectrum is done.

Then one would see that the power in the 12, 6, 4, 3, 2.4 and 2 month periods present in the left panel of Figure 1 is missing in the right panel.
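The claim is easy to verify numerically; a quick sketch (mine, with made-up amplitudes, Python rather than the thread’s R): subtracting each month’s climatology annihilates any signal with a 12-month period, harmonics included, while a 40-month “ENSO-like” cycle passes through essentially untouched.

```python
# Numerical check (made-up amplitudes): monthly anomalies remove the
# annual cycle and ALL of its harmonics, but leave a 40-month cycle intact.
import math

years = 30
months = range(12 * years)
annual = [math.cos(2 * math.pi * t / 12)            # 12-month period
          + 0.5 * math.cos(2 * math.pi * t / 4)     # 4-month harmonic
          for t in months]
slow = [0.3 * math.cos(2 * math.pi * t / 40) for t in months]  # 40-month cycle
y = [a + s for a, s in zip(annual, slow)]

# per-month climatology over all years, then subtract it from each month
clim = [sum(y[t] for t in months if t % 12 == mo) / years for mo in range(12)]
anom = [y[t] - clim[t % 12] for t in months]

resid = max(abs(a - s) for a, s in zip(anom, slow))
print(resid)   # ~0: only the 40-month signal survives in the anomalies
```

A power spectrum of `anom` would accordingly show the 12, 6, 4, 3, 2.4 and 2 month lines gone and the 40-month line retained.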

From the physical point of view it is obvious that the “system” depicted in the right part of Figure 1 has been stripped of its most significant (less than 1 year) periodic signals and one can only guess what significance was left in the leftovers.

It is an extreme stretch, with no actual justification, to postulate that the 6 removed periods are irrelevant.

Besides, it is useless too. The real system being the one shown in the left panel, it is this one which should be analysed, and ONLY after the end of this analysis should attributions be attempted. When one removes wholesale 6 important periods and just handwaves them away as being not interesting for some particular question, then the “analysis” of monthly anomalies keeps the same handwaving character.

Unless, of course, one rigorously proves that all signals with 12, 6, 4, 3, 2.4 and 2 month periods are external to the phenomenon under study and independent of it. This is clearly not done in the case analysed here.

What you say is rigorous and reasonable. Whether it is reasonable to be that rigorous with a grossly non-rigorous analysis, like simplifying the whole climate system to lambda*T, does not necessarily follow. It remains a good point to consider exactly what is being done.

It may be that the deseasonal approach, in subtracting something rather than averaging it out, is throwing the baby out with the bathwater.

I’m not sure that is the case for lambda but I share your instinctive distrust of this kind of data mangling.

Nick Stokes, is this a case of “purism”?

Well, it’s wrong. Most people think that using monthly anomalies removes annual periodicity.

Yes, that means the base annual sinusoid and all its harmonics. That’s obvious. They all have annual periodicity. And you lose seasonal effects. That’s why it’s often called de-seasonalizing.

AMSU daily information is at http://discover.itsc.uah.edu/amsutemps/. CERES data (EBAF version) was downloaded from http://ceres-tool.larc.nasa.gov/ord-tool/jsp/EBAFSelection.jsp

See post for details.

??

Many thanks, scripts save a lot of time wasted digging.

Steve: that’s why I try to place them online and, if I forget to do so, try to be quick responding. It seems to me that academic articles skimp on providing tools to get and retrieve the data as used because that makes it harder for people to examine the statistics carried out in the litchurchur, which, as we see, are often surprisingly banal.

Yes, this is a tactic akin to the church using Latin for centuries to ensure it kept control of knowledge and the layman had to depend on them. They have the knowledge; we should believe.

When we try to access that knowledge and question the tenets of faith we are given, we are denounced as heretics and pilloried.

The parallels are amazing. I digress.

The point of my post, which was a bit mangled by WP, was that I got a 404 on amsu.txt; was it meant to be amsu-retrieve.txt?

The non-availability of data is only a problem when academic science is being directly used to propose and produce public policy. To my knowledge, this is unique to climate science. This novel process of ‘from academic science directly to public policy’ by-passes all the engineering studies and evaluations that are (supposed to be) done by, e.g., the USDA, the FDA, the EPA, and all the other regulatory bodies that evaluate the science and set up the field tests, e.g., clinical trials for the FDA, that evaluate the academic reports, and that challenge and validate the claimed outcomes. The climate science process is a short-circuit and therefore entirely inappropriate.

So, in virtually all cases except climate science, academic science stays in the academy. Data and methods are shared among academics typically on request, and usually there is no urgency because there is no public impact. I have to say, too, that in my experience the materials and methods sections of published papers are typically enough to reproduce results. In academic Chemistry I’ve never seen the methodological obscurantism that seems so systemic in AGW climate science.

In any case, climate scientists pushing for direct policy outcomes have over-stepped their proper bounds (and violated their tenure responsibilities if they are employed at a public university). They have completely subverted the in-place systems required for translating physical results into public policy. In that, they’ve been abetted by politicians who have abandoned deliberative process, by the EPA regulators whose first commitment should be to their own methodological integrity (rather than to political dictates), and by science reporters who have committed to the policy while ignoring the complete circumvention of the test and evaluation process.

Pat, I think an issue with “climate science” is that it is an observational science rather than an experimental science. A chemist can write down the procedures and technique used in the lab to measure the spectral lines of a new compound. You don’t even need to publish the raw data because it’s verifiable: anyone with a lab can go and repeat the same measurement. But in climate science you cannot repeat the last ice age. You’re left reading the tea leaves. People go over and over re-analysing the same data set when in an experimental science they would be “repeating the measurement”. None of this re-analysis gets rid of systematic errors. You’re stuck trying to guess how some old thermometer worked, or how old tree rings grew. You cannot measure everything you’d like to. It’s hardly a controlled experiment. And then they make public policy on it!

You’re right, Rob. But observational or not, the unique issue in climate science of deliberate obscurantism and willful subversion of science and process remains.

But further, compare with the situation with another observational science, Astronomy. I’d suggest a serious ethical divergence between astronomers and AGW climate scientists. Especially when considering the real global threat represented by a potentially incoming large bolide. That reality is far more physically credible and potentially far more destructive than CO2-induced global climate disruption.

Nevertheless, we don’t see astronomers jiggering data, subverting peer review, suppressing uncertainties, spreading alarmist propaganda, trying to impose policy, indulging in character assassination, and liberating billions per year for large telescopes and defensive satellite arrays.

Astronomers have retained their integrity and remained ethical and modest. AGW climate scientists have recruited and built a Lysenkoistic cabal. The difference could not be more stark.

Wish I could agree there, Pat, but stories from certain key quarters suggest that they too have their own issues that parallel those of Climate Science in many ways.

It’s just that astronomers aren’t trying to take us back to the dark ages like some of the climate scientists. There is a known, huge price to pay for mitigation and an uncertain price for doing nothing.

I take your point Pat. Normally, nature itself keeps a scientist honest. An astronomer crying wolf and reporting an incoming asteroid will be kept honest by the telescopes of thousands of other professional and amateur astronomers. Nature will eventually keep the climate scientist honest, but we might be dead before then! In the meantime, considering the policy implications, these guys have to be kept honest by much greater public scrutiny.

Steve: Dutch Uncle time

Best not to go into assigning motives, I think: mind-reading, which you usually (and commendably) avoid. Though I don’t doubt that’s what happens sometimes….

RE: LITURCHUR

This was amusing the first few times, but is growing tiresome (imo), and may make you look a bit silly to new visitors. We do get your point.

Keep up the good work!

Cheers — Pete Tillman

Since the threading is scrambling replies again:

This is a cmt on SMc’s inline reply to PSolar, Sep 30, 2011 at 7:14 AM

Can anyone express the formula Nick suggested as an R “formula” that can be used with lm()?

Thanks.

Yes, here is a lm() routine to fit the two-harmonic phase plot:

y=cbind(A$amsu,A$ceres)
n=nrow(y)

t=1:n*(pi/6)

x=cbind(cos(t),cos(2*t),sin(t),sin(2*t)) # two harmonics

a=lm(y~x)$fitted; ###Regression fitted phase plot

#Graphics

png("lissalm.png",h=750)

plot(A$amsu,A$ceres,type="l",asp=0.3,ylim=c(-10,10),xlim=c(251.5,255))

lines(a,col="#1166ff",lwd=3)

for(i in 1:11) arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[i+1],y1=nx$ceres[i+1], lwd=2,length=.1,col=2) # nx: the 12 monthly-mean points, defined elsewhere

i=12; arrows( x0=nx$amsu[i],y0=nx$ceres[i],x1=nx$amsu[1],y1=nx$ceres[1], lwd=1,length=.1,col=2)

dev.off()

Here is the picture – colors as in Fig 1, but phase plot added in blue

The regression should really be normalised by standard deviation, or some other way of matching dimensions. Here is the code that does that:

y=cbind(A$amsu,A$ceres)
n=nrow(y)

s=diag(apply(y,2,sd,na.rm=TRUE)) # column sds; sd() no longer accepts a matrix in current R

ys=y%*%solve(s)

t=1:n*(pi/6);

x=cbind(cos(t),cos(2*t),sin(t),sin(2*t)) # two harmonics

a=lm(ys~x)$coef%*%s;

af=cbind(1,x)%*%a #Regression fitted phase plot

The fitted expression is:

T = 252.98 + 0.85*cos(u+2.66) + 0.22*cos(2u-0.42)
Flux = -0.54 + 7.55*cos(u+2.59) + 2.64*cos(2u-2.12)

u=2πt, t in years.

Thanks, that’s interesting.

7.55/.85 = 8.882353

2.64/.22= 12

The first gives an idea of the attenuation in Steve’s original regression fit at the top of the post, where he was getting 7.7. A good example of how doing a regression with a linear model gives an artificially lower value, even on fairly clean data, when there is a lag that is not accounted for.

The second figure is very much the kind of value LC11 came up with. This is the tropical 6-month seasonal cycle. Their study did centre on the tropics, and in focussing on the periods of maximum change it may be the magnitude of this response that they were revealing. I still have had time to properly analyse their method.
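As a sanity check (my calculation, again in Python, plugging in the fitted two-harmonic coefficients quoted above), regressing the reconstructed Flux curve on the reconstructed T curve gives an OLS slope of about 8.2: already below the 7.55/0.85 = 8.88 first-harmonic gain, with the remaining noise in the real data plausibly pulling the regression value down further toward the 7.7 at the top of the post.

```python
# Check (mine): rebuild T and Flux from the fitted two-harmonic
# expressions quoted above and compute the OLS slope of Flux on T over
# one sampled year. The second harmonic's phase mismatch drags the slope
# below the 7.55/0.85 = 8.88 first-harmonic gain.
import math

u = [2 * math.pi * k / 12 for k in range(12)]   # monthly sampling of one year
T = [252.98 + 0.85 * math.cos(x + 2.66)
     + 0.22 * math.cos(2 * x - 0.42) for x in u]
F = [-0.54 + 7.55 * math.cos(x + 2.59)
     + 2.64 * math.cos(2 * x - 2.12) for x in u]

mT, mF = sum(T) / 12, sum(F) / 12
ols = (sum((t - mT) * (f - mF) for t, f in zip(T, F))
       / sum((t - mT) ** 2 for t in T))
print(round(ols, 2))   # ~8.2
```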

I assume you left out another “not”.

Btw an amusing test would be to make the analysis based on weekly anomalies (52 of them) instead of monthly.

Then it would throw out 26 periods.

No idea about the result but somehow I expect that it would be again something different.

Yet taking an arbitrary averaging period should not change the results, should it?

If all the data points were averaged together, there would be only one number for each series and the trend would be undefined. I’m guessing there is less information, if that is the correct term, as data points in fewer but longer intervals are averaged.

Yes, but only so long as the measured system response is a result of the same physical phenomena and processes. Causality is the key. If causality is not considered, connections with reality are easily broken. If the same value of the system response is measured but the system arrived at that value due to different physical phenomena and processes, the values should not be averaged. And of course when the situation in the real world is that there are a multitude of phenomena and processes occurring ( the usual case ), the measured system response must be clearly dominated by the same phenomena and processes for each observation.

Consider that convective heat transfer and fluid friction empirical data are characterized to be associated with laminar, transitional, and turbulent fluid-flow states under natural, mixed, and forced convection, plus the relative orientation of the fluid motion, a surface of interest, and gravity. All of these for simple, steady-state conditions and a homogeneous fluid. Other real-world situations introduce additional considerations. No one would consider averaging a measured system response from arbitrary combinations of these considerations.

This issue is related also to the fact that partial derivatives, the usual ‘everything else being constant’ arm waving, in general, can not be actually measured in real world systems. Everything always varies. In computer model world, for example, changing a parameter and re-evaluating the model to observe its effects on the system response of interest, does not lead to evaluation of the effects of that parameter alone in the partial-derivative sense.

Reduction of observations to solely a time series is just about the ultimate in suppressing considerations of causality. And maybe averaging observations over time periods is the ultimate suppression. I think the problem is introduced at the same time that ODEs are considered to be useful, and then equilibrium states are invoked so leading to algebraic equations.

Justifications for suppression of the wiggles when data are considered as a time series should be presented on the basis of causality at the time the suppression is applied.

Corrections of incorrectos will be appreciated.

“Reduction of observations to solely a time series is just about the ultimate in suppressing considerations of causality.”

Oh, I can do better than that!

How about when you dump the time-series dependency of both variables and plot them in a scatter plot?

If I understand this right, the yearly global temperature cycle causes an outgoing radiation cycle with strong negative feedback at high confidence levels.

Nick Stokes says this negative feedback does not necessarily apply for other types of forcing, because it is “too confounded with other annual effects (like TSI) for attribution”.

On the other side, I think this yearly experiment produces feedbacks all the time which would occur under any forcing scenario, such as increasing/shrinking sea ice, increasing/shrinking snow cover, increasing/shrinking cloud cover etc., and still there is this strongly negative result, so it may not be just an outlier.

“with strong negative feedback at high confidence levels”

If you’re referring to Fig 1, that’s not a correct inference at all. What is plotted is net upward flux, and it is dominated by TSI variation from orbital eccentricity. That happens to vary negatively with temperature (high influx, low temp). The reasons may be interesting, but in no way can this be interpreted as temperature modifying the Earth’s orbit.

This is a forced oscillation. The overall magnitude of the response (7.7 or whatever) is the primary system response, in no way is this the “feedback”.

However, the fact that there is a strong signal in clearly identified cycles is interesting and may tell us something useful about the system.

Since the cause of the two main cycles is geometric (orbital eccentricity and earth tilt producing seasonal variations dominated by the tropical 6-month cycle), we can be pretty sure they are purely sinusoidal. Thus there may be an indication of a feedback in the residuals. Clearly, subtracting out the monthly trends will remove, forever, both the forcing AND any feedback that is present. I guess this is the point Steve and UC are making.

Much of this discussion seems to have been based on the misinterpretation that this “slope”, which is not a slope, represents the climate feedback. It does not.

It is possible that the regression estimator of the residual plot at the top of the post may include an indication of a linear, in-phase feedback term, if only the regression were done correctly (NOT a la Dessler).

The “slope” of 2.61 does NOT give a value of climate feedback. It is a value that is reduced by regression attenuation. Correcting that requires a study and knowledge of at least the magnitude of the noise and other grek in there that is causing the attenuation.

There is no magic correct answer. However, the incorrectness of taking that deformed result to be the true climate feedback is incontrovertible. With the R2 values seen, that error will be very significant. The correct slope could be 2 or 3 times that 2.61.

“However, the incorrectness of taking that deformed result to be the true climate feedback is incontrovertible.”

I don’t think anyone did that. This is all-sky flux (I believe). Dessler, SB etc are looking at ΔR_cloud. And Dessler did not use this data.

Again it is the METHOD I am criticising not the data source.

I gave a clear demonstration of this earlier and asked you to give a yes or no response and not to diverge elsewhere. You proceeded to diverge elsewhere.

I don’t care if it’s cloud flux or cloud fluff, if the data is full of errors and noise in the x variable as it is in all this work, OLS REGRESSION IS INVALID. Period.

Steve, re-reading your original post, there seems to be confusion of two very different things.

“The right panel shows the same data plotted as monthly anomalies. (HadCRU, used in some of the regression studies, uses monthly anomalies.) ”

This discussion and UC’s point, as I understand it, concerns using deseasonalised data, ie time series that have gone through something like R.stl decomposition and had the seasonal component subtracted.

You often seem to refer to these quantities as monthly anomalies.

That is not at all the same thing that is given in hadCRUT3 which is a time series of monthly deviations from a _unique_ long term average. (1960-1990 or whatever)

The term anomaly itself is pretty stupid. They are simple differences from some arbitrary mean.

Steve: I think that you’ve misunderstood HadCRU anomalies. They are calculated relative to monthly averages over 1961-1990, not one long-term average.

Apologies, too many hours staring at the screen.

All these datasets , HadCRUT, UAH etc have the annual variations removed by one means or another, otherwise we’d be seeing the huge cyclic trends you posted above.

It may also explain a feature of the SB lag regression plots that has been troubling me since I saw them… hmm.

So what can be summarised from looking at non denatured data that record absolute physical properties?

In summary, there are strong signals that may provide useful information about responses and lags in the global system.

These signals do not give any obvious information about “climate feedback”. There may be something in the residuals if these major cycles are removed (as opposed to the usual abstract deseasonalisation).

1. Two clearly definable sinusoidal cycles are evident. They seem to be attributable to orbital eccentricity and tropical seasonal variation. The temperature response of the orbital cycle is about 4x that of the tropical one.

2. There is a good S/N ratio. (approx 10:1 by eye)

3. The overall maximum excursions are about 15 W/m2 and 1.0 K; this gives an overall “slope” of 7.5 W/m2/K, which is what is found by fitting an inappropriate linear model by OLS (=7.7).

4. Fitting a model which better represents the data reveals two cycles that are synchronous but out of phase. The amplitudes are 8.8 and 12 , both individually greater than the supposed OLS slope.

5. If we wish to regard the OLS result as representing the dominant cycle, we see that even with clean data and a small phase lag the result is lower than the magnitude of the response.

6. The small phase lag and presence of the lesser decorrelated cycle produce regression attenuation via errors in the x variable that lead to errors if this is presumed to represent the “slope” of a linear relation.

There is some valuable information about the system response here, but it does not reveal anything directly that has bearing on feedback and climate sensitivity. There is good S/N, but the signal is not the climate feedback.

Maybe it needs to be taken further.

Bottom line: don’t do regression fits of linear functions on noisy, lagged, cyclic data.
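Point 6 above names errors-in-variables attenuation. A minimal numerical sketch (mine, not the commenter's calculation; synthetic data with an assumed true slope of 3.3) shows how noise in the x variable biases an OLS slope low:

```python
import random

random.seed(42)

# Synthetic example: true relation y = 3.3 * x, but we only observe
# x corrupted by measurement noise of the same variance as x itself.
beta = 3.3
n = 5000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
noise = [random.gauss(0.0, 1.0) for _ in range(n)]  # error in the x variable
x_obs = [a + b for a, b in zip(x_true, noise)]
y = [beta * a for a in x_true]

def ols_slope(xs, ys):
    # Ordinary least squares slope: cov(x, y) / var(x)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

slope = ols_slope(x_obs, y)
# Classical attenuation result: E[slope] = beta * var(x) / (var(x) + var(noise)),
# here roughly beta / 2, i.e. about 1.65 rather than the true 3.3.
print(slope)
```

The attenuation factor var(x)/(var(x)+var(noise)) is why, as the comment says, the true slope could be a multiple of the fitted one.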

Steve, there seem to be a few errors in your scripts.

the first line gets a 404 on:

http://www.climateaudit.info/scripts/satellite/amsu.txt

there is a http://www.climateaudit.info/scripts/satellite/amsu_retrieve.txt

Is that what it should be, or did you forget to post a different version called amsu.txt?

I took what there was, don’t know if that was the file you intended. :?

Should this be make_anom()???

Hmm.

Corrections would be welcome.

Steve: sorry about that. I have a habit of leaving my workspace open too long. I was trying to respond too quickly here and didn’t shut down and re-collate to ensure consistency. It is amsu_retrieve. The anomaly function is:

anom=function(x) {
  month= factor(round(time(x)%%1,2)); levels(month)=1:12
  norm= unlist(tapply(x,month,mean,na.rm=TRUE))
  anom= month; levels(anom)=norm
  anom= x - as.numeric(as.character(anom))
  anom= ts(anom,start=tsp(x)[1],freq=12)
  return(anom)
}

anom <- function(x) {
  return(x - rep(unlist(tapply(x, cycle(x), mean, na.rm = TRUE)), length(x)/12))
}

cycle() gets you the months from a ts so you don’t have to turn it into a factor.

tapply will coerce cycle to a factor.

Steve Mc: Mosh’s function presumes that the time series starts on month 1 and ends on month 12. If it starts in month 3 (as CERES), then a factor gives you the right result.
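The factor-vs-cycle point generalizes: monthly centering just subtracts each calendar month's mean, and handled correctly it does not matter which month the series starts in. A language-neutral sketch (illustrative only, not Steve's or Mosh's R):

```python
def monthly_anomaly(values, start_month):
    # start_month is 1..12; map each value to its calendar month,
    # then subtract that month's mean (monthly centering).
    months = [(start_month - 1 + i) % 12 for i in range(len(values))]
    groups = {}
    for m, v in zip(months, values):
        groups.setdefault(m, []).append(v)
    means = {m: sum(vs) / len(vs) for m, vs in groups.items()}
    return [v - means[m] for m, v in zip(months, values)]

# Two years of a pure seasonal cycle: monthly centering should zero it out,
# whether the series starts in January or (like CERES) in March.
seasonal = [float(m) for m in range(12)] * 2
anom_jan = monthly_anomaly(seasonal, start_month=1)
anom_mar = monthly_anomaly(seasonal, start_month=3)
print(max(abs(a) for a in anom_jan + anom_mar))  # → 0.0
```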

Touche`

Actually Steve, using factors may help me solve an interesting little problem of how to handle meteorological years…

I posted “What happens if one substitutes weekly centering instead of monthly centering?” TomVonk suggests

“Btw an amusing test would be to make the analysis based on weekly anomalies (52 of them) instead of monthly.”

Anyone involved in banking knows that not all months should be ascribed equal weight. Nine should be weighted 1.107, three by 1.071 and one unweighted (Of course every fourth year, except for the odd century, the weightings will be different).

Of course someone will come in with “it does not matter, we are only dealing with anomalies”. But apples and pears come to mind.

sorry, eight not nine – I can’t count..
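The quoted weights are consistent with days-in-month relative to February (my reading; the comment doesn't state the base): 31/28 ≈ 1.107 and 30/28 ≈ 1.071. A quick check also settles the count: seven months have 31 days and four have 30.

```python
# Days per month in a non-leap year.
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Weight each month relative to February (28 days), matching the quoted figures.
weights = [d / 28 for d in days]

n31 = days.count(31)  # months weighted 31/28 ~= 1.107
n30 = days.count(30)  # months weighted 30/28 ~= 1.071
print(n31, n30, round(31 / 28, 3), round(30 / 28, 3))  # → 7 4 1.107 1.071
```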

Is the temperature in the pictures on the left side really in °C, and not in K?

pdtillman

“RE: LITURCHUR

This was amusing the first few times, but is growing tiresome (imo), and may make you look a bit silly to new visitors. We do get your point.”

I would have drawn the precisely opposite conclusion. If you are right and it “was amusing the first few times” then surely a new visitor will not think it silly but will also be amused.

Quite apart from this logical problem with your conclusion, I think the term ‘liturchur’ is wonderfully expressive – it highlights a mantra repeated ad nauseam by many who propound the theory of dangerous AGW, viz. if it is not in the ‘peer reviewed literature’ then it is apocryphal; and conversely, if it is in the ‘peer reviewed literature’ then it must carry with it an aura of received truth.

Apparently, one is supposed to leave one’s critical faculties at the cover sheet as one delves into the hallowed pages of the published work.

Since much of the published work is drivel (an observation not restricted to climate science) this approach is ludicrous.

For me, the term ‘liturchur’ says all this – it is as economical in expression as the finest poetry.

Agreed. Let Steve be Steve. Minor stylistic criticisms come across as patronizing and self-indulgent. He’s developed a unique style over the years and knows what works.

The correct spelling is ‘litchurchur’.

========

Which is important, because “church” is a substring.

Have to admit to not spotting this!

I’d assumed it was based on the constant references to the phrase and how such things become elided over time (like “hella” for “helluva” for “hell of a”)

It’s chuckling.

=======

There are a few others lurking within the ivory towers – iss-yew comes to mind. “There are some iss-yews to be discussed.”

Others?

There are two clearly definable sinusoidal cycles in the absolute flux, which yield the Lissajous type of pattern with an apparent 6-month cycle in addition to the 12-month cycle. I had speculated that the 6-month cycle arose from something to do with the tropics (where the maximum is at the equinox, and thus a 6-month cycle).

Further examination shows that the reason is rather different. Below is a plot showing absolute values of flux in (solar) and flux out (SW plus LW). Comments below.

Black – Incoming solar flux (from CERES SYN); outgoing SW and LW flux (CERES EBAF). The latter version chosen for energy balance.

The outgoing flux is the sum of two sinusoidals. Outgoing SW flux is in phase with incoming solar. (Albedo varies a little on an annual basis, but it is relatively constant.)

On the other hand, outgoing LW flux is almost 180 degrees out of phase with incoming solar, reaching a maximum in NH summer. The amplitude of the LW sinusoidal is about 62% of the amplitude of the SW sinusoidal. LW flux appears to be a function of incoming solar flux over land (as noted in another blog discussion recently – I don’t recall which.)

The combination of the two effects results in the amplitude of outgoing flux being damped from the amplitude of incoming flux.

Unsurprisingly oceans are important in dampening the amplitude, as they accumulate heat in the SH summer and lose heat in the SH winter/NH summer.
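The damping can be checked with unit-amplitude stand-ins (a sketch, not the CERES values): adding to a sinusoid a second one 180° out of phase with 62% of its amplitude leaves a net amplitude of 1 − 0.62 = 0.38.

```python
import math

# Sample one annual cycle; amplitudes are illustrative (SW normalized to 1).
n = 1000
t = [2 * math.pi * i / n for i in range(n)]
sw = [math.sin(x) for x in t]                    # outgoing SW, in phase with solar
lw = [0.62 * math.sin(x + math.pi) for x in t]   # outgoing LW, ~180 deg out of phase
total = [a + b for a, b in zip(sw, lw)]

amplitude = max(total)  # peak of the combined outgoing flux
print(round(amplitude, 2))  # → 0.38
```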

============

Unsurprisingly oceans are important in dampening the amplitude, as they accumulate heat in the SH summer and lose heat in the SH winter/NH summer.

=============

Doesn’t this indicate that there is a significant capacitance and therefore lag in the system?

In comment 305964, Nick Stokes has an lm() routine with a graph with the phase in blue. If you use this, could you detrend the data, and plot the residuals as a time sequence?

Here I am the amateur again: I can see an “average” figure 8 for the seasonal variation, assuming one can generate an average for each day over the yearly cycle, and generate the plot.

The sensitivity to CO2 should be seen as earlier (before) years being slower to warm and faster to cool, as compared to later (after) years where CO2 levels are higher. (Just another way of saying that the summer gets longer and the winter shorter up in Canada.)

The amount of that shift at any point is the integral of the CO2 “forcing” since equinox.

Seems to me someone with better math skills than mine is needed to tease out that information from the “before” figure 8, “average” figure 8, and “after” figure 8.

Re the left figure above at Steve McIntyre Posted Sep 28, 2011 at 11:01 PM |

If you plotted AMSU 400 mb deg C against AMSU 600 mb deg C, much or all of the character would vanish. This is because the seasonal rise in July-Aug is not seen at higher altitudes. This in turn means that there is a particular feature of the process at/near ground level that produces the seasonal rise. If one is dealing in radiative physics (as converted from W m^-2 to K), then I am puzzled why the monthly rise vanishes with altitude, with the implication that radiation departing from the outer atmosphere has radially constant geometry all year, which one might intuitively expect.

So, what is the mechanism that gives the monthly rise close to earth surface each year, but not further from it? The answer probably stares me in the face, but I can’t see it.

The AMSU temperature at 600 hPa (=600mbar) is around -5°F (or -21°C = 252 K). 250°C (as in the figures) would be almost 500°F, which cannot be explained even as Trenberth’s missing heat.

“Anomaly” (K or °C) and “slope” (W/m^2*K) have units, too.

I’m with you all the way for correct units, so let’s get it right.

W/m^2/K or W/(m^2*K) NOT W/m^2*K

Thanks for bringing me up to date about the modern (Excel) way of defining the priorities of mathematical operations. When I was learning physics one could e.g. write the units of mu as Vs/Am, and two successive / signs were forbidden (one had to be longer).

That does not change the fact that the way units are defined in the SI is exemplary, science at its best.

Maybe you are already aware of this recent reconstruction:

http://www.clim-past.net/7/1011/2011/cp-7-1011-2011.pdf

I just found it amusing that in the abstract the authors express having great difficulty believing their own results!

“At High Medieval Times, the amplitude in the reconstructed temperature variability is most likely overestimated; nevertheless, above-average temperatures are obvious during this time span, which are followed by a temperature decrease.”

(Sorry for being off topic)

Funny bits. Problems calibrating to temperatures. Missing LIA due to human activity. I would not put too much stock in an exploratory study.

ok so it’s all a load of fun but what does this mean for Climate Science…how many papers should be debunked before they appear in the official summary in AR5?

The plotting of monthly anomaly data seems to be removing information about the flux and temperature relation. Imagine you can go back to 1814 and plot a series of French artillery shots. You plot the position of where each missile lands on a grid with (0,0) being your artillery piece location, and the Y-axis being where you want the round to go. You likely will see a pattern that includes a normal distribution about some downrange value, with likely an offset in cross-range due to pointing errors and wind. Downrange errors are a function of different shot charges, elevation errors and downrange windage. If you remove the crossrange bias, you usually get the ballistic dispersion of the gun, and this is important. If you are the engineer responsible for minimizing the ballistic dispersion, this is your main interest. You might want to remove the bias by centering data about the centroid of the shots. On the other hand, if you are designing the aiming mechanism (both elevation and crossrange) then you need the bias information. You really want to include the crossrange and downrange deviation from the aim point. You center data depending on what is needed in the analysis. First plot all the data. Look at it. Think about it. Do not automatically go to the data centering button.

The monthly anomaly plot (regression line slope of 2.61) suggests only a weak positive feedback… (3.3/2.61) = 1.264, corresponding to 3.71/2.61 = 1.42 degrees per doubling of CO2. I find that result interesting all by itself. 1.42C per doubling is pretty low sensitivity compared to the IPCC ‘most likely’ estimate of ~3.2C per doubling.

I don’t think it’s to be identified as feedback at all. It’s just the relation of temperature and nett flux after removal of annual periodicity. What Dessler and S&B, L&C have been doing is looking at a subset of flux that can be attributed to clouds, which may be a feedback from temperature. The big feedback is water vapor, not touched by this analysis.

Nick,

The value of 3.3 W m^-2 per degree is the expected sensitivity absent all feedbacks (that is, in the absence of changes in water vapor, clouds, etc.). If the measured net flux at TOA responds to a change in temperature by less than 3.3, that indicates the atmosphere has a positive net feedback; if more than 3.3, a negative net feedback. It seems to me that how the TOA flux changes with temperature anomaly is pretty much the definition of climate sensitivity. Why do you think otherwise?

No, the definition of climate sensitivity is how much the temperature anomaly responds to forcing. With that expressed as W/m2, the units are °C/(W/m2).

Nett flux imbalances are transient; in the long run they must balance, with or without AGW. So you can’t get equilibrium CS by looking at TOA flux. The question is, what surface T is needed to achieve balance.

Nick,

Humm… OK I was sloppy in my word usage. I should have said “inverse of climate sensitivity” not climate sensitivity.

Net flux imbalances may be transient (how could they not be with so much noise?), but it still seems to me reasonable that if a positive temperature anomaly regularly corresponds (over many years) with a positive anomaly in TOA outward flux (and vice-versa) then that correlation is at least consistent with causation.

And besides, do you really mean to suggest that simply increasing temperature (all else equal) that will not increase TOA outward flux? Come on Nick, if you increase the solar intensity, which increases the surface temperature, then the TOA outward flux increases; it pretty much has to. Now just substitute radiative forcing for an increase in solar intensity.

“Now just substitute radiative forcing for an increase in solar intensity.” No, you can’t. Longterm, radiation out balances solar in. That’s just conservation of energy for the planet, and isn’t changed by forcings other than solar (unless they actually generate energy).

“if a positive temperature anomaly regularly corresponds (over many years) with a positive anomaly in TOA outward flux (and vice-versa)” Putting a blanket on increases your temp. But it doesn’t increase heat flux to the environment – in fact, there is a transient decrease. But again, you just can’t have a sustained positive flux anomaly, unless solar has increased.

‘Longterm, radiation out balances solar in.’

Almost. Some energy of sunlight is stored by the biota: maintaining an O2 atmosphere and storing highly reduced biomass (swamps, peat and organic carbon at the bottom of the oceans). Carbonates, like the chalk of the White Cliffs of Dover, also took a lot of energy to deposit.

Information is actually energy and the earth is a highly information rich environment.

Nick,

Sure, but I think that begs the issue a bit. Taking a blanket off decreases your temp. But it doesn’t decrease heat flux to the environment – in fact, there is a transient increase. How the system responds (in terms of changes in TOA flux) to temperature anomalies almost certainly has to be related to sensitivity to forcing. A modest increase from GHG forcing ought to be not much different in surface temperature response from a modest increase in solar forcing. If ‘natural variability’ produces temperature anomalies, and these temperature anomalies are shown to strongly correlate with anomalies in TOA outward flux, then that still seems to me pretty good information about the “sensitivity” of the system.

No, the 2.61 is not the relation of temperature and nett flux… it’s not anything at all.

Fitting a linear relation to that data is worse than meaningless, it’s misleading. There is the instant assumption it shows an underlying relation.

If some attempt were made to correctly do the OLS fit with error in x , one may look at inferring a relationship. It would not be positive feedback but negative.

We’re at cross-purposes here. I’m not talking about the 2.61 or OLS. I’m saying that any relationship inferred between flux and temperature is not a feedback from temperature, positive or negative, without some argument as to why it can be interpreted so. I don’t think that argument has (or can) be made.

Interesting comments, my view is the change in TSI each season is a perfect test of whether clouds cause a negative feedback.

Clearly, in summer clouds form a negative feedback and in winter their role is reversed. It seems obvious to me that the cloud feedback can be either direction subject to how far from equilibrium the system is pushed.

Hence arguments about whether it is positive or negative are moot; it’s both, and varies subject to absolute temperature and its distance from the “ideal” equilibrium temperature. If the mean temperature is increased due to increased atmospheric CO2 concentrations, it would initially raise temperatures, only to later be offset as the mean annual feedback is pushed towards being slightly more negative.

I don’t know why climatologists find it so hard to observe the obvious, come up with logical concepts and correctly interpret useful empirical data. They prefer to look at 30-year averages rather than out the window to see what’s happening around them!

If the mean cloud feedback varies subject to deviation of mean temp from ideal, it can be measured statistically as positive today, but could turn negative in the future with increased CO2. It means all this analysis becomes academic. The only way to REALLY understand how cloud feedbacks will likely work is to study them on a day-to-day basis, not as averages over long periods assuming the relationships hold as mean temp changes. And that is the main point here: there is always the assumption that the cloud feedback is a set response and is not dynamic, as is typical of our climate.

Nick Stokes wrote:

“What Dessler and S&B, L&C have been doing is looking at a subset of flux that can be attributed to clouds, which may be a feedback from temperature. The big feedback is water vapor, not touched by this analysis.”

Isn’t “cloud” a term that refers to a discrete region of the atmosphere that is saturated with water vapor? Did you mean “water vaporization,” as in evaporation? Are you saying that the important stuff happens at the surface and not at the TOA?

I’m referring to the feedback of water acting as a GHG. T rises, more water evaporates from the ocean, specific humidity rises, and increases the IR opacity of the atmosphere, causing T to rise further.

Nick, have you ever modeled a system with a positive feedback?

The thing is that a positive feedback in a system isn’t stable unless there is an opposing rate with a rate constant about an order of magnitude larger, or of a higher order. So if you have a first-order positive feedback you can stabilize it if the opposing flux is second order.

If not, the system runs away until it saturates.

Thus, one warm year, more water vapor. More water vapor, more GHG, more heat trapped. Next year warmer and so more water vapor, e.t.c. Finally the oceans boil away.

Whichever way you analyze it, this runaway has not occurred, so some mechanism must exist to stop this from happening.

My guess is those large white fluffy things in the air that block sunlight.

“Nick, have you ever modeled a system with a positive feedback?” Yes, and I’ve built them. Oscillators, multivibrators…

But the arithmetic here is fairly well known. As a radiating body (without allowing for feedbacks) the Earth would emit about 3 W/m2 for every 1°C rise in surface temp. So if a forcing of 3 W/m2 were imposed, a rise of 1°C would provide a balancing efflux.

But wv as a GHG creates a feedback effective flux of about 1.5 W/m2 in response to that 1°C warming. With that, only a nett 1.5 W/m2 escapes for each 1°C rise, and it takes 2°C to balance the forcing.

Doubling CO2 gives about 3.7 W/m2 forcing, so in the way sensitivity is usually quoted, that is about 2.4 °C per doubling.

But of course there are other feedbacks, positive and negative. They would need to add up to about 3 W/m2/°C (twice wv) to create the runaway you describe.

The system is non-linear, and these numbers describe gradients at a particular state. With runaway, the system moves to a new state where the feedbacks are below critical. This could be Venus-like, or something much less. There are indications that the Earth may have reached critical states in the past, and undergone limited but rapid changes.
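Nick's arithmetic can be laid out with his round numbers (illustrative only): a no-feedback response of 3 W/m2 per °C minus a water-vapour feedback of 1.5 W/m2 per °C leaves a net 1.5 W/m2 per °C of escaping flux, and runaway would require total feedbacks to reach the full 3.

```python
planck = 3.0         # W/m2 of extra emission per degC of warming, no feedbacks
wv_feedback = 1.5    # W/m2 per degC returned by the water-vapour feedback
forcing_2xco2 = 3.7  # W/m2 forcing from doubled CO2

net_response = planck - wv_feedback         # net escaping flux per degC
sensitivity = forcing_2xco2 / net_response  # degC needed to balance the forcing
runaway_threshold = planck                  # feedbacks summing to this leave no restoring flux

print(net_response, sensitivity, runaway_threshold)
```

The sensitivity comes out just under 2.5 °C per doubling, matching Nick's "about 2.4".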

Nick,

“But wv as a GHG creates a feedback effective flux of about 1.5 W/m2 in response to that 1°C warming. With that, only a nett 1.5 W/m2 escapes for each 1°C rise, and it takes 2°C to balance the forcing.”

Well, that depends quite a lot on the absolute temperature. Does a 1C rise in Antarctica, say from -35C to -34C, increase water vapor concentration enough to add 1.5 W/m^2 extra forcing? I kinda doubt that.

Steve, on the other hand, air temperatures in the high arctic get above freezing in the summer.

You can see where this goes, if you extend the regions where, and durations for which, positive feedback occur as the globe warms.

Carrick,

Sure, the warmer it is the more important water vapor feedback. I do not suggest that water vapor does not add to forcing, and it is clear temperature increases ought to on average yield positive feedback via increases in water vapor. I just don’t think it is so clearly defined as Nick suggests. The smallest rate of warming over the past 100 years (the tropics) is also where the water vapor concentration is the highest. The largest increase in average temperature is for the arctic winter, where temperatures are low enough for water vapor to be not so big a factor.

“The smallest rate of warming over the past 100 years (the tropics) is also where the water vapor concentration is the highest.” I think this is common but mistaken thinking. GHG warming accumulates over decades. On that time scale, the atmosphere is very well mixed. Spatially, cause and effect are separated by mixing. Whatever causes the accumulated heat to be unevenly distributed, it isn’t the location where it was generated.

“GHG warming accumulates over decades.” I think this is common but mistaken thinking. The entire system (even with ongoing increases in GHG forcing) is today remarkably close to ‘in balance’. If there is a current imbalance (as evidenced by ocean heat accumulation) that imbalance is at most ~0.35 W/m^2, or ~0.15% of the average short wave solar flux absorbed by the Earth. That absorbed heat is too small to represent much ‘unrealized warming’. The current temperature is quite close (probably 0.2C to 0.4C, depending on how much aerosol offset you think there is) to what it would be if there were suddenly zero net ocean heat uptake. There is very little of anything “accumulating over decades” except CO2 in the atmosphere.

>> except CO2 in the atmosphere

However, because of Henry’s law, CO2 is in a cycle. It is regularly absorbed by water in polar regions and also expelled from water in equatorial regions. It can’t really accumulate.

DocMartyn:

What you need is a stabilizing nonlinearity in the damping sector (that is, the energy loss must increase as the amplitude grows). Stefan-Boltzmann does this stabilization by providing a radiative heat loss that depends (of course) on T^4.

A classic system with positive feedback (negative damping) and stabilizing nonlinearity is the van der Pol oscillator.

There is a variant on this that uses a time-delayed stiffness instead of the negative damping.

The existence of oscillations in the climate system is (to me) evidence that there are net positive feedbacks at work on some scales, and that the system is already in a sub-critical operating point.

Carrick, if there is one thing we can be sure of it is that more heat = more evaporation; pointing out the Wiki page on van der Pol oscillators does not get us anywhere.

If increased water vapour = heat, then increased heat = water vapour; positive feedback. Additionally T^4 is in K so is pretty close to linear from 288-298.

A feedback must exist, clouds are the obvious place to look.
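The near-linearity of T^4 over 288-298 K is easy to verify (a side check of the claim above, not part of the original comment): the average slope over that interval differs from the tangent at the midpoint by only a fraction of a percent.

```python
def emission_shape(T):
    # Proportional to the Stefan-Boltzmann T^4 dependence (constants omitted).
    return T ** 4

# Compare the secant slope over 288-298 K with the derivative at 293 K.
secant = (emission_shape(298.0) - emission_shape(288.0)) / 10.0
tangent = 4 * 293.0 ** 3  # d(T^4)/dT at the midpoint
rel_diff = abs(secant - tangent) / tangent
print(rel_diff)  # a few parts in 10^4, i.e. nearly linear over this range
```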

I appreciate understatement of this:

” I find it hard to visualize a physical theory in which the governing relationship is between monthly anomalies as opposed to absolute temperature. ”

Such a theory might give rise to an incredible equation like this:

E(month) = kT(month)^4

http://wattsupwiththat.com/2011/10/06/high-level-cloud-and-surface-temperature/

For the record, there does seem to be some absolute temp data available for HadCRUT

http://www.cru.uea.ac.uk/cru/data/temperature/absolute.nc

## One Trackback

[...] (that is, they don’t take monthly anomalies) to show the radiative response. Steve McIntyre has explored this as well. The result is that you get higher r^2 values, but I think this may inflate the [...]