Here the 1-dbar or 2-dbar T, S, and P data from CTD stations occupied during these

cruises are analyzed. First, T data reported on the 1990 International Temperature Scale (ITS-

90) are converted to the 1968 International Practical Temperature Scale (IPTS-68) using a simple

linear formula (Saunders 1991) since the 1980 Equation of State (EOS-80) was formulated using

IPTS-68, not ITS-90. Then potential temperature referenced to the surface (‘ˆ†T) is computed using

EOS-80. All fields from each CTD profile are low-passed vertically with a 40-dbar half-width

Hanning filter. The results are then sub-sampled at 10-dbar intervals for analysis.

The vertically filtered station data from each section are interpolated onto an evenly

spaced latitudinal or longitudinal grid (depending on section orientation) at 0.033° spacing using

a shape-preserving piecewise cubic Hermite interpolant at each pressure level.

They go on to describe how they have adjusted for autocorrelation:

Ascertaining the statistical significance of θ changes requires estimates of the effective

number of degrees of freedom in θ fields. Integral spatial scales for θ are estimated from

autocovariances (e.g., Von Storch and Zwiers 2001). Here the effective number of degrees of

freedom at each level, estimated as the latitude or longitude range sampled at each level (which

varies because of topography) divided by the integral spatial scale for that level, is used

throughout the error analysis, including application of Student’s t-test for 95% confidence limits.

So we have data that has been:

1. Measured.

2. Linearly transformed.

3. Transformed by the equation of state, which is (to a good approximation) cubic in potential temperature, quadratic in pressure, and linear in salinity.

4. Low-pass filtered vertically with a 40-dbar half-width Hanning filter.

5. Sub-sampled at 10-dbar intervals.

6. Interpolated onto a 0.033° latitude or longitude grid using shape-preserving cubic interpolation at each pressure level.
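As a rough illustration of steps 2 and 4–6 (this is not the authors' code — the profile values, station positions, and section values below are all invented), the processing chain might look something like this in Python with NumPy/SciPy:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# A hypothetical 2-dbar temperature profile (all numbers invented).
rng = np.random.default_rng(0)
p = np.arange(0.0, 2000.0, 2.0)                      # pressure (dbar)
t90 = 20.0 * np.exp(-p / 500.0) + 2.0 + 0.05 * rng.standard_normal(p.size)

# Step 2: Saunders (1991) linear ITS-90 -> IPTS-68 conversion.
t68 = 1.00024 * t90

# Step 4: low-pass vertically with a 40-dbar half-width Hanning window
# (20 points per half-width at 2-dbar spacing; edges handled crudely here).
half = int(40.0 / 2.0)
w = np.hanning(2 * half + 1)
w /= w.sum()
t_smooth = np.convolve(t68, w, mode="same")

# Step 5: sub-sample at 10-dbar intervals (every 5th point at 2-dbar spacing).
p_sub, t_sub = p[::5], t_smooth[::5]

# Step 6: shape-preserving cubic Hermite (PCHIP) interpolation across
# stations at one pressure level, onto a 0.033-degree grid
# (station latitudes and values are invented).
station_lat = np.array([-32.0, -31.4, -30.5, -30.0, -29.1])
station_val = np.array([1.02, 1.05, 1.01, 0.98, 1.00])
grid = np.arange(station_lat[0], station_lat[-1], 0.033)
gridded = PchipInterpolator(station_lat, station_val)(grid)
```

Note that PCHIP is chosen precisely because it does not overshoot the station values the way an ordinary cubic spline can — which is presumably why the authors call it "shape-preserving".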

Now, they say that the number of degrees of freedom for this is the sampling range (in degrees) divided by the “integral spatial scale”. They attribute the method to “Von Storch and Zwiers 2001” … but they neglect to put the exact reference into the reference list at the end of the paper.

Because of this, it’s not clear which definition of “integral spatial scale” they are using. Integral scales fall into two main divisions, Eulerian and Lagrangian, and within each division there are several methods:

- Area under the full autocorrelation function.

- Area under the autocorrelation function up to its first zero crossing.

- Area under the autocorrelation function up to the point where it drops to 1/e.

- Sum of squares of the values up to one or the other of the above stopping points.
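These definitions are easy to compare numerically. The sketch below is my own illustration, with a synthetic smoothed-noise series standing in for a real ocean section; it computes all four variants from the same sample autocorrelation, and they generally come out quite different:

```python
import numpy as np

def autocorr(x):
    """Biased sample autocorrelation at lags 0..n-1."""
    x = x - x.mean()
    n = x.size
    return np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)

# Synthetic "section": white noise smoothed with a 30-point boxcar,
# sampled at a hypothetical 0.033-degree spacing.
rng = np.random.default_rng(1)
dx = 0.033
x = np.convolve(rng.standard_normal(3000), np.ones(30) / 30, mode="same")
r = autocorr(x)

# (a) area under the full autocorrelation function (negative lobes included)
L_full = r.sum() * dx

# (b) area up to the first zero crossing
# (a demeaned series always has negative sample-ACF values, so k0 >= 1)
k0 = np.argmax(r < 0)
L_zero = r[:k0].sum() * dx

# (c) area up to the point where r first drops below 1/e
ke = np.argmax(r < 1.0 / np.e)
L_e = r[:ke].sum() * dx

# (d) sum of squares up to a stopping point (here: the first zero crossing)
L_sq = (r[:k0] ** 2).sum() * dx
```

Since the effective degrees of freedom are the sampled range divided by this scale, picking (c) instead of (a), say, can change the error bars substantially.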

Depending on which one is chosen, the values will be quite different. In addition, their method for estimating the number of degrees of freedom does not seem to include the number of samples taken … but I may just be misunderstanding the paragraph quoted above.
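For what it’s worth, here is my reading of their recipe as a sketch, with every number invented: the effective degrees of freedom come only from the sampled range and the integral scale, and the count of grid points never enters the t-based confidence limits.

```python
import numpy as np
from scipy import stats

# Hypothetical numbers: a section spanning 40 degrees of latitude at one
# pressure level, with an integral spatial scale of 5 degrees
# (which of the integral-scale definitions is used is unspecified).
lat_range = 40.0            # degrees sampled at this level
L_int = 5.0                 # integral spatial scale (degrees)
n_eff = lat_range / L_int   # effective degrees of freedom = 8

# 95% confidence limits on a mean temperature change via Student's t.
dtheta_mean = 0.05          # hypothetical mean change (deg C)
dtheta_std = 0.04           # hypothetical std of the gridded changes (deg C)
sem = dtheta_std / np.sqrt(n_eff)
t_crit = stats.t.ppf(0.975, df=n_eff - 1)
ci = (dtheta_mean - t_crit * sem, dtheta_mean + t_crit * sem)
```

Notice that interpolating to a finer grid would add thousands of points but leave `n_eff`, and hence the confidence interval, unchanged — which may be exactly the point of the divided-by-integral-scale construction.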

Now, at the end of all of that, they are reporting a warming on the order of 0.05°C … I wish I knew more about their methods, but that seems to be at or beyond the limits of detection. I’m not saying it’s wrong … just that it’s not substantiated. Remember that, because of the “equation of state” transform, any errors in either salinity or pressure will appear in their temperature record. Remember also that they are reporting a change of two parts in a hundred thousand … as I said, at the limits of detection or beyond.

So, I wasn’t ignoring the paper … just lacking data.

w.

Then fix it, dear Willy, dear Willy …

Lindzen explains the dangers of making corrections/adjustments to previously collected climate data. Very serious stuff.

Consider a difficult measurement: for example, equatorial sea surface temperatures during the last glacial maximum. A program called CLIMAP determined some 20 years ago that these temperatures were indistinguishable from today’s. At the same time, it was the practice of the modeling community to assume that the glacial maximum was due to reduced CO2, and they concluded that equatorial sea surface temperatures should have been considerably colder than those at present. As I have noted, all measurements involve errors (errors in actual measurements, errors in sampling, errors in assumptions underlying measurement techniques, etc.). An implicit assumption in such situations is that the errors, even if unknown, are random, so that we can hope that they will largely cancel out.

Let us imagine that we have all these errors in a box. We take out each error and examine it to see if it will help reconcile the models with the observations by decreasing the estimate of equatorial sea surface temperature. If it does, we apply the correction; if not, we throw it back in the box. At the end of the process, the observations agree with the model, and the errors that were corrected were genuine errors that were genuinely corrected, but it is pretty safe to assume that the errors remaining in the box are no longer random, and that applying them will lead to increasing equatorial sea surface temperatures and increased differences between models and observations.

The difficulty with the situation in reality is that the errors are often unknown at first, and so any error identified has a legitimate claim to be corrected. However, the fact that in climate science such corrections inevitably lead to reconciliation of observations with the models leads one to strongly suspect bias. Demonstrating such bias is, nonetheless, difficult unless one has the expertise and resources to search for and examine other sources of error.