Credentials please. If you are the David Hodge in New Zealand shown on LinkedIn, you are a climate warmer, and no wonder you can’t see the forest for the trees.

Jerry

]]>David,

The only real results came when l_2 norms were used to test the accuracy of different curves. You are just another nonsense spouter with nothing to back it up. Bindidon himself stated the non-robustness of the UAH and RSS data. Obviously you read into the scientific facts the result you want, not what the facts state.

Jerry

]]>Kudos to Nick, Olof and Bindidon for putting up with the fog of errors cast by Dr Browning. It may sound perverse but I learn a lot when knowledgeable people try to educate the less educated.

Cheers

]]>Nick,

So if it was off the rails why did you provide code to support Olof’s result?

The original post claimed that land surface stations can provide adequate data to determine the global mean surface temperature, and that is clearly not the case.

Olof claimed that a sparse (18-point) mesh is sufficient, which contradicts all numerical analysis theory unless the field is very smooth (almost constant, which is true here because of all the averaging) and, in his case, dominated by a few (equatorial) values.

1. bindidon’s own statement confirmed the sensitivity of the UAH data, i.e., it is not robust, so it is a poor indicator of anything even though it is global.

2. If the data are not robust, integrating them has no meaning. And even if the UAH data were accurate, the accuracy of the sum relative to the true integral is not known.

Any field can be measured by relative norms. The units are the same in the numerator and denominator, so an answer in percentages is perfectly reasonable. Norms were invented exactly to measure the differences between functions. These are anomalies, so no mean needs to be subtracted in the denominator, and the mean cancels in the numerator. And the l_2 norm used here has been a very reliable indicator of accuracy. I have used this exact relative norm in many manuscripts accepted by reputable journals.
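For concreteness, the relative l_2 norm described here can be written in a few lines of R; the series below are made-up illustrative anomalies, not the UAH data:

```r
# Relative l_2 norm: ||base - curve|| / ||base||.
# Numerator and denominator carry the same units, so the result is a
# dimensionless fraction, naturally quoted as a percentage.
rel_l2 <- function(base, curve) {
  sqrt(sum((base - curve)^2) / sum(base^2))
}

base  <- c(0.10, -0.20, 0.30, 0.05)   # illustrative reference anomalies
curve <- c(0.12, -0.18, 0.25, 0.08)   # an approximating curve
rel_l2(base, curve)
```

Note the measure is scale-invariant: multiplying both series by a common factor leaves it unchanged, which is why the units cancel.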

Jerry

]]>*“So the equatorial region is still more crucial relative to the full mesh than your extra tropic subset.”*

This is all way off the rails. The head post was about GISS and interpolation. The discussion got onto the requirements for numerical integration, which is the more general issue. Olof showed that for UAH, a very coarse mesh could do fairly well, implying that sparseness of data in the full grid was unlikely to be a problem. It would probably have been better to have shown something similar with GISS, but UAH is readily available through KNMI.

Then it went off the rails with red herrings about the tropics, 9000 lines of code, sondes, etc. There are two separate problems:

1. Is UAH a reliable measure of, well, something?

2. Can it be integrated accurately?

The second one is the relevant one for this thread. The dense mesh is available; how a subset of that can be used to estimate the integral is independent of whether UAH is accurate, tropic-dominated or whatever. It is just a set of numbers with various time and space scales, and the issue is a numerical one of convergence. It is an analogue, possibly imperfect, for surface temperature.
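The convergence point can be illustrated with a small R sketch on a synthetic smooth zonal field (purely illustrative, not UAH data): an area-weighted mean computed on a fine latitude grid and on a much coarser one agree closely, because the field varies slowly.

```r
# Area-weighted mean of a zonal field sampled at band-centre latitudes.
area_mean <- function(lat_deg, field) {
  w <- cos(lat_deg * pi / 180)   # area weight per latitude band
  sum(field * w) / sum(w)
}

lat_fine   <- seq(-87.5, 87.5, by = 2.5)   # 71 bands
lat_coarse <- seq(-80, 80, by = 20)        # 9 bands
f <- function(lat) 0.2 + 0.3 * cos(lat * pi / 180)  # smooth synthetic anomaly field

area_mean(lat_fine,   f(lat_fine))
area_mean(lat_coarse, f(lat_coarse))   # close to the fine-grid value
```

For a rough, rapidly varying field the two would diverge; the smoothness of the field, not the integration rule, is what makes the coarse mesh adequate.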

Olof sought a representative subgrid, and showed agreement. Jerry has sought a biased grid (tropics) and also found agreement. That probably just means there isn’t much bias; i.e., the tropics average tracks the global average fairly well. If true, that is just a physical fact about the UAH field, as measured. It doesn’t have anything to do with the integration technique.

And the 75% etc error numbers are a nonsense, just as they were for Celsius. You can’t use temperatures in ratios, at least not if they are not in Kelvin. If they were, you’d get very small numbers which wouldn’t mean anything. The reason is that there is an arbitrary offset (temp differences are meaningful). Here the denominator is an anomaly; choose a different anomaly base and you’ll get a different error.
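The offset point is easy to check numerically: shifting both series by the same constant, as a change of anomaly base period would, leaves the numerator of the relative error unchanged but alters the denominator, so the quoted percentage changes. A minimal R sketch with made-up numbers:

```r
# Relative error of an approximating curve against a base series.
rel_err <- function(base, curve) sqrt(sum((base - curve)^2) / sum(base^2))

base  <- c(0.1, -0.2, 0.3)        # anomalies against one base period
curve <- c(0.15, -0.15, 0.25)     # an approximation with the same base
rel_err(base, curve)              # one percentage...
rel_err(base + 0.5, curve + 0.5)  # ...same shapes, shifted base: a different percentage
```

The difference base − curve is identical in both calls; only the denominator, which depends on the arbitrary anomaly zero, has changed.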

]]>bindidon,

In your own words as to the robustness of the UAH data:

2. And you are assuming the satellite data is accurate (the 9000-line code has been changed a number of times, with different results each time).

“Believe me: that is a point I am quite aware of! I could show you amazing charts comparing UAH6.0 with UAH5.6…”

Jerry

]]>Nick,

Here is the code if you want to check my numbers.

Jerry

N=38                      # number of years
ann=matrix(NA,N,3)        # to fill with annual averages
mon=array(NA,c(12,N,3))   # and monthly
iq=1:N; x=1978+iq         # range of years
op=!file.exists("uah.sav")
graphics.off()
if(op){uah=array(NA,c(144,72,12,N))}else{load("uah.sav")}

for(ip in iq){ # loop over years
  if(op){ # read and scrub up UAH file
    b=readLines(sprintf("http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonamg.%s_6.0",x[ip]))
    b=gsub("-9999"," NA ",b)                      # missing-value flag -> NA
    b=gsub("-"," -",b); i=grep("LT",b); b=b[-i]   # split fused negatives; drop header lines
    v=scan(text=b)/100
    dim(v)=c(144,72,12)
    uah[,,,ip]=v
  }else{v=uah[,,,ip]}  # saved values are already scaled; do not divide by 100 again
  for(ik in 1:3){ # full mesh first, then two subsets
    # if(ik==2) {lon=seq(0,143,1); lat=seq(24,48,1); print(lat)}
    # if(ik==2) {lon=seq(0,143,1); lat=c(16,17,55,56); print(lat)}
    if(ik==2) {lon=seq(0,143,1); lat=c(seq(12,23,1),seq(47,60,1)); print(lat)}
    if(ik==3) {lon=seq(24,120,48); lat=seq(20,60,8)}
    if(ik==1) {lon=0:143; lat=0:71}
    y=v[lon+1,lat+1,]  # subset data matrix
    n=2:length(lat)
    # weights by exact integration of latitude bands
    a=c(0,(lat[n]+lat[n-1])/2,72)
    a=rep(-diff(cos(a*pi/72)),each=length(lon))   # weights
    # normalising denominator, with same pattern of NA as data
    wa=sum(y[,,1]*0+a,na.rm=TRUE)  # integral of 1
    s=1:12
    for(i in 1:12){
      s[i]=round(sum(y[,,i]*a,na.rm=TRUE)/wa,3)   # integrate and divide by wa
    }
    mon[,ip,ik]=s
    ann[ip,ik]=mean(s)
  }
} # ip years
save(uah,file="uah.sav")

# Plotting annual curves
cl=c("red","blue","green")
png("jerry.png",width=900)
plot(x,ann[,2],type="n",ylim=c(-.6,.8),xlab="year",ylab="Anomaly",
  main="Integration of UAH mesh and Jerry's subset")  # sets up axes
for(i in 1:3)lines(x,ann[,i],col=cl[i],lwd=2)
for(i in 1:3)points(x,ann[,i],col=cl[i],pch=19,cex=1.5)
for(i in 1:9)lines(x,x*0-0.4+i/10,col="#888888")   # gridlines
legend("topleft",c("Full 2.5 deg grid","33 points","18 points"),text.col=cl)

# relative l_2 norm of the difference between two curves
compare <- function(base,curve){
  sqrt(sum((base-curve)^2)/sum(base^2))
}
errolof=compare(ann[,1],ann[,3])
errbrow=compare(ann[,1],ann[,2])
cat("errolof = ",errolof,"\n")
cat("errbrow = ",errbrow,"\n")
dev.off()

]]>bindidon,

Your relative error for 60S–32.5S and 32.5N–60N is 34%, and mine for 30S–30N is 26%.

So the equatorial region is still more crucial relative to the full mesh than your extratropical subset.

Jerry

]]>bindidon,

“The difference between real science and pseudo-science is the robustness of the result to perturbations.”

bindidon assumes the UAH data is robust, but we have seen that it changes with every change in the 9000 lines of code. That is far from robust.

]]>bindidon,

What happened to your previous two 5-degree bands north and south? Embarrassed by the 75% error?

Jerry

]]>