We know much less about CRU methodology than GISS methodology.

Recently I noticed that the Hadley values I had downloaded as they were first published for the first four months of 2008 had subsequently been changed.

I have noticed occasional minor adjustments after the fact in most of the records, but this adjustment covered four successive months and was not “minor.”

Month   Original   “Corrected”   Difference

J       -0.105     +0.054        +0.159

F       +0.039     +0.192        +0.153

M       +0.430     +0.445        +0.015

A       +0.250     +0.254        +0.004

The net difference is an average of +0.083 °C per month, which is fairly significant in a record where annual changes are only a fraction of this amount.
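The quoted average can be checked with a short Python sketch (the monthly values are copied from the comment above; the variable names are mine):

```python
# Original vs. "corrected" HadCRUT anomalies for Jan-Apr 2008, as quoted above
original = {"J": -0.105, "F": 0.039, "M": 0.430, "A": 0.250}
corrected = {"J": 0.054, "F": 0.192, "M": 0.445, "A": 0.254}

# Per-month adjustment, then the average across the four months
diffs = [corrected[m] - original[m] for m in original]
avg = sum(diffs) / len(diffs)
print(f"average adjustment: {avg:+.4f} C/month")  # close to +0.083
```

The per-month differences (+0.159, +0.153, +0.015, +0.004) sum to +0.331, and dividing by four gives roughly +0.083, matching the figure in the comment.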

So my question: has the Met Office changed its method of calculating the reported monthly values, or has it started some ex post facto “corrections” to the monthly record for the first four months of 2008 in order to “mitigate” the current cooling trend?

I sincerely hope that the latter is not the case.

Up until now, I have always assumed that it is only the GISS record that has been compromised (and is therefore out of line with the others).

Do we now have a similar problem with the Hadley record?

If anyone has any information on what has happened, I would appreciate hearing it.

Max

In the interest of accuracy on a science-based and auditing site, you really should multiply kilometres by 0.625 to get miles; otherwise you end up with a 4% error. Not much in temperature, but a lot when travelling.
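A quick sanity check on those conversion factors, as a hypothetical Python sketch (the exact factor of about 0.621371 miles per kilometre is a standard value; the ~4% figure in the comment matches the gap between the rough 0.6 shortcut and the recommended 0.625):

```python
# Comparing km-to-mile conversion factors
exact = 0.621371   # miles per kilometre (standard value)
rough = 0.6        # common shortcut
better = 0.625     # the factor recommended in the comment

# Percentage error of each shortcut relative to the exact factor
err_rough = abs(rough - exact) / exact * 100    # a bit under 4%
err_better = abs(better - exact) / exact * 100  # well under 1%
print(f"0.6 shortcut error: {err_rough:.2f}%, 0.625 error: {err_better:.2f}%")
```

So 0.625 is indeed far closer to the true factor than 0.6, and the difference between the two shortcuts themselves, (0.625 - 0.6) / 0.6, is about 4.2%, which is presumably where the “4% error” comes from.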

Lewis Carroll was quite a guy apparently.

Have not seen you post for a while. If you’ve been away, welcome back.

Maybe the agenda is a little more complex than you summarise.

Systematic bias in choice of methods, reported with several different caveats, so that some unexpected outcomes will fit past assertions.

“I’ll be Judge, I’ll be Jury,” said cunning old Fury;

“I’ll try the whole cause, and condemn you to death.”

Written by a mathematician.

Judgement for the plaintiff.

Next topic.

But out in the real practical world where I live, has this sort of statistical stuff ever been applied to anything that is useful?

Oh yes, quite a lot, actually. Communications as well as radar are heavily based on detection theory, which is statistics wrapped up in engineering form. Component analysis techniques are used widely in many engineering fields, particularly w.r.t. any sort of detection problem (such as comm, radar, image processing). Of course, the distributions for the desired “signal” (the thing you’re attempting to detect), as well as the accompanying noise and/or other impairments, are typically known *a priori*. These distributions are often known because they have been constructed in a particular way, or because they can be tested. Even when you have this additional information from a theoretical/empirical viewpoint, once you get out into the real world it all gets fuzzy, and many of the detection methods either don’t work as well as advertised or stop working altogether.
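A toy instance of the detection problem described above, deciding whether a known signal is present in Gaussian noise, can be sketched with a correlation (matched-filter) statistic compared against a threshold. Everything below, including the signal shape, noise level, and threshold choice, is my own illustration rather than anything from the comment:

```python
import random

random.seed(0)
N = 64
# The signal waveform is known a priori, exactly as described above
signal = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]

def observe(present, sigma=1.0):
    """One noisy observation: the signal (if present) plus white Gaussian noise."""
    return [(signal[i] if present else 0.0) + random.gauss(0, sigma)
            for i in range(N)]

def detect(x, threshold):
    """Matched-filter statistic: correlate with the known signal, then threshold."""
    stat = sum(xi * si for xi, si in zip(x, signal))
    return stat > threshold

# With the distributions known, the threshold can be set from theory; here we
# simply take the midpoint between the two means of the statistic (0 and N).
threshold = N / 2
hits = sum(detect(observe(True), threshold) for _ in range(200))
false_alarms = sum(detect(observe(False), threshold) for _ in range(200))
print(f"detections: {hits}/200, false alarms: {false_alarms}/200")
```

Because the signal and noise distributions are known exactly here, the detector performs almost perfectly; the point in the comment is that real-world data rarely grants you that knowledge, which is when such methods degrade or fail.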

But where is the practical realisation of something like a proxy study?

Hehe… $64,000 question, eh? 🙂

Mark

We have statistical analyses of observational trends with all sorts of adjustments and smoothing, and we have proxy studies of things that might represent temperature and then do the statistics bit with them.

Then we have computer models that simulate what might happen in the real climate system, and we even start to discuss and analyse them with statistics as if the model were real. I suppose why not!

Then we have a model of how the atmospheric gases might absorb infra red and warm up, and I suppose someone has applied statistics to that as well.

But out in the real practical world where I live, has this sort of statistical stuff ever been applied to anything that is useful? I know a lot of obscure mathematical theory does get used in all sorts of stuff, even the thing I am typing on! But where is the practical realisation of something like a proxy study?

Oops, didn’t pay attention, sorry. Let me try to combine those two versions,