In my last post, I observed that NOAA’s Talking Points applied their new “adjustments” to supposedly prove that NOAA’s negligent administration of the USHCN network did not “matter”.
To illustrate the effect of the new methods, in this post I’ll compare the new adjustments (post-TOBS) to the old adjustments (post-TOBS) at Orland CA, a prototype “good” station featured at the outset of surfacestations.org and discussed at WUWT here and at CA here in early 2007.
The station history for Orland (at CDIAC) says that it has been in its present location for (at least) most of the 20th century and has had minimal changes during that time, other than perhaps time-of-observation (TOBS). The TOBS adjustment is carried forward into USHCN-v2. As I understand it, NOAA’s New Adjustment Method replaces station-history-based adjustments for instrumentation changes and station location (the latter formerly done in FILNET).
As a benchmark, here is the difference between FILNET (adjusted) and TOBS for Orland in the “old” USHCN. Adjustments in the 20th century are negligible – in keeping with station history information that indicates no changes in location.
Now here is the net adjustment in the “New” USHCN.
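For readers who want to replicate the arithmetic, the comparison amounts to nothing more than subtracting the TOBS version from the fully adjusted version of each release. Here is a minimal sketch in Python; the CSV file names and column layout are placeholders of mine (assuming the series have first been exported from the USHCN station files to simple CSVs), not anything NOAA distributes.

```python
# Sketch of the Orland benchmark comparison - not NOAA's code.
# Assumes monthly series exported to CSVs with "date" and "temp"
# columns; file names below are hypothetical.
import pandas as pd

def load_monthly(path: str) -> pd.Series:
    """Read a monthly temperature series indexed by date."""
    df = pd.read_csv(path, parse_dates=["date"])
    return df.set_index("date")["temp"]

tobs = load_monthly("orland_tobs.csv")          # TOBS-adjusted series
old_filnet = load_monthly("orland_filnet.csv")  # old USHCN, fully adjusted
new_v2 = load_monthly("orland_v2_adj.csv")      # new USHCN v2, fully adjusted

# Net adjustment beyond TOBS under each method
old_adj = (old_filnet - tobs).dropna()
new_adj = (new_v2 - tobs).dropna()
```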
Two points jump out. Look first at the monthly adjustments at the right-hand side. In the “old” method, there weren’t any adjustments to recent data – where metadata did not indicate any relevant change. In the “new” method, there are all sorts of jittery little adjustments. They seem to average out, but why introduce these jitters in the first place? It’s starting to look like a pointless Hansen-esque (ROW-style) adjustment that simply distorts the underlying data.
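Whether the jitters really do average out can be checked by summarizing the new-method adjustments over a recent window (continuing the sketch above; the window start here is an arbitrary choice of mine, for illustration only):

```python
# Mean and spread of the recent "new" adjustments; a near-zero mean
# with nonzero spread is what "jitters that average out" looks like.
recent = new_adj["1990":]
print(f"mean {recent.mean():+.3f}, sd {recent.std():.3f}, n={len(recent)}")
```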
On a larger scale, the new adjustment noticeably increases the 20th century trend at Orland.
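The trend impact can be quantified by fitting an ordinary least squares slope to each adjustment series (again continuing the sketch; this is my own rough calculation, not NOAA’s method):

```python
# OLS trend of each adjustment series, expressed per century.
# A larger slope for new_adj than old_adj is what shows up visually
# as the increased 20th century trend at Orland.
import numpy as np

def trend_per_century(s: pd.Series) -> float:
    """Least-squares slope of a monthly series, in units per 100 years."""
    years = s.index.year + (s.index.month - 0.5) / 12.0
    slope = np.polyfit(years, s.values, 1)[0]  # units per year
    return 100.0 * slope

print("old adjustment trend:", trend_per_century(old_adj))
print("new adjustment trend:", trend_per_century(new_adj))
```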
These graphics strongly indicate to me that the effect of the algorithm – regardless of whatever good intentions may underlie it – is that data from lower-quality stations is being blended into the presently archived Orland data. I presume that something similar is happening to other “good” stations, though I’ve examined only one example so far. (Note that Orland is a CRN3 station. However, its excellent continuity makes it a pretty attractive station for benchmarking, and visually it doesn’t look like a “bad” CRN3 station.)
Based on this example, it looks like NOAA’s Talking Points comparison is between the overall average and 70 “adjusted” stations – AFTER the good stations have been adjusted. :)