The new USHCN was scheduled to come out a couple of years ago. A paper describing it has finally appeared, discussed by Pielke Sr here. I haven’t reviewed the new paper yet – one thing that I’ll be looking for is whether they rely on “homemade” changepoint methods to supposedly achieve homogeneity – “homemade” in the sense that the changepoint methods were developed within USHCN and are not algorithms described in Draper and Smith or a similar statistical text, or in statistical literature off the Island.
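To make the distinction concrete, here is a minimal sketch of the kind of changepoint test that does have a textbook pedigree – a single mean-shift CUSUM statistic with a permutation significance check, of the sort described in standard statistical references. The function name, the synthetic series, and the step size are illustrative assumptions on my part; this is not the USHCN adjustment algorithm.

```python
import numpy as np

def cusum_changepoint(x, n_perm=1000, seed=0):
    """Locate a single mean-shift changepoint with a textbook CUSUM statistic.

    Returns the most likely changepoint index and an approximate p-value
    obtained by permutation. A generic illustration only, not the USHCN
    homogenization procedure.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    def max_cusum(series):
        # Cumulative sums of deviations from the overall mean.
        s = np.cumsum(series - series.mean())
        return np.max(np.abs(s)), int(np.argmax(np.abs(s)))

    stat, idx = max_cusum(x)
    # Permutation test: shuffle the series to estimate the null distribution.
    perm_stats = np.array([max_cusum(rng.permutation(x))[0] for _ in range(n_perm)])
    p_value = float(np.mean(perm_stats >= stat))
    return idx, p_value

# Hypothetical monthly anomaly series with a 0.5-unit step introduced at index 120.
rng = np.random.default_rng(1)
series = rng.normal(0.0, 0.3, 240)
series[120:] += 0.5

cp, p = cusum_changepoint(series)
print(f"estimated changepoint at index {cp}, permutation p-value {p:.3f}")
```

The point of the sketch is only that methods of this kind are documented, testable, and have known error properties – which is exactly what one wants to verify about any in-house variant.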
If so, intuitively, I’m suspicious of the idea that software by itself is capable of fixing “bad” data. For me, one of the main lessons of the Hansen Y2K episode was that it refuted the claim that Hansen’s wonder adjustments were capable of locating and adjusting for bad data – simply because the GISS quality control mechanisms were incapable of locating substantial Y2K jumps throughout the USHCN network. The argument with Mann’s bristlecones is similar – Mann’s “fancy” software was incapable of fixing bad data; in that case, the opposite occurred: it magnified bad data.
These are the sorts of things that one has to watch out for when a “fancy” method without a lengthy statistical pedigree is introduced to resolve a contentious applied problem.