one has to consider the quality of satellite information from the 1930s, which, as I understand it, is less complete than satellite information from the 1990s

I would think that the “satellite information from the 1930s” is non-existent rather than “less complete”.

I posted a request for any updates or new threads on this topic many months ago. It doesn’t seem to have gotten through. If further work has been done comparing the CRN12(3?) stations to GISTEMP, where can it be found?

If this project has been abandoned, could you let us know? The work has been very impressive.

As noted in the post above, I wanted to apply other change point methods to the same data. I found an excellent link summarizing the various regime change/change point methods here:

http://www.beringclimate.noaa.gov/regimes/rodionov_overview.pdf

The method I used is described here:

http://ams.allenpress.com/archive/1520-0442/15/17/pdf/i1520-0442-15-17-2547.pdf

and involves a simple two-phase linear regression scheme in which an F statistic is calculated for every possible two-part division of the main time series, with a minimum of 9 points in either part. The F statistics for all divisions are then compared to find the maximum F value, and that maximum is tested against the F value that would occur by mere chance. If the series produces a statistically significant maximum F, it is divided at that point and the two resulting series are put through the same analysis to determine whether further statistically significant maximum F values exist. This continues exhaustively until no maximum F value exceeds the limit used for statistical significance (in my case it was set at p = 0.05).
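For anyone who wants to try this at home, here is a minimal sketch of the two-phase regression F test as I understand it from the linked paper. The function names are my own, and a real application would take the significance threshold from the appropriate F distribution (or from simulation) for the given series length rather than from a fixed number:

```python
import math

def _sse_linear(x, y):
    """Sum of squared errors of an ordinary least-squares line fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def max_f_split(x, y, min_seg=9):
    """Return (best_split_index, max_F) for a two-phase regression.

    For every split with at least min_seg points on each side, fit
    separate lines to the two segments and compare the 4-parameter
    two-line model against the 2-parameter single-line model:

        F = ((SSE_one_line - SSE_two_lines) / 2) / (SSE_two_lines / (n - 4))

    The caller then tests max_F for significance; if significant,
    the series is split there and each half is analyzed the same
    way, recursively, until no significant maximum remains.
    """
    n = len(x)
    sse1 = _sse_linear(x, y)
    best = (None, 0.0)
    for c in range(min_seg, n - min_seg + 1):
        sse2 = _sse_linear(x[:c], y[:c]) + _sse_linear(x[c:], y[c:])
        if sse2 <= 0:
            continue  # perfect two-line fit; degenerate for the F ratio
        f = ((sse1 - sse2) / 2.0) / (sse2 / (n - 4))
        if f > best[1]:
            best = (c, f)
    return best
```

On a series with a genuine slope break, the maximum F lands at (or very near) the break, which is the behavior described above for the CRU verification series.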

Fortunately, the method described there is the same one used by Menne in his paper here:

http://ams.confex.com/ams/pdfpapers/100694.pdf

I was able to use the CRU data set in the Menne paper to verify that I was applying the two-phase linear regression scheme properly: I matched all three change points for the CRU series from 1860-2005.

The regression scheme detected **no** statistically significant change points in my CRN45-CRN123 anomaly difference series. The F values did, however, peak very close to the 1958 change point year that I found with the mean regime method I used earlier. From all this I conclude that while the difference in the anomaly trends between the CRN45 and CRN123 stations may have been concentrated in the 1950s, an objective analysis indicates that the changes probably occurred over a longer period of time. In general, I also conclude that subjectively eyeballing a change point from a graph can be very misleading, and wrong.

I read the Menne paper, and as I recall the change point(s) determined there were attributed to climate change regimes. They would not correspond to the regime change I determined for the CRN45-CRN123 temperature anomalies, which better fits a period of changing quality differences between the CRN123 and CRN45 stations, the same differences the Watts team is picking up in its current audit.

The regime change point algorithm that I used was developed to find regime changes in real time as well as after the fact. It was suggested that it might help Bering Sea fishermen detect regime changes in fish populations. I used it because it was available, not because it is necessarily the best instrument for my analysis. This algorithm treats a change point as a break occurring at a single data point, whereas other algorithms I have seen (again in the Menne paper, as I recall) can detect changes occurring over several data points. I need to look further at these other algorithms.

The algorithm that I used also finds change points for standard deviations in a series and when I applied it I found a change point for the CRN45-CRN123 time series at 1958 — the same as for the mean.

Have you had a look at Menne’s paper? It “argues”, kinda sorta, for a 1964 break point.

That would put the 1958 regime change point near the middle of the steep trend in the CRN45-CRN123 anomaly differences and indicate that the CRN45 and CRN123 differences could be concentrated in the 1951 to 1969 time period.

That is 1951 to 1969, not 1958 to 1969.

The algorithm for regime change in means gives the operator choices of significance level, cut-off length for determining a regime change, Huber’s weight parameter for handling outliers, red noise estimation method with subsample size, and the use of prewhitening of the data.

http://www.beringclimate.noaa.gov/regimes

I used the NOAA Excel add-in to look for regime changes in my previously reported time series of the temperature anomaly differences CRN45-CRN123 over the period 1920-2005, using the USHCN urban data set and the latest available Watts CRN station ratings. I used only those stations that have complete USHCN data for the period 1920-2005.

I did variations on the tunable parameters in order to get an idea of the sensitivity of the resulting regime change(s) to the parameter selections. What I found is what the authors of the algorithm claimed, i.e. the parameters with real effects are the probability and the cut-off length. A cut-off length of 3 years found no significant regime changes (indicating to me that any changes did not occur so fast that a 3-year cut-off would detect them). 5- and 10-year cut-offs always found a regime change at 1958, and only at 1958, regardless of the other parameters used, with the exception of probabilities larger than 0.10. A 20-year cut-off found the 1958 regime change under the same conditions as the 5- and 10-year cut-offs, plus an indication of one at 2005 (which would take more years to truly confirm as a trend, since 2005 was a single year at the end of the series). Raising the probabilities beyond 0.10 showed change points at 1934, 1945, 1976 and 2005; these appeared to me to reflect the noisy character of the station differences rather than anything statistically significant.
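To make the role of the cut-off length and significance level concrete, here is a much-simplified stand-in for the mean-shift test, not Rodionov’s actual STARS algorithm (no Huber weighting, no red-noise prewhitening, no regime-length extension step), and the function name and fixed t thresholds are my own. The critical value would really come from a t table for the chosen p and degrees of freedom:

```python
import math

def mean_shift_candidates(series, cutoff, t_crit):
    """Flag indices where the mean of the `cutoff` points after the
    index differs from the mean of the `cutoff` points before it by
    more than t_crit standard errors (two-sample t, equal n).

    A longer cutoff averages over more points, so it can resolve a
    smaller shift; a shorter cutoff only sees fast, large shifts,
    which matches the 3-year-vs-10-year behavior described above.
    """
    hits = []
    for i in range(cutoff, len(series) - cutoff + 1):
        a = series[i - cutoff:i]
        b = series[i:i + cutoff]
        ma, mb = sum(a) / cutoff, sum(b) / cutoff
        va = sum((v - ma) ** 2 for v in a) / (cutoff - 1)
        vb = sum((v - mb) ** 2 for v in b) / (cutoff - 1)
        se = math.sqrt((va + vb) / cutoff)
        if se > 0 and abs(mb - ma) / se > t_crit:
            hits.append(i)
    return hits
```

Sweeping `cutoff` over (3, 5, 10, 20) and `t_crit` over values corresponding to p = 0.05 and p = 0.10 is the kind of sensitivity exercise described in the paragraph above.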

Going back to the CRN45-CRN123 anomaly differences over the 1920-2005 time period (see Post # 271 in this thread), one could make a case for a flat trend difference 1920-1950, followed by a relatively steep trend 1951-1969, and then another flat trend difference from 1970-2005. That would put the 1958 regime change point near the middle of the steep trend in the CRN45-CRN123 anomaly differences and indicate that the CRN45 and CRN123 differences could be concentrated in the 1958 to 1969 time period.

I think you may be getting into the domain of mixing those errors you know and those errors you don’t know. Kind of like the known unknowns versus the unknown unknowns.

You have hit the nail on the head. The problem with adjustments is that they introduce non-random bias error, as opposed to the random errors due to natural fluctuation and measurement error. You can statistically estimate the effect of random error and indicate the uncertainty in a probabilistic fashion with error bars. However, the amount of bias (which can sometimes dramatically reduce the actual confidence level of the error bounds) cannot be determined by simply looking at the size of the adjustment, because part of the adjustment may or may not be warranted. In that case, if you merely widen the error bars, you won’t have a clue what the actual confidence level of those bounds is.
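The effect of an unmodeled bias on nominal confidence levels is easy to demonstrate with a toy simulation (my own illustration, not anything from the papers above): compute a nominal 95% interval from the random-error part only, then check how often it actually covers the truth when every observation also carries a fixed bias.

```python
import math
import random

def coverage(n_trials, n_obs, sigma, bias, z=1.96, seed=42):
    """Fraction of trials in which a nominal 95% interval around the
    sample mean covers the true value (zero), when each observation
    is true value + fixed bias + Gaussian noise with known sigma."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        xs = [bias + rng.gauss(0.0, sigma) for _ in range(n_obs)]
        m = sum(xs) / n_obs
        half = z * sigma / math.sqrt(n_obs)  # bars reflect random error only
        if abs(m) <= half:
            hits += 1
    return hits / n_trials
```

With no bias the interval covers the truth about 95% of the time, as advertised; a bias of only half a standard deviation (with 25 observations) drops the real coverage to roughly 30%, even though the error bars look exactly the same. That is the sense in which the stated confidence level becomes meaningless.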

The worst part of it is that most of the temperature adjustments seem to be done automatically by the fine software used by the keepers of the flame rather than by scientific justification based on actual knowledge of geography and other conditions. By the way, one approach to get a handle on the adjustment bias might be to look at relationships between other climatic factors such as rain, sun or cloud, etc. and adjusted vs. non-adjusted temperatures.

Many of these data problems have well-defined uncertainty intervals (TOBS, UHI, station moves, etc.). Instead of adjusting the data, the error bars should be widened. In some cases, like station moves, the uncertainty would be huge; however, the data can still be used provided the uncertainty is carried through to the end. When developing trends, these uncertainty intervals would likely shrink to something useful even if they did not disappear.
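The claim that large per-point uncertainties can still shrink to a useful trend uncertainty follows from weighted least squares, where each point carries its own stated error. A sketch (my own illustration, treating each point’s uncertainty as a known standard deviation):

```python
import math

def wls_slope(x, y, sigma):
    """Weighted least-squares slope and its 1-sigma uncertainty,
    with weights 1/sigma[i]**2 and each sigma[i] treated as known.

    The slope variance is 1 / sum(w_i * (x_i - weighted mean)^2),
    so many points spread over a long record shrink the trend
    uncertainty far below any single point's error bar.
    """
    w = [1.0 / s ** 2 for s in sigma]
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    b = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y)) / sxx
    return b, math.sqrt(1.0 / sxx)
```

For example, 80 annual anomalies each carrying a (widened) 0.5 degree uncertainty still yield a trend uncertainty of only a few thousandths of a degree per year, so carrying honest per-point uncertainties through to the trend, rather than adjusting the data, does leave something usable, provided the errors really are random rather than the bias discussed above.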
