Note that you are using mean of the data as a reference, not the true global mean temperature.

Isn’t the true global mean unknown? Otherwise sampling would be unnecessary. A CI, according to my books, should bracket the respective population parameter with the specified probability. It is necessarily spread around the sample mean, the best estimate available for the true mean. Did I get something wrong?

Thanks. A more ambitious future study would utilize only studies that reported confidence intervals, and then use this information in addition to their conformity to the consensus to establish weights and to compute CI’s for the consensus. Craig has found such CIs for #11 (Holmgren), but I doubt that all the others were so careful.

UC —

The choice of a reference period is admittedly arbitrary, but Craig’s choice of the bimillennial average is quite reasonable for this data set.

UC,

Note that you are using mean of the data as a reference, not the true global mean temperature.

Good point, though I would nuance it slightly differently (and this adds to the issues of the changing number of proxies and the use of weighted means): the CIs presented are not for the global mean, but for the average of the 18 locations (which may be slightly different), or for the average of the 16 locations if we remove some on S/N grounds. Furthermore, if we weight to reduce the variance, this may increase the disparity between the mean of the locations and the global mean.

To get the best estimate of global mean, I suspect, requires a balancing act between:

* Geographical representation (most area covered)

* Weighted mean (to get the best S/N for the given locations)

* Relationship between this and the global mean (perhaps derived by relating limited numbers of weather station data to complete networks?)

I would have thought, given an arbitrary set of proxies and a S/N estimate for each, it should be possible to derive an “optimal” set of weights to minimise CIs *for the global mean* on this basis.
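A minimal sketch of that idea, assuming independent proxies with known noise variances: the weights that minimize the variance of the weighted mean are proportional to 1/V_i (the classic inverse-variance result). The function names and the variance values here are illustrative, not derived from the actual proxy set.

```python
import numpy as np

def optimal_weights(variances):
    """Weights w_i proportional to 1/V_i, normalized to sum to 1."""
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    return w / w.sum()

def weighted_mean_variance(variances, weights):
    """Variance of sum(w_i * x_i) for independent series with variances V_i."""
    return float(np.sum(np.asarray(weights) ** 2 * np.asarray(variances)))

# Invented per-proxy variances for illustration (not actual S/N estimates):
V = [0.2, 0.2, 1.5, 0.05]
w_opt = optimal_weights(V)
w_eq = np.full(len(V), 1.0 / len(V))   # the unweighted average

# Inverse-variance weighting never does worse than equal weighting:
assert weighted_mean_variance(V, w_opt) <= weighted_mean_variance(V, w_eq)
```

This also shows why a very noisy proxy can be "detrimental": under equal weights its large V_i inflates the variance of the average, whereas the optimal weights simply downweight it.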

I believe someone already did a plot similar to this in an earlier thread (limited number of stations vs. full network).

Interesting work! Note that you are using mean of the data as a reference, not the true global mean temperature. That reminded me of Baron Munchausen (bootstrap in English version, in German version he draws himself up by his own hair (?)).

Our search for scientific CIs is progressing, though. Let me try the same method with Mann’s network.

by Hu McCulloch

(Perhaps Steve might want to start a fresh thread on this)

I have now constructed valid CIs for Craig Loehle’s unweighted average global temperature series, using the method I described at #11 above (in the 11/21 thread #2405). The exact standard errors vary with the number and identity of the proxies that are included at any point in time, but generally are around 0.16 dC. A plot of a 95% CI (+/- 2se) about Craig’s series is linked at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/OLSCI.gif and should appear below:
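A hedged sketch of how such CIs can be assembled when the set of active proxies varies over time. The data, per-proxy noise levels, and availability pattern below are invented placeholders, not Craig's series; the per-proxy variances are roughly estimated from residuals about the common average, as in the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 200, 5
true_signal = np.sin(np.linspace(0, 6, T))
sigmas = np.array([0.3, 0.5, 0.8, 0.4, 1.2])        # invented per-proxy noise s.d.
proxies = true_signal[:, None] + rng.normal(0, sigmas, (T, n))
proxies[:50, 4] = np.nan                             # one proxy starts late

mean_t = np.nanmean(proxies, axis=1)                 # unweighted average at each t
resid = proxies - mean_t[:, None]
# Rough per-proxy variance estimates, assumed constant over time
# (biased slightly low because each proxy enters its own mean):
V = np.nanvar(resid, axis=0, ddof=1)

# Standard error of the unweighted mean at time t, using only the
# proxies active that year: se_t = sqrt(sum of active V_i) / n_t
active = ~np.isnan(proxies)
se_t = np.sqrt(np.where(active, V, 0.0).sum(axis=1)) / active.sum(axis=1)
lo, hi = mean_t - 2 * se_t, mean_t + 2 * se_t        # approx. 95% CI
```

Note how the standard error changes wherever the number or identity of the active proxies changes, which is exactly why the exact standard errors vary along the series.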

The MWP is significantly above the bimillennial average during approximately 850-1050. There is also a significantly below-average LIA approximately 1450-1750.

Detailed graphs of the individual series etc. are in a (still very preliminary) note at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/LoehleGraphs.doc. As discussed there, the variances of the 18 series differ considerably. Two of them, #3 Cronin and #10 deMenocal, have such high variances that they are actually detrimental to the variance of the unweighted average of the series. Two others, #6 Korhola and #13 Viau, have very low variances about the mean and so are very informative.

Because of the unequal variances, Weighted Least Squares (WLS) estimates that weight each series in inverse proportion to its variance are more efficient. The standard errors are generally around 0.10 dC, a considerable improvement over the unweighted Ordinary Least Squares (OLS) estimates above. The WLS estimates with a 95% CI are linked at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/WLSCI.gif and should appear below:
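The WLS step can be sketched as follows; the variances below are illustrative placeholders, not the estimated V_i from the 18 series. For independent observations with unequal variances, the inverse-variance weighted mean has variance 1/sum(1/V_i), which is never larger than the variance of the unweighted average.

```python
import numpy as np

def wls_mean_and_se(x, V):
    """Weighted mean of x with weights 1/V_i, and its standard error."""
    x, V = np.asarray(x, float), np.asarray(V, float)
    w = 1.0 / V
    mean = np.sum(w * x) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))      # Var(WLS mean) = 1 / sum(1/V_i)
    return mean, se

# Invented values for illustration:
x = np.array([0.10, 0.30, -0.20, 0.15])
V = np.array([0.04, 0.09, 0.64, 0.02])  # unequal variances

m_wls, se_wls = wls_mean_and_se(x, V)
se_ols = np.sqrt(V.sum()) / len(V)      # se of the unweighted (OLS) mean
assert se_wls <= se_ols                  # WLS is never less efficient
```

This is the mechanism behind the improvement from roughly 0.16 dC to roughly 0.10 dC reported above: the noisy series are downweighted rather than entering at full weight.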

The significant portions of the MWP and LIA are about the same with WLS as for OLS, so thanks to Craig’s data set, the MWP is alive and well. (The little bump in the late 20th century at the right end of the graphs must be Al Gore!)

Only rarely were all 18 series active. Usually 14-16 were available, though occasionally the number fell as low as 12. If there is popular request, I could post the graph of this that appears in the paper linked above.

I was able to replicate Craig’s series from the raw data Steve posted to within an average absolute difference of 0.0018 dC and a max abs difference of 0.0919 dC, but only by using a 29-year rolling mean rather than 30 as indicated in the paper or 31 as Craig says he in fact used in #263 above. The average abs difference was 0.0105 dC and the max abs difference 0.203 dC using a 31-year centered rolling mean, and comparably bad with a 30-year rolling mean, whether 14/1/15 or 15/1/14. I take this to mean that he in fact used a 29-year rolling mean.
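The replication check can be sketched like this, with a synthetic series standing in for the averaged proxies. The idea is simply to smooth the raw series with candidate window lengths and compare mean and maximum absolute differences against the target; only the matching window reproduces it exactly.

```python
import numpy as np

def centered_rolling_mean(x, window):
    """Centered moving average over the given window length."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(1)
raw = rng.normal(size=300).cumsum()      # synthetic stand-in for the averaged proxies
target = centered_rolling_mean(raw, 29)  # pretend this is the published series

results = {}
for w in (29, 31):
    smoothed = centered_rolling_mean(raw, w)
    k = min(len(smoothed), len(target))
    off_s = (len(smoothed) - k) // 2     # align the window centers before differencing
    off_t = (len(target) - k) // 2
    d = np.abs(smoothed[off_s:off_s + k] - target[off_t:off_t + k])
    results[w] = (d.mean(), d.max())

# results[29] is exactly zero; results[31] is not.
```

A 29-year window thus identifies itself unambiguously, which is the logic behind the diagnosis above.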

I believe they had both already said it was meaningless crap, leading me to believe trying to do a real one was similarly meaningless.

Let’s not draw conclusions yet, this is a young thread 😉 MBH98 CIs are clearly meaningless crap (we know the method).

For Loehle07, we haven’t even tried yet. #11 and Loehle07 Fig. 3 are OK starting points, but both methods provide finite CIs for any input data. Next step is to read the original publications. If the original reconstruction errors can be shown to be independent of target temperature, we can come back to the global scale. If not, then there’s more work to do..

Did you try your method and post the results? If you did, I missed it (or didn’t understand it).

I’m just an agnostic bystander, although I do at least semi-understand the basic stuff like confidence intervals and standard deviation, at least as far as the normal distribution goes. Beyond that, I get lost. In order to do error bars at all, don’t you need to know or be able to calculate the +/-?

One of the issues that JEG had was that the c.i. should reflect how well the series were calibrated to temperature. This is not addressed by Hu’s idea, and is what I was attempting above.

As I tried to make clear in #261, the method I proposed back in #11 does not require one to know in advance how well each series was calibrated, since that error just shows up in Vi along with the error of the locality’s true temperature’s ability to measure global temperature. The time series of residuals will tell you the combined effect, provided Vi is constant over time.

Also not addressed by Hu is that the data at any one point in time may be more variable than at others.

Admittedly, but as I already noted, your 30-year smoothing will make them much more homoskedastic than they appear in the plots of your raw data that Steve posted on an earlier thread. If this is still a concern, one could fit a GARCH model to each study’s residuals and still have lots of DOF to spare. But some structure has to be imposed, and as a first cut I think constancy is better than no CIs at all.

How about simple confidence intervals on the mean computed from the data at each point in time? If the data at year t were very variable, the c.i. would be wider, which is what you want.

This works if you’re willing to assume that all studies have equal variances (or known variances), both with respect to their ability to measure local temperature, and with respect to their locality’s ability to measure global temperature. I think that’s a lot worse than just saying they’re unknown and unequal but constant over time.
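The contrast between the two schemes can be sketched as follows (synthetic data; the per-proxy standard deviations are invented). A per-year cross-sectional CI bounces around with each year's small sample, while the pooled estimate, which assumes each proxy's variance is unequal across proxies but constant over time, is stable:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 150, 6
# Invented unequal noise levels across six hypothetical proxies:
x = rng.normal(0, [0.3, 0.3, 0.6, 0.9, 0.4, 1.1], size=(T, n))

# Scheme 1: per-year CI from that year's cross-section alone
# (implicitly treats the n proxies as an i.i.d. sample each year).
per_year_se = x.std(axis=1, ddof=1) / np.sqrt(n)

# Scheme 2: pooled CI, estimating each proxy's variance from its
# full time series and holding it constant over time.
V = x.var(axis=0, ddof=1)
pooled_se = np.full(T, np.sqrt(V.sum()) / n)

# The per-year se fluctuates year to year; the pooled se does not.
```

With only a dozen or so observations per year, the per-year estimate is itself very noisy, which is the practical argument for imposing the constant-variance structure.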

Sam Urbinto (#265):

Seriously, I would think that what UC and Jean S. have already done is fine. Or that the task is either a) impossible or b) not needed. Or both.

Even UC admits that the CI’s he computed were all wrong — they were just intended to be an illustration of how bad Mannian CI’s would have been in this context. And Jean S never computed an alternative that I am aware of.
