Yes, we’ve had a bit of heated anticipation of the anniversary, starting with Steve quoting from one of Kaufman’s CG emails on the first of the month.

This would be the optimal time for any post(s) reviewing the content and significance of the Climategate files, 5 years on….

Of course it’s relevant. If instrumental temperature records can be used as proxies, then surely other instrumental records are valid as well.

The next part is off topic, but I think the use of reconstructions as proxies should be looked at more closely.

I think the authors Tingley and Huybers might have some insights on the use of Bayesian analysis in these matters, even though they appear naive and wrongheaded in their selection of proxies. I found the thread linked below at CA, where it appears that these authors did not compensate varve thicknesses for compaction. As with others doing these reconstructions, one has to wonder what motivates them to jump onto a proxy, or proxy version, that gives the sought-after answer without doing a thorough investigation.

https://climateaudit.org/2013/04/14/tingley-and-huybers-varve-compaction/

Another way in which the real debate is understandable – or should be – to the man on the Clapham Omnibus. So much faux-statistical window dressing, but the host here never lets us lose that basic grounding in reality.

Again, at the end of the day, all one is doing is assigning weights to the various proxies. The problem arises from inconsistent data, not from methods that are insufficiently complicated.

My point here is that the authors with whom I am most familiar, and who make these basic errors in temperature reconstructions, were not using a Bayesian approach.

I was not acquainted with the Tingley and Huybers Bayesian-based reconstruction/analysis. I am sure that a Bayesian approach could be as readily flawed as a frequentist one. I am interested in learning whether some of the Bayesian tools could uniquely offer insights into the analysis of the proxy data. A sensitivity test using various priors might be enlightening.

Kenneth, I strongly disagree with the following distinction.

You say:

the classical frequentist approach in selecting proxies using ex post facto methods – as opposed to using a priori criteria based on some reasonable physical understanding of the proxy response to temperature and other climate variables

There’s nothing “classical frequentist” about ex post screening. Such methodology can be equally criticized by frequentists and Bayesians.

Nor is Bayesianism any sort of magic fix for data analysis. Tingley and Huybers, for example, included upside-down and contaminated Tiljander sediments in their Bayesian analysis.

Again, at the end of the day, all that is happening in these reconstructions is little more than the selection of a vector of weights. More complicated methods will huff and puff longer, but they still only result in a vector of weights.
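The point that any linear reconstruction, however elaborate the fitting machinery, reduces to a vector of weights can be made concrete with a small sketch. The data below are synthetic and purely illustrative; the fitted step (ordinary least squares here, standing in for any linear method) produces a weight vector w, and the "reconstruction" is nothing more than the weighted sum of the proxy series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 100 years, 5 hypothetical proxies.
n_years, n_proxies = 100, 5
X = rng.normal(size=(n_years, n_proxies))                     # proxy matrix
true_w = np.array([0.5, 0.3, 0.1, 0.0, 0.1])
y = X @ true_w + rng.normal(scale=0.2, size=n_years)          # "temperature"

# Whatever the fitting method, a linear reconstruction ends up as a
# weight vector w; the reconstruction itself is just X @ w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
reconstruction = X @ w

# The same reconstruction, written out as an explicit weighted sum
# over the individual proxy series:
explicit = sum(w[j] * X[:, j] for j in range(n_proxies))
assert np.allclose(reconstruction, explicit)
```

A more complicated method would change how w is chosen, not the fact that the result is a weighted sum of the input proxies.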

And the more time is spent on complicated and poorly understood methods, the less attention is placed on the data itself.

Pekka P, thanks for the link to the Tingley paper. On my first quick read I came away with somewhat the same view as SteveM. I would like to see more details on using a Bayesian approach to analyzing the proxy data used for temperature reconstructions.

I have wondered, as a raw beginner in Bayesian analysis, whether hierarchical modeling would be a valuable tool for modeling proxy and instrumental temperature data. I am familiar with the limitations presented by the classical frequentist approach in selecting proxies using ex post facto methods – as opposed to using a priori criteria based on some reasonable physical understanding of the proxy response to temperature and other climate variables – but am not at all sure how a Bayesian analysis could better handle this problem. The frequentist approach to selecting valid temperature proxies that is used, directly or indirectly, by most if not all of those publishing temperature reconstructions is fraught with the basic error of assuming that a temperature signal can be found in these proxies, and that all that is needed to reveal it is some (black-box) method along with a rather subjective selection of proxies. The authors of this paper speak to some of the limitations of the current approaches to making temperature reconstructions, but on my first scan of the paper I did not see any direct language that would confirm the basic errors I see.
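The ex post screening problem described above can be illustrated with a toy simulation (every number here is invented for illustration; it mimics no particular published study). Proxies of pure noise, screened after the fact by correlation with a rising instrumental series over a calibration window, yield a composite that trends upward during calibration even though no temperature signal exists by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

n_years, n_proxies = 1000, 200
calib = slice(900, 1000)                  # final 100 "instrumental" years
temp = np.linspace(0.0, 1.0, 100)         # rising instrumental series

# Pure-noise "proxies": by construction they carry no temperature signal.
proxies = rng.normal(size=(n_proxies, n_years))

# Ex post screening: retain only the proxies that happen to correlate
# with the instrumental record over the calibration window.
corrs = np.array([np.corrcoef(p[calib], temp)[0, 1] for p in proxies])
passed = proxies[corrs > 0.1]

composite = passed.mean(axis=0)
slope = np.polyfit(np.arange(100), composite[calib], 1)[0]
# The screened composite trends upward during calibration even though no
# proxy contains any signal: the apparent "signal" is a selection artifact.
print(f"{len(passed)} of {n_proxies} noise proxies passed; calib slope = {slope:.4f}")
```

The point is method-independent: whether the weights are then assigned by a frequentist or a Bayesian procedure, the selection step has already manufactured a spurious calibration-period trend.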

I would suppose that Bayesians would say that assumptions are made in any modeling, and that their approach admits to this explicitly through the prior. The authors of this paper offer an interesting discussion of choosing priors for Bayesian analysis. I do not know whether the statement excerpted from the paper below offers hope for insights into these matters, but an actual analysis would be most interesting to see.

“In situations where scientific expertise may be equivocal, the Bayesian approach allows for multiple analyses based on different priors; the agreement or differences in results based on these different priors may be of scientific interest in their own right. In many cases, however, the parameters may be hard to interpret or there may be a paucity of reasonable scientific knowledge that can inform prior selection for them.”
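The multiple-priors exercise described in the excerpt can be sketched with the simplest conjugate case, a normal mean with known noise variance (the data and priors below are hypothetical, chosen only to show the mechanics of the sensitivity check):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical proxy-derived temperature anomalies with known noise sd.
sigma = 0.5
data = rng.normal(loc=0.4, scale=sigma, size=20)
n, xbar = len(data), data.mean()

def posterior(mu0, tau0):
    """Conjugate normal-normal update for an unknown mean:
    prior N(mu0, tau0^2), likelihood N(mu, sigma^2)."""
    prec = 1.0 / tau0**2 + n / sigma**2
    mean = (mu0 / tau0**2 + n * xbar / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

# Sensitivity check: same data, different priors.
for mu0, tau0 in [(0.0, 0.1), (0.0, 1.0), (1.0, 0.1)]:
    m, s = posterior(mu0, tau0)
    print(f"prior N({mu0}, {tau0}^2) -> posterior mean {m:.3f} +/- {s:.3f}")
```

Where the diffuse priors agree and the tight informative priors pull the posterior toward their own means, the disagreement itself is informative, which is the paper's point about differences across priors being "of scientific interest in their own right."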
