I would encourage you to post the plots. I think it is germane to the discussion to show how robust (or not) the Loehle reconstruction is.

Thank you very much for posting a “negative” finding. It’s a sign of a scientific and fair mind to do so.

http://www.climateaudit.org/?p=2380#comment-161601

For what it’s worth, I re-analyzed the data using what I consider a better method of normalizing. The results are very close to the published Loehle results. I am not posting a new plot because it would not add anything to the discussion.

If anyone’s interested, I normalized this way:

1. Linear interpolation between years on all proxies (no extrapolation outside range)

2. Found all years with data from all 18 proxies (~1300 years)

3. Normalized proxies to have zero mean over the common years
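As a rough sketch, the three steps above might look like this in Python. The data here are entirely synthetic stand-ins (the real proxies, their year ranges, and their values are not reproduced), so only the procedure, not the numbers, reflects the comment:

```python
import numpy as np

# Illustrative only: 18 synthetic proxies sampled at irregular years.
rng = np.random.default_rng(0)
proxies = []
for _ in range(18):
    # Each synthetic proxy spans the full period; real proxies would
    # start and end at different years.
    years = np.sort(np.concatenate(([1, 1950],
                                    rng.choice(np.arange(2, 1950), size=58, replace=False))))
    values = rng.normal(loc=rng.normal(), scale=0.5, size=years.size)
    proxies.append((years, values))

# Step 1: linear interpolation onto an annual grid,
# no extrapolation outside each proxy's own range.
grid = np.arange(1, 1951)
interp = np.full((len(proxies), grid.size), np.nan)
for i, (yrs, vals) in enumerate(proxies):
    inside = (grid >= yrs.min()) & (grid <= yrs.max())
    interp[i, inside] = np.interp(grid[inside], yrs, vals)

# Step 2: years with data from all proxies.
common = ~np.isnan(interp).any(axis=0)

# Step 3: subtract each proxy's mean over the common years.
normalized = interp - interp[:, common].mean(axis=1, keepdims=True)

print(np.allclose(normalized[:, common].mean(axis=1), 0.0))  # True
```

The zero-mean check at the end holds by construction: each row has its own common-period mean removed.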

And include Antarctic ice cores. And more sediment cores. And more speleothems….

I read JEG’s posts over at his blog on this…

http://thatstrangeweather.blogspot.com/2007/11/loehle-reconstruction.html

http://thatstrangeweather.blogspot.com/2007/11/let-audit-climate-audit.html

Also note this interchange:

CL: “I don’t have my lifework tied up in this. Maybe we could collaborate on the remake?”

JEG: “I appreciate the offer. I need to read some of your other work to decide if your work ethics are compatible with mine. Perhaps we can meet ?”

I think JEG is going to be a real asset here. I can put up with a smartass who really knows his stuff.

So, bonjour, Julien. Willkommen to CA!

Cheers — Pete Tillman

–

“Few scientific problems are so often discussed yet so rarely decided
by proofs, as whether climatic relations have changed over time.”

— Joachim von Schouw, 1826.

It just seems JEG hasn’t been around here long enough, or isn’t familiar enough with the issues and side issues of everything that’s happening. I hope he sticks around long enough to become better acquainted with things. He says he’ll be back.

What it all boils down to is this. No matter how smart you are, no matter your credentials, no matter what models you use, no matter what the substance does on its own in a lab, and no matter how scientific the process, if you claim “2X of Y = Z” for something in the real world, I say prove it, either with a formula or empirically. If not, it’s a guess. It might be a SWAG, but it’s still a guess.

It’s not up to me to prove your proxy is a good one. It’s your job to prove it is, or to provide the data and methods to replicate it. Craig’s willingness to take criticism, to make his paper better and more accurate, to have it audited and verified, and to make his methods and sources open and available is how everyone should behave, if anyone wants policy makers to take this field of climate and its results seriously.

Anyway, that’s just in general. I have the feeling this paper, along with any improvements based upon the multi-disciplinary review of it here, is going to change a lot of things about “business as usual”.

Speaking of that, Hansen is working on a new paper.

Hansen, J., 2008: Tipping point: Perspective of a climatologist. In The State of the Wild 2008: A Global Portrait of Wildlife, Wildlands, and Oceans. E. Fearn and K.H. Redford, Eds. Wildlife Conservation Society/Island Press, in press.

Yes, group delay is an unfortunate issue. It may be possible to back off somewhat on the smoothing (e.g., go down to 20- or 15-year, or even 10-year averaging) to get closer to the data end-point. You would have to look at the data to see whether that is feasible/sensible.
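The arithmetic of that trade-off is simple for a centered w-year moving average: the average is undefined for the last (w − 1)/2 years of the record, so a shorter window reaches closer to the end-point. A minimal illustration (the series below is a synthetic stand-in, not any of the proxy data):

```python
import numpy as np

def centered_smooth(x, w):
    """Centered w-point moving average; only fully covered points are returned."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

series = np.sin(np.linspace(0.0, 6.0, 2000))  # stand-in for a 2000-year annual series
for w in (30, 20, 15, 10):
    lost_at_end = (w - 1) // 2  # years with no centered average at the end of the record
    smoothed = centered_smooth(series, w)
    print(w, lost_at_end, smoothed.size)
```

With a 30-year window the centered average stops 14 years short of the data end-point; with a 10-year window it stops only 4 years short.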

I throw down the challenge: how do I carry uncertainty from the raw data through the temperature recon at irregular intervals into the final smoothed curve?

Don’t bother. Just calculate the standard deviation at a bunch of representative points on your 18 proxy curves, after all processing, and plot them as error bars. Each of the curves is supposed to be a measurement of temperature corrupted by some unknown (and inhomogeneous) noise process. Since you are just performing a straight average of the curves, plotting standard deviations between the curves is as good a way as any of summarizing the variance (and hence uncertainty) in the predictions.
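A minimal sketch of that suggestion, with synthetic stand-ins for the 18 processed proxy curves (only the idea of taking the standard deviation across the curves comes from the comment; the data and grid here are made up):

```python
import numpy as np

# Synthetic stand-ins for the 18 proxy curves after all processing,
# sampled on a common annual grid.
rng = np.random.default_rng(1)
years = np.arange(1, 1951)
curves = rng.normal(size=(18, years.size))

mean_curve = curves.mean(axis=0)     # the straight average of the curves
spread = curves.std(axis=0, ddof=1)  # between-proxy sample standard deviation

# Error bars at a handful of representative years, e.g. every 100th year:
pts, bars = years[::100], spread[::100]
print(pts.size, mean_curve.size)  # 20 1950
```

If the goal is uncertainty on the mean curve rather than the scatter of the individual proxies, the usual variant is the standard error of the mean, `spread / np.sqrt(18)`.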

Without a lot more information on the provenance and reliability of each proxy, I’d be pretty wary of any other approach.
