I kinda understand what you are saying … let’s say we have 5 proxies – with Agassiz’s 1.38 deg C one of them, and the rest each at -5 deg C. And say we have 5 “periods” of 5 years each, starting 1920 and ending 1940, with one proxy dropping out each period, so that only Agassiz-Renland is left standing in 1940.

In 1920 the coverage includes all 5 proxy sets – so: [-5 -5 -5 -5 +1.38]

That “average” would be -3.72.

In 1925 the coverage includes 4 proxy sets – so: [-5 -5 -5 +1.38]

That “average” would be -3.41, and the “change” from prior period “average” value would be +0.32.

In 1930 the coverage includes 3 proxy sets – so: [-5 -5 +1.38]

That “average” would be -2.87, and the “change” from prior period “average” value would be +0.53.

In 1935 the coverage includes 2 proxy sets – so: [-5 +1.38]

That “average” would be -1.81, and the “change” from prior period “average” value would be +1.06.

Last, in 1940 the coverage includes just the 1 proxy set – Agassiz: [+1.38]

The “average” would be +1.38, and the “change” from prior period “average” value would be +3.19.

I think that effect is what you are describing, and I can see how you can get a number larger than any one individual proxy.
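The arithmetic above can be checked with a short script (a toy sketch using the made-up numbers from this comment, not real proxy data):

```python
# Toy sketch: a simple mean of 5 proxies where one proxy drops out
# (from the left) each 5-year period, leaving only the +1.38 series.
proxies = [-5.0, -5.0, -5.0, -5.0, 1.38]

averages = []
for start in range(len(proxies)):
    remaining = proxies[start:]          # proxies still reporting this period
    averages.append(sum(remaining) / len(remaining))

changes = [b - a for a, b in zip(averages, averages[1:])]

for year, avg in zip(range(1920, 1941, 5), averages):
    print(year, round(avg, 2))
# averages come out near -3.72, -3.41, -2.87, -1.81, +1.38, and the
# period-to-period changes near +0.32, +0.53, +1.06, +3.19, as above.
```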

But that is in no way remotely representative of the real world – and I simply cannot believe a legitimate scientist would ever publish it.

To try to better understand, I tried some random numbers a little more representative of the range of values in the data. If we use the same method as above but start with [+1 -0.6 +2.14 -1.5 +1.38], then remove one proxy each period (left to right), by the time we get to the last period, 1940, the change per period is +1.44.

It appears that, in order to get a “change” number in the last step that is significantly higher than the one remaining data point, the other proxies in the *prior* period have to be pretty strongly opposite – negative, in this case. However, even if that did occur, it still would not be a remotely accurate representation of the real data, and should have no place in a professional paper.
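That condition can be made precise. With two proxies u and v left in the prior period, the final change is v - (u + v)/2 = (v - u)/2, which exceeds v exactly when u < -v. A quick sketch (again with the made-up numbers above, dropped left to right):

```python
# With two proxies u, v left in the prior period, the final "change" is
# v - (u + v)/2 = (v - u)/2, which exceeds v exactly when u < -v.
def final_change(u, v):
    return v - (u + v) / 2.0

series = [1.0, -0.6, 2.14, -1.5, 1.38]   # the second made-up series above
u, v = series[-2:]                        # the last two after dropping left to right
print(final_change(u, v))                 # (1.38 - (-1.5)) / 2 = 1.44

print(final_change(-2.0, 1.38) > 1.38)    # u < -v: the change exceeds the last value
print(final_change(1.0, 1.38) > 1.38)     # u > -v: it does not
```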

Or I might just still be confused and over my head ;-)

Why are we not concerned about the reality of the big temp drop that we don’t recognize?

Are we subject to “task fixation”, like flying a jet into the back of an aircraft carrier because we are so fixated on getting on the damn deck that we don’t notice the carrier is rising on a swell?

bernie1815, it’s important to understand something. The authors didn’t pick values to perturb series by, and they didn’t pick results they “liked.” What they did is create a thousand perturbed versions of each series to “see what happened.” The idea was to see how dating errors could manifest. From that, you determine what range of values the series could have. The authors may or may not have messed up in the process, but the underlying idea is sound.

Craig Loehle, be very careful when you say it “is not the right way to handle dating error.” There is very rarely a “right way” to do things. There are often several different options, and none are “quite right.”

You may have found a “right” way to handle dating error, but don’t assume it’s the only one. And if your way is not the only right way, pointing people to your approach won’t help them see what’s wrong with Marcott et al’s approach. That makes it seem like nothing but self-promotion. I know that probably wasn’t your intention, but that is the effect.

I’m still chewing on a secondary issue tied to this one. Time-shifting series in a Monte Carlo experiment, as the authors did, will reduce the signal of any given series. However, a lot of things affect the extent of this effect, and it’s difficult for me to estimate what it’d do to the overall results. It seems like it’d act as a smooth with weird coefficients determined by dating uncertainty, but… it’s weird.
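To illustrate the smoothing effect (my own sketch, not the authors’ code – the sine “proxy” and the ±10-step dating error are assumptions for the demo): averaging many randomly time-shifted copies of a series shrinks its amplitude.

```python
import math
import random

# Sketch (not the authors' code): average 1000 randomly time-shifted
# copies of a toy series.  The Monte Carlo time-shifting acts like a
# smoother, so the averaged series has a smaller peak than the original.
random.seed(0)
n = 200
signal = [math.sin(2 * math.pi * t / 40.0) for t in range(n)]  # toy "proxy"

n_sims, max_shift = 1000, 10            # assumed dating error: +/- 10 steps
mean = [0.0] * n
for _ in range(n_sims):
    shift = random.randint(-max_shift, max_shift)
    for t in range(n):
        src = min(max(t + shift, 0), n - 1)   # clamp at the series ends
        mean[t] += signal[src] / n_sims

print(max(signal), max(mean))           # the averaged peak is noticeably lower
```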

Imagine if you had a dozen series with values of -100 in 1920 and 1940. What would happen if you combined them with Agassiz-Renland using the authors’ methodology? Every time a series with -100 drops out, you’d see a huge increase in your average. It doesn’t matter how much Agassiz-Renland changes: as long as its value is larger than the values dropping out, the average will increase. That’s true whether Agassiz-Renland increases from 1920 to 1940 or stays constant. In fact, it could even happen if it decreased over that period.
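A sketch of that thought experiment (my own numbers: twelve series pinned at -100, plus an Agassiz-Renland-like series that slightly *declines*):

```python
# Sketch: twelve series stuck at -100 averaged with one
# Agassiz-Renland-like series that slightly *decreases*; one -100
# series drops out each step.  The average still rises every step.
agassiz = [1.38 - 0.05 * k for k in range(13)]   # 1.38 declining to 0.78

averages = []
for step in range(13):
    remaining = [-100.0] * (12 - step) + [agassiz[step]]
    averages.append(sum(remaining) / len(remaining))

changes = [b - a for a, b in zip(averages, averages[1:])]
print(all(c > 0 for c in changes))   # True: every dropout raises the mean
```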

“Second, even IF you can increase an average above the highest data point value in the group, to me the result would no longer represent reality. It physically did not get warmer than the highest temp achieved during the period?”

Again, I want to stress: you’re focusing on how much a series changes in the period. What matters is the value of the series, not how much it changes. However, you are right that this shows the methodology is bad. It does not work any time series start to drop out, which happens primarily at the endpoints.

“I know it looks like the culprit may more likely be in the redating, but I’d still like to try and learn about this issue as well.”

I don’t agree that is a likely culprit. It certainly is an odd issue with the paper, but redating proxies cannot create the effect we see. It could contribute by changing the lineup of proxies at the end (thus changing which series might be given undue weight), but it could not speak to the methodology.

Steve McIntyre may correct me, but I think what I’ve discussed is the methodological cause of their uptick. I know it can create upticks like that; I’ve done so with synthetic data. It’d be strange if I found something in the authors’ methodology that can create such upticks but it didn’t in this case. Still, I’m open to new ideas.

But again, let me stress this. What matters for this issue is not the change in a series. What matters is the value of that series.

Brandon …. I guess I see where you can increase the average that way; however, I still can’t think through how the average can ever be higher than the highest data point – which Agassiz is for the NH.

And Agassiz shows a 1.38 deg C increase 1900–1940 – actually it’s 0.40 deg C for 1920–1940 – while Marcott shows 1.93 deg C during that time.

Second, even IF you can increase an average above the highest data point value in the group, to me the result would no longer represent reality. It physically did not get warmer than the highest temp achieved during the period?

I know it looks like the culprit may more likely be in the redating, but I’d still like to try and learn about this issue as well.

Do you have a link to a non-paywalled version of the paper?