http://www.tmgnow.com/repository/solar/lassen1.html

RealClimate cited the following paper in a comment on it, indicating that there are errors in the last 15 years.

It sounds like the divergence problem being discussed here. Has anyone looked at both studies?

What are the 0-to-50 and 50-to-100 ranges in their Figure 4 (a 3-panel drawing, of which you show one panel)? I mean in terms of the verification, calibration and uninstrumented periods.

Over what period do they compute rsq? In Figure 4?

What does it mean to compute rsq during the uninstrumented period? And what are the data – or the curve that is compared against the best-fit model?
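For what it’s worth, here is a minimal sketch of what rsq usually means in these verification discussions – the squared correlation between a reconstruction and the instrumental record – and why it can only be computed over instrumented windows. All series and window years below are synthetic, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual series, years 1850-1999 (purely illustrative).
years = np.arange(1850, 2000)
temperature = 0.005 * (years - 1850) + rng.normal(0.0, 0.2, years.size)
reconstruction = temperature + rng.normal(0.0, 0.25, years.size)

def r_squared(obs, model):
    """Squared Pearson correlation between two series."""
    r = np.corrcoef(obs, model)[0, 1]
    return r * r

# rsq needs an observed series to compare against, so it is reported
# separately for a calibration window and a held-out verification
# window; in the uninstrumented era there is nothing to compare to.
calib = (years >= 1902) & (years <= 1980)
verif = (years >= 1854) & (years <= 1901)

r2_calib = r_squared(temperature[calib], reconstruction[calib])
r2_verif = r_squared(temperature[verif], reconstruction[verif])
```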

In their RegEM calculations, they used the MBH98 proxy network AFTER the PC calculations. They apply a multivariate method after the PC calculations, which is a form of partial least squares, although they don’t know it. See my posts on the Linear Algebra of MBH.

The MBH98 multivariate method ends up giving weights to the proxies which overweights the bristlecones enabling them to dominate, since there is no overall signal in the rest of the proxies. I presume that RegEM gives similar weights to the bristlecones, since the answer is so similar. Is RegEM “right”? There are lots of texts on multivariate methods, I’ve been re-reading them lately. If I saw RegEM written up in a real statistics text with a discussion of its properties, then you could begin to think about what’s involved.
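As a toy illustration of that weighting behaviour (all data synthetic; one proxy is made to track the target as a stand-in for the bristlecones – a labelled assumption, not Mann’s actual network or code): covariance-style weights of the partial-least-squares type hand most of the weight to the one tracking proxy when the rest of the network has no common signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 100, 10

# Synthetic target "temperature": a random walk.
signal = np.cumsum(rng.normal(0.0, 0.1, n_years))

# Nine pure-noise proxies with no common signal, plus one proxy
# (a hypothetical stand-in for the bristlecone PC) that tracks the target.
proxies = rng.normal(0.0, 1.0, (n_years, n_proxies))
proxies[:, 0] = signal + rng.normal(0.0, 0.3, n_years)

# Partial-least-squares-style weights: each proxy weighted by its
# inner product with the target over the calibration period.
weights = proxies.T @ signal
weights /= np.abs(weights).sum()

# The tracking proxy ends up carrying far more than its 1/10 share.
share = abs(weights[0])
```

The point of the sketch is only that calibration-based weighting, unlike a plain average, lets one well-correlated proxy dominate a reconstruction.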

The RegEM thing is typical Mannian. You have an obscure description of a method that is unused in the general statistical world, although there is a third-party reference by Tapio Schneider, who doesn’t seem to be in the Hockey Team locker room. But you don’t see this in Draper and Smith or the usual texts. In this case they have provided code, so there’s a chance of wading through it and seeing what they are actually doing.

When I finally sorted out the linear algebra of the MBH98 multivariate method, it boiled down to a method that one could find described in the chemometrics literature – although Mann doesn’t know it – and about which some properties are known. That’s one of the things that I’m working on off air.

It’s possible that worrying through the algebra of RegEM might lead to some other reduction to a known method. Also sometimes the methods “converge” a little since the proxies are almost orthogonal in some systems.

At the end of the day, RegEM is just assigning weights to the proxies. If there’s a temperature signal in temperature-sensitive proxies, you should be able to get it with an average.
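A minimal sketch of that last point, with synthetic proxies that genuinely share a common signal: a plain unweighted average recovers the signal without any elaborate weighting, since the independent noise averages out roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_proxies = 100, 20

# A common low-frequency signal shared by every proxy (synthetic).
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, n_years))

# Each proxy = signal + independent noise.
proxies = signal[:, None] + rng.normal(0.0, 1.0, (n_years, n_proxies))

# A plain unweighted average: noise shrinks roughly as 1/sqrt(n_proxies),
# so the average correlates strongly with the underlying signal.
mean_recon = proxies.mean(axis=1)
r = np.corrcoef(signal, mean_recon)[0, 1]
```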

I’m not done with the reading, but the one snippet where they talk about how well RegEM matches MBH98 is bizarre. Yes, if it were completely independent, that would be spectacular – but what about the much easier inference that it essentially does the same thing as MBH98? How do they square this particular closeness of two methods with the spectacular differences in the Bürger and Cubasch paper for 64 model examples?

Steve: That’s redundant – you shouldn’t use both scare quotes and the word putative. It’s almost like a double negative. Poor writing.

Where is the section that discusses rsq in the text of the paper? I skimmed the paper and could not find it, but it is long.

1. How about the issue of annual data and low-frequency signals with respect to the changing mean?

2. Could you please explain what goes on in the purple part of the figures, where the match is exact, and in the other parts? How do these periods connect to what we’ve heard of in terms of the verification and calibration periods (and then the period that is outside either)?

3. Clarifying question: when you or Mann talk about rsq, what is it you mean? Of what data against what model?

4. The 3 variables that he lists in the examples – are they over the 0-100 period or over the 0-50 period? And what is going on with normalization or standardization (if anything) in the comparisons?
