Benchmarking from VZ Pseudoproxies

Von Storch et al 2004 advocated using climate models to generate pseudoproxies to test the properties of proposed multivariate methods. Hardly unreasonable. I might argue that these are long-winded ways of generating proxy series with certain kinds of temporal and spatial covariance structures, but there’s much to be said for testing methods on some standard data. Their own pseudoproxy networks are much too "tame" to be adequately realistic, but, if you can’t understand what the tame networks do, you’ll never understand what the "wild" networks do – a lack of understanding presently being demonstrated by Wahl and Ammann and other Hockey Team supporters.

Also for all the huffing and puffing about "moving on", there’s nothing wrong with testing multivariate methods against the MBH98 proxy data set as an example of a "wild" network. I don’t think there’s much in it that’s usable for climate studies, but it’s an interesting statistical collection and, at this point, people interested in the field can use it as a benchmark.

For people who have not studied the multivariate statistical literature, it is hard to convey the variety of interconnected multivariate methods. In fields other than climate science, you usually have to prove the validity of a method through benchmarking and explain its statistical properties before using the method. In climate science, these steps do not seem to be required by major journals, such as Science or Nature, despite, in the latter case, having seemingly appropriate policies on paper.

I’ve done reconstructions under various multivariate methods using pseudoproxies from "Region 1" (55 series) from the erik167 run (kindly provided by Eduardo Zorita). I’m still experimenting, but the results seem pretty interesting and one of many things that I should work up further. The cases here are all direct reconstructions of the NH average, without using "climate field reconstructions". The Mannian temperature PC1 closely resembles the NH average. I’ve looked at the following methods:
1. Scaled Composite (average after scaling in the calibration period – 1902-1980 used here after MBH);
2. Ordinary Least Squares (OLS) – here this is inverse multivariate regression (which is what Groveman and Landsberg did);
3. Partial Least Squares – I’m using this label according to my characterization of the actual MBH method as set out in my Linear Algebra posts. Unless modified, this is undetrended. The method as applied here includes rescaling.
4. Partial Least Squares Detrended. This is the much criticized VZ implementation of MBH.
5. PC1 – covariance.
6. PC1 – correlation. Despite the use of PC methods in the tree ring networks, MBH did NOT apply PC methods to the proxy networks formed by collating tree ring PCs with other proxies (the 22-series AD1400 network has 3 PC series; the 112-series AD1820 network has 31 PCs).
7. Principal Components Regression (PCR). This is regression on the principal components, as opposed to reconstructing from the principal components.
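As an illustration of method 1, here is a minimal sketch in Python/numpy. It is not the actual erik167 calculation: the sinusoidal "target", the 50% white-noise proportion and the 79-year calibration window are stand-in assumptions chosen only to mimic the tame VZ setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 580, 55                        # years x pseudoproxies (55 as in "Region 1")
t = np.arange(n)
target = np.sin(2 * np.pi * t / 100.0)          # toy stand-in for the NH average
# tame network: each pseudoproxy is a 50/50 mix of signal and white noise
proxies = 0.5 * target[:, None] + 0.5 * rng.standard_normal((n, p))

cal = slice(n - 79, n)                # short calibration window (cf. 1902-1980)

# scaled composite: standardize each proxy over the calibration period, average,
# then rescale the composite to the calibration mean/sd of the target
scaled = (proxies - proxies[cal].mean(0)) / proxies[cal].std(0)
comp = scaled.mean(1)
recon = (comp - comp[cal].mean()) / comp[cal].std() * target[cal].std() \
        + target[cal].mean()
```

Because the weights are equal, the white noise averages down by roughly a factor of $\sqrt{55}$, and the composite tracks the target closely even outside the calibration window.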

I’ve not checked out Ridge Regression here, much less RegEM. Stone and Brooks 1990 prove that Ridge Regression forms a continuum (with one parameter) between OLS and PLS, so in a general sense one can conclude that Ridge Regression results will be intermediate between OLS and PLS results. There are numerous other plausible multivariate methods – Canonical Correlation Analysis, Lasso methods and so on.

Decomposition by Scale
There’s a lot of talk about low-frequency and high-frequency results in the current debate. However, I don’t think that the examples to date are very helpful. I like looking at wavelet decompositions of variance by scale, aggregating low-, medium- and high-frequency scales together. I used an "la8" wavelet here, but I don’t think that much turns on the choice. I think that the decomposition shown here is more useful than what has been put forward so far in the literature.
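The la8 wavelet code isn’t shown in this post, but the idea of apportioning variance to high/medium/low scales can be sketched with a crude FFT band-split – a stand-in for the wavelet decomposition, not the method actually used for the figures:

```python
import numpy as np

def variance_by_band(x, bands=((2, 8), (16, 64), (128, None))):
    """Share of total variance in each period band (high, medium, low).
    A crude FFT analogue of a wavelet variance-by-scale decomposition."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)       # cycles per year
    power = np.abs(np.fft.rfft(x)) ** 2
    total = power[1:].sum()                      # drop the mean term
    shares = []
    for short_p, long_p in bands:
        hi_f = 1.0 / short_p                     # short period -> high frequency
        lo_f = 0.0 if long_p is None else 1.0 / long_p
        mask = (freqs > lo_f) & (freqs <= hi_f)
        shares.append(power[mask].sum() / total)
    return shares
```

The three shares need not sum to one, because the intermediate octaves (8–16 and 64–128 years) are left out, matching the three scales reported in the figures.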

I’m not going to show the decompositions for all cases, but will show a couple of extreme ones. The first figure shows the OLS reconstruction. You will see that it provides a remarkable fit in the calibration period, but that it has much less low-frequency variance than the target and generally poor out-of-sample performance even in this very tame network. In a later graphic, I’ll show the behavior of the coefficients in this fit. Groveman and Landsberg used this methodology and I can see some of the same features in their coefficient patterns. It looks to me like an unusable reconstruction, and any of the solar results based on correlations of solar proxies to Groveman and Landsberg would accordingly be questionable. This example nicely illustrates "overfitting".

Figure 1. Wavelet decomposition of Reconstruction from Multiple Linear Regression from erik167 pseudoproxies in Region 1. Top 4 panels – NH reconstruction and decomposition by scales (high: 1-8 years; medium: 16-64 years; low: 128+ years). Red – Echo-G NH. Bottom panel – Proportion of variance by scale. Dark cyan is NH target, used as reference in other decompositions.
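The direction of the overfitting effect can be reproduced in miniature. In this sketch (synthetic data under assumed 50/50 signal/noise mixing, not the erik167 run itself), regressing the target on all 55 proxies over a 79-year calibration window inflates the calibration fit relative to out-of-sample performance; in the real, less homogeneous setting the out-of-sample collapse is far more dramatic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 580, 55
t = np.arange(n)
target = np.sin(2 * np.pi * t / 100.0)               # toy NH series
proxies = 0.5 * target[:, None] + 0.5 * rng.standard_normal((n, p))

cal = slice(n - 79, n)                               # calibration period
ver = slice(0, n - 79)                               # everything earlier

# OLS / inverse multivariate regression: target on all proxies at once
X = np.column_stack([np.ones(n), proxies])
beta, *_ = np.linalg.lstsq(X[cal], target[cal], rcond=None)
recon = X @ beta

def r2(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

cal_r2, ver_r2 = r2(recon[cal], target[cal]), r2(recon[ver], target[ver])
```

With 56 coefficients fitted to 79 observations, part of the calibration fit is noise-fitting, so the verification r2 is necessarily lower than the calibration r2.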

At the other extreme is a reconstruction using an unweighted average of scaled pseudoproxies. In this case, there is much better recovery of low-frequency information, as you can see by comparing the proportion of variance in each scale. (BTW the proportion of high-frequency variance in the underlying Echo-G target seems unduly low, but that’s a separate issue.) One of the reasons that this method performs so well is the very "tameness" of the underlying network – the noise is white and its proportion is constant, about 50% in the erik167 example, yielding very high (0.7) correlations to gridcell temperature.

Figure 2. Wavelet decomposition of Scaled Composite from erik167 pseudoproxies. Top 4 panels – NH reconstruction and decomposition by scales (high: 1-8 years; medium: 16-64 years; low: 128+ years). Red – Echo-G NH. Very top panel is 2x the scale of the next 3 panels. Bottom panel – Proportion of variance by scale. Dark cyan is NH target, used as reference in other decompositions.

I’ve done these for all the cases, illustrated in the next graphs, but you should get the idea.

Next here is a spaghetti graph of results from 3 multivariate methods: OLS, re-scaled PLS (MBH98) and scaled composite. You see that MBH-style PLS is intermediate between OLS and scaled composite. The better performance of a scaled composite in recovering low-frequency information was observed in passing in von Storch et al 2004, but, IMHO, they did not pay sufficient attention to this result. In each case, the attenuation of variance comes from a greater proportion of high-frequency variance in the reconstruction than in the original. This is a property of the multivariate method. Moberg’s method tries to avoid this by ensuring that the reconstruction has similar proportions of low-frequency and high-frequency variance – which doesn’t necessarily make that reconstruction "right", but it is at least attentive to the problem and a direction worth pursuing.

Figure 3. Spaghetti graph of selected multivariate methods. The blow-up is not because the period is of intrinsic interest, but simply to show detail a little better at a different scale.

The next graph shows similar results for other multivariate methods, including detrended and undetrended Mannian PLS. Given the amount of hyper-ventilating by the Goon Line about trending versus de-trending in Mannian methods, there is surprisingly little impact in this particular case. It’s actually not even clear that undetrended performs better; I expected to see more difference and don’t entirely understand why I’m not seeing it. I’m wondering whether there is still some discrepancy in the VZ implementation of MBH which actually leads them to overstate the differences. Wouldn’t that be ironic? I’ll chase Eduardo for code on how he did this step. In any event, the impact of detrending versus not detrending is microscopic under my calculations. The more salient issue is the markedly inferior performance of Mannian PLS relative to some simple alternatives such as an unweighted average or simple principal components. (I realize that further issues arise in a less tame network, but the hyperventilation is all about the tame network.)

Figure 4. Another spaghetti graph of selected multivariate methods.

The next graphic compares some common verification statistics for the different methods, again with extremely interesting results. Here I think that there is a great benefit from considering a broader range of multivariate methods. Look at the OLS statistics: a very strong calibration r2, an RE statistic of just under 0.5 (about the range of early-period MBH reconstructions) and a negligible verification r2 (the reason that you can’t see it in the graph is that it’s 0.002).
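For reference, the verification statistics being compared can be computed as follows (standard definitions of RE, CE and r2 as used in this literature; the function name is my own):

```python
import numpy as np

def verification_stats(obs, est, cal_mean):
    """RE, CE and r2 for a verification segment.
    RE benchmarks the reconstruction against the calibration-period mean,
    CE against the verification-period mean (so CE <= RE always),
    and r2 is the squared Pearson correlation."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    sse = np.sum((obs - est) ** 2)
    re = 1.0 - sse / np.sum((obs - cal_mean) ** 2)
    ce = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, est)[0, 1] ** 2
    return re, ce, r2
```

A reconstruction that merely reproduces the calibration mean scores RE = 0; CE is always the harder test, which is why fairly decent reconstructions can still post negative CE values.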

The "good" methods have RE statistics up in the 0.7 range, rather than the 0.4 range. In our GRL article (the Reply to Huybers is our most recent position), we suggested a 0.51 benchmark for RE significance. Bürger and Cubasch is the first article to try to come to grips with this conundrum, although their suggested benchmark of 0.25 hardly follows from their example. In passing, Mann’s review of Bürger and Cubasch is typical of Hockey Team dreck on this topic.

The CE statistics, praised by the NAS Panel, are all negative for fairly decent reconstructions.

Despite recent hyper-ventilation about r2 being a "high-frequency" measure, it contains both low- and high-frequency information. I’m not advocating r2 as a magic bullet, merely arguing against relying on any one statistic. In this case, based on these particular networks, I’d be inclined to say that verification r2 is probably more helpful than CE, but I’m not tied to this position. My objection to Mann’s handling of the r2 was that MBH98 said that this was one of the verification statistics that they calculated and, in IPCC TAR, he claimed "skill" in multiple verification statistics. If they didn’t want to use it, argue that; just don’t claim skill if it isn’t there.

Notice the fifth column, which is the explained variance for the entire period – after all, by using RE statistics in a short verification period, one is hopefully estimating RE statistics for the process, or at least for the longer period. The best performers in this tame network are the unweighted composite and the principal components. Mannian PLS shows poor performance in comparison. All methods under-capture low-frequency variance. Detrending or not in a PLS method makes a negligible difference.

Figure 5. Verification statistics for selected multivariate methods.

Finally, here are plots of the weighting factors for some different methods. These "weighting factors" go by different names – in one case they are called "regression coefficients"; in another, they are the (re-scaled) first eigenvector. A scaled composite has roughly equal weighting factors. In this "tame" network, the principal components methods have virtually identical coefficients and come close to recovering a simple average.

Figure 6. Weighting Factors for Several Multivariate Methods

One can prove theoretically that, in a white noise network with noise mixed in equal proportions, the optimum result comes from assigning exactly equal weights to each series, so that the white noise cancels out. In such a case, the standard deviation of the white noise falls as $1/\sqrt{N}$ (its variance as $1/N$). The more variability in the weights, the worse the performance. For example, if you load your weights on just one series, you get no white noise cancelling and retain the original proportion of noise variance.

If you have both positive and negative weights in this very tame network, then the weights reduce the proportion of signal without any gain in noise cancellation. That’s one thing that caught my eye in the Groveman and Landsberg situation – you are not regressing onto causes, so you don’t want opposite-signed coefficients. Ridge regression and even Mannian implicit PLS are less bad than OLS in this context, but simple averages or simple principal components trump both. You’d think that this would have been done before Mann proposed his method – that’s what’s done in econometrics or any other discipline except climate science, but, hey, they’re the Hockey Team.
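Both points – equal weights cancel white noise at the $\sqrt{N}$ rate, while opposite-signed weights destroy signal without buying any cancellation – can be checked numerically. A sketch under an assumed 50/50 signal/noise mix (my toy setup, not the erik167 data):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 55, 20000
signal = rng.standard_normal(n)                       # common "climate" signal
series = 0.5 * signal[:, None] + 0.5 * rng.standard_normal((n, N))

equal = series.mean(1)                 # equal weights: noise sd falls ~1/sqrt(N)
single = series[:, 0]                  # all weight on one series: no cancellation
mixed = series @ np.r_[np.ones(28), -np.ones(27)]     # opposite-signed weights

def corr(x):
    return abs(np.corrcoef(x, signal)[0, 1])
```

corr(equal) comes out far above corr(single), which in turn beats corr(mixed): the mixed-sign weights cancel most of the common signal while the 55 noise terms add up rather than average out.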

I think that the best way to proceed to a more complicated test case is to use a linear mixed effects method in which you allow for heteroskedasticity. I’ve experimented a little with this on tree ring networks but have run into memory problems. (It’s not that the memory requirements are huge; it’s that I really need to get a new computer – which I’ve been talking about for a year now. I’m always worried about the amount of time to convert, and the time has never seemed right. Maybe this will prompt me to do it.)

Reference script (as an aide-memoire): zorita/make.residuals.txt

1. Steve Bloom
Posted Jul 4, 2006 at 4:50 PM | Permalink

“Hockey Team goon[s]”? “In climate science, these steps do not seem to be required by the Hockey Team house organs, such as Science or Nature”? Steve M., I have to say I remain mystified as to why you feel the need to fling insults like these. They certainly don’t help you get published (anywhere but E+E), and I doubt they’ll do much for the Jerry Norths of the world if and when they do visit here. I guess that leaves red meat for your cheerleaders as the only explanation.

(Note to Nanny regarding an issue raised several months ago: The long-awaited Stott et al paper relating in part to the regional aerosol issue is out, but unfortunately behind a subscription wall. My subscription budget doesn’t extend to JoC, but perhaps you have one. If not, with that many authors a public-access link to it should turn up soon enough.)

2. welikerocks
Posted Jul 4, 2006 at 5:39 PM | Permalink

#1 You said:

I have to say I remain mystified as to why you feel the need to fling insults like these. They certainly don’t help you get published (anywhere but E+E), and I doubt they’ll do much for the Jerry Norths of the world if and when they do visit here. I guess that leaves red meat for your cheerleaders as the only explanation.

And you had to bite first.

Who are you trying to fool anyway with all that “mystified”stuff?

3. Steve Bloom
Posted Jul 4, 2006 at 7:42 PM | Permalink

Sometimes a cigar is just a cigar, rocksy. Steve M. says he wishes to gain credibility for his views in the climate science field, including in the context of publication. Some of the climate scientists and editors he’s trying to impress visit here from time to time, and I’m confident that many of those who don’t will have juicy snippets circulated to them from time to time. My mystification at his use of such terminology is that it actively assists anyone who might be trying to marginalize him and his views. Do you think it helps Steve to have someone like Jerry North visit here only to see many of his (Jerry’s) colleagues referred to as goons and aspersions cast on the two most important science publications? I’ll allow that such language does make this site a lot more entertaining than it would otherwise be, but I don’t recall Steve M. listing the entertainment value of this site as a goal on the same level as the one I mentioned above.

4. Bruce
Posted Jul 4, 2006 at 8:01 PM | Permalink

So what do you think about the NAS report Steve B?

5. welikerocks
Posted Jul 4, 2006 at 8:06 PM | Permalink

Pooh ha ha

You should tell us why the Mann et als of the world shouldn’t be worried about their own behavior within the science community, not the other way around.

6. Steve McIntyre
Posted Jul 4, 2006 at 8:12 PM | Permalink

#1,3. Steve B., if I felt that you had any serious interest in how I do, I’d pay some attention to your advice on how to deal with things. I’ll tell you what – if Eduardo Zorita or Gerd Bürger or someone like that suggests that I tone down or withdraw the description of Ammann and Wahl, then I’ll do so.

The reason why I call Ammann a "goon" is that I sat face to face at lunch with him in San Francisco – yes, I even bought lunch for him – and told him that Wahl and Ammann specifically misrepresented what we’d done. After sitting face-to-face, he went off and didn’t change a comma in the misrepresentations, even knowing that he’d misrepresented us. What else should I call him?

As to Nature and Science, what are you going to say about journals that publish dreck like Osborn and Briffa 2006 and Hegerl et al 2006? I’ll take my chances with the journals when I’m ready.

BTW I put D’Arrigo et al 2006 in a different class entirely from the Osborn dreck. While even DWJ06 gets cute here and there (Yamal versus Polar Urals; post-1985 verification), it’s substantive and Rob Wilson is a decent guy and a very good influence on anything he’s associated with.

Having ventilated a bit, this is a pretty substantive post. Therefore, not because I want to curry favor with anyone, but because I do not want it turned into a discussion of Ammann, I’ll edit down my language, while not acknowledging that descriptions are inappropriate in this particular case.

7. Steve Bloom
Posted Jul 4, 2006 at 8:39 PM | Permalink

Re #4: I think they threaded the needle nicely. Bruce, please bear in mind several things: 1) While it’s true that the TAR used MBH to show something it didn’t really support (this is von Storch’s main criticism), even so the TAR only assigned it a two-thirds probability of being correct. 2) The correctness of MBH was never important relative to the TAR’s main conclusions (i.e., they would have held even with a warmer MWP and a cooler LIA). 3) That an early paper is later shown to have flaws is not considered to be a sin in the world of science. 4) It was extremely clear to everyone on that panel that there was a denialist agenda behind most of the attacks on MBH. 5) The panel members, the NRC and the NAS had no interest in having themselves made into denialist targets. 6) This entire controversy can be expected to fade into obscurity with the publication of the AR4 next year. 7) Steve M. will be treated very nicely in the interim, but if he wants that to continue he should start producing some papers that advance the science.

Re #5: Mike Mann? You mean the scientist who co-authored the lead article in EOS last week? Are we talking about the same person? Seriously, rocksy, you’ve just made it clear what you’re here for.

8. Steve Bloom
Posted Jul 4, 2006 at 9:43 PM | Permalink

Re #6: Steve M., I realize you’re extremely upset at a number of those people, and that’s fair enough. I too get upset at a lot of people (I deal quite a bit with local elected officials), but I try to be aware of the consequences of insulting them in a public place where they and their friends are likely to hear about it. (And of course sometimes I do it anyway, but usually there’s someone there to tell me I just shot myself in the foot.)

That said, when you as the site proprietor use such language it tends to increase the noise level, and believe it or not I don’t just come here to bait the cheerleaders. IMHO some more or less good-natured snarking (which you are quite good at) is enough to keep them coming back without having to descend into insults.

Finally, in my experience scientists tend toward the shy and reticent. I’ll bet if you asked Eduardo or Gerd about it they’d agree that avoiding the nastier insults is probably for the best.

9. Lee
Posted Jul 4, 2006 at 9:46 PM | Permalink

It is *so* fun to read hartlod’s posts….

10. Steve McIntyre
Posted Jul 4, 2006 at 10:02 PM | Permalink

#8. Steve B., I agree with what you say. It’s not good writing practice to use adjectives, as I tell certain people. I was a little tired and lapsed into some self-indulgent writing. The most effective writing is to arrange facts mercilessly so that you don’t need to use adjectives. Lord knows, it’s easy enough to merely arrange facts with the Hockey Team. Thus, as you’ve noticed, I’ve dialed back the text, not to curry favor, but because it was self-indulgent.

One point: I’m not as “upset” with people as you might think. There’s lots of stuff that I don’t worry about. If I was a conventional careerist, I would care. Too many people worry about what Science or Nature might think. I don’t need to be one of them. And, for your information, my correspondence with, say, Karl Ziemelis of Nature has been quite cordial. He doesn’t mind being challenged; he’s very wily and it amuses me to deal with wily people.

11. nanny_govt_sucks
Posted Jul 4, 2006 at 10:07 PM | Permalink

The long-awaited Stott et al paper relating in part to the regional aerosol issue is out

Great. I look forward to reading the part where it says Chinese aerosols cause warming while North American aerosols cause cooling and are somehow able to fly against the wind to the Southern Hemisphere to cause cooling there as well. That part should be interesting.

12. Gerald Machnee
Posted Jul 4, 2006 at 10:13 PM | Permalink

Re #7 – 6) **This entire controversy can be expected to fade into obscurity with the publication of the AR4 next year. **
You are dreaming, Steve B. If AR4 does not do some real science, they will not clarify Global Warming issues, especially if they rely on the same group of people.
**Re #5: Mike Mann? You mean the scientist who co-authored the lead article in EOS last week?**
Is this the article that only uses a select group for references at the end?

13. Steve McIntyre
Posted Jul 4, 2006 at 10:21 PM | Permalink

#12. The Hockey Team practice of only citing one another’s articles was noticed in a blog report on Holivar here. You can get a gist (as I did) from the Swedish-English translation – insert the webpage at http://www.systransoft.com/index.html.

14. Posted Jul 4, 2006 at 10:31 PM | Permalink

Hmm. Hockey Club. I like that better.

15. Jean S
Posted Jul 5, 2006 at 4:58 AM | Permalink

re# 13: This is interesting (from the link):

[Translated from the Swedish:] Another thing Mann took up was that he didn’t consider the medieval warm period (MWP) or the Little Ice Age (LIA) to be global, because he did not see anything like that in his curve. On the other hand, he cited another curve (probably the only time he referred to anyone OUTSIDE his own research group), from Africa, which showed a warm tropics during the LIA and a cold tropics during the MWP. Mann believed the Earth was stuck in an El Niño-like state during the LIA and a La Niña-like state during the MWP. How it is, then, that El Niño warms the whole Earth in the present day (as in the record year 1998) he did not explain. Moreover, he said earlier in his lecture that one should never trust a single curve, so it is a little amusing that he later uses just one curve to “prove” that the MWP and LIA did not exist globally. Mann could, however, stretch to admitting that Europe and eastern North America experienced an MWP and LIA. Nevertheless, he seemed not to have taken in the posters at the conference, several of which showed the MWP and LIA around the world (in South America as well as in Asia and Africa).

So for Mann the ultimate truth is still in his curve, and he does not let any small details like other researchers’ results influence that.

16. Eduardo Zorita
Posted Jul 5, 2006 at 5:25 AM | Permalink

Steve,

I have had a quick look at the results you are showing here, and I really appreciate their constructive aspect: comparing different methods, showing the drawbacks and advantages of each one, etc. I think this can develop into a very interesting and useful paper. Be aware, however, that some people will criticize the use of the ERIK simulation (in my opinion not completely justified), but we could probably find a solution to this.

17. Jean S
Posted Jul 5, 2006 at 5:32 AM | Permalink

re #1/#3/#8: Steve B, is this the way the “real scientists” are talking (from the Holivar report, link in #13):

[Translated from the Swedish:] Then he [R. Bradley] started, quite aggressively, attacking some well-known climate researchers who take a more skeptical attitude to the climate question. Bradley considered them to be bought and financed by the oil industry, especially ExxonMobil. Everyone who spoke against the hockey stick was, in one way or another, Exxon-financed.

18. Jean S
Posted Jul 5, 2006 at 5:35 AM | Permalink

re #16: Well, why don’t you write a joint paper? 🙂

19. Steve McIntyre
Posted Jul 5, 2006 at 5:56 AM | Permalink

#18. Jean S, it’s unfair to Eduardo to try to negotiate something like that on air. He and his associates have been civil to me both publicly and privately; they’ve encouraged me to continue what I do and have sent me large data sets to experiment with. They face a realpolitik as well. Things will be what they will be.

20. Steve McIntyre
Posted Jul 5, 2006 at 6:05 AM | Permalink

#16. It’s amazing how much the critics misunderstand how you used the simulations. For the articles in question, it was nothing other than a method of generating data with a form of spatial and temporal covariance. Having said that, it might be useful to simply specify the forms of covariance.

In a pure white noise model, one can show that the attenuation of low-frequency variance depends on the ratio $\frac{(\sum b_i)^2}{\sum b_i^2}$, where $b_i$ are the weights. Something like this also seems to hold for the forms of covariance in erik plus white noise, but I got stuck at trying to specify the covariance in this format and then wandered off to consider topical NAS issues.
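That ratio is just the "effective number of independent series" implied by the weights; a quick sketch (the function name is mine):

```python
import numpy as np

def n_eff(b):
    """(sum b_i)^2 / sum b_i^2: effective number of series behind weights b.
    Equals N for equal weights and 1 when everything loads on one series;
    for constant-sign weights, white-noise variance in the weighted average
    falls by this factor."""
    b = np.asarray(b, float)
    return b.sum() ** 2 / np.sum(b ** 2)
```

Mixed-sign weights drive the ratio toward zero – n_eff([1, -1]) is exactly 0 – which is the formal version of the earlier point about opposite-signed coefficients.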

21. Jean S
Posted Jul 5, 2006 at 6:20 AM | Permalink

re: #19: I’m unable to grasp the meaning of your reply, but I withdraw from this before any more misunderstandings arise.

22. Michael Jankowski
Posted Jul 5, 2006 at 7:02 AM | Permalink

I have to say I remain mystified as to why you feel the need to fling insults like these. They certainly don’t help you get published (anywhere but E+E)

So publications are heavily based on civility, not their worthiness with regard to science?

Re #5: Mike Mann? You mean the scientist who co-authored the lead article in EOS last week? Are we talking about the same person?

Do you chastize Mike Mann when he flings insults? It doesn’t seem to affect him when it comes to being published in Nature, EOS, etc.

23. TCO
Posted Jul 5, 2006 at 7:49 AM | Permalink

I wonder if using shapes (straight lines, idealized hockey sticks, sinusoids of different frequencies, etc.) would be a better approach than something so far “downstream” as simulated proxies. IOW, you are investigating the method and its tendencies versus data types. Start with the simplest, most stylized cases first. I think this is a good path to insights.

Also, my memory is dim, but I think I asked a long time ago about red versus white noise and how they interact with the MBH method flaws. I think at the time, you said that you did not think the interaction was important. That there were a lot of other issues with autocorrelation, but that you were not concerned here. It sounds lately as if you are coming back to thinking such an interaction is important. (But my memory is dim.)

Regardless of whether I brought it up, I wonder if the important thing about the interaction of red noise with the method (versus white) is just the tendency that a random walk will have for “longer runs” and, in effect, a greater chance of hockey stick shapes.

24. TCO
Posted Jul 5, 2006 at 7:51 AM | Permalink

Steve, lot of content in here, and I will put some brain power to understanding the results later. Not ignoring that there is a lot of work done here.

25. TCO
Posted Jul 5, 2006 at 7:54 AM | Permalink

I think that you should include a method which is PC estimation (using a Preisendorfer’s n of PCs).

26. Steve Bloom
Posted Jul 5, 2006 at 4:45 PM | Permalink

Re #12: “If AR4 does not do some real science…” Oh, please, Gerald. Also, the article in question had to do with hurricanes and was co-authored with Kerry Emanuel.

Re #22: Steve M. has some severe disadvantages when it comes to publication, e.g. no relevant degree, no job history in the field, no lengthy list of prior publications on a breadth of climate topics, difficulty getting qualified co-authors, lack of a personal network in the field. It’s really quite amazing he’s gotten as far as he has. Adding to those disadvantages, however trivially, makes no sense. Mike, OTOH, has no such disadvantages.

27. welikerocks
Posted Jul 5, 2006 at 5:12 PM | Permalink

Especially if SteveM tried to submit to the “Journal of Climate” – a journal of the American Meteorological Society, right?

here is the main page:
http://tinyurl.com/o99eh

This is where Gavin Schmidt, and Michael Mann are listed as editors.

Where Mann, M.E., Rutherford, S., Wahl, E., Ammann, C. got an “independent” paper on “Testing the Fidelity of Climate Reconstruction Methods” published.

Where MM says on the RC blog that the NAS report is “a quickly prepared” report and then gives the link here for his better paper published by the Journal of Science:

http://tinyurl.com/hvr3r

He says at RC:
“that an independent study not cited, but published well before the NAS report was drafted, comes to very different conclusions. This reflects one of a number of inevitable minor holes in this quickly prepared report”

28. welikerocks
Posted Jul 5, 2006 at 5:16 PM | Permalink

re 27:

*Typo*

should say

http://tinyurl.com/hvr3r

29. John M
Posted Jul 5, 2006 at 5:35 PM | Permalink

Also RE: 26, reply to 22

As someone with a PhD in Chemistry who’s interacted with other experts for over thirty years, I must say I’m a little less in awe of degreed people than you are.

If you have some spare time, you might want to check out the April issue of Smithsonian. In it there is an article entitled “Odyssey’s End? The Search for Ancient Ithaca”. Although it’s not about climate science, it relates to paleontology/archeology/geology. It focuses on Robert Bittlestone, a management consultant with no formal training in archeology or geology who is being taken seriously with his theory that the legendary island of Ithaca is now a peninsula of the larger island of Cephalonia. The article points out the analogy to Heinrich Schliemann (a businessman who uncovered the mystery of Troy), and Michael Ventris, an architect, who was the first to translate the Minoan language of Crete.

“…Bittlestone is part of an honorable tradition of inspired amateurs who have made extraordinary discoveries outside the confines of conventional scholarship.”

Interestingly, even the scholars who don’t buy the theory and who are quoted in the article appear to be even-tempered and professional in their criticism. Not sure if this is because the field is more open-minded than climate science or if Smithsonian is just a very professional publication with very responsible editing.

30. Bruce
Posted Jul 5, 2006 at 5:37 PM | Permalink

Re #26: Surprising then, given those impeccable credentials, that Mr Mann’s work should have attracted the comment that it has. Or have I missed something?

31. Steve McIntyre
Posted Jul 5, 2006 at 6:33 PM | Permalink

The funny thing is that the Hockey Team are “amateurs” at statistics. Mann: “I am not a statistician”. And he knows more than the others.

The salient question is not my qualifications but why we are relying on unverified statistical results from people who are unqualified in statistics (aka “climate scientists”).

32. TCO
Posted Jul 5, 2006 at 7:03 PM | Permalink

Bloom, the biggest thing holding Steve back from being accepted is not WRITING papers! Second biggest is poor scoping/organization. Third is lack of feel for what papers belong in what journals.

Oh…but da man is keeping him down. Not.

33. welikerocks
Posted Jul 5, 2006 at 7:25 PM | Permalink

#31

I don’t know if it means anything or not, but when I found “The Journal of Climate” website the first time, it said somewhere on the front:

Please note that the Journal of Climate no longer publishes “Letters”.

and it then had a link next to it that turned out to be a .pdf. I downloaded it by accident. I still have it. The link is not there anymore on the webpage (at least I don’t see it anywhere now; don’t know if I am blind).

It’s kind of interesting to read.
Reasons why as of May 2005, they don’t support a Letters section.

First reason:

the acceptance rate was very poor, with 75% being rejected.

then a paragraph of reasons: reviewers required major revisions, others wanted more information etc. …? (data? perhaps? 😉 )

Reviewers didn’t have time to look at them all, submitting became so popular:

Example they give:

In the first six weeks of 2005, 129 submissions were articles and 7 were letters.

Most of those letters were “screened” and, in parentheses, “rejected by the editors before the review process”. Most that made it through screening were also rejected in the review process.

And it also put an unnecessary “burden” on the reviewers as well as on the authors. Submission of Letters slowed down the publication of articles; something about the “timeliness” of publication wouldn’t occur. Then it said “many of these papers were eventually resubmitted as Notes or Articles.”

The last reason is that they are going to a fully electronic format to reduce publication time, making it less crucial to have a rapid-communication section in the Journal of Climate.

It’s just a one page pdf. It was interesting to read.

34. TCO
Posted Jul 5, 2006 at 7:50 PM | Permalink

Steve, when you write a paper, do you follow the editor’s notice to contributors? How do you follow it, and where do you find it?

35. TCO
Posted Jul 5, 2006 at 7:51 PM | Permalink

http://www.sfwriter.com/ow05.htm

36. TCO
Posted Jul 5, 2006 at 7:53 PM | Permalink

http://www.toomanythoughts.org/blog/2003/12/robert-heinleins-5-rules-for-writers.html

37. Gerald Machnee
Posted Jul 5, 2006 at 8:11 PM | Permalink

Re # 26 -** “If AR4 does not do some real science…” Oh, please, Gerald. Also, the article in question had to do with hurricanes and was co-authored with Kerry Emanuel.**
Yes, Steve B., those were separate statements. The first one refers to AR4 and I meant EXACTLY what I said – they are not doing real science – they are looking at papers to see what they will accept. And they accepted the “hockey stick” without knowing how it was done, and they do not know how to back out of it; plus, papers get accepted when certain people are on the committee.
And I know the other article was on hurricanes and yes, I said the references at the end are limited to certain people and it follows so are the conclusions.
**It’s really quite amazing he’s gotten as far as he has. Adding to those disadvantages, however trivially, makes no sense. Mike, OTOH, has no such disadvantages.**
YOUR mind might find it amazing. However, I am not the slightest bit surprised. When I read the amazing amount of different studies that Steve M quotes and discusses, I realize that this man has a tremendous capacity for absorption and analysis and amazingly – he is doing it on his own time. If he had more time and could get some paid workers, there would be much more gnashing of teeth as the audits and studies surged ahead. You will recall that last year you made a comment that Steve M will soon be forgotten.
Re Mike having no such disadvantages – we had a saying at work – “he has reached his level of ——-” Now he likely has tenure.
What does not surprise me is your continuing childish remarks in this post. Remember what our mothers said “If you cannot say something nice, better not to say anything at all.”

38. Ken Robinson
Posted Jul 5, 2006 at 9:53 PM | Permalink

Just inserting a few “blanks” to force TCO’s long URL off the sidebar so I can read the blog.

42. Ken Fritsch
Posted Jul 6, 2006 at 10:05 AM | Permalink

#8. Steve B., I agree with what you say. It’s not good writing practice to use adjectives, as I tell certain people. I was a little tired and lapsed into some self-indulgent writing. The most effective writing arranges the facts mercilessly so that you don’t need adjectives. Lord knows, it’s easy enough to merely arrange facts with the Hockey Team. Thus, as you’ve noticed, I’ve dialed back the text – not to curry favor, but because it was self-indulgent.

As I recall, Steve M, your intent for this blog was to provide a public venue for you to defend your criticism of the HS and, since others were evidently making that defense more of a personal issue than you had anticipated, to defend yourself. It kind of gives this blog a split personality and I evaluate and judge them separately.

One is the objective analysis and detailing of the work that led to the HS, its defenses, and the criticisms of it. The other is the interaction of the persons involved in the above ongoing discussion at a person-to-person level, primarily as seen through Steve M’s eyes – up front and personal. I think an observant and intelligent reader at this site can differentiate these two aspects of the blog and derive value from both. One is about the direct quest for truth, while the other is a personal view of how the “system” is used in finding and publicizing those truths – with suggestions for change given both explicitly and implicitly. The personal views, which most of us here are qualified to judge, are what I believe make blogs like this one so popular, by showing the human influences and touches that would not be apparent from simply reading a scientific paper.

As a skeptic I do, however, attempt to understand the reactions of those who would see us (not necessarily you, Steve M, or even most of the participants at this site) more as denialists and do think such an understanding of their position can limit the negative content that we attach to it and the space I think we sometimes waste on it.

I think a lot has to do with the circumstantial nature of the evidence for AGW that many have evidently decided is more than sufficient for anyone, except those who they surmise, for any number of reasons, are in denial. Central to that evidence are (1) the recent increases in the concentration of atmospheric carbon dioxide, a gas that can, at least, be shown simplistically to act as an energy trap for the sun’s radiation at the earth’s surface, (2) that these concentrations are different because they are “unnatural” and (3) at least through the late 1990s, accelerated increases of officially measured and presented global temperatures. All leading to the view that, while one cannot proclaim AGW from direct evidence, surely the circumstantial evidence is sufficiently overwhelming that the main effort need only be directed towards looking for concurring more direct or further circumstantial evidence.

While the political sides could just as easily have been reversed, I believe Dan Rather and his cohorts at CBS were sincerely convinced that circumstantial evidence was so good about George Bush’s National Guard duty that they were able to be less than critical of an apparently forged document (as revealed from the work of blogs) that agreed with their position. Like the people behind the Rather episode, some of the more ardent AGW people, I judge, are convinced that they are rendering a great service for the nation and world.

Dr. Mann and his cohorts attempted to further the circumstantial case for AGW – a case which, I am sure, they judge to be in the best interest of the nation and the world. It would appear that they are so convinced of the AGW circumstantial evidence that they judge that, while they may have erred in their methodology in this instance, their main conclusion remains beyond question, and that they need only move on to different methodologies that will support this conclusion more legitimately.

43. welikerocks
Posted Jul 7, 2006 at 7:49 AM | Permalink

#42, very nice overview of the situation, except all the nice nice you can look for has already morphed into political indoctrination in my children’s science classes. I can’t fail to mention taxes and regulations, new policies and fines on everything you can think of relating to GW as well here.

AGW is real and “most scientists agree” in the state that I happen to work and live in.

I believe everyone associated with Rathergate mess was fired.
One can only hope the same for Mann and the cohorts.

44. TCO
Posted Jul 8, 2006 at 9:28 AM | Permalink

Steve: I would imagine that you must get a little tired at times to do a bunch of calculations and then see the discussion center almost entirely on the “you were too snarky” crap rather than getting into the meat of the analyses. Given the unfinished nature of this article, it is hard for me to engage, but I will do my best:

A. Side bar still interfering with reading.

B. Listing of methods: (1) Need a better description of each method: a citation for each, a simple description of what it is, and a description of any differences in your example from the “classical case”. (Be very clear about differentiating the last two sentences/issues for the reader.) (2) There seem to be some parenthetical issues in the method descriptions. Better to edit out. Even if one accepts segues, it is hard to tell whether they are segues or descriptions of the method. (3) Need to add PC estimation (use of a number of PCs). Having two different types of PC1 and no PC estimation (where some number of PCs are retained) is inappropriate given general usage of PCA methods.

45. TCO
Posted Jul 8, 2006 at 9:43 AM | Permalink

C. How are “multivariate methods” generally proved in validity and statistical propensities in other fields? Do you mean that each article that uses one of these methods has some justification for why it is used versus others? Or that there was foundational work done for each one (and should be for Mannian estimation)?

D. I think your comment about “not using climate field reconstructions” is important. Does this mean that you trained on grid cells versus on a larger area? To the extent that Mann did train on the climate field, then, this work will not compare well to his propensities. That said, I’ve always thought the training on large climate averages was a recipe for data mining and statistical rabbit-pulling, but someone needs to show this in an article too. (Since he got away with it and since climatology needs to be educated. We can’t just say it in one-liners, we have to drive it home with multiple bricks to the head.)

46. TCO
Posted Jul 8, 2006 at 9:50 AM | Permalink

E. A simple description of the scaling in the scaling composite is in order. What I really liked about Huybers was the scalar equation that clarified the whole off-center, correlation, covariance kerfuffle.

F. You should add in simple average as a method. Especially since para before makes a comparison of “mann PC1” and NH average.

G. Number 3 is a bit confusing. It seems like you are trying to make a point, argue a label for MBH and the discussion of such is distracting. If you can rewrite, would help.

47. Ken Fritsch
Posted Jul 8, 2006 at 9:52 AM | Permalink

re #43:

..except all the nice nice you can look for has already morphed into political indoctrination in my children’s science classes.

I do not intend to make nice, nice, but have a selfish reason for my approach. We need more “bonus” discussions exemplified by the postings of Andre and Paul Dennis on oxygen 18 isotope fractionation and the changing degree of correlation with temperatures as one goes from the mid to high latitudes towards the tropics. It directly related to the Thompson article and was a point not directly mentioned in the article.

Steve Bloom, I believe, pointed to the Thompson article for Steve M to post and that was also a bonus. John A’s and others’ comments, views and observations of the AGW scene are of interest to me. What is a waste of time to me is when participants go head to head for an extended posting time back and forth with a battle that is much more personal than anything else. I used the term with the modifier “extended” to avoid any self-criticism for my giving unsolicited advice to TCO for giving unsolicited advice to Steve M.

I should also note that Paul Dennis’s posts brought forth some observations and interesting comments by Tim Ball on fossil analyses and the possible incorrect conclusions that could be made in using them as proxies if indeed some of these organisms changed internal processes but retained their external appearance.

48. TCO
Posted Jul 8, 2006 at 9:58 AM | Permalink

H. Number 6: all after first period is extremely confusing and probably irrelevant. Either rewrite, delete or move to a different part of the article.

I. 7: I should know this, but am struggling. What does it mean to regress on the principal components? You make principal components of the grid-cell temps and regress proxies on that? (Does this become like a climate field effect?)

49. TCO
Posted Jul 8, 2006 at 10:27 AM | Permalink

J. Am confused by para on looking at variation by SCALE. How is that different from what other people do? And what do you mean by SCALE?

K. How long is the calibration period? Is it just 78 years? Perhaps this is part of the problem: you only calibrate over the time of a single uptrend, and if you went back to 1850 and captured lower-frequency variation, results would improve?

L. What is the black line? Is that something from the models, and the pseudoproxies are also from the same model but with some noise added? It would actually be good to have a whole paragraph that lays all this out.

50. welikerocks
Posted Jul 8, 2006 at 10:39 AM | Permalink

#47 Ken, I totally agree with you.
I agreed with your first reply too. I added my own from personal experience at home here with my kids. I didn’t mean to sound like I didn’t! It’s a sore spot for me: the schools giving a small one-sided view of GW theory.
And yes, the isotope discussion is outstanding! (I formed my questions hoping that it would become such!) I think your observations are valid and interesting!

51. Lee
Posted Jul 8, 2006 at 10:45 AM | Permalink

One comment on the tropical dO18 discussion.

The NAS report specifically mentions the difficulty with tropical dO18 interpretation, but makes the point that tropical dO18 values are moving outside observed levels, and that therefore the integrated temp/precip effects on dO18 are anomalous.

52. TCO
Posted Jul 8, 2006 at 11:31 AM | Permalink

M. I don’t understand what the high, med, low freq plots are. Are they difference plots? From the reconstruction only? And what happened with the filter to make each different?

M.1. Hard to read the scales on the high, med, low plots. They are just so tiny. Can’t tell if med or low freq are that different from example to example.

M.2. What is desirable behavior of these graphs? Straight line at zero? Not sure how to get inferences from the plots.

M.3. It’s not clear what the desirable shape of the bar charts (light, dark cyan) should be. Zero difference? It would be good to make this a table of numbers or something.

53. Ken Fritsch
Posted Jul 8, 2006 at 1:33 PM | Permalink

re #51:

The NAS report specifically mentions the difficulty with tropical dO18 interpretation, but makes the point that tropical dO18 values are moving outside observed levels, and that therefore the integrated temp/precip effects on dO18 are anomalous.

The cogent (and to me most qualifying and revealing) comment from the Thompson NAS report on this topic is:

Although the factors driving the current dO18 enrichment (warming) may be debated, the tropical ice core dO18 composite (Fig. 6A) confirms that it is unusual from a 2,000-yr perspective. Regardless of whether dO18 is interpreted as a function of temperature, precipitation, and/or atmospheric circulation, the important message clearly preserved in these high-elevation ice fields is that the large-scale dynamics of the tropical climate system have changed.

From Paul Dennis’s comments at post #14 in the “Thompson Remarkably Similar” thread (see below), it would appear that many could agree that the oxygen 18 isotope proxy indicates the climate changed in these tropical locations, but that there are many complications and uncertainties in making statements about temperature. Dennis’s comments also caution that the oxygen 18 proxy can underestimate the magnitude of past temperature changes in the tropics.

The individual plots in the Thompson article showing relatively extended plateaus in recent times when it is assumed that global temperatures (and tropical glaciers receding as pointed to in this same article) were rising at accelerated rates would tend, in my mind, to point to uncertainties in what is being measured here.

The point I’m trying to make is that the relationship between the isotope composition of precipitation and temperature appears to be robust and is exactly what we might expect using a Rayleigh-type distillation model for the atmosphere.

..Now, and here is the very BIG BUT that Andre has pointed out in his excellent graph. At sub-tropical and tropical latitudes the relationship breaks down. The link with temperature becomes difficult to define and appears rather erratic. We believe that rain-out effects during monsoon and intense tropical rain events are very important at these latitudes. Empirically we observe that the isotope composition of precipitation is strongly correlated with the amount of rainfall.

Everything I have said applies to modern day precipitation. If we try and use the observed relationships between precipitation isotope composition and temperature to estimate past temperatures then there are some problems. A common observation is that we underestimate the magnitude of a temperature change. This has been observed in Greenland and temperate latitudes too. In Greenland, the temperature change estimated from the shift in oxygen isotope composition of the ice between the present and a period in the glacial past is lower than that estimated from borehole temperature profiles. In temperate latitudes, for example in the UK, temperature shifts estimated between the glacial and modern using groundwater isotopes are not as large as we estimate using dissolved noble gas mixing ratios.
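The “Rayleigh type distillation model” Dennis refers to can be sketched in a few lines. A minimal sketch, assuming the standard textbook Rayleigh equation; the fractionation factor below is an illustrative round number, not a value taken from the thread:

```python
# Minimal sketch of Rayleigh distillation of oxygen-18 in atmospheric vapor.
# delta0: initial vapor dO18 (per mil); f: fraction of vapor remaining;
# alpha: liquid-vapor fractionation factor (illustrative value, assumed).

def rayleigh_d18o(delta0, f, alpha=1.0094):
    """dO18 of the remaining vapor after a fraction (1 - f) has rained out."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# As rain-out proceeds (f falls), the remaining vapor is progressively
# depleted, so heavy rain-out drives dO18 down independently of temperature -
# the "amount effect" that complicates tropical interpretation.
for f in (1.0, 0.5, 0.2):
    print(round(rayleigh_d18o(-10.0, f), 2))
```

The depletion here is driven by rain-out fraction, not temperature alone, which is exactly why the tropical dO18 signal is hard to read as a thermometer.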

54. Lee
Posted Jul 8, 2006 at 1:43 PM | Permalink

re 51 – Ken, I suspect you’re making a point I’m not picking up, but it looks like your post is just a more detailed restatement of what I said?

55. TCO
Posted Jul 8, 2006 at 2:18 PM | Permalink

N. (Musing for a sec) I wonder if certain methods are more inclined to give good low-freq matches (perhaps at the expense of high-freq matching)? If this were the case, we would want to choose the method that gives us the best low-freq match if what we care about is low-freq fidelity (and I think we do). Note that this is different from saying that high-freq matches are not good. I agree with Z’s nice comment that we need to have some high-freq matching in the validation period to feel like the proxies have any good prediction abilities, any physicality, and are not just nonsense indicators that happen to match the trend of a single sample (going up for 1900–1980). Musing, musing. Sorry, can’t express it better.

56. TCO
Posted Jul 8, 2006 at 2:30 PM | Permalink

O. One of the interesting things is how much better averages do than least squares regressions (to give weightings)!

P. What is the verification period in this example? I think it ought to be 1000–1900, no? Just checking.

Q. All of the methods have very similar verification rsq except for OLS. Why is it so dramatically different and is there any significant difference in the verification rsq’s of the non-OLS crowd?

57. TCO
Posted Jul 8, 2006 at 3:00 PM | Permalink

R. OK, I get it – what you mean about the “good methods”. When you write an article, it would be good to explicitly connect higher calibration RE with higher verification RE (I think that is your point). BTW, it is interesting. Good observation.

S. With all these graphs, would help if you labeled what is the relevant time period. Is the REver, really REcalibration? Have you retained a Mannian calibration, verification and proxy extension framework? Or in this example, do we have calibration and then verification being over the overall period (since we “know” the temp from the model?)

T. I guess I sorta lose track. What is CE and what time frame does it pertain to and why don’t you have a couple of them the way you do with RE and rsq?

58. TCO
Posted Jul 8, 2006 at 3:23 PM | Permalink

U. I would like to see the weighting factors for the scaled coefficients. They won’t be equal, since the scaling has occurred. And in a case (unlike this one, but perhaps like the real MBH data set) where there is some significant variation in the scale of variation, the decision to scale or not will make a significant difference – just as correlation and covariance were significantly different in the MBH case. (Not saying which is right, BTW; saying that turning the knob affects the result with MBH-type data.)

59. Steve McIntyre
Posted Jul 8, 2006 at 3:34 PM | Permalink

U. Scaling makes no real difference here because of the construction of the pseudoproxies – signal plus an equal amount of white noise. It only makes a difference in the North American network because the standard deviation of the bristlecones is lower than that of the other trees.
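This point is easy to check numerically. A minimal sketch, assuming the pseudoproxy construction described above (signal plus an equal amount of white noise); this is a toy network, not the actual erik167 series:

```python
# Toy VZ-style pseudoproxies: each series is the common signal plus white
# noise of the same standard deviation, so every proxy has (in expectation)
# identical scale and rescaling to unit variance is nearly a no-op.
import random
import statistics

random.seed(0)
n_years, n_proxies = 500, 55
signal = [random.gauss(0, 1) for _ in range(n_years)]
proxies = [[s + random.gauss(0, 1) for s in signal] for _ in range(n_proxies)]

sds = [statistics.stdev(p) for p in proxies]
# All standard deviations cluster near sqrt(2) ~ 1.41, so correlation
# (scaled) and covariance (unscaled) weightings barely differ.
print(round(min(sds), 2), round(max(sds), 2))
```

With a mixed network like MBH (bristlecones at a different scale), the spread of standard deviations would be wide and the scaling decision would matter.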

60. TCO
Posted Jul 8, 2006 at 3:36 PM | Permalink

V. “One can prove…” This part seems to make sense from sampling theory and portfolio risk reduction and the like. But still, I would like a citation to the published proof.

W. “If you have opposite signed…” Hmmm. Interesting. By extension, would this also be the same for any differences in weighting (even if all positive)? And in a real data situation, not this pseudoproxy exercise, we would want some weights to go in the opposite direction, no? Although, that raises all the issues of non-physicality, data mining, teleconnections, etc. But sometimes, might be indicated, no?
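The sampling-theory result behind V can at least be illustrated numerically: averaging m proxies with independent noise cuts the noise variance to 1/m. A sketch under an assumed signal-plus-white-noise construction (not the actual pseudoproxy data):

```python
# Averaging m independent-noise proxies shrinks the noise variance by 1/m,
# which is part of why a plain mean can beat fitted weights on a tame network.
import random
import statistics

random.seed(1)
m, n = 55, 2000
signal = [random.gauss(0, 1) for _ in range(n)]
proxies = [[s + random.gauss(0, 1) for s in signal] for _ in range(m)]

mean_series = [sum(p[t] for p in proxies) / m for t in range(n)]
residual = [mean_series[t] - signal[t] for t in range(n)]
# Noise variance of the composite should be close to 1/55 ~ 0.018.
print(round(statistics.variance(residual), 3))
```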

61. TCO
Posted Jul 8, 2006 at 3:46 PM | Permalink

X. “I think that the best way to proceed to a more complicated test case is to use a linear mixed effects method in which you allow for heteroskedasticity.”

Reaction: Dude, that’s a mouthful and a brainful. You would need to explain and elaborate on it. Actually, at this point you’d be better off just splitting the papers: one paper on the foregoing and the next on the linear heteroskedasticity. Build the house solid brick by solid brick. Don’t connect things with straw.

62. TCO
Posted Jul 8, 2006 at 3:49 PM | Permalink

#59 (U). Then you should either do without the scaling or show the difference is insignificant. But you are introducing complexities and deviating from simplest style controls. I would show both if I were you. Maybe at the end of the paper, you need to speculate a bit on how the whole thing would differ with more MBH style input (more of a mixed proxy set). Of course, you could write the whole paper and do it for various types of input data, but then I get really worried about the amount of work and length of the paper. Maybe just do this one for the simple case and do the next one to see interesting effects of more dissimilar input series.

63. bender
Posted Nov 20, 2007 at 11:57 PM | Permalink

How could such a nice opening post get overrun by such trollishness? Hopefully those days are over.

64. Mark T
Posted Nov 21, 2007 at 12:38 AM | Permalink

Judith should read that post referred to in #187 because your wavelet decomposition scheme would be useful in her attempts to isolate the various high/mid/low frequency components of hurricane occurrence vs. climate data. The 5y and 10y periodicity in hurricane occurrence would go some way to explaining (i.e. accounting for) the low count for 2007.

I’m curious if this has been attempted with simple short-time Fourier transform methods (like a spectrogram)? Granted, I must confess to being a fan of wavelets myself (I have history with them), but it would be interesting to see a comparison. Wavelets are good solutions for spiky/intermittent data (I analyzed speech and a heartbeat, for example), but there is a level of regularity in temperature reconstructions (visually apparent) that almost begs for the periodicity of Fourier. Just curious… perhaps an exercise for future efforts of my own? 🙂

Mark
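The STFT suggestion in #64 doesn’t need a toolbox; one windowed-DFT segment is enough to pick out assumed 5y and 10y cycles. A sketch on synthetic data (an illustrative series, not actual hurricane counts):

```python
# One segment of a short-time Fourier transform: Hann-window the data,
# take a DFT, and look for the strongest frequency bins.
import cmath
import math
import random

random.seed(0)
n = 50  # a 50-"year" STFT segment
x = [math.sin(2 * math.pi * t / 5) + math.sin(2 * math.pi * t / 10)
     + random.gauss(0, 0.3) for t in range(n)]

w = [0.5 - 0.5 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]  # Hann
xw = [a * b for a, b in zip(x, w)]
power = [abs(sum(xw[t] * cmath.exp(-2j * math.pi * k * t / n)
               for t in range(n))) ** 2 for k in range(n // 2 + 1)]

# The two strongest bins land at k=5 and k=10, i.e. 0.1 and 0.2 cycles/year
# (the 10y and 5y periodicities).
top = sorted(sorted(range(len(power)), key=power.__getitem__)[-2:])
print([k / n for k in top])
```

Sliding this window along the series and stacking the spectra gives the spectrogram Mark describes; wavelets would trade its fixed time–frequency resolution for a scale-dependent one.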

65. Mark T
Posted Nov 21, 2007 at 12:39 AM | Permalink

Oh, my quote was from bender in the other thread, in which he was referring to this thread, but I felt my comment was more appropriate here.

Mark

67. Posted Jun 12, 2009 at 8:16 AM | Permalink

Steve,

This should be titled “Seven Ways to Make a Hockey Stick”. It must have taken a long time to set up seven different regression-ish methods.

Calibration by assuming what the signal is cannot be used in the proxy literature. If anything changed in climate science, I hope it would be that. Shock and recovery on a random network are all I see here. I don’t mind the PC1 so much, but again the assumption that the result is temperature is hard to swallow.

The OLS method has the highest DOF and allows the method to make the best fit. Of course, more negative thermometers arise (very similar to the optics example in my email). However, negative thermometers are accepted in many of the published results. I don’t understand how some negative thermometers are OK and many are not. If we’re to believe that a flipped proxy is OK, then hell, we should run OLS all day long.

Of course negative weights are crap in most proxies and IMO so is scaling the same type of proxy by different levels.

It’s an interesting post, I’ll read it again later today.

The style of your new work has changed, as have TCO’s posts. I wonder if TCO is cooking his brain with Alcohol – really.

In this case, there is much better recovery of low-frequency information, as you can see by comparing the proportion of variance in each scale

After what I’ve learned here, perhaps it should say there is much stronger amplification of low-frequency information, in contrast to better recovery – not a criticism.
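The point about OLS and flipped proxies in #67 can be demonstrated with a toy case. In the sketch below (my own construction, not anything from the post), both pseudoproxies carry the signal with a positive orientation, yet OLS gives one a negative weight because that cancels the shared noise:

```python
# OLS flips a positively-oriented proxy's sign when doing so improves the fit.
# p1 = signal + noise, p2 = signal + 2*noise; then signal = 2*p1 - p2 exactly,
# so least squares makes p2 a "negative thermometer".
import random

random.seed(2)
n = 300
signal = [random.gauss(0, 1) for _ in range(n)]
shared = [random.gauss(0, 1) for _ in range(n)]
p1 = [s + e for s, e in zip(signal, shared)]
p2 = [s + 2 * e for s, e in zip(signal, shared)]

# Normal equations for signal ~ b1*p1 + b2*p2 (no intercept), via Cramer's rule.
s11 = sum(a * a for a in p1)
s12 = sum(a * b for a, b in zip(p1, p2))
s22 = sum(b * b for b in p2)
s1y = sum(a * y for a, y in zip(p1, signal))
s2y = sum(b * y for b, y in zip(p2, signal))
det = s11 * s22 - s12 * s12
b1 = (s1y * s22 - s2y * s12) / det
b2 = (s2y * s11 - s1y * s12) / det
print(round(b1, 3), round(b2, 3))  # -> 2.0 -1.0
```

In-sample, the flipped weight is exactly what minimizes squared error; whether a negative weight on a positively-oriented proxy is physically defensible is the open question.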