Ferson et al. on Interactions between Data Mining and Spurious Regression

One of the papers that has most informed my views on multiproxy studies (and I’ve mentioned it from time to time) is Ferson et al. [2003], Spurious Regressions in Financial Economics, which I read a couple of years ago. "Spurious regression" here means a false relationship between series, frequently observed with highly autocorrelated series – random walks are the classic example of Granger and Newbold [1974], but the effect is also observable in finite samples of high-AR1 series. Very high AR1 coefficients are characteristic of both proxies and temperature PC series. Some of the phrases from Ferson should send chills up the spine of anyone relying on multiproxy studies:

Data mining for predictor variables [proxies] interacts with spurious regression bias. The two effects reinforce each other because more highly persistent series are more likely to be found significant in the search for predictor variables. Our simulations suggest that many of the regressions in the literature, based on individual predictor variables, may be spurious…

If the expected return accounts for 1 per cent of the stock return variance, mining among 5 to 10 instruments has as much impact as 50 to 100 instruments with no spurious regression. Assuming we sift through only 10 instruments, all of the regressions from the previous studies in Table I appear consistent with a spurious mining process. The pattern of evidence in the instruments in the literature is similar to what is expected under a spurious mining process with an underlying persistent expected return. In this case, we would expect instruments to arise, then fail to work out of sample.

Readers of this blog are familiar with the fact that the classic proxies are hugely autocorrelated, especially the NOAMER PC1 (whose autocorrelation is off the chart, as is that of the Gaspé tree ring series). I’ve been aware of these big autocorrelations for a long time, although I’m only now getting into a position where I can discuss these issues from a more theoretical perspective. There are some interesting connections of Ferson et al. [2003] with ARMA(1,1) processes, an observation made in Deng [2005]. These connections sparked my binge of posts in August on ARMA(1,1) processes, although the connection would not have been apparent and I didn’t mention it at the time. It’s possible that some portion of the autocorrelation in the bristlecones is exacerbated by a non-stationary trend (fertilization). Trends are hard to separate from high autocorrelation. The difference is not material to any points made here.

Obviously, we’re getting into very high AR1 coefficients in both proxies and temperature PC1s. Modeled as ARMA(1,1) processes, the AR1 coefficient is typically even higher than in a pure AR1 model, intensifying the problem in the situations of interest here. Anytime you see AR1 coefficients greater than 0.9, and especially greater than 0.95, warning labels need to be attached to the regressions, as you are in spurious regression territory in finite samples.
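
To make the finite-sample point concrete, here is a minimal Monte Carlo sketch in Python (my own illustration, not code from Ferson et al.; the sample size, rho and trial count are all assumptions): regress two independent AR1 series with rho = 0.95 on one another and count how often a nominal 5% t-test "finds" a relationship.

    import numpy as np
    import statsmodels.api as sm

    def ar1(n, rho, rng):
        # Stationary AR1 series of length n with unit-variance innovations.
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.standard_normal()
        return x

    rng = np.random.default_rng(0)
    n, rho, trials = 100, 0.95, 2000
    rejections = 0
    for _ in range(trials):
        y = ar1(n, rho, rng)
        z = ar1(n, rho, rng)                      # independent of y by construction
        fit = sm.OLS(y, sm.add_constant(z)).fit()
        rejections += abs(fit.tvalues[1]) > 1.96  # nominal 5% two-sided test
    print(f"nominal 5% test rejects in {100 * rejections / trials:.0f}% of trials")

With rho = 0.95 and 100 observations, the rejection rate comes out at several times the nominal 5%; drop rho to 0.5 and it falls back toward 5%. That is the warning-label territory.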

Here is a more extended excerpt (I’ll try to do a more detailed commentary some time):

Abstract: Even though stock returns are not highly autocorrelated, there is a spurious regression bias in predictive regressions for stock returns related to the classic studies of Yule [1926] and Granger and Newbold [1974]. Data mining for predictor variables interacts with spurious regression bias. The two effects reinforce each other because more highly persistent series are more likely to be found significant in the search for predictor variables. Our simulations suggest that many of the regressions in the literature, based on individual predictor variables, may be spurious.

“Unlike the regressions in those papers [Yule 1926 and Granger and Newbold 1974], asset pricing regressions use rates of return, which are not highly persistent, as the dependent variables. However, asset returns are the expected returns plus unpredictable noise. If the expected returns are persistent, then there is a risk of finding a spurious relation between the return and an independent, highly autocorrelated lagged variable.

Where there is no persistence in the true expected return, the spurious regression phenomenon is not a concern. This is true even when the measured regressor is highly persistent. …

Given persistent expected returns, we find that spurious regression can be a serious concern. The problem for stock returns gets worse as the autocorrelation in the expected returns increases and as the fraction of the stock return variance attributed to the conditional mean increases. … we find that 7 of the 17 statistics that would be considered significant using traditional standards are no longer significant in view of the spurious regression bias. We therefore call into question the validity of specific instruments identified in the literature, such as the term spread, book-to-market ratio and dividend yield.

Data mining, in the form of a search through the data for high-R2 predictors, results in regressions whose apparent explanatory power occurs by chance (p. 1410). … In the presence of spurious regression, persistent variables are likely to be mined and the two effects reinforce each other. … If the expected return accounts for 1 per cent of the stock return variance, mining among 5 to 10 instruments has as much impact as 50 to 100 instruments with no spurious regression. Assuming we sift through only 10 instruments, all of the regressions from the previous studies in Table I appear consistent with a spurious mining process.

The pattern of evidence in the instruments in the literature is similar to what is expected under a spurious mining process with an underlying persistent expected return. In this case, we would expect instruments to arise, then fail to work out of sample. With fresh data, new instruments would arise then fail; the dividend yield rose to prominence in the 1980s, but fails to work in post-1990 data. The book-to-market ratio seems to have weakened in recent data. With fresh data, new instruments seem to work. There are two implications. First, we should be concerned that these new instruments are likely to fail out of sample. Second, any stylized facts based on empirically motivated instruments, and asset pricing tests based on them, should be viewed with scepticism.

We see plenty of examples in climate science that correspond to Ferson’s hypothetical analyst. Consider Jacoby searching through 36 sites to locate the 10 most "temperature sensitive". Consider Briffa (or Esper) searching through hundreds of Schweingruber (and other) series. No one knows how the series in Crowley and Lowery were selected, but one presumes that a little "sifting" might have taken place.

Consider Mann’s PC methodology, which, in a sense, automates Jacoby’s procedure (an equivalence that I’ve tested through simulations). Whereas Jacoby’s mining is accomplished by a weighting of 1/N on included series and 0 on excluded series, Mann’s PC1 puts weightings close to 1/N on mined series (bristlecones) and close to 0 on excluded series. You have to think about it a little, but it’s not hard once you see it. I’m inching towards a somewhat more subtle explanation of Mann’s regression module, but it looks like this is a form of data mining as well, with weightings of the 112 variables being high or low according to the data-mining criterion. With 112 variables in a regression-type model for a series of length 79, one would expect some decent calibration R2’s. If the selected predictors are persistent (per Ferson), you will get high REs against a short (48-year) verification period, but poor out-of-sample R2’s. Sound familiar?
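
As a purely hypothetical illustration of the 112-variables-on-79-observations arithmetic (a Python sketch of the linear algebra, not of Mann’s actual regression module): with more regressors than observations, least squares can fit the calibration period exactly even when predictors and target are unrelated noise, so a strong calibration fit carries no information about out-of-sample skill.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cal, n_ver, p = 79, 48, 112            # lengths as in the paragraph above

    X_cal = rng.standard_normal((n_cal, p))  # 112 "proxies": pure noise
    y_cal = rng.standard_normal(n_cal)       # "temperature": unrelated noise
    beta, *_ = np.linalg.lstsq(X_cal, y_cal, rcond=None)  # minimum-norm fit

    X_ver = rng.standard_normal((n_ver, p))  # fresh noise for "verification"
    y_ver = rng.standard_normal(n_ver)

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    print("calibration R2: ", r2(y_cal, X_cal @ beta))  # 1.0: p > n interpolates
    print("verification R2:", r2(y_ver, X_ver @ beta))  # ~0 or negative

The calibration R2 of 1.0 is guaranteed by dimension counting alone, which is why only the verification statistics carry any information in this kind of setup.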

Ferson’s comments about out-of-sample breakdown very much influence my expectations about the post-1980 behavior of bristlecones, Gaspé, TTHH, etc. We’re already seeing complaints by Jacoby that TTHH has broken down as a linear proxy; Briffa complains about some "unknown anthropogenic" factor lowering late 20th century RW and MXD series. Unreported Gaspé re-sampling did not show a hockey stick. If Hughes’ 2002 sampling at Sheep Mountain had shown big ring widths following temperature, we’d have heard about it. The dog didn’t bark.

Just like Ferson’s scenario of new proxies emerging as the old ones fail out-of-sample, isn’t that what we’re seeing now? All of a sudden we’re seeing offshore Oman cold-water diatoms and Briffa-adjusted Yamal tree rings. At any given time, there will be some "proxies" with hockey stick shapes. But do they work out-of-sample? Is this science "advancing", or simply more interaction of data mining and spurious regression?

References:
Ferson, W., S. Sarkissian and T. Simin, 2003. Spurious regressions in financial economics, Journal of Finance 58(4), 1393-1413. http://www.cass.city.ac.uk/faculty/g.urga/files/FersonEtAl2003.pdf
Ferson, W., S. Sarkissian and T. Simin, 2003b. Is stock return predictability spurious?, Journal of Investment Management 4(3), 1-10.
Deng, Ai, 2005. Understanding spurious regression in financial economics. http://econ.bu.edu/perron/seminar-papers/spurious_1.pdf

60 Comments

  1. Ross McKitrick
    Posted Sep 22, 2005 at 10:07 AM | Permalink

    Among the interesting things about the results of Ferson et al. is that they examine stationary but highly autocorrelated series. The Granger-Newbold results concerned random walks, an extreme, nonstationary limit of autocorrelation, and Phillips showed that the usual regression formulae (esp. t and F stats) had no limiting distributions in this case. But Ferson looks at autocorrelated series, which adds to the pile of concerns in three ways: it widens the class of data for which the problem is relevant; cointegration does not resolve the problem (as it does for the degenerate asymptotics in Phillips 1986); and it seems to facilitate this striking amplification between data mining and strong autocorrelation.

    However, the set-up in Ferson differs from paleoclimate work in one respect. The predicted variable r is regressed on the predictor Z lagged once, i.e. r(t) = a + b*Z(t-1) + v(t). In a paleoclimate model the lag is not applied (which actually raises a further problem of simultaneity bias). However, the bias in the t-stat arises because under the null of b=0 the residual v inherits the autocorrelation of r via the “true” model (equation 2 in the paper), in which r is driven by an autocorrelated latent variable. If tree rings actually reflect temperature conditions, there’s a latent heat variable (pun intended) whose persistence will affect v(t) even if the regression model being tested is r(t) = a + b*Z(t) + v(t).
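
    A minimal Python simulation of that set-up (my own coding of the structure described above; rho, the variance share and the sample size are assumed values): returns look nearly serially uncorrelated, but a persistent latent expected return makes the residual autocorrelated under the null b = 0, and a nominal t-test against an independent persistent instrument over-rejects.

        import numpy as np
        import statsmodels.api as sm

        def ar1(n, rho, rng, scale=1.0):
            # Stationary AR1: x[t] = rho*x[t-1] + scale*noise.
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + scale * rng.standard_normal()
            return x

        rng = np.random.default_rng(2)
        n, rho, trials = 200, 0.98, 2000
        rejections = 0
        for _ in range(trials):
            mu = ar1(n, rho, rng, scale=0.1)          # persistent latent expected return
            r = mu[:-1] + rng.standard_normal(n - 1)  # r(t) = mu(t-1) + noise
            Z = ar1(n, rho, rng)                      # independent persistent instrument
            fit = sm.OLS(r, sm.add_constant(Z[:-1])).fit()  # r(t) on Z(t-1), true b = 0
            rejections += abs(fit.tvalues[1]) > 1.96
        print(f"rejection rate under the null: {100 * rejections / trials:.0f}%")

    Even though r itself shows only mild autocorrelation, the rejection rate runs well above 5% – the Ferson mechanism in miniature.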

    From what I can see it is highly plausible to conjecture that the data mining/spurious regression phenomenon is at work in the paleoclimatology area. I would also conjecture that because the models do not use lags there is a simultaneous equations bias, so the slopes are biased as well as the standard errors.

  2. Steve McIntyre
    Posted Sep 22, 2005 at 10:37 AM | Permalink

    What a strange blog. Now Ross and I are chatting on it.

    Ross, I’m not arguing at this point for a precise transposition of Ferson’s results to paleoclimate. Just, as you say, that the issues are clearly in common and should be highly worrying for any paleoclimatologist worried about actual statistical significance, if any such exist.

    I think the transposition is more likely to be based on Deng [2005]’s interpretation of Ferson’s results (Deng’s Lemma 2), where he does away with the lag structures and models the stock returns as an ARMA(1,1) process:

    r[t+1] = rho*r[t] + (u[t+1] + theta*u[t])

    So you have an ARMA(1,1) target with high rho (>0.9). (Deng doesn’t show the transposition, which he says is trivial. No doubt it is, but I haven’t replicated the equivalence yet.)

    Temperature series, especially the ocean ones, model as ARMA(1,1) with high red-zone rho and high theta, just as in Deng [2005]. The principal components of temperature and temperature averages go even further into the red zone.
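
    Here is a small Python sketch of that point (simulated data with assumed coefficients, not actual ocean series): fit the same persistent ARMA(1,1) series both as a pure AR(1) and as an ARMA(1,1), and compare the estimated AR1 coefficients.

        import numpy as np
        from statsmodels.tsa.arima_process import ArmaProcess
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        # True process: rho = 0.95 plus an MA term (statsmodels sign convention).
        proc = ArmaProcess(ar=[1, -0.95], ma=[1, -0.4])
        x = proc.generate_sample(nsample=600, distrvs=rng.standard_normal, burnin=200)

        ar_fit = ARIMA(x, order=(1, 0, 0)).fit()     # pure AR(1)
        arma_fit = ARIMA(x, order=(1, 0, 1)).fit()   # ARMA(1,1)
        print("AR(1) rho estimate:    ", round(ar_fit.params[1], 3))
        print("ARMA(1,1) rho estimate:", round(arma_fit.params[1], 3))

    The pure AR(1) fit typically reports a rho well below the true 0.95, because the MA component pulls the lag-one autocorrelation down; the ARMA(1,1) fit recovers a rho near 0.95. In other words, the persistence is worse than the plain AR1 diagnostic suggests.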

    I don’t see why the ocean temperature series wouldn’t meet the requirements of Deng [2005], with their ARMA(1,1) structure carrying straight over to fulfil the spurious regression requirements relative to persistent predictor proxies. I didn’t talk in my post about the havoc that this wreaks on diagnostics oriented to AR1 (e.g. Durbin-Watson).

    In the construction of some of the tree ring temperature reconstructions, lags come into play. Briffa’s Tornetrask reconstruction reconstructs temperature using both RW[t] and RW[t-1] as predictors (on the theory that RW[t] integrates both T[t] and T[t-1]). However, that’s not material here (I don’t think).

  3. John A
    Posted Sep 22, 2005 at 10:39 AM | Permalink

    Re #1:

    Yet paleoclimatology is transfixed by tree rings and climate models and the climatic “information” contained therein. This infatuation is similar to the infatuation with tech stocks in the late 1990s – “ignore the uncertainties and the losses, buy the rising stock!”

    What we have is a mania, and manias don’t end with a whimper but with a crash. When this mania ends, there’ll be a crisis of confidence in climate science, not just in tree rings but in the whole enterprise (as there was a crisis of confidence in the stock market and capitalism). Rising stars fall very hard, in my experience.

    Your and Steve’s investigation has revealed that science as a whole cannot afford anything less than full disclosure of underlying data, especially if that science is used to inform or shape public policy. It’s the methodology of science – peer review, journal publication and “scientific announcement by press conference” – which will be most under scrutiny.

  4. Dave Dardinger
    Posted Sep 22, 2005 at 11:21 AM | Permalink

    Steve,

    You say, “Briffa’s Tornetrask reconstruction reconstructs temperature using both RW[t] and RW[t-1] as predictors”. Well, we’ve had some discussion about moisture here, and it’d seem to me that RW[t] would be dependent on RF[t] (rainfall) and RF[t-1], since carryover moisture would be a big part of how well a tree would grow in a given season. So this would also need to be thrown into the stew.

    In fact, if you think about it, a given tree’s likely RW[t] should be heavily weighted on some general ‘fitness measure’ (FM) which a tree ends the previous season with, so RW[t] = f(FM[t-1], T[t], RF[t], …). Trying to figure out what items should be in this fitness measure would be interesting, and they are not necessarily identical to the items X[t] which directly affect the ring width of a given year.

  5. Steve McIntyre
    Posted Sep 22, 2005 at 11:45 AM | Permalink

    Tree ring cores are hugely autocorrelated.

    Different authors standardize differently. In the MBH corpus, I can pretty much tell who did a tree ring series by its time series properties, regardless of the species or site – Stahle’s fall in one range; Jacoby and Graybill’s in another; Briffa and Schweingruber’s in a third.

  6. Peter Hearnden
    Posted Sep 22, 2005 at 12:40 PM | Permalink

    Re #3. Oh, another ‘end of the world’ prediction. OK, only the end of the climate science world (and the rest of science if you can manage that, I see) as we know it, but that kind of prediction nonetheless. When is the climate science world going to end, ‘John’? Care to name the date 😉

  7. Mike Hollinshead
    Posted Sep 22, 2005 at 12:51 PM | Permalink

    Re: 3

    John,

    The catastrophic effect that the unmasking of the poor science being practiced in climatology will have on the reputation of the discipline has been on my mind ever since the claims of the modelers emerged. Just the simple counterfact that the modelers, from the get-go and persistently since, have forecast drought for the continental interiors under a warming trend, when all the evidence we have is for the reverse, made me highly suspicious of their model accuracy. To get such a major and well understood feature of climate persistently wrong is a clear indication of their uselessness for policy.

    Understanding climate is so important at this point in history that the consequences could be enormous from the point of view of creating a sustainable future for humans on the planet.

    The consequences of a cooling, which some solar-based models suggest is in the cards, could be catastrophic if it turns out to be a major one. Every major cooling has been accompanied by pandemics of disease with extremely high mortality rates (30% to 50% at the Little Ice Age onset, for example). They have been accompanied by mass migrations which have caused enormous social and economic disruptions. All of the historical invasions of China and Europe by the Steppe nomads were driven by the drying out of the grasslands under the impetus of cooling. Warfare became endemic. Cities and civilizations fell apart in Asia, in Europe and in the Americas.

    Warm periods, on the other hand, have coincided with the emergence and growth of new civilizations.

    Coolings are the problem, not warmings. It is what happens to precipitation which is the key issue, not temperature per se.

    Now add in the fact that the great emerging wealth creators and the planet’s two most populous nations – India and China – are both dependent on the reliability of monsoon rains, which become unreliable and frequently fail under cooler climatic conditions, to feed their populations. You are talking about 36% of world population in those two countries alone, quite apart from populations in the rest of Asia, in Sub-Saharan Africa, the Andean countries and the Mexican highlands, all of which will suffer reduced precipitation under a cooling.

    Are the United States and Canada going to be able to ship them surplus grain? Not a chance, because the Great Plains will dry out as well.

    Now, consider a situation in which climatologists have good evidence that a cooling is in the cards, but the discipline is so much in disrepute that the information is disregarded by policy makers.

    The consequences would be too awful to contemplate.

  8. John A
    Posted Sep 22, 2005 at 1:01 PM | Permalink

    Re #6 Yet another “Peter Hearnden (wilfully?) misreads comment and makes personal attack”. Enough said.

    Re #7

    Mike,

    I agree with your thesis, but by the time the modellers and the bad dendrochronologists finally get debunked, we will be staring at another cooling period (which will be predicted “after the fact”), and the world economy, hobbled by Kyoto-like straitjackets, will be unable to mitigate the social consequences of increased desertification.

    Sometimes I think some people want mass starvation or pandemic in order to justify their core beliefs that everything must get worse.

  9. fFreddy
    Posted Sep 22, 2005 at 1:02 PM | Permalink

    Re #5

    In the MBH corpus, I can pretty much tell who did a tree ring series by its time series properties, regardless of the species or site

    Steve, could you expand on that? What properties are peculiar to each individual, and what does it tell you about their approach to the task?

  10. Mike Hollinshead
    Posted Sep 22, 2005 at 1:12 PM | Permalink

    John A.

    I agree.

    Meanwhile imbecilities like this are being foisted on us.

    British scientists urge big cut in aviation growth to avoid dangerous climate change
    21 September 2005

    Author: By MICHAEL McDONOUGH (Associated Press Writer)
    Provider: Associated Press

    LONDON – Britain should drastically reduce the growth of air travel to bring greenhouse gas emissions within levels that will avoid dangerous climate change, a report by leading environmental scientists said Wednesday.

    http://www.fuelcelltoday.com/FuelCellToday/IndustryInformation/IndustryInformationExternal/NewsDisplayArticle/0,1602,6505,00.html

  13. JerryB
    Posted Sep 22, 2005 at 1:29 PM | Permalink

    John A,

    It seems that Spam Karma has struck again. Mike’s post went away.

    I will guess that that Mike was Mike Hollinshead, whose posts have been disappearing for the past few days.

  14. TCO
    Posted Sep 22, 2005 at 1:49 PM | Permalink

    The data mining thing and the comparison of stocks and proxies is interesting conceptually. On RealClimate, they made an interesting comment about some strained correlations wrt solar forcing. Basically, they said that being at 95% confidence for a given relation is not relevant if you sifted through a hopper of 20 relations to get the one that showed what you wanted. The acid test, of course, is what happens in the future.

    I thought it was a very good comment by the tree huggers, and it reminded me of something I read once about a stock picking scheme. What this guy said you should do to steal people’s money is send out 100,000 letters (divided into 5 groups), picking 5 “hot stocks” and predicting how each will do in the next week (stick to high beta… he didn’t say that… but I like to. It makes me sound all corp-fin). Take the stock that did the best and then send letters to those 20,000 people, dividing again into groups of 5 and assigning new “touts”. Pick the group of 4,000 who had the best performance and then repeat with 5 more picks. Now you will be down to 800 people who think you are the next coming of god. Ask them all to invest in your 10%-load hedge fund. Works like a charm…

    Anyhoo… no one mentioned my example, but it was in my head. Instead someone pointed out to RC, “Great post. Doesn’t Mann have a danger of doing that with his proxies?” The RC response was, “No, because he picked proxies that did well in 1900-1980 and then looked at how they did in 1850-1900.” But I think that method still runs a real danger of cherry picking. How do we know that something analogous to my stock tout scheme was not performed? Of course, the real answer from this dude at RC should have been, “No, we have the results of the picked proxies from 1995-2005 and they are doing great recently.” But he did not come back with that. Or even seem to think that way. Blows me away that he could be so biased. That he could know how to analyze a situation properly and then not apply it to his own side’s science…
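
    For what it’s worth, the arithmetic of the tout scheme is just repeated division by the number of groups – a toy Python sketch, with the numbers taken from the comment above and nothing else assumed:

        # No skill anywhere, yet the survivors have seen nothing but correct picks.
        recipients = 100_000
        week = 0
        while recipients > 800:
            recipients //= 5   # only the fifth that received the winning pick stays
            week += 1
            print(f"after week {week}: {recipients:,} people saw {week} straight wins")

    After week 3 you are down to 800 true believers for the 10%-load fund. Selection did all the work – the same mechanism as sifting a hopper of proxies for the ones that “work” in the calibration period.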

  15. John A
    Posted Sep 22, 2005 at 2:18 PM | Permalink

    Re: #13 Thanks for the heads-up, Jerry. I have restored Mike’s comments, and his karma should now be increased as a result.

    Mike,

    If it happens again, send an e-mail to Steve and I’m sure he’ll sort it out (as I’m travelling at the moment)

  16. Steve McIntyre
    Posted Sep 22, 2005 at 2:26 PM | Permalink

    In stock market terms, if you’ve got an indicator which “worked” in the early stages of a bull market, it might still “verify” in the later stages of a bull market, but it will probably crap out when the bears arrive.

    Secondly, if you apply a Ferson-type approach to RE and verification statistics, it’s pretty easy to see how data mining and spurious correlation interact in this context. One need only look at the CENSORED bristlecone calculations. They checked to see whether their calculations worked without data mining the bristlecones – they didn’t. So they didn’t report them. The whole thing has reminded me of a goofy stock market system from the beginning.

    Again, re the RC response: Mann looked at how “well” his reconstruction did in the verification period – the R2 was ~0. So it didn’t work at all. Therefore, he cherry-picked statistics to report. It is not simply picking a basket of cherries; it’s high-grading the cherries in the basket.

  17. John A
    Posted Sep 22, 2005 at 2:31 PM | Permalink

    Re: #16 He salted the statistical assay and didn’t want to be “intimidated” into showing the split core.

  18. TCO
    Posted Sep 22, 2005 at 2:48 PM | Permalink

    In stock market terms, if you’ve got an indicator which “worked” in the early stages of a bull market, it might still “verify” in the later stages of a bull market, but it will probably crap out when the bears arrive.

    Grrr… stock price short-term movements are not autocorrelated. Market efficiency prevents that. Look at the scatter plots in the academic literature that has examined that…

  19. fFreddy
    Posted Sep 22, 2005 at 3:33 PM | Permalink

    Grrr… stock price short-term movements are not autocorrelated. Market efficiency prevents that. Look at the scatter plots in the academic literature that has examined that…

    Total cock. If you want to know how markets trade, talk to the traders, not the academics. And try not to say things like “the market has no memory”, or there will be sniggering.

  20. ClimateAudit
    Posted Sep 22, 2005 at 5:28 PM | Permalink

    Re #18: You can make highly autocorrelated stock price series – look at my Tech PC1.

    If you data mine, you can find indicators that have high R2. If you mine further, you can find indicators that have “calibration” R2 and short verification period RE.

    I think that you’ve got my position backwards. I don’t believe in goofy systems. I haven’t tried to articulate my view of stock prices conceptually, but I would probably have a Mandelbrotian view. For example, let’s say that you have a trend in tech stocks; I might think that the prices are autocorrelated, so that the “trend” is likely to continue, but still not think that it was a good investment. How do you reconcile this? Non-normality and mean-reverting wildness. The odds may favor the continuation of the trend (autocorrelation), but the correction would be extreme and sharp. Sort of an anti-lottery.

    But let’s not talk about this. I haven’t thought about this a whole lot and I’ve got enough to think about.

  21. Paul Penrose
    Posted Sep 22, 2005 at 5:46 PM | Permalink

    Re #6: Just as the cold-fusion area is now in disrepute because of bad science, so climate science will be in the near future. You can count on it – the hammer will fall.

  22. John S.
    Posted Sep 22, 2005 at 5:48 PM | Permalink

    WRT #14 et cetera.

    Out of sample prediction (a la MBH) looks good when written up in journals, but that is not what actually goes on. The general approach is that you withhold some data as a test of the validity of your model. Only models that work with the withheld data are then considered valid. It seems very stringent. However, another approach that sounds less stringent is structural stability. In this, you run a model on the full data set and check whether or not it changes in any subsamples. In practice these are identical.

    The problem is that the researcher knows of the full data set and will only report a model that works on both the full data set and the chosen subsample. If they write it up as having done the subsample first and then the full sample next, that is called out of sample prediction and is considered a ‘very good thing’. If they write it up as having done the full sample first and then checked for structural stability, that is also good – but doesn’t qualify as a ‘very good thing’ (TM, patent pending) in the same way.

    The only true test of out of sample performance is actually to wait a few years and use data that was absolutely unavailable to the researcher. It is amazing the number of (economic) models that pass out of sample tests when published but break down a few years later when truly new data are available. The statistical method is not as robust as it is presented to be (for example, the 1 in 20 problem) and is invariably not written up in the way that it was actually done.

    N.B. This is not a criticism as such of any authors who do this – practically all researchers will experience this. It is a shortcoming of the statistical method, the publication requirement and human nature. Indeed, with the best authors, it is more a shortcoming of readers’ interpretation than of the authors.
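
    A minimal Python simulation of this point (my own construction; the thresholds and sample sizes are arbitrary assumptions): search 100 pure-noise ‘instruments’, keep only those that fit a calibration sample and also pass a withheld-sample check, then score the survivors on genuinely new data.

        import numpy as np

        rng = np.random.default_rng(5)
        n_cal, n_hold, n_fut, candidates = 80, 40, 40, 100
        y = rng.standard_normal(n_cal + n_hold + n_fut)  # target: pure noise

        survivors, flips = 0, 0
        for _ in range(candidates):
            z = rng.standard_normal(y.size)              # candidate instrument
            r_cal = np.corrcoef(z[:n_cal], y[:n_cal])[0, 1]
            r_hold = np.corrcoef(z[n_cal:n_cal + n_hold],
                                 y[n_cal:n_cal + n_hold])[0, 1]
            # "Validated": decent calibration fit AND same-signed holdout fit.
            if abs(r_cal) > 0.2 and np.sign(r_hold) == np.sign(r_cal):
                survivors += 1
                r_fut = np.corrcoef(z[-n_fut:], y[-n_fut:])[0, 1]
                flips += np.sign(r_fut) != np.sign(r_cal)
        print(f"{survivors} of {candidates} candidates 'validate'; "
              f"{flips} flip sign on genuinely new data")

    A handful of ‘validated’ instruments always turns up, and roughly half of them fall apart on data that did not exist when the search was run – the ‘1 in 20 problem’ in action.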

  23. John A
    Posted Sep 22, 2005 at 6:07 PM | Permalink

    Just as the cold-fusion area is now in disrepute because of bad science, so climate science will be in the near future. You can count on it – the hammer will fall.

    What bothers me is that perfectly good, methodical scientific research will get caught in the downdraft.

  24. Steve McIntyre
    Posted Sep 22, 2005 at 10:30 PM | Permalink

    TCO, I think that we’re on the same page. I don’t believe in systems or in young brokers that haven’t seen a bear market. I believe in fundamentals. I rarely trade.

  25. MarkR
    Posted Sep 23, 2005 at 2:19 AM | Permalink

    Re No 8

    “Sometimes I think some people want mass starvation or pandemic in order to justify their core beliefs that everything must get worse”

    I am afraid the root of this is a political one.

    The driver of the research effort, and of the consequent manipulation of the results, is research money. That research money is, in the main, provided by governments as a result of political pressure (only political pressure persuades governments to spend money). The researchers report what they believe the politicians want to hear, and what will lead to the provision of further research grants.

    Where does the political pressure on Governments to spend this particular research money come from?

    It comes mainly from relatively small pressure groups who believe that capitalism is evil, and that the biggest and baddest capitalists are the oil companies. Therefore they seek to attack capitalism through the means of “environmentalism”. Note that this “environmentalism” is never directed at Communist countries. No pressure on the Russians to close all their reactors after Chernobyl, no pressure on Russian oil companies, or Chinese coal fired power stations. This is entirely directed at the West and Capitalism. See how the environmentalists entangle themselves when they blight whole areas of Britain with huge wind power generators, hugely destabilise the marine environment with coastal water power generation schemes, all in the name of protecting the environment. See how, when the alternative of relatively safe and clean nuclear energy is put before them, they refuse it. They do not want their energy problem to be solved. Nothing will satisfy them except the demise of the oil companies, and through that Capitalism.

    Sadly there are those to whom the end justifies the means, and they are prepared to bend every rule, distort any statistic, and sustain any loss to achieve it.

    The tragedy is that the huge sums of money being wasted could be used to solve real problems.

    And by the way, climatology is not the only branch of science which is being abused in this way; this problem is endemic to much of government funded science.

  26. Paul Gosling
    Posted Sep 23, 2005 at 2:38 AM | Permalink

    Re 26

    Oh yes, the evil scientists, part of the conspiracy to bring down capitalism, that’s what it is all about; hell, they are probably all a bunch of commies anyway. If only business was allowed to get on with it, without any burdensome regulation, everything would be fine and dandy.

    Steve

    Your thesis is that climate scientists’ data mining of proxies selects those that happen to correlate with temperature simply by chance during the standardisation period, yes? If that were the case, and tree ring width in fact bears no relation to temperature, then picking a similar-length period outside of the standardisation period should produce no correlation to temperature at all, with those same proxies showing temperatures going up, down and staying the same. Is that the case? If so, I am staggered that a) they use this data at all, b) it has not been thrown out by other climate scientists as clear nonsense.

  27. Peter Hearnden
    Posted Sep 23, 2005 at 2:42 AM | Permalink

    Re #21. When? I’ve been reading predictions of imminent cooling (presumably you mean a cooling trend?) for years. So, let’s see you put up a date for when this will start to happen in a sustained way (nearest year will do).

    Re #8 Sometimes I think some people want mass starvation or pandemic in order to justify their core beliefs that everything climate science produces is wrong.

  28. fFreddy
    Posted Sep 23, 2005 at 3:00 AM | Permalink

    Re TCO, #24

    I’ll stick with messieurs Brealey, Myers, Copeland, Miller and Modigliani

    I forget, were any of this lot involved in LTCM ?

    The statistics (academic papers with MATH) do not bear you out.

    We can have faith in the maths when we are in a real maths environment, like number theory. When the maths is intersecting with the real world (which is where this sort of statistics lives), we need to be a bit more cautious – which is the whole point of the hockey stick and this blog.

    If you still don’t believe me, you might like to wonder why all the world’s top academic institutions are not entirely self-funded from the profits of the hedge funds run by their economics and maths professors.
    Although that might be a good answer next time they launch a funding drive …

  29. John A
    Posted Sep 23, 2005 at 4:56 AM | Permalink

    Re: #28 And the wait for the first factual comment from Hearnden continues….

  30. Steve McIntyre
    Posted Sep 23, 2005 at 7:04 AM | Permalink

    RE #27: Paul, you’re missing one statistical point (I think) and not believing another. Many, if not most, proxies in MBH98 have no relationship to gridcell temperature. The AD1400 tree ring network has a mean correlation to gridcell temperature of minus 0.08 and a mean correlation to divisional precipitation of 0.29 – so any reconstruction relying on them is likely a reconstruction of precipitation.

    Some tree ring series have decent correlations to gridcell temperature, e.g. Polar Urals. Here the issue is statistical: if your methodology has looked through a lot of series and selected this one, then you have to allow for the selection process. For example, Jacoby picked the 10 “most temperature sensitive” sites out of 36 examined and discarded the other 26. He got a hockey stick. To benchmark whether the hockey stick was significant under this methodology, the basis of comparison is to generate 36 red noise series with persistence similar to the sites, pick the 10 most “temperature sensitive” and average them. I’ve done this, and Jacoby’s hockey stick is about at the median in hockey-stick-ness. The risk that red-noise-ness contributed to the selection is illustrated by the subsequent downturn in the TTHH series, which I’ve posted on: one of the few with any reported followup.
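
    For anyone who wants to try that benchmark, here is a hedged Python sketch (a simplified stand-in of my own, not the actual script; the persistence, lengths and trend shape are all assumed): generate 36 red-noise “sites”, keep the 10 most correlated with a rising “instrumental” period, and average them.

        import numpy as np

        def red_noise(n, rho, rng):
            # AR1 red noise of length n.
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + rng.standard_normal()
            return x

        rng = np.random.default_rng(4)
        n_years, n_sites, rho = 400, 36, 0.9
        temp = np.linspace(0.0, 1.0, 100)   # stand-in rising "instrumental" record
        sites = np.array([red_noise(n_years, rho, rng) for _ in range(n_sites)])

        # Correlate each site with "temperature" over the final 100 years and
        # keep the 10 most "temperature sensitive", as in the selection above.
        cors = np.array([np.corrcoef(s[-100:], temp)[0, 1] for s in sites])
        recon = sites[np.argsort(cors)[-10:]].mean(axis=0)

        print("selected average, first 300 yrs:", round(recon[:300].mean(), 2))
        print("selected average, last 100 yrs: ", round(recon[-100:].mean(), 2))

    The selected average rises in the calibration century even though no site contains any temperature signal at all – a hockey stick from selection alone, which is the benchmark a real composite has to beat.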

    A more objective procedure would be to hypothesize that (say) high-altitude larch sites were temperature proxies and report them all, not just the cherry-picked ones. When you look at large-scale averages, late 20th century values are down, which Briffa complains about and attributes to an “unknown anthropogenic” cause – it could also be an upside-down-U response to temperature (pace TTHH), which by itself renders reconstructions of the MWP based on linear models inaccurate. If the large-scale aggregates trend down and the scientists consistently highlight individual tree ring series with upsticks (e.g. Sol Dav (Mongolia), Yamal), then one fears that a data mining process has taken place. This is both a form of bias and needs to be reflected in the statistics.

  31. Brooks Hurd
    Posted Sep 23, 2005 at 11:39 AM | Permalink

    Re: 26
    The thirst for funding, whether it comes from the government or private sources, drives many of the studies which are announced with frightening headlines. Climate science is not alone. When you check on the source of these headlines, the actual study is often a work in progress. There are frequently no peer reviewed articles. In fact, I often find that the study does not even support the headlines.

    Last year, there were headlines about genetically engineered seeds spreading throughout the general seed supply. The report upon which this story was seemingly based was available on the organization’s website. The report stated clearly that they had only checked two locations in the US and there was too little data upon which to draw any conclusion.

    Someone at the organization sexed up the story. The “sexed up” press release, which was only vaguely similar to the study, then made its way onto the wire services. CNN ran it verbatim. The same story made its way into many newspapers for several days. All of this occurred seemingly without anyone checking the facts.

    Since the story died after a few days, I assume that some editor finally did check the facts and let his journalist friends know that the stories which they had been printing were essentially BS. One must hope that this will occur in other instances of studies that are “sexed up” to create scare headlines.

    After listening to the press last night, with angry voices (unidentified) proclaiming that the situation of Katrina and Rita will become commonplace in the near future if we do not take drastic measures to stop global warming, I hope that this occurs soon with the more outrageous claims of some who purport to practice climate science.

  32. Posted Sep 23, 2005 at 11:52 AM | Permalink

    #31. As the tree response to temperature is an inverted U and averages show a downturn, isn’t it equally valid to claim this is an indication of global warming, as the response is slipping down the other side! Whatever happens to trees, it’s due to warming.

    #32. It would be worthwhile to publish a ‘Quack index’ of scientists based on the number of their press releases. Interestingly, the word quack comes from the similarity of fairground hawkers of useless medicines to the loud sound a duck makes.

  33. TCO
    Posted Sep 23, 2005 at 9:15 PM | Permalink

    Fred: Your not knowing those names is like talking to someone who wants to argue against chemical thermodynamics “in the real world” but has never heard of Gibbs or Helmholtz. And no, none of them were in LTCM. It blows me away that you don’t know who they are. Unless you suddenly blow me away, I doubt we can have any sophisticated discussion. I would love to hear your evidence, theoretical or statistically-valid observational, that argues against market efficiency or for market memory or Dogs of the Dow or whatever you’re into. I just don’t think it’s going to happen. Maybe you will amaze me though. Unless I’m seriously surprised, I will likely not bother sitting here and giving you the intro level courses (since you don’t know the concepts) and the evidence behind them for basic finance. I have to waste too much time on EconBrowser with numskulls who don’t know the difference between a shift of the demand curve and movement of the intersection of supply and demand based on a shift of the supply curve (i.e. the price feedback effect, i.e. the first-day lecture of a garden variety econ class).

  34. TCO
    Posted Sep 23, 2005 at 9:17 PM | Permalink

    P.s. Usually I like you as a poster, but when you get into a fight on the football field and I’m the coach, then I make you do spinners. Gotta be that way…

  35. Paul Penrose
    Posted Sep 24, 2005 at 12:06 AM | Permalink

    Re #27: When they don’t make their data and analysis programs available, it makes it difficult to refute, and most people won’t put in the effort. Also, since these studies are reviewed by peers who are working in the same field, they either have a bias to believe the results beforehand, or they simply trust the authors, or they don’t want to rock the boat (and expose their next study to extra scrutiny), so they don’t look too closely. So don’t count on this kind of cursory peer review to catch problems – it takes people like Steve to expose them and take the heat in the process.

  36. Peter Hearnden
    Posted Sep 24, 2005 at 2:01 AM | Permalink

    So, Paul, your spirit of scientific inquiry is not to see if it’s right or wrong but to ‘refute it’. Reeks of bias, doesn’t it…

  37. John S
    Posted Sep 24, 2005 at 2:22 AM | Permalink

    TCO – you might want to tone it down – there are more things in heaven and earth than are dreamt of in your philosophy. There are myriad failures of the efficient markets hypothesis – they just aren’t generally bigger than the bid-ask spread (have a read of http://www.princeton.edu/~ceps/workingpapers/91malkiel.pdf if you feel like it). And this is notwithstanding the fact that most people who think they can beat the market are deluding themselves.

  38. John A
    Posted Sep 24, 2005 at 2:25 AM | Permalink

    …your spirit of scientific inquiry is not to see if it’s right or wrong but to ‘refute it’. Reeks of bias, doesn’t it…

    In science, to verify whether a proposition is right or wrong, the proposition must be falsifiable. If the data and methodology are unavailable, then neither conclusion can be reached.

    To the biased, all refutations are biased…

  39. fFreddy
    Posted Sep 24, 2005 at 3:08 AM | Permalink

    Re #34, TCO
    Calm down, old chap. I do know who they are: the point I was making is that academics don’t necessarily make good traders, sometimes spectacularly so. Often, the same is true of the young MBAs who think that all economic theory is somewhere between holy writ and laws of nature.
    I suppose at least some of it is the fault of teachers who can’t tell the difference between “a history of market prices behaves like…” and “market pricing is…”. However, the solution is the same as ever: assume you don’t know everything and pay attention to what’s around you.
    I think my point is that, while the theory of Messrs Gibbs and Helmholtz (of whom, you are correct, I have never heard) may flow directly through to the real world of chemical thermodynamics, the same is not always true of economic theory and traded markets. Any theory that is entirely dependent on a perfect market, for example, is immediately vulnerable to the fact that there are no perfect markets. A market is not a theoretical abstraction, it is the trader and his mates down the pub.
    Ummm, at the risk of winding you up unnecessarily, your post #34 is slightly reminiscent of Hunter or Hearnden. Go talk to some traders, and get a fuller picture.

    And let’s dial this down a touch. For my part, I apologise for my slightly robust response in Post #19. I blame it on the years I spent in the trading room.

  40. Louis Hissink
    Posted Sep 24, 2005 at 3:57 AM | Permalink

    A quick note on why economic theory of the predicting kind never can predict – it is a simple misuse of the differential calculus of the physical sciences. That calculus works in a Newtonian system because mass does not vary with time.

    Economic prices, however, are subjective valuations and are independent of time. Graphing stock prices over time might tell you what happened over time, but it is certainly not true to assert that the quantum of a stock is dependent on its position along the earth’s orbit around the sun, and thus a function of time.

    It is for this reason that the Austrian School rejects mathematical economics as a predictive tool; equally, it underlies the geoscientific rejection of computer models based on the misuse of intensive variables and on attempts to force-fit coupled, non-linear chaotic climate systems to complex predictive formulae.

  41. TCO
    Posted Sep 24, 2005 at 11:26 AM | Permalink

    It’s not that market efficiency is a holy writ of nature or that it is perfect. It’s that any tiny inefficiency creates an arbitrage opportunity (yes, John, I recognise your point about transaction costs, but we were talking about being able to say that a tech stock that went up for 6 months will go up for another 6, or the like). Arbitrage opportunities get traded away. “Who makes the best traders” is irrelevant to a discussion of the point of market efficiency, since the markets are hugely efficient and traders operate on the very periphery of the inefficiencies, driving the market to efficiency. From an investment perspective, CAPM rules, not candlesticks or momentum or Dogs of the Dow. The burden of proof is on those who claim to know better to prove it (with statistics, guys, this is a stats site…), just as it is for progressive betting scheme adherents.

  42. TCO
    Posted Sep 24, 2005 at 11:27 AM | Permalink

    fF, I just answered without reading. Where did you trade? What did you do? dish…

  43. TCO
    Posted Sep 24, 2005 at 1:19 PM | Permalink

    I’m in the middle of reading it (am at page 22). So far, it’s a very good article and very accessible. And the guy is from my school of religion. And none of the points are news to me or change what I was trying to emphasize.

  44. TCO
    Posted Sep 24, 2005 at 2:26 PM | Permalink

    finished the article:

    1. above comments stand.

    2. Author makes an interesting (strong) observation: he doesn’t just believe that reports of market inefficiencies drive the correction of them (elimination of arbitrage, sufficient to keep us in the temple of the cash-is-king NPV lovers) but that, more often than not, these reports are not even correct AT THE TIME.

    3. Interesting that in the anecdote in this paper, it is the finance PROFESSIONAL who advocates market efficiency from his experience of trying to beat the market, as opposed to the econ-paper-writing academic.

    4. Interesting similarity wrt climate change in the tendency of people to try to cherry pick correlations (which then don’t work going forward). This doesn’t say that there are no new discrepancies to be found, or that changes are not the result of changed behavior to correct the discrepancy. But it pretty clearly speaks out against any trivial belief in trends or memory. Sorta like the basic point that you can sit behind the roulette wheel and count reds and blacks, but it makes no difference on the next roll. And even if you come up with a trend based on a given observation of the wheel, it is apt to be the one example in 20 watchings that shows non-random behavior at a 95% confidence…

  45. Steve McIntyre
    Posted Sep 24, 2005 at 5:14 PM | Permalink

    Now you need to read Deng [2005]’s discussion of Ferson. It’s a little harder, but puts it in an interesting context, which I’m working at understanding.

  46. Ray Soper
    Posted Sep 24, 2005 at 5:28 PM | Permalink

    I hesitated to escalate the arguably off-topic issue of how markets work (is there a similar blog that focusses on these issues where we can get right into it?) but I do think that useful insights can be drawn.

    I have studied with great interest over the years the theories and practice of the various market participants – the CAPM people, the traders, the chartists, the fundamentalists, etc. It seems to me that a common mistake is to argue that only one approach can be true, instead of looking at all of the tools/viewpoints in an effort to understand how the underlying system really works.

    It seems to me that the best approach is to see that all of the approaches are reflections/manifestations of an underlying fundamental, and to use a more “constellatory” way of thinking that tries to survey the whole scene and to discern in a more profound way what is going on.

    A very common problem in the markets is that many participants (especially academics and usually not traders) fail to recognise the difference between the underlying (the company) and the secondary derivative linked to it (the share traded in the market place).

    Fact is that in the market we are dealing with shares and usually not the company itself. Supply/demand for shares can be affected by all kinds of things including the fundamental performance of the company, the interest rate environment, sector “fashion”, whether funds are collectively over-weight or under-weight, a large shareholder selling down to fund an un-related acquisition, a partner paying an excessive price at the margin to secure a control block etc etc. A huge factor is participation by the public. When the markets have been rising, and the papers are full of new fortunes being made in the stock market, the public tends to join in adding their demand. And similarly, when a market falls, investors decamp – not just the public, but also the funds.

    The market is like a speedometer – an indicator that oscillates according to whether the demand or supply of the bits of paper is high or low on any particular day. The thing that makes long term fundamentals useful is that in the long run the shares should reflect the underlying performance of the company, and if that company reliably, year in year out, delivers consistent growth in sales, earnings and dividends, then the share price will surely grow. However, there are many examples of companies delivering unbroken growth in fundamentals where the share price has done nothing but decline for quite long periods. There was a period (I think in the 70s) where McDonald’s, for example, delivered consistent strong growth, but the market went from paying 50 times earnings at the outset to paying less than 10 times some years later, with the share price falling as a result.

    What charts show is the playing out of the short term supply/demand for the bits of paper. Each daily open-high-low-close price trace is an end-on view of the interaction of the price equilibrium through the day. In effect a share price chart is an x/y chart where the x-axis is time and the y-axis is price. Have a y/z look at the price action on any one day, and you will see a supply curve (for bits of paper) interacting with a demand curve.

    To the extent that the factors leading to either excess demand or excess supply can persist over time, the chart can demonstrate persistent trends.

    Amazingly (at least to me), you can do a whole course on finance with the academics, and at the “best” universities, without being exposed to a discussion of this nature. I would think that many traders and successful investors would understand this stuff. Warren Buffett, for example, has a position on it. He would say that it is far too hard to try to understand or analyse the factors that drive demand/supply for the bits of paper, but that if you can find well managed companies with strong market positions, you can be pretty confident that in the long run their share prices will outperform.

    I’m not sure how this all relates to climate science, but somehow it seems to me that it does.

  47. TCO
    Posted Sep 24, 2005 at 9:26 PM | Permalink

    steve, did you reply to more than 5 of my posts? I won’t know where you’ve been.

    On the articles: haven’t read the first one. Getting into it now.

    To Ray: I think that the “every system has value” approach is not such a good one. Do you think that a progressive roulette bettor and the MIT Blackjack Team both have valid systems? Regarding the micro-details of executions (the y/z chart), that is a well-studied area. Sites: Wilmott finance (google it) and Econbrowser are a couple. Wilmott will be well above either of our heads – quite technical “rocket scientists” over there. Econbrowser is more approachable and a better bet, with more of a macro/peak-oil slant than finance.

  48. Dave Dardinger
    Posted Sep 24, 2005 at 10:07 PM | Permalink

    Hah, TCO! Serves you right. You’ve been making it next to impossible for me to see what’s new here other than your wandering through volumes of forgotten lore.

  49. TCO
    Posted Sep 24, 2005 at 10:47 PM | Permalink

    Agreed. But there has to be a better way. This is the computer age.

  50. Paul Penrose
    Posted Sep 24, 2005 at 11:21 PM | Permalink

    Re #37: Peter, you took my comments out of context. This is a classic false debating technique, but a transparent one. Is that the best you can do? The question was basically why did the climate researchers use the data if it was bad, and why did their peers not throw it out as “nonsense”. There is an assumption there that the data is bad, so I addressed the question in that context – that it is bad.

  51. Ray Soper
    Posted Sep 25, 2005 at 1:26 AM | Permalink

    TCO, re 48. I didn’t mean to say that “every system has value”. What I was saying is that it is instructive to look at each of the different views/perspectives, to see the differences between them, but also to discern the links. In the case of the markets, in my view both fundamental analysis and technical analysis tools have value. For example, very often stocks showing consistent growth in sales per share, earnings per share, and especially dividends per share over a long time will also show a persistent uptrend that is meaningful to chartists. In fact, one way to find stocks that have these fundamental characteristics is to use charts to find stocks that might meet the criteria. If I had to choose just one tool long term, I would choose fundamental analysis emphasising growth. But I don’t have to use just one tool, so I don’t. Use the tools that are there for the purpose that they are useful for, and to throw light on the underlying. Above all, don’t confuse the tool for the underlying.

  52. TCO
    Posted Sep 25, 2005 at 8:06 AM | Permalink

    I’m more interested in understanding the validity of the theories than in “using tools”. Unless you are a paid hedge fund manager, you should buy and hold a diversified no-load portfolio. Tools that I do use are valuation and some comparables analysis, but that is more for internal strategy. I think “tools for investing” is highly over-rated. Like “systems for Las Vegas”.

  53. TCO
    Posted Sep 25, 2005 at 10:27 AM | Permalink

    Broke down and read the article (the first “easy” one)!

    1. Data mining seems intuitive to me (sitting in a casino, noting “runs” and making a big deal of them, while not admitting that you didn’t look at the experiment in its entirety – runs not counted).

    2. Spurious regression is less obvious. Is there some mathematical reason why regressed random walks show this effect in a different way than “normal statistics”? Can one write a theorem to prove it? And what is it (intuitively) that makes this happen with a random walk versus a normal process?

    3. Not clear to me the comment about “expected return” under the noise being autocorrelated. Just didn’t follow the point. Need it repeated at my level.

    4. Is the connection of the two problems something that is necessarily true for random walks? And could it be proved in a theorem rather than via this observational approach?

    5. Think that you (Steve) would benefit from looking at some of the same literature in other fields (sociology, physics, stats, OR, climatology, in addition to econ) that hit these same statistical issues. It might teach you some other issues. Also, it will have more power with others if they see that many fields are looking at this… not just those evil capitalist free-marketeers in the business school…

    6. The whole article seems to back up my (poorly stated, but intuitively correct) view of efficient markets…

    7. Appendix: don’t understand the entire second-to-last para.

    8. Appendix: Don’t understand last para.

    9. pp. 5-13: Kinda lost it in there and could not keep my head in the game. I see that they find a lot of spurious regression when they take certain things into account, but could not follow it closely enough to agree/disagree with the rationale of their evaluation.

    10. pp. 13-17: as above.

    11. conclusions:
    a. did not follow the first point (wrt expected return and the null hypothesis: are they saying that if the markets are truly efficient, the spurious regressions are not an issue, but that if they are inefficient, we won’t calculate the inefficiency correctly?).
    b. I love the sentence calling into question certain traditional instruments… take that, ya nutter-not-worshipping-at-altar-of-efficient-markets-as-you-should-be-but-instead-advocating-that-you-can-beat-the-system-or-even-that-anyone-can-and-that-the-markets-are-very-irrational.
    c. like the kershlam (on the previous studies being achievable by data mining in as little as 10 series).
    d. on the content though, is the “interaction” of data-mining and spurious regression anything complicated or subtle mathematically? Is it as simple as saying that if I datamine in a casino with cheats (for winning strats), I will find a lot of cheats represented? I mean… did we need to do all that work to show this?
    e. good observation that false regressions don’t work out of period.
    f. good point about broader implications derived from such analysis should not be used.

    12. Was that good enough or do I need to reread 5-17? 😉

    —-

    meta-comment: it still seems weird to me that Mann or the others don’t come over here to chat and debate the statistics. Mann would seem to have very much the background to be intrigued by and engage on the nitty gritty of spurious regression and all that. But instead, he sort of sits there like a typical German academic mandarin…

  54. Steve McIntyre
    Posted Sep 25, 2005 at 11:54 AM | Permalink

    TCO, pretty good. It’s not as though I can just read through the harder articles and fully grasp all the points. I’ve been working at these articles back and forth, getting footholds.

    When you see such comments about an interaction between data mining and autocorrelated series, and you see a very similar situation in multiproxy studies – especially the fact that the MOST important series in the reconstructions are MORE autocorrelated than the target – it’s hard not to think that there’s something pretty essential going on.

    Deng [2005] is quite hard. What I find intriguing about it is the tie-in to ARMA (1,1) series with high AR1 coefficients, which drags in a LOT of proxy series and temperature series. It looks to me like everything’s in the red zone and alarm sirens should be on.
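
    Since the interaction is easier to see in a simulation than in the algebra, here is a quick toy sketch (in Python, with made-up parameters – not any of the actual proxy data). Regress one persistent series against another generated completely independently, then let a “data miner” keep the best of 10 candidate predictors; nominal 5% t-tests reject far more often than 5%, and the mining makes it worse:

      import numpy as np

      rng = np.random.default_rng(0)

      def ar1(n, rho, rng):
          # AR(1) series: x[t] = rho * x[t-1] + e[t]
          x = np.zeros(n)
          e = rng.standard_normal(n)
          for t in range(1, n):
              x[t] = rho * x[t - 1] + e[t]
          return x

      def t_stat(y, x):
          # OLS t-statistic on the slope of y ~ x (with intercept)
          x = x - x.mean()
          y = y - y.mean()
          beta = (x @ y) / (x @ x)
          resid = y - beta * x
          se = np.sqrt((resid @ resid) / (len(y) - 2) / (x @ x))
          return beta / se

      n, rho, trials, candidates = 100, 0.95, 2000, 10

      # 1. Spurious regression: y and x are independent AR(1) series,
      #    yet nominal 5% tests (|t| > 1.96) reject far too often.
      single = np.mean([abs(t_stat(ar1(n, rho, rng), ar1(n, rho, rng))) > 1.96
                        for _ in range(trials)])

      # 2. Mining interaction: for each y, search the candidate
      #    predictors and keep the best |t|.
      mined = 0
      for _ in range(trials):
          y = ar1(n, rho, rng)
          best = max(abs(t_stat(y, ar1(n, rho, rng))) for _ in range(candidates))
          mined += best > 1.96
      mined /= trials

      print(f"single predictor: {single:.0%} rejections at a nominal 5% level")
      print(f"best of {candidates} mined: {mined:.0%} rejections")

    Both rejection rates come out well above the nominal 5% with persistence like this, and the mined rate is much higher again – the two effects reinforcing each other is exactly Ferson’s point.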

  55. TCO
    Posted Sep 25, 2005 at 3:57 PM | Permalink

    So I still wonder: is the “interaction” as simple as my data mining in a casino for winning strats and finding cheats over-represented? Or is there something more complicated? If it’s as simple as what I’ve expressed, then these are very “disaggregatable” concepts.

  56. fFreddy
    Posted Sep 26, 2005 at 2:55 AM | Permalink

    Re TCO, #42
    I think you are confusing the efficient market hypothesis with the efficient pricing of linked financial instruments, driven by arbitrage trading. Don’t – they are completely different things.

    Thank you for the Wilmott Finance pointer, though. It looks to have some interesting stuff.

  57. fFreddy
    Posted Sep 26, 2005 at 5:07 AM | Permalink

    Ref #47, Ray Soper

    I think the only entirely general statement about how markets work with which I agree is the old crack that traded markets are a dynamic balance of greed and fear.
    With regard to the various technical trading techniques, many of them work some of the time, and the trick is being able to recognise when they are going to stop working. For example, I remember some people who did perfectly well out of candlesticks, if they had a good eye for when to stop and when to start again. It could be that this was at a time (the late 80s) when an awful lot of Japanese investors were particularly keen on candlesticks, so there was an element of self-fulfilling prophecy about it, but who cares? If it works, it gets done.
    I would be slightly cautious about thinking of these techniques as manifestations of some possible underlying fundamental technique. Markets go through major changes in mentality that would probably need a different fundamental – most obviously, from boom-time to bust-time, for example.
    Your focus on weight of money is absolutely right. If enough people decide that tech stocks/south-east asian stocks/tulips/whatever are hyper-valuable, then their price goes up. It doesn’t matter if it’s crazy, it happens anyway. The people who get out in time, whether through superior judgement or blind luck, get rich; lots of others get poor.
    Regarding McDonalds in the 70s – I was still at school then, so I don’t know, but I would guess that what you describe was in the early 70s when the markets were booming. At that time, everyone tends to focus on the go-go stocks and lose interest in the boring old reliable performers. Come the crash in the mid-70s, I would guess that boring old McDonalds would suddenly become much more popular. This effect was certainly visible among boring old bricks and mortar companies in 2000, when the tech bubble burst.
    Regarding price as the intersection of supply and demand – when you have a market-maker with a trading book, this tends to act like a spring joining the supply and demand curves. When market confidence is high, this spring is quite long and elastic; when people are scared, it is short and stiff.
    Regarding the z-axis on your trading chart – I think I would really recommend putting trading volume on there.
    I entirely agree about academic finance courses. They should spend some time reading books by actual market participants, like “Liar’s Poker” or whatever the more recent equivalent is, and less time honking like geese (see today’s FT).
    I am not so sure how well this stuff relates to climate science, except as a source of misapplied statistics. Climate is a hugely complicated system with non-linear elements, but at its root, it is a finite number of blind physical processes. Financial markets involve new inventions and humans, with all the irrationality that implies. I think there is a qualitative difference.

    “Don’t confuse the tool for the underlying” – absolutely right.

    Sorry for the rather ill-disciplined response – hope there is something of use to you.

  58. TCO
    Posted Sep 26, 2005 at 7:19 AM | Permalink

    I’m not confusing the two, fred. I’m making the point that you can use concepts to understand things. Arbitrage, supply/demand, etc. For instance, if you think about it from an arbitrage standpoint, any January effect (once published, if even valid) should get traded away.

    Saying that the candlesticks work some of the time is like saying betting on the reds in Las Vegas works some of the time. Can you recognize when the candlesticks are working? Can you prove that it’s different from chance? (with you know…evil math…and statistics) Hopefully, you have something better than the guy who watches for five minutes before he puts money on the wheel and bets the trend to continue (or reverse).
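
    Just so “prove it” isn’t hand-waving, here is a toy version of the test I mean (hypothetical win count, standard binomial test):

      from scipy.stats import binomtest

      # Suppose your candlestick calls went 60 right out of 100.
      # Is that distinguishable from coin-flipping?
      result = binomtest(60, 100, p=0.5, alternative="greater")
      print(f"p-value: {result.pvalue:.3f}")  # about 0.03: suggestive, hardly proof

    Show me something like that on out-of-sample calls and I’ll listen.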

    Bottom line: burden of proof is on you to show that markets have memory. I haven’t heard anything, other than “science doesn’t matter in this field”. Great fact-based approach…not.

    If you want to dish on some shenanigans of traders, that is fine. Or even just describe some of your past work/lessons learned. I’m interested in hearing that stuff. Keep it tactical though: you’re not going to convince me that candlesticking is the way to go…

    BTW: LIAR’S POKER was thin on explanation of trading strategies, trading methods, or even fraud. About all it explained was that banking, even at the higher levels, is a lot like retail banking or insurance sales: people pushing products and making money off the public by selling them instruments. Not a big realization to me. The book seems to appeal most to 22-year-old wannabes hustling for that Bear Stearns interview.

  59. fFreddy
    Posted Sep 26, 2005 at 10:02 AM | Permalink

    Re #59, TCO

    I’m making the point that you can use concepts to understand things.

    Yes you can, but I’m not sure that you are. For the avoidance of doubt, an arbitrage is a set of trades in linked, but independently traded, financial instruments that enables you to lock in a profit with no further market risk, only delivery risk (i.e., that one of your counterparties goes bust before settling). The January effect is/was an anomaly, but not an arbitrage. If everyone trades on it, it will indeed go away, until the January when some external factor causes a massive dump on January 2nd, and the traders lose their shirts and get a bit more cautious the following year.
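
    To make the distinction concrete, here is a toy triangular FX round trip, with entirely hypothetical quotes:

      # Toy triangular arbitrage with hypothetical FX quotes.  If the
      # three rates are mutually inconsistent, the round trip locks in
      # a profit with no further market risk (only the delivery risk
      # noted above).  Transaction costs ignored.
      usd = 1_000_000.0
      usd_to_eur = 0.90   # EUR per USD (hypothetical)
      eur_to_gbp = 0.80   # GBP per EUR (hypothetical)
      gbp_to_usd = 1.42   # USD per GBP (hypothetical)

      final_usd = usd * usd_to_eur * eur_to_gbp * gbp_to_usd
      print(f"locked-in profit: ${final_usd - usd:,.0f}")  # positive => arbitrage

    The January effect offers nothing like that: there is no set of simultaneous trades that locks the anomaly in, so you carry market risk the whole way.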

    Saying that the candlesticks work some of the time is like saying betting on the reds in Las Vegas works some of the time.

    No it isn’t. What I said included “…if they had a good eye for when to stop and when to start again.” Markets have an awful lot more clues about when they are moving out of a particular behavioural phase than roulette wheels do. Roulette wheels are very close to their theoretical abstractions; traded markets are not.

    with you know…evil math

    I don’t think maths is evil, which is part of the reason I have a degree in the subject. What I do have is practical experience and more of an eye for when maths is relevant.
    Regarding roulette strategies, my advice would be to run the wheel. I once had the chance to do this for a night and made a 40% return on my investment – absolute, not annualised. Best risk/return I’ve ever seen – what a great racket.

    Bottom line: burden of proof is on you to show that markets have memory.

    Fortunately for me, I don’t feel burdened to prove anything. Fortunately for you, I am a nice guy, so here’s an example.
    The Fed does an auction of Treasury bonds. A week after settlement, the market dumps. A lot of institutions do not automatically mark to market, and don’t like to realise a loss, so they keep the bonds on their books until the market comes back up a few days/weeks/months later. Every time the market tries to go up through the price at which the bonds were issued, these institutions are suddenly able to cut their positions without a loss, so they do. This makes it hard for the market to get past this price, which is why it is called a resistance level. This can go on for a while, until, usually, there is enough buying pressure to force the market price up well beyond the resistance level.
    If you only look at the price history, you will see a lot of little moves up and down, followed by a big move up. You explain this as random chance, and worry that financial markets display high kurtosis. You would be wrong – it is what I would call an example of the market having a memory.
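
    If you want that in toy form (made-up numbers, not a model of any real market): take a random walk and add selling pressure whenever the price pokes above the issue level while the overhang of unsold positions lasts. You get long spells pinned under the resistance, then a clean break once the overhang is absorbed – which a pure random walk does not give you.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy resistance level: random walk plus selling pressure just
      # above the issue price until the overhang is worked off.
      # All numbers hypothetical.
      price, resistance, overhang = 100.0, 101.0, 30
      path = []
      for day in range(250):
          step = rng.normal(0.0, 0.5)
          if overhang > 0 and price + step > resistance:
              step -= 0.6      # stale longs sell into the rally
              overhang -= 1    # part of the overhang is absorbed
          price += step
          path.append(price)

      print(f"days at or below {resistance}: {sum(p <= resistance for p in path)}")
      print(f"final price: {path[-1]:.2f}")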

    I haven’t heard anything, other than “science doesn’t matter in this field”.

    Then you should listen better. What you should have heard is that you are placing too much faith in theory and not enough faith in observation, which is very much the scientific approach.

    Anyway, this is off-topic and I don’t want to annoy our host, so I will stop posting on this. If I haven’t persuaded you that there is more to markets than economic theory, then I shall just have to live with the disappointment.

  60. TCO
    Posted Sep 26, 2005 at 11:57 AM | Permalink

    If the market is irrational, then this creates an arbitrage opportunity for rational investors.

    No, you don’t have to prove anything. But if you don’t have stats to prove that your “gut” on candlesticks (or the “indicator system” you advocate, which I bet is not reducible to an algorithm and hence is your gut) is effective, then I won’t value your comment. You are like a RealClimate “believer”. You haven’t shown me one thing to prove that your observations, based on your vaunted expertise, trump the people who’ve tried to analyze the problem in a fact-based mathematical manner. And the finance professional in the article in this post disagrees with your market-timer attitude.

    If the Fed bonds are predictably mispriced, that is a penny lying on the sidewalk. Go pick it up. Start a hedge fund to do so. Or, better yet, prove the original assertion with statistics.